The isotropic property describes BRDFs whose reflectance is invariant with respect to rotation of the surface around the surface normal vector. Materials with this characteristic, such as smooth plastics, have isotropic BRDFs. Anisotropy, on the other hand, refers to BRDFs whose reflectance does change as the surface rotates around the surface normal. Some examples of materials with anisotropic BRDFs are brushed metal, satin, and hair.
To be considered physically plausible, BRDFs must obey two properties grounded in physical law: reciprocity and conservation of energy. Reciprocity says that if the direction in which the light travels is reversed, i.e., the incoming and outgoing directions are swapped, the value of the BRDF remains unchanged. Conservation of energy states that the total quantity of light scattered during the light-matter interaction cannot exceed the quantity of light originally arriving at the surface.
The Cook-Torrance reflection model paper: https://graphics.pixar.com/library/ReflectanceModel/paper.pdf
More details on the BRDF: https://en.wikipedia.org/wiki/Bidirectional_reflectance_distribution_function
Thanks for providing a more informative link. I will update my slides.
Metals have high reflectance at all angles, which is why they are often used for making mirrors (applied as a thin coating on glass).
It seems like the Cook-Torrance reflection model is a physically based BRDF, unlike the Blinn-Phong model (the latter doesn't adhere to the energy conservation principle). (https://learnopengl.com/PBR/Theory)
Written out in words, this integral says the total amount of reflected light is the integral over the hemisphere of the product of 1. how much light from wi gets reflected toward wr, and 2. the amount of energy hitting the surface from wi.
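As a toy numerical sketch (not from the slides), here is a Monte Carlo estimate of that reflection integral for the simplest case: a constant Lambertian BRDF under constant incoming radiance, using uniform hemisphere sampling. The function names and the specific setup are my own illustration.

```python
import math
import random

def reflected_radiance_mc(albedo, incoming_radiance, n_samples=100_000, seed=0):
    """Monte Carlo estimate of the reflection equation
        L_r = integral over hemisphere of f_r * L_i * cos(theta_i) d(omega_i)
    for a Lambertian BRDF f_r = albedo / pi and constant L_i.
    """
    rng = random.Random(seed)
    f_r = albedo / math.pi            # constant Lambertian BRDF
    pdf = 1.0 / (2.0 * math.pi)       # uniform hemisphere pdf
    total = 0.0
    for _ in range(n_samples):
        # For a direction sampled uniformly on the hemisphere,
        # cos(theta) is uniform in [0, 1].
        cos_theta = rng.random()
        total += f_r * incoming_radiance * cos_theta / pdf
    return total / n_samples
```

For constant incoming radiance the closed-form answer is albedo * L_i (since the hemisphere integral of cos(theta) is pi), so the estimate should land near that.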
The BRDF is a distribution function because it tells you how likely light is to be reflected in a given direction. There's also the BTDF (where T stands for transmission), and if you combine the two, you get the BSDF (where S stands for scattering).
Since the reflectance rho_d is between 0 and 1, the maximum value of the BRDF is 1/pi. Also note that in Lambert's cosine law, the cosine term is not part of the BRDF, which is constant; the cosine affects the amount of energy falling on the surface due to the angle of the incoming light.
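A quick numerical check of why 1/pi is the largest energy-conserving constant BRDF (my own sketch, with a made-up function name): integrating f_r * cos(theta) over the hemisphere should recover rho_d, and it equals 1 exactly when f_r = 1/pi.

```python
import math

def directional_hemispherical_reflectance(f_r, n_theta=512):
    """Numerically integrate f_r * cos(theta) over the hemisphere
    (midpoint rule in theta, azimuth integrates to 2*pi for a constant BRDF).
    For f_r = rho_d / pi this recovers rho_d."""
    total = 0.0
    d_theta = (math.pi / 2) / n_theta
    for i in range(n_theta):
        theta = (i + 0.5) * d_theta
        # d(omega) = sin(theta) d(theta) d(phi); the extra cos(theta) is the
        # foreshortening from Lambert's cosine law, not part of the BRDF.
        total += f_r * math.cos(theta) * math.sin(theta) * d_theta * 2.0 * math.pi
    return total

rho_d = 0.8
print(directional_hemispherical_reflectance(rho_d / math.pi))  # ~0.8
```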
Slightly tangential, but taking a look into structural color in nature is really cool! For example, the morpho butterfly is a brilliant blue but has no blue pigment. What we see comes from the phenomenon of structural color, which is due to nanostructures. Professor Hanrahan mentioned how anisotropic materials often have these little grooves, and if you look at the butterfly wing structure, it has exactly this quality, which allows the blue to be seen at broad angles and produces such beautiful iridescence. link
I found that the website cited on the slide was not too informative. This website seems to highlight the capabilities, accomplishments, and history of the light stage technology a little better: https://vgl.ict.usc.edu/LightStages/
As well as the paper (there was a dead link at the website in the slide): https://vgl.ict.usc.edu/LightStages/SIGGRAPHAsia-2012-Debevec-LightStages.pdf
The point at which the P-polarized light touches the x-axis is known as Brewster's angle. This point of no reflection is very useful in optics and photography: one can take advantage of it with a camera polarizer filter to take photos of objects behind glass or water without nasty glare!
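Brewster's angle has a simple closed form, tan(theta_B) = n2 / n1, so it's easy to compute for common interfaces (the function name below is just for illustration):

```python
import math

def brewster_angle_deg(n1, n2):
    """Brewster's angle: the incidence angle at which p-polarized light is
    fully transmitted (no reflection), given by tan(theta_B) = n2 / n1."""
    return math.degrees(math.atan2(n2, n1))

# Air (n1 ~ 1.0) to glass (n2 ~ 1.5): about 56.3 degrees.
print(brewster_angle_deg(1.0, 1.5))
# Air to water (n2 ~ 1.33): about 53.1 degrees.
print(brewster_angle_deg(1.0, 1.33))
```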
Assuming there is no diffraction...
If this is a grating structure, the simulation would need some changes.
These are all in the ray-optics regime, which is safe to use here.
There are cases where the structure has anisotropy, but the direction of incidence is unaffected by it.
This link does a good job of explaining and giving examples of anisotropic materials.
@bainroot, thanks for adding comments about the location of the camera relative to the scene; I was a bit confused about where the Fresnel effect was coming from!
In terms of more energy coming out than coming in ...
The reflected light energy is always less than the incoming light energy. But there can be extra emitted energy such as from flames or any other source of light. In that case the outgoing energy could be greater than the incoming energy.
A cool paper for follow-up reading: https://doi.org/10.1021/ac50035a045
Note that this assumes the surface is passive, meaning there is no additional energy source. A burning piece of wood with light reflecting onto it would likely give off more energy than is coming in from the light source.
In the leftmost picture, the camera is held at an angle above the book. In the middle picture, the camera is held lower but still above the book. In the rightmost picture, the camera is held with its lens parallel to the book, and perpendicular to the table. We notice that the table appears much brighter and reflectance is much higher.
The same idea can be observed with other surfaces. As Pat noted in lecture, we can see through water when looking at it from above, but at a glancing angle we see more reflectance.
This slide illustrates that the input energy must be at least the energy coming out, because some energy can be absorbed by the surface; negative output energy, or output energy greater than input energy, is impossible. The symbol rho stands for reflectance and is a ratio: white objects have reflectance near 1, black objects near 0.
A great article about sampling with Hammersley and Halton points! ~
I found a super helpful SIGGRAPH course paper that delves a bit more into quasi-Monte Carlo sampling ~ http://statweb.stanford.edu/~owen/courses/362/readings/siggraph03.pdf
For independent random samples, the worst-case discrepancy is exactly one, because in the worst case all the samples can land on a single point.
More details on low-discrepancy sequences: https://www.sciencedirect.com/science/article/pii/S0885064X12000854
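To make "discrepancy" concrete, in 1-D the star discrepancy of a point set has an exact formula over the sorted points, so you can compare a low-discrepancy sequence against random samples directly. This sketch (function names are my own) uses the base-2 van der Corput sequence:

```python
import random

def star_discrepancy_1d(points):
    """Exact 1-D star discrepancy of a point set in [0, 1):
    D* = max over sorted points x_(i) of max(i/N - x_(i), x_(i) - (i-1)/N)."""
    xs = sorted(points)
    n = len(xs)
    return max(max((i + 1) / n - x, x - i / n) for i, x in enumerate(xs))

def van_der_corput(n, base=2):
    """First n points of the base-b van der Corput low-discrepancy sequence."""
    pts = []
    for k in range(n):
        i, x, denom = k, 0.0, 1.0
        while i:
            denom *= base
            i, digit = divmod(i, base)
            x += digit / denom
        pts.append(x)
    return pts

n = 256
rng = random.Random(1)
d_vdc = star_discrepancy_1d(van_der_corput(n))              # exactly 1/256 here
d_rand = star_discrepancy_1d([rng.random() for _ in range(n)])  # much larger
```

For a power-of-two count, the base-2 van der Corput points are exactly the multiples of 1/n, so their discrepancy hits the minimum possible value 1/n, while random points do far worse.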
Sobol' sampling has a convenient and unique feature: it is actually stratified whenever the sample count is a power of two.
Note that the 1x MSE here is meant to indicate that this is the baseline MSE, not that its value is actually 1.0.
In audio engineering, electronics, physics, and many other fields, the color of noise refers to the power spectrum of a noise signal (a signal produced by a stochastic process). Different colors of noise have significantly different properties: for example, as audio signals they will sound different to human ears, and as images they will have a visibly different texture.
More details on the quasi-Monte Carlo method: https://en.wikipedia.org/wiki/Quasi-Monte_Carlo_method
Here we see the effect we would get if we don't take enough sample points to fill in the space all the way: we get the stripes or patterns we were trying to avoid in the first place. One solution is to generate the samples but randomize their order.
The noise in A is blue noise, compared to white noise in B. The human eye finds A more appealing since most of the power is in the high frequencies, whereas white noise has even power across all frequencies. A few slides later, we see that lowering the power of the low frequencies to 0 is ideal.
V(f) is the bounded variation of f and measures how quickly the function changes/wiggles over the [0,1] cube we are sampling in. Intuitively, if the function changes quickly, it is harder to estimate by sampling and the error bound is larger. Also, Matt mentioned in lecture that we can't actually compute the variation for the functions we use in rendering.
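For reference, the bound being described here is the Koksma-Hlawka inequality, which ties the quasi-Monte Carlo integration error to the product of the function's variation and the point set's star discrepancy:

$$\left|\frac{1}{N}\sum_{i=1}^{N} f(x_i) - \int_{[0,1]^s} f(x)\,dx\right| \le V(f)\, D_N^*(x_1,\ldots,x_N)$$

where $$V(f)$$ is the variation of f in the sense of Hardy and Krause and $$D_N^*$$ is the star discrepancy of the sample points. This is why low-discrepancy point sets give low error for functions of bounded variation.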
The same permutation is used for all samples for a single base. Also, permutations are often carefully chosen to maintain low discrepancy and remove correlation between dimensions. (https://www.pbr-book.org/3ed-2018/Sampling_and_Reconstruction/The_Halton_Sampler)
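A minimal sketch of the permuted radical inverse that a scrambled Halton sampler is built on (my own simplified version: it assumes the digit permutation keeps 0 fixed, since a permutation that moves the zero digit would also affect the infinitely many trailing zeros and needs a correction term, as the pbrt chapter linked above discusses):

```python
def radical_inverse(k, base, perm=None):
    """Radical inverse of integer k in the given base, optionally applying a
    digit permutation. The same permutation is reused for every digit
    position and every sample, matching the description above."""
    x, denom = 0.0, 1.0
    while k:
        k, digit = divmod(k, base)
        if perm is not None:
            digit = perm[digit]   # assumes perm[0] == 0 (see note above)
        denom *= base
        x += digit / denom
    return x

def halton_2d(n, perm3=None):
    """First n 2-D Halton points, bases 2 and 3 (base 3 optionally permuted)."""
    return [(radical_inverse(i, 2), radical_inverse(i, 3, perm3))
            for i in range(n)]

# Without a permutation this is the plain Halton sequence:
# halton_2d(4) -> [(0, 0), (1/2, 1/3), (1/4, 2/3), (3/4, 1/9)]
```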
I found a website showing some analyses on different sampling methods, and a comparison of jittered sampling techniques. http://extremelearning.com.au/a-simple-method-to-construct-isotropic-quasirandom-blue-noise-point-sequences/
To add to your question, it seems like that highly desirable property of an equivalent distribution at higher and higher numbers of samples would be related to the self-similarity property of fractals!
It's pretty interesting how Sobol's sequence generates a Sierpinski-triangle-like triangle. Is there a relation between low-discrepancy sequences and space-filling curves?
In class it was mentioned that Halton sampling generally performs much better than just 1.13x independent random samples. In this image, too, Halton sampling looks substantially better than random sampling. How close of an indicator is MSE to the sampling accuracy and smoothness of the outputted image? What artifacts in the outputted image should one look for to evaluate the quality of the sampling algorithm (such as between Halton and Sobol here)?
I've been reading through this article about stratified point sampling using a technique that begins with uniform sampling of a 3D mesh, places a charged particle at each sampled position, and also incorporates octree voxelization of the model. Very interesting!
I found these lecture slides really helpful in understanding more about the different types of environmental mapping as well as reflection mapping ~ https://cseweb.ucsd.edu/classes/wi18/cse167-a/lec13.pdf
Sobol' samples have smaller error than stratified sampling since we are able to efficiently sample the light source in its entirety instead of pre-splitting the light into stratified sections.
How do you actually construct samples that follow a Blue Noise pattern?
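One common recipe (a sketch of Mitchell's best-candidate algorithm, not necessarily what was shown in class; there are better methods like Poisson-disk dart throwing or Bridson's algorithm): for each new point, draw several random candidates and keep the one farthest from all points placed so far. This pushes points apart and yields a blue-noise-like distribution.

```python
import random

def best_candidate_points(n_points, n_candidates=20, seed=0):
    """Mitchell's best-candidate algorithm: an easy, approximate way to get a
    blue-noise-like 2-D point set in the unit square."""
    rng = random.Random(seed)
    points = [(rng.random(), rng.random())]
    for _ in range(n_points - 1):
        best, best_d2 = None, -1.0
        for _ in range(n_candidates):
            c = (rng.random(), rng.random())
            # Squared distance from this candidate to its nearest existing point.
            d2 = min((c[0] - p[0]) ** 2 + (c[1] - p[1]) ** 2 for p in points)
            if d2 > best_d2:
                best, best_d2 = c, d2
        points.append(best)
    return points

pts = best_candidate_points(100)
```

Compared with 100 purely random points, the minimum pairwise distance here is noticeably larger, which is exactly the "no clumping" property of blue noise.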
The point illustrated here was that in cases like on the left, there's really no benefit to stratification. On the right, however, it would be very useful to make sure we're taking samples from all over. This raises the question: when should we stratify, and how?
For pixels like the top left or bottom right, the variance is 0. It's only along the diagonal that we'd have non-zero variance. This is why we get 1/(N^1.5) instead of 1/N: there's only variance in roughly the square root of the total number of strata.
The MSE of all Monte Carlo sampling strategies decreases with more samples. You are right that most methods decrease as 1/N with more samples, whereas stratified sampling decreases as 1/N^(3/2).
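A toy 1-D demo of that gap (my own sketch; the exact exponent depends on dimension and integrand smoothness, and for smooth 1-D integrands jittered stratification actually does even better than N^(-3/2)):

```python
import random

def mse(estimates, truth):
    return sum((e - truth) ** 2 for e in estimates) / len(estimates)

def estimate_uniform(f, n, rng):
    # n independent uniform samples on [0, 1].
    return sum(f(rng.random()) for _ in range(n)) / n

def estimate_stratified(f, n, rng):
    # One jittered sample per stratum [i/n, (i+1)/n).
    return sum(f((i + rng.random()) / n) for i in range(n)) / n

# Integrate f(x) = x^2 over [0, 1]; the exact value is 1/3.
f = lambda x: x * x
truth = 1.0 / 3.0
rng = random.Random(0)
n, trials = 16, 2000
mse_u = mse([estimate_uniform(f, n, rng) for _ in range(trials)], truth)
mse_s = mse([estimate_stratified(f, n, rng) for _ in range(trials)], truth)
# mse_s comes out far smaller than mse_u for the same sample count.
```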
The cost goes up linearly with the number of samples, but the cost per sample can differ between sampling strategies. In our case, most of the cost is in performing ray tracing, not in generating a sample, so the costs of uniform random sampling and stratified sampling are roughly the same. However, it is possible to use a sampling strategy where the cost of generating a sample is very different.
In general, as you point out: since the error decreases faster for stratified sampling but the cost is roughly the same, stratified sampling is often a net win.
Stratified sampling is used to capture differences between groups in a population, as opposed to simple random sampling, which treats all members of a population as equal, with an equal likelihood of being sampled.
More information about this: https://en.wikipedia.org/wiki/Importance_sampling
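A minimal importance-sampling sketch (my own illustration, not from the slides): draw samples from a pdf p(x) that roughly matches the integrand's shape, and weight each sample by f(x)/p(x); the estimator stays unbiased but its variance drops when p tracks f well.

```python
import math
import random

def estimate_importance(f, n, rng):
    """Importance-sampled estimate of the integral of f over [0, 1],
    using p(x) = 2x, sampled by inversion: x = sqrt(u)."""
    total = 0.0
    for _ in range(n):
        x = math.sqrt(rng.random())          # sample from p(x) = 2x
        total += f(x) / (2.0 * x)            # weight by f(x) / p(x)
    return total / n

# Integrand peaked toward x = 1 (loosely like a cosine-weighted term).
f = lambda x: x ** 3                         # exact integral over [0,1] is 1/4
rng = random.Random(0)
est = estimate_importance(f, 1000, rng)
```

Here f(x)/p(x) = x^2/2 varies much less than f itself, which is where the variance reduction comes from; the extreme case p proportional to f would give zero variance.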
I am fascinated by this weighted method. Information theory suggests lower bounds on how few sample points such methods can use, and the approach listed here seems pretty close to that limit.
Ambient occlusion is a shading and rendering method used for global background or indirect shading of objects. It adds realism to the render by creating soft global shadows that contribute to the visual separation of objects.
If there are two distribution peaks that are close to each other, how can we build a new $$p(x)$$? Say, there are two cameras that are close to each other.