To clarify, R, TT, TRT are the unscattered lobes and TT_s and TRT_s are the scattering lobes.
Also, the medulla is usually larger for animal fur than for human hairs.
I've been very curious about what that uniform gray spot right above the back tire is... is that a reflection of the light source or something?
Volumetric rendering gives very realistic lighting. It is remarkable.
Yeah I agree. For the right side, the eye, the mouth, and other tiny muscle details make it look expressionless compared to the one on the left.
For context, the 3 images to the left are highly zoomed in versions of the 3 images on the right.
Note how the hairs appear flat in the far field model. This is because the far field model is a rougher approximation in order to render the hairs more efficiently. Far field is generally used when the hairs are less than a pixel in width (i.e., the hairs are far away).
Could someone clarify why these cases are difficult? Is it light emitted from behind glass?
I missed this lecture, so I didn't really get what this splitting thing does. Can somebody refer me to some reference/text/papers that talk about it, or even better, give me a quick explanation here? Thanks!
I ran into this in my project as well when using area lights and a low number of samples / pixel
I was really, really impressed with the hair modeling and rendering in Moana. Definitely worth a look:
when building my final scene for my project, I ran into this issue a ton! I was building my project on the path integrator since it was more readily amenable to my rendering algorithm.
I had a glass jar filled with glass tubes right underneath a lamp in my scene, and the resulting spray of light caustics was reminiscent of this image.
Also, notice that the reflection on the bathtub appears to get brighter as the number of bounces increases. Interesting!
Hemisphere lighting spreads more than an area light, which is why there's a considerable amount of noise on the left. Area lighting gives a nice soft shadow, while hemisphere lighting gives noise because of the wider spread of light directions (coming from half of a sphere rather than a small square).
If an object is in the shade in a scene with such a map, we might oversample the sun and constantly find that the sampled direction is occluded. (This means we never get the blue light from the blue sky!)
I wonder if there is some smart adjustment for making these sampling algorithms a little more fair for skymaps where the sun is 1000x (more?) brighter than the shadier parts of the map. I wonder if there are situations where variance is introduced because we oversample the sun.
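For what it's worth, the standard fix I've seen is luminance-weighted importance sampling of the map, with a 1/pdf weight in the estimator so the sun isn't over-counted (often combined with multiple importance sampling to handle exactly the occluded-sun case). A minimal sketch of the table-building part, using a made-up toy "map" and hypothetical helper names:

```python
import bisect

def build_cdf(luminances):
    """Discrete CDF over pixel luminances: bright pixels (like the sun)
    get picked proportionally more often."""
    total = sum(luminances)
    cdf, acc = [], 0.0
    for lum in luminances:
        acc += lum
        cdf.append(acc / total)
    return cdf

def sample_pixel(cdf, u):
    """Map a uniform random number u in [0, 1) to a pixel index."""
    return bisect.bisect_left(cdf, u)

# toy "map": one sun pixel 1000x brighter than nine sky pixels
lum = [1000.0] + [1.0] * 9
cdf = build_cdf(lum)
pdf = [l / sum(lum) for l in lum]
# an estimator would weight each sampled pixel's radiance by 1/pdf,
# which is what keeps the sampling "fair" despite the bright sun
```

The oversampling worry still stands, though: when the sun is occluded, almost every sample is wasted, which is why combining this with BSDF sampling via MIS helps.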
The main difference between a light-field camera and a normal camera is how it detects light. The light field camera gets more information about the light, like direction as well as intensity, while the normal camera just gets the intensity. There are a couple of examples of light field cameras: one of them is just multiple cameras in an array; that would suffice to gather information about light in a certain space.
For clarification: the depth means the number of times the ray bounces/changes direction. You can see the spheres reflected within the sphere at depth = 3, whereas those spheres were dark at depth = 2.
A uniform grid would work well for this image because of the consistent amount of detail throughout the image. The blades of grass cover most of the image, but the details of the trees also need similar attention to detail. Thus, there is no need for breaking up the grid into finer parts in this image.
Can GPUs reverse bits in parallel?
It's little operations like this that make me think that maybe a dedicated ray-tracing hardware device would be awesome for rendering.
Looks like these are in existence:
"RayCore (2011), which is the world's first real-time ray tracing semiconductor IP, was announced."
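On the parallel bit-reversal question: reversing a word is a handful of branch-free mask-and-shift steps, trivially parallel across words, and as far as I know CUDA even exposes it as a single `__brev` intrinsic. A sketch of the trick (in Python for readability, not actual GPU code):

```python
def reverse_bits32(x):
    """Reverse a 32-bit word in O(log n) mask-and-shift steps: swap
    adjacent bits, then adjacent pairs, nibbles, bytes, and halves.
    Branch-free, so it maps well onto GPU hardware."""
    x = ((x >> 1) & 0x55555555) | ((x & 0x55555555) << 1)
    x = ((x >> 2) & 0x33333333) | ((x & 0x33333333) << 2)
    x = ((x >> 4) & 0x0F0F0F0F) | ((x & 0x0F0F0F0F) << 4)
    x = ((x >> 8) & 0x00FF00FF) | ((x & 0x00FF00FF) << 8)
    x = ((x >> 16) | (x << 16)) & 0xFFFFFFFF
    return x
```

This is the operation at the heart of the base-2 radical inverse used by low-discrepancy samplers, which is presumably why it came up.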
The frequency domain of this image is very similar to the very first image a few slides ago, but the hole in the middle of the frequency domain gets rid of the depth and color of the original image. However, the outlines are still intact. It just goes to show that the farther outward you go in the frequency domain, the more those frequencies encode the sharp outlines.
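That "hole in the middle" is just a high-pass filter. A quick numpy sketch (toy radius, made-up function name) showing the flip side of the same idea: a perfectly flat image is pure low frequency, so high-passing it leaves nothing at all:

```python
import numpy as np

def high_pass(img, radius):
    """Zero out the low frequencies (a hole of the given radius in the
    middle of the shifted spectrum) and transform back. Edges and
    outlines survive; smooth shading and overall brightness do not."""
    F = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    yy, xx = np.ogrid[:h, :w]
    mask = (yy - h // 2) ** 2 + (xx - w // 2) ** 2 > radius ** 2
    return np.real(np.fft.ifft2(np.fft.ifftshift(F * mask)))

# a constant image has all its energy at the center of the spectrum,
# so the filtered result is (numerically) zero everywhere
flat = np.ones((64, 64))
print(np.abs(high_pass(flat, 4)).max())
```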
I've been wondering if there are optimizations for polygons which are at oblique angles relative to the viewing position. If we rotate a triangle, there's a chance that from the viewing position, the samples will be unevenly distributed.
It's really incredible how a little error in your rendering model can cause a hugely visible difference in the final result, but sometimes you don't realize it (like the brown fur looking way darker). And also there are tons of other subtle differences between this and the previous slide. But this could serve as a warning for my future rendering efforts (making good test cases to make sure the big picture is correct too).
There is a tremendous amount of noise along the tips and edges of the model. I wonder if this has anything to do with the type of lighting going on in this scene.
In class I asked what happens when a scene like this needs to be rendered for a movie and there is just no way to get the variance all the way down.
Matt said that there are machine learning algorithms which can identify noisy pixels and just cut them out of the mix (this means that our integrator will be a bit biased).
^ since final-quality renders are very expensive to compute -- perhaps artists preview their work with low-quality renders and aren't always aware of the high-variance objects in the scene
Even with very clever sampling, it seems like objects like this are just extremely difficult to render well. I wonder if artists work while aware of this effect in industry.
Since clouds have so much scattering media, I wonder how many times light bounces within it. With many bounces, it seems like there would be a large amount of absorption too. Consequently, it seems clouds would heat up. If not, clouds must have a really low absorption coefficient.
In my project, I implemented volumetric emission. It turned out that I was able to get really good results with a pretty naive strategy. It seems like most of the variance originating from volumetric media is introduced by the scattering effect.
^ I think this is because some of the reflections have taken multiple bounces off the mesh of the water's surface.
Does anyone know what a typical number of bounces might be in an industrial-quality render?
A little off-topic, but I also wonder this about samples-per-pixel, etc
I noticed that the light bulbs have some absorption to make them appear more visible. Maybe the refraction of the glass is disabled?
I've wondered how a glass lightbulb should be rendered. Because it's behind a refractive surface, it seems that there would need to be a very smart algorithm, like photon mapping or bidirectional path tracing, to help account for the unpredictable path of the high-intensity light from the filament.
This slide made me wonder whether environment maps need to be stored in a special format, since there might be a washed-out sun and dark ground.
Matt told me that these maps are stored as floating point HDR images, so there's no trouble representing the dynamic range.
Maybe by using the mirror sphere with multiple shots (e.g., from three positions at 120-degree intervals), we could stitch together the sphere's shots so that there's a nice resolution for each part of the map.
When we learned this I was thinking of base 10. Think of the sequence 00, 01, 02, 03, ..., 98, 99.
Our radical inverse sequence will be
0.00, 0.10, 0.20, 0.30, 0.40, 0.50, 0.60, 0.70, 0.80, 0.90,
0.01, 0.11, 0.21, 0.31, 0.41, 0.51, 0.61, 0.71, 0.81, 0.91,
0.02, 0.12, 0.22, 0.32, 0.42, 0.52, 0.62, 0.72, 0.82, 0.92,
...
0.09, 0.19, 0.29, 0.39, 0.49, 0.59, 0.69, 0.79, 0.89, 0.99
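The digit-mirroring above is easy to write down in code; in base 2 it's exactly the bit reversal that the Halton/Hammersley samplers use:

```python
def radical_inverse(n, base=10):
    """Mirror the digits of n about the decimal point,
    e.g. 12 -> 0.21 in base 10."""
    inv, denom = 0.0, 1.0
    while n > 0:
        denom *= base
        inv += (n % base) / denom
        n //= base
    return inv

# reproduces the base-10 table above:
#   n = 0..3   -> 0.0, 0.1, 0.2, 0.3
#   n = 10..12 -> 0.01, 0.11, 0.21
```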
I'm wondering what the bias would be if the estimation of the function weren't very accurate. So far I have only seen "good" estimations that work really well. What is a failure mode?
(unevenly distributed as perceived from the viewing position)
This is a really interesting depiction of the way light interacts with the physical components of our skin. I find it interesting that most of the light seems to scatter in the upper layers of the skin, while the light seems to go easily through blood.
The Nayar et al. paper describes how to separate direct and global lighting without knowing material properties, which is quite impressive considering all the effects one has to consider (e.g. the subsurface scattering in the fruits/marble above). Nayar actually captured images of the scene lit by a high-frequency pattern, such as a checkered light, the theory being that lit areas show direct and global lighting, while unlit areas show only global contributions, and we can reconstruct the scenes from there. The paper suggested that one could then change parameters to directly affect the direct or global component (for example, some of the novel images Nayar created edited the skin tone of a human face).
Do you think these are considered microfacets? I wonder at what point it makes sense to switch between a mesh representation of surfaces and other ways. One example which comes to mind is Toy Story's character "Rex". I wonder if his surface is done with some clever bump-mapping or if it's just explicit mesh geometry.
I thought it was really interesting that Prof. Hanrahan mentioned that the human eye can't detect the difference between certain diffuse or blurred surfaces, unless the appropriate additional information is given (like edges of the diffuse-surfaced shape, or parallax)
(Or maybe it's just tuned by hand and there is no post-processing)
Does this mean that professional photoshoots will have multiple captures per shot? (like HDR, where you have multiple images captured for a given photo).
Since this picture appeared, I found a blog post stating that Bikker was able to get the mushroom structure in Kajiya's previous image now. Unfortunately I wasn't able to find the updated version of the image–it seems to be gone. This particular image was rendered on a GTX 560 it seems. You can watch the video of Bikker traversing the scene here.
I think one question that was asked during this lecture is: why do we use a flat film plane rather than a spherical "film plane," like the eye has?
I think this is because when we show an image on a flat display, an appropriate objective is to make it appear that we are looking through a window into another scene. The best way to achieve this is by using a planar film plane.
Maybe there is some VR setting where a spherical film plane would be better... but it's hard for me to imagine.
@mcirino, I don't think I caught onto this during class, but what are the consequences of this property when rendering?
juudmae, I've also been wondering about what is "good enough" for convergence.
One thing that's interesting that Matt pointed out is that sometimes it's possible to remove a lot of lingering noise in an image by using machine learning to filter the samples for a given pixel. We might still have some noise in our image with an ordinary filter, but an adaptive one could help a lot in approximating the true expectation of a pixel.
This seems like a very good scenario for stratified sampling to help out. Almost any time we are sampling some area, stratification seems to help out.
I am wondering -- is stratified sampling used for things besides obvious geometric cases? In this course, it seems we only use stratified sampling over areas. But it seems that there might be additional parts of the integration process that are less geometric (e.g. time sampling) which could benefit from this intuition.
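To make the "stratification helps" intuition concrete, here's a tiny sketch (toy integrand, made-up function names) comparing the variance of a plain Monte Carlo estimate against a jittered/stratified one over the same number of samples:

```python
import random

def estimate_uniform(f, n, rng):
    # plain Monte Carlo: n independent uniform samples on [0, 1)
    return sum(f(rng.random()) for _ in range(n)) / n

def estimate_stratified(f, n, rng):
    # one jittered sample per stratum [i/n, (i+1)/n)
    return sum(f((i + rng.random()) / n) for i in range(n)) / n

def variance(estimates):
    m = sum(estimates) / len(estimates)
    return sum((e - m) ** 2 for e in estimates) / len(estimates)

rng = random.Random(0)
f = lambda x: x * x          # true integral over [0, 1] is 1/3
uni = [estimate_uniform(f, 16, rng) for _ in range(2000)]
strat = [estimate_stratified(f, 16, rng) for _ in range(2000)]
print(variance(uni), variance(strat))  # stratified is far lower
```

And to the non-geometric question: the same 1D trick applies to any sampled dimension, which is why (as I understand it) renderers also stratify things like time for motion blur and lens position for depth of field.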
I was really curious about this camera. From this picture, it appears to only have one lens (so how can it capture a lightfield?) But apparently it sorts out the rays using an array of "microlenses"
"Lytro's light field sensor uses an array of micro-lenses placed in front of an otherwise conventional image sensor; to sense intensity, color, and directional information. Software then uses this data to create displayable 2D or 3D images."
In photon mapping, rays are traced from both the light sources and the camera in separate passes: light rays deposit "photons" on surfaces, and camera rays then gather the nearby photons to estimate the radiance there.
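A toy sketch of those two passes, just to show the shape of the algorithm (1D "floor," made-up names, and a k-nearest-neighbour density estimate standing in for the real radiance estimate):

```python
import random, math

def shoot_photons(n, rng):
    """Pass 1: shoot photons from a point light at (0, 1) toward the
    floor y = 0 and record where each one lands."""
    photons = []
    for _ in range(n):
        ang = rng.uniform(-math.pi / 3, math.pi / 3)  # downward cone
        photons.append(math.tan(ang))  # x-offset after dropping 1 unit
    return photons

def radiance_estimate(photons, x, k=20):
    """Pass 2: at a camera-ray hit point x, gather the k nearest
    photons; density ~ k / (gather length * total photons), the 1D
    analogue of the usual k-nearest-neighbour disc estimate."""
    dists = sorted(abs(p - x) for p in photons)
    r = dists[k - 1]
    return k / (2 * r * len(photons))

rng = random.Random(0)
photons = shoot_photons(5000, rng)
# directly under the light should be brighter than off to the side
print(radiance_estimate(photons, 0.0), radiance_estimate(photons, 1.5))
```

The "equations" being met are just the rendering equation: the photon density at a point approximates the incoming flux, which the gather step converts into an outgoing radiance estimate.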