What's Going On
hcaseyal commented on slide_066 of Reflection Models III ()

To clarify, R, TT, and TRT are the unscattered lobes, and TT_s and TRT_s are the scattered lobes. Also, the medulla is usually larger in animal fur than in human hair.


kingofleaves commented on slide_035 of Direct Lighting ()

I've been very curious about what that uniform gray spot right above the back tire is... is that a reflection of the light source or something?


steve_buscemi27 commented on slide_028 of Participating Media and Volume Rendering ()

Volumetric rendering gives very realistic lighting. It is remarkable.


kingofleaves commented on slide_035 of Reflection Models III ()

Yeah, I agree. On the right side, the eye, the mouth, and other tiny muscle details make it look expressionless compared to the one on the left.


kingofleaves commented on slide_070 of Reflection Models III ()

For context, the 3 images on the left are highly zoomed-in versions of the 3 images on the right.


hcaseyal commented on slide_043 of Reflection Models III ()

Note how the hairs appear flat in the far field model. This is because the far field model is a rougher approximation in order to render the hairs more efficiently. Far field is generally used when the hairs are less than a pixel in width (i.e., the hairs are far away).


hcaseyal commented on slide_036 of Bidirectional Light Transport ()

Could someone clarify why these cases are difficult? Is it light emitted from behind glass?


kingofleaves commented on slide_004 of Global Illumination ()

I missed this lecture, so I didn't really get what this splitting thing does. Can somebody refer me to some reference/text/papers that talk about it, or even better, give me a quick explanation here? Thanks!


hcaseyal commented on slide_005 of Bidirectional Light Transport ()

I ran into this in my project as well when using area lights and a low number of samples per pixel.


kefisher commented on slide_073 of Reflection Models III ()

I was really, really impressed with the hair modeling and rendering in MOANA. Definitely worth a look:

https://www.youtube.com/watch?v=dNNUk8oOg4I


kefisher commented on slide_005 of Bidirectional Light Transport ()

When building my final scene for my project, I ran into this issue a ton! I was building my project on the path integrator since it was more readily amenable to my rendering algorithm.

I had a glass jar filled with glass tubes right underneath a lamp in my scene, and the resulting spray of light caustics was reminiscent of this image.


kefisher commented on slide_015 of Global Illumination ()

Also, notice that the reflection on the bathtub appears to get brighter as the number of bounces increases. Interesting!


steve_buscemi27 commented on slide_009 of Direct Lighting ()

Hemisphere sampling spreads its samples over many more directions than sampling the area light directly, which shows up as a considerable amount of noise on the left. Area-light sampling gives a nice soft shadow, while hemisphere sampling gives noise because of the difference in sampled directions (half of a sphere compared to just the square light).
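Not from the slides, just a minimal sketch I wrote to convince myself of the difference: a single diffuse point on the floor under a square light, no occluders, and made-up numbers throughout. Both estimators converge to the same answer, but the hemisphere one wastes most of its rays on directions that miss the light, which matches the noise on the left:

```cpp
#include <cmath>
#include <cstdio>
#include <random>

int main() {
    std::mt19937 rng(42);
    std::uniform_real_distribution<double> U(0.0, 1.0);
    const double PI = 3.14159265358979323846;

    // Square light: y = h, with x and z in [-s, s], emitting Le toward the ground.
    const double h = 2.0, s = 0.5, Le = 10.0, A = (2 * s) * (2 * s);
    const double brdf = 0.7 / PI;   // Lambertian BRDF for a 0.7-albedo floor
    const int N = 100000;

    // 1) Uniform hemisphere sampling: pdf(w) = 1 / (2*pi); most rays miss the light.
    double sumHemi = 0.0;
    for (int i = 0; i < N; ++i) {
        double cosT = U(rng), sinT = std::sqrt(1.0 - cosT * cosT), phi = 2 * PI * U(rng);
        double wx = sinT * std::cos(phi), wy = cosT, wz = sinT * std::sin(phi);
        if (wy <= 0.0) continue;
        double t = h / wy;                                      // hit point on the plane y = h
        if (std::fabs(t * wx) <= s && std::fabs(t * wz) <= s)   // inside the light?
            sumHemi += brdf * Le * cosT * (2 * PI);             // divide by pdf = 1/(2*pi)
    }

    // 2) Area-light sampling: pick a point on the light, pdf_A = 1/A,
    //    then convert to the solid-angle measure with the geometry term.
    double sumArea = 0.0;
    for (int i = 0; i < N; ++i) {
        double px = (2 * U(rng) - 1) * s, pz = (2 * U(rng) - 1) * s;
        double d2 = px * px + h * h + pz * pz;
        double cosT = h / std::sqrt(d2);   // cosine at the shading point (normal +y)
        double cosL = cosT;                // cosine at the light (normal -y), same here
        sumArea += brdf * Le * cosT * cosL * A / d2;
    }

    std::printf("hemisphere sampling: %f\n", sumHemi / N);
    std::printf("area-light sampling: %f\n", sumArea / N);
}
```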


kefisher commented on slide_027 of Direct Lighting ()

If an object is in the shade in a scene with such a map, it might oversample the sun and constantly find that the environment map is occluded. (This means we never get the blue light from the blue sky!)


kefisher commented on slide_027 of Direct Lighting ()

I wonder if there is some smart adjustment for making these sampling algorithms a little more fair for skymaps where the sun is 1000x (more?) brighter than the shadier parts of the map. I wonder if there are situations where variance is introduced because we oversample the sun.
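One simple tweak in that direction (my own toy sketch, not something from the lecture, and alpha = 0.1 is an arbitrary choice) is a defensive mixture pdf: blend the luminance-proportional distribution with a small uniform component, so the sun can't soak up every sample and shaded points still occasionally draw sky texels. The estimator stays unbiased as long as you divide by the actual mixture pdf:

```cpp
#include <cstdio>
#include <numeric>
#include <random>
#include <vector>

int main() {
    // Toy "environment map": one very bright texel (the sun) plus dim sky texels.
    std::vector<double> lum = {1000.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0};
    const double total = std::accumulate(lum.begin(), lum.end(), 0.0);
    const double alpha = 0.1;                 // weight of the uniform component
    const int n = static_cast<int>(lum.size());

    // Mixture pdf over texels: p(i) = (1 - alpha) * lum[i]/total + alpha * 1/n
    std::vector<double> pdf(n), cdf(n);
    double run = 0.0;
    for (int i = 0; i < n; ++i) {
        pdf[i] = (1.0 - alpha) * lum[i] / total + alpha / n;
        run += pdf[i];
        cdf[i] = run;
    }

    std::mt19937 rng(7);
    std::uniform_real_distribution<double> U(0.0, 1.0);
    std::vector<int> counts(n, 0);
    const int N = 100000;
    double estimate = 0.0;                    // MC estimate of sum_i lum[i]
    for (int s = 0; s < N; ++s) {
        double u = U(rng);
        int i = 0;
        while (i < n - 1 && u > cdf[i]) ++i;  // invert the discrete CDF
        ++counts[i];
        estimate += lum[i] / pdf[i];          // unbiased: E[f/p] = sum of f
    }
    std::printf("estimate of total radiance: %f (true %f)\n", estimate / N, total);
    for (int i = 0; i < n; ++i)
        std::printf("texel %d: pdf %.4f, hits %d\n", i, pdf[i], counts[i]);
}
```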


steve_buscemi27 commented on slide_001 of The Light Field ()

The main difference between a light-field camera and a normal camera is the detection of light. The light field camera gets more information about the light, like direction and intensity, while a normal camera just gets the intensity. There are a couple of examples of light field cameras: one of them is just multiple cameras in an array, which would suffice to gather information about the light in a certain space.


steve_buscemi27 commented on slide_007 of Ray Tracing I: Basics ()

For clarification: the depth means the number of times the ray bounces/changes direction. You can see the spheres within the sphere at depth = 3, whereas those spheres were dark when the depth was 2.


steve_buscemi27 commented on slide_019 of Ray Tracing II: Acceleration Structures ()

A uniform grid would work well for this image because of the consistent amount of detail throughout the image. The blades of grass cover most of the image, but the details of the trees also need similar attention to detail. Thus, there is no need for breaking up the grid into finer parts in this image.


kefisher commented on slide_032 of Monte Carlo 3 ()

Can GPUs reverse bits in parallel?

It's little operations like this that make me think that maybe a dedicated ray-tracing hardware device would be awesome for rendering.

Looks like these are in existence:

https://en.wikipedia.org/wiki/Ray-tracing_hardware

"RayCore (2011), which is the world's first real-time ray tracing semiconductor IP, was announced."


steve_buscemi27 commented on slide_022 of Signal Processing and Sampling ()

The frequency domain of this image is very similar to that of the very first image a few slides ago, but the hole in the middle of the frequency domain gets rid of the depth and color of the original image. However, the light outlining is still intact. It goes to show that the farther out you go in the frequency domain, the more the content corresponds to outlines and edges.


kefisher commented on slide_012 of Monte Carlo II: Variance Reduction ()

I've been wondering if there are optimizations for polygons which are at oblique angles relative to the viewing position. If we rotate a triangle, there's a chance that from the viewing position, the samples will be unevenly distributed.


kefisher commented on slide_068 of Reflection Models III ()

It's really incredible how a little error in your rendering model can cause a hugely visible difference in the final result, yet sometimes you don't notice it (like the brown fur looking way darker). There are also tons of other subtle differences between this and the previous slide. This could serve as a warning for my future rendering efforts: make good test cases to check that the big picture is correct too.


steve_buscemi27 commented on slide_052 of Monte Carlo 3 ()

There is a tremendous amount of noise along the tips and edges of the model. I wonder if this has anything to do with the type of lighting going on in this scene.


kefisher commented on slide_034 of Bidirectional Light Transport ()

In class I asked what happens when a scene like this needs to be rendered for a movie and there is just no way to get the variance all the way down.

Matt said that there are machine learning algorithms which can identify noisy pixels and just cut them out of the mix (this means that our integrator will be a bit biased).


kefisher commented on slide_030 of Bidirectional Light Transport ()

^ since final-quality renders are very expensive to compute -- perhaps artists preview their work with low-quality renders and aren't always aware of the high-variance objects in the scene


kefisher commented on slide_030 of Bidirectional Light Transport ()

Even with very clever sampling, it seems like objects like this are just extremely difficult to render well. I wonder if artists work while aware of this effect in industry.


Since clouds contain so much scattering media, I wonder how many times light bounces within them. With many bounces, it seems like there would be a large amount of absorption too, and consequently it seems clouds would heat up. If not, clouds must have a really low absorption coefficient.


In my project, I implemented volumetric emission. It turned out that I was able to get really good results with a pretty naive strategy. It seems like most of the variance originating from volumetric media is introduced by the scattering effect.


kefisher commented on slide_015 of Global Illumination ()

^ I think this is because some of the reflections have taken multiple bounces off the mesh of the water's surface.


kefisher commented on slide_015 of Global Illumination ()

Does anyone know what a typical number of bounces might be in an industrial-quality render?

A little off-topic, but I also wonder this about samples per pixel, etc.


kefisher commented on slide_018 of Global Illumination ()

I noticed that the light bulbs have some absorption to make them appear more visible. Maybe the refraction of the glass is disabled?

I've wondered how a glass lightbulb should be rendered. Because it's behind a refractive surface, it seems that there would need to be a very smart algorithm, like photon mapping or bidirectional path tracing, to help account for the unpredictable path of the high intensity light from the filament.


kefisher commented on slide_022 of Direct Lighting ()

This slide made me wonder whether environment maps need to be stored in a special format, since there might be a washed-out sun and dark ground.

Matt told me that these maps are stored as floating point HDR images, so there's no trouble representing the dynamic range.


kefisher commented on slide_023 of Direct Lighting ()

Maybe by using the mirror sphere with multiple shots (e.g. from three positions at 120-degree intervals), we could stitch together the sphere's shots in a way that gives a nice resolution for each part of the map.


kefisher commented on slide_026 of Monte Carlo 3 ()

When we learned this I was thinking of base 10. Think of the sequence 00, 01, 02, 03, ..., 98, 99.

Our radical inverse sequence will be

0.00, 0.10, 0.20, 0.30, 0.40, 0.50, 0.60, 0.70, 0.80, 0.90, 0.01, 0.11, 0.21, 0.31, 0.41, 0.51, 0.61, 0.71, 0.81, 0.91, 0.02, 0.12, 0.22, 0.32, 0.42, 0.52, 0.62, 0.72, 0.82, 0.92 ... 0.09, 0.19, 0.29, 0.39, 0.49, 0.59, 0.69, 0.79, 0.89, 0.99
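And here's the same idea as a small general-base sketch (standard formulation, nothing assignment-specific), checked against the base-10 list above:

```cpp
#include <cstdio>

double radicalInverse(int n, int base) {
    double inv = 0.0, digitValue = 1.0 / base;
    while (n > 0) {
        inv += (n % base) * digitValue;  // peel off the lowest digit...
        digitValue /= base;              // ...and mirror it across the point
        n /= base;
    }
    return inv;
}

int main() {
    // First few terms in base 10: 0.00, 0.10, 0.20, ..., 0.90, 0.01, 0.11, 0.21, ...
    for (int i = 0; i < 13; ++i)
        std::printf("%d -> %.2f\n", i, radicalInverse(i, 10));
}
```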


kefisher commented on slide_035 of Monte Carlo II: Variance Reduction ()

I'm wondering what the bias would be if the estimation of the function weren't very accurate. So far I have only seen "good" estimations that work really well. What is a failure mode?


kefisher commented on slide_012 of Monte Carlo II: Variance Reduction ()

(unevenly distributed as perceived from the viewing position)


valh commented on slide_006 of Reflection Models III ()

This is a really interesting depiction of the way light interacts with the physical components of our skin. I find it interesting that most of the light seems to scatter in the upper layers of the skin, while the light seems to go easily through blood.


ewang7 commented on slide_019 of Global Illumination ()

The Nayar et al. paper describes how to separate direct and global lighting without knowing material properties, which is quite impressive considering all the effects one has to consider (e.g. the subsurface scattering in the fruits/marble above). Nayar actually captured images of the scene lit by a high-frequency pattern, such as a checkered light, the theory being that lit areas show direct plus global lighting, while unlit areas show only global contributions, and we can reconstruct the scene from there. The paper also suggested that you can edit the direct or global component separately (for example, some of the novel images Nayar created edited the skin tone of a human face).
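As I understand the paper (so take this as a hedged sketch, not the authors' code): for a checker pattern that activates roughly half of the source, each pixel's maximum over the shifted patterns is about direct + global/2 and its minimum is about global/2, so the separation is just a per-pixel subtraction. The pixel values below are placeholders:

```cpp
#include <cstdio>
#include <vector>

int main() {
    // Per-pixel maximum/minimum over all shifted checker illumination patterns.
    std::vector<double> maxImg = {0.9, 0.5, 0.3};   // placeholder pixels
    std::vector<double> minImg = {0.2, 0.2, 0.1};

    for (std::size_t i = 0; i < maxImg.size(); ++i) {
        double direct = maxImg[i] - minImg[i];      // Ld = Lmax - Lmin
        double global = 2.0 * minImg[i];            // Lg = 2 * Lmin (half the source lit)
        std::printf("pixel %zu: direct %.2f, global %.2f\n", i, direct, global);
    }
}
```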


Do you think these are considered microfacets? I wonder at what point it makes sense to switch between a mesh representation of surfaces and other ways. One example which comes to mind is Toy Story's character "Rex". I wonder if his surface is done with some clever bump-mapping or if it's just a complicated mesh.

https://s-media-cache-ak0.pinimg.com/originals/5a/db/a3/5adba38bbacccac16da89a0915442b54.jpg


I thought it was really interesting that Prof. Hanrahan mentioned that the human eye can't detect the difference between certain diffuse or blurred surfaces unless the appropriate additional information is given (like edges of the diffuse-surfaced shape, or parallax).


(Or maybe it's just tuned by hand and there is no post-processing)


Does this mean that professional photoshoots will have multiple captures per shot? (like HDR, where you have multiple images captured for a given photo).


ewang7 commented on slide_028 of Global Illumination ()

Since this picture appeared, I found a blog post stating that Bikker was able to get the mushroom structure in Kajiya's previous image now. Unfortunately I wasn't able to find the updated version of the image; it seems to be gone. This particular image was rendered on a GTX 560 it seems. You can watch the video of Bikker traversing the scene here.


kefisher commented on slide_006 of Camera Simulation ()

I think one question that was asked during this lecture is: why do we use a flat film plane rather than a spherical "film plane", like the eye has?

I think this is because when we show an image on a flat display, an appropriate objective is to make it appear that we are looking through a window into another scene. The best way to achieve this is by using a planar film plane.

Maybe there is some VR setting where a spherical film plane would be better... but it's hard for me to imagine.


valh commented on slide_010 of Reflection Models III ()

@mcirino, I don't think I caught onto this during class, but what are the consequences of this property when rendering?


kefisher commented on slide_008 of Monte Carlo I: Integration ()

juudmae, I've also been wondering about what is "good enough" for convergence.

One thing that's interesting that Matt pointed out is that sometimes it's possible to remove a lot of lingering noise in an image by using machine learning to filter the samples for a given pixel. We might still have some noise in our image with an ordinary filter, but an adaptive one could help a lot in approximating the true expectation of a pixel.


kefisher commented on slide_019 of Monte Carlo I: Integration ()

This seems like a very good scenario for stratified sampling. Almost any time we are sampling some area, stratification seems to help.

I am wondering -- is stratified sampling used for things besides obvious geometric cases? In this course, it seems we only use stratified sampling over areas. But it seems that there might be additional parts of the integration process that are less geometric (e.g. time sampling) which could benefit from this intuition.
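To poke at my own question, here's a toy sketch (nothing from the course) of stratification applied to a non-geometric dimension: shutter time. It integrates a made-up time-varying "radiance" over the shutter interval with independent samples versus one jittered sample per stratum; on average the stratified estimator has noticeably lower variance here:

```cpp
#include <cmath>
#include <cstdio>
#include <random>

double radianceAtTime(double t) {            // stand-in for a moving highlight
    return 0.5 + 0.5 * std::sin(20.0 * t);
}

int main() {
    std::mt19937 rng(1);
    std::uniform_real_distribution<double> U(0.0, 1.0);
    const int N = 16;
    const double truth = 0.5 + 0.5 * (1.0 - std::cos(20.0)) / 20.0; // exact integral

    double indep = 0.0, strat = 0.0;
    for (int i = 0; i < N; ++i) {
        indep += radianceAtTime(U(rng));                 // independent samples
        strat += radianceAtTime((i + U(rng)) / N);       // one jittered sample per stratum
    }
    std::printf("independent: %.4f  stratified: %.4f  exact: %.4f\n",
                indep / N, strat / N, truth);
}
```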


kefisher commented on slide_007 of The Light Field ()

I was really curious about this camera. From this picture, it appears to only have one lens (so how can it capture a light field?), but apparently it sorts out the rays using an array of "microlenses":

"Lytro's light field sensor uses an array of micro-lenses placed in front of an otherwise conventional image sensor; to sense intensity, color, and directional information. Software then uses this data to create displayable 2D or 3D images."


lion commented on slide_047 of Bidirectional Light Transport ()

In photon mapping, rays are traced from both the light sources and the camera in two separate passes: the light pass deposits photons into a photon map, and the camera pass then estimates radiance at its hit points from the nearby stored photons.