Taking into account the wave nature of light would involve solving Maxwell's equations throughout space, which I imagine would be significantly more expensive than raytracing. A ray is just an origin plus a direction (six numbers), while a full wave field is much more complicated.
The key insight here is to assume that each hair is radially smaller than a pixel. Thus it's better to compute the average of diffuse and specular reflection over the entire cross section rather than choosing a specific point on the hair.
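A quick numerical sketch of that averaging idea (hypothetical: `shade_at(s)` evaluates the shading model at a normalized offset s in [-1, 1] across the fiber; in practice the average may be derived analytically rather than sampled):

```python
def hair_shade(shade_at, width_samples=8):
    # Average shading over the hair's cross section instead of
    # evaluating it at a single point on the fiber.
    offsets = (-1 + 2 * (i + 0.5) / width_samples
               for i in range(width_samples))
    return sum(shade_at(s) for s in offsets) / width_samples

# With a shading function that varies linearly across the fiber,
# the average lands at the midpoint value:
avg = hair_shade(lambda s: 0.5 + 0.5 * s)  # -> 0.5
```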
This works well for point and distant lights, but alas it doesn't scale well for many lights, since we need to keep track of an entire grid of values per light.
Here's an interesting paper on a fast fluid dynamics algorithm: https://pdfs.semanticscholar.org/847f/819a4ea14bd789aca8bc88e85e906cfc657c.pdf
The overall effect makes the shadows look softer as light seeps around edges.
This integral is much harder to compute (higher dimensional) than the previous integral, and thus Monte Carlo techniques should be used.
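As a toy illustration of why Monte Carlo works here (not from the slides): uniformly sample directions on the hemisphere and average. For example, the integral of cos(theta) over the hemisphere is exactly pi, and the estimator converges to it:

```python
import math
import random

def sample_hemisphere():
    # Uniform direction on the unit hemisphere (z >= 0):
    # for a uniform area measure, z = cos(theta) is uniform in [0, 1).
    z = random.random()
    phi = 2 * math.pi * random.random()
    r = math.sqrt(max(0.0, 1 - z * z))
    return (r * math.cos(phi), r * math.sin(phi), z)

def mc_integrate(f, n=100_000):
    # E[f] times the hemisphere's solid angle (2*pi) estimates the integral.
    total = sum(f(sample_hemisphere()) for _ in range(n))
    return (total / n) * 2 * math.pi

random.seed(0)
est = mc_integrate(lambda w: w[2])  # approaches pi as n grows
```

The appeal for higher-dimensional integrals is that the error falls as 1/sqrt(n) regardless of dimension, whereas quadrature grids blow up exponentially.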
What do the colors represent in terms of surface normals? It seems like all surfaces of the same color are generally facing the same direction, if I am correct. Were the colors arbitrarily chosen?
Albedo is the proportion of incident light that is scattered (reflected) rather than absorbed.
Compared to the previous slide, the color here has less variance and less noise.
I am also wondering how he was able to put in 8k light sources.
This is a frame from Beeple's Zero-Day animation.
Also the license that Beeple included with their work is funny:
License: "These files are available under a sort of "open source" concept. They are intended for educational use but really can be used for whatever the fuck you want. You don't need to credit me for whatever commercial or non-commercial use you make of them, but if you could shout me a holla with any project that do come from them, I'd love to see it :)"
Interesting side note: Bui Tuong Phong was a Vietnamese-born computer graphics pioneer. His inventions are remembered under his given name, Phong. He also taught at Stanford! Wikipedia page
Why is the water so shiny? It looks like mercury. I thought that with more bounces we should be able to see the transparency of the water?
I wonder why the light source looks so aliased while the corners of the walls and the shadows look so smooth/anti-aliased.
What does the grey box represent?
What is the zoom-in on the black table trying to highlight?
Those black holes in the previous comment aren't real, right? They're just rendered based on black hole physics, not images of an actual black hole? I thought the real black hole rendering was the one mentioned in Lecture 4.
Sorry, I can't see any noticeable differences between this slide and the previous slide. What's the difference?
Some intense baby skin rendering from the Death Stranding trailer. However, I think the baby skin in the game looks really rough texture-wise. The image above is smoother and better.
Something I learned from my short stint as an architecture student in high school: many times clients will find computer-rendered images un-artistic, so architects will often render images and then draw over them by hand so that the drafts have the human touch. Sometimes clients may reject a draft rendered by a computer but accept the same one rendered by a human. So, computers may not be better than humans in some respects!
Beyond the visual production of this scene, Doug James and his students work on generating the audio of fluid flows.
Hair is a double whammy for rendering - it's hard to get the lighting right, as seen here, and also quite difficult to get its physical motion correct.
From this paper! The description says "Figure 9: Left: Illustrators sometimes use the convention of white interior edge lines to produce a highlight. Courtesy of Macmillan Reference USA, a division of Ahsuog, Inc. Right: An image produced by our system, including shading, silhouettes, and white crease lines."
Sadly, despite immense effort being put into realistic skin rendering, human perception is so picky that I think we'll be in the uncanny valley for a while longer.
God rays in games are often used to establish atmosphere and scale, and have become a staple of modern AAA titles.
If we're compute-constrained, can we just pass rays that have exhausted their bounce limit through the cloud (with some attenuation factor)? That way we can preserve expected brightness, and just improve cloud photorealism with more compute time.
How important are those specularities to human perception? It would be interesting to see how much users miss them in real-world images, so we can get a sense of how much approximation is OK.
In general, looking for "early outs" like this will help us spend our computation more wisely in the image. However, some parallel architectures don't deal with branching well, so we have to be careful in implementing these checks.
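A trivial CPU-side sketch of such an early out (hypothetical names, not from the slides; on SIMD/GPU hardware a divergent branch like this can cost more than it saves, which is the caveat above):

```python
def shade(n_dot_l, expensive_brdf, eps=1e-3):
    # Early out: if the light is behind the surface (or its contribution
    # is negligible), skip the expensive BRDF evaluation entirely.
    if n_dot_l <= eps:
        return 0.0
    return n_dot_l * expensive_brdf()

calls = 0
def brdf():
    global calls
    calls += 1
    return 1.0

lit = shade(0.7, brdf)    # evaluates the BRDF
dark = shade(-0.2, brdf)  # skips it
```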
Despite the beautiful demonstration of reflection effects, I can't help but notice the unnaturally uniform textures that really hold back the photorealism of the scene.
MgO (magnesium oxide) is commonly used in optics work as a reference reflectance standard.
Also, this model is easy to intuit and implement, so it sees widespread use in introductory graphics courses - including our own EE267!
The high reflectivity of metals is directly linked to their electron mobility. I'm a little fuzzy on the details though - maybe a physics student could chime in?
In terms of practicality, it's easier and cheaper to calibrate a large array of lights vs. cameras.
Different colors of noise find use in different domains. In sound, pink noise is often used to EQ tune speaker systems, since power is normalized per-octave.
This integral operates over the hemisphere of rays visible from each surface point.
Games used to use supersampling to reduce jaggies, but that colossally increases rendering cost, so modern titles use more perceptually efficient techniques like MSAA (good, but still expensive) or FXAA (not great visually, but cheap).
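For concreteness, here's a sketch of naive supersampling for one pixel (hypothetical `render(px, py)` returning scalar radiance). The cost grows with the square of the factor, which is exactly why it's so expensive:

```python
def ssaa_pixel(render, x, y, factor=4):
    # Render factor^2 sub-pixel samples on a uniform grid and average
    # them into the final pixel value.
    total = sum(render(x + (i + 0.5) / factor, y + (j + 0.5) / factor)
                for i in range(factor) for j in range(factor))
    return total / factor ** 2

# A constant scene averages back to itself:
val = ssaa_pixel(lambda px, py: 0.25, 10, 20)  # -> 0.25
```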
What do the blue lines $p_1$ and $p_3$ represent? The ground? Mirrors?
In practice, we often need to sample at 5 to 10 times the Nyquist rate to reconstruct the signal faithfully, since ideal (sinc) reconstruction filters aren't realizable. We can get away with less if we use smarter interpolation, though.
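A toy experiment along those lines (my own sketch, not from the slides): sample a 1 Hz sine at some multiple of its Nyquist rate, reconstruct with plain linear interpolation, and measure the worst-case error over one second.

```python
import math

def recon_error(oversample, f=1.0, n_test=1000):
    # Sample sin(2*pi*f*t) at `oversample` times the Nyquist rate (2f),
    # reconstruct between samples with linear interpolation, and return
    # the worst absolute error over n_test points in [0, 1).
    fs = 2 * f * oversample
    dt = 1.0 / fs
    worst = 0.0
    for i in range(n_test):
        t = i / n_test
        k = int(t / dt)  # index of the sample just before t
        a = math.sin(2 * math.pi * f * k * dt)
        b = math.sin(2 * math.pi * f * (k + 1) * dt)
        lerp = a + (b - a) * (t / dt - k)
        worst = max(worst, abs(lerp - math.sin(2 * math.pi * f * t)))
    return worst

e2, e10 = recon_error(2), recon_error(10)
```

At 2x Nyquist the linear reconstruction is visibly off (peak error around 0.2 of the amplitude), while 10x shrinks it to around 0.01, which is roughly consistent with the 5-10x rule of thumb when using simple interpolation.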
This API design leaves each shape responsible for defining intersection conditions, which makes it easier to support more complex geometries.
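A minimal sketch of that kind of API (hypothetical class and method names, not the actual course code): an abstract base class declares `intersect`, and each shape implements its own math, so adding a new geometry never touches the tracer loop.

```python
import math
from abc import ABC, abstractmethod

class Shape(ABC):
    # Each shape owns its own ray-intersection logic.
    @abstractmethod
    def intersect(self, origin, direction):
        """Return the smallest positive hit distance t, or None."""

class Sphere(Shape):
    def __init__(self, center, radius):
        self.center, self.radius = center, radius

    def intersect(self, origin, direction):
        # Solve |o + t*d - c|^2 = r^2 for t (direction assumed normalized).
        oc = [o - c for o, c in zip(origin, self.center)]
        b = 2 * sum(o * d for o, d in zip(oc, direction))
        c = sum(o * o for o in oc) - self.radius ** 2
        disc = b * b - 4 * c
        if disc < 0:
            return None  # ray misses the sphere
        t = (-b - math.sqrt(disc)) / 2
        return t if t > 0 else None

# Ray from the origin looking down +z at a unit sphere 5 units away:
t = Sphere((0, 0, 5), 1).intersect((0, 0, 0), (0, 0, 1))  # hits at t = 4
```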
I believe $\rho(H^2 \to \omega_r)$ refers to the reflectance over the hemisphere.
What is the diagram on the left showing? Is the flat line representing the platform that the rabbit is sitting on?
Even though this model isn't physically realistic, we can make it obey conservation of energy by bounding reflectance by 1.
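A minimal sketch of one way to enforce that bound (`kd`/`ks` are hypothetical diffuse/specular coefficients, not notation from the slides):

```python
def energy_conserving(kd, ks):
    # Rescale so total reflectance never exceeds 1: the surface
    # can't reflect more light than it receives.
    total = kd + ks
    if total > 1.0:
        kd, ks = kd / total, ks / total
    return kd, ks

kd, ks = energy_conserving(0.8, 0.6)  # rescaled so kd + ks == 1
```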
Here is the Clarberg paper mentioned above
In the right image, why is there no soft shadow around the inner ball?
A link to their paper
If I remember correctly, Matt mentioned that the right side has slightly higher Monte Carlo efficiency.