
I think this photo presents an interesting contradiction within the graphics world. Obviously we want to be able to make things look as realistic as possible, but in practice, although something like this may not be exactly right, it is close enough to the real thing to be believable. And thus we arrive at a crossroads where we can begin to experiment with which aspects can be compromised in the name of efficiency while still producing a visually appealing and realistic image.



Pat mentioned a few issues with this visualization: bars are occluded from the camera's viewpoint, and their relative heights vary with distance because of perspective projection and foreshortening, which made it a poor fit for the old, slow visualization software of the era.
Interestingly, something about this image reminds me of modern city-sim games like Cities: Skylines or SimCity. In these games, where you do benefit from seeing the height and other characteristics of your buildings, the problem of occlusion and of not easily seeing far-away things is solved by real-time rendering and a camera that can move, while the issue of perspective can be solved again with a camera you can move around the map, or with an orthographic-projecting camera as was common in early -sim and -tycoon games (see the sketch below).
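Here is a minimal NumPy sketch of that last point (my own illustration, assuming OpenGL-style matrix conventions): under a perspective projection, two bars of equal height project to different heights depending on depth, while an orthographic projection keeps them equal.

```python
import numpy as np

def perspective(fov_y, aspect, near, far):
    """OpenGL-style perspective projection matrix."""
    f = 1.0 / np.tan(fov_y / 2.0)
    return np.array([
        [f / aspect, 0.0, 0.0, 0.0],
        [0.0, f, 0.0, 0.0],
        [0.0, 0.0, (far + near) / (near - far), 2.0 * far * near / (near - far)],
        [0.0, 0.0, -1.0, 0.0],
    ])

def orthographic(half_w, half_h, near, far):
    """OpenGL-style orthographic projection matrix (symmetric view volume)."""
    return np.array([
        [1.0 / half_w, 0.0, 0.0, 0.0],
        [0.0, 1.0 / half_h, 0.0, 0.0],
        [0.0, 0.0, -2.0 / (far - near), -(far + near) / (far - near)],
        [0.0, 0.0, 0.0, 1.0],
    ])

def project(m, p):
    """Apply a 4x4 projection to a 3D point and do the perspective divide."""
    x, y, z, w = m @ np.append(p, 1.0)
    return np.array([x, y, z]) / w

# Tops of two bars with identical height 2.0: one near (z = -5), one far (z = -50).
near_top = np.array([0.0, 2.0, -5.0])
far_top = np.array([0.0, 2.0, -50.0])

P = perspective(np.radians(60.0), 1.0, 0.1, 100.0)
O = orthographic(10.0, 10.0, 0.1, 100.0)

# Perspective: the far bar's projected height shrinks with distance...
print(project(P, near_top)[1], project(P, far_top)[1])  # ~0.69 vs ~0.07
# ...while orthographic projection keeps equal heights equal.
print(project(O, near_top)[1], project(O, far_top)[1])  # 0.2 vs 0.2
```

The orthographic matrix has no dependence on depth in its divide, which is exactly why those old -tycoon games let you compare building heights at a glance.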

Thank you everyone for a fantastic quarter and a whole lot of hands-on learning about physically based rendering!
For everyone's bookmarking convenience, here are clickable links to the resources in this slide:
https://kesen.realtimerendering.com/
http://www.realtimerendering.com/raytracinggems
https://wjakob.github.io/nori/
https://rgl.epfl.ch/publications/NimierDavidVicini2019Mitsuba2
And the other resources Matt showed us in lecture that aren't on this slide:


I guess the takeaway for me is that EVERYTHING we see in the real world could influence our perception of a virtual world if it isn't adequately accounted for. Some visual cues (e.g. perspective) have a more pronounced impact on our perception when absent than others. The items in the slide above show, historically, how we've recognized and categorized these cues.





I looked into radiosity as I was curious: https://en.wikipedia.org/wiki/Radiosity_(computer_graphics). (The language I use here may be imprecise.) It basically involves baking diffuse illumination onto surfaces in a way that is viewpoint-independent; radiosity, as a radiometric quantity, has no directional dependence, hence the name. It is how modern games like Battlefield get their amazing lighting effects, though the baking step is computationally expensive, so if a light source moves, the lighting must be "rebaked".
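For reference, here is the classical radiosity equation from the linked Wikipedia article (not from the slide), which expresses each patch's radiosity in terms of every other patch's:

```latex
B_i = E_i + \rho_i \sum_{j} F_{ij} B_j
```

Here $B_i$ is the radiosity of patch $i$, $E_i$ its emitted energy, $\rho_i$ its reflectivity, and $F_{ij}$ the form factor between patches $i$ and $j$. Since the unknowns $B_j$ appear on both sides, baking amounts to solving a large linear system over the whole scene; because that solution depends on the emission terms and the geometry, moving a light invalidates it and forces a re-bake.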





From lecture: for many flight simulators, the long-running point of contention was which aspects of realism to strive for with the limited computing available: was shading more important than textures, should surfaces even be shaded, etc.
An interesting anecdote was that the lack of antialiasing could actually make far-away airplanes more visible, because they would blink in and out of existence on-screen.
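A toy sketch of why that happens (my own demonstration, not from lecture): with a single point sample per pixel, a sub-pixel object either covers the sample and lights up a whole pixel or misses it and disappears entirely, so it flickers as it moves; averaging many samples per pixel instead gives a steady but faint dot.

```python
import numpy as np

def pixel_value(obj_left, obj_width, samples_per_pixel):
    """Reported coverage of a unit pixel [0, 1) by an object spanning
    [obj_left, obj_left + obj_width), using evenly spaced point samples."""
    xs = (np.arange(samples_per_pixel) + 0.5) / samples_per_pixel
    hits = (xs >= obj_left) & (xs < obj_left + obj_width)
    return hits.mean()

width = 0.1  # a distant airplane covering 10% of one pixel
for frame, left in enumerate(np.linspace(0.0, 0.9, 10)):
    aliased = pixel_value(left, width, samples_per_pixel=1)    # jumps between 0 and 1
    filtered = pixel_value(left, width, samples_per_pixel=64)  # steady ~0.1
    print(f"frame {frame}: no AA = {aliased:.2f}, 64x AA = {filtered:.2f}")
```

The unantialiased value is 0 in most frames and pops to 1.0 whenever the sample lands on the plane, which is precisely the blinking that paradoxically made distant planes easier to spot.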






Pat mentioned Brunelleschi in lecture (one of my all-time historical favorites, and coincidentally a huge prankster). Here is a blog post about his re-discovery of linear perspective: https://maitaly.wordpress.com/2011/04/28/brunelleschi-and-the-re-discovery-of-linear-perspective/



Professor Hanrahan mentioned the baby from Pixar's Tin Toy around this point in the lecture. I did some digging into the film and found this 1989 NYTimes article. An interesting quote from the article:
The cost of making the award-winning cartoon "The Tin Toy" was several thousand dollars per second of running time, according to its creators.
It shows how far computer graphics have come that frames can now be rendered in just minutes on affordable hardware!
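Some quick arithmetic to put that in per-frame terms (my own back-of-the-envelope estimate, reading "several thousand" as roughly $3,000 and assuming 24 frames per second of film):

```latex
\frac{\$3000\ \text{per second}}{24\ \text{frames per second}} \approx \$125\ \text{per frame}
```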



From lecture: this picture will be visually confusing if it is treated as a photograph, because it looks as if it were taken with some wide fish-eye lens; the left part stretches farther to the left while the right part stretches farther to the right.
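A bit of math on why very wide views can't read as ordinary photos (my own gloss, using standard lens models, not from the lecture): an ideal perspective camera maps a ray at angle $\theta$ from the optical axis to image position $x = f \tan\theta$, which diverges as $\theta$ approaches $90^\circ$, so a very wide field of view forces extreme stretching at the edges. Fish-eye mappings such as the equidistant model keep wide angles finite, but then the result no longer behaves like a single perspective photograph:

```latex
x_{\text{perspective}} = f \tan\theta \quad (\to \infty \text{ as } \theta \to 90^\circ),
\qquad
x_{\text{fisheye}} = f\,\theta \quad (\text{finite for all } \theta)
```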




The difference between direct lighting and global illumination is astonishing...
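To make the distinction concrete (standard rendering-equation notation from the literature, not from this slide), both start from the same equation:

```latex
L_o(x, \omega_o) = L_e(x, \omega_o) + \int_{\Omega} f_r(x, \omega_i, \omega_o)\, L_i(x, \omega_i)\, (\omega_i \cdot n)\, d\omega_i
```

Direct lighting approximates the incident radiance $L_i$ with only light arriving straight from the emitters; global illumination lets $L_i$ itself be the outgoing radiance of other surfaces, recursively, and that recursion is where color bleeding and soft indirect light come from.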

This is a very interesting short video on how Pixar simulated hair movement for Brave.

Is this effect simulated in graphics by simply changing the color/texture of a given model, or is this an effect of the lighting and optics?

Here is an interesting article outlining the impasto painting technique. One of its main uses was to add depth to a painting, giving it almost a 3D effect. Computer graphics draws on a similar idea to achieve depth: subsurface scattering.
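For the formal connection (the standard BSSRDF formulation from the literature, e.g. PBRT, not from this slide): subsurface scattering generalizes the BRDF to a BSSRDF $S$, letting light enter the surface at one point and exit at another:

```latex
L_o(p_o, \omega_o) = \int_{A} \int_{\Omega} S(p_o, \omega_o, p_i, \omega_i)\, L_i(p_i, \omega_i)\, |\cos\theta_i|\, d\omega_i\, dA(p_i)
```

The extra integral over nearby surface points $p_i$ is what produces the soft, deep look, much like light playing through thick ridges of impasto paint.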

Since this lecture, I have tried to look for the reflections in the hair of my family at home, which has been quite entertaining. I haven't found the purported 'circle' effect yet, but my sample of hair to look at is pretty small in quarantine!

This reminds me of Monet’s style of painting snow, which uses layers of different colors that you wouldn’t normally associate with a field of white: https://denverartmuseum.org/article/did-you-know-monet-painted-more-100-snow-scenes

To elaborate: the medulla is the innermost layer of the hair shaft, and in humans it is so thin that it doesn't lead to significant scattering. In fur, by contrast, medulla scattering is non-negligible, so a double-cylinder model was developed. The authors then obtained parameters of their model for different animals by conducting lighting experiments and saving the results; the model can then be evaluated efficiently using these stored parameters.
Here is another example of how this is being used. Not only has it been used for training, but it is now being implemented to perform remote training.