CS 348B - Computer Graphics: Image Synthesis Techniques
Introductory lecture
March 31, 1998
-------------------------------------------------------------------------------
LECTURE OUTLINE
1. What is rendering?
2. Course mechanics
3. Open problems in computer graphics
-------------------------------------------------------------------------------
INTRODUCTIONS
Marc Levoy
Tamara Munzner
-------------------------------------------------------------------------------
WHAT IS RENDERING?
(PowerPoint slides)
1. What is rendering?
2. Visual cues for realism
3. The rendering pipeline
o Modeling:
   o Arranging geometric primitives in space
   o Assembling objects from sets or hierarchies of primitives
   o Assigning appearance parameters to the objects -
     color, shininess, texture, transparency
   o Describing how the objects move over time - animation
o Visibility:
   o Deciding which objects are seen from a particular observer
     position using a particular camera model - solving the
     hidden-line, hidden-surface, or hidden-volume problem
o Shading:
   o Once you've decided what objects you see, you next need to
     decide what they look like, which depends not only on their
     appearance parameters, but also on how they're illuminated.
o Display:
   o Once you've decided what the objects look like, you need to
     decide how to reproduce their appearance using a frame buffer
     of a certain resolution and bit-depth and using a color CRT
     having certain display characteristics and being viewed under
     certain ambient lighting conditions.
In theory, all of these steps could be performed on continuous
functions:
o Computational geometry gives us algorithms for clipping, for
example, polygons against polygons to produce a set of visible
polygon fragments.
o The physics literature gives us integrals for computing
object appearance as a function of reflectance, transmittance,
emittance, and illumination.
Unfortunately:
o No technology exists for displaying color as a continuous
function of two spatial dimensions.
o Direct evaluation of the so-called rendering equation is
infeasible.
For these reasons, we discretize the entire pipeline, from display back
through visibility.
4. The discrete rendering pipeline
o Discrete visibility:
   o A Z-buffer hidden-surface algorithm breaks geometric primitives
     into point samples and decides for each sample whether it is
     seen or not.
   o A ray tracer divides the image plane into point samples and
     decides for each sample position what geometric primitives it
     sees.
o Discrete shading:
   o Distribution ray tracing (which we'll talk about later in the
     course) distributes a number of rays throughout the viewing
     frustum and computes object appearance only at points where
     viewing rays (or reflection rays, or refraction rays)
     intersect objects.
   o The radiosity method (which we'll also talk about) approximates
     the appearance of objects by computing the illumination falling
     on small surface patches or within small volume patches.
o Discrete display:
   o The pixel.
So, a first course on discrete rendering.
5. Rendering a surface
6. Theoretical foundations
Researchers both inside and outside computer graphics are fond of
asking:
"Where is the theoretical foundation of computer graphics?"
"What is its intellectual core?"
Most of what we consider computer graphics is not a science, but rather
an engineering discipline. As such, it has no single intellectual
core, but rather borrows bodies of theory from other disciplines.
Consider a simple ray tracer.
1. A light source (1) illuminates a surface (2). The surface
reflects (3) some portion of the light towards the observer
(4). The modeling of the interaction between light and surface
is, at its root, physics, coupled perhaps with the material
sciences.
2. We sample this distribution of reflected light by dividing
the image plane (5) into pixels (6) and casting a ray (7) from
the center of each pixel into the scene. If we see something
interesting, we might cast more rays (that's adaptive
supersampling - you'll learn more about this later in the
course). If we don't cast enough rays, we miss objects (that's
aliasing). What is this but a direct application of sampling
theory, perhaps coupled with some statistics?
3. Computing the intersection between this viewing ray and, for
example, a polygonally defined object (8) is analytic geometry.
Computing it efficiently is computational geometry. Both
disciplines have mature theoretical foundations.
4. Finally, knowing what aspects of the light distribution to
display on a CRT (9), and reproducing them accurately (10),
requires a solid understanding of the human visual system and
display hardware.
So, an alternation of theory and practice.
o After a review of basic ray tracing on Thursday,
o I'll start off next Tuesday with an overview of sampling
theory and the causes of aliasing in computer graphics.
o Then I'll discuss practical strategies for anti-aliasing:
supersampling, nonuniform sampling, prefiltering, and we'll
talk about algorithms for implementing these strategies.
o Then I'll switch back to theory and talk about resampling in
two dimensions.
o Then back to practice - practical texture mapping algorithms.
o Then we switch to shading, again starting with theory - the
rendering equation.
o Then practical shading models.
-------------------------------------------------------------------------------
COURSE OUTLINE
-------------------------------------------------------------------------------
TEXTBOOK AND COURSE READINGS
There are three books for this course:
1. FvDFH
It's a good how-to on older material, but for newer material it tends
toward poorly encapsulated synopses of papers from the recent
literature, presented in insufficient depth for this course.
So this year I've made this text optional, and what I'll do is:
o assign many short readings scattered throughout the book,
o use the lectures to give you a conceptual framework on which
to hang the readings.
Most, but unfortunately not all, of the material is covered at a
much more appropriate level by these two research monographs:
2. Cohen and Wallace
3. Glassner
The book by Cohen and Wallace contains the best tutorial I've seen on
the rendering equation and its solutions.
The book by Glassner steps you through the basics of writing a ray
tracer. It even includes skeletal C code for a ray tracer. You're
welcome to build on it.
I'll also be assembling a course reader consisting of about a dozen journal
papers and excerpts from various tutorials. I hope to have this available in
the campus bookstore by next week.
-------------------------------------------------------------------------------
COURSE PREREQUISITES
The course outline handout enumerates which chapters in Foley and van Dam I
assume you know, and which chapters I will cover. Chapters not listed at
all represent material I will not cover and that you don't need to know.
Example: chapter 10 on user interface software.
So there is a body of material you should know before taking this course. In
particular:
o 2D scan-conversion algorithms
o 3D matrix transformations
o The most commonly used visible surface algorithms: Z-buffer and ray
tracing. (You will be writing a ray tracer in this course, but we'll
focus on sampling and shading issues. I assume you already know how to
intersect a ray with a polygon. I'll review this briefly on Thursday.)
I think that any of you who do not have the prerequisites can catch up - if
you're sufficiently motivated - by reading a few chapters of the textbook and
Glassner's tutorial. The programming assignments are designed not to require
in-depth knowledge of material I don't cover myself, but some familiarity with
that material will be necessary. It's a decision each of you will have to make
on your own.
348B is not a first course in graphics - it doesn't pretend to cover the entire
field. However, many people without prior graphics backgrounds have taken
348B and survived - perhaps 20% of each year's class. (That's 20% who
didn't have the background, not 20% who survived :-)
-------------------------------------------------------------------------------
COURSE ASSIGNMENTS
There will be two written assignments and three programming assignments,
as listed in the handout.
The first two programming assignments are individual efforts, and will be
"handed in"; the last programming assignment can be a team effort, and will be
demoed face-to-face. There will be no exams in this course.
More on the programming assignments.
The project for this quarter, as I've said, is to write a ray tracer. If
you've already written one, this one will be better. It will include some but
not all of translucency, gloss, depth of field, motion blur, adaptive sampling,
nonuniform sampling, and texture mapping. If you've already written a ray
tracer with all these capabilities, you probably shouldn't be taking this
course.
We'll also be running a rendering competition. The author of the "best"
rendering will go free to Siggraph next summer. More on this later.
The purpose of this course is to learn rendering, not graphics workstation
programming. So you won't learn GL here (that's Silicon Graphics's graphics
library).
The scheme is:
o Use a modeling package to build 3D objects.
o Use an Inventor-based front-end program to assemble the objects into
scenes, set shading parameters, and select viewpoints using the
real-time display hardware of the SGIs.
o The object geometry and viewing parameters will then be passed to you
for ray tracing. So you'll be writing a renderer, which probably won't
be real-time, in C or C++, from scratch, using geometry and rendering
parameters provided by the modeling package.
o Your ray tracer will have its own simple user interface for
specifying additional rendering parameters not settable within the
modeling packages (like gloss).
You must give your demos in the SGILAB, and we can't let the modeling packages
out of the SGILAB, so it's going to be hard to take this class if you can't do
your work on campus. The front-end program's executable can be copied, but not
recompiled unless you have a license for Inventor. It might be possible to
write your ray tracer as a standalone program that you plug into the modeler
and front-end program at the last minute, but you'll be using the front-end
program to choose viewpoints, so it sounds painful. If you have your own
modeler and front-end program, there's another problem: we also can't export
the libraries (Inventor and Motif) underlying the simple user interface for
your ray tracer. If you have an SGI with a modeler, Inventor, and Motif, well,
maybe you can swing it.
--> SITN students:
o All handouts will be online on the Web, so you won't be
hampered by SITN handout delivery delays.
--> SITN students only:
o Please use the online questionnaire so that we don't have to
wait for paper forms.
--> Reading for next time:
Glass pp 1-17 overview of ray tracing (RV)
Glass pp 33-59 intersections w spheres & polygons (RV)
Glass ch 6 acceleration techniques (skim, RV)
FvDFH ch 15.10 another review of ray tracing (RV)
FvDFH ch 16.12 recursive ray tracing (RV)
Readings will be listed in an online file. See the web pages.
--> Problem sessions
Friday, April 10 3:15pm for up to 75 minutes
Friday, May 1 same time and place
Friday, May 25-29 TBA
Place: Gates B01 Broadcast: E4
-------------------------------------------------------------------------------
OPEN PROBLEMS IN COMPUTER GRAPHICS
-------------------------------------------------------------------------------