Programming assignment #3 - Extending your ray tracer
CS 348B - Computer Graphics: Image Synthesis Techniques
Spring Quarter, 1999
Frank Crow
Demos on Thursday, June 3
Writeups due Monday, June 7
Overview
The goal of this assignment is to extend your ray tracer in a way of your
own choosing. The next section gives a list of approved extensions, but
it leaves plenty of room for creativity. If you have a different extension
in mind, we'd love to hear about it. We'll approve all reasonable requests.
You are permitted (and encouraged) to form teams of two or three people
and partition your planned extensions among the team members. For a team
of size N, you must implement at least N extensions. Extra credit will
be given for extra work. The division of work among members of the team
should be clear; each extension should be assigned to one student. Your
README file must list these assignments. Teams may discuss this assignment
with other teams, but each team is expected to implement the extensions
independently. In particular, code must not be shared between teams.
Approved extensions
(In the online version, the open bullets are links to example images.)
-
Adaptive stochastic supersampling. Use any sample distribution,
subdivision criteria, and reconstruction method you like. Allow interactive
control over key parameters of your sampling scheme. In a separate window
alongside your rendered image, display a visualization of how many rays
were cast per pixel. For extra fun (and credit), develop a "microscopic"
visualization (similar to the illustrations in papers on the subject) showing
the x,y position of each sample used during the rendering of some subportion
of the image.
Even if you don't implement adaptive supersampling, I strongly recommend
that you add some kind of supersampling to your ray tracer for this assignment.
To properly appreciate (and evaluate) many of these extensions, you'll
want some way to minimize the jaggies. At the least, compute a large image
and average it down, thereby trivially simulating regular, non-adaptive
supersampling.
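To make the adaptive scheme concrete, here is a minimal sketch in C++ of one simple, non-stochastic variant; jitter the sample positions within each cell to get the stochastic version. It assumes a trace(x, y) entry point standing in for your own ray caster, and subdivides a cell whenever its corner samples differ by more than a slider-controlled threshold. Names and structure are illustrative, not prescriptive.

    #include <algorithm>
    #include <cmath>

    struct Color { float r, g, b; };

    Color trace(float x, float y);   // your ray caster's entry point (assumed)

    static float diff(const Color& a, const Color& b) {
        return std::fabs(a.r - b.r) + std::fabs(a.g - b.g) + std::fabs(a.b - b.b);
    }

    static Color average(Color a, Color b, Color c, Color d) {
        return Color{ (a.r + b.r + c.r + d.r) * 0.25f,
                      (a.g + b.g + c.g + d.g) * 0.25f,
                      (a.b + b.b + c.b + d.b) * 0.25f };
    }

    // Sample the cell with corners (x0,y0)-(x1,y1); c00..c11 are its
    // already-traced corner colors. 'threshold' and 'maxDepth' are the
    // interactively controlled parameters.
    Color sampleCell(float x0, float y0, float x1, float y1,
                     Color c00, Color c10, Color c01, Color c11,
                     float threshold, int depth, int maxDepth) {
        float contrast = std::max({ diff(c00, c10), diff(c00, c01),
                                    diff(c11, c10), diff(c11, c01) });
        if (depth >= maxDepth || contrast < threshold)
            return average(c00, c10, c01, c11);
        float xm = 0.5f * (x0 + x1), ym = 0.5f * (y0 + y1);
        Color cm0 = trace(xm, y0), c0m = trace(x0, ym), cmm = trace(xm, ym),
              c1m = trace(x1, ym), cm1 = trace(xm, y1);
        // Recurse into the four quadrants and average their estimates.
        return average(
            sampleCell(x0, y0, xm, ym, c00, cm0, c0m, cmm, threshold, depth + 1, maxDepth),
            sampleCell(xm, y0, x1, ym, cm0, c10, cmm, c1m, threshold, depth + 1, maxDepth),
            sampleCell(x0, ym, xm, y1, c0m, cmm, c01, cm1, threshold, depth + 1, maxDepth),
            sampleCell(xm, ym, x1, y1, cmm, c1m, cm1, c11, threshold, depth + 1, maxDepth));
    }

Incrementing a per-pixel counter inside trace() gives you the rays-per-pixel visualization nearly for free.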
-
Distribution ray tracing. Implement semi-diffuse reflections and
refractions by distributing the secondary rays emanating from each surface
according to a bidirectional reflectance distribution function (BRDF) of
your own choosing. Allow slider control over one or more parameters of
the BRDF. Stop ray recursion if the weight for a ray's color drops below
a slider-selectable threshold. For extra fun (and credit), implement distribution
ray tracing of an area light source to simulate penumbrae, or of a finite
aperture to simulate depth of field, or of a finite shutter interval to
simulate motion blur.
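As one illustration (certainly not the only acceptable BRDF), here is a sketch that distributes secondary rays over a Phong-style cosine-power lobe. The perturb() routine, which rotates a direction by the sampled angles, is assumed to come from your own vector library:

    #include <cmath>
    #include <cstdlib>

    struct Vec3 { float x, y, z; };

    const float kPi = 3.14159265f;

    // Rotate 'axis' by polar angle theta and azimuth phi; assumed to
    // come from your own vector library (not shown here).
    Vec3 perturb(const Vec3& axis, float theta, float phi);

    static float frand() { return rand() / (float)RAND_MAX; }

    // Draw one secondary-ray direction from a cos^n lobe centered on
    // the mirror direction. A slider on 'shininess' spans semi-diffuse
    // to near-mirror; the same idea applies to refracted rays.
    Vec3 sampleGlossy(const Vec3& mirrorDir, float shininess) {
        float theta = acosf(powf(frand(), 1.0f / (shininess + 1.0f)));
        float phi   = 2.0f * kPi * frand();
        return perturb(mirrorDir, theta, phi);
    }

The recursion cutoff is then a one-line test at the top of your shading routine: if the product of the weights accumulated along the ray path falls below the threshold slider, return black instead of spawning more rays.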
-
Texture mapping. Implement texture mapping for spheres, triangle
meshes, and planar quadrilaterals (which you can represent as two triangles).
Textures may be 2D or 3D, and may be procedurally generated, optically
scanned, or borrowed from a friend. (There are unfortunately no scanners
in Sweet Hall. There are two in the multimedia cluster at Meyer Library,
one in Tresidder, and one in the graphics lab in Gates 3B.) Avoid bilevel,
severely quantized, or predithered textures; they will look poor when resampled
- even if supersampled. Assuming you avoid extreme perspective views and
implement some form of supersampling, you needn't bother with texture filtering
at ray-surface intersections; your supersampling should eliminate most
aliasing artifacts.
To map a texture onto a surface, you must compute texture indices
at each ray-surface intersection. Methods for spheres, quadrilaterals,
and triangles are described by Eric Haines in sections 2.5 and 3.3 of chapter
2 in Glassner. A more efficient algorithm for triangles is given by Didier
Badouel (Graphics Gems, p. 390). For a triangle mesh, the methods listed
above can be combined with a global mapping scheme based on vertex coordinates
relative to a corner of the mesh, angles relative to a projection point
not located on the mesh, or another scheme.
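For reference, here is a sketch of the two easiest cases, in the spirit of the methods above; sphereUV assumes a unit surface normal at the hit point, and triangleUV assumes per-vertex texture coordinates plus barycentric coordinates from your intersection routine:

    #include <cmath>

    struct Vec3 { float x, y, z; };

    const float kPi = 3.14159265f;

    // Sphere: convert the unit surface normal at the hit point into
    // spherical coordinates, then into (u,v) in [0,1] x [0,1].
    void sphereUV(const Vec3& n, float* u, float* v) {
        *u = 0.5f + atan2f(n.z, n.x) / (2.0f * kPi);
        *v = 0.5f - asinf(n.y) / kPi;
    }

    // Triangle: blend per-vertex texture coordinates uv0, uv1, uv2
    // with the barycentric coordinates (b0, b1, b2) of the hit point.
    void triangleUV(const float uv0[2], const float uv1[2], const float uv2[2],
                    float b0, float b1, float b2, float* u, float* v) {
        *u = b0 * uv0[0] + b1 * uv1[0] + b2 * uv2[0];
        *v = b0 * uv0[1] + b1 * uv1[1] + b2 * uv2[1];
    }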
Use your texture to modulate something besides just reflectance (i.e.
color). Experiment with transparency, specularity, or orientation (i.e.
bump mapping). Alternatively, try modulating an interpolation between two
other textures or between two entirely different shading models. See Cook's
Siggraph '84 paper on shade trees and Perlin's Siggraph '85 paper on texture
synthesis (FvDFH, p. 1047) for ideas.
For extra fun, implement texture filtering using direct convolution
(section 17.4.2 of FvDFH) or prefiltering (using mip maps or summed area
tables, section 17.4.3) to yield properly filtered textures without the
need for supersampling. This will require comparing the u,v locations of
adjacent rays to locally control the filter kernel size. If you implement
this add-on, you might need to adjust your sample-region subdivision criteria
in order not to supersample unnecessarily when rendering a textured surface.
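If you try the summed-area-table route, the lookup itself is only four reads. The sketch below assumes a prebuilt table sat in which entry (x, y) holds the sum of all texels at or below-left of (x, y); construction and edge clamping are left to you:

    // Average of the texel rectangle [x0..x1] x [y0..y1], inclusive.
    // Assumes 1 <= x0 <= x1 < width and 1 <= y0 <= y1 < height so the
    // (x0-1, y0-1) reads stay in bounds; clamp at the borders yourself.
    float satAverage(const float* sat, int width,
                     int x0, int y0, int x1, int y1) {
        float sum = sat[y1 * width + x1]
                  - sat[(y0 - 1) * width + x1]
                  - sat[y1 * width + (x0 - 1)]
                  + sat[(y0 - 1) * width + (x0 - 1)];
        return sum / ((x1 - x0 + 1) * (y1 - y0 + 1));
    }

The rectangle's extent comes from comparing the u,v locations of adjacent rays, as described above.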
-
Shading language. Develop a language for programmable shading formulas
akin to (but simpler than) RenderMan's language (Hanrahan and Lawson, Siggraph
'90). At a minimum, your language should allow the specification of a shade
tree that includes mix nodes driven by textures as in Cook's Siggraph '84
paper on shade trees. Don't spend a lot of time on the interpreter - a
simple syntax will do. For extra fun, implement (in conjunction with texture
mapping) a nontrivial 2D or 3D texture synthesis method. Examples are spot
noise and reaction-diffusion textures (see the two papers on these subjects in
Siggraph '91).
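The interpreter can be as simple as an expression tree walked at each shading point. Here is a sketch of a minimal node set (the names and design are illustrative, not required):

    struct Color { float r, g, b; };

    // A shade-tree node is anything that can be evaluated at (u, v).
    struct ShadeNode {
        virtual Color eval(float u, float v) const = 0;
        virtual ~ShadeNode() {}
    };

    struct ConstantNode : ShadeNode {
        Color c;
        Color eval(float, float) const { return c; }
    };

    // Mix node: blend two subtrees by a scalar that would itself come
    // from a texture lookup, as in Cook's shade trees.
    struct MixNode : ShadeNode {
        const ShadeNode *a, *b;
        float (*blendTexture)(float u, float v);   // e.g. one texture channel
        Color eval(float u, float v) const {
            float t = blendTexture(u, v);
            Color ca = a->eval(u, v), cb = b->eval(u, v);
            return Color{ ca.r + t * (cb.r - ca.r),
                          ca.g + t * (cb.g - ca.g),
                          ca.b + t * (cb.b - ca.b) };
        }
    };

Your parser then only has to turn a simple prefix or infix syntax into such a tree.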
-
Light ray tracing. Model caustics by tracing rays "backwards" from
light sources and accumulating incident illumination as a texture on each
surface. This one is harder, but we'll reward you suitably. Look at Heckbert's
paper on bidirectional ray tracing (Siggraph '90). As an alternative to
storing illumination on surfaces, implement Veach's hybrid ray tracer,
then try his variance reduction schemes (Siggraph '95).
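Whatever storage scheme you pick, the core accumulation step is small. This sketch assumes each surface carries an illumination map indexed by the same (u, v) as its texture; a light ray that hits the surface deposits its carried power there:

    struct Color { float r, g, b; };

    // Deposit one light ray's power into a width x height RGB
    // illumination map at texture coordinates (u, v) in [0,1].
    // Normalize by the total number of light rays traced afterwards.
    void deposit(float* illumMap, int width, int height,
                 float u, float v, const Color& power) {
        int x = (int)(u * (width - 1));
        int y = (int)(v * (height - 1));
        float* texel = illumMap + 3 * (y * width + x);
        texel[0] += power.r;
        texel[1] += power.g;
        texel[2] += power.b;
    }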
-
Volume rendering. Start by implementing spatially inhomogeneous
atmospheric attenuation. Divide each ray into intervals. For each interval,
interpolate between the ray's color and some constant fog color based on
a procedurally computed opacity for that location in space. Experiment
with opacity functions. Once you get this working, try defining a solid
texture (probably procedurally) that gives color and opacity for each interval.
See Perlin and Hoffert's paper on solid texture synthesis
(Siggraph '89) and Kajiya and Kay's teddy bear paper (Siggraph '89) for
ideas. If you want to make your volume renderer fast, use hierarchical
spatial subdivision (e.g. an octree).
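Here is a sketch of the interval loop, assuming a procedural opacity(p) that returns a per-interval opacity in [0,1] (names illustrative throughout):

    struct Vec3 { float x, y, z; };
    struct Color { float r, g, b; };

    float opacity(const Vec3& p);   // your procedural opacity (assumed)

    // Composite constant-color fog over the surface color, marching
    // back-to-front from the hit point (t = tHit) toward the eye.
    Color addFog(const Vec3& o, const Vec3& d, float tHit,
                 Color surface, Color fog, float step) {
        Color c = surface;
        for (float t = tHit; t > 0.0f; t -= step) {
            Vec3 p = { o.x + t * d.x, o.y + t * d.y, o.z + t * d.z };
            float a = opacity(p);   // opacity of this interval
            c.r = a * fog.r + (1.0f - a) * c.r;
            c.g = a * fog.g + (1.0f - a) * c.g;
            c.b = a * fog.b + (1.0f - a) * c.b;
        }
        return c;
    }

Swapping the constant fog color for a solid-texture lookup at p gives the colored-volume version described above.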
-
Fancy primitives. Implement a class of more complicated primitives
from Hanrahan's chapter in Glassner's book. Choose wisely. Quadrics are
too easy; deformed surfaces are too hard. Recommended are bicubic patches
(display directly or by meshing), CSG models, or fractals. Fractals are
relatively easy to implement and fun to use. For extra fun, map textures
onto your patches or fractals. For lots of fun, try fur modeled as geometry
(as opposed to a volume). See Gavin Miller's paper "From Wire-Frame
to Furry Animals" (Graphics Interface '88 - Ada Glucksman can make you
a copy).
-
Subsurface scattering. Look at Hanrahan and Krueger's Siggraph '93
paper.
-
Exotic wavelength-dependent effects. We can give you some references.
A real object
In addition to implementing N extensions, you are required to model and
render a scene that includes one real (i.e. physical) object that you can
bring with you to the final demo. Mimic the object's appearance as closely
as you can. You will be graded on the difficulty of your object and the
accuracy of your depiction. Since this is a rendering course, we're looking
mainly for rendering complexity, not geometric complexity. On the other
hand, the object should be non-trivial, i.e. don't simulate a pane of glass.
Some ideas for objects:
-
Glassner's
book. Use noise functions (FvDFH, p. 1047) to simulate the uneven reflection
from the cardboard cover, dirt, fingerprints, etc. Use a scanner to input
the cover design.
-
A pencil. See Plate 17 in Steve Upstill's The RenderMan Companion.
Do you chew on pencils? Simulate that.
-
A key. All my keys are scratched up. What about yours?
-
Your hand.
Look at the previous
rendering competitions for inspiration. Be creative. Win a free trip
to Siggraph (see below).
Demos
Unlike the first two programming assignments, this one will be graded using
face-to-face demos. A few days before the demo date, a signup sheet will
be posted in the Sweet Hall SGI lab. Each team should sign up for a 15-minute
time slot on Thursday, June 3 to demo your program. All demos must be given
in the Sweet Hall SGI lab.
Bring the object you have modeled with you to the demo. To expedite
demos, be prepared to show precomputed images demonstrating the required
extensions and a rendering of the object you have modeled. These images
should clearly demonstrate your extensions and the object you have modeled.
If we can't see it, you haven't done it. Depending on the nature of your
extensions, we may also ask you to generate a (small) image during the
demo. To facilitate this, your user interface should include a slider for
setting image size prior to rendering.
Grading will be based not so much on the amount of work you have done
as on cleverness, creativity, and correctness.
No late demos!
Writeups
When you have completed your demo (and gotten some sleep), make a copy
of your code and the images you have generated in a directory called project3
in your home directory. Write a README file describing your extensions,
and clearly stating which student was responsible for each extension. If
you did something especially clever, tell us about it. Be brief but complete;
3-5 pages is about right. Finally, give us a list of image files. Again,
these images should clearly demonstrate your extensions and the object
you have modeled. Then submit your assignment as before. This submission
is due by the end of June 7. Note: you are welcome to compute additional
images between the demos on June 3 and the written submission on June 7.
However, you will be graded only on the features and images you successfully
demonstrated on June 3.
No late writeups!
The competition
At 4:00pm on Thursday, June 3, a judging will be held to select the best
rendering. Participation in this competition is not required.
Entries may be made by individuals or teams, and may consist of any
number of images or animations. You are allowed to use only i3dm, Composer,
the ray tracer you wrote for this course, any software you wrote for the
course (e.g. a procedural modeler or shading language), and tools available
on the Sweet Hall workstations. You may not use GL. If you wish to use
other tools, you must obtain approval and be prepared to make them available
to all competitors. Your entries must be computed on workstations roughly
equal in power to those in the Sweet Hall laboratory. You may use multiple
workstations only if it does not interfere with other people's work. If
you have any question about what constitutes acceptable computing resources,
ask us. Finally, you may import (free) textures or models from the web,
but be prepared to enumerate your sources honestly during the competition.
We don't recommend heavy use of imported content.
The jury will consist of prominent computer graphics experts from academia
and industry. Your prof will preside. During the judging, each individual
or team will display their entry and briefly explain any exotic techniques
used in its creation. This process is expected to take approximately 45
minutes. The winning submission will then be decided in private by the
jury and announced to the gathered multitudes.
While grades for the programming project are based solely on "technical
merit", the competition will be judged on both "technical merit" and "artistic
impression". In particular, the jury will look for photorealism, creativity,
and elegance.
There will be one grand prize - an all-expenses-paid trip to Siggraph
'99 in Los Angeles, August 9-13; one second-place prize - dinner
for two at Il Fornaio in Palo Alto and a copy of the Siggraph '98
Film and Video Show on VHS videotape; and three third-place prizes,
each a copy of the Siggraph '98 videotape. If the grand prize is won by a
team, it must be split among the team members. All other prizes will be
duplicated as necessary to cover the team.
The party
Immediately following the render-off, there will be a party in the graphics
lab and on the terrace outside Sweet Hall to celebrate the winner, your
survival of this course, and the arrival of Summer. Refreshments will be
provided.
levoy@cs.stanford.edu
Copyright © 1998 Marc Levoy
Last updated by Frank Crow: Thursday, 13-May-1999