Final Project
Assigned: May 16, 2000
Proposal Due: May 19, 2000
Project Due: June 2, 2000
Your final project is to produce a realistic image of a real object or scene.
The scene or object should be challenging enough to require you to
design and implement an advanced rendering algorithm.
The final project is your chance to investigate an
area that interests you in more depth, and to showcase your creativity.
To get an idea of our expectations, check out the
images produced by past participants.
As extra incentive,
we are offering a grand prize that includes
a free trip to SIGGRAPH in New Orleans for the best image produced.
Think about the following when choosing a project:
- What are your goals? Try to phrase these as specific questions
that you would like to answer, e.g. "How do I model
reflection from surfaces with fine geometric structure, such as fur?"
- What unique imagery would convincingly demonstrate
that you have accomplished your goals?
Try to keep this in mind throughout your project, since in
computer graphics our work is often judged by the images we make.
- What has already been done in this area? You probably won't
have time to completely investigate this, but you should definitely
spend some time reading research papers. We can help you find
appropriate references. When you read a paper, look for what has not
been done as well as what is already understood; think about new
things that you could try.
- Depending on the scope of your goals, you may want to work in a
group. Does your project split naturally into several pieces? Look
for projects where each person's work is separable, and yet everyone
contributes toward a shared goal that could not be accomplished
individually.
Possible Projects
Here are some examples of challenging projects:
-
Fancy primitives.
Implement a class of more complicated primitives from Hanrahan's chapter in
Glassner's book. Choose wisely. Quadrics you have already done;
deformed surfaces are much more challenging.
Recommended are bicubic patches (rendered directly or by meshing),
CSG models, or fractals. Fractals are relatively easy to implement and fun to
use. For extra fun, map textures onto your patches or fractals. For lots of
fun, try fur modeled as geometry (as opposed to a volume).
-
Exotic wavelength-dependent effects such as dispersion and
thin-film interference.
We can give you some references.
-
Adaptive stochastic supersampling.
Use any sample distribution, subdivision criteria, and reconstruction method
you like. Allow interactive control over key parameters of your sampling
scheme. In a separate window alongside your rendered image, display a
visualization of how many rays were cast per pixel. (A minimal sketch of
one such scheme appears after this list.)
-
Texture mapping.
Implement texture mapping for spheres, triangle meshes, and planar
quadrilaterals (which you can represent as two triangles). Textures may be 2D
or 3D, and may be procedurally generated, optically scanned, or borrowed from a
friend. (There are unfortunately no scanners in Sweet Hall. There are two in
the multimedia cluster at Meyer Library, one in Tresidder, and one in the
graphics lab in Gates 3B.) Avoid bilevel, severely quantized, or predithered
textures; they will look poor when resampled - even if supersampled. Assuming
you avoid extreme perspective views and implement some form of supersampling,
you needn't bother with texture filtering at ray-surface intersections; your
supersampling should eliminate most aliasing artifacts.
Use your texture to modulate something besides just reflectance (i.e. color).
Experiment with transparency, specularity, or orientation (i.e. bump mapping).
Alternatively, try modulating an interpolation between two other textures or
between two entirely different shading models. See Cook's Siggraph '84 paper
on shade trees and Perlin's Siggraph '85 paper on texture synthesis (FvDFH,
p. 1047) for ideas. (A minimal sketch of (u,v) mapping and texture-driven
modulation appears after this list.)
For extra fun, implement texture filtering using direct convolution (section
17.4.2 of FvDFH) or prefiltering (using mip maps or summed area tables, section
17.4.3) to yield properly filtered textures without the need for supersampling.
This will require comparing the u,v locations of adjacent rays to locally
control the filter kernel size. If you implement this add-on, you might need to
adjust your sample region subdivision criteria in order not to supersample
unnecessarily when rendering a textured surface.
-
Subsurface scattering.
Look at Hanrahan and Krueger's Siggraph '93 paper. For the more ambitious,
model the microgeometry of the surface. For example, consider an explicit
geometric model of the warp and the weft of cloth, the pits in plaster,
the scratches in metal, and the structure of velvet or satin.
Ray trace the microgeometry in order to compute
the BRDF. Look at Westin et al. in Siggraph '92.
-
Shading language.
Develop a language for programmable shading formulas akin to (but simpler than)
RenderMan's language (Hanrahan and Lawson, Siggraph '90).
At a minimum, your language should allow the specification of a shade tree that
includes mix nodes driven by textures as in Cook's Siggraph '84 paper on shade
trees. Don't spend a lot of time on the interpreter - a simple syntax will do.
(A skeletal shade-tree mix node is sketched after this list.)
For extra fun, implement (in conjunction with texture mapping) a nontrivial 2D
or 3D texture synthesis method. Examples are spot-noise or reaction-diffusion
equations (see the two papers on this subject in Siggraph '91).
-
Monte Carlo ray tracing.
Extend your area light source integrator into a full
path tracer. Implement semi-diffuse reflections and refractions by
distributing the
secondary rays emanating from each surface according to a bidirectional
reflectance distribution function (BRDF) of your own choosing.
Look at Veach and Guibas's papers on bidirectional path
tracing. (A sketch of BRDF-driven direction sampling appears after this list.)
-
Light ray tracing.
Model caustics by tracing rays "backwards" from light sources and accumulating
incident illumination as a texture on each surface. Or implement Henrik Wann
Jensen's photon map algorithm; this one is harder, but we'll reward you
suitably.
-
Volume rendering.
Start by implementing spatially inhomogeneous atmospheric attenuation. Divide
each ray into intervals. For each interval, interpolate between the ray's
color and some constant fog color based on a procedurally computed opacity for
that location in space. Experiment with opacity functions. Once you get this
working, try defining a solid texture (probably procedurally) that gives color
and opacity for each interval. See Perlin and Hoffert's Siggraph '89 paper on
solid texture synthesis and Kajiya and Kay's teddy bear
paper for ideas. If you want to make your volume renderer fast, use
hierarchical spatial subdivision (e.g. an octree). (A minimal ray-marching
sketch appears after this list.)
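For the adaptive supersampling project, here is one minimal sketch in C++:
recursive subdivision of a pixel driven by a contrast test, with jittered
samples and box-filter reconstruction. Everything here - traceRay, the 0.1
contrast threshold, the quadrant scheme - is an illustrative assumption, not
a required interface.

    #include <cmath>
    #include <cstdlib>

    struct Color {
        float r, g, b;
        Color operator+(const Color& o) const { return Color{r + o.r, g + o.g, b + o.b}; }
        Color operator*(float s) const { return Color{r * s, g * s, b * s}; }
    };

    // Stand-in for your renderer's primary-ray function.
    Color traceRay(float x, float y) { return Color{x, y, 0.0f}; }

    float jitter() { return std::rand() / (RAND_MAX + 1.0f); }

    float contrast(const Color& a, const Color& b) {
        return std::fabs(a.r - b.r) + std::fabs(a.g - b.g) + std::fabs(a.b - b.b);
    }

    int raysCast = 0;  // per-pixel counter for the ray-count visualization

    // One jittered sample per quadrant; subdivide wherever the quadrants
    // disagree, then reconstruct with a simple box filter (the average).
    Color sampleRegion(float x, float y, float size, int depth) {
        float h = size * 0.5f;
        Color c[4];
        for (int i = 0; i < 4; i++) {
            c[i] = traceRay(x + (i % 2) * h + jitter() * h,
                            y + (i / 2) * h + jitter() * h);
            raysCast++;
        }
        Color avg = (c[0] + c[1] + c[2] + c[3]) * 0.25f;
        if (depth == 0) return avg;
        for (int i = 1; i < 4; i++)
            if (contrast(c[0], c[i]) > 0.1f)   // subdivision criterion; tune it
                return (sampleRegion(x,     y,     h, depth - 1) +
                        sampleRegion(x + h, y,     h, depth - 1) +
                        sampleRegion(x,     y + h, h, depth - 1) +
                        sampleRegion(x + h, y + h, h, depth - 1)) * 0.25f;
        return avg;
    }

Calling sampleRegion(px, py, 1.0f, 3) with raysCast reset to zero yields both
the pixel color and the datum for the ray-count display.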
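For the texture mapping project, a small sketch of the (u,v) half, assuming
unit spheres centered at the origin; sphereUV, checker, and texturedKd are
made-up names, and the checkerboard stands in for whatever 2D or 3D texture
you actually use.

    #include <cmath>

    const float PI = 3.14159265f;

    struct Vec3 { float x, y, z; };

    // Longitude/latitude parameterization of a unit-sphere surface point.
    void sphereUV(const Vec3& p, float& u, float& v) {
        u = 0.5f + std::atan2(p.z, p.x) / (2.0f * PI);
        v = 0.5f - std::asin(p.y) / PI;
    }

    // A procedural 2D texture; an image lookup would slot in the same way.
    float checker(float u, float v, int freq) {
        int s = int(std::floor(u * freq)) + int(std::floor(v * freq));
        return (s % 2 == 0) ? 1.0f : 0.2f;
    }

    // Modulate the diffuse coefficient at a hit point. The same scalar
    // could instead drive specularity, transparency, or a normal
    // perturbation (bump mapping).
    float texturedKd(const Vec3& hitPoint, float baseKd) {
        float u, v;
        sphereUV(hitPoint, u, v);
        return baseKd * checker(u, v, 8);
    }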
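For the shading language project, a skeletal shade tree with the required mix
node; the node classes and the ramp weight are hypothetical, meant only to
show the structure Cook's paper describes.

    #include <memory>

    struct Color { float r, g, b; };
    struct ShadeContext { float u, v; };  // plus hit point, normal, etc.

    struct ShadeNode {
        virtual Color eval(const ShadeContext& ctx) const = 0;
        virtual ~ShadeNode() {}
    };

    struct ConstantNode : ShadeNode {
        Color c;
        explicit ConstantNode(Color col) : c(col) {}
        Color eval(const ShadeContext&) const override { return c; }
    };

    // Blends its two children; the weight comes from a texture function,
    // which is what makes the mix "driven by a texture."
    struct MixNode : ShadeNode {
        std::unique_ptr<ShadeNode> a, b;
        float (*weight)(float u, float v);
        MixNode(ShadeNode* left, ShadeNode* right, float (*w)(float, float))
            : a(left), b(right), weight(w) {}
        Color eval(const ShadeContext& ctx) const override {
            float t = weight(ctx.u, ctx.v);
            Color ca = a->eval(ctx), cb = b->eval(ctx);
            return Color{ca.r + (cb.r - ca.r) * t,
                         ca.g + (cb.g - ca.g) * t,
                         ca.b + (cb.b - ca.b) * t};
        }
    };

    float ramp(float u, float) { return u; }  // example blend texture

A parser for your language would simply build such trees; evaluation is then
one virtual call per shading point.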
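For the Monte Carlo project, a sketch of generating one secondary ray with a
cosine-weighted (ideal diffuse) density; a glossier BRDF would instead
concentrate samples around the mirror direction. The vector helpers are
assumptions, not a prescribed library.

    #include <cmath>
    #include <cstdlib>

    const float PI = 3.14159265f;

    struct Vec3 {
        float x, y, z;
        Vec3 operator+(const Vec3& o) const { return Vec3{x + o.x, y + o.y, z + o.z}; }
        Vec3 operator*(float s) const { return Vec3{x * s, y * s, z * s}; }
    };

    Vec3 cross(const Vec3& a, const Vec3& b) {
        return Vec3{a.y * b.z - a.z * b.y,
                    a.z * b.x - a.x * b.z,
                    a.x * b.y - a.y * b.x};
    }

    Vec3 normalize(const Vec3& v) {
        return v * (1.0f / std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z));
    }

    float uniform() { return std::rand() / (RAND_MAX + 1.0f); }

    // Build an orthonormal basis (t, b, n) around the surface normal n.
    void onb(const Vec3& n, Vec3& t, Vec3& b) {
        Vec3 a = std::fabs(n.x) > 0.9f ? Vec3{0, 1, 0} : Vec3{1, 0, 0};
        t = normalize(cross(a, n));
        b = cross(n, t);
    }

    // Cosine-weighted direction about n. For an ideal diffuse surface the
    // pdf cos(theta)/pi cancels the cosine term in the rendering equation,
    // so each bounce just multiplies the path weight by the albedo.
    Vec3 cosineSampleHemisphere(const Vec3& n) {
        float r1 = uniform(), r2 = uniform();
        float phi = 2.0f * PI * r1, s = std::sqrt(r2);
        Vec3 t, b;
        onb(n, t, b);
        return t * (std::cos(phi) * s) + b * (std::sin(phi) * s)
             + n * std::sqrt(1.0f - r2);
    }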
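For the volume rendering project, a sketch of the interval scheme described
above; opacityAt is a placeholder for your procedural opacity, and the
constant fog color would later become a full color-and-opacity solid texture.

    #include <cmath>

    struct Vec3 { float x, y, z; };
    struct Color { float r, g, b; };

    // Placeholder opacity per unit length; here the fog thins with height.
    float opacityAt(const Vec3& p) { return 0.2f * std::exp(-p.y); }

    // March the ray back to front, compositing fog over whatever color the
    // ray returned from the surface it hit at parameter tMax.
    Color marchFog(Vec3 origin, Vec3 dir, float tMax, Color surfaceColor) {
        const Color fog = {0.7f, 0.7f, 0.8f};
        const float dt = 0.1f;                 // interval length
        Color c = surfaceColor;
        for (float t = tMax - dt; t >= 0.0f; t -= dt) {
            Vec3 p = {origin.x + dir.x * t,
                      origin.y + dir.y * t,
                      origin.z + dir.z * t};
            float a = 1.0f - std::exp(-opacityAt(p) * dt);  // interval opacity
            c = Color{fog.r * a + c.r * (1.0f - a),
                      fog.g * a + c.g * (1.0f - a),
                      fog.b * a + c.b * (1.0f - a)};
        }
        return c;
    }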
Project Proposal
As a first step you should write a one page project proposal.
The project proposal should be in the form of a web page. To
submit the proposal, send its URL to
cs348b@graphics
The proposal is due Friday, May 19.
The proposal should contain a picture of a real object or scene
that you intend to reproduce. We suggest that you first pick
something that you would like to simulate, and then investigate
what techniques need to be used. A real object that you can carry
around with you is best, but a good photograph or painting is
almost as good.
This proposal should
state the goal of your project,
motivate why it is interesting,
identify the key technical challenges you will face,
and briefly outline your approach.
If you are implementing an algorithm described in a particular
paper, provide a reference to the paper.
If you plan on collaborating with others,
briefly describe how each person's project relates
to the others.
We will provide feedback as to
whether we think your idea is reasonable,
and also try to offer some technical guidance,
e.g. papers you might be interested in reading.
Demo/Judgement Day
Projects are due on Friday, June 2nd. On demo day,
each group will be given 15 minutes to demonstrate its system
and show some of the images it produced. All demos will be in
the Sweet Hall SGI Lab.
Bring the object or images that you are modeling and
reproducing. Remember, the goals and the technology you developed
should be obvious from the image itself. After all, this is
graphics.
Grading
The final project counts for half of your final grade in the course
(or more, if we judge the project truly outstanding).
We will weigh heavily the
novelty of the idea (if it's never been done before,
you get lots of credit),
your technical skill in implementing the idea,
and the quality of the pictures you produce.
Mega-lines of code do not make a project good.
When you are finished with your project, you should submit
the source for your system and any test scenes and images
that you have created. You should also submit your original
project proposal, along with an updated version that reads as a
two- to three-page project summary: roughly the same format as
the proposal, but with a brief results section and any
conclusions or comments based on your experience.
You are permitted to work in small groups, but each person
will be graded individually. A good group project is a system
consisting of a collection of well-defined subsystems.
Each subsystem should be the responsibility of one person
and be clearly identified as their project. A good criterion for whether you
should work in a group is whether the system as a whole is
greater than the sum of its parts!
Rendering Prize
To provide additional incentive, we are offering several prizes
for the best images produced as part of the final project.
The jury, to be named later, will consist of computer graphics
experts from both industry and academia. The judges will weigh both
technical and artistic merit.