= CS348B Final Project Instructions =

=== Proposal Due: Friday May 19th, 11:59PM ===
=== Rendering Competition: Friday June 9th (time TBD) ===
=== Writeup Due: Monday June 12th, 11:59PM ===

Please create a page for your final project on FinalProjectPages. You will need to add your project proposal, and later your final project writeup, to this page.

----

== Description ==

Your final project is to produce a realistic image of a real object or scene. The scene or object should be challenging enough to require you to design and implement an advanced rendering algorithm. The final project is your chance to investigate an area that interests you in more depth, and to showcase your creativity. To get an idea of our expectations, check out the [http://graphics.stanford.edu/courses/cs348b-competition images] produced by past participants.

On Friday June 9th, you will present your project to a panel of judges as part of the CS348B rendering competition. As extra incentive, we are offering a grand prize for the best image produced, which includes a free trip to SIGGRAPH in Boston in August.

Think about the following when choosing a project:

* What are your goals? Try to phrase them as specific questions that you would like to know the answers to, e.g. "How do I model reflection from surfaces with fine geometric structure, such as fur?"
* What unique imagery would convincingly demonstrate that you have accomplished your goals? Keep this in mind throughout your project, since in computer graphics our work is often judged by the images we make.
* What has already been done in this area? You probably won't have time to investigate this completely, but you should definitely spend some time reading research papers. We can help you find appropriate references. When you read a paper, look for what has not been done as well as what is already understood, and think about new things that you could try.
* Depending on the scope of your goals, you may want to work in a group. We encourage two-person groups; larger groups will only be allowed for very, very challenging projects. Does your project split naturally into several pieces? Look for projects where each person's work is separable, yet everyone contributes toward a shared goal that could not be accomplished individually.

== Some Ideas ==

Here are some examples of challenging projects.

* '''Fancy primitives.''' Implement a class of more complicated primitives from Hanrahan's chapter in Glassner's book, but choose wisely. Quadrics are too simple; deformed surfaces are much more challenging. Recommended are bicubic patches (displayed directly or by meshing), CSG models, or fractals. Fractals are relatively easy to implement and fun to use. For extra fun, map textures onto your patches or fractals. For lots of fun, try fur modeled as geometry (as opposed to as a volume).
* bilinear patch
* full CSG (legos)
* partial CSG (indentations)
* cone branches
* '''Exotic wavelength-dependent effects''' such as dispersion and thin-film effects. We can give you some references.
* thin-film interference (CD)
* thin-film interference (prism)
* '''Adaptive stochastic supersampling.''' Use any sample distribution, subdivision criteria, and reconstruction method you like. Allow interactive control over key parameters of your sampling scheme. In a separate window alongside your rendered image, display a visualization of how many rays were cast per pixel. (One possible subdivision scheme is sketched after this list.)
* '''Subsurface scattering.''' Look at Hanrahan and Krueger's Siggraph '93 paper for examples of applying subsurface scattering to plants and faces. For the more ambitious, model the microgeometry of the surface: for example, consider an explicit geometric model of the warp and weft of cloth, the pits in plaster, the scratches in metal, or the structure of velvet or satin. Ray trace the microgeometry in order to compute the BRDF. Look at Westin et al. in Siggraph '92; they describe methods for modeling carpet and velvet.
* '''Shading language.''' Develop a language for programmable shading formulas akin to (but simpler than) RenderMan's shading language (Hanrahan and Lawson, Siggraph '90). At a minimum, your language should allow the specification of a shade tree that includes mix nodes driven by textures, as in Cook's Siggraph '84 paper on shade trees. Don't spend a lot of time on the interpreter; a simple syntax will do. For extra fun, implement (in conjunction with texture mapping) a nontrivial 2D or 3D texture synthesis method. Examples are spot noise or reaction-diffusion equations (see the two papers on this subject in Siggraph '91). (A minimal shade-tree sketch appears after this list.)
* complex shading formula
* embedded shading language
* '''Volume rendering.''' Start by implementing spatially inhomogeneous atmospheric attenuation. Divide each ray into intervals; for each interval, interpolate between the ray's color and some constant fog color based on a procedurally computed opacity for that location in space. Experiment with opacity functions. Once you get this working, try defining a solid texture (probably procedural) that gives a color and opacity for each interval. See Perlin and Hoffert's Siggraph '89 paper on solid texture synthesis and Kajiya and Kay's teddy bear paper for ideas. If you want to make your volume renderer fast, use hierarchical spatial subdivision (e.g. an octree). (A ray-marching sketch appears after this list.)
* volumetric steam
* hypertextured object
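== Example Sketches ==

The fragments below are illustrative sketches of a few of the ideas above, not required code and not part of any particular renderer; all class and function names in them are invented for the examples, and they are written in C++ for concreteness.

'''Adaptive supersampling.''' One possible subdivision scheme, sketched under simple assumptions: trace the four corners of a pixel's footprint, subdivide when the corners disagree by more than a threshold, and tally rays so a per-pixel ray-count image can be shown alongside the render. The corner samples here are deterministic; jittering the sample positions (the "stochastic" part) and caching samples shared between neighboring cells are left as refinements.

{{{
// Adaptive supersampling sketch (illustrative names only).
#include <cmath>

struct Color { float r, g, b; };

// Stand-in for your renderer's primary-ray tracer (assumed, not a real API).
Color TraceRay(float x, float y) {
    return { x, y, 0.0f };  // replace with a real ray cast
}

float Contrast(const Color &a, const Color &b) {
    return std::fabs(a.r - b.r) + std::fabs(a.g - b.g) + std::fabs(a.b - b.b);
}

// Estimate the color of the image-plane square [x0,x1] x [y0,y1].
// 'raysCast' accumulates the ray count for the visualization window.
Color SampleRegion(float x0, float y0, float x1, float y1,
                   int depth, int &raysCast) {
    Color c00 = TraceRay(x0, y0), c10 = TraceRay(x1, y0);
    Color c01 = TraceRay(x0, y1), c11 = TraceRay(x1, y1);
    raysCast += 4;

    const float threshold = 0.1f;  // make these interactively adjustable
    const int maxDepth = 3;
    bool split = Contrast(c00, c11) > threshold || Contrast(c10, c01) > threshold;
    if (split && depth < maxDepth) {
        // Recurse into the four quadrants and average their estimates.
        float xm = 0.5f * (x0 + x1), ym = 0.5f * (y0 + y1);
        Color a = SampleRegion(x0, y0, xm, ym, depth + 1, raysCast);
        Color b = SampleRegion(xm, y0, x1, ym, depth + 1, raysCast);
        Color c = SampleRegion(x0, ym, xm, y1, depth + 1, raysCast);
        Color d = SampleRegion(xm, ym, x1, y1, depth + 1, raysCast);
        return { 0.25f * (a.r + b.r + c.r + d.r),
                 0.25f * (a.g + b.g + c.g + d.g),
                 0.25f * (a.b + b.b + c.b + d.b) };
    }
    // Flat enough (or deep enough): average the corner samples.
    return { 0.25f * (c00.r + c10.r + c01.r + c11.r),
             0.25f * (c00.g + c10.g + c01.g + c11.g),
             0.25f * (c00.b + c10.b + c01.b + c11.b) };
}
}}}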
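'''Shade tree with texture-driven mix nodes.''' A minimal sketch of the kind of shade tree a small shading language could build, assuming a single-method node interface; ShadeNode, ConstantNode, MixNode, and the checkerboard texture are hypothetical names used only for this example.

{{{
// Tiny shade-tree sketch: every node evaluates to a color at a surface point.
#include <memory>

struct Color { float r, g, b; };

struct ShadeNode {
    virtual ~ShadeNode() = default;
    virtual Color Eval(float u, float v) const = 0;  // (u,v) surface coordinates
};

struct ConstantNode : ShadeNode {
    Color c;
    explicit ConstantNode(Color c) : c(c) {}
    Color Eval(float, float) const override { return c; }
};

// A texture-driven mix node, as in Cook-style shade trees: the texture value
// at (u,v) blends between two child subtrees.
struct MixNode : ShadeNode {
    std::shared_ptr<ShadeNode> a, b;
    float (*blendTexture)(float u, float v);  // returns a weight in [0,1]
    MixNode(std::shared_ptr<ShadeNode> a, std::shared_ptr<ShadeNode> b,
            float (*tex)(float, float)) : a(a), b(b), blendTexture(tex) {}
    Color Eval(float u, float v) const override {
        float t = blendTexture(u, v);
        Color ca = a->Eval(u, v), cb = b->Eval(u, v);
        return { (1 - t) * ca.r + t * cb.r,
                 (1 - t) * ca.g + t * cb.g,
                 (1 - t) * ca.b + t * cb.b };
    }
};

// A simple procedural texture driving the mix.
float Checker(float u, float v) {
    return (int(u * 8) + int(v * 8)) % 2 ? 1.0f : 0.0f;
}

int main() {
    auto red  = std::make_shared<ConstantNode>(Color{1, 0, 0});
    auto blue = std::make_shared<ConstantNode>(Color{0, 0, 1});
    MixNode root(red, blue, Checker);
    Color c = root.Eval(0.3f, 0.7f);  // color at one shading point
    (void)c;
    return 0;
}
}}}

Your language would parse a textual description of such a tree; shading then amounts to calling Eval at each hit point.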
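'''Volume rendering by ray marching.''' A minimal sketch of the interval scheme described above, assuming a procedural density function stands in for a real solid texture; FogDensity and the other names are invented for the example.

{{{
// Ray-marching sketch for spatially inhomogeneous fog (illustrative names only).
#include <cmath>

struct Color { float r, g, b; };
struct Point { float x, y, z; };
struct Ray   { Point o, d; };  // origin and (unit) direction

// Hypothetical procedural density; swap in a noise-based solid texture.
float FogDensity(const Point &p) {
    return 0.1f + 0.05f * std::sin(p.x) * std::sin(p.z);
}

Point At(const Ray &r, float t) {
    return { r.o.x + t * r.d.x, r.o.y + t * r.d.y, r.o.z + t * r.d.z };
}

// March from tMax back toward tMin in fixed intervals, blending the color
// carried by the ray toward a constant fog color by each interval's opacity.
Color MarchFog(const Ray &ray, float tMin, float tMax,
               Color surfaceColor, Color fogColor, float stepSize = 0.1f) {
    Color c = surfaceColor;
    for (float t = tMax; t > tMin; t -= stepSize) {
        float density = FogDensity(At(ray, t));
        float alpha = 1.0f - std::exp(-density * stepSize);  // interval opacity
        c.r = (1 - alpha) * c.r + alpha * fogColor.r;
        c.g = (1 - alpha) * c.g + alpha * fogColor.g;
        c.b = (1 - alpha) * c.b + alpha * fogColor.b;
    }
    return c;
}
}}}

Letting the solid texture return a per-interval color as well as a density, instead of the constant fogColor used here, is the natural next step toward the steam and hypertexture effects mentioned above.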