Click here for a general introduction to these rendering competitions.
Bennett and Faisal won the hotly contested '97 rendering competition with this rendering of a Colorado license plate. To model the license plate, they wrote a simple program to generate a curved rectangular mesh, which approximated the bend in the original plate. They used a scanned image of the original plate to generate texture, bump, transparency, and specularity maps for the plate. Finally, they applied environment mapping to get more interesting reflections, and area light sources to generate the soft shadows. Note the patterns in the specular term of the two stickers, the crosshatch scratches in the right sticker, and the bump map for the raised white lettering, which extends slightly beyond the paint. Both Bennett and Faisal have lived in Fort Collins.
Eng-Shien, Jeremy, and Li-Wei captured second place with their modeling of natural scenes. The tree model was generated by a program they wrote that was based on Eric Haines' SPD code, with some additional randomization. The trunk and branches were modeled as cone segments, while the leaves were modeled as spheres, with texture and trim mapping. The single leaf used texture, bump, transparency, and trim maps to define its appearance. The scenes were initially inspired by the M.C. Escher engravings "Three Worlds" and "Dewdrop".
The lake was bump-mapped. The first picture has a fractal mountain in the background, and two of the pictures demonstrate depth-of-field effects. All of the pictures use environment mapping, which is especially visible in the reflections off the dewdrop.
Ryan and Sydney received one of four honorable mentions for their renderings of cocktail glasses and a jigger. In addition to their carefully shaped cocktail glasses, they modeled the wine with a constant density volumetric function (computed from the length of the ray passing through the wine). They used a stack to store refraction indices for each ray, so that they could get refraction through the glass (1.5), water (1.33), and ice (1.25). The marble table was generated with a Perlin noise function, while the checkered table used texture maps to modify specularity and shininess. The entire scene was environment-mapped, and you can see the reflections in the glasses and the jigger.
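The per-ray stack of refraction indices can be sketched roughly as follows (the function names and data layout are my own invention for illustration, not their actual code): on entering a medium the ray pushes that medium's index, and on exiting it pops back to the enclosing medium, so nested media like ice inside water inside glass refract correctly.

```python
import math

def refract(direction, normal, n1, n2):
    """Refract a unit direction through a surface with unit normal,
    going from index n1 into index n2 (Snell's law).
    Returns None on total internal reflection."""
    eta = n1 / n2
    cos_i = -sum(d * n for d, n in zip(direction, normal))
    sin2_t = eta * eta * (1.0 - cos_i * cos_i)
    if sin2_t > 1.0:
        return None  # total internal reflection
    cos_t = math.sqrt(1.0 - sin2_t)
    return tuple(eta * d + (eta * cos_i - cos_t) * n
                 for d, n in zip(direction, normal))

def cross_interface(direction, normal, index_stack, entering, surface_index):
    """Maintain a per-ray stack of refraction indices: push the new
    medium's index on entry, pop back to the enclosing medium on exit.
    The caller is responsible for orienting the normal toward the ray."""
    n1 = index_stack[-1]
    if entering:
        n2 = surface_index
        index_stack.append(n2)
    else:
        index_stack.pop()
        n2 = index_stack[-1]
    return refract(direction, normal, n1, n2)

# A ray starts in air (index 1.0) and enters glass (1.5) head-on.
stack = [1.0]
d = cross_interface((0.0, 0.0, -1.0), (0.0, 0.0, 1.0), stack, True, 1.5)
```

A perpendicular ray passes straight through unbent, while the stack now records that the ray is inside glass; a later exit event pops back to air.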
Pierre, Mike, and Brad received one of the four honorable mentions with these Yin Yang meditation balls. They implemented partial CSG to model the spherical indentation in the box. The Yin Yang symbols on the balls are texture mapped, with several discrete values that triggered special effects in the raytracer, such as altering the shininess and specular color of the metallic copper. The remainder of the ball surface was mapped with a 3D noise function, which also altered color, shininess, and metallic coefficients. The box was mapped with the scanned texture, and a direction-dependent modulation to mimic the varying appearance of the threads in the cloth.
The image was rendered with area light sources and adaptive distribution raytracing, to give soft shadows and soft reflections. See their final project webpage for an in-depth discussion of their project implementation.
Dan received one of the four honorable mentions for this breakfast scene, composed in the style of a 1950s commercial. The objects were texture mapped, while the ends of the Pillsbury danish container were also bump mapped. The cup stain behind the cereal box was done with a texture map that modulated the material properties of the table.
Song received one of the four honorable mentions for his incremental raytracer, which exploited multiframe coherence to accelerate the rendering of animations. When he renders the first frame, he caches the ray tree (including primary and all secondary rays). Each voxel in space contains pointers to all of the rays that traverse the voxel. When an object moves, he only needs to update the rays that traverse the voxels affected by motion (as well as any rays affected by the rays that change). He used a number of optimizations, such as specifying motion only within a certain region of space, and a top-down update of the ray tree, to minimize storage requirements and the number of rays that must be updated.
Each scene required approximately 700,000 rays to render the first frame, and approximately 3,000-15,000 rays for each subsequent frame. Here's an mpeg of the crying Mona Lisa and an mpeg of the falling ball. After the initial rendering, it took his raytracer 47 seconds (on a 250-MHz firebird) to generate the 25 frames of the falling ball movie, at a resolution of 400x400.
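The core bookkeeping of the voxel-to-ray index can be sketched in a few lines (the data layout and names are assumptions, not Song's actual implementation; a full version would also re-trace the secondary rays spawned by each dirty ray):

```python
from collections import defaultdict

# Each voxel stores the ids of all cached rays that traverse it.
voxel_rays = defaultdict(set)          # (i, j, k) -> set of ray ids

def register_ray(ray_id, voxels):
    """Record which voxels a ray traverses when the first frame
    is traced and the ray tree is cached."""
    for v in voxels:
        voxel_rays[v].add(ray_id)

def rays_to_update(moved_voxels):
    """Return the rays invalidated by motion through the given voxels;
    only these need to be re-traced for the next frame."""
    dirty = set()
    for v in moved_voxels:
        dirty |= voxel_rays[v]
    return dirty

register_ray(1, [(0, 0, 0), (0, 0, 1)])
register_ray(2, [(5, 5, 5)])
register_ray(3, [(0, 0, 1), (0, 0, 2)])
dirty = rays_to_update([(0, 0, 1)])    # only rays 1 and 3 are affected
```

This is the source of the frame-to-frame savings quoted above: motion confined to a few voxels invalidates only the small fraction of the 700,000 cached rays that pass through them.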
Pradeep and Peter did this scene of a bar at night. They used a large number of texture maps to cover the many objects in the scene. The texture maps for the labels altered both color and transparency, so that the label would not wrap around the entire bottle. They altered the shading model for the neon sign texture map, to give it an emissive appearance. The image was rendered with distributed raytracing to give soft shadows and glossy reflections.
David and Jim did this Maglite. They started with texture and bump mapping, to capture the grooves and crosshatch marks stamped into the case. However, they were still unable to capture the unique appearance of the surface, which appeared to have two distinct specular highlights: one colorless highlight off the surface of the paint, and another blue-tinted highlight that penetrated the upper layer of the coating. To model this, they added a second, colored highlight to their shading model. They also added a cosine term to attenuate the light at grazing angles off the case of the Maglite.
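A two-highlight shading term of the kind described can be sketched as two Phong-style specular lobes, one colorless and one tinted, with a cosine factor attenuating the result toward grazing angles (the exponents and tint below are invented values, not David and Jim's actual parameters):

```python
def two_lobe_specular(n_dot_h, n_dot_l, paint_exp=60.0, coat_exp=15.0,
                      coat_tint=(0.6, 0.7, 1.0)):
    """Hypothetical sketch of a two-highlight specular term: a sharp
    colorless lobe off the paint surface plus a broader, blue-tinted
    lobe from the coating, both attenuated by a cosine term so the
    highlight fades at grazing angles."""
    white = max(0.0, n_dot_h) ** paint_exp    # colorless surface highlight
    tinted = max(0.0, n_dot_h) ** coat_exp    # broader lobe inside coating
    grazing = max(0.0, n_dot_l)               # cosine falloff term
    return tuple(grazing * (white + tinted * t) for t in coat_tint)
```

At normal incidence both lobes contribute fully; as the light angle approaches grazing, the cosine term drives the combined highlight to zero.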
Ron modeled an object (physically) close to his heart: his Sun ID badge. He used texture mapping on all of the surfaces. A strategically placed point light illuminated the mini flashlight. Note the careful modeling of the mounting clip and retractable string case attached to his badge.
Mike modeled this scene of his mechanical pencil. His raytracer used maps to modulate color, specularity, transparency, shininess, and bumps. He modeled both the transparent outside of the pencil, and the mechanics inside the pencil. The pencil used 4 maps: a diffuse color to model the black smudges from the lead inside the cylinder, a transparency map for the lead smudges, a bump map for the ridges near the tip of the pencil, and a diffuse color map to put the smudges on the eraser. The lead case was texture mapped, and the paper used both color and trim maps.
Todd Bilsborrow implemented CSG in order to model these unique objects. The globe demonstrates both texture and transparency maps. He carefully placed three mirrors so that rays hitting the mirror on the back wall bounce off two other mirrors and give a top-down view of the globe. His hierarchical CSG allowed for complex associations, such as the fish in the fishbowl, and the practically perfect Lego pieces. His CSG routines parsed a description file, which he edited by hand to exactly place the objects in the scene.
Mark and Erik did this lacquered Japanese-style plate. They used bump mapping to model the lacquered wood finish. The vase is texture mapped, and the steam rising from the vase is a volume rendered 3D density function. Distributed ray tracing gives the glossy reflections, especially noticeable between grooves in the plate.
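Volume rendering a 3D density function like the steam can be sketched by ray marching: accumulate optical depth along the ray and convert it to opacity with Beer-Lambert attenuation (the density function below is an illustrative stand-in, not the field Mark and Erik used):

```python
import math

def march_density(origin, direction, density_fn, t_max=2.0, steps=64):
    """March along a ray, accumulating optical depth from a 3D density
    function, and return the resulting opacity (Beer-Lambert)."""
    dt = t_max / steps
    optical_depth = 0.0
    for i in range(steps):
        t = (i + 0.5) * dt                 # midpoint of each step
        p = tuple(o + t * d for o, d in zip(origin, direction))
        optical_depth += density_fn(p) * dt
    return 1.0 - math.exp(-optical_depth)  # opacity in [0, 1)

def steam(p):
    """Toy density field: strongest near the vertical axis,
    nonzero only within a band of heights."""
    r2 = p[0] ** 2 + p[2] ** 2
    return math.exp(-8.0 * r2) if 0.0 < p[1] < 1.5 else 0.0

# A ray through the steam column picks up noticeable opacity;
# a ray that misses it stays essentially transparent.
hit = march_density((0.0, 0.5, -1.0), (0.0, 0.0, 1.0), steam)
miss = march_density((3.0, 0.5, -1.0), (0.0, 0.0, 1.0), steam)
```

A full renderer would also composite emitted or scattered light at each step, but the opacity accumulation above is the core of the technique.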
Mike, Chirag, and Cleve modeled this lava lamp. They used distributed raytracing to get the glossy reflections off the base of the lamp, and the slightly translucent appearance of the lamp's glass. The wax inside the lamp was modeled using "blobby" objects, in which a series of influence points created a scalar field. An isosurface of this field defines the surface of the wax.
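The blobby-object construction they describe can be sketched directly: each influence point contributes a smooth falloff to a scalar field, and the wax surface is the isosurface where the field crosses a threshold (the Gaussian kernel and constants here are illustrative choices, not the team's exact formulation):

```python
import math

def blobby_field(p, influence_points):
    """Scalar field built from influence points, each contributing a
    Gaussian falloff scaled by its strength. The wax surface is the
    isosurface where this field equals a chosen threshold."""
    total = 0.0
    for (cx, cy, cz, strength) in influence_points:
        d2 = (p[0] - cx) ** 2 + (p[1] - cy) ** 2 + (p[2] - cz) ** 2
        total += strength * math.exp(-d2)
    return total

def inside_wax(p, points, threshold=0.5):
    """A point is inside the blob when the field exceeds the threshold."""
    return blobby_field(p, points) > threshold

# Two nearby influence points blend into a single smooth blob.
points = [(0.0, 0.0, 0.0, 1.0), (1.2, 0.0, 0.0, 1.0)]
near = inside_wax((0.0, 0.0, 0.0), points)   # near a center: inside
far = inside_wax((5.0, 5.0, 5.0), points)    # far away: outside
```

Because the contributions sum, blobs that drift close together merge smoothly and split apart again, which is what gives lava-lamp wax its characteristic motion.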