Rendering competition background.
The descriptions are almost entirely based on write-ups submitted by the authors.
Adaptive supersampling visualization:
Nate and Peter's outstanding images and animations won first place in the competition. Their model is a "Van de Graaff generator enclosed in a glass sphere, which, in turn, is enclosed by a mixture of noble gases and another, larger glass sphere. Electrons accumulate at the inner sphere and then are propelled to the outer sphere through the gases by their mutual repulsion. During this trip they light up the gases in long narrow tendrils that flange at the outer sphere. These peach-colored tendrils are modeled with hypertextures." Here is a detailed description of Nate and Peter's work.
Animations:
Maria and Bradley won second place with their Japanese room walk-through animation: Omen (walk-through animation) and a cloth animation. In their own words:
"We implemented texture mapping using SGI RGB files as input. Textures are supported to modulate the diffuse, ambient, emissive, specular, transparency, shininess, and bump mapping.
We created a physical simulation of cloth as a stand-alone modeling tool written using OpenInventor. A simple keyboard user interface allows the user to blow bursts of wind at a sheet suspended from a number of points. ... The cloth is modeled as a grid of masses, each connected to its four immediate neighbors by a spring. Also, second neighbors are connected by a weak spring in order to simulate rigidity, so the cloth is not completely flexible. A set of suspension points on one side of the cloth are not allowed to acquire velocity; all other points are updated according to Newtonian physics.
We implemented fractal terrains using the "diamond-square" method invented by Gavin Miller. A very good informal description of the method can be found at: http://www.gameprogrammer.com/fractal.html
We implemented simple trees by recursively extending branches out from a trunk segment. The trees are modeled in OpenInventor using Cylinders at the branch segments. Each branch can divide into two sub-branches, each having smaller diameter and length.
We implemented a simple tool in OpenInventor to allow us to generate a camera motion for animation. It allows the user to specify a set of key frame positions for the camera. Frame counts are specified between key frames. The in-betweens are filled in by linearly interpolating between camera positions."
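For reference, a minimal sketch of the mass-spring cloth model described in the write-up above, using explicit Euler integration. The grid size, spring constants, damping, and time step below are illustrative assumptions, not the authors' values.

    import numpy as np

    N = 20                       # the cloth is an N x N grid of unit point masses
    REST = 1.0                   # rest length between immediate neighbors
    K_NEAR, K_FAR = 50.0, 5.0    # strong springs to neighbors, weak springs to second neighbors
    DT = 0.01
    GRAVITY = np.array([0.0, -9.8, 0.0])

    pos = np.array([[[i * REST, 0.0, j * REST] for j in range(N)] for i in range(N)], dtype=float)
    vel = np.zeros_like(pos)
    pinned = [(0, j) for j in range(N)]   # suspension points along one edge of the sheet

    def spring_force(p, q, k, rest):
        """Hooke's-law force on p pulling it toward q."""
        d = q - p
        length = np.linalg.norm(d)
        return k * (length - rest) * d / length if length > 1e-9 else np.zeros(3)

    def step(wind=np.zeros(3)):
        """Advance the cloth one explicit Euler time step under gravity plus an optional wind burst."""
        force = np.tile(GRAVITY + wind, (N, N, 1))
        for i in range(N):
            for j in range(N):
                # immediate neighbors (strong springs) and second neighbors (weak, for rigidity)
                for di, dj, k, rest in ((1, 0, K_NEAR, REST), (0, 1, K_NEAR, REST),
                                        (2, 0, K_FAR, 2 * REST), (0, 2, K_FAR, 2 * REST)):
                    if i + di < N and j + dj < N:
                        f = spring_force(pos[i, j], pos[i + di, j + dj], k, rest)
                        force[i, j] += f
                        force[i + di, j + dj] -= f
        vel[:] = (vel + DT * force) * 0.99   # mild damping keeps the explicit step stable
        for i, j in pinned:
            vel[i, j] = 0.0                  # suspension points never acquire velocity
        pos[:] += DT * vel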
More details for the interested.
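A compact sketch of the "diamond-square" subdivision the write-up points to is given below; the heightfield must be (2^n + 1) on a side, and the roughness falloff of 0.6 per level is an assumption.

    import random

    def diamond_square(n, roughness=0.6, seed=0):
        """Return a (2**n + 1) x (2**n + 1) heightfield built by diamond-square subdivision."""
        random.seed(seed)
        size = 2 ** n + 1
        h = [[0.0] * size for _ in range(size)]
        for y, x in ((0, 0), (0, size - 1), (size - 1, 0), (size - 1, size - 1)):
            h[y][x] = random.uniform(-1.0, 1.0)        # seed the four corners
        step, scale = size - 1, 1.0
        while step > 1:
            half = step // 2
            # Diamond step: the center of each square gets the average of its four corners.
            for y in range(half, size - 1, step):
                for x in range(half, size - 1, step):
                    avg = (h[y - half][x - half] + h[y - half][x + half] +
                           h[y + half][x - half] + h[y + half][x + half]) / 4.0
                    h[y][x] = avg + random.uniform(-scale, scale)
            # Square step: each edge midpoint gets the average of its available neighbors.
            for y in range(0, size, half):
                for x in range((y + half) % step, size, step):
                    total, count = 0.0, 0
                    for dy, dx in ((-half, 0), (half, 0), (0, -half), (0, half)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < size and 0 <= nx < size:
                            total += h[ny][nx]
                            count += 1
                    h[y][x] = total / count + random.uniform(-scale, scale)
            step, scale = half, scale * roughness      # shrink step and random displacement
        return h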
Peter won third place with his images of a seashore and a marble pool. In his words:
"I implemented a variety of procedural solid textures. These textures use noise functions to alter many different surface properties, including the ambient and diffuse colors, transparency, reflection coefficient, and surface normal. The textures are based on the noise functions of Perlin (Siggraph 1985, pp. 287-296) and Worley (Siggraph 1996, pp. 291-294).
The sky texture in both images uses fractal Perlin noise to modulate the ambient color.
The ocean water and cliff face are both bump mapped based on the fractal version of Worley's F1 function.
The marble texture on the columns is created by using Perlin noise to modulate the phase of a sine wave.
The colored mosaic pattern is created using Worley's functions. F2-F1 is used to define the gaps between stones, while each stone is colored based on the identity of the nearest feature point. F1 is used to bump map the stones, giving them a rounded appearance.
The leaves climbing up the columns are created by using the F2 function.
The dolphin was modeled by hand using Pixels3D on the Macintosh, based on photographs of bottlenose dolphins.
The splash consists of 300 tiny spheres. I wrote a Matlab script to generate the points according to a reasonable, radially symmetric distribution, and output them as an Inventor file.
The water in the pool was generated by adding together various sine waves, plus a small amount of random noise, to create a heightfield."
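A rough sketch of the marble technique described above, with noise modulating the phase of a sine wave; a small hash-based value noise stands in for true Perlin noise here, and all constants are illustrative.

    import math

    def _lattice_value(ix, iy, iz):
        """Deterministic pseudo-random value in [0, 1) for an integer lattice point."""
        n = ix * 374761393 + iy * 668265263 + iz * 2147483647
        n = (n ^ (n >> 13)) * 1274126177
        return ((n ^ (n >> 16)) & 0xFFFFFFFF) / 4294967296.0

    def value_noise(x, y, z):
        """Trilinearly interpolated lattice noise -- a simple stand-in for Perlin noise."""
        ix, iy, iz = math.floor(x), math.floor(y), math.floor(z)
        fx, fy, fz = x - ix, y - iy, z - iz
        lerp = lambda a, b, t: a + (b - a) * t
        def plane(dz):
            # bilinear blend of the four lattice corners at depth iz + dz
            return lerp(lerp(_lattice_value(ix, iy, iz + dz), _lattice_value(ix + 1, iy, iz + dz), fx),
                        lerp(_lattice_value(ix, iy + 1, iz + dz), _lattice_value(ix + 1, iy + 1, iz + dz), fx),
                        fy)
        return lerp(plane(0), plane(1), fz)

    def fractal_noise(x, y, z, octaves=4):
        """Sum of noise octaves with doubling frequency and halving amplitude."""
        total, amp, freq = 0.0, 1.0, 1.0
        for _ in range(octaves):
            total += amp * value_noise(x * freq, y * freq, z * freq)
            amp, freq = amp * 0.5, freq * 2.0
        return total

    def marble(x, y, z, stripe_freq=4.0, turbulence=2.0):
        """Marble intensity in [0, 1]: noise perturbs the phase of a sine wave along x."""
        phase = stripe_freq * x + turbulence * fractal_noise(x, y, z)
        return 0.5 + 0.5 * math.sin(phase)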
More details for the interested.
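The pool water described above, a sum of sine waves plus a little random noise forming a heightfield, might look something like the sketch below; the wave directions, wavelengths, and amplitudes are made-up illustrative values.

    import math, random

    def water_height(x, y, noise_amp=0.01):
        """Pool-water heightfield: a few directional sine waves plus a little random noise."""
        # (direction_x, direction_y, wavelength, amplitude, phase) -- illustrative values only
        waves = [(1.0, 0.0, 2.0, 0.05, 0.0),
                 (0.6, 0.8, 1.3, 0.03, 1.0),
                 (-0.7, 0.7, 0.7, 0.015, 2.5)]
        h = 0.0
        for dx, dy, wavelength, amp, phase in waves:
            k = 2.0 * math.pi / wavelength                    # spatial frequency of this wave
            h += amp * math.sin(k * (dx * x + dy * y) + phase)
        return h + random.uniform(-noise_amp, noise_amp)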
Leif, Serkan, and Mike won third place. They implemented texture and bump mapping in order to model various parts of the bottle. The top blue cap of the bottle shows bump mapping, and the label was rendered using a texture map. The rest of the bottle is pure geometry; most of the bottle's body is a surface of revolution.
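A surface of revolution, used for the bottle body here (and for later models such as Matthew's can and Michael's match heads), simply sweeps a 2D profile curve around an axis. A minimal sketch, with a purely illustrative bottle-like profile:

    import math

    def surface_of_revolution(profile, segments=32):
        """Revolve a 2D profile of (radius, height) pairs around the y-axis.
        Returns (vertices, quads), each quad being four vertex indices."""
        verts, quads = [], []
        for r, h in profile:                       # one ring of vertices per profile point
            for s in range(segments):
                theta = 2.0 * math.pi * s / segments
                verts.append((r * math.cos(theta), h, r * math.sin(theta)))
        for i in range(len(profile) - 1):          # connect consecutive rings with quads
            for s in range(segments):
                a = i * segments + s
                b = i * segments + (s + 1) % segments
                quads.append((a, b, b + segments, a + segments))
        return verts, quads

    # A crude bottle-like profile (radius, height) -- purely illustrative.
    vertices, faces = surface_of_revolution([(0.0, 0.0), (1.0, 0.0), (1.0, 2.5), (0.4, 3.0), (0.4, 3.6)])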
Sean created a model of his Celtic ring. He used diffuse, specular, bump, and roughness maps to model the knots of the real ring. A detailed description of his work can be found here.
Michael's matchbox and matches are textured with scanned textures from the four sides of the cover, from the white cardboard, from the normal cardboard, from the match stick, and from the match head. The geometry for the model is very precise and is scaled with measurements from the actual box. The matches are modeled as the match stick plus the match head. The stick is modeled as a rectangular solid. The head is modeled as a surface of revolution resembling a hemisphere attached to a hemi-ellipsoid.
Matthew's can is composed of three surfaces of revolution. The top is bump mapped to include the "CA CASH REFUND" imprint and the imprint where the mouth pushes in. The side has a texture map (scanned from a cut-apart can) modulating the diffuse color, which is a slightly bluish white. The bottom is bump mapped to include an impressed number.
Supersampling visualization:
Larry and Sam's images demonstrate texture and bump mapping. A world map is applied to the globe, and the number on the credit card is bump mapped.
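Bump mapping, used for the credit-card number here and in several other entries, perturbs the shading normal from a height map instead of changing the geometry. A minimal finite-difference sketch for a flat surface lying in the xy-plane (the strength parameter is an assumption):

    def bump_normal(height, x, y, strength=1.0):
        """Perturbed unit normal at pixel (x, y) of a 2D height map, for a flat surface in the xy-plane."""
        h, w = len(height), len(height[0])
        x0, x1 = max(x - 1, 0), min(x + 1, w - 1)
        y0, y1 = max(y - 1, 0), min(y + 1, h - 1)
        dhdx = (height[y][x1] - height[y][x0]) / max(x1 - x0, 1)   # finite-difference slopes
        dhdy = (height[y1][x] - height[y0][x]) / max(y1 - y0, 1)
        # The unperturbed normal is (0, 0, 1); tilt it against the height gradient.
        nx, ny, nz = -strength * dhdx, -strength * dhdy, 1.0
        inv_len = 1.0 / (nx * nx + ny * ny + nz * nz) ** 0.5
        return (nx * inv_len, ny * inv_len, nz * inv_len)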
Jeff created a rippling effect on the liquid using bump mapping, controlled by a bump period, fade factor, and spread factor. The planets and planet rings are textured with procedural textures. More details of Jeff's work are available here.
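Jeff's write-up names a bump period, fade factor, and spread factor but not the exact formula; one plausible reading is a radially decaying ring ripple like the sketch below. The formula and the parameter meanings are guesses, not Jeff's implementation.

    import math

    def ripple_height(x, y, cx, cy, period=0.5, fade=2.0, spread=1.0, amplitude=0.05):
        """Height of a ring ripple centered at (cx, cy): a cosine in the radial
        distance, damped exponentially so the rings die out away from the center."""
        r = math.hypot(x - cx, y - cy) * spread      # 'spread' scales the radial distance
        return amplitude * math.cos(2.0 * math.pi * r / period) * math.exp(-fade * r)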
Jing and Praveen modeled the Naboo Royal Starship from Star Wars Episode I; it is actually a flying action model rocket. The planet and the plane are texture mapped.
Sam applied diffuse and specular color textures to model a game CD. The tart box and Earth model are texture mapped, and the floor's checkers are procedurally generated.
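A procedural checker texture like the floor described here can be as simple as taking the parity of the integer cell containing the texture coordinates; a minimal sketch with assumed colors and scale:

    import math

    def checker(u, v, scale=8.0, color_a=(0.9, 0.9, 0.9), color_b=(0.1, 0.1, 0.1)):
        """Procedural checkerboard: the parity of the integer cell picks the color."""
        cell = int(math.floor(u * scale)) + int(math.floor(v * scale))
        return color_a if cell % 2 == 0 else color_b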
The textures on Xiaosong's flower model, pot, soil, and background come from digital camera images. Bump mapping is applied to the bottom of the pot.