# CS348B Final Project: Rendering a Lighthouse in Fog

Jason Anderson and David Myszewski

Our Project Proposal

What we did:

1. Modeling Fog

Our goal was to procedurally generate realistic fog using Perlin noise. We started by overriding pbrt's DensityRegion class and using a Perlin noise function to return density values within the volume region. This generated a cube of noise, which was not exactly what we wanted. After re-reading Ken Perlin's paper, we realized that using a turbulence function to perturb the points would create the wispy effects that are desirable for cloud cover. This function can be summarized as turbulence(p) = sum_{i=0}^{nturb-1} |noise(2^i * p)| / 2^i, where nturb is the number of iterations to run the function; we allow one to define this value in the pbrt file. Through experimentation, we found that nturb = 3 looks reasonable for testing, while anything greater than nturb = 5 produces only marginally improved results.
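
As a concrete illustration, the turbulence sum might look like the sketch below. The `Noise` function here is only a smooth placeholder, not real Perlin noise, and the names are ours, not pbrt's:

```cpp
#include <cassert>
#include <cmath>

// Illustrative stand-in for Perlin noise: any smooth function with values
// in roughly [-1, 1] works for demonstrating the turbulence sum.
static double Noise(double x, double y, double z) {
    return 0.5 * std::sin(12.9898 * x + 78.233 * y + 37.719 * z);
}

// Classic turbulence: sum |noise| over nturb octaves, doubling the
// frequency and halving the amplitude at each iteration.
double Turbulence(double x, double y, double z, int nturb) {
    double sum = 0.0, freq = 1.0, amp = 1.0;
    for (int i = 0; i < nturb; ++i) {
        sum += amp * std::fabs(Noise(freq * x, freq * y, freq * z));
        freq *= 2.0;
        amp *= 0.5;
    }
    return sum;
}
```

Each additional octave contributes detail at half the previous amplitude, which is why increasing nturb beyond about 5 yields only marginal improvement.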

The problem we still faced was that our volume region was defined as a cube, and no one ever sees fog or cloud cover as floating cubes in space. Thus, the challenge was to shape the fog without making the modeling in the pbrt file overly complicated. We solved this problem using techniques related to metaballs. The problem with plain metaballs is that they are mostly designed for generating puffy cumulus clouds and wispy cirrus clouds, not the long, flat stratus clouds of which fog is made. We addressed this by extending the spherical metaballs to two new shapes: cylinders and elliptical cylinders. A single region is generally not sufficient to create the desired fog-like effects; the modeler must create two or more overlapping regions, which produces the rolling effect seen in stratus clouds. One can specify in the pbrt file whether the metaball type is spheres, cylinders, or elliptical cylinders, so one can generally render any of the desired cloud types.

We allow the user to specify a parameter in the pbrt file, d, which governs the density of the fog. This is useful for getting the appearance of the clouds to be just right.
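
A minimal sketch of how such a cylindrical metaball field could be evaluated; the shape, falloff kernel, and names here are illustrative stand-ins, not our actual pbrt classes:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// A vertical "metaball" cylinder: influence falls off with radial distance
// from a y-axis-aligned axis through (cx, cz). Names are illustrative.
struct FogCylinder {
    double cx, cz;   // axis position in the xz-plane
    double radius;   // distance at which the influence reaches zero
};

// Smooth polynomial falloff kernel: 1 on the axis, 0 at distance r.
static double Falloff(double dist, double r) {
    if (dist >= r) return 0.0;
    double q = dist / r;
    double t = 1.0 - q * q;
    return t * t * t;
}

// Total field: sum of overlapping cylinder contributions, scaled by a
// user-specified base density d (as with our pbrt density parameter).
double FogDensity(double x, double z,
                  const std::vector<FogCylinder> &cyls, double d) {
    double sum = 0.0;
    for (const auto &c : cyls) {
        sum += Falloff(std::hypot(x - c.cx, z - c.cz), c.radius);
    }
    return d * sum;
}
```

Overlapping two or more such cylinders makes their kernels sum where they meet, which is what gives the rolling, stratus-like surface.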

Here are a few of our test images showing procedurally generated clouds:

Left: 2 metaball spheres. Right: 3 "metaball" cylinders.

2. Modeling a lighthouse beam

We created a new Light in pbrt to model the effects of a lighthouse beam. We discovered that rendering our cloud regions, even with just single scattering, took a long time (see the section below for a description of the algorithm and the improvements we made). Thus, since we wanted reasonable render times, we decided to model our lighthouse beam so that it would work in a single-scattering medium.

Our lighthouse beam is modeled somewhat like a spotlight, but it is a bit more complicated. Instead of coming to a point at the apex of a cone, our light needs to come to focus as a sphere, just as it would from the lens of a lighthouse. The structure of a lighthouse lens and deformities in the windows help create shafts of light emanating from the beacon. We emulate this by allowing one to specify, in the pbrt file, angles in degrees for the shafts of light and a power distribution for each shaft. We gathered these values by measuring angles and intensity values from a photograph of a real lighthouse in Photoshop. If the power distributions are not specified, the intensity is uniform until the falloff at the ends of the beam. We compute the falloff between each shaft using linear interpolation to prevent jarring visual effects.
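
The interpolation between shafts can be sketched roughly as follows; the function and parameter names are hypothetical, not pbrt's API:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Given per-shaft angles (degrees, sorted ascending) and per-shaft powers,
// return the beam intensity at an arbitrary angle by linearly interpolating
// between the two neighboring shafts, avoiding hard steps between them.
double ShaftIntensity(double angle,
                      const std::vector<double> &shaftAngles,
                      const std::vector<double> &shaftPowers) {
    if (angle <= shaftAngles.front()) return shaftPowers.front();
    if (angle >= shaftAngles.back())  return shaftPowers.back();
    for (size_t i = 1; i < shaftAngles.size(); ++i) {
        if (angle <= shaftAngles[i]) {
            double t = (angle - shaftAngles[i - 1]) /
                       (shaftAngles[i] - shaftAngles[i - 1]);
            return (1.0 - t) * shaftPowers[i - 1] + t * shaftPowers[i];
        }
    }
    return shaftPowers.back();
}
```

At a measured shaft angle the intensity equals that shaft's measured power; between shafts it blends linearly, which is the smoothing effect described above.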

3. Improving the Volume Integrator

Because just using the single scatter volume integrator was too slow, we decided to write our own to improve it and to attempt to create high dynamic range effects.

High Dynamic Range -- Blooming effects

In night scenes, because of perceptual effects when light hits the retina, intense light scatters and creates an effect known as blooming. This is extremely difficult to model in a ray tracer because it is not a property of the light or of the objects illuminated; it is a perceptual effect. We attempted to simulate this effect but ultimately failed. Our approach involved taking the source of a strong light, i.e., our lighthouse beam, and performing ray marching toward it to obtain an intensity value even when the ray was not in a density region. The fundamental problem we faced is that, because of the structure of the ray tracer, some rays never reach the light, yet blooming should be able to brighten objects that sit in front of the light. Getting this to work in pbrt would require a fundamental overhaul of the system; we believe this effect is better achieved in a post-processing step.

Optimization

We did succeed in optimizing the volume integrator for the special case where the lighthouse beam provides the vast majority of the illumination in the volume. We modified the ray-marching technique used in the single scattering integrator. The problem with the single scattering integrator is that it marches through the entire volume, regardless of how strongly each volume element is illuminated. In areas the lighthouse beam does not reach, this is very inefficient.

Because the lighthouse beam illuminates a well-defined region, we used a cone as a bounding volume for the light. When a ray cast from the eye intersects the cone, we march only through the portion of the cone pierced by the ray, rather than through the entire volume. We created our own cone class to find both points of intersection quickly, since pbrt's built-in code finds only the first intersection, and changing it would require either multiple intersection tests or extra transformations. We precomputed as much of the matrix arithmetic used in the intersection tests as possible, yielding an efficient intersection method. In most cases this speeds up the algorithm by a factor of two, since we restrict ourselves to the areas with significant illumination.
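
The cone intersection reduces to a quadratic in the ray parameter, as in Eberly's line-cone derivation (reference 4). A self-contained sketch with our own minimal vector type rather than pbrt's (the degenerate case a == 0, a ray parallel to the cone surface, is omitted for brevity):

```cpp
#include <cassert>
#include <cmath>
#include <utility>

struct Vec3 { double x, y, z; };

static double Dot(const Vec3 &a, const Vec3 &b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}
static Vec3 Sub(const Vec3 &a, const Vec3 &b) {
    return {a.x - b.x, a.y - b.y, a.z - b.z};
}

// Both intersections of the ray o + t*d with an infinite cone of apex V,
// unit axis A, and half-angle theta: points p on the cone satisfy
// dot(p - V, A)^2 = cos^2(theta) * |p - V|^2, which is quadratic in t.
// Returns false if the ray misses the cone entirely.
bool IntersectCone(const Vec3 &o, const Vec3 &d, const Vec3 &V,
                   const Vec3 &A, double cosTheta, double *t0, double *t1) {
    double c2 = cosTheta * cosTheta;
    Vec3 w = Sub(o, V);
    double dA = Dot(d, A), wA = Dot(w, A);
    double a = dA * dA - c2 * Dot(d, d);
    double b = 2.0 * (dA * wA - c2 * Dot(d, w));
    double c = wA * wA - c2 * Dot(w, w);
    double disc = b * b - 4.0 * a * c;
    if (disc < 0.0) return false;  // no real roots: the ray misses
    double s = std::sqrt(disc);
    *t0 = (-b - s) / (2.0 * a);
    *t1 = (-b + s) / (2.0 * a);
    if (*t0 > *t1) std::swap(*t0, *t1);
    return true;
}
```

Returning both roots at once gives the entry and exit points of the ray march directly; a full implementation would also clip the roots to the forward nappe of the cone and to the ray's extent.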

In addition, within the highly illuminated region, the contribution of the lighthouse beam outweighs that of any other light source by orders of magnitude. We took importance sampling to an extreme: rather than selecting a random light at each point during the ray march, we sample only the lighthouse beam within the area in which it dominates. We compared the output of different scenes and found that this assumption did not adversely affect the volume the lighthouse beam illuminates. Not only did this optimization speed up rendering, it also produced considerably less noise.
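
The light-selection shortcut can be sketched as below; `InsideBeamCone` is a stand-in for the bounding-cone test described above, and the light list layout (beam at index 0) is an assumption for illustration:

```cpp
#include <cassert>
#include <cstdlib>
#include <vector>

struct Light { double power; };  // illustrative placeholder for a pbrt light

// Stand-in: true when the sample point falls inside the beam's bounding
// cone. A real implementation would test against the cone geometry.
static bool InsideBeamCone(double /*x*/, double /*y*/, double /*z*/) {
    return true;
}

// Extreme importance sampling: inside the cone, always sample the beam
// (index 0 here) rather than a uniformly random light. Returns the index
// of the light to sample and, via *pdf, the probability of that choice.
int PickLight(double x, double y, double z,
              const std::vector<Light> &lights, double *pdf) {
    if (InsideBeamCone(x, y, z)) {
        *pdf = 1.0;  // deterministic choice: no selection variance
        return 0;
    }
    int i = std::rand() % static_cast<int>(lights.size());
    *pdf = 1.0 / lights.size();
    return i;
}
```

Making the choice deterministic inside the cone removes the variance of random light selection there, which matches the noise reduction we observed.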

The following images demonstrate the visual differences between the non-optimized and optimized versions; they also show our attempt at creating a blooming effect near the source of the lighthouse beam.

Left: non-optimized (51 seconds). Right: optimized (33 seconds).

4. Final Output

This image shows our fog/cloud output:

This image demonstrates our lighthouse beam and fog (no texture mapping or environmental lighting with importance sampling):

A side view:

With environmental lighting using importance sampling from hw4:

With a texture-mapped background:

With texture mapped background and environmental lighting:

Higher quality rendering of the above image.

Conclusion

The procedural generation of clouds and fog can create some beautiful images and effects. However, rendering many clouds with a volumetric approach and small step sizes is excruciatingly slow; even with our optimizations to the ray-marching algorithm, rendering large, high-quality images of these volumes takes considerable time.

Creating blooming effects in the context of a ray tracing system like pbrt would require a fundamental overhaul of the way a scene is rendered. Since blooming is an artifact of visual perception, it is best left to a post-processing step using compositing techniques. We were surprised by the lack of papers on creating high dynamic range effects within a ray tracer; now we know why.

Division of Labor

We worked on this assignment side by side the entire time, so we each put in an equal share. We found the extreme-programming paradigm very helpful for a project this challenging, since we were both learning a great deal and could assist each other in understanding the concepts involved and the results of our output. In the end, the subsystems we built were highly interdependent (for instance, our optimizations depend on intimate knowledge of the lighting), so working separately would have been difficult. That said, we feel we achieved a better result by working together than either of us could have alone.

References

1. Ebert, Musgrave, Peachey, Perlin, and Worley. Texturing & Modeling: A Procedural Approach (Third Edition). San Francisco: Morgan Kaufmann, 2003.

2. Perlin and Hoffert. Hypertexture. Proc. Computer graphics and interactive techniques, 1989.

3. Nishita and Dobashi. Modeling and Rendering of Various Natural Phenomena Consisting of Particles. Proc. Computer Graphics International 2001.

4. Eberly, David. Intersection of a Line and a Cone. 2000.

5. Ebert and Bedwell. Implicit Modeling with Procedural Techniques. 1998.

6. Nishita, Tomoyuki. A Shading Model for Atmospheric Scattering Considering Luminous Intensity Distribution of Light Sources. Proc. Computer graphics and interactive techniques, 1987.

7. Spencer, Shirley, Zimmerman, and Greenberg. Physically-Based Glare Effects for Digital Images. Proc. Computer graphics and interactive techniques, 1995.

8. 3D Cafe for the Lighthouse model.