
Final Project

Luke Anderson, Kate Swanson, and Nora Willett

Introduction

"One cannot think well, love well, sleep well, if one has not dined well."

  • ~ Virginia Woolf

"When one has tasted it he knows what the angels eat."

  • ~ Mark Twain

What makes food look delicious, making us imagine its smell and taste? Whether we like it or not, humanity naturally revolves around a love of food. For our project, we chose to render a slice of cherry pie that would make Woolf's and Twain's stomachs growl. The project is both novel and challenging: much as with the famous "uncanny valley", humans are finely tuned to recognize edible (not to mention desirable) substances, so viewers of computer generated food tend to be highly critical of the smallest imperfections, even when they cannot articulate exactly what is wrong. As described in "Anyone Can Cook - Inside Ratatouille's Kitchen" (SIGGRAPH 2007), making food look delicious is a very difficult task.

Just as every person is unique, every slice of cherry pie is unique (perhaps with the exception of slices from certain fast food restaurants). We examined many different reference images and even baked our own pie to understand what goes into making a slice look appealing and realistic (see reference images above and below). The image we have created is our best interpretation of a realistic, idealistically delicious slice of cherry pie.

The difficult tasks in this project can be divided up into three main categories: (1) rendering the crust, (2) rendering the cherries, and (3) rendering the cherry sauce. Each of these categories poses unique challenges, described in detail later.

Slice we baked as reference.

Reference image

Reference image

Models (Nora)

Because of the radial symmetry in most of the objects, they were modeled with CV curves in Autodesk Maya. The plate was modeled by revolving a CV curve around the y axis. The bottom and top pie crusts were created in a similar fashion, but the curve was revolved only 12 degrees. The edge of the bottom crust was modeled to produce the rippled edge of the crust and then duplicated to create a thicker piece of pie. The top crust was sculpted to add bumps where the cherries push the crust upward. The cherries were modeled by revolving a curve around the y axis, then manually placed inside the crust and scaled and rotated in different directions to produce more variety. The fork was derived from a CV curve revolved around the x axis; the tines were extruded from one end and then flattened.

The objects in Maya.

Sauce (Luke)

The pie sauce is a very intricate and complex shape, with parts flowing between the cherries and out the sides onto the plate. Accurately modeling such a shape by hand would be extremely difficult, so we decided that a fluid simulation would be the best way to create a natural, realistic shape for the sauce.

To achieve this we used the fluid capabilities of Maya. Once the crust and cherries were modeled and positioned as desired, we set up a fluid emitter at the top crust and marked all the objects (cherries, crust, plate, fork) as colliders so the sauce would ooze through the gaps in the cherries and out onto the plate as realistically as possible.

Since we had no prior experience with Maya or fluid simulations, time was spent simply getting the simulation set up, and considerable time was then spent tweaking the fluid settings. It proved difficult to achieve a sauce-like consistency; many of our fluids looked too thick and paste-like. Maya seems much better equipped to produce smoke-like fluids than water-based ones, and an alternative like RealFlow, which specializes in high-quality fluid simulation, would probably have made the task much easier. Regardless, with much tweaking we were able to produce a sauce with a pleasing consistency.

Besides finding the right consistency, the other major difficulty was controlling the exact shape and position of the resulting fluid. Once the simulation had produced a generally pleasing model, we spent considerable time making manual adjustments to get the precise look we wanted. We then converted the fluid to a mesh in Maya, exported it as a .obj file, and converted that to a .pbrt file containing a triangle mesh volume (see below) using a slightly modified version of [3].

Below are some images of the simulation as it progresses. The fluid emitter in Maya emits from both sides of the top crust; most of the fluid above eventually sank through to the bottom, and any parts that remained in undesirable locations were removed manually.

The simulation at 10 frames

The simulation at 35 frames

The simulation at 55 frames

Setting up the Scene (Nora)

Using tutorials on photographing food as reference, the scene was designed to highlight the delicious pie. The pie was placed roughly in the center of the scene so that the viewer's eye is drawn to it, and the background is in shades of gray so that the bright red cherry filling steals the viewer's gaze. Three lights highlight the different aspects of the pie. A large area light is positioned behind the pie, close to the ground; it casts grazing light across the crust to highlight the flaky texture, and this main light is tinted orange to make the food appear warmer. Two small fill spotlights positioned on either side of the camera fill in the shadows and create reflections and highlights on the cherry filling.

Pie Crust (Kate)

Reference image

Reference image

Reference image

One of the challenges in rendering the pie was creating a realistic, edible-looking crust. There has been previous work on simulating bread in "Anyone Can Cook - Inside Ratatouille's Kitchen"; however, pie crust and bread have significantly different compositions. As we discovered in baking our own cherry pie (which, we would like to add, was very tasty as well as informative), pie crust is made mainly of oil, water, and flour, giving it a smooth and very dense surface, unlike bread, whose yeast makes it much less dense. Furthermore, crust is created by rolling out the uncooked dough before baking, which gives it a flat, smooth texture. The bread research also focused largely on sliced bread and on generating air holes, another feature not found in crust.

Furthermore, a key feature that makes pie crust look realistic is its interaction with the juice from the pie filling, so the crust cannot be rendered without considering the filling. As shown in some of the reference images, to make the pie look appealing and more realistic we pushed the crust model down in key areas to let the simulated filling "leak" through the mesh. Since the sauce has the correct color and reflectance distribution for a gel, this made the crust look correctly "soaked through" in those areas. Note that this effect appears in the final render images; the image shown below is the isolated crust only.

To achieve the complex, nonuniform texture of pie crust we leveraged pbrt's "uber" material, which has parameters for diffuse reflection, glossy reflection, specular reflection, and opacity; a texture can be used as a map for any of these parameters. Uber also allows the roughness and index of refraction of the material to be modified. The most important variables for the final crust were the diffuse texture and the glossy reflection texture.

We used the UV maps generated in Maya to create separate textures for the top and bottom crust. This gave us complete control over the crust's diffuse color: in the images below you can see that the texture darkens the crust in certain patches where it should look toasted, and takes on the color of the sauce where it should look "soaked", matching the reference images. A reflectance map, shown below, specifies which parts of the crust should be slightly shiny, so the crust looks oily in various nonuniform areas rather than shiny all over, as happens when a scalar value is used for this parameter. To perturb the surface of the model and achieve the crust's "flaky" texture, we applied a bump map to the top, also shown below.

The bottom crust has a much more uniform appearance, since it is covered by sauce in the final render and should therefore appear to be the color of the sauce. It is also smooth, so it did not require a reflectance map; a scalar value sufficed for this variable.

To demonstrate the role each texture map plays in the final render of the isolated crust, the images below (from left to right on the second row) show the crust rendered with all the maps applied, without the bump map, and without the reflectance map. Note the differences between the final render and the images lacking these texture maps.
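As a sketch of how these pieces fit together in the scene file, a hypothetical uber material for the top crust might look like the following. The texture names, filenames, and parameter values are illustrative placeholders, not our tuned ones:

```
# Hypothetical maps painted against the UV layout exported from Maya
Texture "crust-top-kd" "spectrum" "imagemap" "string filename" ["crust_top_diffuse.png"]
Texture "crust-top-ks" "spectrum" "imagemap" "string filename" ["crust_top_reflect.png"]
Texture "crust-top-bump" "float" "imagemap" "string filename" ["crust_top_bump.png"]

Material "uber"
    "texture Kd" "crust-top-kd"         # diffuse color: toasted and "soaked" patches
    "texture Ks" "crust-top-ks"         # glossy reflection: oily only in places
    "texture bumpmap" "crust-top-bump"  # flaky surface perturbation
    "float roughness" [0.15]            # illustrative value
```

For the bottom crust, the `Ks` texture would simply be replaced by a scalar value, since a reflectance map was not needed there.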

Crust Top

UV map

Texture

Bump map

Reflectance Map

Using the material defaults

Using all texture maps

Without using a bump map

Without using a reflectance map

Crust Bottom

UV map

Texture

Bump map

Using the material defaults

Using all texture maps

Without using a bump map

Final rendering of pie crust, isolated.

Pie Filling

Image used as reference for the filling in addition to the images above

The filling consists of the whole cherries and the sauce formed from the sugar, flour, and cherry juices. For the filling to look appealing, it must let light pass through the medium, have a very saturated color, and reflect some light to create shininess.

Cherries (Nora)

To create realistic-looking cherries, the uber material in pbrt was chosen because of its wide range of variables. Diffuse reflection, glossy reflection, specular reflection, and opacity can all be controlled with textures, while the roughness of the surface and the index of refraction are floats. To arrive at the finished look for the cherries, we tried many different settings. We discovered that a high index of refraction combined with high glossy and specular reflection gives the cherries bright highlights that cover most of the cherry and convey freshness to the viewer. Increasing the roughness factor blurred and softened the highlights, making the cherries look wet and delicious.

At this point the cherries looked very scrumptious individually, but objectionable as a group because they were all identical. To introduce variety, we divided the cherries into seven different groups, making sure that cherries in the same group were separated from each other. Each group was assigned a different texture and bump map. Each texture had a main color drawn from a range of reds, from dark maroon to cadmium red, along with blurred-out highlights and shadows made from lighter and darker reds. The bump maps were designed to make the cherries look softer and less perfect; they were created by blurring light gray areas into a black background.

UV map

Texture

Bump map

Using the material defaults

Without using groupings

Without using a bump map

Final rendering of cherries, isolated.

Sauce (Luke)

Initially, we attempted to render the sauce as simply as possible: as a triangle mesh with the same material as the cherries. The result was unappealing and we decided that a volume rendering approach would be more suitable. This first required a custom volume implementation in pbrt.

Triangle Mesh Volume

We made the assumption that the sauce is a homogeneous volume. Unfortunately, pbrt only provides capabilities for rendering homogeneous volume boxes, so we needed to create a new homogeneous triangle mesh volume.

The main functionality required for such a volume is intersection testing and point-inside-volume testing. We achieved the former by storing the mesh as individual triangles and leveraging pbrt's existing ray-triangle intersection code. The latter uses a simple crossing algorithm: given a point, fire a ray from it in an arbitrary direction (say, the positive x direction) and count the number of times the ray intersects the volume. An odd number of crossings means the point is inside the volume; an even number means it is outside. All of these intersection tests are accelerated by a bounding volume hierarchy (pbrt's own) built specifically for the volume.
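The crossing test can be sketched in standalone C++ as follows. The `Vec3`/`Triangle` types, the Möller-Trumbore intersection routine, and the brute-force triangle loop are stand-ins for the pbrt types and BVH traversal the real implementation uses:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

struct Vec3 { double x, y, z; };

static Vec3 sub(const Vec3 &a, const Vec3 &b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3 cross(const Vec3 &a, const Vec3 &b) {
    return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}
static double dot(const Vec3 &a, const Vec3 &b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

struct Triangle { Vec3 a, b, c; };

// Moeller-Trumbore: does the ray o + t*d (t > 0) hit the triangle?
bool IntersectRayTriangle(const Vec3 &o, const Vec3 &d, const Triangle &tri) {
    const double eps = 1e-12;
    Vec3 e1 = sub(tri.b, tri.a), e2 = sub(tri.c, tri.a);
    Vec3 p = cross(d, e2);
    double det = dot(e1, p);
    if (std::fabs(det) < eps) return false;   // ray parallel to triangle plane
    double inv = 1.0 / det;
    Vec3 s = sub(o, tri.a);
    double u = dot(s, p) * inv;
    if (u < 0.0 || u > 1.0) return false;
    Vec3 q = cross(s, e1);
    double v = dot(d, q) * inv;
    if (v < 0.0 || u + v > 1.0) return false;
    return dot(e2, q) * inv > eps;            // only count hits in front of the origin
}

// Even-odd crossing test: fire a ray in an arbitrary direction (+x here) and
// count crossings. Odd means inside, even means outside. A closed mesh is
// required for this parity argument to hold.
bool PointInsideMesh(const Vec3 &p, const std::vector<Triangle> &mesh) {
    const Vec3 dir{1.0, 0.0, 0.0};
    int crossings = 0;
    for (const Triangle &t : mesh)
        if (IntersectRayTriangle(p, dir, t)) ++crossings;
    return (crossings % 2) == 1;
}

// A small closed mesh (a tetrahedron) for exercising the sketch.
std::vector<Triangle> MakeTetrahedron() {
    Vec3 o{0, 0, 0}, x{1, 0, 0}, y{0, 1, 0}, z{0, 0, 1};
    return {{o, x, y}, {o, x, z}, {o, y, z}, {x, y, z}};
}
```

In the real volume, each `IntersectRayTriangle` call is replaced by a traversal of the per-volume BVH, which is what makes the test fast enough to run at every integrator sample.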

One major issue we encountered was that when Maya converted the fluid to a mesh, it did not guarantee that the mesh was closed. A closed volume is essential for correct point-in-volume tests; otherwise the crossing algorithm produces incorrect results. To overcome this, we manually inspected the mesh and filled in holes, and we also wrote code intended to detect the situation by intersecting rays with the volume's bounding box and comparing those intersections with the ones from the volume itself.

As a starting point, we used the scattering and absorption coefficients of a red-wine-like material [6] and then tweaked them to achieve the desired look. We also spent time adjusting the step size of the volume integrator: with a larger step size, the integrator steps over the sauce completely in places, creating a kind of highlight over the underlying cherries, as if the skin had split. We tweaked this until the look was as natural and appealing as possible. With the fluid simulation and triangle mesh volume completed, we were able to render the sauce using pbrt's single scattering volume integrator and achieve very pleasing results.
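The step-size effect can be seen in a small Beer-Lambert calculation. The coefficients below are hypothetical stand-ins for our hand-tuned values, and `MarchedTransmittance` is an illustrative fixed-step march, not pbrt's integrator:

```cpp
#include <cassert>
#include <cmath>

// Hypothetical coefficients (per scene unit); ours started from a
// red-wine-like material [6] and were tweaked by eye.
const double kSigmaS = 0.9;   // scattering coefficient
const double kSigmaA = 0.3;   // absorption coefficient

// Exact Beer-Lambert transmittance over distance d in a homogeneous medium.
double Transmittance(double d) {
    return std::exp(-(kSigmaS + kSigmaA) * d);
}

// Fixed-step ray-marched approximation. If the span d through the medium is
// thinner than the step, the march accumulates nothing and the sauce is
// stepped over entirely there, which is what produced the cherry "highlights".
double MarchedTransmittance(double d, double step) {
    double opticalDepth = 0.0;
    for (double t = step; t <= d; t += step)
        opticalDepth += (kSigmaS + kSigmaA) * step;
    return std::exp(-opticalDepth);
}
```

For example, `MarchedTransmittance(0.05, 0.1)` returns 1.0 (fully transparent) even though the exact transmittance over that span is below 1, which is exactly the overstep artifact described above.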

Volumetric Photon Mapping

To render multiple scattering within the sauce we implemented volumetric photon mapping. We followed [4] as closely as possible. Since pbrt already supports surface photon mapping, we extended the existing surface integrator to handle volumes. When a photon is fired from a light source and intersects with the geometry, our code does the following:

  1. We check whether it first intersects the volume elements of the scene (in this case only the sauce). If no, continue with pbrt's regular surface intersection code.
  2. If yes, we ray march through the volume using random step sizes (jitter) as described in [4] to avoid aliasing.
  3. At each step, we use the cumulative pdf (transmittance) described in [4] to determine if an interaction occurs.
  4. If an interaction occurs, we randomly pick a point on the current interval as the interaction point (to eliminate bias, as described in [5]), store the photon and then use the albedo to determine if the photon is scattered or absorbed:
    1. If the photon is absorbed, the process ends.
    2. If the photon is scattered, we sample a new direction and attenuate the photon using the Henyey-Greenstein phase function. The photon is further attenuated by the albedo. We intersect this new ray with the geometry and return to step 1.
  5. The process continues until we step out of the volume, in which case we continue with pbrt's standard surface intersection code.
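The core of steps 2-4 can be sketched for a homogeneous slab as follows. The coefficients, the `g` parameter, and the slab model are illustrative assumptions; the real code operates on pbrt's volume and geometry types:

```cpp
#include <cassert>
#include <cmath>
#include <random>

// Illustrative coefficients standing in for the tuned sauce values.
const double kSigmaS = 0.9, kSigmaA = 0.3;   // scattering / absorption
const double kSigmaT = kSigmaS + kSigmaA;    // extinction

// Probability that an interaction occurs within one march step (step 3).
double InteractionProbability(double step) { return 1.0 - std::exp(-kSigmaT * step); }

// The albedo decides scattering vs. absorption at an interaction (step 4).
double Albedo() { return kSigmaS / kSigmaT; }

// Sample cos(theta) from the Henyey-Greenstein phase function by inverting
// its CDF; g near 0 degenerates to isotropic scattering.
double SampleHG(double g, double u) {
    if (std::fabs(g) < 1e-3) return 1.0 - 2.0 * u;
    double s = (1.0 - g * g) / (1.0 - g + 2.0 * g * u);
    return (1.0 + g * g - s * s) / (2.0 * g);
}

// March a photon through a slab of thickness `depth` with jittered steps
// (step 2). Returns the interaction distance, or -1 if the photon exits
// the far side of the slab (step 5).
double MarchToInteraction(double depth, double baseStep, std::mt19937 &rng) {
    std::uniform_real_distribution<double> U(0.0, 1.0);
    double t = 0.0;
    while (t < depth) {
        double step = baseStep * (0.5 + U(rng));   // jittered step to avoid aliasing
        if (U(rng) < InteractionProbability(step)) {
            double p = t + U(rng) * step;          // random point on the interval,
            if (p < depth) return p;               // avoiding the bias noted in [5]
        }
        t += step;
    }
    return -1.0;
}
```

At a returned interaction point, the real integrator stores a photon and then uses `Albedo()` against a uniform random number to decide between absorption and a scattered direction drawn via `SampleHG`.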

To estimate the radiance we modified pbrt's single scattering integrator. It already estimates the direct light, so we only need to estimate the radiance from indirect in-scattered light, which simply requires looking up nearby photons and scaling the resulting sum [4]. Below is a rendering of pbrt's sample scene spotfog.pbrt, showing how a volume caustic can be captured using volumetric photon mapping.
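Concretely, the estimate has the form given in [4] (written here in our notation, gathering the n photons inside a sphere of radius r around the point x):

```latex
L_i(x, \vec{\omega}) \approx \frac{1}{\sigma_s(x)}
    \sum_{p=1}^{n} f(x, \vec{\omega}_p', \vec{\omega})
    \frac{\Delta\Phi_p(x, \vec{\omega}_p')}{\tfrac{4}{3}\pi r^3}
```

where f is the Henyey-Greenstein phase function, the omega terms are the photon's incoming direction and the viewing direction, Delta-Phi_p is the power of photon p, and (4/3) pi r^3 is the volume of the lookup sphere.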

Volume caustic rendered with volumetric photon mapping.

The major difficulty we encountered was that [4] is vague on some details; for example, when an interaction occurs, where exactly does it occur? [5] discusses this issue and notes that taking the upper bound of the current interval as the interaction point leads to bias, so taking a random point on the interval is suggested; we use this approach. Other details, such as how the radiance should be attenuated, were also glossed over, which made it difficult to achieve the kind of results given in [4]. Considerable time was spent trying various ideas alluded to in [5] and [8], inspecting the implementations of [6] and [7] (and the pseudocode of [9]) for different approaches, and tweaking settings in the scene file, but we could not remove the jaggedness of the caustic and achieve the smooth results of [4]. Regardless, our implementation produced satisfying results for the sauce itself.

Below are some images comparing the various approaches to rendering the sauce. The difference between the homogeneous volume with single scattering and volumetric photon mapping with multiple scattering is fairly subtle: the indirect light in the sauce creates more prominent highlights over the cherries, making them look more wet and juicy, as in high-quality food photography.

The sauce rendered with the same material as the cherries.

The sauce rendered with our homogeneous triangle mesh volume and single scattering.

The sauce rendered with volumetric photon mapping and multiple scattering (20000 volume photons).

Code

code.zip

Plate, Table, and Fork (Nora)

Since porcelain is a rough clay covered in a glass-like finish, we modeled the plate as two plates nested inside each other, one slightly larger than the other. The larger plate is a shiny, off-white uber material that is 40% transparent; the smaller plate is matte with the same off-white color. The table is a gray matte material. The fork uses the silver metal material from the sample scenes on the pbrt website; it has a smooth surface with sharp, bright highlights.

Final rendering of the plate, table and fork

Results

Combining the crust, cherries, sauce, plate, fork, and table, we created the following final render. It incorporates the flaky crust, glistening cherries, thick sauce, shiny plate, and metallic fork. Post-processing was done in Photoshop to create a depth-of-field effect. Isn't it delicious?

Final render of scene

Additional Results

No post processing was done on these images.

Light Crust

Dark Crust

References
