3D Painting on Scanned Surfaces

Maneesh Agrawala <maneesh@pepper.stanford.edu>
Andrew C. Beers <beers@cs.stanford.edu>
Marc Levoy <levoy@cs.stanford.edu>

Abstract

1. Introduction

2. System Configuration

The block diagram in figure 1 depicts our overall system configuration. Before we can paint, we must create a mesh representing a physical object. We use a Cyberware laser range scanner to take multiple scans of an object and combine them into a single mesh using the zipper software. The Polhemus Fastrak space tracking system tracks the location of a stylus as it is moved over the physical object. The painter application maps these stylus positions to positions on the zippered mesh.
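The mapping from tracked stylus positions to points on the zippered mesh can be illustrated with a deliberately simplified sketch: here we just find the nearest mesh vertex to the stylus tip by brute force. The function name and the nearest-vertex strategy are illustrative assumptions, not the painter's actual mapping, which is described in Section 4.

```python
import numpy as np

def nearest_mesh_vertex(stylus_pos, mesh_vertices):
    """Map a tracked stylus position to the index of the closest
    mesh vertex.  Brute-force illustration only; the real painter
    uses the object--mesh registration and painting machinery of
    Section 4 rather than a nearest-vertex lookup."""
    dists = np.linalg.norm(mesh_vertices - stylus_pos, axis=1)
    return int(np.argmin(dists))

# Two vertices on the x-axis; a stylus near x = 0.9 maps to vertex 1.
verts = np.array([[0.0, 0.0, 0.0],
                  [1.0, 0.0, 0.0]])
idx = nearest_mesh_vertex(np.array([0.9, 0.0, 0.0]), verts)
```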

The Cyberware scanner uses optical triangulation to determine the distance of points on the object from the scanning system. A sheet of laser light is emitted by the scanner. As the object is passed through this sheet of light, a camera, located at a known position and orientation within the scanner, watches the object. The scanner triangulates the depths of points along the intersection of the object and the laser sheet based on the image captured by the camera. As the object passes through the laser sheet, a mesh of points representing the object as seen from this point of view is formed.
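The geometric core of optical triangulation is a ray-plane intersection: each camera pixel defines a viewing ray, and the illuminated point lies where that ray meets the known laser sheet. The sketch below assumes the laser sheet is given as a calibrated plane (unit normal n and offset d, with n . p = d on the plane) and that the camera's position and per-pixel ray directions are known from calibration; the function name is hypothetical.

```python
import numpy as np

def triangulate_point(camera_pos, ray_dir, sheet_normal, sheet_d):
    """Intersect a camera viewing ray with the laser sheet plane.

    camera_pos   -- camera center of projection (assumed calibrated)
    ray_dir      -- viewing ray direction through the lit pixel
    sheet_normal -- unit normal n of the laser sheet plane
    sheet_d      -- plane offset d, with n . p = d for points p on the sheet
    Returns the 3D surface point, or None if the ray is parallel
    to the sheet.  A sketch of the principle, not the scanner's code.
    """
    denom = sheet_normal @ ray_dir
    if abs(denom) < 1e-9:
        return None  # ray runs parallel to the laser sheet
    t = (sheet_d - sheet_normal @ camera_pos) / denom
    return camera_pos + t * ray_dir

# Camera at the origin looking down +z; laser sheet is the plane z = 1.
pt = triangulate_point(np.zeros(3),
                       np.array([0.0, 0.0, 1.0]),
                       np.array([0.0, 0.0, 1.0]),
                       1.0)
```

Sweeping the object through the sheet and repeating this intersection for every lit pixel in every frame yields the range mesh described above.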

The Polhemus Fastrak tracking system reports the 3D position and orientation of a stylus used to select the area on the mesh to paint. A field generator located near the object emits an AC magnetic field which is detected by sensors in the stylus to determine the stylus's position and orientation with respect to the field generator. The painter application continuously polls the tracker for the stylus's position and orientation at about 30 Hertz.
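The polling loop can be sketched as follows. The `read_pose` and `handle_pose` callbacks are hypothetical stand-ins for the Fastrak serial interface and the painter's pose handler; only the ~30 Hz fixed-rate structure reflects the text.

```python
import time

POLL_HZ = 30  # approximate rate at which the painter polls the tracker

def poll_tracker(read_pose, handle_pose, duration_s):
    """Poll the tracker at roughly POLL_HZ for duration_s seconds.

    read_pose   -- callable returning (position, orientation);
                   stand-in for querying the Fastrak stylus
    handle_pose -- callable consuming each (position, orientation);
                   stand-in for the painter's pose handling
    """
    period = 1.0 / POLL_HZ
    deadline = time.monotonic() + duration_s
    while time.monotonic() < deadline:
        position, orientation = read_pose()
        handle_pose(position, orientation)
        time.sleep(period)  # crude pacing; a real loop would compensate
                            # for the time spent reading and handling

# Exercise the loop briefly with dummy callbacks.
poses = []
poll_tracker(lambda: ((0.0, 0.0, 0.0), (0.0, 0.0, 0.0, 1.0)),
             lambda p, o: poses.append((p, o)),
             duration_s=0.1)
```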

3. Data Representation

4. Methods

4.1 Object--mesh registration

4.2 Painting

4.3 Brush effects

4.4 Combating registration errors

5. Results

6. Future Directions

7. Conclusions

8. Acknowledgments