3D Painting on Scanned Surfaces

Maneesh Agrawala <maneesh@pepper.stanford.edu>
Andrew C. Beers <beers@cs.stanford.edu>
Marc Levoy <levoy@cs.stanford.edu>


1. Introduction

2. System Configuration

3. Data Representation

4. Methods

4.1 Object-mesh registration

4.2 Painting

4.3 Brush effects

4.4 Combating registration errors

5. Results

6. Future Directions

One of the drawbacks of our system is the non-trivial set-up time required to register the physical object to the mesh. Registration can take several minutes and must be repeated each time the user wants to paint an object. Furthermore, if the object is moved after it has been registered, it must be re-registered. The most time-consuming step is the final hand alignment of the registration points to the surface mesh.

One solution to this problem would be to register the physical object as it is being scanned by the 3D scanner. Assuming the scanner always creates a mesh in the same coordinate system for each scan, we can preregister the tracker coordinate system to this mesh coordinate system using Besl's algorithm. Then, scanning any new object will automatically register it to the tracking system. However, this approach fails when we combine multiple scans using the zipper software, because the physical object must be moved between scans and so we lose the correspondence between the mesh and the object.
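The pre-registration step above rests on Besl's iterated-closest-point algorithm. As a rough sketch of that idea (not the paper's implementation; the function name and parameters here are illustrative), each iteration pairs every point with its nearest neighbor on the target and then solves the closed-form rigid alignment by SVD:

```python
import numpy as np

def icp_align(src, dst, iters=20):
    """Sketch of point-to-point ICP: repeatedly pair each source point
    with its nearest destination point, then find the rigid transform
    (R, t) minimizing the summed squared pair distances via SVD."""
    R, t = np.eye(3), np.zeros(3)
    for _ in range(iters):
        moved = src @ R.T + t
        # Nearest-neighbor correspondences (brute force, for clarity).
        d = np.linalg.norm(moved[:, None, :] - dst[None, :, :], axis=2)
        pairs = dst[d.argmin(axis=1)]
        # Closed-form rigid alignment of the paired point sets.
        mu_s, mu_d = moved.mean(axis=0), pairs.mean(axis=0)
        H = (moved - mu_s).T @ (pairs - mu_d)
        U, _, Vt = np.linalg.svd(H)
        S = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R_step = Vt.T @ S @ U.T
        t_step = mu_d - R_step @ mu_s
        # Compose the incremental transform with the running estimate.
        R, t = R_step @ R, R_step @ t + t_step
    return R, t
```

In practice the scanner-to-tracker transform would be computed once from a calibration object and then reused for every subsequent scan, which is what makes the automatic registration possible.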

Requiring that the object remain stationary once it has been registered can make painting awkward and unnatural. Allowing the object to be moved would let the user paint more comfortably. One way to permit such object movement would be to attach a second sensor of the space tracker to the object and then track the movement of the object in addition to the movement of the brush.
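With a second sensor rigidly attached to the object, the paint location becomes a simple change of coordinate frames: express the tracked brush tip in the object's moving frame rather than in fixed tracker space. A minimal sketch, assuming 4x4 homogeneous transforms reported by the tracker (the function and argument names are illustrative):

```python
import numpy as np

def brush_in_object_frame(T_object, T_brush, p_brush_local):
    """Map a point on the brush (e.g. the tip, in the brush sensor's
    local frame) into the object's frame: p_obj = T_object^-1 T_brush p.
    T_object and T_brush are 4x4 sensor-to-tracker transforms."""
    p = np.append(p_brush_local, 1.0)          # homogeneous coordinates
    p_world = T_brush @ p                      # brush tip in tracker space
    p_obj = np.linalg.inv(T_object) @ p_world  # re-express in object frame
    return p_obj[:3]
```

Because the paint location is computed in the object's own frame, it is invariant under object motion, so the user could freely reposition the object mid-stroke.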

A disadvantage of our approach is that we can only paint meshes for which we have a corresponding physical object. Thus, we cannot directly paint a mesh created with a modeling or CAD program, for example. However, several rapid prototyping technologies have recently been developed for synthesizing 3D objects directly from computer models [7] [12]. Although the expense would be considerable, with such a prototyping system we could create a physical object representing almost any mesh and then use it as a guide for painting on the mesh.

Another problem is that the user moves the sensor along the physical object while paint is applied only to the mesh on the monitor. Thus, the user must look in two places at once to see where the paint is being applied. This problem is mitigated by placing the physical object directly in front of the monitor while painting.

One of the problems with polygon meshes is that they are hard to animate. Many animators are used to manipulating the control points of curved surface patches, not the vertices of an irregular mesh. Furthermore, they want to manipulate only a few control points, not the hundreds of thousands of vertices in our typical mesh. One solution we are investigating is to fit NURBS patches to our meshes. The boundaries of these patches would be specified by tracing them using our system. In this case we would replace our space-filling brushes with an algorithm that chains together mesh vertices lying along the path traced out by the stylus.

7. Conclusions

8. Acknowledgments