Creating Generative Models from Range Images
Ravi Ramamoorthi
Computer Graphics Laboratory, Stanford University
Abstract
We describe a new approach for creating concise high-level
generative models from range images or other methods of
obtaining approximate point clouds. Using a variety of
acquisition techniques and a user-defined class of models,
our method produces a compact and intuitive object
description that is robust to noise and is easy to
edit. The algorithm has two inter-related
phases---recognition, which chooses an appropriate model
within a user-specified hierarchy, and parameter
estimation, which adjusts the model to fit the data as
closely as possible. We give a simple method for
automatically making tradeoffs between simplicity and
accuracy to determine the best model within a given
hierarchy. We also describe general techniques for
optimizing a specific generative model, including methods
for curve fitting and for exploiting sparsity. Using a few
simple generative hierarchies that subsume many of the
models previously used in computer vision, we demonstrate
our approach to model recovery on real and synthetic data.
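The simplicity-versus-accuracy tradeoff can be pictured as a scoring rule that charges each candidate model for its residual error plus a penalty per parameter. The Python sketch below is only an illustration of such a rule; the penalty form, the constant, and the candidate names are assumptions, not the criterion used in the paper.

def model_score(fit_error, num_params, penalty_per_param=0.01):
    """Lower is better: residual fitting error plus a complexity penalty."""
    return fit_error + penalty_per_param * num_params

def select_model(candidates):
    """candidates: iterable of (name, fit_error, num_params) tuples."""
    return min(candidates, key=lambda c: model_score(c[1], c[2]))

# A richer model fits a little better here but needs more parameters,
# so the simpler model wins under this (hypothetical) penalty.
candidates = [("straight-axis model", 0.08, 6), ("curved-axis model", 0.05, 14)]
print(select_model(candidates)[0])   # -> "straight-axis model"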
Summary
It has recently become possible to acquire reasonably accurate point clouds or range data from 3D objects. For graphical applications, these point clouds are usually transformed into polygonal meshes or spline patches. However, these representations are difficult to manipulate and require a large amount of data. Computer vision approaches can recover models from sparse range data, but these methods are usually restricted to specific, fairly simple models. By using general algebraic models, the generative models proposed by Snyder, we are able to subsume many of the previous approaches in computer vision and recover high-level models from incomplete range data. The result is a compact, intuitive object description that is robust to noise and easy to edit.
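To make the idea of an algebraic generative model concrete, the following Python sketch evaluates a simple swept surface, a circular cross-section scaled by a profile curve as it moves along an axis, in the spirit of Snyder's framework. The particular cross-section, profile, and function names are illustrative assumptions rather than shapes from the paper.

import numpy as np

def cross_section(u, radius=1.0):
    """Circular cross-section; u in [0, 2*pi)."""
    return np.stack([radius * np.cos(u), radius * np.sin(u)], axis=-1)

def profile(v):
    """Scaling profile along the sweep; v in [0, 1]. Hypothetical shape."""
    return 0.5 + 0.5 * np.sin(np.pi * v)   # bulges in the middle

def generative_surface(u, v, length=2.0):
    """Evaluate surface points S(u, v) on a grid of u and v samples."""
    uu, vv = np.meshgrid(u, v, indexing="ij")
    xy = cross_section(uu) * profile(vv)[..., None]   # scale the cross-section
    z = length * vv                                   # sweep along the z-axis
    return np.concatenate([xy, z[..., None]], axis=-1)

# Sample a 64 x 32 grid of surface points.
pts = generative_surface(np.linspace(0, 2 * np.pi, 64), np.linspace(0, 1, 32))
print(pts.shape)   # (64, 32, 3)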
Algorithm
Besides the range data, the input to our algorithm is a model class or hierarchy, including an algebraic model description from which the model can be evaluated, constraints on curves in the final model, and code for an initial guess for the root model of the hierarchy. The system then chooses the appropriate model within the hierarchy and optimizes its parameters in two inter-related phases: recognition, which selects a model within the hierarchy, and parameter estimation, which adjusts the model parameters to fit the data as closely as possible.
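The sketch below illustrates one way the two phases could interact: parameter estimation by nonlinear least squares over point-to-model distances, and recognition as a greedy descent of the hierarchy that accepts a richer model only when the accuracy gain outweighs a complexity penalty. The model interface (distances, children, initial_guess), the penalty constant, and the acceptance test are hypothetical stand-ins for the machinery described in the paper.

import numpy as np
from scipy.optimize import least_squares

def fit_parameters(model, points, initial_guess):
    """Parameter estimation: minimize point-to-model distances."""
    def residuals(params):
        # `model.distances` is a hypothetical method returning one
        # distance per data point for the given parameter vector.
        return model.distances(params, points)
    result = least_squares(residuals, initial_guess)
    return result.x, float(np.mean(result.fun ** 2))

def recognize(root_model, points, penalty_per_param=1e-3):
    """Recognition: descend the hierarchy while added curves pay their way."""
    model = root_model
    params, error = fit_parameters(model, points, model.initial_guess(points))
    while True:
        best = None
        for child in model.children():   # hypothetical: models with extra curves
            c_params, c_error = fit_parameters(
                child, points, child.initial_guess(points))
            # Accept a richer model only if the accuracy gain outweighs
            # the complexity penalty (cf. the scoring sketch above).
            if error - c_error > penalty_per_param * (len(c_params) - len(params)):
                best, params, error = child, c_params, c_error
        if best is None:
            return model, params, error
        model = best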
Results
These figures illustrate some of our results; more information can be found in the paper.

Figure 1: A scene made of several generative models. The models were each recovered from actual range data using a few simple model hierarchies, and then composed. A smooth, compact representation was generated despite noisy and incomplete data.
Figure 2: A comparison of the original and recovered models for some of the objects in Figure 1.
Figure 3: Recognition tree for a banana model, showing the order in which curves are added.
Figure 4: Comparison of fitting generative models and superquadrics.
Figure 5: An example of using the wrong input hierarchy, one that does not have the necessary degrees of freedom to model the data. The left images show the banana and bowl recovered using the spoon hierarchy, while the rotating generalized cylinder is used in the last two to recover the ladle and spoon. The algorithm still does the best it can, capturing some of the dominant aspects of the shapes.
Figure 6: Generative models allow for easy, intuitive editing, for instance, of a spoon into a ladle.

Relevant Links
SIGGRAPH 99 paper, Creating Generative Models from Range Images, by Ravi Ramamoorthi and James Arvo: pdf (1.5M), postscript (2.5M), high-res pdf (9M), high-res ps (29M)
FTP site for all the material on the CDROM
Figure 1. A scene made of generative models recovered from actual range data.
Figure 2. Comparison of the range data (top) with the recovered models (bottom) for the objects in the scene in Figure 1.
Figure 3. Recognition tree for the banana model, showing the order in which curves are added.
Figure 4. Comparison of a superquadric fit (left) with the banana model (right), demonstrating the benefit of generative models over commonly used vision primitives.
Figure 5. Using the wrong model hierarchy for the banana, bowl, ladle, and spoon. The algorithm still produces a simple model that mimics the original object to the extent possible.
Figure 6. Generative models allow for easy editing. A recovered spoon (left) is edited intuitively to give a ladle-like shape (right).