Broad Area Colloquium For AI-Geometry-Graphics-Robotics-Vision


Mesh Processing Sequences for Out-of-core Processing of Large Surface Models


Jack Snoeyink
UNC Chapel Hill

Monday, May 5, 2003, 4:15 PM
TCSeq 200
http://robotics.stanford.edu/ba-colloquium/

Abstract

As Stanford folk know well, polygonal models acquired with 3D scanning technology, or from large-scale CAD applications, easily reach gigabyte sizes larger than the address space of common 32-bit PCs, so processing them currently requires special out-of-core effort. We define mesh processing sequences, an abstraction for computing on the surface meshes that represent such models. A processing sequence represents a mesh as a particular interleaved ordering of indexed triangles and vertices. Mesh access is restricted to a fixed traversal order, but full connectivity and geometry information is available for the active elements of the traversal. At any time only a small portion of the mesh is kept in-core, with the bulk of the mesh data residing on disk. Mesh processing sequences provide seamless and highly efficient out-of-core access to very large meshes for algorithms that can adapt their computations to this fixed ordering.
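To make the idea concrete, here is a minimal C++ sketch of how a consumer of such a sequence might look. The Event encoding, the Vertex type, and the finalization flags are assumptions for illustration, not the actual API from the talk; the point is only that a vertex is announced before its first use, finalized at its last use, and that just the active set ever sits in memory.

    // Minimal sketch (assumed types and event encoding, not the talk's actual
    // API): triangles and vertices arrive interleaved in one fixed order, a
    // vertex is announced before its first use and flagged as finalized at its
    // last use, so only a small "active" set of vertices must be held in memory.
    #include <algorithm>
    #include <cstddef>
    #include <cstdio>
    #include <unordered_map>
    #include <vector>

    struct Vertex { float x, y, z; };

    // One record of the interleaved stream: a new vertex or a triangle.
    struct Event {
        enum Kind { VERTEX, TRIANGLE } kind;
        int id;            // vertex id (VERTEX events)
        Vertex v;          // vertex position (VERTEX events)
        int tri[3];        // corner vertex ids (TRIANGLE events)
        bool finalize[3];  // is this the last triangle using that corner?
    };

    // Consume the stream, holding only the active vertices in the map.
    void process(const std::vector<Event>& stream) {
        std::unordered_map<int, Vertex> active;
        std::size_t peak = 0;
        for (const Event& e : stream) {
            if (e.kind == Event::VERTEX) {
                active[e.id] = e.v;                  // vertex enters the active set
                peak = std::max(peak, active.size());
            } else {
                // Positions of all three corners are available here; a real
                // application would do its per-triangle computation now.
                for (int i = 0; i < 3; ++i)
                    if (e.finalize[i]) active.erase(e.tri[i]);  // leaves the set
            }
        }
        std::printf("peak active vertices: %zu\n", peak);
    }

    int main() {
        // Tiny hand-built sequence: two triangles sharing the edge (0,1).
        std::vector<Event> s = {
            {Event::VERTEX, 0, {0, 0, 0}, {}, {}},
            {Event::VERTEX, 1, {1, 0, 0}, {}, {}},
            {Event::VERTEX, 2, {0, 1, 0}, {}, {}},
            {Event::TRIANGLE, -1, {}, {0, 1, 2}, {false, false, true}},
            {Event::VERTEX, 3, {1, 1, 0}, {}, {}},
            {Event::TRIANGLE, -1, {}, {1, 3, 0}, {true, true, true}},
        };
        process(s);  // prints "peak active vertices: 3"
    }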

Researchers at Georgia Tech, the Technion, Caltech, Tuebingen, UNC, and elsewhere have developed compression schemes for storing triangular and polygonal meshes that grow the compressed region by boundary operations. Isenburg and Gumhold have shown that the compressor can be implemented to keep boundary sizes relatively small, so that decompression can generate processing sequences out-of-core. Decompression speeds are CPU-limited and exceed one million vertices and two million triangles per second on a 1.8 GHz Athlon processor. Because full connectivity information is available along the decompression boundaries, this provides seamless mesh access for incremental in-core processing of gigantic meshes.

Processing sequences support two slightly higher-level abstractions: boundary-based and buffer-based mesh processing. We illustrate these by adapting two different mesh simplification algorithms to perform their computation through a prototype of our processing sequence API. In both cases, processing sequences improve simplification quality, execution speed, and memory footprint. We believe that these abstractions will prove useful for other tasks, such as remeshing, parameterization, and smoothing, for which currently only in-core solutions exist.
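As a rough illustration of the buffer-based style (the function names and toy driver below are hypothetical, not the prototype API described in the talk), an adapted simplifier could pull triangles from the sequence into a bounded buffer, run an ordinary in-core pass whenever the buffer fills, and flush the result, so the memory footprint stays fixed regardless of mesh size:

    // Hypothetical buffer-based pass (names and driver are assumptions for
    // illustration): triangles are pulled from the sequence into a bounded
    // buffer; whenever the buffer fills, an ordinary in-core simplifier runs
    // on it and the result is flushed, keeping the memory footprint fixed.
    #include <cstddef>
    #include <cstdio>
    #include <vector>

    struct Triangle { int v[3]; };

    template <class Source, class Simplifier, class Sink>
    void buffer_based_pass(Source next_triangle, Simplifier simplify_in_core,
                           Sink write_out, std::size_t capacity) {
        std::vector<Triangle> buffer;
        Triangle t;
        while (next_triangle(t)) {
            buffer.push_back(t);
            if (buffer.size() >= capacity) {  // buffer full: do in-core work
                simplify_in_core(buffer);
                write_out(buffer);            // flush the finished region
                buffer.clear();               // footprint stays bounded
            }
        }
        simplify_in_core(buffer);             // handle the remaining tail
        write_out(buffer);
    }

    int main() {
        // Toy driver: ten dummy triangles, a no-op "simplifier", stdout as sink.
        int produced = 0;
        buffer_based_pass(
            [&](Triangle& t) { t = {{produced, produced + 1, produced + 2}};
                               return produced++ < 10; },
            [](std::vector<Triangle>&) { /* a real adaptation simplifies here */ },
            [](const std::vector<Triangle>& b) {
                std::printf("flushed %zu triangles\n", b.size()); },
            4);
    }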

About the Speaker

Jack Snoeyink received his PhD in computer science from Stanford in 1990, under the supervision of Prof. Leo Guibas. After a year of postdoctoral study in Utrecht, he joined the faculty at the University of British Columbia. He moved to the University of North Carolina at Chapel Hill as a professor at the turn of the millennium. Jack works on computational geometry and likes to call his research elliptical, because he divides his time between theoretical and practical foci. He hopes that people don't find it hyperbolic.
Contact: bac-coordinators@cs.stanford.edu
