Visualization and Interaction in Large 3D Virtual Environments

Thomas A. Funkhouser

AT&T Bell Laboratories

Abstract

Interactive systems that simulate the visual experience of inhabiting a three-dimensional environment along with other users are an important emerging application domain. However, interesting three-dimensional models may consist of millions of polygons and require gigabytes of data, far more than today's workstations can render at interactive frame rates or hold in memory at once. Furthermore, hundreds or thousands of users may inhabit the same virtual environment simultaneously, creating a multitude of potential interactions. To achieve real-time visual simulation in such large virtual environments, a system must identify a small, relevant subset of the model to store in memory and process at any given time.

I will describe a few techniques for handling the vast complexity of large, sparse virtual environments in visual simulation applications. These techniques rely upon an efficient geometric database that represents the model as a set of objects, each described at multiple levels of detail. The database also contains a spatial subdivision that partitions the model into an adjacency graph of regions with similar visibility characteristics. The object hierarchy and spatial subdivision are used by visibility determination and multi-resolution detail elision algorithms to compute a small subset of the model to store in memory and process during each step of the computation.
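To make the idea concrete, the following is a minimal C++ sketch of the kind of database just described: cells of a spatial subdivision carrying precomputed potentially-visible-set links, and objects stored at several levels of detail, from which a per-frame render set is selected. The structure names and the simple distance-based level-of-detail heuristic are my own illustrative assumptions; the actual system chooses detail levels with a cost/benefit optimization under a frame-time budget.

#include <cmath>
#include <cstdio>
#include <vector>

// Hypothetical sketch: a spatial subdivision whose cells carry
// potentially-visible-set (PVS) links, plus objects with multiple
// levels of detail (LODs).  Names and heuristics are illustrative only.

struct Point { float x, y, z; };

struct Object {
    Point center;                        // representative position of the object
    std::vector<int> lodMeshIds;         // mesh handles, coarsest .. finest
};

struct Cell {
    std::vector<int> objects;            // indices into the object array
    std::vector<int> potentiallyVisible; // cells reachable through portals
};

struct Database {
    std::vector<Cell> cells;
    std::vector<Object> objects;
};

static float distance(const Point& a, const Point& b) {
    float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return std::sqrt(dx * dx + dy * dy + dz * dz);
}

// Choose one LOD per potentially visible object: nearby objects get finer
// meshes, distant ones coarser meshes.  A simple distance threshold stands
// in here for the system's cost/benefit detail-elision optimization.
std::vector<int> selectRenderSet(const Database& db, int observerCell,
                                 const Point& eye) {
    std::vector<int> meshesToDraw;
    for (int cellId : db.cells[observerCell].potentiallyVisible) {
        for (int objId : db.cells[cellId].objects) {
            const Object& obj = db.objects[objId];
            float d = distance(eye, obj.center);
            int numLods = static_cast<int>(obj.lodMeshIds.size());
            int lod = numLods - 1 - static_cast<int>(d / 10.0f); // finer when close
            if (lod < 0) lod = 0;                                // clamp to coarsest
            meshesToDraw.push_back(obj.lodMeshIds[lod]);
        }
    }
    return meshesToDraw;
}

int main() {
    // Tiny two-cell model: cell 0 sees itself and cell 1 through a portal.
    Database db;
    db.objects = { { {2, 0, 0},  {10, 11, 12} },   // near object, three LODs
                   { {40, 0, 0}, {20, 21} } };     // far object, two LODs
    db.cells   = { { {0}, {0, 1} },                // cell 0
                   { {1}, {1, 0} } };              // cell 1
    Point eye = {0, 0, 0};
    for (int mesh : selectRenderSet(db, 0, eye))
        std::printf("draw mesh %d\n", mesh);       // draws mesh 12 and mesh 20
    return 0;
}

In this toy example the nearby object is drawn with its finest mesh while the distant object falls back to its coarsest, and only objects in cells reachable from the observer's cell are considered at all, which is the essence of combining visibility determination with detail elision.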

These techniques have been used in three applications: interactive building walkthroughs, radiosity computations, and multi-user virtual reality. The building walkthrough system maintains more than 15 frames per second during visualization of architectural models containing over one million polygons. The radiosity system generates solutions for input models containing over fifty thousand polygons. The multi-user virtual reality system manages interactions among more than one thousand simultaneous users in real time. In all cases, the tested data sets are an order of magnitude larger than those supported by previous state-of-the-art systems.

This is joint work with Carlo Sequin (University of California, Berkeley), Seth Teller (MIT), Celeste Fowler (SGI), and Pat Hanrahan (Stanford).