Presenter: Naga Govindaraju, University of North Carolina
We present a real-time visibility culling algorithm for rendering complex 3D environments. Our parallel occlusion culling algorithm uses occlusion-switches to compute the visible primitives for a frame. An occlusion-switch consists of two graphics processing units (GPUs); each GPU either computes an occlusion representation or culls away primitives from a given viewpoint, and the two GPUs switch roles across successive frames. We perform occlusion culling at both the object level and the sub-object level to significantly reduce the number of visible primitives. The visible primitives can then, in parallel, either be rendered on a separate GPU or used to generate hard-edged umbral shadows from a moving light source.
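As an illustrative sketch only (not the paper's implementation), the alternation of roles in an occlusion-switch can be pictured as a simple per-frame schedule: one GPU builds the occlusion representation while the other culls, and the roles swap each frame. The function and GPU names below are hypothetical placeholders.

```python
# Sketch of the occlusion-switch role alternation. All names here are
# illustrative; the real system assigns GPU work, not dictionary labels.

def occlusion_switch(frames, gpus=("GPU0", "GPU1")):
    """For each frame, assign one GPU to build the occlusion
    representation and the other to cull, then swap roles."""
    schedule = []
    occluder_gpu, culler_gpu = gpus
    for frame in frames:
        schedule.append({
            "frame": frame,
            "occlusion_representation": occluder_gpu,
            "culling": culler_gpu,
        })
        # Swap roles for the next frame.
        occluder_gpu, culler_gpu = culler_gpu, occluder_gpu
    return schedule

sched = occlusion_switch(range(4))
```

In this toy schedule, the GPU that built the occlusion representation for one frame culls primitives for the next, which is the switching behavior the abstract describes.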
We use our visibility culling algorithm to compute the potential shadow receivers and shadow casters from the eye and the light source, respectively, and further reduce their number using a novel cross-culling algorithm. Finally, we use a combination of shadow maps and shadow polygons to generate shadows, and we present LOD-selection techniques that eliminate artifacts in self-shadows. Our algorithms require low bandwidth to and from the graphics card and involve no frame-buffer readbacks. We exploit frame-to-frame coherence to reduce the network communication bandwidth between the PCs, and we present techniques that reduce graphics pipeline stalls and use the processing power of the GPUs efficiently.
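A toy illustration of the cross-culling idea: casters that cannot affect any potential receiver are discarded, and receivers not covered by any surviving caster are discarded in turn. The 1D interval-overlap test below is a hypothetical stand-in for the paper's GPU-based visibility tests, chosen only to make the mutual-reduction structure concrete.

```python
# Toy cross-culling sketch: intervals stand in for real geometry, and
# overlaps() stands in for the actual GPU-based caster/receiver test.

def overlaps(a, b):
    """Hypothetical stand-in test: do two 1D intervals intersect?"""
    return a[0] <= b[1] and b[0] <= a[1]

def cross_cull(casters, receivers):
    """Mutually reduce both sets: keep casters that reach some
    receiver, then receivers reached by some surviving caster."""
    kept_casters = [c for c in casters
                    if any(overlaps(c, r) for r in receivers)]
    kept_receivers = [r for r in receivers
                      if any(overlaps(c, r) for c in kept_casters)]
    return kept_casters, kept_receivers

casters = [(0, 2), (5, 6), (10, 12)]
receivers = [(1, 3), (11, 15)]
kept_c, kept_r = cross_cull(casters, receivers)
```

Here the caster (5, 6) is culled because it touches no receiver; in the real algorithm, this mutual reduction shrinks both the shadow-caster and shadow-receiver sets before shadow generation.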
We have implemented our algorithm on a cluster of three networked PCs, each with an NVIDIA GeForce 4 GPU. We highlight its performance on three complex environments: a power plant model with 13 million triangles, a house model with 1 million triangles, and an oil tanker model with 82 million triangles. In practice, we are able to render these datasets at interactive frame rates with little loss in image quality.