Modeling and Rendering Architecture from Photographs

Paul Debevec
UC Berkeley

Abstract

In this talk I will present a new approach for modeling and rendering architectural scenes from a sparse set of still photographs. The modeling approach, which combines both geometry-based and image-based techniques, has two components. The first component is a photogrammetric modeling method which facilitates the recovery of the basic geometry of the photographed scene. Our photogrammetric modeling approach is effective, convenient, and robust because it takes advantage of the constraints that are characteristic of architectural scenes. The second component is model-based stereo, which recovers how the real scene deviates from the basic model. By making use of the model, this stereo technique robustly recovers accurate depth from widely-spaced image pairs. Consequently, our approach can model large architectural environments with far fewer photographs than current image-based modeling approaches. For producing renderings, we present view-dependent texture mapping, a method of compositing multiple views of a scene that better simulates geometric detail and non-Lambertian reflectance than flat texture mapping. I will present results that demonstrate our approach's ability to create realistic renderings of architectural scenes from viewpoints far from the original photographs, including the Rouen Revisited art installation presented at SIGGRAPH '96.
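The core idea of view-dependent texture mapping — weighting each photograph's contribution by how closely its viewing direction agrees with the novel camera's — can be illustrated with a minimal sketch. The inverse-angle weighting below is an illustrative assumption, not the exact blending function used in the system; the function name and inputs are hypothetical.

```python
import numpy as np

def view_dependent_blend(view_dirs, novel_dir, colors):
    """Blend the colors a surface point has in several photographs,
    weighting each photo by the angular proximity of its viewing
    direction to the novel (virtual) viewing direction.

    view_dirs : list of 3-vectors, one per photograph
    novel_dir : 3-vector, viewing direction of the novel view
    colors    : list of RGB triples, the point's color in each photo
    """
    novel_dir = novel_dir / np.linalg.norm(novel_dir)
    weights = []
    for d in view_dirs:
        d = d / np.linalg.norm(d)
        # angle between the original and novel viewing directions
        angle = np.arccos(np.clip(np.dot(d, novel_dir), -1.0, 1.0))
        # closer views get larger weights (assumed weighting scheme)
        weights.append(1.0 / (angle + 1e-6))
    weights = np.array(weights)
    weights /= weights.sum()          # normalize so weights sum to 1
    return weights @ np.array(colors, dtype=float)
```

When the novel view coincides with one of the photographs, its weight dominates and the rendering reproduces that photograph's appearance, which is what makes the technique handle unmodeled geometric detail and view-dependent reflectance gracefully.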
