
Broad Area Colloquium for Artificial Intelligence,
Geometry, Graphics, Robotics and Vision

Computational Imaging

Shree K. Nayar
Department of Computer Science
Columbia University, New York

Monday, November 19, 2001, 4:15PM
Gates B01


The light field associated with a scene is complex. Conventional still and video cameras perform a specific sampling of the light field that is inadequate for many applications in computer vision and computer graphics. Computational imaging seeks to sample the light field in unconventional ways to produce new forms of visual information. Broadly speaking, computational imaging consists of three components: novel imaging optics, one or more image detectors, and a computational module. This combination allows one to sample and process the light field in powerful ways. It provides a general framework for developing imaging systems that significantly improve one or more imaging dimensions, such as field of view, brightness, color, and depth.

In this talk, we present several examples of computational vision sensors. The first part of the talk focuses on the use of catadioptrics (lenses and mirrors) for capturing unusually large fields of view. We describe several methods for obtaining single-viewpoint and multi-viewpoint images using this approach. The second part of the talk addresses the problem of acquiring high dynamic range images using a low dynamic range detector. We present two approaches for extracting the desired extra bits at each pixel; one requires multiple images while the other uses just a single image. Several interactive demonstrations of our results will be shown. These results have implications for digital imaging, immersive imaging, image-based rendering, 3D scene modeling, robotics and advanced interfaces.
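To make the multi-exposure idea concrete: a standard way to extend dynamic range (in the spirit of classic radiance-map recovery, not necessarily the specific method presented in the talk) is to capture several aligned images at different exposure times and average their radiance estimates, weighting each pixel by how far it is from under- or over-exposure. The sketch below assumes a linear sensor response and values normalized to [0, 1]; the function name and the simple "hat" weighting are illustrative assumptions.

```python
import numpy as np

def merge_exposures(images, exposure_times, eps=1e-8):
    """Merge aligned low-dynamic-range exposures (linear response,
    values in [0, 1]) into a single high-dynamic-range radiance estimate.

    Each pixel's radiance is a weighted average of (pixel / exposure_time)
    across exposures; the hat weight peaks at mid-gray and goes to zero
    at 0 and 1, discounting clipped (under/over-exposed) measurements.
    """
    num = np.zeros_like(images[0], dtype=np.float64)
    den = np.zeros_like(images[0], dtype=np.float64)
    for img, t in zip(images, exposure_times):
        w = 1.0 - np.abs(2.0 * img - 1.0)  # hat weight: 1 at 0.5, 0 at 0 and 1
        num += w * img / t
        den += w
    return num / (den + eps)

# Synthetic scene: radiances spanning a range no single exposure covers.
radiance = np.array([0.02, 0.2, 2.0, 20.0])
times = [1.0, 0.1, 0.01]                                  # three exposure times
shots = [np.clip(radiance * t, 0.0, 1.0) for t in times]  # clipped LDR captures
est = merge_exposures(shots, times)                       # recovers all four radiances
```

In this toy example the brightest point (20.0) saturates in the two longer exposures and the darkest (0.02) is only well measured in the longest one, yet the weighted merge recovers the full range because each pixel is dominated by the exposures in which it is mid-range.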

About the Speaker

Shree K. Nayar is a Professor in the Department of Computer Science at Columbia University. He received his PhD degree in Electrical and Computer Engineering from the Robotics Institute at Carnegie Mellon University in 1990. He currently heads the Columbia Automated Vision Environment (CAVE), which is dedicated to the development of advanced computer vision systems. His research focuses on three areas: the creation of novel vision sensors, the design of physics-based models for vision, and the development of algorithms for scene understanding. His work is motivated by applications in the fields of digital imaging, computer graphics, human-machine interfaces, robotics, and image understanding. He has received the David Marr Prize twice (1990 and 1995), the David and Lucile Packard Fellowship (1992), the National Young Investigator Award (1993), and the Keck Foundation Award for Excellence in Teaching (1995).
