Synthetic Aperture Imaging using Dense Camera Arrays

PDF (20MB)

Abstract

Synthetic aperture imaging is a technique that projects images of a scene from different views onto a virtual focal surface, enabling us to "see through" occlusions in the scene. Using a 100-camera array, we have applied synthetic aperture imaging to view objects concealed behind dense foliage and to track a person moving through a crowd. The ability to see through occlusions makes synthetic aperture imaging a potentially powerful tool for surveillance.

This work makes two contributions. First, we characterize the image warps required for synthetic aperture imaging using projective geometry. This analysis leads to a robust camera calibration procedure for synthetic aperture imaging, and it shows how the geometric complexity of the image warps depends on the camera and focal-plane configuration. In particular, we show that we can vary the focus through families of frontoparallel and tilted focal planes by simply shifting and adding the camera images. Because image shifts are relatively simple to realize in hardware, this leads to a real-time system, which we demonstrate by tracking a person moving through a crowd.
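
The shift-and-add idea can be sketched in a few lines. The following is a minimal illustration rather than the system itself: it assumes the camera images have already been warped onto a common reference plane by per-camera homographies, that the camera positions in the camera plane are known, and that integer-pixel shifts suffice. The function name and parameters (shift_and_add, camera_uv, delta) are illustrative, not part of the system described here.

    import numpy as np

    def shift_and_add(images, camera_uv, delta):
        """Refocus a planar camera array by shifting and adding.

        images    : list of HxWxC float arrays, already warped onto a common
                    reference plane (the initial homography mentioned above)
        camera_uv : Nx2 array of camera positions in the camera plane
        delta     : scalar selecting a frontoparallel focal plane
                    (delta = 0 leaves the focus on the reference plane)
        """
        acc = np.zeros_like(images[0], dtype=np.float64)
        for img, (u, v) in zip(images, camera_uv):
            # Shift each view in proportion to its camera position; points on
            # the selected focal plane then align across all views, while
            # off-plane points are misaligned and blur out.
            dy, dx = int(round(delta * v)), int(round(delta * u))
            acc += np.roll(np.roll(img, dy, axis=0), dx, axis=1)
        return acc / len(images)

Sweeping delta moves the focal plane through the frontoparallel family; since only the shifts change from plane to plane, this is the operation that maps naturally onto hardware.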

Second, we explore methods to achieve sharp focus at every pixel in the synthetic aperture image by reconstructing the 3D surfaces of the occluded objects we wish to see. We compare classical shape from stereo with shape from synthetic aperture focus, and describe variants of stereo that improve image contrast by discarding light rays that are incident on the occluder.


Relevant Publications


Reconstructing Occluded Surfaces using Synthetic Apertures: Stereo, Focus and Robust Measures
Vaibhav Vaish, Richard Szeliski, Larry Zitnick, Sing Bing Kang, Marc Levoy.
Proc. CVPR 2006.
Most algorithms for 3D reconstruction from images use cost functions based on SSD, which assume that the surfaces being reconstructed are visible to all cameras. This makes it difficult to reconstruct objects that are partially occluded. Recently, researchers working with large camera arrays have shown that it is possible to "see through" occlusions using a technique called synthetic aperture focusing. This suggests that we can design alternative cost functions that are robust to occlusions using synthetic apertures. Our paper explores this design space. We compare classical shape from stereo with shape from synthetic aperture focus. We also describe two variants of multi-view stereo, based on color medians and entropy, that increase robustness to occlusions. We present an experimental comparison of these cost functions on complex light fields, measuring their accuracy against the amount of occlusion.
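
To make the design space concrete, here is a hedged sketch of the three kinds of cost mentioned above, each evaluated on the set of colors that a hypothesized 3D point projects to in the cameras. The function names and the exact formulas (variance for SSD, absolute deviation about the median, entropy of a color histogram) are illustrative stand-ins under these assumptions, not the paper's definitions.

    import numpy as np

    def ssd_cost(samples):
        # samples: NxC array of the colors a hypothesized 3D point projects
        # to in each of the N cameras. Variance about the mean acts as a
        # normalized SSD; it assumes the point is visible to all cameras.
        return np.mean((samples - samples.mean(axis=0)) ** 2)

    def median_cost(samples):
        # Deviation about the per-channel median; the median is less affected
        # by the minority of cameras whose rays hit an occluder.
        med = np.median(samples, axis=0)
        return np.mean(np.abs(samples - med))

    def entropy_cost(samples, bins=16):
        # Entropy of the color distribution (colors assumed normalized to
        # [0, 1]): low when most cameras agree on a color, higher when
        # occluder colors are mixed in.
        hist, _ = np.histogramdd(samples, bins=bins,
                                 range=[(0.0, 1.0)] * samples.shape[1])
        p = hist.ravel() / hist.sum()
        p = p[p > 0]
        return -np.sum(p * np.log2(p))

In each case a lower cost indicates better agreement among the cameras, so the candidate depth with the minimum cost is selected at each pixel.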

Synthetic Aperture Focusing using a Shear-Warp Factorization of the Viewing Transform
Vaibhav Vaish, Gaurav Garg, Eino-Ville Talvala, Emilio Antunez, Bennett Wilburn, Mark Horowitz, Marc Levoy.
Proc. Workshop on Advanced 3D Imaging for Safety and Security
(in conjunction with CVPR 2005)
Oral presentation
Synthetic aperture focusing consists of warping and adding together the images in a 4D light field so that objects lying on a specified surface are aligned and thus in focus, while objects lying off this surface are misaligned and hence blurred. This provides the ability to see through partial occluders such as foliage and crowds, making it a potentially powerful tool for surveillance. If the cameras lie on a plane, it has previously been shown that, after an initial homography, one can move the focus through a family of planes parallel to the camera plane by merely shifting and adding the images. In this paper, we analyze the warps required for tilted focal planes and arbitrary camera configurations. We characterize the warps using a new rank-1 constraint that lets us focus on any plane without having to perform a metric calibration of the cameras. We also show that there are camera configurations and families of tilted focal planes for which the warps can be factorized into an initial homography followed by shifts. This homography factorization permits these tilted focal planes to be synthesized as efficiently as frontoparallel planes. Because shifting and adding images is simple to implement in hardware, this also facilitates a real-time implementation. We demonstrate this using an array of 30 video-resolution cameras; initial homographies and shifts are performed on per-camera FPGAs, and additions and a final warp are performed on 3 PCs.
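
The factorization itself is easy to write down. The sketch below is a simplified illustration with hypothetical names (factored_warp, H_ref, cam_uv, delta), assuming the per-camera reference-plane homographies are already known from calibration: the warp for one camera and one focal plane in the family is composed as a fixed homography followed by a plane-dependent translation.

    import numpy as np

    def factored_warp(H_ref, cam_uv, delta):
        """Warp for one camera and one focal plane in the family.

        H_ref  : 3x3 homography aligning this camera to the reference plane
                 (computed once per camera during calibration)
        cam_uv : (u, v) position of this camera in the camera plane
        delta  : parameter selecting a plane from the family; it enters the
                 warp only through a translation.
        """
        u, v = cam_uv
        S = np.array([[1.0, 0.0, delta * u],
                      [0.0, 1.0, delta * v],
                      [0.0, 0.0, 1.0]])
        return S @ H_ref  # fixed homography, then a cheap per-plane shift

Because H_ref is fixed, it can be applied once per camera, and sweeping the focal plane reduces to updating the translation; this matches the pipeline described above, in which homographies and shifts run on the per-camera FPGAs and the additions are performed downstream on PCs.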

Using Plane + Parallax to Calibrate Dense Camera Arrays
Vaibhav Vaish, Bennett Wilburn, Neel Joshi, Marc Levoy.
Proc. CVPR 2004.
Oral presentation
A light field consists of images of a scene taken from different viewpoints. Light fields are used in computer graphics for image-based rendering and synthetic aperture photography, and in vision for recovering shape. In this paper, we describe a simple procedure to calibrate camera arrays used to capture light fields using a plane + parallax framework. Specifically, for the case when the cameras lie on a plane, we show (i) how to estimate camera positions up to an affine ambiguity, and (ii) how to reproject light field images onto a family of planes using only knowledge of planar parallax for one point in the scene. While planar parallax does not completely describe the geometry of the light field, it is adequate for the first two applications listed above, which, it turns out, do not require a metric calibration of the light field. Experiments on acquired light fields indicate that our method yields better results than full metric calibration.
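
The core plane + parallax relation behind the calibration can be sketched as follows. This is a simplified illustration with a hypothetical function name: it assumes every image has already been aligned to the reference plane, that one off-plane point has been tracked in every view, and it collapses the affine ambiguity to an unknown common scale.

    import numpy as np

    def relative_camera_positions(parallax):
        """Relative camera positions from the parallax of one off-plane point.

        parallax : Nx2 array; row i is the residual displacement of the
                   tracked point in camera i after alignment to the
                   reference plane.
        Returns an Nx2 array of camera positions relative to camera 0, known
        only up to an overall scale that depends on the point's depth (part
        of the affine ambiguity mentioned above).
        """
        # After plane alignment, an off-plane point's parallax in each view
        # is proportional to that camera's displacement within the camera
        # plane, so the parallax vectors themselves serve as camera
        # coordinates in an affine frame.
        return parallax - parallax[0]

With the camera positions known in such an affine frame, shift-and-add refocusing can be driven directly from image measurements, without a metric calibration.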

© Vaibhav Vaish
Last update: April 2, 2007 12:04:17 AM