In this talk, I will present a framework for analyzing the geometry of multiple 3D scene points from multiple uncalibrated images, based on the projection of those points onto a (real or virtual) physical reference planar surface in the scene. This approach decomposes the image formation process into two stages: (i) the 3D-to-2D projection of scene points from the camera centers onto the reference plane, and (ii) the 2D-to-2D re-projection of the reference plane image onto the camera image plane. The two-stage decomposition separates the inherent 3D-to-2D projection relationships from the effects of the camera internal parameters and the camera orientation. All standard inherent relationships involving multiple views can be derived in terms of the parallax displacement of the 3D scene points relative to the reference plane. Bi-focal and tri-focal constraints involving multiple points and views are given a concrete physical interpretation in terms of geometric relationships on the Euclidean reference plane. In addition to the standard relationships among multiple views of a single point, we also derive a set of dual relationships involving multiple points viewed from one or more cameras. These dual relationships are similar to those that have been derived previously, but are completely symmetric in their mathematical form and have physical geometric interpretations.
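To make the two-stage decomposition concrete, here is a minimal numerical sketch (not part of the talk itself), assuming a simple pinhole camera and taking the reference plane to be z = 0; all intrinsics, pose values, and the scene point are hypothetical:

```python
import numpy as np

K = np.array([[800.0, 0, 320], [0, 800, 240], [0, 0, 1]])  # hypothetical intrinsics
theta = 0.1
R = np.array([[np.cos(theta), 0, np.sin(theta)],
              [0, 1, 0],
              [-np.sin(theta), 0, np.cos(theta)]])          # camera rotation
t = np.array([0.2, -0.1, 5.0])                              # camera translation
C = -R.T @ t                                                # camera center in world coords

X = np.array([0.5, 0.3, 1.2])                               # a 3D scene point off the plane

# Stage (i): project X from the camera center onto the reference plane z = 0
# by intersecting the ray C -> X with the plane.
s = -C[2] / (X[2] - C[2])
Q = C + s * (X - C)          # lies on the plane: Q[2] == 0

# Stage (ii): the 2D-to-2D map from plane coordinates (u, v) to the image is a
# homography; for the plane z = 0 it is H = K [r1 r2 t].
H = K @ np.column_stack([R[:, 0], R[:, 1], t])
x_two_stage = H @ np.array([Q[0], Q[1], 1.0])
x_two_stage /= x_two_stage[2]

# Direct one-step projection of X for comparison.
x_direct = K @ (R @ X + t)
x_direct /= x_direct[2]

print(np.allclose(x_two_stage[:2], x_direct[:2]))  # True
```

Because the camera center, the scene point, and its stage-(i) projection onto the plane are collinear, applying the stage-(ii) homography to the plane point reproduces the direct projection exactly.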
In addition to presenting the basic geometric relationships involving the planar parallax displacements, I will also describe ways of using them for 3D scene reconstruction and new-view synthesis from multiple uncalibrated real images. Without any knowledge of the Euclidean position and orientation of the reference plane, a projective reconstruction is obtained. Limited additional scene-domain information can then be used to calibrate the scene in a stratified manner. I will describe an approach for obtaining ordinal, affine, and Euclidean reconstructions by gradually increasing the amount of calibration information provided.
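As a rough illustration of why the parallax displacements support reconstruction (the cameras, reference plane, and points below are invented for this sketch, not taken from the talk), after warping one view by the homography induced by the reference plane, the residual displacement of each off-plane point lies along a line through the epipole:

```python
import numpy as np

# Two hypothetical cameras: P1 = K1 [I | 0], P2 = K2 [R | t].
K1 = np.array([[700.0, 0, 320], [0, 700, 240], [0, 0, 1]])
K2 = np.array([[650.0, 0, 300], [0, 650, 220], [0, 0, 1]])
th = 0.05
R = np.array([[np.cos(th), 0, np.sin(th)],
              [0, 1, 0],
              [-np.sin(th), 0, np.cos(th)]])
t = np.array([0.3, 0.05, 0.1])

# Reference plane z = 1 in the first camera's frame (n = (0,0,1), d = 1);
# the homography it induces between the views is H = K2 (R + t n^T / d) K1^{-1}.
n = np.array([0.0, 0.0, 1.0])
H = K2 @ (R + np.outer(t, n)) @ np.linalg.inv(K1)

e2 = K2 @ t  # epipole in the second view: image of the first camera's center

ok = True
for X in [np.array([0.4, -0.2, 2.0]),
          np.array([-0.5, 0.3, 0.6]),
          np.array([0.1, 0.8, 2.5])]:      # scene points off the reference plane
    p1 = K1 @ X                # homogeneous image point in view 1
    p2 = K2 @ (R @ X + t)      # homogeneous image point in view 2
    q = H @ p1                 # view-1 point warped by the plane homography
    line = np.cross(p2, q)     # line joining the true point to its warped position
    ok = ok and abs(line @ e2) < 1e-9 * np.linalg.norm(line) * np.linalg.norm(e2)
print(ok)  # True
```

The magnitude of each residual parallax vector encodes the point's relative structure with respect to the plane, which is what the projective reconstruction recovers before any calibration information is added.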
Work done jointly with:
Michal Irani, The Weizmann Institute, Rehovot, Israel
Daphna Weinshall, Hebrew University, Jerusalem, Israel
P. Anandan has been working in the field of computer vision for about 15 years, focusing primarily on motion, stereo, 3D scene recovery, and video analysis. He obtained his PhD in Computer Science from UMass in 1987. From 1987 to 1991 he was an Assistant Professor at Yale, and from 1991 to 1997 he was at the Sarnoff Corporation, where he worked on the team that developed Sarnoff's family of techniques for image alignment and model-based motion analysis. From 1995 to 1997 he led the Video Information Processing Research Group at Sarnoff. Since 1997 he has been a Senior Researcher at Microsoft, where he currently co-manages the Vision Technology Group. His current interests are focused on developing techniques for modeling 3D scenes and video action using image-based representations.
He can be contacted at email@example.com.