The panorama above (at right) shows a portion of the north side of Castro Street in Mountain View, CA. It was constructed by aligning and pasting together narrow vertical strips extracted from consecutive frames of a video sequence, which was captured by a sideways-looking high-speed video camera mounted in the back of a slowly moving car (shown at left, manned by PhD student Gaurav Garg). To reduce distortion in the trees and the alleyway, a mosaic of cross-slit projections and ordinary perspective views is employed. The lines of sight for this mosaic are shown in plan view beneath the panorama.

The Stanford CityBlock Project:
multi-perspective panoramas of city blocks

Visualization of cities and urban landscapes has been a recurrent theme in Western art. The key problem in making these visualizations successful is summarizing in a single image the extended linear architectural fabric seen at eye level along a possibly curving or turning street, and doing so without introducing excessive distortion. At the Stanford Computer Graphics Laboratory, we have been building technology for digitizing commercial city blocks from sideways-looking video taken from a vehicle driving down the street. The input to our system is a set of video frames with known camera pose. The output is a single multi-perspective image that summarizes one or more city blocks. In our work, we have explored the use of pushbroom panoramas, cross-slit panoramas, and mosaics of these non-perspective projections interleaved with ordinary perspective views. Possible applications include in-car navigation, online route visualization, and web-based tourism. Broader applications of multi-perspective panoramas from video captured by moving vehicles include remote sensing and mapping, underwater photography, and archaeological documentation. This project was funded by Google.
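To make the pushbroom idea concrete, here is a minimal sketch (not the project's actual pipeline) of how such a panorama can be assembled: take the same narrow vertical strip from every frame of a sideways-looking video and concatenate the strips horizontally. The function name `pushbroom_panorama`, the synthetic ramp-image "scene", and the per-frame shift of 4 pixels are all illustrative assumptions; a real system would choose the strip position and width from the known camera pose and vehicle speed.

```python
import numpy as np

def pushbroom_panorama(frames, strip_col=None, strip_width=1):
    """Concatenate a narrow vertical strip, taken from the same column
    of each video frame, into a single pushbroom panorama.

    frames      : list of 2-D (grayscale) arrays of equal shape
    strip_col   : column where the strip starts (default: frame center)
    strip_width : strip width in pixels (wider strips assume the car
                  moved farther between frames)
    """
    if strip_col is None:
        strip_col = frames[0].shape[1] // 2  # center column of each frame
    strips = [f[:, strip_col:strip_col + strip_width] for f in frames]
    return np.hstack(strips)

# Synthetic "video": a wide scene viewed through a 64-pixel-wide window
# that slides 4 pixels per frame, simulating a sideways-looking camera
# on a slowly moving car.
scene = np.tile(np.arange(200, dtype=np.uint8), (64, 1))
frames = [scene[:, t * 4 : t * 4 + 64] for t in range(10)]

pano = pushbroom_panorama(frames, strip_width=4)
print(pano.shape)  # (64, 40): 10 frames x 4-pixel strips
```

Because every strip shares the same viewing direction, the result is an orthographic-like projection along the street: there is no single center of projection, which is what makes the panorama "multi-perspective". Cross-slit projections generalize this by letting the rays pass through two slits rather than one, which reduces the depth distortion visible in trees and alleyways.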

Historical note: This project began around March of 2001, when Larry Page, co-founder of Google, gave us a videotape he had captured while driving around the Bay Area and challenged us to invent a way to summarize the video with a few images. We jointly dubbed the idea "crawling the physical web". The method we came up with to solve the problem, called "multi-perspective panoramas", is exemplified by the image at the top of this web page. We proposed a Google-sponsored Stanford research project to develop the idea, which was funded in November 2002. For the results of this project, see our published papers below. The project ended in June 2006, and its technology was folded into Google's StreetView. Two of the students (Augusto and Vaibhav) moved to Google at around that same time. Our multi-perspective panoramas were not ultimately used in StreetView, which instead employs a sequence of 360-degree panoramas captured at discrete locations spaced along the street, but the idea has resurfaced in Microsoft's StreetSlide project.


Recent papers in this area:

Automatic Multiperspective Images
Augusto Román, Hendrik P.A. Lensch
Proc. 2006 Eurographics Symposium on Rendering
Interactive Design of Multi-Perspective Images for Visualizing Urban Landscapes
Augusto Román, Gaurav Garg, Marc Levoy
Proc. Visualization 2004

If images on this page look dark to you, see our note about gamma correction.
A list of technical papers, with abstracts and pointers to additional information, is also available. Or you can return to the research projects page or to our home page.
Copyright © 2004 Marc Levoy
Last update: November 21, 2013 08:38:19 PM