Bennett Wilburn

 


Researcher
Microsoft Research Asia
No 49 ZhiChun Road, Haidian District
Beijing 100080, P.R.China
bennett AT stanfordalumni DOT org
http://graphics.stanford.edu/~wilburn

Research Interests:
I'm a system designer who currently thinks computer graphics and vision are pretty cool. In the short term, I'd like to keep working in this space. I've gotten a lot of mileage from building a custom system for these applications (see below), and in the future, I'd like to hunt down other application domains that could benefit from a combined hardware/software approach.

Dissertation:
High Performance Imaging Using Arrays of Inexpensive Cameras.

Dissertation Research:

The Stanford Multiple Camera Array. My research focus at Stanford was high performance imaging using inexpensive image sensors. To explore the possibilities of inexpensive sensing, I designed the Stanford Multiple Camera Array, shown at left. The system uses MPEG video compression and IEEE 1394 communication to capture minutes of video from over 100 CMOS image sensors using just four PCs. The array has been operational since February 2003, and since then we've developed our geometric and radiometric calibration pipelines and explored several applications.

Publications:

Surface Enhancement Using Real-Time Photometric Stereo and Reflectance Transformation. Proc. Eurographics Symposium on Rendering 2006. Photometric stereo recovers per-pixel estimates of surface orientation from images of a surface under varying lighting conditions. Transforming reflectance based on recovered normal directions is useful for enhancing the appearance of subtle surface detail. We present the first system that achieves real-time photometric stereo and reflectance transformation. We also introduce new GPU-accelerated normal transformations that amplify shape detail. Our system allows users in fields such as forensics, archaeology and dermatology to investigate objects and surfaces by simply holding them in front of the camera. See this video for a summary of the work and a demonstration of the system.
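The core of photometric stereo is compact: under a Lambertian model, a pixel's intensity is the dot product of the light direction with the albedo-scaled surface normal, so three or more images under known lights give a per-pixel least-squares problem. Here is a minimal NumPy sketch of that idea (not the paper's real-time GPU implementation; the function and variable names are my own):

```python
import numpy as np

def photometric_stereo(images, lights):
    """Recover per-pixel surface normals and albedo from images of a
    Lambertian surface lit by known directional light sources.

    images: (k, h, w) grayscale intensities, one image per light
    lights: (k, 3) unit light directions
    Returns (normals, albedo): (h, w, 3) unit normals and (h, w) albedo.
    """
    k, h, w = images.shape
    intensity = images.reshape(k, -1)  # k x (h*w) measurement matrix
    # Lambertian model: intensity = lights @ (albedo * normal).
    # Solve for g = albedo * normal at every pixel by least squares.
    g, *_ = np.linalg.lstsq(lights, intensity, rcond=None)
    albedo = np.linalg.norm(g, axis=0)
    normals = np.where(albedo > 0, g / np.maximum(albedo, 1e-12), 0.0)
    return normals.T.reshape(h, w, 3), albedo.reshape(h, w)
```

With the normals in hand, reflectance transformations such as specular enhancement just re-shade each pixel using the recovered orientation.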
Synthetic Aperture Focusing Using a Shear-Warp Factorization of the Viewing Transform. Vaibhav Vaish, Gaurav Garg, Eino-Ville Talvala, Emilio Antunez, Bennett Wilburn, Mark Horowitz, and Marc Levoy. Proc. Workshop on Advanced 3D Imaging for Safety and Security (in conjunction with CVPR 2005). Oral presentation. This paper analyzes the warps required for synthetic aperture photography using tilted focal planes and arbitrary camera configurations. We characterize the warps using a new rank-1 constraint that lets us focus on any plane, without having to perform metric calibration of the cameras. We show the advantages of this method with a real-time implementation using 30 cameras from the Stanford Multiple Camera Array. This video shows our results.
High Performance Imaging Using Large Camera Arrays. Bennett Wilburn, Neel Joshi, Vaibhav Vaish, Eino-Ville Talvala, Emilio Antunez, Adam Barth, Andrew Adams, Mark Horowitz and Marc Levoy. (Presented at ACM SIGGRAPH 2005). This paper describes the final 100 camera system. We also present applications enabled by the array's flexible mounting system, precise timing control, and processing power at each camera. These include high-resolution, high-dynamic-range video capture; spatiotemporal view interpolation (think Matrix Bullet Time effects, but with virtual camera trajectories chosen after filming); live, real-time synthetic aperture videography; non-linear synthetic aperture photography; and hybrid-aperture photography (read the paper for details!). Here's a video (70MB Quicktime) showing the system and most of these applications.
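Of these applications, high-dynamic-range capture is the easiest to sketch: neighboring cameras expose for different durations, and their aligned, linear-response images are merged into one radiance map. A toy version of the merge step (the hat weighting and all names here are illustrative assumptions, not the paper's exact method):

```python
import numpy as np

def merge_hdr(images, exposures):
    """Merge aligned, linear-response images taken at different exposure
    times into a single radiance map via weighted averaging.

    images:    (k, h, w) float arrays with pixel values in [0, 1]
    exposures: (k,) exposure times in seconds
    """
    acc = np.zeros(images.shape[1:])
    wsum = np.zeros(images.shape[1:])
    for img, t in zip(images, exposures):
        # Hat weight: trust mid-range pixels, distrust near-dark/saturated.
        w = 1.0 - np.abs(2.0 * img - 1.0)
        acc += w * img / t  # divide by exposure to get radiance units
        wsum += w
    return acc / np.maximum(wsum, 1e-8)
```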
High Speed Video Using A Dense Camera Array. Bennett Wilburn, Neel Joshi, Vaibhav Vaish, Marc Levoy and Mark Horowitz. Presented at CVPR 2004. We create a 1560fps video camera using 52 cameras from our array. Because we compress data in parallel at each camera, our system can stream indefinitely at this frame rate, eliminating the need for triggers. Here's the video of the popping balloon shown at left. The web page for the paper has several more videos.
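The frame-rate arithmetic behind this is simple: n cameras running at f frames per second, with trigger times staggered evenly across one frame period, interleave into a single stream at n*f fps. A small sketch (function and variable names are mine):

```python
def staggered_triggers(n_cameras, fps):
    """Evenly stagger n_cameras each running at fps so their interleaved
    frames form one video stream at n_cameras * fps.

    Returns (combined_fps, offsets), where offsets[i] is camera i's
    trigger delay in seconds relative to camera 0.
    """
    frame_time = 1.0 / fps  # one camera's frame period
    combined_fps = n_cameras * fps
    # Spread the n trigger times uniformly over a single frame period.
    offsets = [i * frame_time / n_cameras for i in range(n_cameras)]
    return combined_fps, offsets
```

With 52 cameras at 30 fps, consecutive triggers are 1/1560 s apart, which is where the 1560 fps figure comes from.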
Using Plane + Parallax for Calibrating Dense Camera Arrays. Vaibhav Vaish, Bennett Wilburn and Marc Levoy. Presented at CVPR 2004. We present a method for plane + parallax calibration of planar camera arrays. This type of calibration is useful for light field rendering and synthetic aperture photography and is simpler and more robust than full geometric calibration. Synthetic aperture photography uses many cameras to simulate a single large aperture camera with a very shallow depth of field. By digitally focusing beyond partially occluding foreground objects like foliage, we can blur them away to reveal objects in the background. Here's an example Quicktime video.
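Once the array is calibrated with plane + parallax, refocusing on a fronto-parallel plane needs no metric information: each image is shifted in proportion to its camera's position in the array plane, and the shifted images are averaged. Objects on the chosen plane align and stay sharp; everything else blurs away. A minimal shift-and-add sketch with integer shifts (a real system warps with subpixel accuracy; the names here are my own):

```python
import numpy as np

def synthetic_aperture(images, camera_xy, disparity):
    """Refocus a planar camera array by shift-and-add.

    images:    (k, h, w) grayscale images, rectified to a reference plane
    camera_xy: (k, 2) camera positions in the array plane
    disparity: pixels of shift per unit camera displacement; choosing
               this value selects the in-focus depth plane
    """
    acc = np.zeros(images.shape[1:], dtype=np.float64)
    for img, (cx, cy) in zip(images, camera_xy):
        dx = int(round(cx * disparity))
        dy = int(round(cy * disparity))
        # Shift each view so the chosen plane aligns across cameras.
        acc += np.roll(img, shift=(dy, dx), axis=(0, 1))
    return acc / len(images)
```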
The light field video camera. Bennett Wilburn, Michal Smulski, Hsiao-Heng Kelin Lee, and Mark Horowitz. Proc. Media Processors 2002, SPIE Electronic Imaging 2002. The light field video camera was the prototype for the 100-camera array. This paper describes our original architecture and 6-camera prototype.
Hardware-accelerated dynamic light field rendering. Bastien Goldlucke, Marcus Magnor, Bennett Wilburn. Vision, Modelling and Visualization 2002. Marcus and I started a collaboration while he was a post-doc at Stanford to render novel views from positions between the 6 cameras in our light field camera prototype. His student Bastien continued this work, adding new disparity estimation methods and hardware acceleration for the image warps.



Older Publications:

  • Applications of on-chip samplers for test and measurement of integrated circuits. R. Ho, B. Amrutur, K. Mai, B. Wilburn, T. Mori, M. Horowitz. IEEE Symposium on VLSI Circuits, June 1998, pages 138-139.
  • Low-power SRAM design using half-swing pulse-mode techniques. K. Mai, T. Mori, B. Amrutur, R. Ho, B. Wilburn, M. Horowitz. IEEE Journal of Solid-State Circuits, November 1998, pages 1659-1671.


Other Projects:

Spatiotemporal Sampling and Interpolation for Dense Camera Arrays. This is a work in progress. We have already shown that we can simulate a high-speed camera by tightly packing many cameras with staggered trigger times. If we spread the cameras out, we can capture views from multiple positions and multiple times for spatiotemporal view interpolation--synthesizing new views from positions and times not in our captured set of images. Our improved temporal resolution lets us use simpler, image-based methods to generate new views. We can also eliminate the alignment errors in the high-speed video work above. Here's an example video.
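One way to see why denser sampling enables simpler, image-based methods: view synthesis can just select the captured frames nearest to the desired (position, time) sample and blend them, rather than reconstructing scene geometry. A toy neighbor-selection sketch (the joint space-time metric, its axis scaling, and all names are my own assumptions):

```python
import numpy as np

def nearest_samples(camera_x, trigger_t, query_x, query_t, k=2):
    """Return indices of the k captured views closest to a desired
    (position, time) sample, for image-based view interpolation.

    camera_x:  (n,) camera positions along the array
    trigger_t: (n,) trigger times of each camera's frame
    """
    # Distance in a joint space-time metric; the relative scale of the
    # spatial and temporal axes is a tuning choice.
    d = np.hypot(np.asarray(camera_x, dtype=float) - query_x,
                 np.asarray(trigger_t, dtype=float) - query_t)
    return np.argsort(d)[:k]
```

The denser the sampling in space and time, the closer these neighbors are to the requested view, and the less blending artifacts matter.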

More Info About Me:

  • My Wildflower triathlon experience with Team in Training.