CS 478 - Computational photography

Winter, 2012

Banner images:

- A cutaway view showing some of the optical and electronic components of the Canon 5D, a modern single-lens reflex (SLR) camera. In the first part of this course, we'll take a trip down the capture and image-processing pipelines of a typical digital camera.
- This is the Stanford Frankencamera, an experimental open-source camera we are building in our laboratory. It's bigger, heavier, and uglier than the Canon camera, but it runs Linux, and its metering, focusing, demosaicing, denoising, white balancing, and other post-processing algorithms are programmable. We'll eventually be distributing this camera to researchers worldwide.
- This is the Nokia N900, the first in a new generation of Linux-based cell phones. It has a 5-megapixel camera and a focusable Carl Zeiss lens. More importantly, it runs the same software as our Frankencamera, so it's programmable right down to its autofocus algorithm.
- This is a prototype Nvidia tablet featuring the Tegra 3 processor. It has stereo back-facing cameras, the Android OS, and a ported implementation of our FCam API. Each student will receive a tablet for the duration of the course, to try their hand at mobile computational photography.
- In the second part of the course, we'll consider problems in photography and how they can be solved computationally. One such problem is misfocus. By inserting a microlens array into a camera, one can record light fields, which permits a snapshot to be refocused after capture (see the sketch below).
- Most digital cameras capture movies as well as stills, but handshake is a big problem, as exemplified by the home video above. Fortunately, stabilization algorithms are getting very good; look at this experimental result. We'll survey the state of the art in this evolving area.
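To make the refocusing idea concrete, here is a minimal sketch of synthetic refocusing by shift-and-add over the sub-aperture images of a light field. It is an illustration, not code from the course: the ImageF struct, the views grid layout, and the alpha parameter are assumptions. Each sub-aperture view is translated in proportion to its offset from the aperture center, then the views are averaged; varying alpha sweeps the virtual focal plane.

    // Hypothetical sketch of shift-and-add light field refocusing.
    #include <algorithm>
    #include <cmath>
    #include <vector>

    struct ImageF {
        int w, h;
        std::vector<float> data;  // row-major, single channel
        float at(int x, int y) const {
            // Clamp to the border so shifted samples stay in range.
            x = std::min(std::max(x, 0), w - 1);
            y = std::min(std::max(y, 0), h - 1);
            return data[y * w + x];
        }
    };

    // views[v][u] is the sub-aperture image at lenslet offset (u, v).
    // alpha = 0 reproduces the captured focus; other values refocus
    // nearer or farther.
    ImageF refocus(const std::vector<std::vector<ImageF>>& views, float alpha) {
        int nv = (int)views.size(), nu = (int)views[0].size();
        int w = views[0][0].w, h = views[0][0].h;
        ImageF out{w, h, std::vector<float>(w * h, 0.0f)};
        float cu = (nu - 1) / 2.0f, cv = (nv - 1) / 2.0f;
        for (int v = 0; v < nv; ++v) {
            for (int u = 0; u < nu; ++u) {
                // Shift each view in proportion to its aperture offset.
                float dx = alpha * (u - cu), dy = alpha * (v - cv);
                for (int y = 0; y < h; ++y)
                    for (int x = 0; x < w; ++x)
                        out.data[y * w + x] += views[v][u].at(
                            (int)std::lround(x + dx), (int)std::lround(y + dy));
            }
        }
        for (float& p : out.data) p /= nu * nv;  // average the views
        return out;
    }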

Quarter: Winter, 2012
Units: 3-4 (same workload either way; CR/NC or letter grade)
Time: Mon/Wed 2:30 - 3:45
Place: 392 Gates Hall (graphics lab conference room)
Course URL: cs478.stanford.edu
Discussion: CS478 @Piazza
Instructors: Jongmin Baek, Dave Jacobs, Kari Pulli (Guest Lecturer)
Staff e-mail: cs478-win1112-staff@lists.stanford.edu
Office hours: Wed 3:45 - 5:00 and Thurs 2:30 - 3:45, Gates 360
Prerequisite: An introductory course in graphics or vision, or CS 178; good programming skills
Televised? No


Course abstract

Computational photography refers broadly to sensing strategies and algorithmic techniques that enhance or extend the capabilities of digital photography. The output of these techniques is an ordinary photograph, but one that could not have been taken by a traditional camera. Representative techniques include high dynamic range imaging, flash/no-flash imaging, coded-aperture and coded-exposure imaging, photography under structured illumination, multi-perspective and panoramic stitching, digital photomontage, all-focus imaging, and light field imaging.
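As a taste of how one of these techniques works, below is a minimal sketch of the core of high-dynamic-range merging, assuming a linear sensor response and pixel-aligned (tripod or registered) inputs: each exposure's pixels are converted to radiance estimates and combined with weights that favor well-exposed values. The Exposure struct, mergeHDR, and the hat-shaped weight are illustrative choices, not code from the course.

    // Hypothetical sketch of HDR radiance-map merging.
    #include <cmath>
    #include <vector>

    // One exposure: linear pixel values in [0,1] plus its exposure time.
    struct Exposure {
        std::vector<float> pixels;  // 0 = black, 1 = saturated
        float seconds;              // exposure time
    };

    // Hat-shaped weight: trust mid-tones, distrust pixels near the
    // noise floor or near saturation.
    static float weight(float v) {
        return 1.0f - std::fabs(2.0f * v - 1.0f);
    }

    std::vector<float> mergeHDR(const std::vector<Exposure>& exposures) {
        size_t n = exposures[0].pixels.size();
        std::vector<float> radiance(n, 0.0f), wsum(n, 0.0f);
        for (const Exposure& e : exposures) {
            for (size_t i = 0; i < n; ++i) {
                float v = e.pixels[i];
                float w = weight(v);
                radiance[i] += w * (v / e.seconds);  // radiance estimate
                wsum[i] += w;
            }
        }
        for (size_t i = 0; i < n; ++i)
            if (wsum[i] > 0.0f) radiance[i] /= wsum[i];
        return radiance;  // scene radiance, up to a global scale
    }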

Stanford has offered a course on computational photography in even-numbered years since 2004. In the past we organized the course around whatever techniques were "hot" that year in the research literature. Beginning in 2010, we began asking a different question: what can computational photography do for everyday photographers? To answer it, we'll spend a few weeks reviewing the pre-capture technologies and post-processing algorithms used in a modern digital camera. We'll then consider some of the problems that arise in real photography, such as extending dynamic range and depth of field, removing camera shake and motion blur, and improving the illumination. For each problem we'll analyze its characteristics, then consider the hardware and software solutions that have been proposed for it, sometimes by reading research papers together. See the course schedule for details.


Course requirements

The course is targeted at both CS and EE students, reflecting our conviction that successful researchers in this area must understand both the algorithms and the underlying technologies. Most classes will consist of a lecture by one of the instructors. These lectures may be accompanied by readings from the research literature, which will be handed out in class or posted on the course web site. Students are expected to:

  1. read the assigned papers, attend the lectures, and participate in class discussions,
  2. complete the "Hello camera" assignment (15% of grade),
  3. complete the "Hello imagestack" assignment (15% of grade),
  4. do a major project of their own design (70% of grade).
The course schedule shows the days set aside for student presentations. The schedule also shows the dates for project milestones and for the "Hello camera" and "Hello imagestack" assignments. Since the project will constitute the bulk of your work in the course, we've spaced these dates out to cover most of the quarter.

Each student will have access to a prototype Nvidia Tegra tablet that implements the FCam API, as part of a research collaboration between Nvidia and Stanford. We would like our students to implement their course project on the Tegra tablet. (Permission for the use of other platforms, such as the Nokia N900 smartphone or a desktop machine, will be granted on a case-by-case basis.)
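For readers unfamiliar with FCam, the snippet below shows the shape of a basic capture request, adapted from the published FCam examples for the N900. The exact platform header, resolution, image format, and output path are assumptions that vary by device; the Tegra port follows the same pattern with its own platform-specific sensor class.

    // A minimal FCam capture request (N900 variant shown).
    #include <FCam/N900.h>

    int main() {
        FCam::N900::Sensor sensor;

        // Describe the photograph we want: exposure time in
        // microseconds, analog gain, and a destination image.
        FCam::Shot shot;
        shot.exposure = 50000;  // 1/20 s
        shot.gain = 1.0f;
        shot.image = FCam::Image(2592, 1968, FCam::UYVY);

        // Program the sensor, then block until the frame returns
        // along with metadata describing how it was actually captured.
        sensor.capture(shot);
        FCam::Frame frame = sensor.getFrame();

        FCam::saveJPEG(frame, "photo.jpg");
        return 0;
    }

Unlike a conventional camera API, the Shot you get back is tagged with the settings the hardware really used, which is what makes per-frame control (metering, autofocus, burst HDR) programmable.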


© 2011 Jongmin Baek, David Jacobs
Last update: March 12, 2012 10:23:41 PM