Broad Area Colloquium For AI-Geometry-Graphics-Robotics-Vision
Motion Capture from Movies
Jim Rehg
Cambridge Research Lab
Compaq Computer Corporation
Wednesday, February 23, 2000
Refreshments at 4:05 PM; talk begins at 4:15 PM
TCseq201, Lecture Hall B
http://robotics.stanford.edu/ba-colloquium/
Abstract
Video is the primary archival source for human movement, with examples
ranging from sports coverage of Olympic events to dance routines in
Hollywood movies. If the human figure could be tracked reliably in
unconstrained monocular video, much of this archive could be unlocked
for analysis. Significant technical challenges exist, however, due to
the complexity of human movement, the variability of human appearance,
and the loss of 3-D information.
My talk will describe some recent progress in modeling and estimating
figure motion from monocular video. Two important themes are the
separation of 2-D (registration) and 3-D (reconstruction) effects in
kinematic modeling, and the role of learning in dynamic modeling. In
particular, I will describe a framework for learning switching linear
dynamic system models from data and show its application to figure
motion. I will describe some applications to video editing and
computer animation currently underway at Compaq Research.
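A switching linear dynamic system couples a discrete Markov chain over motion modes with a separate linear dynamical model per mode. A minimal generative sketch in plain Python, with illustrative 2-D dynamics matrices and two hypothetical modes that are not parameters from the talk:

```python
import random

def mat_vec(M, v):
    # Multiply a matrix (list of rows) by a vector.
    return [sum(m * x for m, x in zip(row, v)) for row in M]

def simulate_slds(A, Pi, x0, steps, rng):
    """Generate a trajectory from a switching linear dynamic system.

    A     : list of dynamics matrices, one per discrete mode
    Pi    : Markov transition matrix over modes (rows sum to 1)
    x0    : initial continuous state
    steps : number of time steps to simulate
    """
    s, x = 0, list(x0)              # start in mode 0
    states, modes = [list(x)], [s]
    for _ in range(steps):
        x = mat_vec(A[s], x)        # linear dynamics of the active mode
        # Switch modes according to the Markov chain.
        s = rng.choices(range(len(A)), weights=Pi[s])[0]
        states.append(list(x))
        modes.append(s)
    return states, modes

# Two made-up modes, e.g. distinct phases of a gait cycle.
A = [[[0.99, 0.10], [0.00, 0.99]],
     [[1.00, 0.00], [0.00, 0.50]]]
Pi = [[0.9, 0.1],
      [0.2, 0.8]]
traj, modes = simulate_slds(A, Pi, [1.0, 0.0], 50, random.Random(0))
```

Learning such a model from figure-motion data means estimating the per-mode matrices and the transition probabilities jointly with the hidden mode sequence; the sketch above covers only the generative direction.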
About the Speaker
Jim Rehg received his Ph.D. from Carnegie Mellon University in 1995. He
subsequently joined the Cambridge Research Lab, where he leads the
vision-based human sensing project. His research interests include computer
vision, novel user interfaces, and parallel computing.
bac-coordinators@cs.stanford.edu