Broad Area Colloquium for Artificial Intelligence,
Geometry, Graphics, Robotics and Vision
Capturing Motion Models for Animation
Chris Bregler
Stanford University
Monday, March 11th, 2002, 4:15PM
Gates B01 http://robotics.stanford.edu/ba-colloquium/
Abstract
We will survey our current research on vision-based capture and animation
techniques applied to animals, humans, and cartoon characters. We will
present new capture techniques that track and infer kinematic-chain and
3D non-rigid blend-shape models directly from 2D video data, without
pre-tracked features or prior models. Furthermore, we demonstrate how to
use such motion capture data to estimate statistical models for synthesis,
and how to retarget motion to new characters. We show several examples:
capturing kangaroos, giraffes, human body deformations, and facial
expressions; animating hops and dances with natural fluctuations; and
retargeting expressive cartoon motion.
This talk reports on joint work with Kathy Pullen, Lorie Loeb, Lorenzo
Torresani, Danny Yang, Gene Alexander, Erika Chuang, Hrishi Deshpande,
Rahul Gupta, Aaron Hertzmann, and Henning Biermann.
About the Speaker
Chris Bregler is an Assistant Professor of Computer Science at Stanford
University. He received his Ph.D. in Computer Science from U.C. Berkeley
in 1998 and his Diplom in Computer Science from Karlsruhe University in
1993. He has also worked for several companies, including IBM, Hewlett
Packard, Interval, and Disney Feature Animation. He is a member of the
Stanford Computer Graphics Laboratory and the Movement Group. His primary
research interests are in the areas of vision, graphics, and learning. He
currently focuses on visual motion capture; human face, speech, and body
gesture analysis and animation; image/video-based modeling and rendering;
and artistic aspects of animation.
Contact: bac-coordinators@cs.stanford.edu