Broad Area Colloquium for Artificial Intelligence,
Geometry, Graphics, Robotics and Vision
Capturing Motion Models for Animation
Monday, March 11th, 2002, 4:15PM
We will survey our current research efforts on vision-based capture and
animation techniques applied to animals, humans, and cartoon characters.
We will present new capture techniques that can track and infer
kinematic-chain and 3D non-rigid blend-shape models directly from 2D
data without the use of pre-tracked features or prior models.
We will demonstrate how to use such motion capture data to estimate
models for synthesis and how to retarget motion to new characters. We will
show several examples of capturing kangaroos, giraffes, human body motion,
and facial expressions, animating hops and dances with natural fluctuations,
and retargeting expressive cartoon motion.
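The "3D non-rigid blend-shape models" mentioned above generally represent a deforming surface as a weighted combination of a few basis shapes. A minimal sketch of that idea, assuming a simple linear model (not the speaker's actual formulation; all names here are illustrative):

```python
import numpy as np

def blend_shapes(bases, weights):
    """Linear blend-shape model: return the weighted sum of basis shapes.

    bases:   (K, N, 3) array of K basis shapes over N vertices
    weights: (K,) blend coefficients
    returns: (N, 3) blended shape
    """
    # Contract the weight vector against the first (basis) axis.
    return np.tensordot(weights, bases, axes=1)

# Toy example: two basis shapes over two vertices.
bases = np.array([[[0., 0., 0.], [1., 0., 0.]],
                  [[0., 1., 0.], [0., 0., 1.]]])
shape = blend_shapes(bases, np.array([1.0, 0.5]))
# shape is 1.0 * bases[0] + 0.5 * bases[1]
```

In capture settings, the weights (and a rigid pose) are typically estimated per frame so that the blended shape best explains the observed 2D data.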
This talk reports on joint work with Kathy Pullen, Lorie Loeb, Lorenzo
Torresani, Danny Yang, Gene Alexander, Erika Chuang, Hrishi Deshpande,
Rahul Gupta, Aaron Hertzmann, Henning
About the Speaker
Chris Bregler is an Assistant Professor of Computer Science at Stanford
University. He received his Ph.D. in Computer Science from U.C. Berkeley in
1998 and his Diplom in Computer Science from Karlsruhe University.
He has also worked for several companies, including IBM, Hewlett Packard,
Interval, and Disney Feature Animation. He is a member of the Stanford
Computer Graphics Laboratory and the Movement Group. His primary research
interests are in the areas of Vision, Graphics, and Learning. He currently
works on topics in visual motion capture; human face, speech, and body gesture
animation; image/video-based modeling and rendering; and artistic
aspects of animation.