Seminar on Human, Avatar, and Robot Motion
The Seminar on Human, Avatar, and Robot Motion is a bi-weekly presentation and
discussion forum for a broad range of topics related to the motion of humans,
robots, and virtual characters, including control, biomechanics, animation,
planning, and tracking. The purpose of the seminar is to bring together
individuals with an interest in the motion of humans, avatars, and robots, in
order to stimulate discussion and create awareness of topics that may be broadly
applicable to related fields.
Spring 2011
Date | Location | Speaker | Title and Abstract
May 31, 2011 | Gates 104 | Christian Plagemann, Postdoc in Computer Science, Stanford University | Tracking Human Motion Using RGB-D Cameras
Capturing human motion from a single camera without requiring markers,
a suit or wearable sensors has been a widely studied problem in
academia and industry. Approaches purely based on visual input ("RGB",
e.g., a webcam) have failed to achieve the robustness and accuracy
necessary for practical applications. With the advent of affordable
depth-sensing cameras (the "D" in "RGB-D"), such as time-of-flight
cameras or structured light stereo systems, a powerful new source of
information about the analyzed scene becomes available. In this talk,
we will lay out the basic set of tools required to analyze RGB-D data
and discuss alternative ways of using them to solve the human motion
capture problem.
May 17, 2011 | Gates 104 | Okan Arikan, CEO, Animeeple Inc | Recycling, reusing, and repurposing digital content
We are starting to climb the steep end of the Uncanny Valley. Demand for more, higher-quality digital content (meshes, textures, and animations) continually increases. How will content production keep pace with demand?
Okan is an animation geek. He did his graduate work on content re-use mechanisms for motion capture data, and continued his work at a Silicon Valley startup called Animeeple.
We will talk about the tools and techniques Okan is working on to make the 3D animation world a nicer place, where animations are easier to create and reuse. He will show a non-linear animation interface for content authoring, as well as a multitouch tool for 3D character posing. More broadly, we will also discuss how the industry is evolving in relation to the demand for content.
May 3, 2011 | Gates 104 | Michiel van de Panne, Professor of Computer Science, University of British Columbia | Representations for skilled locomotion: towards the secret sauce?
Humans, animals, robots, and physics-based characters all need motor skills to
interact with the world that surrounds them. Yet general methods for developing
control strategies for skilled movement and manipulation remain
elusive. In this talk
we will take stock of recent advances in biped and quadruped locomotion with an
eye towards identifying features of motor control representations that
support the
scalable development of motor skills and that allow them to generalize well.
We will argue in favour of:
(1) incremental learning in several different guises;
(2) task-specific abstraction of state and dynamics; and
(3) hybrid force and position control.
We will illustrate these examples with simulated bipeds and quadrupeds
that are capable of a broad range of locomotion-related skills.
This is joint work with many of my recent students and postdocs.
Apr 19, 2011 | Gates 104 | Jennifer Hicks, OpenSim Project Manager, Stanford University | Improving the treatment of cerebral palsy through biomechanical modeling and statistical analysis
Many children with cerebral palsy, a common neurological disorder, walk with excessive knee flexion, an inefficient locomotion pattern known as crouch gait that progressively worsens over time without intervention. Patients often receive surgeries to lengthen tight or spastic muscles or to correct bone malalignments, but outcomes are variable and unpredictable. In my talk, I will describe the results of our work to (1) objectively identify biomechanical factors that cause a patient to walk with excess knee flexion and (2) use these factors to predict whether a patient's crouch gait will improve after treatment. I will describe the framework we developed, combining biomechanical modeling and statistical analysis, for understanding gait pathology and objectively planning treatment.
Apr 3, 2011 | Gates 104 | Demetri Terzopoulos, Professor of Computer Science, University of California, Los Angeles | Human Simulation: From Biomechanics to Intelligence
I will present our ongoing work on human simulation, specifically
(1) the comprehensive biomechanical simulation of the human body,
confronting the challenge of modeling and controlling more or less all
of the relevant articular bones and skeletal muscles, as well as
simulating the physics-based deformations of the soft tissues, and (2)
an artificial life framework for multi-human simulation that addresses
the problem of realistically emulating the rich complexity of human
activity in urban environments, yielding 3D virtual worlds populated
by lifelike autonomous pedestrians. Among their many uses, our
advanced human models are showing promise in applications to Computer
Vision, including the study of active sensorimotor learning and
neuromuscular control, as well as within a paradigm that we call
"Virtual Vision", in which virtual reality -- specifically 3D virtual
worlds populated by lifelike autonomous pedestrians -- subserves
computer vision research for the purposes of persistent surveillance
by smart camera sensor networks. I will also elucidate the profound
scientific and computational challenges that remain in spanning
physics and AI/ALife for the purposes of tractable, lifelike human
simulation.
Winter 2011
Date | Location | Speaker | Title and Abstract
March 8, 2011 | Gates 104 | Stefan Schaal, Professor of Computer Science, Neuroscience, and Biomedical Engineering, University of Southern California | Learning of Skilled Movement
At some point in the future, robots will be taught new skills much as we teach skills to children, starting from demonstration of a behavior and progressing to autonomous self-improvement through trial-and-error learning. Our research is concerned with how to bootstrap such motor skills. We start with a general representation of motor skills based on learnable nonlinear attractor landscapes. Skills can be initialized by imitation learning or general planning methods.
For autonomous skill improvement, we discovered a new trajectory-based reinforcement learning method called PI2 (Policy Improvement with Path Integrals). PI2 is based on stochastic optimal control and approximation methods developed in particle physics. The algorithmic form of PI2 is surprisingly simple, and it lends itself to both model-based and model-free learning scenarios. Cost functions can include arbitrary state-dependent costs without any differentiability requirement; only the command cost is assumed to be quadratic.
Learning with hidden states is possible, too. We will demonstrate several synthetic and robotic results showing that PI2 scales to learning in high-dimensional motor systems, that it outperforms previous model-free reinforcement learning algorithms in accuracy and learning speed, and that it can be applied to problems like learning in stochastic manipulation and variable impedance control. As PI2 has only one open tuning parameter, the exploration noise, it can be considered a significant step toward an efficient black-box reinforcement learning approach for high-dimensional motor systems.
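The heart of PI2 can be conveyed in a few lines: perturb the policy parameters, evaluate the resulting rollouts, and update with a probability-weighted average of the perturbations, where low-cost rollouts receive exponentially higher weight. Below is a minimal sketch on a toy quadratic rollout cost; the target, noise schedule, and all constants are illustrative assumptions, not details from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the cost of executing a trajectory with parameters theta.
target = np.array([1.0, -2.0, 0.5])

def cost(theta):
    return float(np.sum((theta - target) ** 2))

theta = np.zeros(3)   # policy parameters, e.g. weights of an attractor landscape
sigma = 0.5           # exploration noise, PI2's single open tuning parameter
lam = 1.0             # temperature of the soft-max over rollout costs

for _ in range(300):
    eps = rng.normal(0.0, sigma, size=(16, 3))       # 16 noisy rollouts
    costs = np.array([cost(theta + e) for e in eps])
    # Probability-weighted averaging: low-cost rollouts dominate the update.
    w = np.exp(-(costs - costs.min()) / lam)
    w /= w.sum()
    theta = theta + w @ eps
    sigma *= 0.99                                    # anneal exploration
```

Note that nothing here requires a gradient of the cost, which is what makes the method attractive for non-differentiable, state-dependent costs.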
Feb 15, 2011 (note date) | Gates 104 | Karen Liu, Assistant Professor, Georgia Institute of Technology | Biped, Balance, and Beyond
Creating truly agile and responsive virtual humans in a physically simulated world has been a longstanding challenge in computer animation. Underlying human motion is the coordinated operation of 206 bones, more than 600 muscles, and countless tendons and ligaments. My research focuses on simulating and modeling this complex system by leveraging principles from biomechanics, motion capture data, control theory, machine learning, and numerical optimization.
In this talk, I will present a set of computational models that enable scientists, engineers, and artists to simulate and design natural human motion. How do we decode motion data to extract hidden information such as the control mechanisms and styles of the subject? How do we use a minimal amount of motion data to enhance the accuracy of existing biomechanical models and further expand our understanding of the human body? How do we design abstract models of motion that best leverage existing optimization or machine learning algorithms? I will describe how we answer these fundamental questions by formulating appropriate optimal control problems. Furthermore, I will describe the applications of these methods in scientific and engineering disciplines beyond the field of computer graphics.
Feb 8, 2011 | Gates 104 | Sergey Levine, PhD Candidate in Computer Science, Stanford University | Inverse Reinforcement Learning and Human Motion
Inverse reinforcement learning (IRL) aims to infer goals, rewards, and cost functions from observations of an agent behaving according to an optimal policy. IRL algorithms have been used to learn robot walking motions from demonstration, build route navigation algorithms that mimic human drivers, and even to learn about how real people infer the goals of abstract agents. In this talk, I will provide a detailed tutorial on notable inverse reinforcement learning algorithms. I will then discuss preliminary ideas for applying such techniques to analyzing and reproducing human motion.
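In its simplest one-step form, maximum-entropy-style IRL reduces to fitting a soft-max choice model: the gradient of the reward estimate is just the gap between observed and predicted action frequencies. A hedged sketch on synthetic data follows; the setup, reward values, and learning rate are all illustrative assumptions, not material from the talk.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical one-step setting: an agent picks one of 4 actions with
# probability proportional to exp(reward), and we only observe its choices.
true_r = np.array([0.0, 1.0, 2.0, -1.0])
p_true = np.exp(true_r) / np.exp(true_r).sum()
choices = rng.choice(4, size=5000, p=p_true)
freq = np.bincount(choices, minlength=4) / 5000.0

# Recover the rewards by gradient ascent on the log-likelihood; the
# gradient is simply (empirical frequencies - model frequencies).
r = np.zeros(4)
for _ in range(500):
    p = np.exp(r) / np.exp(r).sum()
    r += 0.5 * (freq - p)

r -= r[0]   # rewards are identifiable only up to an additive constant
```

Full IRL algorithms extend this matching idea to multi-step trajectories, where the "model frequencies" become expected state visitations under the current reward estimate.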
Jan 25, 2011 | Gates 104 | Chand John, PhD Candidate in Computer Science, Stanford University | How can a CS researcher focus their career on biomechanics?
Join us for a stimulating discussion on how CS researchers can make a big impact in biomechanics. The current plan is for us to brainstorm:
(1) What computational advancements does biomechanics need?
(2) What computational goals does graphics/character animation have that could be relevant to studying human and animal movement?
(3) What computational goals does robotics have that could be relevant to studying human and animal movement?
The goal will be to generate a list of needs in biomechanics that align with research directions of interest to graphics and robotics researchers. Feel free to send me ideas or suggestions on alternative topics on which to brainstorm. In category (1), I will include items that Scott Delp presented in the very first motion seminar. This will work best if we have attendees from the biomechanics, graphics, and robotics labs.
Jan 11, 2011 | Gates 104 | Stefano Corazza, CTO and Co-founder, Mixamo | The challenges in the democratization of 3D character animation
The talk will touch on the latest industry developments in 3D character animation for games, and on how what was once a niche art is becoming a mainstream phenomenon. The challenges in the process of democratizing 3D animation will be presented as a starting point for a deeper discussion of research solutions and mathematical methods for the generation of 3D characters and animations. The main topics will include animation generative models, motion graphs, automatic skinning of 3D meshes, automatic rigging of characters, and animation retargeting.
Fall 2010
Date | Location | Speaker | Title and Abstract
Nov 9, 2010 | Gates 104 | Michael Neff, Assistant Professor of Computer Science, UC Davis | Laban Movement Analysis for Character Animation
In this talk, I will provide an overview of Laban Movement Analysis (LMA).
LMA is a leading framework for describing, annotating and analyzing expressive aspects of movement. The system is used in a wide range of applications, from dance annotation, to performance training, to somatic therapy. I will briefly discuss the four main components of LMA - Body, Effort, Shape and Space - and elaborate on those aspects most useful for character animation. Time permitting, I will provide an overview of some of the computer animation work based on the system and discuss challenges in applying the system in a computational framework.
Oct 26, 2010 | Gates 104 | Francois Conti, PhD Candidate in Computer Science, Stanford University | Interactive Haptic Simulation
Haptics is an emerging technology that involves transmitting information through the sense of touch. This hands-on form of interaction is performed by using small actuated interfaces called haptic devices that apply forces, vibrations, and/or motions to the user. In recent years haptics technology has been integrated into many new applications from gaming devices in the field of computer animation to advanced interfaces for intuitively operating surgical robot systems.
This talk will present recent methodologies and algorithms developed for computer haptic rendering, and address the computational challenges associated with the real-time requirements for haptic simulation and interaction. The presentation will combine live hands-on demonstrations that illustrate the usage of haptics interaction in various application areas.
Oct 12, 2010 | Gates 104 | Jack Wang, Postdoc in Computer Science, Stanford University | Optimizing Controllers for Physics-based Animation
I will discuss an important subproblem in physics-based animation --- controller synthesis for humanoid characters.
We describe a method for optimizing the parameters of a physics-based controller for full-body, 3D walking. The objective function includes terms for power minimization, angular momentum minimization, and minimal head motion, among others. Together these terms produce a number of important features of natural walking, including active toe-off, near-passive knee swing, and leg extension during swing.
We then extend the algorithm to optimize for robustness to uncertainty.
Many unknown factors, such as external forces, control torques, and user control inputs, cannot be known in advance and must be treated as uncertain. Controller optimization entails optimizing the expected value of the objective function, which is computed by Monte Carlo methods.
We demonstrate examples with a variety of sources of uncertainty and task constraints.
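The expected-value formulation above is easy to illustrate: sample the uncertain quantity, average the objective over rollouts, and pick the controller parameters that minimize that average. A toy sketch follows; the 1-D dynamics, cost weights, and gain range are illustrative assumptions, not the walking controller from the talk.

```python
import numpy as np

rng = np.random.default_rng(2)

def rollout_cost(gain, push):
    # Toy 1-D stand-in for a character: a proportional controller drives
    # the state to zero against a random constant push; the objective
    # penalizes tracking error and control effort.
    x, total = 1.0, 0.0
    for _ in range(50):
        u = -gain * x
        x = x + 0.1 * (u + push)
        total += x * x + 0.01 * u * u
    return total

# Monte Carlo estimate of the expected objective under uncertainty,
# reusing one set of disturbance samples for every candidate gain.
pushes = rng.normal(0.0, 1.0, size=200)

def expected_cost(gain):
    return float(np.mean([rollout_cost(gain, d) for d in pushes]))

gains = np.linspace(0.5, 8.0, 16)
best = min(gains, key=expected_cost)
```

Reusing the same disturbance samples across candidates (common random numbers) makes the comparison deterministic and reduces the variance of the Monte Carlo estimate.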
Joint work with David Fleet and Aaron Hertzmann, University of Toronto.
Note: Some people may have already seen this talk at an NMBL meeting in August.
Sept 28, 2010 | Gates 104 | Michael Sherman, Chief Software Architect, SimTK, Stanford University | Predictive vs. interactive simulation of articulated systems
The NIH-supported Simbios center at Stanford focuses on physics-based simulation of biological structures, including dynamic simulation of normal and abnormal human motion. Our charter includes development and support of SimTK, an open source, high performance, professionally developed biosimulation toolkit that includes the Simbody internal-coordinate multibody code (https://simtk.org/home/simbody). Because predictive accuracy is critical to our medical mission, Simbody uses techniques proven in mechanical and aerospace engineering, extended to deal with the inconvenient realities of biology. Engineering simulation includes rigorous control over accuracy. At high accuracy, Simbody provides the OpenSim application's predictive simulation engine for the study of human motion disorders; at lower accuracy it has been used commercially as a real-time virtual-world simulation engine. This approach contrasts with the common alternative of achieving performance with game physics, which produces satisfactory qualitative results but cannot be used for quantitative prediction. That inhibits the smooth flow of knowledge between real- and virtual-world research disciplines, despite their obvious commonalities, because medical simulation must ultimately be validated quantitatively against experiment. I will argue that a shared software infrastructure with a selectable accuracy/performance tradeoff is both practical and advantageous for collaborative research in biomechanics, virtual worlds, robotics, and animation, and propose the adoption and ongoing joint development of Simbody for that purpose.
Summer 2010
Date | Location | Speaker | Title and Abstract
Aug 31, 2010, 2:00 pm | Gates 104 | Aaron Hertzmann, Associate Professor of Computer Science, University of Toronto | Feature-Based Locomotion Controllers for Physically-Simulated Characters
Understanding the control forces that drive humans and animals is fundamental to describing their movement. Although physics-based methods hold promise for creating animation, they have long been considered too difficult to design and control. Likewise, physical motion models, if developed, could be very valuable to human pose tracking in computer vision.
I will outline the main problems of human motion modeling. I will then present a new approach to control of physics-based characters based on high-level features of human movement. These controllers provide unprecedented flexibility and generality in real-time character
control: they capture many natural properties of human movement, they can be easily modified and applied to new characters, and they can handle a variety of different terrains and tasks, all within a single control strategy.
Until very recently, even making a controller walk without falling down was extraordinarily difficult. This is no longer the case. Our work, together with other recent results in this area, suggests that the research community is ready to make great strides in locomotion.
Joint work with Martin de Lasa and Igor Mordatch.
Spring 2010
Date | Location | Speaker | Title and Abstract
May 25, 2010 | Gates 2A Open Space | Liang-Jun Zhang, Postdoc in Computer Science, Stanford University | Motion Planning of Digital Humans for Virtual Prototyping
Digital human modeling (DHM) has emerged as a useful tool for virtual prototyping, ergonomic analysis, and virtual training. By inserting a digital human into a simulation or virtual environment, DHM can facilitate the validation of product design and the prediction of performance and safety in assembly and maintenance processes. Current commercially available DHM packages, such as JACK and SantosHuman, however, have limited capabilities for generating realistic human motions in complex environments. In this talk, I will describe new algorithms for planning and synthesizing human motions for DHM. Because a human model is a tightly coupled system, we first develop an efficient sampling-based planner that coordinates the motion between different human body parts and computes feasible paths in tight environments. We then develop a new motion blending algorithm that refines the path computed by the planner with motion capture data to produce a smooth and plausible trajectory. We demonstrate the performance of our algorithms on a 40-DOF articulated human model and generate efficient motion strategies in complex CAD environments. Finally, I will suggest new research directions for generating human motions for DHM and virtual environments.
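Sampling-based planners of the kind the abstract mentions share a common skeleton: sample a configuration, extend the tree from the nearest node, and reject states in collision. The toy 2-D RRT below is purely illustrative; the talk's planner coordinates a 40-DOF human model, not a point in the plane.

```python
import numpy as np

rng = np.random.default_rng(3)

# Plan from start to goal in the unit square around one circular obstacle.
start, goal = np.array([0.0, 0.0]), np.array([1.0, 1.0])
obs_center, obs_radius = np.array([0.5, 0.5]), 0.2
step = 0.05

def in_collision(p):
    return np.linalg.norm(p - obs_center) < obs_radius

nodes, parents, path = [start], {0: None}, None
for _ in range(5000):
    target = goal if rng.random() < 0.1 else rng.random(2)   # goal bias
    near = min(range(len(nodes)),
               key=lambda i: np.linalg.norm(nodes[i] - target))
    d = target - nodes[near]
    new = nodes[near] + step * d / (np.linalg.norm(d) + 1e-12)
    if in_collision(new):
        continue                       # reject colliding extensions
    parents[len(nodes)] = near
    nodes.append(new)
    if np.linalg.norm(new - goal) < step:   # reached: walk back the tree
        path, i = [], len(nodes) - 1
        while i is not None:
            path.append(i)
            i = parents[i]
        path.reverse()
        break
```

The abstract's motion blending stage would then smooth such a jagged tree path against motion capture data; that refinement step is not sketched here.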
May 11, 2010 | Gates 2A Open Space | Alex Perkins, PhD Candidate in Mechanical Engineering, Stanford University | Dynamic Simulations of Changing Topology Systems
Many practical systems have changing topology; that is, they are constantly changing the way in which they interact with their environment (legged locomotion and grasping present two of particular interest to the field of robotics). There are a variety of methodologies for simulating these interactions, but most are designed for high-speed computing. As mechanical engineers performing offline simulation studies, our interests lie in understanding what is happening and being able to quickly modify our system design. Thus, we have designed a technique which deliberately sacrifices computational efficiency in exchange for user transparency and simplified implementation.
Apr 27, 2010 | Gates 2A Open Space | J. Zico Kolter, PhD Candidate in Computer Science, Stanford University | Machine Learning and Control for Quadruped Locomotion
Legged robots offer the potential to navigate terrain that is completely inaccessible to wheeled vehicles. In this talk I will describe our development of a software architecture for a quadruped robot capable of climbing over a wide variety of challenging, previously unseen terrains. The system uses slow "static" walking for highly complex, rocky terrains, and faster "dynamic" maneuvers and gaits for quickly crossing less irregular terrains. New machine learning and control techniques have played a crucial role in developing this system, and I'll describe the key components where we benefited from such algorithms.
Apr 13, 2010 | Gates 2A Open Space | Oussama Khatib, Professor of Computer Science, Stanford University |
Mar 30, 2010 | Gates 2A Open Space | Scott Delp, Professor of Bioengineering and Mechanical Engineering, Stanford University |
The Seminar on Human, Avatar, and Robot Motion is generously supported by the Stanford Computer Forum.
If you are interested in presenting at the seminar, please contact Vladlen Koltun. We welcome presenters from a broad range of fields, from Stanford University as well as from other academic and industrial institutions.
Please direct other inquiries to
Sergey Levine.