CSE Seminar

Physically Plausible Human Motion Understanding for Human-Robot Teams

Chad Jenkins
Associate Professor of Computer Science
Brown University

A central aim of autonomous robotics is to improve the physical productivity of human users through collaboration with robotic partners. Communication in such human-robot teams is a critical capability for giving instructions to robots, responding to a user's intentions, and coordinating group behavior. Further, human-robot teams are increasingly present in highly dynamic physical environments with large numbers of heterogeneous robots. Toward this end, our aim has been to enable robots to recognize humans and their non-verbal cues (e.g., pose and gestures) robustly in the presence of physical disturbances, and to produce software modules that port these recognition capabilities across platforms.

In this talk, I will present our work on recognizing non-verbal features from humans, such as pose and gesture, using depth cameras on mobile robotic platforms. Using predictive models of human motion together with depth sensing, our system enables robots to recognize, follow, and respond in real time to commands from moving human users across indoor and outdoor environments.
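
To make the predict-and-update pattern above concrete, here is a minimal sketch (not the speaker's implementation) of a particle filter tracking a user's planar position: particles are propagated with a simple constant-velocity predictive motion model and reweighted against a depth observation. The depth_likelihood function, the noise scales, and the observation format are assumptions for illustration only.

    import numpy as np

    def depth_likelihood(particles, observation):
        # Hypothetical placeholder: score how well each particle's
        # position explains the observed depth-derived measurement.
        dists = np.linalg.norm(particles[:, :2] - observation, axis=1)
        return np.exp(-0.5 * (dists / 0.3) ** 2)  # assumed 0.3 m sensor noise

    def particle_filter_step(particles, observation, dt=0.1, rng=np.random):
        # Predict: constant-velocity motion model plus process noise.
        # (The "physically plausible" variant discussed below would replace
        # this step with a forward step of dynamical humanoid simulation.)
        particles[:, :2] += particles[:, 2:] * dt
        particles += rng.normal(scale=[0.05, 0.05, 0.1, 0.1],
                                size=particles.shape)

        # Update: weight particles by the depth-based observation likelihood.
        weights = depth_likelihood(particles, observation)
        weights /= weights.sum()

        # Resample: draw a new particle set proportional to the weights.
        idx = rng.choice(len(particles), size=len(particles), p=weights)
        return particles[idx]

    # Each particle is [x, y, vx, vy]; track a user walking in the plane.
    particles = np.random.normal(scale=1.0, size=(500, 4))
    particles = particle_filter_step(particles, observation=np.array([1.0, 0.5]))
    print(particles[:, :2].mean(axis=0))  # posterior mean position estimate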

We build on this work in two areas. First, our work on "physically plausible human tracking" improves the physical validity of monocular pose tracking by incorporating dynamical humanoid simulation as the prediction step within inference procedures such as Bayesian filtering. Second, we have aimed to increase the interoperability, reproducibility, and accessibility of robotics for users, application developers, and researchers. Through robot middleware based on network messaging, such as the Robot Operating System (ROS), we have readily ported our person and gesture recognition systems to several different robot platforms. We posit that robotics-oriented application-layer network protocols are critical for realizing "robot apps" for human-robot teams.
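
As a rough sketch of the message-passing pattern that makes such recognition portable, the ROS node below publishes recognized gestures on a topic that any subscribing platform can act on. The node name, topic name, and recognize_gesture stub are hypothetical, not the actual interface from this work.

    #!/usr/bin/env python
    import rospy
    from std_msgs.msg import String

    def recognize_gesture():
        # Hypothetical stand-in for a depth-based gesture recognizer;
        # a real node would process data from the robot's depth sensor.
        return "wave"

    def main():
        # Because recognition results travel as plain ROS messages, any
        # robot whose controller subscribes to this (assumed) topic can
        # respond to them, independent of its hardware.
        rospy.init_node('gesture_recognizer')
        pub = rospy.Publisher('/recognition/gesture', String, queue_size=10)
        rate = rospy.Rate(10)  # publish at 10 Hz
        while not rospy.is_shutdown():
            gesture = recognize_gesture()
            if gesture is not None:
                pub.publish(String(data=gesture))  # e.g., "wave"
            rate.sleep()

    if __name__ == '__main__':
        main()
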
Odest Chadwicke Jenkins, Ph.D., is an Associate Professor of Computer Science at Brown University. Prof. Jenkins earned his B.S. in Computer Science and Mathematics at Alma College (1996), M.S. in Computer Science at Georgia Tech (1998), and Ph.D. in Computer Science at the University of Southern California (2003). Prof. Jenkins was selected as a Sloan Research Fellow in 2009. He is a recipient of the Presidential Early Career Award for Scientists and Engineers (PECASE) for his work in physics-based human tracking from video. He has also received Young Investigator awards from the Office of Naval Research (ONR) for his research in learning dynamical primitives from human motion, the Air Force Office of Scientific Research (AFOSR) for his work in manifold learning and multi-robot coordination, and the National Science Foundation (NSF) for robot learning from multivalued human demonstrations. His research addresses problems in robot learning and human-robot interaction, primarily focused on robot learning from demonstration, as well as topics in computer vision, machine learning, and computer animation.

Sponsored by

CSE