Computer Science and Engineering

Faculty Candidate Seminar

Physically Interactive Intelligence — A path towards autonomous embodied agents

Roberto Martin-Martin
Postdoctoral Scholar
Stanford University
WHERE:
Remote/Virtual


CSE/Robotics Faculty Candidate Seminar

Abstract: What is the role of physical interaction in embodied intelligence? In robotics, physical interaction is often reduced to a minimum because it is considered difficult to plan, control, and execute, has unpredictable effects, and may be dangerous for the robot and anything or anyone around it. To compensate, we impose extremely high requirements on computation: perception, planning, and control. However, when observing humans, we see that our ability to perform tasks autonomously, in a versatile and robust manner, comes from rich, continuous, and resourceful interactions with the environment — what I call Physically Interactive Intelligence.

 

In my research, I develop new learning algorithms that enable embodied AI agents to exploit interactions to gain autonomy, and I test them in realistic, integrated robotic systems. I propose to promote physical interaction to a foundational component of novel robotic solutions. I will present new methods for learning to control and exploit physical interactions, even for tasks where they are not traditionally used, such as perception and navigation. These lines of work support my overall research hypothesis: autonomous behavior and grounded understanding in embodied AI agents are achieved through the resourceful use of physical interaction with the environment, i.e., through physically interactive intelligence. I will also discuss the next steps in my research agenda: studying the role of physical interaction in intelligent autonomous behavior, and endowing robots with new interactive capabilities so they can achieve more complex tasks autonomously in unstructured environments.

Bio: Roberto Martin-Martin is a postdoctoral scholar at the Stanford Vision and Learning Lab, working with Professors Silvio Savarese and Fei-Fei Li. Before coming to Stanford, he received his PhD and Master's degrees from Technische Universität Berlin (TUB), working with Professor Oliver Brock, and his BSc degree from Universidad Politécnica de Madrid. He works at the intersection of robotics, computer vision, and machine learning, integrating physical interaction into novel perception and learning procedures, and creating fully integrated embodied AI systems and studying their behavior. At Stanford he coordinates three groups: the People, AI & Robots (PAIR) team, where he studies visuo-motor learning skills for manipulation and planning; the iGibson team, where he studies how humans and robots achieve everyday long-horizon interactive tasks; and the JackRabbot team, where he researches mobility and manipulation in environments with humans. He received the RSS Best Systems Paper Award and led the winning team at the first Amazon Picking Challenge.

Faculty Host

Justin Johnson