AI Seminar

Following Natural Language Instructions in Unknown Environments

Matt Walter, Assistant Professor, Toyota Technological Institute at Chicago

Natural language promises an efficient and flexible means for humans to communicate with robots, whether they are assisting the physically or cognitively impaired, or performing disaster mitigation tasks as our surrogates. Recent advancements have given rise to robots that are able to interpret natural language commands that direct object manipulation and spatial navigation. However, most methods require prior knowledge of the metric and semantic properties of the objects and places that comprise the robot's environment.

In this talk, I will present our work that enables robots to follow natural language navigation instructions within novel, unknown environments. I will first describe a method that treats language as a sensor, exploiting information both implicit and explicit in the user's command to learn distributions over the latent spatial and semantic properties of the environment and over the robot's intended behavior. The method then learns a belief space policy that reasons over these distributions to identify suitable navigation actions.

In the second part of the talk, I will present an alternative formulation that frames language understanding as a multi-view sequence-to-sequence learning problem. I will introduce an alignment-based neural encoder-decoder architecture that translates free-form instructions into action sequences based on images of the observable world. Unlike previous methods, this architecture uses no specialized linguistic resources and can be trained in a weakly supervised, end-to-end fashion, which allows it to generalize to new domains. I will evaluate the efficacy of these methods on a combination of benchmark navigation datasets and through demonstrations on a voice-commandable wheelchair.
Matthew Walter is an assistant professor at the Toyota Technological Institute at Chicago. His interests revolve around the realization of intelligent, perceptually aware robots that are able to act robustly and effectively in unstructured environments, particularly with and alongside people. His research focuses on machine learning-based solutions that allow robots to learn to understand and interact with the people, places, and objects in their surroundings.

Sponsored by

Toyota