AI Seminar

Learning sight from sound

Andrew Owens
Assistant Professor in Electrical and Computer Engineering
University of Michigan
WHERE:
Remote/Virtual

Abstract: Today’s visual learning methods require extensive supervision from human teachers. A major goal of the research community has been to remove the need for this supervision by creating methods that, instead, teach themselves by analyzing unlabeled images. In this talk, I will argue that this focus on learning from vision alone, without the use of other sensory modalities, is making the perception problem unnecessarily difficult. To demonstrate this, I will present unsupervised computer vision methods for learning from co-occurring audio and visual signals. First, I will show that visual models of materials and objects emerge from predicting soundtracks for silent videos. Then, I will present a multimodal video representation that fuses information from both the visual and audio components of a video signal. Finally, I will show that these self-supervised learning techniques can be used to parse a video into a collection of discrete audio-visual objects, and that these objects can be used to solve “downstream” speech-understanding tasks, such as audio source separation.

Bio: Andrew Owens joined the University of Michigan this January as an assistant professor in Electrical and Computer Engineering. Before that, he was a postdoctoral scholar at UC Berkeley, and he received a Ph.D. in EECS from MIT in 2016.
BlueJeans Virtual Seminar Link: https://bluejeans.com/527479957

Organizer

AI Lab