AI Seminar
Learning sight from sound
This event is free and open to the public.
Abstract: Today’s visual learning methods require extensive supervision from human teachers. A major goal of the research community has been to remove the need for this supervision by creating methods that, instead, teach themselves by analyzing unlabeled images. In this talk, I will argue that this focus on learning from vision alone, without the use of other sensory modalities, is making the perception problem unnecessarily difficult. To demonstrate this, I will present unsupervised computer vision methods for learning from co-occurring audio and visual signals. First, I will show that visual models of materials and objects emerge from predicting soundtracks for silent videos. Then, I will present a multimodal video representation that fuses information from both the visual and audio components of a video signal. Finally, I will show that these self-supervised learning techniques can be used to parse a video into a collection of discrete audio-visual objects, and that these objects can be used to solve “downstream” speech-understanding tasks, such as audio source separation.
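To give a flavor of the kind of self-supervised audio-visual learning the talk describes, below is a minimal, purely illustrative sketch of one common setup: two encoders (one for video frames, one for audio spectrograms) trained with a contrastive objective so that co-occurring clips map to nearby embeddings. This is not the speaker's method; the architectures, input shapes, and hyperparameters are assumptions chosen only to make the example self-contained.

```python
# Illustrative sketch only: a generic audio-visual correspondence model trained
# with a contrastive objective on co-occurring video/audio clips. This is NOT
# the speaker's method; architectures, shapes, and hyperparameters are assumed.
import torch
import torch.nn as nn
import torch.nn.functional as F


class VideoEncoder(nn.Module):
    """Encodes a short clip of frames (B, C, T, H, W) into a unit-norm embedding."""
    def __init__(self, dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(3, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
        )
        self.proj = nn.Linear(64, dim)

    def forward(self, x):
        h = self.net(x).flatten(1)
        return F.normalize(self.proj(h), dim=-1)


class AudioEncoder(nn.Module):
    """Encodes a log-mel spectrogram (B, 1, F, T) into a unit-norm embedding."""
    def __init__(self, dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.proj = nn.Linear(64, dim)

    def forward(self, x):
        h = self.net(x).flatten(1)
        return F.normalize(self.proj(h), dim=-1)


def contrastive_loss(v, a, temperature=0.07):
    """InfoNCE-style loss: the i-th video clip should match the i-th audio clip."""
    logits = v @ a.t() / temperature
    targets = torch.arange(v.size(0), device=v.device)
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))


if __name__ == "__main__":
    video = torch.randn(4, 3, 8, 64, 64)   # batch of 4 clips, 8 frames each
    audio = torch.randn(4, 1, 64, 100)     # matching log-mel spectrograms
    loss = contrastive_loss(VideoEncoder()(video), AudioEncoder()(audio))
    print(loss.item())
```

The key idea the sketch captures is that no human labels are required: the pairing of a video clip with its own soundtrack supplies the supervision signal.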