AI Seminar

Measuring and Improving the Internal Conceptual Representations of Deep Learning

Ramakrishna Vedantam, Independent Researcher, New York University
3725 Beyster Building
Join us in 3725 Beyster or on Zoom.
Endowing machines with abstract, flexible conceptual representations and the ability to combine known concepts to make novel "conceptual leaps" is a long-standing goal of artificial intelligence (AI). In pursuit of this goal, I will discuss my work on the foundations of concept learning for deep learning models. In particular, I will focus on: multimodal learning (to ground concept representations more precisely in the world), quantifying robustness (to assess whether atomic concepts are learned correctly), and machine reasoning (to combine known atomic concepts into novel, emergent ones). Finally, I will speculate on important research directions for realizing the promise of general, robust, and human-interpretable AI systems.
About the speaker:
Ramakrishna Vedantam is an independent researcher visiting NYU, interested in the foundational aspects of general artificial intelligence systems. He was previously a research scientist at the FAIR lab in New York and, before that, a PhD student at Georgia Tech. Rama has made seminal contributions to various domains of AI, such as captioning, multimodal learning, interpretability, and representation learning. Of particular note are his works CIDEr and GradCAM, which are widely used by practitioners of deep learning.