Faculty Candidate Seminar
Socially Responsible and Factual Reasoning for Equitable AI Systems
This event is free and open to the public.
Seminar location: Michigan Memorial Phoenix Project (between the Dude and the Reflecting Pool, entrance faces Bonisteel)
Zoom link for remote participants, passcode: 277477
Understanding the implications underlying a text is critical to assessing its impact. This requires endowing artificial intelligence (AI) systems with pragmatic reasoning, for example to infer that the statement “Epidemics and cases of disease in the 21st century are ‘staged’” relates to unfounded conspiracy theories. In this talk, I discuss how shortcomings in the ability of current AI systems to reason about pragmatics lead to inequitable detection of false or harmful language. I demonstrate how these shortcomings can be addressed by imposing human-interpretable structure on deep learning architectures using insights from linguistics.
In the first part of the talk, I describe how adversarial text generation algorithms can be used to improve model robustness. I then introduce a pragmatic formalism for reasoning about harmful implications conveyed by social media text, and show how this pragmatic approach can be combined with generative neural language models to uncover implications of news headlines. I also address the bottleneck to progress in text generation posed by gaps in factuality evaluation. I conclude with an interdisciplinary study showing how content moderation informed by pragmatics can ensure safe interactions with conversational agents, and I outline my future vision for the development of context-aware systems.