CSE Seminar

Explainability and Interpretability through Interaction

Nikola Banovic, Assistant Professor, University of Michigan
WHERE:
Remote/Virtual
Abstract: Recent advances in AI and Machine Learning enable future behavior-aware user interfaces that can automatically reason about and act in response to people's behaviors. However, current AI-based solutions struggle to explain their decisions, and such explanations are necessary for the adoption of these models in practice. This talk explores how to explain the capabilities and limitations of AI models through human-AI interaction, in contrast with existing mathematical approaches that assume the end user is a math-savvy model designer rather than a domain expert or end user. The talk will propose new, empirically validated mechanisms for delivering explainable and interpretable complex systems, including AI models.
Bio: Nikola Banovic, Ph.D., is an Assistant Professor of Electrical Engineering and Computer Science at the University of Michigan, Ann Arbor, where he leads the Computational HCI Lab. Nikola received his Ph.D. from the Human-Computer Interaction Institute (HCII) at Carnegie Mellon University, and his B.Sc. and M.Sc. degrees from the University of Toronto. His research focuses on creating computational models to study, describe, and understand complex human behaviors. Nikola's research enables behavior-aware user interfaces that automatically reason about and act in response to people's behaviors to help them be productive, healthy, and safe. He has published award-winning research on methods to study and model human behavior in premier Human-Computer Interaction (HCI) conferences and journals.

Organizer

Faculty Seminar Series