CSE Seminar | Division Event
Human-Centered AI Transparency: Lessons Learned and Open Questions in the Age of LLMs. Lecture to be followed by panel discussion.
This event is free and open to the public.
Note: This talk will be followed by a panel session on the same topic, which will run from 3:30 – 4:30pm. The panel will include Dr. Q. Vera Liao, Prof. Christopher Brooks (UMSI), and Prof. Ben Kuipers (CSE), and will be moderated by Prof. Nikola Banovic (CSE).
Abstract: Transparency—enabling appropriate understanding of AI models or systems—is considered a pillar of Responsible AI. The AI research community and industry have developed an abundance of techniques and artifacts in the hope of achieving transparency, including transparent model reporting, evaluation, explainable AI (XAI), and communication of model uncertainty. Meanwhile, the HCI community has taken human-centered approaches to these topics, building on its long-standing interest in designing to support user understanding and appropriate mental models. In this talk, I will give an overview of common approaches and lessons learned from HCI research on AI transparency. With the recent rise of large language models (LLMs) and LLM-infused systems, I will also reflect on the unique challenges they pose for providing transparency and discuss open questions.
Bio: Dr. Q. Vera Liao is a Principal Researcher at Microsoft Research Montréal, where she is part of the FATE (Fairness, Accountability, Transparency, and Ethics in AI) group. Her current research interests are in human-AI interaction, explainable AI, and responsible AI. Prior to joining MSR, she worked at IBM Research and studied at the University of Illinois at Urbana-Champaign and Tsinghua University. Her research has received multiple paper awards at ACM and AAAI venues. She currently serves as Co-Editor-in-Chief of the Springer HCI Book Series, on the Editorial Board of ACM TiiS, as an Editor for CSCW, and as an Area Chair for FAccT.