Human-Centered Computing: Using Speech to Understand Behavior & Crowd-Powered Interactive Systems
Emily Mower Provost: Human-centered computing (HCC) technologies are tightly linked to the science of decoding human behavior. HCC is part of the emerging field of Behavioral Signal Processing (BSP), which seeks to provide a computational account of human behavior, ranging from interaction patterns to individual emotion expression, using techniques from both machine learning and signal processing. In this talk, I will discuss our ongoing HCC efforts in (1) speech-based assistive technology research, highlighting our work estimating speech quality for individuals with aphasia, and (2) mood tracking for people with bipolar disorder.
Walter Lasecki: My group creates and deploys interactive systems that use a combination of human and machine intelligence to operate robustly in real-world settings. Unlike prior work in human computation, our "Crowd Agent" model allows crowds of people to support continuous real-time interactive systems that require ongoing context. For example, Chorus allows multi-session conversations with a virtual personal assistant; Scribe allows non-experts to caption speech in real time for deaf and hard of hearing users, where prior approaches were either not accurate enough, or required professionals with years of training; and Apparition allows designers to rapidly prototype new interactive interfaces from sketches in real time. In this talk, I will describe how computationally-mediated groups of people can solve problems that neither people nor computers can solve alone, and scaffold AI systems using the real-world data they collect.
Emily Mower Provost: Emily Mower Provost is an Assistant Professor in Computer Science and Engineering at the University of Michigan. Her research interests are in human-centered speech and video processing, multimodal interface design, and speech-based assistive technology. The goals of her research are motivated by the complexities of human emotion generation and perception. She has been awarded the National Science Foundation Graduate Research Fellowship (2004-2007), the Herbert Kunzel Engineering Fellowship from USC (2007-2008, 2010-2011), the Intel Research Fellowship (2008-2010), and the Achievement Rewards For College Scientists (ARCS) Award (2009-2010). She is a co-author of the Best Student Paper at ACM Multimedia 2014 (with Yelin Kim). She is also a co-author of the winning paper of the Classifier Sub-Challenge event at the Interspeech 2009 Emotion Challenge.
Walter Lasecki: Walter S. Lasecki is an Assistant Professor of Computer Science and Engineering at the University of Michigan, Ann Arbor. He creates interactive intelligent systems that are robust enough to be used in real-world settings by combining human and machine intelligence to exceed the capabilities of either alone. These systems let people be more productive and improve access to the world for people with disabilities. Dr. Lasecki received his Ph.D. and M.S. from the University of Rochester in 2015 and a B.S. in Computer Science and Mathematics from Virginia Tech in 2010. He has previously held visiting research positions at CMU, Stanford, Microsoft Research, and Google[x].