AI Seminar

Modeling of Driver Behavior in Real World Scenarios Using Multiple Noninvasive Sensors

Carlos Busso
Assistant Professor
The University of Texas at Dallas

With the development of new in-vehicle technology, drivers are exposed to more sources of distraction, which can lead to accidents. As a result, there is a need for tracking systems that can monitor drivers' attention. This seminar describes our efforts to understand, quantify, and predict driver distraction using a multimodal corpus collected in real-world driving scenarios. Our study analyzes the effects on driver behavior induced by secondary tasks (e.g., operating the radio, a phone, or a navigation system). The corpus is analyzed to identify multimodal features that can discriminate between normal and task-driving conditions. The experimental evaluation demonstrates that multimodal features extracted from various noninvasive sensors, including the controller area network bus (CAN-Bus), video cameras, and microphone arrays, can be used to train robust classifiers. Furthermore, we explore a joint representation of visual and cognitive distraction that provides insight into the deviations in driving behavior induced by secondary tasks. This novel cognitive and visual representation, together with the automatic classification of driving behaviors into the proposed distraction modes, offers an alternative paradigm for evaluating the detrimental effects caused by different secondary tasks. These tools are especially useful for evaluating new in-vehicle technologies.
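To make the classification setup concrete, below is a minimal sketch of the kind of pipeline the abstract describes: per-segment multimodal features feeding a binary normal-vs.-task classifier. The feature groupings, the synthetic data, and the choice of an RBF-kernel SVM (via scikit-learn) are illustrative assumptions, not the actual features or classifiers used in the study.

    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)

    # Hypothetical per-segment features, one row per driving window:
    # CAN-Bus (e.g., steering-angle variance, speed variability), video
    # (e.g., head-pose yaw range, eyes-off-road ratio), audio (e.g.,
    # speech-activity ratio). Synthetic stand-in data; a real corpus
    # would supply these measurements.
    n = 200
    X_normal = rng.normal(loc=0.0, scale=1.0, size=(n, 5))
    X_task = rng.normal(loc=0.8, scale=1.2, size=(n, 5))  # shifted distribution under distraction
    X = np.vstack([X_normal, X_task])
    y = np.concatenate([np.zeros(n), np.ones(n)])  # 0 = normal, 1 = secondary task

    # Standardize the features, then fit an RBF-kernel SVM as the binary classifier.
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"5-fold accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")

In practice, measurements from the CAN-Bus, cameras, and microphone arrays would replace the synthetic rows, and the same cross-validation scheme would estimate how well the combined modalities separate the two driving conditions.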
Carlos Busso is an Assistant Professor in the Electrical Engineering Department of The University of Texas at Dallas (UTD). He received his B.S. (2000) and M.S. (2003) degrees with high honors in electrical engineering from the University of Chile, Santiago, Chile, and his Ph.D. (2008) in electrical engineering from the University of Southern California (USC), Los Angeles, USA. He was selected by the School of Engineering of Chile as the best electrical engineering graduate of 2003 across Chilean universities. At USC, he received a Provost Doctoral Fellowship from 2003 to 2005 and a Fellowship in Digital Scholarship from 2007 to 2008. At UTD, he leads the Multimodal Signal Processing (MSP) laboratory [http://msp.utdallas.edu]. He received the Hewlett Packard Best Paper Award at IEEE ICME 2011 (with J. Jain), and he is a co-author of the winning paper of the Classifier Sub-Challenge at the Interspeech 2009 Emotion Challenge. His research interests are in digital signal processing, speech and video processing, and multimodal interfaces. His current research spans the broad areas of affective computing, multimodal human-machine interfaces, modeling and synthesis of verbal and nonverbal behaviors, sensing human interaction, in-vehicle active safety systems, and machine learning methods for multimodal processing.

Sponsored by

Toyota