Dissertation Defense

Secure and Safe Autonomous Driving: an Adversary’s Perspective

Yulong Cao, Ph.D. Student

Virtual Dissertation Defense


Autonomous vehicles, also known as self-driving cars, are being developed at a rapid pace thanks to advances in machine learning. However, the real world is complex and dynamic, with many factors that can affect the performance of an autonomous driving (AD) system. It is therefore essential to thoroughly test and evaluate AD systems to ensure their safety and reliability in the open-world driving environment. Moreover, given the high impact of AD systems on road safety, it is important to build AD systems that are robust against adversaries.
However, fully testing AD systems and exploring their attack surface is challenging due to their complexity: they combine sensors, software systems, and machine learning models. To address these challenges, my dissertation research focuses on building secure and safe AD systems through systematic analysis of attackers' capabilities. This involves testing AD systems as a whole, using realistic attacks, and discovering new security problems through proactive analysis.
To achieve this goal, my dissertation starts by formulating realistic attacker capabilities against perception systems. Based on this formulation, new attacks on perception systems are discovered with different impacts (e.g., spoofing ghost objects or removing detected objects). Next, causality analysis is conducted to understand the fundamental limitations of these systems (e.g., large receptive fields introducing new attack vectors), providing insights and guidelines for designing more robust systems in the future. Finally, solutions are developed to improve both the modular and the integrated robustness of AD systems. By leveraging adversarial examples, the training dataset of the machine learning models can be augmented to naturally improve modular robustness; using insights from the causality analysis and the formulated attacker capabilities, AD systems with enhanced integrated robustness can be designed.
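The idea of augmenting a training set with adversarial examples to improve modular robustness can be illustrated with a minimal, self-contained sketch. The model, data, and every parameter below (a toy logistic-regression classifier, Gaussian blobs, an FGSM-style perturbation with `eps=0.3`) are illustrative assumptions for exposition only, not the dissertation's actual models or pipeline:

```python
import numpy as np

# Illustrative sketch of adversarial data augmentation, NOT the
# dissertation's method: craft FGSM-style adversarial examples for a
# toy logistic-regression model, then retrain on the augmented set.

rng = np.random.default_rng(0)

# Toy binary classification data: two Gaussian blobs (assumed setup).
X = np.vstack([rng.normal(-1.0, 0.5, (50, 2)), rng.normal(1.0, 0.5, (50, 2))])
y = np.concatenate([np.zeros(50), np.ones(50)])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(X, y, epochs=200, lr=0.5):
    # Plain gradient descent on the cross-entropy loss.
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        p = sigmoid(X @ w + b)
        w -= lr * (X.T @ (p - y)) / len(y)
        b -= lr * np.mean(p - y)
    return w, b

def fgsm(X, y, w, b, eps=0.3):
    # For logistic regression the input gradient of the loss is
    # (p - y) * w; perturb each input along its sign (FGSM).
    p = sigmoid(X @ w + b)
    grad_x = (p - y)[:, None] * w[None, :]
    return X + eps * np.sign(grad_x)

def accuracy(X, y, w, b):
    return np.mean((sigmoid(X @ w + b) > 0.5) == y)

# 1. Train on clean data, then craft adversarial copies of it.
w, b = train(X, y)
X_adv = fgsm(X, y, w, b)

# 2. Augment the training set with those copies and retrain.
X_aug = np.vstack([X, X_adv])
y_aug = np.concatenate([y, y])
w_r, b_r = train(X_aug, y_aug)

print("clean accuracy (augmented model):", accuracy(X, y, w_r, b_r))
print("accuracy on old adversarial set :", accuracy(X_adv, y, w_r, b_r))
```

The design point the sketch captures is that the augmentation step is model-agnostic: any attack that produces perturbed inputs with known labels can feed the same retraining loop, which is why formulating realistic attacker capabilities first directly strengthens the defense.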


CSE Graduate Programs Office

Faculty Host

Prof. Z. Morley Mao