Faculty Candidate Seminar
Verifiable Machine Learning for Security
This event is free and open to the public.
Zoom link for remote participants, passcode: 682921
Abstract: In recent years, machine learning techniques have been increasingly applied to many critical problems in the cybersecurity domain, including detecting malware, spam, online fraud, and hate speech. However, reliably deploying these solutions for security applications poses many challenges, since real-world adversaries constantly try to evade machine learning systems. My research focuses on solving this problem by increasing the cost for attackers to succeed.
In this talk, I will discuss methods to train security classifiers with verified robustness properties. Robustness properties are security guarantees of the classifier that can eliminate certain classes of evasion attacks. I will show how to use security domain knowledge and economic cost measurement studies to formulate robustness properties that capture general classes of evasion strategies that are inexpensive for attackers. Then, I will describe new algorithms to train security classifiers to satisfy these properties. I will show how to apply these methods to detect PDF malware, Twitter spam, and cryptojacking, and demonstrate that they are not only sound but also practical. My key result is that enforcing robustness properties can increase the economic cost of evasion. In the future, I want to integrate new machine learning models as a fundamental component in solving hard problems in security.
Bio: Yizheng Chen is a postdoctoral scholar at the University of California, Berkeley. Previously, she was a postdoctoral scholar at Columbia University. She holds a Ph.D. in Computer Science from the Georgia Institute of Technology. Her research focuses on building robust machine learning algorithms for security applications. Her work has received an ACM CCS Best Paper Award Runner-up and a Google ASPIRE Award. She is a recipient of the Anita Borg Memorial Scholarship.