Faculty Candidate Seminar

Towards Principled Modeling of Inductive Bias for Generalizable Machine Learning

Weiyang Liu
Ph.D. Candidate
Max Planck Institute
WHERE:
3725 Beyster Building

Zoom link for remote attendees: password 123123

Abstract: Machine learning (ML) has become increasingly ubiquitous, enabling scalable and accurate decision making in applications ranging from autonomous driving to medical diagnosis. Despite its unprecedented success, ensuring that ML systems are trustworthy and generalize as intended remains a major challenge. To address this challenge, my research aims to build generalizable ML algorithms through principled modeling of inductive bias. To this end, I introduce three methods for modeling inductive biases: (1) value-based modeling, (2) data-centric modeling, and (3) structure-guided modeling. While briefly touching upon all three methods, I will focus on my recent efforts in value-based modeling and how it can effectively improve the adaptation of foundation models. Finally, I will conclude by highlighting the critical role of principled inductive bias modeling in unlocking new possibilities in the age of foundation models.

Bio: Weiyang Liu is a final-year Ph.D. student at the University of Cambridge and the Max Planck Institute for Intelligent Systems, advised by Prof. Adrian Weller and Prof. Bernhard Schölkopf under the Cambridge-Tuebingen Machine Learning Fellowship. His research focuses on the principled modeling of inductive biases to achieve generalizable and reliable machine learning. He has received the Baidu Fellowship and the Hitachi Fellowship, and was a Qualcomm Innovation Fellowship finalist. His work has received the 2023 IEEE Signal Processing Society Best Paper Award, the Best Demo Award at HCOMP 2022, and multiple oral/spotlight presentations at conferences such as ICLR, NeurIPS, and CVPR.

Organizer

Cindy Estell

Student Host

Daniel Geng

Faculty Host

JJ Park