Automatically Capturing and Reflecting Latent Label Dependencies in Machine Learning Models
In this talk, I will discuss recent work on automatically capturing latent dependencies in the label space from data, without explicit knowledge injection. When multivariate dependencies exist over a large output space, annotating them explicitly is nearly impossible. I will first briefly introduce structured prediction energy networks (SPENs), which can capture latent dependencies, and then present last year's NeurIPS publication that utilizes these Structured Energy networks As Loss functions (SEAL) to teach a simple feedforward network. We find that using a SPEN as a trainable loss function, rather than as a prediction network at inference time, is not only computationally more efficient but also yields higher performance on multi-label classification, semantic role labeling, and binary image segmentation. I will also briefly introduce the idea of using spatial representations to capture latent label dependencies such as taxonomic and logical dependencies.
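To make the core idea concrete, here is a minimal, purely illustrative sketch: a feedforward predictor is trained by descending an energy function that scores its predictions, instead of an independent per-label loss. Everything below is a toy assumption, not the paper's method: the "energy network" is a fixed, hand-built quadratic with a term rewarding agreement between two labels that always co-occur (a latent dependency), whereas in SEAL the energy network is itself learned.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy multi-label data: two labels that always co-occur (a latent dependency).
X = rng.normal(size=(64, 4))
Y = np.tile((X[:, 0] > 0).astype(float)[:, None], (1, 2))  # label 1 == label 2

def energy(pred, target, dep_weight=2.0):
    """Low energy when predictions fit the targets AND the two labels agree.
    Stand-in for a learned structured energy network (illustrative only)."""
    fit = np.sum((pred - target) ** 2)
    dependency = dep_weight * np.sum((pred[:, 0] - pred[:, 1]) ** 2)
    return fit + dependency

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Simple linear-sigmoid predictor trained with the energy as its loss,
# using manual gradients to keep the sketch dependency-free.
W = rng.normal(scale=0.1, size=(4, 2))
lr = 0.5
for _ in range(500):
    pred = sigmoid(X @ W)
    # d(energy)/d(pred): fit term plus the label-agreement term.
    g_pred = 2.0 * (pred - Y)
    diff = pred[:, 0] - pred[:, 1]
    g_pred[:, 0] += 2.0 * 2.0 * diff
    g_pred[:, 1] -= 2.0 * 2.0 * diff
    g_logits = g_pred * pred * (1.0 - pred)  # chain rule through the sigmoid
    W -= lr * (X.T @ g_logits) / len(X)

final_pred = sigmoid(X @ W)
```

After training, the predictor both fits the targets and keeps the two labels consistent with each other, because the dependency is baked into the loss rather than enforced at inference time, which is the computational advantage the abstract refers to.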
Jay-Yoon Lee is an assistant professor in the Graduate School of Data Science at Seoul National University (SNU). His research interest primarily lies in injecting knowledge and constraints into machine learning models using the tools of structured prediction, reinforcement learning, and multi-task learning. During his Ph.D., he worked on injecting hard constraints and logical rules into neural NLP models, and he is now expanding his research toward automatically capturing constraints, human-interactive models, and science problems such as protein interaction. Prior to joining SNU, he conducted postdoctoral research in the College of Information & Computer Sciences at UMass Amherst with Professor Andrew McCallum. Jay-Yoon received his Ph.D. in Computer Science in 2020 from Carnegie Mellon University, where he was advised by Professor Jaime Carbonell, and received his B.S. in electrical engineering from KAIST.