Multi-task Hierarchical Reinforcement Learning for Compositional Tasks
This event is free and open to the public.
ABSTRACT: Real-world tasks are hierarchical and compositional. A task can be composed of multiple subtasks (or sub-goals) that depend on one another, forming a hierarchical structure. Moreover, many tasks share common structure, and humans naturally exploit this by reusing learned rules and skills to solve novel tasks. Reinforcement learning models, in contrast, often require large amounts of data across many tasks before they can solve complex hierarchical and compositional problems. To handle real-world tasks, reinforcement learning agents should be able to tackle such hierarchical and compositional tasks sample-efficiently and with minimal human supervision.
In this thesis, I address the problem of building agents that can efficiently solve hierarchical and compositional tasks in various reinforcement learning settings.
First, I propose a hierarchical reinforcement learning model that can efficiently solve hierarchical tasks in zero-shot and few-shot scenarios. The model uses inductive logic programming to infer the latent task structure by interacting with the environment, and a graph neural network to encode the inferred task structure and map it to the policy space, enabling zero-/few-shot task generalization.
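The thesis itself presents the full architecture; purely as a rough illustration of how a graph neural network can encode a subtask dependency graph, a minimal message-passing sketch is shown below. The function name, the toy three-subtask graph, and the update rule are all illustrative assumptions, not the model from the thesis:

```python
import numpy as np

def encode_task_graph(adj, feats, n_rounds=2, seed=0):
    """Toy message-passing encoder for a subtask dependency graph.

    adj[i, j] = 1 means subtask j is a precondition of subtask i.
    Returns one embedding per subtask node; in a real model these
    embeddings would condition a policy network.
    """
    rng = np.random.default_rng(seed)
    d = feats.shape[1]
    W_self = rng.standard_normal((d, d)) / np.sqrt(d)
    W_msg = rng.standard_normal((d, d)) / np.sqrt(d)
    h = feats
    for _ in range(n_rounds):
        msgs = adj @ h @ W_msg          # aggregate precondition embeddings
        h = np.tanh(h @ W_self + msgs)  # update each node's state
    return h

# Hypothetical 3-subtask chain: pick -> place -> stack
adj = np.array([[0, 0, 0],
                [1, 0, 0],
                [0, 1, 0]], dtype=float)
feats = np.eye(3)  # one-hot subtask features
emb = encode_task_graph(adj, feats)
print(emb.shape)  # one d-dimensional embedding per subtask
```

After a few rounds of message passing, each node's embedding summarizes its precondition subgraph, which is what lets a downstream policy condition on the task structure rather than on a fixed task identity.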
Second, I extend this model to multi-task and transfer learning settings to achieve stronger generalization and efficient knowledge sharing across tasks. I will present an algorithm that models a prior over the training tasks and applies it to unseen test tasks, either by predicting the posterior over the latent task structure or by abstracting the task structure into a factored form that generalizes to unseen entities.
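As a loose illustration of the factored-abstraction idea (not the algorithm from the thesis), the task structure can be separated from the entities it acts on, so that a template learned during training grounds directly on entities never seen before. All names and the rule template below are hypothetical:

```python
# Hypothetical factored task representation: the structure (verb order)
# is learned over an abstract slot "X" instead of concrete objects.
rules = [("pickup", "X"), ("place", "X")]  # learned template over slot X

def instantiate(rules, entities):
    """Ground the abstract template once per concrete entity."""
    return [[(verb, e) for verb, _ in rules] for e in entities]

train_tasks = instantiate(rules, ["red_block", "blue_block"])
test_tasks = instantiate(rules, ["green_pyramid"])  # unseen entity
print(test_tasks)
# [[('pickup', 'green_pyramid'), ('place', 'green_pyramid')]]
```

Because the rules never mention a specific entity, no retraining is needed for `green_pyramid`: generalization to unseen entities falls out of the factored form itself.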