Learning with Minimal Supervision

 

Harri Edwards


The success of deep learning has largely been the success of supervised learning. Unfortunately, obtaining sufficient labelled examples for every task of interest is impractically expensive. In this project we consider various ways of addressing this problem through transfer learning, meta-learning, unsupervised and semi-supervised learning, and learning from weaker or cheaper forms of supervision. In particular, we focus on using meta-learning to fit high-capacity generative models of datasets with few examples, and on using intrinsic motivation to drive exploration by reinforcement learning agents.

 

Supervisors: Amos Storkey & Iain Murray