Towards Scalable Meta-Learning

UoM administered thesis: PhD

Author: Sebastian Flennerhag


Artificial intelligence challenges our understanding of intelligent behaviour and of our own intelligence. Recent advances rely on machine learning, where intelligence arises through statistical learning. Machine learning typically assumes that agents approach tasks with no prior knowledge. This stands in stark contrast to how humans approach new problems, and it is unlikely to yield human-level intelligence that can learn to solve new, unseen problems as they arise. To this end, we need a learning paradigm at a higher level of abstraction. One alternative is agents that learn to learn. Within this framework, agents learn not merely to solve a given set of tasks, but how to solve them. Such agents can generalise prior experience into abstract concepts for learning and problem solving. In the context of neural networks, current methods are limited in their ability to generalise and to scale with task variation and complexity. This thesis makes four contributions that tackle these challenges. First, a novel method is proposed that learns to dynamically adapt an agent's parameters, increasing its expressive capacity and ability to generalise. Second, a framework is proposed that grounds meta-learning in differential geometry by learning shortest solution paths (geodesics) across tasks. Third, building on these insights, a novel method for meta-learning is proposed that is simple, scalable, and effective. It is the first gradient-based meta-learner that can be applied directly to any form of learning, including supervised, unsupervised, reinforcement, continual, and online learning, opening the door to meta-learning at the scale and level of complexity required for sophisticated artificial intelligence. Finally, while the above methods rely on a predefined task distribution, an artificial intelligence should be able to define its own tasks as needed. To this end, a novel system for exploration in reinforcement learning is proposed that creates intrinsic tasks. These tasks drive the agent to explore experiences where it has high uncertainty, and they evolve continuously as the agent learns about its world.
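To make the gradient-based meta-learning paradigm concrete, the sketch below shows a generic MAML-style inner/outer loop on hypothetical one-dimensional quadratic tasks. This is an illustration of the general idea of meta-learning an initialisation, not the specific method proposed in the thesis; the task family, losses, and all names (`inner_step`, `maml`, the step sizes `alpha` and `beta`) are assumptions made for the example.

```python
# Minimal MAML-style sketch (hypothetical toy problem, not the thesis's method).
# Each task i has loss L_i(w) = (w - c_i)^2 for a task-specific centre c_i.
# We meta-learn an initialisation w0 such that ONE inner gradient step on any
# task already gives a low loss on that task.

def inner_step(w, c, alpha):
    # One step of gradient descent on L(w) = (w - c)^2, using dL/dw = 2(w - c).
    return w - alpha * 2.0 * (w - c)

def meta_grad(w0, centers, alpha):
    # Gradient of the meta-objective sum_i L_i(inner_step(w0, c_i)) w.r.t. w0,
    # differentiated through the inner step via the chain rule:
    # d/dw0 (w1 - c)^2 = 2 (w1 - c) * dw1/dw0, with dw1/dw0 = 1 - 2 * alpha.
    g = 0.0
    for c in centers:
        w1 = inner_step(w0, c, alpha)
        g += 2.0 * (w1 - c) * (1.0 - 2.0 * alpha)
    return g

def maml(centers, alpha=0.1, beta=0.05, steps=500):
    # Outer loop: plain gradient descent on the meta-objective.
    w0 = 0.0
    for _ in range(steps):
        w0 -= beta * meta_grad(w0, centers, alpha)
    return w0

# For these quadratics the post-adaptation loss is minimised when w0 is the
# mean of the task centres, so the meta-learner should converge towards 1.0.
w0 = maml([-1.0, 1.0, 3.0])
```

For this toy family the inner step contracts each task residual by a constant factor, so the meta-objective is itself a quadratic in `w0` and the outer loop converges to the mean of the task centres; in realistic settings both loops are stochastic and the inner adaptation is differentiated through automatically.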


Original language: English
Awarding institution:
Award date: 1 Aug 2021