Conference Agenda
Overview and details of the sessions of this conference.
Session
WS 9b - Curiosity, Exploration, and Meta-Reinforcement Learning: Learning What to Learn
Session Abstract
Brief Description and Outline: Agentic AI is becoming ubiquitous, but how an agent learns to navigate an environment efficiently remains an open problem. This session traces the exploration problem from its classical formulation in reinforcement learning (where the tension between exploiting known rewards and discovering better ones is unavoidable) through to contemporary approaches rooted in intrinsic motivation, meta-learning, and the free energy principle. The arc moves from hand-designed exploration heuristics toward increasingly principled accounts of curiosity, culminating in active inference as a unifying framework that grounds exploration in Bayesian brain theory and statistical mechanics.

Relevance: Efficient exploration is a bottleneck in deploying RL to many real-world problems where rewards are sparse, environments are complex, and data is expensive, making it central to both theoretical and applied AI research. Beyond engineering, the topic sits at the intersection of machine learning, dynamical systems, and computational neuroscience, offering a lens through which mathematical structure can illuminate biological as well as artificial intelligence. Researchers across these fields may find direct connections to open problems in their own work, whether in algorithm design, cognitive modelling, or the foundations of learning theory.

Outline:
1) Reinforcement Learning and the Exploration Problem (20 min)
2) Curiosity and Intrinsic Motivation in RL (30-40 min)
3) Meta-Reinforcement Learning and Learning-to-Explore (20 min)
4) Active Inference Frameworks (20 min)
5) Discussion and Open Questions (20-30 min)

Goals: To provide a conceptual map of modern exploration and curiosity-driven methods in reinforcement learning; to clarify the relationship between intrinsic motivation, meta-learning, and adaptive behaviour; and to highlight connections between reinforcement learning, Bayesian inference, and active inference frameworks.
Also, to equip participants with principled ways to reason about exploration strategies beyond ε-greedy and other ad-hoc heuristics.

Presenter's Experience: Jonathan Shock is an Associate Professor in the Department of Maths and Applied Maths at the University of Cape Town, where he directs the UCT AI Initiative. He is also an Adjunct Professor at INRS Montréal. His work spans theoretical physics, reinforcement learning, and computational neuroscience, with a focus on understanding intelligence as a dynamical and physical process. He is particularly interested in reinforcement learning, multi-agent systems, and theory-driven AI for science. Much of his research explores how mathematical structure, from statistical mechanics to dynamical systems, can inform the design and analysis of learning systems. He completed his PhD at the University of Southampton in 2005 on applications of string theory to quantum chromodynamics, followed by postdoctoral appointments in Beijing, Santiago de Compostela, and Munich before joining UCT in 2013.

Target Audience:
a) PhD students, postdoctoral researchers, and early- to mid-career researchers
b) Participants with basic familiarity with machine learning or reinforcement learning
c) Researchers from computer science, applied mathematics, neuroscience, or robotics

This tutorial is designed as a 2-hour session pitched at a semi-technical level, introducing a range of topics without expecting mastery. No coding or specialised software is required, but some mathematics background is expected.
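To make the contrast concrete, the sketch below compares the ε-greedy heuristic mentioned above with a simple count-based intrinsic bonus, one of the hand-designed curiosity signals the session builds on. This is an illustrative sketch for a tabular bandit-style setting, not part of the session materials; the function names and the specific bonus form (beta / sqrt(1 + N(a))) are assumptions chosen for simplicity.

```python
import random


def epsilon_greedy(q_values, epsilon, rng=random):
    """Classic exploration heuristic: with probability epsilon pick a
    uniformly random action, otherwise pick the greedy action."""
    if rng.random() < epsilon:
        return rng.randrange(len(q_values))
    return max(range(len(q_values)), key=lambda a: q_values[a])


def count_bonus_action(q_values, counts, beta=1.0):
    """Count-based intrinsic bonus (a hand-designed curiosity signal):
    add beta / sqrt(1 + N(a)) to each action's value estimate, so that
    rarely tried actions look more attractive. Unlike epsilon-greedy,
    exploration here is directed rather than uniformly random."""
    scored = [q + beta / (1 + counts[a]) ** 0.5
              for a, q in enumerate(q_values)]
    return max(range(len(scored)), key=lambda a: scored[a])
```

For example, with value estimates [1.0, 0.5] and visit counts [100, 0], the count-based rule selects the second action because its large novelty bonus outweighs its lower estimated value, whereas greedy selection would always pick the first.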