
Mathematical Research at the University of Cambridge

 

Reinforcement learning (RL) and optimal control share a deep intellectual heritage in addressing sequential decision-making under uncertainty. This tutorial develops a computer scientist’s perspective on RL theory—one that places generalization, sample efficiency, and computational tractability at the center of the analysis. A particular focus will be on the stylized setting of linear function approximation, which offers the best prospects for developing and understanding tractable algorithms. The tutorial will illustrate how this perspective shapes problem formulations, abstractions, and algorithmic insights through several representative results. It will conclude by considering how similar ideas might inform reasoning and planning in large language models, raising more questions than answers.
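As a small illustration of the linear function approximation setting mentioned above, the sketch below evaluates a fixed policy with least-squares temporal difference (LSTD) learning, approximating the value function as V(s) ≈ φ(s)ᵀw. The random-walk environment, feature map, and constants are illustrative assumptions, not material from the tutorial.

```python
import numpy as np

# Toy example (assumed, not from the talk): LSTD policy evaluation with
# linear value-function approximation V(s) ≈ phi(s)ᵀ w on a small random walk.

rng = np.random.default_rng(0)

n_states = 5          # states 0..4; episodes start in the middle
gamma = 0.95          # discount factor (assumed)
d = 3                 # number of features

def phi(s):
    """Simple polynomial features of the normalized state index."""
    x = s / (n_states - 1)
    return np.array([1.0, x, x * x])

def step(s):
    """Random-walk dynamics under a fixed (uniform) policy."""
    s_next = s + rng.choice([-1, 1])
    if s_next < 0:
        return None, 0.0          # terminate with reward 0 on the left
    if s_next >= n_states:
        return None, 1.0          # terminate with reward 1 on the right
    return s_next, 0.0

# Accumulate the LSTD statistics A = Σ φ(s)(φ(s) - γ φ(s'))ᵀ and b = Σ r φ(s).
A = np.zeros((d, d))
b = np.zeros(d)
for _ in range(2000):
    s = n_states // 2
    while s is not None:
        s_next, r = step(s)
        phi_s = phi(s)
        phi_next = phi(s_next) if s_next is not None else np.zeros(d)
        A += np.outer(phi_s, phi_s - gamma * phi_next)
        b += r * phi_s
        s = s_next

w = np.linalg.solve(A + 1e-6 * np.eye(d), b)   # small ridge term for stability
print("estimated values:", [round(float(phi(s) @ w), 3) for s in range(n_states)])
```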
 
The tutorial follows the new MIT Press textbook "Multi-Agent Reinforcement Learning: Foundations and Modern Approaches", available at www.marl-book.com.
 

Further information

Time: Nov 5th 2025, 09:30 to 12:30
Venue: Enigma Room, The Alan Turing Institute
Speaker: Csaba Szepesvári (University of Alberta)
Series: Isaac Newton Institute Seminar Series