Mathematical Research at the University of Cambridge

Multi-agent reinforcement learning, despite its popularity and empirical success, faces significant scalability challenges in large-population dynamic games, especially those with heterogeneous players. Using fundamental linear-quadratic games as a running example, this talk will present recent frameworks that provide principled designs for efficient and scalable learning algorithms in such multi-agent systems.
In the first part, we will introduce the Graphon Mean Field Game approach and present provably convergent policy gradient algorithms for large-population games in which agents interact weakly through a symmetric graph. The second part of the talk will focus on the Alpha-Potential Game framework, which enables the development of efficient learning algorithms for asymmetric network games that go beyond mean-field approximations.
This talk is based on joint work with Yufei Zhang.
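
For context, a generic N-player linear-quadratic game with symmetric graph-weighted interactions can be sketched as follows; the matrices A, B, Ā, Q, R, the weights W_{ij}, and the aggregate z^i are illustrative placeholders rather than the exact model studied in the talk:

\[
x^i_{t+1} = A x^i_t + B u^i_t + \bar{A}\, z^i_t + \varepsilon^i_{t+1},
\qquad
z^i_t = \frac{1}{N} \sum_{j=1}^{N} W_{ij}\, x^j_t ,
\]
\[
J^i(u^1, \dots, u^N) = \mathbb{E} \sum_{t=0}^{T}
\Big[ (x^i_t - z^i_t)^\top Q\, (x^i_t - z^i_t) + (u^i_t)^\top R\, u^i_t \Big] ,
\]

where each player i chooses its control u^i to minimize its own cost J^i, and the symmetric weights W_{ij} encode weak interactions through the graph; as N grows, such weights are typically generated from a graphon, which gives rise to the Graphon Mean Field Game limit mentioned above.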

Further information

Time:

Nov 12th 2025
11:30 to 12:10

Venue:

Seminar Room 1, Newton Institute

Speaker:

Philipp Plank (Imperial College London)

Series:

Isaac Newton Institute Seminar Series