
We prove the stability and global convergence of a coupled actor-critic gradient flow for infinite-horizon, entropy-regularised Markov decision processes (MDPs) with continuous state and action spaces, under linear function approximation and Q-function realisability. We consider a version of the actor-critic gradient flow in which the critic is updated using temporal-difference (TD) learning while the policy is updated using a policy mirror descent method on a separate timescale. We demonstrate stability and exponential convergence of the actor-critic flow to the optimal policy. Finally, we address the interplay between the timescale separation and the entropy regularisation and its effect on stability and convergence.
This is joint work with Denis Zorba and Lukasz Szpruch.
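Below is a minimal discrete-time sketch, in Python, of the kind of two-timescale, entropy-regularised actor-critic scheme the abstract describes: a TD-learning critic with linear function approximation (one-hot features here, so the Q-function is realisable) and a policy-mirror-descent actor. The toy finite MDP, step sizes, and all variable names are illustrative assumptions for exposition only; the talk itself concerns a continuous-time gradient flow on continuous state and action spaces, which this sketch does not reproduce.

```python
import numpy as np

# Illustrative sketch only: a synchronous, discrete-time analogue of a
# two-timescale actor-critic with an entropy-regularised objective.
# The MDP, features, and step sizes are assumptions, not the talk's setup.

rng = np.random.default_rng(0)
nS, nA = 5, 3            # toy finite state/action spaces
gamma, tau = 0.9, 0.1    # discount factor and entropy-regularisation strength

# Random transition kernel P[s, a, s'] and rewards r[s, a]
P = rng.dirichlet(np.ones(nS), size=(nS, nA))
r = rng.uniform(0.0, 1.0, size=(nS, nA))

# One-hot features phi(s, a): linear critic Q_w(s, a) = w . phi(s, a),
# so the (soft) Q-function is exactly realisable.
phi = np.eye(nS * nA).reshape(nS, nA, nS * nA)
w = np.zeros(nS * nA)

# Softmax policy, initialised uniform (stored as log-probabilities)
log_pi = np.full((nS, nA), -np.log(nA))

alpha_critic = 0.5   # fast timescale (critic)
eta_actor = 0.05     # slow timescale (actor), eta_actor << alpha_critic

for k in range(2000):
    Q = phi @ w              # shape (nS, nA)
    pi = np.exp(log_pi)

    # Critic: expected TD(0) step towards the soft Bellman target,
    # with soft value V(s') = E_{a'~pi}[ Q(s',a') - tau * log pi(a'|s') ]
    V = (pi * (Q - tau * log_pi)).sum(axis=1)
    target = r + gamma * (P @ V)
    td_error = target - Q
    w += alpha_critic * np.einsum('sa,sad->d', td_error, phi)

    # Actor: entropy-regularised policy mirror descent. The step
    #   argmax_p <Q(s,.), p> + tau * H(p) - (1/eta) * KL(p || pi(.|s))
    # has the closed form p ~ pi^{1/(1+eta*tau)} * exp(eta*Q/(1+eta*tau)).
    log_pi = (log_pi + eta_actor * Q) / (1.0 + eta_actor * tau)
    log_pi -= np.log(np.exp(log_pi).sum(axis=1, keepdims=True))  # normalise

pi = np.exp(log_pi)
print("greedy actions per state:", (phi @ w).argmax(axis=1))
print("policy concentration per state:", pi.max(axis=1).round(3))
```

The timescale separation is mimicked here simply by taking the critic step size much larger than the actor step size, so the critic approximately tracks the soft Q-function of the slowly changing policy; the entropy term both regularises the policy update and appears in the critic's soft Bellman target.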

Further information

Time:

13 November 2025, 16:30 to 17:10

Venue:

Seminar Room 1, Newton Institute

Speaker:

David Siska (University of Edinburgh)

Series:

Isaac Newton Institute Seminar Series