
Mathematical Research at the University of Cambridge

 

A central issue lying at the heart of online reinforcement learning (RL) is data efficiency. While a number of recent works have achieved asymptotically minimal regret in online RL, the optimality of these results is guaranteed only in a “large-sample” regime, imposing an enormous burn-in cost before their algorithms operate optimally. How to achieve minimax-optimal regret without incurring any burn-in cost has been an open problem in RL theory. We settle this problem in the context of finite-horizon inhomogeneous Markov decision processes. Specifically, we prove that a modified version of Monotonic Value Propagation (MVP) achieves a regret that matches the minimax lower bound over the entire range of sample sizes, essentially eliminating any burn-in requirement. The key technical innovation lies in a new regret decomposition strategy and a novel analysis paradigm that decouples complicated statistical dependencies, a long-standing challenge in the analysis of online RL in the sample-hungry regime. This is joint work with Zihan Zhang, Jason Lee and Simon Du.
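For readers wanting the standard formulation, here is a minimal sketch in the usual notation (the symbols S, A, H, K and the specific rates below are not taken from the abstract and may differ from the speaker's exact statement): in a finite-horizon inhomogeneous MDP with S states, A actions, horizon H and K episodes, the regret of a sequence of policies \pi_1,\dots,\pi_K and the known minimax lower bound are commonly written as

  \mathrm{Regret}(K) \;=\; \sum_{k=1}^{K} \Bigl( V_1^{\star}\bigl(s_1^{k}\bigr) - V_1^{\pi_k}\bigl(s_1^{k}\bigr) \Bigr),
  \qquad
  \inf_{\text{alg}} \sup_{\text{MDP}} \; \mathbb{E}\bigl[\mathrm{Regret}(K)\bigr] \;=\; \Omega\!\bigl(\sqrt{H^{3} S A K}\bigr).

Since the regret can never exceed the trivial bound HK, matching the lower bound “over the entire range of sample sizes” means achieving roughly \min\{\sqrt{H^{3} S A K},\, HK\} (up to logarithmic factors) for every K, rather than only once K is large enough to amortize a burn-in phase.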

Further information

Time:

Nov 11th 2025
09:40 to 10:20

Venue:

Seminar Room 1, Newton Institute

Speaker:

Yuxin Chen (University of Pennsylvania)

Series:

Isaac Newton Institute Seminar Series