
Mathematical Research at the University of Cambridge


In this talk, we investigate model robustness in reinforcement learning (RL) with the goal of reducing the sim-to-real gap in practice. We adopt the framework of distributionally robust Markov decision processes (RMDPs), which aims to learn a policy that optimizes worst-case performance when the deployed environment falls within a prescribed uncertainty set around the nominal MDP. Despite recent efforts, the sample complexity of RMDPs has remained largely unsettled regardless of the uncertainty set in use, and it has been unclear whether distributional robustness bears any statistical consequences when benchmarked against standard RL. Assuming access to a generative model that draws samples from the nominal MDP, we provide a near-optimal characterization of the sample complexity of RMDPs when the uncertainty set is specified via either the total variation (TV) distance or the χ² divergence. The algorithm studied here is a model-based method called distributionally robust value iteration, which is shown to be near-optimal over the full range of uncertainty levels.
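For orientation, the following is a minimal sketch (not part of the original abstract) of the standard RMDP objective and the robust Bellman update that distributionally robust value iteration iterates; the notation σ for the uncertainty level, P⁰ for the nominal transition kernel, and γ for the discount factor is assumed here rather than taken from the talk:

\[
V^{\pi,\sigma}(s) \;=\; \inf_{P \in \mathcal{U}^{\sigma}(P^{0})} \mathbb{E}_{\pi,P}\!\left[\sum_{t=0}^{\infty} \gamma^{t}\, r(s_t,a_t) \,\middle|\, s_0 = s\right],
\qquad
\mathcal{U}^{\sigma}_{\mathrm{TV}}\big(P^{0}_{s,a}\big) \;=\; \Big\{ P_{s,a} \in \Delta(\mathcal{S}) : \tfrac{1}{2}\big\lVert P_{s,a} - P^{0}_{s,a}\big\rVert_{1} \le \sigma \Big\},
\]
\[
\big(\widehat{\mathcal{T}}^{\sigma} V\big)(s,a) \;=\; r(s,a) \;+\; \gamma \inf_{P_{s,a} \in \mathcal{U}^{\sigma}(\widehat{P}^{0}_{s,a})} P_{s,a} V,
\qquad
V(s) \;=\; \max_{a}\, \big(\widehat{\mathcal{T}}^{\sigma} V\big)(s,a),
\]

where \(\widehat{P}^{0}\) denotes the empirical nominal kernel estimated from generative-model samples, and the χ² uncertainty set is obtained by replacing the TV ball with \(\{P_{s,a} : \chi^{2}(P_{s,a}\,\|\,P^{0}_{s,a}) \le \sigma\}\). Distributionally robust value iteration repeatedly applies this update to the empirical model; the talk characterizes how many samples suffice for the resulting policy to be near-optimal.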

Further information

Time:

Nov 10th 2025
11:30 to 12:10

Venue:

Seminar Room 1, Newton Institute

Speaker:

Yuting Wei (University of Pennsylvania)

Series:

Isaac Newton Institute Seminar Series