Mathematical Research at the University of Cambridge

Sampling from a probability distribution is a fundamental algorithmic task, and one way to do so is to run a Markov chain. The mixing time of a Markov chain characterizes how long we should run the chain until the law of the random variable is close to the stationary distribution. In this talk, we discuss the "independence time": how long we should run a Markov chain until the initial and final random variables are approximately independent, in the sense that their mutual information is small. We study this question for two natural Markov chains: the Langevin dynamics in continuous time, and the Unadjusted Langevin Algorithm in discrete time. When the target distribution is strongly log-concave, we prove that the mutual information between the initial and final random variables decreases exponentially fast along both Markov chains. These convergence rates are tight, and they lead to an estimate of the independence time that is comparable to the mixing time guarantees of these Markov chains. Our proofs use the strong data processing inequality and the regularity properties of the Langevin dynamics. Based on joint work with Jiaming Liang and Siddharth Mitra, https://arxiv.org/abs/2402.17067.
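
For intuition, here is a minimal numerical sketch (an illustration added here, not part of the talk): the Unadjusted Langevin Algorithm updates x_{k+1} = x_k - eta * grad f(x_k) + sqrt(2 * eta) * xi_k, with xi_k standard Gaussian noise, to sample from pi proportional to exp(-f). The Python code below runs it on the 1-strongly log-concave target pi = N(0, 1), i.e. f(x) = x^2 / 2. For this target the pair (X_0, X_k) is jointly Gaussian, so its mutual information equals -(1/2) log(1 - rho^2), where rho is their correlation; the step size and sample counts are arbitrary choices for illustration.

    import numpy as np

    # Unadjusted Langevin Algorithm (ULA) on the target pi = N(0, 1),
    # i.e. f(x) = x^2 / 2, so grad f(x) = x.  (Illustrative example:
    # eta and the sample counts below are arbitrary choices.)
    rng = np.random.default_rng(0)
    eta = 0.01            # step size
    n_chains = 200_000    # independent chains, to estimate the correlation

    x0 = rng.normal(size=n_chains)  # start at stationarity: X_0 ~ pi
    x = x0.copy()
    for k in range(1, 401):
        # ULA update: x <- x - eta * grad f(x) + sqrt(2 * eta) * noise
        x = x - eta * x + np.sqrt(2 * eta) * rng.normal(size=n_chains)
        if k % 100 == 0:
            rho = np.corrcoef(x0, x)[0, 1]
            # For jointly Gaussian (X_0, X_k): I(X_0; X_k) = -1/2 log(1 - rho^2)
            mi = -0.5 * np.log1p(-rho**2)
            print(f"k = {k:3d}   corr = {rho:+.4f}   MI = {mi:.5f} nats")

Here the correlation shrinks roughly like (1 - eta)^k, so the printed mutual information falls by a near-constant factor between checkpoints, matching the exponential decay described in the abstract.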

Further information

Time:

May 24th 2024
14:00 to 15:00

Venue:

MR12, Centre for Mathematical Sciences

Speaker:

Andre Wibisono, Yale University

Series:

Statistics