
Mathematical Research at the University of Cambridge


The stochastic gradient (SG) method is a widely used approach for solving stochastic optimization problems, but its convergence is typically slow. Existing variance reduction techniques, such as SAGA, improve convergence by leveraging stored gradient information; however, they are restricted to settings where the objective function is a finite sum, and their performance degrades when the number of terms in the sum is large. In this work, we propose a novel approach that is best suited when the objective is given by an expectation over random variables with a continuous probability distribution. Our method constructs a control variate by fitting a linear model to past gradient evaluations using weighted discrete least squares, effectively reducing variance while preserving computational efficiency. We establish theoretical sublinear convergence guarantees and demonstrate the method's effectiveness through numerical experiments on random PDE-constrained optimization problems.
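The following toy sketch illustrates the idea of a least-squares control variate in SG, not the speaker's actual algorithm: on the quadratic objective f(x, ξ) = ½(x − ξ)², a linear model h(ξ) = a + bξ is fitted to stored (ξ, gradient) pairs by weighted least squares, and the variance-reduced update uses g − h(ξ) + E[h(ξ)]. The objective, the geometric weights, the window size, and the assumption that E[ξ] is known (so E[h(ξ)] is computable in closed form) are all choices made here for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 2.0, 1.0  # distribution of the random variable xi; its mean is assumed known


def grad(x, xi):
    # stochastic gradient of f(x, xi) = 0.5 * (x - xi)^2
    return x - xi


x = 10.0
history = []  # stored past (xi, gradient) evaluations
for k in range(200):
    xi = rng.normal(mu, sigma)
    raw = grad(x, xi)
    g = raw
    if len(history) >= 10:
        xis, gs = map(np.array, zip(*history[-50:]))
        # weighted discrete least squares: fit h(xi) = a + b*xi to stored
        # gradients, geometrically downweighting older samples
        sw = np.sqrt(0.9 ** np.arange(len(xis))[::-1])
        A = np.vstack([np.ones_like(xis), xis]).T
        (a, b), *_ = np.linalg.lstsq(A * sw[:, None], gs * sw, rcond=None)
        # control-variate estimator: g - h(xi) + E[h(xi)], with E[xi] = mu known
        g = raw - (a + b * xi) + (a + b * mu)
    history.append((xi, raw))
    x -= 0.1 * g  # plain SG step; a decaying stepsize schedule is omitted here

print(x)  # x approaches the minimizer mu
```

Once the fit is accurate (here b ≈ −1), the correction cancels most of the sampling noise in the raw gradient, so the iteration behaves almost like deterministic gradient descent while still paying only one gradient evaluation per step.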

Further information

Time:

Jul 3rd 2025
10:30 to 11:30

Venue:

Seminar Room 2, Newton Institute

Speaker:

Fabio Nobile (EPFL - Ecole Polytechnique Fédérale de Lausanne)

Series:

Isaac Newton Institute Seminar Series