Mathematical Research at the University of Cambridge

We propose a gradient-enhanced algorithm for high-dimensional scalar or vectorial function approximation. The algorithm proceeds in two steps: first, we reduce the input dimension by learning the relevant input features from gradient evaluations, and second, we regress the function output against the pre-learned features. To ensure theoretical guarantees, we construct the feature map as the first components of a diffeomorphism, which we learn by minimizing an error bound obtained from the Poincaré inequality applied either in the input space or in the feature space. This leads to two different strategies, which we compare both theoretically and numerically and relate to existing methods in the literature. In addition, we propose a dimension-augmentation trick to increase the approximation power of feature detection. In practice, we construct the diffeomorphism using coupling flows, a particular class of invertible neural networks. Numerical experiments on various high-dimensional functions show that the proposed algorithm outperforms state-of-the-art competitors, especially on small datasets.
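The talk constructs the feature map with coupling flows; as a rough illustration of the two-step idea (not the speaker's method), the sketch below shows the linear special case, in the spirit of active subspaces: features are learned from gradient evaluations via an SVD, then the output is regressed against the learned feature. The test function, sample sizes, and polynomial regression are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n, m = 10, 200, 1  # input dimension, sample count, feature dimension (all illustrative)

# Hypothetical test function f(x) = sin(w . x): intrinsically one-dimensional
# along a hidden direction w, so a single learned feature suffices.
w = rng.normal(size=d)
w /= np.linalg.norm(w)
f = lambda X: np.sin(X @ w)
grad_f = lambda X: np.cos(X @ w)[:, None] * w  # analytic gradient rows

X = rng.normal(size=(n, d))
G = grad_f(X)  # gradient evaluations, shape (n, d)

# Step 1: learn the feature map from gradients.
# In this linear special case, the leading right singular vectors of the
# gradient matrix span the directions along which f varies most.
_, _, Vt = np.linalg.svd(G, full_matrices=False)
U = Vt[:m].T          # (d, m) linear feature map z = U^T x
Z = X @ U

# Step 2: regress the output against the pre-learned feature
# (a 1-D polynomial fit stands in for a generic regressor).
coeffs = np.polyfit(Z[:, 0], f(X), deg=9)

# Evaluate on fresh test points.
X_test = rng.normal(size=(100, d))
pred = np.polyval(coeffs, (X_test @ U)[:, 0])
err = np.max(np.abs(pred - f(X_test)))
print(err < 0.1)
```

With nonlinear feature maps, the same two-step structure applies, but the projection `U` is replaced by the first components of a learned diffeomorphism (e.g. a coupling flow), and the Poincaré-type bound controls the regression error.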

Further information

Time:

May 8th 2025
14:30 to 15:00

Venue:

Seminar Room 1, Newton Institute

Speaker:

Clémentine Prieur (Université Grenoble Alpes)

Series:

Isaac Newton Institute Seminar Series