
Mathematical Research at the University of Cambridge


Data-driven algorithms offer a natural framework for making decisions in environments affected by uncertainty, where uncertainty is represented by means of data. Neural networks constitute one class of such data-driven decision-making tools. However, the "learned" decisions are inherently random, as they depend on the data used. In this talk we discuss how tools from statistical learning theory, based on the notions of compression and randomized optimization, offer a principled framework for analyzing the robustness properties of these learned decisions. Our results build "trust in data", accompanying data-driven solutions with probabilistic robustness guarantees that capture their generalization properties with respect to new data not included in the learning/training process. We review recent advancements in this area that make it possible to boost performance by means of a data outlier removal procedure. We then show how this methodology can be employed to build what will be referred to as safety-informed neural networks, which produce safety and reachability certificates for nonlinear dynamical systems and accompany them with prescribed probabilistic guarantees on their validity.
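The randomized-optimization idea behind such guarantees can be illustrated with a minimal toy sketch (a hypothetical 1-D example for intuition, not the speaker's implementation): a scenario solution is computed from sampled constraints, its "compression" is small (here a single sample determines it), and discarding a few extreme samples (outlier removal) improves performance at the cost of a weaker probabilistic guarantee.

```python
import random

random.seed(0)

# Toy scenario program: pick the smallest threshold x satisfying every
# sampled constraint "x >= sample". The solution is pinned down by one
# sample (a single support constraint), which is the kind of compression
# that underlies probabilistic generalization guarantees.
N = 1000
train = [random.gauss(0.0, 1.0) for _ in range(N)]
x_star = max(train)  # scenario solution: satisfies all N constraints

# Outlier removal: discard the k most extreme samples and re-solve.
# The solution improves (smaller threshold), but the guaranteed bound
# on the violation probability of new samples degrades with k.
k = 10
x_discard = sorted(train)[N - 1 - k]

# Empirical check on fresh data: the fraction of new samples violating
# each solution, which the theory bounds probabilistically.
test = [random.gauss(0.0, 1.0) for _ in range(5000)]
viol_star = sum(s > x_star for s in test) / len(test)
viol_discard = sum(s > x_discard for s in test) / len(test)
```

Here the trade-off is visible directly: `x_discard < x_star` (better performance for a minimization of the threshold), while its empirical violation rate is at least as large.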

Further information

Time:

Nov 11th 2025
10:50 to 11:30

Venue:

Seminar Room 1, Newton Institute

Speaker:

Kostas Margellos (University of Oxford)

Series:

Isaac Newton Institute Seminar Series