
Mathematical Research at the University of Cambridge


We propose a new method for assessing calibration based on revisions of probabilistic forecasts. A revision describes how a forecast for an outcome changes as the forecast is updated over time. We begin by extending the definition of a forecast revision from point forecasts to probabilistic forecasts. We then show that if the forecasts are well calibrated (efficient), then the revisions form a sequence of independent, uniformly distributed random variables. This property motivates new statistical tests to detect miscalibration.
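The abstract's central claim, that well-calibrated forecasts yield independent uniform revisions, suggests a simple way to test for miscalibration. The following is a minimal Python sketch, not taken from the talk: it assumes the revisions have already been computed as the talk defines them (here they are simply simulated under the null of calibration, where they are i.i.d. Uniform(0, 1)), and applies two off-the-shelf checks, a Kolmogorov-Smirnov test for uniformity and a lag-1 correlation test for serial independence. The variable `revisions` is a hypothetical placeholder for the quantities defined in the talk.

```python
# Sketch only: illustrates testing the claimed property that, under
# calibration, forecast revisions are i.i.d. Uniform(0, 1). This is
# not the speaker's method; `revisions` is a hypothetical stand-in.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulate the null case: revisions drawn i.i.d. from Uniform(0, 1).
# In practice these would be computed from an actual forecast sequence.
revisions = rng.uniform(0.0, 1.0, size=500)

# Check marginal uniformity with a Kolmogorov-Smirnov test.
ks_stat, ks_p = stats.kstest(revisions, "uniform")

# Check serial independence via the lag-1 correlation of the revisions.
r, corr_p = stats.pearsonr(revisions[:-1], revisions[1:])

print(f"KS uniformity test: statistic={ks_stat:.3f}, p={ks_p:.3f}")
print(f"Lag-1 correlation:  r={r:.3f}, p={corr_p:.3f}")

# Small p-values in either test would signal miscalibration:
# non-uniform or serially dependent revisions.
```

These two checks are only the most basic instances of the tests the abstract motivates; higher-lag dependence or joint tests could be substituted in the same framework.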

Further information

Time:

6th June 2025, 14:00 to 15:00

Venue:

Seminar Room 1, Newton Institute

Speaker:

Christopher Ferro (University of Exeter)

Series:

Isaac Newton Institute Seminar Series