Understanding and predicting complex real-world phenomena lies at the core of empirical sciences, from physics and engineering to medicine and Earth system science. A fundamental challenge arises from the need to choose the appropriate scale at which to observe and model emergent behavior. Different scales often give rise to distinct physical mechanisms, each described by complex and computationally demanding models—such as systems of ordinary or partial differential equations—that must be repeatedly evaluated during calibration and inference.
Moreover, models derived from empirical observations are inevitably misspecified. As a result, calibration to specific instances of a problem may lead to biased or unstable predictions, a difficulty that is further exacerbated in large-data regimes and in the presence of outliers. These challenges motivate the development of inference methodologies that are both computationally scalable and robust to model mismatch.
In this talk, I will present an overview of recent advances in computational statistics that leverage AI-based tools to address these issues, with a focus on Bayesian and post-Bayesian inference. The latter provides a principled framework for reasoning under model misspecification, enabling more reliable uncertainty quantification across scales. I will review both stochastic approaches, such as Langevin and mean-field Langevin dynamics, and deterministic methods based on variational principles, including Stein variational gradient descent and neural-network-based score and diffusion models. These methods enable sampling from target distributions that may be available only up to a normalization constant or defined implicitly as optimizers of loss functions. Finally, I will highlight recent work aimed at improving the quality and robustness of the derived samples and the resulting inferences.
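As a minimal illustration of the kind of sampling problem referred to above (not the specific methods presented in the talk), the sketch below shows the unadjusted Langevin algorithm drawing approximate samples from a density known only up to its normalizing constant; the target, step size, and function names here are illustrative assumptions.

```python
import numpy as np

def grad_log_target(x):
    """Gradient of the log of an unnormalized target density.
    Illustrative choice: an isotropic Gaussian with mean (1, -1),
    treated as known only up to its normalizing constant."""
    mu = np.array([1.0, -1.0])
    return -(x - mu)  # gradient of log exp(-||x - mu||^2 / 2)

def ula_sample(grad_log_p, x0, step=1e-2, n_steps=5000, rng=None):
    """Unadjusted Langevin algorithm:
    x_{k+1} = x_k + step * grad log p(x_k) + sqrt(2 * step) * Gaussian noise."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.array(x0, dtype=float)
    samples = np.empty((n_steps, x.size))
    for k in range(n_steps):
        noise = rng.standard_normal(x.size)
        x = x + step * grad_log_p(x) + np.sqrt(2.0 * step) * noise
        samples[k] = x
    return samples

samples = ula_sample(grad_log_target, x0=np.zeros(2))
print("empirical mean ~", samples[1000:].mean(axis=0))  # should approach (1, -1)
```

The same interface (a gradient of an unnormalized log-density) underlies the stochastic and deterministic samplers mentioned in the abstract, which differ in how they use that gradient to transport particles toward the target.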