Stochastic optimization is ubiquitous across the applied sciences. It can refer to a portfolio allocation problem, an optimized certainty equivalent or a risk measure computation, a standard regression or a deep learning problem, an image recognition task or a stochastic chemical kinetics model. At the heart of the optimization is a probability measure, or a model, which describes the system. It may come from data, simulation or a modelling effort, but there is always a degree of uncertainty about it. Wasserstein Distributionally Robust Optimization acknowledges this uncertainty by considering a nonparametric Wasserstein ball around the postulated model. However, such problems, while conceptually appealing, are often very hard to solve. In this talk I will discuss a series of works in which we develop sensitivity analysis with respect to the degree of model uncertainty. This offers a nonparametric sensitivity (or a "Greek") with respect to model uncertainty. I will highlight applications ranging from decision theory, through mathematical finance, to machine learning and statistics. I will start with simple one-step models which use classical Wasserstein distances and provide explicit formulae for the first-order correction to both the value function and the optimizer, and further extend the results to optimization under linear constraints. I will then cover dynamic settings in which model neighbourhoods are considered in the causal Wasserstein sense. I will discuss both discrete- and continuous-time results.

Talk based on joint works with Daniel Bartl, Samuel Drapeau, Yifan Jiang and Johannes Wiesel.
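To give a flavour of the first-order correction mentioned above, here is a minimal Monte Carlo sketch (the setup, loss function and variable names are my own illustrative choices, not taken from the talk). It assumes the known expansion, valid under suitable regularity, for the worst-case expected loss over a Wasserstein-p ball of radius delta: V(delta) = V(0) + delta * E[|grad f(X)|^q]^(1/q) + o(delta), with 1/p + 1/q = 1.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=100_000)   # baseline model: samples from a standard normal

f = lambda x: x**2             # illustrative loss function
grad_f = lambda x: 2.0 * x     # its gradient

p = 2.0                        # order of the Wasserstein ball
q = p / (p - 1.0)              # conjugate exponent, 1/p + 1/q = 1

V0 = f(X).mean()               # baseline value E[f(X)]
# first-order sensitivity ("Greek") to model uncertainty:
upsilon = (np.abs(grad_f(X)) ** q).mean() ** (1.0 / q)

delta = 0.01                   # radius of the Wasserstein ball
V_robust_approx = V0 + delta * upsilon
print(f"V(0) ~ {V0:.3f}, sensitivity ~ {upsilon:.3f}, "
      f"first-order robust value ~ {V_robust_approx:.3f}")
```

For this toy example the quantities are known in closed form (V(0) = 1 and sensitivity 2 under the standard normal), so the Monte Carlo estimates can be sanity-checked directly.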