In the context of computer models, calibration is the process of estimating unknown simulator parameters from observational data; it is variously referred to as model fitting, parameter estimation or inference, an inverse problem, or model tuning. The need for calibration arises in most areas of science and engineering, and although the statistical methods used can vary substantially, the underlying approach is essentially the same and can be considered abstractly. In this talk, I will review the decisions that need to be taken when calibrating a model, and discuss a range of computational methods, both new and old, that can be used to compute Bayesian posterior distributions.
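As a hypothetical illustration (not drawn from the talk itself), the abstract calibration setup can be sketched in a few lines: a toy simulator with one unknown parameter, synthetic noisy observations, and a random-walk Metropolis sampler targeting the Bayesian posterior. The simulator, prior range, noise level, and step size below are all invented for the sketch.

```python
import math
import random

random.seed(0)

# Toy "simulator": a deterministic model output as a function of parameter theta.
def simulator(theta):
    return theta ** 2

# Synthetic observations: simulator output at a "true" theta plus Gaussian noise.
theta_true, sigma = 1.5, 0.2
data = [simulator(theta_true) + random.gauss(0.0, sigma) for _ in range(20)]

# Log-posterior: uniform prior on [0, 5] plus a Gaussian log-likelihood
# (constants dropped, since Metropolis only needs ratios).
def log_post(theta):
    if not 0.0 <= theta <= 5.0:
        return float("-inf")
    return sum(-0.5 * ((y - simulator(theta)) / sigma) ** 2 for y in data)

# Random-walk Metropolis: one classic computational method for the
# calibration posterior when the simulator is cheap to evaluate.
theta, lp = 2.0, log_post(2.0)
samples = []
for _ in range(5000):
    prop = theta + random.gauss(0.0, 0.1)
    lp_prop = log_post(prop)
    if math.log(random.random()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    samples.append(theta)

post = samples[1000:]  # discard burn-in
mean = sum(post) / len(post)
print(round(mean, 2))  # posterior mean; should lie near theta_true
```

In practice the simulator is usually expensive, which is what motivates the more sophisticated methods (e.g. emulation or likelihood-free inference) that a talk like this would cover; the sketch only fixes the structure of the problem.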