arxivst · stuff from arxiv that you should probably bookmark

Overcoming model simplifications when quantifying predictive uncertainty

Abstract · Mar 21, 2017 13:02

stat-ml math-pr physics-comp-ph physics-geo-ph stat-me

Arxiv Abstract

  • George M. Mathews
  • John Vial

It is generally accepted that all models are wrong – the difficulty is determining which are useful. Here, a useful model is considered one that is capable of combining data and expert knowledge, through an inversion or calibration process, to adequately characterize the uncertainty in predictions of interest. This paper derives conditions that specify which simplified models are useful and how they should be calibrated. To start, the notion of an optimal simplification is defined. This relates the model simplifications to the nature of the data and predictions, and determines when a standard probabilistic calibration scheme is capable of accurately characterizing uncertainty. Furthermore, two additional conditions are defined for suboptimal models that determine when the simplifications can be safely ignored. The first allows a suboptimally simplified model to be used in a way that replicates the performance of an optimal model. This is achieved through the judicious selection of a prior term for the calibration process that explicitly includes the nature of the data, predictions and modelling simplifications. The second considers the dependency structure between the predictions and the available data to gain insights into when the simplifications can be overcome by using the right calibration data. Furthermore, the derived conditions are related to the commonly used calibration schemes based on Tikhonov and subspace regularization. To allow concrete insights to be obtained, the analysis is performed under a linear expansion of the model equations, and the predictive uncertainty is characterized via second-order moments only.
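The setting the abstract describes can be made concrete with a small sketch (not the authors' code): a linearized forward model `d = G m + e`, a Tikhonov-regularized (Gaussian-prior) calibration, and predictive uncertainty tracked via second-order moments only. All names and parameter values (`G`, `H`, `sigma2`, `alpha`) are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch of linear-Gaussian calibration with Tikhonov regularization.
# Assumed setup (illustrative, not from the paper):
#   data       d = G m + e,  e ~ N(0, sigma2 * I)
#   prediction p = H m
#   prior      m ~ N(0, (1/alpha) * I)   <- the Tikhonov term
import numpy as np

rng = np.random.default_rng(0)

n_params, n_data = 5, 3
G = rng.normal(size=(n_data, n_params))   # linearized data operator
H = rng.normal(size=(1, n_params))        # linearized prediction operator
sigma2 = 0.1                              # data-noise variance
alpha = 1.0                               # Tikhonov weight (prior precision)

m_true = rng.normal(size=n_params)
d = G @ m_true + rng.normal(scale=np.sqrt(sigma2), size=n_data)

# Posterior second-order moments under the linear-Gaussian model:
#   cov  = (G^T G / sigma2 + alpha I)^{-1}
#   mean = cov @ G^T d / sigma2
cov = np.linalg.inv(G.T @ G / sigma2 + alpha * np.eye(n_params))
mean = cov @ G.T @ d / sigma2

# Predictive uncertainty, characterized by mean and variance only:
pred_mean = float(H @ mean)
pred_var = float(H @ cov @ H.T)

print(pred_mean, pred_var)
```

Because the posterior precision adds `G.T @ G / sigma2` to the prior precision, the posterior covariance never exceeds the prior covariance in the Loewner order, so the predictive variance `H cov H^T` is at most the prior predictive variance `H H^T / alpha`; the paper's question is when such a simplified, regularized calibration still yields an *accurate* (not merely smaller) characterization of that uncertainty.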

Read the paper (pdf) »