Articles | Volume 10, issue 4
Methods for assessment of models
27 Apr 2017

Tuning without over-tuning: parametric uncertainty quantification for the NEMO ocean model

Daniel B. Williamson, Adam T. Blaker, and Bablu Sinha

Abstract. In this paper we discuss climate model tuning and present an iterative automatic tuning method from the statistical science literature. The method, which we refer to here as iterative refocussing (though also known as history matching), avoids many of the common pitfalls of automatic tuning procedures that are based on optimisation of a cost function, principally the over-tuning of a climate model due to using only partial observations. We avoid over-tuning by seeking to rule out parameter choices that we are confident could not reproduce the observations, rather than seeking the model that is closest to them (a procedure that risks over-tuning). We comment on the state of climate model tuning and illustrate our approach through three waves of iterative refocussing of the NEMO (Nucleus for European Modelling of the Ocean) ORCA2 global ocean model run at 2° resolution. We show how at certain depths the anomalies of global mean temperature and salinity in a standard configuration of the model exceed 10 standard deviations away from observations, and show the extent to which this can be alleviated by iterative refocussing without compromising model performance spatially. We show how model improvements can be achieved by simultaneously perturbing multiple parameters, and illustrate the potential of using low-resolution ensembles to tune NEMO ORCA configurations at higher resolutions.
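The core idea of ruling out parameter choices, rather than optimising towards the observations, is typically expressed through an implausibility measure: a candidate parameter setting is discarded only when the distance between its (emulated) model output and the observations is large relative to all acknowledged sources of uncertainty. The sketch below is a minimal, generic illustration of that principle, not the authors' implementation; the emulator, variances, and the cutoff of 3 (a conventional choice motivated by Pukelsheim's three-sigma rule) are all illustrative assumptions.

```python
import numpy as np

def implausibility(em_mean, em_var, obs, obs_var, disc_var):
    """Standardised distance between emulated output and observation.

    Denominator combines emulator variance, observation-error variance,
    and model-discrepancy variance (all assumed known here).
    """
    return np.abs(obs - em_mean) / np.sqrt(em_var + obs_var + disc_var)

# Toy wave of refocussing: sample a 2-D parameter space, emulate, rule out.
rng = np.random.default_rng(0)
candidates = rng.uniform(-1.0, 1.0, size=(1000, 2))     # parameter samples

# Stand-in for a statistical emulator of one model output (hypothetical).
em_mean = candidates[:, 0] ** 2 + candidates[:, 1]
em_var = np.full(len(candidates), 0.01)

obs, obs_var, disc_var = 0.5, 0.02, 0.02                # illustrative values
I = implausibility(em_mean, em_var, obs, obs_var, disc_var)

# Retain the "not ruled out yet" (NROY) space for the next wave.
nroy = candidates[I < 3.0]
```

Subsequent waves would re-run the simulator only inside the NROY space, refit the emulator there, and rule out further; the procedure never declares a single "best" parameter setting, which is how it sidesteps over-fitting to partial observations.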

Short summary
We present a method from the statistical science literature to assist in the tuning of global climate models submitted to CMIP. We apply the method to the NEMO ocean model and find choices of its free parameters that lead to improved representations of depth-integrated global mean temperature and salinity. We argue against automatic tuning procedures that involve optimising certain outputs of a model and explain why our method avoids common difficulties with, and arguments against, automatic tuning.