Articles | Volume 10, issue 6
Methods for assessment of models
28 Jun 2017

Skill and independence weighting for multi-model assessments

Benjamin M. Sanderson, Michael Wehner, and Reto Knutti

Abstract. We present a weighting strategy for use with the CMIP5 multi-model archive in the fourth National Climate Assessment, which considers both the climatological skill of models over North America and the inter-dependency of models arising from common parameterizations or tuning practices. The method exploits information relating to the climatological mean state of a number of projection-relevant variables, as well as metrics representing long-term statistics of weather extremes. The weights, once computed, can be used to calculate weighted means and significance information from an ensemble containing multiple initial-condition members from potentially co-dependent models of varying skill. Two parameters in the algorithm determine the degree to which model climatological skill and model uniqueness are rewarded; these parameters are explored, and final values are defended for the assessment. The influence of model weighting on projected temperature and precipitation changes is found to be moderate, partly due to a compensating effect between model skill and uniqueness. However, more aggressive skill weighting, and weighting by targeted metrics, is found to have a more significant effect on inferred ensemble confidence in future patterns of change for a given projection.
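The two-parameter structure described above can be illustrated with a minimal sketch. The functional forms below (a Gaussian skill kernel controlled by a radius `d_q`, and an "effective repetition" count controlled by a similarity radius `d_u`) follow the general shape of Sanderson-style skill/independence schemes, but the specific code, names, and test values here are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def skill_independence_weights(dists_to_obs, intermodel_dists, d_q, d_u):
    """Illustrative sketch: combine a skill weight and an independence weight.

    dists_to_obs     : (n,) distance of each model's climatology from observations
    intermodel_dists : (n, n) symmetric inter-model distance matrix (zero diagonal)
    d_q, d_u         : the two free radii rewarding skill and uniqueness
    """
    dists_to_obs = np.asarray(dists_to_obs, dtype=float)
    intermodel_dists = np.asarray(intermodel_dists, dtype=float)

    # Skill term: models far from observations are exponentially down-weighted.
    skill = np.exp(-(dists_to_obs / d_q) ** 2)

    # Similarity term: near-duplicate models count as repetitions of one another.
    sim = np.exp(-(intermodel_dists / d_u) ** 2)

    # Effective repetition: 1 (the model itself) plus its similarity to others;
    # the diagonal self-similarity is excluded.
    n = len(dists_to_obs)
    effective_reps = 1.0 + (sim.sum(axis=1) - np.diag(sim))
    uniqueness = 1.0 / effective_reps

    w = skill * uniqueness
    return w / w.sum()  # normalize so the weights sum to 1
```

Under this sketch, two identical models of equal skill split the weight a single unique model would receive, which is the compensating behavior between replication and skill that the abstract refers to.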

Short summary
How should climate model simulations be combined to produce an overall assessment that reflects both their performance and their interdependencies? This paper presents a strategy for weighting climate model output such that models that are replicated, or that perform poorly in a chosen set of metrics, are appropriately down-weighted. We perform sensitivity tests to show how the results of the method depend on the chosen variables and parameter values.