Articles | Volume 9, issue 5
Geosci. Model Dev., 9, 1827–1851, 2016

Methods for assessment of models | 12 May 2016

Inconsistent strategies to spin up models in CMIP5: implications for ocean biogeochemical model performance assessment

Roland Séférian1, Marion Gehlen2, Laurent Bopp2, Laure Resplandy3,2, James C. Orr2, Olivier Marti2, John P. Dunne4, James R. Christian5, Scott C. Doney6, Tatiana Ilyina7, Keith Lindsay8, Paul R. Halloran9, Christoph Heinze10,11, Joachim Segschneider12, Jerry Tjiputra11, Olivier Aumont13, and Anastasia Romanou14,15
  • 1CNRM, Centre National de Recherches Météorologiques, Météo-France/CNRS, 42 Avenue Gaspard Coriolis, 31057 Toulouse, France
  • 2LSCE/IPSL, Laboratoire des Sciences du Climat et de l'Environnement, Orme des Merisiers, CEA/Saclay 91198 Gif-sur-Yvette CEDEX, France
  • 3Scripps Institution of Oceanography, UCSD, La Jolla, CA, USA
  • 4Geophysical Fluid Dynamics Laboratory, NOAA, Princeton, NJ, USA
  • 5Fisheries and Oceans Canada and Canadian Centre for Climate Modelling and Analysis, Victoria, B.C., Canada
  • 6Marine Chemistry and Geochemistry Department, Woods Hole Oceanographic Institution, Woods Hole, MA, USA
  • 7Max Planck Institute for Meteorology, Bundesstraße 53, 20146 Hamburg, Germany
  • 8Climate and Global Dynamics Division, National Center for Atmospheric Research, Boulder, CO, USA
  • 9College of Life and Environmental Sciences, University of Exeter, Exeter, EX4 4RJ, UK
  • 10Geophysical Institute, University of Bergen, Bergen, Norway
  • 11Uni Research Climate, Bjerknes Centre for Climate Research, Bergen, Norway
  • 12Department of Geosciences, University of Kiel, Kiel, Germany
  • 13Sorbonne Universités (UPMC, Univ Paris 06)-CNRS-IRD-MNHN, LOCEAN-IPSL Laboratory, 4 Place Jussieu, 75005 Paris, France
  • 14Dept. of Applied Math. and Phys., Columbia University, 2880 Broadway, New York, NY 10025, USA
  • 15NASA-Goddard Institute for Space Studies at Columbia University, New York, NY, USA

Abstract. During the fifth phase of the Coupled Model Intercomparison Project (CMIP5), substantial efforts were made to systematically assess the skill of Earth system models. One goal was to check how realistically models could reproduce marine biogeochemical tracer distributions. In routine assessments, model historical hindcasts were compared with available modern biogeochemical observations. However, these assessments considered neither how close modeled biogeochemical reservoirs were to equilibrium nor the sensitivity of model performance to initial conditions or to the spin-up protocols. Here, we explore how the large diversity in spin-up protocols used for marine biogeochemistry in CMIP5 Earth system models (ESMs) contributes to model-to-model differences in the simulated fields. We take advantage of a 500-year spin-up simulation of IPSL-CM5A-LR to quantify the influence of the spin-up protocol on model ability to reproduce relevant data fields. Amplification of biases in selected biogeochemical fields (O2, NO3, Alk-DIC) is assessed as a function of spin-up duration. We demonstrate that a relationship between spin-up duration and assessment metrics emerges from our model results and holds when confronted with a larger ensemble of CMIP5 models. This shows that drift has implications for performance assessment, in addition to possibly aliasing estimates of climate change impact. Our study suggests that differences in spin-up protocols could explain a substantial part of model disparities, constituting a source of model-to-model uncertainty. This requires more attention in future model intercomparison exercises in order to provide quantitatively more reliable ESM results on marine biogeochemistry and carbon cycle feedbacks.
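The core mechanism described in the abstract, that an insufficiently spun-up biogeochemical reservoir carries an initial-condition bias into the assessment period and degrades skill-score metrics, can be illustrated with a minimal sketch. This is not the paper's actual analysis: the tracer values, the exponential relaxation toward equilibrium, the e-folding time, and all function names here are invented for illustration only.

```python
import numpy as np


def rmse(model, obs):
    """Root-mean-square error between a modeled field and observations."""
    return np.sqrt(np.mean((model - obs) ** 2))


# Synthetic illustration: a tracer field (think O2 in mmol m-3) whose
# modeled state relaxes exponentially toward its equilibrium during
# spin-up. All numbers below are invented, not taken from the paper.
rng = np.random.default_rng(0)
obs = rng.normal(200.0, 20.0, size=1000)            # "observed" field
equilibrium = obs + rng.normal(0.0, 5.0, size=1000)  # model's equilibrium state
initial = equilibrium + 40.0                         # biased initial condition


def spun_up_field(years, e_folding=300.0):
    """Modeled field after `years` of spin-up, relaxing toward equilibrium.

    The remaining initial-condition bias decays as exp(-years / e_folding),
    a crude stand-in for the slow adjustment of deep-ocean reservoirs.
    """
    remaining = np.exp(-years / e_folding)
    return equilibrium + (initial - equilibrium) * remaining


for years in (50, 200, 500, 1000):
    score = rmse(spun_up_field(years), obs)
    print(f"{years:5d} yr spin-up: RMSE = {score:.1f}")
```

Running the loop shows the RMSE shrinking monotonically with spin-up duration: a model assessed after a short spin-up looks worse against the same observations purely because of residual drift, which is the confounding effect the study quantifies across CMIP5 models.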

Short summary
This paper explores how the large diversity in spin-up protocols used for ocean biogeochemistry in CMIP5 models contributed to inter-model differences in modeled fields. We show that a link between spin-up duration and skill-score metrics emerges both from the results of the individual IPSL-CM5A-LR model and from an ensemble of CMIP5 models. Our study suggests that differences in spin-up protocols constitute a source of inter-model uncertainty which would require more attention in future intercomparison exercises.