Articles | Volume 10, issue 11
Geosci. Model Dev., 10, 4285–4305, 2017

Methods for assessment of models | 27 Nov 2017

The Cloud Feedback Model Intercomparison Project (CFMIP) Diagnostic Codes Catalogue – metrics, diagnostics and methodologies to evaluate, understand and improve the representation of clouds and cloud feedbacks in climate models

Yoko Tsushima1, Florent Brient2, Stephen A. Klein3, Dimitra Konsta4, Christine C. Nam5, Xin Qu6, Keith D. Williams1, Steven C. Sherwood7, Kentaroh Suzuki8, and Mark D. Zelinka3
  • 1Met Office Hadley Centre, Exeter, UK
  • 2Centre National de Recherches Météorologiques, Toulouse, France
  • 3Cloud Processes Research and Modeling Group, Lawrence Livermore National Laboratory, Livermore, USA
  • 4National Observatory of Athens, Athens, Greece
  • 5Institute for Meteorology, Universitaet Leipzig, Leipzig, Germany
  • 6Department of Atmospheric and Oceanic Sciences, University of California, Los Angeles, USA
  • 7Climate Change Research Centre and ARC Centre of Excellence for Climate System Science, University of New South Wales, Sydney, Australia
  • 8Atmosphere and Ocean Research Institute, University of Tokyo, Kashiwa, Japan

Abstract. The CFMIP Diagnostic Codes Catalogue assembles cloud metrics, diagnostics and methodologies, together with programs, written by various members of the CFMIP community, to diagnose them from general circulation model (GCM) outputs. The catalogue aims to facilitate use of the diagnostics by the wider community studying climate and climate change. This paper describes the diagnostics and metrics currently in the catalogue, together with examples of their application to model evaluation studies and a summary of some of the insights these diagnostics have provided into the main shortcomings in current GCMs. Analysis of outputs from CFMIP and CMIP6 experiments will also be facilitated by the sharing of diagnostic codes via this catalogue.

Any code which implements diagnostics relevant to analysing clouds – including cloud–circulation interactions and the contribution of clouds to estimates of climate sensitivity in models – and which is documented in peer-reviewed studies can be included in the catalogue. We very much welcome additional contributions to further support community analysis of CMIP6 outputs.

Short summary
Cloud feedback is the largest uncertainty associated with estimates of climate sensitivity. Diagnostics have been developed to evaluate cloud processes in climate models. For this understanding to be reflected in better estimates of cloud feedbacks, it is vital to continue to develop such tools and to exploit them fully during the model development process. Code repositories have been created to store and document the programs which will allow climate modellers to compute these diagnostics.