Articles | Volume 10, issue 11
https://doi.org/10.5194/gmd-10-4285-2017
Methods for assessment of models | 27 Nov 2017

The Cloud Feedback Model Intercomparison Project (CFMIP) Diagnostic Codes Catalogue – metrics, diagnostics and methodologies to evaluate, understand and improve the representation of clouds and cloud feedbacks in climate models

Yoko Tsushima, Florent Brient, Stephen A. Klein, Dimitra Konsta, Christine C. Nam, Xin Qu, Keith D. Williams, Steven C. Sherwood, Kentaroh Suzuki, and Mark D. Zelinka

Interactive discussion

Status: closed

Peer-review completion

AR: Author's response | RR: Referee report | ED: Editor decision
AR by Yoko Tsushima on behalf of the Authors (28 Jul 2017)
ED: Publish subject to minor revisions (Editor review) (05 Sep 2017) by Simon Unterstrasser
AR by Yoko Tsushima on behalf of the Authors (15 Sep 2017)
ED: Publish as is (21 Sep 2017) by Simon Unterstrasser
AR by Yoko Tsushima on behalf of the Authors (16 Oct 2017)
Short summary
Cloud feedback is the largest source of uncertainty in estimates of climate sensitivity. Diagnostics have been developed to evaluate cloud processes in climate models. For this understanding to translate into better estimates of cloud feedbacks, it is vital to continue developing such tools and to exploit them fully during the model development process. Code repositories have been created to store and document the programs that allow climate modellers to compute these diagnostics.