Articles | Volume 18, issue 3
https://doi.org/10.5194/gmd-18-787-2025
Review and perspective paper | Highlight paper | 11 Feb 2025

Moving beyond post hoc explainable artificial intelligence: a perspective paper on lessons learned from dynamical climate modeling

Ryan J. O'Loughlin, Dan Li, Richard Neale, and Travis A. O'Brien

Interactive discussion

Status: closed

Comment types: AC – author | RC – referee | CC – community | EC – editor | CEC – chief editor
  • CC1: 'Cross-validation, Symbolic Regression, Pareto include', Paul PUKITE, 16 Feb 2024
  • RC1: 'Comment on egusphere-2023-2969', Julie Jebeile, 06 Jun 2024
  • RC2: 'Comment on egusphere-2023-2969', Imme Ebert-Uphoff, 12 Jun 2024
  • RC3: 'Comment on egusphere-2023-2969', Yumin Liu, 29 Jun 2024
  • AC1: 'Comment on egusphere-2023-2969', Ryan O'Loughlin, 07 Sep 2024

Peer review completion

AR – Author's response | RR – Referee report | ED – Editor decision | EF – Editorial file upload
AR by Ryan O'Loughlin on behalf of the Authors (07 Sep 2024)  Author's response   Author's tracked changes 
EF by Anna Glados (16 Sep 2024)  Manuscript 
ED: Publish subject to minor revisions (review by editor) (31 Oct 2024) by Richard Mills
AR by Ryan O'Loughlin on behalf of the Authors (07 Nov 2024)  Author's response   Author's tracked changes   Manuscript 
ED: Publish as is (22 Nov 2024) by Richard Mills
ED: Publish as is (06 Dec 2024) by David Ham (Executive editor)
AR by Ryan O'Loughlin on behalf of the Authors (09 Dec 2024)
Editorial statement
This perspective paper examines in detail the concept of explicability in climate models, whether conventional physics-based dynamical models or those incorporating components based on machine learning. Everyone with an interest in climate models or their outputs would benefit from understanding the processes by which we can assess the importance and accuracy of these models and the methods by which it is possible to make sense of their outputs. This paper is a major contribution to that understanding. It is also very well written and should be widely read in the field.
Short summary
We draw from traditional climate modeling practices to make recommendations for machine-learning (ML)-driven climate science. Our intended audience is climate modelers who are relatively new to ML. We show how component-level understanding – obtained when scientists can link model behavior to parts within the overall model – should guide the development and evaluation of ML models. Better understanding yields a stronger basis for trust in the models. We highlight several examples to demonstrate these points.