Articles | Volume 18, issue 3
https://doi.org/10.5194/gmd-18-787-2025
Review and perspective paper | Highlight paper | 11 Feb 2025

Moving beyond post hoc explainable artificial intelligence: a perspective paper on lessons learned from dynamical climate modeling

Ryan J. O'Loughlin, Dan Li, Richard Neale, and Travis A. O'Brien

Viewed

Total article views: 935 (including HTML, PDF, and XML)
  • HTML: 702
  • PDF: 179
  • XML: 54
  • Total: 935
  • Supplement: 34
  • BibTeX: 26
  • EndNote: 24
Cumulative views and downloads (calculated since 30 Jan 2024)

Viewed (geographical distribution)

Total article views: 935 (including HTML, PDF, and XML), of which 933 with geography defined and 2 with unknown origin.
Latest update: 12 Feb 2025
Executive editor
This perspective paper examines in detail the concept of explainability in climate models, whether conventional physics-based dynamical models or those incorporating components based on machine learning. Everyone with an interest in climate models or their outputs would benefit from understanding the processes by which we can assess the importance and accuracy of these models and make sense of their outputs. This paper is a major contribution to that understanding. It is also very well written and should be widely read in the field.
Short summary
We draw from traditional climate modeling practices to make recommendations for machine learning (ML)-driven climate science. Our intended audience is climate modelers who are relatively new to ML. We show how component-level understanding – obtained when scientists can link model behavior to parts within the overall model – should guide the development and evaluation of ML models. Better understanding yields a stronger basis for trust in the models. We highlight several examples to demonstrate these points.