Articles | Volume 18, issue 3
https://doi.org/10.5194/gmd-18-787-2025
Review and perspective paper | Highlight paper | 11 Feb 2025

Moving beyond post hoc explainable artificial intelligence: a perspective paper on lessons learned from dynamical climate modeling

Ryan J. O'Loughlin, Dan Li, Richard Neale, and Travis A. O'Brien

Related authors

Huge ensembles – Part 1: Design of ensemble weather forecasts using spherical Fourier neural operators
Ankur Mahesh, William D. Collins, Boris Bonev, Noah Brenowitz, Yair Cohen, Joshua Elms, Peter Harrington, Karthik Kashinath, Thorsten Kurth, Joshua North, Travis O'Brien, Michael Pritchard, David Pruitt, Mark Risser, Shashank Subramanian, and Jared Willard
Geosci. Model Dev., 18, 5575–5603, https://doi.org/10.5194/gmd-18-5575-2025, 2025
Huge ensembles – Part 2: Properties of a huge ensemble of hindcasts generated with spherical Fourier neural operators
Ankur Mahesh, William D. Collins, Boris Bonev, Noah Brenowitz, Yair Cohen, Peter Harrington, Karthik Kashinath, Thorsten Kurth, Joshua North, Travis A. O'Brien, Michael Pritchard, David Pruitt, Mark Risser, Shashank Subramanian, and Jared Willard
Geosci. Model Dev., 18, 5605–5633, https://doi.org/10.5194/gmd-18-5605-2025, 2025
A new metrics framework for quantifying and intercomparing atmospheric rivers in observations, reanalyses, and climate models
Bo Dong, Paul Ullrich, Jiwoo Lee, Peter Gleckler, Kristin Chang, and Travis A. O'Brien
Geosci. Model Dev., 18, 961–976, https://doi.org/10.5194/gmd-18-961-2025, 2025
Identifying atmospheric rivers and their poleward latent heat transport with generalizable neural networks: ARCNNv1
Ankur Mahesh, Travis A. O'Brien, Burlen Loring, Abdelrahman Elbashandy, William Boos, and William D. Collins
Geosci. Model Dev., 17, 3533–3557, https://doi.org/10.5194/gmd-17-3533-2024, 2024
Scalable Feature Extraction and Tracking (SCAFET): a general framework for feature extraction from large climate data sets
Arjun Babu Nellikkattil, Danielle Lemmon, Travis Allen O'Brien, June-Yi Lee, and Jung-Eun Chu
Geosci. Model Dev., 17, 301–320, https://doi.org/10.5194/gmd-17-301-2024, 2024
Executive editor
This perspective paper examines in detail the concept of explicability in climate models, whether conventional physics-based dynamical models or those incorporating machine-learning components. Everyone with an interest in climate models or their outputs would benefit from understanding the processes by which we can assess the importance and accuracy of these models and the methods by which it is possible to make sense of their outputs. This paper is a major contribution to that understanding. It is also very well written and should be widely read in the field.
Short summary
We draw from traditional climate modeling practices to make recommendations for machine-learning (ML)-driven climate science. Our intended audience is climate modelers who are relatively new to ML. We show how component-level understanding – obtained when scientists can link model behavior to parts within the overall model – should guide the development and evaluation of ML models. Better understanding yields a stronger basis for trust in the models. We highlight several examples to demonstrate.