Volume 18, issue 3
https://doi.org/10.5194/gmd-18-787-2025
Review and perspective paper | Highlight paper | 11 Feb 2025

Moving beyond post hoc explainable artificial intelligence: a perspective paper on lessons learned from dynamical climate modeling

Ryan J. O'Loughlin, Dan Li, Richard Neale, and Travis A. O'Brien

Related authors

Huge Ensembles Part I: Design of Ensemble Weather Forecasts using Spherical Fourier Neural Operators
Ankur Mahesh, William Collins, Boris Bonev, Noah Brenowitz, Yair Cohen, Joshua Elms, Peter Harrington, Karthik Kashinath, Thorsten Kurth, Joshua North, Travis O'Brien, Michael Pritchard, David Pruitt, Mark Risser, Shashank Subramanian, and Jared Willard
EGUsphere, https://doi.org/10.48550/arXiv.2408.03100, 2024
Huge Ensembles Part II: Properties of a Huge Ensemble of Hindcasts Generated with Spherical Fourier Neural Operators
Ankur Mahesh, William Collins, Boris Bonev, Noah Brenowitz, Yair Cohen, Peter Harrington, Karthik Kashinath, Thorsten Kurth, Joshua North, Travis A. O'Brien, Michael Pritchard, David Pruitt, Mark Risser, Shashank Subramanian, and Jared Willard
EGUsphere, https://doi.org/10.48550/arXiv.2408.01581, 2024
Evaluation of atmospheric rivers in reanalyses and climate models in a new metrics framework
Bo Dong, Paul Ullrich, Jiwoo Lee, Peter Gleckler, Kristin Chang, and Travis O'Brien
Geosci. Model Dev. Discuss., https://doi.org/10.5194/gmd-2024-142, 2024
Revised manuscript accepted for GMD
Identifying atmospheric rivers and their poleward latent heat transport with generalizable neural networks: ARCNNv1
Ankur Mahesh, Travis A. O'Brien, Burlen Loring, Abdelrahman Elbashandy, William Boos, and William D. Collins
Geosci. Model Dev., 17, 3533–3557, https://doi.org/10.5194/gmd-17-3533-2024, 2024
Scalable Feature Extraction and Tracking (SCAFET): a general framework for feature extraction from large climate data sets
Arjun Babu Nellikkattil, Danielle Lemmon, Travis Allen O'Brien, June-Yi Lee, and Jung-Eun Chu
Geosci. Model Dev., 17, 301–320, https://doi.org/10.5194/gmd-17-301-2024, 2024

Related subject area

Earth and space science informatics
Remote-sensing-based forest canopy height mapping: some models are useful, but might they provide us with even more insights when combined?
Nikola Besic, Nicolas Picard, Cédric Vega, Jean-Daniel Bontemps, Lionel Hertzog, Jean-Pierre Renaud, Fajwel Fogel, Martin Schwartz, Agnès Pellissier-Tanon, Gabriel Destouet, Frédéric Mortier, Milena Planells-Rodriguez, and Philippe Ciais
Geosci. Model Dev., 18, 337–359, https://doi.org/10.5194/gmd-18-337-2025, 2025
Checking the consistency of 3D geological models
Marion N. Parquer, Eric A. de Kemp, Boyan Brodaric, and Michael J. Hillier
Geosci. Model Dev., 18, 71–100, https://doi.org/10.5194/gmd-18-71-2025, 2025
The effect of lossy compression of numerical weather prediction data on data analysis: a case study using enstools-compression 2023.11
Oriol Tintó Prims, Robert Redl, Marc Rautenhaus, Tobias Selz, Takumi Matsunobu, Kameswar Rao Modali, and George Craig
Geosci. Model Dev., 17, 8909–8925, https://doi.org/10.5194/gmd-17-8909-2024, 2024
GNNWR: an open-source package of spatiotemporal intelligent regression methods for modeling spatial and temporal nonstationarity
Ziyu Yin, Jiale Ding, Yi Liu, Ruoxu Wang, Yige Wang, Yijun Chen, Jin Qi, Sensen Wu, and Zhenhong Du
Geosci. Model Dev., 17, 8455–8468, https://doi.org/10.5194/gmd-17-8455-2024, 2024
Can AI be enabled to dynamical downscaling? A Latent Diffusion Model to mimic km-scale COSMO5.0_CLM9 simulations
Elena Tomasi, Gabriele Franch, and Marco Cristoforetti
EGUsphere, https://doi.org/10.48550/arXiv.2406.13627, 2024

Executive editor
This perspective paper examines in detail the concept of explicability in climate models, whether conventional physics-based dynamical models or models incorporating machine-learning components. Everyone with an interest in climate models or their outputs would benefit from understanding how we can assess the importance and accuracy of these models and how we can make sense of their outputs. This paper is a major contribution to that understanding. It is also very well written and should be widely read in the field.
Short summary
We draw from traditional climate modeling practices to make recommendations for machine-learning (ML)-driven climate science. Our intended audience is climate modelers who are relatively new to ML. We show how component-level understanding – obtained when scientists can link model behavior to parts within the overall model – should guide the development and evaluation of ML models. Better understanding yields a stronger basis for trust in the models. We highlight several examples to demonstrate these points.
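
As a purely illustrative aside (not drawn from the paper itself), the idea of component-level evaluation can be sketched in a few lines of Python: a stand-in "ML" component is scored offline against reference output, so that its behavior can be attributed to that component alone before it is coupled into a larger model. All names, the synthetic data, and the linear stand-in model below are hypothetical.

# Hypothetical sketch of component-level (offline) evaluation.
# A stand-in "ML" component (an ordinary least-squares fit) is tested in
# isolation against synthetic reference tendencies, so that errors can be
# attributed to this component rather than to a fully coupled model.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic model state (e.g., a column-integrated quantity) and the
# reference tendency the component is supposed to reproduce.
state = rng.uniform(0.0, 1.0, size=(1000, 1))
reference_tendency = 0.5 * state[:, 0] + 0.05 * rng.normal(size=1000)

# Stand-in for a trained ML component: a least-squares fit with intercept.
design = np.hstack([state, np.ones_like(state)])
coef, *_ = np.linalg.lstsq(design, reference_tendency, rcond=None)
predicted_tendency = design @ coef

# Component-level diagnostics, computed offline.
rmse = np.sqrt(np.mean((predicted_tendency - reference_tendency) ** 2))
bias = np.mean(predicted_tendency - reference_tendency)
print(f"offline component RMSE: {rmse:.3f}, bias: {bias:+.3f}")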