Preprints
https://doi.org/10.5194/gmd-2020-311
Submitted as: development and technical paper | 22 Oct 2020
Status: this preprint was under review for the journal GMD but the revision was not accepted.

Strengths and weaknesses of three Machine Learning methods for pCO2 interpolation

Jake Stamell, Rea R. Rustagi, Lucas Gloege, and Galen A. McKinley

Abstract. Using the Large Ensemble Testbed, a collection of 100 members from four independent Earth system models, we test three general-purpose Machine Learning (ML) approaches to understand their strengths and weaknesses in statistically reconstructing full-coverage surface ocean pCO2 from sparse in situ data. To apply the Testbed, we sample the full-field model pCO2 following the pattern in which real-world pCO2 was collected from 1982–2016 for each ensemble member. We then use ML approaches to reconstruct the full field and compare with the original model full-field pCO2 to assess reconstruction skill. We use feed-forward neural network (NN), XGBoost (XGB), and random forest (RF) approaches to perform the reconstructions. Our baseline is the NN, since this approach has previously been shown to be a successful method for pCO2 reconstruction; the XGB and RF allow us to test tree-based approaches. We perform comparisons to a test set consisting of 20% of the real-world-sampled data that are withheld from training. Statistical comparisons with this test set are equivalent to those that could be derived using real-world data. Unique to the Testbed is that it allows for comparison to all the "unseen" points to which the ML algorithms extrapolate. When compared to the test set, XGB and RF both perform better than NN based on a suite of regression metrics. However, when compared to the unseen data, performance degrades substantially with XGB and even more with RF. Degradation is comparatively small with NN, indicating a greater ability to generalize. Despite its larger degradation, in the final comparison to unseen data, XGB slightly outperforms NN and greatly outperforms RF, with the lowest mean bias and more consistent performance across Testbed members. All three approaches perform best in the open ocean and for seasonal variability, but performance drops off at longer timescales and in regions of low sampling, such as the Southern Ocean and coastal zones. For decadal variability, all methods overestimate the amplitude of variability and have moderate skill in reconstructing its phase. For this timescale, the NN's greater ability to generalize allows it to slightly outperform XGB. Taking all comparisons into account, we find XGB to be best able to reconstruct surface ocean pCO2 from the limited available data.
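The evaluation workflow described in the abstract can be sketched in a few lines of Python: fit the three regressors on the sampled pCO2 points, withhold 20% as a test set, and compare skill on both the test set and the withheld "unseen" full-field points. The arrays, predictor choices, and hyperparameters below are illustrative assumptions only, not the configuration used in the paper; scikit-learn's MLPRegressor stands in for the feed-forward NN.

    # Minimal sketch of the test-set vs. unseen-data comparison described in the
    # abstract. All data, features, and hyperparameters here are placeholders.
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import mean_squared_error
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.neural_network import MLPRegressor  # stand-in for the feed-forward NN
    from xgboost import XGBRegressor

    rng = np.random.default_rng(0)

    # Hypothetical predictors (e.g. SST, SSS, chlorophyll, location/time terms)
    # and target pCO2 at the observation-like "sampled" points.
    X_sampled = rng.normal(size=(5000, 6))
    y_sampled = rng.normal(loc=380.0, scale=30.0, size=5000)

    # Hypothetical full-field points never used in training ("unseen" data).
    X_unseen = rng.normal(size=(20000, 6))
    y_unseen = rng.normal(loc=380.0, scale=30.0, size=20000)

    # 20% of the sampled data is withheld as the test set, as in the paper.
    X_train, X_test, y_train, y_test = train_test_split(
        X_sampled, y_sampled, test_size=0.2, random_state=0)

    models = {
        "NN": MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0),
        "XGB": XGBRegressor(n_estimators=300, max_depth=6, learning_rate=0.1),
        "RF": RandomForestRegressor(n_estimators=300, random_state=0),
    }

    for name, model in models.items():
        model.fit(X_train, y_train)
        # Skill on the withheld test set (what real-world data alone can offer) ...
        rmse_test = mean_squared_error(y_test, model.predict(X_test)) ** 0.5
        # ... versus skill on the unseen full-field points (what the Testbed adds).
        rmse_unseen = mean_squared_error(y_unseen, model.predict(X_unseen)) ** 0.5
        print(f"{name}: test RMSE = {rmse_test:.2f}, unseen RMSE = {rmse_unseen:.2f}")

The gap between the test-set score and the unseen-data score is the quantity the Testbed makes visible: with real observations alone, only the first number can be computed.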

Data sets

ML methods for pCO2 reconstruction - Large Ensemble Testbed - NN/XGB/RF, Jake Stamell and Galen A. McKinley, https://doi.org/10.6084/m9.figshare.c.4568555.v2
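A short, hypothetical Python sketch for locating the archived output follows; it assumes the collection behind the DOI above (figshare collection 4568555) can be queried through figshare's public v2 API, and the endpoint and field names should be checked against the figshare documentation.

    # Hypothetical sketch: list the items in the figshare collection that holds
    # the reconstruction output. Endpoint and field names are assumptions.
    import json
    import urllib.request

    collection_id = 4568555  # from https://doi.org/10.6084/m9.figshare.c.4568555.v2
    url = f"https://api.figshare.com/v2/collections/{collection_id}/articles"

    with urllib.request.urlopen(url) as resp:
        articles = json.load(resp)

    for article in articles:
        # Each entry is one archived item with its own DOI and landing page.
        print(article.get("title"), article.get("doi"))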

Viewed

Total article views: 1,506 (including HTML, PDF, and XML)
  • HTML: 1,034
  • PDF: 441
  • XML: 31
  • Total: 1,506
  • BibTeX: 27
  • EndNote: 29
Cumulative views and downloads (calculated since 22 Oct 2020)

Viewed (geographical distribution)

Total article views: 1,244 (including HTML, PDF, and XML). Of these, 1,242 have geography defined and 2 are of unknown origin.

Cited

Latest update: 27 Mar 2024
Short summary
Using simulated surface ocean pCO2 from Earth system models, we test three Machine Learning methods (neural network, XGBoost, random forest) to assess their ability to reconstruct global coverage from sparse observations. Synthetic data allow us to train on real-world sampling patterns and then evaluate against the known full-coverage fields of the original simulations. All approaches perform best in the open ocean but struggle in regions of low sampling; overall, XGBoost performs best.