This work is distributed under the Creative Commons Attribution 4.0 License.
Evaluation of atmospheric rivers in reanalyses and climate models in a new metrics framework
Abstract. We present a suite of new atmospheric river (AR) metrics that are designed for quick analysis of AR characteristics and statistics in gridded climate datasets such as model output and reanalysis. This package is expected to be particularly useful for climate model evaluation. The metrics include mean bias and spatial pattern correlation, which are efficient for diagnosing systematic AR biases in climate models. For example, the package identifies that in CMIP5 and CMIP6 models, AR tracks in the south Atlantic are positioned farther poleward compared to the ERA5 reanalysis, while in the south Pacific, tracks are generally biased towards the equator. For the landfalling AR peak season, we find that most climate models simulate a completely opposite seasonal cycle over western Africa. This tool is also useful for identifying and characterizing structural differences among different AR detectors (ARDTs). For example, ARs detected with the Mundhenk algorithm exhibit systematically larger size, width and length compared to the TempestExtremes (TE) method. The AR metrics developed from this work can be routinely applied for model benchmarking and during the development cycle to trace performance evolution across model versions or generations and set objective targets for the improvement of models. They can also be used by operational centers to perform near real-time climate and extreme events impact assessment as part of their forecast cycle.
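The two headline metrics named in the abstract, mean bias and spatial pattern correlation, are standard area-weighted statistics. A minimal NumPy sketch of how such quantities are commonly computed on a latitude–longitude grid (the function names are illustrative, not the package's actual API):

```python
import numpy as np

def mean_bias(model, ref, lat):
    """Area-weighted mean bias of a model field relative to a reference,
    weighting each grid row by cos(latitude)."""
    w = np.cos(np.deg2rad(lat))[:, None] * np.ones_like(model)
    return np.average(model - ref, weights=w)

def pattern_correlation(model, ref, lat):
    """Centered, area-weighted spatial (pattern) correlation."""
    w = (np.cos(np.deg2rad(lat))[:, None] * np.ones_like(model)).ravel()
    m, r = model.ravel(), ref.ravel()
    mm = m - np.average(m, weights=w)  # remove the weighted spatial mean
    rr = r - np.average(r, weights=w)
    cov = np.average(mm * rr, weights=w)
    return cov / np.sqrt(np.average(mm**2, weights=w) *
                         np.average(rr**2, weights=w))

# Toy check: identical fields give zero bias; a constant offset
# leaves the centered pattern correlation at 1.
lat = np.linspace(-80, 80, 9)
field = np.random.default_rng(0).random((9, 18))
print(mean_bias(field, field, lat))                 # 0.0
print(pattern_correlation(field, field + 0.1, lat)) # 1.0
```

Because the correlation is centered, it measures agreement in spatial structure only; a uniform bias affects `mean_bias` but not `pattern_correlation`, which is why the two metrics are reported side by side.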
Status: final response (author comments only)
RC1: 'Comment on gmd-2024-142', Anonymous Referee #1, 04 Sep 2024
In the manuscript titled “Evaluation of atmospheric rivers in reanalyses and climate models in a new metrics framework”, the authors develop a new model evaluation package to systematically diagnose atmospheric river (AR) biases. A key characteristic of this tool is its robustness to structural differences among AR detectors, which makes it well suited to intercomparing ARs as simulated by multiple climate models. There are still certain changes and clarifications that the authors should address prior to publication. For these reasons, I believe that the manuscript can be accepted for publication by GMD after minor revision. Below, I offer some general suggestions to the authors.
General comments:
- The paper appears to be rushed, with inadequate detail and poor organization, and it seems that it was not carefully reviewed after completion. For instance, the introduction lacks systematic coverage and fails to logically present the structure of the package, which is contrary to the reader’s expectations. The crucial description of the code amounts to the statement that the "full environment and python packages include AR metrics"; it would be hard for a reader to find which part of PMP performs the AR evaluation. The writing is not standardized, requiring readers to search back through the text for supporting numbers and clarification. For instance, in Section 3, although the topic is metrics, specific numbers (metric results) are rarely cited, so the qualitative analysis generally lacks data support and readers must compare the data themselves. I therefore recommend revising the writing with reference to similar highly cited papers on AR evaluation.
Specific comments:
- Line #93, the introduction of the paper should not include information just for the sake of writing an introduction. This statement is not an actual argument; it is unnecessary and does not require citation of references to support it. This entire section covers very basic common knowledge and should be removed. The metrics are extremely common and there's no need to list them.
- Line #144-145, the selected model (E3SM-HR, E3SM-LR) should include some basic tabulated information about its parameters, including grid resolution.
- Line #178, considering the target audience of the paper and the need for conciseness in scientific writing, the section introducing the computational methods should not be overly detailed. Lines 178 to 196 should be summarized in a few sentences. The same issue also appears in the pattern correlation section (lines #205-#214).
- Line #235, could the author list the percentage next to this range? The number of principal components needed to explain 95% of the pattern variance should be quite low, e.g., the first or second PC. Additionally, over what data range (domain and period) was this calculated?
- Line #272, what is the frequency of the data on which this calculation is based? Monthly?
- Figure 2, the color scale intensity and the magnitude of the numbers are inversely related. I suggest adjusting the scale to consistently increase. Unless all values pass the significance test and are noted in the figure caption, additional markers, such as an asterisk next to the numbers, should be added to indicate significance test results.
- Line #276, Fig.2 needs to be described, such as what percentage of models in a specific ocean area have correlations that pass the significance test, using numbers to support the qualitative description.
- Figure 3, this figure is missing subplot labels like (a), (b), etc. The scale for the difference should be placed at the bottom to avoid confusion.
- Line #280, the spatial pattern correlation in the S. Pacific is 0.88 and in the N. Atlantic is 0.98; these numbers in Fig. 2 and the spatial gradient in Fig. 3 should be discussed to support this statement. Additionally, just a suggestion: why not choose panels with larger differences for comparison, such as BCC (0.99) and IPSL-CM5A (0.82) in the Indian Ocean? The comparison would be more intuitive given the same study area. Line #290 could be updated to “be better interpreted together with AR frequency maps with spatial gradient”.
- Figure 4, the hatching obscures the colors; it is recommended to place asterisks next to the numbers, or to bold the numbers, in place of the hatching. As a further suggestion, such comparisons in Figure 4 could benefit from providing a global ocean-basin-average shape for each model, which would make deviations in latitude and longitude more intuitive.
- It is a suggestion only. Regarding Fig. S1, why not interpolate or downscale to the same resolution before comparison? It will still prove the difference in original data resolution. Different grid resolutions will inevitably introduce boundary issues. Fig. S1, the default color scheme makes the land boundaries unclear and needs to be adjusted. Fig. S2, the specific meanings of ARCONNECT and TECA are unclear.
- Figure 5, please consider that those with red-green color blindness may have difficulty distinguishing between these two colors.
- Line #325-#330, could you provide some explanation in this section of the results? For example, the characteristics of the models?
- Line #379-#383, it would be better to explain here how the different thresholds used to tag the moisture field contribute to the differences in AR shapes.
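On the comment about line #235: the fraction of pattern variance explained by each principal component is conventionally obtained from the EOF eigenvalues (or, equivalently, the squared singular values of the anomaly matrix). A minimal NumPy sketch of that calculation, illustrative only and not the authors' code, using synthetic data in place of the AR frequency fields:

```python
import numpy as np

# Synthetic stand-in for a (time, space) anomaly matrix,
# e.g. 120 months x 50 grid points. Not the paper's actual data.
rng = np.random.default_rng(1)
X = rng.standard_normal((120, 50))
X = X - X.mean(axis=0)  # remove the time mean at each grid point

# EOF decomposition via SVD; squared singular values give the
# variance carried by each PC/EOF mode.
_, s, _ = np.linalg.svd(X, full_matrices=False)
explained = s**2 / np.sum(s**2)

# Number of leading modes needed to reach 95% of the variance --
# the quantity the reviewer asks to be reported.
n95 = int(np.searchsorted(np.cumsum(explained), 0.95)) + 1
print(explained[:3], n95)
```

For structured geophysical fields the leading mode often dominates, so `n95` can be small; for the white-noise stand-in above it is large, which is exactly why reporting the actual percentages matters.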
Language issues:
- Line #78, the abbreviation ARDTs should be defined the first time it appears.
- Line #120, the abbreviation ARDT should be defined the first time it appears.
- Line #124, the TE ARDT should be defined the first time it appears.
- Line #130, “Mundhenk_v3 tags” could be replaced with “fixed-relative (Mundhenk_v3 tags)”
- Line #136, in general, the full name of a climate model abbreviation should be provided the first time it appears.
Citation: https://doi.org/10.5194/gmd-2024-142-RC1
AC1: 'Reply on RC1', Bo Dong, 24 Oct 2024
Thank you for your review, comments and suggestions! We are taking the time to reorganize the manuscript with your feedback, including rewriting targeted parts of the manuscript. A new section describing the metrics workflow and code structure is also added, and some technical discussion is being moved to an appendix. Relevant figures are also being revised as suggested. Please see our responses in the attached document.
RC3: 'Reply on AC1', Anonymous Referee #1, 05 Nov 2024
Comment 1: I can only see the reply to reviewer comment document, without the updated figures or supplementary material.
Comment 2: This is merely a suggestion, but the writing quality of this paper still falls short of excellent papers, especially in the logical structure of each section, determining which parts should be detailed or summarized, and in the explanations of phenomena and principles in the discussion section. I have not listed all of these in the comments, but I hope the authors can seek assistance from an experienced co-author in this project to conduct a line-by-line review and update the manuscript to improve its citation potential.
Comment 3: Figure 3 appears very blurry, and there is an extra border on the right side of the image.
Citation: https://doi.org/10.5194/gmd-2024-142-RC3
AC4: 'Reply on RC3', Bo Dong, 21 Nov 2024
Thank you for your comments. We have now uploaded the revised manuscript.
Citation: https://doi.org/10.5194/gmd-2024-142-AC4
CEC1: 'Comment on gmd-2024-142', Astrid Kerkweg, 06 Sep 2024
Dear authors,
in my role as Executive editor of GMD, I would like to bring to your attention our Editorial version 1.2:
https://www.geosci-model-dev.net/12/2215/2019/
This highlights some requirements of papers published in GMD, which is also available on the GMD website in the ‘Manuscript Types’ section:
http://www.geoscientific-model-development.net/submission/manuscript_types.html
In particular, please note that for your paper, the following requirement has not been met in the Discussions paper:
- "Code must be published on a persistent public archive with a unique identifier for the exact model version described in the paper or uploaded to the supplement, unless this is impossible for reasons beyond the control of authors. All papers must include a section, at the end of the paper, entitled "Code availability". Here, either instructions for obtaining the code, or the reasons why the code is not available should be clearly stated. It is preferred for the code to be uploaded as a supplement or to be made available at a data repository with an associated DOI (digital object identifier) for the exact model version described in the paper. Alternatively, for established models, there may be an existing means of accessing the code through a particular system. In this case, there must exist a means of permanently accessing the precise model version described in the paper. In some cases, authors may prefer to put models on their own website, or to act as a point of contact for obtaining the code. Given the impermanence of websites and email addresses, this is not encouraged, and authors should consider improving the availability with a more permanent arrangement. Making code available through personal websites or via email contact to the authors is not sufficient. After the paper is accepted the model archive should be updated to include a link to the GMD paper."
In your code and data availability section you only say that the metrics are available via GitHub, without even providing the link. Please make the metrics, as well as the other scripts (e.g., for plotting), available in a permanent archive as soon as possible.
Yours, Astrid Kerkweg (GMD executive editor)
Citation: https://doi.org/10.5194/gmd-2024-142-CEC1
AC2: 'Reply on CEC1', Bo Dong, 24 Oct 2024
Thank you for your comments and instructions. We are making another round of enhancements to the code, which will be available on Zenodo soon, along with the final submission of our revised manuscript.
Citation: https://doi.org/10.5194/gmd-2024-142-AC2
RC2: 'Comment on gmd-2024-142', Anonymous Referee #2, 22 Sep 2024
This paper presents a suite of new atmospheric river (AR) metrics that are designed for quick analysis of AR characteristics and statistics in gridded climate datasets such as model output and reanalysis. The study is very interesting, well organized and well written. This work could be published if the following comments are adequately addressed. The authors should show more convincing evidence of the robustness of their method. For example, for Figure 3, what about AR frequency in the South Pacific for BCC-CSM2-MR and ERA5, and their differences? What about AR frequency in the North Atlantic for CSIRO-Mk3-6-0 and ERA5, and their differences? In addition, what is the advantage of the present method compared with others? This should be discussed.
Citation: https://doi.org/10.5194/gmd-2024-142-RC2
AC3: 'Reply on RC2', Bo Dong, 24 Oct 2024
Thank you for your review and comments! We are taking the time to reorganize the manuscript with your feedback, including adding new discussions per your suggestion, rewriting targeted parts of the manuscript, revising relevant figures and adding new figures. Please see our responses in the attached document.