This work is distributed under the Creative Commons Attribution 4.0 License.
Evaluation of CORDEX ERA5-forced ‘NARCliM2.0’ regional climate models over Australia using the Weather Research and Forecasting (WRF) model version 4.1.2
Abstract. Understanding regional climate model (RCM) capabilities to simulate current climate informs model development and climate change assessments. This is the first evaluation of the NARCliM2.0 ensemble of Weather Research and Forecasting (WRF) RCMs driven by ECMWF Reanalysis v5 (ERA5) over Australia at 20 km resolution, contributing to CORDEX-CMIP6 Australasia, and over south-eastern Australia at convection-permitting resolution (4 km). RCM performance in simulating mean and extreme maximum temperature, minimum temperature, and precipitation is evaluated against observations at annual, seasonal, and daily timescales, and compared to the corresponding performance of previous-generation CORDEX-CMIP5 Australasia ERA-Interim-driven RCMs. ERA5-RCMs substantially reduce cold biases for mean and extreme maximum temperature versus ERA-Interim-RCMs, with small mean absolute biases (0.54 K and 0.81 K, respectively), but show no improvement for minimum temperature. ERA5-RCM precipitation simulations show lower bias magnitudes versus ERA-Interim-RCMs, though dry biases remain over monsoonal northern Australia, and improvements in extreme precipitation are principally evident at convection-permitting 4 km resolution. Although ERA5 reanalysis data confer improvements over ERA-Interim, only the improvements in precipitation simulation by ERA5-RCMs are attributable to the ERA5 driving data; the improvements for maximum temperature are more attributable to model design choices. This suggests improved driving data do not guarantee all aspects of RCM performance improvement, with potential implications for CMIP6-forced dynamical downscaling. This evaluation shows that NARCliM2.0 ERA5-RCMs provide valuable reference simulations for upcoming CMIP6-forced downscaling over CORDEX-Australasia and are informative datasets for climate impact studies. Using a subset of these RCMs to simulate CMIP6-forced climate projections over CORDEX-Australasia and/or at convection-permitting scales could yield tangible benefits in simulating regional climate.
Status: closed
-
CEC1: 'Comment on gmd-2024-41', Juan Antonio Añel, 12 May 2024
Dear authors,
Unfortunately, after checking your manuscript, it has come to our attention that it does not comply with our "Code and Data Policy".
https://www.geoscientific-model-development.net/policies/code_and_data_policy.html
First, for the WRF 4.1.2 version that you use, you link a GitHub page. However, GitHub is not a suitable repository for scientific publication. GitHub itself instructs authors to use other alternatives for long-term archival and publishing, such as Zenodo. This is clearly established in our policy. Secondly, it is not enough to provide the main WRF 4.1.2 code; you must also provide the specific configuration files and/or adaptations to the model used for your experiment.
Therefore, please, publish the code requested in one of the appropriate repositories, and reply to this comment with the relevant information (link and DOI) as soon as possible, as it should have been available before the Discussions stage.
Please be aware that if you do not fix this problem, we will have to reject your manuscript for publication in our journal. I should note that, given this lack of compliance with our policy, your manuscript should not have been accepted in Discussions. Therefore, the current situation with your manuscript is irregular.
Juan A. Añel
Geosci. Model Dev. Executive Editor
Citation: https://doi.org/10.5194/gmd-2024-41-CEC1
-
CC1: 'Reply on CEC1', Jatin Kala, 15 May 2024
Dear Juan,
We apologize for this oversight on our behalf. As requested, we have now published the WRF 4.1.2 code as used in this study, as well as all model configuration files, on Zenodo at: https://doi.org/10.5281/zenodo.11189898
We will add this to the code and data section when we revise the manuscript. We hope this satisfies your concerns; please let us know if anything else is required.
Kind regards
Jatin
Citation: https://doi.org/10.5194/gmd-2024-41-CC1
-
RC1: 'Comment on gmd-2024-41', Sugata Narsey, 21 May 2024
Manuscript title: Evaluation of CORDEX ERA5-forced ‘NARCliM2.0’ regional climate models over Australia using the Weather Research and Forecasting (WRF) model version 4.1.2
Authors: Di Virgilio et al
Reviewed by: Sugata Narsey (BoM)
Recommendation: Minor revisions
This manuscript documents the evaluation of the NARCliM2.0 regional climate model (RCM) driven using ECMWF Reanalysis v5 (ERA5). The manuscript is well-written, and systematically addresses key aspects of the evaluation of their model. They go further than a basic evaluation, providing useful insights into the regional impacts of multiple parameterisation configurations of the model. They find that changing the physics choices in their model can have quite dramatic effects on regional climate biases for Australia. A nice addition to this study is their analysis of the relative sources of bias estimated by interchanging their RCM with the previous version of the NARCliM RCM, and also interchanging the driving ERA5 reanalysis with the ERA Interim reanalysis data previously used. By doing so they find that the RCM set-up appears to be a stronger influence on the mean state bias in their regional climate simulations compared to the choice of driving reanalysis data. This manuscript forms an important scientific basis for the production of a nationally significant projections dataset and is an important contribution to regional modelling for the Southern Hemisphere.
I have some comments around evaluation choices, and around specific wording, especially with regard to the claims of improvement for precipitation, since the biases over northern Australia appear significant. Overall, however, this manuscript is appropriate for publication, and my recommendation is for minor revisions.
Main comments:
- The distribution plots show nationally aggregated data, however I find this problematic since the map plots show that Tmax and precipitation in particular have opposing biases in the northern and southern regions (Fig 3 and Fig 7). Additionally, the bimodal distribution of Tmin in Fig 4 might be a function of mixing two climatically distinct regions. Why not split it into at least two regions? Then you can clearly state the improvements for the southern parts of Australia.
- The evaluation conducted here focuses on rainfall and temperature, which I agree are the most important variables to consider. However, some investigation of the circulation state in the RCMs may be of use to help understand the systematic biases, for example over northern Australia (R3-7), and over SE Australia (R2-4).
- The statements around general improvements in precipitation are not well-founded in my view, since the dry biases over northern Australia are large compared to NARCliM1.5 runs. I would prefer if the claims were either made specific to the inner domain, or else more carefully described in this manuscript. Alternatively the authors can show whether the biases in the NARCliM2.0 runs (especially for northern Australia) are actually smaller as a percentage of annual mean climatological rain.
- It is outside the scope of this study, however it would be interesting to know if the different physics configurations and their associated regional climate biases have much bearing on the future change signal in the model when holding the driving global model data constant. Similarly, it would be interesting to know if the evaluation of the ERA5 runs with different physics configurations translates in an evaluation of the CMIP6 historical scenario runs.
- Also outside the scope of this study, but it would be interesting to intercompare the dry-bias tendency over northern Australia in the NARCliM2.0 runs with other regional simulations using different models that introduced similar dry biases for the Australian monsoon. Although such a bias may be undesirable, there is a real opportunity here to shed light on some fundamental characteristics of the dynamics of the Australian monsoon, in particular the feedback mechanisms associated with land surface behaviour, convection, and large-scale circulation.
Specific comments:
L63-64: It’s now May 2024 and this statement is outdated; I believe the BARPA paper is now published (https://gmd.copernicus.org/articles/17/731/2024/gmd-17-731-2024-discussion.html) and there may be others by now. Might be worth a quick search.
L205-208: Did you follow the same experiment design in all other respects except for run length? Fig 13 shows the inner domain. Are these ERA5 swapped with ERA Interim sensitivity experiments conducted at the fine-scale for the inner domain? Maybe it’s specified somewhere but I couldn’t see it. Worth clarifying here.
L210-211: The short periods are probably fine, but why not just do a quick bootstrap check to see how representative 14-month periods are for rainfall in the longer run period using either AGCD or your simulations?
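A minimal sketch of the kind of bootstrap check suggested here, in Python; `agcd_monthly` (an area-averaged monthly rainfall series), the 14-month window, and all names are illustrative assumptions rather than the authors' method:
```python
import numpy as np

def bootstrap_window_means(monthly_rain, window=14, n_boot=10000, seed=0):
    """Sample contiguous `window`-month blocks from a monthly rainfall
    series and return the distribution of their means."""
    rng = np.random.default_rng(seed)
    starts = rng.integers(0, len(monthly_rain) - window + 1, size=n_boot)
    return np.array([monthly_rain[s:s + window].mean() for s in starts])

# e.g. percentile rank of the evaluated 14-month period within the sampled spread:
# dist = bootstrap_window_means(agcd_monthly)
# rank = 100.0 * (dist < evaluated_period_mean).mean()
```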
L231-233: Based on Fig 3 it seems Fig 2 might obscure some compensating biases between north and south. Is this the case?
Fig 3: The stippling is difficult to see. Can you improve somehow?
L270 and Fig 4: Is the bimodality due to mixing different climate zones?
Fig 6: Why not show log(P)? Might be easier to see differences and similarities.
Fig 7: Would the biases over northern Australia look this dramatic if you showed it as a percent of AGCD climatology? It’s hard to know for example which absolute bias is more concerning between runs, since a small absolute bias in the dry regions may matter more than a large absolute bias in the monsoon region.
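A minimal sketch of the percentage view suggested here, assuming `model_clim` and `agcd_clim` are co-registered annual-mean rainfall fields; the names and the arid-cell threshold are illustrative:
```python
import numpy as np

def relative_bias_percent(model_clim, obs_clim, min_rain=0.1):
    """Precipitation bias as a percentage of the observed climatology.
    Cells with near-zero climatological rain are masked (NaN) so that
    arid regions do not produce spurious, very large percentages."""
    obs = np.where(obs_clim > min_rain, obs_clim, np.nan)
    return 100.0 * (model_clim - obs) / obs

# rel_bias = relative_bias_percent(model_clim, agcd_clim)
```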
Fig 13: If these are not 4 km explicit-convection runs, then perhaps show a larger domain? Otherwise, see comment for L205-208 above.
L459: The claim of general improvement in precipitation and even max temperature is not quite true in my opinion. The bias over northern Australia appears large and systematic. I think it is appropriate to claim general improvement over the inner domain though. See main comment 3.
Section 4.1: you note the dry bias vs wet bias may relate to microphysics scheme. Looking at fig 7 the three runs (R2-4) with MYNN2 boundary layer scheme are all wet biased over SE Australia. Is this a coincidence?
L492: suggest “especially over northern Australia where all other runs contain a systematic dry-bias”.
L568: Again, I don’t agree with this claim of general improvement for precipitation.
L577: It also appears important here at coarser scales when precipitation is parameterised, based on Fig 7.
L584-585: Potentially also in the rainfall biases, especially where dynamical feedbacks are known to occur in the real world such as over northern Australia during the summer monsoon season.
Citation: https://doi.org/10.5194/gmd-2024-41-RC1
-
AC1: 'Reply on RC1', Giovanni Di Virgilio, 23 Jul 2024
We are very grateful to the reviewer for assessing our work, for their constructive input and for recommending publication following Minor Revisions. We have carefully gone through all of the reviewer's comments and suggestions and responded to these point-by-point in Table 1 on pp. 2-16 in the document attached to this response, please see: 'DiVirgilio_et_al_Final_Response_Replies_to_Reviewers_1_and_2_2024_07_22.pdf'.
-
RC2: 'Comment on gmd-2024-41', Anonymous Referee #2, 23 May 2024
This paper provides an evaluation of the representation of precipitation and diurnal screen-level temperatures from a set of 7 model configurations of the NARCliM2.0 regional climate model. By benchmarking model performance against a previous version of NARCliM and repeating the analysis of a previous paper, the authors follow an objective, pre-determined framework, which is to be commended. NARCliM2.0 is shown to have a reduction in outlier model configurations with large temperature biases in excess of 2 K. Their results highlight model dependence, particularly of precipitation, on the choice of parametrisation schemes and identify a pervasive dry bias in northern Australia.
I have comments around the description and justifications of model configuration choices and some more minor comments on the presentation of the results. Overall, this is an important and well written manuscript suitable for publication following these revisions.
General comments
There are a large number of models, statistics and maps presented in this paper which makes it difficult to form an overall view of the improvement in model performance across generations. I would suggest you include a summary table of the mean absolute error, bias magnitudes and Perkins Skill Scores reported across the paper and supplementary materials.
The text at lines 137-140 suggests that the NARCliM2.0 model configurations were selected based on empirical performance during a single year, and that compatibility between parametrisation schemes or recommendations from the WRF model developers may not have been considered. Please add some text to provide assurance that, for each of the 7 selected configurations, the combination of parametrisation schemes is physically sensible. For example, have these combinations been used and recommended by separate studies, or developed and tested in combination? Do they at least avoid schemes developed specifically for use with a different setup, or combinations precluded in the WRF user guide? Are the PBL schemes compatible with the surface schemes, and is shallow convection appropriately dealt with by the combination of PBL and convection schemes?
More details on common aspects of the experimental design would be welcome: how are ozone and aerosols represented in these models? How frequently does the SST update? What datasets have been used as static inputs to the land-surface schemes (vegetation fraction etc)?
As the differences between the parametrisation schemes form a large component of this paper, please provide references for the schemes. Some explanation of the dynamic vegetation scheme would also be welcome.
The selection of RCMs for this study comes across as ad hoc and incomplete: HadRM3P, RegCM4 and REMO2015 also contributed ERA-Interim-driven simulations to CORDEX-CMIP5 Australasia but have not been evaluated. Additionally, three ERA5-driven CORDEX-CMIP6 Australasia simulations appear to have been published before the submission date. While including extra models at this stage may be out of scope, the paper may sit better in the literature if it focuses purely on NARCliM/WRF-based models.
On a similar note, you may consider acknowledging that NARCliM2.0 will contribute to an ensemble of downscaled climate projections for Australasia (e.g. https://www.sciencedirect.com/science/article/pii/S2405880723000298).
Map quality: stippling is hard to see, while coastlines and state boundaries show up as inconsistently rendered, adding to confusion. Can these be improved? Perhaps the figures would be easier to read if the stippling density was decreased and line thicknesses increased.
Which of R1-R7 would you recommend for use in downscaling GCMs going forward?
Line-specific comments
Line 10: Please be more explicit about what these statistics (0.54K; 0.81K) are. They seem to be from R5 but I'm not sure why (R1 has a lower mean absolute error for the p99).
Lines 11-12 and lines 479-486: I can't see systematic improvement in mean state precipitation of the 7 CORDEX-CMIP6 RCMs over the 6 CORDEX-CMIP5 RCMs. Certainly, WRFJ has a very large wet bias, however the performance of WRFL is comparable to R3 and R4.
Line 194: Please specify the bin width used when calculating the Perkins Score.
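For context, the Perkins Skill Score measures the overlap of the modelled and observed frequency distributions, PSS = Σ_b min(f_model(b), f_obs(b)), so the bin width is its one free parameter. A minimal sketch in Python (the default bin width here is an assumption, not the value used in the manuscript):
```python
import numpy as np

def perkins_skill_score(model, obs, bin_width=1.0):
    """PSS = sum over common bins of min(model, obs) relative frequency;
    1.0 means the two distributions overlap perfectly."""
    lo = min(model.min(), obs.min())
    hi = max(model.max(), obs.max())
    bins = np.arange(lo, hi + bin_width, bin_width)  # common bin edges
    f_model = np.histogram(model, bins=bins)[0] / model.size
    f_obs = np.histogram(obs, bins=bins)[0] / obs.size
    return float(np.minimum(f_model, f_obs).sum())
```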
Lines 380-383: Please review the meaning in this paragraph as it's confusing. In the first sentence you say the ERA5-driven and ERA-Interim-driven simulations are similar; in the second you say that the ERA5-driven show large reductions in biases.
Lines 341 and elsewhere: consider saying 'bias magnitudes' (or mean absolute error) over |biases| in the text.
Lines 488-490: I don't agree that the convection-permitting P99s from R3-R7 are markedly improved over WRFK and WRFL: perhaps a little along the coast but it's fairly marginal.
Figure 8-13: are you able to include cutouts of the 20km outer domains of ERA5 R1-R7 in these figures?
Citation: https://doi.org/10.5194/gmd-2024-41-RC2
-
AC2: 'Reply on RC2', Giovanni Di Virgilio, 23 Jul 2024
We are very grateful to the reviewer for assessing our work, for their constructive and helpful input and for their assessment of the manuscript as suitable for publication following revisions. We have carefully gone through all of the reviewer's comments and suggestions and responded to these point-by-point in Table 2 on pp. 17-33 in the document attached to this response, please see: 'DiVirgilio_et_al_Final_Response_Replies_to_Reviewers_1_and_2_2024_07_22.pdf'.
-
EC1: 'Comment on gmd-2024-41', Stefan Rahimi-Esfarjani, 22 Aug 2024
Quick question on response to reviewer 2: You mention that MO surface layer physics is incompatible with YSU PBL physics (sfclay and pbl_physics both = 1). Are the authors sure this is correct? I am asking because I know of several studies which use this combination across other regions of the planet.
Citation: https://doi.org/10.5194/gmd-2024-41-EC1
-
AC3: 'Reply on EC1', Giovanni Di Virgilio, 26 Aug 2024
Thank you for this question. There are at least three surface layer (sf_sfclay_physics) options based on Monin-Obukhov (MO), i.e. option 1 (MM5 similarity), option 2 (Eta Similarity), and option 91 (old MM5 surface layer scheme). Our original statement on this matter was inaccurate, because we had meant to state that the Yonsei University (YSU) PBL scheme should not be used with the Eta Similarity Monin-Obukhov surface layer option (i.e. sf_sfclay_physics = 2). Hence, you are correct: compatible MO-based surface layer options for use with YSU PBL are sf_sfclay_physics = 1 (MM5 similarity) or 91 (old MM5 surface layer scheme). Additionally, on closer inspection, our other three statements on using specific WRF settings together in response #2 to Reviewer #2 should be revised because it is not the case that using these settings together is incompatible, rather, we found that they did not perform well together. Please accept our sincere apologies for these oversights on our part. All these statements on specific WRF settings were included in the response to reviewers document only (i.e. in response #2 to reviewer #2, pp 16-17); they were not stated in the manuscript itself.
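For concreteness, the compatible pairing described above corresponds to a WRF namelist fragment along these lines (an illustrative sketch, not the study's actual configuration; the full namelists are archived in the Zenodo record given in CC1):
```
&physics
 bl_pbl_physics    = 1,   ! YSU PBL scheme
 sf_sfclay_physics = 1,   ! MM5 Monin-Obukhov similarity surface layer (91 = old MM5)
                          ! option 2 (Eta similarity) pairs with the MYJ PBL instead
/
```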
Citation: https://doi.org/10.5194/gmd-2024-41-AC3
Viewed
| HTML | PDF | XML | Total | Supplement | BibTeX | EndNote |
|---|---|---|---|---|---|---|
| 650 | 203 | 41 | 894 | 55 | 23 | 23 |