Evaluation of Dust Emission and Land Surface Schemes in Predicting a Mega Asian Dust Storm over South Korea Using WRF-Chem (v4.3.3)
Abstract. This study evaluates the performance of the Weather Research and Forecasting Model coupled with Chemistry (WRF-Chem) in forecasting a mega Asian Dust Storm (ADS) event that occurred over South Korea on March 28–29, 2021. We specifically evaluated combinations of five dust emission schemes and four land surface schemes, which are crucial for predicting ADSs. Using in-situ and remote sensing data, we assessed surface meteorological and air quality variables, including 2 m temperature, 2 m relative humidity, 10 m wind speed, particulate matter 10 (PM10), and aerosol optical depth (AOD) over South Korea. Our results indicate that the prediction of surface meteorological variables is more influenced by the land surface scheme than by the dust emission scheme, generally showing good performance when dust emission schemes are combined with the Noah land surface model with multiparameterization options (Noah-MP). In contrast, the prediction of air quality variables, including PM10 and AOD, is strongly affected by the dust emission schemes, which are directly related to the generation and amount of dust through interaction with surface properties. Among the 20 available scheme combinations, the University of Cologne 2004 scheme combined with the Community Land Model version 4.0 (UoC04-CLM4) showed the best performance, closely followed by the University of Cologne 2001 scheme combined with CLM4 (UoC01-CLM4). UoC04-CLM4 outperformed the other scheme combinations, reducing the root mean square error of PM10 by up to 29.6 %. Both UoC04-CLM4 and UoC01-CLM4 simulated values closest to the MODIS AOD but tended to overestimate the AOD in some regions during the origination and transportation processes, whereas the other scheme combinations significantly underestimated the AOD throughout the entire simulation of the ADS.
Status: final response (author comments only)
CEC1: 'Comment on gmd-2024-114', Juan Antonio Añel, 14 Aug 2024
Dear authors,
Unfortunately, after checking your manuscript, it has come to our attention that it does not comply with our "Code and Data Policy".
https://www.geoscientific-model-development.net/policies/code_and_data_policy.html
The WRF-Chem 4.3.3 code is archived on GitHub. However, GitHub is not a suitable repository for scientific publication; GitHub itself instructs authors to use other alternatives for long-term archival and publishing. Therefore, please publish the WRF-Chem 4.3.3 code in one of the appropriate repositories and reply to this comment with the relevant information (link and permanent identifier, e.g. DOI) as soon as possible, as we cannot accept manuscripts in Discussions that do not comply with our policy. Also, you must include in a potentially reviewed manuscript the modified 'Code Availability' section, with the new link and DOI for the code.
Please, note that if you do not fix this problem, we will have to reject your manuscript for publication in our journal.
Juan A. Añel
Geosci. Model Dev. Executive Editor
Citation: https://doi.org/10.5194/gmd-2024-114-CEC1
AC1: 'Reply on CEC1', Ji Won Yoon, 15 Aug 2024
Dear Executive Editor,
We appreciate your pointing this out. We have addressed the issue regarding the repository of WRF-Chem (v4.3.3) code.
The code is now accessible via the following link: https://doi.org/10.5281/zenodo.13324490.
Additionally, we have migrated all existing data to the same repository.
We have also revised the 'Code Availability' section to reflect this update, and we will incorporate it into the potentially reviewed manuscript.
Best,
Ji Won Yoon
Citation: https://doi.org/10.5194/gmd-2024-114-AC1
RC1: 'Comment on gmd-2024-114', Paul Miller, 06 Sep 2024
Title: Evaluation of Dust Emission and Land Surface Schemes in Predicting a Mega Asian Dust Storm over South Korea Using WRF-Chem (v4.3.3)
Reviewer: Paul Miller, Louisiana State University
This manuscript examines the performance of 20 combinations of WRF-Chem dust aerosol parameterizations and land-surface schemes in reproducing a mega Asian Dust Storm (ADS) from March 2021. The validation study is highly specific to a single numerical modeling system and individual event, so the novelty of this study and its broader impact on the wider atmospheric sciences is limited. However, I recognize that verification studies like this are sometimes important incremental steps within larger projects, and the results may nonetheless help guide the selection of appropriate dust physics settings among other researchers and practitioners in East Asia. The manuscript, though modest in its scope and potential scientific impact, is nonetheless well written and soundly conducted. I believe it could be accepted for publication pending the revisions suggested below.
Overall Comments:
- The manuscript computes both POD and FAR for several ACWS-relevant PM10 thresholds, and it emphasizes that these two scores need to be interpreted jointly. However, performance metrics such as the Critical Success Index (CSI) do exactly that. The manuscript would be strengthened by the addition of CSI, or a similar metric, that merges these two ideas into a single score. The CSI is easy to compute with the information already provided in the manuscript (a minimal computation is sketched after these overall comments).
- The study references PM10 PCC values (Figure 6) and scatterplot relationships as “good” for some scheme combinations. However, visually, the observed-vs-simulated PM10 relationships appear quite weak. The PCCs for even the most skillful LSM-dust scheme combinations still only explain a relatively small fraction of the variance (~30% at most) if thought of as R² rather than R. The manuscript should clarify how the PCCs, even low ones, are indicating “good” performance.
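Regarding the CSI suggestion above, the following is a minimal sketch of how POD, FAR, and CSI all follow from the same binary contingency-table counts; the counts in the example call are placeholders, not values from the manuscript:

```python
# Minimal sketch: POD, FAR, and CSI derived from one binary contingency table.
# The counts in the example call are placeholders, not values from the manuscript.

def categorical_scores(hits, misses, false_alarms):
    """Return POD, FAR, and CSI for one PM10 exceedance threshold."""
    pod = hits / (hits + misses)                 # fraction of observed events detected
    far = false_alarms / (hits + false_alarms)   # fraction of warnings that did not verify
    csi = hits / (hits + misses + false_alarms)  # penalizes misses and false alarms jointly
    return pod, far, csi

print(categorical_scores(hits=40, misses=10, false_alarms=15))
```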
Line 166: Does this mean you wrote the output at 1-hr intervals? The integration timestep had to be much shorter than this.
Line 281: MAE is referenced as MBE throughout the rest of the manuscript. Please revise for consistency.
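To make the distinction explicit (generic definitions only, not code from the study), the mean bias error keeps the sign of each error, so over- and under-estimation can cancel, whereas the mean absolute error averages magnitudes:

```python
import numpy as np

# Generic definitions only (not from the manuscript): MBE keeps the sign of
# each error; MAE averages the magnitudes, so the two are not interchangeable.
def mbe(sim, obs):
    return np.mean(np.asarray(sim) - np.asarray(obs))

def mae(sim, obs):
    return np.mean(np.abs(np.asarray(sim) - np.asarray(obs)))
```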
Line 290: 1.0 does not necessarily indicate a “perfect forecast.” It just indicates that all true events were successfully identified. The manuscript clarifies this in the following sentence, but “skillful” is more appropriate phrasing than “perfect.”
Table 4 caption: What is the basis of using 0.4 as the threshold for a “weak” correlation?
Line 346, 351, and elsewhere: By my understanding, no forecasts were produced in this study (i.e., there was no attempt to predict the future). So, “forecasted” values is really referring to “modeled” or “simulated” values.
Citation: https://doi.org/10.5194/gmd-2024-114-RC1
AC2: 'Reply on RC1', Ji Won Yoon, 11 Sep 2024
RC2: 'Comment on gmd-2024-114', Anonymous Referee #2, 01 Nov 2024
Formal Review
This paper describes the performance of the WRF-Chem model in simulating a strong Asian dust event with combinations of different dust emission and land surface schemes. The selection of dust emission and land surface schemes for simulating dust events can be quite challenging, as studies have shown discrepancies when different dust emission and land surface schemes are used. Understanding how combinations of dust emission and land surface schemes simulate dust events is therefore important for improving dust event forecasting skill and reducing the immediate downwind impacts on air quality, human health, road safety, and so on. I find the goal of the paper interesting; it can contribute to identifying and possibly improving dust event simulation with WRF-Chem and can add value to regional dust event forecasting.
However, the paper focuses mainly on data comparison and does not explain the underlying causes of why different combinations of land surface and dust emission schemes simulate the dust event differently. Also, meteorological forcing is one of the important aspects of dust event formation; however, the paper does not explain the meteorological phenomena for the different scheme combinations. I have listed a few major concerns and specific comments separately below.
Both major concerns and specific comments need to be addressed to increase the quality of the manuscript. So, I recommend this paper for major revision.
Major Comments
- Why is the simulation domain grid resolution (30 km) coarser than that of the initial and boundary condition forcing data (FNL, 0.25°)? Why do the authors upscale instead of downscaling the simulation?
- Though the focus of the study is a sensitivity analysis, it is important to explain the underlying causes of the discrepancies in the dust event simulations. In general, we all know that different land surface models perform differently, but given such a large set of simulation results, the paper would benefit from some explanation of why and how the different land surface models simulate the dust event differently. I suggest adding some discussion to the results section explaining why the different combinations produce different results.
- Meteorological forcing is very important for dust event formation. A comparison between simulated and reanalysis near-surface winds would show how well the model simulates the near-surface wind, which is critical for dust emission and transport. Yes, the paper presents a correlation analysis of the 10 m wind against observational data across South Korea (the downwind sink region), but the near-surface wind conditions upwind (across the source region) are completely unknown. We need to evaluate how the model performs near the source area as well to have more confidence that the simulations also reasonably reproduce the near-surface wind at the source. The authors could add the spatial evolution of the near-surface wind (e.g., the 10 m wind) from FNL/MERRA-2 and compare it with the WRF-Chem simulations, as illustrated in the sketch after this list.
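As an illustration only, a minimal sketch of such an upwind wind comparison with xarray; the file names, the variable names (U10M/V10M for MERRA-2, U10/V10 for wrfout), and the source-region box are assumptions, not details taken from the study:

```python
import numpy as np
import xarray as xr

# Sketch only: file names, variable names (U10M/V10M for MERRA-2, U10/V10 for
# wrfout), and the source-region box are assumptions.
merra = xr.open_dataset("MERRA2_tavg1_2d_slv_Nx.20210328.nc4")
wrf = xr.open_dataset("wrfout_d01_2021-03-28_00:00:00")

ws_merra = np.sqrt(merra["U10M"] ** 2 + merra["V10M"] ** 2)
ws_wrf = np.sqrt(wrf["U10"] ** 2 + wrf["V10"] ** 2)

# Domain-mean reanalysis 10 m wind over an illustrative Gobi source box.
# The wrfout fields carry 2-D XLAT/XLONG coordinates, so a regridding step
# (e.g., with xesmf) would be needed before a grid-point comparison with ws_wrf.
source_box = ws_merra.sel(lon=slice(95, 115), lat=slice(38, 48))
print(float(source_box.mean()))
```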
Specific and minor comments
Lines 3-4: “Using WRF-Chem (v4.3.3)”
I suggest removing WRF-Chem version in the title.
Lines 28-29: “exerting significant impacts on human life and health”
Please provide some references here.
Line 35: “spring season”
Provide months
Line 39: “literally meaning”
“Literally meaning” or “Literal meaning”?
Lines 42-44: “The Weather Research and Forecasting (WRF) model coupled with Chemistry (WRF-Chem; Grell et al., 2005) has been extensively employed for simulating and forecasting the weather and air quality (i.e., trace gases, aerosols, etc.) variables.”
Provide some references here.
Lines 51-52: “In WRF-Chem, the dust emission flux mainly depends on the soil type and the near-surface winds (Kok et al., 2012; Shao, 2008) within the dust emission scheme”
Since we are discussing how the dust emission flux is calculated inside WRF-Chem, a better reference would be papers that describe the dust emission flux calculation inside WRF-Chem, for example LeGrand et al. (2019), https://doi.org/10.5194/gmd-12-131-2019.
Also, is soil type more important than surface roughness for dust emission? Look at the dust emission flux calculation equations in the dust emission schemes. I suggest looking at some of the latest research describing how important surface roughness is for dust emission processes.
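For context, a schematic saltation-based horizontal flux of the Marticorena and Bergametti (1995) type is sketched below; it illustrates why friction velocity, and the threshold it must exceed (which roughness, soil texture, and soil moisture modulate), dominates the emitted flux. This is a textbook illustration, not the exact formulation used by any of the WRF-Chem schemes evaluated in the manuscript:

```python
def horizontal_saltation_flux(u_star, u_star_t, rho_air=1.2, g=9.81, c=2.61):
    """Schematic Marticorena-Bergametti-type horizontal saltation flux.

    Illustration only: the WRF-Chem schemes (GOCART, AFWA, UoC variants) add
    soil-texture, moisture, vegetation, and roughness corrections on top of a
    flux of roughly this form; c ~ 2.61 is the commonly quoted constant.
    """
    if u_star <= u_star_t:
        return 0.0  # no saltation below the threshold friction velocity
    ratio = u_star_t / u_star
    return c * (rho_air / g) * u_star**3 * (1.0 + ratio) * (1.0 - ratio**2)
```

Because the flux scales roughly with the cube of the friction velocity, and the threshold itself depends on roughness and soil state, modest errors in the simulated surface wind or surface properties translate into large differences in emitted dust.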
Line 52: “Conversely, soil moisture, vegetation, and snow can influence changes in dust”
This should not be “conversely”. You are describing other contributing variables that affect the dust emission flux.
Lines 83-86 : “by using in-situ, including the Automated Surface Observing System (ASOS) and Asian dust observation data, remote sensing data, including the AErosol RObotic NETwork (AERONET) and the MODerate resolution Imaging Spectroradiometer(MODIS), and reanalysis data such as Modern-Era Retrospective Analysis for Research and Applications, version 2 (MERRA2)”
You can include all these in the data and method section.
Lines 105-106: “Fig. 1”
See your figure caption. I suggest not abbreviating in the text if it is not abbreviated in the caption. Please be consistent throughout the text.
Lines 135-136: “a grid spacing of 30 km and 50 vertical levels up to 50 hPa.”
Is there any reason for making the grid resolution coarser (30 km) than the meteorological initial and boundary condition data (0.25°)?
Lines 137-141: “The meteorological initial and boundary conditions are obtained from the global final analysis (FNL) dataset with a resolution of 0.25° × 0.25°, produced by the Global Forecast System (GFS) of the National Centers for Environmental Prediction (NCEP); the boundary conditions are updated every 6 h. The chemical initial and boundary conditions are derived from the Community Atmosphere Model with Chemistry (CAM-chem), part of the National Center for Atmospheric Research (NCAR)’s Community Earth System Model (CESM) and are produced using the mozbc pre-processing tool”
It would be better to provide links or references for the data sources used.
Lines 316-317: “The scheme combinations generally have good performance with high to moderate PCCs for surface meteorological variables: 0.73−0.77 for T2m, 0.73−0.77 for RH2m, 0.58−0.62 for WS10m”
Here we can see that the PCC for WS10m ranged between 0.58 and 0.62. Wind speed being the primary control on dust emission, accurate simulation of wind speed is important. From the correlation analysis we can see how the observed and modeled data are related, but we do not get sufficient information on model performance, for example whether the model output was overestimated or underestimated throughout the dust event period at a particular station. This is critically important if we are assessing the model's performance to investigate when the model does not work well. So, my suggestion is to add time series plots, like the PM10 time series plot, for the different meteorological variables at different stations. This will enable us to find where and when the model performs better (a minimal plotting sketch follows this comment).
Topography can influence the wind speed simulation, especially in complex terrain, and the simulated wind at 30 km is very coarse. As I pointed out earlier, the simulation grid resolution is coarser than its boundary condition data, which might have introduced some discrepancies between observed and simulated values.
Since, in most cases, the PCC values at 30 km resolution are similar, I would add another sensitivity experiment, for at least one case, to check what happens to the PCC if we downscale the wind speed.
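A minimal sketch of the per-station time-series comparison suggested above; the CSV layout (columns time, station, obs_ws10, sim_ws10) is a hypothetical pairing of ASOS observations with co-located model output, assumed only for illustration:

```python
import matplotlib.pyplot as plt
import pandas as pd

# Sketch only: the CSV layout (time, station, obs_ws10, sim_ws10) is assumed.
df = pd.read_csv("ws10m_pairs.csv", parse_dates=["time"])

for station, g in df.groupby("station"):
    fig, ax = plt.subplots(figsize=(8, 3))
    ax.plot(g["time"], g["obs_ws10"], label="ASOS observed")
    ax.plot(g["time"], g["sim_ws10"], label="WRF-Chem simulated")
    ax.set_ylabel("10 m wind speed (m s$^{-1}$)")
    ax.set_title(str(station))
    ax.legend()
    fig.savefig(f"ws10m_{station}.png", dpi=150)
    plt.close(fig)
```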
Lines 327-330: “Fig. S2 shows the MBE for all scheme combinations: 1) For T2m, Noah-MP- and Noah-based combinations showed similarly large MBEs, with a negative trend across all experiments (Fig. S2a); 2) For RH2m, Noah-MP- and Noah-based combinations also showed similarly good performance, with positive bias across all experiments (Fig. S2b); 3) For WS10m, Noah-MP-based combination showed the best performance, with positive bias (Fig. S2c).”
An average MBE indicates over- or under-estimation of a given value; however, it does not tell us at which geographic locations (stations) the simulated results closely match the observed data. This obscures how the model performs at different geographic locations during the dust event. I would suggest making time series plots at different locations. This will show how the model performs at different geographic locations and help to investigate the possible reasons behind the observed-simulated discrepancies (a per-station breakdown is sketched below).
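Complementing the domain-mean MBE, a per-station bias could be computed along these lines (same assumed data layout as the previous sketch, with hypothetical T2m column names):

```python
import pandas as pd

# Assumed layout: one row per (time, station) pair of observed and simulated T2m.
df = pd.read_csv("t2m_pairs.csv", parse_dates=["time"])
station_mbe = (df["sim_t2m"] - df["obs_t2m"]).groupby(df["station"]).mean()
print(station_mbe.sort_values())  # most negatively biased stations first
```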
Line 421: “and Mungyeong: UoC04-CLM4 and UoC01-CLM showed”
Possible typo. “UoC01-CLM” should be “UoC01-CLM4”.
Line 3457: “depicting the processes of dust origination”
The depiction of AOD does not explain dust origination. It shows the spatial evolution of the dust, but not its origination. Please correct the language throughout this section.
Line 470: “In summary, while UoC01-CLM4 and UoC04-CLM4 effectively simulated the processes of dust origin”
Similar to the previous comment: not dust origin but the spatial evolution of the dust.
Lines 486: “3.2.3 Vertical distributions of dust concentrations”
This section presents the time evolution of the vertical distribution of dust from the different simulations but lacks a comparison with observations. Without comparison against observational datasets, it is hard to determine which simulation best reproduced the vertical evolution of the dust.
If there is any data source (e.g., a CALIPSO product) that can be used to compare the vertical evolution of the dust, I suggest exploring this option; there may also be other ways.
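A rough sketch of what such a comparison could look like as curtain plots; all arrays here are placeholders, and real values would require a CALIPSO level-2 extinction product plus a conversion of WRF-Chem dust mass to 532 nm extinction along the same overpass track:

```python
import matplotlib.pyplot as plt
import numpy as np

# Placeholder curtains only: real values would come from a CALIPSO level-2
# extinction product and from WRF-Chem dust fields converted to 532 nm
# extinction along the same overpass track.
height = np.linspace(0, 10, 50)        # km
distance = np.linspace(0, 2000, 200)   # km along track
calipso_ext = np.random.rand(50, 200) * 0.3
wrfchem_ext = np.random.rand(50, 200) * 0.3

fig, axes = plt.subplots(2, 1, sharex=True, figsize=(8, 6))
titles = ["CALIPSO 532 nm extinction (placeholder)", "WRF-Chem dust extinction (placeholder)"]
for ax, data, title in zip(axes, [calipso_ext, wrfchem_ext], titles):
    pc = ax.pcolormesh(distance, height, data, vmin=0, vmax=0.3, shading="auto")
    ax.set_ylabel("Height (km)")
    ax.set_title(title)
axes[-1].set_xlabel("Along-track distance (km)")
fig.colorbar(pc, ax=axes, label="Extinction (km$^{-1}$)")
fig.savefig("vertical_dust_comparison.png", dpi=150)
```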
Line 523: “This study aims to evaluate the performance various combinations of parameterization schemes”
The language is not clear; please make it clearer.
Lines 532-534: “They were verified against surface observation data using various static metrics: 1) It turns out that the land surface schemes have a greater effect on surface meteorological variables than the dust emission schemes---showing little difference in model performance using different dust emission schemes”
Our general understanding is that different land surface schemes perform differently. Readers might want to know the possible reasons why the different land surface schemes, combined with the different dust emission schemes, performed differently. Most of the results presented here just show that the different configurations perform differently, but the underlying reasons why the different land surface models perform differently are absent. This is particularly important for explaining why some schemes do a good job while others do not. Please see my major comments section for more details.
Lines 534-539: “Additionally, the combinations of all dust emission and Noah-MP schemes, known for its excellence as a land surface scheme, showed the best performance; 2) For surface PM10 concentrations, we observed significant variations of prediction performance across different scheme combinations, as the dust emission schemes directly influence the generation of dust storms. UoC04-CLM4 showed the best performance, followed by UoC01-CLM4, UoC04-RUC, and UoC01-RUC. In contrast, other scheme combinations showed very poor performance and failed to predict PM10 in this study.”
This is a very interesting result. The dust emission schemes combined with Noah-MP showed the best performance for the surface meteorological variables, while for PM10, UoC04-CLM4 showed the best performance. The PM10 concentration comes from transported dust, so the dust emission and transport mechanisms (the meteorology behind the dust transport) might have played a different role. Is the forcing mechanism (dust emission and subsequent transport) more important than the land surface schemes? This needs to be investigated to separate the effect of the schemes from that of the forcing mechanisms. The authors have not described the underlying differences between the land surface schemes that would explain why they produce different results. Please see my major comments as well.
Citation: https://doi.org/10.5194/gmd-2024-114-RC2
Data sets
Various Datasets for Evaluation Ji Won Yoon https://zenodo.org/records/11649488
Model code and software
Model Code (WRF v4.3.3) Ji Won Yoon https://zenodo.org/records/11649488