Optimization of weather forecasting for cloud cover over the European domain using the meteorological component of the Ensemble for Stochastic Integration of Atmospheric Simulations version 1.0
- 1 Institute of Energy and Climate Research – Troposphere (IEK-8), Forschungszentrum Jülich GmbH, 52425 Jülich, Germany
- 2 Fraunhofer Institute for Energy Economics and Energy System Technology IEE, Königstor 59, 34119 Kassel, Germany
- 3 Rhenish Institute for Environmental Research at the University of Cologne, Cologne, Germany
Abstract. In this study, we present an expansive sensitivity analysis of physics configurations for cloud cover using the Weather Research and Forecasting model (WRF V3.7.1) on the European domain. The experiments utilize the meteorological part of a large ensemble framework known as the Ensemble for Stochastic Integration of Atmospheric Simulations (ESIAS-met). The experiments first seek the best deterministic WRF physics configuration by simulating over 1,000 combinations of microphysics, cumulus parameterization, planetary boundary layer (PBL) physics, surface layer physics, radiation schemes, and land surface models. The results on six different test days are compared to CMSAF satellite images from EUMETSAT. We then selectively conduct stochastic simulations to assess the best choice for ensemble forecasts. The results indicate a high variability in terms of physics and parameterization. The combination of Goddard, WSM6, or CAM5.1 microphysics with MYNN3 or ACM2 PBL physics exhibited the best performance in Europe. For probabilistic simulations, the combination of WSM6 and SBU–YL microphysics with MYNN2 and MYNN3 showed the best performance, capturing the cloud fraction and its percentiles with 32 ensemble members. This work also demonstrates the capability and performance of ESIAS-met for large ensemble simulations and sensitivity analysis.
Yen-Sen Lu et al.
Status: closed
-
RC1: 'Comment on gmd-2022-118', Anonymous Referee #1, 03 Jul 2022
General Comments:
This manuscript describes a very large ensemble simulation to evaluate the ability of numerous physics parameterization combinations in WRF to simulate cloud cover over a European domain. The overarching experiment evaluation appears to be valid and may have scientific merit; however, the manuscript is very difficult to follow, lacks many details, and limits the ability of the reader to completely understand certain information. Therefore, numerous questions arise about the experiment design, conclusions, and the applicable nature of the recommendations that the authors provide. A complete overhaul of the full manuscript is required to fix these problems. Principal criteria scores are as follows:
Scientific significance: 3
Scientific quality: 3
Scientific reproducibility: 2
Presentation quality: 4
Major Comments:
- The overall manuscript lacks clarity, making it very difficult for the reader to understand the experiment design, analysis, and results. At times, the text is also missing critical justification for decisions made by the authors.
- It is not very clear why the authors chose to separate the experiments into the three sets described. To help clarify, it would be useful if each set of experiments listed all physics parameterizations used, with the “cluster” name referenced, so they can be compared more easily. Also, the ensemble members that use stochastic physics need to be clearly identified. The authors mention using SPPT as well as SKEBS, but only SKEBS appears to be described in 2.2.
- It is not clear if the authors considered the appropriateness of each physics parameterization for the resolution used in the simulations. Certain physics parameterizations are targeted at specific resolutions, unless they are truly scale-aware. Some parameterizations used in WRF are specifically targeted toward convection-allowing scales (~3 km resolution). Therefore, the authors need to thoroughly explain which schemes may not be appropriate for the 20-km resolution they are running, if any, and exclude those from their runs.
- Additional details about the ESIAS and general experiment design would be helpful. For example, why does ESIAS use ~1000 members? Does it always run with that many? If it is specifically being run with ~1000 members only in this case to sample as many WRF physics parametrizations as possible, please mention that. Which of the ~1000 members employ stochastic physics? Is the ESIAS always run at 20 km resolution using the same 180 x 180 grid point domain? Can it be configured to run differently? Is it run operationally?
- Section 3.1 is unclear. GEFS data are apparently used, but is it for both ICs and LBCs? What is the frequency at which LBCs are applied? What is the forecast length (48 hours)? What is the frequency of the output (every three hours)?
- The work described in this manuscript may represent one of the largest physics sensitivity studies using the WRF to date (certainly this is true for the evaluation of cloud cover?). It may be worthwhile for the authors to highlight this fact in the abstract/conclusions.
- Grammar should be double-checked throughout the manuscript. Some explanations by the authors are very difficult to understand. There are also numerous typos found throughout the manuscript.
- The authors state in the conclusions that they “… offer a recommendation on the choice of physics configurations for studying the European domain and for weather forecasting purposes.” The manuscript only focused on the evaluation of cloud cover, which is just one of tens to hundreds of variables that are important for NWP. If the authors wish to provide physics recommendations for general NWP over Europe, many more variables need to be evaluated.
Minor Comments:
- Lines 41-43: Are the authors saying that most physics combinations will exhibit a bias when compared to surface-based observations? If so, sure, and that’s inevitable as model physics will never be completely bias-free. Also, why only surface-based observations? Upper-air observations can be used equally to verify model simulations, with model physics having the potential to impact upper-air variables just as much. Model simulations also include more than just physics, so it’s not possible to say that all bias is due to just the model physics. In addition, having some kind of bias doesn’t necessarily make a physics parameterization or suite “unsuitable for deterministic forecasts”. All operational models have some kind of physics bias, and work is always ongoing to minimize the error.
- Line 49 - What is “the scientific challenge of proper scoring rules”? Please clarify.
- Lines 50-51 – Is the “technical challenge” creating “large supercomputing facilities”, finding the resources to run on large supercomputing facilities, the ability of an ensemble to forecast extreme and damaging events, or all of the above? Please clarify.
- Line 52 and elsewhere – A simulation isn’t probabilistic by itself, but probabilistic forecasts for a given event can be created from an ensemble forecast. I would replace “probabilistic simulations” with “ensemble-based probabilistic forecasts”.
- Line 54-55 – It isn’t clear whether stochastic physics is used in all ~1000 ensemble members described in the ESIAS, or whether there is a subset of additional members that employ stochastic physics. Please clarify.
- Line 56 – “cope with” or “meet”?
- Line 59 – “ESIAS-met” has not been defined yet. It appears to be defined in line 63, but it’s not clear what the difference is between ESIAS and ESIAS-met until the next section.
- Lines 59-62 – Please double check grammar. Also, are multi-physics simulations combined with stochastic simulations?
- Line 78 – “to better fit the system” – I’m not sure what the authors are trying to say here.
- Any specific reason why only SPPT and SKEBS are used? Did the authors also look at using SHUM or SPP?
- Line 91 – Can the authors briefly describe the “different approach” used in the other study?
- Line 93 – “to investigate the optimal physics configuration for the simulation output” – it might be a bit clearer to say that the optimal physics configuration is for the accurate representation of cloud cover.
- Line 110 – What does “perform” signify here?
- Table 4: Why wouldn’t “over-predict” in Table 4 also be a “miss”? The difference between “over” and “over-predict” isn’t very clear.
- Line 131 – Can the authors define what a “rater” is here?
- If CFC data aren’t available over Northern Europe, why wasn’t cloud cover verified over the western and southern portion of the simulation domain?
- Lines 169-171 – The explanation of how/if the CFC data are upscaled for verification is unclear.
- Figures 6 and 7, 9 and 10 – Why were the specific dates chosen for these figures?
- Lines 247-250 – This text is unclear. Please clarify.
- Lines 329-332 – It is unclear how many ensemble members exist in this study.
- Line 344 – A deterministic simulation is never going to be unbiased.
- Lines 349-350 – Note that Jankov et al. (2019) don’t necessarily advocate for multi-physics over stochastic-based ensembles. The authors describe the practical and theoretical deficiencies of multi-physics ensembles as well.
- Line 352 – Spread produced by a multi-physics-based ensemble is mostly due to physics biases, not physics uncertainty.
- Lines 354-355 – I wouldn’t call this random. It’s a specific result of the different physics parameterizations. It’s also possible that probability matched mean could be calculated for the cloud field instead of just the standard mean to alleviate some of these problems (see the sketch after this list).
- Line 356 – Jankov et al. (2019) used an eight-member ensemble, not four.
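For the probability matched mean raised in the comment on lines 354-355: a minimal sketch of how a PMM could be computed for an ensemble cloud field, assuming a NumPy array of shape (members, ny, nx); the function name and data layout are illustrative, not taken from the manuscript.

```python
import numpy as np

def probability_matched_mean(ensemble):
    """Probability matched mean (PMM) of an ensemble field.

    ensemble: array of shape (n_members, ny, nx), e.g. cloud fraction.
    The PMM keeps the spatial pattern of the ensemble mean but swaps in
    the pooled, rank-ordered distribution of all member values, so the
    amplitudes are not smoothed away as in the plain mean.
    """
    n_members, ny, nx = ensemble.shape
    mean_field = ensemble.mean(axis=0)

    # Pool and sort all member values, then subsample one value per
    # grid point (every n_members-th value of the sorted pool).
    pooled = np.sort(ensemble.ravel())
    sampled = pooled[n_members // 2::n_members]

    # Reassign the sampled distribution to grid points by the rank of
    # the ensemble-mean value at each point: the point with the lowest
    # mean receives the lowest pooled value, and so on.
    order = np.argsort(mean_field.ravel())
    pmm = np.empty(ny * nx)
    pmm[order] = sampled
    return pmm.reshape(ny, nx)
```

The rank reassignment preserves the ensemble-mean spatial pattern while restoring realistic amplitudes that plain averaging washes out of the cloud field.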
Abstract – the following two sentences aren’t clear: “We then selectively conduct stochastic simulations to assess the best choice for ensemble forecasts. The results indicate a high variability in terms of physics and parameterization.”
Line 23 – “negative wind energy prices” – This topic needs to be briefly explained
Line 23 - to study to the -> “to study the”
Line 26-29 - The introduction to deterministic models should be followed by a reference to the WRF model as being deterministic. Something like “Various global and regional deterministic weather models”
Line 31 – “optimal meteorological models,” – I would say “optimal model configuration” instead
Lines 39-41 – Double check for typos and correct comma placement.
Table 4 – Typo in description -> “Indaddition”
Line 206 – Typo - “most” -> “the most”
-
AC1: 'Reply on RC1', Yen-Sen Lu, 12 Oct 2022
All the comments and suggestions have been answered one-by-one in the attached supplement document. The reviewer's comments are marked in blue and our replies in black to distinguish them. We have also made major revisions to the original manuscript according to the comments.
-
RC2: 'Comment on gmd-2022-118', Anonymous Referee #2, 30 Aug 2022
Summary
This work utilizes the ESIAS-met in an attempt to identify the ideal combination of microphysics, boundary layer, and cumulus parameterizations in producing accurate cloud cover forecasts while understanding the variability and sensitivity of the simulations in an operational forecasting ensemble environment. The authors utilize a large variety of common parameterizations, finding that the choice of microphysics parameterization is essential in simulating accurate cloud cover over the European domain.
General Comments
My main concern is that there is no justification that the 6 cases provide enough information about the variety of cases that these parameterizations experience when in an operational model. Are the case characteristics representative of the variability in weather patterns across the domain? Does this collection of 6 cases contain passing fronts, extreme weather, and calm conditions? Why are there no winter cases included?
Why only examine cloud cover? Simply using the fraction of a column covered by cloud could obscure important model deficiencies like putting the clouds too high, for example. Surely, the amount of light reaching the surface is different if the cloud cover comes in the form of cirrus instead of boundary layer clouds. The general conclusions of this work could be altered if, for example, column aerosol optical depth were considered instead of cloud cover. Column AOD is crucial for modeling pollution transport and boundary layer physics packages might play a more significant role (of course scavenging in the microphysics parameterization will also be important).
The authors do not utilize a satellite simulator package in order to make fair comparisons between models and observations. I am concerned that the model is looking “straight down” at each column’s respective zenith when computing cloud cover but the SEVIRI instrument is observing at a sharp angle (some observations are made above the Arctic Circle from geostationary orbit!). The lack of cloud height information could potentially lead to misplaced clouds. Have the authors noticed any persistent biases or noise related to the zenith angle of the SEVIRI observations? In addition to cloud height-related issues, every observation comes with a minimum detectable signal but models mostly do not. For example, truly-existing thin cirrus may be undetected by SEVIRI due to weaknesses in infrared detection of clouds and algorithm deficiencies. A satellite simulator would alleviate these issues a great deal, if implemented correctly.
There are many small typographical errors, mostly related to plurals. I noted many in the “technical corrections” but I am confident I did not document them all.
Specific Comments
16: Which recent events in 2021?
95:96: “It is recommended that the surface layer physics be set with planetary boundary layer physics in WRF.” Who is recommending this? Are you recommending it or is it the official recommendation from the WRF developers? It would be best if you would provide a source for this recommendation.
102: I recognize the need for shortening the parameterization acronyms. However, these shortened acronyms are used throughout the paper and are important to the interpretation of most figures so Table A1 and Table A2 should be added to Table 1 and Table 2. Table 1 and Table 2 have plenty of space for the shortened acronyms in parentheses behind the full names, for example.
123: Are your results sensitive to these near-arbitrary thresholds?
124: The ASOS acronym needs a definition.
135: The first model evaluation results utilize this Kappa score, but there is essentially no preview of what a high-Kappa or low-Kappa means in terms of agreement with observations. Please provide some interpretation of this metric.
135-140: What is N? Total number of subjects? I am also unsure what the “subjects” are. Please provide a definition for each variable in the equations.
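On the two Kappa questions above: a minimal sketch of Cohen's kappa, under the assumption that the two "raters" are the model and the satellite product and the N "subjects" are the compared grid points; this pairing is an assumption about the paper's setup, not something the manuscript confirms.

```python
import numpy as np

def cohens_kappa(rater_a, rater_b, n_categories):
    """Cohen's kappa for two raters classifying N subjects.

    rater_a / rater_b: integer category labels (e.g. cloudy vs. clear
    classes) from, hypothetically, the model and the satellite product
    at N grid points.
    """
    a, b = np.asarray(rater_a), np.asarray(rater_b)
    # Observed agreement: fraction of subjects classified identically.
    p_o = np.mean(a == b)
    # Chance agreement: product of the two raters' marginal category
    # rates, summed over all categories.
    p_e = sum(np.mean(a == c) * np.mean(b == c) for c in range(n_categories))
    return (p_o - p_e) / (1.0 - p_e)

# Interpretation: 1.0 is perfect agreement, 0.0 is chance-level
# agreement, and negative values are worse than chance.
```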
156: Why are these cases chosen? Does the domain experience considerable variability in these cases? Fronts with strong precipitation? Mesoscale convective systems? It is very important to explain why these days were chosen so please provide a short description of each and, more importantly, why simulations of these 6 cases are capable of summarizing the variety of weather conditions that these parameterizations are expected to simulate in an operational environment. Lines 172-181 provide a cursory description of the cloud cover patterns in each case, but not a justification of why these cases are sufficient to understand the differences between the parameterizations.
Section 3.2: Please elaborate on the description of the observational dataset. What instrument makes the observations? What techniques do they use in their cloud retrievals (BT-contrast, CO2 slicing, etc.)? What sort of processing takes the product from pixel-level to gridded, quality-controlled distribution?
Figure 3: These are UTC times, right? Please state in the caption.
Figure 3 caption: The caption says the colors represent both cloud cover and time of day. I think the second sentence should be removed.
Figure 4 and Figure 5: These wallclock times would be more accessible to the reader if presented as hours, as is done with the Simulation Time. It would also be more convenient for the reader if the (a) and (b) plots had identical y-axis limits. They are very close now so why not make them identical?
Figure 4 caption and Figure 5 caption: There is no hourly simulation time, only total accumulated wallclock time
216-219: This mini-paragraph should be placed earlier in the manuscript because some science results have already been presented (Figure 6 and Figure 7). Near the first sentence in Section 4.2 or earlier would be good.
255: “Accounting for the support of the simulation of the graupel mixing ratio for ESIAS-chem, we predominantly use the microphysics of WSM5, WSM6, and Goddard.” is more understandable when written similar to, “We continue with the WSM5, WSM6, and Goddard microphysics parameterizations because they include treatments of graupel mixing ratio for ESIAS-chem.”, unless I am misunderstanding the meaning of this sentence.
281: I’m confused about the “maximum of the boxplot”. In Figure 14a, the boxplot endpoints do not appear to be 1.5*interquartile range greater than the third quartile (assuming you meant quartile instead of quantile). For example, the maximum boxplot edge for the W6-T combination is only a small amount greater than the third quartile.
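A possible resolution of the whisker question above: in the standard Tukey convention (Matplotlib's default, for example), whiskers are drawn at the most extreme data points still within 1.5 IQR of the box, not at the Q3 + 1.5 IQR fence itself, so an upper whisker can end only slightly above the third quartile. A minimal sketch, assuming a 1-D array of Kappa values:

```python
import numpy as np

def tukey_whiskers(x):
    """Whisker endpoints of a standard Tukey boxplot.

    The whiskers sit at the most extreme data points still inside
    Q1 - 1.5*IQR and Q3 + 1.5*IQR, not at the fences themselves,
    which is why a whisker can end just above the box.
    """
    x = np.asarray(x)
    q1, q3 = np.percentile(x, [25, 75])
    iqr = q3 - q1
    lower = x[x >= q1 - 1.5 * iqr].min()
    upper = x[x <= q3 + 1.5 * iqr].max()
    return lower, upper
```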
284: Which cumulus parameterization can improve Kappa? Tiedtke?
Figure 14 and Figure 15 do not really add much to the analyses because they present more data than can be reasonably interpreted by the reader. The most important data are the summary data, which are only shown as text. Also, these are only two of the days and there are no analyses that summarize the other four cases! I recommend banishing the time series plots to the supplemental material and replacing these two figures with heat-maps of RMSE, sigma_bar, and x_bar that span all 6 cases.
310: Note that you’ve only investigated days during warm periods (mid-April to mid-September) so you cannot say with confidence that this is true for all time periods.
352:353: “but we should also consider the accuracy of the model physics”. Yes, good point and one that is often not considered in ensemble modeling
354:355: I think this sentence is important but I am struggling to fully understand it. Please consider rewording for better clarity.
Technical Corrections
23: Remove the “to” in “study to the impact”
23-24: Parentheses around references
Table 4 caption: Fix “Inaddition” to include a space
Figure 3: Move the legends upward into the white space and increase all text sizes
210: “figures indicate” instead of “figures indicates”
216: “model is run” instead of “model is runs”
220: Please mention figure 8a instead of just figure 8
Figure 8: Please label the subplots in the figure.
243: “wellby” should be “well by”
245: The word parameterizations is misspelled
265-266: You probably just want either “overall” or “over all” in this sentence and not both.
268: Missing space between sentences
Figure 15: Y-axes all say “Cloud Refraction” but they should say “Cloud Fraction”
308: There is a separate Conclusions section so there is no need to have “and conclusion” in this section heading.
339: Should say “not only changes the development of cloud cover fraction but also affects the”
-
AC2: 'Reply on RC2', Yen-Sen Lu, 12 Oct 2022
All the comments and suggestions have been answered one-by-one in the attached supplement document. The reviewer's comments are marked in blue and our replies in black to distinguish them. We have also made major revisions to the original manuscript according to the comments.