Review of “Using radar observations to evaluate 3d radar echo structure simulated by the global model E3SM Version 1” by Jingyu Wang, Jiwen Fan, Robert A. Houze Jr., Stella R. Brodzik, Kai Zhang, Guang J. Zhang, and Po-Lun Ma
Summary of paper:
The manuscript presents an evaluation of the E3SM model against NEXRAD radar observations for the summer periods of 2014-2016. The authors use the COSP forward-simulator package to generate radar reflectivity from the model cloud fields and average both the sub-grid COSP output and the NEXRAD observations onto a 1-degree horizontal, 1-km vertical grid for a like-with-like comparison. The model's average reflectivity slightly exceeds the observed value at 2-km height, but above 4 km the model generally produces too little cloud above the threshold reflectivity. Sensitivity tests of the convection and cumulus parameterisations do not remove this bias.
This is generally a well-written paper with good quality figures. The evaluation of NWP models against 3D cloud and precipitation observations is of great importance and the present evaluation against NEXRAD is novel. The methodology is incomplete or insufficiently justified in places, which leads to serious concerns about the results. Nevertheless, these concerns might be overcome with appropriate clarifications or revisions and as such publication may be considered after major corrections.
Major comment 1:
The study is hindered by its original objective of evaluating the model against GPM, and hence by the implementation of the 13.6 GHz frequency in COSP. Two issues are at stake: (a) whether comparing 13.6 GHz simulated reflectivity against S-band (3 GHz) observations is appropriate, and (b) whether the implementation has been done appropriately.
(a) The authors justify the 13.6 GHz versus 3 GHz comparison by citing their Wang et al. (2019b) study. While that is a nice paper, it is not a sufficiently comprehensive evaluation of the 13.6 GHz reflectivity against the 3 GHz reflectivity to convince the reader that these two are interchangeable. The key figure in that paper (Figure 2) uses normalisation, which removes any excess (or deficit) in cloud detection, which is of importance for this study. The normalisation within cloud, also performed in that Figure, masks the reduction in reflectivity values obtained with GPM (13.6 GHz) both due to attenuation and due to Mie scattering.
Beyond this general unease with the comparison, there are various studies that suggest a necessary conversion from Ku (13.6 GHz) to S band, with different equations used for ice and liquid phases. In particular, recent studies using the GPM radar to calibrate ground-based radars use such conversions:
Warren, R. A., A. Protat, S. T. Siems, H. A. Ramsay, V. Louf, M. J. Manton, and T. A. Kane, 2018: Calibrating Ground-Based Radars against TRMM and GPM. J. Atmos. Oceanic Technol., 35, 323–346, https://doi.org/10.1175/JTECH-D-17-0128.1.
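For concreteness, such a conversion is typically applied as a phase-dependent polynomial correction in the Ku-band reflectivity. A minimal sketch of what I have in mind follows; the coefficients in the usage line are placeholders for illustration only, not the published fits (which differ between ice and liquid phase and should be taken from, e.g., the relations used by Warren et al., 2018):

```python
def ku_to_s(z_ku_dbz, coeffs):
    """Convert Ku-band (13.6 GHz) reflectivity to an S-band equivalent via
    Z_S = Z_Ku + dZ(Z_Ku), where dZ is a polynomial in the Ku-band value.
    `coeffs` lists the polynomial coefficients [a0, a1, ...]; separate
    coefficient sets are required for ice and liquid phase."""
    dz = sum(a * z_ku_dbz ** k for k, a in enumerate(coeffs))
    return z_ku_dbz + dz

# Placeholder coefficients for illustration only -- substitute the
# published phase-dependent fits before any quantitative use.
z_s_ice = ku_to_s(30.0, [0.5, 0.02])  # dZ = 0.5 + 0.02 * Z_Ku
```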
(b) It is not obvious that the implementation of 13.6 GHz in COSP is straightforward. The authors state that the simulator automatically uses Rayleigh scattering, but that cannot be appropriate under all circumstances, particularly if the focus is on convection. Attenuation will not only be significant below 1-km altitude: convective towers can cause attenuation in the ice phase as well. Similarly, the large hydrometeors found aloft may lead to Mie scattering. That the Rayleigh scattering assumption is inappropriate for the GPM PR has long been established in the literature, e.g.:
L'Ecuyer, T. S., and G. L. Stephens, 2002: An Estimation-Based Precipitation Retrieval Algorithm for Attenuating Radars. J. Appl. Meteor., 41, 272–285, https://doi.org/10.1175/1520-0450(2002)041<0272:AEBPRA>2.0.CO;2.
Having said that, it is entirely possible that the 13.6 GHz is a red herring here. If Rayleigh scattering is assumed, and no Mie scattering is included for large particles, the COSP calculation might as well be considered as if it were a 3 GHz radar. In that case, it is worth checking the COSP calculations for whether the frequency/wavelength matters.
The authors have at least two options here. Either the authors provide corrected calculations following (for example) the papers above, for instance by applying such corrections to the COSP-simulated reflectivity. Alternatively, the authors develop a standalone forward simulator. If the latter, it is reasonable to assume Rayleigh reflectivity at 3 GHz (S-band) for comparison against the NEXRAD observations. Given the model microphysics assumptions (as listed in Table 1) it is relatively straightforward to calculate the Rayleigh reflectivity from the model ice and liquid water contents. This would be the most appropriate way to compare the model to the NEXRAD observations, but obviously requires some additional data processing, which may be difficult if the original cloud 3D fields were not included in the output.
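To illustrate how light-weight the second option is: for a gamma size distribution N(D) = N0 D^mu exp(-lambda D), the Rayleigh reflectivity factor is the sixth moment of the PSD and has a closed form, so given the Table 1 parameters the calculation reduces to a few lines. A sketch under the assumption of spherical, Rayleigh-scattering particles (for the ice phase a dielectric/density adjustment would still be needed):

```python
import math

def rayleigh_z(n0, lam, mu=0.0):
    """Rayleigh reflectivity factor Z = integral N(D) * D**6 dD for a
    gamma PSD N(D) = n0 * D**mu * exp(-lam * D).  With D in mm and
    N(D) in mm^-(1+mu) m^-3, Z is in mm^6 m^-3."""
    return n0 * math.gamma(mu + 7.0) / lam ** (mu + 7.0)

def to_dbz(z_linear):
    """Convert linear reflectivity (mm^6 m^-3) to dBZ."""
    return 10.0 * math.log10(z_linear)
```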
Major comment 2:
The lack of sufficiently high radar reflectivity aloft is concerning and, while this could be a model bias, it would be helpful for the reader to have more information regarding the COSP calculations. In particular, in Table 1 the authors specify the density of ice and the distribution width. Following Morrison and Gettelman (2008), the remaining size distribution parameters lambda and N0 should be calculated from the mixing ratios directly. The COSP calculation will require the (constant) density of ice and distribution width as well as the (variable) lambda and N0, unless COSP has the appropriate information to calculate lambda and N0 itself from the mixing ratios. If COSP is not provided with the correct information, it may assume a constant lambda and N0, leading to erroneous calculations.
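For reference, the closure I have in mind is the standard two-moment one: with spherical ice of fixed density, lambda and N0 follow directly from the mass content and number concentration. A sketch of the diagnostic relations (not the actual E3SM/COSP code):

```python
import math

def gamma_psd_params(q, n, rho_ice=500.0, mu=0.0):
    """Diagnose slope lam (m^-1) and intercept n0 of a gamma PSD
    N(D) = n0 * D**mu * exp(-lam * D) from mass content q (kg m^-3)
    and number concentration n (m^-3), assuming spherical particles
    with bulk density rho_ice, i.e. m(D) = (pi/6) * rho_ice * D**3,
    following the two-moment closure of Morrison and Gettelman (2008)."""
    c = math.pi / 6.0 * rho_ice
    lam = (c * n * math.gamma(mu + 4.0) / (q * math.gamma(mu + 1.0))) ** (1.0 / 3.0)
    n0 = n * lam ** (mu + 1.0) / math.gamma(mu + 1.0)
    return lam, n0
```

Integrating the diagnosed PSD back over mass and number recovers the inputs, which is a quick self-consistency check on any implementation.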
On a related note, if the E3SM has been evaluated against CloudSat and/or CALIPSO, that would provide helpful context to include about its ability to produce high-level cloud.
Morrison, H., and A. Gettelman, 2008: A New Two-Moment Bulk Stratiform Cloud Microphysics Scheme in the Community Atmosphere Model, Version 3 (CAM3). Part I: Description and Numerical Tests. J. Climate, 21, 3642–3659, https://doi.org/10.1175/2008JCLI2105.1.
Major comment 3:
It is not clearly justified why the authors averaged their data to the 1-degree grid scale, when most of the information on the sub-grid scale is available to them. Averaging to 1-degree comes with its own problems (e.g. how to treat “cloud-free” regions) that may end up masking model deficiencies and it may have led to the disappearance of the characteristic CFAD shape in the NEXRAD analysis. Perhaps in Section 3.1, once the authors have performed their analysis using the sub-grid information, the authors could include some justification as to why the following analysis is done on the 1-degree averaged data.
Line 52-53: It is important to acknowledge in the introduction that these “convective processes” are not resolved by the model and that some sub-grid representation is needed. The evaluation can then be performed either on coarsened observations (as this study does) or on the sub-grid sampled model, as COSP does. Please include such a clarification in the introduction.
Line 53-55: It is not obvious that these scattering effects are such an issue at S-band. Iguchi et al. (2018) consider the GPM DPR, which operates at shorter wavelengths. At S-band, Rayleigh scattering could potentially be assumed, which would make forward simulation much easier (easier than handling the sub-grid sampling for convection). Please rephrase this statement and consider studies using S-band radars specifically. [NB These lines are likely a remnant of the authors’ original intent to evaluate the model against GPM PR observations.]
Line 69-72 and Line 98-99: It should be made clear to the reader that there is no difference in microphysics parameters between convective and stratiform (referring to Table 1). Some further clarification is then required regarding the sub-grid partition. Presumably, the model diagnoses a convective cloud fraction and a stratiform cloud fraction, with their respective water contents. These water contents may differ and therefore could lead to different simulated radar reflectivity. This is important information to include, so perhaps in Line 69-72 explain the convective-stratiform partition and in Line 98-99 clarify the typical differences between convective and stratiform water contents (noting that the microphysics parameters are the same).
Line 106-108: The adoption of model-specific parameters is not unique to this study; it is the widely used approach when implementing COSP (or when developing a bespoke forward simulator). Perhaps rephrase: “Following general usage of COSP, we modified the microphysics assumptions…”. The Swales et al. (2018) paper explicitly mentions the need to “maintain consistency between COSP1 and the host model.”
Section 3.1: As stated above, the “out of the box” configuration of COSP is not advised and general use should always assume the model parameters. As such, it is recommended to remove the left-hand panels in Figure 2, as well as the standalone Figure 3, and focus instead on the differences between the model and observations from the right-hand panels. More specifically, in Figure 2: (1) Why does the x-axis start at 14 dBZ, when an 8 dBZ minimum reflectivity is considered? (2) What are the units of density? Per 2 dB (i.e. 2 dB bins)? (3) Why show these PDFs normalised? It should be important to note the absence of “cloud” above 8 dBZ as well. The authors should include a separate Figure showing the fraction of occurrence of Z>8 dBZ with height to compare this between model and observations.
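The suggested occurrence diagnostic is cheap to compute from the gridded fields; a minimal sketch (array layout assumed here, not taken from the manuscript):

```python
import numpy as np

def echo_fraction_profile(dbz, min_dbz=8.0):
    """Fraction of columns with Z > min_dbz at each height level.
    `dbz`: 2-D array (n_levels, n_columns) of reflectivity in dBZ,
    with NaN marking no-echo samples.  Returns a 1-D profile of the
    occurrence fraction, to be plotted against height for both the
    model and the observations."""
    dbz = np.asarray(dbz, dtype=float)
    hit = np.isfinite(dbz) & (dbz > min_dbz)
    return hit.mean(axis=1)
```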
Section 3.2 and Figure 4: How is the mean calculated? In Section 2, we learn that for the instantaneous observation/simulated output, the mean is calculated in linear Z units (so that cloud-free areas are 0) and then converted to dBZ, with an 8 dBZ threshold. But how are values below 8 dBZ considered when calculating these long-term means? Or are the means (and standard deviation and 95th percentile) in-cloud only? In either case, it is useful to understand the occurrence of Z>8dBZ, so please add this to Table 2 and as a separate set of maps to complement Figure 4. The occurrence could help explain the difference in the mean, as the model could compensate for missing higher values by having a higher “cloud” occurrence.
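To make the ambiguity concrete, here is the averaging convention as I read Section 2 (cloud-free and sub-threshold samples enter as Z = 0 in linear units before averaging); whether the long-term means follow this convention or an in-cloud one is exactly what needs clarifying:

```python
import numpy as np

def mean_dbz(dbz_samples, min_dbz=8.0):
    """Average reflectivity with cloud-free / sub-threshold samples
    contributing Z = 0 in LINEAR units (mm^6 m^-3); the mean is taken
    in linear units and then converted back to dBZ.  Samples below
    min_dbz (or NaN) are treated as cloud-free."""
    dbz = np.asarray(dbz_samples, dtype=float)
    z_lin = np.where(np.isfinite(dbz) & (dbz >= min_dbz),
                     10.0 ** (dbz / 10.0), 0.0)
    z_mean = z_lin.mean()
    return 10.0 * np.log10(z_mean) if z_mean > 0 else -np.inf
```

Note that under this convention a single sub-threshold sample drags the mean down, so a model with more frequent weak echo can match the observed mean while missing the high values entirely.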
Section 3.3 and Figure 5: Again, the normalisation is performed in-cloud, so information on the overall frequency of occurrence of cloud is lost. Could the authors comment on the difference in the characteristic shape of the CFAD between NEXRAD and the model? The model follows the well-known shape, with the dBZ of maximum occurrence increasing from cloud top to the freezing level and then slowly decreasing or staying constant below it. That shape can be reproduced with NEXRAD, but it seems to have disappeared in the authors’ analysis – is that solely due to the averaging to 1 degree? Perhaps the choice of Z for cloud-free regions is important here?
Line 219: “Above 11km, the model completely fails to simulate any reflectivity”. There is some nuance here, as the authors use the 8-dBZ threshold. So: “Above 11km, the model fails to generate average reflectivity above 8 dBZ.” Assuming that the authors have access to the data, it would be useful to report the typical reflectivity values that are generated by the model at these altitudes, even if below 8 dBZ.
Line 238-240 and Figure 6: It is unclear what is actually being considered here. Is column-maximum reflectivity the maximum in a column on the 1-degree grid? What is then the “radar reflectivity” at a given “local time” in Figure 6? Is this an average over the entire CONUS, but only for grid boxes with this value above 8 dBZ? Or is it the maximum over the entire CONUS? The way this is calculated might partly explain the signals that appear, so all this needs to be clarified in the text.
Section 3.4 and Figure 7: As above, a general understanding of frequency of occurrence of Z>8dBZ would be useful in addition to these (normalised) diagrams.
Line 301-305: This conclusion should be removed, as should the related figures and discussion, since it is widely established that the microphysics assumptions of the forward simulator should be consistent with those of the host model (e.g. Swales et al., 2018).