This work is distributed under the Creative Commons Attribution 4.0 License.
A preliminary evaluation of FY-4A visible radiance data assimilation by the WRF (ARW v4.1.1)/DART (Manhattan release v9.8.0)-RTTOV (v12.3) system for a tropical storm case
Yongbo Zhou
Yubao Liu
Zhaoyang Huo
Yang Li
- Final revised paper (published on 05 Oct 2022)
- Preprint (discussion started on 14 Apr 2022)
Interactive discussion
Status: closed
- RC1: 'Comment on gmd-2022-30', Anonymous Referee #1, 24 May 2022
Aim and relevance of the paper; title and abstract
The present paper evaluates Observing System Simulation Experiments of a future satellite with the WRF-DART system. Radiances in the visible range are assimilated for a cyclone case. The authors demonstrate an improvement of the forecasts of cloud-related parameters and reveal weaknesses of the method.
Visible range satellite radiance assimilation is a rather recent field of research and evaluating the impact of a new satellite in OSSEs for a critical weather event is a future-oriented approach. Therefore, the present paper is highly relevant for the community. It promotes research in multiple fields at the same time: visible range radiance assimilation, the exploitation of a new satellite, and research on cyclones.
The title is informative and contains all relevant information, except maybe for the name of the satellite. Adding the name of the satellite to the title would make the word "preliminary" more meaningful.
Throughout the whole paper, the satellite is mentioned as "FY-4". However, FY-4 is a series of satellites. I believe the study uses FY-4B and this should be clearly mentioned throughout the paper.
The abstract is missing two important pieces of information:
1) For which cyclone case was the study performed? Why was a cyclone case, and more specifically "this" cyclone case, chosen for this pilot study of FY-4 SW radiance assimilation?
2) At the end of the abstract, an outlook is missing. What do the results imply? What should be future steps of research? The fact that different parameter settings were tested is important and should be mentioned in the abstract. This is clearly a strength of this paper.
Other remarks concerning the abstract:
L15: You might want to state that FY-4(B?) is a geostationary satellite located over Asia.
L16: You mention the experiment for which the best results were obtained without explaining what kind of experiments have been performed.
L18: As the previous sentence already contains "best results", I suggest modifying the beginning of this sentence, for example: "In this case, WRF could capture [...]"
L18: I suggest to modify the end of this sentence like this: "[...] and significantly improve the cloud water path and cloud coverage forecast."
L19: What does the word "its" refer to here? The simulation system? In this case you might write "The first is that the simulation system..."
Specific comments and remarks
Introduction and background:
L24: I suggest adding the word "satellite" to the beginning of the second sentence: "Most satellite DA-related studies [...]".
L33: "[...] only provide information on cloud top microphysics [...]"
L33: Better replace "weather radar" by "precipitation radar"
L37: This sentence is a bit misleading, you might want to say it like this: "Therefore, high-resolution satellite SW radiances provide information on cloud properties with a great significance for cloud-resolving model simulations."
L52: I suggest changing "in assimilating satellite radiance data" to "in satellite radiance DA".
L58: I suggest removing the word "Nowadays".
L74: In my opinion there is no need to put the word hybrid in double quotes. Also, if you mention that "great achievements" have been made, you should state what these achievements are.
L80: You mention that RTTOV was "recently" enabled for DART. Do you have a reference for that information? Otherwise the word "recently" does not make sense.
L88: "Section 3" not "Sections 3"
References:
The provided references are relevant and recent and include key studies in the field.
Methods:
You should probably add sources for the FNL and ERA5 data sets.
L91: So far nothing has been demonstrated and this sounds like a sentence from the conclusion. Maybe build this sentence like this: "This study demonstrates the performance of the WRF/DART-RTTOV [...]". Also, add the relevant information from the abstract: Mention FY-4 for example.
L99: Better: "horizontal grid boxes" instead of "horizontal grids".
L114: Why did you deviate from the CONUS physics suite for the microphysics scheme?
L115: This is the first time the reader learns about the dates of your experiments.
L119: The Betts-Miller-Janjic cumulus scheme is rarely chosen in the WRF literature. Can you explain why you chose this cumulus scheme?
L131: "It is noted that the Baran-2014 scheme has no explicit dependence on ice particle size." - and probably that is the reason why it was used?
L176: Didn't you already mention that you use 50 ensemble members?
L179: And probably that is the reason why you find that the vertical structure of the clouds is not very well represented in the simulations?
L184: And we do have a non-Gaussian problem here, right? Is this why this information is given?
L188: Please give some more information about the cyclone event. Did the cyclone have a name? Which was the cyclone category at the time of the experiments? From where to where did it move? Does the type of cyclone event not have any influence on the simulations?
Results:
L230: Given how many details you provided in chapter 2, you should explain how DA actually changes the base state.
L232: Stating that something is "rather complicated" is not scientific. Please improve this sentence and explain what was complicated about it.
L280: "As indicated" -> Indicated where?
L305: This could have been explained above.
L359: What is a "weak" cloud? This is not a very scientific term.
L362: Do you have an idea why precipitation was not simulated in any of the DA experiments? It is indeed important to mention that, but you should also try to provide reasons.
L386: "QC = 7" and "QC = 4" are very DART-specific statements that only very few readers would understand, please rephrase this in an understandable way. This is also valid for Figure 11.
L392: What do you mean by "far observations"? Was this mentioned before?
Discussion and Conclusions:
L436: You should explain what exactly is meant by "dense" here.
L445: Is it unable to influence the state variables in all the performed experiments?
L452: This is a bit short as a final sentence, and an outlook is missing. What are the most urgent opportunities for future research? Once real FY-4B data are available, should another cyclone case be used to validate the results? Please provide some more outlook on such questions.
Figures and tables
Axis labels in the figures should start with capital letters. This should be corrected in Figures 2, 3, 4, 5, 6, 8, 10, and 11.
Figure 1:
- The colorbar is missing a label.
- You might want to change the color of the ocean to blue instead of green.
Figure 2:
- I would remove the "unit:" in the axis labels unless this is required by the journal.
Figure 3:
- The resolution of this figure does not seem to be very good; it is a bit blurry. For example, in (a2) it is almost impossible to distinguish the lines "iwc-prior mean" and "iwc-posterior".
- The "x10^-4" in the horizontal axis label is a bit lost in all subplots on the right side. Please improve this.
Figure 4:
- What is the "R statement" on the right in the plot? Must it be there?
Figure 5:
- It is a good idea to make 1-4 correspond to a-d. But one has to search quite a bit to find the indication of the panels (a), (b), (c), (d). Please make these more visible, for example by placing them in the top left corner of each panel, in bold, and with a larger font size.
Figure 11:
- The choice of line and marker style is not optimal in this plot, especially since the points in (b) and (c) are very dense. Is it possible to find a better solution?
- "QC = ..." are very DART-specific statements that only very few readers would understand, please rephrase this in an understandable way.
Figure 12:
The only reference to this figure is in line 406 and that sentence is more or less common knowledge that can be found in various studies. Considering how much information is contained in Figure 12 (4 panels with 3 lines each), the text must contain a deeper analysis of what we can learn from this figure. Otherwise it is not relevant.
Spelling, grammar, typos
Table 1 caption: "data" instead of "dada"
L20: There is no need for using semicolons in this sentence. Please replace the semicolons by commas, e.g. "[...] cloud phases, the second [...] positively, and the third..."
L20: "The second is the its" -> Choose either "the" or "its"
L41: It is not common to write it like this. Better would be: "[...] in the study of Vukicevic et al. (2004), model [...]"
L45: "[...] while computing [...]"
L194: "are summarized"
L245: "would get the following formula" - that is a strange formulation.
L250: "is calculated by the following formula"
L359: "produced" and not "produce"
L415: as many observations as possible
L443: "was detected."
Citation: https://doi.org/10.5194/gmd-2022-30-RC1
- RC2: 'Comment on gmd-2022-30', Anonymous Referee #2, 21 Jun 2022
Review of “A preliminary evaluation of WRF (ARW v4.1.1)/DART (Manhattan release v9.8.0)-RTTOV (v12.3) in assimilating satellite visible radiance data for a cyclone case” by Zhou et al., 2022, submitted
General comments
This paper deals with the data assimilation of visible satellite radiances (also referred to as reflectance). Such observations are relatively new to the numerical weather prediction (NWP) community, since the absence of fast and accurate forward operators made their operational exploitation inconceivable for many decades.
In a set-up using the WRF model, the DART data assimilation framework providing EAKF and RHF filters plus the RTTOV-DOM forward operator, both numerical data assimilation cycle experiments and single observation experiments are conducted in an OSSE framework.
Thereby, relevant aspects related to reflectance data assimilation are shown and outlined, including non-linearity, non-Gaussianity, and observation weight related to thinning length scales and update frequencies. Further, limits of reflectance data assimilation are discussed, e.g. the lack of vertical height information of the observed clouds and ambiguities in cloud phase and particle size distribution. While generally interesting, from what I can see, most of these aspects have already been discussed by Scheck et al. (2020) in the context of the COSMO + KENDA system. I therefore strongly advise that the authors include a detailed discussion of how their findings relate to the previous study, to which extent they confirm or contradict previous findings, and which parts of their analysis are uniquely novel.
While key sensitivities of the data assimilation cycle are discussed and evaluated in great detail with respect to analysis verification, it would be of great practical relevance to also understand how such sensitivities relate to forecast quality and forecast error growth. The assumption that a better analysis leads to a better forecast is by no means trivial, particularly when dealing with cloud variables whose properties violate the mathematical assumptions of filter algorithms (linearity and Gaussianity) and which are prone to model biases and compensating model errors of the NWP model. Further, the analysis ensemble mean, which is mostly verified in this study, is not a physically consistent state, so it is not obvious to what extent the analysis error reduction of the ensemble mean is beneficial for the accuracy of the individual forecast ensemble members, which are initialised from the respective analysed model states. I therefore suggest adding results related to forecast verification (i.e., forecast verification of experiments 1-6). A discussion of the sensitivities of forecast quality and error growth for cloud variables, but also for other model parameters like temperature and humidity, would add significant value to the publication and could provide guidance to colleagues preparing visible radiance data assimilation in an operational context.
While this manuscript contains some very interesting material, to be suitable for publication it requires some substantial changes. Please find them below.
- The key research questions should be stated more clearly in the paper overview and at the beginning of each paragraph. It should be motivated why the different investigations are done and which question is addressed by each.
- Could you motivate why you have chosen to assimilate the observations in the physical variable of radiance rather than reflectance, which makes it much easier to estimate how optically thick the clouds are, and which seems to be more common in the data assimilation community?
- The fundamentally new methods and findings for the research community should be pointed out in a more precise way
- Forecasts should be added to the cycled data assimilation experiments to show the sensitivities of both analysis and forecast error to the update frequencies, thinning length scales, chosen DA filter and so forth
- The figures have to be revised fundamentally. Please increase the picture resolution and enlarge lines, labels and axes. It is very hard to differentiate between the triangles, diamonds and squares even when viewing the figures with a large zoom factor.
- The questions below should be answered in the text
- Please consider the comments below to improve the text
Specific comments
- Title: Code versions do not have to be part of the title of a scientific paper, suggestion: “A preliminary evaluation of visible radiance data assimilation for a cyclone case”
Sec 1.
- Please add some comments on challenges and potential of all-sky data assimilation
- Please elaborate a bit more on why you think it is interesting to assimilate visible satellite radiances. Which forecast impact do you expect? What is different from IR or MW all-sky data assimilation?
- Please note and correct in the text: RTTOV is not a single forward operator, but rather a collection of forward operators (a radiative transfer package)
- The goal of the publication and its value to the scientific community has to be stated clearly at the end of the introduction section
Sec 2.1
- Are the grid spacings between nature run, control run and DA runs equivalent?
- Why do you use higher-resolved LBCs for the control/DA experiments than for the nature run/truth? An OSSE should represent the difference between the real atmosphere (which is much more highly resolved than a forecast model) and a forecast model. Thus, I would rather use the higher-resolved LBCs for the nature run/truth. Do you conduct short-range forecasts as well? Do the LBCs introduce the cyclone to the model domain across the lateral boundaries or does it fully develop within the model domain? If the former is the case, this may be problematic.
- Why do you use the ensemble mean of the nature run as truth? The resulting model state is physically inconsistent between the variables
- How exactly do you produce the synthetic observations? What is the role of the original 2-km AGRI observations? Do you use them as observation locations for the synthetic observations? How do you assign observations to model grid points? Do you first assign model grid points to observation locations (which will lead to one grid point being assigned to many observations at a 15 km vs. 2 km scale difference) and then apply thinning at the observation locations? Do you interpolate (and how), or do you use nearest-neighbour? Is it right that you simulate the observations based on the truth, which is equal to the ensemble mean? The next reasonable step would be to perturb the synthetic observations based on an estimated observation error distribution. Do you do that, and how?
- Please explain the set-up of the OSSE in much more detail following the questions posed above. At the current state, it is hard to understand and not reproducible.
- Please explain in the text if you assimilate any other observations or only visible radiances
Sec 2.2
- For better understanding of the sensitivities of visible radiances on model variables please add a discussion of the subgrid-scale cloud variables which are presumably input to the forward operator. How is that realized in your system? How are subgrid-scale clouds parameterized? This may be important to discuss and understand the potential detrimental impact on non-cloud prognostic variables
- Please define “cloud water path” in the text (appears for the first time in Figure 2)
Sec 2.3.2
- It will help the reader if you explain in more detail the goal of the pointwise DA experiments. What do you want to show here?
- What is the meteorological situation at the 4 points that you have chosen in the domain? Please motivate why you have chosen exactly these points
- Would it be an idea to refer to the pointwise data assimilation experiments as single observation experiments? This term seems to be more common in the data assimilation community
- Please clarify that the cycled experiments are also run in an OSSE set-up. This is currently not clear.
Do you also run forecasts or only DA experiments?
VIS data assimilation strongly interacts with forecasts, so I would ask you to verify forecasts as well to show if the DA is successful in terms of forecast impact.
- Table 1: Please explain the variable names or rather write the physical variable names, e.g. QICE = cloud ice mixing ratio.
Sec 3.1.
- I would suggest referring to that kind of experiment as a "single observation experiment"
- Please explain what you mean by “cloud water path”. Is that only vertically integrated liquid water? Or liquid or ice water? Or the sum of the two? May I ask why you do not show the ensemble distribution of cloud water and cloud ice in the left-hand plots since you compare to them on the right-hand side?
- Please motivate more for the reader why you assess and show that kind of experiment. What is your goal with that? Please make that very clear to explain the key issue with ambiguities in visible radiances, i.e. total water mass, cloud phase, effective radii, vertical position, multiple layers with different phases
- In Figure 3, do you work in observation space on the left-hand side and in model space on the right-hand side? So you try to figure out to what degree the model variables are improved if the analysis is drawn towards the observation in observation space? Could you clarify that in the text, please?
- I could not distinguish between the lines in Figure 3. Please use thicker lines, axes, and labels. Use different colours rather than symbols because it is unfortunately really hard to distinguish between them. It is confusing that the diamond sign means "truth" on the left and "analysis" on the right
- Since GMD is a journal dealing with models and the key goal of data assimilation is better meteorological forecasting skill, it would add great value to the paper if you discussed not only the statistical properties of the single-obs experiments, but also explained the meteorological situations: e.g. in panels (b1/b2), the truth run shows an optically thick water cloud, whereas the model shows an ice cloud lying over a water cloud. Data assimilation draws the radiance towards the truth and is well able to enhance the water cloud. However, the false alarm ice cloud is also enhanced, since a) VIS observations can only constrain vertical integrals, and b) there seem to exist spurious correlations of the ice clouds in the background ensemble with the observation, which cannot be vertically localized due to missing information on the cloud top height and vertical cloud extent. Clarify that VIS observations are sensitive to the cloud water mass in the column and the particle size distribution. Ice clouds consist of few big particles and are typically optically much thinner than water clouds, which consist of many small particles. Try to explain the microphysical connection between clouds and radiance to the reader in a bit more depth
- Please add this kind of meteorological detail to clarify the potential and limits of VIS data assimilation in view of specific meteorological cloud situations
- In Scheck et al. (2020), this kind of case study has been performed as well. Please reference it and compare your discussion to the results found there
Sec 3.1
- Do you assimilate any observations in addition to the radiance observations here?
- Do you assess first guesses or analyses or forecasts here?
- Please improve Figure 7. It is very hard to distinguish the symbols. Use thicker labels, axes, and lines, and maybe different colors for the different experiments
- Clarify if the cycled experiments are OSSE experiments or if you assimilate original satellite observations
- You should add that the no-clouds situation in the ensemble is referred to as “zero-spread” problem
- You state that RMSE, MAE etc. measure different aspects of accuracy than FSS. Please explain which aspects and why you want to assess them
- To which degree do the results found for the six different experiments hold for forecast impact?
- Please replace “fake” correlations by “spurious” correlations
- Please elaborate a bit on why fewer observations and less frequent update intervals may lead to better overall forecasting skill
- Do you have suggestions why VIS DA may have detrimental impact on dynamic and thermodynamic prognostic variables? What is the role of the NWP model here? What is the role of subgrid-scale clouds? Could you add a plot illustrating the detrimental impact, please? Please discuss this issue in a bit more depth and debate potential fixes
- I am unhappy with your term "thermodynamic" variables. Wind is not a thermodynamic variable. Maybe you could refer to the variables you want to address as "non-cloud" variables
- You state that VIS radiance data do not have an apparent dependence on “thermodynamic” variables. I do not agree with that. Clouds are advected by wind fields so that cloud position error is correlated with wind field errors. Clouds depend on temperature and humidity. Subgrid clouds are typically parameterized in terms of grid-scale humidity fields.
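The RMSE-vs-FSS distinction raised above can be made concrete with a short sketch (illustrative only; the field sizes, threshold, and window values below are hypothetical, not taken from the paper): RMSE doubly penalizes a displaced cloud feature, while the fractions skill score (FSS, Roberts and Lean 2008) compares neighbourhood fractions and therefore rewards near misses.

```python
# Illustrative sketch: FSS vs. RMSE for a displaced cloud feature.
# Synthetic fields only; values are hypothetical, not from the manuscript.
import numpy as np
from scipy.ndimage import uniform_filter

def fss(forecast, observed, threshold, window):
    """Fractions skill score: 1 - MSE of neighbourhood fractions / reference MSE."""
    f = uniform_filter((forecast >= threshold).astype(float), size=window, mode="constant")
    o = uniform_filter((observed >= threshold).astype(float), size=window, mode="constant")
    mse = np.mean((f - o) ** 2)
    mse_ref = np.mean(f ** 2) + np.mean(o ** 2)
    return 1.0 - mse / mse_ref if mse_ref > 0 else 1.0

# Identical cloud feature, shifted by 5 grid points in the forecast.
obs = np.zeros((100, 100)); obs[40:60, 40:60] = 1.0
fcst = np.zeros((100, 100)); fcst[40:60, 45:65] = 1.0

rmse = np.sqrt(np.mean((fcst - obs) ** 2))
print(f"RMSE: {rmse:.3f}")                                # penalizes the shift twice
print(f"FSS (window=1):  {fss(fcst, obs, 0.5, 1):.3f}")   # point-wise comparison
print(f"FSS (window=21): {fss(fcst, obs, 0.5, 21):.3f}")  # neighbourhood comparison
```

With a neighbourhood wider than the displacement, the FSS approaches 1 even though the point-wise RMSE still reports a substantial error, which is why the two scores assess different aspects of cloud analysis accuracy.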
Sec 3.2.1
- At first glance it was unclear to me which kind of model state you verify here. Is it linear analyses, nonlinear analyses, forecasts, or first guesses? Please clarify both in the text and in the figures
- Please explain why it is interesting to assess the temporal evolution of MPI / MPE of effective radius
- Please use the term “false alarm” clouds instead of “fake” clouds
- Please replace “updated in negative ways” by “analysis increments with wrong signs / wrong magnitudes”
- Does the underestimation of the effective radius come along with an overestimation of radiance, i.e. a positive radiance bias?
- Is the effective radius input to the RTTOV-DOM forward operator? Could you motivate why you show Figures 9 and 10, please? What is shown in Figure 9?
- Since you seem to assimilate the VIS observations over high terrain in China – do you have any quality control included that rejects observations that may be mixed up with snow or ice?
- How do you set the observation error? In addition to the number of observations assimilated, which is determined by thinning length scales and update frequencies, you can control the weight of the observations by choosing a larger observation error. It would be nice if you included that in the discussion and potentially even in your experiments
Sec 4.
- What is the main message that you want to present to the reader related to your discussion on observation rejection?
- Please revise Figure 11. It is very hard to distinguish between the triangles, squares etc. and to recognize them. Maybe you do not have to display every time step in the plot
- I’m wondering why the departure between first guess and observations increases over time. In a healthy DA system, the average first guess error tends to decrease with increasing number of DA cycles. Do you have any explanation for that?
- Why do you use quality control to control the number of assimilated observations?
In my view, quality control should sort out erroneous or non-representative observations. You should rather control the number of observations by horizontal localization, thinning, and superobbing, as well as the observation error.
- If I understand correctly, the outlier threshold acts on the first guess departure of the ensemble mean. Large first guess departures typically occur when clouds are missing or when there are location errors of clouds, which tends to happen quite often for cloud- and precipitation-sensitive observations. In my opinion, being able to correct for such location errors or false alarms in the analysis is of particular importance. Why do you choose to sort out these observations? Would it be possible to inflate the observation error in those cases rather than not assimilating the observations at all?
Sec 5.
- Updating only cloud variables for NWP forecasts is not practical for operational NWP. Do you have other suggestions to deal with potential detrimental effects on forecast skill of temperature, humidity, wind etc.?
- Please replace “modelling experiments” by “data assimilation experiments”. As far as I understood you do not show forecasts or their verification.
- Please replace “cloud simulations” by “cloud states”. You mostly look at analysis ensemble means which is not the same as a “simulation” or a “free model state”
- If you verify analyses the term “cloud forecasting skills” may be inappropriate
- Why would increased model grid spacing lead to a more nonlinear relationship between radiance and LWP / IWP? Or do you mean nonlinearity due to resolved convective processes? Please clarify
- Please elaborate a bit more on ideas for further research based on your found results in the conclusion. What did you learn and what do you suggest to deal with the found problems? What do you suggest for operational VIS radiance data assimilation?
Spelling, grammar, typos
- L-8: there are great potentials in assimilating
change to: there is great potential related to assimilating …
- L-31: unique cloud information complementing the one contained in IR and MW data
- L-40: direct data assimilation critically depends on observation operators
- L-47: Method for Fast Satellite Image Synthesis (MFASIS)
- L-53: single-scattering method for SW radiative processes
- L-59, 61, 64: remove superfluous "the": assimilated GOES-9 VIS radiance, is ensemble-based methods
- L-93: revise the end of the sentence "A nature run is..."
- L-140: Other parameters not explicitly mentioned are set to default values.
- L-186: "To demonstrate the basic ability of the DA scheme.": it is unclear what you mean by that. Do you mean "to demonstrate the basic technical functionality of assimilating visible radiance data by employing EAKF"?
- L-281: double "the"
- L-302: could clearly suppress false alarm clouds
- L-324: through spurious correlation between VIS radiance
- L-339: At the initial cycling step, convective initiation occurred in the nature run
- L-348: non-cloud state variables obtain analysis increments with wrong sign such that analysis error is increased compared to first guess error
- L-363: false alarm clouds
- L-364: much closer
- L-396: by the DART system
- L-408: by the detrimental effects on analysis error of the non-cloud …
- L-420: such as the Atmospheric Motion Vector
- L-426: life cycle, i.e. the intensification and decay processes of a cyclone
- L-428: the adjustment of CWP
- In general: replace thermodynamic by non-cloud
Citation: https://doi.org/10.5194/gmd-2022-30-RC2
- AC1: 'Comment on gmd-2022-30 (response to reviewers)', Yongbo Zhou, 25 Jul 2022
We made major revisions to the original manuscript (MS No.: gmd-2022-30) according to the referee comments. We copied the referee comments into a new document and provide a point-by-point response to each of them. The comments and responses are provided in the supplement file named "responses to comments", where the comments are marked in black and our responses in red.