This work is distributed under the Creative Commons Attribution 4.0 License.
Assimilation of the AMSU-A radiances using the CESM (v2.1.0) and the DART (v9.11.13)/RTTOV (v12.3)
Young-Chan Noh
Hyo-Jong Song
Kevin Raeder
Joo-Hong Kim
Youngchae Kwon
Abstract. To improve the initial condition (“analysis”) for numerical weather prediction, we attempt to assimilate observations from the Advanced Microwave Sounding Unit-A (AMSU-A) on board low-earth-orbiting satellites. The data assimilation system used in this study consists of the Data Assimilation Research Testbed (DART) and the Community Earth System Model as the global forecast model. Based on the ensemble Kalman filter scheme, DART supports the radiative transfer model used to simulate satellite radiances from the model state. To make the AMSU-A data assimilable in DART, preprocessing modules are developed, consisting of quality control and bias correction processes. The quality control includes three sub-processes: gross quality control, channel selection, and spatial thinning. The bias correction process is divided into scan-bias correction and air-mass-bias correction. The observation errors for the AMSU-A channels, required as input to DART, are also estimated. In the trial experiments, a positive analysis impact is obtained by assimilating the AMSU-A observations on top of the conventional measurements already used in the DART data assimilation system. In particular, the analysis errors are significantly reduced throughout the troposphere and lower stratosphere over the Northern Hemisphere. Overall, this study demonstrates a positive impact on the analysis when the AMSU-A observations are assimilated in the DART assimilation system.
Status: open (until 29 Jun 2023)
RC1: 'Comment on gmd-2023-60', Lukas Kugler, 16 May 2023
Review of “Assimilation of the AMSU-A radiances using the CESM (v2.1.0) and the DART (v9.11.13)/RTTOV (v12.3)” by Young-Chan Noh et al.
This paper develops and evaluates pre-processing modules for the assimilation of AMSU-A microwave observations from low-earth-orbiting satellites. The authors developed quality control and bias correction and validated their approach by data-denial experiments. They assess forecast impact by comparing the (6-h forecast) first-guess departures between experiments. Results show that the RMSE of 6-h forecasts is reduced, verified against radiosonde observations and ERA5 as references.
I want to thank the authors for their valuable contribution, as it enables the assimilation of AMSU-A observations for reanalyses and weather forecasting. I found this manuscript very interesting; however, I think some issues need to be addressed before publication.
Major comments
- Please write in the introduction how your work relates to previous research on microwave and/or AMSU-A radiance assimilation. Readers should be made aware of relevant previous microwave assimilation studies if available.
- If available, how do your results compare to those of other data-denial studies assimilating AMSU-A? Did you find any similar studies you can compare your results to? Are they similar or different in any way? You could comment on that in the summary.
- You abbreviate the standard deviation of the first-guess departure as STDDEV. I find this abbreviation very misleading and think that RMSE (or similar) would fit better, since STDDEV suggests a standard deviation, and the standard deviation of the forecast is the spread. So when I read your paper the first time, I thought that you showed the reduction in spread. But with your definition, STDDEV is the RMS difference (error) of observation minus background. Replacing the abbreviation STDDEV with RMSE (or similar) would be clearer.
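For concreteness (my notation, not the paper's): with $y_k$ the observations, $H(x_i^b)$ the simulated observation for ensemble member $i$, $N_e$ the ensemble size, and $N$ the number of observations, the two quantities are

```latex
% ensemble spread: standard deviation of the forecast ensemble in observation space
\mathrm{spread} = \sqrt{ \frac{1}{N_e - 1} \sum_{i=1}^{N_e}
    \left( H(x_i^b) - \overline{H(x^b)} \right)^2 }
% what the paper calls STDDEV: the RMS of first-guess departures
\mathrm{RMSE} = \sqrt{ \frac{1}{N} \sum_{k=1}^{N}
    \left( y_k - \overline{H(x^b)}_k \right)^2 }
```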
- L516: “in the tropics, the analysis impact is relatively small due to small model errors”. Option A: I guess you mean “smaller background error” (see minor comment L462). If you refer to the RMSE being smaller in the tropics, then I suggest changing “model errors” to e.g. “background errors” or “first-guess departures”. Option B: If you really want to say “model error”, then I don’t see from what result you conclude this. Please provide reasoning or evidence, rephrase (e.g. “presumably due to”), or tell the reader where it can be seen (e.g. Figure x). Why would the numerical model be better in the tropics than at 60–90° S? I would rather guess that you get more analysis impact due to larger first-guess departures (option A) caused by baroclinic waves.
- L515: You say that analysis errors in T, U, V were significantly reduced. You probably refer to Figure 11, which shows the “Normalized difference of the standard deviation”. Does it show the standard deviation of the forecast (i.e. ensemble spread) or the standard deviation of the first-guess departure (i.e. the root-mean-square error of forecast minus observation)? Please be specific. Option A: If this is the ensemble spread, then it is not the error of the forecast, making your statement “analysis errors were reduced” wrong. You could say that “following the reduced ensemble spread, we expect a significant error reduction due to AMSU-A”. To measure the error of the forecast directly, it is necessary to compare against observations or independent analyses. Option B: If it is the RMS of forecast minus observation, then replace “standard deviation” with “RMSE of … forecasts” or similar.
- L518: “the number of assimilated AMSU-A data is small” … “because the AMSU-A data are not assimilated in the harsh condition of high latitude regions”. Instead of saying “due to harsh conditions”, please provide a more scientific reason why observations are not assimilated. With the information from L466, I would suggest writing e.g. “we rejected observations at latitudes >60° S because these observations degraded the analysis” or “due to detrimental effects of clouds and sea ice”, if this is the case.
- I could not open your EXP_model_outputs.egg and CNTL_model_outputs.egg in the Zenodo repository. Could you please indicate how this file type can be read?
Minor comments
- L17: “is obtained by assimilating the AMSU-A observations on top of the DART data assimilation system that already makes use of the conventional measurements”. DART itself is software, and users decide which observations to assimilate. Do you refer to a specific DART system which makes use of conventional observations? If it is your own system, then I suggest removing “the DART assimilation system that makes use of”, giving you “… assimilating AMSU-A observations on top of conventional measurements”, to avoid confusion.
- L26: “With the advances in the observation/computation technique and the improved data assimilation methodology, the quality of the initial condition significantly increases”. I suggest rephrasing, because it is not clear to me what advance in observation/computation technique you refer to. Maybe you refer to the increased amount of observations or to satellite observations? What computation technique do you refer to? Parametrizations? An improved dynamical core?
- L38: “However, researchers, who are not affiliated with the operational NWP centers, are restricted from accessing these data assimilation systems, because these operational NWP systems should be securely managed to provide global weather forecasting to the forecasters and users on time.” I agree that this is the case for many operational centers. However, I think you don’t need to explain why centers don’t share their systems for research, and I think it is not necessary for the reader. I suggest removing it or rephrasing, e.g. “In our experience, researchers who are not affiliated …” or “Currently, …”, because this could change in the future. I also suggest you focus on the advantages of your setup, e.g. by saying that your setup is freely available to everyone.
- L63-67: It seems you use Zhou et al. (2022) as a reason (“thus”) for why there is interest in assimilating satellite-observed radiances. Zhou et al. (2022) is an example of radiance assimilation using the same software (DART). I suggest rephrasing.
- L119: I suggest providing a direct link to the dataset for easier replication. I assume it is https://rda.ucar.edu/datasets/ds337.0/ ?
- L150: Is this “gross quality control” the same as the outlier_threshold option in DART? Please clarify.
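For reference, DART’s outlier_threshold rejects an observation whose first-guess departure exceeds a multiple of the expected departure standard deviation; a minimal sketch of that check (function and variable names are mine, not DART’s):

```python
import numpy as np

def passes_outlier_check(obs, prior_mean_hx, obs_err_var, prior_var_hx,
                         outlier_threshold=3.0):
    """Keep an observation only if |y - mean(H(x_b))| does not exceed
    outlier_threshold * sqrt(obs error variance + prior ensemble variance)."""
    expected_std = np.sqrt(obs_err_var + prior_var_hx)
    return np.abs(obs - prior_mean_hx) <= outlier_threshold * expected_std
```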
- L219: “Biases change with time”. I suggest changing this to “biases depend on time of day and on the season”. “Change with time” could mean that the biases are evolving, but the term “bias” usually means a constant (average), systematic error.
- Figures 8a and 9: Confidence intervals would be appreciated to see whether any differences are significant. I guess you could do bootstrapping or a test for the difference.
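For example, a paired bootstrap over the first-guess departures of the two experiments would give a confidence interval for the RMSE difference (a minimal sketch; array names are hypothetical):

```python
import numpy as np

def bootstrap_rmse_diff_ci(dep_exp, dep_cntl, n_boot=10000, alpha=0.05, seed=0):
    """Bootstrap confidence interval for the difference in first-guess-departure
    RMSE between two experiments, resampling paired observation indices."""
    rng = np.random.default_rng(seed)
    n = dep_exp.size
    diffs = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, size=n)               # resample with replacement
        rmse_exp = np.sqrt(np.mean(dep_exp[idx] ** 2))
        rmse_cntl = np.sqrt(np.mean(dep_cntl[idx] ** 2))
        diffs[b] = rmse_exp - rmse_cntl
    # an interval excluding zero indicates a significant difference
    return np.quantile(diffs, [alpha / 2, 1 - alpha / 2])
```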
- Figure captions: instead of “;” please use “:”, e.g. “(global: grey, tropics: green, …)”.
- L352: What is a horizontal distribution? I suggest removing “horizontal”.
- L361: It could be interesting to add information on whether you used the inflation method with a Gaussian distribution (option 2) or the inverse gamma distribution (option 5) for the inflation value.
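For reference, in recent DART versions this choice is made via inf_flavor in &filter_nml (a sketch; the two columns are prior and posterior inflation, and exact defaults depend on the DART version):

```
&filter_nml
   inf_flavor     = 2, 0      ! 2 = spatially varying, Gaussian; 5 = enhanced, inverse gamma
   inf_initial    = 1.0, 1.0  ! initial inflation value
   inf_sd_initial = 0.6, 0.0  ! initial inflation standard deviation
/
```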
- L405: “Skill score of 500 hPa GPH”: the term “skill score” is reserved for specific verification metrics. I suggest removing “skill score”, as it is not necessary, or writing “we verify the 500-hPa geopotential height using the first-guess departure mean and standard deviation.”
- L459-460: “if both regions are extracted … the assimilation impact is comparable” means just that if no observations are assimilated, we would not get an analysis impact. I would suggest instead discussing what would happen if it were not August/September but February/March: would we have a comparable/larger/smaller analysis impact compared to the analysis impact in the Northern Hemisphere in August?
- L462: “as the conventional observations are quite sparse in the high latitude region, the model errors are relatively larger than the other latitudinal regions”. I suggest using “forecast error” instead of “model error” if Figure 10 shows the RMSE of the forecast. Model error specifically states that the model is wrong. Forecast error is more general and includes that missing observations lead to larger analysis errors, which then grow into large forecast errors. I guess the latter is the case.
- L466: From Figure 2, it seems that the rejected observations coincide with the presence of sea ice. What do you think? Maybe you can comment on that?
- L467: You write that observations from latitudes >60° S were rejected, but Figure 2 also shows observations from >60° S at 180° longitude. Did you really exclude all observations at latitudes >60° S, or is it related to something else, e.g. an outlier threshold, mostly cloud-affected observations with large CLW, or sea ice?
- L474: It is not clear to me whether ERA5 or radiosondes were used to produce Figure 11.
- Table 2: Is it possible to indicate which values are statistically significant? Or are they all significant?
- Code and data availability: What does a potential user of your preprocessing modules have to do in order to use them? Do they have to download your code from Zenodo, or will the modules be integrated into DART? Maybe you can provide a “recipe” of the necessary steps?
- Figures 8, 10, 11: I guess these figures use ERA5 data? Please add the source of the reference data to the captions, e.g. ERA5 reanalysis or observations.
Citation: https://doi.org/10.5194/gmd-2023-60-RC1
RC2: 'Comment on gmd-2023-60', Anonymous Referee #2, 23 May 2023
This study concerns the assimilation of AMSU-A microwave radiance data (together with conventional PrepBUFR data) by DART (configured with the EAKF) coupled with CESM. The AMSU-A observations were bias-corrected, and the observation errors were estimated. After these preprocessing steps, the observations were assimilated, and the results were validated and discussed. The study provides some interesting results, and the discussions are thought-provoking. Nevertheless, I have some concerns, which are summarized below.
1. L215-216: Do you mean that extra experiments were performed to determine the optimal spatial thinning length? If so, could you please provide some details about how the experiments were designed in order to estimate the optimal thinning length?
2. L221-223: I have two comments here.
1) Could you please provide some references for this method (i.e., estimating the bias by averaging the departures between the observed and simulated radiances)? Perhaps the study by Scheck et al. (2018) should be cited (Section 5, p. 677). Although Scheck et al. (2018) focus on visible imagery, their method should be applicable to microwave imagery.
[Scheck, L., Weissmann, M., and Bernhard, M.: Efficient Methods to Account for Cloud-Top Inclination and Cloud Overlap in Synthetic Visible Satellite Images, J. Atmos. Ocean. Tech., 35, 665–685, doi:10.1175/JTECH-D-17-0057.1, 2018].
2) In my understanding, the bias of the observation can be estimated by this method, i.e., by averaging the departures between the observed and simulated radiances, only when the simulated radiances are statistically unbiased. If the simulated radiances are biased, they cannot represent the “truth” very well. The simulated radiances are strongly influenced by the model output of the pre-trial run. As mentioned in L422-428, the model output of the pre-trial run is biased. Therefore, it seems that the estimated bias contains both the observation bias and some model bias. Could you please give some explanation of this problem?
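For reference, the method in question estimates the bias as the mean first-guess departure over $N$ collocations (my notation):

```latex
\hat{b} = \frac{1}{N} \sum_{k=1}^{N} \left[ y_k^{\mathrm{obs}} - H(x_k^{b}) \right]
```

which equals the observation bias only if the simulated radiances $H(x_k^b)$ are themselves unbiased; otherwise the estimate mixes observation and model bias, as noted above.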
3. L244: I was confused by formula (4). Why is the averaged residual scan bias obtained by removing the mean bias of the two near-nadir FOVs (15 and 16) from the bias of the departure for each FOV (1–30)? In other words, why is the off-nadir bias estimated by subtracting the near-nadir bias? Could you please give more details here?
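As I read it, formula (4) computes, for each field of view $i = 1, \dots, 30$, with $\overline{d}(i)$ denoting the mean observed-minus-simulated departure at FOV $i$ (my notation, not the paper's):

```latex
b_{\mathrm{scan}}(i) = \overline{d}(i)
    - \tfrac{1}{2}\left[ \overline{d}(15) + \overline{d}(16) \right]
```

i.e., the near-nadir mean departure is treated as a bias-free reference, and my question is why this assumption is justified.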
4. L337: Are twenty ensemble members enough for data assimilation in a global model? Were all ensemble members configured with the same physics options but different initial and boundary conditions?
5. L344: A half-width of 0.075 radians is equivalent to a localization distance of 955.65 km (2 × 6371 × 0.075) in the horizontal direction. Could you please explain why a localization distance of 955.65 km was chosen?
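For clarity, the conversion used here, with $R_{\oplus}$ the Earth radius and $c$ the half-width in radians, is:

```latex
d = 2 R_{\oplus} c = 2 \times 6371\,\mathrm{km} \times 0.075 \approx 955.65\,\mathrm{km}
```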
6. Figure 6 and Figure 11: Since different variables can interact with each other, as mentioned in L383, I think it would be interesting to see how the humidity-related variables are influenced by assimilating the AMSU-A observations.
7. The assimilation generates a positive impact on the model variables at the analysis time. After that, the positive impact can diminish quickly with model integration if the balance between different variables is not respected during the data assimilation process. It would be really helpful to see how the errors (biases, STDDEVs) of both the analysis and forecast fields vary with time.
Citation: https://doi.org/10.5194/gmd-2023-60-RC2
RC3: 'Comment on gmd-2023-60', Wei Han, 01 Jun 2023
Review of "Assimilation of the AMSU-A radiances using the CESM (v2.1.0) and the DART (v9.11.13)/RTTOV (v12.3)"
This manuscript describes efforts to assimilate AMSU-A radiance data in the DART system coupled with RTTOV (v12.3) and CESM. The procedures for quality control, spatial thinning, bias correction, and observation error estimation are developed. Positive impacts are found in the analyses of the primary atmospheric parameters. The paper is well organized. It appears to be logically set out, and the standard of English is acceptable. I recommend this paper for acceptance in the journal after several revisions.
Major comments:
1, Line 150: “the square root of the sum of the observation error variance and the prior background error variance”: how about adding a table listing the threshold for each channel?
2, Section 4.2: How was the thinning interval of “290 km” determined?
3, Line 239-242: The AMSU-A data at high latitudes (>60° S) are excluded in this paper (Line 204) because their impacts are not ideal. Figure 3b shows that “the residual scan biases have different patterns depending on the latitude” for AMSU-A channel 6, and the scan biases are rather large in the latitude band 50–60° S, which borders the >60° S region. It can be expected that the scan biases in the latitude band 60–90° S are even larger. Is this the case? And is it a possible reason for the negative impact of the AMSU-A data >60° S?
4, Figure 4: The impact of the bias correction on CH10 and CH11 seems not ideal. In particular, the histogram of OMB for CH11 appears more skewed after bias correction than before, and an average deviation of approximately 0.2 K remains after bias correction. Why does the bias correction on CH11 not perform well? How about adding a table listing the mean values and standard deviations of OMB for each channel before and after bias correction?
5, Figure 6 and Figure 11: The analysis results for temperature, zonal wind, and meridional wind are given in detail, while the results for humidity are neither shown in the figures nor mentioned in the text. Although the AMSU-A channels are mainly sensitive to temperature, as mentioned in Line 382, “a change in one model parameter can change another model parameter in the assimilation process”. In my experience, the assimilation of microwave temperature sounders more or less affects the humidity. I wonder about the impacts on the humidity analysis in this work.
6, Figure 11: For geopotential height, temperature, and wind, the impacts of assimilating AMSU-A data in the Northern Hemisphere are better than those in the Southern Hemisphere (Lines 444-445, 483-485, 491-493), and these are attributed to the lack of assimilated data in the region >60° S (Lines 494-497). However, the observations from several AMSU-A channels are also rejected over land (Table 1), which is mostly located in the Northern Hemisphere. In other words, the total amount of AMSU-A data assimilated in the Southern Hemisphere should still be larger than in the Northern Hemisphere. In general, the assimilation of satellite data brings more benefit to the analyses and forecasts of the Southern Hemisphere because of its larger ratio of ocean area and the lack of conventional observations. How should we understand the difference between the results in this paper and this expectation?
Minor comments:
1, Figure 2 and Figure 6: the figures should be placed after the paragraphs that first mention them, i.e., after Lines 200 and 372, respectively (unless this manuscript is typeset in LaTeX).
2, Lines 170-205: The authors describe the quality control procedure in three parts: gross quality control, channel selection, and spatial thinning (Lines 15 and 506). However, generally speaking, the contents of Lines 170-205 cannot be summarized as “channel selection”, because these criteria are applied to individual pixels rather than to whole channels. Besides, spatial thinning does not belong to quality control, because pixels rejected in this procedure are not rejected for poor quality. Thus, these paragraphs should be reorganized. A simple option is to rename Section 4.1 “quality control” and revise the corresponding statements in the abstract and conclusions.
3, Line 197: “Figures 2a and b” → “Figure 2a and b”.
4, Throughout the manuscript, sometimes “Figure” is used and sometimes “Fig.”. Please check the manuscript and unify the usage.
5, Line 307: “In the pre-trial run, the instrument noise errors were initially used as the observation errors within DART.” How long is the pre-trial run which is used for the statistics of observation errors?
6, Line 327: “CTRL” → “CNTL”.
7, Line 379: “Figs. 6b and c” → “Fig. 6b and c”.
8, If possible, all figures should be made colorblind-friendly. The bars in Figures 4 and 11 could be shaded with different patterns, as in Figure 8. The symbols in Figures 5, 7, and 9 should be distinguished not only by color but also by shape (squares, circles, triangles, …).
Citation: https://doi.org/10.5194/gmd-2023-60-RC3
Data sets
Model outputs (CNTL & EXP) Young-Chan Noh https://doi.org/10.5281/zenodo.7714755
Model code and software
Model (CESM & DART) and preprocessing codes Young-Chan Noh https://doi.org/10.5281/zenodo.7714755
Viewed
HTML | PDF | XML | Total | BibTeX | EndNote
---|---|---|---|---|---
214 | 38 | 10 | 262 | 1 | 2