This work is distributed under the Creative Commons Attribution 4.0 License.
Description and evaluation of the tropospheric aerosol scheme in the Integrated Forecasting System (IFS-AER, cycle 47R1) of ECMWF
Zak Kipling
Vincent Huijnen
Johannes Flemming
Pierre Nabat
Martine Michou
Melanie Ades
Richard Engelen
Vincent-Henri Peuch
Download
- Final revised paper (published on 27 Jun 2022)
- Preprint (discussion started on 14 Sep 2021)
Interactive discussion
Status: closed
RC1: 'Comment on gmd-2021-264', Anonymous Referee #1, 20 Oct 2021
This manuscript presents a description of the changes carried out between cycles 45R1 and 47R1 in the aerosol module of IFS. My main concern is about writing quality. In its current state, the manuscript is very difficult to read and not very well structured. Much information is not provided in the manuscript, as often only the changes made in the model are described, along with differences from the parameterizations used operationally, which makes the paper confusing. For example, almost no indication is given of how the formation of secondary aerosol is taken into account in the model. The hypotheses of the model are often not clearly explained. The manuscript is more a technical note designed for people working with the IFS model than a scientific paper for the whole scientific community. In my opinion, the manuscript should not be accepted in its current state. I strongly advise the authors to make significant revisions to the manuscript, to present the parameterizations well by explaining their physicochemical significance and the assumptions behind them. The authors should write a standalone manuscript with all the information necessary to understand the model.
Major comments:
No explanation is given for the choice of the parameters in Table 2. What is the basis for these parameters? I think some of the values given are wrong. For some lines, several rmod values are given with no explanation; conversely, not enough values are given for the dust or sea-salt lines, where there are several aerosol modes. Do the parameters correspond to a volume distribution or a number distribution? The values of rmod seem very low (often around 10 nm, as if all atmospheric aerosol were nanoparticles). This does not seem realistic, as it is not in the accumulation mode as it should be. In that case, all the processes calculated with the aerosol diameter are probably not well represented in the model. However, later in the text, it is explained that the deposition rates are not computed with the aerosol size. The density of organic aerosol seems very high.
Almost no precision is given in the representation of phenomena. Many times, the authors refer to the previous article without giving a short description of what the model does and what the assumptions are. I think the paper should be entirely restructured, as it is often not clear what is done and what the rationale behind the parameterizations is. Here are a few examples:
- The description of deposition rates arrives late in the text (Section 5), whereas global budgets including deposition are given quite early. It would help to provide all the information on the model earlier.
- How was the partitioning of nitric acid and ammonia represented? Was a thermodynamic module used? How is the formation of coarse nitrate represented in the model?
- Section 2.2. What are the heterogeneous reactions included in the model?
- SOA emissions scaled to CO? While I personally think that such approaches are not reliable, I understand that some models use this kind of approach for simplification purposes. It should be explained how these emissions are determined and what kind of chemistry they implicitly account for. What are the consequences of using this kind of simplified parameterization? Are there SOA/CO ratios specific to emission sectors (I don't understand how a single ratio could be used)?
- Section 5.2.1. I don't understand how the Di parameter was selected. Should the evaporation of droplets lead to an "evaporation" (probably not the right term, as I don't see how non-volatile dust or BC can evaporate) of particles from the droplet? In reality, a particle would probably stay in the droplet unless the evaporation of the droplet is complete, or it has a low settling velocity.
- I don’t understand the basis behind the equation in Section 5.2.2, which contains some undefined parameters.
- Section 6.2: Before comparing to AeroCom, it would be useful to provide information on the exercise. It is very difficult to understand what is done without basic information on it. What are the conditions of the simulations? When comparing the emission fluxes for sea salt, are the emissions corrected by the factor of 4.3? Are all the emissions computed for the same humidity conditions, or are they corrected the same way? I understand the idea of the AeroCom exercise of comparing budgets between models to see how much the results differ. I fail to see, however, the interest of evaluating the representation of the budget from IFS-AER by comparison to the median of models. All models could fail to represent one process, in which case the median of models would be wrong.
- Comparison to observations: why did you keep traffic stations in the analysis? At the resolution of the simulations, they induce a very large bias. All the comparisons should be redone without including traffic stations (or removed from the paper).
- The aqueous-phase sulfur chemistry seems to be lacking from the model (as only the gas-phase chemistry of CB05 is mentioned), while it is generally the main oxidation process of SO2 and the main source of sulfate formation. I don’t see how the sulfate aerosol formation could be considered trustworthy. The fact that IFS-AER overestimates sulfate concentrations without accounting for the aqueous-phase chemistry seems to indicate that the representation of some phenomena is not accurate.
Minor comments:
“The mass mixing ratios of these two are passed from IFS-CB05 to IFS-AER, used in the nitrate and ammonium production schemes, and updated in return by those schemes.” Are IFS-CB05 and IFS-AER two separate models? In that case, it would be necessary to have a schematic. Otherwise, I suggest changing or removing this sentence, as I think the authors just want to say that the concentrations given by the gas-phase chemical mechanism are used as inputs for the aerosol module.
P1, L5: “The parameterizations of sources and sinks that have been updated since cycle 45R1 are described.” While correct, the sentence is a bit confusing, as “are described” refers to “the parameterizations” at the beginning and not to “since cycle 45R1”.
P1, L9: if you use the IFS acronym it is probably better to say “of IFS” rather than “of the IFS”
P1, L10-11: “components that are not used operationally will be clearly flagged.” Should I understand that this is not the case currently? In that case, should this sentence be highlighted in the abstract?
P1, L12: a wide range of
P1, L13: What is meant by an increase in skill?
P2, L15: Not clear what is meant by imbalances
P2, L22-26: This paragraph, with many sentences beginning with “Section …”, could be improved.
P2, L28: I was not sure what was meant by “bulk–bin scheme”. It should be explained. I don’t think that it is the right expression. It seems to be a model approach with a single mode and not a “bin” scheme, which to me denotes a sectional approach.
P7, L5: What does the “implementation of a cap” mean?
Title of Section 4: Primary aerosol sources ?
P23, L1: remove “as” in “longer than as simulated”.
P23, L11: Not sure what is meant by “the very short lifetime … is dominant”.
P30, L19: a bias of 2-5 µg/m3 over Europe does not seem low. Later it says that the bias is negative, whereas the number provided is positive. Is it an underestimation or an overestimation?
Citation: https://doi.org/10.5194/gmd-2021-264-RC1
RC2: 'Comment on gmd-2021-264', Anonymous Referee #2, 31 Jan 2022
Review of “Description and evaluation of the tropospheric aerosol scheme in the Integrated Forecasting System (IFS-AER, cycle 47R1) of ECMWF” by Remy et al.
This manuscript describes the latest version (denoted cycle 47R1) of the aerosol scheme used in the IFS, along with a description of the model updates implemented in this scheme since cycle 45R1, documented by Remy et al. (2019). A wide range of aerosol developments have been implemented, including online coupling of the aerosol scheme to the chemistry scheme for the sulfur cycle, updates to the emission sources of sea salt and mineral dust, and updates to wet and dry removal processes. An evaluation is provided of the impact of the new model updates. Most of the updates are implemented in the operational configuration of cycle 47R1, but not all. An evaluation of one year of a free-running simulation (without assimilation of aerosol information) is given.
Overall, this is an interesting and useful manuscript describing an operational aerosol configuration of the CAMS aerosol system. Aerosol forecast products produced by this system are widely used and so a detailed description and evaluation is warranted. It also describes differences between this system and the aerosol scheme used in the also widely used CAMS reanalysis. Understanding these differences as well as establishing the baseline performance of the latest model will be of use to many users. As this is largely a technical description and evaluation paper it is highly suitable for publication in GMD.
I have a number of recommendations which should be addressed before publication.
Major remarks:
- I would recommend that the authors consider making some fundamental changes to the manuscript layout. In my view it currently doesn’t flow well and this makes it quite hard to read and follow. The model updates sections (Section 3) includes some quite detailed evaluation of the specific updates documented but this is then followed by a further general evaluation in Section 7. Some of the latter still compares Cycle 45r1 with Cycle 47r1 so why not just put all the evaluation aspects together? Could the early sections, detailing model updates not just focus on the difference between the old and new model and then combine all the evaluation together under the Evaluation section. I would also recommend having a Results section which includes current Section 6 and Section 7 as subsections. To me the current layout is a bit disjointed and unclear.
- The model updates are not sufficiently motivated in my view, and in many cases the model "improvements" or developments are not reflected in the skill scores. Can the authors motivate the changes in more detail, clearly outlining what the key drivers of the updates were? Are process-based improvements in one part of the model uncovering compensating biases elsewhere within the aerosol scheme? This also should be discussed. Has the original scheme been tuned in any way, for instance to give the correct AOD values?
- The description of the sulphur cycle in Section 2 would benefit from more detail. The coupling of the aerosol and chemistry schemes is a significant step change in the complexity of the IFS aerosol scheme and warrants a full description. What sources of SO2 are represented, and what chemical reactions (gas-phase and aqueous-phase processes) are represented in CB05? Does it include a representation of DMS chemistry, for instance? In order to understand the key drivers of the improved evaluation of surface SO2/SO4 concentrations, it is important to know what processes are represented or not, and a reader should not have to go to another reference to get the information needed to understand the results presented here.
- Tables 2 and 3 are very confusing, and I’m afraid I don’t understand them at all. This could be due in part to the captions perhaps not being complete enough and to the incorrect labelling used (what configuration does “IFS” refer to?). I have read and reread the relevant sections but still do not understand why both cycle 45R1 and cycle 47R1 are included in both tables. I thought coupling to chemistry is included in the latter but not the former, so why do we have four different simulations of the sulfur cycle? If it is to separate out the change in the sulphur cycle due to coupling with CB05 alone from the other model upgrades included in cycle 47R1, then this is not at all clear in the text, and both the main text and the table captions need to be improved. Could the two tables perhaps be merged to facilitate comparison? The whole section on the coupling, and its consistent and clear labelling, really needs to be improved.
- It would also be nice, given the significant impact of the deposition improvements in cycle 47R1, to discuss this generally overlooked part of aerosol modelling, with much focus often being placed on emissions and chemical production etc. The results here highlight the large and important role of more tightly constraining deposition processes in models more generally. It would be nice for the authors to place this work a bit more in the context of the current state of wider aerosol modelling and its literature, and not just the ECMWF models.
- While some areas of the evaluation quantify the impact of the improvements on model skill scores, in others it is more qualitative, and the authors use language such as “x is slightly better than y” or “the skill seems to improve”. This, I feel, detracts from the significance of their findings and from the benefits attributed to these model developments. An attempt should be made to be more quantitative in the language.
- In parts I find the text a little sloppy, and so there are a lot of typographical corrections listed below. Taking a bit more care with the writing would aid clarity and make the paper easier to follow in places. The figure labelling I find to be incomplete; in many places it doesn’t include information on the temporal sampling of model or observed data. For example, in Figure 1 I presume the model data is an annual mean, but this isn’t clear from the caption. The captions need to be self-explanatory in their own right.
- There is insufficient description of the observations used. What time periods do they represent? What temporal sampling was used? This is very important in terms of interpreting the results, to understand how representative the comparison is and whether you are comparing apples with apples! This likely could be better achieved via the restructure of the manuscript recommended above.
Minor comments:
P1 L14/15: concentration of sulphate … is improved → concentrations of … are improved
P1 L15: imbalances → biases
P2 L20: Cycling forecasts without data assimilation – presumably it is just the data assimilation of aerosol information that is excluded, and the data assimilation of meteorological variables is retained to constrain the simulated meteorology? Please make this clearer in the text.
P2 L22-26: The section labelling is all incorrect here, as the Introduction is Section 1; this needs correcting.
P2 L24: aerosol sources → primary aerosol sources
P3 L4: All of sea-salt → All of the sea-salt
P3 L6: The use of “etc.” isn’t satisfactory here. You should state clearly which model variables are divided by 4.3, or, if all of them are, say “all sea salt properties”.
Note there is inconsistent use of “sea salt” and “sea-salt” throughout the manuscript. Please make consistent.
P3 Subsection title 2.1.1: sulfur → Sulfur
P3 L27: Use of the CAMS_GLOB_ANT emissions versus MACCity: can you more accurately quantify the impact of the different emissions dataset on the subsequently simulated sulfur cycle? Emissions of SO2 are a big uncertainty in the modelled S-cycle budget generally and so could play a not insignificant role here. Also, in Tables 2 and 3, I presume from the values that these are annual mean fluxes, but it’s not clear from the captions.
P3 L29: “the chemical conversion rates are globally of the same order of magnitude” – please see my comment above on how more detail on the simulation of the S cycle is warranted. Chemical conversion of what to what? Also, if the chemical conversion rates are the key drivers of the increase in S lifetime in cycle 47R1 (presumably in Table 2), why is the lifetime of cycle 45R1 in Table 3 similar (~3 days)?
Table 4: The caption is incomplete, and I do not see any comparison with, or mention of, the AeroCom Phase 3 comparison mentioned in the text.
P6 L5: Put reference in brackets.
Table 5: This table is quite informative but is barely mentioned in the text. Inclusion of the appropriate reference for each cycle would also be good. M86, N12, G14, and A16 are undefined. What is meant by Mass Fixer? This also should be explained in the text.
P9 L2: Monahan86 and Grythe14 – why not just label them as M86 and G14 as you do in Table 5? Inconsistent labelling is confusing.
P9 L5: ofMonahan → of Monahan.
P9 L5-9 How globally representative are the ocean surface brightness retrievals?
P9 L24: Similarly to → Similar to
P9 L29: How was the evaluation carried out? What is the temporal frequency of the observations and the model (again not stated in the caption of Figure 5, but it looks to be weekly)? How representative is this comparison? The sea salt contribution to the total AOD will be maximised in local wintertime and so exhibits a clear seasonal cycle; has this been assessed? The MAN network could otherwise contain contributions from secondary sources of sulfate aerosol from DMS and other biogenic sources; looking at the seasonal cycle could help discriminate between the various sources.
P10 L2: AEROCE/SEAREX programme – include an appropriate citation.
P10 L8: is slightly improved -> can you be more quantitative
Table 6 and 7: It would be good to include the diameter ranges below the bin labels.
P14 L8 and Figure 6: is this an annual total?
P14 L14: The skill of the simulated dust seems to improve -> please be more quantitative
P14 L21: IFS-AER – which version? Please compare with the reference to IFS-AER on L22 (same page).
P14 L21: producign → producing
Figure 7: It would be nice to see some uncertainty bounds or even a standard deviation of the observations (dust being highly variable in space and time) on these plots.
P18 L3: have been brought to → have been added?
P19 L14/15: Why bold? Also, I don’t really understand how something can be implemented in a cycle but not be operational. Do the cycle numbers and revisions not refer to an operational configuration?
P20 L14: This sulphur → The sulphur. Also, sulphur and sulfur are both used in the text.
P21 L2: CASTNET – include appropriate reference?
P23 L1: Where is the budget for Cycle 45r1 presented?
P23 L9/L18: “20” missing unit
P23 L29: Are the AeroCom values for the year 2017? If not, this would easily explain differences in emissions.
P25 L6: deserts a → deserts
Figure 12: The highest and lowest values use the same colour, which is a bit confusing.
P27 L14/15: What drives the simulated peaks in AOD? Are SO2 emissions from fires included?
P27 L29: positiv → positive
P29 L6: significantly over → significantly improved
Figure 14 caption: regionallevel → regional level
P30 L8: probably don’t always hold true → isn’t true in all instances
Figure 16: While the evaluation of dust deposition is qualitative at best, it does look like the model deposits most of its dust too close to the African coastline with not enough extending westward over the Atlantic.
P30 L24: 2-3 µg/m3 than → 2-3 µg/m3 more than
P31 L2: biomassburning → biomass burning
P31 L4/5: Again, the improvements associated with the NEWDEP changes are very interesting. Are the new deposition changes offsetting the increase in AOD and PM2.5 associated with the biomass burning emission height change? Presumably more particles are emitted higher up, away from boundary-layer and sedimentation processes. Do you see a shift from dry to wet deposition between 45R1 and 47R1? Also, the NEWDEP changes seem to impact some aerosol species (e.g. biomass burning) more than others (e.g. nitrate); can the authors offer a suggestion as to why this is the case?
P31 L7-10: Improvements in PM biases over China – can this be linked back to the improved S cycle via coupling of the aerosol scheme to CB05 chemistry?
P31 L25: obsreved à observed
P31 L27: in which the same nitrate scheme as IFS-AER has been adapted → which has implemented an adapted version of the IFS-AER nitrate scheme
Figure 15: Is only half the possible range of FGE covered by the colorbar used here? There seems to be a large underestimation of AOD over the Maritime Continent that persists through all model versions.
Figure 18 caption: “Please note the different scale between the two panels” – but the scales appear to be the same?
Figure 19: I can’t make out the observations in black circles.
Figure 20: what time period is used for the observations?
Figure 20 caption: should this be OM in PM2.5 and not surface ammonium concentration?
Citation: https://doi.org/10.5194/gmd-2021-264-RC2
RC3: 'Comment on gmd-2021-264', Anonymous Referee #3, 16 Feb 2022
The manuscript from Remy et al. presents various updates of the CAMS modelling system for tropospheric aerosols since version 45R1, following up on a similar paper by Remy et al. (2019). The updates concern a range of parameterisations, from emissions to deposition, and an integration with the tropospheric chemistry scheme in IFS. It contains quite useful details about the system configurations used in the different IFS cycles.
The paper in its current state is a bit difficult to read, not well organised, and in parts too vague. I would not like to recommend publication in this form. The results and model descriptions are intertwined to the extent that it is confusing.
My recommendations would be:
- Please harmonise the model simulations used. I believe it would be useful if all three model versions (45R1, 47R1, and 47R1newdep) were shown for all evaluations and tables. I think also that for some evaluations the period February to December 2017 was used, and for others January to December 2017. I am not sure this change in base time period is useful. Even though results for 45R1 are probably (I did not check) already in Remy et al. (2019), they should also be shown here.
- The model changes should be described first and then evaluation results could be discussed altogether in subsequent chapters.
- The statistical data are presented in four different ways: as maps, as time series, as tables, and as histograms. It is not clear why the style of presentation changes in the course of the manuscript. It would be good to have the same statistics, e.g. bias, RMSE, R, and MNMB, available for all evaluations. This could be a few overview tables or more complete annotations in the figures. This would give substance to the often vague statements in the text on quality. Please make sure statistics are available for all three model versions (45R1, 47R1, and 47R1newdep).
- From Table 5: it looks like 45R1 and 47R1 are run at quite different vertical resolutions. Shouldn't that be a major factor in all aerosol budgets? This is not discussed as far as I can see. If the experiments have been made at different vertical resolutions, then the changes in the budgets are not just due to the changed parameterisations. This would be interesting to understand for the budgets and the deposition evaluation.
- In general, I am not clear about whether the integration with IFS-CB05 is activated in all 47R1 experiments with IFS-AER. Does mentioning 47R1 and IFS-AER mean (for the experiments shown in this study) that IFS-CB05 is used for all gas-phase chemistry?
specific comments:
- Table 5: I think this table comes a bit late.
- Tables 2+3: These should simply be combined into one sulfur-cycle table. It's a bit confusing this way with two tables.
- Table 2, SO2 budget: It's a bit counterintuitive that the lifetime goes up when wet deposition is added as a process. Why did dry deposition go down so much in 47R1?
- p9 l2: typo "Grythe14 Grythe"
- Table 4: The table caption is incomplete. What is in brackets? Which cycle is shown? Maybe show all?
- p7 l11: 46R1 is discussed, but what about 45R1?
- Table 5: I think the experiments used in this paper (45R1, 47R1, and 47R1newdep without data assimilation) should be included in this table to clarify what is used.
- p2 l20: one year of cycling without data assimilation: I think the missing data assimilation should be commented on more. How different is the model compared to the operational model with data assimilation?
- 2.1.1: sulfur => Sulfur
- p3 l29: sinks => sink
- Table 6: Why does the lifetime change so much for a given bin and different source functions? Each bin has one size and density. Dry and wet removal should be roughly the same, should they not?
- Figure 3: What is the color in the plots? It looks like all dots are plotted…
- Figure 7: typo at end
- p22 l15: Budgets are shown for Feb-Dec, but then Table 11 says Jan-Dec 2017; which is used? Why not use Jan-Dec throughout? Is the spin-up really needed?
Citation: https://doi.org/10.5194/gmd-2021-264-RC3
AC1: 'Answers to reviews of gmd-2021-264', Samuel Remy, 04 Apr 2022