Improvements in the Canadian Earth System Model (CanESM) through systematic model analysis: CanESM5.0 and CanESM5.1
James Anstey
Vivek Arora
Ruth Digby
Nathan Gillett
Viatcheslav Kharin
William Merryfield
Catherine Reader
John Scinocca
Neil Swart
John Virgin
Carsten Abraham
Jason Cole
Nicolas Lambert
Woo-Sung Lee
Yongxiao Liang
Elizaveta Malinina
Landon Rieger
Knut von Salzen
Christian Seiler
Clint Seinen
Andrew Shao
Reinel Sospedra-Alfonso
Libo Wang
Duo Yang
Interactive discussion
Status: closed
RC1: 'Comment on gmd-2023-52', Charles Pelletier, 17 May 2023
Improvements in the Canadian Earth System Model (CanESM) through systematic model analysis: CanESM5.0 and CanESM5.1
M. Sigmond et al., Geosci. Model Dev. Disc., 2023
https://doi.org/10.5194/gmd-2023-52

General comments
The authors document ongoing model analysis efforts for the Canadian Earth System Model (CanESM). Upgrades from CanESM5.0 to CanESM5.1 (p1) are described, featuring extensive coverage of one tuning fix related to the hybridization of advective atmospheric tracers, which rid the model of spurious, unrealistic spikes in simulated stratospheric temperatures (I have to stress that Section 5.1 particularly reads like a detective novel set in an ESM context, and that's a compliment). The authors then present CanESM5.1 p2, an alternate version of CanESM5.1 p1 specifically retuned to better represent ENSO variability. Finally, the authors address less successful attempts at correcting other prominent biases (e.g., overestimation of North Atlantic sea ice, cold bias over the Himalaya), providing further development perspectives inferred from preliminary experiments (mostly based on offline Earth system model component simulations), which also provide valuable insight to the community.
Yet what makes this paper stand out, to my eyes, is that these bias corrections are set in the framework of the “Analysis for development” (A4D) internal effort. This provides a refreshingly transparent and candid outlook on model development strategy, which is extremely valuable to the worldwide modelling community. The two successful bias corrections listed above can thus be perceived as two flagship applications of the A4D procedure.
The manuscript is of high quality and clearly falls within the scope of GMD. It represents a significant amount of technical work of a kind that is often overlooked in the literature. As said above, the earnestness and transparency with which it approaches difficult challenges in model development is a big plus. However, I think that clarifying some specific points would help the reader and hopefully improve the manuscript. Therefore, I recommend its publication after addressing the comments listed below.
------------------------------
Specific comments
1) The three-fold typology of model issues and the classification of encountered biases in the conclusion is one of the manuscript’s strengths. Nevertheless, while I kind of understand where the categories are coming from and acknowledge that there is no perfect way to define them, I think that they could be worded in a sounder manner with little to no impact on the manuscript. I'm OK with the definition of community-specific issues as related to physical phenomena that are universally hard to represent, and thus translate into issues in virtually all models. Then, I’d distinguish the remaining issues as model-specific or *configuration*-specific, instead of version-specific. “Model versions” is somewhat vague, and strictly speaking some model version changes could have major impacts on the dynamical core and resulting physics (e.g., relaxing hydrostaticity, or changing the vertical coordinate -- which are big model changes, I admit). Configuration-specific issues would be those that can be addressed by changing anything but the model source code (resolution, new tuning, input data). Model-specific issues would then be issues that call for model development, including parameterization updates/changes or dynamical core updates. Finally, regardless of how issue categories are defined, classifying issues is not straightforward, and the boundaries between categories are porous. The manuscript’s conclusion hints at this when discussing where to categorize the discussed biases, but I think that insisting on this classification exercise not being rocket science would make sense when the classes are introduced in Section 2.
2) There are a few different model versions being used and I found the indexing and their interdependency confusing at times.
2a. L. 26: “CanESM5” is mentioned here, while only “CanESM5.0” had previously been mentioned. My understanding is that “CanESM5” refers to both CanESM5.0 and CanESM5.1. Is that right? If so, it’d probably be best to define what exactly is meant by “CanESM5”, and to use it once CanESM5.1 has been introduced (if it is included in CanESM5, that is).
2b. L. 99: the “p” index for the patch version is quite prone to confusion, since the letter “p” is also used to distinguish other things (the CMIP physics-related ensemble index) which are important in the rest of the manuscript. Since the patch version is not used anywhere else in the manuscript (unless I misunderstood), lines 99 to 103 can be removed from the paper for the sake of clarity. IMO every GMD reader already knows that CanESM5.0 differs more from CanESM2 than CanESM5.1 does from CanESM5.0, and they can read and understand the manuscript without knowing about patch versions.
2c. L. 118: for the sake of clarity, I would state right there that two variants of CanESM5.1 have been implemented, and that the common traits between these two variants, which are new compared to CanESM5.0, are the bullet points below. Actually, I’d create subsubsections for the three CanESM5.1 bullet points (common to p1 and p2), and then another one presenting the two variants, so that the distinction between the two levels is clearer.
2d. Please specify whether CanESM5.1 p2 is built on top of CanESM5.1 p1, or whether it is just targeted at different applications/diagnostics. I think the latter, right?
2e. Ideally, it would be very nice to have a diagram (e.g., Venn and/or arrow-connected boxes) to help the reader differentiate the different model versions and their relationships: is X a successor or an alternate version of Y, etc.

3) Bare ground fraction (L. 286 on): I don’t understand the discussion. The first sentence of the paragraph says that there was an interpolation error, and then there’s a discussion (and runs performed) to decide whether the correction should be applied or not? If this is an interpolation error, shouldn’t it just be corrected, or am I missing something? If the “wrong” bare ground fraction yields better results, is it still reasonable to keep it? Are we getting into overtuning territory? If so, it's worth mentioning explicitly.
4) L. 436 – 474: consider putting the details in an appendix and leaving just the three bullet points around L. 432 in the main body, since no definitive conclusions/solutions come out of this. It’s still valuable information for modellers, and I can feel the sweat and tears, but this part is a bit too lengthy and inconclusive as yet to be worth a spot in the main body, to my external eyes. Also, L. 471 – L. 474 (up until “considered”) read a bit like a funding application (and the bulk of it is already in the manuscript’s conclusion). Please consider removing this passage or keeping it for a future paper where these things are actually presented.
------------------------------
Technical comments
- L. 46: I wouldn’t use the word “specific” to describe the biases dealt with in Section 5, as these biases are present in both CanESM5.0 and CanESM5.1, but also in other ESMs (as the manuscript rightly notes later). “Persisting biases”, maybe?
- L. 94 “a new”
- L. 104: please provide a reference for CanESM2
- L. 119: if CanESM5.0 p1 and p2 are the same as CanESM5 p1 and p2 as per Swart et al. (2019), please specify it here. If not, explain
- L. 124 – 127: papers are meant to be read by human beings, not parsed by a namelist-reading program. Please refer to the method as “second-order conservative”. It could also be worth explaining that on top of the fields themselves, second-order methods remap their spatial gradients in a conservative way (which fits the desired results). And potentially cite Jones 1999 (https://doi.org/10.1175/1520-0493(1999)127%3C2204:FASOCR%3E2.0.CO;2 ).
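To be concrete about what “second-order” buys, here is a minimal 1D sketch of a second-order conservative remap (illustrative Python of my own, not the implementation actually used in CanESM; all names are mine):

    import numpy as np

    def remap_second_order(src_edges, src_vals, tgt_edges):
        """Remap cell-averaged src_vals onto target cells, conserving the
        integral while using source gradients for second-order accuracy."""
        src_ctr = 0.5 * (src_edges[:-1] + src_edges[1:])
        grad = np.gradient(src_vals, src_ctr)  # per-cell field gradient
        tgt_vals = np.zeros(len(tgt_edges) - 1)
        for i in range(len(tgt_vals)):
            lo, hi = tgt_edges[i], tgt_edges[i + 1]
            acc = 0.0
            for j in range(len(src_vals)):
                # overlap of target cell i with source cell j
                o_lo = max(lo, src_edges[j])
                o_hi = min(hi, src_edges[j + 1])
                if o_hi <= o_lo:
                    continue
                o_ctr = 0.5 * (o_lo + o_hi)
                # first-order term plus a gradient correction evaluated
                # at the overlap centroid
                acc += (o_hi - o_lo) * (src_vals[j]
                                        + grad[j] * (o_ctr - src_ctr[j]))
            tgt_vals[i] = acc / (hi - lo)
        return tgt_vals

Dropping the gradient term recovers the plain first-order conservative scheme; keeping it is what remaps the spatial gradients consistently.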
- L. 128 – 135: this represents a tremendous amount of extremely useful and thankless work – congratulations. Did you notice any impact on performance (speed, memory requirements, etc.)? Also, does “same bit pattern” mean “bit identical”? If so, just say the F90-ization of the code is bit-identical (and the array transformation isn’t, but is climatologically equivalent).
- Fig. 2 and others in appendix: I’m fine with using ERA5 as a reference, but labelling it as “Obs” in figure subtitles is taking it a bit too far.
- L. 169: “…, as *supported* by panels…”
- L. 182: please make these reports accessible permanently, e.g. by uploading them to Zenodo, or by adding them as supplementary material to the paper.
- L. 189: include a citation (e.g. https://www.pnas.org/doi/full/10.1073/pnas.1906556116 ) after “observations”.
- L. 189 “that that”
- L. 189 – 191: it definitely is noteworthy that one ensemble member has a positive February trend; however, suggesting that this hints at internal variability as a driver of the observed positive Antarctic sea-ice trend feels like a bit of a stretch.
- L. 196: “such as the run”
- L. 288: “This issue is investigated by comparing two atmosphere-only simulations (with and without bare ground fraction correction), in which the atmosphere is nudged to reanalysis so that the observed meteorological conditions, which have a large impact on dust, are well reproduced”
- Fig. 11: I think that here “seasonal cycle” refers to monthly means minus annual means, right? If so, it’d be worth specifying it in the figure caption. Also, please specify how the ensemble members were picked (presumably randomly).
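For clarity, the definition I am assuming is simply the following (illustrative Python; the data here are a made-up placeholder):

    import numpy as np

    # illustrative long-term monthly means (12 values) for one member;
    # to be replaced by the actual climatology of interest
    monthly_clim = 10.0 + 5.0 * np.cos(2 * np.pi * (np.arange(12) - 6) / 12)

    # seasonal cycle = monthly means minus the annual mean
    seasonal_cycle = monthly_clim - monthly_clim.mean()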
- L. 348: to me, “gradients” are local properties by definition. I’d describe Diff_CE as a “large-scale zonal variation”.
- L. 360: is/was CanOM4 an in-house ocean model, or adapted from another model as CanNEMO is from NEMO? Please provide a bit more detail and fitting citations (including the NEMO book for CanNEMO).
- L. 363: a complex interplay of both oceanic and atmospheric processes
- L. 366: isn’t CanESM5.1 p1’s climate sensitivity virtually the same as CanESM5.0’s? If so, please rephrase, e.g., “CanESM5.1 p2 was successfully tuned to reduce (-20%) the overestimated climate sensitivity of CanESM5.0 and 5.1 p1”.
- L. 413: I think replacing “air-sea interactions” with “sea-surface buoyancy loss” would be more focused. Please consider citing literature, e.g. https://link.springer.com/article/10.1007/s00382-019-04802-4
- L. 410: since you’re talking about deep convection, could you specify the choice of convection parameterization in CanNEMO? Enhanced diffusivity?
- L. 418: please specify *ocean* vertical diffusivity.
- L. 425 – 426: which bulk formulae, which runoff observations, and which SSS?
- L. 430: it could also be that the CanESM forcings have been obtained from coupled runs, so that they bear the imprint of the ocean surface biases (the same way observed SATs bear the imprint of sea-ice presence or absence).
- L. 477: ocean physics tuning or adjustments?
- L. 481: for sea-ice covered ocean cells, are SSTs the sea-ice surface temperature, or the temperature of the liquid ocean? If the latter, then SSTs can’t get below freezing point anyway, which may explain the lack of signal.
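For reference, the standard linear approximation to the surface freezing point of seawater illustrates the bound I have in mind (an approximation, not necessarily the equation of state used in CanNEMO):

    # freezing point of seawater at the surface, linear approximation:
    # T_f (deg C) ~= -0.054 * S, with S the practical salinity
    def freezing_point(salinity_psu):
        return -0.054 * salinity_psu

    # a typical salinity of 34 psu gives T_f ~= -1.8 deg C, so liquid-ocean
    # SSTs under sea ice sit near this bound and cannot fall much below it
    print(freezing_point(34.0))  # -1.836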
- Table 2: please briefly provide references for the empirical function.
- L. 484: “This reasoning…”: or that this is a community-specific issue?
- L. 515: missing period.
- L. 521: lonely )
- Table 3: why capital W
- L. 590: it may be worth talking about “overtuning” here (and a reference would be nice).
- L. 591: please remove the sentence starting with “Physically“ – it could be said of modelling in general, so it is not very engaging.
- L. 622: please first introduce the datasets (with adequate citations), and then explain that they’ve been tested as model forcing. The reader doesn’t know what these acronyms mean when they reach line 622.
- L. 625: required to drive CLASSIC, right?
- L. 638: please rephrase – not sure what “complete” means here (and seems counter-intuitive)
- L. 646: “As a result” is a bit fast here, especially as GMD specializes in this. I think (?) that the authors are thinking of reduced blocking. Please provide more detail, and potentially some references (e.g. https://journals.ametsoc.org/view/journals/atsc/66/2/2008jas2689.1.xml )
- L. 664 missing space
- L. 694: informing users of the model?
- L. 705: if the choices of these new model components have been made (e.g., I suspect sea ice is SI3), it would be worth explicitly specifying them.
- Acknowledgement: please acknowledge external data used in the study, e.g. https://confluence.ecmwf.int/display/CKB/How+to+acknowledge+and+cite+a+Climate+Data+Store+%28CDS%29+catalogue+entry+and+the+data+published+as+part+of+it for ERA5.
Citation: https://doi.org/10.5194/gmd-2023-52-RC1
RC2: 'Comment on gmd-2023-52', Hans Segura, 14 Jun 2023
General comments
The manuscript “Improvements in the Canadian Earth System Model (CanESM) through systematic model analysis: CanESM5.0 and CanESM5.1” details the evaluation of two versions of CanESM, with a focus on improvements and degradations in the global climate and at specific geographic locations. The document has the objective of highlighting the benefits of analyzing the model through what the authors call “Analysis for Development”, or A4D. This idea has a lot of potential for model development and for understanding the processes governing the climate system, and the authors deserve credit for highlighting it in a publication.
As I mentioned, the intention has its merits, but there are aspects of the manuscript that make it difficult to find a clear story. Instead, the manuscript in its current state gives the impression of a collection of reports rather than a story told to the scientific community. The points that the authors could improve are listed below.
- The document does not make clear whether the versions and patches of the CanESM5 model were proposed as a result of the A4D initiative, or whether the versions and patches were decided beforehand and the A4D only analyzed the outputs. For example, it is not clear if the choices of the parameters that changed between patch 1 (p1) and patch 2 (p2) are the result of the A4D initiative. So, what is the role of the A4D initiative? Were the changes from CanESM5.0 to CanESM5.1 also a product of the A4D initiative?
- Aside from the versions and patches, the document presents numerous experiments in which only the atmosphere module or the ocean module was used, and this whole universe of runs makes it difficult to put the advantages of the A4D initiative in context. An idea could be to make a sketch showing the “genealogy” of the experiments and patches described in the text, as illustrated below. In this sketch, the authors could highlight the versions, patches, and experiments suggested by the A4D initiative.
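To illustrate what I mean (relationships as I understand them from the manuscript only; the authors would correct this and mark which steps came out of A4D):

    CanESM5.0
        |
        v   common CanESM5.1 changes (e.g., tracer-hybridization fix)
    CanESM5.1
        |-- p1  (baseline variant)
        `-- p2  (alternate retuning: ENSO variability, lower climate
                 sensitivity)
    plus the various atmosphere-only / ocean-only / nudged experiments
    branching off these versions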
- Moreover, it is not clear whether all the atmosphere-only or ocean-only experiments will contribute to building a new patch or version of the model. For example, the section about dust tuning is interesting because an error is tracked, fixed, and implemented in the new version, CanESM5.1. However, the last paragraph of Section 5.1 (lines 287-298) is unclear about the reason for using the atmosphere-only simulations, which are not referred to by a name, and about what the document gains from them. A similar example is the conclusion for the OGWD parameter, which has a different effect between sudden stratospheric warming (SSW) events and the neck wind regions. When I arrived at this part, I asked, “Okay, what are the following steps?”
- I think that the manuscript would benefit from first presenting the changes in the global climate system (historical climate, global dust, ENSO, climate sensitivity) and then showing the regional impacts (dust in east China, sea-ice area for different months and places, the Himalayas’ cold bias). It would then be more obvious to highlight what changed across versions and what is still unchanged. For example, according to the results, fixing the “hybridization” problem corrected the stratospheric temperature spike but with little change on the global scale (temperature, precipitation, climate sensitivity).
- I also suggest adding a summary of the characteristics of the CanESM5 model in terms of resolution (horizontal and vertical), the most important schemes used, and the period of the runs.
Specific comments:
-Line 7: I do not agree with “substantial improvements”. Yes, there are specific improvements, but there are still biases in the representation of global-scale temperature, precipitation, seasonality, etc.
-Line 17-19: While I like the statement, I find some caveats in it. What do you mean by more reliable climate change projections or high-quality models? Are high-quality models the ones that produce a similar pattern for any climate variable, even if processes are not represented correctly? From my point of view, a climate model has the objective of representing the processes governing the climate within the framework of the assumptions on which it is built. Thus, the more parameterization schemes (statistical approaches) are used, the fewer processes are explicitly represented. I think this phrase refers to the fact that tuning the model to represent the historical climate gives more confidence in the climate projections.
-Line 36: What do you mean to be “particularly good”? Pattern, variability?
-Line 108-109: Is this line suggesting that having more ensemble members at the expense of resolution is indeed good? One can argue that if all the ensemble members point in the wrong direction, there is no advantage to this.
-Line 186: The historical Antarctic sea ice trend in Figure 4a is only for September. So, is it enough to use one month to state that the historical sea ice trend is very close to observations?
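A simple way to address this would be to compute the trend for every calendar month rather than September alone, e.g. (illustrative Python; sia is a random placeholder standing in for the actual monthly sea-ice-area series):

    import numpy as np

    rng = np.random.default_rng(0)
    years = np.arange(1979, 2015)
    # placeholder monthly Antarctic sea-ice area, shape (n_years, 12)
    sia = 12.0 + 0.1 * rng.standard_normal((years.size, 12))

    # least-squares linear trend for each calendar month separately
    trends = [np.polyfit(years, sia[:, m], 1)[0] for m in range(12)]
    for m, b in enumerate(trends, start=1):
        print(f"month {m:2d}: trend = {b:+.4f} per year")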
-Line 198: What is GSAT? It is not specified in the main text.
-Line 329: From Fig. 10b-d, the mean spectrum in CanESM5 shifts to higher frequencies with each new patch. Do you have an explanation for this?
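For reference, the kind of spectral diagnostic I have in mind can be computed as follows (illustrative Python; a synthetic oscillation plus noise stands in for a monthly Nino3.4 index):

    import numpy as np
    from scipy.signal import welch

    rng = np.random.default_rng(0)
    n = 600  # 50 years of monthly data
    t = np.arange(n)
    # synthetic 4-year oscillation plus noise as a stand-in for
    # Nino3.4 SST anomalies
    nino34 = np.sin(2 * np.pi * t / 48) + 0.5 * rng.standard_normal(n)

    # Welch periodogram; fs=12 puts frequency in cycles per year, so a
    # shift of power toward higher frequencies (shorter ENSO periods)
    # shows up directly on the frequency axis
    freq, psd = welch(nino34, fs=12, nperseg=240)
    print(freq[np.argmax(psd)])  # dominant frequency in cycles per year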
-Lines 359-362: I like this phrase. Do you have any mechanism that could potentially affect the pattern of SST in the Pacific?
-Line 377: What is IRF? This is not explained in the main text.
-Line 394: I did not fully understand your argument about moisture. Is it because the water vapor feedback is higher in patch 2 (p2) than in patch 1 (p1)?
-Line 395-399: The small reduction in BCS is enough to explain your reduction in climate sensitivity.
-Line 402-403: Which type of parameterization in Table 1 do you refer to? What about the role of shallow clouds, as explained by Vogel et al. (2022)?
-Lines 407-408: What is the connection between excessive sea-ice and salinity? I would have thought that too much sea-ice indicates less fresh water in the ocean and, as a consequence, more salinity.
-Lines 471-477: In all the discussion, there is no mention of the air-sea-ice processes that the model could misrepresent. Do you have any concrete idea? What about the ice module? Is everything okay with that module?
-Lines 555-561: It was not clear whether changing G(v) or Fcrit led to the increase in SSW events. In the CanAM simulations, an increase in G(v) increases SSW events, but this logic does not apply in the CanESM5 simulations. Could you comment on this? Is it because G(v) and Fcrit have different relationships with the OGWD?
-Line 664: It is stated that the lack of realistic topography could be the reason for the cold bias in the Himalayas. Having stated this, do you think that using a resolution of 1° is enough to solve this problem?
References:
Vogel, R., Albright, A.L., Vial, J. et al. Strong cloud–circulation coupling explains weak trade cumulus feedback. Nature 612, 696–700 (2022). https://doi.org/10.1038/s41586-022-05364-y
Citation: https://doi.org/10.5194/gmd-2023-52-RC2

AC1: 'Reply to referee comments gmd-2023-52', Michael Sigmond, 29 Aug 2023