This work is distributed under the Creative Commons Attribution 4.0 License.
Improvements in the Canadian Earth System Model (CanESM) through systematic model analysis: CanESM5.0 and CanESM5.1
James Anstey
Vivek Arora
Ruth Digby
Nathan Gillett
Viatcheslav Kharin
William Merryfield
Catherine Reader
John Scinocca
Neil Swart
John Virgin
Carsten Abraham
Jason Cole
Nicolas Lambert
Woo-Sung Lee
Yongxiao Liang
Elizaveta Malinina
Landon Rieger
Knut von Salzen
Christian Seiler
Clint Seinen
Andrew Shao
Reinel Sospedra-Alfonso
Libo Wang
Duo Yang
Abstract. The Canadian Earth System Model version 5.0 (CanESM5.0), the most recent major version of the global climate model developed at the Canadian Centre for Climate Modelling and Analysis (CCCma) at Environment and Climate Change Canada (ECCC), has been used extensively in climate research and for providing future climate projections in the context of climate services. Previous studies have shown that CanESM5.0 performs well compared to other models and have revealed several model biases. To address these biases, CCCma has recently initiated the ‘Analysis for Development’ (A4D) activity, a coordinated analysis activity in support of CanESM development. Here we describe the goals and organization of this effort and introduce two variants (“p1” and “p2”) of a new CanESM version, CanESM5.1, which features substantial improvements as a result of the A4D activity. These improvements include the elimination of spurious stratospheric temperature spikes and an improved simulation of tropospheric dust. Other climate aspects of the p1 variant of CanESM5.1 are similar to those of CanESM5.0, while the p2 variant of CanESM5.1 features reduced equilibrium climate sensitivity and improved ENSO variability as a result of intentional tuning of the atmospheric component. The A4D activity has also led to an improved understanding of other notable CanESM5.0/5.1 biases, including the overestimation of North Atlantic sea ice, a cold bias over sea ice, biases in the stratospheric circulation and a cold bias over the Himalayas. It provides a potential framework for the broader climate community to contribute to CanESM development, which will facilitate further model improvements and ultimately lead to improved climate change information.
Michael Sigmond et al.
Status: open (until 15 Jun 2023)
RC1: 'Comment on gmd-2023-52', Charles Pelletier, 17 May 2023
Improvements in the Canadian Earth System Model (CanESM) through systematic model analysis: CanESM5.0 and CanESM5.1
M. Sigmond et al., Geosci. Model Dev. Disc., 2023
https://doi.org/10.5194/gmd-2023-52

General comments
The authors document ongoing model analysis efforts for the Canadian Earth System Model (CanESM). Upgrades from CanESM5.0 to CanESM5.1 (p1) are described, featuring extensive coverage of one tuning fix related to the hybridization of advective atmospheric tracers, which rid the model of spurious, unrealistic spikes in simulated stratospheric temperatures (I have to stress that Section 5.1 in particular reads like a detective novel set in an ESM context, and that's a compliment). The authors then present CanESM5.1 p2, an alternate version of CanESM5.1 p1 specifically retuned to better represent ENSO variability. Finally, the authors address less successful attempts at correcting other prominent biases (e.g., overestimation of North Atlantic sea ice, cold bias over the Himalaya), providing further development perspectives inferred from preliminary experiments (mostly based on offline Earth system model component simulations), which also provide valuable insight to the community.
Yet what makes this paper stand out to my eyes is that these bias corrections are set in the framework of the “Analysis for development” (A4D) internal effort. This provides a refreshingly transparent and candid outlook on model development strategy, which is extremely valuable to the worldwide modelling community. The two successful bias corrections listed above can thus be perceived as two flagship applications of the A4D procedure.
The manuscript is of high quality and clearly falls within the scope of GMD. It represents a significant amount of technical work of a kind that often tends to be overlooked in the literature. As said above, the earnestness and transparency with which it approaches difficult challenges in model development is a big plus. However, I think that clarifying some specific points would help the reader and hopefully improve the manuscript. Therefore, I recommend its publication after addressing the comments listed below.
------------------------------
Specific comments
1) The three-fold typology of model issues and the classification of encountered biases in the conclusion is one of the manuscript's strengths. Nevertheless, while I kind of understand where the categories are coming from and acknowledge that there is no perfect way to define them, I think that they could be worded in a sounder manner with little to no impact on the manuscript. I'm OK with the definition of community-specific issues as related to physical phenomena that are universally hard to represent, and thus translate into issues in virtually all models. Then, I'd distinguish remaining issues as model-specific or *configuration*-specific, instead of version-specific. “Model versions” is somewhat vague, and strictly speaking some model version changes could yield major impacts on the dynamical core and resulting physics (e.g., relaxing hydrostaticity, or changing the vertical coordinate -- which are big model changes, I admit). Configuration-specific issues would be those that can be addressed by changing anything but the model source code (resolution, new tuning, input data). Model-specific issues would then be issues that call for model development, which would include parameterization updates/changes, or dynamical core updates. Finally, regardless of the way issue categories are defined, classifying issues is not straightforward, and the boundaries between categories are porous. The manuscript's conclusion kind of implies this when discussing where to categorize the discussed biases, but I think that insisting on this classification exercise not being rocket science would make sense when the classes are introduced in Section 2.
2) There are a few different model versions being used and I found the indexing and their interdependency confusing at times.
2a. L. 26: “CanESM5” is mentioned here, while only “CanESM5.0” had previously been mentioned. My understanding is that “CanESM5” refers to both CanESM5.0 and CanESM5.1. Is that right? If so, it’d probably be best to define what exactly is meant by “CanESM5”, and to use it once CanESM5.1 has been introduced (if it is included in CanESM5, that is).
2b. L. 99: the “p” index for the patch version is quite confusion-prone, since the “p” letter is also used for distinguishing other things (CMIP physics-related ensemble) which are important to the rest of the manuscript. Since the patch version is not used anywhere else in the manuscript (unless I misunderstood), lines 99 to 103 can be removed from the paper, for the sake of clarity. IMO every GMD reader already knows that CanESM5.0 is more different from CanESM2 than CanESM5.1 is from CanESM5.0. And they can read and understand the manuscript without knowing about patch versions.
2c. L. 118: for the sake of clarity, I would say right there that two variants of CanESM5.1 have been implemented, and that the common traits between both these variants, which are new compared to CanESM5.0, are the bullet points below. Actually, I'd create a subsubsection for the three CanESM5.1 bullet points (common to p1 and p2), and then another one presenting the two variants, so that the distinction between both levels is clearer.
2d. Please specify whether CanESM5.1 p2 is built on top of CanESM5.1 p1, or whether it is just targeted at different applications/diagnostics. I think the latter, right?
2e. Ideally, it would be very nice to have a diagram (e.g. Venn and/or arrow-connected boxes) to help the reader differentiate the different model versions and their relationships: is X a successor or an alternate version of Y, etc.

3) Bare ground fraction (L. 286 on): I don't understand the discussion. The first sentence of the paragraph says that there was an interpolation error, and then there's a discussion (and runs performed) to decide whether the correction should be applied or not? If this is an interpolation error, shouldn't it just be corrected, or am I missing something? If the “wrong” bare ground fraction yields better results, is it still reasonable to keep it? Are we getting into overtuning territory? If so, it's worth mentioning explicitly.
4) L. 436 – 474: consider putting the details in an appendix and leaving in the main body just the three bullet points around L. 432, since no definitive conclusions/solutions come from this. It's still valuable information for modellers, and I can feel the sweat and tears, but this part is a bit too lengthy and inconclusive to be worth a main-body spot to my external eyes. Also, L. 471 – L. 474 (up until “considered”) read a bit like a funding application (and the bulk of it is already in the manuscript's conclusion). Please consider removing it or keeping it for a next paper where these things are actually presented.
------------------------------
Technical comments
- L. 46: I wouldn't use the word “specific” to describe the biases dealt with in section 5, as these biases are present not only in both CanESM5.0 and CanESM5.1, but also in other ESMs (as the manuscript rightfully says later). “Persisting biases”, maybe?
- L. 94 “a new”
- L. 104: please provide a reference for CanESM2
- L. 119: if CanESM5.0 p1 and p2 are the same as CanESM5 p1 and p2 as per Swart et al. (2019), please specify it here. If not, please explain.
- L. 124 – 127: papers are meant to be read by human beings, not parsed by a namelist-reading program. Please refer to the method as “second-order conservative”. It could also be worth explaining that on top of the fields themselves, second-order methods remap their spatial gradients in a conservative way (which fits the desired results). And potentially cite Jones 1999 (https://doi.org/10.1175/1520-0493(1999)127%3C2204:FASOCR%3E2.0.CO;2). A generic form of this scheme is sketched after this list.
- L. 128 – 135: this represents a tremendous amount of extremely useful and thankless work – congratulations. Did you notice any impact on performance (speed, memory requirements, etc.)? Also, does “same bit pattern” mean “bit identical”? If so, just say that the F90-ization of the code is bit-identical (and that the array transformation isn't, but is climatologically equivalent); a minimal illustration of this distinction is sketched after this list.
- Fig. 2 and others in appendix: I’m fine with using ERA5 as a reference, but labelling it as “Obs” in figure subtitles is taking it a bit too far.
- L. 169: “…, as *supported* by panels…”
- L. 182: please make these reports accessible permanently, e.g. by uploading them to Zenodo, or by adding them as supplementary material to the paper.
- L. 189: include a citation (e.g. https://www.pnas.org/doi/full/10.1073/pnas.1906556116 ) after “observations”.
- L. 189 “that that”
- L. 189 – 191: it definitely is noteworthy that one ensemble member has positive February trend, however suggesting this hints at internal variability as a driver of observed positive Antarctic sea-ice trend feels like a bit of a stretch.
- L. 196: “such as the run”
- L. 288: “This issue is investigated by comparing two atmosphere-only simulations (with and without bare ground fraction correction), in which the atmosphere is nudged to reanalysis so that the observed meteorological conditions, which have a large impact on dust, are well reproduced”
- Fig. 11: I think that here “seasonal cycle” refers to monthly means minus annual means, right? If so, it'd be worth specifying it in the figure caption. Also, please specify how the ensemble members were picked (presumably randomly). A sketch of the presumed seasonal-cycle calculation is given after this list.
- L. 348: to me, “gradients” are local properties, by definition. I'd describe Diff_CE as “large-scale zonal variations”.
- L. 360: is/was CanOM4 an in-house ocean model, or adapted from another model as CanNEMO is from NEMO? Please provide a bit more detail and fitting citations (including the NEMO book for CanNEMO).
- L. 363: a complex interplay of both oceanic and atmospheric processes
- L. 366: isn’t CanESM5.1 p1’s climate sensitivity virtually the same as CanESM5.0? If so, please rephrase, e.g., “CanESM5.1 p2 was successfully tuned to reduce (-20%) the overestimated climate sensitivity of CanESM5.0 and 5.1 p1”.
- L. 413: I think replacing “air-sea interactions” with “sea-surface buoyancy loss” would be more focused. Please consider citing literature, e.g. https://link.springer.com/article/10.1007/s00382-019-04802-4
- L. 410: since you’re talking about deep convection, could you specify the choice of convection parameterization in CanNEMO? Enhanced diffusivity?
- L. 418: please specify *ocean* vertical diffusivity.
- L. 425 – 426: which bulk formulae, which runoff observations, and which SSS?
- L. 430: it could also be that the CanESM forcings have been obtained from coupled runs, so that they have the imprint of the ocean surface biases (the same way observed SAT have an imprint of sea-ice presence or lack thereof).
- L. 477: ocean physics tuning or adjustments?
- L. 481: for sea-ice covered ocean cells, are SSTs the sea-ice surface temperature, or the temperature of the liquid ocean? If the latter, then SSTs can't get below the freezing point anyway, which may explain the lack of signal (see the freezing-point note after this list).
- Table 2: please briefly provide references for the empirical function.
- L. 484: “This reasoning…”: or that this is a community-specific issue?
- L. 515: missing period.
- L. 521: lonely )
- Table 3: why the capital W?
- L. 590: it may be worth talking about “overtuning” here (and a reference would be nice).
- L. 591: please remove the sentence starting with “Physically” – it can be said about modelling in general, so it is not very engaging.
- L. 622: please first introduce the datasets (with adequate citations), and then explain that they’ve been tested as model forcing. The reader doesn’t know what these acronyms mean when they reach line 622.
- L. 625: required to drive CLASSIC, right?
- L. 638: please rephrase – not sure what “complete” means here (and seems counter-intuitive)
- L. 646: “As a result” is a bit fast here, especially as GMD specializes in this. I think (?) that the authors are thinking of reduced blocking. Please provide more detail, and potentially some references (e.g. https://journals.ametsoc.org/view/journals/atsc/66/2/2008jas2689.1.xml).
- L. 664: missing space.
- L. 694: informing users of the model?
- L. 705: if the choices of these new model components have been made (e.g., I suspect sea ice is SI3), it would be worth explicitly specifying them.
- Acknowledgement: please acknowledge external data used in the study, e.g. https://confluence.ecmwf.int/display/CKB/How+to+acknowledge+and+cite+a+Climate+Data+Store+%28CDS%29+catalogue+entry+and+the+data+published+as+part+of+it for ERA5.
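Remapping sketch (re: L. 124 – 127). A generic form of the second-order conservative remapping referred to above, following Jones (1999); the notation below is illustrative and not taken from the manuscript. The destination-cell average is an area-weighted sum of source-cell averages plus a gradient correction,

    \bar{f}_k^{\mathrm{dst}} = \frac{1}{A_k} \sum_n A_{nk} \left[ \bar{f}_n + \nabla f_n \cdot \left( \bar{\mathbf{r}}_{nk} - \bar{\mathbf{r}}_n \right) \right],

where A_{nk} is the overlap area between source cell n and destination cell k, \bar{\mathbf{r}}_{nk} the centroid of that overlap, and \bar{\mathbf{r}}_n the centroid of source cell n. Dropping the gradient term recovers the first-order conservative scheme, which is why the gradients themselves have to be handled conservatively as well.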
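Bit-identical vs. climatologically equivalent (re: L. 128 – 135). A minimal sketch of the distinction drawn above, with hypothetical file and variable names; this is not the authors' actual verification procedure:

    import numpy as np
    import xarray as xr

    ref = xr.open_dataset("canesm_before_refactor.nc")["ta"]  # hypothetical pre-refactor output
    new = xr.open_dataset("canesm_after_refactor.nc")["ta"]   # hypothetical post-refactor output

    # "Same bit pattern" / bit-identical: every stored value matches exactly.
    bit_identical = ref.values.tobytes() == new.values.tobytes()

    # Climatologically equivalent: long-term statistics agree within a small
    # tolerance, even if individual time steps differ at machine precision.
    clim_equivalent = np.allclose(ref.mean("time").values,
                                  new.mean("time").values, rtol=1e-6)

    print(bit_identical, clim_equivalent)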
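Seasonal-cycle sketch (re: Fig. 11). If the reading above is correct, the plotted quantity would be something like the following, again with hypothetical file and variable names rather than the authors' code:

    import xarray as xr

    dust = xr.open_dataset("dust_load.nc")["loaddust"]        # hypothetical dust load field
    monthly_clim = dust.groupby("time.month").mean("time")    # 12-month climatology
    annual_mean = dust.mean("time")                           # long-term (annual) mean
    seasonal_cycle = monthly_clim - annual_mean               # "monthly means minus annual means"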
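Freezing-point note (re: L. 481). For reference, the surface freezing point of seawater is approximately linear in salinity, T_f \approx -0.054 \, S (in °C, with S in psu), i.e. about -1.9 °C at S = 35 psu; this is the floor below which liquid-ocean SSTs cannot drop.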
Citation: https://doi.org/10.5194/gmd-2023-52-RC1