Submitted as: methods for assessment of models | 19 Jan 2021
The interpretation of temperature and salinity variables in numerical ocean model output, and the calculation of heat fluxes and heat content
- 1School of Mathematics and Statistics, University of New South Wales, Sydney, NSW 2052, Australia
- 2Australian Research Council Centre of Excellence for Climate Extremes, University of New South Wales, Sydney, NSW 2052, Australia
- 3Dept. of Earth and Ocean Sciences, University of British Columbia, Vancouver, B.C. V6T 1Z4, Canada
- 4NOAA/Geophysical Fluid Dynamics Laboratory, Princeton, New Jersey, USA
- 5Program for Climate Model Diagnosis and Intercomparison, Lawrence Livermore National Laboratory, Livermore, California, USA
Abstract. The 2010 international thermodynamic equation of seawater, TEOS-10, defined the enthalpy and entropy of seawater, thus enabling the global ocean heat content to be calculated as the volume integral of the product of in situ density, ρ, and potential enthalpy, h0 (with reference sea pressure of 0 dbar). In terms of Conservative Temperature, Θ, ocean heat content is the volume integral of ρcp0Θ, where cp0 is a constant isobaric heat capacity.
However, several ocean models in CMIP6 (as well as all of those in previous Coupled Model Intercomparison Project phases, such as CMIP5) have not been converted from EOS-80 (Equation of State - 1980) to TEOS-10, so the question arises of how the salinity and temperature variables in these models should be interpreted. In this article we address how heat content, surface heat fluxes and the meridional heat transport are best calculated in these models, and also how these quantities should be compared with the corresponding quantities calculated from observations. We conclude that even though a model uses the EOS-80 equation of state, which expects potential temperature as its input temperature, the most appropriate interpretation of the model's temperature variable is actually Conservative Temperature. This interpretation is needed to ensure that the air-sea heat flux that leaves (or arrives in) the atmosphere is the same as that which arrives in (or leaves) the ocean.
We also show that the salinity variable carried by TEOS-10 based models is Preformed Salinity, while the prognostic salinity of EOS-80 based models is also proportional to Preformed Salinity. These interpretations of the salinity and temperature variables in ocean models are an update on the comprehensive Griffies et al. (2016) paper that discusses the interpretation of many aspects of coupled model runs.
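As a concrete illustration of the bookkeeping described in the abstract, the following Python sketch computes the TEOS-10 form of ocean heat content, the volume integral of ρ cp0 Θ, using the GSW-Python library. It is only a minimal sketch under stated assumptions: the field names (sp, pt, depth, lon, lat, cell_volume) are hypothetical placeholders for gridded salinity, potential temperature, depth, coordinates and grid-cell volumes, not variables defined by the paper.

```python
# Illustrative sketch only: OHC = volume integral of rho * cp0 * Theta, following TEOS-10.
# All field names are hypothetical placeholders for model or observational arrays.
import numpy as np
import gsw  # GSW-Python implementation of TEOS-10

CP0 = 3991.86795711963  # J kg-1 K-1, the fixed TEOS-10 isobaric heat capacity

def ocean_heat_content(sp, pt, depth, lon, lat, cell_volume):
    """Approximate ocean heat content (J) from Practical Salinity, potential
    temperature (deg C), depth (m, positive down), coordinates and cell volumes (m^3)."""
    p = gsw.p_from_z(-depth, lat)          # sea pressure (dbar) from depth
    sa = gsw.SA_from_SP(sp, p, lon, lat)   # Absolute Salinity (g/kg)
    ct = gsw.CT_from_pt(sa, pt)            # Conservative Temperature (deg C)
    rho = gsw.rho(sa, ct, p)               # in situ density (kg/m^3)
    return np.nansum(rho * CP0 * ct * cell_volume)
```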
Trevor J. McDougall et al.
Status: open (until 16 Mar 2021)
RC1: 'Comment on gmd-2020-426', Remi Tailleux, 16 Feb 2021
RC2: 'Comment on gmd-2020-426', Baylor Fox-Kemper, 25 Feb 2021
This paper covers topics needed to better understand what to do in the present situation of partial adoption of TEOS-10 and partial reliance on EOS-80 in the CMIP6 ensemble. The authors review the basic differences between these schemes, with an emphasis on how best to estimate the energy changes in the ocean under models using both equations of state. The paper improves on earlier treatments of these and related issues and forms a basis for future model evaluations with higher physical consistency.
The paper is admirably quantitative in its comparison of different techniques. It furthermore takes on a more pragmatic explanation of the utility of various metrics, especially potential enthalpy and Conservative Temperature. Many of the questions surrounding the interpretation of salinity are also clarified. The preformed salinity interpretation of modeled salinity is also a helpful pragmatic step.
The key insight of the new approach--proved elsewhere but clearly stated here--is that potential temperature is not actually a conserved variable under advection. This means that the standard method of estimating the ocean heat content anomaly (or the energy anomaly in the earth system held by the oceans), which uses surface-referenced potential temperature together with a heat capacity based on surface properties where the water can exchange energy, is not an accurate estimate of the energy that has been added to or can be extracted from the ocean. The paper is explicit on this point, and it is the first I am aware of to make a specific estimate of how the air-sea fluxes affecting potential temperature should be calculated, rather than how they are calculated in practice.
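To make this point concrete, here is a minimal, hedged sketch (using GSW-Python; the inputs q_surf, sa_surf, t_surf, rho_surf and dz_surf are hypothetical surface-layer values, not quantities from the paper) of how the same air-sea heat flux translates into slightly different surface temperature tendencies depending on whether it is divided by the constant TEOS-10 heat capacity cp0, as appropriate for Conservative Temperature, or by the local, spatially varying isobaric heat capacity, as appropriate for potential temperature.

```python
# Sketch: the same air-sea heat flux Q (W/m^2) implies different surface temperature
# tendencies depending on whether it is divided by the constant TEOS-10 heat capacity
# cp0 (Conservative Temperature budget) or by the local, spatially varying isobaric
# heat capacity (potential temperature budget). All inputs are hypothetical.
import gsw

CP0 = 3991.86795711963  # J kg-1 K-1

def surface_tendencies(q_surf, sa_surf, t_surf, rho_surf, dz_surf):
    """Return (dCT/dt, dpt/dt) in K/s for a surface layer of thickness dz_surf (m)."""
    cp_local = gsw.cp_t_exact(sa_surf, t_surf, 0.0)   # local heat capacity at 0 dbar
    dct_dt = q_surf / (rho_surf * CP0 * dz_surf)
    dpt_dt = q_surf / (rho_surf * cp_local * dz_surf)
    return dct_dt, dpt_dt
```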
There are two aspects of this paper that are not stated, which I would recommend the authors consider adding:
1) One aspect that is not covered in the paper is whether Option 2 involves more data downloads or disk storage. As the authors are aware, the OHC (and steric sea level) calculations require large amounts of 3D data fields from each model under consideration. Is one method or the other lighter in terms of data access?
2) Steric sea level is also of interest, and has a quite similar set of issues in its calculation. As shown by Landerer et al. (2007, DOI: 10.1175/JPO3013.1), the steric calculations depend intimately on the correspondence between the modeled variables and the equation of state for in situ density. I would like to see a small additional discussion on this point, related to the discussion of isobaric density gradients in Section 4.2. The steric anomalies are nontrivially different, as they are vertical integrals of the density, so it matters whether the ~1% density gradient errors or 2.7% thermal wind errors accumulate or are random. This would be a valuable addition to the discussion.
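As a rough indication of why the choice of equation of state enters the steric calculation, the sketch below computes a steric height anomaly as the vertical integral of the fractional in situ density anomaly relative to a reference profile, using GSW-Python. It is not the procedure of Landerer et al. (2007); the profile arrays (sa, ct, sa_ref, ct_ref, p, dz) are hypothetical and the formula is a simplified stand-in.

```python
# Minimal sketch: steric height anomaly as the vertical integral of the fractional
# in situ density anomaly relative to a reference profile. Inputs are hypothetical
# 1-D profile arrays; this is not the Landerer et al. (2007) method.
import numpy as np
import gsw

def steric_height_anomaly(sa, ct, sa_ref, ct_ref, p, dz):
    """Steric sea-level anomaly (m) of profile (sa, ct) relative to (sa_ref, ct_ref),
    both on pressures p (dbar), with layer thicknesses dz (m)."""
    rho = gsw.rho(sa, ct, p)               # in situ density of the profile
    rho_ref = gsw.rho(sa_ref, ct_ref, p)   # in situ density of the reference profile
    return -np.sum((rho - rho_ref) / rho_ref * dz)
```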
RC3: 'Further clarification...', Baylor Fox-Kemper, 25 Feb 2021
Let me be explicit about the data download overhead point I made above.
To do the TEOS-10 calculation of OHC, we need model T (thetao or bigthetao, hopefully not thetao recalculated from bigthetao!), model S (interpreted as the paper directs), in situ density (the paper does not specify whether this is to be recalculated or archived) and c_p^0. In Griffies et al. 2016, the seawater density is not specifically recommended for archiving, so its calculation from archived data would require regeneration from T and S, which might imply temporal aliasing (e.g., from using monthly-mean T, S rather than instantaneous values). I suppose similar issues are at hand when using the EOS-80 approach to estimating OHC as well. What I'd like is a bit of comparison between the two approaches from a data archive perspective, specifically calling back to the list of data recommended for collection in the Griffies et al. OMIP protocol, and any advice on what we should have recommended to save but didn't (e.g., in situ density, potential enthalpy, depth-integrated potential enthalpy, etc.)
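To illustrate what such a regeneration from archived data involves, here is a minimal sketch (assuming GSW-Python and CMIP-style monthly-mean fields named so and thetao/bigthetao, with hypothetical depth, layer-thickness and coordinate arrays) of recomputing in situ density and a depth-integrated heat content offline. The salinity handling and the use of monthly means rather than instantaneous fields are exactly the interpretation and aliasing issues raised above.

```python
# Sketch of offline regeneration of in situ density and depth-integrated heat content
# (~rho * cp0 * Theta) from archived monthly means. Variable names follow CMIP
# conventions (so, thetao/bigthetao) but the workflow is only illustrative; treating
# the archived salinity as Practical Salinity is a simplification (the paper argues
# for a more careful interpretation), and using monthly means rather than
# instantaneous fields introduces the temporal aliasing noted above.
import numpy as np
import gsw

CP0 = 3991.86795711963  # J kg-1 K-1

def column_heat_content(so, temp, depth, dz, lon, lat, model_is_teos10=False):
    """Return (in situ density in kg/m^3, depth-integrated heat content in J/m^2).
    If the model is TEOS-10 based, temp is taken to be Conservative Temperature
    (bigthetao); otherwise it is potential temperature (thetao) and is converted."""
    p = gsw.p_from_z(-depth, lat)
    sa = gsw.SA_from_SP(so, p, lon, lat)   # simplification: archived so as Practical Salinity
    ct = temp if model_is_teos10 else gsw.CT_from_pt(sa, temp)
    rho = gsw.rho(sa, ct, p)
    ohc_per_area = np.nansum(rho * CP0 * ct * dz, axis=0)  # integrate over leading depth axis
    return rho, ohc_per_area
```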
RC4: 'Further comment on gmd-2020-426', Remi Tailleux, 28 Feb 2021
I find Prof. Fox-Kemper's positive review intriguing and unexpected. Since it stands in quite sharp contrast with my own review, it is likely that many readers will find our contrasting views confusing, making it difficult to decide where to stand. As a result, I thought it might be of interest to complement my technical review with a more informal one that attempts to restate what I understand of McDougall et al.'s paper in plain and straightforward English, as a reading guide.
Thus, as far as I understand their argument, McDougall et al. essentially say there are two possible ways to interpret a standard EOS80-based numerical ocean model:
- Interpretation 1, a.k.a. the standard interpretation: as a model carrying potential temperature that does everything correctly apart from: a) neglecting non-conservation effects in its conservation equation; b) neglecting the spatial variations of the specific heat capacity in its estimation of the surface fluxes.
- Interpretation 2, a.k.a. the new interpretation: as a model carrying Conservative Temperature that does everything incorrectly apart from: a) correctly using a conservative equation for it; b) using the correct surface fluxes. In other words, in interpretation 2, everything that is done correctly according to interpretation 1 becomes incorrect and conversely, meaning that an EOS80-based model: a) wrongly initialises CT with observations of potential temperature; b) uses the wrong equation of state when evaluating pressure and forces in the momentum equations; c) incorrectly computes all aspects of the surface fluxes that depend on the surface temperature.
Logically, it is true that the authors’ new interpretation 2 provides a logical basis for comparing the temperature variable of an EOS80-based model with Conservative Temperature. However, by the same logic, it is similarly possible to interpret a TEOS-10 model as a bad model for potential temperature, which also provides a logical basis for comparing its temperature with observations of potential temperature. In the latter case, however, everybody would agree that it would not make sense to pursue interpretation 2. The question is therefore why should we consider that it makes more sense for an EOS80-based model?
A key issue with interpretation 2 relates to the point I raised in my former review, namely the fact that the construction of CT requires the specification of three arbitrary constants, a crucial piece of information that a standard EOS80-based model does not possess in general. The proposition that it is possible to interpret the temperature variable of such a model as CT therefore conflicts with the fact that an EOS80-based model has no way to know anything about which determination of CT the authors have in mind. Indeed, obeying a conservative equation and being forced by surface heat fluxes divided by a constant heat capacity does not provide sufficient information to specify the three arbitrary constants entering the construction of CT. This is sufficient to refute the validity of interpretation 2.
Nevertheless, note that the authors’ proposition is in principle testable. Just take a new TEOS-10 model that does everything correctly as reference solution. Now, run the same simulation using an EOS80-based model. If the authors were correct, the temperature variable of the latter model should compare more closely with the CT of the TEOS-10 model than with the potential temperature inferred from CT. At the very least, the authors should perform such a comparison in order to test the validity of their ideas.
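For what it is worth, the comparison proposed above could be scored quite simply once the two runs exist; the following hedged sketch (GSW-Python; the arrays t_eos80, ct_ref and sa_ref are hypothetical model output) computes the RMS distance of the EOS-80 model's temperature from the reference Conservative Temperature and from the potential temperature inferred from it.

```python
# Sketch of the comparison proposed above: given a reference TEOS-10 run (ct_ref,
# sa_ref) and the temperature field t_eos80 of an otherwise identical EOS-80 run,
# measure which interpretation of t_eos80 it matches more closely. Arrays are hypothetical.
import numpy as np
import gsw

def closeness_to_ct_vs_pt(t_eos80, ct_ref, sa_ref):
    """Return (RMS of t_eos80 - CT_ref, RMS of t_eos80 - pt inferred from CT_ref)."""
    pt_ref = gsw.pt_from_CT(sa_ref, ct_ref)           # potential temperature from CT
    rms_ct = np.sqrt(np.nanmean((t_eos80 - ct_ref) ** 2))
    rms_pt = np.sqrt(np.nanmean((t_eos80 - pt_ref) ** 2))
    return rms_ct, rms_pt
```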
Some other non-scientific reasons why one should be careful in putting interpretation 2 out there are due to the fact that:
- Interpretation 2 casts EOS80-based models in an extremely bad light and ocean modellers as basically incompetent, since it essentially consists in saying that EOS80 models initialise their simulations with the wrong field, use the wrong equation of state, and erroneously compute air-sea interactions, which sounds like a very stupid thing to do;
- Given the key role of the ocean in climate change, Interpretation 2 also casts doubt on the scientific integrity of all IPCC assessments so far, as I don't see how it is possible to trust an ocean model making so many elementary mistakes.
Finally, it seems useful to point out that although Griffies et al. (2016) present the development of TEOS-10 models as posing new challenges for model intercomparisons, the ocean model component of the GISS model has been carrying potential enthalpy as its prognostic heat variable for over 25 years (Russell et al., 1995), see https://simplex.giss.nasa.gov/gcm/doc/ModelDescription/GISS_Dynamic_ocean_model.html for details, long before McDougall (2003). As far as I am aware, the GISS model is included in all existing coupled-model intercomparison projects, which means that if the errors associated with neglecting the non-conservation and the spatial variations of cp are as large as the authors claim, one should expect the GISS model to do systematically better in its simulation of temperature than all other EOS80-based models. Is that the case? As regards recommendations, how has the GISS model been compared to other models so far? Shouldn't this form the basis for the recommendations discussed in this paper and in Griffies et al. (2016)?
RC5: 'Reply on RC4', Baylor Fox-Kemper, 01 Mar 2021
Prof. Tailleux is correct that my report is more positive than his, but I think he misinterprets my intention or rationale. I am in favor of this publication because it begins the calculation of the differences between these methods in an orderly way. It is of critical importance to have a handle on how big these differences are.
The point he makes about the GISS model is an excellent one, and I think it goes to show that there are many other factors affecting the skill of climate models beyond the EOS. It is common practice to evaluate "model biases" and "reduction in model bias" by changing one factor at a time (e.g., adding or retuning a parameterization). However, because such models have many counterbalancing errors, changing a single process is as likely to reveal other errors as to lead to overall improvement. It is only in attaching independent evidence to a particular change that we can hope to move forward, e.g., process models at higher resolution, observational constraints, etc.
However, our community has neither the financial nor the human capital to simultaneously explore all processes represented in these models. Thus, quantitative sensitivity experiments are a crucial step in identifying where the errors needing the most attention are. I take this paper as falling into the sensitivity category and thus as valuable, as well as being a band-aid on the wound of the slow transition of the community from EOS-80 to TEOS-10 (and its descendants), and a pedagogical exercise in highlighting the differences between the two. Right now, it is clear that modeling centers have not decided to move forward rapidly on TEOS-10 implementation (otherwise, there would be no need for this stopgap or sensitivity exercise). In my mind there is little doubt that the formulation of EOS-80 is improved upon by the TEOS-10 approach, but what is not clear is the urgency, and this paper contributes to establishing that.
RC6: 'Reply on RC5', Remi Tailleux, 01 Mar 2021
For the record, I made no comments on Prof. Fox-Kemper's intention or rationale, so I am not sure how I may have misinterpreted him. This being said, I find it hard to reconcile what he says about this paper with what I understand of it. Just to be sure that I understand him correctly, could Prof. Fox-Kemper clarify that by supporting this paper, he actually approves its main recommendation, namely that in future comparisons between EOS80- and TEOS10-based models, it will be acceptable from now on to compare (among other things) the monthly-averaged potential temperature computed with an EOS80 model with the monthly-averaged Conservative Temperature computed with a TEOS-10 model? (As opposed to comparing the EOS80 monthly-averaged potential temperature with the TEOS10 monthly-averaged potential temperature inferred from the Conservative Temperature, as is currently recommended.)
I think that what is at stake here is whether TEOS-10 models need to archive both Conservative Temperature and the potential temperature inferred from it, which is the current recommendation of Griffies et al. (2016). The current need to save both fields represents a significant burden for TEOS-10 models, which in some sense are doubly penalised compared to EOS80 models. Indeed, the switch to TEOS-10 and Conservative Temperature generally entails some additional computational cost, as the equation of state is more costly to evaluate, and some additional operations are required to convert Conservative Temperature to potential temperature at the surface in order to correctly estimate radiative and sensible fluxes. The further need to calculate potential temperature at each time step in order to compute monthly means or snapshots represents a significant added computational and storage cost. Presumably, ocean modelling groups have realised that, while they agree it would be beneficial to switch to TEOS-10, the significant increase in computational and storage costs that this currently entails is a strong disincentive to do so. I imagine that this is the real motivation for this paper, which in some sense provides a way out for TEOS-10 models by telling them they can just archive Conservative Temperature: the argument that EOS80 potential temperature can actually be re-interpreted as Conservative Temperature provides a rationale that saves them the need to diagnose and archive potential temperature, thus considerably reducing their computational and storage burden and making the switch to TEOS-10 considerably less painful.
This is really the key issue to be debated here, which Prof. Fox-Kemper hasn't really commented upon yet. This is quite a big deal, because if TEOS-10 models stop diagnosing and archiving potential temperature, it will become quite hard in the future, when comparing potential temperature with Conservative Temperature, to disentangle the differences that are due to the inherent differences between the two variables from those that are due to actual physical reasons.
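For context on the archiving cost being discussed, the conversion at issue can also be performed offline from archived fields; a minimal sketch is given below (GSW-Python; the archived arrays bigthetao, so, p, lon and lat are hypothetical). Whether such offline conversion of monthly means is an acceptable substitute for diagnosing potential temperature at run time is precisely the question under debate here.

```python
# Sketch: offline conversion of archived Conservative Temperature to potential
# temperature, as an alternative to diagnosing and archiving both at run time.
# Array names are hypothetical; converting monthly means rather than instantaneous
# fields is an approximation.
import gsw

def potential_temperature_from_archive(bigthetao, so, p, lon, lat):
    """Return potential temperature (deg C, referenced to 0 dbar) from archived
    Conservative Temperature and Practical Salinity."""
    sa = gsw.SA_from_SP(so, p, lon, lat)   # Absolute Salinity from Practical Salinity
    return gsw.pt_from_CT(sa, bigthetao)   # potential temperature from Conservative Temperature
```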
RC7: 'Reply on RC6', Baylor Fox-Kemper, 01 Mar 2021
Not quite, Prof. Tailleux. The point is that whatever was requested in Griffies et al. 2016, in practice it has not occurred. Thus, the CMIP6 ensemble has a hodge-podge of models with a hodge-podge of variables uploaded. The vast majority use EOS-80 and a few use TEOS-10. But the data that were actually archived, as opposed to what one might have hoped for, do not include Absolute Salinity (even when the models used TEOS-10) and only rarely include Conservative Temperature (only CSIRO, CNRM, IPSL, and EC-Earth so far).
So, what to do? That is the point I think this paper helps address. Most importantly, it provides some quantitative information on what the two options they propose imply in terms of accuracy.
RC8: 'Reply on RC7', Baylor Fox-Kemper, 01 Mar 2021
Perhaps I should clarify a bit more on the "hodge-podge"--this is a natural outcome of a great diversity of modelers and scientists pulling together to assemble information on a wide range of applications and interests. This paper aids in clarifying the interests of the EOS community, in a transparent way, which I think is valuable.