Articles | Volume 9, issue 7
Development and technical paper
12 Jul 2016

Evaluating statistical consistency in the ocean model component of the Community Earth System Model (pyCECT v2.0)

Allison H. Baker, Yong Hu, Dorit M. Hammerling, Yu-heng Tseng, Haiying Xu, Xiaomeng Huang, Frank O. Bryan, and Guangwen Yang

Abstract. The Parallel Ocean Program (POP), the ocean model component of the Community Earth System Model (CESM), is widely used in climate research. Most current work in CESM-POP focuses on improving the model's efficiency or accuracy, for example by improving numerical methods, advancing parameterizations, porting to new architectures, or increasing parallelism. Because ocean dynamics are chaotic in nature, bit-for-bit (BFB) identical results in ocean solutions cannot be guaranteed for even tiny code modifications, and determining whether modifications are admissible (i.e., statistically consistent with the original results) is non-trivial. In recent work, an ensemble-based statistical approach was shown to work well for software verification (i.e., quality assurance) on atmospheric model data. The general idea of ensemble-based statistical consistency testing is to use a quantitative measurement of the variability of an ensemble of simulations as a metric with which to compare future simulations and determine statistical distinguishability. The capability to determine consistency without BFB results boosts model confidence and provides the flexibility needed, for example, for more aggressive code optimizations and the use of heterogeneous execution environments. Since ocean and atmosphere models have differing characteristics in terms of dynamics, spatial variability, and timescales, we present a new statistical method to evaluate ocean model simulation data that requires the evaluation of ensemble means and deviations in a spatial manner. In particular, the statistical distribution from an ensemble of CESM-POP simulations is used to determine the standard score of any new model solution at each grid point. Then the percentage of points with scores greater than a specified threshold indicates whether the new model simulation is statistically distinguishable from the ensemble simulations. Both ensemble size and composition are important.
Our experiments indicate that the new POP ensemble consistency test (POP-ECT) tool is capable of distinguishing cases that should be statistically consistent with the ensemble from those that should not, as well as providing a simple, objective, and systematic way to detect errors in CESM-POP due to the hardware or software stack, positively contributing to quality assurance for the CESM-POP code.
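The per-grid-point test described above can be sketched in a few lines of NumPy. This is an illustrative simplification, not the actual pyCECT/POP-ECT implementation; the function name, the z-score threshold, and the failing-fraction threshold are all hypothetical choices for demonstration only.

```python
import numpy as np

def ensemble_consistency_sketch(ensemble, new_run,
                                z_threshold=3.0, fail_fraction=0.05):
    """Hedged sketch of an ensemble-based consistency test.

    ensemble : array of shape (n_members, nlat, nlon), one 2-D field
               per ensemble member.
    new_run  : array of shape (nlat, nlon), the field from the new
               simulation being tested.

    Returns (is_consistent, fraction_exceeding). Thresholds are
    illustrative placeholders, not values from the paper.
    """
    # Ensemble mean and sample standard deviation at each grid point.
    mean = ensemble.mean(axis=0)
    std = ensemble.std(axis=0, ddof=1)
    # Guard against zero variance (e.g., land-masked points).
    std = np.where(std == 0.0, np.nan, std)

    # Standard score of the new solution at each grid point.
    z = np.abs(new_run - mean) / std

    # Fraction of grid points whose score exceeds the threshold
    # (NaN comparisons evaluate to False, so masked points pass).
    exceed = z > z_threshold
    fraction = exceed.mean()

    return fraction <= fail_fraction, fraction
```

For example, a new run drawn from the same distribution as the ensemble should yield a small exceedance fraction and pass, while a run with a large systematic bias should yield a fraction near one and fail.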

Short summary
Software quality assurance is critical to detecting errors in large, complex climate simulation codes. We focus on ocean model simulation data in the context of an ensemble-based statistical consistency testing approach developed for atmospheric data. Because ocean and atmosphere models have differing characteristics, we develop a new statistical tool to evaluate ocean model simulation data that provides a simple, objective, and systematic way to detect errors and instil model confidence.