the Creative Commons Attribution 4.0 License.
Hybrid ensemble-variational data assimilation in ABC-DA within a tropical framework
Joshua Chun Kwang Lee
Javier Amezcua
Ross Noel Bannister
Hybrid ensemble-variational data assimilation (DA) methods have gained significant traction in recent years. These methods aim to alleviate the limitations and maximise the advantages offered by ensemble or variational methods. Most existing hybrid applications focus on the mid-latitudinal context; almost none have explored their benefits in the tropical context. In this article, hybrid ensemble-variational DA is introduced to a tropical configuration of a simplified non-hydrostatic convective-scale fluid dynamics model (the ABC model, named after its three key parameters: the pure gravity wave frequency A, the controller of the acoustic wave speed B, and the constant of proportionality between pressure and density perturbations C), and its existing variational framework, the ABC-DA system.
The hybrid ensemble-variational DA algorithm is developed based on the alpha control variable approach, often used in numerical weather prediction. Aspects of the algorithm such as localisation (used to mitigate sampling error caused by finite ensemble sizes) and weighting parameters (used to weight the ensemble and climatological contributions to the background error covariance matrix) are implemented. To produce the flow-dependent error modes (ensemble perturbations) for the ensemble-variational DA algorithm, an ensemble system is also designed for the ABC model, which is run alongside the hybrid DA system. A random field perturbations method is used to generate an initial ensemble, which is then propagated using the ensemble bred vectors method. This setup allows the ensemble to be centred on the hybrid control analysis. Visualisation software has been developed to focus on the diagnosis of the ensemble system.
To demonstrate the hybrid ensemble-variational DA in the ABC-DA system, sensitivity tests using observing system simulation experiments are conducted within a tropical framework. A 30-member ensemble was used to generate the error modes for the experiments. In general, the best performing configuration (with respect to the “truth”) for the hybrid ensemble-variational DA system used an 80 %/20 % weighting on the ensemble-derived/climatological background error covariance matrix contributions. For the horizontal wind variables though, full weight on the ensemble-derived background error covariance matrix (100 %/0 %) resulted in the smallest cycle-averaged analysis root mean square errors, mainly due to large errors in the meridional wind field when contributions from the climatological background error covariance matrix were involved, possibly related to a suboptimal background error covariance model.
The ensemble bred vectors method propagated a healthy-looking DA-centred ensemble without bimodalities or evidence of filter collapse. The ensemble was under-dispersive for some variables, but for others the ensemble spread approximately matched the corresponding root mean square errors. Reducing the number of ensemble members led to slightly larger errors across all variables due to the introduction of larger sampling errors into the system.
Data assimilation (DA) methods can traditionally be classified into three categories: variational methods, which look for a maximum a posteriori (MAP) estimator; Kalman-based methods, which produce a minimum variance estimator (often in an ensemble implementation); and methods which attempt to estimate full probability density functions (PDFs) without making any parametric assumptions (e.g. Markov chain Monte Carlo and particle filters). For an introductory discussion, the reader is referred to e.g. Asch et al. (2016). Each traditional DA method is subject to its own advantages and limitations, which determine its applicability in operational numerical weather prediction (NWP) systems. A wide spectrum of modern DA methods have been proposed in recent years, including hybrid ensemble-variational (hybrid-EnVar) methods, which have gained significant traction. Within the category of hybrid-EnVar methods, there exist different flavours due to subtleties in the derivation and permutations arising from the usage of different variational or ensemble methods. One can modify, for instance, the elements of the problem or the solution algorithm, yielding different hybrid variants. Bannister (2017) provides a comprehensive review of the latest hybrid-EnVar methods used in modern DA.
In this article, we focus on the hybrid covariance ensemble-variational approach (Hamill and Snyder, 2000). This differs from the hybrid gain ensemble-variational DA approach (Penny, 2014), which is also commonly used. Most existing hybrid applications focus on the mid-latitudinal context and highlight the advantage of introducing flow-dependency in the error statistics. However, almost none have explored the hybrid application in the tropical context, where the characteristics of the error statistics are still poorly understood. Here, we introduce the hybrid-EnVar method to an existing convective-scale DA framework (Bannister, 2020) for a simplified non-hydrostatic fluid dynamics model (the ABC model; Petrie et al., 2017) with the hope that the upgraded system can provide insights on the benefits and highlight potential issues that may arise using hybrid-EnVar methods in the tropical context. We note that this study is also the first to use a tropical configuration of the ABC-DA system.
The aims of this study are as follows:
 a.
to document and test a hybrid-EnVar DA system for the ABC model, and
 b.
to test the generation of an ensemble that allows the hybrid-EnVar DA to function.
Section 2 contains details of the existing system used in this study. Section 3 documents the development of an ABC ensemble system, necessary to generate a meaningful ensemble of ABC states, which feeds into the hybrid-EnVar DA system, along with the implementation of the hybrid-EnVar DA system itself. Section 4 demonstrates the use of the hybrid-EnVar DA system within a tropical framework. Three appendices provide details that may be of interest to readers familiar with ensemble initialisation and inter-variable localisation in ensemble DA.
2.1 Model equations
The ABC model used in this study was originally developed by Petrie et al. (2017) and was designed as a simplified non-hydrostatic fluid dynamics model for use in convective-scale DA experiments. It involves solving a set of simplified partial differential equations derived from the Euler equations. A vertical slice formulation containing only dry dynamics is used (a two-dimensional x–z spatial grid). This section summarises the model equations and their properties. These are
where $\boldsymbol{u}=(u,v,w)$ is the three-dimensional wind vector of zonal, meridional, and vertical wind; $\tilde{\rho}'$ and $b'$ are perturbation quantities from a reference state of scaled density ($\tilde{\rho}$) and buoyancy, respectively (see Petrie et al., 2017). The coefficients A, B, and C are tunable parameters which control the pure gravity wave frequency, the modulation of the advective and divergent terms, and the relationship between the pressure and density perturbations in the equation of state, respectively. The small-scale acoustic wave speed is given by $\sqrt{BC}$. Additionally, the Coriolis parameter f can be chosen depending on the desired latitudinal position of the vertical slice. Collectively, the variables $u$, $v$, $w$, $\tilde{\rho}'$, and $b'$ at every grid position in the domain are referred to as the state vector $\boldsymbol{x}$.
2.2 Variational data assimilation
Variational DA was subsequently implemented in the ABC model by Bannister (2020), termed the ABC-DA system. As of version 1.4 (https://doi.org/10.5281/zenodo.3531926, Bannister, 2019), 3DVar and 3DVar-FGAT (First Guess at Appropriate Time) are available in the ABC-DA system. The reader is directed to Bannister (2020) for the full details of this implementation, but here we summarise the key equations in the context of 3DVar-FGAT. The 3DVar-FGAT scheme is adapted into a hybrid scheme later in this article.
2.2.1 Incremental formulation
The objective of variational DA is to find an optimal state $\boldsymbol{x}^{\mathrm{a}}$ which minimises a cost function $J(\boldsymbol{x})$ (e.g. Kalnay, 2003). This cost function usually comprises two terms: one for the departure of the state with respect to the background state $\boldsymbol{x}^{\mathrm{b}}$ and one for the departure of the state (transformed to observation space) with respect to observations $\boldsymbol{y}$. A third term related to any model errors can be added in the so-called weak-constraint formulation, which is not needed in this work as we do not consider model errors in our setup. Even though the terms in $J$ are based on Mahalanobis distances, $J$ can be non-quadratic (with respect to the state variable) due to the nonlinearities of the (often) nonlinear forecast model ($\mathcal{M}_{t-1\to t}$, used in the case of 4DVar) and observation operator ($\mathcal{H}_{t}$). Most variational systems implement an incremental formulation of the cost function (Courtier et al., 1994), which involves iteratively linearising $\mathcal{M}_{t-1\to t}$ and $\mathcal{H}_{t}$ around a reference state ($\boldsymbol{x}^{\mathrm{r}}$) and framing the problem in terms of increments to $\boldsymbol{x}^{\mathrm{r}}$ in a series of outer loops. This allows one to find an approximate solution of a complicated non-quadratic optimisation problem by tackling a series of easier quadratic ones. To illustrate, for a DA cycle with a window from $t=0$ to $T$, the incremental form of the 3DVar-FGAT cost function is
$$J(\delta\boldsymbol{x})=\frac{1}{2}\left(\delta\boldsymbol{x}-\delta\boldsymbol{x}^{\mathrm{b}}\right)^{\top}\mathbf{B}_{\mathrm{c}}^{-1}\left(\delta\boldsymbol{x}-\delta\boldsymbol{x}^{\mathrm{b}}\right)+\frac{1}{2}\sum_{t=0}^{T}\left(\mathbf{H}_{t}\delta\boldsymbol{x}-\boldsymbol{d}[t]\right)^{\top}\mathbf{R}_{t}^{-1}\left(\mathbf{H}_{t}\delta\boldsymbol{x}-\boldsymbol{d}[t]\right),\qquad(2)$$
where $\boldsymbol{x}=\boldsymbol{x}^{\mathrm{r}}+\delta\boldsymbol{x}$, $\boldsymbol{x}^{\mathrm{r}}$ is a reference state, $\delta\boldsymbol{x}$ is the state increment, and $\delta\boldsymbol{x}^{\mathrm{b}}$ is the difference between the background and the reference. $\mathbf{B}_{\mathrm{c}}$ is the climatological background error covariance matrix, $\mathbf{R}_{t}$ is the observation error covariance matrix at time $t$, and $\mathbf{H}_{t}$ is the linearised observation operator at time $t$. We define the innovation:
$$\boldsymbol{d}[t]=\boldsymbol{y}[t]-\mathcal{H}_{t}\left(\boldsymbol{x}^{\mathrm{r}}[t]\right).\qquad(3)$$
Note that Eq. (2) is the same as Eq. (7) of Bannister (2020) except that the linearised forecast model $\mathbf{M}_{t-1\to t}$ has been replaced here by the identity $\mathbf{I}$ (this replacement is what distinguishes 3DVar-FGAT from 4DVar). For the first outer loop, $\boldsymbol{x}^{\mathrm{r}}$ is set as $\boldsymbol{x}^{\mathrm{b}}$ (i.e. $\delta\boldsymbol{x}^{\mathrm{b}}=\mathbf{0}$).
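As an aside, the FGAT innovation step can be sketched compactly. The following toy Python fragment is illustrative only (not ABC-DA code); the model, observation operator, and dimensions are invented stand-ins. It shows how the nonlinear model supplies the reference trajectory for Eq. (3) while the increment itself is never propagated:

```python
import numpy as np

# Toy illustration of the FGAT innovation step (Eq. 3), not ABC-DA code.
# model_step and obs_operator are invented stand-ins.

def model_step(x):
    """Hypothetical nonlinear model M_{t-1 -> t}."""
    return x + 0.1 * np.sin(x)

def obs_operator(x, idx):
    """Hypothetical observation operator H_t: observe selected grid points."""
    return x[idx]

def fgat_innovations(x_b, y, obs_idx, T):
    """Compute d[t] = y[t] - H_t(x^r[t]) along the reference trajectory.

    In 3DVar-FGAT the nonlinear model propagates the reference state so
    innovations are valid at the observation times, but the increment is
    not propagated (M is replaced by the identity in Eq. 2).
    """
    x_r = x_b.copy()  # first outer loop: x^r = x^b
    d = []
    for t in range(T + 1):
        if t > 0:
            x_r = model_step(x_r)  # x^r[t] = M_{t-1 -> t}(x^r[t-1])
        d.append(y[t] - obs_operator(x_r, obs_idx))  # Eq. (3)
    return d
```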
2.2.2 Estimation and modelling of B_{c}
A vital component in variational DA is B_{c}. It is the averaged (climatological) second moment of the PDF of forecast errors of the system (Bannister, 2008a). It determines the weighting between the use of observational and background information, and it allows for the spreading of observational information spatially and between variables. We can disentangle the construction of B_{c} by considering how the background errors are first estimated and then used in the modelling of B_{c}.
In the original implementation by Bannister (2020), the estimation of the background error statistics was performed by extracting multiple longitude–height slices from one or more Met Office Unified Model outputs (since these were conveniently available), which were processed to create an “ensemble” of ABC states (and subsequently, ABC forecasts). This set of forecast perturbations serves as a proxy for the background errors used as training data for B_c. The validity of this prescribed source of background error statistics has not been investigated, but the approach is convenient and practical. Another way to estimate the training data is to compute forecast differences (with different lead times and valid at the same time) over a climatological period (the National Meteorological Center method; Parrish and Derber, 1992), but as of version 1.4, this is not coded in the ABC-DA system. Instead, we introduce a different method to compute the ensemble forecasts for the training data (Sect. 3.1).
In many systems, B_c is too large to be computed explicitly using the training data. For instance, in operational models, the size of the state variable can be 𝒪(10^{9}). Instead, B_c is often modelled through the use of a so-called control variable transform U. Even though the ABC model is small enough for the explicit computation of B_c to be feasible, it is still far more practical to use a control variable. We introduce a control vector δχ which is related to a state vector δx by
$$\delta\boldsymbol{x}=\mathbf{U}\delta\boldsymbol{\chi}.\qquad(4)$$
The choice of the control vector δχ and control variable transform U is flexible, but they dictate the eventual cross-covariances between model variables of δx. In order to improve the conditioning of the incremental cost function (for more efficient minimisation), the control variables are chosen to be uncorrelated and have unit variance. Substituting Eq. (4) into Eq. (2) yields a new preconditioned incremental cost function:
$$J(\delta\boldsymbol{\chi})=\frac{1}{2}\left(\delta\boldsymbol{\chi}-\delta\boldsymbol{\chi}^{\mathrm{b}}\right)^{\top}\left(\delta\boldsymbol{\chi}-\delta\boldsymbol{\chi}^{\mathrm{b}}\right)+\frac{1}{2}\sum_{t=0}^{T}\left(\mathbf{H}_{t}\mathbf{U}\delta\boldsymbol{\chi}-\boldsymbol{d}[t]\right)^{\top}\mathbf{R}_{t}^{-1}\left(\mathbf{H}_{t}\mathbf{U}\delta\boldsymbol{\chi}-\boldsymbol{d}[t]\right),\qquad(5)$$
where δx^{b}=Uδχ^{b}. Since B_c is a symmetric and positive definite matrix, U may be chosen to be a lower triangular matrix (using a Cholesky decomposition). The B_c implied by minimising Eq. (5) with the transform in Eq. (4) is $\mathbf{B}_{\mathrm{c}}=\mathbf{U}\mathbf{U}^{\top}$ (Bannister, 2008b). It is evident that the use of a carefully designed U removes the need to compute $\mathbf{B}_{\mathrm{c}}^{-1}$ in order to minimise the cost function. By contrast, since observation errors are assumed to be uncorrelated in the ABC-DA system, R_t is diagonal. Hence, there is no requirement for a separate transform since $\mathbf{R}_{t}^{-1}$ can be easily computed.
The calibration of U (and the implied B_{c}) is usually only performed once at the start of any cycling experiment using climatological background error statistics and then used for every DA cycle.
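The role of U can be illustrated with a small numerical sketch. The matrices below are synthetic, and a plain Cholesky factor stands in for the sequence of parameter and spatial transforms used in practice:

```python
import numpy as np

# Sketch of the control variable transform idea: model B_c implicitly
# through U with B_c = U U^T. The synthetic covariance and the choice of
# a plain Cholesky factor are illustrative; ABC-DA builds U from a
# sequence of parameter and spatial transforms instead.

rng = np.random.default_rng(0)
n = 5
A = rng.standard_normal((n, n))
B_c = A @ A.T + n * np.eye(n)   # synthetic SPD "climatological" covariance

U = np.linalg.cholesky(B_c)     # one valid (lower triangular) choice of U

# In control space the background term of Eq. (5) is (1/2)|dchi - dchi_b|^2,
# i.e. its Hessian is the identity, so B_c^{-1} is never formed.
dchi = rng.standard_normal(n)
dx = U @ dchi                   # Eq. (4): map control increment to model space
```

Any U with $\mathbf{U}\mathbf{U}^{\top}=\mathbf{B}_{\mathrm{c}}$ implies the same covariance; the Cholesky factor is simply one convenient square root.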
2.2.3 ABCDA system minimisation algorithm
In the ABC-DA system, a conjugate gradient algorithm is used to find the minimiser of the cost function. Differentiating Eq. (5) with respect to δχ yields the gradient ∇_{δχ}J, given by
$$\nabla_{\delta\boldsymbol{\chi}}J=\delta\boldsymbol{\chi}-\delta\boldsymbol{\chi}^{\mathrm{b}}+\sum_{t=0}^{T}\mathbf{U}^{\top}\mathbf{H}_{t}^{\top}\mathbf{R}_{t}^{-1}\left(\mathbf{H}_{t}\mathbf{U}\delta\boldsymbol{\chi}-\boldsymbol{d}[t]\right),\qquad(6)$$
where U^{⊤} and ${\mathbf{H}}_{t}^{\top}$ are the adjoints of U and H_{t}, respectively. The reader is directed to Bannister (2020) for more details. The modifications required for specific steps in order to enable hybrid-EnVar DA are highlighted later in Sect. 3.2.2.
Hybrid-EnVar schemes stem from a combination of two approaches: ensemble methods and variational methods. For the former, the archetypical example is the ensemble Kalman filter (EnKF) in its different formulations. The reader is referred to e.g. Evensen (2006) for an introduction. The variational approach has been discussed in Sect. 2.2. Instead of a one-off retrieval of the background error statistics from a climatological source (Sect. 2.2.2), the purpose of having an ensemble is to estimate time-dependent background error statistics from the ensemble forecasts valid at each cycle. As such, the background error statistics vary as the system evolves.
Accordingly, a parallel ensemble system that runs alongside the hybrid (single-trajectory) analysis is required in order to provide the background error statistics at each cycle. In this study, we explore the ensemble bred vectors (EBV) method (a variant of the bred vector method) to evolve the ensemble system, which will be described in Sect. 3.3.1. The following sections cover the step-by-step implementation of a cycling hybrid-EnVar DA system in the ABC model, in accordance with the schematic diagram (Fig. 1), which shows the coupling between the deterministic components and the parallel-run ensemble system using the two different ensemble propagation methods. Figure 1 is explained in the remainder of Sect. 3.
3.1 Generation of initial ensemble of states for ABC ensemble system
This section discusses the generation of an initial ensemble, the first step in Fig. 1 (red segments), which is needed in the case of a cold start. Subsequent propagation of the ensemble can then proceed after this problem is addressed.
In the ABC model, an initial two-dimensional state can be computed from a longitude–height slice of a Unified Model output, which is a convenient approach adopted by Petrie et al. (2017). In this light, the simplest method to generate an initial ensemble is to extract different longitude–height slices from the same Unified Model output, similar to how a population of training data is generated in Bannister (2020) for the calibration of the climatological background error covariances, as mentioned in Sect. 2.2.2. Another method is to simply add statistical noise to the initial ABC model state, although it is not straightforward to determine the distributions for the noise sampling (which could vary for different variables and include multivariate correlations) so that the solutions are consistent with the underlying dynamics. The model evolution in the first few cycles may spuriously dampen or amplify the added statistical noise if it is drawn from an incorrectly chosen distribution.
For this study, we adopt the random field perturbation method proposed by Magnusson et al. (2009) to generate the initial ensemble. The main idea relies on choosing two (assumed independent) ABC states and calculating their differences. The differences are treated as perturbations which can then be scaled to maintain a fixed amplitude between ensemble members and/or cycles, and are subsequently added to the initial ABC state computed above to generate an initial (arbitrary-sized) ensemble of states. Linear balances are approximately preserved in the resulting ensemble as only linear operations are performed on the fields (Magnusson et al., 2009).
Unlike in an operational NWP system, where archived past analyses are available, a long “truth” simulation needs to be performed using the ABC model starting from a chosen initial state (the “truth run” in Fig. 1). To generate each ensemble member, two states from the same “truth” simulation are chosen at random. These need to be sufficiently separated in time for the assumption of independence to be valid. Following the above steps, the initial ensemble of states is given by
$$\boldsymbol{x}_{\mathrm{cs}}^{k}=\boldsymbol{x}_{\mathrm{cs}}^{\mathrm{c}}+\frac{r^{k}}{\sqrt{2}}\left(\boldsymbol{x}_{kt\mathrm{1}}^{\mathrm{tr}}-\boldsymbol{x}_{kt\mathrm{2}}^{\mathrm{tr}}\right),\qquad(7)$$
where ${\mathit{x}}_{\mathrm{cs}}^{k}$ represents the kth initial state of N ensemble members; ${\mathit{x}}_{\mathrm{cs}}^{\mathrm{c}}$ is the initial unperturbed (hereafter referred to as control) state; the subscript “cs” refers to cold start; ${\mathit{x}}_{kt\mathrm{1}}^{\mathrm{tr}}$ and ${\mathit{x}}_{kt\mathrm{2}}^{\mathrm{tr}}$ are the two random states drawn from the same “truth” simulation at different times (kt1 and kt2); and $r^{k}$ depends on the scaling factor $\epsilon^{\mathrm{rf}}$, defined according to the total energy norm ($\|\cdot\|_{E_{\mathrm{tot}}}=\sqrt{E_{\mathrm{tot}}}$; see Eq. A1) of the perturbations, to maintain a fixed amplitude (the ensemble mean $\overline{\|\cdot\|_{E_{\mathrm{tot}}}}$) between ensemble members. The $\frac{1}{\sqrt{2}}$ factor is included because we are considering differences between two states, so the variance of the difference reflects the sum of their error variances, rather than differences between a state and a mean (see the Appendix of Berre et al., 2006). More details and justification of the method are covered in Appendix A.
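A minimal sketch of this generation step might look as follows. The "truth run" here is a toy list of stored states, and a simple squared-sum norm stands in for the ABC total energy norm of Eq. (A1); both are illustrative assumptions:

```python
import numpy as np

# Sketch of the random field perturbations method (Magnusson et al., 2009):
# differences of two randomly chosen, well-separated "truth run" states are
# scaled and added to the control state. Toy states and a simple squared-sum
# stand-in for the total energy norm (Eq. A1).

def energy_norm(dx):
    """Stand-in for ||.||_{E_tot}; the real norm weights each variable."""
    return np.sqrt(np.sum(dx ** 2))

def random_field_ensemble(x_c, truth_states, N, eps_rf, min_sep, rng):
    """Generate N perturbed initial states around the control state x_c."""
    members = []
    T = len(truth_states)
    for _ in range(N):
        # pick two states sufficiently separated in time (independence)
        while True:
            t1, t2 = rng.integers(0, T, size=2)
            if abs(t1 - t2) >= min_sep:
                break
        diff = truth_states[t1] - truth_states[t2]
        # 1/sqrt(2) accounts for the difference of two states; the rest
        # scales the perturbation to the amplitude eps_rf in the energy norm
        pert = (1.0 / np.sqrt(2.0)) * (eps_rf / energy_norm(diff)) * diff
        members.append(x_c + pert)
    return members
```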
After generating the initial ensemble, the cold start members are propagated to the analysis time of the first cycle at T+0 (from ${\mathit{x}}_{\mathrm{cs}}^{k}$ to ${\mathit{x}}_{\mathrm{0}}^{\mathrm{f}k}$ and from ${\mathit{x}}_{\mathrm{cs}}^{\mathrm{c}}$ to ${\mathit{x}}_{\mathrm{0}}^{\mathrm{fc}}$ by ℳ_{cs→0}; Fig. 1, lower brown segments). Note that for subsequent cycles, the analysis ensemble (produced from Sect. 3.3) and control analysis are propagated instead of cold start members (i.e. from ${\mathit{x}}_{t}^{\mathrm{a}k}$ to ${\mathit{x}}_{t+\mathrm{1}}^{\mathrm{f}k}$ and from ${\mathit{x}}_{t}^{\mathrm{ac}}$ to ${\mathit{x}}_{t+\mathrm{1}}^{\mathrm{fc}}$ by ${\mathcal{M}}_{t\to t+\mathrm{1}}$; Fig. 1, upper right brown segments). These ensemble forecasts are then used in the DA step, described in the next section.
3.2 Hybrid ensemble-variational data assimilation
The hybrid-EnVar approach seeks to implement a hybrid background error covariance B_h, which is a linear combination of a climatological and an ensemble-derived background error covariance matrix (B_c and B_e), in the form following Hamill and Snyder (2000):
$$\mathbf{B}_{\mathrm{h}}=\beta_{\mathrm{c}}^{\mathrm{2}}\mathbf{B}_{\mathrm{c}}+\beta_{\mathrm{e}}^{\mathrm{2}}\mathbf{B}_{\mathrm{e}},\qquad(8)$$
where $\beta_{\mathrm{c}}^{\mathrm{2}}$ and $\beta_{\mathrm{e}}^{\mathrm{2}}$ are (positive) scalar weights, often determined empirically for the algorithm. These weights are often chosen to add to unity, but this need not be the case. Computing B_h explicitly in this way is not practical in an NWP system. For the ABC-DA system, the alpha control variable approach of Lorenc (2003) is instead implemented, which constructs an implied version of Eq. (8) using an alteration of the standard variational cost function and control variables (see Sect. 3.2.2). Wang et al. (2007) demonstrate the mathematical equivalence of both approaches.
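For small problems, the weighted matrix sum in Eq. (8) can be computed directly, as the following toy illustration shows (all matrices and weights are invented):

```python
import numpy as np

# Toy numerical illustration of Eq. (8): the hybrid covariance is a
# weighted sum of a climatological and an ensemble-derived covariance.

rng = np.random.default_rng(2)
n, N = 6, 4
A = rng.standard_normal((n, n))
B_c = A @ A.T + n * np.eye(n)           # climatological (full rank)

X = rng.standard_normal((n, N))
X -= X.mean(axis=1, keepdims=True)      # remove ensemble mean
B_e = (X @ X.T) / (N - 1)               # ensemble-derived (rank <= N-1)

beta_c2, beta_e2 = 0.2, 0.8             # e.g. a 20 % / 80 % weighting
B_h = beta_c2 * B_c + beta_e2 * B_e     # Eq. (8)
```

Note that even though B_e is rank deficient, B_h remains full rank whenever the climatological weight is non-zero, which is one practical motivation for the hybrid formulation.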
Given the control background, which is a short-range forecast from the previous cycle (${\mathit{x}}^{\mathrm{b}}={\mathit{x}}_{t}^{\mathrm{fc}}$), the hybrid-EnVar DA yields the hybrid control analysis ${\mathit{x}}_{t}^{\mathrm{ac}}$ (Fig. 1, blue segments), which needs the ensemble members to implicitly construct B_e (recall that B_c in Eq. 8 is derived from the U transform and B_e is derived from the ensemble). The steps to retrieve the hybrid control analysis are described in Sect. 3.2.2, but we first explain how B_e can be computed from the ensemble.
3.2.1 Computation of the ensemble-derived background error covariance matrix for the control analysis
At each cycle, one may compute a rectangular matrix ${\mathbf{X}}_{t}^{\mathrm{f}}$ whose columns contain the scaled differences between the ensemble forecasts (i.e. ${\mathit{x}}_{t}^{\mathrm{f}k}$ for the kth member forecast valid at time t) and the ensemble mean ($\overline{\boldsymbol{x}}_{t}^{\mathrm{f}}$):
$$\mathbf{X}_{t}^{\mathrm{f}}=\left[\boldsymbol{x}_{t}^{\prime\mathrm{1}},\boldsymbol{x}_{t}^{\prime\mathrm{2}},\ldots,\boldsymbol{x}_{t}^{\prime N}\right],\quad \boldsymbol{x}_{t}^{\prime k}=\frac{1}{\sqrt{N-1}}\left(\boldsymbol{x}_{t}^{\mathrm{f}k}-\overline{\boldsymbol{x}}_{t}^{\mathrm{f}}\right),\qquad(9)$$
where $\boldsymbol{x}_{t}^{\prime k}$ are the scaled error modes valid at time t. The ensemble-derived background error covariance matrix (at time t) ${\mathbf{P}}_{\mathrm{e}}^{\mathrm{f}}\left[t\right]$ is explicitly given by the outer product:
$$\mathbf{P}_{\mathrm{e}}^{\mathrm{f}}[t]=\mathbf{X}_{t}^{\mathrm{f}}\mathbf{X}_{t}^{\mathrm{f}\top}=\sum_{k=\mathrm{1}}^{N}\boldsymbol{x}_{t}^{\prime k}\boldsymbol{x}_{t}^{\prime k\top}.\qquad(10)$$
As we shall see in Sect. 3.2.2, this matrix is not computed explicitly in the DA, although parts of it are computed for visualisation purposes later in this article.
In the limit where N tends to infinity or where N is far greater than the degrees of freedom of the state n (N≫n), ${\mathbf{P}}_{\mathrm{e}}^{\mathrm{f}}\left[t\right]$ may be full rank. In practice, however, a small number of ensemble members (N≪n) will inevitably lead to sampling error and a rank-deficient matrix. Houtekamer and Mitchell (2001) proposed mitigating this problem by performing a Schur product of ${\mathbf{P}}_{\mathrm{e}}^{\mathrm{f}}$ with a correlation matrix (or localisation matrix) L:
$$\mathbf{B}_{\mathrm{e}}=\mathbf{P}_{\mathrm{e}}^{\mathrm{f}}[t]\circ\mathbf{L}.\qquad(11)$$
This seeks to address the sampling error by damping the long-range background error covariances and effectively increasing the rank of ${\mathbf{P}}_{\mathrm{e}}^{\mathrm{f}}\left[t\right]$. The spatial and multivariate aspects of the localisation matrix are further discussed in Sect. 3.2.3, including how this can be performed without constructing explicit matrices.
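The rank argument can be checked numerically. The sketch below (illustrative grid size, ensemble size, and lengthscale, all invented) builds the scaled error modes of Eq. (9), the rank-deficient matrix of Eq. (10), and the Schur-localised version of Eq. (11) using the Gaspari–Cohn function:

```python
import numpy as np

# Sketch of Eqs. (9)-(11): scaled error modes from a small ensemble give a
# rank-deficient P_e; a Schur (element-wise) product with a Gaspari-Cohn
# localisation matrix damps long-range covariances and raises the rank.

def gaspari_cohn(r):
    """Fifth-order piecewise rational Gaspari-Cohn function, r = dist/c."""
    r = np.abs(r)
    f = np.zeros_like(r)
    m1 = r <= 1.0
    m2 = (r > 1.0) & (r <= 2.0)
    f[m1] = (-0.25*r[m1]**5 + 0.5*r[m1]**4 + 0.625*r[m1]**3
             - (5.0/3.0)*r[m1]**2 + 1.0)
    f[m2] = ((1.0/12.0)*r[m2]**5 - 0.5*r[m2]**4 + 0.625*r[m2]**3
             + (5.0/3.0)*r[m2]**2 - 5.0*r[m2] + 4.0 - (2.0/3.0)/r[m2])
    return f

rng = np.random.default_rng(3)
n, N, c = 40, 5, 4.0
ens = rng.standard_normal((n, N))
X = (ens - ens.mean(axis=1, keepdims=True)) / np.sqrt(N - 1)   # Eq. (9)
P_e = X @ X.T                                                  # Eq. (10)

dist = np.abs(np.arange(n)[:, None] - np.arange(n)[None, :])
L = gaspari_cohn(dist / c)
B_e = P_e * L                                                  # Eq. (11)
```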
3.2.2 Alpha control variable transform
Following the approach of Lorenc (2003), we introduce an ensemble-related penalty in the variational cost function. This requires constructing so-called alpha fields α^{k} (part of a new set of mutually uncorrelated control variables) associated with each ensemble member k and constrained to have covariance L (the localisation matrix as used in Eq. 11). The number of elements in α^{k} must be the same as in the state vector $\boldsymbol{x}_{t}^{\prime k}$ (number of model grid points N_{g} × number of model variables N_{var}). The modified cost function is
$$J(\delta\boldsymbol{\chi},\boldsymbol{\alpha}^{\mathrm{1}},\ldots,\boldsymbol{\alpha}^{N})=\underbrace{\frac{1}{2}\left(\delta\boldsymbol{\chi}-\delta\boldsymbol{\chi}^{\mathrm{b}}\right)^{\top}\left(\delta\boldsymbol{\chi}-\delta\boldsymbol{\chi}^{\mathrm{b}}\right)}_{J_{\mathrm{b}}}+\underbrace{\frac{1}{2}\sum_{t=\mathrm{0}}^{T}\left(\mathbf{H}_{t}\delta\boldsymbol{x}-\boldsymbol{d}[t]\right)^{\top}\mathbf{R}_{t}^{-\mathrm{1}}\left(\mathbf{H}_{t}\delta\boldsymbol{x}-\boldsymbol{d}[t]\right)}_{J_{\mathrm{o}}}+\underbrace{\frac{1}{2}\sum_{k=\mathrm{1}}^{N}\boldsymbol{\alpha}^{k\top}\mathbf{L}^{-\mathrm{1}}\boldsymbol{\alpha}^{k}}_{J_{\mathrm{e}}},\qquad(12)$$
$$\delta\boldsymbol{x}=\beta_{\mathrm{c}}\mathbf{U}\delta\boldsymbol{\chi}+\beta_{\mathrm{e}}\sum_{k=\mathrm{1}}^{N}\boldsymbol{x}_{t}^{\prime k}\circ\boldsymbol{\alpha}^{k},\qquad(\mathrm{12b})$$
where J_{b}, J_{o}, and J_{e} are the background, observation, and ensemble penalties, respectively. Equation (12) is an extension of Eq. (5) and Eq. (12b) is an extension of Eq. (4), the hybrid control variable transform. Together, these equations make up the hybrid scheme.
Similar to the way that B_c can be decomposed as $\mathbf{B}_{\mathrm{c}}=\mathbf{U}\mathbf{U}^{\top}$, L can be decomposed in terms of the alpha control variable transform, U^{α}, i.e. $\mathbf{L}={\mathbf{U}}^{\mathit{\alpha}}{\mathbf{U}}^{\mathit{\alpha}\top}$. Consider an alpha control vector χ^{αk} (again associated with ensemble member k), which is related to the alpha field α^{k} via
$$\boldsymbol{\alpha}^{k}=\mathbf{U}^{\mathit{\alpha}}\boldsymbol{\chi}^{\mathit{\alpha}k}.\qquad(13)$$
Substituting Eq. (13) into Eq. (12) yields
$$J(\delta\boldsymbol{\chi},\boldsymbol{\chi}^{\mathit{\alpha}\mathrm{1}},\ldots,\boldsymbol{\chi}^{\mathit{\alpha}N})=\frac{1}{2}\left(\delta\boldsymbol{\chi}-\delta\boldsymbol{\chi}^{\mathrm{b}}\right)^{\top}\left(\delta\boldsymbol{\chi}-\delta\boldsymbol{\chi}^{\mathrm{b}}\right)+\frac{1}{2}\sum_{t=\mathrm{0}}^{T}\left(\mathbf{H}_{t}\delta\boldsymbol{x}-\boldsymbol{d}[t]\right)^{\top}\mathbf{R}_{t}^{-\mathrm{1}}\left(\mathbf{H}_{t}\delta\boldsymbol{x}-\boldsymbol{d}[t]\right)+\frac{1}{2}\sum_{k=\mathrm{1}}^{N}\boldsymbol{\chi}^{\mathit{\alpha}k\top}\boldsymbol{\chi}^{\mathit{\alpha}k},\qquad(14)$$
$$\delta\boldsymbol{x}=\beta_{\mathrm{c}}\mathbf{U}\delta\boldsymbol{\chi}+\beta_{\mathrm{e}}\sum_{k=\mathrm{1}}^{N}\boldsymbol{x}_{t}^{\prime k}\circ\left(\mathbf{U}^{\mathit{\alpha}}\boldsymbol{\chi}^{\mathit{\alpha}k}\right),\qquad(\mathrm{14b})$$
$$\boldsymbol{x}_{t}^{\mathrm{ac}}=\boldsymbol{x}^{\mathrm{r}}+\delta\boldsymbol{x}^{\mathrm{a}}.\qquad(\mathrm{14c})$$
The variational problem (Eq. 14) is minimised with respect to the collective set of control vectors comprising a part that is associated with B_{c} (δχ) and parts that are associated with B_{e} (${\mathit{\chi}}^{\mathit{\alpha}\mathrm{1}},{\mathit{\chi}}^{\mathit{\alpha}\mathrm{2}},\mathrm{\dots},{\mathit{\chi}}^{\mathit{\alpha}N}$). Together, these are combined using the hybrid transform (Eq. 14b) to give the particular δx that minimises Eq. (14), namely, δx^{a}. This gives the analysis ${\mathit{x}}_{t}^{\mathrm{ac}}$ in Eq. (14c).
The total implied covariance matrix (that is effectively seen by the DA) is formally given by
$$\mathbf{B}_{\mathrm{h}}=\beta_{\mathrm{c}}^{\mathrm{2}}\mathbf{U}\mathbf{U}^{\top}+\beta_{\mathrm{e}}^{\mathrm{2}}\left(\mathbf{P}_{\mathrm{e}}^{\mathrm{f}}[t]\circ\mathbf{L}\right),\qquad(15)$$
which is a linear combination of the implied B_c and B_e (without explicitly constructing either), and is element-wise equivalent to the explicit hybrid covariance in Eq. (8).
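This equivalence can be verified on a toy single-time (3D) problem. In the sketch below (all sizes and matrices are invented, and an exponential correlation stands in for the localisation matrix), the generalised transform G maps the joint control vector to δx, and the resulting analysis increment matches the one obtained directly from the explicit hybrid covariance:

```python
import numpy as np

# Toy single-time check that the alpha control variable formulation implies
# the hybrid covariance of Eq. (15). G stacks the blocks of the hybrid
# transform; solving the preconditioned problem reproduces the analysis
# increment from the explicit B_h. All sizes/matrices are invented.

rng = np.random.default_rng(4)
n, N, p = 8, 3, 4                       # state size, ensemble size, obs count

A = rng.standard_normal((n, n))
B_c = A @ A.T + n * np.eye(n)
U = np.linalg.cholesky(B_c)             # B_c = U U^T

Xp = rng.standard_normal((n, N))
Xp -= Xp.mean(axis=1, keepdims=True)
Xp /= np.sqrt(N - 1)                    # scaled error modes x'_k (columns)

dist = np.abs(np.arange(n)[:, None] - np.arange(n)[None, :])
L_loc = np.exp(-dist / 3.0)             # stand-in spatial localisation
U_a = np.linalg.cholesky(L_loc)         # L = U^a U^a^T

beta_c, beta_e = np.sqrt(0.2), np.sqrt(0.8)

# G = [beta_c U, beta_e diag(x'_1) U^a, ..., beta_e diag(x'_N) U^a]
blocks = [beta_c * U] + [beta_e * np.diag(Xp[:, k]) @ U_a for k in range(N)]
G = np.hstack(blocks)

H = np.eye(n)[:p]                       # observe the first p grid points
R = 0.1 * np.eye(p)
d = rng.standard_normal(p)              # innovations

# minimiser of (1/2)|v|^2 + (1/2)(HGv - d)^T R^-1 (HGv - d)
Ri = np.linalg.inv(R)
v = np.linalg.solve(np.eye(G.shape[1]) + G.T @ H.T @ Ri @ H @ G,
                    G.T @ H.T @ Ri @ d)
dx_alpha = G @ v

# same increment from the explicit hybrid covariance, Eq. (15)
B_h = beta_c**2 * B_c + beta_e**2 * ((Xp @ Xp.T) * L_loc)
dx_direct = B_h @ H.T @ np.linalg.solve(H @ B_h @ H.T + R, d)

assert np.allclose(G @ G.T, B_h)        # implied covariance matches Eq. (15)
assert np.allclose(dx_alpha, dx_direct)
```

The first assertion is the matrix identity behind Eq. (15): each diag(x'_k) L diag(x'_k) term equals (x'_k x'_k^T) ∘ L, so their sum is P_e ∘ L.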
Next, we reproduce the minimisation algorithm steps from Sect. 3.5 of Bannister (2020) and highlight (underlined) the modifications required when the hybrid-EnVar scheme is enabled.
 1.
Set the reference state at t=0 to the background state x^{r}=x^{b}. Decide values for N, β_{c}, and β_{e};
 2.
Do the outer loop;
 a.
For the first outer loop, δχ^{b}=0; otherwise, compute $\delta\boldsymbol{\chi}^{\mathrm{b}}=\mathbf{U}^{-\mathrm{1}}\left(\boldsymbol{x}^{\mathrm{b}}-\boldsymbol{x}^{\mathrm{r}}\right)$,
 b.
Compute x^{r}[t] over the time window, $\mathrm{1}\le t\le T$, with the nonlinear model ${\mathit{x}}^{\mathrm{r}}\left[t\right]={\mathcal{M}}_{t-\mathrm{1}\to t}\left({\mathit{x}}^{\mathrm{r}}\left[t-\mathrm{1}\right]\right)$,
 c.
Compute the reference state's observations: y^{mr}[t]=ℋ_{t}(x^{r}[t]),
 d.
Compute the differences: $\mathit{d}\left[t\right]=\mathit{y}\left[t\right]-{\mathit{y}}^{\mathrm{mr}}\left[t\right]$,
 e.
Set δχ=0, δx=0, and χ^{αk}=0, $\mathrm{1}\le k\le N$,
 f.
Do the inner loop,
 i.
Integrate the perturbation trajectory over the time window, $\mathrm{1}\le t\le T$, with the linear forecast model: $\mathit{\delta}\mathit{x}\left[t\right]={\mathbf{M}}_{t-\mathrm{1}\to t}\mathit{\delta}\mathit{x}\left[t-\mathrm{1}\right]$
 ii.
Compute the perturbations to the model observations: δy^{m}[t]=H_{t}δx[t]
 iii.
Compute the Δ[t] vectors, defined as $\mathbf{\Delta}\left[t\right]={\mathbf{H}}_{t}^{\top}{\mathbf{R}}_{t}^{-\mathrm{1}}\left(\mathit{\delta}{\mathit{y}}^{\mathrm{m}}\left[t\right]-\mathit{d}\left[t\right]\right)$
 iv.
Set the adjoint state $\mathit{\lambda}[T+\mathrm{1}]=\mathrm{0}$
 v.
Integrate the following adjoint equation backwards in time $T\ge t\ge \mathrm{0}$: $\mathit{\lambda}\left[t\right]=\mathbf{\Delta}\left[t\right]+{\mathbf{M}}_{t\to t+\mathrm{1}}^{\top}\mathit{\lambda}[t+\mathrm{1}]$
 vi.
Compute the gradients as follows: ${\mathrm{\nabla}}_{\mathit{\delta}\mathit{\chi}}J=\mathit{\delta}\mathit{\chi}-\mathit{\delta}{\mathit{\chi}}^{\mathrm{b}}+{\mathit{\beta}}_{\mathrm{c}}{\mathbf{U}}^{\top}\mathit{\lambda}\left[\mathrm{0}\right]$ and ${\mathrm{\nabla}}_{{\mathit{\chi}}^{\mathit{\alpha}k}}J={\mathit{\chi}}^{\mathit{\alpha}k}+{\mathit{\beta}}_{\mathrm{e}}{\mathbf{U}}^{\mathit{\alpha}\top}\left({{\mathit{x}}^{\prime}}_{t}^{k}\circ \mathit{\lambda}\left[\mathrm{0}\right]\right)$; these are the gradients with respect to each control vector segment, $\mathrm{1}\le k\le N$
 vii.
Use the conjugate gradient algorithm to adjust δχ and χ^{αk} to reduce the value of J. Note that the cost function is $J={J}_{\mathrm{b}}+{J}_{\mathrm{o}}+{J}_{\mathrm{e}}$ (Eq. 14)
 viii.
Compute the new increment in model space using the control variable transform and alpha control variable transform: $\mathit{\delta}\mathit{x}={\mathit{\beta}}_{\mathrm{c}}\mathbf{U}\mathit{\delta}\mathit{\chi}+{\mathit{\beta}}_{\mathrm{e}}{\sum}_{k=\mathrm{1}}^{N}{{\mathit{x}}^{\prime}}_{t}^{k}\circ \left({\mathbf{U}}^{\mathit{\alpha}}{\mathit{\chi}}^{\mathit{\alpha}k}\right)$ (Eq. 14b)
 ix.
Go to step 2fi until the inner-loop convergence criterion is satisfied
 g.
Update the reference state: ${\mathit{x}}^{\mathrm{r}}\to {\mathit{x}}^{\mathrm{r}}+\mathit{\delta}\mathit{x}$,
 h.
Go to step 2a until the outer-loop convergence criterion is satisfied. At convergence, set the hybrid control analysis ${\mathit{x}}_{t}^{\mathrm{ac}}={\mathit{x}}^{\mathrm{r}}$,
 3.
Run a nonlinear forecast from ${\mathit{x}}_{t}^{\mathrm{ac}}$ for the background of the next cycle and longer forecasts if required.
3.2.3 Inter-variable and spatial localisation
Localisation of the ensemble-derived background error covariance matrix, as in Eq. (11), is required to mitigate sampling error, which can dominate the computed covariance between distant points (Hamill et al., 2001). Localisation opens up a range of options and raises some pertinent questions: should we localise only in space (and should these spatial localisation matrices depend on the variable), or should we additionally include localisation between different model variables? This depends on the design of χ^{αk} and U^{α} in Eq. (13), and the implied L. If χ^{αk} only depends on grid point location (i.e. it need only be of length N_{g}), then U^{α} must be rectangular (N_{g}N_{var}×N_{g}) so that α^{k} has the length N_{g}N_{var} required for the Schur product in Eq. (12b). This approach was adopted by Wang et al. (2008a), except that χ^{αk} was only dependent on horizontal grid point locations. By design, U^{α} functions to “use the same χ^{αk} for each model variable and model level” (i.e. repeated rows in U^{α}) so the Schur product in Eq. (14b) can be computed. This point was not highlighted in the description of Eq. (1) of Wang et al. (2008a).
If χ^{αk} only has N_{g} elements, the implied $\mathbf{L}={\mathbf{U}}^{\mathit{\alpha}}{\mathbf{U}}^{\mathit{\alpha}\top}$ can only involve spatial localisation, so full inter-variable covariances (as found from the raw ensemble) are retained (see Appendix B). Alternatively, if χ^{αk} is full length (i.e. of length N_{g}N_{var}) with independent fields for each model variable, U^{α} is square (N_{g}N_{var}×N_{g}N_{var}), so it is possible to use this transform to damp the ensemble-derived covariances between different variables and spatial locations. Nonetheless, there is flexibility to still retain the full inter-variable covariances in L depending on the design of U^{α}. For the record, a proof of the equivalence between this approach (χ^{αk} of length N_{g}N_{var} with full inter-variable covariances retained) and the approach where χ^{αk} is of length N_{g} is included in Appendix B. More complex designs of U^{α}, which allow the retention of inter-variable covariances only between certain model variables or the use of different spatial localisation lengthscales for different model variables, are also possible. In practice, these may be useful in convective-scale data assimilation, particularly when hydrometeor variables are involved (Xuguang Wang, personal communication, 2022).
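The spatial-only case can be illustrated numerically: when U^α stacks the same spatial square root for every model variable, every variable–variable block of the implied L equals the spatial localisation matrix, so no inter-variable damping occurs. Sizes and the localisation function below are illustrative:

```python
import numpy as np

# Sketch of the spatial-only localisation case: chi^alpha has only N_g
# (spatial) elements and U^alpha repeats the same spatial square root for
# every model variable, so L = U^alpha U^alpha^T localises in space only
# and inter-variable covariances from the raw ensemble are retained.

Ng, Nvar = 10, 3
dist = np.abs(np.arange(Ng)[:, None] - np.arange(Ng)[None, :])
L_spatial = np.exp(-dist / 2.0)              # any valid spatial localisation
U_spatial = np.linalg.cholesky(L_spatial)    # L_spatial = U U^T

# rectangular U^alpha (Ng*Nvar x Ng): the same rows repeated per variable
U_alpha = np.vstack([U_spatial] * Nvar)
L_implied = U_alpha @ U_alpha.T              # (Ng*Nvar x Ng*Nvar)

# every variable-variable block equals L_spatial: damping in space only
for i in range(Nvar):
    for j in range(Nvar):
        blk = L_implied[i*Ng:(i+1)*Ng, j*Ng:(j+1)*Ng]
        assert np.allclose(blk, L_spatial)
```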
In the ABC-DA system, U^{α} is further decomposed into horizontal (${\mathbf{U}}_{\mathrm{horiz}}^{\mathit{\alpha}}$) and vertical (${\mathbf{U}}_{\mathrm{vert}}^{\mathit{\alpha}}$) localisation transforms, similar to the decomposition of U in Bannister (2020). The series of transforms is given by
$$\mathbf{U}^{\mathit{\alpha}}={\mathbf{U}}_{\mathrm{vert}}^{\mathit{\alpha}}{\mathbf{U}}_{\mathrm{horiz}}^{\mathit{\alpha}},\qquad(16)$$
treating the vertical and horizontal localisation separately.
Initial tests constructed ${\mathbf{U}}_{\mathrm{horiz}}^{\mathit{\alpha}}$ using a Fourier decomposition (as is done in the standard horizontal transform of U – see Bannister, 2020), but this yielded undesirable small negative correlations at longer localisation distances (presumably related to the Gibbs phenomenon; not shown). Thus, a different approach was adopted: ${\mathbf{U}}_{\mathrm{horiz}}^{\mathit{\alpha}}$ is populated using the eigendecomposition of a target horizontal localisation matrix ${\mathbf{L}}_{\mathrm{horiz}}={\mathbf{U}}_{\mathrm{horiz}}^{\mathit{\alpha}}{\mathbf{U}}_{\mathrm{horiz}}^{\mathit{\alpha}\top}$ (where ${\mathbf{U}}_{\mathrm{horiz}}^{\mathit{\alpha}}={\mathbf{F}}_{\mathrm{horiz}}^{\mathit{\alpha}}({\mathbf{\Lambda}}_{\mathrm{horiz}}^{\mathit{\alpha}}{)}^{\mathrm{1}/\mathrm{2}}$, and ${\mathbf{F}}_{\mathrm{horiz}}^{\mathit{\alpha}}$ and ${\mathbf{\Lambda}}_{\mathrm{horiz}}^{\mathit{\alpha}}$ are the eigenvectors and eigenvalues, respectively, of the imposed horizontal localisation matrix). We start by constructing L_{horiz} using the fifth-order piecewise Gaspari–Cohn function with a horizontal localisation length scale h^{α} (equivalent to c with $a=\mathrm{1}/\mathrm{2}$ in Eq. 4.10 of Gaspari and Cohn, 1999). This function is approximately Gaussian over a compact support. As the ABC model uses periodic boundary conditions, L_{horiz} must be designed to be circulant and to account for the "overlapping tails" of the Gaspari–Cohn function when h^{α} is larger than half the domain size. In the "overlapping tails" regime, the correlation function does not satisfy the "space-limited" requirement described in Gaspari and Cohn (1999). Consequently, the resulting L_{horiz} is found not to be positive semidefinite when h^{α} is large, so the horizontal eigenvectors associated with negative eigenvalues need to be truncated.
Offline testing in idealised setups and within the ABC-DA system showed that, with the remaining eigenvectors, ${\mathbf{U}}_{\mathrm{horiz}}^{\mathit{\alpha}}{\mathbf{U}}_{\mathrm{horiz}}^{\mathit{\alpha}\top}$ is a good approximation of L_{horiz}. It is also possible to scale the remaining eigenvalues to restore the initial total variances for a better approximation. Figure 2 illustrates the implied correlation function (h^{α}=250 km) with respect to longitudinal grid point 50, retrieved using the above steps for an ABC-DA system with 364 longitudinal grid points and a 1.5 km horizontal grid length. The original Gaspari–Cohn function with "overlapping tails" is compared with the implied correlation function reconstructed from the eigenvectors and eigenvalues of the eigendecomposition of L_{horiz} (Fig. 2a), the implied correlation function when the eigenvectors associated with negative eigenvalues are truncated (Fig. 2b), and the implied correlation function after further restoration of the initial total variances by scaling (Fig. 2c). Note that in this example, the threshold above which negative eigenvalues appear is h^{α}≈138.33 km, found empirically. In the current version of the ABC-DA system, the scaling to restore the initial total variances is not yet implemented.
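The construction described above can be sketched as follows. This is a minimal, hypothetical NumPy implementation (the function names and dimensions are ours, not those of the ABC-DA code): build a circulant L_{horiz} from the Gaspari–Cohn function using periodic ("wrap-around") distances, eigendecompose it, truncate modes with negative eigenvalues, and return ${\mathbf{U}}_{\mathrm{horiz}}^{\mathit{\alpha}}={\mathbf{F}}_{\mathrm{horiz}}^{\mathit{\alpha}}({\mathbf{\Lambda}}_{\mathrm{horiz}}^{\mathit{\alpha}}{)}^{\mathrm{1}/\mathrm{2}}$:

```python
import numpy as np

def gaspari_cohn(r):
    """Fifth-order piecewise rational function of Gaspari and Cohn (1999),
    Eq. (4.10) with a = 1/2; r = separation / lengthscale, support 2*lengthscale."""
    r = np.abs(r)
    c = np.zeros_like(r)
    m1 = r <= 1.0
    m2 = (r > 1.0) & (r <= 2.0)
    c[m1] = (-0.25 * r[m1]**5 + 0.5 * r[m1]**4 + 0.625 * r[m1]**3
             - (5.0 / 3.0) * r[m1]**2 + 1.0)
    c[m2] = (r[m2]**5 / 12.0 - 0.5 * r[m2]**4 + 0.625 * r[m2]**3
             + (5.0 / 3.0) * r[m2]**2 - 5.0 * r[m2] + 4.0 - (2.0 / 3.0) / r[m2])
    return c

def u_horiz(n, dx, h_alpha):
    """Build a circulant L_horiz on a periodic domain, eigendecompose it,
    truncate modes with negative eigenvalues, and return U = F Lambda^{1/2}."""
    i = np.arange(n)
    dist = np.minimum(i, n - i) * dx            # periodic (wrap-around) distance
    row0 = gaspari_cohn(dist / h_alpha)
    L = np.array([np.roll(row0, j) for j in range(n)])   # circulant, symmetric
    evals, evecs = np.linalg.eigh(L)
    keep = evals > 0.0                          # drop non-positive eigenvalues
    return evecs[:, keep] * np.sqrt(evals[keep])
```

When h^{α} is well below the threshold at which overlapping tails spoil positive semidefiniteness, no substantive truncation occurs and ${\mathbf{U}}_{\mathrm{horiz}}^{\mathit{\alpha}}{\mathbf{U}}_{\mathrm{horiz}}^{\mathit{\alpha}\top}$ reproduces L_{horiz} to numerical precision.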
To populate ${\mathbf{U}}_{\mathrm{vert}}^{\mathit{\alpha}}$, a similar approach is adopted; a Gaspari–Cohn function is used with a vertical localisation length scale v^{α}. Note that the target vertical localisation matrix L_{vert} is a correlation matrix and so is positive semidefinite; truncation of eigenvectors is therefore not required. Since ${\mathbf{U}}_{\mathrm{horiz}}^{\mathit{\alpha}}$ and ${\mathbf{U}}_{\mathrm{vert}}^{\mathit{\alpha}}$ are separate, it is possible to have a different L_{vert} for each horizontal eigenvector. However, for simplicity, the default setup in the ABC-DA system uses the same L_{vert} for each horizontal eigenvector. As for L_{horiz}, the vertical eigenvectors are retrieved through the eigendecomposition of L_{vert} and used to populate ${\mathbf{U}}_{\mathrm{vert}}^{\mathit{\alpha}}$ such that ${\mathbf{L}}_{\mathrm{vert}}={\mathbf{U}}_{\mathrm{vert}}^{\mathit{\alpha}}{\mathbf{U}}_{\mathrm{vert}}^{\mathit{\alpha}\top}$ (where ${\mathbf{U}}_{\mathrm{vert}}^{\mathit{\alpha}}={\mathbf{F}}_{\mathrm{vert}}^{\mathit{\alpha}}({\mathbf{\Lambda}}_{\mathrm{vert}}^{\mathit{\alpha}}{)}^{\mathrm{1}/\mathrm{2}}$, and ${\mathbf{F}}_{\mathrm{vert}}^{\mathit{\alpha}}$ and ${\mathbf{\Lambda}}_{\mathrm{vert}}^{\mathit{\alpha}}$ are the eigenvectors and eigenvalues, respectively, of the imposed vertical localisation matrix).
3.3 Generation of ABC analysis ensemble
After the initial ensemble has been generated (Sect. 3.1) using the method of Magnusson et al. (2009) and the initial hybrid control analysis has been retrieved (Sect. 3.2), the next step is to generate analysis ensembles (Fig. 1, green segments). The ensemble then proceeds via the forecast model (Fig. 1, upper right brown segments) as a forecast ensemble, which is used in the next hybrid DA step. Various methods have been used in previous studies, such as singular vectors (Buizza et al., 1993), bred vectors (Toth and Kalnay, 1993, 1997), perturbed observations (Houtekamer and Derome, 1995), the ensemble Kalman filter (EnKF; Evensen, 1994), the ensemble transform Kalman filter (Bishop et al., 2001), and other square root filters. Here, we focus mainly on the EBV method. This method carries useful information about the nature of dynamical error growth about the analysis state at each cycle but is uninformed about the observation network.
The ensemble, which is run in parallel to the hybrid DA, is an important component of hybrid-EnVar since it provides the means to compute ${\mathbf{X}}_{t}^{\mathrm{f}}$ in Eq. (9). The success of the scheme depends on the extent to which the ensemble forecasts can appropriately represent the background error statistics for the ABC-DA system, so proper design of the ensemble system is critical. We construct the analysis ensemble around the hybrid control analysis (i.e. adding ensemble perturbations to the hybrid control analysis; see below) so the ensemble is "DA-centred".
3.3.1 Ensemble bred vectors
In this approach, we consider a variant of the bred vectors method – the EBV method (Balci et al., 2012). The basic bred vectors method (Toth and Kalnay, 1993) is generally simple to implement and computationally cheap. The idea relies on breeding perturbations by running the nonlinear forecast model for a fixed period for pairs of forecast ensemble members, taking the difference between the two forecasts, and then scaling the difference to have a specified and fixed amplitude. This process "breeds" the fastest-growing error modes. It is repeated to retrieve the required number of error modes, and the resulting perturbations are respectively added to the hybrid control analysis to generate an analysis ensemble. The intention is that these perturbations should adequately sample the space of possible analysis errors.
The main difference between the bred vectors and EBV methods lies in the scaling of the perturbations at each cycle. In the bred vectors method, the perturbations are scaled to maintain a fixed amplitude across cycles for each ensemble member. The scaling is independent for each ensemble member and there is, therefore, no mechanism to compare the dynamics with perturbations of the other members. The EBV method (Balci et al., 2012), on the other hand, involves a global scaling factor which depends on the amplitude of the largest perturbation and offers better insights into the relative behaviour of nearby ensemble trajectories. Perturbations that have an amplitude smaller than the largest perturbation of the ensemble then play a smaller role after scaling; in other words, ensemble trajectories that are clustered around the control member trajectory are less important for identifying dominant directions of error growth. An in-depth comparison of the bred vectors and EBV methods is provided in Balci et al. (2012).
To generate the analysis ensemble, a target maximum amplitude ϵ_{0} is required, which raises the question of what to choose for ϵ_{0}. Here, we use ϵ_{0}=ϵ^{rf} (the mean total energy norm of the initial ensemble of states), although other choices are possible, such as running a series of experiments and finding the average analysis error to estimate the analysis uncertainty. This scaling factor is fixed across perturbations, so at each cycle the perturbations are scaled by the same ratio ${r}_{t}^{\mathrm{ebv}}$, defined as follows:
where ${\mathit{x}}_{t}^{\mathrm{f}k}$ and ${\mathit{x}}_{t}^{\mathrm{fc}}$ are the kth ensemble and control forecasts from the previous cycle, respectively, and $\mathit{\delta}{\mathit{x}}_{t}^{\mathrm{f}k}$ is the kth scaled ensemble perturbation at time t. The kth member of the analysis ensemble ${\mathit{x}}_{t}^{\mathrm{a}k}$ is centred on the hybrid control analysis ${\mathit{x}}_{t}^{\mathrm{ac}}$ produced by the DA step (Sect. 3.2). The $\frac{\mathrm{1}}{\sqrt{\mathrm{2}}}$ factor is not strictly required because we are computing differences between individual ensemble member forecasts and the same control member forecast, but we have included it as a deflation factor to accompany our choice of ϵ_{0}. It is worth noting that the EBV method is not formally consistent with Kalman filter theory but will not suffer from filter collapse as long as ϵ_{0} is well tuned.
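A minimal sketch of one EBV rescaling step is given below. This is hypothetical code (not from the ABC-DA system), and a plain L2 norm stands in for the total energy norm used in the paper:

```python
import numpy as np

def ebv_step(x_f, x_fc, x_ac, eps0):
    """One EBV rescaling step (a sketch in the spirit of Balci et al., 2012).
    x_f: (N, n) ensemble forecasts; x_fc: (n,) control forecast;
    x_ac: (n,) hybrid control analysis; eps0: target maximum amplitude.
    A plain L2 norm stands in for the total energy norm of the paper."""
    dx = (x_f - x_fc) / np.sqrt(2.0)        # deflated raw perturbations
    amps = np.linalg.norm(dx, axis=1)       # amplitude of each perturbation
    r = eps0 / amps.max()                   # single global scaling ratio
    return x_ac + r * dx                    # analysis ensemble, DA-centred
```

By construction, the largest perturbation attains amplitude ϵ_{0} exactly after scaling, while smaller perturbations are scaled by the same global ratio and therefore retain their relative amplitudes, which is what distinguishes EBV from per-member bred vector rescaling.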
It is not uncommon to use such a setup (i.e. separate hybrid deterministic and ensemble systems for, respectively, the first and second moments of the posterior). While the hybrid control analysis involves both the ensemble and climatological contributions to the background error covariance matrix (Eq. 15), the computation of the analysis perturbations involves only the forecast ensemble and neglects the climatological contributions. While this is a formal discrepancy, we assume that this setup is adequate from a practical perspective.
For this study, the Unified Model output is retrieved from a tropical convective-scale NWP system over the Maritime Continent (SINGV-DA; Heng et al., 2020). SINGV-DA operates on a 1.5 km core horizontal grid with a model top height of 38.5 km. Longitude–height slices of the u and v fields around 2^{∘} N are extracted from the SINGV-DA output by placing these fields onto the 1.5 km ABC model grid for the lowest 60 levels (up to around 18 km height), resulting in a 364×60 ABC model grid. These initial u and v fields are then modified to make them compatible with the ABC model's periodic boundary conditions, and the remaining fields, w, ${\stackrel{\mathrm{\u0303}}{\mathit{\rho}}}^{\prime}$, and b^{′}, are derived following the procedure in Sect. 4.1 of Bannister (2020). For the ensemble system, 30 initial ensemble members are generated following Sect. 3.1, excluding the control state reconfigured from the longitude–height slice. This is a typical ensemble size used in operational NWP systems.
To represent a tropical setting of the ABC model, a value of $f={\mathrm{10}}^{-\mathrm{5}}$ s^{−1} is used. This corresponds approximately to the value of f at a latitude of 4^{∘} N in an NWP system. The other model parameters are set as follows: A=0.02 s^{−1}, B=0.01, and C=10^{4} m^{2} s^{−2}. A series of hourly-cycling multi-cycle DA observation system simulation experiments is conducted to demonstrate the incorporation of ensemble-derived background error covariances in hybrid-EnVar DA. The hybrid extension of 3DVar-FGAT may be termed hybrid-En3DVar-FGAT.
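As a quick check of the quoted value of f, the standard Coriolis formula f = 2Ω sin φ gives approximately 10^{−5} s^{−1} at 4^{∘} N (a small illustrative sketch; the constant and function name are ours):

```python
import numpy as np

OMEGA = 7.2921e-5          # Earth's rotation rate (rad s^-1)

def coriolis(lat_deg):
    """Coriolis parameter f = 2 * Omega * sin(latitude)."""
    return 2.0 * OMEGA * np.sin(np.radians(lat_deg))

# coriolis(4.0) is approximately 1.0e-5 s^-1, the value used for the
# tropical setting of the ABC model
```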
We run four experiments with the following configurations:
 a.
100 % B_{c} (i.e. no flow-dependency, equivalent to 3DVar-FGAT),
 b.
50 % B_{c}, 50 % B_{e}; hybrid-En3DVar-FGAT (i.e. flow-dependency with an equal contribution from B_{c} and B_{e}),
 c.
20 % B_{c}, 80 % B_{e}; hybrid-En3DVar-FGAT (i.e. flow-dependency with most of the contribution from B_{e}),
 d.
100 % B_{e}; pure En3DVar-FGAT (i.e. no contribution from B_{c}).
These experiments are referred to as EBV(a) to EBV(d) accordingly. Note that configuration (a) does not use ensemble information but the experiment is named as EBV(a) for ease of reference.
4.1 Implied background error covariances
To show the workings of Eq. (8) (or equivalently Eq. 15) and the localisation, we compute a selection of implied background error covariances with various weights assigned to B_{c} and B_{e} (as in configurations (a) to (d) above). This is similar to performing single-observation experiments and retrieving the analysis increments. For B_{e}, spatial localisation length scales of h^{α}=100 km and v^{α}=5 km are set, with no inter-variable localisation. The implied background error covariances are valid for the time of the first cycle after a cold start (i.e. at T+0) and use 1 h forecast ensemble perturbations. For B_{c}, the same ensemble is used as training data to calibrate U.
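The weighted combination can be sketched in a few lines (hypothetical code; `Bc`, `Pe`, and `L` are stand-ins for the climatological covariance, the raw ensemble-derived covariance, and the localisation matrix):

```python
import numpy as np

def implied_hybrid_cov(Bc, Pe, L, beta_c2, beta_e2):
    """Implied hybrid background error covariance in the spirit of Eq. (8):
    beta_c^2 * B_c + beta_e^2 * (P_e o L), where 'o' is the Schur product."""
    return beta_c2 * Bc + beta_e2 * (Pe * L)
```

A column of the returned matrix plays the role of a single-observation analysis increment pattern (up to scaling by the gain), which is how the implied covariances shown in Fig. 3 can be interpreted.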
Figure 3 shows the implied background error covariances of ${\stackrel{\mathrm{\u0303}}{\mathit{\rho}}}^{\prime}$, v, and b^{′} with respect to a fixed ${\stackrel{\mathrm{\u0303}}{\mathit{\rho}}}^{\prime}$ point in the middle of the domain for the four configurations (four rows). Configuration (a) (top row) shows the implied background error covariances that are modelled purely by U, and configuration (d) (bottom row) shows the purely ensemble-derived covariances with spatial localisation (implied by U^{α}). Configurations (b) and (c) are linear combinations with different weights, as given by Eq. (8).
For the ${\stackrel{\mathrm{\u0303}}{\mathit{\rho}}}^{\prime}$–${\stackrel{\mathrm{\u0303}}{\mathit{\rho}}}^{\prime}$ covariances in configuration (a) for pure 3DVar-FGAT, the central region of autocorrelation has horizontal and vertical length scales of 100 and 2 km, respectively, and is surrounded by oscillations possibly reflecting the dominant gravity wave propagation. The vertical length scale here is smaller than that found in the mid-latitude study of Bannister (2020), and such a contrast between low and higher latitudes is seen in other studies, e.g. Ingleby (2001). This can be compared with configuration (d) for pure (and localised) En3DVar-FGAT, which shows a narrower but taller region of autocorrelation. Most of the oscillations lie beyond the localisation region so are not visible, apart from small negative values to the west of the autocorrelation.
The ${\stackrel{\mathrm{\u0303}}{\mathit{\rho}}}^{\prime}$–v covariances in configuration (a) follow from the use of geostrophic balance, which is manifested in U (Eq. 2a in Bannister, 2020). The v pattern is consistent with an anticyclonic field around the source point (i.e. positive and negative v covariances west and east of the positive ${\stackrel{\mathrm{\u0303}}{\mathit{\rho}}}^{\prime}$ source point, respectively). Since f is small, these covariances are also small. Contrasting this with configuration (d), it appears that the ensemble-derived covariances are more substantial and are of opposite sign, suggesting that there exists some (other) mass–wind relationship manifested in B_{e} (e.g. related to equatorial gravity wave processes not represented in U).
The ${\stackrel{\mathrm{\u0303}}{\mathit{\rho}}}^{\prime}$–b^{′} covariances in configuration (a) follow from the use of hydrostatic balance, again manifested in U (Eq. 3 in Bannister, 2020). Hydrostatic balance relates b^{′} increments with the vertical gradient of ${\stackrel{\mathrm{\u0303}}{\mathit{\rho}}}^{\prime}$ increments, and the top-left and top-right panels are confirmed to be consistent in this way. In configuration (d), the ${\stackrel{\mathrm{\u0303}}{\mathit{\rho}}}^{\prime}$–b^{′} covariances have similar vertical patterns within the localisation region, although they are weaker. As noted in Bannister (2020), the B_{c} covariances tend to be larger than their B_{e} counterparts even though both use the same training data for calibrating the transforms, but the broad structures are similar.
Note that the implied background error covariances between ${\stackrel{\mathrm{\u0303}}{\mathit{\rho}}}^{\prime}$ and u and between ${\stackrel{\mathrm{\u0303}}{\mathit{\rho}}}^{\prime}$ and w are each zero in configuration (a) by definition of U (Bannister, 2020). By contrast, in configurations (b), (c), and (d), the implied background error covariances between these variables arise directly from the ensemble, subject to the localisation implied by U^{α} (not shown).
Even though the multivariate background error relationships relevant to the tropics are likely to be different from those at mid-latitudes, the same balance conditions designed for mid-latitudes are often used in U for tropical settings (as they are here). By exploring ensemble-derived multivariate background error relationships, we may be able to identify alternative balances inherent in the dynamical fields. This will be explored in a separate study.
4.2 Details of observation system simulation experiments
In all experiments, 200 observations of each variable (u, v, w, ${\stackrel{\mathrm{\u0303}}{\mathit{\rho}}}^{\prime}$, and b^{′}), equally spaced throughout the domain, are assimilated at every hourly cycle. The observations are sampled from a "truth" run with added observation noise following a Gaussian distribution. The observation error standard deviations are chosen to be approximately 10 % of each variable's root mean square value as seen in the "truth" run. These are 0.2, 0.2, and 0.01 m s^{−1}, $\mathrm{1.5}\times {\mathrm{10}}^{-\mathrm{4}}$, and $\mathrm{1.5}\times {\mathrm{10}}^{-\mathrm{3}}$ m s^{−2} for u, v, w, ${\stackrel{\mathrm{\u0303}}{\mathit{\rho}}}^{\prime}$, and b^{′}, respectively. All generated observations are valid at the background/analysis time of each cycle, so there is no difference in the analysis between 3DVar and 3DVar-FGAT (and indeed 4DVar, if it were implemented). The number of observations is ≈1 % of the degrees of freedom of the state (both spatial and multivariate) to mimic the sparseness of observations in the tropical setting.
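The observation generation can be sketched as follows (a hypothetical simplification for a single flattened field; the actual system observes all five variables on the 364×60 grid):

```python
import numpy as np

rng = np.random.default_rng(42)        # seed chosen for reproducibility

def make_obs(truth, n_obs, sigma_o):
    """Sample equally spaced observations from a 'truth' field and add
    Gaussian noise with standard deviation sigma_o (an OSSE-style sketch)."""
    idx = np.linspace(0, truth.size, n_obs, endpoint=False).astype(int)
    return idx, truth[idx] + rng.normal(0.0, sigma_o, size=n_obs)
```

For a flattened 364×60 field, 200 equally spaced observations sample roughly every 109th grid point, i.e. about 1 % of the points, consistent with the sparseness described above.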
The initial background of the deterministic system is determined from the initial "truth" plus a small background noise perturbation δx=Uδχ, where δχ is drawn randomly from 𝒩(0,I). In order to reduce the effect of random noise on the experiments, the ABC-DA system is first spun up for fifty 1 h cycles, with the expectation that the DA-centred ensemble system and the deterministic system will have lost memory of the particular way that the system was initialised from a cold start. The information from the 50th cycle of spin-up is then used in the first cycle of all the actual experiments.
During the spin-up configuration testing, we noticed that the inclusion of vertical localisation in B_{e} was particularly detrimental to the evolution of the w field. Investigation revealed that this was due to the introduction of hydrostatic imbalance in the analysis increments (not shown). A similar well-known issue, to do with horizontal localisation introducing geostrophic imbalance, was discussed in Sect. 3c of Lorenc (2003). We include more comments on the hydrostatic imbalance issue in Appendix C. For this reason, vertical localisation was excluded from B_{e} in the spin-up process and in all experiments. After inspecting the other fields during spin-up configuration testing, we found that in most configurations the hybrid control analysis gradually converged around the "truth" run as the observations were assimilated over the 50 spin-up cycles, as logically expected. The evolution of the fields was reasonably in line with the "truth" run particularly for the EBV(d) configuration, so this configuration was chosen and run for 50 spin-up cycles, referred to as the spin-up run.
To ensure a fair comparison in the results, the spin-up run provides the same starting background (50th cycle forecast), empirically tuned EBV ensemble, and ensemble-derived error modes (if required) for the first cycle of all the actual experiments. Each experiment is run for 50 cycles, and the experiments differ only in their DA algorithm configurations after spin-up. Where B_{e} is required, we use h^{α}=20 km for the horizontal localisation while not performing any inter-variable localisation (see Appendix B). The horizontal localisation length scale was determined by comparison with the horizontal distance between adjacent observations (≈23 km). For the minimisation, a total of 75 inner loops within a single outer loop is used. This was determined after testing to ensure that sufficient convergence was attained for all cycles in the experiments. This is demonstrated in Fig. 4, which shows the minimisation of the cost function for the first cycle of the EBV experiments. For this cycle, the cost function was minimised fastest in EBV(d) and slowest in EBV(a). The analysis misfit to the assimilated observations was also the largest in EBV(d), with J_{o}≈1500 after minimisation. However, it is important to note that this metric is not a particularly useful indicator of analysis quality but rather of how strongly each scheme draws the analysis towards the observations (Wang et al., 2008b). We would expect the minimum of the cost function to approximate half the number of observations (i.e. ${J}_{\mathrm{min}}\approx \frac{{N}_{\mathrm{obs}}}{\mathrm{2}}=\mathrm{500}$, the expected value of a chi-squared PDF), so EBV(c) neatly matches our expectations.
In addition to the experiments, a free background run, hereafter referred to as FreeBG, is performed starting from the same 50th cycle forecast of the spinup run. This is used as the control run to assess if the DA in the experiments is adding value by bringing the deterministic run trajectories closer to the “truth” or if the trajectories are simply following the natural evolution of the system and neglecting the observational information.
4.3 Sensitivity to weighting of B_{c} and B_{e}
Typically, for the hybrid-EnVar scheme, tuning of the weights (${\mathit{\beta}}_{\mathrm{c}}^{\mathrm{2}}$ and ${\mathit{\beta}}_{\mathrm{e}}^{\mathrm{2}}$) for B_{c} and B_{e} is performed empirically to find the configuration which best combines the benefits from both sources of background error statistics. Figure 5 shows a comparison of domain-averaged analysis errors (root mean square errors; RMSE) with respect to the "truth" for the EBV and FreeBG experiments.
The cycle-averaged analysis errors (Fig. 5; bottom right panel) for all prognostic variables except v are generally smaller for the EBV experiments than for FreeBG, with an RMSE ratio less than 1. During the simulation, the w, ${\stackrel{\mathrm{\u0303}}{\mathit{\rho}}}^{\prime}$, and b^{′} errors were decreasing, suggesting that the deterministic run trajectories of the EBV experiments were converging around the "truth" because of the availability of observational information. The u analysis errors were decreasing in EBV(c) and EBV(d) but increasing in EBV(a) and EBV(b). Throughout the 50 cycles, the v analysis errors were generally increasing in the EBV experiments. This peculiar issue was exacerbated when the weighting towards B_{c} was increased, suggesting that the issue originates from B_{c}. A feature of the u, ${\stackrel{\mathrm{\u0303}}{\mathit{\rho}}}^{\prime}$, and b^{′} RMSE time sequences is the 8 h periodicity, which is also apparent in the basic dynamical root mean square fields. A normal mode analysis (not shown) and inspection of the basic dynamical fields reveal a 16 h period (local maxima to local maxima), which is within the period range of low-zonal-wavenumber gravity waves, suggesting that this feature is due to the dominant gravity waves in this system.
To test if the issue with the v analysis errors was due to the choice of training data, we repeated EBV(a) but with B_{c} calibrated using other training data (e.g. the ensemble perturbations from the 50th cycle of the spin-up run instead of those from the initial forecast ensemble). Even with more time-appropriate training data (but the same variances), the issue was only partially resolved (smaller increase in RMSE; not shown). Also, this issue does not appear to occur in the mid-latitudinal experimental setups of Bannister (2021). From the implied background error covariances in Sect. 4.1, there exists some mass–wind relationship in B_{e} that is not well represented in B_{c} through the geostrophic balance relationship, since f is small in the tropical setting. Repeating EBV(a) but omitting the geostrophic balance constraint entirely in the calibration of B_{c} (i.e. treating ${\stackrel{\mathrm{\u0303}}{\mathit{\rho}}}^{\prime}$ and v background errors univariately) also did not resolve the issue (not shown). We speculate that the issue could be due to the absence of a suitable balance constraint for prescribing the mass–wind relationship in B_{c}, or to a lack of tuning of the variances of B_{c}. Early results with a tuned B_{c} showed that the issue with the v analysis errors could be resolved by reducing the variances of all variables substantially. This warrants further investigation in a separate study to tune the variances for the system or to assess the possibility of deriving a balance relationship between v and ${\stackrel{\mathrm{\u0303}}{\mathit{\rho}}}^{\prime}$ for the tropical setting.
Comparing the EBV experiments, the u and v analysis errors are generally smallest in EBV(d), indicating that allocating full weight to B_{e} in this setup is ideal for minimising the horizontal-wind-related analysis errors. The w, ${\stackrel{\mathrm{\u0303}}{\mathit{\rho}}}^{\prime}$, and b^{′} analysis errors are arguably smallest in EBV(c), which has the smallest cycle-averaged RMSE for these variables. The results presented here are unsurprising, given that previous studies evaluating hybrid-EnVar DA in simplified models (e.g. Hamill and Snyder, 2000) and NWP systems (e.g. Montmerle et al., 2018; Bédard et al., 2020) also show that the best configuration appears to rely on a combination of both B_{e} and B_{c}, and not solely on one or the other.
4.4 Ensemble trajectories and spread–error relationship
We can better appreciate the robustness of the ensemble by plotting the trajectories of the ensemble, its mean, the FreeBG run, and the "truth" (Fig. 6). To avoid over-smoothing the local spatial variations in the fields, the trajectories are computed by taking a grid-point-averaged value of the fields over a subset of the full domain: a box located at the centre of the domain (model levels 25 to 35, longitudinal grid points 127 to 237). We have also investigated the trajectories using other subsets (boxes) distributed around the domain, but the main conclusions are the same, so we have excluded discussion of them.
In Fig. 6, the spread of the EBV(d) ensemble is centred around the ensemble mean throughout the 50 cycles. The "truth" trajectory is also generally contained within the spread of the ensemble, particularly for u, ${\stackrel{\mathrm{\u0303}}{\mathit{\rho}}}^{\prime}$, and b^{′}. There was no evidence of filter collapse or bimodality, which indicates that the DA-centred ensemble generated using the EBV method is healthy.
It is also common practice to compare the ensemble spread with the RMSE; for a perfectly reliable large ensemble in which observation density and errors are accounted for, the two quantities should be approximately equal (Leutbecher, 2009). In the EBV method, the ensemble spread is largely dependent on the choice of ϵ_{0} since it does not account for the observation network, but this method is still worth considering as a "control method" to compare with future methods consistent with Kalman filter theory. In the computation of the ensemble spread, Fortin et al. (2014) also caution against using the wrong metric. Following their recommended approach, we define the grid-point-averaged ensemble spread $\stackrel{\mathrm{\u203e}}{{S}_{t}}$ as the square root of the grid-point-averaged ensemble variance:
where the ensemble spread S is computed using Eq. (4) of Whitaker and Loughe (1998). N_{g} in this case refers to the number of grid points over which the average is taken (i.e. the points within the same box used for Fig. 6), and i is the grid point index within the box. The RMSE is computed as before, except now over the points within this box.
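Following Fortin et al. (2014), the key point is to average the per-grid-point ensemble variances before taking the square root, rather than averaging the per-grid-point standard deviations. A minimal sketch (hypothetical code, not from the ABC-DA system):

```python
import numpy as np

def mean_spread(ens):
    """Grid-point-averaged ensemble spread in the manner of Fortin et al.
    (2014): average the per-point ensemble variances first, then take the
    square root. ens: (N, N_g) ensemble restricted to the box."""
    var = ens.var(axis=0, ddof=1)      # per-grid-point ensemble variance
    return np.sqrt(var.mean())         # sqrt of the averaged variance
```

Averaging standard deviations instead would systematically underestimate the spread whenever the variance varies across the box (by Jensen's inequality), which is the pitfall Fortin et al. (2014) warn against.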
Figure 7 shows $\stackrel{\mathrm{\u203e}}{{S}_{t}}$ for each model quantity in the EBV(d) ensemble. These are benchmarked against the RMSE and the (time-stationary) implied background error standard deviations at model level 30 of B_{c}, which are also plotted. For u and ${\stackrel{\mathrm{\u0303}}{\mathit{\rho}}}^{\prime}$, the ensemble spread approximately matches the RMSE, particularly for later cycles as the hybrid control analysis converges around the "truth". For v, w, and b^{′}, the ensemble is clearly underdispersive. For all variables, the ensemble spread is also much smaller than the corresponding implied background error standard deviation at model level 30 of B_{c}. This strongly suggests that the issue with the v analysis errors highlighted in the previous section is due to a lack of tuning of the variances of B_{c}, which depend on the specific data assimilation setup. Note that the ensemble spread is computed with respect to the ensemble mean (Whitaker and Loughe, 1998), but the RMSE is computed between the hybrid control analysis (a surrogate for the ensemble mean) and the "truth". The spread–error relationship from this setup suggests that the DA-centring did not result in major statistical inconsistencies with the EBV ensemble. While the spread–error relationship is a useful diagnostic, it is not straightforward to relate the ensemble spread directly to the eventual skill of the hybrid-EnVar DA system. Hence, it is not easy to determine whether to further inflate or deflate the EBV analysis perturbations by considering other choices of ϵ_{0}.
4.5 Sensitivity to number of ensemble members
As mentioned in Sect. 3.2.1, having a finite number of ensemble members will lead to sampling error in ${\mathbf{P}}_{\mathrm{e}}^{\mathrm{f}}\left[t\right]$. Logically, decreasing the number of ensemble members N used to compute ${\mathbf{P}}_{\mathrm{e}}^{\mathrm{f}}\left[t\right]$ should result in larger sampling errors. For a fixed L, we demonstrate the sensitivity of the skill of the hybrid-EnVar DA system to N in the ABC-DA system. We perform two additional experiments as variants of EBV(d) to maximise the impact of the ensemble size changes. The experiments follow the same configuration as EBV(d) but use only 20 and 10 ensemble members, referred to as EBV(d20) and EBV(d10), respectively, instead of the 30 members used until now.
Figure 8 shows the comparison of cycle-averaged analysis errors as N is varied. The RMSE is smallest in EBV(d) for almost all prognostic variables. Reducing the ensemble size from 30 to 20 in EBV(d20) leads to an increase in the RMSE, indicating poorer performance of the ABC-DA system. A further reduction of the ensemble size to 10 in EBV(d10) leads to the poorest performance overall. In this simple setup, these results are expected, following the above argument that larger sampling errors are introduced into the system when N is smaller. For ${\stackrel{\mathrm{\u0303}}{\mathit{\rho}}}^{\prime}$ and b^{′}, the RMSE ratio in EBV(d10) is even larger than in EBV(a), indicating that the pure EnVar setup may perform worse than its 3DVar-FGAT counterpart when the ensemble size is too small (Fig. 8; bottom right panel).
It is important to highlight that these results are specific to this ABC-DA setup, in which the localisation length scales are kept fixed (and are arguably quite tight) across the experiments. For other setups where the localisation length scales are broader, the optimal ensemble size would be expected to be larger. It would also be worth exploring whether a further increase in the ensemble size by orders of magnitude would yield a "saturation point" beyond which there is little additional benefit to the system. However, one should also be aware of ensemble clustering (Amezcua et al., 2012) in very large ensembles. This issue has been shown to negatively impact hybrid-EnVar DA in simpler models such as the three-variable Lorenz-63 model (Goodliff et al., 2015). In the case of the much larger ABC-DA system, though, it is unlikely that N could practically be made large enough relative to n to be exposed to this handicap.
In this article, we document the development of the hybrid ensemble-variational data assimilation system for the ABC model (Petrie et al., 2017), built on the existing variational ABC-DA system (Bannister, 2020). The hybrid ensemble-variational algorithm introduced here is based on the alpha control variable approach of Lorenc (2003). Key details related to the spatial and inter-variable localisation are discussed; the approach coded in the ABC-DA system allows flexibility in the localisation for use in future exploratory studies. The hybrid ensemble-variational algorithm requires an ensemble system, run in parallel with the deterministic components, to provide the flow-dependent error modes. To achieve this, the random field perturbations method is introduced in the ABC model for generating an initial ensemble. The ensemble bred vectors (EBV) method is also introduced in the ABC-DA system to propagate the ensemble, which is centred on the hybrid control analysis at each cycle.
Using a tropical setting of the ABC model, we test both ensemble propagation methods (with a 30-member ensemble) in a series of hourly-cycling, multi-cycle data assimilation observing system simulation experiments with hybrid ensemble-variational data assimilation. In the experiments, 3DVar-FGAT (First Guess at Appropriate Time) is employed together with EBV using different weightings assigned to the implied climatological (or static) background error covariance matrix (B_{c}) and the implied ensemble-derived background error covariance matrix (B_{e}): (a) 100 % B_{c} (i.e. no flow dependency, equivalent to 3DVar-FGAT); (b) 50 % B_{c}, 50 % B_{e}, hybrid-En3DVar-FGAT (i.e. flow dependency with equal contributions from B_{c} and B_{e}); (c) 20 % B_{c}, 80 % B_{e}, hybrid-En3DVar-FGAT (i.e. flow dependency with most of the contribution from B_{e}); and (d) 100 % B_{e}, pure En3DVar-FGAT (i.e. no contribution from B_{c}).
The cycle-averaged analysis root mean square errors (RMSEs) with respect to the "truth" were generally smaller for the EBV experiments than for the free background for all prognostic variables except v. All experiments that involved the ensemble outperformed the pure B_{c} configuration for all variables. EBV(c) was the best-performing configuration for w, ${\stackrel{\mathrm{\u0303}}{\mathit{\rho}}}^{\prime}$, and b^{′}, while EBV(d) was the best-performing configuration for u and v. We also noted that, in the experiments involving B_{c}, the v field gradually diverged from the "truth" during the simulations even though the fields of the other variables converged around the "truth" as expected. Through further assessment of the implied background error covariances and sensitivity tests, it was found that, in the tropical setting of the ABC model, there exists a mass–wind relationship that is captured in B_{e} but is not well represented by the (weak) geostrophic balance constraint in B_{c}. We speculate that the issue with v in configurations involving B_{c} stems from the absence of a suitable balance constraint for prescribing the mass–wind relationship that may exist in the tropical setting of the ABC model; since such a constraint is not trivial to derive, this warrants further investigation in a separate study. The results demonstrate the advantages of employing hybrid ensemble-variational data assimilation in the ABC-DA system over traditional variational data assimilation.
An inspection of the EBV(d) ensemble trajectories showed that the ensemble remained centred around the ensemble mean throughout the experiment, with the "truth" trajectory generally contained within the ensemble spread. For v, w, and b^{′}, the EBV ensemble was under-dispersive, but for u and ${\stackrel{\mathrm{\u0303}}{\mathit{\rho}}}^{\prime}$ the ensemble spread approximately matched the corresponding RMSE. The EBV ensemble exhibited neither bimodality nor evidence of filter collapse, indicating that the DA-centred ensemble generated was healthy.
To illustrate the sensitivity to ensemble size, we performed two additional experiments as variants of EBV(d): EBV(d20) with 20 ensemble members and EBV(d10) with 10 ensemble members. The cycle-averaged analysis errors for almost all prognostic variables were smallest in EBV(d). Reducing the ensemble size from 30 to 20, and subsequently to 10, led to increases in the RMSE, indicating poorer performance of the ABC-DA system. The results in this simple setup are consistent with the expectation that a smaller ensemble introduces larger sampling errors into the system, resulting in larger RMSEs.
During the testing and development of the hybrid ensemble-variational method, localisation-related issues, such as hydrostatic imbalance in the analysis increments, also became apparent. Similar issues have been documented in previous studies, but we have included additional comments in this article. Given the rapid adoption of and broad shift towards hybrid ensemble-variational methods in convective-scale numerical weather prediction, we hope that the ABC-DA system can prove useful in providing further insights and in highlighting other potential issues that may arise in such methods. Particularly for the tropics, further work is required to better understand the characteristics of the ensemble-derived background errors, such as disentangling their flow dependency or designing the localisation to isolate or identify important multivariate relationships.
As described in Sect. 3.1, the random field perturbations method is used to generate the initial ensemble states for the ABC ensemble system. Equation (7) describes the implementation, in which pairs of states are randomly chosen from a long "truth" run.
In Magnusson et al. (2009), additional constraints are placed on the choice of random fields: the dates must be from different years but from the same season, in order to eliminate inter-annual correlations in the perturbations while preserving the seasonal characteristics of the variability. We have attempted to capture the essence of these constraints even though there are no seasons (or years) in the ABC model.
For the experiments, the long "truth" run is generated with the initial control state ${\mathit{x}}_{\mathrm{cs}}^{\mathrm{c}}$ as the initial condition and is run for 50 d. ABC model dumps are produced every hour, resulting in a total of 1200 state dumps. A minimum separation of 100 h is imposed between the validity times of each random pair of states, beyond which the states are assumed to be uncorrelated. In other words, pairs of states are selected randomly and retained only if their validity times are at least 100 h apart. Additionally, Magnusson et al. (2009) did not indicate whether dates can be selected repeatedly (i.e. selection from a pool with replacement), so we have not imposed the additional constraint of selection without replacement. For the experiments, a total of 1200 state dumps is sufficiently large compared to the number of pairs required (the number of ensemble members, 30 in most of this work).
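The pair-selection procedure described above can be sketched as follows (a minimal illustration; the function and parameter names are our own, not those of the ABC-DA code, and we assume hourly dumps so that an index difference equals a separation in hours):

```python
import random

def draw_random_pairs(n_dumps, n_pairs, min_separation=100, seed=0):
    """Randomly draw pairs of state indices (hourly dumps) from a long run,
    retaining a pair only if its validity times are at least `min_separation`
    hours apart. Selection is from the full pool with replacement, as in the
    text (no constraint against re-using a dump in another pair)."""
    rng = random.Random(seed)
    pairs = []
    while len(pairs) < n_pairs:
        i, j = rng.randrange(n_dumps), rng.randrange(n_dumps)
        if abs(i - j) >= min_separation:  # hourly dumps: index difference = hours
            pairs.append((i, j))
    return pairs

# 1200 hourly dumps from a 50 d run; one pair per ensemble member (30 members).
pairs = draw_random_pairs(n_dumps=1200, n_pairs=30)
print(pairs[:3])
```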
One aspect highlighted in the implementation was the choice of the fixed perturbation amplitude used to scale the random field perturbations. It is not possible to follow the exact approach of Magnusson et al. (2009), which uses average analysis error statistics, in the ABC model. However, we use the same metric (the total energy norm) to gauge the initial fixed perturbation amplitude. As described in Sect. 3.1, the random field perturbations are scaled towards their mean total energy norm. This approach ensures that the random field perturbations have the same fixed perturbation amplitude but differ in their directions of error growth. Further testing with the ABC model showed that reducing the fixed perturbation amplitude yielded smaller errors in the experiments, so a deflation factor of 5 was eventually adopted.
The total energy (E_{tot}) of the random field perturbations is computed using
where E_{k}, E_{b}, and E_{e} are the kinetic, buoyant, and elastic energy, respectively, ρ_{0}=1.225 kg m^{−3} is a reference air density, and dV is the volume of a grid box in the ABC model. Note that, as mentioned in Sect. 3.3.1, we also use $\sqrt{{E}_{\mathrm{tot}}}$ in the ensemble bred vectors method to scale the ensemble perturbations for subsequent cycles. Prior to the experiments, we performed some initial testing using the inner product norm instead of $\sqrt{{E}_{\mathrm{tot}}}$; the two norm choices yielded similar results. Since $\sqrt{{E}_{\mathrm{tot}}}$ is physically meaningful, it was eventually used for scaling both the random field perturbations and the ensemble perturbations from the ensemble bred vectors method in the experiments.
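The energy-based scaling can be sketched as follows. This is a hedged illustration: the precise weighting of the buoyant and elastic terms by A and C is our assumption (following the usual ABC-model energy definitions), the field sizes are arbitrary, and the values A = 0.02 and C = 10^4 are only illustrative defaults.

```python
import numpy as np

rho0 = 1.225  # reference air density [kg m^-3]

def total_energy(u, v, w, rho_p, b_p, A, C, dV):
    """Total energy E_tot = E_k + E_b + E_e of a perturbation.
    The weighting of the buoyant (1/A^2) and elastic (C) terms is an
    assumption here, following the usual ABC-model energy forms."""
    E_k = 0.5 * rho0 * np.sum(u**2 + v**2 + w**2) * dV
    E_b = 0.5 * rho0 * np.sum(b_p**2) / A**2 * dV
    E_e = 0.5 * rho0 * C * np.sum(rho_p**2) * dV
    return E_k + E_b + E_e

def scale_perturbation(fields, E_target, E_current):
    """Rescale all perturbation fields by one factor so the scaled
    perturbation has total energy E_target (sqrt: E is quadratic)."""
    factor = np.sqrt(E_target / E_current)
    return [factor * f for f in fields]

# Example: deflate a random perturbation's amplitude by a factor of 5,
# i.e. reduce its total energy to 1/25 of the original.
rng = np.random.default_rng(1)
u, v, w, rho_p, b_p = (rng.standard_normal(100) for _ in range(5))
E0 = total_energy(u, v, w, rho_p, b_p, A=0.02, C=1e4, dV=1.0)
u, v, w, rho_p, b_p = scale_perturbation([u, v, w, rho_p, b_p], E0 / 25.0, E0)
```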
As highlighted in Sect. 3.2.2, L (the localisation matrix) can be partitioned into a matrix U^{α}:
We seek to prove that two approaches used to code χ^{αk} and U^{α} (described below) give the same result when L is applied to the same model space vector v (of length N_{g}N_{var}). Note that L is N_{g}N_{var}×N_{g}N_{var}, which means, in principle, that the inter-variable localisation matrix can be set to have any correlation structure, including the limiting cases of full localisation between different variables (where the corresponding matrix elements are 0) and no localisation (matrix elements are 1). Recall that L is used in the DA via a Schur product with ${\mathbf{P}}_{\mathrm{e}}^{\mathrm{f}}\left[t\right]$ (Eq. 11). While the number of rows in U^{α} is constrained to be N_{g}N_{var}, the number of columns can be chosen: the fewer the columns, the smaller the corresponding size of the χ^{αk} vectors (Eq. 14b), but the less flexible the implied localisation matrix. The first approach considered is based on Wang et al. (2008a) (N_{g} columns); the second approach, coded in the ABC-DA system (N_{g}N_{var} columns), is inspired by Bannister (2017). The first approach requires less memory and computation but offers less flexibility than the second in terms of multivariate localisation choices.
For simplicity, the proof is demonstrated using pure EnVar with one nonzero element in v. This is a similar procedure to computing a column of the implied B_{c} or B_{e} but now the implied L is being probed. It is easier to visualise the interactions of the matrix elements by partitioning v into segments of size N_{g}×1 based on the ABC prognostic variables, i.e. $\mathbf{v}=({\mathbf{v}}_{u},{\mathbf{v}}_{v},{\mathbf{v}}_{w},{\mathbf{v}}_{{\stackrel{\mathrm{\u0303}}{\mathit{\rho}}}^{\prime}},{\mathbf{v}}_{{b}^{\prime}}{)}^{\top}$. Similarly, we can consider blocks, each of size N_{g}×N_{g}, used to construct U^{α} (i.e. ${\mathbf{U}}_{u}^{\mathit{\alpha}},{\mathbf{U}}_{v}^{\mathit{\alpha}},{\mathbf{U}}_{w}^{\mathit{\alpha}},{\mathbf{U}}_{{\stackrel{\mathrm{\u0303}}{\mathit{\rho}}}^{\prime}}^{\mathit{\alpha}},$ and ${\mathbf{U}}_{{b}^{\prime}}^{\mathit{\alpha}}$) which will determine the spatial localisations (horizontal and vertical) for each variable.
The main difference between the two approaches is in the design of U^{α}. In the first approach, based on Wang et al. (2008a), U^{α} (denoted ${\stackrel{\mathrm{\u0303}}{\mathbf{U}}}^{\mathit{\alpha}}$) is rectangular (N_{g}N_{var}×N_{g} and χ^{αk} has N_{g} elements), given by
Applying L (denoted $\stackrel{\mathrm{\u0303}}{\mathbf{L}}$ for first approach) to v yields
with the elements given by
If there is only one nonzero element (the qth element of v), this simplifies to
In the second approach, U^{α} (denoted ${\widehat{\mathbf{U}}}^{\mathit{\alpha}}$) is square (N_{g}N_{var}×N_{g}N_{var} and χ^{αk} has N_{g}N_{var} elements), given by
where 0 is a N_{g}×N_{g} block containing zeroes. This is the default configuration that is coded in the ABCDA system which gives an implied L (denoted $\widehat{\mathbf{L}}$ for the second approach):
Notice that here, $\widehat{\mathbf{L}}$ applies full inter-variable localisation, so that the Schur product of $\widehat{\mathbf{L}}$ with ${\mathbf{P}}_{\mathrm{e}}^{\mathrm{f}}\left[t\right]$ will not retain any inter-variable covariances. This may be useful if N is small and sampling noise in ${\mathbf{P}}_{\mathrm{e}}^{\mathrm{f}}\left[t\right]$ is problematic.
Next, we introduce a mapping matrix $\widehat{\mathbf{I}}$ which consists of N_{p}×N_{p} blocks of identity matrices (${\mathbf{I}}_{{N}_{\mathrm{g}}}$, each of size N_{g}×N_{g}):
where N_{p} is the number of model variables whose inter-variable covariances are retained by the mapping matrix (i.e. ${N}_{\mathrm{p}}={N}_{\mathrm{var}}=\mathrm{5}$ in the above). Note that other designs of $\widehat{\mathbf{I}}$ (e.g. replacing some blocks with 0) allow retention of only the desired covariances between specific model variables.
Using the second approach of coding U^{α} and χ^{αk}, it is possible to retain the full inter-variable covariances and achieve exactly the same outcome as the first approach by defining ${\mathbf{U}}^{\mathit{\alpha}}={\widehat{\mathbf{U}}}^{\mathit{\alpha}}\widehat{\mathbf{I}}$. The implied localisation matrix is thus $\mathbf{L}=\frac{\mathrm{1}}{{N}_{\mathrm{p}}}{\widehat{\mathbf{U}}}^{\mathit{\alpha}}\widehat{\mathbf{I}}\widehat{\mathbf{I}}{\widehat{\mathbf{U}}}^{\mathit{\alpha}\top}$. As before, applying L to v yields
with the elements given by
Note how in this case, the rows of U^{α⊤} are the same as in ${\stackrel{\mathrm{\u0303}}{\mathbf{U}}}^{\mathit{\alpha}\top}$ from the first approach (Eq. B2) but repeated N_{p} times. If there is only one nonzero element (the qth element of v), then the computation simplifies to
noting that when N_{p}=N_{var}, full inter-variable covariances are retained. Any inner products of χ^{αk} in the variational algorithm, such as those used in the minimisation or in the computation of J_{e}, must also be scaled by $\frac{\mathrm{1}}{{N}_{\mathrm{p}}}$ accordingly.
The key point is that the two approaches, using ${\stackrel{\mathrm{\u0303}}{\mathbf{U}}}^{\mathit{\alpha}}$ and U^{α} respectively, yield the same implied localisation matrix ($\stackrel{\mathrm{\u0303}}{\mathbf{L}}=\mathbf{L}$), as demonstrated by Eqs. (B3c) and (B7c) being identical. Given its greater flexibility, the second approach was coded in the ABC-DA system.
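The equivalence can also be checked numerically on a toy problem. In this sketch, random square blocks stand in for the actual per-variable spatial localisation square roots (the grid size, variable count, and block contents are invented; only the block algebra matters):

```python
import numpy as np

rng = np.random.default_rng(0)
Ng, Nvar = 6, 3  # tiny grid size and variable count, for illustration only

# Per-variable spatial localisation square roots (random stand-ins here; in
# ABC-DA these would come from the chosen spatial correlation model).
U_blocks = [rng.standard_normal((Ng, Ng)) for _ in range(Nvar)]

# Approach 1 (Wang et al., 2008a): stack the blocks -> Ng*Nvar x Ng.
U_tilde = np.vstack(U_blocks)
L_tilde = U_tilde @ U_tilde.T

# Approach 2 (ABC-DA): block-diagonal U_hat (Ng*Nvar x Ng*Nvar) ...
U_hat = np.zeros((Ng * Nvar, Ng * Nvar))
for i, U in enumerate(U_blocks):
    U_hat[i * Ng:(i + 1) * Ng, i * Ng:(i + 1) * Ng] = U

# ... combined with the mapping matrix I_hat (Np x Np blocks of identities).
Np = Nvar
I_hat = np.tile(np.eye(Ng), (Np, Np))

# Implied L = (1/Np) * U_hat I_hat I_hat U_hat^T, as in the text.
L_hat = (U_hat @ I_hat) @ (I_hat.T @ U_hat.T) / Np

print(np.allclose(L_tilde, L_hat))  # True: the two implied L matrices match
```

Replacing some identity blocks of `I_hat` with zero blocks would retain only the covariances between selected variable pairs, which is the flexibility the second approach offers.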
According to Eq. (3) of Bannister (2020), the hydrostatic balance relation in the ABC model (also used in the control variable transform) is given by
From Eq. (1), the prognostic w equation indicates that the change in w following an air parcel (i.e. a Lagrangian frame of reference) is given by the source/sink terms $C\frac{\partial {\stackrel{\mathrm{\u0303}}{\mathit{\rho}}}^{\prime}}{\partial z}$ and b^{′}. This neatly corresponds to hydrostatic balance. In other words, hydrostatic imbalance will lead to sources/sinks in w as the system evolves.
Applying vertical localisation directly to the ensemble-derived error modes via the Schur product alters the vertical gradient of the ${\stackrel{\mathrm{\u0303}}{\mathit{\rho}}}^{\prime}$ field, depending on the kurtosis of the correlation curve applied. Consider the following scenario. Assuming that the ensemble forecasts are hydrostatically balanced on the large scales, assimilating a single ${\stackrel{\mathrm{\u0303}}{\mathit{\rho}}}^{\prime}$ observation without vertical localisation would be expected to produce hydrostatically balanced ${\stackrel{\mathrm{\u0303}}{\mathit{\rho}}}^{\prime}$ and b^{′} increments. With vertical localisation, however, the sharpness of the correlation curve is superimposed on the ${\stackrel{\mathrm{\u0303}}{\mathit{\rho}}}^{\prime}$ fields in the ensemble-derived error modes, resulting in increments that decrease more rapidly with distance (sharper gradient) from the observation point. A larger b^{′} increment is thus required to maintain hydrostatic balance, yet the actual b^{′} increments are also reduced by the Schur product. In this scenario, the resulting b^{′} increments would be sub-hydrostatic.
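This mechanism can be demonstrated in a one-dimensional toy column. This is a hedged sketch: we assume the hydrostatic relation takes the form b′ = C ∂ρ̃′/∂z (cf. the relation quoted above from Bannister, 2020), and all profiles, scales, and the value C = 1 are invented purely for illustration.

```python
import numpy as np

# Toy vertical column and a smooth, hydrostatically balanced increment pair,
# assuming b' = C * d(rho')/dz (C = 1 here purely for scaling).
z = np.linspace(0.0, 10.0, 201)
dz = z[1] - z[0]
C = 1.0
rho = np.exp(-0.5 * ((z - 5.0) / 2.0) ** 2)  # broad rho' increment
b = C * np.gradient(rho, dz)                  # balanced b' increment

# Sharp vertical localisation function centred on the "observation" at z = 5.
loc = np.exp(-0.5 * ((z - 5.0) / 0.7) ** 2)

rho_loc = loc * rho                      # Schur product applied to rho'
b_loc = loc * b                          # Schur product applied to b'
b_needed = C * np.gradient(rho_loc, dz)  # b' required to balance localised rho'

# The localised b' falls short of the b' required to balance the localised
# rho' (by roughly rho * d(loc)/dz), i.e. the increments are sub-hydrostatic.
imbalance = np.max(np.abs(b_needed - b_loc))
print(f"max hydrostatic imbalance in b' increment: {imbalance:.3f}")
```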
During the spin-up configuration testing with vertical localisation applied, it was noted that the root mean square value of the w field gradually increased throughout the earlier stages of the spin-up process. Because there is a restoring A^{2}w source term in the prognostic b^{′} equation, the root mean square value of the w field does not increase indefinitely; corresponding changes are induced in the b^{′} field.
The model and data assimilation system are written in Fortran 90/95, and the plotting code is written in Python. The upgraded system branches from ABC-DA v1.5. The source code, experiment scripts, and plotting scripts are open source and freely available at https://doi.org/10.5281/zenodo.6646951 (Lee and Bannister, 2022).
JCKL, JA, and RNB designed the experiments. JCKL developed the model code and conducted the simulations and analysis. JCKL prepared the article which was vetted by RNB and JA.
The contact author has declared that none of the authors has any competing interests.
Publisher's note: Copernicus Publications remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
The authors would like to thank two anonymous reviewers for their useful comments.
This research has been supported by the National Environment Agency – Singapore (grant no. NIL) and the National Centre for Earth Observation (grant no. NE/R016518/1).
This paper was edited by Simon Unterstrasser and reviewed by two anonymous referees.
Amezcua, J., Ide, K., Bishop, C. H., and Kalnay, E.: Ensemble clustering in deterministic ensemble Kalman filters, Tellus A, 64, 18039, https://doi.org/10.3402/tellusa.v64i0.18039, 2012.
Asch, M., Bocquet, M., and Nodet, M.: Data Assimilation: Methods, Algorithms, and Applications, Fundamentals of Algorithms, SIAM, Society for Industrial and Applied Mathematics, https://books.google.co.uk/books?id=A3Q6vgAACAAJ (last access: 20 February 2022), 2016.
Balci, N., Mazzucato, A. L., Restrepo, J. M., and Sell, G. R.: Ensemble dynamics and bred vectors, Mon. Weather Rev., 140, 2308–2334, 2012.
Bannister, R.: A review of operational methods of variational and ensemble-variational data assimilation, Q. J. Roy. Meteor. Soc., 143, 607–633, 2017.
Bannister, R. N.: A review of forecast error covariance statistics in atmospheric variational data assimilation. I: Characteristics and measurements of forecast error covariances, Q. J. Roy. Meteor. Soc. A, 134, 1951–1970, 2008a.
Bannister, R. N.: A review of forecast error covariance statistics in atmospheric variational data assimilation. II: Modelling the forecast error covariance statistics, Q. J. Roy. Meteor. Soc. A, 134, 1971–1996, 2008b.
Bannister, R. N.: ABCDA_1.4da: Data assimilation for ABC model, Zenodo [code], https://doi.org/10.5281/zenodo.3531926, 2019.
Bannister, R. N.: The ABC-DA system (v1.4): a variational data assimilation system for convective-scale assimilation research with a study of the impact of a balance constraint, Geosci. Model Dev., 13, 3789–3816, https://doi.org/10.5194/gmd-13-3789-2020, 2020.
Bannister, R. N.: Balance conditions in variational data assimilation for a high-resolution forecast model, Q. J. Roy. Meteor. Soc., https://doi.org/10.1002/qj.4106, 2021.
Bédard, J., Caron, J.-F., Buehner, M., Baek, S.-J., and Fillion, L.: Hybrid Background Error Covariances for a Limited-Area Deterministic Weather Prediction System, Weather Forecast., 35, 1051–1066, 2020.
Berre, L., Ştefănescu, S. E., and Pereira, M. B.: The representation of the analysis effect in three error simulation techniques, Tellus A, 58, 196–209, 2006.
Bishop, C. H., Etherton, B. J., and Majumdar, S. J.: Adaptive sampling with the ensemble transform Kalman filter. Part I: Theoretical aspects, Mon. Weather Rev., 129, 420–436, 2001.
Buizza, R., Tribbia, J., Molteni, F., and Palmer, T.: Computation of optimal unstable structures for a numerical weather prediction model, Tellus A, 45, 388–407, 1993.
Evensen, G.: Sequential data assimilation with a nonlinear quasi-geostrophic model using Monte Carlo methods to forecast error statistics, J. Geophys. Res.-Oceans, 99, 10143–10162, 1994.
Evensen, G.: Data Assimilation: The Ensemble Kalman Filter, Springer Berlin Heidelberg, https://books.google.co.uk/books?id=VJ2oOecHhOYC (last access: 20 February 2022), 2006.
Fortin, V., Abaza, M., Anctil, F., and Turcotte, R.: Why should ensemble spread match the RMSE of the ensemble mean?, J. Hydrometeorol., 15, 1708–1713, 2014.
Gaspari, G. and Cohn, S. E.: Construction of correlation functions in two and three dimensions, Q. J. Roy. Meteor. Soc., 125, 723–757, 1999.
Goodliff, M., Amezcua, J., and Van Leeuwen, P. J.: Comparing hybrid data assimilation methods on the Lorenz 1963 model with increasing nonlinearity, Tellus A, 67, 26928, https://doi.org/10.3402/tellusa.v67.26928, 2015.
Hamill, T. M. and Snyder, C.: A hybrid ensemble Kalman filter–3D variational analysis scheme, Mon. Weather Rev., 128, 2905–2919, 2000.
Hamill, T. M., Whitaker, J. S., and Snyder, C.: Distance-dependent filtering of background error covariance estimates in an ensemble Kalman filter, Mon. Weather Rev., 129, 2776–2790, 2001.
Heng, B. P., Tubbs, R., Huang, X.-Y., Macpherson, B., Barker, D. M., Boyd, D. F., Kelly, G., North, R., Stewart, L., Webster, S., and Wlasak, M.: SINGV-DA: A data assimilation system for convective-scale numerical weather prediction over Singapore, Q. J. Roy. Meteor. Soc., 146, 1923–1938, 2020.
Houtekamer, P. and Derome, J.: Methods for ensemble prediction, Mon. Weather Rev., 123, 2181–2196, 1995.
Houtekamer, P. L. and Mitchell, H. L.: A sequential ensemble Kalman filter for atmospheric data assimilation, Mon. Weather Rev., 129, 123–137, 2001.
Ingleby, N. B.: The statistical structure of forecast errors and its representation in the Met. Office global 3D variational data assimilation scheme, Q. J. Roy. Meteor. Soc., 127, 209–231, 2001.
Kalnay, E.: Atmospheric Modeling, Data Assimilation and Predictability, Cambridge University Press, 2003.
Lee, J. C. K. and Bannister, R. N.: ABCDA_1.5da_hybrid, Zenodo [code], https://doi.org/10.5281/zenodo.6646951, 2022.
Leutbecher, M.: Diagnosis of ensemble forecasting systems, in: Seminar on Diagnosis of Forecasting and Data Assimilation Systems, 235–266, 2009.
Lorenc, A. C.: The potential of the ensemble Kalman filter for NWP – a comparison with 4D-Var, Q. J. Roy. Meteor. Soc., 129, 3183–3203, 2003.
Magnusson, L., Nycander, J., and Källén, E.: Flow-dependent versus flow-independent initial perturbations for ensemble prediction, Tellus A, 61, 194–209, 2009.
Montmerle, T., Michel, Y., Arbogast, E., Ménétrier, B., and Brousseau, P.: A 3D ensemble variational data assimilation scheme for the limited-area AROME model: Formulation and preliminary results, Q. J. Roy. Meteor. Soc., 144, 2196–2215, 2018.
Parrish, D. F. and Derber, J. C.: The National Meteorological Center's spectral statistical-interpolation analysis system, Mon. Weather Rev., 120, 1747–1763, 1992.
Penny, S. G.: The hybrid local ensemble transform Kalman filter, Mon. Weather Rev., 142, 2139–2149, 2014.
Petrie, R. E., Bannister, R. N., and Cullen, M. J. P.: The "ABC model": a non-hydrostatic toy model for use in convective-scale data assimilation investigations, Geosci. Model Dev., 10, 4419–4441, https://doi.org/10.5194/gmd-10-4419-2017, 2017.
Toth, Z. and Kalnay, E.: Ensemble forecasting at NMC: The generation of perturbations, B. Am. Meteor. Soc., 74, 2317–2330, 1993.
Toth, Z. and Kalnay, E.: Ensemble forecasting at NCEP and the breeding method, Mon. Weather Rev., 125, 3297–3319, 1997.
Wang, X., Snyder, C., and Hamill, T. M.: On the theoretical equivalence of differently proposed ensemble–3DVAR hybrid analysis schemes, Mon. Weather Rev., 135, 222–227, 2007.
Wang, X., Barker, D. M., Snyder, C., and Hamill, T. M.: A hybrid ETKF–3DVAR data assimilation scheme for the WRF model. Part I: Observing system simulation experiment, Mon. Weather Rev., 136, 5116–5131, 2008a.
Wang, X., Barker, D. M., Snyder, C., and Hamill, T. M.: A hybrid ETKF–3DVAR data assimilation scheme for the WRF model. Part II: Real observation experiments, Mon. Weather Rev., 136, 5132–5147, 2008b.
Whitaker, J. S. and Loughe, A. F.: The relationship between ensemble spread and ensemble mean skill, Mon. Weather Rev., 126, 3292–3302, 1998.
 Abstract
 Introduction
 The ABCDA system
 Technical implementation of the data assimilation and forecast framework
 Data assimilation experiments using the hybridEnVar scheme in a tropical setting
 Summary
 Appendix A: Details on the random field perturbations method
 Appendix B: Accounting for intervariable covariances – proof of equivalence of two approaches
 Appendix C: Hydrostatic imbalance due to vertical localisation
 Code availability
 Author contributions
 Competing interests
 Disclaimer
 Acknowledgements
 Financial support
 Review statement
 References