Articles | Volume 15, issue 20
https://doi.org/10.5194/gmd-15-7641-2022
Development and technical paper | 20 Oct 2022

A fast, single-iteration ensemble Kalman smoother for sequential data assimilation

Colin Grudzien and Marc Bocquet
Abstract

Ensemble variational methods form the basis of the state of the art for nonlinear, scalable data assimilation, yet current designs may not be cost-effective for real-time, short-range forecast systems. We propose a novel estimator in this formalism that is designed for applications in which forecast error dynamics is weakly nonlinear, such as synoptic-scale meteorology. Our method combines the 3D sequential filter analysis and retrospective reanalysis of the classic ensemble Kalman smoother with an iterative ensemble simulation of 4D smoothers. To rigorously derive and contextualize our method, we review related ensemble smoothers in a Bayesian maximum a posteriori narrative. We then develop and intercompare these schemes in the open-source Julia package DataAssimilationBenchmarks.jl, with pseudo-code provided for their implementations. This numerical framework, supporting our mathematical results, produces extensive benchmarks demonstrating the significant performance advantages of our proposed technique. Particularly, our single-iteration ensemble Kalman smoother (SIEnKS) is shown to improve prediction/analysis accuracy and to simultaneously reduce the leading-order computational cost of iterative smoothing in a variety of test cases relevant for short-range forecasting. This long work presents our novel SIEnKS and provides a theoretical and computational framework for the further development of ensemble variational Kalman filters and smoothers.

1 Introduction

1.1 Context

Ensemble variational methods form the basis of the state of the art for nonlinear, scalable data assimilation (DA; Asch et al.2016; Bannister2017). Estimators following an ensemble Kalman filter (EnKF) analysis include the seminal maximum likelihood filter and 4DEnVAR (Zupanski2005; Liu et al.2008), the ensemble randomized maximum likelihood method (EnRML; Gu and Oliver2007; Chen and Oliver2012; Raanes et al.2019b), the iterative ensemble Kalman smoother (IEnKS; Sakov et al.2012; Bocquet and Sakov2013, 2014), and the ensemble Kalman inversion (EKI; Iglesias et al.2013; Schillings and Stuart2018; Kovachki and Stuart2019). Unlike traditional 3D-Var and 4D-Var, which use the adjoint-based approximation for the gradient of the Bayesian maximum a posteriori (MAP) cost function, these EnKF-based approaches utilize an ensemble of nonlinear forecast model simulations to approximate the tangent linear model. The gradient can then be approximated by, e.g., finite differences from the ensemble mean as in the bundle variant of the IEnKS (Bocquet and Sakov2014). The ensemble approximation can thus obviate constructing tangent linear and adjoint code for nonlinear forecast and observation models, which comes at a major cost in development time for operational DA systems.

These EnKF-based, ensemble variational methods combine the high accuracy of the iterative solution to the Bayesian MAP formulation of the nonlinear DA problem (Sakov et al.2012; Bocquet and Sakov2014), the relative simplicity of model development and maintenance in ensemble-based DA (Kalnay et al.2007), the ensemble analysis of time-dependent errors (Corazza et al.2003), and a variational optimization of hyperparameters for, e.g., inflation (Bocquet et al.2015), localization (Lorenc2003), and surrogate models (Bocquet et al.2020) to augment the estimation scheme. However, while the above schemes are promising for moderately nonlinear and non-Gaussian DA, an obstacle to their use in real-time, short-range forecast systems lies in the computational barrier of simulating the nonlinear forecast model in the ensemble sampling procedure. In order to produce forecast, filter, and reanalyzed smoother statistics, these estimators may require multiple runs of the ensemble simulation over the data assimilation window (DAW), consisting of lagged past and current times.

When nonlinearity in the DA cycle is not dominated by the forecast error dynamics, as in synoptic-scale meteorology, an iterative optimization over the forecast simulation may not produce a cost-effective reduction in the forecast error. Particularly, when the linear Gaussian approximation for the forecast error dynamics is adequate, nonlinearity in the DA cycle may instead be dominated by the nonlinearity in the observation model, the nonlinearity in the hyperparameter optimization, or the nonlinearity in temporally interpolating a reanalyzed, smoothed solution over the DAW. In this setting, our formulation of iterative, ensemble variational smoothing has substantial advantages in balancing the computational cost/prediction accuracy tradeoff.

1.2 Objectives and outline

This long paper achieves three connected objectives. First, we review and update a variety of already published smoother algorithms in a narrative of Bayesian MAP estimation. Second, we use this framework to derive and contextualize our estimation technique. Third, we develop all our algorithms and test cases in the open-source Julia package DataAssimilationBenchmarks.jl (Bezanson et al.2017; Grudzien et al.2021). This numerical framework, supporting our mathematical results, produces extensive simulation benchmarks, validating the performance advantages of our proposed technique. These simulations likewise establish fundamental performance metrics for all estimators and the Julia package DataAssimilationBenchmarks.jl.

Our proposed technique combines the 3D sequential filter analysis and retrospective reanalysis of the classic ensemble Kalman smoother (EnKS; Evensen and Van Leeuwen2000) with an iterative ensemble simulation of 4D smoothers. Following a 3D filter analysis and retrospective reanalysis of lagged states, we reinitialize each subsequent smoothing cycle with a reanalyzed, lagged ensemble state. The resulting scheme is a single-iteration ensemble Kalman smoother, denoted as such as it produces its forecast, filter, and reanalyzed smoother statistics with a single iteration of the ensemble simulation over the DAW. By doing so, we seek to minimize the leading-order cost of ensemble variational smoothing in real-time, geophysical forecast models, i.e., the ensemble simulation. However, the scheme can iteratively optimize the sequential filter cost functions in the DAW without computing additional iterations of the ensemble simulation.

We denote our framework single-iteration smoothing, while the specific implementation presented here is denoted as the single-iteration ensemble Kalman smoother (SIEnKS). For linear Gaussian systems, with the perfect model hypothesis, the SIEnKS is a consistent Bayesian estimator, albeit one that uses redundant model simulations. When the forecast error dynamics is weakly nonlinear, yet other aspects of the DA cycle are moderately to strongly nonlinear, we demonstrate that the SIEnKS has a prediction and analysis accuracy that is comparable to, and often better than, some traditional 4D iterative smoothers. However, the SIEnKS has a numerical cost that scales in iteratively optimizing the sequential filter cost functions for the DAW, i.e., the cost of the SIEnKS scales in matrix inversions in the ensemble dimension rather than in the cost of ensemble simulations, making our methodology suitable for operational short-range forecasting.

Over long DAWs, the performance of iterative smoothers can degrade significantly due to the increasing nonlinearity in temporally interpolating the posterior estimate over the window of lagged states. Furthermore, with a standard, single data assimilation (SDA) smoother, each observation is only assimilated once, meaning that new observations are only distantly connected to the initial conditions of the ensemble simulation; this can introduce many local minima to a smoother analysis, strongly affecting an optimization (Fillion et al.2018, and references therein). To handle the increasing nonlinearity of the DA cycle in long DAWs, we derive a novel form of the method of multiple data assimilation (MDA), previously derived in a 4D stationary and sequential DAW analysis (Emerick and Reynolds2013; Bocquet and Sakov2014, respectively). Our new MDA technique exploits the single-iteration formalism to partially assimilate each observation within the DAW with a sequential 3D filter analysis and retrospective reanalysis. Particularly, the sequential filter analysis constrains the ensemble simulation to the observations while temporally interpolating the posterior estimate over the DAW – this constraint is shown to improve the filter and forecast accuracy at the end of long DAWs and the stability of the joint posterior estimate versus the 4D approach. This key result is at the core of how the SIEnKS is able to outperform the predictive and analysis accuracy of 4D smoothing schemes while, at the same time, maintaining a lower leading-order computational cost.

This work is organized as follows. Section 2 introduces our notations. Section 3 reviews the mathematical formalism for the ensemble transform Kalman filter (ETKF) based on the LETKF formalism of Hunt et al. (2007), Sakov and Oke (2008b), and Sakov and Bertino (2011). Subsequently, we discuss the extension of the ETKF to fixed-lag smoothing in terms of (i) the right-transform EnKS, (ii) the IEnKS, and (iii) the SIEnKS, with each being different approximate solutions to the Bayesian MAP problem. Section 4 discusses several applications that distinguish the performance of these estimators. Section 5 provides an algorithmic cost analysis for these estimators and demonstrates forecast, filter, and smoother benchmarks for the EnKS, the IEnKS, and the SIEnKS in a variety of DA configurations. Section 6 summarizes these results and discusses future opportunities for the single-iteration smoother framework. Appendix A contains the pseudo-code for the algorithms presented in this work, which are implemented in the open-source Julia package DataAssimilationBenchmarks.jl (Grudzien et al.2021). Note that, due to the challenges in formulating localization/hybridization for the IEnKS (Bocquet2016), we neglect a treatment of these techniques in this initial study of the SIEnKS, though this will be treated in a future work.

2 Notations

Matrices are denoted with upper-case bold and vectors with lower-case bold and italics. The standard Euclidean vector norm is denoted $\|\boldsymbol{v}\| := \sqrt{\boldsymbol{v}^\top \boldsymbol{v}}$. For a symmetric, positive definite matrix $\mathbf{A} \in \mathbb{R}^{N \times N}$, we define the Mahalanobis vector norm with respect to $\mathbf{A}$ (Sankhya, 2018) as follows:

(1) $\|\boldsymbol{v}\|_{\mathbf{A}} := \sqrt{\boldsymbol{v}^\top \mathbf{A}^{-1} \boldsymbol{v}}.$

For a generic matrix $\mathbf{A} \in \mathbb{R}^{N \times M}$, with full column rank $M$, we denote the pseudo-inverse as follows:

(2) $\mathbf{A}^{\dagger} := \left(\mathbf{A}^\top \mathbf{A}\right)^{-1} \mathbf{A}^\top.$

When $\mathbf{A}$ has full column rank as above, we define the Mahalanobis vector "norm", with respect to $\mathbf{G} = \mathbf{A}\mathbf{A}^\top$, as follows:

(3) $\|\boldsymbol{v}\|_{\mathbf{G}} := \sqrt{\left(\mathbf{A}^{\dagger}\boldsymbol{v}\right)^\top \left(\mathbf{A}^{\dagger}\boldsymbol{v}\right)}.$

Note that when $\mathbf{G}$ does not have full rank, i.e., $N > M$, this is not a true norm on $\mathbb{R}^N$ as it is degenerate in the null space of $\mathbf{A}^\top$. Instead, this is a lift of a non-degenerate norm in the column span of $\mathbf{A}$ to $\mathbb{R}^N$. For $\boldsymbol{v}$ in the column span of $\mathbf{A}$,

(4a) $\boldsymbol{v} = \mathbf{A}\boldsymbol{w}$, (4b) $\|\boldsymbol{v}\|_{\mathbf{G}} = \|\boldsymbol{w}\|$,

for a vector of weights $\boldsymbol{w} \in \mathbb{R}^M$.
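As a concrete illustration of Eqs. (1)–(4), the following minimal Julia sketch (not part of DataAssimilationBenchmarks.jl; the function names are ours) evaluates the Mahalanobis norm and the lifted "norm" through the pseudo-inverse weights.

```julia
using LinearAlgebra

# Mahalanobis norm ‖v‖_A := sqrt(v' * inv(A) * v) for a symmetric, positive definite A, Eq. (1)
mahalanobis_norm(v::Vector, A::Matrix) = sqrt(v' * (A \ v))

# Lifted "norm" ‖v‖_G with G = A * A', computed via the pseudo-inverse weights w = A⁺ v, Eq. (3)
function lifted_norm(v::Vector, A::Matrix)
    w = pinv(A) * v        # equivalently (A'A) \ (A'v) when A has full column rank
    return norm(w)         # equals ‖v‖_G when v lies in the column span of A, Eq. (4)
end

# Example: for v in the column span of A, ‖v‖_G recovers ‖w‖
A = [1.0 0.0; 0.0 2.0; 1.0 1.0]   # 3×2 factor with full column rank
w = [0.5, -1.0]
v = A * w
@assert isapprox(lifted_norm(v, A), norm(w))
```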

Let $\boldsymbol{x}$ denote a random vector of physics-based model states. Assume that an initial, prior probability density function (density henceforth) on the model state $p(\boldsymbol{x}_0)$ is given, with a hidden Markov model of the following form:

(5a) $\boldsymbol{x}_k = \mathcal{M}_k\left(\boldsymbol{x}_{k-1}\right)$, (5b) $\boldsymbol{y}_k = \mathcal{H}_k\left(\boldsymbol{x}_k\right) + \boldsymbol{\epsilon}_k$,

which determines the distribution of future states, with the dependence on the time $t_k$ denoted by the subscript $k$. For simplicity, assume that $\Delta t := t_k - t_{k-1}$ is fixed for all $k$, though this is not a required restriction in any of the following arguments. The dimensions of the above system are denoted as follows: (i) $N_x$ is the model state dimension, $\boldsymbol{x}_k \in \mathbb{R}^{N_x}$; (ii) $N_y$ is the observation vector dimension, $\boldsymbol{y}_k \in \mathbb{R}^{N_y}$; and (iii) $N_e$ is the ensemble size, where an ensemble matrix is given as $\mathbf{E}_k \in \mathbb{R}^{N_x \times N_e}$. State model and observation variables are related via the (possibly) nonlinear observation operator $\mathcal{H}_k: \mathbb{R}^{N_x} \rightarrow \mathbb{R}^{N_y}$. Observation noise $\boldsymbol{\epsilon}_k$ is assumed to be an unbiased white sequence such that, in the following:

(6) $\mathbb{E}\left[\boldsymbol{\epsilon}_k \boldsymbol{\epsilon}_l^\top\right] = \delta_{k,l}\mathbf{R}_k,$

where $\mathbb{E}$ is the expectation, $\mathbf{R}_k \in \mathbb{R}^{N_y \times N_y}$ is the observation error covariance matrix at time $t_k$, and $\delta_{k,l}$ denotes the Kronecker delta function on the indices $k$ and $l$. The error covariance matrix $\mathbf{R}_k$ is assumed to be invertible without losing generality.
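The hidden Markov model of Eqs. (5)–(6) can be simulated directly; the following toy Julia sketch assumes an arbitrary linear transition map and a partial observation operator of our own choosing, purely to fix the notation.

```julia
using Random
Random.seed!(1)

Nx, Ny = 3, 2
M(x) = [0.9 0.1 0.0; 0.0 0.95 0.05; 0.05 0.0 0.9] * x   # toy state transition map M_k
H(x) = x[1:2]                                            # observe the first two components
σ_obs = 0.5                                              # obs. error std. dev., i.e., R_k = σ² I

# simulate K steps of Eq. (5): perfect-model forecast plus noisy observations
function simulate(K)
    x = randn(Nx)                                        # draw x_0 from a toy prior
    obs = Vector{Vector{Float64}}()
    for k in 1:K
        x = M(x)                                         # Eq. (5a)
        push!(obs, H(x) + σ_obs * randn(Ny))             # Eq. (5b), white unbiased noise
    end
    return obs
end

observations = simulate(10)
```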

The above configuration refers to a perfect model hypothesis (Grudzien and Bocquet, 2021) in which the transition probability for $\mathrm{d}\boldsymbol{x} \subset \mathbb{R}^{N_x}$ is written as follows:

(7) $\mathbb{P}\left(\boldsymbol{x}_k \in \mathrm{d}\boldsymbol{x} \,|\, \boldsymbol{x}_{k-1}\right) = \delta_{\mathcal{M}_k\left(\boldsymbol{x}_{k-1}\right)}\left(\mathrm{d}\boldsymbol{x}\right),$

with $\delta_{\boldsymbol{v}}$ referring to the Dirac measure at $\boldsymbol{v} \in \mathbb{R}^{N_x}$. Similarly, we write the transition density, up to proportionality, as follows:

(8) $p\left(\boldsymbol{x}_k \,|\, \boldsymbol{x}_{k-1}\right) \propto \delta\left\{\boldsymbol{x}_k - \mathcal{M}_k\left(\boldsymbol{x}_{k-1}\right)\right\},$

where $\delta$ represents the Dirac distribution. The Dirac measure is singular with respect to the Lebesgue measure, so this is simply a convenient abuse of notation that can be made rigorous with the generalized function theory of distributions (Taylor, 1996, see chap. 3, Sect. 4). The perfect model assumption is utilized throughout this work to frame the studied assimilation schemes in a unified manner, although this is a highly simplified framework for a realistic geophysical DA problem. Extending the single-iteration formalism to the case of model errors will be studied in a future work.

Define the multivariate Gaussian density as follows:

(9) $n\left(\boldsymbol{z} \,|\, \overline{\boldsymbol{z}}, \mathbf{B}\right) := \frac{1}{\sqrt{\left(2\pi\right)^{N_z}\det\mathbf{B}}}\exp\left[-\frac{1}{2}\left(\boldsymbol{z} - \overline{\boldsymbol{z}}\right)^\top\mathbf{B}^{-1}\left(\boldsymbol{z} - \overline{\boldsymbol{z}}\right)\right].$

In the case where (i) $\mathcal{M}_k := \mathbf{M}_k$ and $\mathcal{H}_k := \mathbf{H}_k$ are both linear transformations, (ii) the observation likelihood is

(10) $p\left(\boldsymbol{y}_k \,|\, \boldsymbol{x}_k\right) := n\left(\boldsymbol{y}_k \,|\, \mathbf{H}_k\boldsymbol{x}_k, \mathbf{R}_k\right),$

and (iii) the first prior is given as follows:

(11) $p\left(\boldsymbol{x}_0\right) := n\left(\boldsymbol{x}_0 \,|\, \overline{\boldsymbol{x}}_0, \mathbf{B}_0\right).$

Then the DA configuration is that of a perfect, linear Gaussian model. This is a further restriction of the perfect model assumption from which many classical filtering results are derived, though it is only a heuristic for nonlinear and erroneous geophysical DA.

For a time series of model or observation states with $l > k$, we define the notations as follows:

(12a) $\boldsymbol{x}_{l:k} := \left\{\boldsymbol{x}_l, \boldsymbol{x}_{l-1}, \ldots, \boldsymbol{x}_k\right\}$, (12b) $\boldsymbol{y}_{l:k} := \left\{\boldsymbol{y}_l, \boldsymbol{y}_{l-1}, \ldots, \boldsymbol{y}_k\right\}.$

To distinguish between the various conditional probabilities under consideration, we make the following definitions. Let $l > k$; then, the forecast density is denoted as follows:

(13) $p\left(\boldsymbol{x}_l \,|\, \boldsymbol{x}_{l-1:1}, \boldsymbol{y}_{l-1:1}\right).$

Next, the filter density is denoted as follows:

(14) $p\left(\boldsymbol{x}_l \,|\, \boldsymbol{y}_{l:1}\right).$

A smoother density for $\boldsymbol{x}_k$, given observations $\boldsymbol{y}_{l:1}$, is denoted as follows:

(15) $p\left(\boldsymbol{x}_k \,|\, \boldsymbol{y}_{l:1}\right).$

In the above, the filter and smoother densities are marginals of the joint posterior density, denoted as follows:

(16) $p\left(\boldsymbol{x}_{l:1} \,|\, \boldsymbol{y}_{l:1}\right).$

The Markov hypothesis implies that the forecast density can, furthermore, be written as follows:

(17) $p\left(\boldsymbol{x}_k \,|\, \boldsymbol{x}_{k-1:1}, \boldsymbol{y}_{k-1:1}\right) = p\left(\boldsymbol{x}_k \,|\, \boldsymbol{x}_{k-1}\right).$

For a fixed-lag smoother, define a shift of length $S \geq 1$ analysis times and a lag of length $L \geq S$ analysis times, where the time $t_L$ denotes the present time. We use an algorithmically stationary DAW throughout the work, referring to the time indices $\{t_1, \ldots, t_L\}$. Smoother schemes estimate the joint posterior density $p(\boldsymbol{x}_{L:1} \,|\, \boldsymbol{y}_{L:1})$ or one of its marginals in a DA cycle. After each estimate is produced, the DAW is subsequently shifted in time by $S \times \Delta t$, and all states are reindexed by $t_k := t_{k+S}$ to begin the next DA cycle. For a lag of $L$ and a shift of $S$, the observation vectors at times $\{t_{L-S+1}, \ldots, t_L\}$ correspond to the observations newly entering the DAW at time $t_L$. When $S = L$, the DAWs are disconnected and adjacent in time, whereas, for $S < L$, there is an overlap between the estimated states in sequential DAWs. Figure 1 provides a schematic of how the DAW is shifted for a lag of $L = 5$ and a shift of $S = 2$. Following the convention in DA that there is no observation at time zero, in addition to the DAW $\{t_1, \ldots, t_L\}$, states at time $t_0$ are estimated or utilized to connect estimates between adjacent/overlapping DAWs.
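The shifting of the algorithmically stationary DAW can be illustrated with a few lines of Julia; the index bookkeeping below is schematic and is not taken from the package.

```julia
# Schematic of the sliding DAW: lag L, shift S, re-indexed each cycle so that the
# window always refers to {t_1, ..., t_L} in the algorithm's own indexing.
L, S = 5, 2
for cycle in 0:2
    offset = cycle * S                      # global time of t_0 for this cycle
    daw = offset .+ (1:L)                   # global indices of the states in the DAW
    new_obs = offset .+ ((L - S + 1):L)     # observations newly entering the DAW
    println("cycle $(cycle + 1): DAW = $(collect(daw)), new observations at $(collect(new_obs))")
end
```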

https://gmd.copernicus.org/articles/15/7641/2022/gmd-15-7641-2022-f01

Figure 1. Three cycles of a smoother with a shift $S = 2$ and a lag $L = 5$. The cycle number increases from top to bottom. Time indices in the left-hand margin indicate the current time for the associated cycle of the algorithm. New observations entering the current DAW are shaded black. The initial DAW ranges over $\{t_{L-6}, \ldots, t_{L-2}\}$. In the next cycle, this is shifted to $\{t_{L-4}, \ldots, t_L\}$ and is shifted thereafter to $\{t_{L-2}, \ldots, t_{L+2}\}$. States at the zero-time indices are $t_{L-7}$ in the first cycle, $t_{L-5}$ in the second cycle, and $t_{L-3}$ in the third cycle. These are estimated in addition to states in the DAW to connect the cycles in the sequential DAWs.


Define the background mean and covariance as follows:

(18a) $\overline{\boldsymbol{x}}_k^i := \mathbb{E}\left[\boldsymbol{x}_k^i\right]$, (18b) $\mathbf{B}_k^i := \mathbb{E}\left[\left(\boldsymbol{x}_k^i - \overline{\boldsymbol{x}}_k^i\right)\left(\boldsymbol{x}_k^i - \overline{\boldsymbol{x}}_k^i\right)^\top\right],$

where the label $i$ refers to the density with respect to which the expectation is taken. The ensemble matrix $\mathbf{E}_k^i \in \mathbb{R}^{N_x \times N_e}$ is likewise given a label $i$, denoting the conditional density according to which the ensemble is approximately distributed. The ensemble $\mathbf{E}_k^{\mathrm{fore}}$ is assumed to have columns that are independent and identically distributed (iid) according to the forecast density. The ensemble $\mathbf{E}_k^{\mathrm{filt}}$ is assumed to have columns iid according to the filter density. The ensemble $\mathbf{E}_{k|L}^{\mathrm{smth}}$ is assumed to have columns iid according to a smoother density for the state at time $t_k$, given observations up to time $t_L$. Multiple data assimilation schemes will also utilize a balancing ensemble $\mathbf{E}_k^{\mathrm{bal}}$ and an MDA ensemble $\mathbf{E}_k^{\mathrm{mda}}$, which will be defined in Sect. 4.3. Time indices and labels may be suppressed when the meaning is clear in context. Note that, in realistic geophysical DA, the iid assumption rarely holds in practice, and even in the perfect linear Gaussian model, the above identifications are approximations due to the sampling error in estimating the background mean and covariance.

The forecast model is given by $\mathbf{E}_{k+1}^i = \mathcal{M}_{k+1}\left(\mathbf{E}_k^j\right)$, referring to the action of the map being applied column-wise, and where the type of ensemble input and output $i, j \in \{\mathrm{fore}, \mathrm{filt}, \mathrm{smth}, \mathrm{bal}, \mathrm{mda}\}$ (forecast/filter/smoother/balancing/MDA) is specified according to the estimation scheme. Define the composition of the forecast model as $\mathbf{E}_l^i = \mathcal{M}_l \circ \cdots \circ \mathcal{M}_k\left(\mathbf{E}_{k-1}^j\right) =: \mathcal{M}_{l:k}\left(\mathbf{E}_{k-1}^j\right)$. Let $\boldsymbol{1}$ denote the vector with all entries equal to one, such that the ensemble-based empirical mean, the ensemble perturbation matrix, and the ensemble-based empirical covariance are each defined by linear operations with conformal dimensions as follows:

(19a) $\hat{\boldsymbol{x}}_k^i := \mathbf{E}_k^i\boldsymbol{1}/N_e$, (19b) $\mathbf{X}_k^i := \mathbf{E}_k^i - \hat{\boldsymbol{x}}_k^i\boldsymbol{1}^\top = \mathbf{E}_k^i\left(\mathbf{I}_{N_e} - \boldsymbol{1}\boldsymbol{1}^\top/N_e\right)$, (19c) $\mathbf{P}_k^i := \mathbf{X}_k^i\left(\mathbf{X}_k^i\right)^\top/\left(N_e - 1\right),$

which are distinguished from the background mean $\overline{\boldsymbol{x}}_k^i$ and background covariance $\mathbf{B}_k^i$.
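The ensemble statistics of Eq. (19) reduce to simple linear algebra; the following Julia sketch (illustrative, not the package implementation) verifies the identities against the standard library estimators.

```julia
using LinearAlgebra, Statistics

Nx, Ne = 4, 10
E = randn(Nx, Ne)                                  # an arbitrary ensemble matrix

x_hat = E * ones(Ne) / Ne                          # empirical mean, Eq. (19a)
X = E * (I - ones(Ne, Ne) / Ne)                    # perturbation matrix, Eq. (19b)
P = X * X' / (Ne - 1)                              # empirical covariance, Eq. (19c)

# cross-check against the standard library (columns are ensemble members)
@assert isapprox(x_hat, vec(mean(E, dims=2)))
@assert isapprox(P, cov(E, dims=2))
```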

3 Deriving the SIEnKS

The ETKF analysis (Hunt et al., 2007) is utilized in the following for its popularity and efficiency and in order to emphasize the commonality and differences between other well-known smoothing schemes. However, the single-iteration framework is not restricted to any particular filter analysis, and other types of filter analysis, such as the deterministic EnKF (DEnKF) of Sakov and Oke (2008a), are compatible with the formalism and may be considered in future studies.

3.1 The ETKF

The filter problem is expressed recursively in the Bayesian MAP formalism with an algorithmically stationary DAW as follows. Suppose that there is a known filter density $p(\boldsymbol{x}_0 \,|\, \boldsymbol{y}_0)$ from a previous DA cycle. Using the Markov hypothesis and the independence of observation errors, we write the filter density up to proportionality, via Bayes' law, as follows:

(20a) $p\left(\boldsymbol{x}_1 \,|\, \boldsymbol{y}_{1:0}\right) \propto p\left(\boldsymbol{y}_1 \,|\, \boldsymbol{x}_1, \boldsymbol{y}_0\right)p\left(\boldsymbol{x}_1, \boldsymbol{y}_0\right)$, (20b) $\propto \underbrace{p\left(\boldsymbol{y}_1 \,|\, \boldsymbol{x}_1\right)}_{(\mathrm{i})}\,\underbrace{\int p\left(\boldsymbol{x}_1 \,|\, \boldsymbol{x}_0\right)p\left(\boldsymbol{x}_0 \,|\, \boldsymbol{y}_0\right)\mathrm{d}\boldsymbol{x}_0}_{(\mathrm{ii})},$

which is the product of the (i) likelihood of the observation, given the forecast, and (ii) the forecast prior. The forecast prior (ii) is generated by the model propagation of the last filter density p(x0|y0), with the transition density p(x1|x0), marginalizing out x0. Given a first prior, the above recursion inductively defines the forecast and filter densities, up to proportionality, at all times.

In the perfect linear Gaussian model, the forecast prior and filter densities,

(21) $\int p\left(\boldsymbol{x}_1 \,|\, \boldsymbol{x}_0\right)p\left(\boldsymbol{x}_0 \,|\, \boldsymbol{y}_0\right)\mathrm{d}\boldsymbol{x}_0 \quad\mathrm{and}\quad p\left(\boldsymbol{x}_1 \,|\, \boldsymbol{y}_1\right),$

are Gaussian. The Kalman filter equations recursively compute the means $\overline{\boldsymbol{x}}_1^{\mathrm{fore}}/\overline{\boldsymbol{x}}_1^{\mathrm{filt}}$ and covariances $\mathbf{B}_1^{\mathrm{fore}}/\mathbf{B}_1^{\mathrm{filt}}$ of the random model state $\boldsymbol{x}_1$, parameterizing its distribution (Jazwinski, 1970). In this case, the filter problem can also be written in terms of the Bayesian MAP cost function, as follows:

(22) $\mathcal{J}\left(\boldsymbol{x}_1\right) = \frac{1}{2}\left\|\boldsymbol{x}_1 - \overline{\boldsymbol{x}}_1^{\mathrm{fore}}\right\|_{\mathbf{B}_1^{\mathrm{fore}}}^2 + \frac{1}{2}\left\|\boldsymbol{y}_1 - \mathbf{H}_1\boldsymbol{x}_1\right\|_{\mathbf{R}_1}^2.$

To render the above cost function into the right-transform analysis, define the matrix factor as follows:

(23) $\mathbf{B}_1^{\mathrm{fore}} := \boldsymbol{\Sigma}_1^{\mathrm{fore}}\left(\boldsymbol{\Sigma}_1^{\mathrm{fore}}\right)^\top,$

where the choice of $\boldsymbol{\Sigma}_1^{\mathrm{fore}}$ can be arbitrary but is typically given in terms of a singular value decomposition (SVD; Sakov and Oke, 2008b). Instead of optimizing the cost function in Eq. (22) over the state vector $\boldsymbol{x}_1$, the optimization is equivalently written in terms of weights $\boldsymbol{w}$, where, in the following:

(24) $\boldsymbol{x}_1 := \overline{\boldsymbol{x}}_1^{\mathrm{fore}} + \boldsymbol{\Sigma}_1^{\mathrm{fore}}\boldsymbol{w}.$

Thus, by rewriting Eq. (22) in terms of the weight vector $\boldsymbol{w}$, we obtain the following:

(25) $\mathcal{J}\left(\boldsymbol{w}\right) = \frac{1}{2}\left\|\boldsymbol{w}\right\|^2 + \frac{1}{2}\left\|\boldsymbol{y}_1 - \mathbf{H}_1\overline{\boldsymbol{x}}_1^{\mathrm{fore}} - \mathbf{H}_1\boldsymbol{\Sigma}_1^{\mathrm{fore}}\boldsymbol{w}\right\|_{\mathbf{R}_1}^2.$

Furthermore, for the sake of compactness, we define the following notations:

(26a) $\overline{\boldsymbol{y}}_1 := \mathbf{H}_1\overline{\boldsymbol{x}}_1^{\mathrm{fore}}$, (26b) $\boldsymbol{\delta}_1 := \mathbf{R}_1^{-\frac{1}{2}}\left(\boldsymbol{y}_1 - \overline{\boldsymbol{y}}_1\right)$, (26c) $\boldsymbol{\Gamma}_1 := \mathbf{R}_1^{-\frac{1}{2}}\mathbf{H}_1\boldsymbol{\Sigma}_1^{\mathrm{fore}}.$

The vector $\boldsymbol{\delta}_1$ is the innovation vector, weighted inversely proportionally to the observation uncertainty. The matrix $\boldsymbol{\Gamma}_1$, in one dimension with $\mathbf{H}_1 := 1$, is equal to the standard deviation of the model forecast relative to the standard deviation of the observation error.

The cost function Eq. (25) is hence further reduced to the following:

(27) $\mathcal{J}\left(\boldsymbol{w}\right) = \frac{1}{2}\left\|\boldsymbol{w}\right\|^2 + \frac{1}{2}\left\|\boldsymbol{\delta}_1 - \boldsymbol{\Gamma}_1\boldsymbol{w}\right\|^2.$

This cost function is quadratic in $\boldsymbol{w}$ and can be globally minimized where $\nabla_{\boldsymbol{w}}\mathcal{J} = \boldsymbol{0}$. Notice that, in the following:

(28) $\nabla_{\boldsymbol{w}}\mathcal{J} = \boldsymbol{w} - \boldsymbol{\Gamma}_1^\top\left(\boldsymbol{\delta}_1 - \boldsymbol{\Gamma}_1\boldsymbol{w}\right).$

Setting the gradient equal to zero and solving for $\boldsymbol{w}$, we find the following expression for the optimal weights:

(29a) $\boldsymbol{0} = \overline{\boldsymbol{w}} - \boldsymbol{\Gamma}_1^\top\left(\boldsymbol{\delta}_1 - \boldsymbol{\Gamma}_1\overline{\boldsymbol{w}}\right)$, (29b) $\boldsymbol{\Gamma}_1^\top\boldsymbol{\delta}_1 = \left(\mathbf{I}_{N_x} + \boldsymbol{\Gamma}_1^\top\boldsymbol{\Gamma}_1\right)\overline{\boldsymbol{w}}$, (29c) $\overline{\boldsymbol{w}} = \left(\mathbf{I}_{N_x} + \boldsymbol{\Gamma}_1^\top\boldsymbol{\Gamma}_1\right)^{-1}\boldsymbol{\Gamma}_1^\top\boldsymbol{\delta}_1.$

From Eq. (28), notice that

(30) $\nabla_{\boldsymbol{w}}\mathcal{J}\big|_{\boldsymbol{w} = \boldsymbol{0}} = -\boldsymbol{\Gamma}_1^\top\boldsymbol{\delta}_1.$

Similarly, taking the gradient of Eq. (28), we find that the Hessian, $\boldsymbol{\Xi}_{\mathcal{J}} := \nabla_{\boldsymbol{w}}^2\mathcal{J}$, is equal to the following:

(31) $\boldsymbol{\Xi}_{\mathcal{J}} = \mathbf{I}_{N_x} + \boldsymbol{\Gamma}_1^\top\boldsymbol{\Gamma}_1.$

Therefore, with $\boldsymbol{w} = \boldsymbol{0}$ corresponding to $\overline{\boldsymbol{x}}_1^{\mathrm{fore}}$ as the initialization of the algorithm, the MAP weights $\overline{\boldsymbol{w}}$ are determined by a single iteration of Newton's descent method (Nocedal and Wright, 2006). For iterate $i$, this has the general form of the following:

(32) $\boldsymbol{w}^{i+1} := \boldsymbol{w}^{i} - \boldsymbol{\Xi}_{\mathcal{J}}^{-1}\nabla_{\boldsymbol{w}}\mathcal{J}\big|_{\boldsymbol{w} = \boldsymbol{w}^{i}}.$

The MAP weights define the maximum a posteriori model state as follows:

(33) $\overline{\boldsymbol{x}}_1^{\mathrm{filt}} := \overline{\boldsymbol{x}}_1^{\mathrm{fore}} + \boldsymbol{\Sigma}_1^{\mathrm{fore}}\overline{\boldsymbol{w}}.$

Under the perfect linear Gaussian model assumption, 𝒥 can then be rewritten in terms of the filter MAP estimate as follows:

(34a) $\mathcal{J}\left(\boldsymbol{x}_1\right) = \frac{1}{2}\left\|\boldsymbol{x}_1 - \overline{\boldsymbol{x}}_1^{\mathrm{filt}}\right\|_{\mathbf{B}_1^{\mathrm{filt}}}^2$, (34b) $\mathcal{J}\left(\boldsymbol{w}\right) = \frac{1}{2}\left\|\overline{\boldsymbol{x}}_1^{\mathrm{fore}} + \boldsymbol{\Sigma}_1^{\mathrm{fore}}\boldsymbol{w} - \overline{\boldsymbol{x}}_1^{\mathrm{filt}}\right\|_{\mathbf{B}_1^{\mathrm{filt}}}^2.$

Define the matrix decomposition $\mathbf{B}_1^{\mathrm{filt}} = \boldsymbol{\Sigma}_1^{\mathrm{filt}}\left(\boldsymbol{\Sigma}_1^{\mathrm{filt}}\right)^\top$ and the change of variables as follows:

(35a) $\boldsymbol{\Omega}_1 := \left(\boldsymbol{\Sigma}_1^{\mathrm{filt}}\right)^{-1}\boldsymbol{\Sigma}_1^{\mathrm{fore}}$, (35b) $\boldsymbol{\varrho}_1 := \left(\boldsymbol{\Sigma}_1^{\mathrm{filt}}\right)^{-1}\left(\overline{\boldsymbol{x}}_1^{\mathrm{filt}} - \overline{\boldsymbol{x}}_1^{\mathrm{fore}}\right).$

Then, Eq. (34b) can be rewritten as follows:

(36) $\mathcal{J}\left(\boldsymbol{w}\right) = \frac{1}{2}\left\|\boldsymbol{\varrho}_1 - \boldsymbol{\Omega}_1\boldsymbol{w}\right\|^2.$

Computing the Hessian $\boldsymbol{\Xi}_{\mathcal{J}} = \nabla_{\boldsymbol{w}}^2\mathcal{J}$ from each of Eqs. (27) and (36), we find, by their equivalence, the following:

(37a) $\mathbf{I}_{N_x} + \boldsymbol{\Gamma}_1^\top\boldsymbol{\Gamma}_1 = \boldsymbol{\Omega}_1^\top\boldsymbol{\Omega}_1$, (37b) $\mathbf{I}_{N_x} + \boldsymbol{\Gamma}_1^\top\boldsymbol{\Gamma}_1 = \left(\boldsymbol{\Sigma}_1^{\mathrm{fore}}\right)^\top\left(\boldsymbol{\Sigma}_1^{\mathrm{filt}}\right)^{-\top}\left(\boldsymbol{\Sigma}_1^{\mathrm{filt}}\right)^{-1}\boldsymbol{\Sigma}_1^{\mathrm{fore}}$, (37c) $\mathbf{B}_1^{\mathrm{filt}} = \boldsymbol{\Sigma}_1^{\mathrm{fore}}\left(\mathbf{I}_{N_x} + \boldsymbol{\Gamma}_1^\top\boldsymbol{\Gamma}_1\right)^{-1}\left(\boldsymbol{\Sigma}_1^{\mathrm{fore}}\right)^\top.$

If we define the covariance transform as

(38) $\mathbf{T} := \boldsymbol{\Xi}_{\mathcal{J}}^{-\frac{1}{2}},$

then the derivation above describes the square root Kalman filter recursion (Tippett et al., 2003) when written for the exact mean and covariance, which is recursively computed in the perfect linear Gaussian model. The covariance update is then as follows:

(39) $\mathbf{B}_1^{\mathrm{filt}} = \left(\boldsymbol{\Sigma}_1^{\mathrm{fore}}\mathbf{T}\right)\left(\boldsymbol{\Sigma}_1^{\mathrm{fore}}\mathbf{T}\right)^\top.$

It is written entirely in terms of the matrix factor $\boldsymbol{\Sigma}_k^i$ and the covariance transform $\mathbf{T}$, such that the background covariance need not be explicitly computed in order to produce recursive estimates. Likewise, the Kalman gain update to the mean state is reduced to Eq. (33) in terms of the weights and the matrix factor. This reduction is at the core of the efficiency of the ETKF, in which one typically makes a reduced-rank approximation to the background covariances $\mathbf{B}_1^i$.

Using the ensemble-based empirical estimates for the background, as in Eq. (19), a modification of the above argument must be used to solve the cost function $\mathcal{J}$ in the ensemble span, without a direct inversion of $\mathbf{P}_1^{\mathrm{fore}}$ when this is of reduced rank. We replace the background covariance norm square with one defined by the ensemble-based covariance, as follows:

(40) $\left\|\boldsymbol{v}\right\|_{\mathbf{P}_1^i}^2 = \left(N_e - 1\right)\left(\mathbf{X}_1^{i\dagger}\boldsymbol{v}\right)^\top\left(\mathbf{X}_1^{i\dagger}\boldsymbol{v}\right).$

We then define the ensemble-based estimates as follows:

(41a) $\boldsymbol{x}_1 := \hat{\boldsymbol{x}}_1^{\mathrm{fore}} + \mathbf{X}_1^{\mathrm{fore}}\boldsymbol{w}$, (41b) $\hat{\boldsymbol{y}}_1 := \mathbf{H}_1\hat{\boldsymbol{x}}_1^{\mathrm{fore}}$, (41c) $\hat{\boldsymbol{\delta}}_1 := \mathbf{R}_1^{-\frac{1}{2}}\left(\boldsymbol{y}_1 - \hat{\boldsymbol{y}}_1\right)$, (41d) $\mathbf{S}_1 := \mathbf{R}_1^{-\frac{1}{2}}\mathbf{H}_1\mathbf{X}_1^{\mathrm{fore}},$

where $\boldsymbol{w}$ is now a weight vector in $\mathbb{R}^{N_e}$. The ensemble-based cost function is then written as follows:

(42a) $\tilde{\mathcal{J}}\left(\boldsymbol{w}\right) = \frac{1}{2}\left\|\hat{\boldsymbol{x}}_1^{\mathrm{fore}} + \mathbf{X}_1^{\mathrm{fore}}\boldsymbol{w} - \hat{\boldsymbol{x}}_1^{\mathrm{fore}}\right\|_{\mathbf{P}_1^{\mathrm{fore}}}^2 + \frac{1}{2}\left\|\boldsymbol{y}_1 - \mathbf{H}_1\hat{\boldsymbol{x}}_1^{\mathrm{fore}} - \mathbf{H}_1\mathbf{X}_1^{\mathrm{fore}}\boldsymbol{w}\right\|_{\mathbf{R}_1}^2$ (42b) $= \frac{1}{2}\left(N_e - 1\right)\left\|\boldsymbol{w}\right\|^2 + \frac{1}{2}\left\|\hat{\boldsymbol{\delta}}_1 - \mathbf{S}_1\boldsymbol{w}\right\|^2.$

Define $\hat{\boldsymbol{w}}$ to be the minimizer of the cost function in Eq. (42). Hunt et al. (2007) demonstrate that, up to a gauge transformation, $\hat{\boldsymbol{w}}$ yields the minimizer of the state space cost function, Eq. (22), when the estimate is restricted to the ensemble span. Let $\tilde{\boldsymbol{\Xi}}_{\tilde{\mathcal{J}}}$ denote the Hessian of the ensemble-based cost function in Eq. (42). This cost function is quadratic in $\boldsymbol{w}$ and can be solved similarly to Eq. (27) to render the following:

(43a) $\hat{\boldsymbol{w}} := \boldsymbol{0} - \tilde{\boldsymbol{\Xi}}_{\tilde{\mathcal{J}}}^{-1}\nabla\tilde{\mathcal{J}}\big|_{\boldsymbol{w} = \boldsymbol{0}}$, (43b) $\mathbf{T} := \tilde{\boldsymbol{\Xi}}_{\tilde{\mathcal{J}}}^{-\frac{1}{2}}$, (43c) $\mathbf{P}_1^{\mathrm{filt}} = \left(\mathbf{X}_1^{\mathrm{fore}}\mathbf{T}\right)\left(\mathbf{X}_1^{\mathrm{fore}}\mathbf{T}\right)^\top/\left(N_e - 1\right).$

The ensemble transform Kalman filter (ETKF) equations are then given by the following:

(44) $\mathbf{E}_1^{\mathrm{filt}} = \hat{\boldsymbol{x}}_1^{\mathrm{fore}}\boldsymbol{1}^\top + \mathbf{X}_1^{\mathrm{fore}}\left(\hat{\boldsymbol{w}}\boldsymbol{1}^\top + \sqrt{N_e - 1}\,\mathbf{T}\mathbf{U}\right),$

where $\mathbf{U} \in \mathbb{R}^{N_e \times N_e}$ can be any mean-preserving, orthogonal transformation, i.e., $\mathbf{U}\boldsymbol{1} = \boldsymbol{1}$. The simple choice of $\mathbf{U} := \mathbf{I}_{N_e}$ is sufficient, but it has been demonstrated that choosing a random, mean-preserving orthogonal transformation at each analysis, as above, can improve the stability of the ETKF, preventing the collapse of the variances to a few modes in the empirical covariance estimate (Sakov and Oke, 2008b). We remark that Eq. (44) can be written equivalently as a single linear transformation as follows:

(45a) $\mathbf{E}_1^{\mathrm{filt}} = \mathbf{E}_1^{\mathrm{fore}}\boldsymbol{\Psi}_1$, (45b) $\boldsymbol{\Psi}_1 := \boldsymbol{1}\boldsymbol{1}^\top/N_e + \left(\mathbf{I}_{N_e} - \boldsymbol{1}\boldsymbol{1}^\top/N_e\right)\left(\hat{\boldsymbol{w}}\boldsymbol{1}^\top + \sqrt{N_e - 1}\,\mathbf{T}\mathbf{U}\right).$

The compact update notation in Eq. (45) is used to simplify the analysis.
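For concreteness, the ETKF analysis of Eqs. (41)–(45) can be sketched in a few lines of Julia. The helper below, `etkf_transform`, is a hypothetical name of our own, assumes a linear observation operator supplied as a matrix, and sets $\mathbf{U} = \mathbf{I}_{N_e}$; it mirrors the equations rather than the pseudo-code of Algorithm A5.

```julia
using LinearAlgebra

# One ETKF analysis via the right transform: returns the filter ensemble and Ψ, Eq. (45)
function etkf_transform(E::Matrix, H::Matrix, y::Vector, R::Matrix)
    Nx, Ne = size(E)
    x_hat = E * ones(Ne) / Ne                    # forecast mean, Eq. (19a)
    X = E .- x_hat                               # forecast perturbations
    R_inv_sqrt = inv(sqrt(Symmetric(R)))         # R^(-1/2) for a SPD observation covariance
    S = R_inv_sqrt * H * X                       # weighted obs.-space perturbations, Eq. (41d)
    δ = R_inv_sqrt * (y - H * x_hat)             # weighted innovation, Eq. (41c)

    Ξ = (Ne - 1) * I + S' * S                    # Hessian of the cost function, Eq. (42b)
    w = Ξ \ (S' * δ)                             # optimal weights, Eq. (43a)
    T = inv(sqrt(Symmetric(Matrix(Ξ))))          # covariance transform, Eq. (43b)

    # right transform Ψ of Eq. (45), with the mean-preserving choice U = I
    Ψ = ones(Ne, Ne) / Ne +
        (I - ones(Ne, Ne) / Ne) * (w * ones(Ne)' + sqrt(Ne - 1) * T)
    return E * Ψ, Ψ
end
```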

If the observation operator $\mathcal{H}_1$ is actually nonlinear, then the ETKF typically uses the following approximation to the quadratic cost function:

(46a) $\mathbf{Y}_1 := \mathcal{H}_1\left(\mathbf{E}_1^{\mathrm{fore}}\right)$, (46b) $\hat{\boldsymbol{y}}_1 := \mathbf{Y}_1\boldsymbol{1}/N_e$, (46c) $\mathbf{S}_1 := \mathbf{R}_1^{-\frac{1}{2}}\left(\mathbf{Y}_1 - \hat{\boldsymbol{y}}_1\boldsymbol{1}^\top\right),$

where term (46a) refers to the action of the observation operator being applied column-wise. Substituting the definitions in Eq. (46) for the definitions in Eq. (41) gives the standard nonlinear analysis in the ETKF. Note that this framework extends to a fully iterative analysis of nonlinear observation operators, as discussed in Sect. 4.1. Multiplicative covariance inflation is often used in the ETKF to handle the systematic underestimation of the forecast and filter covariance due to the sampling error implied by a finite size ensemble and the nonlinearity of the forecast model $\mathcal{M}_1$ (Raanes et al., 2019a).

The standard ETKF cycle is summarized in Algorithm A5. This algorithm is broken into subroutines, in Algorithms A1–A4, which are reused throughout our analysis to emphasize the commonality and the differences in the studied smoother schemes. The filter analysis described above can be extended in several different ways when producing a smoother analysis on a DAW, including lagged past states, depending in part on whether it is formulated as a marginal or a joint smoother (Cosme et al., 2012). The way in which this analysis is extended, utilizing a retrospective reanalysis or a 4D cost function, differentiates the EnKS from the IEnKS and highlights the ways in which the SIEnKS differs from these other schemes.

3.2 The fixed-lag EnKS

The (right-transform) fixed-lag EnKS extends the ETKF over the smoothing DAW by sequentially reanalyzing past states with future observations. This analysis is performed retrospectively in the sense that the filter cycle of the ETKF is left unchanged, while an additional smoother loop of the DA cycle performs an update on the lagged state ensembles stored in memory. Assume $S = 1 \leq L$; then, the EnKS estimates the joint posterior density $p\left(\boldsymbol{x}_{L:1} \,|\, \boldsymbol{y}_{L:1}\right)$ recursively, given the joint posterior estimate over the last DAW, $p\left(\boldsymbol{x}_{L-1:0} \,|\, \boldsymbol{y}_{L-1:0}\right)$. We begin by considering the filter problem as in Eq. (20).

Given $p\left(\boldsymbol{x}_{L-1:0} \,|\, \boldsymbol{y}_{L-1:0}\right)$, we write the filter density up to proportionality as follows:

(47a) $p\left(\boldsymbol{x}_L \,|\, \boldsymbol{y}_{L:0}\right) \propto p\left(\boldsymbol{y}_L \,|\, \boldsymbol{x}_L, \boldsymbol{y}_{L-1:0}\right)p\left(\boldsymbol{x}_L, \boldsymbol{y}_{L-1:0}\right) \propto \underbrace{p\left(\boldsymbol{y}_L \,|\, \boldsymbol{x}_L\right)}_{(\mathrm{i})} \times$ (47b) $\underbrace{\int p\left(\boldsymbol{x}_L \,|\, \boldsymbol{x}_{L-1}\right)p\left(\boldsymbol{x}_{L-1:0} \,|\, \boldsymbol{y}_{L-1:0}\right)\mathrm{d}\boldsymbol{x}_{L-1:0}}_{(\mathrm{ii})},$

with the product of (i) the likelihood of the observation $\boldsymbol{y}_L$, given $\boldsymbol{x}_L$, and (ii) the forecast for $\boldsymbol{x}_L$, using the transition kernel on the last joint posterior estimate and marginalizing out $\boldsymbol{x}_{L-1:0}$. Recalling that $p\left(\boldsymbol{x}_L \,|\, \boldsymbol{y}_{L:1}\right) \propto p\left(\boldsymbol{x}_L \,|\, \boldsymbol{y}_{L:0}\right)$, this provides a means to sample the filter marginal of the desired joint posterior. The usual ETKF filter analysis is performed to sample the filter distribution at time $t_L$; yet, to complete the smoothing cycle, the scheme must sample the joint posterior density $p\left(\boldsymbol{x}_{L:1} \,|\, \boldsymbol{y}_{L:1}\right)$.

Consider that the marginal smoother density is proportional to the following:

(48a) $p\left(\boldsymbol{x}_{L-1} \,|\, \boldsymbol{y}_{L:0}\right) \propto p\left(\boldsymbol{y}_L \,|\, \boldsymbol{x}_{L-1}, \boldsymbol{y}_{L-1:0}\right) \times p\left(\boldsymbol{x}_{L-1}, \boldsymbol{y}_{L-1:0}\right)$ (48b) $\propto \underbrace{p\left(\boldsymbol{y}_L \,|\, \boldsymbol{x}_{L-1}\right)}_{(\mathrm{i})}\,\underbrace{p\left(\boldsymbol{x}_{L-1} \,|\, \boldsymbol{y}_{L-1:0}\right)}_{(\mathrm{ii})},$

where (i) is the likelihood of the observation $\boldsymbol{y}_L$, given the past state $\boldsymbol{x}_{L-1}$, and (ii) is the marginal density for $\boldsymbol{x}_{L-1}$ from the last joint posterior.

Assume now the perfect linear Gaussian model; then, the corresponding Bayesian MAP cost function is given as follows:

(49) $\mathcal{J}\left(\boldsymbol{x}_{L-1}\right) = \frac{1}{2}\left\|\boldsymbol{x}_{L-1} - \overline{\boldsymbol{x}}_{L-1|L-1}^{\mathrm{smth}}\right\|_{\mathbf{B}_{L-1|L-1}^{\mathrm{smth}}}^2 + \frac{1}{2}\left\|\boldsymbol{y}_L - \mathbf{H}_L\mathbf{M}_L\boldsymbol{x}_{L-1}\right\|_{\mathbf{R}_L}^2,$

where $\overline{\boldsymbol{x}}_{L-1|L-1}^{\mathrm{smth}}$ and $\mathbf{B}_{L-1|L-1}^{\mathrm{smth}}$ are the mean and covariance of the marginal smoother density $p\left(\boldsymbol{x}_{L-1} \,|\, \boldsymbol{y}_{L-1:0}\right)$. Take the following matrix decomposition:

(50) $\mathbf{B}_{L-1|L-1}^{\mathrm{smth}} = \boldsymbol{\Sigma}_{L-1|L-1}^{\mathrm{smth}}\left(\boldsymbol{\Sigma}_{L-1|L-1}^{\mathrm{smth}}\right)^\top.$

Then, write $\boldsymbol{x}_{L-1} = \overline{\boldsymbol{x}}_{L-1|L-1}^{\mathrm{smth}} + \boldsymbol{\Sigma}_{L-1|L-1}^{\mathrm{smth}}\boldsymbol{w}$, rendering the cost function as follows:

(51a) $\mathcal{J}\left(\boldsymbol{w}\right) = \frac{1}{2}\left\|\boldsymbol{w}\right\|^2 + \frac{1}{2}\left\|\boldsymbol{y}_L - \mathbf{H}_L\mathbf{M}_L\left(\overline{\boldsymbol{x}}_{L-1|L-1}^{\mathrm{smth}} + \boldsymbol{\Sigma}_{L-1|L-1}^{\mathrm{smth}}\boldsymbol{w}\right)\right\|_{\mathbf{R}_L}^2$ (51b) $= \frac{1}{2}\left\|\boldsymbol{w}\right\|^2 + \frac{1}{2}\left\|\boldsymbol{y}_L - \mathbf{H}_L\overline{\boldsymbol{x}}_L^{\mathrm{fore}} - \mathbf{H}_L\boldsymbol{\Sigma}_L^{\mathrm{fore}}\boldsymbol{w}\right\|_{\mathbf{R}_L}^2$ (51c) $= \frac{1}{2}\left\|\boldsymbol{w}\right\|^2 + \frac{1}{2}\left\|\boldsymbol{\delta}_L - \boldsymbol{\Gamma}_L\boldsymbol{w}\right\|^2.$

Let $\overline{\boldsymbol{w}}$ now denote the minimizer of Eq. (51). It is important to recognize that

(52) $\overline{\boldsymbol{x}}_L := \mathbf{M}_L\left(\overline{\boldsymbol{x}}_{L-1|L-1}^{\mathrm{smth}} + \boldsymbol{\Sigma}_{L-1|L-1}^{\mathrm{smth}}\overline{\boldsymbol{w}}\right)$ (53) $= \overline{\boldsymbol{x}}_L^{\mathrm{fore}} + \boldsymbol{\Sigma}_L^{\mathrm{fore}}\overline{\boldsymbol{w}},$

such that the optimal weight vector $\overline{\boldsymbol{w}}$ for the smoothing problem is also the optimal weight vector for the filter problem.

The ensemble-based approximation,

(54a) $\boldsymbol{x}_{L-1} = \hat{\boldsymbol{x}}_{L-1|L-1}^{\mathrm{smth}} + \mathbf{X}_{L-1|L-1}^{\mathrm{smth}}\boldsymbol{w}$, (54b) $\tilde{\mathcal{J}}\left(\boldsymbol{w}\right) = \frac{1}{2}\left(N_e - 1\right)\left\|\boldsymbol{w}\right\|^2 + \frac{1}{2}\left\|\hat{\boldsymbol{\delta}}_L - \mathbf{S}_L\boldsymbol{w}\right\|^2,$

to the exact smoother cost function in Eq. (51) yields the retrospective analysis of the EnKS as follows:

(55a) $\hat{\boldsymbol{w}} := \boldsymbol{0} - \tilde{\boldsymbol{\Xi}}_{\tilde{\mathcal{J}}}^{-1}\nabla\tilde{\mathcal{J}}\big|_{\boldsymbol{w} = \boldsymbol{0}}$, (55b) $\mathbf{T} := \tilde{\boldsymbol{\Xi}}_{\tilde{\mathcal{J}}}^{-\frac{1}{2}}$, (55c) $\mathbf{E}_{L-1|L}^{\mathrm{smth}} = \hat{\boldsymbol{x}}_{L-1|L-1}^{\mathrm{smth}}\boldsymbol{1}^\top + \mathbf{X}_{L-1|L-1}^{\mathrm{smth}}\left(\hat{\boldsymbol{w}}\boldsymbol{1}^\top + \sqrt{N_e - 1}\,\mathbf{T}\mathbf{U}\right)$ (55d) $= \mathbf{E}_{L-1|L-1}^{\mathrm{smth}}\boldsymbol{\Psi}_L.$

The above equations generalize for arbitrary indices $k|L$, completely describing the smoother loop between each filter cycle of the EnKS. After a new observation is assimilated with the ETKF analysis step, a smoother loop makes a backwards pass over the DAW, applying the transform and the weights of the ETKF filter update to each past state ensemble stored in memory. This generalizes to the case where there is a shift in the DAW with $S > 1$, though the EnKS does not process observations asynchronously by default, i.e., the ETKF filter steps, and the subsequent retrospective reanalysis, are performed in sequence over the observations, ordered in time, rather than making a global analysis over $\boldsymbol{y}_{L:L-S+1}$. A standard form of the EnKS is summarized in Algorithm A6, utilizing the subroutines in Algorithms A1–A4.
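A schematic Julia sketch of this smoother loop follows, reusing the hypothetical `etkf_transform` helper from the sketch above: the filter transform $\boldsymbol{\Psi}_L$ produced by the analysis is applied retrospectively to every lagged ensemble held in memory, as in Eq. (55d).

```julia
# One EnKS cycle for shift S = 1: E_fore is the forecast ensemble at the new analysis
# time, and `lagged` holds the ensembles for the states within the lag window.
function enks_cycle!(lagged::Vector{Matrix{Float64}}, E_fore::Matrix, H, y, R)
    E_filt, Ψ = etkf_transform(E_fore, H, y, R)   # 3D filter analysis at time t_L
    for k in eachindex(lagged)
        lagged[k] = lagged[k] * Ψ                 # retrospective reanalysis, Eq. (55d)
    end
    push!(lagged, E_filt)                         # store the new filter ensemble
    popfirst!(lagged)                             # drop the state leaving the lag window
    return E_filt
end
```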

A schematic of the EnKS cycle for a lag of $L = 4$ and a shift of $S = 1$ is pictured in Fig. 2. Time moves forward, from left to right, on the horizontal axis, with a step size of $\Delta t$. At each analysis time, the ensemble forecast from the last filter density is combined with the observation to produce the ensemble update transform $\boldsymbol{\Psi}_L$. This transform is then utilized to produce the posterior estimate for all lagged state ensembles conditioned on the new observation. The information in the posterior estimate thus flows in reverse time to the lagged states stored in memory, but the information flow is unidirectional in this scheme. It is understood, then, that reinitializing the dynamical model with the improved posterior estimate for the lagged states does not improve the filter estimate in the perfect linear Gaussian configuration. Indeed, define the product of the ensemble transforms as follows:

(56) $\boldsymbol{\Psi}_{k:l} := \boldsymbol{\Psi}_k\cdots\boldsymbol{\Psi}_l.$

Then, for arbitrary $1 \leq k \leq l \leq L$,

(57a) $\mathbf{M}_{l:k}\left(\mathbf{E}_{k-1|k-1}^{\mathrm{smth}}\boldsymbol{\Psi}_{k:l}\right) = \mathbf{M}_{l:k}\mathbf{E}_{k-1|l}^{\mathrm{smth}}$ (57b) $= \mathbf{E}_{l|k-1}^{\mathrm{fore}}\boldsymbol{\Psi}_{k:l}$ (57c) $= \mathbf{E}_{l|l}^{\mathrm{smth}}.$

This demonstrates that conditioning on the information from the observation is covariant with the dynamics. Raanes (2016) demonstrates the equivalence of the EnKS and the Rauch–Tung–Striebel (RTS) smoother, where this property of perfect linear Gaussian models is well understood. In the RTS formulation of the retrospective reanalysis, the conditional estimate reduces to the map of the current filter estimate under the reverse-time model $\mathbf{M}_k^{-1}$ (Jazwinski, 1970; see example 7.8, chap. 7). Note, however, that both the EnKS and ensemble RTS smoothers produce their retrospective reanalyses via a recursive ensemble transform without the need to make backwards model simulations.

The covariance of conditioning on observations with the model dynamics does not hold, however, in the case of either nonlinear dynamics or model error. Reinitializing the DA cycle in a perfect nonlinear model with the conditional ensemble estimate $\mathbf{E}_{0|L}^{\mathrm{smth}}$ can dramatically improve the accuracy of the subsequent forecast and filter statistics. Particularly, this exploits the mismatch in perfect nonlinear dynamics, $\mathcal{M}_{L:1}\left(\mathbf{E}_{0|L}^{\mathrm{smth}}\right) \neq \mathbf{E}_L^{\mathrm{filt}}$. Chaotic dynamics generate additional information about the initial value problem in the sense that initial conditions nearby to each other are distinguished by their subsequent evolution and divergence due to dynamical instability. Reinitializing the model forecast with the smoothed prior estimate brings new information into the forecast for states in the next DAW. This improvement in the accuracy of the ensemble statistics has been exploited to a great extent by utilizing the 4D ensemble cost function (Hunt et al., 2004). Particularly, the filter cost function can be extended over multiple observations simultaneously and in terms of lagged states directly. This alternative approach to extending the filter analysis to the smoother analysis is discussed in the following.

https://gmd.copernicus.org/articles/15/7641/2022/gmd-15-7641-2022-f02

Figure 2. The EnKS with a lag $L = 4$ and a shift $S = 1$. The observations are assimilated sequentially via the filter cost function, and a retrospective reanalysis is applied to all ensemble states within the lag window stored in memory. This figure is adapted from Asch et al. (2016).

3.3 The Gauss–Newton fixed-lag IEnKS

The following is an up-to-date formulation of the Gauss–Newton IEnKS of Bocquet and Sakov (2013, 2014) and its derivations. Instead of considering the marginal smoother problem, now consider the joint posterior density directly and for a general shift $S$. The last posterior density is written as $p\left(\boldsymbol{x}_{L-S:1-S} \,|\, \boldsymbol{y}_{L-S:1-S}\right)$. Using the independence of observation errors and the Markov assumption recursively,

(58) $p\left(\boldsymbol{x}_{L:1} \,|\, \boldsymbol{y}_{L:1-S}\right) \propto \prod_{k=L-S+1}^{L}p\left(\boldsymbol{y}_k \,|\, \boldsymbol{x}_k\right)p\left(\boldsymbol{x}_k \,|\, \boldsymbol{x}_{k-1}\right) \times \int\prod_{k=1}^{L-S}p\left(\boldsymbol{x}_k \,|\, \boldsymbol{x}_{k-1}\right)p\left(\boldsymbol{x}_0 \,|\, \boldsymbol{y}_{L-S:1-S}\right)\mathrm{d}\boldsymbol{x}_0.$

Additionally, using the perfect model assumption,

(59) $p\left(\boldsymbol{x}_k \,|\, \boldsymbol{x}_{k-1}\right) = \delta\left\{\boldsymbol{x}_k - \mathcal{M}_k\left(\boldsymbol{x}_{k-1}\right)\right\}$

for every k. Therefore,

(60) $p\left(\boldsymbol{x}_{L:1} \,|\, \boldsymbol{y}_{L:1-S}\right) \propto \int\underbrace{p\left(\boldsymbol{x}_0 \,|\, \boldsymbol{y}_{L-S:1-S}\right)}_{(\mathrm{i})} \times \underbrace{\prod_{k=L-S+1}^{L}p\left(\boldsymbol{y}_k \,|\, \boldsymbol{x}_k\right)}_{(\mathrm{ii})} \times \underbrace{\prod_{k=1}^{L}\delta\left\{\boldsymbol{x}_k - \mathcal{M}_k\left(\boldsymbol{x}_{k-1}\right)\right\}}_{(\mathrm{iii})}\mathrm{d}\boldsymbol{x}_0,$

where term (i) in Eq. (60) represents the marginal smoother density for $\boldsymbol{x}_{0|L-S}$ over the last DAW, term (ii) represents the joint likelihood of the observations given the model state, and term (iii) represents the free forecast of the smoother estimate for $\boldsymbol{x}_{0|L-S}$. Noting that $p\left(\boldsymbol{x}_{L:1} \,|\, \boldsymbol{y}_{L:1}\right) \propto p\left(\boldsymbol{x}_{L:1} \,|\, \boldsymbol{y}_{L:1-S}\right)$, this provides a recursive form to sample the joint posterior density.

Under the perfect linear Gaussian model assumption, the above derivation leads to the following exact 4D cost function:

(61) $\mathcal{J}\left(\boldsymbol{x}_0\right) := \frac{1}{2}\left\|\boldsymbol{x}_0 - \overline{\boldsymbol{x}}_{0|L-S}^{\mathrm{smth}}\right\|_{\mathbf{B}_{0|L-S}^{\mathrm{smth}}}^2 + \frac{1}{2}\sum_{k=L-S+1}^{L}\left\|\boldsymbol{y}_k - \mathbf{H}_k\mathbf{M}_{k:1}\boldsymbol{x}_0\right\|_{\mathbf{R}_k}^2.$

The ensemble-based approximation, using notations as in Eq. (41), yields the following:

(62a) $\boldsymbol{x}_0 := \hat{\boldsymbol{x}}_{0|L-S}^{\mathrm{smth}} + \mathbf{X}_{0|L-S}^{\mathrm{smth}}\boldsymbol{w}$, (62b) $\tilde{\mathcal{J}}\left(\boldsymbol{w}\right) := \frac{1}{2}\left(N_e - 1\right)\left\|\boldsymbol{w}\right\|^2 + \frac{1}{2}\sum_{k=L-S+1}^{L}\left\|\hat{\boldsymbol{\delta}}_k - \mathbf{S}_k\boldsymbol{w}\right\|^2.$

Notice that Eq. (62b) is quadratic in $\boldsymbol{w}$; therefore, for the perfect linear Gaussian model, one can perform a global analysis over all new observations in the DAW at once.

The gradient and the Hessian of the ensemble-based 4D cost function are given as follows:

(63a) $\nabla\tilde{\mathcal{J}} := \left(N_e - 1\right)\boldsymbol{w} - \sum_{k=L-S+1}^{L}\mathbf{S}_k^\top\left(\hat{\boldsymbol{\delta}}_k - \mathbf{S}_k\boldsymbol{w}\right)$, (63b) $\tilde{\boldsymbol{\Xi}}_{\tilde{\mathcal{J}}} := \left(N_e - 1\right)\mathbf{I}_{N_e} + \sum_{k=L-S+1}^{L}\mathbf{S}_k^\top\mathbf{S}_k,$

so that, evaluating at $\boldsymbol{w} = \boldsymbol{0}$, the minimizer $\hat{\boldsymbol{w}}$ is again given by a single iteration of Newton's descent:

(64) $\hat{\boldsymbol{w}} := \boldsymbol{0} - \tilde{\boldsymbol{\Xi}}_{\tilde{\mathcal{J}}}^{-1}\nabla\tilde{\mathcal{J}}\big|_{\boldsymbol{w} = \boldsymbol{0}}.$

Define the covariance transform again as $\mathbf{T} := \tilde{\boldsymbol{\Xi}}_{\tilde{\mathcal{J}}}^{-\frac{1}{2}}$. We denote the right ensemble transform corresponding to the 4D analysis as $\boldsymbol{\Psi}_{L-S+1:L}^{\mathrm{4D}}$, to distinguish it from the product of the sequential filter transforms $\boldsymbol{\Psi}_{L-S+1:L}$. The global analyses are defined as follows:

(65a) $\boldsymbol{\Psi}_{L-S+1:L}^{\mathrm{4D}} := \boldsymbol{1}\boldsymbol{1}^\top/N_e + \left(\mathbf{I}_{N_e} - \boldsymbol{1}\boldsymbol{1}^\top/N_e\right)\left(\hat{\boldsymbol{w}}\boldsymbol{1}^\top + \sqrt{N_e - 1}\,\mathbf{T}\mathbf{U}\right)$, (65b) $\mathbf{E}_{0|L}^{\mathrm{smth}} = \mathbf{E}_{0|L-S}^{\mathrm{smth}}\boldsymbol{\Psi}_{L-S+1:L}^{\mathrm{4D}},$

where U is any mean-preserving orthogonal matrix.
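The linear 4D analysis of Eqs. (62)–(65) can likewise be sketched compactly in Julia; the inputs `S_list` and `δ_list` are assumed to be the ensemble-based quantities of Eq. (41) precomputed for each new observation in the DAW, and the function name is hypothetical.

```julia
using LinearAlgebra

# Single Newton step of the (linear) 4D analysis, updating the initial ensemble E0
function ienks_linear_4d_update(E0::Matrix, S_list::Vector{<:Matrix}, δ_list::Vector{<:Vector})
    Ne = size(E0, 2)
    grad0 = -sum(S' * δ for (S, δ) in zip(S_list, δ_list))     # ∇J̃ at w = 0, Eq. (63a)
    Ξ = (Ne - 1) * I + sum(S' * S for S in S_list)             # Hessian, Eq. (63b)
    w = -(Ξ \ grad0)                                           # Newton step, Eq. (64)
    T = inv(sqrt(Symmetric(Matrix(Ξ))))                        # covariance transform
    Ψ4D = ones(Ne, Ne) / Ne +
          (I - ones(Ne, Ne) / Ne) * (w * ones(Ne)' + sqrt(Ne - 1) * T)   # Eq. (65a), U = I
    return E0 * Ψ4D                                            # E_{0|L}^smth, Eq. (65b)
end
```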

In the perfect linear Gaussian model, this formulation of the IEnKS is actually equivalent to the 4D-EnKF of Hunt et al. (2004), Fertig et al. (2007), and Harlim and Hunt (2007). The above scheme produces a global analysis of all observations within the DAW, even asynchronously from the standard filter cycle (Sakov et al., 2010). One generates a free ensemble forecast with the initial conditions drawn iid according to $p\left(\boldsymbol{x}_0 \,|\, \boldsymbol{y}_{L-S:1-S}\right)$, and all data available within the DAW are used to estimate the update to the initial ensemble. The perfect model assumption means that the updated initial ensemble $\mathbf{E}_{0|L}^{\mathrm{smth}}$ can then be used to temporally interpolate the joint posterior estimate over the entire DAW from the marginal sample, i.e., for any $0 < k \leq L$, a smoothing solution is defined as follows:

(66) $\mathbf{M}_{k:1}\left(\mathbf{E}_{0|L-S}^{\mathrm{smth}}\boldsymbol{\Psi}_{L-S+1:L}^{\mathrm{4D}}\right) =: \mathbf{E}_{k|L}^{\mathrm{smth}}.$

When $\mathcal{M}_k$ and $\mathcal{H}_k$ are nonlinear, the IEnKS formulation is extended with additional iterations of Newton's descent, as in Eq. (32), in order to iteratively optimize the update weights. Specifically, the gradient is given by the following:

(67a) $\nabla\tilde{\mathcal{J}} := \left(N_e - 1\right)\boldsymbol{w} - \sum_{k=L-S+1}^{L}\tilde{\mathbf{Y}}_k^\top\mathbf{R}_k^{-1}\boldsymbol{\varphi}_k$, (67b) $\boldsymbol{\varphi}_k := \boldsymbol{y}_k - \mathcal{H}_k\circ\mathcal{M}_{k:1}\left(\hat{\boldsymbol{x}}_{0|L-S}^{\mathrm{smth}} + \mathbf{X}_{0|L-S}^{\mathrm{smth}}\boldsymbol{w}\right),$

where $\tilde{\mathbf{Y}}_k$ represents a directional derivative of the observation and state models with respect to the ensemble perturbations at the ensemble mean, as follows:

(68) $\tilde{\mathbf{Y}}_k := \nabla\big|_{\hat{\boldsymbol{x}}_{0|L-S}^{\mathrm{smth}}}\left[\mathcal{H}_k\circ\mathcal{M}_{k:1}\right]\mathbf{X}_{0|L-S}^{\mathrm{smth}}.$

This describes the sensitivities of the cost function, with respect to the ensemble perturbations, mapped to the observation space. When the dynamics is weakly nonlinear, the ensemble perturbations of the EnKS and IEnKS are known to closely align with the span of the backward Lyapunov vectors of the nonlinear model along the true state trajectory (Bocquet and Carrassi, 2017). Under these conditions, Eq. (68) can be interpreted as a directional derivative with respect to the forecast error growth along the dynamical instabilities of the nonlinear model (see Carrassi et al., 2022, and references therein).

In order to avoid an explicit computation of the tangent linear model and the adjoint, as in 4D-Var, Sakov et al. (2012) and Bocquet and Sakov (2012) proposed two formulations to approximate the tangent linear propagation of the ensemble perturbations. The bundle scheme makes an explicit finite-differences approximation in the observation space where, for an arbitrary ensemble, the approximate linearization is defined as follows:

(69) $\mathbf{Y}_k := \frac{1}{\epsilon}\,\mathcal{H}_k\circ\mathcal{M}_{k:1}\left(\hat{\boldsymbol{x}}_0\boldsymbol{1}^\top + \epsilon\mathbf{X}_0\right)\left(\mathbf{I}_{N_e} - \boldsymbol{1}\boldsymbol{1}^\top/N_e\right),$

for a small constant ϵ. Alternatively, the transform version provides a different approximation to the variational analysis, using the covariance transform T and its inverse as a pre-/post-conditioning of the perturbations used in the sensitivities approximation. The transform variant of the IEnKS is in some cases more numerically efficient than the bundle version, requiring fewer ensemble simulations, and it is explicitly related to the ETKF/EnKS/4D-EnKF formalism presented thus far. For these reasons, the transform approximation is used as a basis of comparison with the other schemes in this work.

For the IEnKS transform variant, the ensemble-based approximations are redefined in each Newton iteration as follows:

(70a) $\mathbf{Y}_k := \mathcal{H}_k\left(\mathbf{E}_k\right)$, (70b) $\hat{\boldsymbol{y}}_k := \mathbf{Y}_k\boldsymbol{1}/N_e$, (70c) $\mathbf{S}_k := \mathbf{R}_k^{-\frac{1}{2}}\left(\mathbf{Y}_k - \hat{\boldsymbol{y}}_k\boldsymbol{1}^\top\right)\mathbf{T}^{-1}$, (70d) $\hat{\boldsymbol{\delta}}_k := \mathbf{R}_k^{-\frac{1}{2}}\left(\boldsymbol{y}_k - \hat{\boldsymbol{y}}_k\right),$

where the first covariance transform is defined as $\mathbf{T} := \mathbf{I}_{N_e}$, the gradient and Hessian are computed as in Eq. (63) from the above, and the covariance transform is redefined in terms of the Hessian, $\mathbf{T} := \tilde{\boldsymbol{\Xi}}_{\tilde{\mathcal{J}}}^{-\frac{1}{2}}$, at the end of each iteration. With these definitions, the first iteration of the IEnKS transform variant corresponds to the solution of the nonlinear 4D-EnKF, but subsequent iterates are initialized by pre-conditioning the initial ensemble perturbations via the update $\mathbf{T}$ and post-conditioning the sensitivities by the inverse transform $\mathbf{T}^{-1}$.

An updated form of the Gauss–Newton IEnKS transform variant is presented in Algorithm A7. Note that, while Algorithm A7 does not explicitly reference the sub-routine in Algorithm A1, many of the same steps are used in the IEnKS when computing the sensitivities. It is important to notice that, for S>1, the IEnKS only requires a single computation of the square root inverse of the Hessian of the 4D cost function, per iteration of the optimization, to process all observations in the DAW. On the other hand, the EnKS processes these observations sequentially, requiring S total square root inverse calculations of the Hessian, corresponding to each of the sequential filter cost functions.

The IEnKS is computationally constrained by the fact that each iteration of the descent requires $L$ total ensemble simulations of the dynamical state model $\mathcal{M}_k$. One can minimize this expense by using a single iteration of the IEnKS equations, which is denoted the linearized IEnKS (Lin-IEnKS) by Bocquet and Sakov (2014). When the overall DA cycle is only weakly nonlinear, this single iteration of the IEnKS algorithm can produce a dramatic improvement in the forecast accuracy versus the forecast/filter cycle of the EnKS. However, the overall nonlinearity of the DA cycle may be strongly influenced by factors other than the model forecast $\mathcal{M}_k$ itself. As a simple example, consider the case in which $\mathcal{H}_k$ is nonlinear yet $\mathcal{M}_k \equiv \mathbf{M}_k$ for all $k$. In this setting, it may be more numerically efficient to iterate upon the 3D filter cost function rather than the full 4D cost function, which requires simulations of the state model. Combining (i) the filter step and retrospective reanalysis of the EnKS and (ii) the single iteration of the ensemble simulation over the DAW, as in the Lin-IEnKS, we obtain an estimation scheme that sequentially solves the nonlinear filter cost functions in the current DAW, while making an improved forecast in the next by transmitting the retrospective analyses through the dynamics via the updated initial ensemble.

3.4 The fixed-lag SIEnKS

3.4.1 Algorithm

Recall from Eq. (57) that conditioning the ensemble with the right transform $\boldsymbol{\Psi}_k$ is covariant with the dynamics. In a perfect linear Gaussian model, we can therefore estimate the joint posterior over the DAW via model propagation of the marginal for $\boldsymbol{x}_{0|L}$, as in the IEnKS, but by using the EnKS retrospective reanalysis to generate the initial condition. For arbitrary $1 \leq S \leq L$, define each of the right transforms $\{\boldsymbol{\Psi}_k\}_{k=L-S+1}^{L}$ as in the sequential filter analysis of the ETKF with Eq. (45). Rather than storing the ensemble matrix in memory for each time $t_k$ in the DAW, we instead store $\mathbf{E}_{0|L-S}^{\mathrm{smth}}$ and $\mathbf{E}_{L-S|L-S}^{\mathrm{smth}}$ to begin a DA cycle. Observations within the DAW are sequentially assimilated via the 3D filter cycle initialized with $\mathbf{E}_{L-S|L-S}^{\mathrm{smth}}$, and a marginal, retrospective smoother analysis is performed sequentially on $\mathbf{E}_{0|L-S}^{\mathrm{smth}}$ with these filter transforms. The joint posterior estimate is then interpolated over the DAW for any $1 \leq k \leq L$ via the model dynamics as follows:

(71a) $\mathbf{E}_{0|L}^{\mathrm{smth}} = \mathbf{E}_{0|L-S}^{\mathrm{smth}}\boldsymbol{\Psi}_{L-S+1:L}$, (71b) $\mathbf{E}_{k|L}^{\mathrm{smth}} := \mathbf{M}_{k:1}\mathbf{E}_{0|L}^{\mathrm{smth}}.$

Notice that, for S=1, the product of the 3D filter ensemble transforms reduces to the 4D transform, i.e.,

(72) $\boldsymbol{\Psi}_{L-S+1:L} \equiv \boldsymbol{\Psi}_{L-S+1:L}^{\mathrm{4D}} \equiv \boldsymbol{\Psi}_L,$

so that, in the perfect linear Gaussian model with $S = 1$, the SIEnKS and the Lin-IEnKS coincide. The SIEnKS and the Lin-IEnKS have different treatments of nonlinearity in the DA cycle, but even in the perfect linear Gaussian model, a shift $S > 1$ distinguishes the 4D approach of the Lin-IEnKS and the hybrid 3D/4D approach of the SIEnKS. For comparison, a schematic of the SIEnKS cycle is pictured in Fig. 3, while a schematic of the (Lin-)IEnKS cycle is shown in Fig. 4, and each is configured for a lag of $L = 4$ and a shift of $S = 2$. This comparison demonstrates how the sequential 3D filter analysis and retrospective smoother reanalysis for each observation differ from the global 4D analysis of all observations at once in the (Lin-)IEnKS. A generic form of the SIEnKS is summarized in Algorithm A8, utilizing the sub-routines in Algorithms A1–A4. Note that the version presented in Algorithm A8 is used to emphasize the commonality with the EnKS. However, an equivalent implementation initializes each cycle with $\mathbf{E}_{0|L-S}^{\mathrm{smth}}$ alone, similar to the IEnKS. Such a design is utilized when we derive the SIEnKS MDA scheme in Algorithm A12 from the IEnKS MDA scheme in Algorithm A13.
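The following schematic Julia sketch of one SIEnKS cycle (again using the hypothetical `etkf_transform` helper and a column-wise model propagator `M` of our own naming) condenses Eqs. (71)–(72): sequential 3D filter analyses, retrospective reanalysis of the initial ensemble, and the shift of the reanalyzed initial condition $S$ analysis times forward; the lagged states within the DAW are handled implicitly here rather than as in Algorithm A8.

```julia
# One SIEnKS cycle: E0_smth is E_{0|L-S}^smth, E_filt is E_{L-S|L-S}^smth, and `obs`
# holds the S new observations entering the DAW, ordered in time.
function sienks_cycle(E0_smth::Matrix, E_filt::Matrix, obs, H, R, M)
    for y in obs
        E_filt = M(E_filt)                             # forecast to the next observation time
        E_filt, Ψ = etkf_transform(E_filt, H, y, R)    # sequential 3D filter analysis
        E0_smth = E0_smth * Ψ                          # retrospective reanalysis, Eq. (71a)
    end
    for _ in 1:length(obs)                             # shift: evolve the reanalyzed initial
        E0_smth = M(E0_smth)                           # condition S analysis times forward
    end
    return E0_smth, E_filt                             # initial/filter ensembles for next cycle
end
```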

https://gmd.copernicus.org/articles/15/7641/2022/gmd-15-7641-2022-f03

Figure 3. The SIEnKS with a lag $L = 4$ and a shift $S = 2$. An initial condition from the last smoothing cycle initializes a forecast simulation over the current DAW of the $L = 4$ states. New observations entering the DAW are assimilated sequentially via the 3D filter cost function. After each filter analysis, a retrospective reanalysis is applied to the initial ensemble. At the end of the DAW, after sequentially processing all observations, the reanalyzed initial condition is evolved via the model $S$ analysis times forward to begin the next cycle.


https://gmd.copernicus.org/articles/15/7641/2022/gmd-15-7641-2022-f04

Figure 4. The (Lin-)IEnKS with a lag $L = 4$ and a shift $S = 2$. An initial condition from the last smoothing cycle initializes a forecast simulation over the current DAW of the $L = 4$ states. Unlike the SIEnKS, all new observations entering the DAW are assimilated globally at once via the 4D cost function. The innovations of the free forecast over all of the observation times are used to produce a retrospective reanalysis of the initial ensemble. Finally, the reanalyzed initial condition is evolved via the model $S$ analysis times forward to begin the next cycle. Unlike the SIEnKS and the EnKS, the filter analysis of the (Lin-)IEnKS is performed by dynamically interpolating the smoothing estimate over new observation times with a free forecast in the subsequent cycle. The Lin-IEnKS is differentiated from the IEnKS by using only a single free ensemble forecast to produce the 4D optimization of the initial ensemble in each cycle.


3.4.2 Comparison with other schemes

Other well-known DA schemes combining a retrospective reanalysis and reinitialization of the ensemble forecast include the running-in-place (RIP) smoother of Kalnay and Yang (2010) and the one-step-ahead (OSA) smoother of Desbouvries et al. (2011) and Ait-El-Fquih and Hoteit (2022). The RIP smoother iterates over both the ensemble simulation and the filter cost function in order to apply a retrospective reanalysis to the first prior with a lag and shift of $L = S = 1$. The RIP smoother is designed to spin up the LETKF from a cold start of a forecast model and DA cycle (Yang et al., 2013). However, the RIP optimizes a different style of cost function than the S/Lin-/IEnKS family of smoothers. The stopping criterion for RIP is formulated in terms of the mean square distance between the ensemble forecast and the observation, potentially leading to an overfitting of the observation. The OSA smoother is also proposed as an optimization of the DA cycle and integrates an EnKF framework, including for a two-stage, iterative optimization of dynamical forecast model parameters within the DA cycle (Gharamti et al., 2015; Ait-El-Fquih et al., 2016; Raboudi et al., 2018). The OSA smoother uses a single iteration and a lag and shift of $L = S = 1$, making a filter analysis of the incoming observation and a retrospective reanalysis of the prior. However, the OSA smoother differs from the SIEnKS in using an additional filter analysis while interpolating the joint posterior estimate over the DAW, accounting for model error in the simulation of $\mathcal{M}_1\left(\mathbf{E}_{0|1}^{\mathrm{smth}}\right)$. Without model error, the second filter analysis in the OSA smoother simulation is eliminated from the estimation scheme. Therefore, with an ETKF-style filter analysis, a perfect linear Gaussian model, and a lag and shift of $L = S = 1$, the SIEnKS and the RIP and OSA smoothers all coincide.

The rationale for the SIEnKS is to focus computational resources on optimizing the sequence of 3D filter cost functions for the DAW when the forecast error dynamics is weakly nonlinear, rather than computing the iterative ensemble simulations needed to optimize a 4D cost function. The SIEnKS generalizes some of the ideas used in these other DA schemes, particularly for perfect models with weakly nonlinear forecast error dynamics, including for (i) arbitrary lags and shifts $1 \leq S \leq L$, (ii) an iterative optimization of hyperparameters for the filter cost function, (iii) multiple data assimilation, and (iv) asynchronous observations in the DA cycle. In order to illustrate the novelty of the SIEnKS, and to motivate its computational cost/prediction accuracy tradeoff advantages, we discuss each of these topics in the following.

4 Applications of single-iteration smoothing

4.1 Nonlinear observation operators

Just as the IEnKS extends the linear 4D cost function, the filter cost function in Eq. (42) can be extended with Newton iterates in the presence of a nonlinear observation operator. The maximum likelihood ensemble filter (MLEF) of Zupanski (2005) and Zupanski et al. (2008) is an estimator designed to process nonlinear observation operators and can be derived in the common ETKF formalism. Particularly, the algorithm can be granted bundle and transform variants like the IEnKS (Asch et al., 2016; see Sect. 6.7.2.1), which are designed to approximate the directional derivative of the nonlinear observation operator with respect to the forecast ensemble perturbations at the forecast mean,

(73) $\tilde{\mathbf{Y}}_k := \nabla\big|_{\hat{\boldsymbol{x}}_k^{\mathrm{fore}}}\mathcal{H}_k\,\mathbf{X}_k^{\mathrm{fore}},$

which is used in the nonlinear filter cost function gradient as follows:

(74) $\nabla\tilde{\mathcal{J}} := \left(N_e - 1\right)\boldsymbol{w} - \tilde{\mathbf{Y}}_k^\top\mathbf{R}_k^{-1}\left[\boldsymbol{y}_k - \mathcal{H}_k\left(\hat{\boldsymbol{x}}_k^{\mathrm{fore}} + \mathbf{X}_k^{\mathrm{fore}}\boldsymbol{w}\right)\right].$

When the forecast error dynamics is weakly nonlinear, the MLEF-style nonlinear filter cost function optimization provides a direct extension to the SIEnKS. The transform, as defined in the sub-routine in Algorithm A9, is interchangeable with the usual ensemble transform in Algorithm A1. In this way, the EnKS and the SIEnKS can each process nonlinear observation operators with an iterative optimization in the filter cost function alone and, subsequently, apply their retrospective analyses as usual. We refer to the EnKS analysis with MLEF transform as the maximum likelihood ensemble smoother (MLES), though we refer to the SIEnKS as usual, whether it uses a single iteration or multiple iterations of the solution to the filter cost function. Note that only the transform step needs to be interchanged in Algorithms A6 and A8, so we do not provide additional pseudo-code.
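The following Julia sketch illustrates a Gauss–Newton iteration of this MLEF-style analysis; it is a schematic stand-in for Algorithm A9, re-approximating the sensitivities from ensemble members recentered about the current iterate mean rather than using the bundle or transform conditioning, and the function name and fixed iteration count are our own assumptions.

```julia
using LinearAlgebra

# Iteratively optimize the nonlinear filter cost function in the weights w, Eq. (74)
function mlef_weights(E_fore::Matrix, H::Function, y::Vector, R::Matrix; n_iter=5)
    Nx, Ne = size(E_fore)
    x_hat = E_fore * ones(Ne) / Ne
    X = E_fore .- x_hat
    R_inv_sqrt = inv(sqrt(Symmetric(R)))
    w = zeros(Ne)
    for _ in 1:n_iter
        E_i = x_hat .+ X * w .+ X               # members recentered about the iterate mean
        Y = mapslices(H, E_i, dims=1)           # observation-space ensemble
        y_hat = Y * ones(Ne) / Ne
        S = R_inv_sqrt * (Y .- y_hat)           # approximate weighted sensitivities
        δ = R_inv_sqrt * (y - H(x_hat + X * w)) # weighted innovation at the iterate
        grad = (Ne - 1) * w - S' * δ            # gradient, ensemble approximation of Eq. (74)
        Ξ = (Ne - 1) * I + S' * S               # Gauss-Newton Hessian approximation
        w -= Ξ \ grad                           # Newton update of the weights, Eq. (32)
    end
    return w
end
```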

Consider that, for the MLES and the SIEnKS, the number of Hessian square root inverse calculations expands with the number of iterations used in Algorithm A9 to compute the transform for each of the $S$ observations in the DAW. For each iteration of the IEnKS, this again requires only a single square root inverse calculation of the 4D cost function Hessian. However, even if the forecast error dynamics is weakly nonlinear, optimizing versus the nonlinear observation operator requires $L$ ensemble simulations for each iteration used to optimize the cost function.

4.2 Adaptive inflation and the finite size formalism

Due to the bias of Kalman-like estimators in nonlinear dynamics, covariance inflation, as in Algorithm A4, is widely used to regularize these schemes. In particular, this can ameliorate the systematic underestimation of the prediction/posterior uncertainty due to sampling error and bias. Empirically tuning the multiplicative inflation coefficient $\lambda \geq 1$ can be effective in stationary dynamics. However, empirically tuning this parameter can be costly, potentially requiring many model simulations, and the tuned value may not be optimal across timescales in which the dynamical system becomes non-stationary. A variety of techniques are used in practice for adaptive covariance estimation, inflation, or augmentation, accounting for these deficiencies of the Kalman-like estimators (Tandeo et al., 2020, and references therein).

One alternative to empirically tuning $\lambda$ is to derive an adaptive multiplicative covariance inflation factor via a hierarchical Bayesian model by including a prior on the background mean and covariance, $p\left(\overline{\boldsymbol{x}}_1^{\mathrm{fore}}, \mathbf{B}_1^{\mathrm{fore}}\right)$, as in the finite size formalism of Bocquet (2011), Bocquet and Sakov (2012), and Bocquet et al. (2015). This formalism seeks to marginalize out the first two moments of the background, yielding a Gaussian mixture model for the forecast prior as follows:

(75) $p\left(\boldsymbol{x}_1 \,|\, \mathbf{E}_1^{\mathrm{fore}}\right) = \int p\left(\boldsymbol{x}_1 \,|\, \mathbf{E}_1^{\mathrm{fore}}, \overline{\boldsymbol{x}}_1^{\mathrm{fore}}, \mathbf{B}_1^{\mathrm{fore}}\right) \times p\left(\overline{\boldsymbol{x}}_1^{\mathrm{fore}}, \mathbf{B}_1^{\mathrm{fore}} \,|\, \mathbf{E}_1^{\mathrm{fore}}\right)\mathrm{d}\overline{\boldsymbol{x}}_1^{\mathrm{fore}}\,\mathrm{d}\mathbf{B}_1^{\mathrm{fore}}.$

Using Jeffreys' hyperprior for $\overline{\boldsymbol{x}}_1^{\mathrm{fore}}$ and $\mathbf{B}_1^{\mathrm{fore}}$, the ensemble-based filter MAP cost function can be derived as proportional to the following:

(76) $\tilde{\mathcal{J}}\left(\boldsymbol{w}\right) := \frac{1}{2}\left\|\boldsymbol{y}_1 - \mathcal{H}_1\left(\hat{\boldsymbol{x}}_1^{\mathrm{fore}} + \mathbf{X}_1^{\mathrm{fore}}\boldsymbol{w}\right)\right\|_{\mathbf{R}_1}^2 + \frac{N_e}{2}\log\left(\epsilon_{N_e} + \left\|\boldsymbol{w}\right\|^2\right),$

where $\epsilon_{N_e} := 1 + \frac{1}{N_e}$. Notice that Eq. (76) is non-quadratic in $\boldsymbol{w}$, regardless of whether $\mathcal{H}_1$ is linear or nonlinear, such that one can iteratively optimize the solution to the nonlinear filter cost function with a Gauss–Newton approximation of the descent. When accounting for the nonlinearity in the ensemble evolution and the sampling error due to small ensemble sizes in perfect models, optimizing the extended cost function in Eq. (76) can be an effective means to regularize the EnKF. In the presence of significant model error, one may need to extend the finite size formalism to the variant developed by Raanes et al. (2019a).
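To make the structure of Eq. (76) concrete, the following Julia sketch evaluates the finite size filter cost for a proposed weight vector; the function name and signature are hypothetical, and in practice this cost is minimized with a Gauss–Newton iteration as in Algorithm A10 rather than evaluated point-wise.

```julia
using LinearAlgebra

# Evaluate the EnKF-N filter cost of Eq. (76) at a given weight vector w, for a generic
# (possibly nonlinear) observation map H supplied by the user
function enkf_n_cost(w::Vector, E_fore::Matrix, H::Function, y::Vector, R::Matrix)
    Nx, Ne = size(E_fore)
    x_hat = E_fore * ones(Ne) / Ne                 # forecast mean
    X = E_fore .- x_hat                            # forecast perturbations
    ϵ_Ne = 1.0 + 1.0 / Ne                          # finite size constant ε_Ne
    resid = y - H(x_hat + X * w)                   # innovation at the proposal
    return 0.5 * resid' * (R \ resid) + (Ne / 2) * log(ϵ_Ne + w' * w)
end
```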

Algorithm A10 presents an updated version of the finite size ensemble Kalman filter (EnKF-N) transform calculation of Bocquet et al. (2015), explicitly based on the IEnKS transform approximation of the gradient of the observation operator. The hyperprior for the background mean and covariance is similarly introduced to the IEnKS and optimized over an extended 4D cost function. Note that, in the case where $\mathcal{H}_k \equiv \mathbf{H}_k$ is linear, a dual, scalar optimization can be performed for the filter cost function with less numerical expense. However, there is no similar reduction to the extended 4D cost function, and in order to emphasize the structural difference between the 4D approach and the sequential approach, we focus on the transform variant analogous to the IEnKS optimization.

Extending the adaptive covariance inflation in the finite size formalism to either the EnKS or the SIEnKS is simple, requiring that the ensemble transform calculation is interchanged with Algorithm A10 and that the tuned multiplicative inflation step is eliminated. The finite size iterative ensemble Kalman smoother (IEnKS-N) transform variant, including adaptive inflation as above, is described in Algorithm A11. Notice that iteratively optimizing the inflation hyperparameter comes at the additional expense of square root inverse Hessian calculations for the EnKS and the SIEnKS, while the IEnKS also requires L additional ensemble simulations for each iteration.

4.3 Multiple data assimilation

When the lag $L > 1$ is long, temporally interpolating the posterior estimate in the DAW via the nonlinear model solution, as in Eq. (71), becomes increasingly nonlinear. In chaotic dynamics, the small simulation errors introduced this way eventually degrade the posterior estimate, and this interpolation becomes unstable when $L$ is taken to be sufficiently large. Furthermore, for the 4D cost function, observations only distantly connected with the initial condition at the beginning of the DAW render the cost function with more local minima that may strongly affect the performance of the optimization. Multiple data assimilation is a commonly used technique, based on statistical tempering (Neal, 1996), designed to relax the nonlinearity of performing the MAP estimate by artificially inflating the variances of the observation errors with weights and assimilating these observations multiple times. Multiple data assimilation is made consistent with the Bayesian posterior in perfect linear Gaussian models by appropriately choosing weights so that, over all times that an observation vector is assimilated, all of its associated weights sum to one (Emerick and Reynolds, 2013). Given Gaussian likelihood functions, this implies that the sum of the precision matrices over the multiple assimilation steps equals $\mathbf{R}^{-1}$, as with the usual Kalman filter update.

Multiple data assimilation is integrated into the EnRML for static DAWs in reservoir modeling (Evensen2018, and references therein). With the fixed-lag, sequential EnKS, there is no reason to perform MDA as the assimilation occurs in a single pass over each observation with the filter step as in the ETKF. Sequential MDA, with DAWs shifting in time, was first derived with the IEnKS by Bocquet and Sakov (2014). In order to sample the appropriate density, the IEnKS MDA estimation is broken over two stages. First, in the balancing stage, the IEnKS fully assimilates all partially assimilated observations, targeting the joint posterior statistics. Second, the window of the partially assimilated observations is shifted in time with the MDA stage. The SIEnKS is similarly broken over these two stages, using the same weights as the IEnKS above. However, there is an important difference in the way MDA is formulated for the SIEnKS versus the IEnKS. For the SIEnKS, each observation in the DAW is assimilated with the sequential 3D filter cost function instead of the global 4D analysis in the IEnKS. The sequential filter analysis constrains the posterior's interpolation estimate to the observations in the balancing stage, as observations are assimilated sequentially in the SIEnKS, whereas the posterior estimate is performed by interpolating with a free forecast from the marginal posterior estimate in the IEnKS. Our novel SIEnKS MDA scheme is derived as follows.

Recall our algorithmically stationary DAW, $\{t_1, \ldots, t_L\}$, and suppose, for the moment, that there is a shift of S=1 and an arbitrary lag L. We adopt the notation in which the covariance matrices of the likelihood functions are inflated as follows:

$$p_{\mathbf{y}^{\beta}|\mathbf{x}} := n\left(\mathbf{y} \,\middle|\, \mathcal{H}(\mathbf{x}),\, \beta^{-1}\mathbf{R}\right), \tag{77}$$

where the observation weights are assumed to satisfy $0 < \beta \leq 1$. We index the weight for observation $\mathbf{y}_k$ at the present time $t_L$ as $\beta_{k|L}$. For consistency with the perfect linear Gaussian model, we require that

$$\sum_{i=1}^{L} \beta_{i|L} = 1. \tag{78}$$

This implies that, as we assimilate an observation vector for L total times, shifting the algorithmically stationary DAW, the sum of the weights used to assimilate the observation equals one.

We denote

$$\alpha_{k|L} := \sum_{i=k}^{L} \beta_{i|L} \tag{79}$$

as the fraction of the observation yk that has been assimilated after the analysis step at the time tL. Note that, under the Gaussian likelihood assumption, and assuming the independence of the fractional observations, this implies that

$$\prod_{i=k}^{L} p_{\mathbf{y}^{\beta_{i|L}}|\mathbf{x}} = p_{\mathbf{y}^{\alpha_{k|L}}|\mathbf{x}}. \tag{80}$$

Let $\boldsymbol{\beta}_{l:k|L}$ and $\boldsymbol{\alpha}_{l:k|L}$ denote the length-$(l-k+1)$ vectors as follows:

$$\boldsymbol{\beta}_{l:k|L} = \begin{pmatrix} \beta_{l|L} & \cdots & \beta_{k|L} \end{pmatrix}, \tag{81a}$$
$$\boldsymbol{\alpha}_{l:k|L} = \begin{pmatrix} \alpha_{l|L} & \cdots & \alpha_{k|L} \end{pmatrix}. \tag{81b}$$

We then define the sequences,

$$\mathbf{y}_{l:k}^{\boldsymbol{\beta}_{l:k|L}} := \left\{ \mathbf{y}_l^{\beta_{l|L}},\, \mathbf{y}_{l-1}^{\beta_{l-1|L}},\, \ldots,\, \mathbf{y}_k^{\beta_{k|L}} \right\}, \tag{82a}$$
$$\mathbf{y}_{l:k}^{\boldsymbol{\alpha}_{l:k|L}} := \left\{ \mathbf{y}_l^{\alpha_{l|L}},\, \mathbf{y}_{l-1}^{\alpha_{l-1|L}},\, \ldots,\, \mathbf{y}_k^{\alpha_{k|L}} \right\}, \tag{82b}$$

as the observations yl:k in the current DAW {t1,,tL}, with Eq. (82a), the corresponding MDA weights for this DAW, and, with Eq. (82b), the total portion of each observation assimilated in the MDA conditional density for this DAW after the analysis step. Similar definitions apply with the indices l:k|L-1 but are relative to the previous DAW.

For the current DAW, the balancing stage is designed to sample the joint posterior density,

$$p_{\mathbf{x}_{L:1}|\mathbf{y}_{L:1}}, \tag{83}$$

where the current cycle is initialized with a sample of the MDA conditional density,

$$p_{\mathbf{x}_0 | \mathbf{y}_{L-1:0}^{\boldsymbol{\alpha}_{L-1:0|L-1}}}. \tag{84}$$

That is, from the previous cycle, we have a marginal estimate for $\mathbf{x}_0$, given the sequence of observations $\mathbf{y}_{L-1:0}$, where the portion of observation $\mathbf{y}_k$ that has already been assimilated is given by $\alpha_{k|L-1}$. Notice that $\alpha_{0|L-1} = 1$, so that $\mathbf{y}_0$ has already been fully assimilated. To fully assimilate $\mathbf{y}_1$, we note that $1 - \alpha_{1|L-1} = \beta_{1|L}$, and therefore,

$$p_{\mathbf{x}_{1:0}|\mathbf{y}_{L-1:2}^{\boldsymbol{\alpha}_{L-1:2|L-1}},\,\mathbf{y}_{1:0}} \propto p_{\mathbf{y}_1^{\beta_{1|L}}|\mathbf{x}_1}\, p_{\mathbf{x}_1|\mathbf{x}_0}\, p_{\mathbf{x}_0|\mathbf{y}_{L-1:0}^{\boldsymbol{\alpha}_{L-1:0|L-1}}}. \tag{85}$$

The above corresponds to a single simulation/analysis step in an EnKS cycle, where the observation $\mathbf{y}_1^{\beta_{1|L}}$ is assimilated, and a retrospective reanalysis is applied to the ensemble at $t_0$.

More generally, to fully assimilate observation $\mathbf{y}_k$, we assimilate the remaining portion left unassimilated from the last DAW, given as $1 - \alpha_{k|L-1}$. We define an inductive step describing the density for $\mathbf{x}_{k:0}$, which has fully assimilated $\mathbf{y}_{k:0}$, though it has yet to assimilate the remaining portions of the observations $\mathbf{y}_{L-1:k+1}$, as follows:

$$p\left(\mathbf{x}_{k:0} \,\middle|\, \mathbf{y}_{L-1:k+1}^{\boldsymbol{\alpha}_{L-1:k+1|L-1}},\, \mathbf{y}_{k:0}\right) \propto p_{\mathbf{y}_k^{1-\alpha_{k|L-1}}|\mathbf{x}_k}\, p\left(\mathbf{x}_k|\mathbf{x}_{k-1}\right)\, p_{\mathbf{x}_{k-1:0}|\mathbf{y}_{L-1:k}^{\boldsymbol{\alpha}_{L-1:k|L-1}},\,\mathbf{y}_{k-1:0}}. \tag{86}$$

For $k = 2, \ldots, L-2$, this describes a subsequent simulation/analysis step of an EnKS cycle, but where the observation $\mathbf{y}_k^{1-\alpha_{k|L-1}}$ is assimilated and a retrospective analysis is applied to the ensemble at times $t_0, \ldots, t_{k-1}$. A subsequent EnKS analysis gives the following:

$$p\left(\mathbf{x}_{L-1:0} \,\middle|\, \mathbf{y}_{L-1:0}\right) \propto p_{\mathbf{y}_{L-1}^{1-\alpha_{L-1|L-1}}|\mathbf{x}_{L-1}}\, p\left(\mathbf{x}_{L-1}|\mathbf{x}_{L-2}\right)\, p_{\mathbf{x}_{L-2:0}|\mathbf{y}_{L-1}^{\alpha_{L-1|L-1}},\,\mathbf{y}_{L-2:0}}, \tag{87}$$

i.e., this samples the joint posterior for the last DAW. A final EnKS analysis is used to assimilate yL, for which no portion was already assimilated in the previous DAW, as follows:

$$p\left(\mathbf{x}_{L:1}|\mathbf{y}_{L:1}\right) \propto p\left(\mathbf{y}_L|\mathbf{x}_L\right)\, p\left(\mathbf{x}_L|\mathbf{x}_{L-1}\right)\, p\left(\mathbf{x}_{L-1:0}|\mathbf{y}_{L-1:0}\right). \tag{88}$$

We thus define an initial ensemble, distributed approximately as follows:

$$\mathbf{E}_0^{\mathrm{bal}} \sim p_{\mathbf{x}_0|\mathbf{y}_{L-1:0}^{\boldsymbol{\alpha}_{L-1:0|L-1}}}. \tag{89}$$

In the balancing stage, the observation error covariance weights are defined by the following:

$$\eta_{k|L} := 1 - \alpha_{k|L-1}, \tag{90}$$

where $\eta_{L|L} = 1$. When $\beta_{k|L} = \frac{1}{L}$ for all $k$, we obtain the balancing weights as $\eta_{k|L} = \frac{k}{L}$ for all $k = 1, \ldots, L$. An EnKS cycle initialized as in Eq. (89), using the balancing weights in Eq. (90), will approximately, sequentially, and recursively sample

$$\mathbf{E}_{k:0}^{\mathrm{bal}} \sim p_{\mathbf{x}_{k:0}|\mathbf{y}_{L-1:k+1}^{\boldsymbol{\alpha}_{L-1:k+1|L-1}},\,\mathbf{y}_{k:0}} \tag{91}$$

from the inductive relationship in Eq. (86), where the final analysis gives $\mathbf{E}_{L:0}^{\mathrm{bal}} \equiv \mathbf{E}_{L:0|L}^{\mathrm{smth}}$ from Eq. (88).
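As a small bookkeeping sketch of these balancing weights under uniform MDA weights $\beta_{k|L} = 1/L$, the following verifies the relation $\eta_{k|L} = k/L$ and the consistency constraint in Eq. (78); the variable names are purely illustrative.

```julia
# Sketch of the MDA weight bookkeeping for uniform weights β_{k|L} = 1/L (illustrative).
L = 4
β = fill(1 / L, L)                       # β_{k|L}, k = 1, …, L
α_prev = [sum(β[k:L-1]) for k in 1:L]    # α_{k|L-1} = Σ_{i=k}^{L-1} β_{i|L-1} (uniform case)
η = 1 .- α_prev                          # balancing weights η_{k|L} = 1 - α_{k|L-1}, Eq. (90)
@assert η ≈ collect(1:L) ./ L            # with uniform weights, η_{k|L} = k/L
@assert sum(β) ≈ 1.0                     # consistency constraint, Eq. (78)
```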

To subsequently shift the DAW and initialize the next cycle, we target the density $p_{\mathbf{x}_1|\mathbf{y}_{L:1}^{\boldsymbol{\alpha}_{L:1|L}}}$. Given $p_{\mathbf{x}_0|\mathbf{y}_{L-1:0}^{\boldsymbol{\alpha}_{L-1:0|L-1}}}$, the target density is sampled by assimilating each observation $\mathbf{y}_k^{\beta_{k|L}}$, so that the portion of each observation assimilated becomes $\mathbf{y}_{L:1}^{\boldsymbol{\alpha}_{L:1|L}}$. Notice that, for $k = 1, \ldots, L-2$,

$$p\left(\mathbf{x}_{k:0} \,\middle|\, \mathbf{y}_{L-1:k+1}^{\boldsymbol{\alpha}_{L-1:k+1|L-1}},\, \mathbf{y}_{k:0}^{\boldsymbol{\alpha}_{k:0|L}}\right) \propto p_{\mathbf{y}_k^{\beta_{k|L}}|\mathbf{x}_k}\, p\left(\mathbf{x}_k|\mathbf{x}_{k-1}\right)\, p_{\mathbf{x}_{k-1:0}|\mathbf{y}_{L-1:k}^{\boldsymbol{\alpha}_{L-1:k|L-1}},\,\mathbf{y}_{k-1:0}^{\boldsymbol{\alpha}_{k-1:0|L}}}. \tag{92}$$

The above recursion corresponds to an EnKS step in which the observation $\mathbf{y}_k^{\beta_{k|L}}$ is assimilated and a retrospective analysis is applied to the ensembles at times $t_0, \ldots, t_{k-1}$. Subsequent EnKS analyses using the MDA weights then give the following:

$$p\left(\mathbf{x}_{L-1:0}\,\middle|\,\mathbf{y}_{L-1:0}^{\boldsymbol{\alpha}_{L-1:0|L}}\right) \propto p_{\mathbf{y}_{L-1}^{\beta_{L-1|L}}|\mathbf{x}_{L-1}}\, p\left(\mathbf{x}_{L-1}|\mathbf{x}_{L-2}\right)\, p_{\mathbf{x}_{L-2:0}|\mathbf{y}_{L-1}^{\alpha_{L-1|L-1}},\,\mathbf{y}_{L-2:0}^{\boldsymbol{\alpha}_{L-2:0|L}}}, \tag{93}$$
$$p\left(\mathbf{x}_{L:0}\,\middle|\,\mathbf{y}_{L:0}^{\boldsymbol{\alpha}_{L:0|L}}\right) \propto p_{\mathbf{y}_L^{\beta_{L|L}}|\mathbf{x}_L}\, p\left(\mathbf{x}_L|\mathbf{x}_{L-1}\right)\, p_{\mathbf{x}_{L-1:0}|\mathbf{y}_{L-1:0}^{\boldsymbol{\alpha}_{L-1:0|L}}}. \tag{94}$$

We therefore perform a second EnKS cycle using the MDA observation error covariance weights $\beta_{k|L}$ to sample the target density. Given that $\eta_{1|L} = \beta_{1|L}$, the first analysis of the balancing stage in Eq. (85) is identical to the first analysis in the MDA stage, corresponding to $k=1$ in Eq. (92). Therefore, this first EnKS analysis step can be reused between the two stages.

Define an initial ensemble for the MDA stage, reusing the first analysis in the balancing stage, as follows:

$$\mathbf{E}_1^{\mathrm{mda}} \equiv \mathbf{E}_1^{\mathrm{bal}} \sim p_{\mathbf{x}_1|\mathbf{y}_{L-1:2}^{\boldsymbol{\alpha}_{L-1:2|L-1}},\,\mathbf{y}_{1:0}}. \tag{95}$$

An EnKS cycle initialized as in Eq. (95), using the MDA weights βk, approximately, sequentially, and recursively samples

$$\mathbf{E}_{k:1}^{\mathrm{mda}} \sim p_{\mathbf{x}_{k:1}|\mathbf{y}_{L-1:k+1}^{\boldsymbol{\alpha}_{L-1:k+1|L-1}},\,\mathbf{y}_{k:0}^{\boldsymbol{\alpha}_{k:0|L}}} \tag{96}$$

from the relationship in Eq. (92). The final analysis samples the density $p_{\mathbf{x}_{L:1}|\mathbf{y}_{L:0}^{\boldsymbol{\alpha}_{L:0|L}}} \equiv p_{\mathbf{x}_{L:1}|\mathbf{y}_{L:1}^{\boldsymbol{\alpha}_{L:1|L}}}$, as in Eq. (94), which is used to initialize the next cycle. To make the scheme more efficient, we note that we need only sample the marginal $p_{\mathbf{x}_1|\mathbf{y}_{L:1}^{\boldsymbol{\alpha}_{L:1|L}}}$ to reinitialize the next cycle of the algorithm. This means that the smoother loop of the EnKS in the second stage needs only to store and sequentially condition the ensemble $\mathbf{E}_1^{\mathrm{mda}}$ with the retrospective filter analyses of this stage. Combining the two stages together into a single cycle that produces forecast, filter, and smoother statistics over the DAW $\{t_1, \ldots, t_L\}$, as well as the ensemble initialization for the next cycle, requires 2L ensemble simulations. Due to the convoluted nature of the indexing over multiple DAWs above, a schematic of the two stages of the SIEnKS MDA cycle is presented in Fig. 5.

https://gmd.copernicus.org/articles/15/7641/2022/gmd-15-7641-2022-f05

Figure 5A schematic of the two stages of the SIEnKS MDA cycle. The DAW of the SIEnKS moves forward in time, from top to bottom, where the EnKS stage using MDA weights pushes the MDA conditional density, on the far left, forward in time. The middle layer represents the indexing of the stationary DAW, while the top layer represents a DAW one cycle back in time, and the bottom layer represents a DAW one cycle forward in time. The balancing density is sampled sequentially and recursively with an EnKS stage, using the balancing weights and moving from left to right in each cycle. For the current DAW, the middle balancing density has fully assimilated observations yk:0 and has partially assimilated observations yL:k+1αL-1:k+1|L-1. The EnKS stage with balancing weights completes when sampling the joint posterior, and the EnKS stage with MDA weights begins again.


The MDA algorithm is generalized to shift windows of S>1 with the number of ensemble forecasts remaining invariant at 2L when using blocks of uniform MDA weights in the DAW. Assume that L=SQ for some positive integer Q, so that we partition yL:1 into Q total blocks of observations each of length S. In this case, the perfect linear Gaussian model consistency constraint is revised as follows:

$$\beta_{k|L} = \tilde{\beta}_{i|L} \quad \text{for } i := \left\lceil \tfrac{k}{S} \right\rceil, \quad \text{with } \sum_{j=1}^{Q} \tilde{\beta}_{j|L} = 1, \tag{97}$$

where the brackets above denote rounding up to the nearest integer. This ensures, again, that the weights corresponding to the Q total times at which $\mathbf{y}_k$ is assimilated sum to one. With this weighting scheme, the equivalence between the first EnKS filter analysis of the balancing and MDA stages extends to the first S total EnKS filter analyses, and therefore $\mathbf{E}_S^{\mathrm{mda}} \equiv \mathbf{E}_S^{\mathrm{bal}}$ initializes the MDA stage. Memory usage is further reduced by only performing the retrospective conditioning in the balancing stage on the states $\mathbf{E}_{S:0}^{\mathrm{bal}}$. This samples the density $p(\mathbf{x}_{S:0}|\mathbf{y}_{L:0})$ in the final cycle before the estimates for these states are discarded from all subsequent DAWs. MDA variants of the SIEnKS and the (Lin-)IEnKS are presented in Algorithms A12 and A13.
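The block-uniform weighting in Eq. (97) can be sketched as follows, with the ceiling index implemented by integer ceiling division; the names here are illustrative only.

```julia
# Sketch of block-uniform MDA weights for a shift S > 1, following Eq. (97) (illustrative).
L, S = 8, 2
Q = L ÷ S                              # number of observation blocks, with L = S * Q
β_blk = fill(1 / Q, Q)                 # uniform block weights β̃_{j|L} summing to one
β = [β_blk[cld(k, S)] for k in 1:L]    # β_{k|L} = β̃_{⌈k/S⌉|L}
@assert sum(β_blk) ≈ 1.0               # Σ_j β̃_{j|L} = 1
```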

The primary difference between the SIEnKS and IEnKS MDA schemes lies in the 3D filter balancing analysis versus the global 4D balancing analysis. The IEnKS MDA scheme is not always robust in its 4D balancing estimation because the MDA conditional prior estimate that initializes the scheme may lie far away from the solution for the balanced, joint posterior. As a consequence, the optimization may require many iterations of the balancing stage. On the other hand, the sequential SIEnKS MDA approach uses the partially unassimilated observations in the DAW directly as a boundary condition to the interpolation of the joint posterior estimate over the DAW with the sequential EnKS filter cycle. For long DAWs, this means that the SIEnKS controls error growth in the ensemble simulation that accumulates over the long free forecast in the 4D analysis of the IEnKS.

Note how the cost of assimilation scales differently between the SIEnKS and the IEnKS when performing MDA. Both the IEnKS and the SIEnKS use the same weights $\eta_{k|L}$ and $\beta_{k|L}$ for their balancing and MDA stages. However, each stage of the IEnKS separately performs an iterative optimization of the 4D cost function. While each iteration therein requires only a single square root inverse calculation of the cost function Hessian, the iterative solution requires at least 2L total ensemble simulations in order to optimize and interpolate the estimates over the DAW. An efficient version of the scheme can be performed by using the same free ensemble simulation, initialized as in Eq. (89), in order to assimilate each of the observation sequences $\mathbf{y}_{L:1}^{\boldsymbol{\eta}_{L:1|L}}$ and $\mathbf{y}_{L:1}^{\boldsymbol{\beta}_{L:1|L}}$. However, the IEnKS additionally requires S total ensemble simulations in order to shift the DAW thereafter. This differs from the SIEnKS, which requires a fixed 2L ensemble simulations over the DAW. However, the computational barrier to the SIEnKS MDA scheme lies in the fact that it requires $2L - S$ square root inverse calculations, corresponding to each unique filter cost function solution over the two stages; in the case that MDA is combined with, e.g., the ensemble transform in the MLEF, this further grows to the sum of the number of iterations $\sum_{j=1}^{2L-S} i_j$, where $i_j$ iterations are used in the jth optimization of a filter cost function. However, when the cost of an ensemble simulation is sufficiently greater than the cost of the square root inverse in the ensemble dimension, the SIEnKS MDA scheme can substantially reduce the leading-order computational cost of ensemble variational smoothing with MDA, especially when S>1.
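As a rough illustration of these counts, the following sketch tallies the leading-order number of ensemble simulations and transform SVDs per MDA cycle; these are hypothetical bookkeeping helpers consistent with the counts quoted above, and the IEnKS tally treats the total number of 4D iterations over the two stages as a free parameter.

```julia
# Illustrative per-cycle bookkeeping for the MDA configurations discussed above.
# SIEnKS MDA: a fixed 2L ensemble simulations and 2L - S filter-transform SVDs;
# IEnKS MDA: roughly L simulations per 4D iteration plus S to shift the DAW,
# with one Hessian SVD per iteration (a lower bound of 2L + S simulations).
sienks_mda_cost(L, S) = (simulations = 2L, svds = 2L - S)
ienks_mda_cost(L, S, total_iterations) = (simulations = total_iterations * L + S, svds = total_iterations)

sienks_mda_cost(16, 4)      # (simulations = 32, svds = 28)
ienks_mda_cost(16, 4, 2)    # minimal case of one iteration per stage: (simulations = 36, svds = 2)
```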

4.4 Asynchronous data assimilation

In real-time prediction, fixed-lag smoothers with shifts S>1 are computationally more efficient in terms of reducing the number of smoother cycles necessary to traverse a time series of observations with sequential DAWs – versus a shift of one, the number of cycles necessary is reduced by a factor of S. A barrier to using the SIEnKS with S>1 lies in the fact that the sequential filter analysis of the EnKS does not, in and of itself, provide a means to asynchronously assimilate observations. However, the SIEnKS differs from the EnKS in numerically simulating the lagged states in the DAW. When one interpolates the posterior estimate with the dynamical model over the lagged states, one can easily revise the algorithm to assimilate any newly available data corresponding to a time within the past simulation window, though the weights in MDA need to be adjusted accordingly. One may even design methods that exclude observations and reintroduce them in a later DAW with a shift S>1. In the current work, the SIEnKS assimilates all observations synchronously, even with S>1. A systematic investigation of algorithms that would optimize this asynchronous assimilation in single-iteration smoothers goes beyond the scope of the current work. However, this key difference between the EnKS and the SIEnKS will be considered later.

5 Numerical benchmarks

5.1 Algorithm cost analysis

Fix the ensemble size $N_e$ in the following, and suppose that the cost of the nonlinear ensemble simulation over an interval of length $\Delta t$ is fixed and equal to C floating-point operations (flops). In order to compute the ensemble transform in any of the methods, we assume that the inversion of the approximate Hessian $\tilde{\boldsymbol{\Xi}}_{\tilde{\mathcal{J}}}$, and of its square root, is performed with an SVD-based approach at a cost of the order of $\mathcal{O}(N_e^3)$ flops. This assures stability and efficiency in the sense that the combined computation of $\mathbf{T} = \tilde{\boldsymbol{\Xi}}_{\tilde{\mathcal{J}}}^{-1/2}$, $\mathbf{T}^{-1} = \tilde{\boldsymbol{\Xi}}_{\tilde{\mathcal{J}}}^{1/2}$, and $\tilde{\boldsymbol{\Xi}}_{\tilde{\mathcal{J}}}^{-1}$ is dominated by the cost of the SVD of the symmetric $N_e \times N_e$ matrix $\tilde{\boldsymbol{\Xi}}_{\tilde{\mathcal{J}}}$. If a method is iterative, we denote the number of iterations used in the scheme with $i_j$, where the sub-index j distinguishes distinct iterative optimizations.
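For illustration, the sketch below computes the transform, its inverse, and the Hessian inverse from a single SVD of a symmetric, positive definite Hessian in the ensemble dimension; the function and variable names are illustrative only.

```julia
using LinearAlgebra

# Sketch of the SVD-based transform computation described above (illustrative).
function transform_from_hessian(Ξ::Symmetric)
    F = svd(Matrix(Ξ))                             # O(N_e^3), dominating the update cost
    T     = F.U * Diagonal(F.S .^ (-1 / 2)) * F.Vt # T = Ξ^{-1/2}
    T_inv = F.U * Diagonal(F.S .^ (1 / 2)) * F.Vt  # T^{-1} = Ξ^{1/2}
    Ξ_inv = F.U * Diagonal(1 ./ F.S) * F.Vt        # Ξ^{-1}
    return T, T_inv, Ξ_inv
end

# Illustrative usage with a random symmetric positive definite Hessian, N_e = 5
A = randn(5, 5)
Ξ = Symmetric(A * A' + 5I)
T, T_inv, Ξ_inv = transform_from_hessian(Ξ)
```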

A summary of how each of the EnKS, SIEnKS, and (Lin-)IEnKS scales in its numerical cost is presented in Tables 1 and 2. This analysis is easily derived from the pseudo-code in Appendix A and the discussion in Sect. 4. Table 1 presents schemes that are used in the SDA configuration, while Table 2 presents schemes that are used in the MDA configurations. Note that, while adaptive inflation in the finite size formalism can be used heuristically to estimate a power of the joint posterior, this has not been found to be fully compatible with MDA (Bocquet and Sakov, 2014), and this combination of techniques is not considered here.

Table 1Order of the SDA cycle flops for lag=L, shift=S, tuned inflation (TI), or adaptive inflation (AI)/nonlinear observation operator (NO).


For realistic geophysical models, note that the maximal ensemble size $N_e$ is typically of the order of $\mathcal{O}(10^2)$, while the state dimension $N_x$ can be of the order of $\mathcal{O}(10^9)$ (Carrassi et al., 2018); therefore, the cost of all algorithms is reduced to terms of $C \gg N_e^3$ at leading order in target applications. It is easy to see then that the EnKS/MLES has a cost that is of the order of the regular ETKF/MLEF filter cycle, representing the least expensive of the estimation schemes. Consider now, in row one of Table 1, that $i_1$ in the IEnKS represents the number of iterations utilized to minimize the 4D cost function. If we set $i_1 = 1$, then this represents the cost of the Lin-IEnKS. Particularly, we see that, for S=1 and a linear filter cost function, the Lin-IEnKS has the same cost as the SIEnKS. However, even in the case of a linear filter cost function, when S>1, the SIEnKS is more expensive than the Lin-IEnKS. If $i_1$ in Table 1 is set to terminate at a maximum possible value, the cost of the IEnKS is bounded at leading order; yet we demonstrate shortly that the number of iterations tends to be small in stable filter regimes.

Consider the case when the filter cost function is nonlinear, as when adaptive inflation is used (as defined in Sect. 4.2), or when there is a nonlinear observation operator. Row two of Table 1 shows how the cost of these estimators is differentiated when nonlinearity is introduced – particularly, the cost of the MLES and the SIEnKS requires one SVD calculation for each iteration used to process each new observation. This renders the SIEnKS notably more expensive than the Lin-IEnKS, which uses a single Hessian SVD calculation to process all observations globally. However, for target applications, such as synoptic-scale meteorology, the additional expense of iteratively optimizing filter cost functions with the SIEnKS versus the single iteration of the Lin-IEnKS in the 4D cost function is insignificant.

Table 2 describes the cost of the SIEnKS and the IEnKS using MDA when there is a linear observation operator and when there is a nonlinear observation operator. Recall that, at leading-order C, the cost of the SIEnKS is invariant in S. This again comes with the caveat that observations are assumed to be assimilated synchronously in this work, while the IEnKS assimilates observations asynchronously by default. Nonetheless, the equivalence between the first S-filter cycles in the balancing stage and the MDA stage in the SIEnKS allows the scheme to fix the leading-order cost at the expense of two passes over the DAW with the ensemble simulation.

Table 2Order of the MDA cycle flops for lag =L=Q×S, shift =S, tuned inflation, linear observation operator (LO), or nonlinear observation operator (NO).


5.2 Data assimilation benchmark configurations

To demonstrate the performance advantages and limitations of the SIEnKS, we produce statistics of its forecast/filter/smoother root mean square error (RMSE) versus the EnKS/Lin-IEnKS/IEnKS in a variety of DA benchmark configurations. Synthetic data are generated in a twin experiment setting, with a simulated truth twin generating the observation process. Define the truth twin realization at time $t_k$ as $\mathbf{x}_k^t$; we define the ensemble RMSE as follows:

$$\mathrm{RMSE}\left(\mathbf{E}_k^{i}\right) := \sqrt{\frac{\sum_{j=1}^{N_x} \left(\hat{x}_{j,k}^{i} - x_{j,k}^{t}\right)^2}{N_x}}, \tag{98}$$

where i refers to an ensemble label $i \in \{\mathrm{fore}, \mathrm{filt}, \mathrm{smth}, \mathrm{bal}, \mathrm{mda}\}$, j refers to the state dimension index $j \in \{1, \ldots, N_x\}$, and k refers to the time $t_k$ as usual.
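As an illustration of Eq. (98), a minimal sketch of the ensemble RMSE computation is given below; the names are illustrative and are not tied to the implementation in DataAssimilationBenchmarks.jl.

```julia
using Statistics

# Sketch of the ensemble RMSE in Eq. (98) (illustrative).
# E is an N_x × N_e ensemble matrix and x_truth is the truth twin state at the same time.
function ensemble_rmse(E::AbstractMatrix, x_truth::AbstractVector)
    x_hat = vec(mean(E, dims=2))            # ensemble mean
    sqrt(mean((x_hat .- x_truth) .^ 2))     # root mean square deviation from the truth
end
```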

A common diagnostic for the accuracy of the linear Gaussian approximation in the DA cycle is verifying that the ensemble RMSE has approximately the same order as the ensemble spread (Whitaker and Loughe1998), which is known as the spread–skill relationship; overdispersion and underdispersion of the ensemble both indicate the inadequacy of the approximation. Define the ensemble spread as follows:

$$\mathrm{spread}\left(\mathbf{E}_k^{i}\right) := \sqrt{\frac{1}{N_e - 1} \sum_{j=1}^{N_e} \frac{\left(\mathbf{X}_k^{i,j}\right)^{\top} \mathbf{X}_k^{i,j}}{N_x}}, \tag{99}$$

where i again refers to an ensemble matrix label, j in this case refers to the ensemble matrix column index, and k again refers to time. The spread is then given by the square root of the mean square deviation of the ensemble from its mean. Performance of these estimators will be assessed in terms of having low RMSE scores with the spread close to the value of the RMSE. Estimators are said to be divergent when either the filter or smoother RMSE is greater than the standard deviation of the observation errors, indicating that initializing a forecast with noisy observations is preferable to the posterior estimate.

The perfect hidden Markov model in this study is defined by the single-layer form of the Lorenz 96 equations (Lorenz, 1996). The state dimension is fixed at $N_x = 40$, with the components of the vector $\mathbf{x}$ given by the variables $x_j$ with periodic boundary conditions, $x_0 = x_{40}$, $x_{-1} = x_{39}$, and $x_{41} = x_1$. The time derivatives $\frac{\mathrm{d}\mathbf{x}}{\mathrm{d}t} := f(\mathbf{x})$, also known as the model tendencies, are given for each state component $j \in \{1, \ldots, 40\}$ by the following:

$$f_j(\mathbf{x}) = -x_{j-2} x_{j-1} + x_{j-1} x_{j+1} - x_j + F. \tag{100}$$

Each state variable heuristically represents the atmospheric temperature at one of the 40 longitudinal sectors discretizing a latitudinal circle of the Earth. The Lorenz 96 equations are not a physics-based model, but they mimic the fundamental features of geophysical fluid dynamics, including conservative convection, external forcing, and linear dissipation of energy (Lorenz and Emanuel1998). The term F is the forcing parameter that injects energy into the model, and the quadratic terms correspond to energy-preserving convection, while the linear term xj corresponds to dissipation. With F≥8, the system exhibits chaotic, dissipative dynamics; we fix F=8 in the following simulations, with the corresponding number of unstable and neutral Lyapunov exponents being equal to N0=14.

For a fixed $\Delta t$, the dynamical model $\mathcal{M}_k$ is defined by the flow map generated by the dynamical system in Eq. (100). Both the truth twin simulation, generating the observation process, and the ensemble simulation, used to sample the appropriate conditional density, are performed with a standard four-stage Runge–Kutta scheme with the step size h=0.01. This high-precision simulation is used for generating a ground truth for these methods, validating the Julia package DataAssimilationBenchmarks.jl (Grudzien et al., 2021), and testing its scalability; however, in general, h=0.05 should be of sufficient accuracy and is recommended for future use. The nonlinearity of the forecast error evolution is controlled by the length of the forecast window, $\Delta t$. A forecast length $\Delta t = 0.05$ corresponds to a 6 h atmospheric forecast, while, for $\Delta t > 0.05$, the level of nonlinearity in the ensemble simulation can be considered greater than that which is typical of synoptic-scale meteorology.
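For concreteness, a minimal sketch of the Lorenz 96 tendencies in Eq. (100) and of a forecast composed of four-stage Runge–Kutta steps is given below; this is an illustrative, stand-alone implementation rather than the one shipped with DataAssimilationBenchmarks.jl.

```julia
# Sketch of the Lorenz 96 tendencies in Eq. (100) with periodic boundary conditions
# and a standard four-stage Runge–Kutta integration (illustrative).
function lorenz96_tendency(x::AbstractVector, F::Real)
    N = length(x)
    dx = similar(x)
    for j in 1:N
        jm2, jm1, jp1 = mod1(j - 2, N), mod1(j - 1, N), mod1(j + 1, N)
        dx[j] = -x[jm2] * x[jm1] + x[jm1] * x[jp1] - x[j] + F
    end
    return dx
end

function rk4_step(x, h, F)
    k1 = lorenz96_tendency(x, F)
    k2 = lorenz96_tendency(x + 0.5h * k1, F)
    k3 = lorenz96_tendency(x + 0.5h * k2, F)
    k4 = lorenz96_tendency(x + h * k3, F)
    return x + (h / 6) * (k1 + 2k2 + 2k3 + k4)
end

function forecast(x, Δt, h, F)
    for _ in 1:round(Int, Δt / h)
        x = rk4_step(x, h, F)
    end
    return x
end

# One forecast of length Δt = 0.05 with step size h = 0.01 and forcing F = 8
x0 = 8.0 .+ 0.01 .* randn(40)
x1 = forecast(x0, 0.05, 0.01, 8.0)
```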

Localization, hybridization, and other standard forms of ensemble-based gain augmentation are not considered in this work for the sake of simplicity. Therefore, in order to control the growth of forecast errors under weakly nonlinear evolution, the rank of the ensemble-based gain must be equal to or greater than the number of unstable and neutral Lyapunov exponents $N_0 = 14$, corresponding to $N_e \geq 15$ (see Grudzien et al., 2018, and references therein). In the following experiments, we range the ensemble size as $N_e \in \{15 + 2i\}_{i=0}^{13}$, from the minimal rank needed without gain augmentation to a full-rank ensemble-based gain. When the number of experimental parameters expands, we restrict to the case where $N_e = 21$ for an ensemble-based gain of actual rank 20, making a reduced-rank approximation of the covariance in analogy to DA in geophysical models.

Observations are full dimensional, such that $N_y = N_x = 40$, and observation errors are distributed according to the Gaussian density $n(\mathbf{z} \,|\, \mathbf{0}, \mathbf{I}_{N_y})$, i.e., with mean zero, uncorrelated across state indices, and with homogeneous variances equal to one. When the observation map is linear, it is defined as $\mathbf{H}_k := \mathbf{I}_{N_x}$; when the observation map is taken to be nonlinear, we define the following:

$$\mathcal{H}(\mathbf{x}) := \frac{\mathbf{x}}{2} \circ \left[ \mathbf{1} + \left( \frac{|\mathbf{x}|}{10} \right)^{\gamma - 1} \right], \tag{101}$$

where $\circ$ above refers to the Schur (element-wise) product. This observation operator is drawn from Sect. 6.7.2.2 of Asch et al. (2016), where the parameter $\gamma$ controls the nonlinearity of the map. In particular, for $\gamma = 1$, this corresponds to the linear observation operator $\mathbf{H}_k$, while $\gamma > 1$ increases the nonlinearity of the map. When we vary the nonlinearity of the observation operator, we take $\gamma \in \{i\}_{i=1}^{11}$, corresponding to 10 different nonlinear settings and the linear setting for reference.
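A brief sketch of Eq. (101), applied element-wise, is given below; with γ = 1 the map reduces to the identity, consistent with the linear operator above. The function name is illustrative.

```julia
# Sketch of the nonlinear observation operator in Eq. (101), applied elementwise (illustrative).
nonlinear_obs(x::AbstractVector, γ::Real) = (x ./ 2) .* (1 .+ (abs.(x) ./ 10) .^ (γ - 1))

x = randn(40)
nonlinear_obs(x, 1) ≈ x    # γ = 1 recovers the linear (identity) observation map
```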

When tuned inflation is used to regularize the smoothers, as in Algorithm A4, we take a discretization range of $\lambda \in \{1.0 + 0.01 i\}_{i=0}^{10}$, corresponding to the usual Kalman update with $\lambda = 1.0$ and to up to 10 % inflation of the empirical variances with $\lambda = 1.1$. Using tuned inflation, estimator performance is selected for the minimum average forecast RMSE over the experiment for all choices of $\lambda$, unless this is explicitly stated otherwise. When adaptive inflation is used, no additional tuned inflation is utilized. Simulations using the finite size formalism are denoted with -N, following the convention of the EnKF-N. Multiple data assimilation is always performed with uniform weights, $\beta_{k|L} := \frac{1}{L}$, for all estimators.
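As a sketch of the tuned-inflation selection described above, the grid of inflation values and the selection by minimum average forecast RMSE might look as follows; `forecast_rmse` is a hypothetical stand-in for running a full benchmark at a given λ.

```julia
# Sketch of the tuned-inflation grid search (illustrative; forecast_rmse is a
# hypothetical stand-in for the average forecast RMSE of a full experiment).
λ_grid = [1.0 + 0.01i for i in 0:10]     # λ ∈ {1.00, 1.01, …, 1.10}
forecast_rmse(λ) = (λ - 1.04)^2 + 0.2    # hypothetical response surface
λ_opt = λ_grid[argmin([forecast_rmse(λ) for λ in λ_grid])]
```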

For the IEnKS, we limit the maximum number of iterations per stage to $i_j = 10$ for $j = 1, 2$. Therefore, the IEnKS can take a maximum of $i_1 + i_2 = 20$ iterations in the MDA configuration to complete a cycle. When iteratively optimizing the filter cost function in the MLES(-N)/SIEnKS(-N), the maximum number of iterations is capped at $i_j = 40$ per analysis. The tolerance for the stopping condition in the filter cost functions is set to $10^{-4}$, while the tolerance for the 4D estimates is set to $10^{-3}$. However, the scores of the algorithms are, to a large extent, insensitive to these particular hyperparameters.

In order to capture the asymptotically stationary statistics of the filter/forecast/smoother processes, we take a long time average of the RMSE and spread over the time indices k. The long experiment average ensures that, for an ergodic dynamical system, we average over the spatial variation in the attractor, and we account for variations in the observation noise realizations that may affect the estimator performance. So that the truth twin simulates observations on the attractor, it is simulated for an initial spinup of $5 \times 10^3$ analysis times before observations are given. Let the time be given as $t_0$ after this initial spinup. Observations are generated identically for all estimators, using the same Gaussian error realizations at a given time to perturb the observation map of the truth twin. At time $t_0$, the ensemble is initialized identically for all estimators (depending on the ensemble size) with the same iid sample drawn from the multivariate Gaussian with mean at the truth twin $\mathbf{x}_0^t$ and covariance equal to the identity $\mathbf{I}_{N_x}$. All estimation schemes are subsequently run over observation times indexed as $\{t_k\}_{k=1}^{2.5 \times 10^4}$. As the initial warmup of the estimators' statistics from this cold start tends to differ from the asymptotically stationary statistics, we discard the forecast/filter/smoother RMSE and spread corresponding to the observation times $\{t_k\}_{k=1}^{5 \times 10^3}$, taking the time average of these statistics for the remaining $2 \times 10^4$ analysis time indices. Particularly, this configuration is sufficient to represent estimator divergence, which may have a delayed onset.

Forecast statistics are computed for each estimator whenever the ensemble simulates a time index $t_k$ for the first time, before $\mathbf{y}_k$ has been assimilated into the estimate. Filter statistics are computed in the first analysis at which the observation $\mathbf{y}_k$ is assimilated into the simulation. For the (Lin-)IEnKS, with S>1, this filter estimate includes the information from all observations $\mathbf{y}_{L:L-S+2}$ when making a filter estimate for the state at $t_{L-S+1}$. Smoother statistics are computed for the time indices $t_0, t_1, \ldots, t_{S-1}$ in each cycle, corresponding to the final analysis for these states before they are discarded from subsequent DAWs. Empty white blocks in heat plots correspond to Inf (non-finite) values in the simulation data. Missing data occur due to numerical overflow when attempting to invert a close-to-singular cost function Hessian $\tilde{\boldsymbol{\Xi}}_{\tilde{\mathcal{J}}}$, which is a consequence of the collapse of the ensemble spread. When an estimator suffers this catastrophic filter divergence, the experiment output is replaced with Inf values to indicate the failure. Other benchmarks for the EnKS/Lin-IEnKS/IEnKS in the Lorenz 96 model above can be found in, e.g., Bocquet and Sakov (2014), Asch et al. (2016), and Raanes et al. (2018), which are corroborated here with similar but slightly different configurations.

5.3 Weakly nonlinear forecast error dynamics – linear observations

We fix $\Delta t = 0.05$ in this section, set S=1, and use the linear observation operator in order to demonstrate the baseline performance of the estimators in a simple setting. On the other hand, we vary the lag length, the ensemble size, and the use of tuned/adaptive inflation or MDA. The lag in this section is varied on a discretization of $L \in \{1 + 3i\}_{i=0}^{30}$. As a first reference simulation, consider the simple case where all schemes use tuned covariance inflation, so that the SIEnKS and the Lin-IEnKS here are formally equivalent. Likewise, with S=1, there is no distinction between asynchronous and synchronous DA. Figure 6 makes a heat plot of the forecast/filter/smoother RMSE and spread as the lag length L is varied along with the ensemble size $N_e$.

https://gmd.copernicus.org/articles/15/7641/2022/gmd-15-7641-2022-f06

Figure 6The lag length L is shown on the vertical axis, and the ensemble size Ne is shown on the horizontal axis. SDA, tuned inflation, shift S=1, linear observations, and Δt=0.05 are also indicated.


https://gmd.copernicus.org/articles/15/7641/2022/gmd-15-7641-2022-f07

Figure 7Cross section of Fig. 6 at the ensemble size Ne=21.


It is easy to see the difference in the performance between the EnKS and the iterative S/Lin-/IEnKS schemes. Particularly, the forecast and filter RMSE does not change with respect to the lag length in the EnKS, as these statistics are generated independently of the lag with a standard ETKF filter cycle. However, the smoother performance of the EnKS does improve with longer lags, without sacrificing stability over a long lag as in the iterative schemes. In particular, all of the iterative schemes use the dynamical model to interpolate the posterior estimate over the DAW. For sufficiently large L, this becomes unstable due to the small simulation errors that are amplified by the chaotic dynamics. The scale of the color map is capped at 0.30, as a more accurate forecast/filter performance can be attained in this setting with the ETKF alone, as demonstrated by the EnKS.

On the other hand, the iterative estimate of the posterior, as in the S/Lin-/IEnKS in this weakly nonlinear setting, shows a dramatic improvement in the predictive and analysis accuracy for a tuned lag length. Unlike the standard ETKF observation/analysis/forecast cycle, these iterative smoothers are able to control the error growth in the neutral Lyapunov subspace corresponding to the N0=14th Lyapunov exponent. With the ensemble size Ne=15 corresponding to a rank 14 ensemble-based gain, the iterative smoothers maintain stable prediction and posterior estimates over a wide range of lags while the EnKS diverges for all lag settings. We notice that the stability regions of the S/Lin-/IEnKS are otherwise largely the same in this simple benchmark configuration, though the IEnKS has a slightly longer stability over long lags with low sample sizes.

In order to illustrate the difference in accuracy between the iterative schemes and the non-iterative EnKS, Fig. 7 plots a cross section of Fig. 6 for Ne=21. The iterative schemes have almost identical performance until approximately a lag of L≈37, at which point all schemes become increasingly unstable. The differences shown between the iterative schemes here are insignificant and may vary between different implementations of these algorithms or pseudo-random seeds. We note that all estimators are also slightly overdispersive due to selecting a tuned inflation value based on the minimum forecast RMSE rather than balancing the RMSE and spread simultaneously. Nonetheless, we clearly demonstrate how all iterative estimators reduce the prediction and analysis error over the noniterative EnKS approach. Tuning the lag L, the forecast error for the iterative schemes is actually lower than the filter error in the EnKS.

https://gmd.copernicus.org/articles/15/7641/2022/gmd-15-7641-2022-f08

Figure 8The lag length L is shown on the vertical axis, and the ensemble size Ne is shown on the horizontal axis. SDA, adaptive inflation, shift S=1, linear observations, and Δt=0.05 are also indicated.


https://gmd.copernicus.org/articles/15/7641/2022/gmd-15-7641-2022-f09

Figure 9Cross section of Fig. 8 at the ensemble size Ne=21.


Consider the case where the filter cost function is nonlinear due to the adaptive inflation scheme. Figure 8 makes the same heat plot as in Fig. 6 but where the finite size formalism is used instead of tuned inflation. All schemes tend to have slightly weaker performance in this setting, except for the IEnKS-N in the low-ensemble-size regime. The adaptive inflation scheme is designed to account for the sampling error due to a small ensemble size and for nonlinearity in the forecast error dynamics, which is typical of mid-range forecasts. Its efficacy is illustrated here, as the scheme is most effective precisely when a small ensemble size and nonlinear forecast error dynamics are present. Note that the Lin-IEnKS-N uses a single iteration of the extended 4D cost function, optimizing both the weights for the initial condition and the inflation hyperparameter simultaneously. On the other hand, while the SIEnKS-N makes a single iteration of the ensemble simulation over the DAW, it iteratively optimizes the adaptive inflation hyperparameter in the filter cost function. This allows the SIEnKS-N to make substantial improvements over the Lin-IEnKS-N in terms of the stability region while remaining at the same leading-order cost.

Figure 9 plots a cross section of Fig. 8 at Ne=21 in order to further demonstrate the improved accuracy of the forecast/filter/smoother statistics of the SIEnKS-N versus the Lin-IEnKS-N. For a tuned lag L, the Lin-IEnKS-N fails to achieve distinctly better forecast and filter accuracy than the EnKS-N. While the smoother RMSE for the Lin-IEnKS-N does make an improvement over the EnKS-N, this improvement is not comparable to the smoother accuracy of the SIEnKS-N, which has the same leading-order cost. The performance of the SIEnKS-N is almost indistinguishable from the 4D IEnKS-N up to a lag of L≈25. At this point, the stability of the SIEnKS-N begins to suffer, while, on the other hand, the IEnKS-N is able to improve smoother RMSE for slightly longer lags. Nonetheless, both the SIEnKS-N and the IEnKS-N become increasingly underdispersive for lags L≥25, demonstrating the systematic underestimation of the estimator's uncertainty that leads to divergence for sufficiently large L.

We now demonstrate how MDA relaxes the nonlinearity of the MAP estimation and the interpolation of the posterior estimate over the DAW. Recall that MDA is handled differently in the SIEnKS from the 4D schemes because the 4D approach interpolates the DAW with the balancing estimate from a free forecast, while the SIEnKS interpolates the posterior estimate via a sequence of filter analyses steps using the balancing weights. Recall that, for target applications, the SIEnKS is the least expensive MDA estimator, requiring only 2L ensemble simulations in this configuration, while the (Lin-)IEnKS uses at least 2L+1. Figure 10 presents the same experiment configuration as in Figs. 6 and 8 but where MDA is used with tuned inflation. The EnKS does not use MDA, but the results from Fig. 6 are presented here for reference.

https://gmd.copernicus.org/articles/15/7641/2022/gmd-15-7641-2022-f10

Figure 10The lag length L is shown on the vertical axis, and the ensemble size Ne is shown on the horizontal axis. MDA, tuned inflation, shift S=1, linear observations, and Δt=0.05 are indicated. The EnKS SDA results are presented here for reference.


https://gmd.copernicus.org/articles/15/7641/2022/gmd-15-7641-2022-f11

Figure 11Cross section of Fig. 10 at the ensemble size Ne=21.


It is easy to see that MDA improves all of the iterative smoothing schemes in Fig. 10, with greatly expanded stability regions from Fig. 6. Moreover, a key new pattern emerges that differentiates the traditional 4D MDA approach and the new MDA scheme in the SIEnKS. In particular, while the stability regions for the SIEnKS/(Lin-)IEnKS are similar for their smoother statistics in this configuration, the forecast/filter statistics are strongly differentiated. Unlike the free forecast solution used to interpolate the posterior estimate over the DAW in the 4D approach, the filter step within the SIEnKS MDA controls the simulation errors that accumulate when L is large.

In order to examine the effect more precisely, consider the cross section of Fig. 10 for $N_e = 21$ presented in Fig. 11. Notice that all iterative MDA estimators have almost indistinguishable performance until a lag of L≈31. From this point, although the smoother accuracy increases with longer lags for the (Lin-)IEnKS, this comes at a sacrifice in the forecast/filter accuracy. Particularly, for lags L≥31, the forecast/filter accuracy of the (Lin-)IEnKS begins to degrade; at a lag of L≈61, the IEnKS performs worse than the EnKS, while the Lin-IEnKS has diverged. This is in stark contrast to the SIEnKS: not only does its forecast/filter accuracy remain stable for lags L≥40, but each of these statistics also improves along with the smoother accuracy until a lag of L≈61. Furthermore, the spread of the SIEnKS indicates that the perfect linear Gaussian approximation underlying the SIEnKS MDA scheme is well satisfied, with the ensemble dispersion very close to the RMSE within the stability region.

https://gmd.copernicus.org/articles/15/7641/2022/gmd-15-7641-2022-f12

Figure 12MDA configuration. RMSE and spread versus the ensemble size Ne. Lag and inflation are optimized for a minimum forecast RMSE in Fig. 10.


https://gmd.copernicus.org/articles/15/7641/2022/gmd-15-7641-2022-f13

Figure 13MDA configuration. RMSE and spread versus the ensemble size Ne. Lag and inflation are optimized for a minimum smoother RMSE in Fig. 10.


The SIEnKS thus highlights a performance tradeoff of the 4D MDA schemes that it does not suffer from itself. In particular, suppose that the lag L in Fig. 10 is selected in order to optimize each estimator's accuracy, in terms of RMSE, for each fixed ensemble size Ne. One can optimize the lag L using the forecast RMSE or the smoother RMSE as the criterion. However, Fig. 11 indicates that L may be quite different for the forecast accuracy versus the smoother accuracy in the 4D schemes. Figures 12 and 13 demonstrate this tradeoff precisely, where the former plots the RMSE and spread, with lag and inflation simultaneously optimized for forecast accuracy, and the latter is optimized for smoother accuracy.

Tuning for optimum forecast RMSE, as in Fig. 12, the performance of the SIEnKS/(Lin-)IEnKS for any fixed Ne is indistinguishable with respect to this metric. On the other hand, the SIEnKS strongly outperforms the Lin-IEnKS and even exhibits a slightly better overall stability and accuracy than the IEnKS across the range of ensemble sizes. The difference in performance is more pronounced when tuning for the minimal smoother RMSE in Fig. 13. Again, the three estimators are indistinguishable in their smoother estimates, but the SIEnKS forms high-precision smoother estimates without sacrificing its predictive accuracy while interpolating the solution over long lags.

https://gmd.copernicus.org/articles/15/7641/2022/gmd-15-7641-2022-f14

Figure 14Iterations per cycle versus lag L on the vertical axis and ensemble size Ne on the horizontal axis. The mean (top panel) and standard deviation (bottom panel) of the iterations used per cycle from simulations, generating Figs. 6, 8, and 10, are presented.


Using MDA or adaptive inflation in DA cycles with weakly nonlinear forecast error dynamics, we demonstrate how the SIEnKS greatly outperforms the Lin-IEnKS with the same, or lower, leading-order cost. The SIEnKS MDA scheme also outperforms the IEnKS MDA scheme with less cost, but the 4D IEnKS-N is able to extract additional accuracy over the SIEnKS-N at the cost of L additional ensemble simulations per iteration. Therefore, it is worth considering the statistics on the number of iterations that the IEnKS uses in each of the above-studied configurations. Figure 14 shows a heat plot for the mean and the standard deviation of the number of iterations used per cycle for each of the IEnKS with SDA, IEnKS-N, and IEnKS with MDA to optimize the 4D cost function. Notice that, in the MDA configuration, the mean and the standard deviation is computed over the two stages of the IEnKS, accounting for both the balancing and MDA 4D cost functions.

Although the number of possible iterations is bounded below by one in the case of SDA and by two in the case of MDA, the frequency distribution of the total iterations is not especially skewed within the stability region of the IEnKS. This is evidenced by the small standard deviation, less than or equal to one, over the stability region for the scheme. Particularly, the IEnKS typically stabilizes around (i) three iterations in the SDA, tuned inflation configuration, (ii) three to four iterations in the SDA, adaptive inflation configuration, and (iii) six to eight iterations in the MDA, tuned inflation configuration. Therefore, the SIEnKS is shown to make a reduction of, on average, (i) 2L, (ii) 2L to 3L, or (iii) 4L to 6L ensemble simulations per cycle versus the IEnKS. While this is unremarkable for the SDA, tuned inflation configuration, where the Lin-IEnKS performs similarly, this demonstrates a strong performance advantage of the SIEnKS in its target application, i.e., in settings with weakly nonlinear forecast error dynamics and other sources of nonlinearity dominating the DA cycle. This is an especially profound reduction for the MDA configuration, where the SIEnKS MDA scheme proves to be both the least expensive and the most stable/accurate estimator by far.

5.4 Weakly nonlinear forecast error dynamics – nonlinear observations

A primary motivating application for the SIEnKS is the scenario where the forecast error dynamics is weakly nonlinear but where the observation operator is weakly to strongly nonlinear. There are infinitely many ways in which nonlinearity in the observation operator can be expressed, and the results are expected to depend strongly on the particular operator. In the following, we consider the operator in Eq. (101) for the ability to tune the strength of this effect with the parameter $\gamma$. In order to avoid conflating the effect of the nonlinearity in the hyperparameter optimization with the nonlinearity in the observation operator, we suppress adaptive inflation in this section. Both SDA and MDA schemes are considered in order to compare how MDA can be used to temper the effects of local minima in the MAP estimation with a nonlinear observation operator. We again choose $\Delta t = 0.05$ to maintain weakly nonlinear forecast error dynamics. We restrict to $N_e = 21$ as we expand the experimental parameters to include $\gamma$. The lag is varied as $L \in \{1 + 3i\}_{i=0}^{27}$.

Figure 15 demonstrates the effect of varying the nonlinearity in the observation operator, where strong differences once again emerge between the retrospective analysis of the MLES and the iterative schemes. The scale of the color map is raised to a maximum of 0.5, as a better performance can be achieved with the MLEF alone, as demonstrated by the MLES. In the MLES, the forecast and analysis error increases almost uniformly in γ, but a very different picture emerges for the iterative smoothers. While the stability regions of the iterative schemes tend to shrink for larger γ, the accuracy of the estimators changes non-monotonically. Moreover, iteratively optimizing the filter cost function in the SIEnKS or the 4D cost function in the IEnKS does not in and of itself guarantee a better performance than the Lin-IEnKS, due to the increasing presence of local minima. Particularly for the SIEnKS and the IEnKS with highly nonlinear observations, this optimization can also become deleterious to the estimator performance, with evidence of instability and catastrophic divergence in these regimes.

https://gmd.copernicus.org/articles/15/7641/2022/gmd-15-7641-2022-f15

Figure 15Lag length L on the vertical axis and nonlinearity parameter γ on the horizontal axis. SDA, tuned inflation, shift S=1, Ne=21, and Δt=0.05 are indicated.


In Fig. 16, we repeat the experimental configuration of Fig. 15, with the exception of using the MDA configuration. As seen in Fig. 10, MDA greatly extends the forecast/filter accuracy of the SIEnKS over the 4D schemes. Multiple data assimilation in this context additionally weakens the effect of the assimilation update step, smoothing the cost function contours and expanding the stability regions of all estimators.

https://gmd.copernicus.org/articles/15/7641/2022/gmd-15-7641-2022-f16

Figure 16Lag length L is shown on the vertical axis, and the nonlinearity parameter γ is shown on the horizontal axis. MDA, tuned inflation, shift S=1, Ne=21, and Δt=0.05 are indicated. The MLES SDA results are presented here for reference.


Figure 17 presents tuned results from Fig. 16, where the lag and inflation are simultaneously optimized for the minimal forecast RMSE. In this context, we clearly see how the effect of varying γ is non-monotonic on the estimator accuracy for the iterative schemes. However, important differences also emerge in this configuration between the SIEnKS and the (Lin-)IEnKS. While the forecast and filter accuracy of these schemes remains indistinguishable for γ≤7, the smoother RMSE of the SIEnKS is almost uniformly lower than these other schemes for all γ. Interestingly, the degradation of the performance of the IEnKS for highly nonlinear observations, γ≥8, does not extend to either of the Lin-IEnKS or the SIEnKS in the MDA configuration. Whereas the iterative optimization of the 4D cost function becomes susceptible to the effects of the local minima with large γ, the Lin-IEnKS remains stable for the full window of the γ presented here. Moreover, the SIEnKS demonstrates significantly improved smoother accuracy over the Lin-IEnKS while remaining at a lower leading-order cost. This suggests that the sequential MDA scheme of the SIEnKS is better equipped to handle highly nonlinear observation operators than the 4D formalism, which appears to suffer from a greater number of local minima.

https://gmd.copernicus.org/articles/15/7641/2022/gmd-15-7641-2022-f17

Figure 17MDA configuration. RMSE and spread versus γ. Lag and inflation are optimized for a minimum forecast RMSE.


5.5 Weakly nonlinear forecast error dynamics – lag versus shift

Even for a linear observation operator and tuned inflation, a shift S>1 distinguishes the performance of each of the studied estimators. In this section, we fix $\Delta t = 0.05$, corresponding to weakly nonlinear forecast error dynamics, and we vary $L, S \in \{2, 4, 8, 16, 32, 48, 64, 80, 96\}$ to demonstrate these differences. For the iterative schemes, we only consider combinations of L divisible by S for compatibility with the MDA schemes. The EnKS is defined for arbitrary S<L, and all such configurations are presented for reference.

Recall the qualification that the EnKS and SIEnKS are designed to assimilate observations sequentially and synchronously in this work, whereas the (Lin-)IEnKS assimilates observations asynchronously by default. When S=1, there is no distinction between asynchronous and synchronous assimilation, but in this section this distinction must be borne in mind. Likewise, recall that, for the (Lin-)IEnKS with a shift S>1, filter statistics are computed including the information from all observations $\mathbf{y}_{L:L-S+1}$ when making a filter estimate for the states at times $t_{L-S+1}, \ldots, t_L$. This arises from the asynchronous design of the IEnKS, whereas filter statistics are computed sequentially, without future information, in the SIEnKS.

https://gmd.copernicus.org/articles/15/7641/2022/gmd-15-7641-2022-f18

Figure 18Lag length L on the vertical axis and shift S on the horizontal axis. SDA, tuned inflation, linear observations, ensemble size Ne=21, and Δt=0.05 are indicated.


Figure 18 presents the heat plot of RMSE and spread for each estimator in the SDA configuration. We note that the EnKS for a fixed L has a performance that is largely invariant with respect to changes in S, except for the special case where S=L. In this case, the non-overlapping DAWs impose that posterior estimates are constructed with fewer observations conditioning the final estimate than in overlapping DAWs. Otherwise, the stability regions of the iterative schemes are largely the same, with the SIEnKS only achieving a slight improvement over the Lin-IEnKS and the IEnKS only slightly improving on the SIEnKS.

https://gmd.copernicus.org/articles/15/7641/2022/gmd-15-7641-2022-f19

Figure 19Lag length L on the vertical axis and shift S on the horizontal axis. MDA, tuned inflation, linear observations, ensemble size Ne=21, and Δt=0.05 are indicated. The EnKS SDA results are presented here for reference.


The SDA configuration is contrasted with Fig. 19, where we again see the apparent strengths of the SIEnKS MDA scheme. When MDA is introduced, all iterative schemes increase their respective stability regions to include longer lags and larger shifts in the DAW simultaneously. However, the SIEnKS has the largest stability region of all iterative estimators, extending to shifts at least as large as the other schemes for every lag setting. Likewise, the earlier distinction between the forecast and filter statistics of the SIEnKS and the 4D schemes is readily apparent. Not only does the stability region of the SIEnKS improve over the other schemes, but it is also generally more accurate in its predictive statistics at the end of long lag windows.

https://gmd.copernicus.org/articles/15/7641/2022/gmd-15-7641-2022-f20

Figure 20MDA configuration. RMSE and spread versus shift S. Lag L optimized for a minimum forecast RMSE in Fig. 19.


In order to obtain a finer picture of the effect of varying the shift S, we tune the lag and inflation simultaneously for each estimator for its minimal forecast RMSE when given a fixed shift; we plot the results of this tuning in Fig. 20. Given that all iterative estimators uniformly diverge for a shift S≥32, we only plot results for shifts in the range $\{2^i\}_{i=0}^{4}$. Several important features stand out in this plot. First, note that, while optimizing the lag, the performance of the SIEnKS is almost invariant in the shift, similar to the performance of the EnKS. This is because the sequential filter analysis of the SIEnKS constrains the growth of the filter and forecast errors as the DAW shifts. Indeed, the prediction of states at times $t_{L-S+1}, \ldots, t_L$ arises from a filter ensemble at the previous time point. In the MDA scheme, the balancing weights for the observations of these newly introduced states in the DAW are, furthermore, all equal to one and equivalent to a standard ETKF filter analysis.

Second, note that the filter estimates of the (Lin-)IEnKS actually improve with larger shifts; however, this is an artifact of computing the filter statistics over all times tL-S+1,,tL and using the observations yL:L-S+1 simultaneously. This means that the filter estimates for all times except tL actually contain future information. This is contrasted with the sequential analyses of the EnKS and the SIEnKS, which only produce filter statistics with observations from past and current times.

Third, note that the Lin-IEnKS, while maintaining a similar prediction and filtering error to the IEnKS, is less stable and performs almost uniformly less accurately than the IEnKS in its smoothing estimates. The SIEnKS, moreover, tends to exhibit a slight improvement in stability and accuracy over the IEnKS therein.

Finally, it is immediately apparent how S>1 strongly increases the prediction error for the 4D estimators. The longer free forecasts for S>1, used to shift the DAW, accumulate errors such that, for S≥16, the Lin-IEnKS actually experiences filter divergence. The difference in the estimators' performances is once again a consequence of how observations are assimilated synchronously as in the EnKS/SIEnKS or asynchronously by default in the (Lin-)IEnKS.

Bearing all the above qualifications in mind, we analyze the performance of the estimators while varying the shift S. First, for all experimental settings, the leading-order cost of the SIEnKS MDA scheme is fixed at 2L ensemble simulations, whereas for the other schemes the minimal cost is at 2L+S ensemble simulations. For configurations where S>1, the SIEnKS thus makes a dramatic cost reduction versus the other schemes in this aspect alone, requiring fewer ensemble simulations per cycle. We consider that the leading-order cost for the Lin-IEnKS is similar to the SIEnKS for S=1, requiring only one more ensemble simulation per cycle. However, the SIEnKS with a shift S=16 maintains a prediction and smoother error that is comparable to the Lin-/IEnKS for a shift of S=1. This implies that the SIEnKS can maintain a performance similar to the S=1 IEnKS MDA scheme, while using 1/16 of the total cycles needed by the IEnKS to pass over the same observations in real time. If we assume that the observations can be assimilated synchronously, then the above SIEnKS MDA scheme is thus able to run in its EnKS cycle over a long time series of observations while needing infrequent reinitialization with its smoothed estimates. For a real-time forecast cycle, where the computational cost/prediction accuracy tradeoff is the most important consideration, this once again demonstrates how the SIEnKS can balance this tradeoff, performing as well as, and often better than, 4D estimators with a substantially lower leading-order cost. Not only is each cycle less expensive in the SIEnKS than in the (Lin-)IEnKS, but the SIEnKS reduces the number of required cycles by an order of magnitude.

5.6 Strongly nonlinear forecast error dynamics – lag versus Δt

In all other numerical benchmarks, we focus on the scenario that the SIEnKS is designed for, i.e., DA cycles in which the forecast error evolution is weakly nonlinear. In this section, we demonstrate the limits of the SIEnKS when the forecast error dynamics dominates the nonlinearity of the DA cycle. We vary $\Delta t \in \{0.05 \times i\}_{i=1}^{10}$, while the ensemble size $N_e = 21$ and the shift S=1 are fixed. The lag is varied as $L \in \{1 + 3i\}_{i=0}^{17}$. We neglect the nonlinear observation operators in this section, though we include the finite size adaptive inflation formalism, which is itself designed to ameliorate the increasing nonlinearity in the forecast error dynamics. Single data assimilation and MDA configurations are considered for the iterative schemes as usual.

Figure 21 demonstrates the effect of the increasing nonlinearity of the forecast error evolution with tuned inflation. Due to the extreme nonlinearity for large Δt, we raise the heat map scale for the RMSE and spread to 1.0. Several features become apparent with the increasing forecast nonlinearity. First, the EnKS, which has a performance dependent on the standard ETKF cycle, is fully divergent for Δt≥0.2. This is in contrast with all iterative schemes which maintain adequate performance for Δt≤0.25. We note that the performance of the SIEnKS and the Lin-IEnKS, in this first scenario, is nearly identical; this corresponds to the fact that they are formally equivalent in this setting. However, appropriately, it is the 4D IEnKS that maintains the most stable and accurate performance over the range of forecast lengths. Indeed, this demonstrates the precise benefit of the iterative solution to 4D cost function for moderately nonlinear, non-Gaussian DA.

Figure 21. Lag length L on the vertical axis and Δt on the horizontal axis. SDA, tuned inflation, and ensemble size Ne = 21 are indicated.

In Fig. 22, we repeat the same experiments as in Fig. 21 but using the finite size adaptive inflation, rather than tuned inflation, for each estimator. Once again, the efficacy of the finite size formalism in ameliorating the nonlinearity of the forecast error dynamics is demonstrated. In particular, all schemes except the SIEnKS see an improvement in their stability region and often in their accuracy. The EnKS-N strongly outperforms the tuned-inflation EnKS, extending adequate filter performance as far as Δt ≤ 0.35. Likewise, the IEnKS-N has a strongly enhanced stability region, though it increasingly suffers from catastrophic filter divergence outside of this zone. Notably, whereas the SIEnKS-N outperformed the Lin-IEnKS-N for Δt = 0.05, the Lin-IEnKS-N generally yields better performance for moderately to strongly nonlinear forecast error dynamics. Indeed, the finite size formalism appears to become incompatible with the design of the SIEnKS for strongly nonlinear forecast error dynamics, as suggested by the widespread ensemble collapse and catastrophic divergence.

Figure 22. Lag length L on the vertical axis and Δt on the horizontal axis. SDA, adaptive inflation, and ensemble size Ne = 21 are indicated.

As a final experimental configuration, we consider how MDA performs under the increasing nonlinearity of the forecast error dynamics. Figure 23 shows the performance of these estimators in the MDA configuration with tuned inflation, with the SDA results of the EnKS pictured for reference. We see the usual increase in the estimators' stability regions over the SDA configuration. However, the improvement of the SIEnKS over the Lin-IEnKS is marginal to nonexistent for moderately to strongly nonlinear forecast error dynamics. The 4D IEnKS, furthermore, is again the estimator with the largest stability region and greatest accuracy over a wide range of Δt.

Figure 23. Lag length L on the vertical axis and Δt on the horizontal axis. MDA, tuned inflation, and ensemble size Ne = 21 are indicated.

The results in this section indicate that, while the SIEnKS is very successful for weakly nonlinear forecast error dynamics, the approximations used in this estimator depend strongly on the source of nonlinearity in the DA cycle. In particular, when the nonlinearity of the forecast error dynamics dominates the DA cycle, the approximations of the SIEnKS break down. In this regime, it is preferable to use the Lin-IEnKS, or the IEnKS with a low iteration threshold, rather than the SIEnKS. Notably, as the finite size inflation formalism is designed for a scenario different from that of the SIEnKS, one may instead consider designing adaptive covariance inflation in a way that exploits the design principles of the SIEnKS. Such a study goes beyond the scope of this work and is left for future research.

6 Conclusions

In this work, we achieve three primary objectives. First, we provide a review of sequential, ensemble variational Kalman filters and smoothers with perfect model assumptions within the Bayesian MAP formalism of the IEnKS. Second, we rigorously derive our single-iteration formalism as a novel approximation of the Bayesian MAP estimation, explaining how this relates to other well-known smoothing schemes and how its design is differentiated in a variety of contexts. Third, using the numerical framework of DataAssimilationBenchmarks.jl (Grudzien et al.2021), we extensively demonstrate how the SIEnKS has a unique advantage in balancing the computational cost/prediction accuracy tradeoff in short-range forecast applications. Pursuant to this, we provide a cost analysis and pseudo-code for all of the schemes studied in this work, in addition to the open-source implementations available in the supporting Julia package. Together, this work provides a practical reference for a variety of topics at the state of the art in ensemble variational Kalman smoothing.

The rationale of the SIEnKS is, once again, to efficiently perform Bayesian MAP estimation in real-time, short-range forecast applications where the forecast error dynamics is weakly nonlinear. Our central result is the novel SIEnKS MDA scheme, which not only improves forecast accuracy and analysis stability in this regime but also simultaneously reduces the leading-order cost versus the traditional 4D MDA approach. This MDA scheme is demonstrated to produce significant performance advantages in the simple setting of a linear observation operator, especially when the shift S can be taken greater than one. Not only is each cycle of the SIEnKS MDA scheme significantly less expensive than that of the other estimators for S > 1, but its performance also tends to be invariant as S varies. This crucial property means that one can, in principle, reduce the number of cycles actually needed by the estimator to produce forecasts in real time. Our scheme also appears better equipped than the 4D MDA estimation to handle highly nonlinear observation operators, as it maintains greater accuracy and is more robust to the effects of local minima. Separately, we find that, in our target regime, the single-iteration formalism is cost-effective for optimizing hyperparameters of the estimation scheme, as with the SIEnKS-N.

The above successes of the SIEnKS come with the following three important qualifications: (i) we have focused on synchronous DA, assuming that we can sequentially assimilate observations before producing a prediction step, (ii) we have not studied localization or hybridization, which are widely used in ensemble-based estimators to overcome the curse of dimensionality for realistic geophysical models, and (iii) we have relied upon the perfect model assumption, whereas realistic forecast settings include significant modeling errors. These restrictions come by necessity, to limit the scope of an already lengthy study. However, we note that the SIEnKS is capable of asynchronous DA, as already discussed in Sect. 4.4. Likewise, it is possible that some of the issues faced by the IEnKS in integrating localization/hybridization (Bocquet2016) may actually be ameliorated by the design principles of the SIEnKS. Domain localization, as in the LETKF (Hunt et al.2007; Sakov and Bertino2011), is likely to have a satisfactory extension to the SIEnKS, where this may be applied directly in the filter step as usual. Assuming that the ensemble forecast dynamics is not highly nonlinear, the spatial correlations defining the observation domain truncation for the initial ensemble at t0 may, furthermore, be well approximated by the domains from the filter step but mapped by a linear, reverse-time evolution over the DAW via an explicit or implicit adjoint model. Experiments suggest that a tuned radius for a smoother domain localization can be implemented successfully in an EnKS analysis (Nerger et al.2014). However, there are also rich opportunities to iteratively optimize a localization hyperparameter as with, e.g., the α trick (Lorenc2003) within the SIEnKS framework. Similarly, it is possible that an extension of the single-iteration formalism could provide a novel alternative to other iterative ensemble smoothers designed for model error, such as the IEnKS-Q (Sakov et al.2018; Fillion et al.2020), EnKS expectation maximization schemes (Pulido et al.2018), or the family of OSA smoothers (Ait-El-Fquih and Hoteit2022).

For the reasons above, this initial study provides a number of directions in which our single-iteration formalism can be extended. Localization and hybridization are both prime targets for translating the benefits of the SIEnKS to an operational short-range forecasting setting. Likewise, an asynchronous DA design is an important operational topic for this estimator. Because the finite size adaptive inflation formalism is designed for a different regime than the SIEnKS and is not fully compatible with MDA schemes, developing adaptive inflation and/or model error estimation based on the design principles of the SIEnKS is an important direction for a future study. Having demonstrated the initial success of this single-iteration formalism, we leave each of these directions to a devoted work. We hope that the framework provided in this paper will guide these future studies and will provide a robust basis of comparison for the further development of ensemble variational Kalman filters and smoothers.

Appendix A: Algorithm pseudo-code

Algorithm A1 Ensemble transform (ET).

Require: Ensemble matrix E ∈ R^(Nx×Ne), observation map ℋ, observation error covariance R ∈ R^(Ny×Ny), and observation vector y.
1: Y = ℋ(E)
2: ŷ = Y1/Ne
3: S = R^(-1/2)(Y − ŷ1^T)
4: δ̂ = R^(-1/2)(y − ŷ)
5: ∇J̃ = −S^T δ̂
6: Ξ̃ = (Ne − 1)I_Ne + S^T S
7: w = −Ξ̃^(-1)∇J̃
8: T = Ξ̃^(-1/2)
9: return T, w
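
A minimal Julia sketch of Algorithm A1 is given below, assuming that ensemble members are stored as the columns of E and that R is symmetric positive definite. The names are illustrative and do not reproduce the DataAssimilationBenchmarks.jl implementation.

```julia
using LinearAlgebra, Statistics

# Ensemble transform (ET): computes the analysis weights and right-transform
# from the ensemble-estimated cost function at w = 0.
function ensemble_transform(E, H, R, y)
    Ne = size(E, 2)
    Y  = H(E)                            # observed ensemble
    ŷ  = vec(mean(Y, dims=2))            # observed ensemble mean
    Rinvsqrt = inv(sqrt(Symmetric(R)))   # R^(-1/2)
    S  = Rinvsqrt * (Y .- ŷ)             # normalized observed anomalies
    δ  = Rinvsqrt * (y - ŷ)              # normalized innovation
    ∇J = -S' * δ                         # approximate gradient of the cost function
    Ξ  = (Ne - 1) * I + S' * S           # approximate Hessian
    w  = -(Ξ \ ∇J)                       # optimal weights
    T  = inv(sqrt(Symmetric(Ξ)))         # inverse symmetric square root
    return T, w
end
```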

Algorithm A2 Random mean-preserving orthogonal matrix (RO).

Require: Ensemble size Ne; let QR represent the QR algorithm.
1: Let Q ∈ R^((Ne−1)×(Ne−1)), with entries drawn iid from 𝒩(0, 1)
2: Q, R = QR(Q)
3: U′ = [1, 0; 0, Q], the block matrix fixing the leading coordinate direction
4: Let {a_i}_{i=1,…,Ne} be an arbitrary orthonormal basis of R^Ne up to the requirement that a_1 = (1/√Ne)1, the normalized vector of ones; let A = [a_1, …, a_Ne]
5: return U = AU′A^T
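
The following is a minimal Julia sketch of Algorithm A2, assuming the usual construction of a random orthogonal matrix from the QR factorization of a Gaussian matrix; the function name is illustrative only.

```julia
using LinearAlgebra

# Random, mean-preserving orthogonal matrix (RO): rotates randomly in the
# subspace orthogonal to the vector of ones.
function mean_preserving_orthogonal(Ne)
    Q  = Matrix(qr(randn(Ne - 1, Ne - 1)).Q)         # random (Ne-1)×(Ne-1) orthogonal matrix
    U0 = [1.0 zeros(1, Ne - 1); zeros(Ne - 1, 1) Q]  # block form fixing the leading direction
    # Orthogonal basis of R^Ne whose first column spans the vector of ones; the sign
    # ambiguity of the QR factorization does not affect mean preservation.
    A  = Matrix(qr(hcat(ones(Ne), randn(Ne, Ne - 1))).Q)
    return A * U0 * A'                               # satisfies U * ones(Ne) == ones(Ne)
end
```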

Algorithm A3 Ensemble update (EU).

Require: Ensemble matrix E ∈ R^(Nx×Ne), transform T, weights w, and mean-preserving orthogonal matrix U.
1: x̂ = E1/Ne
2: X = E − x̂1^T
3: return E = x̂1^T + X(w1^T + √(Ne − 1) TU)
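
A minimal Julia sketch of Algorithm A3 under the same column-ensemble convention as above; illustrative only.

```julia
using Statistics

# Ensemble update (EU): recenters the ensemble on the analysis mean and
# rotates/rescales the anomalies by the right-transform.
function ensemble_update(E, T, w, U)
    Ne = size(E, 2)
    x̂ = vec(mean(E, dims=2))                          # ensemble mean
    X = E .- x̂                                        # ensemble anomalies
    return x̂ .+ X * (w .+ sqrt(Ne - 1) .* (T * U))    # x̂1ᵀ + X(w1ᵀ + √(Ne−1) TU)
end
```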

Algorithm A4 Covariance inflation (CI).

Require: Ensemble matrix E ∈ R^(Nx×Ne) and inflation λ.
1: x̂ = E1/Ne
2: X = E − x̂1^T
3: return E = x̂1^T + λX
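
A minimal Julia sketch of Algorithm A4 (multiplicative covariance inflation); illustrative only.

```julia
using Statistics

# Covariance inflation (CI): the ensemble mean is left unchanged while the
# anomalies are rescaled by the inflation factor λ.
function inflate(E, λ)
    x̂ = vec(mean(E, dims=2))
    return x̂ .+ λ .* (E .- x̂)
end
```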

Algorithm A5 ETKF.

Require: Observation y_1, filter ensemble E_0^filt ∈ R^(Nx×Ne), and inflation λ.
Require: Let ET, RO, EU, and CI represent Algorithms A1, A2, A3, and A4, respectively.
1: E_1^fore = M_1(E_0^filt)
2: T, w = ET(E_1^fore, ℋ_1, R_1, y_1)
3: U = RO(Ne)
4: E_1^filt = EU(E_1^fore, T, w, U)
5: E_1^filt = CI(E_1^filt, λ)
Require: Store E_0^filt := E_1^filt for the next cycle
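
To illustrate how the four helper routines compose, the following is a minimal Julia sketch of one ETKF cycle; the forecast map M and all function names refer to the sketches above and are illustrative, not the DataAssimilationBenchmarks.jl API.

```julia
# One ETKF cycle (Algorithm A5), composing the helper sketches after Algorithms A1-A4.
function etkf_cycle(E_filt, M, H, R, y, λ)
    E_fore = M(E_filt)                                   # ensemble forecast
    T, w   = ensemble_transform(E_fore, H, R, y)         # Algorithm A1
    U      = mean_preserving_orthogonal(size(E_fore, 2)) # Algorithm A2
    E_new  = ensemble_update(E_fore, T, w, U)            # Algorithm A3
    return inflate(E_new, λ)                             # Algorithm A4; stored for the next cycle
end
```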

Algorithm A6 EnKS.

Require: Lag = L, shift = S, observations y_{L:L−S+1}, smoother ensemble states E_{L−S:0}^smth, ensemble size Ne, and inflation λ.
Require: Let ET, RO, EU, and CI represent Algorithms A1, A2, A3, and A4, respectively.
1: E_{L−S}^filt := E_{L−S}^smth
2: for k ∈ {L−S+1, …, L} do
3:   E_k^fore = M_k(E_{k−1}^filt)
4:   T, w = ET(E_k^fore, ℋ_k, R_k, y_k)
5:   U = RO(Ne)
6:   E_k^filt = EU(E_k^fore, T, w, U)
7:   for j ∈ {0, …, k−1} do
8:     E_j^smth = EU(E_j^smth, T, w, U)
9:   end for
10:  E_k^filt = CI(E_k^filt, λ)
11:  E_k^smth := E_k^filt
12: end for
Require: Store E_{L−S:0}^smth := E_{L:S}^smth for the next cycle
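
A minimal Julia sketch of one EnKS cycle (Algorithm A6) for lag L and shift S, reusing the helper sketches above. Here E_smth is a vector of ensemble matrices with E_smth[k+1] holding the smoothed ensemble at time index k = 0, …, L, and Ms, Hs, Rs, ys are hypothetical per-time forecast maps, observation maps, error covariances, and observations; all names are illustrative.

```julia
# One EnKS cycle: sequential filter analyses plus retrospective reanalysis of past states.
function enks_cycle!(E_smth, Ms, Hs, Rs, ys, L, S, λ)
    E_filt = E_smth[L - S + 1]                            # filter ensemble at time L - S
    for k in (L - S + 1):L
        E_fore = Ms[k](E_filt)                            # forecast to time k
        T, w = ensemble_transform(E_fore, Hs[k], Rs[k], ys[k])
        U = mean_preserving_orthogonal(size(E_fore, 2))
        E_filt = ensemble_update(E_fore, T, w, U)         # filter analysis at time k
        for j in 0:(k - 1)                                # retrospective reanalysis
            E_smth[j + 1] = ensemble_update(E_smth[j + 1], T, w, U)
        end
        E_filt = inflate(E_filt, λ)
        E_smth[k + 1] = E_filt                            # store as the smoothed state at time k
    end
    return E_smth                                         # shift indices by S before the next cycle
end
```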

Algorithm A7 Gauss–Newton IEnKS in the SDA transform version.

Require: Lag = L, shift = S, and observations y_{L:L−S+1}.
Require: Smoother ensemble E_0^smth ∈ R^(Nx×Ne).
Require: Let RO, EU, and CI represent Algorithms A2, A3, and A4, respectively.
Require: Parameters tol, j_max, and inflation λ.
1: T := I_Ne
2: E_0 := E_0^smth
3: j := 0, w := 0
4: loop
5:   for k ∈ {1, …, L} do
6:     E_k = M_k(E_{k−1})
7:     if k ∈ {L−S+1, …, L} then
8:       Y_k = ℋ_k(E_k)
9:       ŷ_k = Y_k1/Ne
10:      S_k = R_k^(-1/2)(Y_k − ŷ_k1^T)T^(-1)
11:      δ̂_k = R_k^(-1/2)(y_k − ŷ_k)
12:    end if
13:  end for
14:  ∇J̃ = (Ne − 1)w − Σ_{k=L−S+1}^{L} S_k^T δ̂_k
15:  Ξ̃ = (Ne − 1)I_Ne + Σ_{k=L−S+1}^{L} S_k^T S_k
16:  Δw = Ξ̃^(-1)∇J̃
17:  w := w − Δw
18:  j := j + 1
19:  if ‖Δw‖ < tol or j = j_max then
20:    break loop
21:  else
22:    T = Ξ̃^(-1/2)
23:    E_0 = EU(E_0^smth, T, w, I_Ne)
24:  end if
25: end loop
26: T = Ξ̃^(-1/2)
27: U = RO(Ne)
28: E_0 := EU(E_0^smth, T, w, U)
29: for k = 1, …, L+S do
30:   E_k = M_k(E_{k−1})
31: end for
32: E_{L−S:0}^smth := E_{L−S:0}
33: E_{L:L−S+1}^filt := E_{L:L−S+1}
34: E_{L+S:L+1}^fore := E_{L+S:L+1}
35: E_S^smth = CI(E_S^smth, λ)
Require: E_0^smth := E_S^smth for the next cycle.

Algorithm A8 SIEnKS in the SDA version.

Require: Lag = L, shift = S, observations y_{L:L−S+1}, ensemble states E_0^smth and E_{L−S}^smth, and inflation λ.
Require: Let ET, RO, EU, and CI represent Algorithms A1, A2, A3, and A4, respectively.
1: E_{L−S}^filt := E_{L−S}^smth
2: for k ∈ {L−S+1, …, L} do
3:   E_k^fore = M_k(E_{k−1}^filt)
4:   T, w = ET(E_k^fore, ℋ_k, R_k, y_k)
5:   U_k = RO(Ne)
6:   E_k^filt = EU(E_k^fore, T, w, U_k)
7:   E_0^smth = EU(E_0^smth, T, w, U_k)
8: end for
9: E_0^smth := CI(E_0^smth, λ)
10: for k = 1, …, L do
11:   E_k^smth = M_k(E_{k−1}^smth)
12: end for
Require: E_0^smth := E_S^smth and E_{L−S}^smth := E_L^smth for the next cycle.
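
A minimal Julia sketch of one SIEnKS cycle in the SDA version (Algorithm A8), reusing the helper sketches above; conventions and names are as in the EnKS sketch and are illustrative only, not the package implementation.

```julia
# One SIEnKS SDA cycle: sequential filter analyses with a retrospective update of
# the initial condition, followed by a single free ensemble simulation over the DAW.
function sienks_sda_cycle(E0_smth, ELmS_smth, Ms, Hs, Rs, ys, L, S, λ)
    E_filt = ELmS_smth                                    # filter ensemble at time L - S
    for k in (L - S + 1):L
        E_fore = Ms[k](E_filt)
        T, w = ensemble_transform(E_fore, Hs[k], Rs[k], ys[k])
        U = mean_preserving_orthogonal(size(E_fore, 2))
        E_filt = ensemble_update(E_fore, T, w, U)         # filter analysis at time k
        E0_smth = ensemble_update(E0_smth, T, w, U)       # retrospective analysis at time 0
    end
    E0_smth = inflate(E0_smth, λ)
    # One free ensemble simulation over the DAW from the smoothed initial condition
    states = Vector{typeof(E0_smth)}(undef, L + 1)
    states[1] = E0_smth
    for k in 1:L
        states[k + 1] = Ms[k](states[k])
    end
    # The states at times S and L reinitialize the next cycle after shifting the DAW
    return states[S + 1], states[L + 1]
end
```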

Algorithm A9 Maximum likelihood ensemble transform (MLET).

Require: Ensemble matrix E ∈ R^(Nx×Ne), observation map ℋ, observation error covariance R ∈ R^(Ny×Ny), and observation vector y.
Require: Parameters tol and j_max; let EU represent Algorithm A3.
1: T := I_Ne
2: j := 0, w := 0
3: E_0 := E
4: loop
5:   Y = ℋ(E)
6:   ŷ = Y1/Ne
7:   S = R^(-1/2)(Y − ŷ1^T)T^(-1)
8:   δ̂ = R^(-1/2)(y − ŷ)
9:   ∇J̃ = (Ne − 1)w − S^T δ̂
10:  Ξ̃ = (Ne − 1)I_Ne + S^T S
11:  Δw = Ξ̃^(-1)∇J̃
12:  w := w − Δw
13:  j := j + 1
14:  if ‖Δw‖ < tol or j = j_max then
15:    break loop
16:  else
17:    T = Ξ̃^(-1/2)
18:    E = EU(E_0, T, w, I_Ne)
19:  end if
20: end loop
21: T = Ξ̃^(-1/2)
22: return T, w
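
A minimal Julia sketch of Algorithm A9, i.e., a Gauss–Newton iteration in the ensemble weight space for a nonlinear observation map; it reuses ensemble_update from the Algorithm A3 sketch, and all names and default parameters are illustrative only.

```julia
using LinearAlgebra, Statistics

# Maximum likelihood ensemble transform (MLET): iterate the ensemble transform,
# re-evaluating the observation map at the updated weights until convergence.
function ml_ensemble_transform(E, H, R, y; tol=1e-4, jmax=10)
    Ne = size(E, 2)
    Rinvsqrt = inv(sqrt(Symmetric(R)))
    Idm = Matrix{Float64}(I, Ne, Ne)
    T, w, j = Idm, zeros(Ne), 0
    E0, Ei  = E, E
    Ξ = (Ne - 1) * Idm
    while true
        Y  = H(Ei)
        ŷ  = vec(mean(Y, dims=2))
        S  = Rinvsqrt * (Y .- ŷ) / T          # anomalies mapped back through T⁻¹
        δ  = Rinvsqrt * (y - ŷ)
        ∇J = (Ne - 1) .* w - S' * δ
        Ξ  = (Ne - 1) * I + S' * S
        Δw = Ξ \ ∇J
        w  = w - Δw
        j += 1
        (norm(Δw) < tol || j == jmax) && break
        T  = inv(sqrt(Symmetric(Ξ)))
        Ei = ensemble_update(E0, T, w, Idm)   # re-evaluate the map at the updated weights
    end
    return inv(sqrt(Symmetric(Ξ))), w
end
```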

Algorithm A10 Finite size ensemble transform (FSET).

Require: Ensemble matrix E ∈ R^(Nx×Ne), observation map ℋ, observation error covariance R ∈ R^(Ny×Ny), and observation vector y.
Require: Parameters tol and j_max; let EU represent Algorithm A3.
1: T := I_Ne
2: j := 0, w := 0
3: E_0 := E
4: ϵ_Ne := 1 + 1/Ne, N_eff := Ne + 1
5: loop
6:   Y = ℋ(E)
7:   ŷ = Y1/Ne
8:   S = R^(-1/2)(Y − ŷ1^T)T^(-1)
9:   δ̂ = R^(-1/2)(y − ŷ)
10:  ζ = 1/(ϵ_Ne + w^T w)
11:  ∇J̃ = ζN_eff w − S^T δ̂
12:  Ξ̃ = (Ne − 1)I_Ne + S^T S
13:  Δw = Ξ̃^(-1)∇J̃
14:  w := w − Δw
15:  j := j + 1
16:  if ‖Δw‖ < tol or j = j_max then
17:    break loop
18:  else
19:    T = Ξ̃^(-1/2)
20:    E = EU(E_0, T, w, I_Ne)
21:  end if
22: end loop
23: ζ = 1/(ϵ_Ne + w^T w)
24: Ξ̃ = N_eff(ζI_Ne − 2ζ²ww^T) + S^T S
25: T = Ξ̃^(-1/2)
26: return T, w

Algorithm A11Gauss–Newton IEnKS-N in the SDA transform version.

(The pseudo-code for Algorithm A11 is provided as a figure in the published article.)

Algorithm A12 SIEnKS in the MDA version.

Require: Lag = L, shift = S, observations y_{L:1}, MDA conditional ensemble E_0^mda, ensemble size Ne, and inflation λ.
Require: Let ET, RO, EU, and CI represent Algorithms A1, A2, A3, and A4, respectively.
Require: Let {β_k}_{k=1,…,L} and {η_k}_{k=1,…,L} be the multiple data assimilation and balancing weights, respectively.
1: E_0^bal := E_0^mda
2: for k = 1, …, L do
3:   U = RO(Ne)
4:   E_k^bal = M_k(E_{k−1}^bal)
5:   if k ∈ {L−S+1, …, L} then
6:     E_k^fore := E_k^bal
7:   end if
8:   T, w = ET(E_k^bal, ℋ_k, R_k/η_k, y_k)
9:   E_k^bal = EU(E_k^bal, T, w, U)
10:  if k ∈ {L−S+1, …, L} then
11:    E_k^filt := E_k^bal
12:  end if
13:  for j = 0, …, k−1 do
14:    E_j^bal = EU(E_j^bal, T, w, U)
15:  end for
16:  if k = S then
17:    E_0^mda = E_0^bal
18:    E_S^mda = E_k^bal
19:  end if
20: end for
21: E_{0:L−S}^smth := E_{0:L−S}^bal
22: for k = S+1, …, L do
23:   U = RO(Ne)
24:   E_k^mda = M_k(E_{k−1}^mda)
25:   T, w = ET(E_k^mda, ℋ_k, R_k/β_k, y_k)
26:   E_k^mda = EU(E_k^mda, T, w, U)
27:   E_0^mda = EU(E_0^mda, T, w, U)
28: end for
29: E_0^mda = CI(E_0^mda, λ)
30: for k = 1, …, S do
31:   E_k^mda = M_k(E_{k−1}^mda)
32: end for
Require: Store E_0^mda := E_S^mda for the next cycle

Algorithm A13 Gauss–Newton IEnKS in the MDA transform version.

Require: Lag = L, shift = S, observations y_{L:1}, conditional MDA ensemble E_0^mda, and ensemble size Ne.
Require: Let RO, EU, and CI represent Algorithms A2, A3, and A4, respectively.
Require: Let {β_k}_{k=1,…,L} and {η_k}_{k=1,…,L} be the multiple data assimilation and balancing weights, respectively.
Require: Parameters tol, j_max, and inflation λ.
1: T = I_Ne
2: j = 0, w = 0
3: for stage = 1, 2 do
4:   E_0 = E_0^mda
5:   if stage = 1 then
6:     θ_k = η_k
7:   else
8:     θ_k = β_k
9:   end if
10:  loop
11:    for k ∈ {1, …, L} do
12:      E_k = M_k(E_{k−1})
13:      Y_k = ℋ_k(E_k)
14:      ŷ_k = Y_k1/Ne
15:      S_k = θ_k^(1/2)R_k^(-1/2)(Y_k − ŷ_k1^T)T^(-1)
16:      δ̂_k = θ_k^(1/2)R_k^(-1/2)(y_k − ŷ_k)
17:    end for
18:    ∇J̃ = (Ne − 1)w − Σ_{k=1}^{L} S_k^T δ̂_k
19:    Ξ̃ = (Ne − 1)I_Ne + Σ_{k=1}^{L} S_k^T S_k
20:    Δw = Ξ̃^(-1)∇J̃
21:    w := w − Δw
22:    j := j + 1
23:    if ‖Δw‖ < tol or j = j_max then
24:      break loop
25:    else
26:      T = Ξ̃^(-1/2)
27:      E_0 = EU(E_0^mda, T, w, I_Ne)
28:    end if
29:  end loop
30:  T = Ξ̃^(-1/2)
31:  U = RO(Ne)
32:  E_0 := EU(E_0^mda, T, w, U)
33:  if stage = 1 then
34:    for k = 1, …, L+S do
35:      E_k = M_k(E_{k−1})
36:    end for
37:    E_{L−S:0}^smth := E_{L−S:0}
38:    E_{L:L−S+1}^filt := E_{L:L−S+1}
39:    E_{L+S:L+1}^fore := E_{L+S:L+1}
40:  end if
41: end for
42: for k = 1, …, S do
43:   E_k = M_k(E_{k−1})
44: end for
45: E_S = CI(E_S, λ)
Require: E_0^mda := E_S for the next cycle.

Code availability

The current version of DataAssimilationBenchmarks.jl is available at https://github.com/cgrudz/DataAssimilationBenchmarks.jl (last access: 10 October 2022) and is in the Julia General Registries under the Apache 2.0 License. The exact version of the package used to produce the results used in this paper is archived on Zenodo (https://doi.org/10.5281/zenodo.5430619; Grudzien et al.2021), as are scripts to process data and produce the plots for all the simulations presented in this paper.

Data availability

All data in this study were generated synthetically by the package DataAssimilationBenchmarks.jl, with the specific version in the code availability statement above. Settings for generating equivalent synthetic data experiments are described in Sect. 5.2.

Author contributions

CG mathematically derived the original SDA and MDA SIEnKS schemes. CG and MB together refined and improved upon these mathematical results for their final form. All numerical simulation and plotting codes were developed by CG, and MB shared the original Python code for the IEnKS and the finite size formalism schemes, which contributed to the development of the Julia code supporting this work. CG and MB worked together on all conceptual diagrams. All numerical experiments and benchmark configurations for the SIEnKS were devised together between CG and MB. The paper was written by CG, with contributions from MB to refine the narrative and presentation of results in their final form.

Competing interests

The contact author has declared that neither of the authors has any competing interests.

Disclaimer

Publisher's note: Copernicus Publications remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Acknowledgements

Special thanks go to Eric Olson, Grant Schissler, and Mihye Ahn, for high-performance computing support and logistics at the University of Nevada, Reno. Thanks go to Patrick Raanes, for the open-source DAPPER Python package, which was referenced at times for the development of DA schemes in Julia. Thanks go to Amit N. Subrahmanya and Pavel Sakov, who reviewed this paper and provided important suggestions and clarifications to improve this work. CEREA is a member of Institut Pierre-Simon Laplace.

Review statement

This paper was edited by Adrian Sandu and reviewed by Pavel Sakov and Amit N. Subrahmanya.

References

Ait-El-Fquih, B. and Hoteit, I.: Filtering with One-Step-Ahead Smoothing for Efficient Data Assimilation, in: Data Assimilation for Atmospheric, Oceanic and Hydrologic Applications (Vol. IV), edited by: Park, S. K. and Xu, L., Springer, Cham, 69–96, https://doi.org/10.1007/978-3-030-77722-7_1, 2022. a, b

Ait-El-Fquih, B., El Gharamti, M., and Hoteit, I.: A Bayesian consistent dual ensemble Kalman filter for state-parameter estimation in subsurface hydrology, Hydrol. Earth Syst. Sci., 20, 3289–3307, https://doi.org/10.5194/hess-20-3289-2016, 2016. a

Asch, M., Bocquet, M., and Nodet, M.: Data Assimilation: Methods, Algorithms, and Applications, SIAM, ISBN 978-1-61197-453-9, https://doi.org/10.1137/1.9781611974546, 2016. a, b, c, d, e

Bannister, R. N.: A review of operational methods of variational and ensemble-variational data assimilation, Q. J. Roy. Meteor. Soc., 143, 607–633, https://doi.org/10.1002/qj.2982, 2017. a

Bezanson, J., Edelman, A., Karpinski, S., and Shah, V.: Julia: A fresh approach to numerical computing, SIAM Rev., 59, 65–98, https://doi.org/10.1137/141000671, 2017. a

Bocquet, M.: Ensemble Kalman filtering without the intrinsic need for inflation, Nonlin. Processes Geophys., 18, 735–750, https://doi.org/10.5194/npg-18-735-2011, 2011. a

Bocquet, M.: Localization and the iterative ensemble Kalman smoother, Q. J. Roy. Meteor. Soc., 142, 1075–1089, https://doi.org/10.1002/qj.2711, 2016. a, b

Bocquet, M. and Carrassi, A.: Four-dimensional ensemble variational data assimilation and the unstable subspace, Tellus A, 69, 1304504, https://doi.org/10.1080/16000870.2017.1304504, 2017. a

Bocquet, M. and Sakov, P.: Combining inflation-free and iterative ensemble Kalman filters for strongly nonlinear systems, Nonlin. Processes Geophys., 19, 383–399, https://doi.org/10.5194/npg-19-383-2012, 2012. a, b

Bocquet, M. and Sakov, P.: Joint state and parameter estimation with an iterative ensemble Kalman smoother, Nonlin. Processes Geophys., 20, 803–818, https://doi.org/10.5194/npg-20-803-2013, 2013. a, b

Bocquet, M. and Sakov, P.: An iterative ensemble Kalman smoother, Q. J. Roy. Meteor. Soc., 140, 1521–1535, https://doi.org/10.1002/qj.2236, 2014. a, b, c, d, e, f, g, h, i

Bocquet, M., Raanes, P. N., and Hannart, A.: Expanding the validity of the ensemble Kalman filter without the intrinsic need for inflation, Nonlin. Processes Geophys., 22, 645–662, https://doi.org/10.5194/npg-22-645-2015, 2015. a, b, c

Bocquet, M., Brajard, J., Carrassi, A., and Bertino, L.: Bayesian inference of chaotic dynamics by merging data assimilation, machine learning and expectation-maximization, Foundations of Data Science, 2, 55–80, https://doi.org/10.3934/fods.2020004, 2020. a

Carrassi, A., Bocquet, M., Bertino, L., and Evensen, G.: Data Assimilation in the Geosciences-An overview on methods, issues and perspectives, WIREs Clim. Change, 9, e535, https://doi.org/10.1002/wcc.535, 2018. a

Carrassi, A., Bocquet, M., Demaeyer, J., Grudzien, C., Raanes, P., and Vannitsem, S.: Data Assimilation for Chaotic Dynamics, in: Data Assimilation for Atmospheric, Oceanic and Hydrologic Applications (Vol. IV), edited by: Park, S. K. and Xu, L., Springer, Cham, 1–42, https://doi.org/10.1007/978-3-030-77722-7_1, 2022. a

Chen, Y. and Oliver, D. S.: Ensemble randomized maximum likelihood method as an iterative ensemble smoother, Math. Geosci., 44, 1–26, https://doi.org/10.1007/s11004-011-9376-z, 2012. a

Corazza, M., Kalnay, E., Patil, D. J., Yang, S.-C., Morss, R., Cai, M., Szunyogh, I., Hunt, B. R., and Yorke, J. A.: Use of the breeding technique to estimate the structure of the analysis “errors of the day”, Nonlin. Processes Geophys., 10, 233–243, https://doi.org/10.5194/npg-10-233-2003, 2003. a

Cosme, E., Verron, J., Brasseur, P., Blum, J., and Auroux, D.: Smoothing problems in a Bayesian framework and their linear Gaussian solutions, Mon. Weather Rev., 140, 683–695, https://doi.org/10.1175/MWR-D-10-05025.1, 2012. a

Desbouvries, F., Petetin, Y., and Ait-El-Fquih, B.: Direct, prediction-and smoothing-based Kalman and particle filter algorithms, Signal Process., 91, 2064–2077, https://doi.org/10.1016/j.sigpro.2011.03.013, 2011. a

Emerick, A. A. and Reynolds, A. C.: Ensemble smoother with multiple data assimilation, Comput. Geosci., 55, 3–15, https://doi.org/10.1016/j.cageo.2012.03.011, 2013. a, b

Evensen, G.: Analysis of iterative ensemble smoothers for solving inverse problems, Comput. Geosci., 22, 885–908, https://doi.org/10.1007/s10596-018-9731-y, 2018. a

Evensen, G. and Van Leeuwen, P. J.: An ensemble Kalman smoother for nonlinear dynamics, Mon. Weather Rev., 128, 1852–1867, https://doi.org/10.1175/1520-0493(2000)128<1852:AEKSFN>2.0.CO;2, 2000. a

Fertig, E. J., Harlim, J., and Hunt, B. R.: A comparative study of 4D-VAR and a 4D ensemble Kalman filter: Perfect model simulations with Lorenz-96, Tellus A, 59, 96–100, https://doi.org/10.1111/j.1600-0870.2006.00205.x, 2007. a

Fillion, A., Bocquet, M., and Gratton, S.: Quasi-static ensemble variational data assimilation: a theoretical and numerical study with the iterative ensemble Kalman smoother, Nonlin. Processes Geophys., 25, 315–334, https://doi.org/10.5194/npg-25-315-2018, 2018. a

Fillion, A., Bocquet, M., Gratton, S., Görol, S., and Sakov, P.: An iterative ensemble Kalman smoother in presence of additive model error, SIAM/ASA J. Uncertainty Quantification, 8, 198–228, 2020. a

Gharamti, M. E., Ait-El-Fquih, B., and Hoteit, I.: An iterative ensemble Kalman filter with one-step-ahead smoothing for state-parameters estimation of contaminant transport models, J. Hydrol., 527, 442–457, https://doi.org/10.1016/j.jhydrol.2015.05.004, 2015. a

Grudzien, C. and Bocquet, M.: A Tutorial on Bayesian Data Assimilation, arXiv [preprint], https://doi.org/10.48550/arXiv.2112.07704, 2021. a

Grudzien, C., Carrassi, A., and Bocquet, M.: Asymptotic forecast uncertainty and the unstable subspace in the presence of additive model error, SIAM/ASA J. Uncertainty Quantification, 6, 1335–1363, https://doi.org/10.1137/17M114073X, 2018. a

Grudzien, C., Sandhu, S., and Jridi, A.: cgrudz/DataAssimilationBenchmarks.jl:, Zenodo [code], https://doi.org/10.5281/zenodo.5430619, 2021. a, b, c, d, e

Gu, Y. and Oliver, D. S.: An iterative ensemble Kalman filter for multiphase fluid flow data assimilation, SPE J., 12, 438–446, https://doi.org/10.2118/108438-PA, 2007. a

Harlim, J. and Hunt, B. R.: Four-dimensional local ensemble transform Kalman filter: numerical experiments with a global circulation model, Tellus A, 59, 731–748, https://doi.org/10.1111/j.1600-0870.2007.00255.x, 2007. a

Hunt, B. R., Kalnay, E., Kostelich, E. J., Ott, E., Patil, D. J., Sauer, T., Szunyogh, I., Yorke, J. A., and Zimin, A. V.: Four-dimensional ensemble Kalman filtering, Tellus A, 56, 273–277, https://doi.org/10.3402/tellusa.v56i4.14424, 2004. a, b

Hunt, B. R., Kostelich, E. J., and Szunyogh, I.: Efficient data assimilation for spatiotemporal chaos: A local ensemble transform Kalman filter, Phys. D, 230, 112–126, https://doi.org/10.1016/j.physd.2006.11.008, 2007. a, b, c, d

Iglesias, M. A., Law, K. J. H., and Stuart, A. M.: Ensemble Kalman methods for inverse problems, Inverse Problems, 29, 045001, https://doi.org/10.1088/0266-5611/29/4/045001, 2013. a

Jazwinski, A. H.: Stochastic Processes and Filtering Theory, Academic Press, New-York, IBSN 9780486462745, 1970. a, b

Kalnay, E. and Yang, S. C.: Accelerating the spin-up of ensemble Kalman filtering, Q. J. Roy. Meteor. Soc., 136, 1644–1651, https://doi.org/10.1002/qj.652, 2010. a

Kalnay, E., Li, H., Miyoshi, T., Yang, S.-C., and Ballabrera-Poy, J.: 4-D-Var or ensemble Kalman filter?, Tellus A, 59, 758–773, https://doi.org/10.1111/j.1600-0870.2007.00261.x, 2007. a

Kovachki, N. B. and Stuart, A. M.: Ensemble Kalman inversion: a derivative-free technique for machine learning tasks, Inverse Problems, 35, 095005, https://doi.org/10.1088/1361-6420/ab1c3a, 2019. a

Liu, C., Xiao, Q., and Wang, B.: An Ensemble-Based Four-Dimensional Variational Data Assimilation Scheme. Part I: Technical Formulation and Preliminary Test, Mon. Weather Rev., 136, 3363–3373, https://doi.org/10.1175/2008MWR2312.1, 2008. a

Lorenc, A. C.: The potential of the ensemble Kalman filter for NWP – A comparison with 4D-Var, Q. J. Roy. Meteor. Soc., 129, 3183–3203, https://doi.org/10.1256/qj.02.132, 2003. a, b

Lorenz, E. N.: Predictability: A problem partly solved, in: Proc. Seminar on predictability, vol. 1, https://www.ecmwf.int/node/10829 (last access: 10 October 2022), 1996.  a

Lorenz, E. N. and Emanuel, K. A.: Optimal sites for supplementary weather observations: Simulation with a small model, J. Atmos. Sci., 55, 399–414, https://doi.org/10.1175/1520-0469(1998)055<0399:OSFSWO>2.0.CO;2, 1998. a

Neal, R. M.: Sampling from multimodal distributions using tempered transitions, Stat. Comput., 6, 353–366, https://doi.org/10.1007/BF00143556, 1996. a

Nerger, L., Schulte, S., and Bunse-Gerstner, A.: On the influence of model nonlinearity and localization on ensemble Kalman smoothing, Q. J. Roy. Meteor. Soc., 140, 2249–2259, https://doi.org/10.1002/qj.2293, 2014. a

Nocedal, J. and Wright, S.: Numerical optimization, Springer Science & Business Media, https://doi.org/10.1007/978-0-387-40065-5, 2006. a

Pulido, M., Tandeo, P., Bocquet, M., Carrassi, A., and Lucini, M.: Stochastic parameterization identification using ensemble Kalman filtering combined with maximum likelihood methods, Tellus A, 70, 1442099, https://doi.org/10.1080/16000870.2018.1442099, 2018. a

Raanes, P. N.: On the ensemble Rauch-Tung-Striebel smoother and its equivalence to the ensemble Kalman smoother, Q. J. Roy. Meteor. Soc., 142, 1259–1264, https://doi.org/10.1002/qj.2728, 2016. a

Raanes, P. N., Bocquet, M., and Carrassi, A.: Adaptive covariance inflation in the ensemble Kalman filter by Gaussian scale mixtures, Q. J. Roy. Meteor. Soc., 145, 53–75, https://doi.org/10.1002/qj.3386, 2019a. a, b

Raanes, P. N., Stordal, A. S., and Evensen, G.: Revising the stochastic iterative ensemble smoother, Nonlin. Processes Geophys., 26, 325–338, https://doi.org/10.5194/npg-26-325-2019, 2019b. a

Raanes, P. N., Grudzien, C., and 14tondeu: nansencenter/DAPPER: Version 0.8, Zenodo [code], https://doi.org/10.5281/zenodo.2029296, 2018. a

Raboudi, N. F., Ait-El-Fquih, B., and Hoteit, I.: Ensemble Kalman filtering with one-step-ahead smoothing, Mon. Weather Rev., 146, 561–581, https://doi.org/10.1175/MWR-D-17-0175.1, 2018. a

Sakov, P. and Bertino, L.: Relation between two common localisation methods for the EnKF, Comput. Geosci., 15, 225–237, https://doi.org/10.1007/s10596-010-9202-6, 2011. a, b

Sakov, P. and Oke, P. R.: A deterministic formulation of the ensemble Kalman filter: an alternative to ensemble square root filters, Tellus A, 60, 361–371, https://doi.org/10.1111/j.1600-0870.2007.00299.x, 2008a. a

Sakov, P. and Oke, P. R.: Implications of the form of the ensemble transformation in the ensemble square root filters, Mon. Weather Rev., 136, 1042–1053, https://doi.org/10.1175/2007MWR2021.1, 2008b. a, b, c

Sakov, P., Evensen, G., and Bertino, L.: Asynchronous data assimilation with the EnKF, Tellus A, 62, 24–29, https://doi.org/10.1111/j.1600-0870.2009.00417.x, 2010. a

Sakov, P., Oliver, D. S., and Bertino, L.: An iterative EnKF for strongly nonlinear systems, Mon. Weather Rev., 140, 1988–2004, https://doi.org/10.1175/MWR-D-11-00176.1, 2012. a, b, c

Sakov, P., Haussaire, J. M., and Bocquet, M.: An iterative ensemble Kalman filter in presence of additive model error, Q. J. Roy. Meteor. Soc., 144, 1297–1309, https://doi.org/10.1002/qj.3213, 2018.  a

Mahalanobis, P. C.: Reprint of: Mahalanobis, P. C. (1936) “On the Generalised Distance in Statistics”, Sankhyā A, 80 (Suppl 1), 1–7, https://doi.org/10.1007/s13171-019-00164-5, 2018. a

Schillings, C. and Stuart, A. M.: Convergence analysis of ensemble Kalman inversion: the linear, noisy case, Appl. Anal., 97, 107–123, https://doi.org/10.1080/00036811.2017.1386784, 2018. a

Tandeo, P., Ailliot, P., Bocquet, M., Carrassi, A., Miyoshi, T., Pulido, M., and Zhen, Y.: A review of innovation-based methods to jointly estimate model and observation error covariance matrices in ensemble data assimilation, Mon. Weather Rev., 148, 3973–3994, https://doi.org/10.1175/MWR-D-19-0240.1, 2020. a

Taylor, M. E.: Partial differential equations. 1, Basic theory, Springer, https://doi.org/10.1007/978-1-4419-7055-8, 1996. a

Tippett, M. K., Anderson, J. L., Bishop, C. H., Hamill, T. M., and Whitaker, J. S.: Ensemble square root filters, Mon. Weather Rev., 131, 1485–1490, https://doi.org/10.1175/1520-0493(2003)131<1485:ESRF>2.0.CO;2, 2003. a

Whitaker, J. S. and Loughe, A. F.: The relationship between ensemble spread and ensemble mean skill, Mon. Weather Rev., 126, 3292–3302, https://doi.org/10.1175/1520-0493(1998)126<3292:TRBESA>2.0.CO;2, 1998. a

Yang, S.-C., Lin, K. J., Miyoshi, T., and Kalnay, E.: Improving the spin-up of regional EnKF for typhoon assimilation and forecasting with Typhoon Sinlaku (2008), Tellus A, 65, 20804, https://doi.org/10.3402/tellusa.v65i0.20804, 2013. a

Zupanski, M.: Maximum likelihood ensemble filter: Theoretical aspects, Mon. Weather Rev., 133, 1710–1726, https://doi.org/10.1175/MWR2946.1, 2005. a, b

Zupanski, M., Navon, I. M., and Zupanski, D.: The Maximum Likelihood Ensemble Filter as a non-differentiable minimization algorithm, Q. J. Roy. Meteor. Soc., 134, 1039–1050, https://doi.org/10.1002/qj.251, 2008. a

Short summary
Iterative optimization techniques, the state of the art in data assimilation, have largely focused on extending forecast accuracy to moderate- to long-range forecast systems. However, current methodology may not be cost-effective in reducing forecast errors in online, short-range forecast systems. We propose a novel optimization of these techniques for online, short-range forecast cycles, simultaneously providing an improvement in forecast accuracy and a reduction in the computational cost.