https://doi.org/10.5194/gmd-14-5957-2021

Model description paper | 04 Oct 2021

SymPKF (v1.0): a symbolic and computational toolbox for the design of parametric Kalman filter dynamics

Olivier Pannekoucke and Philippe Arbogast
Abstract

Recent research in data assimilation has led to the introduction of the parametric Kalman filter (PKF): an implementation of the Kalman filter, whereby the covariance matrices are approximated by a parameterized covariance model. In the PKF, the dynamics of the covariance during the forecast step rely on the prediction of the covariance parameters. Hence, the design of the parameter dynamics is crucial, while it can be tedious to do this by hand. This contribution introduces a Python package, SymPKF, able to compute PKF dynamics for univariate statistics and when the covariance model is parameterized from the variance and the local anisotropy of the correlations. The ability of SymPKF to produce the PKF dynamics is shown on a nonlinear diffusive advection (the Burgers equation) over a 1D domain and the linear advection over a 2D domain. The computation of the PKF dynamics is performed at a symbolic level, but an automatic code generator is also introduced to perform numerical simulations. A final multivariate example illustrates the potential of SymPKF to go beyond the univariate case.

1 Introduction

The Kalman filter (KF) (Kalman, 1960) is one of the backbones of data assimilation. This filter represents the dynamics of a Gaussian distribution along the analysis and forecast cycles, and it takes the form of two equations describing the evolution of the mean and of the covariance of the Gaussian distribution.

While the equations of the KF are simple linear algebra, the large dimension of the state spaces encountered in the realm of data assimilation makes the KF impossible to handle, and this is particularly true for the forecast step. This limitation has motivated approximations of the covariance matrix to make the KF tractable. For instance, in the ensemble method (Evensen, 2009), the covariance matrix is approximated by a sample estimation, whereby the time evolution of the covariance matrix is deduced from the forecast of each individual member. In the parametric Kalman filter (PKF) (Pannekoucke et al., 2016, 2018a, b), the covariance matrix is approximated by a parametric covariance model; the time evolution of the matrix is deduced from the time integration of the parameters' evolution equations.

One of the major limitations of the PKF is the design of the parameter evolution equations. Although not difficult from a mathematical point of view, this step requires the calculation of many terms that are tedious to derive by hand and may introduce errors into the calculation. To facilitate the derivation of the parametric dynamics and to certify the correctness of the resulting system, a symbolic derivation of the dynamics would be welcome.

The goal of the package SymPKF 1.0 is to facilitate the computation of the PKF dynamics for a particular class of covariance models, the VLATcov models, which are parameterized by the variance and the anisotropy. The symbolic computation of the PKF dynamics relies on a computer algebra system (CAS) able to handle abstract mathematical expressions. A preliminary version has been implemented with Maxima (Pannekoucke, 2021a). However, in order to create an integrated framework that includes the design of the parametric system as well as its numerical evaluation, the symbolic Python package SymPy (Meurer et al., 2017) has been preferred for the present implementation. In particular, SymPKF comes with an automatic code generator that provides an end-to-end exploration of the PKF approach, from the computation of the PKF dynamics to their numerical integration.

The paper is organized as follows. The next section provides the background on data assimilation and introduces the PKF. Section 3 focuses on the PKF for univariate VLATcov models in the perspective of symbolic computation by a CAS. Then, the package SymPKF is introduced in Sect. 4 from its use on the nonlinear diffusive advection (the Burgers equation) over a 1D domain. A numerical example illustrates the use of the automatic code generator provided in SymPKF. Then, the example of the linear advection over a 2D domain shows the ability of SymPKF to handle 2D and 3D domains. The section ends with a simple illustration of a multivariate situation, which also shows that SymPKF applies to a system of prognostic equations. The conclusion is given in Sect. 5.

2 Description of the PKF

2.1 Context of the numerical prediction

Dynamics encountered in geosciences are given as a system of partial differential equations (PDEs):

(1) $\partial_t \mathcal{X} = \mathcal{M}(t, \partial\mathcal{X})$,

where $\mathcal{X}(t,\mathbf{x})$ is the state of the system and denotes either a scalar field or multivariate fields in a coordinate system $\mathbf{x}=(x_i)_{i\in[1,d]}$, where $d$ is the dimension of the geographical space; $\partial\mathcal{X}$ represents the partial derivatives with respect to the coordinate system at any order, with the convention that order zero denotes the field $\mathcal{X}$ itself; and $\mathcal{M}$ denotes the trend of the dynamics. A spatial discretization (e.g., by using finite differences, finite elements, finite volumes, spectral decomposition) transforms Eq. (1) into

(2) $\partial_t \mathcal{X} = M(t, \mathcal{X})$,

where, this time, $\mathcal{X}(t)$ is a vector and $M$ denotes the discretization of the trend $\mathcal{M}$ in Eq. (1). Thereafter, $\mathcal{X}$ can be seen either as a collection of continuous fields with dynamics given by Eq. (1) or as a discrete vector with dynamics given by Eq. (2).

Because of the sparsity and the error of the observations, the forecast $\mathcal{X}^f$ is only an estimation of the true state $\mathcal{X}^t$, which is known to within a forecast error defined by $e^f = \mathcal{X}^f - \mathcal{X}^t$. This error is often modeled as an unbiased random variable, $\mathbb{E}[e^f]=0$. In the discrete formulation of the dynamics in Eq. (2), the forecast-error covariance matrix is given by $\mathbf{P}^f=\mathbb{E}\left[e^f(e^f)^T\right]$, where the superscript T denotes the transpose operator. Since this contribution is focused on the forecast step, the superscript f is hereafter removed for the sake of simplicity.

We now detail how the error covariance matrix evolves during the forecast by considering the formalism of the second-order nonlinear Kalman filter.

2.2 Second-order nonlinear Kalman filter

A second-order nonlinear Kalman filter (KF2) is a filter that extends the Kalman filter (KF) to nonlinear situations in which the error covariance matrix evolves tangent-linearly along the trajectory of the mean state and the dynamics of this mean are governed by the fluctuation–mean interacting dynamics (Jazwinski1970; Cohn1993). Hence, we first state the dynamics of the mean under the fluctuation–mean interaction, then the dynamics of the error covariance. Note that the choice of the following presentation is motivated by the perspective of using a computer algebra system to perform the computation.

2.2.1 Computation of the fluctuation–mean interaction dynamics

Because of the uncertainty in the initial condition, the state $\mathcal{X}$ is modeled as a Markov process $\mathcal{X}(t,x,\omega)$, where $\omega$ stands for the stochasticity, while $\mathcal{X}$ evolves by Eq. (1). Hence, $\omega$ lies within a certain probability space $(\Omega,\mathcal{F},P)$, where $\mathcal{F}$ is a $\sigma$-algebra on $\Omega$ (a family of subsets of $\Omega$, which contains $\Omega$ and which is stable under complement and countable union) and $P$ is a probability measure (see, e.g., Øksendal, 2003, chap. 2). $\mathcal{X}(t,x,\cdot):(\Omega,\mathcal{F})\to(\mathbb{R}^n,\mathcal{B}_{\mathbb{R}^n})$ is an $\mathcal{F}$-measurable function, where $\mathcal{B}_{\mathbb{R}^n}$ denotes the Borel $\sigma$-algebra on $\mathbb{R}^n$ and the integer $n$ is either the dimension of the multivariate field $\mathcal{X}(t,x)$ or the dimension of its discretized version $\mathcal{X}(t)$. The connection between the Markov process and the parameter dynamics is obtained using the Reynolds averaging technique (Lesieur, 2007, chap. 4).

To perform the Reynolds averaging of Eq. (1), the first step is to replace the random field by its Reynolds decomposition $\mathcal{X}(t,x,\omega)=\mathbb{E}[\mathcal{X}](t,x)+\eta\, e(t,x,\omega)$. In this modeling of the random state, $\mathbb{E}[\mathcal{X}]$ is the ensemble average or the mean state; $e$ is an error or a fluctuation around the mean, and it is an unbiased random field, $\mathbb{E}[e]=0$. Then, Eq. (1) reads as

(3) $\partial_t\mathbb{E}[\mathcal{X}] + \eta\,\partial_t e = \mathcal{M}\left(t, \partial\mathbb{E}[\mathcal{X}] + \eta\,\partial e\right)$,

where η is a control of magnitude introduced to facilitate Taylor's expansion when using a computer algebra system. At the second order, the Taylor's expansion in η of Eq. (3) reads

(4a) $\partial_t\mathbb{E}[\mathcal{X}] + \eta\,\partial_t e = \mathcal{M}\left(t, \partial\mathbb{E}[\mathcal{X}]\right) + \eta\,\mathcal{M}'\left(t, \partial\mathbb{E}[\mathcal{X}]\right)(\partial e) + \eta^2\,\mathcal{M}''\left(t, \partial\mathbb{E}[\mathcal{X}]\right)(\partial e\otimes\partial e)$,

where $\mathcal{M}'$ and $\mathcal{M}''$ are two linear operators; the former refers to the tangent-linear model and the latter to the Hessian, and both are computed with respect to the mean state $\mathbb{E}[\mathcal{X}]$. The first-order expansion is deduced from Eq. (4a) by setting $\eta^2=0$, which then reads as

(4b) $\partial_t\mathbb{E}[\mathcal{X}] + \eta\,\partial_t e = \mathcal{M}\left(t, \partial\mathbb{E}[\mathcal{X}]\right) + \eta\,\mathcal{M}'\left(t, \partial\mathbb{E}[\mathcal{X}]\right)(\partial e)$.

By setting η to 1, the dynamics of the ensemble average are calculated at the second order from the expectation of Eq. (4a) that reads as

(5) $\partial_t\mathbb{E}[\mathcal{X}] = \mathcal{M}\left(t, \partial\mathbb{E}[\mathcal{X}]\right) + \mathcal{M}''\left(t, \partial\mathbb{E}[\mathcal{X}]\right)\left(\mathbb{E}[\partial e\otimes\partial e]\right)$,

where $\partial e\otimes\partial e$ denotes the tensor product of the partial derivatives with respect to the spatial coordinates, i.e., terms such as $\partial^k e\,\partial^m e$ for any positive integers $(k,m)$. Here, we have used the assumption that the partial derivative commutes with the expectation, $\mathbb{E}[\partial e]=\partial\mathbb{E}[e]$, and that $\mathbb{E}[e]=0$. Because the expectation is a projector, $\mathbb{E}[\mathbb{E}[\cdot]]=\mathbb{E}[\cdot]$, the expectation of $\mathcal{M}(t,\partial\mathbb{E}[\mathcal{X}])$ is itself. The second term on the right-hand side makes the feedback (retro-action) of the error appear in the ensemble-averaged dynamics. Hence, Eq. (5) gives the dynamics of the error–mean interaction (or fluctuation–mean interaction).
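As a simple illustration (anticipating the Burgers dynamics of Sect. 4, so the result can be checked against Eq. 21), consider the trend $\mathcal{M}(u)=\kappa\,\partial_x^2 u - u\,\partial_x u$. Writing $u=\mathbb{E}[u]+e$, the expansion is exact for this quadratic nonlinearity,

$\mathcal{M}(\mathbb{E}[u]+e) = \kappa\,\partial_x^2\mathbb{E}[u] - \mathbb{E}[u]\,\partial_x\mathbb{E}[u] + \left(\kappa\,\partial_x^2 e - \mathbb{E}[u]\,\partial_x e - e\,\partial_x\mathbb{E}[u]\right) - e\,\partial_x e,$

and taking the expectation, with $\mathbb{E}[e]=0$ and $\mathbb{E}[e\,\partial_x e]=\frac{1}{2}\partial_x\mathbb{E}[e^2]=\frac{1}{2}\partial_x V_u$, yields the fluctuation–mean dynamics $\partial_t\mathbb{E}[u]=\kappa\,\partial_x^2\mathbb{E}[u]-\mathbb{E}[u]\,\partial_x\mathbb{E}[u]-\frac{1}{2}\partial_x V_u$, which is the first equation of the PKF system later obtained with SymPKF (Eq. 21).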

Note that the tangent-linear dynamics along the ensemble-averaged dynamics in Eq. (5) are obtained as the difference between the first-order Taylor's expansion in Eq. (4b) and its expectation, and they read as

(6) $\partial_t e = \mathcal{M}'\left(t, \partial\mathbb{E}[\mathcal{X}]\right)(\partial e)$.

Now it is possible to detail the dynamics of the error covariance from the dynamics of the error, which tangent-linearly evolve along the mean state 𝔼[𝒳].

2.2.2 Computation of the error covariance dynamics

In the discretized form, the dynamics of the error in Eq. (6) read as the ordinary differential equation (ODE):

(7) $\frac{\mathrm{d}e}{\mathrm{d}t} = \mathbf{M}\,e$,

where $\mathbf{M}$ stands for the tangent-linear (TL) model $\mathcal{M}'(t,\partial\mathbb{E}[\mathcal{X}])$ evaluated at the mean state $\mathbb{E}[\mathcal{X}]$. So the dynamics of the error covariance matrix, $\mathbf{P}=\mathbb{E}[ee^T]$, are given by the ODE

(8a) $\frac{\mathrm{d}\mathbf{P}}{\mathrm{d}t} = \mathbf{M}\mathbf{P} + \mathbf{P}\mathbf{M}^T$

($\mathbf{M}^T$ is the adjoint of $\mathbf{M}$) or its integrated version

(8b) $\mathbf{P}(t) = \mathbf{M}_{t\leftarrow 0}\,\mathbf{P}_0\,\mathbf{M}_{t\leftarrow 0}^T$,

where $\mathbf{M}_{t\leftarrow 0}$ is the propagator associated with the time integration of Eq. (7), initiated from the covariance $\mathbf{P}_0$.

2.2.3 Setting of the KF2

Gathering the dynamics of the ensemble mean given by the fluctuation–mean interaction in Eq. (5) and the covariance dynamics in Eq. (8) leads to the second-order closure approximation of the extended KF, which is the forecast step equations of the KF2.

Similarly to the KF, the principal limitation of the KF2 is the numerical cost associated with the covariance dynamics in Eq. (8): living in a discrete world, the numerical cost of Eq. (8) dramatically increases with the size of the problem. As an example, for the dynamics of a simple scalar field discretized with $n$ grid points, the dimension of its vector representation is $n$, while the size of the error covariance matrix scales as $n^2$, leading to a numerical cost of Eq. (8) between $\mathcal{O}(n^2)$ and $\mathcal{O}(n^3)$.

We now introduce the parametric approximation of covariance matrices, which aims to reduce the cost of the covariance dynamics in Eq. (8).

2.3 Formulation of the PKF prediction

The parametric formulation of the covariance evolution stands as follows. If $\mathbf{P}(\mathcal{P})$ denotes a covariance model featured by a set of parameters $\mathcal{P}=(p_i)_{i\in I}$, then there is a set $\mathcal{P}^f_t$ featuring the forecast-error covariance matrix so that $\mathbf{P}(\mathcal{P}^f_t)$ approximates the forecast-error covariance $\mathbf{P}^f_t$, i.e., $\mathbf{P}(\mathcal{P}^f_t)\approx\mathbf{P}^f_t$. Note that a parameter $p_i$ can be a scalar or a field, e.g., a variance field.

Hence, starting from an initial set of parameters $\mathcal{P}^f_0$ so that $\mathbf{P}(\mathcal{P}^f_0)\approx\mathbf{P}^f_0$, if the dynamics of the parameters $\mathcal{P}^f_t$ are known, then it is possible to approximately determine $\mathbf{P}^f_t$ by $\mathbf{P}(\mathcal{P}^f_t)$ without solving Eq. (8) explicitly. This approach constitutes the so-called parametric Kalman filter (PKF) approximation introduced by Pannekoucke et al. (2016, 2018a) (P16, P18).

In practice, the parametric covariance models considered in the PKF are such that the number of parameters in $\mathcal{P}$ is much lower than the number of coefficients required to represent the full covariance $\mathbf{P}(\mathcal{P})$. For instance, for the dynamics of a scalar field discretized with $n$ grid points, as introduced in Sect. 2.2.3, the total number of parameters in $\mathcal{P}$ should be of the same order as $n$, e.g., $2n$ or $3n$, so that the cost to predict the evolution of the parameters would represent 2 or 3 times the cost to predict the evolution of the scalar field. Said differently, the cost to predict the parameters should scale as $\mathcal{O}(n)$, which is much lower than the computation of Eq. (8) in $\mathcal{O}(n^2)$ to $\mathcal{O}(n^3)$.

The cost of the PKF can be compared with other low-rank methods such as the reduced-rank Kalman filter (Fisher, 1998) or the ensemble Kalman filter (Evensen, 2009), for which an ensemble size of 100 members is often encountered, depending on the dimension of the unstable subspace and to limit the amount of sampling noise. Hence, when each forecast is made at full resolution, the cost of these approaches is 100 times the cost of a prediction, which is larger than the cost we expect for the PKF (see P16, P18). Note that low-rank and ensemble methods often consider the computation of the dynamics at a lower resolution, which leads to a lower cost than the 100 forecasts at full resolution. The PKF is computed at the full resolution and is free from sampling noise.

But the frugality of the covariance model is not the only criterion. For instance, the first variational data assimilation systems considered a covariance model based on the diagonal assumption in spectral space (Courtier et al., 1998; Berre, 2000). This covariance model reads as $\mathbf{P}_s(\mathcal{P}_s)=\boldsymbol{\Sigma}\,\mathbf{S}^{-1}\mathbf{D}\,(\mathbf{S}^{-1})^{*}\boldsymbol{\Sigma}^{*}$, where $\mathbf{S}$ denotes the spectral transform, the superscript $*$ the conjugate transpose operator, and $\boldsymbol{\Sigma}$ the diagonal matrix of standard deviations, i.e., the square root of the variance field $V$. In this model, the set of parameters $\mathcal{P}_s$ is given by the grid-point and the spectral variances, $\mathcal{P}_s=(V,\mathrm{diag}(\mathbf{D}))$. While the size of $\mathbf{P}_s(\mathcal{P}_s)$ is $n^2$, the number of parameters in $\mathcal{P}_s$ for this covariance model is $2n$ ($n$ variances in grid-point space, stored as the standard deviations in the diagonal of $\boldsymbol{\Sigma}$; $n$ variances in spectral space, stored in the diagonal of $\mathbf{D}$), which is quite economical. However, the resulting correlation functions are homogeneous (the correlation function is the same at each point), which is enough to represent climatologically stationary background-error statistics but not the flow-dependent statistics present in the KF. While it is possible to write the equations for the dynamics of the spectral variances (e.g., for linear waves), the limitation that the spectral diagonal approach can only model homogeneous correlations motivated the introduction of other covariance models. For example, the covariance model based on the diagonal assumption in wavelet space (Fisher, 2004; Pannekoucke et al., 2007) can model heterogeneous correlations at a low memory cost. However, the dynamics of the wavelet variances are much more difficult to develop because of the redundancy of the wavelet transform on the sphere.

Hence, a covariance model adapted for the PKF should be able to represent realistic correlations and be such that the dynamics of its parameters can be computed, e.g., a covariance model defined by parameters in grid-point space. To do so, we now focus on the PKF applied to a particular family of covariance models whose parameters are defined in grid-point space by the variance and the anisotropy fields: $\mathcal{P}=(V,\mathbf{g})$, where $\mathbf{g}$ denotes the local anisotropy tensor of the local correlation function.

3 PKF for VLATcov models

This part introduces a particular family of covariance models parameterized by the fields of variances and of the local anisotropy tensor: the VLATcov models (Pannekoucke2021b). What makes this covariance model interesting is that its parameters are related to the error field, and thus it is possible to determine the dynamics of the parameters. To introduce VLATcov models, we first present the diagnosis of the variance and of the local anisotropy tensor; then we present two examples of VLATcov models, and we end the section with a description of the dynamics of the parameters.

3.1 Definition of the fields of variance and of local anisotropy tensor

From now on, we will focus on the forecast-error statistics, so the superscript f is removed for the sake of simplicity. Moreover, for a function $f$, when there is no confusion, the value of $f$ at a point $x$ is written either as $f(x)$ or as $f_x$.

The forecast error being unbiased, 𝔼[e]=0, its variance at a point x is defined as

(9) $V(x) = \mathbb{E}\left[e(x)^2\right]$.

When the error is a random differentiable field, the anisotropy of the two-point correlation function $\rho(x,y)=\frac{1}{\sqrt{V_x V_y}}\mathbb{E}[e(x)e(y)]$ is featured from the second-order expansion,

(10) $\rho(x, x+\delta x) \approx 1 - \frac{1}{2}\|\delta x\|^2_{\mathbf{g}_x}$,

by the local metric tensor g(x) and defined as

(11) $\mathbf{g}(x) = -\left[\nabla_y\nabla_y^T\,\rho_x\right]_{y=x}$,

where $\rho_x(y)=\rho(x,y)$, i.e., componentwise,

$g_{ij}(x) = -\left.\partial^2_{y_i y_j}\rho_x(y)\right|_{y=x}$.

The metric tensor is a symmetric positive definite matrix, and it is a 2×2 (3×3) matrix in a 2D (3D) domain.

Note that it is useful to introduce the local aspect tensor (Purser et al.2003), defined as the inverse of the metric tensor:

(12) $\mathbf{s}(x) = \mathbf{g}(x)^{-1}$,

where the superscript −1 denotes the matrix inverse. The aspect tensor at the point x is geometrically interpreted as an ellipse whose shape coincides with that of the local correlation function.

What makes the metric tensor attractive, at either a theoretical or a practical level, is that it is closely related to the normalized error $\varepsilon = e/\sqrt{V}$ by

(13) $g_{ij}(x) = \mathbb{E}\left[(\partial_{x_i}\varepsilon)(\partial_{x_j}\varepsilon)\right]$

(see, e.g., Pannekoucke2021b, for details).
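For instance, in a 1D domain with a homogeneous Gaussian correlation $\rho(x,y)=\exp\left(-\frac{(x-y)^2}{2L^2}\right)$ of length scale $L$, Eq. (11) gives the metric $g_{xx}=1/L^2$ and Eq. (12) the aspect $s_{xx}=L^2$: the anisotropy tensors encode the (squared) local correlation length scales and, in higher dimensions, their orientation.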

Hence, using the notation introduced in Sect. 2.3, a VLATcov model is a covariance model, $\mathbf{P}(\mathcal{P})$, characterized by the set of two parameter fields, $\mathcal{P}=(p_1,p_2)$, given by the variance field and by the anisotropy field, the latter being defined either by the metric tensor field $\mathbf{g}$ or by the aspect tensor field $\mathbf{s}$, i.e., $\mathcal{P}=(V,\mathbf{g})$ or $\mathcal{P}=(V,\mathbf{s})$. Said differently, any VLATcov model reads as $\mathbf{P}(V,\mathbf{g})$ or $\mathbf{P}(V,\mathbf{s})$.

To put some flesh on the bones, two examples of VLATcov models are now presented.

3.2 Examples of VLATcov models

We first consider the covariance model based on the heterogeneous diffusion operator of Weaver and Courtier (2001), which is used in variational data assimilation to model heterogeneous correlation functions, e.g., for the ocean or for air quality. This model has the property that, under the locally homogeneous assumption (when the spatial derivatives are negligible), the local aspect tensors of the correlation functions are twice the local diffusion tensors (Pannekoucke and Massart, 2008; Mirouze and Weaver, 2010). Hence, by defining the local diffusion tensors as half the local aspect tensors, the covariance model based on the heterogeneous diffusion equation is a VLATcov model.

Another example of a heterogeneous covariance model is the heterogeneous Gaussian covariance model:

(14) $\mathbf{P}_{\mathrm{he.g}}(V,\boldsymbol{\nu})(x,y) = \sqrt{V(x)V(y)}\;\frac{|\boldsymbol{\nu}_x|^{1/4}|\boldsymbol{\nu}_y|^{1/4}}{\left|\frac{1}{2}(\boldsymbol{\nu}_x+\boldsymbol{\nu}_y)\right|^{1/2}}\;\exp\left(-\|x-y\|^2_{(\boldsymbol{\nu}_x+\boldsymbol{\nu}_y)^{-1}}\right)$,

where $\boldsymbol{\nu}$ is a field of symmetric positive definite matrices, and $|\boldsymbol{\nu}|$ denotes the matrix determinant. $\mathbf{P}_{\mathrm{he.g}}(V,\boldsymbol{\nu})$ is a particular case of the class of covariance models deduced from Theorem 1 of Paciorek and Schervish (2004). Again, this covariance model has the property that, under the locally homogeneous assumption, the local aspect tensor is approximately given by $\boldsymbol{\nu}$, i.e., for any point $x$,

(15) $\mathbf{s}_x \approx \boldsymbol{\nu}_x$.

Hence, as for the covariance model based on the diffusion equation, by defining the field ν as the aspect tensor field, the heterogeneous Gaussian covariance model is a VLATcov model (Pannekoucke2021b).

At this stage, all the pieces of the puzzle are put together to build the PKF dynamics. We have covariance models parameterized from the variance and the local anisotropy, which are both related to the error field: knowing the dynamics of the error leads to the dynamics of the VLATcov parameters. This is now detailed.

3.3 PKF prediction step for VLATcov models

When the dynamics of the error e are well approximated from the tangent-linear evolution in Eq. (6), the connection between the covariance parameters and the error, represented in Eqs. (9) and (13), makes it possible to establish the prediction step of the PKF (Pannekoucke et al.2018a), which reads as the dynamics of the ensemble average (at the second-order closure),

(16a) $\partial_t\mathbb{E}[\mathcal{X}] = \mathcal{M}\left(t, \partial\mathbb{E}[\mathcal{X}]\right) + \mathcal{M}''\left(t, \partial\mathbb{E}[\mathcal{X}]\right)\left(\mathbb{E}[\partial e\otimes\partial e]\right)$,

coupled with the dynamics of the variance and the metric,

(16b) $\partial_t V(t,x) = 2\,\mathbb{E}\left[e\,\partial_t e\right]$,
(16c) $\partial_t g_{ij}(t,x) = \mathbb{E}\left[\partial_t\left((\partial_{x_i}\varepsilon)(\partial_{x_j}\varepsilon)\right)\right]$,

where it remains to replace the dynamics of the error (and of its normalized version $\varepsilon=e/\sqrt{V}$) from Eq. (6), and where the property that the expectation operator and the temporal derivative commute, $\partial_t\mathbb{E}[\cdot]=\mathbb{E}[\partial_t\cdot]$, has been used to obtain Eqs. (16b) and (16c).

Following the discussion in Sect. 2.3, the set of Eq. (16) is at the heart of the numerical sobriety of the parametric approach since the cost of the prediction of the parameter scales like 𝒪(n). In contrast to the matrix dynamics of the KF, the PKF approach is designed for the continuous world, leading to PDEs for the parameter dynamics in place of ODEs in Eq. (8) for the full matrix dynamics. Moreover, the dynamics of the parameters shed light on the nature of the processes governing the dynamics of covariances, and it does not require any adjoint of the dynamics (Pannekoucke et al.2016, 2018a).

Note that Eq. (16) can be formulated in terms of aspect tensors thanks to the definition in Eq. (12): since $\mathbf{s}\mathbf{g}=\mathbf{I}$, its time derivative $(\partial_t\mathbf{s})\mathbf{g}+\mathbf{s}(\partial_t\mathbf{g})=0$ leads to the dynamics $\partial_t\mathbf{s}=-\mathbf{s}\,(\partial_t\mathbf{g})\,\mathbf{g}^{-1}$, and then

(17) $\partial_t\mathbf{s} = -\mathbf{s}\,(\partial_t\mathbf{g})\,\mathbf{s}$,

where it remains to replace occurrences of $\mathbf{g}$ by $\mathbf{s}^{-1}$ in the resulting dynamics of the mean, the variance, and the aspect tensor.

Hence, the PKF forecast step for a VLATcov model is given by either the system in Eq. (16) (in metric) or by its aspect tensor formulation thanks to Eq. (17). Whatever the formulation considered, it is possible to carry out the calculations using a formal calculation language. However, even for simple physical processes, the number of terms in formal expressions can become very large; e.g., it is common to have to manipulate expressions with more than 100 terms. Thus, any strategy that simplifies the assessment of PKF systems in advance can quickly become a significant advantage.

In the following section, we present the splitting method that allows the PKF dynamics to be expressed by bringing together the dynamics of each of the physical processes, calculated individually.

3.4 The splitting strategy

When there are several processes in the dynamics in Eq. (1), the calculation of the parametric dynamics can be tedious even when using a computer algebra system. To make better use of computational resources, a splitting strategy can be introduced (Pannekoucke et al., 2016, 2018a).

While the theoretical background is provided by the Lie–Trotter formula for Lie derivatives, the well-known idea of time splitting is easily taken from a first-order Taylor expansion of an Euler numerical scheme.

The computation of dynamics,

(18) $\partial_t\mathcal{X} = f_1(\mathcal{X}) + f_2(\mathcal{X})$,

over a single time step δt can be done in two steps following the numerical scheme

(19) $\mathcal{X}^{*} = \mathcal{X}(t) + \delta t\, f_1\left(\mathcal{X}(t)\right), \qquad \mathcal{X}(t+\delta t) = \mathcal{X}^{*} + \delta t\, f_2\left(\mathcal{X}^{*}\right)$,

where, at order $\delta t$, this scheme is equivalent to $\mathcal{X}(t+\delta t)=\mathcal{X}(t)+\delta t\left[f_1(\mathcal{X}(t))+f_2(\mathcal{X}(t))\right]$, which is the Euler step of Eq. (18). Because $f_1$ and $f_2$ can be viewed as vector fields, the fractional scheme joining the starting point (at $t$) to the end point (at $t+\delta t$) amounts to going along the parallelogram formed by the sum of the two vectors along its sides. Since there are two paths joining the extreme points, starting the computation with $f_2$ is equivalent to starting with $f_1$ (at order $\delta t$); this corresponds to the commutativity of the diagram formed by the parallelogram.
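As a minimal numerical illustration of this equivalence at order $\delta t$ (a sketch with arbitrary trends $f_1$ and $f_2$, not taken from SymPKF), the unsplit Euler step and the two fractional steps of Eq. (19) differ by a term of order $\delta t^2$:

import numpy as np

# Arbitrary illustrative trends (not from SymPKF)
f1 = lambda X: -0.5 * X
f2 = lambda X: np.sin(X)

def euler_step(X, dt):
    # Unsplit Euler step of Eq. (18)
    return X + dt * (f1(X) + f2(X))

def split_step(X, dt):
    # Two fractional steps of Eq. (19)
    X_star = X + dt * f1(X)
    return X_star + dt * f2(X_star)

X0, dt = 1.0, 1e-3
print(abs(euler_step(X0, dt) - split_step(X0, dt)))  # difference is O(dt**2)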

Appendix A shows that the dynamics given by Eq. (18) imply dynamics of the error, the variance, the metric, and the aspect written as a sum of trends. Hence, it is possible to apply a splitting for all these dynamics.

As a consequence, calculating the parametric dynamics of Eq. (18) is equivalent to separately calculating the parametric dynamics of $\partial_t\mathcal{X}=f_1(\mathcal{X})$ and $\partial_t\mathcal{X}=f_2(\mathcal{X})$ and then bringing the two parametric dynamics together into a single one by summing the trends for the mean, the variance, the metric, or the aspect dynamics. This splitting also applies when there are more than two processes and appears to be a general method to reduce the complexity of the calculation.

3.5 Discussion and intermediate conclusion

Although the calculation of the system in Eq. (16) is straightforward, as it is similar to the calculation of the Reynolds equations (Pannekoucke et al., 2018a), it is tedious because of the many terms involved, and there is a risk of introducing errors when deriving it by hand.

Then, once the dynamics of the parameters are established, it remains to design a numerical code to test whether the uncertainty is effectively well represented by the PKF dynamics. Again, the design of a numerical code is not necessarily difficult, but with numerous terms the risk of introducing an error is significant.

To facilitate the design of the PKF dynamics and the numerical evaluation, the package SymPKF has been introduced to perform the VLATcov parameter dynamics and to generate a numerical code used for the investigations (Pannekoucke2021c). The next section introduces and details this tool.

4 Symbolic computation of the PKF for VLATcov

In order to introduce the symbolic computation of the PKF for the VLATcov model, we consider an example: the diffusive nonlinear advection in the Burgers equation, which reads

(20) $\partial_t u + u\,\partial_x u = \kappa\,\partial_x^2 u$,

where $u$ stands for the velocity field, a function of the time $t$ and of the spatial coordinate $x$, and where $\kappa$ is a diffusion coefficient (constant here). This example illustrates the workflow leading to the PKF dynamics. It consists of defining the system of equations in SymPy and then computing the dynamics with Eq. (16); we now detail these two steps.

4.1 Definition of the dynamics

The definition of the dynamics relies on the formalism of SymPy as shown in Fig. 1. The coordinate system is first defined as instances of the class Symbols. Note that the time is defined as sympkf.t, while the spatial coordinate is left to the choice of the user, here x. Then, the function u is defined as an instance of the class Function as a function of (t,x).

Figure 1. Sample of code and Jupyter notebook outputs for the definition of the Burgers dynamics using SymPKF.

In this example, the dynamics consist of a single equation defined as an instance of the class Eq, but in the general situation in which the dynamics are given as a system of equations, the dynamics have to be represented as a Python list of equations.

A preprocessing of the dynamics is then performed to determine several important quantities needed to handle the dynamics: the prognostic fields (functions for which a time derivative is present), the diagnostic fields (functions for which there is no time derivative in the dynamics), the constant functions (functions that only depend on the spatial coordinates), and the constants (pure scalar terms that are not a function of any coordinate). This preprocessing is performed when the dynamics are transformed into an instance of the class PDESystem, whose default string output delivers a summary of the dynamics: for the Burgers equation, there is only one prognostic function, u(t,x), and one constant, κ.
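A minimal sketch of this definition step, written in plain SymPy (the SymPKF-specific calls are indicated as comments, and their exact signatures are assumptions rather than the package's documented API), reads as follows.

import sympy as sp

t, x = sp.symbols('t x')      # in SymPKF, the time coordinate is sympkf.t
kappa = sp.Symbol('kappa')
u = sp.Function('u')(t, x)

# Burgers dynamics of Eq. (20), written as a SymPy equation
burgers = sp.Eq(sp.Derivative(u, t),
                kappa * sp.Derivative(u, x, 2) - u * sp.Derivative(u, x))

# The SymPKF preprocessing described above would then be (assumed call):
# dynamics = PDESystem(burgers)
# print(dynamics)   # summary: one prognostic function u(t, x), one constant kappa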

The prognostic quantities being known, it is then possible to perform the computation of the PKF dynamics, as discussed now.

4.2 Computation of the VLATcov PKF dynamics

Thanks to the preprocessing, we are able to determine the VLATcov parameters needed to compute the PKF dynamics, which are the variance and the anisotropy tensor associated with the prognostic fields. For the Burgers equation, the VLATcov parameters are the variance $V_u$ and the metric tensor $\mathbf{g}_u=(g_{u,xx})$ or its associated aspect tensor $\mathbf{s}_u=(s_{u,xx})$. Note that, in SymPKF, the VLATcov parameters are labeled by their corresponding prognostic fields to facilitate their identification. This labeling is achieved when the dynamics are transformed into an instance of the class SymbolicPKF. This class is at the core of the computation of the PKF dynamics from Eq. (16).

As discussed in Sect. 2.2.1, the PKF dynamics rely on the second-order fluctuation–mean interaction dynamics wherein each prognostic function is replaced by a stochastic counterpart. Hence, the constructor of SymbolicPKF converts each prognostic function as a function of an additional coordinate, ω∈Ω. For the Burgers equation, u(t,x) becomes u(t,x,ω).

Since the computation of the second-order fluctuation–mean interaction dynamics relies on the expectation operator, an implementation of this operator has been introduced in SymPKF: it is defined as the class Expectation, built by inheritance from the class sympy.Function to leverage the computational facilities of SymPy. The implementation of the class Expectation is based on the linearity of the mathematical expectation operator with respect to deterministic quantities and its commutativity with partial derivatives and integrals with respect to coordinates different from ω, e.g., for the Burgers equation, $\mathbb{E}[\partial_x u(t,x,\omega)] = \partial_x\mathbb{E}[u(t,x,\omega)]$. Note that $\mathbb{E}[u(t,x,\omega)]$ is a function of $(t,x)$ only: the expectation operator converts a random variable into a deterministic variable.
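To give the flavour of such an operator, here is a deliberately minimal toy version (a sketch only; the actual SymPKF class Expectation is richer and its internals may differ):

import sympy as sp

# Random coordinate; the name omega mirrors the text (an assumption about internals)
omega = sp.Symbol('omega')

class E(sp.Function):
    """Toy expectation: linear, pulls out deterministic factors, and commutes
    with derivatives taken with respect to coordinates other than omega."""

    @classmethod
    def eval(cls, expr):
        if not expr.has(omega):               # deterministic quantity: E[a] = a
            return expr
        if isinstance(expr, sp.Add):          # linearity over sums
            return sp.Add(*[cls(a) for a in expr.args])
        if isinstance(expr, sp.Mul):          # pull out deterministic factors
            det = [a for a in expr.args if not a.has(omega)]
            rnd = [a for a in expr.args if a.has(omega)]
            if det:
                return sp.Mul(*det) * cls(sp.Mul(*rnd))
        if isinstance(expr, sp.Derivative):   # commute with d/dx, d/dt, ...
            if all(v != omega for v, _ in expr.variable_count):
                return sp.Derivative(cls(expr.expr), *expr.args[1:])
        return None                           # otherwise stay unevaluated

t, x = sp.symbols('t x')
u = sp.Function('u')(t, x, omega)
print(E(sp.Derivative(u, x)))   # Derivative(E(u(t, x, omega)), x)
print(E(2 * u + x))             # 2*E(u(t, x, omega)) + x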

Then, the symbolic computation of the second-order fluctuation–mean interaction dynamics in Eq. (16a) is performed, thanks to SymPy, by following the steps described in Sect. 2.2.1. In particular, the computation also leads to the tangent-linear dynamics of the error in Eq. (6), from which it is possible to compute the dynamics of the variance in Eq. (16b) and of the metric tensor in Eq. (16c) (or its associated aspect tensor version). These steps and the appropriate substitutions are applied when calling the in_metric or in_aspect Python property of an instance of the class SymbolicPKF. This is shown for the Burgers equation in Fig. 2, where the background computation of the PKF dynamics leads to a list of three coupled equations corresponding to the mean, the variance, and the aspect tensor, similar to the system in Eq. (22) of Pannekoucke et al. (2018a), where it was first obtained.
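Schematically, and with the caveat that the exact constructor signature and import path are assumptions, the workflow reads:

# Hypothetical usage sketch (class and property names taken from the text):
# from sympkf import SymbolicPKF
# pkf_burgers = SymbolicPKF(burgers)   # 'burgers' defined as in Sect. 4.1
# pkf_burgers.in_metric                # PKF system in the metric formulation
# pkf_burgers.in_aspect                # PKF system in the aspect-tensor formulation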

Figure 2. Sample of code and Jupyter notebook outputs: systems of partial differential equations given in metric and in aspect forms produced by SymPKF when applied to the Burgers equation (Eq. 20).

Hence, from SymPKF, for the Burgers equation, the VLATcov PKF dynamics given in the aspect tensor read as

(21) $\partial_t u = \kappa\,\partial_x^2 u - u\,\partial_x u - \frac{\partial_x V_u}{2}$,
$\partial_t V_u = -\frac{2\kappa\,V_u}{s_{u,xx}} + \kappa\,\partial_x^2 V_u - \frac{\kappa\,(\partial_x V_u)^2}{2 V_u} - u\,\partial_x V_u - 2 V_u\,\partial_x u$,
$\partial_t s_{u,xx} = 2\kappa\, s_{u,xx}^2\,\mathbb{E}\left[\varepsilon_u\,\partial_x^4\varepsilon_u\right] - 3\kappa\,\partial_x^2 s_{u,xx} - 2\kappa + \frac{6\kappa\,(\partial_x s_{u,xx})^2}{s_{u,xx}} - \frac{2\kappa\, s_{u,xx}\,\partial_x^2 V_u}{V_u} + \frac{\kappa\,\partial_x V_u\,\partial_x s_{u,xx}}{V_u} + \frac{2\kappa\, s_{u,xx}\,(\partial_x V_u)^2}{V_u^2} - u\,\partial_x s_{u,xx} + 2 s_{u,xx}\,\partial_x u$,

where $s_{u,xx}$ is the single component of the aspect tensor $\mathbf{s}_u$ in a 1D domain. Note that in the output of the PKF equations, as reproduced in Eq. (21), the expectation in the dynamics of the mean is replaced by the prognostic field; for the Burgers equation, $\mathbb{E}[u](t,x)$ is simply denoted by $u(t,x)$.

While the Burgers equation only contains two physical processes, i.e., the nonlinear advection and the diffusion, the resulting PKF dynamics in Eq. (21) make numerous terms appear, which justifies the use of symbolic computation, as mentioned above. The computation of the PKF dynamics leading to the metric and to the aspect tensor formulation takes about 1 s of computation (Intel Core i7-7820HQ CPU at 2.90 GHz × 8).

In this example, the splitting strategy has not been used to simplify the computation of the PKF dynamics. However, it can be: the PKF dynamics for the advection $\partial_t u = -u\,\partial_x u$ and for the diffusion $\partial_t u = \kappa\,\partial_x^2 u$ can be computed separately and then merged to find the PKF dynamics of the full Burgers equation. For instance, Fig. 3 shows the PKF dynamics for the advection (first cell) and for the diffusion (second cell); the outputs can be traced back in Eq. (21), e.g., by the terms in $\kappa$ for the diffusion.

Figure 3. Illustration of the splitting strategy used to compute the PKF dynamics, applied here to the Burgers equation: the PKF dynamics of the Burgers equation can be obtained from the PKF dynamics of the advection (first cell) and of the diffusion (second cell).

Thanks to the symbolic computation using the expectation operator, as implemented by the class Expectation, it is possible to handle terms such as $\mathbb{E}\left[\varepsilon_u\,\partial_x^4\varepsilon_u\right]$ during the computation of the PKF dynamics. The next section details how these terms are handled during the computation and the closure issue they bring.

4.3 Comments on the computation of the VLATcov PKF dynamics and the closure issue

4.3.1 Computation of the terms $\mathbb{E}[\partial^\alpha\varepsilon\,\partial^\beta\varepsilon]$ and their connection to the correlation function

An important point is that terms such as $\mathbb{E}[\varepsilon\,\partial^\alpha\varepsilon]$, e.g., $\mathbb{E}\left[\varepsilon_u\,\partial_x^4\varepsilon_u\right]$ in Eq. (21), are directly connected to the correlation function $\rho(x,y)=\mathbb{E}[\varepsilon(x)\varepsilon(y)]$, whose Taylor expansion is written as

(22) $\rho(x, x+\delta x) = \sum_k \frac{1}{k!}\,\mathbb{E}\left[\varepsilon(x)\,\partial^k\varepsilon(x)\right]\,\delta x^k$.

However, during their computation, the VLATcov PKF dynamics make terms $\mathbb{E}[\partial^\alpha\varepsilon\,\partial^\beta\varepsilon]$ appear, with $|\alpha||\beta|\neq 0$, where, for any multi-index $\alpha=(\alpha_i)_{i\in[1,n]}$, $\partial^\alpha$ denotes the derivative of order $\alpha_i$ with respect to the $i$th coordinate $x_i$ of the coordinate system, and where the sum of all derivative orders is denoted by $|\alpha|=\sum_i\alpha_i$. The issue is that these terms $\mathbb{E}[\partial^\alpha\varepsilon\,\partial^\beta\varepsilon]$ are not directly connected to the Taylor expansion in Eq. (22).

The interesting property of these terms is that they can be rewritten as spatial derivatives of terms of the form $\mathbb{E}[\varepsilon\,\partial^\gamma\varepsilon]$. More precisely, any term $\mathbb{E}[\partial^\alpha\varepsilon\,\partial^\beta\varepsilon]$ can be written from derivatives of terms $\mathbb{E}[\varepsilon\,\partial^\gamma\varepsilon]$, where $|\gamma|<|\alpha|+|\beta|$, and the term $\mathbb{E}[\varepsilon\,\partial^{\alpha+\beta}\varepsilon]$ (see Appendix B for the proof). So, to replace any term $\mathbb{E}[\partial^\alpha\varepsilon\,\partial^\beta\varepsilon]$ by terms $\mathbb{E}[\varepsilon\,\partial^\gamma\varepsilon]$ with $|\gamma|<|\alpha|+|\beta|$, a substitution dictionary is computed in SymPKF and stored as the variable subs_tree. The computation of this substitution dictionary is performed thanks to a dynamic programming strategy. Thereafter, the integer $|\alpha|+|\beta|$ is called the order of the term $\mathbb{E}[\partial^\alpha\varepsilon\,\partial^\beta\varepsilon]$. Figure 4 shows the substitution dictionary computed for the Burgers equation. It appears that terms of order lower than 4 can be explicitly written from the metric (or its derivatives), while terms of order 4 and larger cannot: this is known as the closure issue (Pannekoucke et al., 2018a).
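As a short worked example of this reduction, consider the order-2 term $\mathbb{E}[(\partial_x\varepsilon)^2]$: since $\mathbb{E}[\varepsilon^2]=1$, one has $\mathbb{E}[\varepsilon\,\partial_x\varepsilon]=\frac{1}{2}\partial_x\mathbb{E}[\varepsilon^2]=0$, and differentiating once more gives $\mathbb{E}[(\partial_x\varepsilon)^2]=-\mathbb{E}[\varepsilon\,\partial_x^2\varepsilon]=g_{xx}$, which recovers Eq. (13); the same mechanism, iterated, produces the entries of the substitution dictionary.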

Figure 4. Substitution dictionary computed in SymPKF to replace terms such as $\mathbb{E}[\partial^\alpha\varepsilon\,\partial^\beta\varepsilon]$ by terms $\mathbb{E}[\varepsilon\,\partial^\gamma\varepsilon]$, where $|\gamma|<|\alpha|+|\beta|$.

The term $\mathbb{E}[\varepsilon\,\partial_x^4\varepsilon]$, which features long-range correlations, cannot be related to the variance or to the metric and has to be closed. We detail this point in the next section.

4.3.2 Analytical and data-driven closure

A naïve closure of the PKF dynamics in Eq. (21) would be to replace the unknown term $\mathbb{E}\left[\varepsilon_u\,\partial_x^4\varepsilon_u\right]$ by zero. However, in the third equation, which corresponds to the aspect tensor dynamics, the coefficient $-3\kappa$ of the diffusion term $\partial_x^2 s_u$ is negative, so the dynamics of $s_u$ would numerically explode at an exponential rate. Of course, because the system represents the uncertainty dynamics of the Burgers equation (Eq. 20), which is well posed, the parametric dynamics should not explode. Hence, the unknown term $\mathbb{E}\left[\varepsilon_u\,\partial_x^4\varepsilon_u\right]$ is crucial: it can balance the negative diffusion and stabilize the parametric dynamics.

For the Burgers equation, a closure for $\mathbb{E}\left[\varepsilon_u\,\partial_x^4\varepsilon_u\right]$ has been previously proposed (Pannekoucke et al., 2018a), given by

(23) $\mathbb{E}\left[\varepsilon_u\,\partial_x^4\varepsilon_u\right] \cong \frac{2\,\partial_x^2 s_u}{s_u^2} + \frac{3}{s_u^2} - \frac{4\,(\partial_x s_u)^2}{s_u^3}$,

where the symbol $\cong$ is used to indicate that this is not an equality but a proposed closure for the term on the left-hand side, and which leads to the closed system

(24) $\partial_t u = -u\,\partial_x u + \kappa\,\partial_x^2 u - \frac{1}{2}\partial_x V_u$,
$\partial_t V_u = -u\,\partial_x V_u - 2(\partial_x u)\,V_u + \kappa\,\partial_x^2 V_u - \frac{\kappa}{2}\frac{(\partial_x V_u)^2}{V_u} - \frac{2\kappa\,V_u}{s_{u,xx}}$,
$\partial_t s_{u,xx} = -u\,\partial_x s_{u,xx} + 2(\partial_x u)\,s_{u,xx} + 4\kappa - 2\kappa\,\frac{s_{u,xx}}{V_u}\partial_x^2 V_u + 2\kappa\,\frac{s_{u,xx}}{V_u^2}(\partial_x V_u)^2 + \kappa\,\frac{\partial_x V_u\,\partial_x s_{u,xx}}{V_u} + \kappa\,\partial_x^2 s_{u,xx} - \frac{2\kappa}{s_{u,xx}}(\partial_x s_{u,xx})^2$.

The closure in Eq. (23) results from a local Gaussian approximation of the correlation function. Previous numerical experiments have shown that this closure is well adapted to the Burgers equation (Pannekoucke et al.2018a). But the approach that has been followed to find this closure is quite specific, and it would be interesting to design a general way to find such a closure.

In particular, it would be interesting to search for a generic way to design closures that leverage the symbolic computation, which could be plugged with the PKF dynamics computed from SymPKF at a symbolic level. To do so, we propose an empirical closure that leverages a data-driven strategy to hybridize machine learning with physics, as proposed by Pannekoucke and Fablet (2020) with their neural network generator PDE-NetGen.

The construction of the proposal relies on the symbolic computation shown in Fig. 5.

Figure 5. Example of a symbolic computation leading to a proposal for the closure of the unknown terms of order 4 and 5.

The first step is to consider an analytical approximation for the correlation function. For the illustration, we consider the local correlation function to be well approximated by the quasi-Gaussian function

(25) $\rho(x, x+\delta x) \approx \exp\left(-\frac{\delta x^2}{s_u(x) + s_u(x+\delta x)}\right)$.

Then, the second step is to perform the Taylor expansion of Eq. (22) at a symbolic level. This is done thanks to SymPy with the method series applied to Eq. (25) for $\delta x$ near 0 and at a given order; e.g., for the illustration, the expansion is computed at the sixth order in Fig. 5.
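A minimal SymPy sketch of this step (written here for the 1D quasi-Gaussian correlation of Eq. 25; variable names are illustrative and do not follow SymPKF's internals) reads:

import sympy as sp

x, dx = sp.symbols('x delta_x')
s = sp.Function('s')(x)  # stands for the aspect component s_u,xx

# Quasi-Gaussian correlation of Eq. (25)
rho = sp.exp(-dx**2 / (s + s.subs(x, x + dx)))

# Sixth-order Taylor expansion in delta_x, as in Eq. (22)
expansion = rho.series(dx, 0, 6).removeO().doit()

# E[eps * d^4 eps] is 4! times the coefficient of delta_x**4 (see Eq. 22)
closure_order4 = sp.simplify(sp.factorial(4) * expansion.coeff(dx, 4))
print(closure_order4)  # should reproduce the right-hand side of Eq. (26)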

Then, the identification with the Taylor's expansion in Eq. (22) leads to the closure

(26) $\mathbb{E}\left[\varepsilon_u\,\partial_x^4\varepsilon_u\right] \cong \frac{3\,\partial_x^2 s_{u,xx}}{s_{u,xx}^2} + \frac{3}{s_{u,xx}^2} - \frac{3\,(\partial_x s_{u,xx})^2}{s_{u,xx}^3}$.

While it looks like the closure in Eq. (23), the coefficients are not the same. But this suggests that the closure of $\mathbb{E}\left[\varepsilon_u\,\partial_x^4\varepsilon_u\right]$ can be expanded as

(27) $\mathbb{E}\left[\varepsilon_u\,\partial_x^4\varepsilon_u\right] \cong a_0^4\,\frac{\partial_x^2 s_{u,xx}}{s_{u,xx}^2} + \frac{a_1^4}{s_{u,xx}^2} + a_2^4\,\frac{(\partial_x s_{u,xx})^2}{s_{u,xx}^3}$,

where $a^4=(a_0^4, a_1^4, a_2^4)$ represents three unknown real coefficients. A data-driven strategy can be considered to find an appropriate value of $a^4$ from experiments. This has been investigated by using the automatic generator of neural networks PDE-NetGen, which bridges the gap between physics and machine learning (Pannekoucke and Fablet, 2020), and with which the training has led to the value $a^4\approx(1.86, 3.0, 3.6)$. Since this proposal is deduced from symbolic computation, it is easy to build proposals for higher-order unknown terms, as shown in Fig. 5 for the term $\mathbb{E}\left[\varepsilon_u\,\partial_x^5\varepsilon_u\right]$.

Whatever closure has been obtained in an analytical or an empirical way, it remains to compute the closed PKF dynamics to assess their performance. To do so a numerical implementation of the system of partial differential equations has to be introduced. As for the computation of the PKF dynamics, the design of a numerical code can be tedious, with a risk of introducing errors in the implementation due to the numerous terms occurring in the PKF dynamics. To facilitate the research on the PKF, SymPKF comes with a Python numerical code generator, which provides an end-to-end investigation of the PKF dynamics. This code generator is now detailed.

Figure 6. Introduction of a closure and automatic generation of a numerical code in SymPKF.

4.4 Automatic code generation for numerical simulations

While a compiled language with appropriate optimization would be important for industrial applications, we chose to implement a pure Python code generator, which offers a simple research framework for exploring the design of PKF dynamics. It would have been possible to use a code generator already based on SymPy (see, e.g., Louboutin et al., 2019), but such code generators are domain-specific and therefore less adapted to the investigation of the PKF for arbitrary dynamics. Instead, we consider a finite-difference implementation of the partial derivatives with respect to spatial coordinates. The default domain on which the computation is performed is the periodic unit square with the dimension of the number of spatial coordinates. The length of the domain can be specified along each direction. The domain is regularly discretized along each direction, while the number of grid points can be specified for each direction.

The finite difference takes the form of an operator that approximates any partial derivative at second order of consistency: for any multi-index $\alpha$, $F^\alpha u = \partial^\alpha u + \mathcal{O}(|\delta x|^2)$, where $\mathcal{O}$ is Landau's big-O notation: for any $f$, $f(\delta x)=\mathcal{O}(\delta x^2)$ means that $\lim_{\delta x\to 0} f(\delta x)/\delta x^2$ is finite. The operators computed with respect to independent coordinates commute, e.g., $F^{xy}=F^x\circ F^y=F^y\circ F^x$, where $\circ$ denotes the composition, but they do not commute when applied to the same coordinate, e.g., $F^{x^2}\neq F^x\circ F^x$. The finite difference of the partial derivative with respect to a multi-index is computed sequentially, e.g., $F^{xxy}=F^{x^2}\circ F^y=F^y\circ F^{x^2}$. The finite difference of order $\alpha$ with respect to a single spatial coordinate is the centered finite difference based on $\alpha+1$ points.
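As a generic illustration of such centered operators (a sketch on a periodic 1D grid, not the code actually rendered by SymPKF):

import numpy as np

def centered_dx(field, dx):
    # Second-order centered finite difference of the first derivative
    # on a periodic 1D grid (illustrates an operator like F^x).
    return (np.roll(field, -1) - np.roll(field, 1)) / (2.0 * dx)

def centered_dxx(field, dx):
    # Second-order centered finite difference of the second derivative (F^{x^2}).
    return (np.roll(field, -1) - 2.0 * field + np.roll(field, 1)) / dx**2

# Check on the periodic unit segment discretized with 241 points
n = 241
x = np.linspace(0.0, 1.0, n, endpoint=False)
u = np.sin(2 * np.pi * x)
err = np.max(np.abs(centered_dx(u, x[1] - x[0]) - 2 * np.pi * np.cos(2 * np.pi * x)))
print(err)  # small, of order dx**2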

For instance, Fig. 6 shows how to close the PKF dynamics for the Burgers equation following P18 and how to build a code from an instance of the class sympkf.FDModelBuilder: it creates the class ClosedPKFBurgers. In this example, the code is rendered from templates thanks to Jinja; then it is executed at runtime. Note that the code can also be written into an appropriate Python module for adapting it to a particular situation or for checking the correctness of the generated code. At the end, the instance closed_pkf_burgers of the class ClosedPKFBurgers is created, raising a warning to indicate that the value of the constant κ has to be specified before performing a numerical simulation. Note that it is possible to set the value of kappa as a keyword argument of the class ClosedPKFBurgers. Figure 6 also shows a sample of the generated code with the implementation of the computation of the first-order partial derivative $\partial_x V_u$, which appears as a centered finite difference. Then, the sample of code shows how the partial derivatives are used to compute the trend of the system of partial differential equations in Eq. (24).

The numerical integration is handled through the inheritance mechanism: the class ClosedPKFBurgers inherits the integration time loop from the class sympkf.Model, as described by the unified modeling language (UML) diagram shown in Fig. 7. In particular, the class Model contains several time schemes, e.g., a fourth-order Runge–Kutta scheme. Note that in the present implementation of SymPKF, only explicit time schemes are considered, but it could be possible to leverage the symbolic computation to implement other schemes better adapted to a given PDE, e.g., an implicit scheme for the transport or the diffusion, or a high-order exponential time-differencing method (Kassam and Trefethen, 2005) for which the linear and the nonlinear parts would be automatically determined from the symbolic computation. The details of the instance closed_pkf_burgers of the class ClosedPKFBurgers show that the closed system in Eq. (24) will be integrated by using an RK4 time scheme on the segment [0,D] (here D=1) with periodic boundaries and discretized by 241 points.

Figure 7. UML diagram showing the inheritance mechanism implemented in SymPKF: the class ClosedPKFBurgers inherits from the class Model, which implements several time schemes. Here, closed_pkf_burgers is an instance of the class ClosedPKFBurgers.

Thanks to the end-to-end framework proposed in SymPKF, it is possible to perform a numerical simulation based on the closed PKF dynamics in Eq. (24). To do so, we set $\kappa=0.0025$ and consider the simulation starting from the Gaussian distribution $\mathcal{N}(u_0,\mathbf{P}_h^f)$ of mean $u_0(x)=U_{\max}\left[1+\cos\left(2\pi(x-D/4)/D\right)\right]/2$ with $U_{\max}=0.5$ and of covariance matrix

(28) $\mathbf{P}_h^f(x,y) = V_h\,\exp\left(-\frac{(x-y)^2}{2\, l_h^2}\right)$,

where $V_h = 0.01\,U_{\max}$ and $l_h = 0.02\,D \approx 5\,\mathrm{d}x$. The time step of the fourth-order Runge–Kutta scheme is $\mathrm{d}t=0.002$. The evolution predicted from the PKF is shown in Fig. 8 (solid lines). This simulation illustrates the time evolution of the mean (panel a) and of the variance (panel b); panel (c) represents the evolution of the correlation length scale defined from the aspect tensor as $L(x)=\sqrt{s_{u,xx}(x)}$. Note that at time 0, the length-scale field is $L(x)=l_h$. For the illustrations, the variance (the length scale) is normalized by its initial value $V_h$ ($l_h$).
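For reference, the initial fields of this experiment can be built as follows (a sketch mirroring the settings above; array names are illustrative and independent of the generated ClosedPKFBurgers class):

import numpy as np

D, n = 1.0, 241
x = np.linspace(0.0, D, n, endpoint=False)
dx = x[1] - x[0]

U_max = 0.5
u0 = 0.5 * U_max * (1.0 + np.cos(2 * np.pi * (x - D / 4) / D))  # mean state
V_h = 0.01 * U_max                  # initial variance
l_h = 0.02 * D                      # initial length scale, about 5 dx
V0 = np.full(n, V_h)                # initial variance field
s0 = np.full(n, l_h**2)             # initial aspect field, so that L = sqrt(s0) = l_h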

Figure 8. Illustration of a numerical simulation of the closed PKF dynamics in Eq. (24) (solid line), with the mean (a), the variance (b), and the correlation length scale (c), which is defined from the component $s_{u,xx}$ of the aspect tensor by $L(x)=\sqrt{s_{u,xx}(x)}$. An ensemble-based validation of the PKF dynamics is shown as a dashed line. (Pannekoucke and Fablet, 2020; see their Fig. 7.)

In order to show the skill of the PKF applied to the Burgers equation when using the closure of P18, an ensemble validation is now performed. Note that the code generator of SymPKF can be used for arbitrary dynamics, e.g., the Burgers equation itself. Hence, a numerical code solving the Burgers equation is rendered from its symbolic definition. Then an ensemble of 1600 forecasts is computed starting from an ensemble of initial errors at time 0. The ensemble of initial errors is sampled from the Gaussian distribution $\mathcal{N}(0,\mathbf{P}_h^f)$ of zero mean and covariance matrix $\mathbf{P}_h^f$. Note that the ensemble forecasting implemented in SymPKF as the method Model.ensemble_forecast (see Fig. 7) leverages the multiprocessing tools of Python to use the multiple cores of the CPU when present. On the computer used for the simulation, the forecasts are performed in parallel on eight cores. The ensemble estimation of the mean, the variance, and the length scale is shown in Fig. 8 (dashed lines). Since the ensemble is finite, sampling noise is visible, e.g., in the variance at the initial time, which is not strictly equal to $V_h$. In this simulation, the PKF (solid lines) coincides with the ensemble estimation (dashed lines), which shows the ability of the PKF to predict the forecast-error covariance dynamics. Note that the notebook corresponding to the Burgers experiment is available in the example directory of SymPKF.

While this example shows an illustration of SymPKF in a 1D domain, the package also applies in 2D and 3D domains, as presented now.

4.5 Illustration of dynamics in a 2D domain

In order to illustrate the ability of SymPKF to apply in a 2D or 3D domain, we consider the linear advection of a scalar field c(t,x,y) by a stationary velocity field u=(u(x,y),v(x,y)), which reads as the partial differential equation

(29) $\partial_t c + \mathbf{u}\cdot\nabla c = 0$.

As for the Burgers equation, the definition of the dynamics relies on SymPy (not shown but similar to the definition of the Burgers equation as given in Fig. 1). This leads to preprocessing the dynamics by creating the instance advection of the class PDESystem, which transforms the equation into a system of partial differential equations. In particular, the procedure will diagnose the prognostic functions of dynamics, here the function c. Then it identifies the constant functions, which can depend on space but not on time: here, these are the components of the velocity (u, v). The process also identifies exogenous functions and constants, of which there are none here.
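A minimal SymPy sketch of this definition (illustrative only; it mirrors the Burgers case of Fig. 1) could read:

import sympy as sp

t, x, y = sp.symbols('t x y')
u = sp.Function('u')(x, y)   # stationary velocity components
v = sp.Function('v')(x, y)
c = sp.Function('c')(t, x, y)

# 2D linear advection of Eq. (29)
advection_eq = sp.Eq(sp.Derivative(c, t),
                     -u * sp.Derivative(c, x) - v * sp.Derivative(c, y))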

The calculation of the parametric dynamics is handled by the class SymbolicPKF as shown in the first cell in Fig. 9. The parametric dynamics are a property of the instance pkf_advection of the class SymbolicPKF, and when it is called, the parametric dynamics are computed once and for all. The parametric dynamics formulated in terms of metric are first computed; see the second cell. For the 2D linear advection, the parametric dynamics are a system of five partial differential equations, as is shown in the output of the second cell: the dynamics of the ensemble average 𝔼[c], which outputs as c for the sake of simplicity (first equation), the dynamics of the variance (second equation), and the dynamics of the local metric tensor (last three equations). In compact form, the dynamics are given by the system

(30a) $\partial_t c + \mathbf{u}\cdot\nabla c = 0$,
(30b) $\partial_t V_c + \mathbf{u}\cdot\nabla V_c = 0$,
(30c) $\partial_t \mathbf{g}_c + \mathbf{u}\cdot\nabla\mathbf{g}_c = -\mathbf{g}_c\,\nabla\mathbf{u} - (\nabla\mathbf{u})^T\mathbf{g}_c$,

which corresponds to the 2D extension of the 1D dynamics first found by Cohn (1993) (Pannekoucke et al., 2016) and validates the computation performed in SymPKF. Because the advection in Eq. (29) is linear, the ensemble average in Eq. (30a) is governed by the same dynamics as Eq. (29). While both the variance in Eq. (30b) and the metric are advected by the flow, the metric is also deformed by the shear in Eq. (30c). This deformation appears more directly in the dynamics written in aspect tensor form, which is given by

(31a) $\partial_t c + \mathbf{u}\cdot\nabla c = 0$,
(31b) $\partial_t V_c + \mathbf{u}\cdot\nabla V_c = 0$,
(31c) $\partial_t \mathbf{s}_c + \mathbf{u}\cdot\nabla\mathbf{s}_c = \nabla\mathbf{u}\;\mathbf{s}_c + \mathbf{s}_c\,(\nabla\mathbf{u})^T$,

where Eq. (31c) is similar to the dynamics of the conformation tensor in viscoelastic flow (Bird and Wiest1995; Hameduddin et al.2018).

Figure 9. Sample of code and Jupyter notebook outputs: system of partial differential equations produced by SymPKF when applied to the linear advection in Eq. (29).

Figure 10. Output of the computation by SymPKF of the PKF dynamics for the simple multivariate periodic chemical reaction, corresponding to the right-hand side of Eq. (32).

We do not introduce any numerical simulation of the PKF dynamics in Eq. (30) or Eq. (31), but interested readers are referred to the 2D numerical PKF assimilation cycles of Pannekoucke (2021b), which have been made thanks to SymPKF.

This example illustrates a 2D situation and shows the multidimensional capabilities of SymPKF. Similarly to the simulation conducted for the Burgers equation, it is possible to automatically generate a numerical code able to perform numerical simulations of the dynamics in Eq. (31) (not shown here), so the package readily applies in dimensions larger than 1D.

Before concluding, we would like to present a preliminary application of SymPKF in a multivariate situation.

4.6 Towards the PKF for multivariate dynamics

SymPKF can be used to compute the prediction of the variance and the anisotropy in a multivariate situation.

Note that one of the difficulties with the multivariate situation is that the number of equations increases with the number of fields and the dimension of the domain; e.g., for a 1D (2D) domain and two multivariate physical fields, there are two ensemble-averaged fields, two variance fields, and two (six) metric fields. Of course, this is not a problem when using a computer algebra system as done in SymPKF.

To illustrate the multivariate situation, only a very simple example is introduced. Inspired by chemical transport models encountered in air quality, we consider the transport over a 1D domain of two chemical species, whose concentrations are denoted by A(t,x) and B(t,x), advected by the wind u(x). For the sake of simplicity, the two species interact following periodic dynamics as defined by the coupled system

(32a) $\partial_t A + u\,\partial_x A = B$,
(32b) $\partial_t B + u\,\partial_x B = -A$.
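A minimal SymPy sketch of how this coupled system could be declared (illustrative; in SymPKF the time coordinate would be sympkf.t, and the list of equations would then be preprocessed as described in Sect. 4.1):

import sympy as sp

t, x = sp.symbols('t x')
u = sp.Function('u')(x)          # stationary wind
A = sp.Function('A')(t, x)
B = sp.Function('B')(t, x)

# Coupled advection-reaction system of Eq. (32), as a list of equations
system = [
    sp.Eq(sp.Derivative(A, t), -u * sp.Derivative(A, x) + B),
    sp.Eq(sp.Derivative(B, t), -u * sp.Derivative(B, x) - A),
]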

Thanks to the splitting strategy, the PKF dynamics due to the advection have already been detailed in the previous section (see Sect. 4.5), so we can focus on the chemical part of the dynamics, which is given by the processes on the right-hand side of Eq. (32). The PKF of the chemical part is computed thanks to SymPKF and shown in Fig. 10. This time, and as expected, multivariate statistics appear in the dynamics. Here, the dynamics of the cross-covariance $V_{AB}=\mathbb{E}[e_A e_B]$ are given by the fifth equation. The coupling brings up unknown terms, e.g., the term $\mathbb{E}[\partial_x\varepsilon_A\,\partial_x\varepsilon_B]$ in the sixth equation of the output shown in Fig. 10. Note that, by taking into account the multivariate situation with the dynamics of the cross-covariance, the multivariate PKF hybridizes the continuous nature of the multivariate fields with the matrix form in Eq. (8a), which corresponds here to the dynamics of the variances $(V_A, V_B)$ and the cross-covariance $V_{AB}$.

To go further, some research is still needed to explore the dynamics and the modeling of the multivariate cross-covariances. A possible direction is to take advantage of the multivariate covariance model based on the balance operator as often introduced in variational data assimilation (Derber and Bouttier1999; Ricci et al.2005). Note that such multivariate covariance models have recently been considered for the design of the multivariate PKF analysis step (Pannekoucke2021b). Another way is to consider a data-driven strategy to learn the physics of the unknown terms from a training based on ensembles of forecasts (Pannekoucke and Fablet2020).

To conclude, this example shows the potential of SymPKF to tackle the multivariate situation. Moreover, the example also shows that SymPKF is able to perform the PKF computation for a system of partial differential equations. However, all the equations should be prognostic; SymPKF is not able to handle diagnostic equations.

5 Conclusions

This contribution introduced the package SymPKF that can be used to conduct the research on the parametric Kalman filter prediction step for covariance models parameterized by the variance and the anisotropy (VLATcov models). SymPKF provides an end-to-end framework: from the equations of dynamics to the development of a numerical code.

The package has been first introduced by considering the nonlinear diffusive advection dynamics in the Burgers equation. In particular, this example shows the ability of SymPKF to handle abstract terms, e.g., the unclosed terms formulated with the expectation operator. The expectation operator implemented in SymPKF is a key tool for the computation of the PKF dynamics. Moreover, we showed how to handle a closure and how to automatically render numerical codes.

For univariate situations, SymPKF applies in a 1D domain as well as in 2D and 3D domains. This has been shown by considering the computation of the PKF dynamics for the linear advection equation on a 2D domain.

A preliminary illustration with multivariate dynamics showed the potential of SymPKF to handle the dynamics of multivariate covariance. But this point has to be further investigated, and this constitutes the main perspective of development. Moreover, to perform a multivariate assimilation cycle with the PKF, the multivariate formulation of the PKF analysis state is needed. A first investigation of the multivariate PKF assimilation has been proposed by Pannekoucke (2021b).

In its present implementation, SymPKF is limited to computation with prognostic equations. It is not possible to consider dynamics based on diagnostic equations, while these are often encountered in atmospheric fluid dynamics, e.g., the geostrophic balance. This constitutes another topic of research development for the PKF, facilitated by the use of symbolic exploration.

Note that the expectation operator as introduced here can be used to compute Reynolds equations encountered in turbulence. This opens new perspectives for the use of SymPKF for other applications that could be interesting, especially for automatic code generation.

Appendix A: Splitting for the computation of the parametric dynamics

In this section we show that a splitting strategy can be used for the design of the parametric dynamics. For this, it is enough to show that, for dynamics written as

(A1) $\partial_t X = f_1(X) + f_2(X)$,

the dynamics of the error, the variance, the metric, and the aspect tensor can all be written as a sum of trends, each depending on a single process, $f_1$ or $f_2$. We show this starting from the dynamics of the error.

Due to the linearity of the derivative operator, the TL dynamics resulting from Eq. (A1) are written as

(A2) $\partial_t e = f_1'(e) + f_2'(e)$,

where $f_1'$ and $f_2'$ denote the differentials of the two functions; hence the error trend can be written as the sum of two trends $\partial_t e|_1 = f_1'(e)$ and $\partial_t e|_2 = f_2'(e)$, depending exclusively on $f_1$ and $f_2$, respectively. For the variance dynamics, $\partial_t V = 2\,\mathbb{E}[e\,\partial_t e]$, substitution of Eq. (A2) leads to

(A3) $\partial_t V = \partial_t V|_1 + \partial_t V|_2$,

where $\partial_t V|_1 = 2\,\mathbb{E}[e\,f_1'(e)]$ and $\partial_t V|_2 = 2\,\mathbb{E}[e\,f_2'(e)]$ depend exclusively on $f_1$ and $f_2$, respectively. Then the standard deviation dynamics, obtained by differentiating $\sigma^2 = V$ as $2\sigma\,\partial_t\sigma = \partial_t V$,

(A4) $\partial_t\sigma = \frac{1}{2\sigma}\,\partial_t V|_1 + \frac{1}{2\sigma}\,\partial_t V|_2$,

read as the sum of two trends $\partial_t\sigma|_1 = \frac{1}{2\sigma}\,\partial_t V|_1$ and $\partial_t\sigma|_2 = \frac{1}{2\sigma}\,\partial_t V|_2$, depending exclusively on $f_1$ and $f_2$, respectively. It follows that the dynamics of the normalized error $\varepsilon = \frac{1}{\sigma}e$, deduced from the time derivative of $e = \sigma\varepsilon$, i.e., $\partial_t e = \varepsilon\,\partial_t\sigma + \sigma\,\partial_t\varepsilon$, read as

(A5) $\partial_t\varepsilon = \frac{1}{\sigma}\left[f_1'(e) - \frac{\varepsilon}{2\sigma}\,\partial_t V|_1\right] + \frac{1}{\sigma}\left[f_2'(e) - \frac{\varepsilon}{2\sigma}\,\partial_t V|_2\right]$

and also expand as the sum of two trends $\partial_t\varepsilon|_1 = \frac{1}{\sigma}\left[f_1'(e) - \frac{\varepsilon}{2\sigma}\,\partial_t V|_1\right]$ and $\partial_t\varepsilon|_2 = \frac{1}{\sigma}\left[f_2'(e) - \frac{\varepsilon}{2\sigma}\,\partial_t V|_2\right]$, again depending exclusively on $f_1$ and $f_2$, respectively. For the metric terms $g_{ij} = \mathbb{E}[\partial_i\varepsilon\,\partial_j\varepsilon]$, we deduce that the dynamics $\partial_t g_{ij} = \mathbb{E}[\partial_i(\partial_t\varepsilon)\,\partial_j\varepsilon] + \mathbb{E}[\partial_i\varepsilon\,\partial_j(\partial_t\varepsilon)]$ expand as

(A6) $\partial_t g_{ij} = \partial_t g_{ij}|_1 + \partial_t g_{ij}|_2$,

with $\partial_t g_{ij}|_1 = \mathbb{E}[\partial_i(\partial_t\varepsilon|_1)\,\partial_j\varepsilon] + \mathbb{E}[\partial_i\varepsilon\,\partial_j(\partial_t\varepsilon|_1)]$ and $\partial_t g_{ij}|_2 = \mathbb{E}[\partial_i(\partial_t\varepsilon|_2)\,\partial_j\varepsilon] + \mathbb{E}[\partial_i\varepsilon\,\partial_j(\partial_t\varepsilon|_2)]$, where each partial trend depends exclusively on $f_1$ or $f_2$, respectively. Finally, the dynamics of the aspect tensor $\mathbf{s}$ are deduced from Eq. (17), which expands as

(A7) $\partial_t\mathbf{s} = \partial_t\mathbf{s}|_1 + \partial_t\mathbf{s}|_2$,

where $\partial_t\mathbf{s}|_1 = -\mathbf{s}\,(\partial_t\mathbf{g}|_1)\,\mathbf{s}$ and $\partial_t\mathbf{s}|_2 = -\mathbf{s}\,(\partial_t\mathbf{g}|_2)\,\mathbf{s}$ only depend on $f_1$ and $f_2$, respectively.

To conclude, the computation of the parametric dynamics for Eq. (A1) can be performed from the parametric dynamics of $\partial_t X = f_1(X)$ and $\partial_t X = f_2(X)$ calculated separately, then merged to obtain the dynamics of the variance in Eq. (A3), of the metric in Eq. (A6), and of the aspect tensor in Eq. (A7).
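As a simple illustration of this splitting (an example added here, not taken from the main text), consider $f_1(X) = -c\,\partial_x X$ (advection at constant speed $c$) and $f_2(X) = -\lambda X$ (linear damping). Both processes are linear, so $f_1' = f_1$ and $f_2' = f_2$, and the two partial variance trends computed separately are

$\partial_t V|_1 = 2\,\mathbb{E}[e\,(-c\,\partial_x e)] = -c\,\partial_x V \quad\text{and}\quad \partial_t V|_2 = 2\,\mathbb{E}[e\,(-\lambda e)] = -2\lambda V,$

whose sum, $\partial_t V = -c\,\partial_x V - 2\lambda V$, is the variance trend of the full dynamics $\partial_t X = -c\,\partial_x X - \lambda X$.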

Appendix B: Computation of the terms $\mathbb{E}[\partial^{\alpha}\varepsilon\,\partial^{\beta}\varepsilon]$

In this section we prove the following property.

  • Property 1. Any term $\mathbb{E}[\partial^{\alpha}\varepsilon\,\partial^{\beta}\varepsilon]$ with $|\alpha|\geq 1$ and $|\beta|\geq 1$ can be related to correlation expansion terms $\mathbb{E}[\varepsilon\,\partial^{\gamma}\varepsilon]$, where $|\gamma|<|\alpha|+|\beta|$, and to the term $\mathbb{E}[\varepsilon\,\partial^{\alpha+\beta}\varepsilon]$.

    • Proof. The derivative with respect to a zero $\alpha_i$ is the identity operator. Note that multi-indices form a semi-group under addition, since for two multi-indices $\alpha$ and $\beta$ we can form the multi-index $\alpha+\beta=(\alpha_i+\beta_i)_{i\in[1,n]}$.

      Property 1 can now be proven by the following recursive process, assuming that the property holds for all terms whose degree is strictly lower than $|\alpha|+|\beta|$.

      Without loss of generality we assume $\alpha_i>0$ and denote $\delta_i=(\delta_{ij})_{j\in[1,n]}$, where $\delta_{ij}$ is the Kronecker symbol ($\delta_{ii}=1$, $\delta_{ij}=0$ for $j\neq i$). From the formula

      (B1) $\partial_{x_i}\left(\partial^{\alpha-\delta_i}\varepsilon\,\partial^{\beta}\varepsilon\right) = \partial^{\alpha}\varepsilon\,\partial^{\beta}\varepsilon + \partial^{\alpha-\delta_i}\varepsilon\,\partial^{\beta+\delta_i}\varepsilon$

      and from the commutativity of the expectation operator with the partial derivatives with respect to the coordinates, it results that

      (B2) $\mathbb{E}\left[\partial^{\alpha}\varepsilon\,\partial^{\beta}\varepsilon\right] = \partial_{x_i}\mathbb{E}\left[\partial^{\alpha-\delta_i}\varepsilon\,\partial^{\beta}\varepsilon\right] - \mathbb{E}\left[\partial^{\alpha-\delta_i}\varepsilon\,\partial^{\beta+\delta_i}\varepsilon\right]$.

      Considering the terms on the right-hand side of Eq. (B2): on the one hand, the degree of the first term decreases to $|\alpha|+|\beta|-1$, so that, from the recurrence assumption, $\mathbb{E}[\partial^{\alpha-\delta_i}\varepsilon\,\partial^{\beta}\varepsilon]$ can be expanded as terms of the form $\mathbb{E}[\varepsilon\,\partial^{\gamma}\varepsilon]$. On the other hand, the degree of the second term remains $|\alpha|+|\beta|$, but with a shift of the derivative order. This shift can be repeated following the same process, leading after iterations to the term $\mathbb{E}[\varepsilon\,\partial^{\alpha+\beta}\varepsilon]$, which completes the proof.
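As a concrete instance of Property 1 with $\alpha=\beta=(1)$ in one dimension, Eq. (B2) together with $\mathbb{E}[\varepsilon\,\partial_x\varepsilon] = \frac{1}{2}\partial_x\mathbb{E}[\varepsilon^2] = 0$ (since $\mathbb{E}[\varepsilon^2]=1$) gives $\mathbb{E}[(\partial_x\varepsilon)^2] = -\mathbb{E}[\varepsilon\,\partial_x^2\varepsilon]$. The short SymPy check below verifies this identity for a homogeneous Gaussian correlation, an illustrative assumption introduced here rather than a SymPKF computation: both sides evaluate to $1/L^2$.

import sympy as sp

x, y, L = sp.symbols('x y L', positive=True)
# Homogeneous Gaussian correlation rho(x, y) = E[eps(x) eps(y)], with rho(x, x) = 1
rho = sp.exp(-(x - y)**2 / (2 * L**2))

# E[dx eps(x) dx eps(x)]: cross derivative of rho, evaluated on the diagonal y = x
lhs = sp.diff(rho, x, y).subs(y, x)
# -E[eps(x) dxx eps(x)]: second derivative in the first argument, on the diagonal
rhs = -sp.diff(rho, x, 2).subs(y, x)

print(sp.simplify(lhs), sp.simplify(rhs))   # both equal 1/L**2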

Code and data availability

The SymPKF package is free and open-source. It is distributed under the CeCILL-B free software license. The source code is provided through a GitHub repository at https://github.com/opannekoucke/sympkf (last access: 22 March 2021). A snapshot of SymPKF is available at https://doi.org/10.5281/zenodo.4608514 (Pannekoucke2021c). The data used for the simulations presented here are generated at runtime when using the Jupyter notebooks.

Author contributions

OP introduced the symbolic computation of the PKF dynamics. OP and PA designed the end-to-end framework for the PKF dynamics, from the equations of the dynamics to the numerical simulation through automatic code generation. OP developed the code.

Competing interests

The authors declare that they have no conflict of interest.

Disclaimer

Publisher's note: Copernicus Publications remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Acknowledgements

We would like to thank Sylwester Arabas and the two anonymous referees for their fruitful comments, which have contributed to improving the paper. The UML class diagram was generated with UMLet (Auer et al.2003).

Financial support

This research has been supported by the French national program LEFE/INSU (Étude du filtre de KAlman PAramétrique, KAPA).

Review statement

This paper was edited by Sylwester Arabas and reviewed by two anonymous referees.

References

Auer, M., Tschurtschenthaler, T., and Biffl, S.: A Flyweight UML Modelling Tool for Software Development in Heterogeneous Environments, in: Proceedings of the 29th Conference on EUROMICRO, EUROMICRO '03, pp. 267–272​​​​​​​, IEEE Computer Society, Washington, DC, USA, 1–6 September 2003, https://doi.org/10.1109/EURMIC.2003.1231600​​​​​​​, 2003. a

Berre, L.: Estimation of Synoptic and Mesoscale Forecast Error Covariances in a Limited-Area Model, Mon. Weather Rev., 128, 644–667, 2000. a

Bird, R. B. and Wiest, J. M.: Constitutive Equations for Polymeric Liquids, Annu. Rev. Fluid Mech., 27, 169–193, https://doi.org/10.1146/annurev.fl.27.010195.001125, 1995. a

Cohn, S.: Dynamics of Short-Term Univariate Forecast Error Covariances, Mon. Weather Rev., 121, 3123–3149, https://doi.org/10.1175/1520-0493(1993)121<3123:DOSTUF>2.0.CO;2, 1993. a, b

Courtier, P., Andersson, E., Heckley, W., Pailleux, J., Vasiljević, D., Hamrud, M., Hollingsworth, A., Rabier, F., and Fisher, M.: The ECMWF implementation of three-dimensional variational assimilation (3D-Var). I: Formulation, Q. J. Roy. Meteor. Soc., 124, 1783–1807, 1998. a

Derber, J. and Bouttier, F.: A reformulation of the background error covariance in the ECMWF global data assimilation system, Tellus A, 51, 195–221, https://doi.org/10.3402/tellusa.v51i2.12316, 1999. a

Evensen, G.: Data Assimilation: The Ensemble Kalman Filter, Springer-Verlag Berlin Heidelberg, https://doi.org/10.1007/978-3-642-03711-5, 2009. a, b

Fisher, M.: Development of a simplified Kalman filter, Tech. Rep., 260, ECMWF, https://doi.org/10.21957/vz40cqca4, 1998. a

Fisher, M.: Generalized frames on the sphere, with application to background error covariance modelling, in: Proc. ECMWF Seminar on ”Recent developments in numerical methods for atmospheric and ocean modelling”, edited by ECMWF, pp. 87–102, Reading, UK, 2004. a

Hameduddin, I., Meneveau, C., Zaki, T. A., and Gayme, D. F.: Geometric decomposition of the conformation tensor in viscoelastic turbulence, J. Fluid Mech., 842, 395–427, https://doi.org/10.1017/jfm.2018.118, 2018. a

Jazwinski, A.: Stochastic Processes and Filtering Theory, Academic Press, New York, 1970. a

Kalman, R. E.: A New Approach to Linear Filtering and Prediction Problems, Journal of Basic Engineering​​​​​​​, 82, 35–45, https://doi.org/10.1115/1.3662552, 1960. a

Kassam, A.-K. and Trefethen, L.: Fourth-order time-stepping for stiff PDEs, SIAM J. Sci. Comput., 26, 1214–1233, 2005. a

Lesieur, M.: Turbulence in Fluids, Springer, p. 558, ISBN 978-1-4020-6434-0, 2007. a

Louboutin, M., Lange, M., Luporini, F., Kukreja, N., Witte, P. A., Herrmann, F. J., Velesko, P., and Gorman, G. J.: Devito (v3.1.0): an embedded domain-specific language for finite differences and geophysical exploration, Geosci. Model Dev., 12, 1165–1187, https://doi.org/10.5194/gmd-12-1165-2019, 2019. a

Meurer, A., Smith, C. P., Paprocki, M., Čertík, O., Kirpichev, S. B., Rocklin, M., Kumar, A., Ivanov, S., Moore, J. K., Singh, S., Rathnayake, T., Vig, S., Granger, B. E., Muller, R. P., Bonazzi, F., Gupta, H., Vats, S., Johansson, F., Pedregosa, F., Curry, M. J., Terrel, A. R., Roučka, Š., Saboo, A., Fernando, I., Kulal, S., Cimrman, R., and Scopatz, A.: SymPy: symbolic computing in Python, PeerJ Computer Science, 3, e103, https://doi.org/10.7717/peerj-cs.103, 2017. a

Mirouze, I. and Weaver, A. T.: Representation of correlation functions in variational assimilation using an implicit diffusion operator, Q. J. Roy. Meteor. Soc., 136, 1421–1443, https://doi.org/10.1002/qj.643, 2010. a

Øksendal, B.: Stochastic Differential Equations, Springer Berlin Heidelberg, https://doi.org/10.1007/978-3-642-14394-6, 2003. a

Paciorek, C. and Schervish, M.: Nonstationary Covariance Functions for Gaussian Process Regression, Advances Neural Information Processing Systems, 16, 273–280, https://proceedings.neurips.cc/paper/2003/file/326a8c055c0d04f5b06544665d8bb3ea-Paper.pdf (last access: 22 September 2021)​​​​​​​, 2004. a

Pannekoucke, O.: CAC-PKF-M (v0.1): Computer-aided calculation of PKF dynamics with Maxima, Zenodo [code], https://doi.org/10.5281/ZENODO.4708316, 2021a. a

Pannekoucke, O.: An anisotropic formulation of the parametric Kalman filter assimilation, Tellus A, 73, 1–27​​​​​​​, https://doi.org/10.1080/16000870.2021.1926660, 2021b. a, b, c, d, e, f

Pannekoucke, O.: SymPKF: a symbolic and computational toolbox for the design of parametric Kalman filter dynamics, Zenodo [code], https://doi.org/10.5281/zenodo.4608514, 2021c. a, b

Pannekoucke, O. and Fablet, R.: PDE-NetGen 1.0: from symbolic partial differential equation (PDE) representations of physical processes to trainable neural network representations, Geosci. Model Dev., 13, 3373–3382, https://doi.org/10.5194/gmd-13-3373-2020, 2020. a, b, c, d

Pannekoucke, O. and Massart, S.: Estimation of the local diffusion tensor and normalization for heterogeneous correlation modelling using a diffusion equation, Q. J. Roy. Meteor. Soc., 134, 1425–1438, https://doi.org/10.1002/qj.288, 2008. a

Pannekoucke, O., Berre, L., and Desroziers, G.: Filtering properties of wavelets for local background-error correlations, Q. J. Roy. Meteor. Soc., 133, 363–379, 2007. a

Pannekoucke, O., Ricci, S., Barthelemy, S., Ménard, R., and Thual, O.: Parametric Kalman Filter for chemical transport model, Tellus A, 68, 31547, https://doi.org/10.3402/tellusa.v68.31547, 2016. a, b, c, d, e

Pannekoucke, O., Bocquet, M., and Ménard, R.: Parametric covariance dynamics for the nonlinear diffusive Burgers equation, Nonlin. Processes Geophys., 25, 481–495, https://doi.org/10.5194/npg-25-481-2018, 2018a. a, b, c, d, e, f, g, h, i, j

Pannekoucke, O., Ricci, S., Barthelemy, S., Ménard, R., and Thual, O.: Parametric Kalman filter for chemical transport models – Corrigendum, Tellus A, 70, 1–2, https://doi.org/10.1080/16000870.2018.1472954, 2018b. a

Purser, R., Wu, W.-S., Parrish, D., and Roberts, N.: Numerical aspects of the application of recursive filters to variational statistical analysis. Part II: Spatially inhomogeneous and anisotropic general covariances, Mon. Weather Rev., 131, 1536–1548, https://doi.org/10.1175//2543.1, 2003. a

Ricci, S., Weaver, A. T., Vialard, J., and Rogel, P.: Incorporating State-Dependent Temperature–Salinity Constraints in the Background Error Covariance of Variational Ocean Data Assimilation, Mon. Weather Rev., 133, 317–338, https://doi.org/10.1175/mwr2872.1, 2005. a

Weaver, A. and Courtier, P.: Correlation modelling on the sphere using a generalized diffusion equation, Q. J. Roy. Meteor. Soc., 127, 1815–1846, https://doi.org/10.1002/qj.49712757518, 2001. a

1 https://github.com/opannekoucke/sympkf, last access: 22 September 2021.

2 https://maxima.sourceforge.io/, last access: 22 September 2021.

3 https://jinja.palletsprojects.com/en/2.11.x/, last access: 22 September 2021.

Short summary
This work contributes to research on uncertainty prediction, which is important both for weather forecasting and for estimating the risk attached to a prediction. The problem is that uncertainty prediction is numerically very expensive. An alternative has been proposed wherein the uncertainty is represented in a simplified form, so that only the dynamics of certain parameters are required. This tool allows for the determination of the symbolic equations of these parameter dynamics and for their numerical computation.