PDE-NetGen 1.0: from symbolic PDE representations of physical processes to trainable neural network representations

Bridging physics and deep learning is a topical challenge. While deep learning frameworks open avenues in physical science, the design of physically-consistent deep neural network architectures is an open issue. In the spirit of physics-informed NNs, the PDE-NetGen package provides new means to automatically translate physical equations, given as PDEs, into neural network architectures. PDE-NetGen combines symbolic calculus and a neural network generator. The latter exploits NN-based implementations of PDE solvers using Keras. With some knowledge of a problem, PDE-NetGen is a plug-and-play tool to generate physics-informed NN architectures. They provide computationally-efficient yet compact representations to address a variety of issues, including among others adjoint derivation, model calibration, forecasting, data assimilation as well as uncertainty quantification. As an illustration, the workflow is first presented for the 2D diffusion equation, then applied to the data-driven and physics-informed identification of uncertainty dynamics for the Burgers equation.


Introduction
Machine learning and deep learning have received fast-growing interest in geoscience to address open issues, including for instance sub-grid parameterization. A variety of learning architectures have shown their ability to encode the physics of a problem, especially deep learning schemes, which typically involve millions of unknown parameters, while the theoretical reason for this success remains a key issue (Mallat 2016). A recent research trend has involved the design of lighter neural network (NN) architectures, like ResNets with shared weights (He et al. 2016), while keeping similar learning performance. Interestingly, a ResNet can be understood as an implementation of a numerical time scheme solving an ODE/PDE (Ruthotto and Haber 2019; Rousseau et al. 2019). Applications to learning PDEs from data have also been introduced, e.g. PDE-Net (Long et al. 2017, 2018). These previous works emphasize the connection between the underlying physics and NN architectures.
Designing or learning a NN representation for a given physical process remains a difficult issue. If the learning fails, it may be unclear how to improve the architecture of the neural network. Besides, it seems irrelevant to run computationally-expensive numerical experiments on large-scale datasets to learn well-represented processes. Advection in fluid dynamics is a typical example of such a process, which does not require a complex non-linear data-driven representation. Overall, one would expect to accelerate the learning process and make it more robust by combining, within the same NN architecture, the known physical equations with the unknown physics.
From the geoscience point of view, a key question is to bridge physical representations and neural network ones so that we can decompose both known and unknown equations according to the elementary computational units made available by state-of-the-art frameworks (e.g., Keras, TensorFlow). In other words, we aim to translate physical equations into the computational vocabulary available to neural networks. PDE-NetGen addresses this issue for PDE representations, for which we regard convolutional layers as analogous to the stencil approach that results from a finite-difference implementation of PDEs. PDE-NetGen relies on two main components: (i) a computer algebra system, here SymPy (Meurer et al. 2017), used to handle the physical equations and discretize the associated spatial derivatives, and (ii) a Keras network generator which automatically translates the PDEs into neural network layers from these discretized forms. Note that code generators based on symbolic computation have received renewed interest as a means to facilitate the design of numerical experiments, see e.g. Louboutin et al. (2019). As an illustration, we consider in this paper the application of PDE-NetGen to the identification of closure terms.
The paper is organized as follows. In the next section, we detail the proposed neural network generator, with an illustration of the workflow on a diffusion equation. In Section 3, we present the numerical integration of the neural network implementation of the diffusion equation, then an application to the data-driven identification of the closure of the Burgers equation. Conclusions and perspectives are given in Section 4.

Neural Network Generation from symbolic PDEs
Introducing physics in the design of a neural network topology is challenging since physical processes can rely on very different partial differential equations, e.g. eigenvalue problems for waves or constrained evolution equations in fluid dynamics under the iso-volumetric assumption. The neural network code generator presented here focuses on physical processes given as evolution equations, which write as

∂_t u = F(u, ∂^α u),    (1)

where u denotes either a scalar field or multivariate fields, ∂^α u denotes partial derivatives with respect to the spatial coordinates, and F is the generator of the dynamics. At first glance, this situation excludes diagnostic equations as encountered in geophysics, like balance equations: each equation has to be the evolution equation of a prognostic variable. However, PDE-NetGen incorporates a way to handle diagnostic equations, as will be shown in the example detailed in Section 3.2. We first explain how the derivatives are embedded into NN layers, then we detail the workflow of PDE-NetGen on a simple example.

Introducing physical knowledge in the design of a NN topology
Since the NN generator is designed for evolution equations, the core of the generator is the automatic translation of partial derivatives with respect to spatial coordinates into layers. The correspondence between the finite-difference discretization and the convolutional layer gives a practical way to translate a PDE into a NN (Cai et al. 2012; Dong et al. 2017; Long et al. 2017). For instance, the finite difference of the second-order partial derivative ∂²_x u, for u(t, x) a one-dimensional function, is given by

∂²_x u ≈ (u(t, x + δx) − 2 u(t, x) + u(t, x − δx)) / δx²,    (2)

where δx stands for the discretization space step.
This yields the stencil kernel k = [1/δx², −2/δx², 1/δx²], which can be used in a 1D convolution layer with a linear activation function and without bias. A similar procedure applies to 2D and 3D geometries. PDE-NetGen relies on the computer algebra system sympy (Meurer et al. 2017) to compute the stencils as well as to handle symbolic expressions. Alternatives using automatic differentiation can be considered, as introduced by Raissi (2018), who used TensorFlow for the computation of derivatives.
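As an illustration, the stencil above can be hard-coded as a non-trainable Keras Conv1D layer. The following is a minimal sketch of the idea, not the PDE-NetGen implementation itself; the grid spacing dx is chosen arbitrarily, and zero padding is used here whereas PDE-NetGen handles periodic boundaries separately.

import numpy as np
from tensorflow import keras

dx = 0.01                                     # hypothetical grid spacing
stencil = np.array([1.0, -2.0, 1.0]) / dx**2  # kernel of the d^2/dx^2 stencil

conv = keras.layers.Conv1D(filters=1, kernel_size=3, padding="same",
                           activation="linear", use_bias=False)
conv.build(input_shape=(None, None, 1))       # channels-last: (batch, x, channel)
conv.set_weights([stencil.reshape(3, 1, 1).astype("float32")])
conv.trainable = False                        # the derivative is known, not learned

# u sampled on a regular grid, shaped (batch, nx, 1)
u = np.sin(2 * np.pi * np.linspace(0.0, 1.0, 100, endpoint=False)).reshape(1, -1, 1)
d2u = conv(u.astype("float32"))               # approximates d^2u/dx^2

In PDE-NetGen, the kernel weights are produced symbolically by sympy rather than hand-written as above.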
Then, the time integration can be implemented either by a solver or by a ResNet architecture of a given time scheme, e.g. an Euler scheme or a fourth-order Runge-Kutta (RK4) scheme (Fablet et al. 2017). These two components, namely the translation of partial derivatives into NN layers and a ResNet implementation of the time integration, are the building blocks of the proposed NN topology generator, as exemplified in the next section.
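For instance, a single explicit-Euler step wrapped as a residual block might look like the following sketch, assuming trend is a Keras model that maps the discretized state to its time derivative (the function name and shapes are illustrative, not taken from the package).

from tensorflow import keras

def euler_step(trend, dt, state_shape):
    """Return a Keras model computing u_{n+1} = u_n + dt * F(u_n) (ResNet-like block)."""
    u_n = keras.Input(shape=state_shape)
    increment = keras.layers.Lambda(lambda f: dt * f)(trend(u_n))
    u_np1 = keras.layers.Add()([u_n, increment])
    return keras.Model(inputs=u_n, outputs=u_np1)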

Workflow of the NN representation generator
We now present the workflow of the NN generator for a given symbolic PDE, using the heterogeneous 2D diffusion equation as a testbed:

∂_t u = ∇ · (κ(x, y) ∇u),    (3)

where κ(x, y) is a field of 2 × 2 diffusion tensors and (x, y) are the spatial coordinates; the python implementation is detailed in Fig. 1. Starting from a list of coupled evolution equations given as a PDE, a first preprocessing of the system determines the prognostic functions, the constant functions, the exogenous functions and the constants. The exogenous functions are the functions which depend on time and space, but whose evolution is not described by the system of evolution equations. For instance, a forcing term in a dynamics is an exogenous function.
For the diffusion equation Eq. (3), the dynamics is represented in sympy using the Function, Symbol and Derivative classes. The dynamics is defined as an equation using the Eq class of PDE-NetGen, which inherits from the Eq class of sympy with additional facilities (see the implementation in Fig. 1 for details).
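For reference, the symbolic form of Eq. (3) can be written with plain sympy as in the following sketch; PDE-NetGen uses its own Eq class in place of sympy's, as shown in Fig. 1.

from sympy import symbols, Function, Derivative, Eq

t, x, y = symbols('t x y')
u = Function('u')(t, x, y)                 # prognostic field
k11 = Function('kappa_11')(x, y)           # components of the symmetric
k12 = Function('kappa_12')(x, y)           # diffusion-tensor field kappa(x, y)
k22 = Function('kappa_22')(x, y)

# d_t u = d_x(k11 d_x u + k12 d_y u) + d_y(k12 d_x u + k22 d_y u)
diffusion = Eq(Derivative(u, t),
               Derivative(k11 * Derivative(u, x) + k12 * Derivative(u, y), x)
               + Derivative(k12 * Derivative(u, x) + k22 * Derivative(u, y), y))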
The core of the NN generator is the NNModelBuilder class. This class first preprocesses the system of evolution equations and then translates the system into a python NN model.
The preprocessing of the diffusion equation Eq. (3) identifies a single prognostic function, u, and three constant functions, κ_11, κ_12 and κ_22. There is no exogenous function in this example. During the preprocessing, the coordinate system of each function is diagnosed so as to determine the dimension of the problem. For the diffusion equation Eq. (3), since the function u(t, x, y) depends on (x, y), the geometry is two-dimensional. In the current version of PDE-NetGen, only periodic boundaries are considered. The dedicated DerivativeFactory class performs the periodic extension of the domain, then the computation of the derivative using a CNN, and finally the crop of the extended domain back to the initial domain. Other boundary conditions could be implemented and might be investigated in future developments.
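The following sketch illustrates the principle of this periodic treatment (extend, convolve, crop) for a second-order x-derivative of a 2D field; it is an illustration of the idea rather than the actual DerivativeFactory code, and the helper name is hypothetical.

import numpy as np
import tensorflow as tf
from tensorflow import keras

def periodic_d2x(dx, halo=1):
    """Lambda layer computing d^2/dx^2 with periodic boundaries (channels-last 2D fields)."""
    k = np.zeros((3, 3, 1, 1), dtype="float32")
    k[:, 1, 0, 0] = np.array([1.0, -2.0, 1.0]) / dx**2   # stencil along the x axis
    kernel = tf.constant(k)

    def apply(u):                                        # u: (batch, nx, ny, 1)
        # periodic extension of the domain along both spatial axes
        u_ext = tf.concat([u[:, -halo:], u, u[:, :halo]], axis=1)
        u_ext = tf.concat([u_ext[:, :, -halo:], u_ext, u_ext[:, :, :halo]], axis=2)
        # "VALID" convolution crops the halo, returning to the initial domain
        return tf.nn.conv2d(u_ext, kernel, strides=1, padding="VALID")

    return keras.layers.Lambda(apply)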
All partial derivatives with respect to spatial coordinates are detected and then replaced by intermediate variables in the system of evolution equations. The resulting system is assumed to be algebraic, which means that it only contains additions, subtractions, multiplications and exponentiations (with real exponents at most). For each evolution equation, the abstract syntax tree is translated into a sequence of layers which can be automatically converted into NN layers in a given NN framework; the current version of PDE-NetGen targets Keras (Chollet 2018). An example of the implementation in Keras is shown in Fig. 2: a first part of the code computes all the derivatives using Conv layers of Keras, then Keras layers implement the algebraic equation which represents the trend ∂_t u of the diffusion equation Eq. (3).
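To illustrate the kind of code shown in Fig. 2, a hand-written sketch of the trend of Eq. (3) assembled from Keras layers could read as follows. The derivative layers here are compact tf.roll-based stand-ins for the convolutional derivative layers generated by PDE-NetGen, and the shapes correspond to the default 100 × 100 domain.

import tensorflow as tf
from tensorflow import keras

dx = dy = 1.0 / 100   # grid spacing of the default domain

# periodic central first derivatives (stand-ins for the generated Conv layers)
Dx = keras.layers.Lambda(lambda f: (tf.roll(f, -1, axis=1) - tf.roll(f, 1, axis=1)) / (2 * dx))
Dy = keras.layers.Lambda(lambda f: (tf.roll(f, -1, axis=2) - tf.roll(f, 1, axis=2)) / (2 * dy))

u   = keras.Input(shape=(100, 100, 1), name="u")
k11 = keras.Input(shape=(100, 100, 1), name="kappa_11")
k12 = keras.Input(shape=(100, 100, 1), name="kappa_12")
k22 = keras.Input(shape=(100, 100, 1), name="kappa_22")

du_dx, du_dy = Dx(u), Dy(u)
flux_x = keras.layers.Add()([keras.layers.Multiply()([k11, du_dx]),
                             keras.layers.Multiply()([k12, du_dy])])
flux_y = keras.layers.Add()([keras.layers.Multiply()([k12, du_dx]),
                             keras.layers.Multiply()([k22, du_dy])])
trend_u = keras.layers.Add()([Dx(flux_x), Dy(flux_y)])   # d_t u = div(kappa grad u)

trend = keras.Model(inputs=[u, k11, k12, k22], outputs=trend_u)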
At the end, a python code is rendered from templates using the jinja2 package. Templates are used to facilitate saving the code in python modules and modifying it by the experimenter. Runtime creation of the class could be considered, but this is not implemented in the current version of PDE-NetGen. For the diffusion equation Eq. (3), when run, the code rendered from the NNModelBuilder class creates the NNDiffusion2DHeterogeneous class. Following the class diagram in Fig. 3, the NNDiffusion2DHeterogeneous class inherits from a Model class which implements the time evolution of an evolution dynamics by incorporating a time scheme. Several time schemes are implemented, namely an explicit Euler scheme and second- and fourth-order Runge-Kutta schemes.

Applications of PDE-NetGen
Two applications are now considered. First, we validate the NN generator on a known physical problem: the diffusion equation Eq. (3) detailed in the previous section. Then, we tackle a situation where a part of the physics remains unknown, showing the benefit of merging the known physics into the learning of the unknown processes.

Application to the diffusion equation
In the python implementation of Fig. 1, diffusion_model is an instance of the NNDiffusion2DHeterogeneous class, which numerically solves the diffusion equation Eq. (3) over a 2D domain, defined by default as the periodic domain [0, 1) × [0, 1) discretized with 100 points along each direction, so that dx = dy = 1.0/100.
The time integration of the diffusion equation is shown in Fig. 4. For this numerical experiment, the heterogeneous field of diffusion tensors κ(x, y) is set as rotations of the diagonal tensor diag(l_x²/τ, l_y²/τ), defined from the length-scales l_x = 10 dx and l_y = 5 dy and the time-scale τ = 1.0, with rotation angles θ(x, y) = (π/3) cos(k_x x + k_y y) where (k_x, k_y) = 2π(2, 3). The time step of the simulation is dt = τ min(dx²/l_x², dy²/l_y²)/6. The numerical integration is computed using a fourth-order Runge-Kutta scheme. The initial condition of the simulation is a Dirac, shown in Fig. 4(a). In order to validate the solution obtained from the generated neural network, we compare the integration with a finite-difference discretization of Eq. (3), shown in Fig. 4(b). The heterogeneity of the diffusion tensors produces an anisotropic diffusion of the Dirac, which is perfectly reproduced by the integration of the generated neural network, shown in Fig. 4(c). At a quantitative level, the ℓ² distance between the two solutions is of order 10⁻⁵. This validates the ability of the NN generator PDE-NetGen to compute the dynamics of a given physical evolution equation.
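For reproducibility, the diffusion-tensor field of this experiment can be built as in the following numpy sketch; the construction follows the parameters stated above, but the helper itself is not part of PDE-NetGen.

import numpy as np

nx = ny = 100
dx = dy = 1.0 / nx
x, y = np.meshgrid(np.arange(nx) * dx, np.arange(ny) * dy, indexing="ij")

lx, ly, tau = 10 * dx, 5 * dy, 1.0
kx, ky = 2 * np.pi * 2, 2 * np.pi * 3
theta = (np.pi / 3) * np.cos(kx * x + ky * y)

cos, sin = np.cos(theta), np.sin(theta)
d1, d2 = lx**2 / tau, ly**2 / tau
# components of kappa = R(theta) diag(d1, d2) R(theta)^T
kappa_11 = d1 * cos**2 + d2 * sin**2
kappa_12 = (d1 - d2) * cos * sin
kappa_22 = d1 * sin**2 + d2 * cos**2

dt = tau * min(dx**2 / lx**2, dy**2 / ly**2) / 6   # stable time step of the experiment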
The next section illustrates the situation where only a part of the dynamics is known, while the remaining physics is learned from the data.

Application to the data-driven identification of stochastic representations
As an illustration of the PDE-NetGen package, we consider a problem encountered in uncertainty prediction: the parametric Kalman filter (PKF) (Pannekoucke et al. 2016, 2018). For a detailed presentation and discussion of uncertainty prediction issues in geophysical dynamics, we refer the reader to Le Maître and Knio (2010). Here, we briefly introduce the basic elements needed for the self-consistency of the example.
The idea of the PKF is to mimic the dynamics of the error-covariance matrices all along the analysis and forecast cycles of data assimilation in a Kalman setting (Kalman filter equations for the uncertainty). It relies on the approximation of the true covariance matrices by a parametric covariance model. When considering a covariance model based on a diffusion equation, the parameters are the variance V and the local diffusion tensor ν. Therefore, the dynamics of the error-covariance matrices along the data assimilation cycles is deduced from the dynamics of the variance and of the diffusion tensors. In place of the full covariance evolution, this dramatically reduces the dynamics to that of a few parameters.
For the non-linear advection-diffusion equation ∂_t u + u ∂_x u = κ ∂²_x u, known as the Burgers equation (Eq. (5)), the dynamics of the variance V_u and of the diffusion tensor ν_u = [ν_{u,xx}] (which is characterized by the single field ν_{u,xx}) writes as a system of PDEs, Eq. (6) (Pannekoucke et al. 2018), in which E denotes the expectation operator and ε_u the normalized error. For the sake of simplicity, in this system of PDEs, u denotes the expectation of the random field and not the random field itself as in Eq. (5).
In this system of PDEs, the term E[ε_u ∂⁴_x ε_u] cannot be determined from the known quantities u, V_u and ν_{u,xx}. This leads to a closure problem, i.e. determining the unknown term as a function of the known quantities. A naive assumption would be a zero closure (closure(t, x) = 0). However, while the tangent-linear evolution of the perturbations along the Burgers dynamics is stable, the resulting dynamics of the diffusion coefficient ν_{u,xx} would be unstable, since the coefficient of the second-order term −3κ ∂²_x ν_{u,xx} is negative. This stresses further the importance of the unknown term for successfully predicting the uncertainty.
Within a data-driven framework, one would typically explore a direct identification of the dynamics of the diffusion coefficient ν_{u,xx}. Here, we use PDE-NetGen to take full advantage of the known physics and to focus the data-driven identification on the unknown term E[ε_u ∂⁴_x ε_u] in the system of equations Eq. (6). This amounts to replacing the term E[ε_u ∂⁴_x ε_u] in Eq. (6) by an exogenous function closure(t, x), and then following the workflow detailed in Section 2.2.
The unknown closure function is represented by a neural network (a Keras model) which implements an expansion, Eq. (7), in terms of ν_{u,xx} and its spatial derivatives, where (a, b, c) are unknown coefficients and where the partial derivatives are computed from convolution layers, as described in Section 2. This expression is similar to a dictionary of possible terms, as in Rudy et al. (2017), and it is inspired by an arbitrary theoretically-designed closure for this problem, for which (a, b, c) = (1, 3/4, −2) (Pannekoucke et al. 2018). In the NN implementation of the exogenous function modeled as Eq. (7), each of the unknown coefficients (a, b, c) is implemented as a 1D convolutional layer with a linear activation function and without bias. Note that the estimated parameters (a, b, c) could differ from those of the theoretical closure: while the theoretical closure can give some clues for the design of the unknown term, it is not the (unknown) truth.
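A sketch of such a closure model is given below. The three candidate terms are written as dimensionally-homogeneous combinations of ν_{u,xx} and its derivatives; they are an assumed illustration of the dictionary entering Eq. (7), not the exact terms of that equation, while each trainable coefficient is a Conv1D(1, 1) layer with linear activation and no bias, as described above.

import tensorflow as tf
from tensorflow import keras

dx = 1.0 / 241   # grid spacing of the Burgers experiment

# periodic central differences used to build the candidate terms
d1 = keras.layers.Lambda(lambda f: (tf.roll(f, -1, axis=1) - tf.roll(f, 1, axis=1)) / (2 * dx))
d2 = keras.layers.Lambda(lambda f: (tf.roll(f, -1, axis=1) - 2 * f + tf.roll(f, 1, axis=1)) / dx**2)

nu = keras.Input(shape=(241, 1), name="nu_u_xx")

# candidate terms (assumed illustration): 1/nu^2, (d_x nu)^2/nu^3, (d^2_x nu)/nu^2
term_a = keras.layers.Lambda(lambda n: 1.0 / n**2)(nu)
term_b = keras.layers.Lambda(lambda z: z[0]**2 / z[1]**3)([d1(nu), nu])
term_c = keras.layers.Lambda(lambda z: z[0] / z[1]**2)([d2(nu), nu])

# each unknown coefficient (a, b, c) is a trainable Conv1D(1, 1) without bias
terms = [keras.layers.Conv1D(1, 1, activation="linear", use_bias=False, name=name)(term)
         for name, term in zip(("a", "b", "c"), (term_a, term_b, term_c))]

closure = keras.Model(inputs=nu, outputs=keras.layers.Add()(terms))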
For the numerical experiment, the Burgers equation is solved on a one-dimensional periodic domain of length 1, discretized with 241 points. The time step is dt = 0.002, and the dynamics is computed over 500 time steps so as to integrate from t = 0 to t = 1.0. The coefficient of the physical diffusion is set to κ = 0.0025. The numerical setting considered for the learning is the tangent-linear regime described in Pannekoucke et al. (2018), where the initial uncertainty is small and whose results are shown in their Fig. 4(a), Fig. 5(a) and Fig. 6(a).
To train the parameters (a, b, c) of Eq. (7), we build a training dataset from an ensemble prediction method where each member is a numerical solution of the Burgers equation. The numerical code for the Burgers equation derives from PDE-NetGen applied to the symbolic dynamics Eq. (5). Using this numerical code, we generate a training dataset composed of 400 ensemble simulations of 501 time steps, where each ensemble contains 400 members. For each ensemble forecast, we estimate the mean, the variance V_u and the diffusion tensor ν_u, as sketched below. Here, we focus on the development of the front, where we expect the unknown term to be of key importance, and keep for training purposes the last 100 time steps of each ensemble forecast. For the training only, the RK4 time scheme is implemented as the ResNet shown in Fig. 5, so as to provide an end-to-end NN implementation of the dynamics.
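The ensemble statistics can be estimated with a sketch like the one below. The diffusion estimator based on the gradient of the normalized errors is an assumption made here for illustration, following common practice in the PKF literature; it is not code extracted from PDE-NetGen.

import numpy as np

def ensemble_statistics(ensemble, dx):
    """ensemble: array of shape (members, nx) of Burgers forecasts on a periodic grid."""
    mean = ensemble.mean(axis=0)
    error = ensemble - mean
    variance = (error**2).mean(axis=0)
    eps = error / np.sqrt(variance)                      # normalized errors
    deps = (np.roll(eps, -1, axis=1) - np.roll(eps, 1, axis=1)) / (2 * dx)
    nu_xx = 1.0 / (2.0 * (deps**2).mean(axis=0))         # assumed local diffusion estimate
    return mean, variance, nu_xx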
The resulting dataset involves 40,000 samples. To train the learnable parameters (a, b, c), we minimize the one-step-ahead prediction loss for the diffusion tensor ν_u. We use the Adam optimizer (Kingma and Ba 2014) and a batch size of 32. Using an initial learning rate of 0.1, the training converges within 3 epochs with a geometrical decay of the learning rate by a factor of 1/10 after each epoch.
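A minimal sketch of this one-step-ahead training setup is given below; pkf_step stands for the end-to-end Keras model that advances (u, V_u, ν_{u,xx}) by one RK4 step, and the dataset arrays (u_t, V_t, nu_t, nu_tp1) are hypothetical names for the inputs and target built from the ensembles.

from tensorflow import keras

# one-step-ahead training of the closure coefficients (a, b, c)
pkf_step.compile(optimizer=keras.optimizers.Adam(learning_rate=0.1),
                 loss="mse")   # loss evaluated on the predicted nu_{u,xx}

decay = keras.callbacks.LearningRateScheduler(lambda epoch: 0.1 * 0.1**epoch)
pkf_step.fit([u_t, V_t, nu_t], nu_tp1,
             batch_size=32, epochs=3, callbacks=[decay])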

Figure 5:
Example of a Keras implementation of an RK4 time scheme: given a time step dt and a Keras model trend of the dynamics, the function make_time_scheme returns a Keras model implementing an RK4 integration step.
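A minimal sketch of such a make_time_scheme function is given below, assuming trend maps the state to its tendency with identical shape; the extra state_shape argument is an assumption of this sketch, not necessarily the signature used in Fig. 5.

from tensorflow import keras

def make_time_scheme(dt, trend, state_shape):
    """Return a Keras model performing one RK4 step of d_t u = trend(u)."""
    scale = lambda c: keras.layers.Lambda(lambda f: c * f)
    u = keras.Input(shape=state_shape)
    k1 = trend(u)
    k2 = trend(keras.layers.Add()([u, scale(dt / 2)(k1)]))
    k3 = trend(keras.layers.Add()([u, scale(dt / 2)(k2)]))
    k4 = trend(keras.layers.Add()([u, scale(dt)(k3)]))
    increment = keras.layers.Add()([k1, scale(2.0)(k2), scale(2.0)(k3), k4])
    u_next = keras.layers.Add()([u, scale(dt / 6)(increment)])
    return keras.Model(inputs=u, outputs=u_next)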
Over 10 training runs, the estimated coefficients are (a, b, c) = (0.93, 0.75, −1.80) ± (5.1 × 10⁻⁵, 3.6 × 10⁻⁴, 2.7 × 10⁻⁴). The length-scale deduced from the diffusion coefficient ν_{u,xx} has the physical dimension of a length. Both the large-ensemble estimation and the PKF prediction show the increase of the length-scale due to the physical diffusion, except in the vicinity of the front where an oscillation occurs, which is related to the inflexion point of the front. While the magnitude of the oscillation predicted by the PKF (b3) is slightly larger than the estimation from the large-ensemble reference (a3), the pattern is well predicted by the PKF. Besides, the parametric form of the PKF does not exhibit the local variability due to the finite size of the ensemble, which may be observed in panel (a3). Overall, these experiments support the relevance of the closure Eq. (7) learned from the data to capture the uncertainty associated with the Burgers dynamics.

Discussion on the choice of a closure
In the Burgers dynamics, a priori knowledge was introduced to propose a NN implementing the closure Eq. (7). In the general case, the choice of the terms to be introduced in the closure may be guided by known physical properties that need to be satisfied by the system. For example, conservation or symmetry properties that leave the system invariant can guide the proposal of possible terms. For the Burgers dynamics, ν_{u,xx} has the dimension of a length squared, [L²], while E[ε_u ∂⁴_x ε_u] has the dimension [L⁻⁴], which constrains the dimensionally-homogeneous combinations of ν_{u,xx} and its spatial derivatives that may enter the closure. When no priors are available, one may consider modeling the closure using state-of-the-art deep neural network architectures which have shown impressive prediction performance, e.g. CNNs or ResNets (Zagoruyko and Komodakis 2016; Raissi 2018).

Conclusion
We have introduced the neural network generator PDE-NetGen, which provides new means to bridge physical priors given as symbolic PDEs and learning-based NN frameworks. The package derives and implements a finite-difference version of a system of evolution equations, where the derivative operators are replaced by appropriate convolutional layers, including the boundary conditions. The package has been developed in python using the symbolic mathematics library sympy and the deep learning library keras.
We have illustrated the usefulness of PDE-NetGen through two applications: a neural-network implementation of a 2D heterogeneous diffusion equation and uncertainty prediction in the Burgers equation. The latter involves unknown closure terms, which are learned from data using the proposed neural-network framework. Both illustrations show the potential of this approach, which could be useful for improving training in complex applications by taking into account the physics of the problem. This work opens new avenues to make the most of existing physical knowledge and of recent advances in data-driven settings, and more particularly neural networks, for geophysical applications. This includes a wide range of applications, where such physically-consistent neural network frameworks could either reduce the computational cost (e.g., GPU implementations embedded in deep learning frameworks) or provide new numerical tools to derive key operators (e.g., adjoint operators using automatic differentiation). Besides, these neural network representations also offer new means to complement known physics with the data-driven calibration of unknown terms. This is regarded as key to advance the state of the art in the simulation, forecasting and reconstruction of geophysical dynamics through model-data-coupled frameworks.