DecTree v1.0 - Chemistry speedup in reactive transport simulations: purely data-driven and physics-based surrogates

The computational costs associated with coupled reactive transport simulations are mostly due to the chemical subsystem: replacing it with a pre-trained statistical surrogate is a promising strategy to achieve decisive speedups at the price of small accuracy losses, and thus to extend the scale of problems which can be handled. We introduce a hierarchical coupling scheme in which "full physics", equation-based geochemical simulations are partially replaced by surrogates. Errors on mass balance resulting from multivariate surrogate predictions effectively assess the accuracy of the multivariate regressions at runtime: inaccurate surrogate predictions are rejected and the more expensive equation-based simulations are run instead. Gradient boosting regressors such as xgboost, which do not require data standardization and are able to handle Tweedie distributions, proved to be suitable emulators. Finally, we devise a surrogate approach based on geochemical knowledge, which overcomes the issue of robustness when encountering previously unseen data, and which can serve as a basis for further development of hybrid physics-AI modelling.

In practice, the CPU time, the user interactions and the overall skills required for optimally training complex regressors should not be underestimated and may prove overwhelming for a geoscientist. The whole process of hyperparameter tuning, required by most advanced machine learning algorithms, while being an active area of development and research, is still hardly fully automatable.
This work showcases and analyses two different approaches for surrogate geochemical modelling in reactive transport simulations. The first is completely data-driven, disregarding any possible knowledge about the ongoing process. In the second approach, we derive a surrogate which exploits the actual equations solved by the full physics representation of chemistry. Both are applied and evaluated on the same 1D benchmark implemented in a simple reactive transport framework. Our implementation of coupled reactive transport includes a hierarchical submodel coupling strategy, which is advantageous when different accuracy levels for the predictions of one sub-process are available.

The versioned R code used for DecTree v1.0 model setup and evaluation is referenced in the section "Code availability". It is based on version v0.0.4 of the RedModRphree package for the R environment (R Core Team, 2020), which is also referenced in the section "Code availability", and makes use of the geochemical simulator PHREEQC (Appelo et al., 2013). RedModRphree supersedes the in-house developed R-PHREEQC interface Rphree (https://rphree.r-forge.r-project.org/; De Lucia and Kühn, 2013).

The benchmarks and the performance measurements refer to computations run on a recent desktop workstation equipped with an Intel Xeon W-2133 CPU clocked at 3.60 GHz and DDR4 RAM at 2.666 GHz, under Linux kernel 5.9.14 and R version 4.0.3. If not otherwise specified, only one CPU core is employed for all computational tasks. Since chemistry is an inherently embarrassingly parallel task, the speedup achieved on a single CPU as in this work will transfer, given simulation grids large enough to make up for the overhead, to parallel computations.

Numerical simulation of flow and transport
We consider a stationary, fully-saturated, incompressible, isothermal 1D Darcy flow in a homogeneous medium. Transport is restricted to pure advection and the feedback of mineral precipitation and dissolution on porosity and permeability is disregarded; the fluid density is also considered constant. Advection is numerically computed via a forward Euler explicit resolution scheme:

$$C_i(x, t + \Delta t) = C_i(x, t) - \frac{u\,\Delta t}{\Delta x}\,\big[C_i(x, t) - C_i(x - \Delta x, t)\big] \qquad (1)$$

where u is the module of the Darcy velocity, C_i(x, t) the volumetric concentration (molality) of the i-th solute species at the point x and time t, and Δx the size of a grid element. For this scheme, the Courant-Friedrichs-Lewy (CFL) stability condition imposes that the Courant number ν be less than or equal to 1:

$$\nu = \frac{u\,\Delta t}{\Delta x} \le 1 \qquad (2)$$

For Courant numbers less than 1, numerical dispersion arises; the scheme is unstable for ν > 1. The only solution for advection which is both stable and free of numerical dispersion is obtained with ν = 1. Thus, the CFL condition is very limiting on Δt: a factor two refinement in the spatial discretisation corresponds to a factor two decrease in Δt, and hence to a doubling of the required iterations. Note that equation (1) is not written in terms of porosity, so that effectively the Darcy velocity is assumed equal to the fluid flux or, alternatively, the porosity is equal to unity. This assumption does not have any impact on the calculations besides the initial scaling of the system.
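As an illustration, a minimal R sketch of one such advection step could read as follows. The function and variable names are ours, and the layout of the concentration matrix (one row per grid cell, one column per transported element) is an assumption made for the example.

```r
## Minimal sketch of the explicit upwind advection step of eq. (1).
## 'conc': matrix of total elemental concentrations (rows: grid cells,
## columns: elements); 'c_in': inflow boundary concentrations.
AdvectUpwind <- function(conc, c_in, u, dx, dt) {
  nu <- u * dt / dx                 # Courant number of eq. (2)
  stopifnot(nu <= 1)                # CFL condition
  upstream <- rbind(c_in, conc[-nrow(conc), , drop = FALSE])
  conc - nu * (conc - upstream)     # eq. (1) applied to all columns
}
```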
The implemented advection relies on the transport of total elemental concentrations instead of the actual dissolved species, since all solutes are subjected to the very same advection equation (Parkhurst and Wissmeier, 2015). Special attention needs to be paid to pH, which is defined in terms of the activity of protons: pH = −log₁₀[H⁺]. For a complete description of the chemical system at any time, seven input variables are required: pH, C, Ca, Mg, Cl, calcite and dolomite. Those can be considered, with an abuse of language, state variables, in the sense that they constitute the necessary and sufficient inputs of the geochemical subsystem, and all reactions only depend on them. The outcome of the "full physics" calculations is completely defined (at least with the simplifications discussed above) by four distinct quantities: the amounts of reaction affecting the two minerals calcite (i) and dolomite (ii) in the given time step, from which the changes in solutes Ca, Mg and C can be back-calculated; Cl (iii), which is actually non-reactive; and pH (iv). In a completely process-agnostic, data-driven framework, however, the relationships between minerals and aqueous concentrations are disregarded, and the output of the chemical subsystem is expressed solely in terms of the input variables.

For the remainder of this work, the geochemical benchmark described above is solved on a 1D column of length 0.5 m, with a constant fluid velocity of u = 9.375 · 10⁻⁶ m/s. The domain is discretised with grid refinements ranging between 50 and 500 grid elements. Higher refinements have a double effect: on one side, larger grids obviously increase the overall computational load, in particular for chemistry; on the other side, given the restriction of the implemented forward Euler explicit advection scheme, the time stepping required for the coupled simulations to be free of numerical dispersion decreases accordingly. Smaller time steps decrease the computational load of geochemistry for each iteration, since they require shorter time integrations, but also require more coupled iterations to reach the same simulation time. More iterations also mean more chances for errors introduced by surrogates to propagate further into the simulations in both space and time. In the presence of significant overhead due to, e.g., data passing between different software components or the setup of geochemical simulations, the advantage due to shorter time steps vanishes. However, these aspects become more relevant in the context of parallelisation of geochemistry and are not addressed in the present work.
All coupled simulations, both reference (full physics) and with surrogates, are run with a constant time step, either honouring the CFL condition with ν = 1, and thus free of numerical dispersion, or, when assessing how the speedup scales with larger grids, with a fixed time step small enough for the CFL condition (eq. 2) to be satisfied for every discretisation. As previously noted, the resulting simulations will be affected by grid-dependent numerical dispersion, which we do not account for in the present work. This makes the results incomparable in terms of transport across grids. However, since the focus is on the acceleration of geochemistry through pre-computed surrogates, this is an acceptable simplification.
The comparison between the reference simulations, obtained by coupling transport with the PHREEQC simulator, and those obtained with surrogates is based on an error measure composed as the geometric mean of the relative RMSEs of each variable i, using the variable's maximum at a given time step as norm:

$$E(t) = \left[\prod_{i=1}^{m}\frac{1}{\max_j |y_{ij}^{ref}(t)|}\sqrt{\frac{1}{n}\sum_{j=1}^{n}\big(y_{ij}^{surr}(t) - y_{ij}^{ref}(t)\big)^2}\right]^{1/m} \qquad (3)$$

where m is the number of variables to compare, n the grid dimension and t the particular time step where the error is computed.
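For concreteness, a small R sketch of this error measure could read as follows; the matrix arguments and their layout are hypothetical.

```r
## Geometric mean of relative RMSEs, eq. (3). 'ref' and 'surr' are
## n x m matrices (grid cells x variables) at a given time step t.
GeomError <- function(ref, surr) {
  rel_rmse <- sapply(seq_len(ncol(ref)), function(i)
    sqrt(mean((surr[, i] - ref[, i])^2)) / max(abs(ref[, i])))
  exp(mean(log(rel_rmse)))          # geometric mean over the m variables
}
```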
In this work, the datasets used for training the surrogates are obtained directly by storing all calls to the full physics simulator and its responses in the reference coupled reactive transport simulations, possibly limited to a given simulation time. This way of proceeding is considered more practical than, e.g., an a priori sampling of a given parameter space, where the bounds of the parameter space are defined by the ranges of the input/output variables occurring in the reference coupled simulations. This strategy mimics the problem of wanting to train a surrogate directly at runtime during the coupled simulations. Furthermore, an a priori, statistical sampling of parameter space, in the absence of restrictions based on the physical relationships between the variables, would include unphysical and irrelevant input combinations. By employing only the input/output tables actually required by the full physics simulations, this issue is automatically solved; however, the resulting datasets will in general be skewed, multimodal and highly inhomogeneously distributed within the parameter space, with highly dense samples in some regions and even larger empty ones.

Hierarchical coupling of chemistry
In this work we consider only a Sequential Non-Iterative Approach (SNIA) coupling scheme, meaning that the sub-processes flow, transport and chemistry are solved numerically one after another before advancing to the next simulation step. For the sake of simplicity, we let the CFL condition (2) for advection dictate the allowable time step for the coupled simulations.
Replacing the time-consuming, equation-based numerical simulator for geochemistry with an approximated but quick surrogate introduces inaccuracies into the coupled simulations. These may quickly propagate in space and time and ultimately lead to incongruent and unusable results.
To mitigate this problem, we implement a hierarchy of models used to compute chemistry at each time step during the coupled simulations. The idea is to first ask the surrogate for predictions, then identify implausible or unphysical ones, and finally run the full physics chemical simulator for the rejected ones. This way, the surrogate can be tuned to capture with good accuracy the bulk of the training data, and no particular attention needs to be paid to the most difficult "corner cases". For the highly non-linear systems usually encountered in geochemistry, this is of great advantage. In practice, however, there is still the need for a reliable and cheap error estimation of surrogate predictions at runtime.
It is important to understand that the criteria employed to accept or reject the surrogate predictions depend strictly on the architecture of the multivariate surrogate and on the actual regression method used. Methods such as kriging offer error models based, e.g., on the distance of the estimated point from the nearest training data. However, in the general case, any error estimation requires first the training and then the evaluation at runtime of a second "hidden" model. Both steps can be time-consuming; furthermore, in the general case one can only guarantee that the error is expected, in a probabilistic sense, to be lower than a given threshold.
In a completely data-driven surrogate approach, where each of the output variables is independently approximated by a different multivariate regressor, checking mass conservation is a very inexpensive way to estimate the reliability of a given surrogate prediction, since it only requires the evaluation of linear combinations across predictors and predictions. Other constraints suited to the chemical problem at hand may be added, such as charge balance; however, we only use mass balance in the present work. Figure 1 illustrates this simple hierarchical coupling schematically. For the chemical benchmark of section 2.2, three mass balance equations can be written, one for each element C, Ca and Mg, accounting for the stoichiometry of the minerals' chemical formulas. If a surrogate prediction exceeds a given, predetermined tolerance on the Mean Absolute Error of the balance equations, that particular prediction is rejected and a more expensive full physics simulation is run instead.
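A minimal sketch of such a check in R, assuming data frames `inp` and `pred` holding the seven state variables before and after the chemistry step (column names are illustrative), could read:

```r
## Residuals of the three balance equations (C, Ca, Mg), from the
## stoichiometry of calcite (CaCO3) and dolomite (CaMg(CO3)2); a
## prediction is rejected when their MAE exceeds the tolerance.
RejectSurrogate <- function(inp, pred, tol) {
  d_cc  <- pred$calcite  - inp$calcite    # change in mineral amounts
  d_dol <- pred$dolomite - inp$dolomite
  res <- cbind((pred$C  - inp$C)  + d_cc + 2 * d_dol,  # C balance
               (pred$Ca - inp$Ca) + d_cc +     d_dol,  # Ca balance
               (pred$Mg - inp$Mg) +            d_dol)  # Mg balance
  rowMeans(abs(res)) > tol   # TRUE: run full physics for this cell
}
```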

This approach moderates the need for extremely accurate regressions, especially in instances of non-linear behaviour of the chemical models, for example when a mineral precipitates for the first time or when it is completely depleted, which are hard for regressors to capture. However, the number of rejected simulations must be low to produce relevant speedups; there is effectively a trade-off between the accuracy of the surrogates (and the effort and time which goes into it) and the speedup achieved in coupled simulations.

3 Fully data-driven approach

The first approach is a completely general one, fully data-driven and thus process-agnostic: it can be employed for any kind of numerical model or process which can be expressed in the form of input and output tables. In our case, the tables produced by the geochemical sub-process during the reference coupled simulations are used to train seven multivariate regressors, one for each output.

The reference simulations, and hence the dataset for training the surrogate, are fully coupled simulations on grids of 50, 100 and 200 elements with a fixed time step of 210 s, run until 33600 s, i.e., 161 total coupling iterations. As previously noted, these simulations are not comparable among themselves due to significant numerical dispersion; however, from the point of view of the geochemical processes, this strategy has the advantage of spreading the "perturbations" due to the transport of the results of geochemistry in the previous time step. Instead of the usual random split of the dataset into train and test subsets, customary in the machine learning community, we retained only the data resulting from the first 101 iterations for training the surrogates, and evaluated the resulting reactive transport simulations until iteration 161, where the geochemical surrogate is faced with 60 iterations on unseen or "out of sample" geochemical data. The training dataset comprises tables with 13959 unique rows or input combinations. All simulations, reference and with surrogates, are run on a single CPU core.
The choice of the regressor for each output is actually arbitrary, and nothing forbids using a different regressor for each output variable. Among the kinds of algorithms that we tested, we found that decision-tree based methods such as Random Forest and their recent gradient boosting evolutions appear the most flexible and successful for our purposes. Their edge can, in our opinion, be summarised by: (1) implicit feature selection by construction, meaning that the algorithm automatically recognizes which input variables are most important for the estimation of the output; (2) no need for standardisation of inputs or outputs; (3) ability to deal with any probability distribution; (4) fairly quick to train, with sensible hyperparameter defaults; (5) extremely efficient implementations available.
Points (2)-(4) cannot be overlooked. Data normalisation or standardisation techniques, also called "preprocessing" in machine learning lingo, are redundant with decision-tree based algorithms, whereas they have a significant impact on results and training efficiency with other regressors such as Support Vector Machines and Artificial Neural Networks. The distributions displayed by the variables in the geochemical data are extremely variable and cannot in general be assumed uniform, Gaussian or lognormal. We found that the Tweedie distribution is suited to reproduce many of the variables in the training dataset. The Tweedie distribution is a special case of the exponential dispersion models introduced by Tweedie (1984) and thoroughly described by Jørgensen (1987), which finds application in many actuarial and signal processing problems (Hassine et al., 2017). The interesting case, which is normally referred to when using the term "Tweedie", is when 1 ≤ p ≤ 2. This distribution represents positive variables with positive mass at zero, meaning that it preserves the "physical meaning" of zero. This is intuitively an important property when modelling solute concentrations and mineral abundances: the geochemical system solved by the full physics simulator is radically different when, e.g., a mineral is present or not.
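As a hedged illustration, training one xgboost regressor with a Tweedie objective could look as follows; the column names and hyperparameter values are placeholders, not the settings actually used in this work.

```r
library(xgboost)
## One gradient boosting regressor per output; shown here for calcite.
inputs <- c("pH", "C", "Ca", "Mg", "Cl", "calcite", "dolomite")
dtrain <- xgb.DMatrix(data = as.matrix(train[, inputs]),
                      label = train$calcite_out)   # hypothetical label column
fit_calcite <- xgb.train(
  params = list(objective = "reg:tweedie",
                tweedie_variance_power = 1.5,  # 1 <= p <= 2: mass at zero
                max_depth = 6, eta = 0.3),
  data = dtrain, nrounds = 100)
```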
In the previous section it was claimed that, in the framework of hierarchical coupling, there is no practical need to further refine the regressions. This could be achieved by hyperparameter tuning and by using a different and more adapted probability distribution for each label, including a proper fitting of the parameter p for the Tweedie variables. While this would of course be beneficial, we proceed now by plugging such a rough surrogate into the reactive transport simulations. The coupled simulations with surrogates are performed on the three grids for 161 iterations, setting the tolerance on mass balance to 10⁻⁵ or 10⁻⁶, or only relying on the surrogate, meaning with no call to PHREEQC even if a large mass balance error is detected.

Figure 2 exemplarily displays the variable profiles for grid 100 and tolerance 10⁻⁶ at two different time steps: iteration 101, which is the last one within the training dataset, and the end of the simulation time, after 60 coupling iterations in "unseen territory" for the surrogates. The accuracy of the surrogate simulations is excellent for the 101st iteration, but by iteration 161, while still acceptable, some discrepancies start to show. The number of rejected surrogate responses at each time step does not remain constant during the simulations, but increases steadily. An overview of all the simulations is given in Figure 3. The growing rejection rate at tolerance 10⁻⁶ makes the surrogate predictions actually useless in terms of speedup even before they become too inaccurate to be usable. It is also apparent from the error panel in Figure 3 (bottom) that errors introduced in the coupled simulations at early time steps propagate through the rest of the simulations, so that the overall discrepancy between reference and surrogate simulations also steadily increases. Note that this "diverging behavior" also tends to bring the geochemistry "out of sample", in the sense of seen vs. unseen geochemical data: since the training data only comprise "physical" input combinations, due to the introduced inaccuracies we are asking the surrogate for more and more predictions based on slightly "unphysical" input combinations. Having highly accurate surrogates would hence be beneficial also in this regard.
It is difficult to discriminate a priori between acceptable and unacceptable simulation results based on a threshold of an error measure such as that of eq. (3), which can be roughly interpreted as a "mean percentage error". This is also a point where, in our opinion, further research is needed. Relying on the visualisation of the surrogate simulation results and of the reference, we can summarize that the tolerance on mass balance of 10⁻⁶ (solid lines in Figure 3) produces accurate coupled simulations: excellent accuracy within the time steps of the training data and good accuracy after the 60 out-of-sample iterations. The tolerance of 10⁻⁵, as well as the simulations based solely on surrogates, produce acceptable accuracy in sample but unusable and rapidly diverging results out of sample. For the given chemical problem, the 10⁻⁶ tolerance on mass balance could be relaxed, whereas 10⁻⁵ is too optimistic. The optimal value, at least for the considered time steps, lies between these two values.
The overall speedup with respect to the reference simulations, in terms of total wall clock time of the coupled simulations, thus including also the CPU time used for advection and all the overheads, although both are much less computationally intensive than chemistry, and therefore termed pseudo speedup, is summarized in Figure 3.

Figure 3. Purely data-driven approach: evaluation of calls to the full physics simulator for the runs with hierarchical coupling for the three discretisations of 50, 100 and 200 elements, and of the overall discrepancy between surrogate simulations and reference. When the surrogate enters the region of "unseen data", its accuracy degrades significantly, which causes loss of efficiency rather than accuracy.

The simulations with tolerance 10⁻⁶ are not accelerated on the 50 grid (pseudo speedup of 0.86), but they reach 1.33 on the 200 grid. The surrogate-only speedup starts at around 2.6 for the 50 grid and reaches 4.2 for the 200 grid, and that can be taken as a measure of the achievable speedup, in projection, by using extremely accurate surrogates. Considering only the first 101 iterations, the 10⁻⁶ simulations would achieve a speedup slightly larger than one already on the 50 elements grid, and be well over 2 on the 200 grid. Extrapolating to grids with 10⁵ or 10⁶ elements, speedups in the order of 25-50 are achievable for this chemical problem.

4 Physics-based approach

The fully data-driven approach presented above disregards any domain knowledge or known physical relationships between variables besides those which are picked up automatically by the multivariate algorithms operating on the inputs/outputs in the training data. We start this approach by considering the actual "true" degrees of freedom for the geochemical problem, which is fully described by seven inputs and four outputs: ∆calcite, ∆dolomite, Cl and pH. This means that we will have to back-calculate the changes in concentrations for C, Ca and Mg, risking a quicker propagation of errors if the reaction rates of the minerals are incorrectly predicted.
The reference simulations for this part are run with ν = 1, and thus without numerical dispersion, on four different grids: 50, 100, 200 and 500 elements respectively. This implies that the simulation on grid 500 has ten times more coupling iterations than the 50 grid or, in other terms, that the allowable time step in grid 500 is a tenth of that for grid 50.
A common way to facilitate the task of the regressors by "injecting" physical knowledge into the learning task of the algorithms is to perform feature engineering: this simply means computing new variables defined by non-linear functions of the original ones, which may give further insights regarding multivariate dependencies, hidden conditions or relevant subsets of the original data.
For any geochemical problem involving dissolution or precipitation of minerals, each mineral's saturation ratio (SR), or its logarithm SI (Saturation Index), discriminates the direction of the reaction. If SR > 1 (and thus SI > 0) the mineral is oversaturated and precipitates; it is undersaturated and dissolves if SR < 1 (SI < 0); SR = 1 (SI = 0) implies local thermodynamic equilibrium. Writing the reaction of calcite dissolution:

$$\mathrm{CaCO_3} + \mathrm{H^+} \rightleftharpoons \mathrm{Ca^{2+}} + \mathrm{HCO_3^-} \qquad (5)$$

The Law of Mass Action (LMA) relates, at equilibrium, the activities of the species present in the equation. We conventionally indicate activity with square brackets. For eq. (5), the LMA reads:

$$K^{eq}_{Cc} = \frac{[\mathrm{Ca^{2+}}]\,[\mathrm{HCO_3^-}]}{[\mathrm{H^+}]} = \frac{\gamma_{\mathrm{Ca^{2+}}}\,\mathrm{Ca^{2+}}\cdot\gamma_{\mathrm{HCO_3^-}}\,\mathrm{HCO_3^-}}{[\mathrm{H^+}]} \qquad (6)$$

where γ stands for the activity coefficient of the subscripted aqueous species. The solubility product K^{eq}_{Cc} at equilibrium, tabulated in thermodynamic databases, is a function of temperature and pressure and defines the saturation ratio:

$$SR_{Cc} = \frac{[\mathrm{Ca^{2+}}]\,[\mathrm{HCO_3^-}]}{[\mathrm{H^+}]\;K^{eq}_{Cc}} \qquad (7)$$

The estimation of the saturation ratio of equation (7) using the elemental concentrations available in our training data is the first, natural feature engineering attempt. Hereby, a few assumptions must be made.

Using total elemental concentrations as a proxy for species activities implies neglecting the actual speciation when estimating the ion activity products, as well as the difference between concentration and activity; only the "true" activity [H⁺] is known, from the pH. For the chemical problem at hand, as will be shown, this is a viable approximation, but it will not be in the presence of strong gradients of ionic strength or in general for more complex or concentrated systems. An exception to this simplification is required for dissolved carbon due to the well-known carbonate buffer. In this case, given that the whole model is at pH between 7 and 10, we may assume that two single species dominate the dissolved carbon speciation: CO₃²⁻ and HCO₃⁻. The relationship between the activities of those two species is always kept at equilibrium in the PHREEQC models and thus, up to the "perturbation" due to transport, also in our dataset. This relationship is expressed by the reaction and the corresponding law of mass action written in eq. (8):

$$\mathrm{HCO_3^-} \rightleftharpoons \mathrm{H^+} + \mathrm{CO_3^{2-}}, \qquad K^{eq}_{carb} = \frac{[\mathrm{H^+}]\,[\mathrm{CO_3^{2-}}]}{[\mathrm{HCO_3^-}]} \qquad (8)$$

The closure equation, expressing the approximation of the total carbon concentration as the sum of the two species, gives us the second equation for the two unknowns:

$$\mathrm{C} \approx [\mathrm{CO_3^{2-}}] + [\mathrm{HCO_3^-}] \qquad (9)$$

Combining eqs. (8) and (9) we get the estimation of dissolved bicarbonate (the wide tilde indicates that it is an estimation) from the variables total carbon and pH comprised in our dataset and an externally calculated thermodynamic constant:

$$\widetilde{[\mathrm{HCO_3^-}]} = \frac{\mathrm{C}}{1 + K^{eq}_{carb}/[\mathrm{H^+}]} \qquad (10)$$

which in turn allows estimating the saturation ratio of eq. (7) from the state variables:

$$SR^{theor}_{Cc} = \frac{\mathrm{Ca}\cdot\widetilde{[\mathrm{HCO_3^-}]}}{[\mathrm{H^+}]\;K^{eq}_{Cc}} \qquad (11)$$

The two thermodynamic quantities (at 25 °C and atmospheric pressure), K^{eq}_{carb} = 10^{-10.3288} and K^{eq}_{Cc} = 10^{-2.00135}, were computed with the CHNOSZ package for the R environment (Dick, 2019), but may also be derived with simple algebraic calculations from, e.g., the same PHREEQC database employed in the reactive transport simulations.
Do these two newly defined variables, or "engineered features", bicarbonate and calcite saturation ratio, actually help to better understand and characterize our dataset? This can be simply assessed by plotting ∆calcite against the logarithm of SR^{theor}_{Cc}, which is the SI^{theor}_{Cc} (Figure 5a, leftmost panel; the dataset from the reference simulations on grid 200 will be used from now on to illustrate the analysis, since it contains enough data points). While many points remarkably lie on a smooth curve (colored in black), many others are scattered throughout the graph (in red). It is easy to observe that those red points are either on the trivial ∆calcite = 0 line, implying that calcite is undersaturated but not present in the system so nothing happens, or else the reaction did not reach the amount which could have been expected based on its initial undersaturation, simply because calcite has been completely depleted during the time step. All the red points correspond in fact to simulations with calcite = 0 in the labels (results) dataset. The retained black points, however, belong to time steps where the dissolution of calcite is limited by kinetics and not by its initial amount, and can thus be used to estimate the reaction rate. Figure 5a also displays a problem with the defined SR^{theor}_{Cc}: its relationship with ∆calcite is not bijective. This means that we should now proceed to split the data in two different regions above and under the cusp (signalled by the blue horizontal line). However, simply dropping the denominator of equation (11) solves this problem to a large extent:

$$SR_{Cc} = \mathrm{Ca}\cdot\widetilde{[\mathrm{HCO_3^-}]}, \qquad SI_{Cc} = \log_{10} SR_{Cc} \qquad (12)$$

The center panel of Figure 5b shows the scatter plot of ∆calcite versus the simplified SI_{Cc}. All points now lie on a smooth curve, and the relation between the two variables is indeed almost perfectly bijective, with the exception of points very close to the ∆calcite = 0 line, where they are more scattered; but since those points also correspond to the smallest amounts of reaction, we can deem this a successful approximation. Note that dropping the denominator in the definition of SR_{Cc} also means that this feature does not reach one at equilibrium (and SI_{Cc} zero), which is clear observing the range of the x-axis in panels a and b of Figure 5. This has however no practical consequence for this problem: calcite is always undersaturated or at equilibrium in the benchmark, and we have just defined a simple feature which is in bijective relationship with the amount of "true" dissolution in the data. While it could be possible to derive an analytical functional dependency between the observed amount of dissolved calcite and the estimated SI_{Cc}, for example manipulating the kinetic law, we opted to use a regressor instead. The good bijectivity between the two variables means that we should be able to regress the first using only the second.
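In R, the engineered features of eqs. (10)-(12) amount to a few lines; the data frame `dat` holding the state variables is an assumption for the example, and the constant is the value given above.

```r
## Engineered features from the state variables, eqs. (10)-(12).
logK_carb <- -10.3288                         # HCO3- = H+ + CO3-2, 25 degC
act_H <- 10^(-dat$pH)                         # proton activity from pH
hco3  <- dat$C / (1 + 10^logK_carb / act_H)   # estimated HCO3-, eq. (10)
SI_cc <- log10(dat$Ca * hco3)                 # simplified SI_Cc, eq. (12)
```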

The rightmost panel of Figure 5c shows, in blue, the in-sample predictions of a Multivariate Adaptive Regression Spline (MARS) model (Friedman, 1991, 1993), computed through the earth R package (Milborrow, 2018), based only on SI_{Cc}.
The accuracy is already acceptable; however, including further predictors from the already available features, in this case pH, Ca and Mg, a better regression (in red) is achieved, improving the RMSE by more than a factor of two.

Figure 5. (c) A MARS regressor is computed for the retained data points, based solely on the estimated SI_{Cc} (in blue) and using also other predictors to improve the multivariate regression.
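A sketch of the two MARS fits of Figure 5c, assuming a data frame `train` of the retained (black) points with the engineered feature `SI_cc` and a target column `d_calcite` (both names are ours):

```r
library(earth)
fit_blue <- earth(d_calcite ~ SI_cc, data = train)              # monovariate
fit_red  <- earth(d_calcite ~ SI_cc + pH + Ca + Mg, data = train)  # multivariate
```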

Before moving forward, two considerations are important. First, the red points of Figure 5a should not be used when trying to estimate the rate of calcite dissolution, since they result from a steep and "hidden" non-linearity or discontinuity in the underlying model. This is a typical example of data potentially leading to overfitting in a machine-learning sense. Secondly, this "filter" does not need to be applied at runtime during coupled reactive transport simulations: it suffices to estimate correctly the reaction rate given the initial state and then ensure that calcite does not reach negative values.
More interesting and more demanding is the case of dolomite, which first precipitates and then re-dissolves in the benchmark simulations. In a completely analogous manner as above we define its saturation ratio SR_{Dol} as:

$$SR_{Dol} = \frac{\mathrm{Ca}\cdot\mathrm{Mg}\cdot\widetilde{[\mathrm{HCO_3^-}]}^2}{[\mathrm{H^+}]^2\;K^{eq}_{Dol}} \qquad (13)$$

thus using the total elemental dissolved concentrations and the bicarbonate estimated from total C, and with K^{eq}_{Dol} = 10^{3.647} resulting from the reaction:

$$\mathrm{CaMg(CO_3)_2} + 2\,\mathrm{H^+} \rightleftharpoons \mathrm{Ca^{2+}} + \mathrm{Mg^{2+}} + 2\,\mathrm{HCO_3^-} \qquad (14)$$

The theoretical value of K^{eq}_{Dol} = 10^{3.647} used for the calculation of SI_{Dol} does not discriminate the initially undersaturated from the oversaturated samples (dashed vertical black line in Figure 6). The "offset" which would serve for a correct discrimination is nothing else than the maximum value of SR_{Dol} restricted to the region where ∆dolomite ≤ 0. We correspondingly update the definition of SR_{Dol}:

$$SR_{Dol} \leftarrow \frac{SR_{Dol}}{\max\left\{SR_{Dol} : \Delta\mathrm{dolomite} \le 0\right\}} \qquad (15)$$

Now we are guaranteed that the vertical line SR_{Dol} = 1 (or equivalently, SI_{Dol} = 0, plotted with a dashed blue line in Figure 6) separates dissolution from precipitation of dolomite. However, the behaviour of dolomite precipitating while calcite is dissolving cannot be explained based only on the estimated SI_{Dol}, since their relationship is not surjective (Figure 8a). Here again we can use a piece of domain knowledge to engineer a new feature to move forward.
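The empirical renormalisation of eq. (15) amounts to one line of R, assuming vectors `SR_dol` and `d_dolomite` computed from the training data:

```r
## Offset: largest SR_Dol among samples where dolomite does not precipitate
offset <- max(SR_dol[d_dolomite <= 0])
SI_dol <- log10(SR_dol / offset)   # now SI_dol = 0 separates the regimes
```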
The Mg/Ca ratio is often used to study the thermodynamics of dissolution of calcite and precipitation of dolomite (Möller and De Lucia, 2020). Effectively, the occurring overall reaction which transforms calcite into dolomite reads:

$$2\,\mathrm{CaCO_3} + \mathrm{Mg^{2+}} \rightleftharpoons \mathrm{CaMg(CO_3)_2} + \mathrm{Ca^{2+}} \qquad (16)$$

By applying the law of mass action to reaction (16), it is apparent that its equilibrium constant is a function of the Mg/Ca ratio (or of its inverse, in the form of equation 16). Plotting ∆dolomite versus the initial Mg/Ca ratio, a particular ratio of 7.345 discriminates between two distinct regions for this reaction. Incidentally, this splitting value corresponds to the highest observed SR_{Dol} in the training data; again, as previously noted for the offset on the estimated saturation index, this numerical value depends on the considered grid and time step. In the left-hand region we observe a smooth, quasi-linear dependency of the amount of precipitated dolomite on the initial Mg/Ca. This is a simple bijective relationship where we can apply a simple monovariate regression: the amount of precipitated dolomite is accurately predicted by a MARS regressor using the sole Mg/Ca as predictor.
The region to the right of the splitting ratio can be best understood considering the fact that the precipitation of dolomite is limited, in this region, by a concurrent amount of calcite dissolution. The full physics chemical solver iteratively finds the correct amounts of calcite dissolution and dolomite precipitation while honouring both the kinetic laws and all the other conditions for a geochemical DAE system (mass action equations, electroneutrality, positive concentrations, activity coefficients, ...). We cannot reproduce such an articulated and "interdependent" behaviour without knowing the actual amount of dissolved calcite: we are forced here to employ the previously estimated ∆calcite as a "new feature" to estimate the amount of dolomite precipitation, albeit limited to this particular region of the parameter space. A surprisingly simple expression, fortunately, captures this relationship quite accurately (Figure 9). This implies of course that during coupled simulations the ∆calcite must be computed first; relying on this value, the ∆dolomite can then be estimated.
The last parameter space region left to consider is the orange-shaded, top-left quadrant of Figure 6. Here, although dolomite is undersaturated at the beginning of the time step, it still precipitates in the end, following the concurrent dissolution of calcite which changes its saturation state. Since however we have already calculated the ∆calcite, we can update the concentrations of dissolved Ca and C by the corresponding amounts. One of these two concentrations, together with that of Mg, will constitute a limiting factor for the precipitation of dolomite. Hence, plotting the ∆dolomite against the minimum value of these three concentrations at each point (C must be divided by two for the stoichiometry of dolomite), we obtain a piecewise-linear relationship with limited non-linear effects. A very simple regression is hence sufficient to capture the bulk of the "true model behavior" for all these data points (Figure 10). Now the behavior of calcite and dolomite is fully understood and we dispose of a surrogate for both of them. Among the remaining output variables, only pH needs to be regressed: Cl is non-reactive, meaning that its surrogate is the identity function. For pH, while it could be possible to derive a simplified regression informed with geochemical knowledge, we chose for simplicity to use the xgboost regressor.

Figure 10. Regression in the orange quadrant.
Summarizing, we effectively designed a decision tree, based on domain knowledge, which enabled us to make sense of the "true" data, to perform physically meaningful feature engineering, and ultimately to define a surrogate model "translated" to the data domain (Figure 11). The training of this decision tree surrogate consists merely in computing the engineered features, finding the apparent offset for the SI_{Dol}, the split value for the Mg/Ca ratio, and performing six distinct regressions on data subsets, of which three are monovariate and two use fewer predictors than the corresponding completely data-driven counterpart.
All of them, excluding pH, only use a subset of the original training dataset. On our workstation, this operation takes a few seconds. The resulting surrogate is valid for the ∆t of the corresponding training data.
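A condensed, illustrative R sketch of such a decision tree surrogate is given below. The helper `EngineerFeatures()`, the fitted objects (`fit_cc`, `fit_dol_lim`, `fit_dol_mgca`, `fit_dol_cc`, `fit_ph`) and the split value `split_mgca` are assumptions standing in for the trained pieces described above; the exact branching of Figure 11 may differ.

```r
## Predict one chemistry step for a one-row state data frame 'st'.
## Sign convention: d_cc, d_dol are changes in mineral moles
## (negative = dissolution), solutes follow by stoichiometry.
PredictChemistry <- function(st) {
  ft <- EngineerFeatures(st)               # eqs. (10)-(15)
  ## calcite: regressed dissolution, clipped so the mineral stays >= 0
  d_cc <- pmax(predict(fit_cc, ft), -st$calcite)
  ## dolomite: three regions of the parameter space
  if (ft$SI_dol <= 0) {                    # "orange quadrant"
    lim <- pmin(st$Ca - d_cc, st$Mg, (st$C - d_cc) / 2)
    d_dol <- predict(fit_dol_lim, data.frame(lim = lim))
  } else if (st$Mg / st$Ca < split_mgca) { # smooth quasi-linear region
    d_dol <- predict(fit_dol_mgca, data.frame(mgca = st$Mg / st$Ca))
  } else {                                 # limited by calcite dissolution
    d_dol <- predict(fit_dol_cc, data.frame(d_cc = d_cc))
  }
  ## back-calculate solutes from stoichiometry; Cl is left untouched
  st$pH <- predict(fit_ph, ft)
  st$Ca <- st$Ca - d_cc - d_dol
  st$Mg <- st$Mg - d_dol
  st$C  <- st$C  - d_cc - 2 * d_dol
  st$calcite  <- st$calcite  + d_cc
  st$dolomite <- st$dolomite + d_dol
  st
}
```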

To evaluate the performance of this surrogate approach, a decision tree is trained separately for each grid (and hence ∆t) using the reference time steps until 42000 s, whereas the coupled simulations are run until 60000 s, so that at least 30 % of the simulation time is computed on unseen geochemical data.
The top panel of Figure 12 shows the results of the coupled simulation for grid 50 using the surrogate trained on the same data, at the end of the iterations used for training. Discrepancies with respect to the reference full physics simulation are already evident. The problem here is that the training dataset is too small and the time step too large for the decision tree surrogate to be accurate. However, nothing forbids performing "inner iterations" for the chemistry using a surrogate trained on a finer grid, which directly corresponds to a smaller ∆t. For grid 50 (∆t = 1066 s) we can hence use the surrogate trained on grid 500 (∆t = 106.6 s), simply calling it 10 times within each coupling iteration. The bottom panel of Figure 12 shows the results obtained with this strategy. Note that the decision tree approach has been implemented in pure high-level R language (up to the calls to the regressors xgboost and earth, which are implemented in low-level languages such as C/C++) and is not optimized. A better implementation would further improve its performance, especially in the case where repeated calls to the surrogate are performed at each coupled iteration.
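In R, the inner iterations simply amount to repeated surrogate calls within one coupling step; names are as in the sketch above.

```r
## grid 50 coupling step (dt = 1066 s) with the grid 500 surrogate
## (dt = 106.6 s): apply the surrogate 10 times per transport step
for (k in 1:10) state <- PredictChemistry(state)
```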

Discussion and future work
The results presented in this work devise some strategies which can be successfully exploited to speed up reactive transport simulations. The simplifications concerning the transport and the coupling itself (stationary flow; pure advection with a dispersive, fully explicit forward Euler scheme; no feedback of chemistry on porosity and permeability; initially homogeneous medium; kinetic rates not depending on reactive surfaces) are obviously severe, but most of them should only marginally affect the validity of the benchmarks concerning the achievable speedup of geochemistry in a broad class of problems.

A fully data-driven approach, combined with a hierarchical coupling in which full physics simulations are performed only if surrogate predictions are found implausible, is feasible and promises significant speedups for large scale problems. The main advantage of this approach is that the same "code infrastructure" can be used to replace any physical process, not limited to geochemistry: it is completely general, and it could be implemented in any multiphysics toolbox to be used for any co-simulated process. The hierarchy of models for process co-simulation is a vast research field in itself. This idea has, to our knowledge, never been implemented specifically for reactive transport, but has been proposed, e.g., for particular problem settings in fluid dynamics and elastomechanics (Altmann, 2013; Altmann and Heiland, 2015) and in the broader context of theoretical model reduction and error control (Domschke et al., 2011). This is however a fertile interdisciplinary research task, and it is not difficult to foresee that significant progress in this area will soon be required to facilitate and fully leverage the powerful machine learning algorithms already available, in order to speed up any complex, multiscale numerical simulations.

The coupling hierarchy implemented in this work is obviously extremely simple and cannot be directly compared with the above cited works, since it is merely based on an a posteriori evaluation of the plausibility of geochemical simulations. Furthermore, it exploits redundant regressions, which is suboptimal, albeit practical: in effect, regressing more variables than strictly necessary is not much different from regressing the true independent variables and their error models. An alternative would be to regress only partially overlapping subsets of variables and base the error check on the accordance between the overlapping ones. As an example, one could regress the ∆dolomite, ∆calcite and ∆Ca, limiting in practice the mass balance check to one element.
In our opinion there is no point in discussing whether there is one most suitable or most efficient regression algorithm. This largely depends on the problem at hand and on the skills of the modeller. While we rather focused on gradient boosting decision-tree regressors, for the reasons briefly discussed in section 3, a considerable number of authors successfully applied artificial neural networks to a variety of geochemical problems and coupled simulations (Laloy and Jacques, 2019; Guérillot and Bruyelle, 2020; Prasianakis et al., 2020). We would like to point out that transforming geochemistry, as any other process, into a pure machine learning problem requires on the one hand skills that are usually difficult for geoscientists to acquire, and on the other hand it fatally overlooks domain knowledge that can be used to improve at least the learning task, which directly results in accurate and robust predictions, as we demonstrated in section 4. Feature engineering based on known physical relationships and equations should be part of any machine learning workflow anyway; building experience in this matter and devising suitable strategies for a broad class of geochemical problems is in our opinion much more profitable than trying to tune overly complex "black box" models of general applicability.

The purely data-driven approach has its own merits and applications. As already noted, it is a completely process-agnostic approach which can be implemented in any simulator for any process. However, in the absence of physical knowledge within the surrogate, the training data must cover beforehand all the processes and the scenarios happening in the coupled simulations. On-demand training and successive incremental updates of the surrogates at runtime during the coupled simulations would mitigate this issue. This would require a careful choice of the regressors, since not all of them have this capability, and possibly a sophisticated load balance distribution, likely viable only in the context of parallel computing. In perspective, however, this is a feature that in our opinion should be implemented in the numerical simulators. A second issue, related to the first, is that a data-driven surrogate trained on a specific chemical problem (intending here initial conditions, concentration of the injected solutions, mineral abundances, time steps, ...) is not automatically transferable to different problem settings, even when, for example, only a single kinetic constant is varied. Again, shaping the surrogate following the physical process to be simulated seems here the most straightforward way to overcome this issue, at least partially: one would dispose of partial regressions in specific parameter space regions which could be varied following changes in the underlying parameters.

It remains to be assessed whether it is possible to generalize and automate the physics-based surrogate approach devised in section 4 for geochemical problems of higher complexity, i.e., with many minerals reacting. No claim of optimality is made about the actual choice of engineered features we made for this chemical benchmark: different features could possibly explain the data, and thus the chemical process, even more simply. The important part is the principle: identify relationships as bijective as possible between input and output parameters, compartmentalized in separated regions of the parameter space, using features derived from the governing equations. An automation of feature engineering based on the stoichiometry of the minerals is a straightforward extension, since it can be achieved by simply parsing the thermodynamic databases. An automatic application of the approach starting with a large number of engineered features may originate forests of trees, much like the well known random forest or gradient boosting algorithms, but specialised in geochemical models: a true hybrid physics-AI model.
Also, the regressors which constitute the leaves of the decision tree of Figure 11 are completely arbitrary and were selected based on our own experience. A more in-depth breakdown of the relationships between variables, for example analytical expressions derived directly from the kinetic law, could reduce most or all regressions to simple statistical linear models, which would even further increase the interpretability of the surrogate.

Conclusions
Employing surrogates to replace computationally intensive geochemical calculations is a viable strategy to speed up reactive transport simulations. A hierarchical coupling of geochemical sub-processes, allowing recourse to "full physics" simulations when surrogate predictions are not accurate enough, is advantageous to mitigate the inevitable inaccuracies introduced by the approximated surrogate solutions. In the case of purely data-driven surrogates, which are a completely general approach not limited to geochemistry, regressors operate exclusively on input/output data, oblivious to known relationships. Here, redundant information content can be employed effectively to obtain a cheap estimation of the plausibility of surrogate predictions at runtime, by checking the errors on mass balance. This estimation works well at least for the presented geochemical benchmark. For the physics-based surrogate, our strategy consists in partitioning the parameter space based on the engineered features, and looking for bijective relationships within each region. This approach reduces both the required multivariate predictions and the dimension of the training dataset upon which each regressor must operate. Algorithmically it can be represented by a decision tree, and it proved both accurate and robust, being equipped to handle unseen data and less sensitive to sparse training datasets, since it embeds and exploits knowledge about the modelled process. Further research is required in order to generalize and automate it for more complex chemical problems, as well as to adapt it to specific needs such as sensitivity and uncertainty analysis.
Both approaches constitute non-mutually-exclusive, valid strategies in the arsenal of modellers dealing with overwhelmingly CPU-expensive reactive transport simulations, required by present day challenges in subsurface utilisation. In particular, we are persuaded that hybrid AI-physics models will offer the decisive computational advantage needed to overcome current limitations of classical equation-based numerical modelling.

Author contributions. MDL shaped the research, performed analyses and programming, and wrote the manuscript. MK helped providing funding, shaping the research, and revised the manuscript.
Competing interests. The authors declare no conflicts of interest.