Constraining stochastic 3-D structural geological models with topology information using Approximate Bayesian Computation in GemPy 2.1

Structural geomodeling is a key technology for the visualization and quantification of subsurface systems. Given the limited data and the resulting necessity for geological interpretation to construct these geomodels, uncertainty is pervasive and traditionally unquantified. Probabilistic geomodeling allows for the simulation of uncertainties by automatically constructing geomodels from perturbed input data sampled from probability distributions. But random sampling of input parameters can lead to the construction of geomodels that are unrealistic, either due to modeling artefacts or by not matching known information about the regional geology of the modeled system. We present here a method to incorporate geological information in the form of geomodel topology into stochastic simulations to constrain resulting probabilistic geomodel ensembles. Simulated geomodel realisations are checked against topology information using a likelihood-free Approximate Bayesian Computation approach. We demonstrate how we can learn our input data parameter (prior) distributions on topology information in two experiments: (1) a synthetic geomodel using a rejection sampling scheme (ABC-REJ) to demonstrate the approach; (2) a geomodel of a subset of the Gullfaks field in the North Sea, comparing both rejection sampling and a Sequential Monte Carlo sampler (ABC-SMC). We also discuss possible speed-ups of using more advanced sampling techniques to avoid simulation of unfeasible geomodels in the first place. Results demonstrate the feasibility of using topology as a summary statistic to restrict the generation of model ensembles with additional geological information and to obtain improved ensembles of probable geomodels using stochastic simulation methods.

function that can be used in a Bayesian inference is considered intractable, apart maybe from time- and cost-consuming expert elicitation. This work instead tries to approximate the (Bayesian) posterior geomodel ensemble that incorporates both the geological input data and the topology information, using an Approximate Bayesian Computation (ABC) approach for a likelihood-free approximation of the posterior. To test this approach we designed two distinct experiments, one synthetic and one case study:

1. We construct a synthetic fault model and explore its topological uncertainty. We do this by describing our input data not as fixed parameters, but as probability distributions. We then use Monte Carlo sampling to obtain input data from which geomodels are constructed. We then show how a single topology graph can be used as a summary statistic in an ABC-rejection scheme to approximate the posterior model ensemble that honours the added information.
2. To test the same ABC approach on a real-world dataset, we apply it to a model extracted from a seismic interpretation of the North Sea Gullfaks field. We also explore a more advanced sampling technique to demonstrate possibilities for reducing the computational costs of the method.

In the following section we give an overview of the applied implicit geomodeling approach, the basic concept of Bayesian inference and its use in probabilistic geomodeling, as well as the idea behind Approximate Bayesian Computation. We further describe how we analyze model topology and use it as a summary statistic. We then introduce, in detail, both the synthetic fault model and the case study, followed by a comprehensive discussion of our findings.

Implicit Geomodeling
Several approaches exist for creating structural geomodels, which can be separated into three main categories: (a) interpolation, (b) kinematic methods and (c) process simulation. The interpolation of surfaces and volumes from spatial data is currently the most widely used approach in geosciences, especially manually, which requires robust knowledge of the geological setting and extensive amounts of data in order to robustly approximate reality. Additionally, highly complex structures such as extensive fault networks and repeatedly folded areas are challenging to recreate using current interpolation methods (Jessell et al., 2014; Wellmann et al., 2016; Laurent et al., 2016).

The open-source, Python-based implicit modeling package GemPy¹ (de la Varga et al., 2019) is used here. It is based on the work of Lajaunie et al. (1997) and Calcagno et al. (2008), and allows the interpolation of geological interface position and plane orientation data by using a scalar field method in combination with cokriging (Chilès et al., 2004). For a detailed overview of the algorithm and the functionality of GemPy, we refer the reader to de la Varga et al. (2019).

[Figure 1: Horst and graben structures (modified after Fossen, 2010). The black nodes represent the centroids of the geobodies and the black edges the topology connections, together building a topology graph.]

Topology, referring to "properties of space that are maintained under continuous deformation, such as adjacency, overlap or separation" (Thiele et al., 2016a;Crossley, 2006), is a highly relevant concept in structural geology, as it provides a useful description of the relations between stratigraphic units across layer interfaces, faults or the contact to an intrusive body.
Generally, eight binary topological relationships can exist between three-dimensional objects (Egenhofer, 1990), while a total of 69 relations are possible between simple lines, surfaces and bodies (e.g. surfaces without holes; see Zlatanova, 2000).

From these eight Egenhofer-Herring relationships, meets (i.e. adjacency) is the most relevant one for describing structural and stratigraphic relationships, such as across-fault connectivity of layers (see Fig. 1). The topology relationships of geological models can be represented by an adjacency graph, which represents topological units as individual nodes and their connections by edges (see Fig. 1). The adjacency topology of geological structures is highly dependent on deformation: compressional deformation leads to different connectivities in the topology graph than does extensional deformation, but even within the same type of deformation, variations can lead to different topologies, as visualized by the Horst and Graben structures in Figure 1. Not only does the type of deformation have an important influence on the system's topology, but also its quantity, e.g. the fault throw. For an in-depth introduction and discussion of topology in geology see Thiele et al. (2016a) for the fundamental theory, and Thiele et al. (2016b) as well as Pakyuz-Charrier et al. (2019) for the influence of structural uncertainty on geomodel topology.

¹ URL: github.com/cgre-aachen/gempy

https://doi.org/10.5194/gmd-2020-136 Preprint. Discussion started: 18 August 2020. © Author(s) 2020. CC BY 4.0 License.

Computing geomodel topology
To compute the geomodel topology with the necessary computational efficiency to conduct a feasible stochastic simulation of realistic geomodels, we implemented a topology algorithm using theano (Theano Development Team et al., 2016) into the core of GemPy. This enables the topology computation to run alongside the geomodel interpolation on graphical processing units (GPUs). As theano is a highly optimized linear algebra library, the employed method mainly utilizes matrix operations for the computation of the geomodel topology. When the implicit geomodel is discretized using a regular grid, it becomes a 3-D matrix of lithology IDs L (Fig. 2a), which we use for the calculation of the geomodel topology. For each geomodel we also have access to the 3-D boolean matrices F_n for each fault, representing the two sides of the respective fault by two ascending consecutive integers (Fig. 2b). Given these two inputs, we compute the geomodel topology as follows:

1. The lithology matrix L and the summed fault matrices $\sum_{i=1}^{n_\text{fault}} F_i$, where $n_\text{fault}$ is the total number of faults in the geomodel, are combined into a matrix in which each lithology in each fault block is represented by its own unique integer, referred to as the topology labels matrix T (see Fig. 2c):

$$T = L + n_\text{lith} \sum_{i=1}^{n_\text{fault}} F_i,$$

with $n_\text{lith}$ being the total number of lithology IDs in the geomodel.
2. The topology labels matrix T is then shifted twice (forward and backward) along each axis X, Y and Z. The two resulting shifted matrices S_1 and S_2 along each axis are then subtracted from each other to obtain a difference matrix D, in which only the cells along a lithology or fault boundary are non-zero (Fig. 3).
3. The topology labels matrix T is then evaluated at all non-zero cells of D to obtain the two topology labels n_a, n_b of each topological connection (referred to as an edge e), which are stored in a set of unique edges E representing the geomodel's topology. For the example shown in Figures 2 and 3 the abbreviated set is E = {(0, 4), (0, 5), (0, 1), ..., (3, 7)}.
This method of topology calculation works on regular grids, which imposes a strong bias on the result: if the main lithological and structural features are not aligned with the grid orientation, the resulting topology graph can contain spurious connections (or miss actual ones). For a more detailed discussion of the effects of model discretization see Wellmann and Caumon (2018).
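The three steps above can be sketched with plain NumPy on a small label array. This is a simplified stand-in for GemPy's theano-based implementation; the function name and signature are illustrative, not part of GemPy's API:

```python
import numpy as np

def compute_topology_edges(lith, faults, n_lith):
    """Shift-and-subtract topology extraction (illustrative sketch).

    lith   : 3-D integer array of lithology IDs L
    faults : list of 3-D integer arrays F_i (one per fault)
    n_lith : total number of lithology IDs in the geomodel
    Returns the set of unique edges E between topology labels.
    """
    # Step 1: give each lithology in each fault block a unique label
    T = lith + n_lith * sum(faults)

    edges = set()
    for axis in range(T.ndim):
        # Step 2: forward/backward shifted views along this axis, subtracted
        idx1 = [slice(None)] * T.ndim; idx1[axis] = slice(None, -1)
        idx2 = [slice(None)] * T.ndim; idx2[axis] = slice(1, None)
        s1, s2 = T[tuple(idx1)], T[tuple(idx2)]
        diff = s1 - s2
        # Step 3: collect the label pairs wherever the difference is non-zero
        a, b = s1[diff != 0], s2[diff != 0]
        edges.update(zip(np.minimum(a, b).tolist(), np.maximum(a, b).tolist()))
    return edges
```

On a tiny two-layer model without faults, the function recovers the single layer-to-layer adjacency edge, mirroring the abbreviated edge set E described in the text.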

Bayesian Inference
Bayesian inference is fundamentally different from the classical frequentist approach to inference. It treats probabilities as degrees of certainty about a parameter θ, which is inherently considered to be a random variable itself (Bolstad, 2009; VanderPlas, 2014).
It is based on Bayes' theorem, which allows one to update a given probability, the prior probability p(θ) of a parameter θ, after the occurrence of a connected event (Bolstad, 2009):

$$p(\theta \mid y) = \frac{p(y \mid \theta)\, p(\theta)}{p(y)}$$

This updating process relies on the use of a likelihood function p(y|θ), representing the probability distribution of the observed data y of the occurring event. It is used to condition the prior into the posterior distribution p(θ|y), which represents the degree of certainty about the parameter θ after the occurrence of the event and its observed data y.
For use in geomodeling, these parameters can be seen as (de la Varga and Wellmann, 2016; Gelman et al., 2013):

- Model parameters θ: The model-defining parameters (e.g. layer interface positions, dip or fault parameters used for the interpolation of the geomodel), which can be either deterministic (and thus exactly defined and known) or probabilistic.

The latter represent uncertain parameters, whose uncertainty is expressed in the form of probability distributions (e.g. a normal distribution expressing the uncertainty of the vertical subsurface position of a layer interface);
- Observed data y: Represents additional measurements or observations, which should enhance the model definition by providing additional information, with the goal of reducing model uncertainty or enabling the comparison of the model to reality (e.g. by comparing geophysical potential-field measurements with the corresponding forward simulation on the basis of a geomodel). In this work we use topology information in the form of a topology adjacency graph as the "observed data";

- Likelihood functions p(y|θ): These form the relationship between the model parameters θ and the observed data y.
Essentially, this function describes the likelihood of the parameters θ for a given observation y (e.g. MacKay, 2003). In the case of structural modeling, this means that we compute the geomodel from the input parameters θ and compare model predictions (e.g. the thickness of a certain layer at a certain position) with additional observed data. The likelihood of the parameters θ is then encoded in the likelihood function.
While constructing meaningful likelihood functions for physical properties such as layer thickness or geobody volume from observed data is straightforward (de la Varga and Wellmann, 2016), we have no proper framework to construct them for more abstract or "soft" data, such as our understanding of the geological setting, or the topology relationships of our layers across faults or unconformities. For this reason, we chose to pursue a likelihood-free method to estimate our posterior distributions given abstract geological information: Approximate Bayesian Computation.

Approximate Bayesian Computation
Geoscientists often have extensive implicit knowledge of geological settings (e.g. our understanding of the tectonics of a system), but only a limited amount of this knowledge can be incorporated into the geological interpolation function (Wellmann and Caumon, 2018). Additionally, it is often difficult to define formal likelihood functions for geological knowledge, as required for conventional Bayesian inference methods. A less formal but valid alternative approach is to approximate the posterior distributions using Approximate Bayesian Computation (ABC) methods, also referred to as likelihood-free inference methods (Marin et al., 2012). Instead of a probabilistic likelihood function, ABC methods evaluate the distance of stochastically generated models to our additional data using one or multiple summary statistics S (e.g. model topology).

To obtain the approximate posterior distribution we need to sample from our prior parameter distributions, plug the values into our simulator function (our geomodeling software), compute the summary statistic of the simulated data y (the geomodel topology) and evaluate its distance to our observed summary statistic (data) ŷ (e.g. a geomodel topology graph). The most fundamental sampling scheme for ABC is based on rejection sampling (ABC-REJ; see Algorithm 1), for which the distance between our simulated data y and observed data ŷ is calculated using a distance function of the summary statistics d(S(ŷ), S(y(θ))). The simulated model is accepted if the distance is below a user-specified error bound ε ≥ 0 (Sadegh and Vrugt, 2014), or else rejected. The accepted samples form the approximate posterior. Thus, this method circumvents the need to specify a likelihood function for our additional data, while still approximating the posterior distributions incorporating the information of both our priors and our additional information (Sunnåker et al., 2013). Within this work we use the Jaccard distance (1 − J) as a distance function between topology graphs.
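The rejection scheme described above reduces to a short loop. The following sketch uses hypothetical callables standing in for the geomodeling pipeline (prior sampling, simulation, summary statistic and distance function); the actual GemPy-based implementation is more involved:

```python
def abc_rej(prior_sample, simulate, summary, distance, s_obs, eps, n_accept):
    """Minimal ABC rejection sampler.

    prior_sample() draws a parameter set theta from the priors,
    simulate(theta) builds the geomodel, summary(model) extracts the
    summary statistic (e.g. the topology edge set), and distance()
    compares it to the observed summary s_obs. All are placeholders.
    """
    posterior = []
    while len(posterior) < n_accept:
        theta = prior_sample()
        s_sim = summary(simulate(theta))
        if distance(s_obs, s_sim) <= eps:   # keep only close-enough models
            posterior.append(theta)
    return posterior
```

With a toy one-parameter "model" the accepted samples are exactly those whose summary lies within ε of the observation, which is the approximate posterior described in the text.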

Algorithm 1: ABC-REJ

    for i = 1, ..., N do
        while sample i not accepted do
            Sample θ' from the prior p(θ)
            Simulate geomodel y(θ')
            Compute geomodel topology S(y(θ'))
            Calculate d(S(ŷ), S(y(θ')))
            Accept θ' if d ≤ ε
        end while
    end for

A more advanced sampling scheme for ABC is Sequential Monte Carlo sampling (ABC-SMC). In its simplest form it can be seen as an extension of rejection sampling, chaining rejection sampling simulations together (each referred to as an epoch).
During the first epoch of rejection sampling, a large error threshold ε₁ is used while sampling from the prior distributions p(θ). In each subsequent epoch, the error threshold is lowered and the sampling distributions are adjusted based on the previously accepted samples. While rejection sampling tends to suffer from potentially low computational efficiency when using low error thresholds ε, this iterative shrinking of ε, paired with the adjustment of the prior distributions, can potentially obtain the approximate posterior much more quickly. We apply this sampling scheme to our Gullfaks case study to show the potential speed-ups.
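A minimal, illustrative version of this epoch chaining might look as follows. The `perturb` callable, which resamples around previously accepted particles, is a strong simplification of a full ABC-SMC sampler (which additionally weights particles); all callables are hypothetical stand-ins for the geomodeling pipeline:

```python
import random

def abc_smc(prior_sample, simulate, summary, distance, s_obs,
            epsilons, n_accept, perturb):
    """Toy ABC-SMC: chained rejection epochs with shrinking thresholds."""
    particles = None
    for eps in epsilons:                      # one rejection epoch per threshold
        accepted = []
        while len(accepted) < n_accept:
            if particles is None:
                theta = prior_sample()        # first epoch: draw from the prior
            else:
                # later epochs: perturb a particle accepted in the last epoch,
                # i.e. the "adjusted prior" mentioned in the text
                theta = perturb(random.choice(particles))
            if distance(s_obs, summary(simulate(theta))) <= eps:
                accepted.append(theta)
        particles = accepted
    return particles
```

Because each epoch proposes from the neighbourhood of already-accepted particles, far fewer proposals are wasted at the final, small ε than with plain rejection sampling from the prior.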

Topology distance functions
To use geomodel topology as a constraint for probabilistic geomodels in an ABC framework, we need a consistent way of comparing geomodel topologies, i.e. suitable distance functions. We consider here three possible comparison methods:

1. Presence or absence of defined connections: As the relational topology information is captured in adjacency graphs, the most fundamental approach is to check if two relevant nodes n_1 and n_2 (e.g. representing two regions in the model) share an edge e = (n_1, n_2) (are adjacent), and if this edge exists in both models. This is the simplest way of comparing topology graphs.

2. Jaccard index: A set-based comparison can be made using the Jaccard index (Jaccard, 1912). It compares the similarity of sets by forming the ratio of the intersection and union of two graphs A and B:

$$J(A, B) = \frac{|A \cap B|}{|A \cup B|}$$

For two topology graphs A and B, this means we calculate the ratio of the number of edges (representing connected regions) shared by both (intersection: A ∩ B) to their total combined number of edges (union: A ∪ B). This ratio can be used to efficiently identify all unique topology graphs in a given ensemble, as only an identical pair of graphs results in a Jaccard index of J(A, B) = 1. A comparison using the Jaccard index yields ratios of integers, and thus a discrete comparison. This method also allows specifying a tolerance 0 < ε < 1 for model acceptance, i.e. to accept models within the range 1 − ε ≤ J ≤ 1.

3. Contact area: Comparing the number of actual edge pixels (or voxels), representing the area of the contact A_e between two geobodies, could yield a more granular comparison that takes into account trends in contact size. The ABC error tolerance could then be used to reject geomodels where certain topological contact areas lie outside a specified range A_e − ε_low ≤ Â_e ≤ A_e + ε_high.

Quantifying Uncertainty using Shannon Entropy
Stochastic simulations yield vast ensembles of geomodel realizations, whose variability (and thus uncertainty) needs to be analyzed and understood. The uncertainty of a single geological entity (e.g. a layer or a fault) can be estimated from its frequency of occurrence in each single geomodel voxel. To analyze the uncertainty of the whole geomodel at once, more sophisticated measures can be applied: the concept of Shannon entropy H can be used in a spatial context to evaluate the uncertainty of an entire geomodel ensemble, as described by Wellmann and Regenauer-Lieb (2012). Their approach is based on concepts from information theory, derived by Shannon (1948), and further on the concept of fuzziness established
by Zadeh (1965) and De Luca and Termini (1972). If applied to a fuzzy set² f ∈ [0, 1] in a grid, the measure should only be 0 if every grid cell is either 0 or 1 everywhere (thus the grid having no uncertainty anywhere, meaning we are absolutely certain about the lithology at this position), and should have its maximum value when f = 0.5 for all grid cells (meaning all outcomes are equally likely, which represents the highest uncertainty possible: every lithology is equally likely to be present at this position). Denoting the fuzzy set f as the probability p_m of an outcome m ∈ M of a cell x, the cell entropy is

$$H(x) = -\sum_{m=1}^{M} p_m(x) \log_2 p_m(x).$$

The average model entropy H̄, normalized by the total number of cells N, can then be evaluated by

$$\bar{H} = \frac{1}{N} \sum_{x=1}^{N} H(x),$$

which makes the average model entropy equal to 0 if all cells x have only one possible outcome (no uncertainty), and reach its maximum when all outcomes are equally likely for all cells of the model (maximum uncertainty).
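Both entropy measures can be computed directly from the per-cell outcome probabilities estimated from an ensemble. The following NumPy sketch assumes a flattened probability array of shape (n_cells, n_outcomes); the function names are illustrative:

```python
import numpy as np

def cell_entropy(prob):
    """Shannon entropy H(x) per cell, in bits.

    prob : array of shape (n_cells, n_outcomes), rows summing to 1,
           i.e. p_m(x) estimated from the geomodel ensemble.
    """
    p = np.clip(prob, 1e-12, 1.0)            # avoid log2(0)
    return -(prob * np.log2(p)).sum(axis=1)  # 0 * log2(0) contributes 0

def average_entropy(prob):
    """Average model entropy H-bar: mean of H(x) over all N cells."""
    return cell_entropy(prob).mean()
```

A cell with a single certain outcome contributes 0, while a cell with two equally likely outcomes contributes the maximum of 1 bit, matching the limiting cases described above.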

Synthetic Fault Model
As a proof of concept we show how ABC can be used to incorporate geological knowledge and reasoning into an uncertain synthetic geomodel. This model represents a folded layer-cake stratigraphy that is cut by a N-S striking normal fault, representing an idealised reservoir scenario frequently encountered in the energy industry (see Fig. 4a).
The prior parametrization is schematically visualized in Figure 4b and consists of two different kinds of uncertain parameters: (i) the vertical location of the layer and fault interfaces and (ii) the lateral location of the fault interface, with the specific parametrization displayed in Table 1 in the Appendix. Two separate simulations were run for this experiment, so that we can compare how topology constrains an uncertain geomodel relative to a plain Monte Carlo simulation of uncertainties:

1. A Monte Carlo simulation of the prior parameters to evaluate the uncertainty in the resulting geomodel ensemble, consisting of 2000 generated models. This represents our 'base case' uncertainty without any constraints.
2. An Approximate Bayesian Computation using the initial model topology graph (see Fig. 4c) to represent our geological knowledge. We employ a rejection sampling scheme (ABC-REJ) with an error tolerance of ε = 0 to obtain 500 generated posterior models. Thus, the resulting posterior geomodel ensemble will contain only samples with matching topologies.

Case Study: The Gullfaks Field
To demonstrate the applicability of the method to real datasets, we apply it to a model of part of the Gullfaks field, located in the northern North Sea. The field is located in the western part of the Viking Graben, and consists of the NNE-SSW-trending, 10-25 km wide Gullfaks fault block (Fossen and Hesthammer, 1998). For a detailed overview of the regional and structural geology we refer to Fossen and Rørnes (1996); Fossen and Hesthammer (1998); Fossen et al. (2000); Schaaf and Bond (2019).
For the experiment, we constructed a base geomodel (Fig. 5a) founded in an interpretation of the training data set provided with the seismic interpretation software Petrel™. We have chosen a relatively simple subset of the interpretation, containing two faults, three horizon tops, Tarbert (red), Ness (purple) and Etive (green), and the Base Cretaceous Unconformity (BCU, yellow). The uncertain input parameters were described using probability distributions (see Table 2 in the Appendix). This parametrization was chosen due to its ease of implementation and to demonstrate how simplified uncertainty modeling can lead to highly uncertain results, especially regarding the topology graphs of the resulting geomodel ensembles in real-world geomodels. We then conducted a sensitivity study of the topological spread with respect to the geomodel resolution. This allowed us to determine the appropriate geomodel resolution necessary for our experiment. Next, we performed three separate simulations to compare different approaches:

1. A Monte Carlo simulation of the prior uncertainty for 1000 samples, to evaluate the spatial uncertainty and the topological spread of the resulting geomodel ensemble. This serves as our 'base case' uncertainty for comparison with the following two simulations.
2. An ABC-REJ simulation using the initial geomodel topology graph (see Fig. 5b) to represent our geological knowledge. We used an error threshold of ε = 0.025 for 1000 accepted posterior samples, as this threshold was small enough to constrain the posterior topology spread to the initial geomodel topology graph.
3. An ABC-SMC simulation using the same initial geomodel topology graph. We ran six SMC epochs using ε values of 0.3, 0.2, 0.1, 0.075, 0.05 and 0.025. Each epoch was run for 1000 accepted posterior samples.

In comparison, applying a single topology graph as a summary statistic to the simulation using ABC leads to significantly reduced uncertainty throughout the geomodel ensemble (see Fig. 6d). The entropy difference between the prior and the posterior geomodel ensembles shows the highest reduction in entropy for the two inner layer interfaces (see Fig. 7), and not around the fault surface. As expected, constraining the simulation using a single topology graph with an error of ε = 0 collapses the number of geomodel ensemble topologies from 100 down to 1.

Notably, the most frequent topology graph of the forward ensemble is not the initial (mean prior) topology graph. The uncertainty of an XZ-section of the forward ensemble is visualized in Figure 10a using Shannon entropy. The section illustrates the general trend of uncertainty throughout the forward simulation: we observe the highest uncertainty surrounding the two faults in the geomodel, especially around the eastern fault. The area also shows increased uncertainty due to the interaction of layer interfaces, the fault and the vertical vicinity of the BCU.
Applying the initial topology graph as a constraining summary statistic using ABC with rejection sampling (ABC-REJ), with a threshold of ε = 0.025 (chosen empirically), results in much reduced uncertainty, as exemplified by the entropy section shown in Figure 10b. At this threshold, the approximate posterior geomodel ensemble contains only the applied initial topology graph.

Figure 12a shows the number of unique topologies for the forward simulation and for each threshold of the ABC-SMC. As we iteratively lower the acceptable threshold during the SMC simulation, the simulated and accepted topologies iteratively converge towards the topology graph we used as our prior geological knowledge. The average geomodel ensemble entropy H̄ also iteratively decreases from 0.233 for the forward simulation down to 0.112 at ε = 0.025 (see Fig. 12b), showing how fixing a probabilistic geomodel to a single topology graph can significantly reduce, or rather significantly constrain, the simulated uncertainty.

Figure 9 shows how the ABC-SMC simulation iteratively affects the probability distributions of selected probabilistic geomodel parameters with decreasing thresholds ε. Each row shows the consecutive epochs of the ABC-SMC simulation and corresponds to a specific ε. Each column describes a different stochastic parameter in the stochastic model. By applying the initial topology graph of the geomodel as our summary statistic, we can directly see here how the parameter distribution for the BCU (Fig. 9a) shifts its mean µ by 47.4 m upwards and reduces its standard deviation σ by 35.8 % to accommodate our geological knowledge about the geomodel topology. We can observe this effect in the entropy section of the posterior geomodel ensemble as well (Fig. 10b). In Figure 11, we show the difference in entropy between the prior and approximate posterior geomodel ensembles shown in Figure 10, where areas with decreasing entropy values are shown in blue and increasing values in red.
We observe here how the BCU moves upward and increases the entropy there, while lowering entropy in the lithologies below. The parameter distributions for Tarbert B (Fig. 9b, red) and Etive B (Fig. 9c, green) show similar behaviour: a shifted mean and reduced standard deviation to accommodate the topology information. We see a much stronger reduction in standard deviation for the two faults (Fig. 9d, e): 80.4 % and 80.0 % for Fault A and Fault B, respectively. This is also shown as the strongest reduction in entropy in Figure 11.

The presented approach allows us to obtain geomodel ensembles that honour both our prior parameter knowledge and qualitative geological knowledge. If the applied topological information is meaningful, then the constrained stochastic geomodel ensemble will see a meaningful reduction in uncertainty, and will subsequently allow for more precise model-based estimates and decision-making (Stamm et al., 2019). More importantly, the (approximate) Bayesian approach requires the explicit statement of the geological knowledge (here the topology information) used in the probabilistic geomodel, increasing the transparency of assumptions made during the geomodeling process and any subsequent decisions.
With our approach, we directly address a scientific challenge raised in recent work by Thiele et al. (2016b): that known topological relationships are frequently not honoured during the probabilistic modeling process, thus potentially invalidating large parts of the resulting geomodel ensemble. Injecting topology information into a Bayesian approach allows us to obtain topologically valid, and hence geologically reasonable, geomodel ensembles. And although we have only used simple topology information within this study, the demonstrated ABC approach makes it easy to scale the amount of topology information used: from simple true/false comparisons of single topology graphs to the use of a whole range of topology graphs and relationships.

The work of Pakyuz-Charrier et al. (2019) shows how clustering of probabilistic geomodel topologies can be used to differentiate between different modes of topologies. Their approach compares geomodel topologies by describing them as half-vectorized adjacency matrices, resulting in binary strings that can be compared using the Hamming distance (Hamming, 1950). This could be considered as an alternative distance metric in the ABC approach presented in this work to constrain the simulated probabilistic geomodel. And while their work focuses on the analysis of existing probabilistic geomodel ensembles, our approach focuses on learning probabilistic geomodels on topology information while reducing the number of required iterations through the use of advanced sampling techniques.

As more complex geomodels strongly increase the parametrization required to accurately describe the model domain in a probabilistic framework, constraining them with topological information could help keep this parametrization at computationally feasible levels by reducing the parameter dimensionality, while still obtaining meaningful geomodels (e.g. free of modeling artefacts caused by random perturbations of the limited input data). This would not work using an inefficient rejection sampling scheme (e.g. ABC-REJ), but would rather require the use of "adaptive" sampling algorithms to efficiently explore the posterior parameter space without wasting too much computing power on rejected models (e.g. ABC-SMC). In our Gullfaks case study, we have not only shown the efficacy of the method on a real-world example, but also demonstrated the stark increase in computational efficiency when using advanced sampling techniques. The SMC sampler used in our work requires manual tuning of the error-threshold schedule; more adaptive sampling could be beneficial for the approximate inference of complex structural geomodels with topology constraints, as it has shown promise to very efficiently explore high-dimensional (read: large amounts of prior parameters) and multi-modal parameter spaces.
When using multiple topology graphs (which are discrete) in an ABC framework, the posterior parameter space may become multi-modal, which poses significant challenges for traditional Markov chain-based samplers (Feroz and Hobson, 2008).
The approach by Sadegh and Vrugt (2014) is based on combining multiple Markov chains, which natively supports parallel computing and would thus allow for high scalability of the approach to complex, computationally intensive geomodels. Still, the combination of complex geomodels and the use of discrete summary statistics poses unique challenges to sampling algorithms, requiring further research to identify algorithms that can confidently converge and minimize the high computational cost of probabilistic 3-D geomodels.
The method demonstrated the effect of topology information on geomodel uncertainty, showing how well the parametrization of a probabilistic geomodel fits our geological assumptions. The acceptance rates during sampling could potentially be used as a proxy for the validity of our assumptions: low acceptance rates could reveal a bad fit between our model and our added geological knowledge and reasoning. Using entropy-difference plots, the effect of geological assumptions on the uncertainty can be analysed spatially, e.g. how it reduces (or increases) the uncertainty around faults and other structures in the geomodel. The same reasoning could be applied to other summary statistics of the geomodel, such as the gross rock volume of a potential reservoir across all fault blocks (or compartments) of interest.

-We have shown how to use Approximate Bayesian Computation to constrain probabilistic geomodels so that the approximate posterior incorporates topology information.
-The method enables additional geological knowledge and reasoning to be explicitly encoded and incorporated into probabilistic geomodel ensembles, potentially increasing transparency of the modeling assumptions.
- As opposed to standard MC with rejection, the implemented SMC approach makes the use of ABC feasible in realistic settings. Further research into more advanced sampling schemes could provide additional speed-ups in obtaining the posterior geomodel ensemble, which is especially relevant for computationally more expensive complex geomodels with large parametrizations.