TREMOL v0.1: A stochastic rupture earthquake code based on the fiber bundle model. Application to Mexican subduction earthquakes

In general terms, earthquakes are the result of brittle failure within the heterogeneous crust of the Earth. However, the rupture process of a heterogeneous material is a complex physical problem that is difficult to model deterministically due to the numerous parameters and physical conditions, which are largely unknown. Considering the variability within the parametrization, it is necessary to analyze earthquakes by means of different approaches. Computational physics may offer alternative ways to study brittle rock failure by generating synthetic seismic data based on physical and statistical models and by the use of only a few free parameters. The Fiber Bundle Model (FBM) is a discrete-element model able to describe complex rupture processes in heterogeneous materials. In this article, we present a computer code called the stochasTic Rupture Earthquake MOdeL, TREMOL. This code is based on the principle of the FBM to investigate the rupture process of asperities on the earthquake rupture surface. In order to validate TREMOL, we first carried out a parametric study to identify the best parameter configuration while minimizing computational effort. As test cases, we applied the final configuration to 10 Mexican subduction zone earthquakes in order to compare the synthetic results from TREMOL with real data. According to our results, TREMOL is able to model the rupture of an asperity that is essentially defined by two basic dimensions: (1) the size of the fault plane, and (2) the size of the maximum asperity within the fault plane. Based on these data and a few additional parameters, TREMOL is able to generate numerous earthquakes as well as a maximum magnitude for different scenarios within a reasonable error range. The simulated earthquake magnitudes are of the same order as the real ones.
Thus, TREMOL can be used to analyze the behavior of a single asperity or a group of asperities, since TREMOL considers the maximum magnitude occurring on a fault plane as a function of the size of the asperity. TREMOL is a simple and flexible model that allows its users to investigate the role of the initial stress configuration and of the dimensions and material properties of seismic asperities. Although various assumptions and simplifications are included in the model, we show that TREMOL can be a powerful tool that delivers promising new insights into earthquake rupture processes.


Introduction
Rupture models of large earthquakes suggest significant heterogeneity in slip and moment release over the fault plane (e.g., Aochi and Ide, 2011). In order to characterize the seismic source rupture complexity, two main models have been proposed: the asperity model (Kanamori and Stewart, 1978) and the barrier model (Das and Aki, 1977). Asperities are defined as regions on the fault rupture plane that have larger slip and strength in comparison to the average values on the fault plane (Somerville et al., 1999). Asperities also have a larger stress drop than the background area (Madariaga, 1979; Das and Kostrov, 1986).
Understanding the physical features in the fault zone that produce these high-slip regions is still a challenge.
The most common method for studying seismic asperities is waveform slip inversion. However, information obtained from this method is highly variable due to the inherent nature of the inversion process (see review in Scholz (2018)). The slip inversion results depend on the type of data (such as strong ground motion, geodetic and/or seismic data at different distances) and the inversion technique used. Somerville et al. (1999) used average slip to define asperities. In their criterion, asperities include fault elements where slip is 1.5 times or more larger than the average slip. By using this criterion, it is possible to estimate the asperity area from a finite-fault slip model. Considering the stress drop for a circular crack model (∆σ) (Eshelby, 1957), the stress drop on an asperity (∆σ_a) can be estimated as ∆σ_a = (A_eff/A_a)∆σ, where A_eff and A_a are the effective rupture area and the asperity area, respectively (Madariaga, 1979). The factor A_eff/A_a (or its reciprocal) depends on different features, the most relevant one being the type of earthquake. For example, Somerville et al. (1999) found that on average the total area covered by asperities represents 22% of the total rupture area for inland crustal events. Murotani et al. (2008) showed that A_a/A_eff is approximately equal to 20% for plate-boundary events. Similarly, for subduction events, the value of A_a/A_eff is approximately equal to 25% (Somerville et al., 2002; Rodríguez-Pérez and Ottemöller, 2013). These average values were determined from individual ratios that range from 0.09 to 0.35, which means that A_a/A_eff can deviate considerably from the averages, leading to large stress contrasts (factors of 2.8 to 11) (Iwata and Asano, 2011; Murotani et al., 2008). Mai et al. (2005) proposed another definition of asperities based on the maximum displacement, D_max. They defined "large-slip" and "very-large-slip" asperities as regions where the slip D lies between 0.33 D_max ≤ D < 0.66 D_max and 0.66 D_max ≤ D, respectively. They found that approximately 28% of the rupture plane is occupied by large-slip asperities, whereas very-large-slip areas constitute only 7% of the fault plane. Furthermore, different authors agree that the rupture area of the asperity scales with the seismic magnitude (Somerville et al., 1999; Murotani et al., 2008; Iwata and Asano, 2011; Rodríguez-Pérez and Ottemöller, 2013, among others). The estimation of seismic magnitude is an essential feature for characterizing the energy of an earthquake. In fact, an accurate magnitude estimation is indispensable for both deterministic and probabilistic seismic hazard assessments.
Earthquakes are the most relevant example of self-organized criticality (SOC) (Bak and Tang, 1989; Olami et al., 1992). The concept of SOC can be visualized by imagining a natural system in a marginally stable state, where phases of instability may occur which place the system back into a meta-stable state (Barriere and Turcotte, 1994). A popular model representing this process was proposed by Bak and Tang (1989) and is well known as the "sand pile model". Several models based on the SOC concept have been proposed to explain the statistical behavior of earthquake patterns, e.g. Caruso et al. (2007), Barriere and Turcotte (1994), Olami et al. (1992), and Bak and Tang (1989). The failure properties of solids have been modeled by simple stochastic discrete models based on the SOC framework. The Fiber Bundle Model (FBM) is one of those models and has been used to reproduce many basic properties of the failure dynamics within solids (Chakrabarti and Benguigui, 1997). Additionally, the FBM has been successfully applied to studies of brittle failure of rocks (Hansen et al., 2015; Monterrubio et al., 2015; Turcotte and Glasscoe, 2004; Moreno et al., 2001).

The Fiber Bundle Model
The FBM is a mathematical tool to study the rupture process of heterogeneous materials, originally introduced by Peirce (1926). Over the years the FBM has been widely used to study failure in a wide range of heterogeneous materials (Hansen et al., 2015; Pradhan and Chakrabarti, 2003). Regardless of the specific FBM type, there are three basic assumptions that all FBMs have in common (Daniels, 1945; Andersen et al., 1997; Kloster et al., 1997; Vázquez-Prada et al., 1999; Phoenix and Beyerlein, 2000; Pradhan et al., 2010; Monterrubio-Velasco et al., 2017):

1. A discrete set of cells (or fibers) defined on a d-dimensional lattice. In seismology, the bundle can represent a fault system or a seismic source, where each fiber is a section of the fault plane (Moreno et al., 2001) or an individual fault (Lee and Sornette, 2000).

2. A probability distribution that defines the inner properties of each cell (fiber), such as lifetime or stress distribution.

3. A load-transfer rule which determines how the load is distributed from a ruptured cell to its neighbor cells. The most common load-transfer rules are: (a) Equal Load Sharing (ELS), in which the released load is equally shared among all other cells of the bundle; and (b) Local Load Sharing (LLS), where the transferred load is only shared with the nearest neighbors.
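The two load-transfer rules can be sketched in a few lines of NumPy. This is an illustrative sketch, not the TREMOL implementation: the function names and the simple boundary handling are our own assumptions.

```python
import numpy as np

def redistribute_els(loads, failed):
    """Equal Load Sharing: spread the failed fiber's load over all other fibers."""
    share = loads[failed] / (loads.size - 1)
    loads += share            # every fiber receives an equal share
    loads[failed] = 0.0       # the failed fiber carries no load afterwards
    return loads

def redistribute_lls(grid, i, j):
    """Local Load Sharing: pass the failed cell's load only to its nearest neighbors."""
    neighbors = [(i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)]
    neighbors = [(a, b) for a, b in neighbors
                 if 0 <= a < grid.shape[0] and 0 <= b < grid.shape[1]]
    share = grid[i, j] / len(neighbors)
    for a, b in neighbors:
        grid[a, b] += share
    grid[i, j] = 0.0
    return grid
```

With ELS the stress field stays homogeneous on average, whereas LLS concentrates load around the rupture front, which is why LLS-type rules are used to model localized failure.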
TREMOL is based on the probabilistic formulation of the FBM, in which the failure rate of a set of fibers is given by Eq. 1 (Gómez et al., 1998; Moral et al., 2001):

dU(t)/dt = −K(σ(t)) U(t),   (1)

where U(t) is the number of fibers that remain unbroken at time t. The hazard rate K(σ(t)) is a function of the fiber stress σ(t). Experimental results show that the hazard rate of materials under constant load can be well described by the Weibull probability distribution function. This behavior can be summarized in Eq. 2 (Coleman, 1958; Phoenix, 1978; Phoenix and Tierney, 1983; Vázquez-Prada et al., 1999; Moreno et al., 2001; Biswas et al., 2015):

K(σ(t)) = ν_0 (σ(t)/σ_0)^ρ,   (2)

where ν_0 is the reference hazard rate and σ_0 the reference stress. The Weibull exponent, ρ, quantifies the non-linearity (Yewande et al., 2003). If σ_0 = ν_0 = 1, the expression in Eq. 2 simplifies to K(σ(t)) = σ(t)^ρ. From the probabilistic formulation, two equations arise (Eq. 3 and Eq. 4), which are applied in our algorithm to define the system dynamics. The details of these two equations are given below.
a) Gómez et al. (1998) and Moral et al. (2001) developed a relation to compute the expected rupture time [dimensionless] of the fibers following Eqs. 1 and 2. This expected rupture-time interval is defined as δ_k (Eq. 3) and can be applied to any load-transfer rule:

δ_k = 1 / Σ_{i=1}^{N} σ_i^ρ,   (3)

where N is the total number of cells and σ_i is the load in the i-th cell. The dimensionless cumulative time, T, is the sum of the δ_k.
b) The failure probability F_i of each cell is a function of its load σ_i (Moreno et al., 2001):

F_i = σ_i^ρ / Σ_{j=1}^{N} σ_j^ρ.   (4)

The dynamic values δ_k and F_i are updated at each time step due to rupture processes and the resulting load transfer.
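With the simplification σ_0 = ν_0 = 1, Eqs. 2, 3 and 4 reduce to a few lines of code. The following is a minimal sketch under that assumption; the function name and use of NumPy are ours, not taken from the TREMOL source.

```python
import numpy as np

def fbm_step(sigma, rho=2.0, rng=None):
    """One step of the probabilistic FBM with sigma_0 = nu_0 = 1.

    Returns the expected rupture-time interval delta_k (Eq. 3) and the
    index of the cell drawn to fail with probability F_i (Eq. 4).
    """
    if rng is None:
        rng = np.random.default_rng()
    hazard = sigma ** rho        # per-cell hazard K(sigma_i) = sigma_i^rho (Eq. 2)
    total = hazard.sum()
    delta_k = 1.0 / total        # expected waiting time (Eq. 3)
    F = hazard / total           # failure probability of each cell (Eq. 4)
    failed = int(rng.choice(sigma.size, p=F))
    return delta_k, failed
```

Summing the successive δ_k values gives the dimensionless cumulative time T used later in the catalog.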
A suitable FBM algorithm to simulate earthquakes should consider a complex stress field, the physical properties of the materials, stress transfer between faults (at short and long distances), and dissipative effects. Using the FBM, we assume that earthquakes can be considered as an analogy to the characteristic brittle rupture of a heterogeneous material (Kun et al., 2006a, b).
The previous basic concepts of the FBM were considered in the development of the TREMOL code, with the purpose of modeling the behavior of seismic asperities. In the next section, we describe the details of this code.

The TREMOL code
Since the main objective of TREMOL is to simulate the rupture process of seismic asperities based on the principles of the FBM, we model two materials with different mechanical properties interacting with each other.
To introduce the features of TREMOL, we describe the three main stages of its application.

Pre-processing
In this stage we assign the following input data: the size of the fault plane, the size of the maximum asperity within the fault plane, and further parameters (load-transfer value π, strength value γ, initial load values σ, and load threshold σ_th).

Processing
TREMOL uses the data of the pre-processing stage to carry out the FBM algorithm: applying Eqs. 3 and 4, the rupture process on the studied fault plane is computed. TREMOL also uses the asperity size of each earthquake to compute the magnitude of each synthetic earthquake.

Post-processing
In this stage, TREMOL summarizes the results computed in the processing stage and computes the equivalent rupture area [km²]. In general, TREMOL generates a synthetic earthquake catalogue, which consists of the following: the total number of earthquakes that can occur on the studied fault plane, the asperity size of each earthquake, and the magnitude of each earthquake.
In the next sections we describe each of the three main stages of TREMOL in more detail. An overview of the entire simulation process is shown in Fig. 1.

Pre-processing: Input data and initial conditions
In TREMOL, a fault plane is modeled as a rectangle (Ω), which is divided into N_x × N_y cells. Each cell is defined by its position (i, j), where i ∈ [1, ..., N_x] and j ∈ [1, ..., N_y]. On the fault plane Ω, earthquakes of different magnitudes can occur.
Additionally, it is possible to assign to each fault plane an asperity region (R_Asp).
To define each fault plane (Ω) and its respective asperity region (R_Asp), it is necessary to assign specific properties to their cells. In particular, three properties (or values) must be defined for each cell of Ω and R_Asp: a load σ(i, j), a strength value γ(i, j), and a load-transfer value π(i, j).
- The load σ(i, j). At the beginning of each realization, TREMOL randomly assigns a load value σ(i, j) to each cell of Ω using a uniform distribution function (0 < σ(i, j) < 1). Without losing generality, this assumption simulates a heterogeneous stress field. Moreover, a load threshold σ_th = 1 is defined to identify the amount of load required to break a cell (Moreno et al., 2001). At the end of this step, every cell within Ω has a load value between 0 and 1.

- The strength value γ(i, j). This parameter represents an analogy to the concept of hardness or strength. In our model, the algorithm will find it difficult to break a cell if this cell has a value γ > 1, since the strength threshold before failure is set as γ_th = 1 (see a detailed explanation in Subsection 3.2). As a result, a strength γ > 1 may simulate a hard material which needs to be weakened before it can fail. This process can be regarded as a simile of material fatigue or creep failure. The strength value for all cells in R_Asp, namely γ_Asp, is chosen in a discrete interval γ_Asp = γ_Ref ± U_D, where U_D is a uniformly distributed integer and γ_Ref is an assigned reference value.
- The load-transfer value π(i, j). This parameter represents the percentage of load that can be distributed from a ruptured cell to its neighbors. In this study, the load in the ruptured cell is called σ_F(i, j). TREMOL uses a Local Load Sharing (LLS) rule considering the eight nearest neighbors. Following previous studies, such as Monterrubio-Velasco et al. (2017), TREMOL redistributes the majority of the load, σ_O (Eq. 5), to the four orthogonal neighbor cells, where π_F is the load-transfer value of the failed cell. The remaining load, σ_D (Eq. 6), is transferred to the four diagonal neighbor cells. Afterwards, the load of the broken cell drops to zero, σ = 0. Broken asperity cells cannot receive any new load. Fig. 2 is a schematic representation of the load-distribution process from the failed cell, σ_F(i, j) (in red), to its nearest neighbors.
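This redistribution step can be sketched as follows. The 0.75 orthogonal fraction below is a hypothetical placeholder, since the actual split between σ_O and σ_D is fixed by Eqs. 5 and 6; the fraction (1 − π_F) of the load is dissipated.

```python
import numpy as np

ORTHO = [(-1, 0), (1, 0), (0, -1), (0, 1)]      # four orthogonal neighbors
DIAG = [(-1, -1), (-1, 1), (1, -1), (1, 1)]     # four diagonal neighbors

def transfer_load(sigma, i, j, pi_f, f_ortho=0.75):
    """Redistribute the load of failed cell (i, j) to its 8 neighbors.

    pi_f:    load-transfer value of the failed cell (1 - pi_f is dissipated).
    f_ortho: fraction of the transferred load sent to the orthogonal
             neighbors (hypothetical value; Eqs. 5-6 define the real split).
    """
    total = pi_f * sigma[i, j]
    for offsets, frac in ((ORTHO, f_ortho), (DIAG, 1.0 - f_ortho)):
        cells = [(i + r, j + s) for r, s in offsets
                 if 0 <= i + r < sigma.shape[0] and 0 <= j + s < sigma.shape[1]]
        for a, b in cells:
            sigma[a, b] += frac * total / len(cells)
    sigma[i, j] = 0.0   # the broken cell carries no load afterwards
    return sigma
```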

In order to differentiate the parameters of the asperity from those of the rest of the fault plane Ω, we define π_Asp(i, j) and γ_Asp(i, j), which refer only to the cells within R_Asp. For the rest of the fault plane Ω, we use the parameters π(i, j) and γ(i, j) defined previously. Fig. 3(a) shows an example of the randomly distributed initial load throughout the fault plane. Fig. 3(b) displays an example of the difference in strength between the asperity and the rest of the fault plane.
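The initial-condition setup described in this subsection can be sketched as a small helper. This is a hypothetical sketch assuming NumPy and a rectangular asperity region given as slices; the names are ours, and γ_Asp is drawn as γ_Ref ± 1 as in the parameter notation used later (e.g. γ_asp = 5 ± 1).

```python
import numpy as np

def init_domain(nx, ny, asp_region, gamma_ref=5, seed=0):
    """Initialize loads and strengths for the fault plane Omega.

    asp_region: (row_slice, col_slice) marking the asperity region R_Asp.
    Background cells get strength gamma = 1; asperity cells get
    gamma_ref plus a uniform integer in {-1, 0, 1}.
    """
    rng = np.random.default_rng(seed)
    sigma = rng.uniform(0.0, 1.0, size=(ny, nx))   # heterogeneous initial stress
    gamma = np.ones((ny, nx), dtype=int)           # background strength
    rows, cols = asp_region
    gamma[rows, cols] = gamma_ref + rng.integers(-1, 2,
                                                 size=gamma[rows, cols].shape)
    return sigma, gamma
```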

Main computational processes
Once the initial information for the entire domain Ω is defined, the core algorithm of TREMOL realizes a transfer, accumulation and rupture process. While the cells interact with each other, there are two basic failure processes, depending on the load of a cell in comparison with the threshold load (Moreno et al., 2001):

• Normal event: If all cells within the system have a load σ(i, j) < σ_th, a normal event is generated, and the cell that will fail is randomly chosen considering the individual failure probability of each cell, F(i, j) (Eq. 4).

• Avalanche event: If one or more cells have a load value σ(i, j) ≥ σ_th, an avalanche event is generated, and the cell that fails is the one with the greatest σ(i, j) value.
Due to the integrated strength property, some extra rules for rupture are necessary. The requirement for failure is γ(i, j) = 1. If a cell with γ(i, j) > 1 is chosen instead, its strength is reduced by one unit. This strength condition enables us to simulate a material-weakening process during the load transfer. Additionally, it offers the possibility of producing large local load accumulations, which are more likely to generate larger ruptures.

When a cell within R_Asp breaks, it becomes inactive until the end of the simulation, which means it cannot receive any further load. The large load concentration within the asperity usually produces a very short time interval (Eq. 3), with the result that there is physically not enough time available to re-load the stress on an asperity right after its rupture. In contrast, a cell outside of the asperity region remains active after its failure, but its load drops to zero. The simulation ends when all the cells within the asperity have become inactive.

Output data and post processing
After every execution, TREMOL outputs a catalog in which the position (x, y) of the failed cell, the rupture time (Eq. 12), the avalanche-event or normal-event identification, the mean load, and many other values are saved for each time step. We cluster avalanche events considering a time and space criterion. Let a_{i−1} = (x_{i−1}, y_{i−1}) and a_i = (x_i, y_i) be two consecutive avalanche events generated in chronological order. If their Euclidean distance ∆r_i ≤ r_th (where r_th = 2√2), then a_i and a_{i−1} belong to the same cluster. This clustering algorithm is applied to all generated avalanche events. Lastly, we extract a new catalog that contains the size of each cluster, the position of the first element of each cluster (related to the nucleation point), and the time when it was initiated. This database is our simulated seismic catalog. Note that the cluster size is given in non-dimensional units. However, we use an equivalence between Ω and an effective area A_eff in order to obtain a physical rupture area, so that each cell represents an area in km². This step is necessary in order to compute an equivalent magnitude, which is comparable with real earthquake magnitudes. For this purpose, we use three magnitude-area relations, Eqs. 7, 8 and 9, obtained by Rodríguez-Pérez and Ottemöller (2013) for Mexican subduction earthquakes, where A_a is the asperity area [km²]. Eq. 7 was obtained from asperities defined by the average-displacement criterion (Somerville et al., 1999). Eqs. 8 and 9 were computed from asperities defined by the maximum-displacement criterion for a large asperity and a very large asperity, respectively (Mai et al., 2005).
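The space-time clustering criterion can be sketched as follows (illustrative; it assumes the events are already in chronological order and that only the distance to the immediately preceding event matters):

```python
import numpy as np

def cluster_avalanches(events, r_th=2 * np.sqrt(2)):
    """Group chronologically ordered avalanche events (x, y) into clusters.

    A consecutive event closer than r_th to the previous one joins the
    current cluster; otherwise a new cluster (new earthquake) starts.
    """
    clusters = [[events[0]]]
    for prev, cur in zip(events, events[1:]):
        dr = np.hypot(cur[0] - prev[0], cur[1] - prev[1])
        if dr <= r_th:
            clusters[-1].append(cur)
        else:
            clusters.append([cur])
    return clusters
```

The length of each cluster is the non-dimensional rupture size that is later converted to km² through the Ω-to-A_eff equivalence.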
Furthermore, we define the inter-event rate ∆ν_k = ∆r_k/∆δ_k as an analogy to the rupture velocity (Eq. 10), where ∆r_k is the inter-event Euclidean distance between the k-th event located at (x_k, y_k) and the (k−1)-th event at (x_{k−1}, y_{k−1}).
The inter-event time ∆δ_k is computed from the expected rupture-time intervals δ_k given by Eq. 3 (Eq. 11). Figure 4a shows an example of the final spatial distribution of rupture clusters for a particular case. Each cluster is indicated by the same color and represents a simulated earthquake. Figure 4b shows the related inter-event rate, which largely increases when the asperity rupture occurs. In the post-processing step, we additionally computed the rupture duration of the largest simulated earthquake, D_Aval, using the rupture velocity and the effective fault dimensions obtained from finite-fault models (Table 5).
For this we used Eq. 13 (Geller, 1976). Using these considerations, we can assign a physical unit of time [seconds] to the largest simulated earthquake (Eq. 14). The flowchart in Fig. 1 and the pseudo-codes 1, 2, and 3 summarize the algorithm of TREMOL. A summary of all parameters required to execute the TREMOL code is given in Table 1.
Algorithm 1: Basic FBM. Main algorithm of TREMOL, which applies the initial-conditions procedure (Algorithm 2) and the rupture procedure (Algorithm 3), respectively.

Algorithm 3: Failure of a cell. The TREMOL procedure that computes the failure of cell (p, q), where σ_O and σ_D follow Eqs. 5 and 6:

procedure rupture(p, q)
  σ_F ← π(p, q) · σ(p, q)
  for (r, s) ∈ {(1, 0), (0, 1), (−1, 0), (0, −1)} do
    σ(p + r, q + s) ← σ(p + r, q + s) + σ_O
  for (r, s) ∈ {(1, 1), (1, −1), (−1, 1), (−1, −1)} do
    σ(p + r, q + s) ← σ(p + r, q + s) + σ_D
  σ(p, q) ← 0


Methods: Parametric study

We performed a sensitivity analysis of the three asperity parameters (γ_asp, S_a−Asp, and π_asp) in order to identify the combination which produces the best approximation to real data, such as the maximum rupture area, A_syn, and its related magnitude, M_syn. In order to investigate the influence of every single parameter, we statistically determined how the results vary with different parameter configurations.

Methods: Percentage of transferred load, π_asp
To explore the influence of π_asp, we analyzed 12 values (0.67 ≤ π_asp ≤ 1.0, with increments of 0.03). The minimum, π_asp = 0.67, assigns the same value to an asperity cell as to a background cell. On the other hand, π_asp = 1.0 means that the load in a failed asperity cell is fully transmitted to its neighbors (ideal case without dissipative effects). Note that π_asp = 1 does not represent real physical conditions, since dissipative effects are ignored completely. Conversely, if π_asp = 0.67 (case 1), the asperity cells transfer only as much load as the cells in the background. The objective is to generate a load concentration within the asperity, which corresponds to the largest magnitude. If the asperity cells transfer as much load as the background cells, no such load concentration can be obtained. As a result, we expect the mean A_syn for π_asp = 0.67 (case 1) to be the lowest value of all cases.
The input data of this experiment are summarized in Table 2. We assigned a strength of γ_asp = 4 ± 1 to the asperity (R_Asp), and a value of γ = 1 to the rest of the fault plane. These values were chosen after experimental trials, which showed that the difference is large enough to simulate a significant strength contrast with low computational effort. To define the effective area and the asperity size, we chose the values computed by Rodríguez-Pérez and Ottemöller (2013) for the earthquake of 20 March 2012, Mw = 7.4: A_eff = 2944.2 km² and S_a = 0.26. We defined the size of Ω as N_cell = 10000 cells in total.
We carried out 50 simulations per π_asp configuration. In addition, we modified the random seed to obtain different initial load configurations, σ(i, j), to ensure that the results over π_asp are independent of the initial load conditions.

Methods: Strength parameter, γ_asp
To perform the parametric study of γ_asp, we configured two asperities embedded in Ω. In this experiment, the total size is Ω = 200 × 100 cells. We located each asperity in the center of one of two sub-domains Ω′ of 100 × 100 cells. Fig. 5 shows a schematic representation of the domains Ω and Ω′ used in this experiment.
The separation between both asperities remains constant. We chose a value of π_asp = 0.90 to produce a large contrast between the asperity and the rest of the fault plane (π = 0.67) (Monterrubio-Velasco et al., 2017). In order to analyze the influence of γ_asp (and S_a−Asp), the asperity on the right-hand side (Asp. 2) takes varying strength values, while the strength of the left asperity (Asp. 1) remains constant. Finally, the maximum ruptured area and the magnitude generated in each sub-domain are computed. In order to explore how the system behaves when γ_asp changes, we analyzed 6 different values, γ_asp = [2 ± 1, 5 ± 1, 7 ± 1, 9 ± 1, 11 ± 1, 14 ± 1] (cases 13 to 18). The input data used in this test are summarized in Table 3. We defined the same asperity size for both asperities, S_a1 = S_a2 = 0.22. In Fig. 6, we show an example of the spatial configuration of this analysis. The background strength is constant, γ_bkg = 1, and the color bar indicates the γ(i, j) values.

Methods: Asperity size, S_a−Asp
The modification of the S_a−Asp parameter was based on the same configuration as described in the previous section. We analyzed 6 different values of the asperity size S_a (cases 19 to 24). In Fig. 7, we show an example of the asperity configuration, where the left asperity (Asp. 1) has a constant size S_a1, while the size of the right one (Asp. 2) increases. In this experiment, we considered γ_asp = 5 ± 1 and π_asp = 0.90. The main data related to these 6 cases are summarized in Table 4.

Methods: Model validation
We evaluated the capability of the model to reproduce the characteristics of 10 Mexican subduction earthquakes (8 shallow-thrust subduction events, ST, and 2 intraslab subduction events, IN). The input data, namely the effective area A_eff and the asperity size ratio S_a, are obtained from waveform slip inversions and seismic source studies (A_eff = L_eff × W_eff, and S_a = A_a/A_eff), compiled in the database of Mexican earthquake source parameters by Rodríguez-Pérez et al. (2018). This database includes results from two different methodologies: spectral analysis and finite-fault models. From the latter, the database provides estimations of effective fault dimensions, rupture velocity, source duration, number of asperities, and stress and radiated seismic energy on the asperities and background areas. Slip solutions were obtained with teleseismic data for events with 6.4 < Mw < 8.2.
The number of cells was N_cell = 10000 for a domain Ω of 100 × 100 cells. We modeled the size of Ω proportionally to the sizes of L_eff and W_eff for each scenario, according to Eqs. 15 and 16, where N_x and N_y are the number of cells along the x-axis and the y-axis, respectively. As an example, Fig. 8 presents the size and aspect ratio of Ev. 3 and Ev. 5 (Table 5).
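A plausible discretization satisfying these two constraints (N_x · N_y ≈ N_cell and N_x/N_y ≈ L_eff/W_eff) is sketched below. Since Eqs. 15 and 16 are not reproduced in the text, the rounding scheme here is our assumption, not the paper's exact formula.

```python
import math

def grid_dimensions(l_eff, w_eff, n_cell=10000):
    """Choose Nx, Ny so that Nx*Ny is close to n_cell and the cell grid
    has approximately the fault's aspect ratio L_eff/W_eff."""
    ny = round(math.sqrt(n_cell * w_eff / l_eff))   # Ny from the aspect-ratio constraint
    nx = round(n_cell / ny)                         # Nx from the total-cell constraint
    return nx, ny
```

For a square fault (L_eff = W_eff) this recovers the 100 × 100 domain used above; an elongated fault such as the Guerrero-gap geometry (230 km × 80 km) yields a correspondingly elongated grid.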
In some cases, the number of asperities computed by Rodríguez-Pérez et al. (2018) is greater than 1. However, as a first approximation, we simplified the problem by modeling only one asperity per earthquake.
In order to study how the asperity size S_a affects the maximum ruptured area, we randomly modified the size according to Eq. 17, where 0 < α < 1 is a random value. We introduce this assumption because we want to avoid a preconceived final size. In future trials, it may be useful to consider the inner uncertainties of finite-fault models. The asperity aspect ratio follows the same proportion as the effective area (Fig. 8). We carried out 50 realizations per event (Table 5), changing the size S_a−Asp in each one (Eq. 17).

Methods: Modelling the rupture area and magnitude of 10 subduction earthquakes
In this case, the number of cells is N_cell = 10000 (100 × 100). We carried out 50 executions per event, and in each execution we randomly changed the size S_a−Asp following Eqs. 15, 16, and 17. The input data of the 10 modeled earthquakes of Table 5 are summarized in Table 6.
Table 5. The finite-fault source parameters used in this work. W_eff and L_eff are the effective fault dimensions (width and length, respectively, according to Mai and Beroza (2000)). A_real is the asperity area, and A_eff is the effective rupture area (W_eff × L_eff). Duration is the rupture duration computed from the slip inversion, Na is the number of asperities, and Vr is the rupture velocity. Ratio is the aspect ratio of the fault area. The type of the event is labeled ST for shallow-thrust and IN for intraslab events.

In the database of Rodríguez-Pérez et al. (2018) there are, for some events, several solutions, which allows us to analyze the variability in the estimated source parameters (see parameters of events 7, 7a, 7b in Table 5). In this study, we applied TREMOL to study how the ruptured area and the assessed magnitude change when we use different input data to model the same earthquake. The data related to these three events are summarized in Table 7.
Methods: Assessing a future earthquake in the Guerrero seismic gap: rupture area and magnitude

We apply our method to the estimation of possible future earthquakes, in particular to compute the expected magnitude, since TREMOL may offer new insights for future hazard assessments. We carried out a statistical test to assess the size of an earthquake that may occur in the Guerrero seismic gap (GG) region.
As input parameters, we used the area found by Singh and Mortera (1991): L_eff = 230 km × W_eff = 80 km. We defined the asperity size ratio S_a = 0.25, as proposed by Somerville et al. (2002) for regular subduction-zone events based on average slip. Singh and Mortera (1991), Astiz et al. (1987), and Astiz and Kanamori (1984) proposed a probable maximum magnitude for this region of Mw ≈ 8.1-8.4. Therefore, using the effective rupture area (L_eff, W_eff and S_a), we executed the algorithm as in the previous sections. The input data related to this analysis are summarized in Table 8. Likewise, we want to estimate the duration D_aval of the event. To compute this value, we used the mean of the V_r values from Table 5.


Results: Percentage of transferred load, π_asp

Fig. 9 shows the mean (black dots) of the maximum ruptured area A_syn, including the upper and lower limits of the standard deviations (blue squares), after the execution of all 12 cases (Table 2) with 50 realizations each. The value of A_syn is related to the largest cluster produced in Ω. Two dominant tendencies are identifiable:

1. If π_asp < 0.76, the mean of the maximum ruptured area increases continuously by more than one order of magnitude, from 15 to ≈ 500 km², i.e. an increase of 3333 %. The standard deviation of A_syn for π_asp = 0.7 is ≈ 35 km² (100 % error).

2. If π_asp ≥ 0.76 (cases 4 to 12), the A_syn values remain essentially constant (≈ 500 km²). Likewise, the upper and lower limits vary around the same order. The standard deviation in this interval is ≈ 100 km² (20 % error).
Using the mean of A_syn obtained in each case, we computed the corresponding magnitude. The results are given as the mean and standard deviation of the maximum magnitude in Fig. 10 for all twelve cases (see Table 2). Because ruptured area and magnitude are correlated (see Eqs. 7, 8 and 9), the pattern in Fig. 10 is very similar to the one in Fig. 9:
1. If π_asp ≥ 0.76, the mean of the maximum magnitude remains essentially constant.

2. If 0.70 ≤ π_asp < 0.76, a transition with an increasing trend and the largest standard deviation is visible.

3. If π_asp = 0.67 (case 1), the mean of the maximum magnitude is the lowest.
In this experiment, the initial value S_a = 0.26 remains constant, i.e. the asperity size does not increase randomly (red line in Fig. 11). After executing all configurations, we computed the ratio S_a−Asp = A_syn/A_eff of the largest ruptured area. We show the mean and standard deviation of this ratio in Fig. 11. We observe that the ratio S_a−Asp is always ≈ 0.10 lower than S_a.

Results: Strength parameter, γ_asp
For each value of γ_asp (Table 3), we performed 50 executions while changing the initial strength parameter of the asperity γ_asp (Fig. 3b). Likewise, we computed the maximum magnitude obtained for each sub-domain. Fig. 12 indicates the mean and standard deviation of the computed maximum magnitude as a function of γ_asp. The upper subplot (blue markers) shows the results for the left (constant) asperity (Asp. 1). The lower subplot (red markers) shows the results for the right (variable) asperity (Asp. 2).
We observe in Fig. 12 that the mean magnitude remains essentially independent for γ asp > 5±1.Additionally, the error bars slightly decrease while γ asp increases.Another observation is that when γ asp = 2 ± 1 the average of the maximum magnitude is the lowest in both asperities.Moreover, there is a transition zone for 2 ± 1 ≤ γ asp ≤ 5 ± 1.We observed that γ asp > 5 ± 1 has a limited influence on the results of the maximum magnitude.The maximum magnitude of γ asp = 14 ± 1 is approx.0.3 magnitudes larger than the one of γ asp = 5 ± 1. Based on the observations described in the previous section, we used γ asp = 5 ± 1, and π asp = 0.90 in order to validate the model.We chose γ asp = 5±1 because it represents the strength interval of 5±1 ≤ γ asp ≤ 14±1 with less computational costs.
We chose π asp = 0.90 because it represents the relatively constant magnitude over the parameter range 0.76 ≤ π asp ≤ 0.90. In addition, π asp = 0.90 enables us to obtain the best approximation to the ratios S a−Asp. Both parameter choices ensure an appropriate reproduction of the asperity rupture area and the maximum magnitude at minimal computational cost.
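The parametric procedure described above, running 50 stochastic realizations per parameter value and aggregating the maximum magnitude, can be sketched as follows. This is a hedged illustration: `run_tremol_stub` merely mimics the stochastic output of one TREMOL realization (the base magnitudes and noise level are invented), whereas a real run executes the full FBM rupture simulation.

```python
import random
import statistics

def run_tremol_stub(gamma_asp, rng):
    """Placeholder for one TREMOL realization: returns the magnitude of
    the largest synthetic event.  The crude strength-dependent transition
    and the noise level are illustrative assumptions only."""
    base = 6.8 if gamma_asp < 3 else 7.3
    return base + rng.gauss(0.0, 0.15)

def sweep(gammas, n_runs=50, seed=1):
    """Mean and sample standard deviation of the maximum magnitude for
    each value of the asperity strength parameter."""
    rng = random.Random(seed)
    out = {}
    for g in gammas:
        mags = [run_tremol_stub(g, rng) for _ in range(n_runs)]
        out[g] = (statistics.mean(mags), statistics.stdev(mags))
    return out

# Sweep over a few strength values, as in the parametric study
results = sweep([2, 5, 8, 11, 14])
```

The means and standard deviations collected in `results` correspond to the markers and error bars of a plot such as Fig. 12.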
Fig. 14 depicts a comparison between the real asperity area A real (Table 5) and the area of the largest simulated earthquake, A syn. We plot the mean (blue dots), the minimum (green triangles), and the maximum (red triangles) of all 50 realizations for each real earthquake event. Black squares represent the real asperity size. The results in Fig. 14 point out that A syn is almost identical to A real from Table 5 for the majority of earthquakes. Only three events show significant differences between the synthetic and real maximum rupture areas. Even in these cases, however, A real is located within the upper and lower limits of A syn.

Figure 14 .
Figure 14. Comparison between the real asperity area A real and the synthetic values (mean and standard deviation) of the largest ruptured event A syn. Black squares depict the real asperity area from Table 5, whereas blue circles indicate the mean area of 50 executions. Red and green triangles represent the maximum and minimum A syn.
Fig. 15 shows the statistical results of the synthetic maximum magnitude, M syn, determined for all 10 events. The real magnitudes from Table 5 are given as red markers. Black circles indicate the mean of M syn over 50 realizations using Eq. 7, whereas blue and green markers indicate the magnitudes following Eqs. 8 and 9, respectively. The error bars represent the standard deviation. We observed that the statistical parameters computed with TREMOL fit the magnitudes shown in Table 5. However, the computed magnitudes depend on the scale relation employed (Eqs. 7, 8, and 9). Fig. 16 includes the mean of the three scale relations. Overall, the mean magnitude M syn and the expected magnitude M w show similar values.
Given that the difference between the mean and the expected value (Table 5) is lower than 0.5 magnitude units (∆M w < 0.5) for all 10 events, we can affirm that the results of assessing the magnitude by means of TREMOL using a randomly modified asperity size, S a−Asp (Eq. 17), are reasonable. Fig. 17 shows the real ratio size S a from Table 5 (black squares) in comparison to the mean of the largest simulated earthquake, S syn (blue squares). The standard deviation is represented as error bars. The results indicate that in most cases the computed S syn range fits the expected S a well. Note that for Events 3, 7, and 8 the mean values are lower than the reported S a, while S syn is overestimated for Events 2, 5 and 9. For Events 1, 4, 6 and 10 the estimated value of S syn coincides with the expected one. However, the error bars enclose the expected values in all cases (Fig. 17). Moreover, if we compare Fig. 17 with Fig. 11, we observe that the employed strategy of randomly increasing the asperity size (using Eq. 17) generates rupture areas similar to the ones proposed by Rodríguez-Pérez et al. (2018).
We also computed an equivalent rupture duration, D Aval, using the equation proposed by Geller (1976) to calculate the rise time (Eqs. 13 and 14). Rodríguez-Pérez and Ottemöller (2013) determined the rupture velocity V r (Ev. 3-8), which is a useful parameter to validate our results. Fig. 18 shows the results of this analysis. In red we plot the values of V r calculated by Rodríguez-Pérez and Ottemöller (2013), and in blue the D Aval based on Eq. 13 with V r provided by Table 5. The equivalent D Aval using Eqs. 13 and 14 is printed in black. In cases where we have the reference values V r computed by Rodríguez-Pérez and Ottemöller (2013), we observe that the reference values are always larger than the modelled D Aval values. However, it is worth noting that V r is a mean value that considers the rupture of the whole effective area (A eff), whereas for the simulated rupture duration, D Aval, we only consider the rupture length of the largest rupture cluster, A syn. As a result, smaller values than those proposed by Rodríguez-Pérez and Ottemöller (2013) are expected. Nevertheless, the rupture duration shows a clear dependency on the magnitude.

In cases where several effective rupture areas were proposed by different studies (see Table 5), it is possible to assess which set of parameters is better suited to simulate an event by means of TREMOL. We tested TREMOL using three different combinations of L eff, W eff and S a according to the results for Ev. 7 in Table 5. A comparison of these three combinations is visualized in Fig. 19: (a) shows the comparison of the ruptured areas, A real and A syn; (b) shows the mean and standard deviation of the maximum magnitude, M syn, in comparison to the reference magnitude; (c) shows the ratio S syn of the simulated events compared to S a of the real scenarios. Although the three combinations yield similar results, the closest approximation between real and synthetic data is obtained with the data by Rodríguez-Pérez and Ottemöller (2013) (Ev. 7).

Therefore, the range of π asp is both crucial and sensitive: a parameter increase of only 15 % affects the size of the biggest earthquake within the system by 3333 %. Considering the large standard deviation of ≈ 35 km 2 (100 % error), a parameter configuration based on π asp < 0.76 would be unsuitable for further simulations due to its unstable properties. TREMOL offers insights into earthquake physics and hazard assessment from a completely different perspective. The development of TREMOL and similar models should therefore be strongly encouraged and supported.
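The duration estimate described above can be illustrated with a simplified sketch. Note that this is a stand-in for the paper's Eqs. 13-14 (Geller, 1976), which are not reproduced in this section; here the characteristic rupture length is simply taken as the square root of the largest cluster area.

```python
import math

def rupture_duration(a_syn_km2, v_r_km_s):
    """Simplified rupture-duration estimate D_Aval: characteristic length
    of the largest ruptured cluster divided by the rupture velocity.
    Stand-in for Eqs. 13-14 of the paper, which are not shown here."""
    length_km = math.sqrt(a_syn_km2)   # square-root scaling of the cluster area
    return length_km / v_r_km_s

# Example: a 900 km^2 cluster rupturing at 2.5 km/s
d_aval = rupture_duration(900.0, 2.5)   # -> 12.0 s
```

Because only the largest cluster A syn enters the estimate, rather than the whole effective area A eff, durations computed this way are systematically shorter than whole-fault reference values, consistent with the observation above.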

Conclusions
In this study, we present a FBM-based computer code called stochasTic Rupture Earthquake MOdeL, TREMOL, to investigate the rupture process of seismic asperities. We show that the model is capable of reproducing the main characteristics observed in real scenarios by means of few input parameters. We carried out a parametric study to determine the optimal values for the three most important initial input parameters:

- π asp: As long as the fault plane has a conservation parameter of π bkg = 0.67, the conservation parameter of the asperity must be π asp ≥ 0.76 to ensure a realistic maximum rupture area.
- γ asp: The best strength interval for the asperity is 5 ± 1 ≤ γ asp ≤ 14 ± 1. However, due to computational costs it is recommended to use the lowest value, γ asp = 5 ± 1, since the number of time-steps necessary to activate the whole asperity increases strongly with the asperity strength (see Algorithm 1).
- S a−Asp: The generated magnitude can be controlled by the parameter S a−Asp. This parameter depends on the earthquake of interest and is constrained by data from inversion studies.
We also carried out a validation study employing 10 subduction earthquakes which occurred in Mexico. TREMOL proved able to reproduce these scenarios within an appropriate tolerance.
A big advantage of our algorithm is the low number of free parameters (L eff, W eff, and S a) needed to obtain an appropriate rupture area and magnitude assessment. Our code TREMOL allows its users to investigate the role of the initial stress configuration and of the material properties on the rupture of seismic asperities. Both characteristics are key factors for understanding earthquake dynamics. The strengths of our FBM model are the simplicity of its implementation and its flexibility and capability to model different rupture scenarios (i.e. asperity configurations) with varying mechanical properties within the asperities and/or the background area or fault plane. Although we simplified the expected complex asperity geometries, irregular asperity geometries may be introduced in future work. Another advantage is the analysis of earthquake dynamics from the point of view of deformable materials that break under critical stress. The results of TREMOL are promising. However, various assumptions and simplifications require further experiments and modifications of the algorithm to cover various tectonic settings. Likewise, the machine learning application by Monterrubio-Velasco et al. (2018) needs to be incorporated into the model to determine the optimal parameter ranges for different fault types and tectonic regimes. Although many questions remain open due to the model's early development stage, TREMOL proved to be a powerful tool which can deliver promising new insights into earthquake triggering processes. Our future work will investigate complex asperity configurations, earthquake doublets and stress transfer in three-dimensional domains.

Figure 1 .
Figure 1. TREMOL flowchart. At the beginning (pre-processing) the algorithm initializes a domain Ω with Nx × Ny cells, where every cell is either part of an asperity or of the background or fault plane. Afterwards (first time-step, k = 1) a uniform distribution allocates a random stress load and rupture probability to all cells. In addition, asperity cells obtain a random strength value from a uniform distribution. Next (time-steps k ≥ 2) the failure process starts following the FBM algorithm. After every failure the stress of the broken cell is redistributed via the LLS rule, and the number of time-steps k increases by 1 until the target number of time-steps is reached, at which point the simulation stops. At the end, all information about the entire failure process is saved in a database/synthetic catalogue which can be used for statistical analysis. Further details about the algorithm are given in Section 3.
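The failure loop summarized in the flowchart can be sketched in simplified form as follows. This is illustrative Python, not the actual TREMOL implementation: the cell-selection criterion, the load reset of a broken cell, and the four-neighbour transfer stencil are assumptions made for this sketch; only the fraction `pi_frac` of redistributed load corresponds conceptually to the conservation parameter π.

```python
import random

def fbm_step(load, strength, pi_frac, rng):
    """One simplified FBM time step on a 2-D grid: fail the cell whose
    load-to-strength ratio is largest, then redistribute a fraction
    pi_frac of its load to its four nearest neighbours (a simple
    local-load-sharing rule).  Illustrative only."""
    ny, nx = len(load), len(load[0])
    # pick the cell closest to failure (highest load/strength ratio)
    i, j = max(((a, b) for a in range(ny) for b in range(nx)),
               key=lambda c: load[c[0]][c[1]] / strength[c[0]][c[1]])
    released = load[i][j] * pi_frac
    load[i][j] = rng.uniform(0.0, 0.1)   # broken cell restarts with a small load
    neigh = [(i + di, j + dj) for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))
             if 0 <= i + di < ny and 0 <= j + dj < nx]
    for ni, nj in neigh:                 # local load sharing (LLS)
        load[ni][nj] += released / len(neigh)
    return i, j
```

Iterating `fbm_step` for the target number of time-steps and recording each broken cell yields the synthetic catalogue from which avalanche clusters are extracted.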

Figure 2 .
Figure 2. Schematic representation of the considered local load rule.The broken cell with load, σF, distributes the largest load fraction, σO

Figure 3 .
Figure 3. (a) Spatial distribution of the random initial loads σ(i, j). RAsp represents a rectangular fault plane of Nx = Ny = 100 cells. The color bar indicates the load; the threshold load is σ th = 1. (b) Spatial distribution of the strength γ(i, j). Two main regions can be distinguished in this figure: 1) the asperity region, defined as the inner rectangle, and 2) a background area or fault plane. While the asperity contains strength values in the range of 3 to 5, the rest of the fault plane has a strength value of 1.

Figure 4 .
Figure 4. Results of one realization by TREMOL. (a) The spatial distribution of avalanches. Patches of the same color indicate one temporally consecutive avalanche cluster (synthetic earthquake). (b) Logarithmic representation of the inter-event rate with time. The red dots represent the inter-event rate when the asperity rupture occurs. The blue dots indicate foreshocks.

Figure 5 .
Figure 5. Schematic configuration for the parametric study of γasp and Sa−Asp. The domain Ω comprises 200 × 100 cells. Each asperity is located at the center of one of the two sub-domains of 100 × 100 cells. The strength parameter γasp and the degree of heterogeneity of each asperity can be varied according to the material properties.

Figure 6 .
Figure 6. Example of the strength configuration γ(i, j) for the sensitivity analysis of γasp. Two asperities with the same size Sa−Asp = 0.22 are defined and embedded in Ω following the schema of Fig. 5. The conservation parameters are πasp = 0.90 and πbkg = 0.67. The color bar indicates different γ(i, j) values. The left asperity (Asp. 1) has constant properties, while the right asperity (Asp. 2) has variable strength values.

Figure 7 .
Figure 7.An example configuration of different asperity sizes, Sa−Asp.The color bar indicates the strength γ(i, j) values used during the test.

Figure 8 .
Figure 8. Example of the domain configuration Ω, considering L eff and W eff. (a) Example configuration of Event 3 and (b) example configuration of Event 5. The required data can be found in Table 5.

Figure 9 .Figure 10 .
Figure 9. Mean of the maximum rupture area [km 2 ], Asyn for different values of πasp depicted as black circles.The minimum and maximum limits of the rupture area are represented by blue squares.

Figure 11 .
Figure 11.Mean and standard deviation of the ratio Ssyn = Asyn/A eff over 50 realizations for different values of πasp.The red line indicates

Figure 12 .
Figure 12. Statistical results of γasp for a configuration similar to Fig. 6. Markers represent the mean value while the error bars indicate the standard deviation over all 50 executions considering different initial strength configurations. The red markers correspond to the results of the left asperity (Asp. 1) and blue markers to the results of the right asperity (Asp. 2) (Fig. 6). The strength of the left asperity is kept constant,

Figure 13 .
Fig. 13 shows the mean magnitude and standard deviation as a function of asperity size. The first asperity with the fixed size indicates a relatively constant magnitude of approximately 7.4. Conversely, the second asperity with variable size produces only a slight increase in magnitude: the magnitude for S a−Asp = 0.52 is approximately 0.5 magnitude units larger than the one for S a−Asp = 0.22.

Figure 15 .
Figure 15. Statistical results of the maximum magnitude for the events from Table 5. Red squares depict the real estimated magnitudes from Table 5, while black circles, blue triangles and green triangles represent the synthetic mean magnitudes of 50 executions following Eqs. 7, 8 and 9, respectively. The error bars are the standard deviation of the scale relations.

Figure 16 .
Figure 16. Statistical results of the maximum magnitude for the events from Table 5. Red squares represent the magnitudes from Table 5, whereas black circles show the mean magnitude (M syn) of all 50 executions following Eqs. 7, 8, and 9. The error bars stand for the standard deviation of the mean for the three scale relations.

Figure 17 .
Figure 17. Proportion of the simulated ruptured area occupied by the largest avalanche, Ssyn, in comparison with the real ratio size Sa from Table 5. Sa is represented by black squares and the mean Ssyn of the largest simulated earthquake by blue squares. The standard deviation is represented as error bars.

Figure 19 .Figure 20 .
Figure 19. A comparison between the data from Table 5 and the results by TREMOL for the events 7, 7a and 7b. (a) Maximum ruptured area, Asyn; (b) mean maximum magnitude, Msyn; (c) ratio of maximum event size, Ssyn.

Table 1 .
TREMOL pre-processing: Input parameters and their definition.

Table 2 .
Input data in order to carry out cases 1 to 12.

Table 3 .
Main input data in order to carry out cases 13 to 18.

Table 4 .
Main input data in order to carry out cases 19 to 24.

Table 6 .
Main data used for Ev.1 to Ev. 10.

Table 7 .
Main data for the case-study Ev. 7 test, Ev. 7a test, and Ev.7b test.

Table 8 .
Main data for assessing a future earthquake in the Guerrero seismic gap (GG event).