The Utrecht Finite Volume Ice-Sheet Model: UFEMISM (version 1.0)

Improving our confidence in future projections of sea-level rise requires models that can simulate ice-sheet evolution both in the future and in the geological past. A physically accurate treatment of large changes in ice-sheet geometry requires a proper treatment of processes near the margin, like grounding line dynamics, which in turn requires a high spatial resolution in that specific region. This leads to a demand for computationally efficient models, where such a high resolution can be feasibly applied in simulations of 10^5 – 10^7 yr in duration. Here, we present and evaluate a new ice-sheet model that solves the SIA and SSA approximations of the stress balance on a fully adaptive, unstructured triangular mesh. This strongly reduces the number of grid points where the equations need to be solved, increasing the computational efficiency. We show that the model reproduces the analytical solutions or model intercomparison benchmarks for a number of schematic ice-sheet configurations, indicating that the numerical approach is valid. Because of the unstructured triangular mesh, the number of vertices increases less rapidly with resolution than in a square-grid model, greatly reducing the required computation time for high resolutions. A simulation of all four continental ice sheets during an entire 120 kyr glacial cycle, with a 4 km resolution near the grounding line, is expected to take 100 – 200 wall clock hours on a 16-core system (1,600 – 3,200 core hours), implying that this model can be feasibly used for high-resolution paleo-ice-sheet simulations.


Introduction
The response of the Greenland and Antarctic ice sheets to the warming climate forms the largest uncertainty in long-term sea-level projections (e.g. Oppenheimer et al., 2019; van de Wal et al., 2019). Since the dynamical evolution of ice sheets has components that are slow compared to the human time scale, observational evidence alone cannot sufficiently reduce this uncertainty. Instead, reconstructions of the evolution of ice sheets during the geological past are required to improve our understanding of the long-term evolution of these systems and the constraints this provides for future ice-sheet retreat. Recent work has focused on using ice-sheet models and climate models, with varying degrees of inter-model coupling, to reproduce different periods of the geological past, with climates that have been both significantly warmer and colder than the present (e.g. Abe-Ouchi et al., 2013; Pollard et al., 2013; DeConto and Pollard, 2016; Stap et al., 2017; Berends et al., 2018, 2019; Willeit et al., 2019). These studies have highlighted and re-emphasized the importance of understanding, and properly modelling, the different physical interactions between the ice sheets, sea level, the solid Earth, and the regional and global climate. Since many of these processes become relevant only when significant changes in ice-sheet geometry occur, studying them requires very long (10^5 – 10^7 yr) ice-sheet model simulations.
The dynamics of the Antarctic ice sheet at present are strongly influenced by the presence of floating ice shelves (Pattyn, 2018). Different studies have investigated the physical processes affecting these ice shelves, including surface melt induced by atmospheric processes (Bevan et al., 2017; Kuipers Munneke et al., 2018), bottom melt induced by intrusion of warm ocean water (Depoorter et al., 2013; Lazeroms et al., 2018; Reese et al., 2018), brittle fracturing of ice cliffs (Pollard et al., 2015), and the response of the grounding line to changes in sea level and bedrock elevation (Gomez et al., 2013; Barletta et al., 2018).
Ice dynamics around the grounding line have been of particular interest, demonstrating the need for a high model resolution (< 1 km) in order to accurately resolve the physical processes involved (Schoof, 2007; Gladstone et al., 2012; Pattyn et al., 2012, 2013). Different approaches have been proposed where semi-analytical solutions (Schoof, 2007; Tsai et al., 2015) are implemented as boundary conditions in numerical models, to maintain physical accuracy at lower resolutions (Pollard and DeConto, 2012; Pattyn, 2017). However, while these approaches achieve good results in situations without buttressing (Pattyn et al., 2013), their performance in simulating realistic ice shelves, where buttressing is usually a significant factor, is poor (Reese et al., 2018).
Ice-sheet model computation time increases rapidly with model resolution, due to the increasing number of grid points, the decreasing time step required for numerical stability, and the increasing number of topographic features that are resolved. This means that the need for both a high model resolution and long simulations results in a computational paradox. Most research groups working on paleo-ice-sheet simulations consider the uncertainties in the paleoclimate reconstructions used as forcing to be much larger than the physical inaccuracy resulting from a low model resolution. The ice-sheet models used by these groups therefore typically have a low-resolution, fixed grid, and solve a simplified version of the Navier-Stokes equations, which makes them very computationally efficient, and relatively easy to compile, run, and modify (e.g. SICOPOLIS, Greve et al., 2011; PISM, Winkelmann et al., 2011; ANICE, de Boer et al., 2014; f.ETISh, Pattyn, 2017; GRISLI, Quiquet et al., 2018; CISM, Lipscomb et al., 2019; Yelmo, Robinson et al., 2020). Paleo-simulations of the Antarctic ice sheet with such models have used resolutions of e.g.
10 km (DeConto and Pollard, 2016), 32 km (Robinson et al., 2020), 40 km (Berends et al., 2019), 80 km (Willeit et al., 2019) or 110 km (Abe-Ouchi et al., 2013), too low to properly capture grounding line dynamics. A few existing models, which are mainly intended for relatively short (10^1 – 10^3 yr) simulations, use high-resolution, adaptive grids (e.g. ISSM, Larour et al., 2012; Elmer/Ice, Gagliardini et al., 2013; MALI, Hoffman et al., 2018). They often solve for more terms in the Navier-Stokes equations (either the higher-order Blatter-Pattyn approximation [Pattyn, 2003], or even the full Stokes system) using finite element methods, making them more physically accurate. While recent developments have improved their computational efficiency enough to make small-scale (single ice-sheet basin, 10^4 yr) paleo-simulations feasible (Cuzzone et al., 2018, 2019), they tend to be too computationally demanding for the 10^5 – 10^7 yr simulations needed for paleo-ice-sheet simulations.
Here, we present and evaluate a new ice-sheet model that constitutes an optimum between these two families: the Utrecht Finite Volume Ice-Sheet Model (UFEMISM). It combines the SIA and SSA simplifications of the stress balance, used in most paleo-ice-sheet models, with a variable-resolution unstructured grid, which allows it to achieve a high (< 5 km) resolution near the grounding line, while retaining the computational speed required for feasibly performing long paleo-ice-sheet simulations. This makes it especially useful for studying the impact of ice-sheet – solid Earth – sea-level interactions on long-term ice-sheet evolution. In Sect. 2, we provide a brief description of the physical equations for ice dynamics and thermodynamics that are solved by the model, as well as a description of the unstructured grid upon which those equations are solved. In Sect. 3, we present results from a number of benchmark experiments performed with UFEMISM, showing that the model output agrees well with different analytical solutions, as well as with output from other ice-sheet models. In Sect. 4, we present results from a detailed analysis of the computational performance of the model.

Overview
UFEMISM is a variable-resolution ice-sheet model. Flow velocities for grounded ice are calculated using the shallow ice approximation (SIA; Morland and Johnson, 1980), while the shallow shelf approximation (SSA; Morland, 1987) is used both for calculating flow velocities for floating ice, and as a "sliding law" for grounded ice, using the hybrid approach by Bueler and Brown (2009). These equations are discretised and solved on an unstructured mesh, which is generated and updated based on the modelled ice-sheet geometry. The resolution can vary from as coarse as 200 km over open ocean or 100 km over the interior of a large ice sheet, to as fine as 1 km at the grounding line, with the precise numbers specified at run-time through a config file. Using a finite volume approach, ice velocities and fluxes are calculated on cell boundaries, similar to the "staggered" Arakawa C grid used in many ice models based on finite differences. By explicitly calculating the mass of ice moved between individual vertices in every time step, this approach guarantees conservation of mass. The model is thermomechanically coupled, solving for the diffusion and advection of heat, which enters the ice sheet through the surface and base, as well as through strain heating. For this purpose, the ice sheet is divided vertically into 15 unequally spaced layers, decreasing in thickness near the base, where most of the deformation takes place. The resulting three-dimensional englacial temperature field is used to determine the ice viscosity.
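The finite-volume bookkeeping described above can be illustrated with a minimal sketch (in Python, with invented mesh data and function names; UFEMISM itself is written in Fortran): each flux is evaluated once per shared cell boundary and applied with opposite signs to the two adjacent cells, so total ice mass is conserved to machine precision.

```python
def update_thickness(H, cell_area, edges, edge_flux, dt):
    """One explicit time step. H: thickness per vertex (m); cell_area: area of
    each vertex's cell (m^2); edges: (i, j) vertex pairs sharing a boundary;
    edge_flux[k] > 0 means ice flows from vertex i to vertex j (m^3 yr^-1)."""
    dHdt = [0.0] * len(H)
    for k, (i, j) in enumerate(edges):
        f = edge_flux[k]
        dHdt[i] -= f / cell_area[i]   # mass leaves cell i ...
        dHdt[j] += f / cell_area[j]   # ... and enters cell j: none is lost
    return [h + dt * dh for h, dh in zip(H, dHdt)]

H0 = [100.0, 80.0, 60.0]              # three vertices, decreasing thickness
area = [2.0e6, 1.5e6, 1.0e6]          # m^2 per vertex cell
edges = [(0, 1), (1, 2)]
flux = [5.0e4, 2.0e4]                 # m^3 yr^-1 across each boundary
H1 = update_thickness(H0, area, edges, flux, dt=1.0)

mass0 = sum(h * a for h, a in zip(H0, area))
mass1 = sum(h * a for h, a in zip(H1, area))
print(abs(mass1 - mass0) < 1e-4)      # True: total ice mass is unchanged
```

Because the antisymmetry of the flux stencil is exact, conservation does not depend on the time step or on mesh regularity.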

Unstructured triangular mesh
The main distinguishing feature of UFEMISM with respect to many other paleo-ice-sheet models is the unstructured triangular mesh on which the physical equations are solved. The mesh generation code in UFEMISM is based on Ruppert's algorithm (Ruppert, 1995), which is used to iteratively add vertices to a pre-existing Delaunay triangulation (a process called "refinement"), until the smallest internal angle of every triangle no longer lies below a pre-defined threshold value (typically 25 degrees). Since the stability of many numerical iterative schemes for solving differential equations strongly depends on this particular property of the mesh geometry (Ruppert, 1995), Ruppert's algorithm is a very useful tool for generating meshes suitable for use in physical models.
The mesh generation code in UFEMISM uses an extended version of Ruppert's algorithm. In the original version, a triangle is marked as "bad" if its smallest internal angle lies below the threshold value. If this is the case, the triangle is "split", by adding a vertex at the triangle's circumcentre, and updating the Delaunay triangulation. Ruppert (1995) showed that, as long as the perimeter of the mesh does not contain any angles sharper than the threshold value, this algorithm always converges, i.e. results in a mesh with no "bad" triangles and a finite, typically small number of vertices. In UFEMISM, several additional conditions have been added which can cause a triangle to be marked as "bad", depending on the triangle's "model content" and size. If, for example, the modelled grounding line passes through a triangle whose longest edge exceeds the specified maximum grounding line resolution, that triangle is also marked as "bad". Extensive preliminary experiments showed that this particular approach to mesh generation is robust, resulting in resolutions over specified areas (such as the grounding line) that lie below, but not too far below (rarely more than 25 %), the specified maximum value. Other conditions that can be included, but will not be discussed here, are a maximum vertical error in surface elevation based on surface curvature (useful for the interior close to the ice margin), and a maximum resolution derived from ice velocity (useful for ice streams). Lastly, a maximum resolution can be prescribed at specific geographic locations, such as ice core drill sites.
Shown in Fig. 1 are four meshes generated for both Antarctica and Greenland, based on present-day ice-sheet geometry (BedMachine Greenland version 3, Morlighem et al., 2017, and BedMachine Antarctica version 1, Morlighem et al., 2019), with maximum ice-margin (including both the grounding line and the calving front) resolutions of 100, 30, 10 and 3 km, respectively. For Antarctica, two high-resolution locations are included: one at the South Pole, and one at the EPICA Dome C drill site. Since the ice margin, the calving front and the grounding line are one-dimensional regions, increasing the desired resolution only over these regions results in a total number of vertices and triangles that scales almost linearly with this resolution (though not quite, as increasing the resolution resolves more geographical features, increasing the length of the lines). The way the number of vertices and the computational speed of the model scale with the prescribed resolution is investigated in more detail in Sect. 4. Mesh generation has been parallelised by subdividing the model domain into separate regions, one per processor, each of which generates a mesh for only its own section. These "submeshes" are then merged to produce one single mesh that covers the entire model domain. An example of four such submeshes that have been generated in parallel for Antarctica is shown in Fig. 2.
Although these particular submeshes have equally sized domains, in general the size of each domain is chosen such that they contain approximately equal numbers of vertices, to ensure proper load balancing. The results presented in Sect. 4 show that this approach results in a computation time that scales with the number of processors to order 0.47. The mesh adapts to the ice-sheet geometry as it evolves through time. In every model time step, a "mesh fitness factor" is calculated, defined as the fraction of triangles that still meet all the fitness criteria in the extended version of Ruppert's algorithm. For a newly generated mesh, this fraction is, by definition, 100 %. As the ice-sheet geometry evolves over time, the fitness factor slowly decreases. When it falls below a prescribed threshold value (typically 95 %), a mesh update is triggered, and an entirely new mesh is generated. In glacial cycle simulations, the typical time between mesh updates is 50 – 500 model years. Although Ruppert's algorithm provides a very intuitive rule for refining a mesh, no such rules can be easily defined for "un-refining" a mesh. The simplest approach is therefore to generate an entirely new mesh once the fitness of the current mesh falls below the threshold value. Once this is done, the model data fields (ice thickness, velocity, temperature, etc.) are projected from the old mesh to the new one, using trilinear interpolation for diagnostic fields such as ice velocity, surface albedo, etc., and second-order conservative remapping (Jones, 1999) for ice thickness and englacial temperature.
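The extended "bad triangle" test can be sketched as follows (a Python illustration with assumed names and thresholds, not UFEMISM's actual Fortran code): a triangle is refined either when its smallest internal angle falls below the quality threshold, or when the grounding line crosses it while its longest edge exceeds the prescribed grounding-line resolution.

```python
import math

def smallest_angle(p, q, r):
    """Smallest internal angle of triangle (p, q, r), in degrees."""
    def ang(a, b, c):  # angle at vertex a
        v1 = (b[0] - a[0], b[1] - a[1])
        v2 = (c[0] - a[0], c[1] - a[1])
        dot = v1[0] * v2[0] + v1[1] * v2[1]
        n1, n2 = math.hypot(*v1), math.hypot(*v2)
        return math.degrees(math.acos(max(-1.0, min(1.0, dot / (n1 * n2)))))
    return min(ang(p, q, r), ang(q, r, p), ang(r, p, q))

def longest_edge(p, q, r):
    return max(math.dist(p, q), math.dist(q, r), math.dist(r, p))

def is_bad(tri, crosses_grounding_line, min_angle_deg=25.0, gl_res=4.0e3):
    """Ruppert's quality criterion, extended with a grounding-line resolution
    criterion (one of several possible "model content" conditions)."""
    if smallest_angle(*tri) < min_angle_deg:
        return True
    if crosses_grounding_line and longest_edge(*tri) > gl_res:
        return True
    return False

# A well-shaped 5 km triangle away from the grounding line is acceptable...
tri = [(0.0, 0.0), (5.0e3, 0.0), (2.5e3, 4.0e3)]
print(is_bad(tri, crosses_grounding_line=False))  # False
# ...but the same triangle on the grounding line must be split (edge > 4 km):
print(is_bad(tri, crosses_grounding_line=True))   # True
```

In the real algorithm a triangle flagged in this way is split at its circumcentre and the Delaunay property is restored, after which the test is re-applied to the new triangles.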

Ice dynamics
Ice velocities over land are calculated in UFEMISM using the SIA. The horizontal ice velocities in the vertical column, u(z) and v(z), can be expressed in terms of the depth-dependent ice diffusivity D(z):

u(z) = D(z) \frac{\partial h}{\partial x}, \qquad v(z) = D(z) \frac{\partial h}{\partial y}, \qquad (1)

D(z) = -2 \left( \rho g \right)^n \left| \nabla h \right|^{n-1} \int_b^z A(T^*) \left( h - \zeta \right)^n \, \mathrm{d}\zeta. \qquad (2)

The depth-dependent ice diffusivity D(z) is thus defined as a function of the ice density ρ, gravitational acceleration g, ice thickness H, surface gradient ∇h, Glen's flow law exponent n, and the temperature-dependent ice flow factor A(T*). For a comprehensive derivation of Eqs. 1 and 2, see e.g. Bueler and Brown (2009). In order to solve these equations numerically, these different quantities need to be discretised on a grid. In many ice models, this is done using a combination of a standard central differencing scheme, and a "staggered" grid, where either the ice thickness H is averaged and the surface slopes ∂h/∂x and ∂h/∂y are differenced (Type I models; Huybrechts et al., 1996), or the other way around (Type II models). UFEMISM is, in this sense, a Type I model. The discretisation of spatial derivatives on an unstructured triangular mesh is derived in Appendix A. Ice thickness changes over time are calculated using a finite volume approach: by calculating the diffusivity D, the surface slopes ∂h/∂x and ∂h/∂y, and the resulting ice velocities u and v on the boundaries between vertices (using the "staggered mesh" approach described in Appendix A), ice fluxes between individual vertices are calculated explicitly, and moved from one vertex to the other in every time step. This guarantees conservation of mass. UFEMISM also solves the SSA, to determine ice velocities for floating ice shelves, and sliding velocities for grounded ice, following the approach of Bueler and Brown (2009). The iterative scheme for discretising and solving the set of partial differential equations describing the SSA on the unstructured triangular mesh is derived in Appendix B.
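As a numerical illustration of Eqs. 1 and 2, the sketch below evaluates the SIA velocity profile for a single isothermal column (all parameter values, and the trapezoidal discretisation of the vertical integral, are illustrative assumptions, not UFEMISM's implementation):

```python
import numpy as np

rho, g, n = 910.0, 9.81, 3            # ice density (kg m^-3), gravity, Glen's n
A = 1.0e-16                           # flow factor A(T*), Pa^-3 yr^-1 (uniform)
H = 2000.0                            # ice thickness (m), flat bed at z = 0
dhdx, dhdy = -1.0e-3, 0.0             # surface gradient
grad_h = np.hypot(dhdx, dhdy)

z = np.linspace(0.0, H, 101)          # height above the bed
integrand = A * (H - z) ** n          # A(T*) (h - zeta)^n in Eq. 2
# Cumulative trapezoidal integral from the bed up to each level z:
cumint = np.concatenate(
    ([0.0], np.cumsum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(z))))
D = -2.0 * (rho * g) ** n * grad_h ** (n - 1) * cumint   # Eq. 2
u = D * dhdx                          # Eq. 1: zero at the bed, max at surface

print(round(float(u[-1]), 2))         # surface velocity, ~0.57 m yr^-1 here
```

Velocity is zero at the bed and increases monotonically towards the surface, as expected for SIA deformation without basal sliding; sliding velocities are added separately through the SSA.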

Thermodynamics
UFEMISM uses a mixed implicit-explicit finite differencing scheme with a fixed time step to solve the heat equation in a flowing medium:

\frac{\partial T}{\partial t} = \frac{k}{\rho c_p} \nabla^2 T - \mathbf{v} \cdot \nabla T + \frac{\Phi}{\rho c_p}. \qquad (3)

The three terms on the right-hand side of Eq. 3 respectively represent diffusion, advection and strain heating. In UFEMISM, horizontal diffusion of heat is neglected, simplifying Eq. 3 to:

\frac{\partial T}{\partial t} = \frac{k}{\rho c_p} \frac{\partial^2 T}{\partial z^2} - \left( u \frac{\partial T}{\partial x} + v \frac{\partial T}{\partial y} + w \frac{\partial T}{\partial z} \right) + \frac{\Phi}{\rho c_p}.

This equation is discretised on an irregular grid in the vertical direction; all vertical derivatives are discretised implicitly, whereas horizontal derivatives are discretised explicitly. This mixed approach, which was first used in the ice-sheet model SICOPOLIS (Greve, 1997), is numerically stable (since both the steepest gradients and shortest grid distances are in the vertical direction), relatively easy to implement, and fast to compute. Using 15 layers in the vertical direction, a time step of 10 years is typically sufficient to maintain numerical stability, independent of the horizontal resolution. The iterative scheme for solving this equation is derived in Appendix C.
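The implicit-in-the-vertical, explicit-in-the-horizontal idea can be sketched for a single ice column (a toy uniform grid with simplified boundary conditions; UFEMISM's actual grid is irregular, and advection and strain heating are lumped into a generic explicit term here):

```python
import numpy as np

nz, H, dt = 15, 2000.0, 10.0          # layers, thickness (m), time step (yr)
z = np.linspace(0.0, H, nz)           # uniform toy grid
dz = z[1] - z[0]
kappa = 35.0                          # k / (rho c_p), m^2 yr^-1 (approximate)

T = 240.0 + 30.0 * (1.0 - z / H) ** 2  # convex initial profile: warm base
explicit_rhs = np.zeros(nz)            # horizontal advection + strain heating,
                                       # evaluated explicitly (zero in this toy)

# Implicit vertical diffusion: (I - dt * kappa * d2/dz2) T_new = T + dt * rhs
r = kappa * dt / dz ** 2
Amat = np.eye(nz)
for i in range(1, nz - 1):
    Amat[i, i - 1], Amat[i, i], Amat[i, i + 1] = -r, 1.0 + 2.0 * r, -r
rhs = T + dt * explicit_rhs
rhs[0], rhs[-1] = T[0], T[-1]          # fixed basal / surface temperatures
T_new = np.linalg.solve(Amat, rhs)

print(T_new[0], T_new[-1])             # boundaries held near 270 K and 240 K
```

Because the vertical diffusion is treated implicitly, the step remains stable for any r, which is why the 10-year time step can be chosen independently of the horizontal resolution.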

Model verification and benchmark experiments
In order to verify the numerical solution to the ice dynamical equations, we performed several benchmark experiments, where we compare our model output to analytical solutions (Halfar, 1981; Bueler et al., 2005), or to results from the EISMINT intercomparison exercise (Huybrechts et al., 1996).

Verification using analytical solutions
For several schematic, simplified ice-sheet configurations, analytical solutions exist for the time evolution of the ice sheet.
One of these was derived by Halfar (1981), describing a "similarity solution" for the time evolution of a radially symmetrical, isothermal ice sheet lying on top of a flat bed, with a uniform zero mass balance. For an ice sheet which, at time t_0, has a thickness at the dome H_0 and margin radius R_0, the time-dependent solution to the SIA, with Glen's flow law exponent n, reads:

H(r,t) = H_0 \left( \frac{t}{t_0} \right)^{-\alpha} \left[ 1 - \left( \left( \frac{t}{t_0} \right)^{-\beta} \frac{r}{R_0} \right)^{\frac{n+1}{n}} \right]^{\frac{n}{2n+1}}, \qquad \alpha = \frac{2}{5n+3}, \quad \beta = \frac{1}{5n+3}.

UFEMISM reproduces the analytical solution well, with the largest errors occurring at the margin, and decreasing with resolution. As was shown by Bueler et al. (2005), this is the case for all spatially discrete SIA models, and is due to the fact that the SIA is intrinsically unable to accurately describe ice flow at the margin, where the surface slope becomes very large.
They also show that these errors do not "corrupt" the model solution over the interior.This matches our results, where the modelled ice-sheet interior after 100,000 years is still close to the analytical solution.At that time, the modelled ice-sheet margin at 4 km resolution differs from the analytical solution by about 10 km.
A generalisation of the solution by Halfar (1981), applicable to problems including a simple elevation-dependent accumulation rate, was derived by Bueler et al. (2005). For an accumulation given by:

M(t,r) = \frac{\lambda}{t} H(t,r),

the solution for the ice thickness over time reads:

H(r,t) = H_0 \left( \frac{t}{t_0} \right)^{-\alpha} \left[ 1 - \left( \left( \frac{t}{t_0} \right)^{-\beta} \frac{r}{R_0} \right)^{\frac{n+1}{n}} \right]^{\frac{n}{2n+1}}, \qquad \alpha = \frac{2 - \lambda(n+1)}{5n+3}, \quad \beta = \frac{1 + \lambda(2n+1)}{5n+3}.

The special case of zero mass balance, described by λ = 0, gives the solution by Halfar (1981). The results of this experiment are shown in Fig. 4, for the case of λ = 5, which describes a positive accumulation rate, resulting in a rapidly expanding ice sheet. Here, too, UFEMISM reproduces the analytical solution well, with the largest errors occurring at the ice-sheet margin, and decreasing with resolution.
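The similarity solution above is straightforward to evaluate directly, which is how such benchmark comparisons are typically set up. A minimal Python sketch (with arbitrary H_0, R_0, t_0; λ = 0 recovers the Halfar, 1981 solution):

```python
import numpy as np

def bueler_H(r, t, H0, R0, t0, n=3, lam=0.0):
    """Ice thickness H(r, t) from the Bueler et al. (2005) similarity solution;
    lam = 0 gives the Halfar (1981) zero-mass-balance case."""
    alpha = (2.0 - lam * (n + 1.0)) / (5.0 * n + 3.0)
    beta = (1.0 + lam * (2.0 * n + 1.0)) / (5.0 * n + 3.0)
    s = (t / t0) ** (-beta) * r / R0
    inner = np.clip(1.0 - s ** ((n + 1.0) / n), 0.0, None)  # zero beyond margin
    return H0 * (t / t0) ** (-alpha) * inner ** (n / (2.0 * n + 1.0))

H0, R0, t0 = 3600.0, 750.0e3, 100.0   # arbitrary dome thickness, radius, time

print(bueler_H(0.0, t0, H0, R0, t0))            # 3600.0: H0 at the dome, t = t0
print(bueler_H(R0, t0, H0, R0, t0))             # 0.0: the margin
print(bueler_H(0.0, 10 * t0, H0, R0, t0) < H0)  # True: no accumulation, thins
```

With λ = 5 and n = 3, α = −1 and β = 2, so the dome thickens and the margin radius grows rapidly, consistent with the "rapidly expanding ice sheet" described above.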

EISMINT benchmark experiments
In order to further investigate the validity of our numerical schemes for ice dynamics and thermodynamics, we used UFEMISM to perform the first set of EISMINT benchmark experiments (Huybrechts et al., 1996). All six experiments consist of a radially symmetric ice sheet, lying atop a flat, non-deformable bedrock. While the temperature of the ice is calculated dynamically, the ice flow factor is kept fixed at a value of A = 10^-16 Pa^-3 yr^-1, meaning that ice temperature is a purely diagnostic variable. The six experiments are divided into two groups of three: a "moving margin" and a "fixed margin" group; in the "fixed margin" experiments, the mass balance is such that the expected theoretical ice margin lies outside the model domain, and ice thickness at the boundary is artificially kept at zero. In realistic model configurations, such a margin (i.e. where the ice thickness does not approach zero) can occur at a calving front. The three experiments within a group have different mass balances: a fixed, "steady-state" mass balance, one with an added 20 kyr sinusoid, and one with a 40 kyr sinusoid, which is useful for investigating the performance of the model in terms of temporal evolution. Simulations for each experiment were performed with the original EISMINT ice-margin resolution of 50 km, as well as finer resolutions of 16, 8 and 4 km. Shown in Fig. 6 are the same transects for experiment IV (steady-state, fixed margin), showing an ice sheet that is ~3400 m thick at the divide, compared to 3340 – 3420 m in Huybrechts et al. (1996). Following Huybrechts et al. (1996), the thermal conductivity and specific heat capacity of ice are kept fixed at uniform values of k = 2.1 J s^-1 m^-1 K^-1 and cp = 2009 J kg^-1 K^-1, respectively. Here, too, results agree well with those reported by Huybrechts et al. (1996). In experiment I, basal temperatures at the ice divide are 11 – 12 K below the pressure melting point (PMP), increasing along the outward transect until they reach the PMP at 350 – 400 km from the divide. In experiment IV we see similar results, with basal temperatures at the ice divide lying around 8 K below PMP, reaching PMP slightly further towards the margin, at 400 – 450 km from the divide.
Preliminary experiments with a one-dimensional set-up (vertical column only) show that these results are robust across different choices of vertical discretisation. We find glacial-interglacial (G-IG) temperature changes that are slightly smaller than the values reported by Huybrechts et al. (1996), lying just outside their reported ranges. In agreement with the findings of Huybrechts et al. (1996), introducing glacial cycles in the moving margin experiments (Fig. 10a) results in a G-IG mean temperature decrease of about 1 K, while the fixed margin experiments (Fig. 10b) see a temperature increase of about 2.5 K.

Computational performance
The first version of UFEMISM presented here is parallelised using the MPI construct of "shared memory", allowing the program to run in parallel on a number of processor cores that are able to directly access the same physical memory (usually either 16, 24, or 32 cores on typical computer clusters or supercomputers).This is a temporary choice; the effort to extend the parallelisation to multiple nodes, using the full capability of MPI, is still ongoing, but is beyond the scope of this paper.
In order to investigate the computational performance of the different model components, we performed a series of simulations of Antarctic ice-sheet retreat. For these simulations, the climate was set to the present-day observed climate (ERA-40; Uppala et al., 2005) plus a spatially and temporally uniform 5 °C warming term to force ice-sheet retreat (and thus ensure frequent mesh updating) over a period of 10,000 years. The results of one of these simulations are shown in Fig. 11. Power-law fits have been made between the computation time t, the number of cores C, and the horizontal resolution R. For the number of cores C, this fit is described by ln t = a − b_C ln C, with b_C describing the degree of parallelisation, such that b_C = 1 describes perfect parallelisation, i.e. doubling the number of cores reduces the computation time by half. For the resolution, this fit is described by ln t = a − b_R ln R. The three physics modules have a good degree of parallelisation, ranging from b_C = 0.71 for the SSA to b_C = 0.83 for the SIA. Mesh updating scales less well, with b_C = 0.47. However, since mesh updating accounts for only 2 – 4 % of total computation time, this does not noticeably affect the parallelisation of the complete model, which has b_C = 0.74. The SSA dominates the total computation time across all resolutions and numbers of processors, requiring as much or more computation time as all other model components combined.
Fig. 12, panel B shows computation time versus model resolution for the same model components. In an idealised situation (such as the EISMINT experiments), the SIA and SSA should scale with the resolution to order b_R = 3 in a square-grid model (two orders from the number of grid cells, and one from the time-step dependence on resolution, according to the numerical stability criterion for the solution of the mass conservation equation). In UFEMISM, this can theoretically be reduced to order b_R = 2, since a high resolution is only applied over the one-dimensional ice-sheet margin, where the diffusivity is not necessarily the largest of the model domain. The reason that the results shown in Fig. 12 deviate from this idealised case is that, for a realistic ice sheet, increasing the resolution resolves more topographical features, which increases the length of the one-dimensional domains of the ice margin, the grounding line, and the calving front. This is illustrated in Fig. 13, which shows the number of vertices of meshes for Greenland and Antarctica versus the model resolution at the ice margin. Similarly, resolving more topographical features also leads to an increased ice diffusivity in areas with steep gradients, decreasing the critical time step and increasing the computation time. Preliminary experiments with a simple square-grid model showed the same effect, and the computation time for those experiments scaled with resolution to order b_R = 3.5 – 4.0, rather than the b_R = 3 of the idealised case.
The computation time of the mesh updating component also scales well with model resolution, to order b_R = 2.59. The thermodynamics component scales even better, with order b_R = 0.95. The reason that this is so much lower than for the SIA and SSA components is that the thermodynamics uses a constant rather than a dynamic time step, which is related to the choice of vertical discretisation rather than to the horizontal model resolution.
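Scaling exponents of this kind come from straight-line fits in log-log space: regress ln t on ln C (or ln R) and read the exponent off the slope. A minimal sketch of the procedure, using synthetic timings (not the actual measurements behind Fig. 12):

```python
import numpy as np

# Synthetic wall-clock timings following t = 100 * C^(-0.74) exactly:
cores = np.array([1.0, 2.0, 4.0, 8.0, 16.0])
t_wall = 100.0 * cores ** (-0.74)

# Fit ln t = a - b_C ln C; np.polyfit returns (slope, intercept) for degree 1.
slope, intercept = np.polyfit(np.log(cores), np.log(t_wall), 1)
b_C = -slope
print(round(b_C, 2))  # 0.74: the scaling exponent is recovered from the slope
```

With b_C = 1 the fit would describe perfect parallelisation (doubling C halves t); the fitted b_C = 0.74 for the complete model means doubling the core count reduces the computation time by a factor of about 2^0.74 ≈ 1.67.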
In these experiments, the entire ice margin, including the grounding line and calving front, was given the same resolution.
When using UFEMISM for actual paleo-simulations, such a high resolution would only be used for the grounding line, where it directly increases the physical accuracy of the solution. In one last experiment, we performed the same Antarctic retreat simulation, with the resolution set to 4 km for the grounding line, 16 km for the rest of the ice margin, and 200 km over the ice-sheet interior. Run on 16 cores, the simulation took 5 h 30 m to complete, which predicts a requirement of about 70 h for an entire glacial cycle. Extrapolating these numbers to the Greenland, North American and Eurasian ice sheets is difficult, due to the differences in size, glacial-interglacial geometry changes, and relative area of floating ice. Previous work with the square-grid model ANICE (Berends et al., 2018, 2019) indicates that the Antarctic and North American ice sheets typically have roughly the same computational expense when they are at their maximum extent, while the Eurasian and Greenland ice sheets respectively require about ½ and ¼ as much computation time. Based on these numbers, a full glacial cycle simulation of all four ice sheets with UFEMISM would take somewhere between 1.5 and 3 times as much as one for only Antarctica, implying a required computation time of about 1,600 – 3,200 core hours (100 – 200 wall hours on a 16-core system). For comparison, a full glacial cycle simulation with ANICE at 40 km resolution takes about 8 core hours, which implies that a simulation at 4 km would take about 25,000 – 80,000 core hours (numbers based on extrapolation), meaning that UFEMISM is about 10 – 30 times faster. If the grounding line resolution is decreased to 8 km, these numbers decrease to about 320 – 640 core hours (20 – 40 wall hours) for UFEMISM, and 2,200 – 5,000 core hours for ANICE.
UFEMISM can be compiled with both the gfortran and ifort compilers, requiring only LAPACK, NetCDF and MPI as external packages, and can run on any number of processors (with a minimum of 2) that can access the same shared memory chip. This means that it should be possible to compile and run the model on most consumer-grade systems. The model contains roughly 200 double-precision data fields. For the 3 km resolution Antarctica mesh (~100,000 vertices) shown in Fig. 1, this implies a memory usage of about 1.6 GB. Output is written to NetCDF files at about 10 kB per vertex, implying that a 120 kyr glacial cycle simulation of Antarctica at 3 km (100,000 vertices), where output is written every 1,000 model years, would generate about 150 GB of data.

Conclusions and discussion
We have presented and evaluated a new thermomechanically coupled ice-sheet model, UFEMISM, which solves the SIA and SSA versions of the ice-dynamical equations on a fully adaptive mesh.The model is able to accurately reproduce the analytical solutions for the evolution of schematic ice sheets by Halfar (1981) and Bueler et al. (2005), as well as the EISMINT benchmark experiments (Huybrechts et al., 1996), indicating that the numerical scheme developed for UFEMISM is valid.
Analysis of the computational performance of the model indicates that it scales well with both the number of processors and the model resolution. Based on those results, UFEMISM should be able to simulate the evolution of the four large continental ice sheets over an entire glacial cycle, with a grounding line resolution of 4 km, in 1,600 – 3,200 core hours (100 – 200 wall hours on 16 processors), which is very feasible on typical facilities for scientific computation.
In the benchmark experiments we performed, only the SIA is solved, as floating ice and basal sliding are not considered. While a number of benchmark experiments exist for floating ice, these are less straightforward. MISMIP (Pattyn et al., 2012) describes a one-dimensional (flowline) case, which the subsequent MISMIP3d experiment (Pattyn et al., 2013) showed to be difficult to apply to a two-dimensional case. The two-dimensional experiments in MISMIP3d do not always have analytical solutions, and the results produced by different models are shown to be strongly resolution-dependent. Different semi-analytical solutions for the ice flux across the grounding line in the absence of buttressing have been proposed (Schoof, 2007; Tsai et al., 2015). However, implementing those in numerical models has proven difficult (Pollard and DeConto, 2012; Pattyn et al., 2013), and recent work has demonstrated that these analytical solutions cannot be feasibly altered to account for buttressing, which affects the majority of Antarctic grounding lines (Reese et al., 2018). This means that the problem of solving the stress balance near the grounding line in numerical models, without resorting to very high (< 100 m) resolutions, remains open, and is not solved by this new model configuration.
The current version of UFEMISM has been parallelised using MPI shared memory, meaning the number of processors that can run the model depends on the hardware system, limited by how many processors can access the same memory chip.Since the three most computationally expensive model components (the SSA, SIA and thermodynamics modules, respectively) are shown to scale well, extending the parallelisation framework to allow the model to run on multiple nodes might substantially reduce the wall clock time for large simulations.The current version of UFEMISM uses domain subdivision to parallelise the mesh generation algorithm.This approach lends itself well to multi-node parallelisation, as each node can be assigned a region of the model domain.
The iterative scheme used to solve the SSA is currently the most computationally demanding model component, requiring as much or more computation time as all the other model components combined. Preliminary experiments have identified several possible solutions that could substantially reduce this cost, including a spatially variable relaxation parameter (see Appendix B) and/or a multigrid scheme. Efforts to develop these solutions into applications that are robust and stable enough for the large-scale, long-term simulations for which UFEMISM is intended are ongoing, but are beyond the scope of this study.
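The role of the relaxation parameter can be illustrated with a generic successive over-relaxation (SOR) iteration on a simple Poisson-type system. This is a schematic sketch only, not the model's SSA solver; the test matrix, the function names and the parameter values (e.g. omega = 1.7) are illustrative assumptions. It does show why tuning omega, and potentially making it spatially variable, pays off in iteration count.

```python
import numpy as np

def sor_solve(A, b, omega, tol=1e-10, max_iter=10000):
    """Successive over-relaxation for A x = b; omega is the relaxation
    parameter (omega = 1 recovers plain Gauss-Seidel)."""
    n = len(b)
    x = np.zeros(n)
    for it in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            # Use already-updated values for j < i, old values for j > i
            sigma = A[i, :i] @ x[:i] + A[i, i + 1:] @ x_old[i + 1:]
            x[i] = (1 - omega) * x_old[i] + omega * (b[i] - sigma) / A[i, i]
        if np.max(np.abs(x - x_old)) < tol:
            return x, it + 1
    return x, max_iter

# Illustrative 1-D Poisson-type system (NOT the SSA operator)
n = 20
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x_gs, it_gs = sor_solve(A, b, omega=1.0)    # plain Gauss-Seidel
x_sor, it_sor = sor_solve(A, b, omega=1.7)  # over-relaxed: far fewer iterations
```

For this particular matrix the optimal omega lies near 1.74; a poorly chosen value can also slow or destabilise convergence, which is what motivates a spatially variable choice for a more heterogeneous operator.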
As UFEMISM is intended to be used for paleo-simulations, it has been developed to be able to include an elaborate climate / mass balance forcing. Previous work by our group has focused on developing a computationally efficient climate forcing that explicitly includes important feedback processes in the ice-climate system, such as the altitude-temperature feedback, ice-albedo feedback and orographic precipitation feedback (Berends et al., 2018). While these solutions have not yet been implemented in UFEMISM, it has been designed with them in mind, so that implementing them should be straightforward. We have also previously worked on developing a coupled ice-sheet - sea-level model (de Boer et al., 2014).
A key improvement in UFEMISM with respect to our previous ice-sheet model is the improved treatment of grounding line dynamics, which makes it important to accurately account for changes in the geoid and the resulting changes in water depth at the grounding line.While UFEMISM has not yet been coupled to a sea-level model, it has been designed with such a future coupling in mind.
where \(n_{tri}\) is the number of triangles containing vertex \(v_i\). Preliminary experiments have shown that "weighting" the contributions from the surrounding triangles based on triangle size or internal angle, as is sometimes done, does not significantly affect the solution. Since the triangle derivatives \(\partial f / \partial x |_{t_1}, \partial f / \partial x |_{t_2}, \ldots\) are linear combinations of the vertex values \(f_{v}, f_{c_1}, f_{c_2}, \ldots\), we can write:

\[ \left. \frac{\partial f}{\partial x} \right|_{v_i} = N^x_{v} f_{v_i} + \sum_c N^x_{v,c} f_c , \]

where \(c\) sums over the vertices neighbouring \(v_i\). The "neighbour functions" \(N^x_{v}\) and \(N^x_{v,c}\) follow from the mesh geometry. This means that they can be calculated in advance, and used in the calculation of spatial derivatives of any function whose values are defined on the mesh vertices, greatly reducing the required computation time.
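This construction can be sketched in a few lines. The example below is an illustrative Python sketch, not the model's Fortran implementation: it computes the exact gradient of the plane through a triangle's three vertices, then averages it, unweighted, over the triangles containing a vertex. The tiny five-vertex mesh and all names are hypothetical.

```python
import numpy as np

def triangle_gradient(p, f):
    """Gradient (df/dx, df/dy) of the plane through a triangle's three vertices.
    p: (3, 2) vertex coordinates, f: (3,) function values."""
    # Each edge vector dotted with the gradient must equal the value difference
    M = np.array([p[1] - p[0], p[2] - p[0]])
    return np.linalg.solve(M, [f[1] - f[0], f[2] - f[0]])

def vertex_gradient(vi, tris, coords, f):
    """Unweighted average of the gradients on all triangles containing vertex vi."""
    grads = [triangle_gradient(coords[t], f[t]) for t in tris if vi in t]
    return np.mean(grads, axis=0)

# Hypothetical five-vertex mesh: a centre vertex surrounded by four triangles
coords = np.array([[0., 0.], [1., 0.], [0., 1.], [-1., 0.], [0., -1.]])
tris = [np.array([0, 1, 2]), np.array([0, 2, 3]),
        np.array([0, 3, 4]), np.array([0, 4, 1])]
f = 2.0 * coords[:, 0] + 3.0 * coords[:, 1]   # linear field, exact gradient (2, 3)
g = vertex_gradient(0, tris, coords, f)
```

Because every step here is linear in the vertex values, the coefficients multiplying each neighbour's value can indeed be precomputed once per vertex, which is the point of the neighbour functions.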
On the regular rectangular grid, the discretisation of the second derivative could be interpreted as the gradient of the first derivative on the staggered Arakawa C-grid. This meant that the value of the second derivative on a grid cell could, like the first derivative, be expressed as a linear combination of the function values on that grid cell and its eight neighbours. For the triangular mesh, we take a similar approach. We view the geometric centres of the triangles as "staggered" vertices, where the first spatial derivatives are calculated using the normal vector to the triangle. We construct a new set of "sub-triangles", spanned by the vertex \(v_i\) and these staggered vertices, as illustrated in Fig. A3. Since we know the values of \(\partial f / \partial x\) and \(\partial f / \partial y\) on these new vertices, we can use the same approach as before to calculate the second derivatives on the sub-triangles, and average them to get the value on \(v_i\). Preliminary experiments showed that using the geometric centre instead of the circumcentre yields more stable solutions when using these derivatives to solve differential equations. A mathematical proof of why this is the case might be interesting, but lies beyond the scope of this study. For now, consider only the first derivative of \(f\) with respect to \(x\). The new, staggered vertex in triangle \(t_1\) is given the horizontal coordinates of the geometric centre of \(t_1\), and the derivative of \(f\) on \(t_1\), \(\partial f / \partial x |_{t_1}\), as its vertical coordinate (Eq. C2). The different spatial derivatives of \(\zeta\) follow from Eq. C1. The scaled vertical derivatives of a function, e.g.
These expressions are very similar to the neighbour functions that were used in Appendix A; we will not include their derivation here. Using these expressions to discretise the vertical derivatives in Eq. C2, and using an implicit time-stepping scheme, yields a tridiagonal system of equations for the temperature in each vertical column. This matrix equation has to be solved for every individual grid cell. In UFEMISM, this is done with the Fortran package LAPACK; in Matlab, it can be done with the "backslash" operator: T = A\delta. We apply a Dirichlet boundary condition at the top of the column, keeping the ice temperature equal to the surface air temperature. At the base, a Neumann boundary condition is applied, keeping the vertical temperature gradient fixed to a value dictated by the geothermal heat flux and the frictional heating from sliding. The ice temperature throughout the vertical column is limited to the depth-dependent pressure melting point.
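The per-column solve can be sketched with the standard Thomas algorithm for tridiagonal systems. The following Python sketch is illustrative, not the LAPACK routine used in UFEMISM; the column size, the boundary values (surface temperature, basal gradient) and the steady, source-free interior stencil are assumptions chosen so that the exact solution is a simple linear temperature profile.

```python
import numpy as np

def solve_tridiagonal(a, b, c, d):
    """Thomas algorithm for a tridiagonal system A x = d, with lower diagonal a,
    central diagonal b and upper diagonal c (a[0] and c[-1] are unused)."""
    n = len(b)
    cp, dp = np.empty(n), np.empty(n)
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# Assumed steady, source-free column: T[k-1] - 2 T[k] + T[k+1] = 0 in the interior
nz, dz = 11, 1.0
a, b, c, d = np.ones(nz), np.full(nz, -2.0), np.ones(nz), np.zeros(nz)
# Dirichlet at the top: T[0] = surface air temperature (-20, illustrative)
a[0], b[0], c[0], d[0] = 0.0, 1.0, 0.0, -20.0
# Neumann at the base: (T[-1] - T[-2]) / dz = prescribed gradient (0.5, illustrative)
a[-1], b[-1], c[-1], d[-1] = -1.0, 1.0, 0.0, 0.5 * dz
T = solve_tridiagonal(a, b, c, d)   # linear profile from -20 at the top to -15 at the base
```

The Thomas algorithm is O(nz) per column, so the cost of the thermodynamics solve scales linearly with the number of vertical layers and mesh vertices.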

Figure 1: Meshes generated for Antarctica (a-d) and Greenland (e-h), based on present-day ice-sheet geometry, with maximum ice-margin (including grounding line and calving front) resolutions of 100, 30, 10 and 3 km, respectively. For Antarctica, two high-resolution locations are included: one at the South Pole, and one at the EPICA Dome C drill site. Both have been prescribed the same resolution as the ice margin.
Since the surface mass balance is zero, any change in ice thickness is caused only by ice dynamics, making this a useful experiment for verifying ice-sheet model numerics. We performed several simulations with UFEMISM of an ice sheet that starts at \(t = t_0\) with the Halfar solution for H0 = 5000 m, R0 = 300 km, A = 10^-16 Pa^-3 yr^-1, and n = 3. The ice-margin resolutions for the different simulations were set to 50, 16, 8, and 4 km. Shown in Fig. 3 are transects of the simulated ice sheet at different points in time, compared to the analytical solution.
https://doi.org/10.5194/gmd-2020-288. Preprint. Discussion started: 8 September 2020. © Author(s) 2020. CC BY 4.0 License.
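For reference, the Halfar solution used to initialise and verify these runs can be evaluated in a few lines. The sketch below assumes the standard closed form of the Halfar (1983) dome for n = 3 on a flat bed (as given, e.g., by Bueler et al., 2005); the ice density and gravity values are illustrative assumptions, since the text above does not specify them.

```python
import numpy as np

# Illustrative constants (not specified in the text above)
rho, g = 910.0, 9.81          # ice density [kg m^-3], gravity [m s^-2]
A, n = 1e-16, 3               # Glen flow parameter [Pa^-3 yr^-1] and exponent
H0, R0 = 5000.0, 300e3        # dome height [m] and margin radius [m] at t = t0

Gamma = 2.0 * A * (rho * g) ** n / (n + 2)                      # SIA prefactor
t0 = (1.0 / (18.0 * Gamma)) * (7.0 / 4.0) ** 3 * R0 ** 4 / H0 ** 7

def halfar(r, t):
    """Halfar similarity solution H(r, t) for n = 3 on a flat bed [m]."""
    s = (t0 / t) ** (1.0 / 18.0) * np.abs(r) / R0
    inner = np.maximum(1.0 - s ** (4.0 / 3.0), 0.0)             # zero beyond the margin
    return H0 * (t0 / t) ** (1.0 / 9.0) * inner ** (3.0 / 7.0)
```

At t = t0 this reproduces the initial dome (H = H0 at the divide); for t > t0 the dome thins while its margin, at r = R0 (t/t0)^(1/18), advances, which is the behaviour the simulated transects are compared against.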

Figure 3: The evolution through time of a schematic, radially symmetric, isothermal ice sheet, as simulated by UFEMISM at different ice-margin resolutions, compared to the Halfar solution starting at t = t0.The small sub-panels in the top-right corner of the panels show a zoomed-in view of the ice margin, showing how the simulated ice margin converges to the analytical solution as the model resolution increases.

Figure 4: The evolution through time of a schematic, radially symmetric, isothermal ice sheet, with positive accumulation rate, as simulated by UFEMISM at different ice-margin resolutions, compared to the Bueler solution.
Shown in Fig. 5 are the simulated ice thickness and ice velocity of a radial transect of the ice sheet in experiment I, at the end of a 120 kyr simulation that was initialised with an ice thickness of zero. These results agree well with those presented by Huybrechts et al. (1996), showing an ice sheet that is ~2960 m thick at the divide, and has a maximum outward ice velocity of ~55 m yr^-1 at approximately 450 km away from the divide. The small sub-panel in the upper right corner of panel A zooms in on the ice margin, showing that the modelled ice margin converges to the analytical ice margin (the perimeter of the circle where the mass balance integrates to zero) as the resolution increases.

Figure 5: a) Ice thickness and b) vertically averaged ice velocity of a radial transect of the ice sheet in experiment I, simulated by UFEMISM at 50, 16, 8, and 4 km ice-margin resolutions. The vertical dashed line in panel a denotes the analytical ice margin.

Figure 6: a) Ice thickness and b) vertically averaged ice velocity of a radial transect of the ice sheet in experiment IV, simulated by UFEMISM at 50, 16, 8, and 4 km ice-margin resolutions. The sharp peaks in the velocity near the margin are a result of the singularity at these Nye-Vialov margins: the ice thickness approaches zero as one approaches the margin, but the ice flux remains finite, implying an infinite velocity.

Figs. 7 and 8 show the temperature transect of the ice sheet for experiments I and IV at 4 km resolution, and the basal temperature transects for all simulations in these experiments. Following the specifications from Huybrechts et al. (1996), the

Figure 7: a) Ice temperature and velocity vectors for the steady-state ice sheet in experiment I, as produced by UFEMISM with a 4 km ice-margin resolution. b) Basal temperature transects for the different simulations in the same experiment.

Figure 8: a) Ice temperature and velocity vectors for the steady-state ice sheet in experiment IV, as produced by UFEMISM with a 4 km ice-margin resolution. b) Basal temperature transects for the different simulations in the same experiment.

Fig. 9 shows the ice thickness change at the ice divide over time for experiments II, III, V and VI. Here too, the results from simulations with UFEMISM at different resolutions agree with the results published by Huybrechts et al. (1996). The glacial-interglacial (G-IG) ice thickness changes for all four experiments lie within the ranges reported by Huybrechts et al. (1996), as listed in the top-right corners of both panels of Fig. 9. The simulated ice thickness time series at different resolutions are indistinguishable. Fig. 10 shows the basal temperature relative to the pressure melting point over time, for the same experiments.

Figure 9: a) Ice thickness change at the divide for experiments II and III (moving margin, with 20 and 40 kyr sinusoidal mass balance perturbations, respectively). b) The same for the fixed-margin experiments (V and VI), simulated by UFEMISM at 50, 16, 8, and 4 km ice-margin resolutions. Listed in the top-right corners of both panels are the glacial-interglacial ice thickness changes simulated by UFEMISM, compared to the ranges reported by Huybrechts et al. (1996) in brackets.

Figure 10: a) Basal temperature change at the divide for experiments II and III (moving margin, with 20 and 40 kyr sinusoidal mass balance perturbations, respectively). b) The same for the fixed-margin experiments (V and VI), simulated by UFEMISM at 50, 16, 8, and 4 km ice-margin resolutions. Listed in the top-right corners of both panels are the glacial-interglacial basal temperature changes simulated by UFEMISM, compared to the ranges reported by Huybrechts et al. (1996) in brackets.

Figure 11: Results from the 10,000 yr Antarctic retreat simulation with a 4 km grounding-line resolution. Panels a-f show the modelled ice sheet at 2,000 yr intervals. Bedrock elevation is indicated by colours; surface elevation contours are shown at 1,000 m intervals. The computation times of the different model components as a function of the number of cores and model resolution are shown in Fig. 12.

Figure 12: a) Computation times vs number of cores for the SSA (blue), SIA (red), thermodynamics (yellow) and mesh updating (purple) model components, as well as for the entire model (black), for a 10,000 yr Antarctic retreat simulation at 8 km resolution.These four components together account for ~99.8 % of the total computation time.b) Computation times vs model resolution for the same model components, run on 16 cores.

Fig. 12, panel A shows the degree of parallelisation of the model components. For each model component, logarithmic fits

Figure 13: Number of vertices vs ice margin resolution for Greenland and Antarctica.

Figure A3: The geometric centres of the triangles surrounding vertex v_i make up the vertices of a new set of "sub-triangles", which can be used to define the second derivative of a function on vertex v_i.
This system of equations can be represented by the matrix equation \(\mathbf{A}\,\mathbf{T} = \boldsymbol{\delta}\), where the lower diagonal, central diagonal and upper diagonal of the tridiagonal matrix \(\mathbf{A}\), and the vector \(\boldsymbol{\delta}\), are given by: