This work is distributed under the Creative Commons Attribution 4.0 License.
Special issue: The PALM model system 6.0 for atmospheric and oceanic boundary-layer...
Development and technical paper 03 Jun 2021
A nested multi-scale system implemented in the large-eddy simulation model PALM model system 6.0
 ^{1}Finnish Meteorological Institute, P.O. Box 503, 00101, Helsinki, Finland
 ^{2}Software Consultant, Beethovenstr. 29A, 12247 Berlin, Germany
 ^{3}Leibniz University Hannover, Institute of Meteorology and Climatology, Herrenhäuser Strasse 2, 30419 Hanover, Germany
 ^{4}Deutscher Wetterdienst, Frankfurter Straße 135, 63067 Offenbach, Germany
 ^{5}Aristotle University Thessaloniki, P.O. Box 483, 54124, Thessaloniki, Greece
 ^{6}University of Bergen, Geophysical Institute, Postboks 7803, 5020 Bergen, Norway
Correspondence: Antti Hellsten (antti.hellsten@fmi.fi) and Mikko Auvinen (mikko.auvinen@fmi.fi)
Large-eddy simulation (LES) provides a physically sound approach to study complex turbulent processes within the atmospheric boundary layer, including urban boundary layer flows. However, such flow problems often involve a large separation of turbulent scales, requiring a large computational domain and very high grid resolution near the surface features, leading to prohibitive computational costs. To overcome this problem, an online LES–LES nesting scheme is implemented into the PALM model system 6.0. The nesting method documented and evaluated here is capable of supporting multiple child domains, which can be nested within their parent domain either in a parallel or recursively cascading configuration. The nesting system is evaluated by first simulating a purely convective boundary layer flow system and then three different neutrally stratified flow scenarios with increasing order of topographic complexity. The results of the nested runs are compared with corresponding non-nested high- and low-resolution results. The results reveal that the solution accuracy within the high-resolution nest domain is clearly improved as the solutions approach the non-nested high-resolution reference results. In obstacle-resolving LES, the two-way coupling becomes problematic, as anterpolation introduces a regional discrepancy within the obstacle canopy of the parent domain. This is remedied by introducing canopy-restricted anterpolation, where the operation is only performed above the obstacle canopy. The test simulations make evident that this approach is the most suitable coupling strategy for obstacle-resolving LES. The performed simulations testify that nesting can reduce the CPU time by up to 80 % compared to the fine-resolution reference runs, while the computational overhead from the nesting operations remained below 16 % for the two-way coupling approach and significantly less for the one-way alternative.
Large-eddy simulation (LES) has been used for basic research of atmospheric boundary layer (ABL) phenomena using idealized model setups for decades. At present, it is becoming an important method in applied research on realistic, very detailed, and complicated flow systems such as urban ABL problems (Britter and Hanna, 2003; Tseng et al., 2006; Bou-Zeid et al., 2009; Tominaga and Stathopoulos, 2013; Giometto et al., 2016; Buccolieri and Hang, 2019; Auvinen et al., 2020a). Until recent years, there were no ABL LES models capable of modeling detailed surface structures, such as buildings or steep complex terrain shapes, in the ABL. Nowadays, it is possible to carry out LES for complex built areas (e.g., Letzel et al., 2008), but this is still limited to relatively small areas because of the high spatial resolution requirement. Concerning urban LES, Xie and Castro (2006) have shown that at least 15 to 20 grid nodes are needed across street canyons to satisfactorily resolve the most important turbulent structures within the canyons. This requirement typically leads to grid spacings on the order of 1 m. However, the vertical extent of the LES domain should scale with the ABL height, and the horizontal size should span over several ABL heights in order to capture the ABL-scale turbulent structures (de Roode et al., 2004; Fishpool et al., 2009; Chung and McKeon, 2010; Auvinen et al., 2020a). To adequately capture processes on the street scale and to simultaneously capture large ABL-scale turbulence, sufficiently large model domains at small grid sizes are required, posing high demands on the computational resources in terms of CPU time and memory. Moreover, the uncertainty related to the lateral boundary conditions usually decreases as the domain is made larger.
Many conventional continuum-based numerical solution methods (e.g., finite-element and finite-volume methods) allow variable resolution, so that the resolution can be concentrated in the area of principal interest and relaxed elsewhere. However, only unstructured grid systems allow full advantage to be taken of spatially variable resolution. Many general-purpose computational fluid dynamics packages offer unstructured grid systems, but in our experience such solvers are usually computationally far less efficient than ABL-tailored LES models, such as PALM (Raasch and Schröter, 2001; Maronga et al., 2015, 2020), the Weather Research and Forecasting Model (WRF) (Skamarock et al., 2008) with its LES option, and the Dutch Atmospheric Large-Eddy Simulation (DALES) (Heus et al., 2010), which are based on a structured orthogonal grid system with constant horizontal resolution. The model nesting approach can be exploited to further speed up ABL LES models or to allow larger domain sizes without compromising the resolution in the area of primary interest. The authors acknowledge that alternative solution approaches, such as the lattice Boltzmann method (e.g., Aidun and Clausen, 2010; Ahmad et al., 2017), also offer strategies for computational efficiency improvements. The presented nesting approach is primarily relevant for the family of structured, finite-difference Navier–Stokes solvers.
The idea of grid nesting is to simultaneously run a series of two or more LES model domains with different spatial extents and grid resolutions. In this implementation the outermost and coarsest-resolution LES domain (termed "root" domain henceforth), which acts as a "parent" to its "child" domains, obtains its boundary conditions in a conventional manner, whereas a nested LES domain (child) always obtains its boundary conditions from its respective parent domain through interpolation. In one-way coupled nesting only the children obtain information from their parents. In such a coupling strategy, the instantaneous child and parent solutions can deviate within the volume of the nest. If a stronger binding between the solutions is desired, the child solution needs to be incorporated into the parent solution. This is achieved in two-way coupled nesting, where the parent solutions are influenced by their children through so-called "anterpolation" (Clark and Farley, 1984; Clark and Hall, 1991; Sullivan et al., 1996). The term anterpolation was coined by Sullivan et al. (1996), although the concept is older.
The child-to-parent anterpolation can be implemented using, for instance, the post-insertion (PI) approach (Clark and Hall, 1991), where the parent solution is replaced by the child solution within the volume occupied by both domains. In practice, some buffer zones where anterpolation is omitted are necessary near the child boundaries to avoid the growth of unphysical perturbations there (Moeng et al., 2007). An example of two-way coupled nesting implemented in the WRF-LES model is given by Moeng et al. (2007), and it was later successfully applied to a stratocumulus study by Zhu et al. (2010). The WRF-LES nesting system can also be used in one-way coupled mode (Mirocha et al., 2013), and it has been applied in this way, e.g., to complex-terrain studies (Nunalee et al., 2014; Muñoz-Esparza et al., 2017) and to an offshore convective boundary layer study (Muñoz-Esparza et al., 2014). Subsequently, Daniels et al. (2016) introduced vertical interpolation into the WRF model, but this method is restricted to one-way coupled nesting. A recent implementation of an immersed boundary method has made WRF-LES with a one-way coupled nesting approach applicable to obstacle-resolving LES (Wiersema et al., 2020). In addition to WRF-LES, the numerical weather prediction model ICON (Zängl et al., 2015) features an LES mode and includes an online nesting capability (Heinze et al., 2017). However, due to its terrain-following coordinate system, ICON-LES cannot resolve sharp obstacle structures.
Recently, Huq et al. (2019) implemented a purely vertical nesting system into PALM in which the child and parent domains are required to have the same horizontal extent. Although this approach is useful, e.g., when the grid resolution near the surface needs to be refined to better capture the atmosphere–surface exchange, the requirement of equal horizontal domain extents poses high demands on the computational resources, limiting this approach to academic studies. This implementation is also limited to a single child domain. For these reasons, we decided to develop the present, more general, fully three-dimensional nesting system in PALM. It can also be run in a pure vertical nesting mode.
One-way coupled obstacle-resolving LES has been applied to a built environment by Nakayama et al. (2016) and by Vonlanthen et al. (2016, 2017). The present PALM implementation has also already been demonstrated by Maronga et al. (2019, 2020) and applied to obstacle-resolving urban studies (Kurppa et al., 2019, 2020; Auvinen et al., 2020a; Karttunen et al., 2020) using one-way coupling. At present, we are not aware of any research on obstacle-resolving LES in the ABL context employing a two-way coupled nesting approach. Through our studies, we have observed that the application of two-way coupling in obstacle-resolving LES can become problematic. Therefore, in addition to documenting and evaluating the newly implemented nesting method in the PALM model, this paper addresses the applicability of the two-way coupled nesting approach in obstacle-resolving LES.
The paper is organized as follows. Section 2 gives a brief description of the LES mode of the PALM model system 6.0. Section 3 presents the technical, algorithmic, and numerical aspects of the implemented nesting. In Sect. 4 the implemented nesting is evaluated for a series of test cases featuring different kinds of boundary layer flow. Finally, Sect. 5 summarizes the results and gives an outlook on future developments.
The PALM model system (Raasch and Schröter, 2001; Maronga et al., 2015, 2020) is based on the non-hydrostatic, filtered Navier–Stokes equations in a Boussinesq-approximated or anelastic form. It solves the prognostic equations for the conservation of momentum, mass, energy, and moisture on a staggered Cartesian Arakawa-C grid. Subgrid-scale turbulence is parameterized using a 1.5-order closure following Deardorff (1980) in the formulation of Saiki et al. (2000). In its standard configuration, PALM thus has seven prognostic quantities: the velocity components $u_i$ (where $u_1=u$, $u_2=v$, $u_3=w$), the potential temperature θ, specific humidity q_{v}, a passive scalar s, and the subgrid-scale (SGS) turbulent kinetic energy e. By default, discretization in time and space is achieved using a third-order Runge–Kutta scheme following Williamson (1980) and a fifth-order advection scheme following Wicker and Skamarock (2002). The horizontal grid spacing is always equidistant, whereas variable grid spacing may be used in the vertical direction. Often, the vertical grid spacing is set equidistant within the boundary layer, and stretching is applied above the boundary layer to save computational time in the non-turbulent free atmosphere. At the lateral boundaries, cyclic conditions or more advanced in- and outflow conditions can be employed.
Both the Boussinesq and the anelastic approximation require incompressibility of the flow. To ensure this, a predictor–corrector method is used in which an equation for the modified perturbation pressure is solved after every Runge–Kutta sub-step (e.g., Patrinos and Kistler, 1977). The method involves the calculation of a preliminary prognostic velocity. Divergences in the flow field are then attributed solely to the pressure term, leading to a Poisson equation for the perturbation pressure. In the case of cyclic lateral boundaries, the Poisson equation is solved by a direct fast Fourier transform (FFT) method. In the case of non-cyclic boundary conditions, however, an iterative multigrid scheme is used (e.g., Hackbusch, 1985).
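The projection idea described above can be condensed into a minimal 1-D periodic sketch: the divergence of a preliminary velocity is removed by solving a Poisson equation with an FFT and subtracting the resulting pressure gradient. This is an illustration only, not PALM's actual discretization or solver; the modified wavenumber of the central difference is used so that the corrected field is divergence-free in the discrete sense.

```python
import numpy as np

n, dx = 64, 1.0
L = n * dx
x = np.arange(n) * dx
# Preliminary (divergent) velocity field, a stand-in for u_pre
u_pre = np.sin(2 * np.pi * x / L) + 0.3 * np.cos(6 * np.pi * x / L)

def ddx(f):
    # Periodic central difference
    return (np.roll(f, -1) - np.roll(f, 1)) / (2 * dx)

div = ddx(u_pre)

# FFT Poisson solve, d2p/dx2 = div, using the modified wavenumber of the
# central-difference operator so that divergence removal is exact discretely
k = 2 * np.pi * np.fft.fftfreq(n, d=dx)
kmod = np.sin(k * dx) / dx
p_hat = np.zeros(n, dtype=complex)
nz = np.abs(kmod) > 1e-8              # skip mean and Nyquist modes
p_hat[nz] = np.fft.fft(div)[nz] / (-kmod[nz] ** 2)
p = np.real(np.fft.ifft(p_hat))

# Corrector step: subtract the perturbation-pressure gradient
u = u_pre - ddx(p)
assert np.max(np.abs(ddx(u))) < 1e-10   # corrected field is divergence-free
```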
Parallelization of PALM is achieved by using the message passing interface (MPI, e.g., Gropp et al., 1999) and a two-dimensional (horizontal) domain decomposition.
PALM offers several embedded models to simulate physical processes within the urban environment. Namely, without the intention of providing an exhaustive list, this embraces a land surface (Gehrke et al., 2020) and a building surface model (Resler et al., 2017) to consider the surface–atmosphere exchange of heat and moisture, a radiative-transfer model (Krč et al., 2020) to include complex three-dimensional mutual radiative interactions between surfaces and plants, an indoor and building energy demand model, an aerosol (Kurppa et al., 2019) and an air chemistry model (Khan et al., 2020), and an embedded Lagrangian particle model for dispersion. For a complete overview we refer to Maronga et al. (2020).
3.1 General concept
The nesting system we have developed is based on the concept of parent and child domains. Each parent domain can enfold multiple child domains, but a child domain can, naturally, only have one parent domain. The top-level domain, also called the root domain, acts as a parent domain to the child domains at the first nesting level. The child domains at the first nesting level may have subsequent child domains for which they in turn act as parent domains (cascading arrangement); see Fig. 1. Our nesting system allows for up to 63 nested domains plus the root domain. The implementation requires that all child domains are always completely located inside their respective parent domain. In addition, the grid spacings of a child domain naturally have to be smaller than those of its parent domain. The grid-spacing ratios $\Delta X_i / \Delta x_i$ must always be integer-valued, although different ratios may be used in different directions. Therefore, in nested runs grid stretching is only allowed in the root domain and only above the top boundary of the highest nested domain. There may be multiple child domains at the same nesting level, but overlapping child domains at the same nesting level are not permitted. Finally, all nest domains have to be surface-bound, meaning that elevated child domains are not allowed. Time synchronization is taken care of by simply selecting the minimum of the time steps determined by each model independently and broadcasting this time step to all models. Each model handles its input and output in the same way.
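The domain hierarchy and the time-step synchronization just described can be sketched in a few lines. This is a conceptual illustration only (class and domain names are ours, not PALM's; in PALM the reduction is done across MPI communicators): every domain proposes its own stable time step, and the global minimum is adopted by all.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Domain:
    """One LES domain in the nesting hierarchy (illustrative, not PALM code)."""
    name: str
    parent: Optional["Domain"]   # None only for the root domain
    dt_local: float              # time step from this domain's own stability criterion

root = Domain("root", None, 2.0)
child1 = Domain("child1", root, 0.8)        # finer grid -> smaller stable step
child2 = Domain("child2", root, 1.1)        # parallel child at the same level
child11 = Domain("child1.1", child1, 0.25)  # cascading nest inside child1

domains = [root, child1, child2, child11]

# Every non-root domain has exactly one parent
assert all(d.parent is not None for d in domains if d is not root)

# Synchronization: the minimum local step is broadcast to all models
dt_global = min(d.dt_local for d in domains)
assert dt_global == 0.25
```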
In general, the system is designed as a two-way coupled nesting, in which a child domain can affect its parent domain and vice versa. It is possible, however, to run the system in a one-way coupled mode, where no feedback from the child domain is incorporated in its parent domain. Moreover, it is possible to use the system as a purely vertical one-dimensional nesting, where the lowest part of the model (e.g., the atmospheric surface layer, where the dominant turbulent eddies are usually very small) can be run as a child domain with finer grid spacing than its parent domain, which encompasses the entire boundary layer. In the case of purely vertical nesting, cyclic boundary conditions must be set on all lateral boundaries. Unlike the method proposed by Huq et al. (2019), the present method also allows a cascade of more than one child domain in the purely vertical nesting cases.
The present nesting approach is a variant of the PI method, in which the communication between each parent–child couple is realized via interpolations (from parent to child) and anterpolations (from child to parent) after each Runge–Kutta substep and just before the pressure solver. The latter then ensures that mass conservation is enforced in the anterpolated solution in the parent domain.
3.2 Restrictions
The current implementation poses a few restrictions on nested setups. Moreover, the interpolation and anterpolation methods, which are discussed in the following sections, are based on certain assumptions, e.g., on grid-line matching between parent and child domains, leading to a few more restrictions. Altogether, these restrictions are as follows:

- the child domain must always be completely inside its parent domain, and there must be a margin of four parent grid cells between the boundaries of the child and parent domains;
- parallel child domains must not overlap each other, and there must be a margin of at least four child grid cells between two parallel child domains;
- the domain decomposition of all child domains must be such that the subdomain size is never smaller than the parent grid spacing in the respective direction;
- buildings or any other topography or geometry must not reach the child domain top;
- all the grid-spacing ratios must be integer-valued;
- the outer boundaries of a child domain must coincide with grid planes of its parent domain;
- no grid stretching is allowed in the child domains, and in the root domain it is allowed only above the top boundary of the highest child domain.
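Some of the geometric restrictions above reduce to simple arithmetic checks. The following is a minimal sketch under stated assumptions (hypothetical helper functions, not PALM's actual input validation; 2-D horizontal extents in metres):

```python
def valid_ratio(parent_dx, child_dx):
    """Restriction: the parent-to-child grid-spacing ratio must be integer-valued."""
    ratio = parent_dx / child_dx
    return ratio >= 1.0 and abs(ratio - round(ratio)) < 1e-9

def inside_with_margin(parent_extent, child_origin, child_extent,
                       parent_dx, margin_cells=4):
    """Restriction: the child lies completely inside its parent, with a
    margin of four parent grid cells on each lateral boundary."""
    margin = margin_cells * parent_dx
    x0, y0 = child_origin
    lx, ly = child_extent
    Lx, Ly = parent_extent
    return (x0 >= margin and y0 >= margin and
            x0 + lx <= Lx - margin and y0 + ly <= Ly - margin)

# Example: a 3 km x 3 km parent with 30 m spacing and a 10 m child grid
assert valid_ratio(30.0, 10.0)        # ratio 3: allowed
assert not valid_ratio(30.0, 8.0)     # ratio 3.75: rejected
assert inside_with_margin((3000.0, 3000.0), (600.0, 600.0),
                          (1200.0, 1200.0), 30.0)
```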
3.3 Structure of the nesting algorithm
Ideally, the coupling actions, i.e., data transfers between the domains, anterpolation, and interpolation, would be performed after the pressure-correction step using the final divergence-free velocity field on both parent and child. Achieving this in the context of the pressure-correction method employed in PALM requires a staged arrangement of the coupling actions such that a child first sends data to its parent, and after receiving the data the parent anterpolates and performs the pressure-correction step. After the pressure-correction step, the parent sends data to the child, which interpolates new boundary conditions from the received data and performs its pressure-correction step. The purely vertical nesting method implemented in PALM by Huq et al. (2019) features this kind of staged structure. However, Huq et al.'s method may lead to excessive waiting times, as the child has to wait until the parent performs the pressure-correction step and vice versa. Moreover, the staged coupling approach becomes more complicated and more inefficient when a cascade of several nested domains is used. Therefore, Huq's implementation allows for only one child domain. The possibility to employ cascades of child domains was an initial requirement for the present system design, and therefore the staged coupling arrangement had to be abandoned. In principle, it would be possible to perform the pressure-correction step twice: first before the coupling actions for all domains, and a second time for all parent domains after the coupling to make the anterpolated fields divergence-free. However, this would be computationally very expensive, severely compromising the benefits of the nesting, because the pressure solution is typically the most time-consuming part of the solution process. To avoid this extra penalty, the coupling is based on the preliminary prognostic velocity fields u_{pre} in the present implementation.
The sequence of the coupling actions is illustrated in Fig. 2. This choice has the consequence that the interpolated velocity boundary conditions for a child domain may violate the global mass balance over the child domain such that

$$\int_{S}\tilde{\boldsymbol{u}}_{\text{pre}}\cdot\boldsymbol{n}\,\mathrm{d}S\ne 0,\qquad(1)$$
where the tilde symbol is the interpolation operator and $\boldsymbol{n}$ is the unit inner surface normal vector of the child domain boundary S excluding the bottom boundary. This mass-conservation error, though typically small, is eliminated in an integral sense by adding a constant velocity correction $\Delta u_{\text{pre}}^{(l)}$ on each boundary l∈{left, right, south, north, top},

$$\Delta u_{\text{pre}}^{(l)}=-\frac{1}{S}\int_{S}\tilde{\boldsymbol{u}}_{\text{pre}}\cdot\boldsymbol{n}\,\mathrm{d}S,\qquad(2)$$
to the interpolated child boundary values to exactly eliminate the global mass-balance error in Eq. (1). In the case of a purely vertical nesting mode, the correction is applied only on the top boundary, and S spans only over it. This correction is made for all child domains right before the pressure-correction step. According to our tests, Δu_{pre} is typically 3 to 4 orders of magnitude smaller than the dominant velocity scales of the flow.
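The constant-correction idea can be verified numerically: subtracting the net interpolated flux divided by the total open-boundary area makes the boundary values globally mass-conserving to machine precision. A minimal sketch, with illustrative face arrays (PALM applies the correction per boundary plane):

```python
import numpy as np

def mass_balance_correction(u_n, areas):
    """u_n: interpolated boundary-normal velocity (inner normal positive) on
    the open child-boundary faces; areas: the corresponding face areas.
    Returns the constant correction added to every face value."""
    net_flux = np.sum(u_n * areas)       # nonzero mass-balance error (Eq. 1)
    return -net_flux / np.sum(areas)

rng = np.random.default_rng(42)
u_n = rng.normal(0.0, 1.0, size=5000)    # stand-in for interpolated u_pre values
areas = np.full(5000, 4.0)               # uniform 2 m x 2 m child faces
du = mass_balance_correction(u_n, areas)
u_corrected = u_n + du

# Net volume flux through the open boundaries now vanishes
assert abs(np.sum(u_corrected * areas)) < 1e-8
```

The correction is small relative to the velocity scale here, consistent with the observation in the text that Δu_{pre} is orders of magnitude below the dominant flow velocities.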
Huq et al. (2019) showed results for a zero-mean-wind convective boundary layer (CBL) case. In this case, especially if the nest-top boundary is set at a relatively low level, an unphysical overestimation of the horizontal velocity component variances easily develops if the coupling is based on u_{pre}. Huq et al. (2019) showed that using the staged sequence of coupling actions, which allows the coupling to be based on the final velocity field u, mostly removes the overestimation of the horizontal velocity variances. We have confirmed this by temporarily modifying the current implementation to adhere to Huq et al.'s staged arrangement and simulating a vertically nested zero-mean-wind CBL case similar to Huq et al.'s test case.
In the present method, the overestimation of the horizontal velocity component variances can be mostly avoided by using the integral mass-balance forcing (Eq. 2) and, further, by setting a narrow buffer zone below the top boundary in which the anterpolation is not performed. This is described in more detail in Sect. 3.5.
In addition to the velocity field, all other prognostic variables are also coupled, except for the SGS turbulent kinetic energy (e), as it depends on the resolution by definition and is therefore not straightforward to couple between parent and child domains of different resolutions. The anterpolated values would have to be increased by some unknown resolution-dependent factor, and the interpolated values reduced accordingly. Moreover, e strongly follows the velocity gradient field, and therefore it tends to adapt to the anterpolated velocity field on the parent side during the next Runge–Kutta step without being anterpolated itself. Relying on this reasoning, we omit the anterpolation of e. We further assume that the local generation of e often dominates its advection, implying that replacing the interpolation of its child boundary values with simple zero-gradient conditions may be acceptable. In our numerical tests we compared the zero-gradient conditions with interpolated boundary values reduced by an estimated resolution-difference-dependent factor; the comparisons revealed only negligible differences in the results.
Further technical implementation issues are discussed in Appendix A.
3.4 Interpolation (parent to child)
3.4.1 Emphasis on conservation properties
Conservation of fluxes through the nest boundaries is an essential condition for a nesting algorithm. By flux conservation we mean that the total flux through a nest-domain boundary is equal to the total flux through the corresponding plane in the parent domain. This must not be confused with the mass-conservation error discussed in Sect. 3.3, Eq. (1), which results from the fact that the nest boundary conditions are interpolated from the preliminary velocity field instead of the final divergence-free velocity field.
Earlier studies by Kurihara et al. (1979) and Clark and Farley (1984) focused mostly on the conservation of prognostic variables but not on the conservation of their advection fluxes. Later, Sullivan et al. (1996) and Zhou et al. (2018) also paid attention to the conservation of fluxes, which according to our observations is very important. For example, if no attempt is made to minimize the flux-conservation errors of the velocity components on the nest boundaries, the mean flow angle across the whole system of domains may become significantly deflected. We observed this unacceptable phenomenon in an early phase of this work while experimenting with an interpolation scheme that produces non-negligible flux-conservation errors. Therefore, we construct the interpolation method such that the flux-conservation errors on the nest boundaries are minimized. In our view, the conservation properties are far more important than local accuracy at the nest boundaries. It will be shown in Sect. 3.4.2 that zeroth-order interpolations are favorable in terms of conservation properties, although their local accuracy is obviously lower than that of higher-order interpolations. Here, it should be noted that increasing the order of interpolation accuracy on the child boundary planes is irrelevant because in all cases the solution requires a development zone (i.e., a border margin) as it adapts to the increased resolution. Therefore, a low-order interpolation method with acceptable conservation properties should be preferred.
In principle, the most straightforward way to satisfy flux conservation is to directly use the flux on the parent grid cell face on the nest boundary and to distribute it onto the underlying child grid cell faces, akin to the finite-volume method. However, PALM is based on the finite-difference method, and thus its architecture does not support this approach. Therefore, it is necessary to construct an interpolation procedure that is at least approximately flux-conservative.
3.4.2 General conservation considerations
Before laying out the new interpolation procedure in Sect. 3.4.3, the earlier developments are reviewed, and their merits and weaknesses are discussed in this section.
We first consider the work by Kurihara et al. (1979), who required conservation of prognostic variables in the form

$$\mathrm{\Phi}_{I}\,\Delta S_{I}=\sum_{i}\mathit{\varphi}_{i}\,\Delta s_{i},\qquad(3)$$
where ΔS_{I} and Δs_{i} are the face areas of the parent and child grid cells on the nest boundary and $\sum_{i}\Delta s_{i}=\Delta S_{I}$. Here the child variables and indices are denoted by lowercase letters, and the parent variables and indices are indicated by uppercase letters. Clark and Farley (1984) later applied this condition to both interpolation and anterpolation and called it the reversibility condition. The reversibility condition can also be written as

$$\widehat{\tilde{\mathrm{\Phi}}}_{I}=\mathrm{\Phi}_{I},\qquad(4)$$
where the tilde is the interpolation operator and the hat is the anterpolation operator. Based on this condition, Clark and Farley (1984) derived a quadratic interpolation scheme that forms a reversible pair with their anterpolation scheme. This reversibility guarantees mass-flux conservation (in incompressible flows). However, it does not guarantee conservation of other fluxes, which consist of products of the advected variable and the advective velocity component. As recently noted by Zhou et al. (2018), the conservation of other fluxes is violated if both the advective velocity component and the advected variable are interpolated using the Clark and Farley scheme. Although not mentioned by Zhou et al. (2018), this actually applies to any interpolation scheme higher than zeroth order due to the nonlinearity of the advection terms. Instead of applying the reversibility requirement, Zhou et al. (2018) require global conservation of both the prognostic variable ϕ and its resolved-scale turbulent flux $\langle u^{(\mathrm{N})\prime}\mathit{\varphi}^{\prime}\rangle$ through the boundary as

$$\langle\tilde{\mathit{\varphi}}\rangle_{\mathrm{b}}=\langle\mathrm{\Phi}\rangle_{\mathrm{b}},\qquad(5)$$

$$\langle\tilde{u}^{(\mathrm{N})\prime}\tilde{\mathit{\varphi}}^{\prime}\rangle_{\mathrm{b}}=\langle U^{(\mathrm{N})\prime}\mathrm{\Phi}^{\prime}\rangle_{\mathrm{b}},\qquad(6)$$
where the superscript (N) refers to a boundary-normal velocity component and 〈⋅〉_{b} denotes spatial averaging over the child domain boundary. It is straightforward to show that if Eq. (5) holds, the flux-conservation condition Eq. (6) can also be written for the total flux as

$$\langle\tilde{u}^{(\mathrm{N})}\tilde{\mathit{\varphi}}\rangle_{\mathrm{b}}=\langle U^{(\mathrm{N})}\mathrm{\Phi}\rangle_{\mathrm{b}}.\qquad(7)$$
We shall study the flux conservation using this condition instead of Eqs. (5) and (6).
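The practical content of this condition can be demonstrated numerically: when both the boundary-normal velocity and the advected variable are interpolated with a zeroth-order scheme, the boundary-averaged flux is conserved exactly, whereas interpolating both fields with any higher-order scheme (linear here, as a simple stand-in for the quadratic Clark and Farley scheme) generally violates it, because the advective flux is a nonlinear product. A collocated 1-D sketch under these simplifying assumptions:

```python
import numpy as np

def interp0(f, r):
    """Zeroth-order interpolation: copy each parent value onto its r child nodes."""
    return np.repeat(f, r)

def interp1(f, r):
    """Linear interpolation onto a child line refining the parent line r times."""
    xp = np.arange(f.size)
    xc = np.linspace(0.0, f.size - 1.0, r * (f.size - 1) + 1)
    return np.interp(xc, xp, f)

U = np.array([0.0, 2.0, 0.0, 2.0, 0.0])   # advective velocity on the parent line
Phi = U.copy()                            # advected variable on the parent line
r = 2                                     # grid-spacing ratio

flux_parent = np.mean(U * Phi)                              # 1.6
flux_zeroth = np.mean(interp0(U, r) * interp0(Phi, r))      # 1.6: conserved
flux_linear = np.mean(interp1(U, r) * interp1(Phi, r))      # 4/3: not conserved

assert np.isclose(flux_zeroth, flux_parent)
assert not np.isclose(flux_linear, flux_parent)
```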
In order to ensure flux conservation for all prognostic variables, Zhou et al. (2018) chose to use the Clark and Farley method only for the advective velocity component, and for all advected variables they used the simple zeroth-order method

$$\tilde{\mathit{\varphi}}_{i}=\mathrm{\Phi}_{I},\qquad(8)$$

where the child node i lies within the parent grid cell I.
Obviously, this zeroth-order interpolation also satisfies the reversibility condition Eq. (4) in addition to Eq. (5). This choice readily satisfies the flux-conservation condition for all variables. However, in cases with even-valued grid-spacing ratios, Eq. (8) cannot be applied to the velocity components that are in staggered positions relative to u^{(N)} and U^{(N)} on the boundary plane. These velocity components are denoted by u^{(S)} and U^{(S)}. Equation (8) is not applicable in this case because the odd number of child grid u^{(S)} nodes cannot be evenly associated with the surrounding U^{(S)} nodes; see Fig. 3. The method by Zhou et al. (2018) is thus strictly limited to odd-valued grid-spacing ratios.
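The reversibility of the zeroth-order interpolation with respect to a top-hat (area-averaging) anterpolation can be checked numerically. A 1-D sketch with uniform face areas (the operator names and the top-hat anterpolation are our illustrative choices, not PALM's internals):

```python
import numpy as np

def interp0(parent, r):
    """Zeroth-order interpolation: each parent value is copied onto its r
    underlying child nodes."""
    return np.repeat(parent, r)

def anterp(child, r):
    """Top-hat anterpolation: area-weighted average of the child nodes under
    each parent node (uniform face areas assumed)."""
    return child.reshape(-1, r).mean(axis=1)

parent = np.array([1.0, 3.0, -2.0, 0.5])
for r in (2, 3, 5):                      # even and odd grid-spacing ratios
    child = interp0(parent, r)
    # Reversibility: anterpolation undoes the interpolation exactly
    assert np.allclose(anterp(child, r), parent)
    # Conservation: boundary-averaged values of parent and child agree
    assert np.isclose(child.mean(), parent.mean())
```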
3.4.3 Construction of an approximately fluxconservative method
In our view, the limitations of the method by Zhou et al. (2018) are too restrictive. However, we were not able to find any alternative zeroth-order interpolation that would exactly satisfy condition Eq. (7) for u^{(S)} and also be applicable to even-valued grid-spacing ratios. Instead, we found a zeroth-order interpolation that approximately satisfies condition Eq. (7) for u^{(S)}, as will be shown below. This interpolation reads as follows:
This interpolation is also applicable to u^{(S)} in cases of even-valued grid-spacing ratios, since the child grid nodes between the parent grid nodes need not be associated with the surrounding parent grid nodes; see Fig. 3.
In another deviation from the Zhou et al. (2018) method, we do not employ the quadratic scheme of Clark and Farley (1984) at all. The reason is that in PALM the interpolation algorithm has to cope with complex geometries, and the application of such a complicated wide-stencil interpolation scheme might become excessively cumbersome. Instead, we use Eq. (8) for the advective velocity component u^{(N)}, Eq. (9) for the advected components u^{(S)}, and Eq. (8) for all other advected variables. Equation (9) is also used for the staggered velocity components in cases of odd-valued grid-spacing ratios, even though Eq. (8) would also be applicable in such cases.
As stated above, the flux-conservation condition is satisfied approximately for u^{(S)} for both odd- and even-valued grid-spacing ratios by using Eq. (9) for u^{(S)} and Eq. (8) for u^{(N)}. As an example, and for the sake of clarity (but without any loss of generality), we consider the spatially averaged fluxes of the v and V components on a nested domain boundary and assume that the boundary-normal direction is x. The velocity components in the x direction are u and U. Within the advection scheme, the advective u velocity for the flux is interpolated linearly to the flux point of v as $(u_{j-1}+u_{j})/2$. This becomes simply U_{J−1} for those child flux points of v that are located between V_{J−1} and V_{J}, and $(U_{J-1}+U_{J})/2$ for those v flux points coinciding with V_{J}. Let us begin by applying the chosen interpolation technique to an extremely simplified example with only two U grid cells along the left nest boundary. Then the y-averaged flux 〈uv〉_{y} (we omit the z averaging at this stage) involves only one V node J in the parent grid. Let the integer-valued grid-spacing ratio in the y direction be R_{y}=4. We also omit the SGS fluxes, which are assumed to be small, as in the entire discussion above. With these choices, 〈uv〉_{y} consists of seven child grid nodes as follows:
The first term represents fluxes from those child flux points lying between $V_{J-1}$ and $V_{J}$, the second term is the flux at the child flux point coinciding with $V_{J}$, and the last term is a contribution similar to the first but from the child flux points lying between $V_{J}$ and $V_{J+1}$. Equation (10) can be rearranged as follows:
where we can identify the factor 4 in the inner term as $R_{y}$ and the factor 3 in the edge terms as $R_{y}-1$. This can be generalized to an arbitrary integer $R_{y}$ and to an arbitrary number of parent grid nodes across the boundary, $N_{y}=J_{n}-J_{s}+1$. By doing so, and by also incorporating the z averaging, we obtain
On the parent grid, the correspondingly averaged advection flux of V is (as expanded by R_{y} for easier comparison)
Here, $N_{z}=K_{t}-K_{b}+1$ and $(K_{b},\dots,K_{t})$ is the vertical parent grid index range covering the nest boundary. Clearly, Eqs. (12) and (13) are not exactly equal because of the additional edge terms in Eq. (12) containing $V_{J_{s}}$ and $V_{J_{n}+1}$, and because the denominator of Eq. (12) deviates from $R_{y}N_{y}N_{z}$. It is important to note, however, that $\langle uv\rangle_{\mathrm{b}}-\langle UV\rangle_{\mathrm{b}}$ tends towards zero as $N_{y}$ becomes large. In typical applications, $N_{y}$ is of the order of hundreds, making the flux-conservation error negligibly small. Moreover, if we can assume that
which is a reasonable assumption in many cases, then $\langle uv\rangle_{\mathrm{b}}-\langle UV\rangle_{\mathrm{b}}\approx 0$, even with small values of $N_{y}$.
3.4.4 Effects of the advection scheme on flux conservation
Above, flux conservation was discussed in general terms without taking into account the effects of the actual discretization scheme employed in the advection algorithm. In this subsection, a mismatch between the advection term approximations on the child and parent sides in PALM is first identified and discussed, and subsequently a method to reduce the flux-conservation error resulting from this mismatch is proposed.
In Sect. 3.4.3 it was assumed that the advected parent grid variable values on the child boundary ($V_{J,K}$ in Eqs. 12 and 13) are equal to the values used for the fluxes in the advection term computation. This is not the case in PALM, as they are interpolated onto their flux points using the fifth-order scheme by Wicker and Skamarock (2002). Here, it is important to understand that the interpolation onto the flux point in the advection scheme is a separate procedure from the parent-to-child interpolation, and it is performed in a different phase of the time step cycle (Fig. 2), in the prognostic equation step. In PALM, the fifth-order interpolation is not employed at the boundaries (except at the cyclic boundaries); instead, a so-called advection scheme degradation procedure is utilized. The degradation procedure entails degrading the flux-point interpolation within the advection scheme on the first two layers of nodes: the first-order upwind scheme is applied on the first layer, and the third-order Wicker and Skamarock (2002) scheme is applied on the second layer. This way, only one layer of boundary ghost points is needed at the boundary. Technically, three grid layers are allocated in PALM for boundary ghost points, but using all of them at child boundaries would lead to no gain in accuracy, as the second and third layers would be just copies of the first layer due to the zeroth-order parent-to-child interpolation.
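To make the degradation procedure concrete, the following minimal Python sketch (illustrative only; not PALM's Fortran implementation) selects the flux-point interpolation order from the distance to a non-cyclic boundary, using the first-order upwind formula and the third- and fifth-order Wicker and Skamarock (2002) interpolations:

```python
import numpy as np

def flux_point_value(phi, i, u_adv, dist_from_boundary):
    """Interpolate phi onto the flux point between nodes i and i+1.

    The interpolation order is degraded near a (non-cyclic) boundary:
    first-order upwind on the first node layer, third-order
    Wicker-Skamarock on the second, and fifth-order elsewhere."""
    if dist_from_boundary == 0:          # first layer: 1st-order upwind
        return phi[i] if u_adv > 0.0 else phi[i + 1]
    if dist_from_boundary == 1:          # second layer: 3rd-order WS
        centered = (7.0 * (phi[i] + phi[i + 1])
                    - (phi[i - 1] + phi[i + 2])) / 12.0
        upwind = (3.0 * (phi[i + 1] - phi[i])
                  - (phi[i + 2] - phi[i - 1])) / 12.0
        return centered - np.sign(u_adv) * upwind
    # interior: 5th-order WS
    centered = (37.0 * (phi[i] + phi[i + 1])
                - 8.0 * (phi[i - 1] + phi[i + 2])
                + (phi[i - 2] + phi[i + 3])) / 60.0
    upwind = (10.0 * (phi[i + 1] - phi[i])
              - 5.0 * (phi[i + 2] - phi[i - 1])
              + (phi[i + 3] - phi[i - 2])) / 60.0
    return centered - np.sign(u_adv) * upwind
```

For a linear field, the third- and fifth-order branches recover the exact midpoint value, whereas the first-order branch returns the upwind node value, which is the source of the boundary mismatch discussed next.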
The first-order upwind scheme makes the advected values on child-boundary flux points independent of the child solution itself if the local flow direction is into the child domain. This is important from a flux-conservation point of view, as the flux into the child domain should be entirely controlled by the parent solution. On the other hand, the first-order upwind scheme leads to values on the child-boundary flux points that may differ from those on the corresponding grid plane on the parent side, as those are interpolated with the fifth-order scheme. Therefore, additional flux-conservation errors may be generated.
We have not found any way to totally eliminate the resulting additional flux-conservation error, but we can reduce it in the following way. Instead of using the original parent grid values in the parent-to-child interpolation, we replace them with values pre-interpolated (Fig. 4, phase 1) onto the parent flux points using a scheme of higher than first order and use these values in the parent-to-child interpolation. From here on, we refer to this pre-interpolation as transfer to boundary plane (TBP). As a result, the formally first-order upwind advection scheme effectively becomes the selected higher-order scheme if the local flow direction is into the child domain.
The TBP must not employ more than one parent grid layer behind a child domain boundary because the child has no information about the parent domain geometry outside the first parent grid layer. An interpolation stencil reaching further away could penetrate a vertical wall, leading to erroneous interpolation. Therefore, the best available choice is to simply use the average of the parent grid values on both sides of the child domain boundary, i.e., a second-order interpolation. Obviously this is different from the fifth-order scheme, but we argue that the difference between values interpolated onto the boundary plane using the fifth-order and second-order schemes can be expected to be smaller than the difference between those interpolated using the fifth-order and first-order schemes. Our numerical tests support this argument.
On the top boundary there is no geometry, and hence a wider TBP stencil can be used there. We ended up using the third-order Wicker and Skamarock (2002) scheme for the TBP at the top boundary because in our numerical tests it yielded results almost indistinguishable from those of the more complicated and more communication-intensive fifth-order scheme. Note that the TBP reduces the flux-conservation error only in those boundary regions where the flux is into the child domain.
The sequence of interpolation operations is illustrated in Fig. 4, using the left child boundary as an example. Phase 1 operations belong to the TBP, and phase 2 operations belong to the actual parent-to-child interpolation using Eqs. (8) and (9). Note that the TBP is not applied to the boundary-normal velocity component u^{(N)}.
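The order of the two phases can be sketched as follows for a left boundary. This simplified Python fragment is only meant to illustrate the sequence of operations; the function is hypothetical, and the parent-to-child step of Eqs. (8) and (9) is reduced here to a plain zeroth-order copy:

```python
import numpy as np

def tbp_then_transfer(phi_parent_out, phi_parent_in, r_y):
    """Phase 1 (TBP): second-order pre-interpolation of parent values
    onto the child-boundary flux plane, using only the one parent grid
    layer available on each side of the boundary.

    Phase 2: zeroth-order transfer of the plane values to the child
    ghost nodes, for an integer grid-spacing ratio r_y."""
    # phase 1: average across the boundary (second-order interpolation)
    plane = 0.5 * (phi_parent_out + phi_parent_in)
    # phase 2: each parent plane value is copied to the r_y child nodes
    # lying within that parent cell (zeroth-order interpolation)
    return np.repeat(plane, r_y)
```

The restriction to a one-layer stencil in phase 1 is what keeps the operation safe next to resolved obstacles, as discussed above.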
We evaluated the flux-conservation error in a simple test run modeling a horizontally homogeneous, slightly convective boundary layer over flat terrain with a capping inversion at z = 450 m. The constant kinematic surface heat flux is 0.025 K m s^{−1}, and the wind is driven by the geostrophic balance with a geostrophic mean wind angle of 11° relative to the x axis. Within the boundary layer the mean wind angle is close to 40°, making the u and v components roughly equal to each other on average. The root grid dimensions are 256 × 256 × 48 in the x, y, and z directions, respectively, with an isotropic grid spacing of 12 m. The nest domain is placed in the middle of the root domain, and its grid dimensions are 256 × 256 × 64. The nest grid spacing is an isotropic 6 m; thus $R_x = R_y = R_z = 2$. Periodic boundary conditions are applied on the lateral root boundaries. The u and v flux-conservation errors are evaluated for the left nest boundary, for which $N_y = 128$ and $N_z = 32$. The relative errors $(\langle uu\rangle_{\mathrm{b}}-\langle UU\rangle_{\mathrm{b}})/\langle UU\rangle_{\mathrm{b}}$ and $(\langle uv\rangle_{\mathrm{b}}-\langle UV\rangle_{\mathrm{b}})/\langle UV\rangle_{\mathrm{b}}$ fluctuate in time within ±3 %, with an average of 0.04 % and a root-mean-square value of 1.7 % for the former and 0.02 % and 0.6 %, respectively, for the latter. It is expected that with larger grid dimensions these errors will become even smaller.
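The reported statistics are the mean and root-mean-square of the instantaneous relative error between the boundary-averaged child and parent fluxes. A minimal sketch of this evaluation, with hypothetical flux samples and function name, is:

```python
import numpy as np

def relative_flux_error(child_flux, parent_flux):
    """Mean and RMS of the relative flux-conservation error.

    child_flux and parent_flux are 1-D arrays holding the boundary-
    averaged advection fluxes <uv>_b and <UV>_b sampled at successive
    time steps (illustrative input, not actual simulation output)."""
    err = (child_flux - parent_flux) / parent_flux
    return err.mean(), np.sqrt((err ** 2).mean())
```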
According to our numerical tests presented in Sect. 4, the proposed zeroth-order interpolation method, i.e., Eqs. (8) and (9) together with the TBP, has proven to be fully sufficient and remains the only nesting interpolation method implemented in PALM. We have also considered an alternative interpolation approach for the advected variables based on trilinear interpolation with a specific reversibility correction. Although it is not implemented, a short discussion is provided in Appendix B.
3.5 Anterpolation (child to parent)
Anterpolation is used to feed the child domain solution back to its parent domain. Generally, anterpolation consists of filtering the fine-grid child solution $\varphi_{i,j,k}$ and mapping it to the parent domain grid. We choose to employ the anterpolation scheme proposed by Clark and Farley (1984), which consists of simple averaging over one parent domain grid volume around the parent grid node of the variable in question, corresponding to top-hat filtering, viz.
The original parent solution $\Phi_{I,J,K}$ is replaced by the anterpolated solution in the domain of overlap. Here, $i,j,k$ and $I,J,K$ are the child and parent grid indices, respectively, and the hat denotes the anterpolation operator. The summation index limits, i.e., the span of the anterpolation cell $i_1(I)$, $i_2(I)$, $j_1(J)$, $j_2(J)$, $k_1(K)$, and $k_2(K)$, are precomputed during the initialization, and they depend on the grid configuration and the variable in question; i.e., the staggered velocity components have different index limits from the grid-cell-centered scalars. Note that for the velocity components, the anterpolation volume is reduced to the grid-cell face on which the velocity component is defined. This means that the upper index limit in the direction of the velocity component is reduced to the lower one, for instance $i_2 = i_1$ for u, because the coordinates of the velocity component node in the respective direction readily coincide in the parent and the child, and thus there is no need for anterpolation in this direction. $N_{I,J,K}$ is the number of child domain values used for anterpolation at a given parent grid location and is precomputed during the initialization as
Note that due to the staggered grid, four sets of the index limits and $N_{I,J,K}$ are precomputed and stored: one for each velocity component and one for all scalars. Generally, the anterpolation cells can be spanned in more than one way. We define the anterpolation cells similarly to Clark and Farley (1984) in order to ensure the reversibility discussed in Sect. 3.4.2. For scalar variables (non-staggered variables) the anterpolation cell spans $X_i \pm \Delta X_i/2$, where $X_i$ $(i=1,2,3)$ are the coordinates of the scalar node in the parent grid. For the velocity components (staggered variables), for example for u, the anterpolation cell spans $X_1$, $X_2 \pm \Delta X_2/2$, and $X_3 \pm \Delta X_3/2$, where $X_i$ are the coordinates of the staggered u node in the parent grid.
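For a scalar variable on an isotropic grid with integer grid-spacing ratio r, the top-hat filter above reduces to a plain box average of the r³ child values inside each parent cell. The following minimal Python sketch illustrates this special case only; the PALM implementation additionally handles staggered variables and the precomputed index limits:

```python
import numpy as np

def anterpolate_scalar(phi_child, r):
    """Top-hat (box) filter a child scalar field onto its parent grid
    for an isotropic integer grid-spacing ratio r (in the spirit of
    Clark and Farley, 1984): every parent value is the plain average of
    the r*r*r child values inside the parent cell, i.e. N_{I,J,K} = r**3
    here.  phi_child has shape (nz, ny, nx), all divisible by r."""
    nz, ny, nx = phi_child.shape
    blocks = phi_child.reshape(nz // r, r, ny // r, r, nx // r, r)
    return blocks.mean(axis=(1, 3, 5))
```

For a staggered component such as u, the analogous operation would average only over the r × r child values on the grid-cell face ($i_2 = i_1$), as described above.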
Buffer zones where the anterpolation is omitted are applied next to the child domain boundaries (except the bottom boundary). The main purpose of the buffer zones is to avoid an unstable feedback loop between the anterpolation and the interpolation. The default width of these buffer zones is two prognostic grid nodes. The user may choose a different value for the buffer width, but the minimum allowed width is one parent grid spacing. This is because the layer of nodes nearest to the child boundary is directly used in the interpolation, and using an anterpolated value for the interpolation leads to strongly unstable behavior. The buffer zones are comparable to the relaxation zones applied in the nesting system of the WRF-LES model (Moeng et al., 2007), in which the anterpolation is under-relaxed within these zones such that the under-relaxation coefficient varies linearly across the relaxation zones, which are five grid spacings wide. As mentioned in Sect. 3.3, the buffer zone below the top boundary also reduces the overestimation of the horizontal velocity variances observed in zero-mean-wind CBL tests in a purely vertical nesting mode. According to these tests, the simulation results are not particularly sensitive to the extent of the vertical downward shift of the upper edge of the anterpolation domain.
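The difference between the two strategies can be illustrated with a hypothetical weight applied to the anterpolated child solution as a function of distance from a lateral boundary; the function and its parameters are for illustration only and appear in neither model:

```python
def anterpolation_weight(dist, buffer_width=2, relax_width=0):
    """Weight applied to the anterpolated child solution near a lateral
    child boundary.  dist is the distance from the boundary in parent
    grid nodes.

    PALM-style buffer zone: weight 0 inside the buffer (default width
    two prognostic nodes), 1 outside.  Setting relax_width > 0 instead
    ramps the weight linearly, in the spirit of the WRF-LES relaxation
    zones (sketch only, not the actual WRF code)."""
    if dist < buffer_width:
        return 0.0                      # anterpolation omitted
    if relax_width > 0:
        return min(1.0, (dist - buffer_width) / relax_width)
    return 1.0                          # full anterpolation
```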
Canopy-restricted anterpolation
The anterpolation algorithm is implemented in the PALM model with a feature that enables its application in a spatially selective manner, such that the operation is only performed within the part of the computational domain that lies above a user-defined vertical threshold. This practice was found to resolve complications that arise when two-way coupled nesting is applied in obstacle-resolving LES, where the anterpolated solution within the obstacle canopy introduces discrepancies in the coarser parent solution. Thus, we label this approach canopy-restricted (CR) anterpolation, and the coupling is referred to as two-way CR. The necessity of this anterpolation strategy is motivated and its effectiveness demonstrated in Sect. 4.2.3, where nesting is applied to an obstacle-resolving LES test case.
In order to evaluate the nesting strategy, show its benefits, and point out its limits, we performed a series of nested model simulations for different grid-spacing ratios, together with the respective non-nested reference simulations, for different atmospheric situations. The main idea is not to validate the PALM model against experimental data but instead to systematically compare the nested-domain results to the corresponding non-nested fine- and coarse-grid reference results and to show that the nested-domain solutions are closer to the fine-grid reference solutions than the coarse-grid reference solutions. PALM has already been evaluated for various ABL flows against measurement data (Letzel et al., 2008). Nevertheless, we show one comparison against wind tunnel data in Sect. 4.2.2. Furthermore, the idea is not to present grid convergence studies, since the grid convergence of the PALM model has been demonstrated previously, e.g., for the convective boundary layer by Hellsten and Zilitinkevich (2013).
We simulated a homogeneously heated flat-terrain convective boundary layer and a purely shear-driven flat-terrain boundary layer. Further, to investigate the performance of the grid nesting in more complex situations where non-flat topography is present, we performed two-stage nested simulations of a neutrally stratified flow over a smooth three-dimensional hill and compare the results against wind tunnel data. These three test cases were simulated employing only the two-way coupling. Finally, we simulated a neutrally stratified urban boundary layer flow over a regular staggered arrangement of building cubes using one- and two-stage nesting and compare the nested simulation results to the corresponding non-nested fine- and coarse-grid simulation results. Details concerning the different simulation setups are given in their respective sections. Note that, for the sake of simplicity, velocity components will hereafter be addressed by lower-case variable names only, regardless of whether they refer to the flow in the parent or the child domain.
4.1 Convective boundary layer
The nesting method is first evaluated for a pure convective boundary layer (CBL) with zero mean wind. We set up one child domain that is centered within the parent domain. For the root domain, cyclic lateral boundary conditions were set. A homogeneous and time-constant surface sensible heat flux of 0.1 K m s^{−1} was prescribed. The simulation was initialized with a potential temperature profile that increases linearly with height at a lapse rate of 0.3 K per 100 m. The root model domain size is 10.2 km × 10.2 km × 3.0 km in the x, y, and z directions, respectively, with an isotropic grid spacing of 20 m. The top of the child domain is set to be within the middle part of the CBL, and the domain size is 2.5 km × 2.5 km × 0.48 km in the x, y, and z directions, respectively, with an isotropic grid spacing of 10 m, resulting in a grid-spacing ratio of 2. In order to examine how turbulence statistics behave for different grid-spacing ratios between parent and child in the CBL, we additionally ran nested simulations with grid-spacing ratios of 3 and 4 by increasing the isotropic grid spacing in the parent domain to 30 and 40 m, respectively. Non-nested coarse- and fine-grid reference simulations were carried out corresponding to the nested simulations with the different grid-spacing ratios. The simulated time was 4 h for all convective cases. Data analysis started after 2 h of simulated time, when model spin-up effects were no longer present and the simulations had reached steady-state conditions.
In order to perform a spectral analysis of time series data, the time step was held constant at 1.0 s in all convective simulations during the data analysis period.
Figure 5a shows an instantaneous horizontal cross section of the w component at a height of 40 m for the parent and child (overlaid) domains for the grid-spacing ratio of 2. A hexagonal pattern of convective cells with strong updrafts and weaker downdrafts is visible, as is typically observed in LES. The transition between parent and child appears smooth, and the flow structures are continuous in terms of shape and amplitude, while within the inner part of the child domain more fine-scale structures can be observed, with slightly stronger updrafts and downdrafts, as also reported by Moeng et al. (2007). Furthermore, Fig. 5b, showing an instantaneous vertical cross section of the w component, also depicts how the updrafts and downdrafts are consistently maintained across the child boundary without any obvious impact on the turbulent structures.
Figure 6 shows horizontally and time-averaged vertical profiles of potential temperature θ, vertical turbulent heat flux $\langle w'\theta'\rangle$, variances of the horizontal and vertical velocity components, and the skewness of the vertical velocity component w, the latter being one of the most grid-sensitive quantities (Sullivan and Patton, 2011). The profiles of 〈θ〉 indicate a well-mixed CBL. With increasing grid-spacing ratio the corresponding parent and coarse-grid simulations deviate from the fine-grid reference, particularly near the surface and within the inversion layer, while the child results agree well with the non-nested fine-reference simulation, indicating that the profiles of 〈θ〉 in the child domains are rather independent of the parent grid for the employed grid spacings.
The heat flux profiles in the child and parent simulations decrease linearly with height within the CBL and are in good agreement with the fine-reference simulation. For the parent simulation we note the near-surface kink in the heat flux (see Fig. 6c for a close-up view). Moeng et al. (2007) observed a similar kink in the heat flux and attributed it to inaccuracies in the statistical evaluation of the heat flux, more precisely, to errors that arise from interpolation from a mass- to a height-coordinate system. However, to evaluate fluxes, PALM does not apply any interpolations but directly uses the resolved and subgrid-scale fluxes as calculated in the advection scheme and the subgrid model, respectively; thus, interpolation errors cannot explain the kink in this case. Instead, we attribute the kink in the parent domain to the anterpolation from the fine child solution. In simulations with different vertical grid spacings, the vertical gradients of 〈θ〉 within the unstable near-surface layer are resolved differently, resulting in slightly different near-surface temperatures, as can be observed, e.g., between the fine- and coarse-reference simulations in Fig. 6a. This indicates that the parent simulation will yield slightly different 〈θ〉 profiles than the child simulation. After the anterpolation is performed, the parent solution is replaced by the underlying child solution, whose near-surface vertical gradients of 〈θ〉 partly deviate from the ones the parent model would create without feedback from the child domain; i.e., the near-surface 〈θ〉 profile in the parent is no longer in equilibrium with the applied surface boundary condition. In the following time step the parent model tries to readjust the post-inserted 〈θ〉 to the vertical gradients that would be present without feedback from the child, altering the heating rates and thus the near-surface vertical gradients of the heat flux, which in turn becomes visible as the near-surface kink.
In fact, we verified this hypothesis in a test case using identical vertical grid spacing in the parent and child. In this case, no kink in the vertical heat flux was visible any more (not shown).
The variances of the horizontal and vertical velocity components, as well as the skewness of the vertical component, depend strongly on the grid spacing, as the coarse- and fine-grid reference simulations show, where the variances (skewness) become smaller (larger) with increasing grid spacing. The parent simulation agrees well with the coarse-resolution simulation, indicating that the anterpolation changes the parent flow field only marginally. The variances and skewness in the child simulations agree with the fine-reference profiles, except for the upper regions of the child domain, where the variances are slightly overestimated. The child profiles are almost independent of the grid-spacing ratio and are close to the fine-reference profiles, indicating that the child solutions are largely insensitive to the chosen grid-spacing ratio in the studied cases.
Although there is no mean horizontal advection in the zero-mean-wind CBL, spatially and temporally local horizontal advection always takes place, and therefore flow structures are advected locally from parent to child (and vice versa). Hence, advected flow structures may need a certain fetch to adjust to the changed grid spacing. In order to get an idea of how much distance from the lateral child boundaries is required to observe turbulence properties similar to those in a non-nested fine-resolution reference simulation, we performed a spectral analysis. To this end, we sampled time series of turbulent kinetic energy (TKE) and θ at different locations inside the child domain and calculated frequency spectra from the sampled time series. Subsequently, we averaged the spectra over all sampling locations with the same distance from the lateral child boundaries. Overall, we calculated spectra inside the child domain at locations 75, 100, 200, 300, and 500 m away from the lateral boundary. It should be noted that transforming frequency spectra into wave number spectra using Taylor's hypothesis, in order to directly link spectral information and grid spacing, is not strictly correct in this case where there is no background wind; nevertheless, we will assume that frequency and wave number space are connected, i.e., that large frequencies belong to small spatial scales and vice versa. Figure 7 shows the resulting frequency spectra, as well as the corresponding spectra from the fine- and coarse-grid reference simulations. As expected, the coarse-resolution spectra exhibit less spectral energy at larger frequencies compared to the child and fine-resolution spectra. This is due to the larger filter length assumed for the subgrid model, which removes more energy at larger spatial scales and thus also affects smaller frequencies.
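The averaging of spectra by distance from the lateral boundaries can be sketched as follows. The sampling layout and function name are hypothetical, and a raw periodogram stands in here for whatever spectral estimator was actually used:

```python
import numpy as np

def distance_averaged_spectra(series_by_location, dt=1.0):
    """Frequency spectra of sampled time series (e.g. TKE or theta),
    averaged over all sampling locations that share the same distance
    from the lateral child boundaries.

    series_by_location: dict mapping distance (m) -> list of equally
    long 1-D time series sampled at that distance (hypothetical layout).
    Returns a dict mapping distance -> (frequencies, averaged PSD)."""
    spectra = {}
    for dist, series_list in series_by_location.items():
        psd_sum = None
        for ts in series_list:
            ts = np.asarray(ts) - np.mean(ts)      # remove the mean
            psd = np.abs(np.fft.rfft(ts)) ** 2     # raw periodogram
            psd_sum = psd if psd_sum is None else psd_sum + psd
        freqs = np.fft.rfftfreq(len(series_list[0]), d=dt)
        spectra[dist] = (freqs, psd_sum / len(series_list))
    return spectra
```

Holding the time step constant at 1.0 s during the analysis period, as described above, is what makes a fixed sampling interval dt available for this kind of analysis.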
The child spectra agree well with the fine-reference spectra, especially for the grid-spacing ratio 2 case, where even locations close to the lateral boundaries show good agreement with the reference. We attribute this to the nature of the CBL, where turbulence is mostly produced locally by buoyancy and horizontal advection is almost negligible, and thus turbulence is almost unaffected by any transport from the boundaries. For the grid-spacing ratios of 3 and 4 the differences are also generally small, but locations close to the lateral boundaries show slightly smaller spectral densities at larger frequencies, indicating that for larger grid-spacing ratios the flow needs a fetch of a few tens of meters in a purely buoyancy-driven boundary layer to adjust to the finer grid spacing.
Even though the child simulations yield turbulence profiles, spectra, and instantaneous flow patterns similar to those of the fine-grid reference simulation for a purely buoyancy-driven flow, the nested simulation nevertheless creates a side effect on the flow, which appears as a secondary circulation (SC). This SC is not caused by the violation of mass conservation discussed in Sect. 3.4. Figure 8 shows the 5 h time-averaged w component in the middle part of the CBL in a homogeneously heated nested simulation. In order to compute the 5 h time average, we continued the simulation with the grid-spacing ratio of 2 for a further 3 h. Within the region of the nested child domain, a mean updraft can be observed, which is in the range of 0.4–0.9 m s^{−1} and extends throughout the entire depth of the CBL (not shown). At the child domain boundaries and outside the child domain region the flow subsides on average, and horizontally directed branches occur at the upper and lower parts of the CBL, giving the overall picture of an SC. The strength of this SC, indicated by the amplitude of the mean updraft, is on the order of the strength of SCs observed in previous simulations over idealized stripe-like surface heterogeneities (Sühring et al., 2014) and even exceeds the strength of SCs observed in simulations over realistic surface forms (Maronga and Raasch, 2013).
SCs develop above surface heterogeneities mainly due to differential surface heating of the air, resulting in mean updrafts and downdrafts over the more strongly and less strongly heated patches, respectively. However, since we prescribe the same surface sensible heat flux in the parent and the child simulations, differential surface heating cannot be the reason for the SC in the nested simulation. Moeng et al. (2007) observed a temperature bias in their child domain that led to a mean vertical motion compensating the temperature bias. They observed temperature biases that go either way, i.e., a too cold or a too warm child domain, which they attributed to a nested child domain of too small a horizontal extent. If only a few updrafts or downdrafts are resolved in the child domain, the vertical transport is dominated by these updrafts or downdrafts, and thus a warmer or cooler CBL, respectively, can be quickly produced in the child domain. They showed that for a larger horizontal child domain size the temperature bias, and thus the associated vertical motion, vanished. However, they only considered instantaneous differences between parent and child, meaning that the temperature bias is a result of insufficient sampling of the large updrafts and downdrafts rather than an inherent feature of the nesting, which can only be observed after time averaging. In our case the SC becomes visible only after considerable time averaging, and the updraft branch of the SC is always located within the child domain, also for larger child domain extents. We hypothesize that this SC is triggered by a slightly different divergence of the vertical heat flux between the region occupied by the child domain and the remaining parent domain due to the different grid spacing. It might be impossible to eliminate this effect because a higher resolution better represents the turbulent mixing, and thus differences between the parent and the child solutions are to be expected in general.
Even though this inherent, artificially induced SC only appears when the flow is averaged over a longer time under quasi-stationary conditions (no diurnal cycle, no change in the mean wind, etc.), nested simulation results should be interpreted carefully in terms of SCs. In particular, since the strength of the artificial SC is on the order of "real-world" circulations over heterogeneous terrain, the two may become superimposed, altering the pattern of the vertical transport of sensible and latent heat. Although we did not succeed in proving our hypothesis, we encourage other researchers to look for the existence of such SCs in any nested models by analyzing time-averaged results.
4.2 Neutrally stratified boundary layer tests
Initialization and inflow conditions
As further test cases, we set up a series of boundary layer flow simulations with increasing order of complexity. First, to evaluate the performance of grid nesting in shear-driven boundary layer flows, we simulated a flow over a homogeneous flat surface in order to compare first- and second-order moments from a nested simulation against reference simulations. In a second step, we simulated a flow over a smooth three-dimensional hill for comparison of nested simulation results against wind tunnel data. Finally, in order to illustrate the advantages of the grid nesting in more complex setups, we simulated a flow over a staggered arrangement of cubes mounted on a flat surface.
The parent domain size for all neutrally stratified simulations was $L_x \times L_y \times L_z = 5.1 \times 1.5 \times 0.32$ km$^3$ in the x, y, and z directions, respectively. In all neutral simulations we prescribed a homogeneous roughness length of $z_0 = 0.01$ m. At the top boundary we applied a free-slip condition for the horizontal wind components and zero vertical motion. At the spanwise lateral boundaries (north and south boundaries), we applied cyclic conditions. At the left lateral boundary (hereafter referred to as the inflow boundary), we prescribed mean inflow profiles for the u and v components obtained from a cyclic precursor run. Two different precursor simulations were employed for the subsequent test cases. The one used for the flat surface and for the smooth hill featured a geostrophic wind of $u_{\mathrm{g}} = 4.8$ m s$^{-1}$ and $v_{\mathrm{g}} = -1.3$ m s$^{-1}$ at a latitude of 55°, adjusted such that the surface layer mean flow became parallel with the x axis. This precursor simulation ran for 36 h to reach a stationary state. The second precursor simulation, used for the cuboid case, was driven by a fixed pressure gradient angled to result in a mean flow of $u = 10$ m s$^{-1}$ at $z = L_z$ with a 3° angle from the x axis.
In order to obtain a turbulent inflow, we applied the turbulence recycling method of Kataoka and Mizuno (2002), whereby the mean inflow vertical profiles of u and v are superimposed with turbulent fluctuations sampled at a recycling plane placed at $x_{\mathrm{rc}} = 1.5$ km downstream of the inflow boundary. The recycling plane is placed sufficiently far from the inflow boundary to allow for statistically independent turbulence but also sufficiently far from the location of the child domain to avoid any feedback between the grid nesting and the inflow conditions. For further details on the implementation of the turbulence recycling method, see Maronga et al. (2015).
Further, in order to avoid persistent streaks in the u component, which may develop in neutrally stratified flows and would be recurrently recycled in the case of a vanishing v component, we shifted the recycled turbulent signals along the y direction at the inflow boundary, following Munters et al. (2016). At the eastern outflow boundary we set a radiation boundary condition (Miller and Thorpe, 1981). The root domain was initialized with three-dimensional data recursively copied from the precursor run, while the child domain was initialized with data obtained from the parent. We used an isotropic grid spacing of 4 and 2 m within the root and the nested child domain, respectively. The cuboid and the smooth hill cases also encompass a third domain with 1 m resolution (two-stage nesting).
In order to evaluate the effect of the nesting, we performed additional non-nested reference simulations with 4 and 2 m grid spacing. However, due to its high computational demands, a 1 m non-nested reference simulation was not performed. The simulated time of the neutrally stratified simulations ranged from 4 to 7 h. Data analysis started after 2 h of simulated time. When spectral analysis was performed (homogeneous flat case), the time step was held constant at 1.0 s for that simulation.
4.2.1 Neutrally stratified boundary layer flow over flat terrain
Figure 9 shows an instantaneous horizontal cross section of the u component for the nested simulation. As is typical for a neutrally stratified boundary layer, elongated streak-like structures can be observed (Hutchins and Marusic, 2007; Hutchins et al., 2012). These elongated structures preserve their size and amplitude when entering the child domain from the left and exiting to the right.
Figure 10 shows horizontal profiles of the time- and y-averaged friction velocity u_* within the child domain and for the corresponding coarse- and fine-grid reference cases. In the coarse- and fine-reference cases u_* is constant along the x axis, indicating that the flow is in equilibrium with the surface friction. In the coarse-grid simulations u_* shows slightly higher values than in the fine-grid reference simulation, even though the prescribed surface roughness is identical in all simulations. This suggests that the flow in the coarse-grid simulations sees a slightly rougher surface, which we attribute to the less accurate representation of the vertical near-surface gradients of the wind profile compared to the fine-grid simulation. When the flow enters the child domain, the coarse-grid inflow wind profile is no longer in equilibrium with the surface friction and the near-surface flow decelerates, indicated by the higher values of u_* near the child inflow boundary. With increasing distance from the inflow boundary, u_* rapidly decreases and reaches a minimum below the reference values, then increases again to a secondary maximum and asymptotically approaches a constant value, which is similar to the fine-grid reference value in grid-spacing ratio cases 2 and 3. However, at least for the given model domain size, u_* does not approach the fine-grid solution in grid-spacing ratio case 4 but still exhibits higher values. This spatial oscillation of u_*, which indicates an alternating deceleration and acceleration of the near-surface flow along the x direction, shows that the surface momentum exchange in the child domain needs a sufficiently long development length. For grid-spacing ratio case 2 the required fetch length is at least 1 km to adjust to the fine-grid resolution.
With increasing grid-spacing ratio the amplitude of the spatial oscillation increases and the fetch length becomes longer, even exceeding the model domain size (as in grid-spacing ratio case 4).
Figure 10 shows that u_* gradually adjusts to the fine-reference value, at least for grid-spacing ratio cases 2 and 3. This is in contrast to Moeng et al. (2007), who found a friction velocity bias between parent and child in their neutrally stratified simulation when employing a grid-size-dependent SGS model, which is also the case in this study.
Figure 11 shows time- and y-averaged profiles of the horizontal wind speed within the child domain for different grid ratios, taken at different distances downstream of the inflow boundary, indicated by the solid white lines in Fig. 9. At a distance of 100 m, the profiles in the child domains agree well with the fine-reference profile within the lowest 10 m. Even though the surface momentum exchange is still not in equilibrium at that position (see Fig. 10), one might conclude from the near-surface wind profiles alone that the flow has already adapted to the finer grid resolution. Above this height, however, the wind profiles of the child model still deviate from the fine-reference solution and are closer to the coarse-reference profiles. This is especially obvious for grid-spacing ratio case 4, where the wind profile shows a discontinuity at a height of z = 20 m. With increasing distance from the child inflow boundary, the child profiles gradually adjust to the fine-reference simulation; at a distance of 2000 m the child profiles agree with the fine-reference solution, except for grid-spacing ratio case 4, which still deviates from it.
In order to further analyze the flow adjustment within the child domain, we computed resolved-scale TKE spectra at different distances from the child inflow boundary. The spectra were calculated from time series of the three velocity components sampled at different locations within the domain. The final spectra were then obtained by averaging the individual spectra over all locations with identical distance to the inflow boundary, assuming that the flow is parallel to the x axis. Figure 12 shows TKE spectra obtained from the child domain and from the corresponding reference simulations. At low frequencies (i.e., large scales), the spectra look quite similar and no obvious differences from the fine- and coarse-reference spectra can be observed, indicating that the grid nesting does not induce any larger-scale oscillations that propagate through the model domain. At higher frequencies, however, especially near the inflow boundary, the child spectra differ from the fine-reference spectra and resemble the corresponding coarse-reference spectra more closely. With increasing distance from the inflow boundary, the spectral properties gradually adjust to those of the fine-reference case; at a fetch of 500–1000 m almost no differences can be observed at that height level.
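The spectra computation described above can be sketched as follows. This is illustrative Python; the periodogram normalization and the data layout are assumptions for illustration, not PALM's post-processing code.

```python
import numpy as np

def mean_tke_spectrum(u, v, w, dt):
    """Average resolved-scale TKE spectra over sampling locations.

    u, v, w : (n_loc, n_t) velocity time series at locations sharing the same
              distance from the child inflow boundary (hypothetical layout).
    dt      : constant sampling interval in seconds.
    Returns frequencies and the location-averaged one-sided TKE spectrum.
    """
    n_loc, n_t = u.shape
    freqs = np.fft.rfftfreq(n_t, d=dt)
    spec = np.zeros(freqs.size)
    for comp in (u, v, w):
        # Remove the temporal mean at each location to obtain fluctuations.
        fluct = comp - comp.mean(axis=1, keepdims=True)
        # Periodogram per location, then average over all locations.
        amp = np.abs(np.fft.rfft(fluct, axis=1)) ** 2 / n_t
        spec += 0.5 * amp.mean(axis=0)
    return freqs, spec
```

Holding the time step constant (as stated above for the spectral-analysis run) is what makes the constant sampling interval dt well defined.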
In contrast to a buoyancy-driven boundary layer, the flow in a purely shear-driven boundary layer requires a sufficiently long development distance to adjust to the finer grid resolution. However, a purely shear-driven flow over a flat homogeneous surface can certainly be considered an extreme case in terms of flow adjustment, as the vertical turbulent exchange, which is primarily driven by surface-roughness-induced shear, is rather low compared to less idealized flows over non-flat terrain or with obstacles included. Hence, we expect the required fetch length to decrease for rougher surfaces and more complex surface geometries.
4.2.2 Neutrally stratified boundary layer flow over a smooth three-dimensional hill
The hill case is studied to compare flow statistics against the wind tunnel observations of Ishihara et al. (1999), who sampled data at different upstream and downstream locations along the center hill axis. This flow is simulated using a two-stage nesting configuration in which a second child domain with 1 m resolution is placed within the first child domain with 2 m resolution.
The terrain height of the smooth three-dimensional hill is given by

h(x, y) = H cos²(π r / (2 l)) for r = √((x − x₀)² + (y − y₀)²) ≤ l, and h = 0 otherwise,

with the hill height H = 40 m and hill radius l = 100 m, where x and y indicate the location on the discrete grid, and x₀ = 2024 m and y₀ = 384 m indicate the location of the hill top with respect to the parent domain dimensions. Note that we upscaled the hill dimensions by a factor of 1000 with respect to the wind tunnel model. Again, the parent domain size is L_x × L_y × L_z = 5.1 × 1.5 × 0.32 km³ in the x, y, and z directions, respectively. The child domain sizes are L_x × L_y × L_z = 0.768 × 0.384 × 0.16 km³ and L_x × L_y × L_z = 0.576 × 0.288 × 0.12 km³ for the first and second child domains, respectively. The upstream boundaries of the first and second child domains are placed about 9.7H and 6.3H upstream of the hill top, respectively. The smooth hill geometry is approximated with the grid-following stair-step geometry in PALM, owing to its orthogonal grid arrangement and its topography description system; see Maronga et al. (2015).
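A minimal sketch of this terrain function, assuming the cosine-squared hill shape used in the Ishihara et al. (1999) wind tunnel study (the functional form is an assumption made for illustration; the parameter values are those given above):

```python
import numpy as np

def hill_height(x, y, H=40.0, l=100.0, x0=2024.0, y0=384.0):
    """Terrain height of the smooth 3D hill.

    Cosine-squared shape assumed (following Ishihara et al., 1999);
    H is the hill height, l the hill radius, (x0, y0) the hill-top location.
    """
    r = np.sqrt((x - x0) ** 2 + (y - y0) ** 2)
    # Inside the hill radius the height follows cos^2; outside it is flat.
    return np.where(r <= l, H * np.cos(np.pi * r / (2.0 * l)) ** 2, 0.0)
```

Evaluating this function on the discrete grid and rounding to full grid levels would then yield the stair-step representation mentioned above.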
Figure 13 shows the mean flow field along the centerline of the three-dimensional hill for the nested child simulations and the fine- and coarse-grid reference simulations. Upwind of the hill, the mean flow in the nested simulations agrees with that in the fine- and coarse-reference simulations. In the coarse-grid reference simulation the recirculation extends further downstream, up to about 4.1H on the lee side of the hill, compared to the 2 m nested and fine-reference simulations, which both show a recirculation that extends up to about 3.75H downstream of the hill top. When further increasing the grid resolution to 1 m, the recirculation zone shortens further to about 3H. Figures 14 and 15 show the corresponding standard deviations of the u and w components sampled at different locations along the centerline of the hill. Upwind of the hill the standard deviations of the u and w components agree with the observations and show no significant differences between the simulations. However, leeward of the hill at 1.25H, the LES underestimates the standard deviations of the u and w components in the 2 and 4 m simulations, most pronouncedly in the coarse-grid reference simulation, while the profiles in the 1 m child domain of the nested simulation agree fairly well with the measurements. Further downstream, the coarse-grid reference run still slightly underestimates the observed standard deviations, while the 2 m child domain result and the fine-grid reference result slightly overestimate them, in agreement with results from the EPFL-LES model presented in Diebold et al. (2013), who employed a similar fine-grid resolution for this hill flow. With 1 m grid spacing the standard deviations further downstream are still slightly overestimated, though their vertical shape and amplitude are captured quite well. In order to provide a more quantitative measure, Table 1 provides the root-mean-square deviation (RMSD) of the profiles shown in Figs. 14 and 15.
RMSD is defined as follows:

RMSD = √( (1/N) Σ_{i=1}^{N} (ψ_i − ψ_Ref,i)² ),

where ψ is any prognostic variable from the considered solution and ψ_Ref refers to the corresponding measurement value. The RMSD values indicate that the profiles converge towards the observations with increasing grid resolution. Furthermore, the standard deviations from the 2 m child domain and from the fine-grid reference simulation show only marginal differences, indicating that the nesting method and the fine-grid reference simulation lead to almost identical results. Considering that the hill top is placed only about 390 and 250 m downstream of the child domain inflow boundaries, this indicates that in more complex setups where topography is present, the adjustment fetch can become significantly shorter than over purely flat terrain, as discussed in Sect. 4.2.1.
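The RMSD metric can be computed directly from a simulated profile and the corresponding reference values, as in this minimal Python sketch:

```python
import numpy as np

def rmsd(psi, psi_ref):
    """Root-mean-square deviation between a simulated profile and the
    corresponding reference (here: measurement) values."""
    psi = np.asarray(psi, dtype=float)
    psi_ref = np.asarray(psi_ref, dtype=float)
    # Mean of squared pointwise deviations, then square root.
    return np.sqrt(np.mean((psi - psi_ref) ** 2))
```
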
4.2.3 Neutrally stratified boundary layer over a regular array of cubes
The final test case features a neutral atmospheric boundary layer flow over flat terrain that becomes incident with a staggered pattern of cubical obstacles. The resulting flow resembles urban canopy turbulence in which the interaction between roughness elements and ABL turbulence is primarily resolved. Here, the cubical obstacle height is H = 40 m. The distance between the obstacles is 3H in the x direction and 1H in the y direction.
To demonstrate the flexibility of the nesting implementation, we carried out simulations with two different nested configurations illustrated in Fig. 16. The first (v1) case features a single child domain, while the second (v2) case contains a two-stage nesting system in which a second child domain is nested within the first. In the latter configuration, the first child acts as a parent for the second child domain. The isotropic grid spacing is 4 m in the root domain, 2 m in the second-level nest (first child), and 1 m in the third-level nest (second child). Note that the implementation does allow child locations to be selected such that their domain boundaries intersect with the obstacles. The two example configurations represent nesting applications designed to meet different levels of accuracy demands. The v2 configuration is set to resolve the transition effect at the leading edge of the cube canopy and to capture the blunt-body–wake interactions in sufficient detail within the center region of the cuboid canopy.
First, in the context of obstacle-resolving LES, we motivate the use of the optional canopy-restricted (CR) anterpolation strategy introduced in Sect. 3.5. For this purpose, consider Fig. 17, which shows an instantaneous horizontal cross section of the vorticity vector magnitude at z = 0.9H for configuration v2. The image focuses on a region where all domains with different resolutions are visible.
The visualization indicates the strength and spatial structure of the resolved turbulent eddies and how they are affected by grid resolution. The differences are significant. In such an obstacle-resolving LES, the increased grid resolution can alter the flow solution to such a degree that the anterpolation introduces details to the coarser parent that are inconsistent with the rest of the parent's flow solution. Particularly in blunt-body obstacle canopy flows, this discrepancy is clearly manifested as a locally changing resultant pressure drag (caused by the obstacles) within the anterpolated domain. To inspect this, we compute the resultant pressure drag coefficient

C_Fp = F_p / (½ ρ u_ref² S_ref)  (19)

for the differently coupled simulations. In Eq. (19), u_ref = ⟨ū⟩_{z=1.25H} is the reference wind speed, F_p is the resultant pressure force exerted on the cubes, obtained by integrating the pressure over the vertical walls, ρ is the density of air, and S_ref is the accumulated frontal area of the cubes. The results are listed in Table 2, which makes evident the drastic difference between the values for the coarse reference and the two-way coupled parent (C_Fp[coarse] vs. C_Fp[root]: two-way). This large difference arises as the anterpolated solution within the obstacle canopy introduces a large-scale disturbance to the parent solution, giving rise to unphysical secondary effects. These effects, in turn, lead to complicated feedback systems in the two-way coupled solutions, whose realizations become dependent on the chosen nesting configuration.
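The drag-coefficient evaluation can be sketched as below (Python; the reconstructed formula follows the definitions of u_ref, F_p, ρ, and S_ref given for Eq. 19, and the default air-density value is an illustrative assumption):

```python
def pressure_drag_coefficient(F_p, u_ref, S_ref, rho=1.225):
    """Resultant pressure drag coefficient of the cube array.

    F_p   : resultant pressure force on the cubes [N]
    u_ref : reference wind speed, e.g. <u> at z = 1.25H [m/s]
    S_ref : accumulated frontal area of the cubes [m^2]
    rho   : air density [kg/m^3] (default is an illustrative assumption)
    """
    # C_Fp = F_p / (0.5 * rho * u_ref^2 * S_ref)
    return F_p / (0.5 * rho * u_ref ** 2 * S_ref)
```
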
This problematic behavior is significantly abated by adopting the CR anterpolation strategy and setting the vertical threshold to 1.25H, a value found through experimentation. The CR anterpolation allows the parent and child flow fields to remain strongly coupled while minimizing global inconsistencies in the parent solution. While all the child domain solutions overpredict the pressure drag, the two-way CR solution yields C_Fp[child 2] values that are closest to the fine reference.
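The CR restriction can be illustrated schematically. In this Python sketch the array names, the vertical-index threshold, and the assumption that the child field has already been averaged onto the parent grid are all illustrative; PALM's actual anterpolation operates on its Fortran data structures.

```python
import numpy as np

def anterpolate_cr(parent, child_on_parent, k_threshold):
    """Canopy-restricted (CR) anterpolation sketch.

    parent          : (nz, ny, nx) parent field
    child_on_parent : (nz, ny, nx) child solution already averaged onto the
                      parent grid (hypothetical pre-processing step)
    k_threshold     : vertical index of the restriction level, e.g. the grid
                      level corresponding to 1.25H
    """
    out = parent.copy()
    # Overwrite the parent only at and above the threshold level; within the
    # obstacle canopy (below it) the parent solution is left untouched.
    out[k_threshold:, :, :] = child_on_parent[k_threshold:, :, :]
    return out
```

This makes explicit why the pressure drag of the parent is no longer perturbed: the anterpolated child solution never reaches the canopy layer where the obstacle-induced pressure field lives.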
To further evaluate the nesting performance, we use the root-(normalized-)mean-square difference (RNMSD or RMSD) and the fractional bias (FB) as comparison metrics (see Britter and Hanna, 2003), evaluated over successive x–y planes, to assess the effectiveness of the nesting approach in obstacle-resolving LES cases. RNMSD and RMSD provide a measure of the mean difference, composed of random scatter and systematic bias, whereas FB yields a specific measure of the systematic bias between the two solutions. The RMSD metric is defined by Eq. (18), while RNMSD and FB are defined as follows:

RNMSD = √( ⟨(ψ − ψ_Ref)²⟩ / (⟨ψ⟩ ⟨ψ_Ref⟩) ),

FB = (⟨ψ_Ref⟩ − ⟨ψ⟩) / (½ (⟨ψ⟩ + ⟨ψ_Ref⟩)),

where ψ is a generic prognostic variable from the considered case, while here ψ_Ref refers to the value from the reference simulation with 2 m resolution. RMSD is used instead of RNMSD in cases where the product of double-averaged quantities used for normalization approaches zero. Similarly, FB is only evaluated for the streamwise velocity component because the other components yield a near-zero denominator that contaminates the metric. The evaluations are performed for 15 x–y planes within the child domain (zone (⋆⋆) in Fig. 16) that are equally spaced over the range 0 ≤ z/H ≤ 1.5. We have excluded 128 and 64 m wide development zones at the boundaries in the x and y directions, respectively. When the coarse-reference (4 m resolution) solution is compared to the fine-reference (2 m resolution) solution, the coarse solution is interpolated onto the fine grid before the comparison metrics are evaluated.
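The two metrics can be sketched as follows (Python; the FB sign convention, reference minus simulation, is an assumption based on the usual Britter and Hanna, 2003, form):

```python
import numpy as np

def rnmsd(psi, psi_ref):
    """Root-normalized-mean-square difference over an x-y plane.

    The normalization uses the product of the two plane-averaged fields,
    which is why the metric breaks down when that product approaches zero.
    """
    psi = np.asarray(psi, dtype=float)
    psi_ref = np.asarray(psi_ref, dtype=float)
    diff2 = np.mean((psi - psi_ref) ** 2)
    return np.sqrt(diff2 / (np.mean(psi) * np.mean(psi_ref)))

def fractional_bias(psi, psi_ref):
    """Fractional bias: systematic deviation normalized by the mean of the
    two plane averages (sign convention assumed: reference minus case)."""
    m = np.mean(psi)
    m_ref = np.mean(psi_ref)
    return (m_ref - m) / (0.5 * (m + m_ref))
```

The near-zero-denominator caveat noted above for the spanwise and vertical velocity components applies to both functions.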
Both model variants (v1 and v2) are included in the analysis to demonstrate how the size and placement of the child domains affect the metrics and to illustrate the possibility of employing a cascade of nested domains. Although no comparison metrics are presented for the second child solution featuring 1 m resolution, its influence is embedded in the solution of the first child.
The RNMSD and RMSD profiles for the velocity components and their variances, depicted in Figs. 18 and 19, demonstrate the effectiveness of the presented nesting system and reveal the added benefit of the CR anterpolation. While all the coupling approaches succeed in significantly reducing the discrepancy from the fine reference, the conventional two-way coupling exhibits the most pronounced level of deviation. The FB results in Fig. 20 indicate that the two-way coupled solution also contains the most systematic deviation, which conforms with the pressure drag results.
The one-way coupling approach consistently performs better than the unmodified two-way coupled approach in all metrics, but it is also associated with a systematic bias that is larger than that of the coarse reference. However, if the modest systematic shift in streamwise velocity can be accepted, one-way coupling offers a cost-effective nesting approach (see Sect. 4.3 for performance measures). Nonetheless, the results indicate that the introduced CR anterpolation approach is the recommended coupling strategy for obstacle-resolving LES, as it provided the best metrics in every category.
4.3 Performance issues
Table 3 gives an overview of the consumed CPU time T_CPU, the wall-clock time T_wall, and some performance measures for the nested and the fine- and coarse-grid reference simulations of the hill and cube cases. T_CPU is defined here simply as N_procs T_wall, where N_procs is the number of parallel processes. As a rule of thumb, doubling the resolution increases the CPU time by approximately a factor of 16 (when the numerical time step is determined according to the Courant–Friedrichs–Lewy (CFL) criterion). This can be observed by comparing the coarse- and fine-grid reference simulations. Compared to the fine-grid reference simulations, the nested simulations consumed significantly less CPU time (up to 80 % reduction), while increasing the computational cost by factors of 3.8 and 3.4 in the hill and cube canopy cases, respectively, compared to the coarse-grid reference. The direct overhead due to the nesting, i.e., the fraction of the CPU time used by all the nesting-related operations during the time stepping, is reasonably small in these tests, only 2 % to 16 %. Also, the CPU time per grid point per time step increases only moderately due to the nesting. These figures differ considerably between the hill and the cube array cases, most likely because the cases were run on different computer systems. Although these factors depend on the child domain size, these tests make evident that the nesting technique can significantly reduce the computational cost while yielding results that closely adhere to the non-nested fine-resolution simulation.
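The factor-of-16 rule of thumb follows directly from the scaling of the grid-point count and the CFL-limited time step, as this small sketch makes explicit:

```python
def cost_factor(refinement):
    """Rule-of-thumb CPU-cost increase when refining an isotropic 3D grid.

    The number of grid points grows with refinement**3, and the CFL-limited
    time step shrinks by a factor of refinement, so the total work grows
    with refinement**4.
    """
    return refinement ** 4

# Doubling the resolution everywhere: cost_factor(2) == 16.
```
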
Due to the interpolation and anterpolation and the accompanying inter-model data transfer, the nesting itself consumes CPU time. In our tests the workload with respect to the number of grid points treated by a processor element was equal among the parent and child simulations. With this optimal configuration, two-way nesting consumed about 10 %–16 % of the CPU time, while one-way nesting consumed only about 2 %. This suggests that most of the CPU time taken by two-way nesting is spent in the anterpolation and the associated child-to-parent data transfer.
It is important to bear in mind that if the workload between child and parent processes is not well balanced, the faster processes must wait until the slower processes reach the data-transfer point before the transfer can start, reducing the computational efficiency of the nesting.
This article documents and evaluates an online LES–LES nesting scheme implemented in the PALM model system 6.0. The nesting system relies on the post-insertion approach and features both one-way and two-way coupling. We give a detailed description of the model's relevant technical, algorithmic, and numerical aspects and provide evidence for the accuracy gains the method introduces at a dramatically reduced computational cost compared to globally refined grid resolution. The nesting approach has proven particularly essential in urban boundary layer studies requiring obstacle-resolving LES.
The implementation of this three-dimensional nesting system is based on two-level parallelism involving inter-model and intra-model parallelization using MPI. This enables our nesting implementation to flexibly support multiple child domains, which can be nested within their parent domain either in a parallel or recursively cascading configuration. All solutions involved in the nested simulation are advanced using a globally synchronized time step, whereas the coupling between each parent–child pair is performed with interpolation (parent-to-child) and anterpolation (child-to-parent) operations.
The nesting method is evaluated by performing a series of numerical experiments with the objective of demonstrating that the refined child solution (nested within a coarser parent) approaches the non-nested reference solution obtained by employing fine resolution globally.
The first test case features a horizontally homogeneous convective boundary layer (CBL) with no mean wind. In this case, first- and second-order boundary layer statistics are well captured in the child domain and are closely comparable to the non-nested high-resolution reference statistics. Further, due to the local nature of turbulence production and the weak advection from the parent into the child, the flow statistics show almost no dependence on the distance to the child boundaries. However, for averaging times of several hours, we found that a non-physical secondary circulation develops despite the surface heating being homogeneous. We hypothesize that this secondary circulation is an inherent consequence of the spatially changing description of the flow physics in the parent and child solutions. Even though we demonstrated this issue with a rather idealized setup using an unrealistically long averaging time, and these non-physical circulations are probably minimized in less idealized simulations, e.g., those with wind, a diurnal cycle, or realistic terrain surfaces, we believe that this should be kept in mind when applying the nesting system to CBL problems.
The second test case simulated a neutrally stratified boundary layer flow over flat terrain. The nested simulations reveal that the flow solution within the child domain must undergo a development phase, during which it adjusts to the higher resolution before reaching an equilibrium state again. The required development length depends on the grid-spacing ratio between parent and child. However, a purely shear-driven flow over homogeneous flat terrain can be considered an extreme scenario with respect to the development length of turbulence, while in cases with more complex surface geometry the flow adapts within shorter development distances. Beyond the development distance, the child solutions for grid-spacing ratios of 2 and 3 agree well with the non-nested fine-reference solution, but for a grid-spacing ratio of 4 the results clearly deviate from the fine-reference solution.
The third numerical experiment featured a boundary layer flow (similar to the second test case) over a smooth three-dimensional hill. This test case also exploits wind tunnel measurements to strengthen the nesting model evaluation. In this case, the flow statistics in the windward and leeward parts of the hill are almost the same as in the fine-reference simulation and agree well with the wind tunnel observations presented in Ishihara et al. (1999).
The final test case examines a flow system in which a fully developed boundary layer flow becomes incident with a staggered arrangement of cube-shaped obstacles. This flow scenario closely resembles an obstacle-resolving urban boundary layer flow situation. The case revealed that in two-way coupled simulations, the anterpolated child solution introduces discrepancies within the parent domain, which manifest as elevated pressure drag within the anterpolated zone. This complication is remedied by introducing a canopy-restricted anterpolation approach in which anterpolation is omitted within the obstacle canopy. By computing the comparison metrics, root-normalized-mean-square difference and fractional bias, to quantify the difference between the fine-reference and nested solutions, the canopy-restricted two-way coupling is shown to be the best coupling strategy for obstacle-resolving LES studies.
Future outlook
Future development is planned to include the following tasks. The first is incorporation of PALM's Lagrangian particle model into the nesting system in order to enable Lagrangian dispersion studies in urban environments, such that particles can be transferred between parent and child domains depending on their position. Thus, the long-distance transport of, e.g., pollutants can be simulated in a coarse-resolution parent grid, while dispersion at the street scale for specific locations can be simulated in a fine-resolution child domain. We note that this has already been implemented in PALM and is available to users, but further sensitivity tests with respect to the treatment of stochastic subgrid-scale particle speeds (Weil et al., 2004) are still pending. A thorough description and verification of the particle nesting will be published in a follow-up article. Further, we note that the PALM model system 6.0 also includes a RANS (Reynolds-averaged Navier–Stokes) mode offering two different turbulence closures to calculate the eddy diffusivity, namely a TKE–l and a TKE–ϵ closure according to Mellor and Yamada (1974, 1982). Besides the LES–LES nesting, the nesting system is being extended to handle RANS–LES and RANS–RANS nesting, which require the coupling of additional RANS variables. Moreover, in a companion paper in this special issue we present a purely one-way offline mesoscale nesting method in which the PALM model system 6.0 is nested into mesoscale models such as COSMO or WRF. This will allow the modeling of mesoscale processes on a much larger coarse-grid domain, as shown by, e.g., Muñoz-Esparza et al. (2017), while concurrently focusing on fine-scale processes within certain areas using the present LES nesting approach.
Furthermore, to date the time step in all parent and child models has been synchronized and restricted to the minimum of the time steps determined by each model independently using the CFL criterion. In our experience, the global time step is often restricted by the flow around building edges, where high wind speeds occur within the fine-grid child domains. Hence, we plan to implement time splitting in PALM, whereby the parent and child models will be coupled only at the end of the parent time steps. This would allow coarser-scale parent domains to run with larger time steps, saving computational time both in the time integration of the parent simulation and in the inter-model communication between parent and child.
A1 General
The nested model system is implemented using two levels of MPI communicators. The inter-model communication (communication between model domains) is handled by a global communicator using the one-sided communication pattern (remote memory access, RMA). The intra-model communication (communication between subdomains within each model domain) is two-sided and handled using a 2D communicator that has a different color for each model. The intra-model communication system is the baseline parallelization of PALM (Maronga et al., 2015).
Data transferred from parent to child and from child to parent are always stored on the coarser parent model grid in order to minimize the amount of data transfer. This means that the interpolations and anterpolations are always performed by the child. For this purpose, children contain auxiliary arrays that follow the parent grid spacings and indexing for each prognostic variable to be coupled, covering the overlap domain plus the necessary number of ghost-node layers.
A2 Initialization
Mappings between the parent and child model domain decompositions and all the necessary index mappings are determined in the initialization phase and stored so that the coupling actions during the time stepping are straightforward and efficient.
Initial conditions for the root are set as in non-nested runs. The root then sends initial field data to its children, which interpolate their own initial conditions from the data received from the root. Next, the first-level children send their data to their children, if any, and so on. The basic interpolation subroutines for child boundary conditions operate only on the ghost nodes behind the child model boundaries. Therefore, a separate three-dimensional interpolation subroutine is implemented to generate initial fields for all the nest domains from their parent model fields. The same interpolation algorithm is used here as in the interpolations for the child boundary conditions.
A3 Modularization
The data transfer between parents and children is conducted by code contained in five specific Fortran modules forming a module set called the PALM model coupler (PMC). Calls to the PMC subroutines are mostly made in the PMC interface module (pmc_interface_mod.f90), such that only a small number of calls to the PMC interface subroutines are needed within the baseline PALM code. In this way, the changes to the baseline code were kept minimal. The PMC interface module also contains subroutines for the nesting-related initialization actions, interpolation, anterpolation, child mass-balance forcing, etc.
A4 MPI implementation
While reading the input namelists, the PALM root process checks whether a namelist called “&nesting_parameters” is given in the parameter input file PARIN. If not, the subroutine pmc_init_model resets all nesting-related parameters (coupling_layout, etc.) and sets MPI_COMM_WORLD as the base global MPI communicator comm_palm. The run then continues in the standard way without nesting. If the namelist “&nesting_parameters” is found and correctly specified, the root process of the root model distributes this information to all other processes via MPI_COMM_WORLD. Following this, all the necessary nesting-related parameters are determined, and the base communicator is split into different colors for each model based on the model identification number. The term color here means that the communicator has the same name for all models (process groups), but the communicators are nevertheless individual, guaranteeing that the communication of one model does not interfere with that of the others. The splitting is performed by calling MPI_COMM_SPLIT. Now each model has its own process group and associated individual base communicator color, such that each model's internal communication is not visible to other models. After this, the mappings between models are determined. Each model, except the root model, identifies its parent model and creates an intercommunicator between the process groups of itself and its parent model. This is realized by calling MPI_INTERCOMM_CREATE. In the same way, each model identifies all of its children, if any, and creates intercommunicators between the process groups of itself and all of its children. These intercommunicators are only used to transfer setup data between the root processes of the parent and child models. For 3D model data transfers between parent and child, a specific intracommunicator is created by merging the intercommunicators within the remote process groups.
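The color-splitting step can be illustrated with a small non-MPI model of the MPI_COMM_SPLIT semantics. This Python sketch is purely illustrative; the mapping data structure is hypothetical and stands in for the per-process model identification number that PALM passes as the color argument.

```python
def split_by_color(ranks_to_model):
    """Illustrative (non-MPI) model of MPI_COMM_SPLIT.

    ranks_to_model maps each global rank to its model identification
    number (the "color"). Ranks sharing a color end up in the same group,
    receiving new local ranks 0..n-1 in ascending global-rank order,
    mirroring how comm_palm is split per nested model.
    """
    groups = {}
    for global_rank in sorted(ranks_to_model):
        color = ranks_to_model[global_rank]
        members = groups.setdefault(color, [])
        members.append(global_rank)  # new local rank = position in the list
    return groups
```

Each resulting group corresponds to one model's private communicator, which is why the internal communication of one model cannot interfere with that of another.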
This is done after pmc_init_model, separately for child and parent models (note that a model may be both parent and child), in the subroutines pmc_childinit and pmc_parentinit by calling MPI_INTERCOMM_MERGE. After the PMC initialization, each model's run proceeds as usual. A Cartesian-topology-based communicator comm_2d is created by each model from its color of the base communicator comm_palm using MPI_CART_CREATE.
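The communicator setup described above can be sketched without running MPI. The following Python snippet is only a schematic illustration: the model ids, rank layout, and helper names are hypothetical. It mimics what MPI_COMM_SPLIT with color = model id achieves (one private process group per model) and how the parent–child process-group pairs would subsequently be connected by MPI_INTERCOMM_CREATE.

```python
from collections import defaultdict

# Hypothetical nest hierarchy: the model id of every global MPI rank and
# each model's parent id (None for the root model), mirroring the setup
# data that pmc_init_model distributes over MPI_COMM_WORLD.
model_of_rank = [0, 0, 0, 0, 1, 1, 2, 2, 2]   # 9 global ranks, 3 models
parent_of_model = {0: None, 1: 0, 2: 1}       # model 2 nested inside model 1

def split_by_color(model_of_rank):
    """Emulates MPI_COMM_SPLIT(comm, color=model_id, ...): every rank ends
    up in a private process group containing only its own model's ranks."""
    groups = defaultdict(list)
    for rank, color in enumerate(model_of_rank):
        groups[color].append(rank)
    return dict(groups)

groups = split_by_color(model_of_rank)

# Each non-root model would next call MPI_INTERCOMM_CREATE between its own
# process group and its parent's group; here we simply pair the rank lists.
intercomms = {m: (groups[p], groups[m])
              for m, p in parent_of_model.items() if p is not None}

assert groups == {0: [0, 1, 2, 3], 1: [4, 5], 2: [6, 7, 8]}
assert intercomms == {1: ([0, 1, 2, 3], [4, 5]), 2: ([4, 5], [6, 7, 8])}
```

Because the split yields a separate group per color, a model's internal boundary-exchange traffic stays inside its own group, exactly as described above.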
The model-internal communication is done in the usual way, i.e., by calling the boundary-exchange routines. All data transfer between parent and child models is done within the PMC interface. For this communication, MPI one-sided communication (remote memory access, RMA) is used. An RMA window is opened on the parent side. To transfer data from parent to child, the parent fills the RMA window via a local copy. After synchronization via MPI_WIN_FENCE, the child processes can fetch the data across the network with MPI_GET. To transfer data in the opposite direction, the child first transfers the data via MPI_PUT. After another MPI_WIN_FENCE call, the parent copies the data out of the RMA window into the local model data area.
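The fence-synchronized pattern can be illustrated schematically without MPI. In the following Python sketch, which is an analogy rather than MPI code, a shared list stands in for the parent-side RMA window and threading.Barrier plays the role of MPI_WIN_FENCE separating the access epochs; the variable names are illustrative only.

```python
import threading

window = [None]               # stand-in for the RMA window on the parent side
fence = threading.Barrier(2)  # stand-in for MPI_WIN_FENCE: both sides must reach it
received = []

def parent():
    window[0] = [1.0, 2.0, 3.0]  # parent fills the window via a local copy
    fence.wait()                 # "MPI_WIN_FENCE": the data is now exposed

def child():
    fence.wait()                       # wait until the parent's fence call
    received.append(list(window[0]))   # "MPI_GET" across the network

t_parent = threading.Thread(target=parent)
t_child = threading.Thread(target=child)
t_parent.start(); t_child.start()
t_parent.join(); t_child.join()

assert received == [[1.0, 2.0, 3.0]]  # child sees the parent's data only after the fence
```

The child-to-parent direction follows the mirrored pattern: a put into the window, a second fence, and a local copy on the parent side.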
Should higher interpolation accuracy across the boundaries be sought, the following considerations are relevant. As stated by Zhou et al. (2018), to satisfy the global flux-conservation requirement, one of the two flux factors, either the advective velocity component or the advected variable, must be constant within the anterpolation cell, which implies zeroth-order interpolation. The other factor may be interpolated using any reversible interpolation scheme.
As stated above, the quadratic Clark and Farley (1984) scheme should not be used because it employs a stencil wider than the parent grid cell, which leads to problems with complex geometries. Trilinear interpolation, on the other hand, has a favorable stencil width, but it is not suitable as such because it is not reversible. However, linearly interpolated values $\tilde{\varphi}_{i,j,k}$ can be made reversible by introducing an additional correction, $\varphi_{i,j,k}=\tilde{\varphi}_{i,j,k}+\Delta\varphi_{i,j,k}$. The reversibility correction $\Delta\varphi_{i,j,k}$ depends on the difference $\Delta\Phi_{I,J,K}$ between the original parent grid value $\Phi_{I,J,K}$ and the value obtained by anterpolating the linearly interpolated values back to the parent grid node $I,J,K$,

$$\Delta\Phi_{I,J,K}=\Phi_{I,J,K}-\widehat{\tilde{\varphi}}_{I,J,K}.$$
$\Delta\Phi_{I,J,K}$ is constant within the parent grid cell, and hence the question arises of how to distribute the correction to the child grid nodes $i,j,k$ such that $\widehat{\Delta\varphi}_{I,J,K}=\Delta\Phi_{I,J,K}$. The simplest choice is $\Delta\varphi_{i,j,k}=\Delta\Phi_{I,J,K}$, but this choice is not advisable for positive-definite scalar variables, as it could lead to negative values when $\Phi$ is close to zero. In principle, this problem could be avoided by weighting the local corrections in proportion to the local differences $\Phi_{I,J,K}-\tilde{\varphi}_{i,j,k}$, but this simply reduces the method back to the zeroth-order baseline method. To make this approach useful, a more advanced technique for distributing the correction would have to be developed. However, this is beyond the scope of the present work, as stated in Sect. 3.4.4.
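The reversibility correction can be verified numerically in one dimension, where trilinear interpolation reduces to linear interpolation. In the following Python sketch (the grid sizes and the uniform distribution of the cell-constant correction are illustrative assumptions), anterpolation is the plain average over the child cells of a parent cell, and adding $\Delta\Phi_I$ to every child node of cell $I$ restores the parent values exactly under anterpolation.

```python
import numpy as np

r = 3                      # refinement ratio: child cells per parent cell
N = 8                      # number of parent grid cells (1D for clarity)
rng = np.random.default_rng(0)
Phi = rng.random(N)        # parent-grid values Phi_I at cell centers

# Cell-center coordinates of both grids (hypothetical unit parent spacing).
x_parent = np.arange(N) + 0.5
x_child = (np.arange(N * r) + 0.5) / r

# In 1D, trilinear interpolation reduces to linear interpolation.
phi_tilde = np.interp(x_child, x_parent, Phi)

def anterpolate(child_values):
    # Anterpolation: average the r child values inside each parent cell.
    return child_values.reshape(N, r).mean(axis=1)

# Reversibility defect of plain linear interpolation:
# dPhi_I = Phi_I - anterpolate(phi_tilde)_I, constant per parent cell.
dPhi = Phi - anterpolate(phi_tilde)

# Simplest distribution: add the cell-constant correction to all child nodes.
phi = phi_tilde + np.repeat(dPhi, r)

# The corrected field anterpolates back to the parent values exactly.
assert np.allclose(anterpolate(phi), Phi)
```

The sketch also makes the drawback discussed above visible: for $\Phi_I$ near zero, the additive correction can drive individual child values negative.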
The PALM model system is freely available at http://palm-model.org (last access: 31 May 2021) and distributed under the GNU General Public License v3 (http://www.gnu.org/copyleft/gpl.html, last access: 31 May 2021). However, the simulations presented in this document were performed using a slightly modified code based on revision 4295. This modified source code (4295M), as well as the input files for the test runs, is available at https://doi.org/10.25835/0090593 (Hellsten et al., 2020). Numerous pre- and post-processing scripts are available at https://doi.org/10.5281/zenodo.4005687 (Auvinen et al., 2020b).
The study was coordinated by AH and SR. The inter-model communication was designed and implemented by KK. The theoretical considerations and the implementation of the nesting interface were carried out mainly by AH, with contributions from SR, MA, MS, and BM. Simulations, post-processing, and analysis of the model results were performed by AH, MS, MA, BM, CK, FB, GT, and NM. The manuscript was drafted by AH, MA, MS, and BM and revised by all authors.
The authors declare that they have no conflict of interest.
Test runs with PALM have been performed on the supercomputers of the North-German Supercomputing Alliance (HLRN), Germany, and of CSC – IT Center for Science, Finland. We wish to thank the anonymous referees, as well as Sebastian Giersch at Leibniz Universität Hannover and Jukka-Pekka Keskinen at the Finnish Meteorological Institute, for their help in improving the manuscript.
This research has been supported by the Academy of Finland (grant no. 277664), the Federal German Ministry of Education and Research (BMBF) (grant no. 01LP1601), and the Federal German Ministry of Economy and Energy (BMU) (grant no. 0325719C).
This paper was edited by Paul Ullrich and reviewed by two anonymous referees.
Ahmad, N. H., Inagaki, A., Kanda, M., Onodera, N., and Aoki, T.: Large-eddy simulation of the gust index in an urban area using the lattice Boltzmann method, Bound.-Lay. Meteorol., 163, 447–467, 2017.
Aidun, C. K. and Clausen, J. R.: Lattice-Boltzmann method for complex flows, Annu. Rev. Fluid Mech., 42, 439–472, 2010.
Auvinen, M., Boi, S., Hellsten, A., Tanhuanpää, T., and Järvi, L.: Study of Realistic Urban Boundary Layer Turbulence with High-Resolution Large-Eddy Simulation, Atmosphere, 11, 201, https://doi.org/10.3390/atmos11020201, 2020a.
Auvinen, M., Karttunen, S., and Kurppa, M.: P4UL: Pre- and Post-Processing Python Library for Urban LES Simulations, Zenodo, https://doi.org/10.5281/zenodo.4005687, 2020b.
Bou-Zeid, E., Overney, J., Rogers, B. D., and Parlange, M. B.: The Effects of Building Representation and Clustering in Large-Eddy Simulations of Flows in Urban Canopies, Bound.-Lay. Meteorol., 132, 415–436, https://doi.org/10.1007/s10546-009-9410-6, 2009.
Britter, R. E. and Hanna, S. R.: Flow and Dispersion in Urban Areas, Annu. Rev. Fluid Mech., 35, 469–496, https://doi.org/10.1146/annurev.fluid.35.101101.161147, 2003.
Buccolieri, R. and Hang, J.: Recent Advances in Urban Ventilation Assessment and Flow Modelling, Atmosphere, 10, 144, https://doi.org/10.3390/atmos10030144, 2019.
Chung, D. and McKeon, B. J.: Large-eddy simulation of large-scale structures in long channel flow, J. Fluid Mech., 661, 341–364, https://doi.org/10.1017/S0022112010002995, 2010.
Clark, T. and Farley, R.: Severe downslope windstorm calculations in two and three spatial dimensions using anelastic interactive grid nesting: A possible mechanism for gustiness, J. Atmos. Sci., 41, 329–350, 1984.
Clark, T. and Hall, W.: Multi-domain simulations of the time-dependent Navier-Stokes equations: benchmark error analysis of some nesting procedures, J. Comput. Phys., 92, 456–481, 1991.
Daniels, M., Lundquist, K., Mirocha, J., Wiersema, D., and Chow, F.: A new vertical grid nesting capability in the Weather Research and Forecasting (WRF) Model, Mon. Weather Rev., 144, 3725–3747, 2016.
de Roode, S. R., Duynkerke, P. G., and Jonker, H. J.: Large-eddy simulation: How large is large enough?, J. Atmos. Sci., 61, 403–421, 2004.
Deardorff, J.: Stratocumulus-capped mixed layers derived from a three-dimensional model, Bound.-Lay. Meteorol., 18, 495–527, 1980.
Diebold, M., Higgins, C., Fang, J., Bechmann, A., and Parlange, M. B.: Flow over Hills: A Large-Eddy Simulation of the Bolund Case, Bound.-Lay. Meteorol., 148, 177–194, https://doi.org/10.1007/s10546-013-9807-0, 2013.
Fishpool, G. M., Lardeau, S., and Leschziner, M. A.: Persistent Non-Homogeneous Features in Periodic Channel-Flow Simulations, Flow Turbul. Combust., 83, 323–342, https://doi.org/10.1007/s10494-009-9209-z, 2009.
Gehrke, K. F., Sühring, M., and Maronga, B.: Modeling of land-surface interactions in the PALM model system 6.0: Land surface model description, first evaluation, and sensitivity to model parameters, Geosci. Model Dev. Discuss. [preprint], https://doi.org/10.5194/gmd-2020-197, in review, 2020.
Giometto, M., Christen, A., Meneveau, C., Fang, J., Krafczyk, M., and Parlange, M.: Spatial Characteristics of Roughness Sublayer Mean Flow and Turbulence Over a Realistic Urban Surface, Bound.-Lay. Meteorol., 160, 425–452, 2016.
Gropp, W., Lusk, E., and Skjellum, A.: Using MPI: Portable parallel programming with the Message Passing Interface, 2nd edition, MIT Press, Cambridge, MA, 1999.
Hackbusch, W.: Multigrid methods and applications, Springer, Berlin, Heidelberg, New York, 378 pp., 1985.
Heinze, R., Dipankar, A., Henken, C. C., Moseley, C., Sourdeval, O., Trömel, S., Xie, X., Adamidis, P., Ament, F., Baars, H., Barthlott, C., Behrendt, A., Blahak, U., Bley, S., Brdar, S., Brueck, M., Crewell, S., Deneke, H., Di Girolamo, P., Evaristo, R., Fischer, J., Frank, C., Friederichs, P., Göcke, T., Gorges, K., Hande, L., Hanke, M., Hansen, A., Hege, H.-C., Hoose, C., Jahns, T., Kalthoff, N., Klocke, D., Kneifel, S., Knippertz, P., Kuhn, A., van Laar, T., Macke, A., Maurer, V., Mayer, B., Meyer, C. I., Muppa, S. K., Neggers, R. A. J., Orlandi, E., Pantillon, F., Pospichal, B., Röber, N., Scheck, L., Seifert, A., Seifert, P., Senf, F., Siligam, P., Simmer, C., Steinke, S., Stevens, B., Wapler, K., Weniger, M., Wulfmeyer, V., Zängl, G., Zhang, D., and Quaas, J.: Large-eddy simulations over Germany using ICON: a comprehensive evaluation, Q. J. Roy. Meteor. Soc., 143, 69–100, https://doi.org/10.1002/qj.2947, 2017.
Hellsten, A. and Zilitinkevich, S.: Role of convective structures and background turbulence in the dry convective boundary layer, Bound.-Lay. Meteorol., 149, 323–353, 2013.
Hellsten, A., Ketelsen, K., Sühring, M., Auvinen, M., Maronga, B., Knigge, C., Barmpas, F., Tsegas, G., Moussiopoulos, N., and Raasch, S.: Dataset: A Nested Multi-Scale System Implemented in the Large-Eddy Simulation Model PALM model system 6.0, PALM input files and source code, Leibniz Universität Hannover, https://doi.org/10.25835/0090593, 2020.
Heus, T., van Heerwaarden, C. C., Jonker, H. J. J., Pier Siebesma, A., Axelsen, S., van den Dries, K., Geoffroy, O., Moene, A. F., Pino, D., de Roode, S. R., and Vilà-Guerau de Arellano, J.: Formulation of the Dutch Atmospheric Large-Eddy Simulation (DALES) and overview of its applications, Geosci. Model Dev., 3, 415–444, https://doi.org/10.5194/gmd-3-415-2010, 2010.
Huq, S., De Roo, F., Raasch, S., and Mauder, M.: Vertically nested LES for high-resolution simulation of the surface layer in PALM (version 5.0), Geosci. Model Dev., 12, 2523–2538, https://doi.org/10.5194/gmd-12-2523-2019, 2019.
Hutchins, N. and Marusic, I.: Evidence of very long meandering features in the logarithmic region of turbulent boundary layers, J. Fluid Mech., 579, 1–28, 2007.
Hutchins, N., Chauhan, K., Marusic, I., Monty, J., and Klewicki, J.: Towards reconciling the large-scale structure of turbulent boundary layers in the atmosphere and laboratory, Bound.-Lay. Meteorol., 145, 273–306, 2012.
Ishihara, T., Hibi, K., and Oikawa, S.: A wind tunnel study of turbulent flow over a three-dimensional steep hill, J. Wind Eng. Ind. Aerod., 83, 95–107, https://doi.org/10.1016/S0167-6105(99)00064-1, 1999.
Karttunen, S., Kurppa, M., Auvinen, M., Hellsten, A., and Järvi, L.: Large-eddy simulation of the optimal street-tree layout for pedestrian-level aerosol particle concentrations – A case study from a city boulevard, Atmos. Environ.: X, 6, 100073, https://doi.org/10.1016/j.aeaoa.2020.100073, 2020.
Kataoka, H. and Mizuno, M.: Numerical flow computation around aeroelastic 3D square cylinder using inflow turbulence, Wind and Structures, 5, 379–392, 2002.
Khan, B., Banzhaf, S., Chan, E. C., Forkel, R., Kanani-Sühring, F., Ketelsen, K., Kurppa, M., Maronga, B., Mauder, M., Raasch, S., Russo, E., Schaap, M., and Sühring, M.: Development of an atmospheric chemistry model coupled to the PALM model system 6.0: implementation and first applications, Geosci. Model Dev., 14, 1171–1193, https://doi.org/10.5194/gmd-14-1171-2021, 2021.
Krč, P., Resler, J., Sühring, M., Schubert, S., Salim, M. H., and Fuka, V.: Radiative Transfer Model 3.0 integrated into the PALM model system 6.0, Geosci. Model Dev. Discuss. [preprint], https://doi.org/10.5194/gmd-2020-168, in review, 2020.
Kurihara, Y., Tripoli, G., and Bender, M.: Design of a Movable Nested-Mesh Primitive Equation Model, Mon. Weather Rev., 107, 239–249, 1979.
Kurppa, M., Hellsten, A., Roldin, P., Kokkola, H., Tonttila, J., Auvinen, M., Kent, C., Kumar, P., Maronga, B., and Järvi, L.: Implementation of the sectional aerosol module SALSA2.0 into the PALM model system 6.0: model development and first evaluation, Geosci. Model Dev., 12, 1403–1422, https://doi.org/10.5194/gmd-12-1403-2019, 2019.
Kurppa, M., Roldin, P., Strömberg, J., Balling, A., Karttunen, S., Kuuluvainen, H., Niemi, J. V., Pirjola, L., Rönkkö, T., Timonen, H., Hellsten, A., and Järvi, L.: Sensitivity of spatial aerosol particle distributions to the boundary conditions in the PALM model system 6.0, Geosci. Model Dev., 13, 5663–5685, https://doi.org/10.5194/gmd-13-5663-2020, 2020.
Letzel, M., Krane, M., and Raasch, S.: High resolution urban large-eddy simulation studies from street canyon to neighbourhood scale, Atmos. Environ., 42, 8770–8784, 2008.
Maronga, B. and Raasch, S.: Large-Eddy Simulations of Surface Heterogeneity Effects on the Convective Boundary Layer During the LITFASS-2003 Experiment, Bound.-Lay. Meteorol., 146, 17–44, https://doi.org/10.1007/s10546-012-9748-z, 2013.
Maronga, B., Gryschka, M., Heinze, R., Hoffmann, F., Kanani-Sühring, F., Keck, M., Ketelsen, K., Letzel, M. O., Sühring, M., and Raasch, S.: The Parallelized Large-Eddy Simulation Model (PALM) version 4.0 for atmospheric and oceanic flows: model formulation, recent developments, and future perspectives, Geosci. Model Dev., 8, 2515–2551, https://doi.org/10.5194/gmd-8-2515-2015, 2015.
Maronga, B., Gross, G., Raasch, S., Banzhaf, S., Forkel, R., Heldens, W., Kanani-Sühring, F., Matzarakis, A., Mauder, M., Pavlik, D., Pfafferott, J., Schubert, S., Seckmeyer, G., Sieker, H., and Winderlich, K.: Development of a new urban climate model based on the model PALM – Project overview, planned work, and first achievements, Meteorol. Z., 28, 105–119, https://doi.org/10.1127/metz/2019/0909, 2019.
Maronga, B., Banzhaf, S., Burmeister, C., Esch, T., Forkel, R., Fröhlich, D., Fuka, V., Gehrke, K. F., Geletič, J., Giersch, S., Gronemeier, T., Groß, G., Heldens, W., Hellsten, A., Hoffmann, F., Inagaki, A., Kadasch, E., Kanani-Sühring, F., Ketelsen, K., Khan, B. A., Knigge, C., Knoop, H., Krč, P., Kurppa, M., Maamari, H., Matzarakis, A., Mauder, M., Pallasch, M., Pavlik, D., Pfafferott, J., Resler, J., Rissmann, S., Russo, E., Salim, M., Schrempf, M., Schwenkel, J., Seckmeyer, G., Schubert, S., Sühring, M., von Tils, R., Vollmer, L., Ward, S., Witha, B., Wurps, H., Zeidler, J., and Raasch, S.: Overview of the PALM model system 6.0, Geosci. Model Dev., 13, 1335–1372, https://doi.org/10.5194/gmd-13-1335-2020, 2020.
Mellor, G. L. and Yamada, T.: A Hierarchy of Turbulence Closure Models for Planetary Boundary Layers, J. Atmos. Sci., 31, 1791–1806, https://doi.org/10.1175/1520-0469(1974)031<1791:AHOTCM>2.0.CO;2, 1974.
Mellor, G. L. and Yamada, T.: Development of a turbulence closure model for geophysical fluid problems, Rev. Geophys., 20, 851–875, https://doi.org/10.1029/RG020i004p00851, 1982.
Miller, M. J. and Thorpe, A. J.: Radiation conditions for the lateral boundaries of limited-area numerical models, Q. J. Roy. Meteor. Soc., 107, 615–628, https://doi.org/10.1002/qj.49710745310, 1981.
Mirocha, J., Kirkil, G., Bou-Zeid, E., Chow, F. K., and Kosović, B.: Transition and equilibration of neutral atmospheric boundary layer flow in one-way nested large-eddy simulations using the Weather Research and Forecasting model, Mon. Weather Rev., 141, 918–940, https://doi.org/10.1175/MWR-D-11-00263.1, 2013.
Moeng, C., Dudhia, J., Klemp, J., and Sullivan, P.: Examining two-way grid nesting for large-eddy simulation of the PBL using the WRF model, Mon. Weather Rev., 135, 2295–2311, 2007.
Muñoz-Esparza, D., Kosović, B., García-Sánchez, C., and van Beeck, J.: Nesting turbulence in an offshore convective boundary layer using large-eddy simulations, Bound.-Lay. Meteorol., 151, 453–478, 2014.
Muñoz-Esparza, D., Lundquist, J. K., Sauer, J. A., Kosović, B., and Linn, R. R.: Coupled mesoscale-LES modeling of a diurnal cycle during the CWEX-13 field campaign: From weather to boundary-layer eddies, J. Adv. Model. Earth Syst., 9, 1572–1594, https://doi.org/10.1002/2017MS000960, 2017.
Munters, W., Meneveau, C., and Meyers, J.: Shifted periodic boundary conditions for simulations of wall-bounded turbulent flows, Phys. Fluids, 28, 025112, https://doi.org/10.1063/1.4941912, 2016.
Nakayama, H., Takemi, T., and Nagai, H.: Development of LOcal-scale High-resolution atmospheric DIspersion Model using Large-Eddy Simulation. Part 5: detailed simulation of turbulent flows and plume dispersion in an actual urban area under real meteorological conditions, J. Nucl. Sci. Technol., 53, 887–908, https://doi.org/10.1080/00223131.2015.1078262, 2016.
Nunalee, C. G., Kosović, B., and Bieringer, P. E.: Eulerian dispersion modeling with WRF-LES of plume impingement in neutrally and stably stratified turbulent boundary layers, Atmos. Environ., 99, 571–581, https://doi.org/10.1016/j.atmosenv.2014.09.070, 2014.
Patrinos, A. N. A. and Kistler, A. L.: A numerical study of the Chicago lake breeze, Bound.-Lay. Meteorol., 12, 93–123, 1977.
Raasch, S. and Schröter, M.: PALM – A large-eddy simulation model performing on massively parallel computers, Meteorol. Z., 10, 363–372, 2001.
Resler, J., Krč, P., Belda, M., Juruš, P., Benešová, N., Lopata, J., Vlček, O., Damašková, D., Eben, K., Derbek, P., Maronga, B., and Kanani-Sühring, F.: PALM-USM v1.0: A new urban surface model integrated into the PALM large-eddy simulation model, Geosci. Model Dev., 10, 3635–3659, https://doi.org/10.5194/gmd-10-3635-2017, 2017.
Saiki, E. M., Moeng, C.-H., and Sullivan, P. P.: Large-eddy simulation of the stably stratified planetary boundary layer, Bound.-Lay. Meteorol., 95, 1–30, 2000.
Skamarock, W. C., Klemp, J. B., Dudhia, J., Gill, D. O., Barker, D. M., Duda, M. G., Huang, X.-Y., Wang, W., and Powers, J. G.: A description of the Advanced Research WRF version 3, Tech. Rep. NCAR/TN-475+STR, NCAR, 2008.
Sühring, M., Maronga, B., Herbort, F., and Raasch, S.: On the Effect of Surface Heat-Flux Heterogeneities on the Mixed-Layer-Top Entrainment, Bound.-Lay. Meteorol., 151, 531–556, https://doi.org/10.1007/s10546-014-9913-7, 2014.
Sullivan, P. and Patton, E.: The effect of mesh resolution on convective boundary layer statistics and structures generated by large-eddy simulation, J. Atmos. Sci., 68, 2395–2415, 2011.
Sullivan, P., McWilliams, J., and Moeng, C.-H.: A grid nesting method for large-eddy simulation of planetary boundary-layer flow, Bound.-Lay. Meteorol., 80, 167–202, 1996.
Tominaga, Y. and Stathopoulos, T.: CFD simulation of near-field pollutant dispersion in the urban environment: A review of current modeling techniques, Atmos. Environ., 79, 716–730, https://doi.org/10.1016/j.atmosenv.2013.07.028, 2013.
Tseng, Y.-H., Meneveau, C., and Parlange, M. B.: Modeling flow around bluff bodies and predicting urban dispersion using large eddy simulation, Environ. Sci. Technol., 40, 2653–2662, 2006.
Vonlanthen, M., Allegrini, J., and Carmeliet, J.: Assessment of a one-way nesting procedure for obstacle resolved large eddy simulation of the ABL, Comput. Fluids, 140, 136–147, https://doi.org/10.1016/j.compfluid.2016.09.016, 2016.
Vonlanthen, M., Allegrini, J., and Carmeliet, J.: Multiscale interaction between a cluster of buildings and the ABL developing over a real terrain, Urban Climate, 20, 1–19, https://doi.org/10.1016/j.uclim.2017.02.009, 2017.
Weil, J., Sullivan, P., and Moeng, C.: The use of large-eddy simulations in Lagrangian particle dispersion models, J. Atmos. Sci., 61, 2877–2887, 2004.
Wicker, L. J. and Skamarock, W. C.: Time-splitting methods for elastic models using forward time schemes, Mon. Weather Rev., 130, 2088–2097, 2002.
Wiersema, D. J., Lundquist, K. A., and Chow, F. K.: Mesoscale to microscale simulations over complex terrain with the immersed boundary method in the Weather Research and Forecasting Model, Mon. Weather Rev., 148, 577–595, 2020.
Williamson, J. H.: Low-storage Runge-Kutta schemes, J. Comput. Phys., 35, 48–56, 1980.
Xie, Z. and Castro, I.: LES and RANS for turbulent flows over arrays of wall-mounted obstacles, Flow Turbul. Combust., 76, 291–312, 2006.
Zängl, G., Reinert, D., Rípodas, P., and Baldauf, M.: The ICON (ICOsahedral Nonhydrostatic) modelling framework of DWD and MPI-M: Description of the nonhydrostatic dynamical core, Q. J. Roy. Meteor. Soc., 141, 563–579, https://doi.org/10.1002/qj.2378, 2015.
Zhou, B., Xue, M., and Zhu, K.: A grid-refinement-based approach for modelling the convective boundary layer in the gray zone: Algorithm implementation and testing, J. Atmos. Sci., 75, 1143–1161, 2018.
Zhu, P., Albrecht, B. A., Ghate, V. P., and Zhu, Z.: Multiple-scale simulations of stratocumulus clouds, J. Geophys. Res.-Atmos., 115, D23201, https://doi.org/10.1029/2010JD014400, 2010.
 Abstract
 Introduction
 The PALM model system 6.0 (LES mode)
 Nesting system
 Numerical experiments
 Conclusions and future outlook
 Appendix A: Technical realization
 Appendix B: Thoughts on an alternative interpolation method
 Code and data availability
 Author contributions
 Competing interests
 Acknowledgements
 Financial support
 Review statement
 References