Lagrangian particle dispersion models require interpolation of all meteorological input variables to the position in space and time of computational particles. The widely used model FLEXPART uses linear interpolation for this purpose, implying that the discrete input fields contain point values. As this is not the case for precipitation (and other fluxes) which represent cell averages or integrals, a preprocessing scheme is applied which ensures the conservation of the integral quantity with the linear interpolation in FLEXPART, at least for the temporal dimension. However, this mass conservation is not ensured per grid cell, and the scheme thus has undesirable properties such as temporal smoothing of the precipitation rates. Therefore, a new reconstruction algorithm was developed, in two variants. It introduces additional supporting grid points in each time interval and is to be used with a piecewise linear interpolation to reconstruct the precipitation time series in FLEXPART. It fulfils the desired requirements by preserving the integral precipitation in each time interval, guaranteeing continuity at interval boundaries, and maintaining non-negativity. The function values of the reconstruction algorithm at the sub-grid and boundary grid points constitute the degrees of freedom, which can be prescribed in various ways. With the requirements mentioned it was possible to derive a suitable piecewise linear reconstruction. To improve the monotonicity behaviour, two versions of a filter were also developed that form a part of the final algorithm. Currently, the algorithm is meant primarily for the temporal dimension. It was shown to significantly improve the reconstruction of hourly precipitation time series from 3-hourly input data. Preliminary considerations for the extension to additional dimensions are also included as well as suggestions for a range of possible applications beyond the case of precipitation in a Lagrangian particle model.

In numerical models, extensive
variables (those being proportional to the volume or area that they
represent, e.g. mass and energy) are usually discretised as grid-cell
integral values so that conservation properties can be fulfilled. A typical
example is the precipitation flux in a meteorological forecasting model.
Usually, one is interested in the precipitation at the surface, and thus the
quantity of interest is a two-dimensional horizontal integral (coordinates

In Lagrangian particle dispersion models (LPDMs)

Illustration of the basic problem using an isolated precipitation event lasting one time interval, represented by the thick blue line. The amount of precipitation is given by the blue-shaded area. A simple discretisation would use the green circles as the discrete grid-point representation and interpolate linearly in between, as indicated by the green line and the green-shaded area. Note that the supporting points for the interpolation are shifted by half a time interval compared to the times when other meteorological fields are available.

FLEXPART is a LPDM, which is typically applied to study air pollution but is
also used for other problems requiring the quantification of atmospheric
transport, such as the global water cycle or the exchange between the
stratosphere and the troposphere; see

This software is
available from the FLEXPART community website in different versions as
described in

Example of the so-called “disaggregation” of precipitation data for
use in FLEXPART as currently implemented, with the case of an isolated
precipitation period. Note that the supporting points for the interpolation
now coincide with the times when other meteorological fields are available.
Colours are used as in Fig.

Horizontally, the precipitation values are averages for a grid cell around
the grid point to which they are ascribed, and FLEXPART uses bilinear
interpolation to obtain precipitation rates at particle positions. This
causes the same problem of spreading the information out to the neighbouring
grid cells, with the implied smoothing.

In reality, the problem is even more complex. In ECMWF's MARS archive, variables such as precipitation are stored on a reduced Gaussian grid, and upon extraction to the latitude–longitude grid they are interpolated without paying attention to mass conservation. This needs to be addressed in the future at the level of the software used internally by MARS. Our discussion here assumes that this has already happened; even if that is not the case, adding another step of non-mass-conserving interpolation only makes things worse.

However, the supporting points in space are not shifted between precipitation and other variables as is the case for the temporal dimension.

The goal of this work is to develop a reconstruction algorithm for the
one-dimensional temporal setting which

strictly conserves the amount of precipitation within each single time interval,

preserves the non-negativity,

is continuous at the interval borders,

ideally also reflects a natural precipitation curve (this condition can be understood in the sense that the reconstruction graph should possess good monotonicity properties),

and is efficient and easy to implement within the existing framework of the FLEXPART code and its data extraction preprocessor.
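For later reference, the first three (strict) requirements can be checked numerically once a reconstruction is given as supporting-point values on the sub-grid. A minimal sketch for illustration (function and argument names are our own, not from the paper's supplement), assuming linear interpolation between the supporting points:

```python
import numpy as np

def check_requirements(t, f, edges, amounts, tol=1e-10):
    """Check conservation and non-negativity for a piecewise linear
    reconstruction given by supporting points t with values f (linear
    interpolation in between). edges are the original interval boundaries,
    amounts the precipitation amount in each original interval."""
    t, f = np.asarray(t, float), np.asarray(f, float)
    # non-negativity at all supporting points
    if not np.all(f >= -tol):
        return False
    # conservation: trapezoidal integral over each original interval
    for k in range(len(edges) - 1):
        mask = (t >= edges[k] - tol) & (t <= edges[k + 1] + tol)
        integral = np.trapz(f[mask], t[mask])
        if abs(integral - amounts[k]) > tol * max(1.0, amounts[k]):
            return False
    # continuity holds by construction for a single-valued set of points
    return True
```

Continuity needs no numerical test here because a single-valued piecewise linear function through shared supporting points is continuous by construction.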

Precipitation rate linearly interpolated using a sub-grid with two
additional points. Colours as in Fig.

It can be noted that in principle a single sub-grid point per time interval would be sufficient. This, however, would result in very high function values and steep slopes of the reconstructed curve, which appears to be less realistic and thus not desirable.
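To illustrate this, consider an isolated event with average rate g over an interval of unit length: conservation alone fixes the peak height. A single central sub-grid point yields a triangle with peak f = 2g, whereas two sub-grid points at the interval thirds yield a trapezoid with the lower peak f = 3g/2. A small sketch of this bookkeeping (our own illustration, not code from the paper):

```python
def peak_height(g, n_subpoints):
    """Peak value of a conservative, piecewise linear reconstruction of an
    isolated event with average rate g on an interval of unit length.

    n_subpoints = 1: triangle (0 -> f -> 0), area = f/2       => f = 2 g
    n_subpoints = 2: trapezoid with kinks at 1/3 and 2/3,
                     area = f * 2/3                           => f = 1.5 g
    """
    if n_subpoints == 1:
        return 2.0 * g
    if n_subpoints == 2:
        return 1.5 * g
    raise ValueError("only 1 or 2 sub-grid points are considered")
```

The lower peak and gentler slopes of the two-point variant are what makes it preferable.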

As we shall see in the next section, closing the algorithm for such isolated
precipitation events is quite straightforward, since the only degree of
freedom constituting the height of the reconstruction function is determined
by the amount of precipitation in the interval. However, the situation
becomes much more involved if longer periods of precipitation occur, i.e.
several consecutive time intervals with positive data. Then, in general, each
sub-grid function value constitutes 1 degree of freedom
(Fig.

Illustration of a reconstruction for longer periods with positive data values, where each sub-grid function value constitutes 1 degree of freedom.

Therefore, in order to close the algorithm, we have to fix all of these
additionally arising degrees of freedom. As a first step we make a choice for
the slope in the central subinterval, which relates the two inner sub-grid
function values. Three possible approaches are discussed for this choice.
Conservation provides a second condition. These two can be considered to
determine the two inner sub-grid points. Then, the function values at the
grid points in between time intervals of positive data are left to be
prescribed, and as each point belongs to two intervals, this corresponds to
the third degree of freedom. The steps leading to the final algorithm (of
which there are some variants) are presented in Sect.

In the following Sect.

Section

The conclusions (Sect.

A widely used form of interpolation is the well-known spline interpolation
consisting of piecewise polynomials, which are typically chosen as cubic ones

The issue of mass-conservative interpolation emerges also in the context of
semi-Lagrangian finite-volume advection schemes, which have become very
popular. These schemes, with a two-dimensional application in mind, are known
under the heading of “remapping”. Eulerian grid cells are mapped to the
respective areas covered in the previous time step, and then the mass in this
area is calculated by a reconstruction function from the available grid-cell
average values

An interesting example of such a semi-Lagrangian conservative remapping is
given by

Considering the differences mentioned between the reconstruction problem arising in the context of semi-Lagrangian advection schemes and that arising in the LPDM FLEXPART, and given that linear interpolation is used in FLEXPART for all other quantities and that the interpolation function has to be evaluated efficiently for up to millions of particles in each time step, we have chosen to construct a non-negative, continuous, and conservative reconstruction algorithm based upon piecewise linear interpolation. Contrary to standard piecewise linear methods, we divide each grid interval into three subintervals, so that our method has some similarity with a piecewise parabolic approach while being simpler and presumably faster.

In accordance with the considerations presented in Sect.

Our aim is to find a piecewise linear function

to be continuous,

to preserve the non-negativity such that

to conserve the precipitation amount within each single time interval

These conditions are also listed in Table

It is evident that the function

Schematic overview of the basic notation in a precipitation interval
with the original precipitation rate

The key requirement for the interpolation algorithm

We first demonstrate the basic idea of the interpolation algorithm for the
simplest case of an isolated precipitation event; i.e. we assume an interval

Isolated precipitation event (no precipitation in

Whereas the derivation of the algorithm for the isolated precipitation event
is straightforward, the problem becomes considerably more involved if
consecutive intervals with non-zero precipitation occur. Treating each
interval as an isolated precipitation event as demonstrated in
Fig.

Therefore, we now consider the case of two consecutive intervals with
non-zero data

In the following, we assume the boundary values

As a means to reflect the actual course of precipitation, a natural first
step is to prescribe the central slope

Other possible approaches for the central slope which have not been selected
would be the following:

Setting

A more advanced, data-driven approach would be to represent the
tendency of the surrounding data values by the centred finite difference

Now, the function values

Equations (

A further possible approach would be to assign

We note that instead of prescribing the function values at the grid points directly, other approaches are possible. Two of them were examined and are discussed in the following paragraphs.

Instead of a function value, we might prescribe an additional slope. We
tested a basic finite difference approach in terms of the involved data
values as well as a symmetric version of it. As this does not preserve
monotonicity, we also derived a global algorithm, where the slope of the
right subinterval in

Another approach would be to formulate the reconstruction problem as an optimisation problem. However, for large data sets this turned out to be much more expensive than the ad hoc methods described before (tested with the MATLAB Optimisation Toolbox). As the final aim is to solve the interpolation problem for large data sets in three dimensions, this approach was not studied further.
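For illustration only (the authors used the MATLAB Optimisation Toolbox, not the code below), such an optimisation problem could be posed with SciPy: minimise a roughness measure of the sub-grid values subject to non-negativity bounds and per-interval conservation constraints. All names and the choice of objective are our own assumptions:

```python
import numpy as np
from scipy.optimize import minimize

def reconstruct_by_optimisation(amounts):
    """Conservative, non-negative piecewise linear reconstruction obtained
    by constrained optimisation (illustrative sketch only).

    amounts : precipitation amount per unit-length grid interval.
    Returns function values at the sub-grid points (two extra points per
    interval, spacing h = 1/3)."""
    n = len(amounts)
    m = 3 * n + 1                    # number of sub-grid supporting points
    h = 1.0 / 3.0

    def roughness(f):                # penalise wiggles (squared 2nd diffs)
        return np.sum(np.diff(f, 2) ** 2)

    cons = []
    for k in range(n):               # trapezoidal integral over interval k
        idx = slice(3 * k, 3 * k + 4)
        cons.append({"type": "eq",
                     "fun": lambda f, idx=idx, a=amounts[k]:
                         np.trapz(f[idx], dx=h) - a})
    f0 = np.append(np.repeat(amounts, 3), amounts[-1])   # crude guess
    res = minimize(roughness, f0, bounds=[(0.0, None)] * m,
                   constraints=cons, method="SLSQP")
    return res.x
```

Even for this toy formulation, the cost of the constrained solver grows quickly with the series length, which is consistent with the efficiency concern stated above.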

The preservation of non-negativity is a challenging requirement, as discussed
above. In the following, we investigate sufficient conditions for the
non-negativity to hold. The algorithm consisting of Eqs. (

We thus return to the geometric-mean method (Eq.
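The appeal of the geometric mean for the grid-point value between two intervals with amounts g_i and g_{i+1} is that it vanishes whenever either neighbour is dry, which enforces continuity towards precipitation-free intervals. A one-line sketch, with a generic cap standing in for the non-negativity bound derived in the text (the exact bound is not reproduced here):

```python
import math

def boundary_value(g_left, g_right, cap=None):
    """Function value at the grid point between two intervals with
    precipitation amounts g_left and g_right: the geometric mean,
    optionally capped by an upper bound ensuring non-negativity
    (the specific bound from the derivation is omitted here)."""
    f = math.sqrt(g_left * g_right)
    return min(f, cap) if cap is not None else f
```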

Results with the IA0 algorithm. The original precipitation rate

Figure

Illustration of the monotonicity filter construction. The original
precipitation rate

In response to this problem, we introduce a monotonicity filter which is
active only in the regions where the graph of

More precisely, given

Illustration of the basic IA0 algorithm in

It is also possible to construct an algorithm which directly incorporates the
idea from the monotonicity filter introduced above. In order to apply the
filter in a single sweep, we need a kind of educated guess for

Overview of the two algorithms IA1 and IA2.

Three interpolation algorithms – IA0, IA1, and IA2 – were developed. They
were introduced on an additional sub-grid based on the geometric mean and
fulfil the conditions to be non-negative, continuous, and area-conserving.
The basic algorithm is called IA0. A monotonicity filter was then introduced
to improve the realism of the reconstructed function. The IA1 algorithm
requires a second sweep through the data, while IA2 has a monotonicity filter
already built into the main algorithm. The equations defining IA1 and IA2 are
listed in Table

We have also carried out a preliminary investigation of the two-dimensional case. In the case of precipitation, this could be used for horizontal interpolation. We follow the same approach and introduce a sub-grid with two additional grid points, now for both directions.

The isolated two-dimensional precipitation event can then easily be represented on the sub-grid as a truncated pyramid. For multiple adjacent cells with non-zero data, however, this type of interpolation is not suitable because of the non-vanishing values at the cell boundaries, for which continuity would be difficult to enforce.

A more advantageous approach is the bilinear interpolation, which defines the
function in a square uniquely through its four corner values. (Note that we
assume that the grid spacing is equal in both directions, without loss of
generality, as this can always be achieved by simple scaling.) The main idea
here is to apply the bilinear interpolation in each of the nine sub-squares.
We recall that for given function values
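For reference, bilinear interpolation on one sub-square is determined uniquely by its four corner values; in unit-square coordinates it reads:

```python
def bilinear(f00, f10, f01, f11, x, y):
    """Bilinear interpolation on the unit square from its four corner
    values f00, f10, f01, f11; in the scheme sketched above it would be
    applied independently in each of the nine sub-squares."""
    return (f00 * (1 - x) * (1 - y) + f10 * x * (1 - y)
            + f01 * (1 - x) * y + f11 * x * y)
```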

The evaluation of the new algorithms IA1 and IA2 was carried out in three
steps. First, the interpolation algorithms were applied to ideal, synthetic
time series to verify the fulfilment of the requirements. Next, they were
validated with ECMWF data. Short sample sections were analysed visually. The
main validation is then based on statistical metrics. The original algorithm
from the ECMWF data extraction for FLEXPART (flex_extract) was also included
in the evaluation. In the following, it is referred to as the Interpolation
algorithm FLEXPART (IFP). This allows us to see and quantify the
improvements through the new algorithms. The IFP is not published, but it is
included in the flex_extract download on the FLEXPART website
(

Verification is the part of evaluation where the algorithm is tested against
the requirements to show whether it is doing what it is supposed to do. These
requirements, mentioned in the previous sections, are classified into strict
requirements (main conditions, stRE) and soft requirements (soRE), as
formulated in Table

Classification of requirements for the interpolation algorithm. They are classified into strict requirements (stRE), which are essential and need to be fulfilled, and soft requirements (soRE), which are desirable but not absolutely necessary.

The synthetic time series for the first tests is specified with 3-hourly
resolution. It consists of four isolated precipitation events, with constant
precipitation rates during the events and durations which increase from one
to four 3 h intervals. As the variation within each 3 h interval is
unknown, it is visualised as a step function. We refer to it as the
synthesised 3-hourly (S3h) time series. Both new algorithms IA1 and IA2 and
the currently used IFP were applied to these data. The IFP produces 3-hourly
disaggregated output which is divided into 1 h segments by the usual linear
interpolation between the supporting points. As all three algorithms are
intended to be used in connection with linear interpolation, they are
visualised by connecting the resulting supporting points with straight lines.
Figure

It is easy to see that IFP violates requirement stRE1 (cf.
Table

Verification of the interpolation algorithms for four simple precipitation events. The 3-hourly synthetic precipitation rate (S3h) is illustrated as a step function in light blue. Reconstructions are shown as linear connections of their respective supporting points, with the current FLEXPART algorithm (IFP) in green, the newly developed algorithm IA1 in orange, and IA2 (also new) in red (dashed–dotted).

Concerning the behaviour of the two newly developed algorithms IA1 and IA2,
we clearly see in Fig.

With respect to soRE1 (monotonicity), we are faced with the overshooting
behaviour already mentioned. In the events lasting three and four intervals,
the new algorithms introduce a local minimum in the centre of the event. As
can readily be seen, it is not possible for the interpolated curve to turn
into a constant value without overshoot. This would either lead to excess
mass in the inner period as seen in the IFP algorithm, or to a lack in the
outermost periods. Obviously, interpolated curves have to overshoot to
compensate for the gradual rise (or fall) near the borders of precipitation
periods. While IA1 accomplishes this within a single 3 h interval, algorithm
IA2 falls off more slowly towards the middle with the consequence of
requiring another interval on each side for compensation. In order to
investigate how these wiggles would develop in an even longer event, a case
with eight constant values, lasting 24 h, was constructed
(Fig.

The symmetry condition (soRE2) is satisfied by IA1 but not by IA2. The
wiggles in the 24 h event (Fig.

Verification of the different behaviour of the interpolation
algorithms for a longer constant precipitation event plotted in

Same as Fig.

In the next step, we extended the verification to a case with more realistic
but still synthetic data (Fig.

This case provides more interesting structures for examining monotonicity than the idealised case with constant precipitation values in each rainy period. For the new algorithms, minor violations of monotonicity can be observed, e.g. around hour 30 and a smaller one after hour 3. They occur when a strong increase of the precipitation rate is followed by a weaker one or vice versa. Thus, they represent a transition to the situation discussed above where overshooting is unavoidable, and it is difficult to judge which overshoot may still be realistic. Subjectively, we would prefer an algorithm less prone to this phenomenon; however, we consider this deviation from soRE1 to be tolerable. The symmetry requirement (soRE2) is not strictly tested here, as no symmetric structure was prescribed as input, but it can be noted that gross asymmetries such as the shifting of peaks to the border of an interval, as seen in IFP, do not occur in IA1 and IA2m.

The reconstructed precipitation curves resulting from algorithms IA1 and IA2m have a more realistic shape (soRE3) than that from IFP. Due to the two additional supporting points per interval, they are able to adapt better to strong variations. Computational efficiency (soRE4) is not tested for this still short test period.

Summarising the verification with synthetic cases, it is confirmed that IA1 and IA2m fulfil all of the strict requirements whereas IFP does not. The soft requirements are fulfilled by the new algorithms with a minor deficiency for the monotonicity condition. Next, they will be validated with real data.

Verification of the different behaviour of the interpolation algorithms for a complex synthetic precipitation time series. S3h is the input data series with 3 h resolution (light blue), IFP is the linearly interpolated curve according to the current scheme in FLEXPART (green), while IA1 (orange) and IA2m (red; dashed–dotted) are the reconstructions using the new algorithms.

The validation with ECMWF data makes use of precipitation data retrieved with 1-hourly and 3-hourly time resolution. The 3-hourly data serve as input to the algorithms, while the 1-hourly data are used to validate the reconstructed 1-hourly precipitation amounts. In this way, the improvement of replacing IFP by one of the new algorithms can be quantified. By using a large set of data, robust results are obtained.

Fields of both large-scale and convective precipitation in the operational
deterministic forecasts were extracted from ECMWF's MARS archive with
0.5

As explained in footnote 2, currently the precipitation extracted on a lat–long grid is a point value. As the global grid includes the poles, there are 361 points per meridian. Nevertheless, as explained, we also use the concept of a cell horizontally, as does FLEXPART.

8761 h including the last hour of 2013). They were extracted as 3-hourly and as 1-hourly fields. ECMWF output distinguishes these two precipitation types, derived from the grid-scale cloud microphysics scheme in the case of large-scale precipitation and from the convection scheme in the case of convective precipitation. Note that parameterised convection is by definition a sub-grid-scale process, while the reported precipitation intensities are averaged over the grid cell. Precipitation data are accumulated from the start of each forecast at 00:00 and 12:00 UTC. We used both these forecasts, so that the forecast lead time is at most 12 h. This is in line with typical data use in FLEXPART. Data were immediately de-accumulated to 1 and 3 h sums (see the “Data availability” section for more details).

Two short periods in January 2014 were selected for visual inspection at a
grid cell with significant precipitation, one dominated by large-scale and
another one by convective precipitation. Convective precipitation occurs less
frequently (cf. Table

Sample periods in January 2014 at the grid cell centred on
48

Similar to the synthetic cases, large discrepancies between the real ECMWF
data and the interpolated data from the IFP algorithm can be found. This is
true especially for the convective precipitation, where frequently the real
peaks are clipped and the mass is instead redistributed to neighbouring time
intervals with lower values, leading to a significant positive bias there.
The function curves of IA1 and IA2m follow the R3h signal and are even able
to capture the tendency of the R1h signal as long as R1h does not have too
much variability within the 3 h intervals of R3h. Again, in the convective
part there is an interval where monotonicity is violated, near 11 January
12:00 UTC. The secondary minimum occurs a bit earlier in the IA1 algorithm
than in the IA2m algorithm, which seems typical for the case of the ascending
graph (vice versa in the descending sections; see also
Fig.

The large-scale precipitation rate time series is smoother and precipitation
events last longer (Fig.

Regarding monotonicity, the large-scale precipitation time series produces a few instances of unsatisfactory monotonic behaviour in the IA1 curve, for example on 14 January at 12:00 UTC or at 21:00 UTC, while the IA2m algorithm avoids the secondary minima in these cases (there are other cases where the behaviour is the other way round; not shown). The double-peak structure in the IA1 and IA2m reconstructions on 15 January between 06:00 and 15:00 UTC is similar to the plateau-like ideal cases where overshooting is unavoidable.

Notwithstanding the minor problems with the monotonicity condition, the reconstructed precipitation curves from IA1 and IA2m are much closer to the real ones than the IFP curve. Therefore, we consider requirement soRE3 as basically fulfilled. This example raises the expectation that the new algorithms will be capable of improving the performance of the FLEXPART model.

Statistical metrics for the global large-scale and convective
precipitation rates (

Root-mean-square error (RMSE,

A statistical evaluation comparing the 1-hourly precipitation reconstructed
by the new IA1 and IA2m algorithms as well as by the old IFP algorithm from
3-hourly input data (R3h) to the reference 1-hourly data (R1h) was carried
out. While the R1h data directly represent the amount of precipitation in the
respective hour, the output of the algorithms represents precipitation rates
at the supporting points of the time axis, and the hourly integrals had to be
calculated, under the assumption of linear interpolation. The data set
comprises the whole year of 2014 and all grid cells on the globe as described
in Sect.
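Under the linear-interpolation assumption, the hourly amounts follow from the rates at the hourly supporting points by the trapezoidal rule; a minimal sketch of this conversion step:

```python
import numpy as np

def hourly_amounts(rates):
    """Convert precipitation rates at hourly supporting points into hourly
    amounts, assuming linear interpolation between the points (dt = 1 h):
    each hourly integral is the trapezoid mean of its two endpoint rates."""
    rates = np.asarray(rates, dtype=float)
    return 0.5 * (rates[:-1] + rates[1:])
```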

A set of basic metrics is presented in Table

The root mean square error (RMSE), the normalised root mean square error
(NMSE), and the correlation coefficient (
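These metrics can be computed straightforwardly; note that the normalisation of the NMSE below (MSE divided by the product of the means, a common convention) is our assumption and may differ from the exact definition used in the paper:

```python
import numpy as np

def metrics(pred, obs):
    """RMSE, NMSE, and Pearson correlation between a reconstructed (pred)
    and a reference (obs) series. The NMSE normalisation shown here is one
    common convention, not necessarily the paper's exact definition."""
    pred, obs = np.asarray(pred, float), np.asarray(obs, float)
    mse = np.mean((pred - obs) ** 2)
    rmse = np.sqrt(mse)
    nmse = mse / (np.mean(pred) * np.mean(obs))
    r = np.corrcoef(pred, obs)[0, 1]
    return rmse, nmse, r
```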

Another aspect is the ability of the algorithms to conserve the ratio of dry
and wet intervals (Table

Frequencies of dry (

Finally, two-dimensional histograms (relative frequency distributions) are
provided for a more detailed insight into the relationship between the
reconstructed and the true R1h values (Fig.
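Such a histogram can be produced, for example, with NumPy (the bin layout here is our own choice; display on a logarithmic colour scale is advisable because the frequencies span many orders of magnitude):

```python
import numpy as np

def relative_frequency_histogram(recon, ref, bins=50, vmax=None):
    """Two-dimensional histogram of reconstructed versus reference hourly
    precipitation as relative frequencies. Returns the normalised counts
    and the bin edges for both axes."""
    vmax = vmax if vmax is not None else max(np.max(recon), np.max(ref))
    edges = np.linspace(0.0, vmax, bins + 1)
    counts, xe, ye = np.histogram2d(ref, recon, bins=[edges, edges])
    return counts / counts.sum(), xe, ye   # relative frequencies
```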

Two-dimensional histogram (relative frequencies) showing the
relationship between the hourly precipitation reconstructed by

The distributions are clearly asymmetric with respect to the diagonal,
especially for the convective precipitation. One has to be careful in the
interpretation, however, because most cases are concentrated in the lower
left corner (log scale for the frequencies, spanning many orders of
magnitude). Thus, at least for the high values, more points fall below the
diagonal, indicating more frequent underprediction. This might be due to the
short duration of peaks with the highest intensity. For both precipitation
types, but especially for convective precipitation, an overestimation of very
low intensities is noticeable. Zooming in, the first R1h bin for the
convective precipitation shows enhanced values corresponding to the bias
towards wet cases in Table

Potential applications for the new algorithm include situations where
computational performance is relevant. For the precipitation (and possibly
other input data; see Sect.

During the evaluation process, a computationally more efficient version of the IA1 algorithm was developed. It applies the monotonicity filter within one sweep through the time series (the filter trailing behind the reconstruction) rather than processing the series twice. The algorithmic equations are unchanged. We refer to this version as the speed-optimised Interpolation Algorithm 1 (IA1s). It was verified that its results do not differ from those of the standard IA1.

The wall clock time for the application of each of the algorithms to the
1-year global test data set is listed in Table

Computing time (wall clock) for the processing of 1 year of global data (the ECMWF test data used in this paper) with the old IFP algorithm and the new IA1, IA1s, and IA2m algorithms, on a Linux server with an Intel(R) Xeon(R) E5-2690 @ 2.90 GHz CPU, single thread. The fastest algorithm is marked in bold.

We have provided a one-dimensional, conservative, and positive definite reconstruction algorithm suitable for the interpolation of a gridded function whose grid values represent integrals over the grid cell, such as precipitation output from numerical models. The approach is based on a piecewise linear function with two additional supporting points within each grid cell, dividing the interval into three pieces.

This approach has three degrees of freedom, similar to a piecewise parabolic polynomial. They are fixed through the mass conservation condition, the slope of the central interval which is taken as the average of the slopes of the two outer subintervals, and the left and right border grid points (each counting as half a degree of freedom). For the latter, the geometric mean value of the bordering integral values is chosen. Its main advantage is that the function values vanish automatically if one of the involved values is zero, which is a necessary condition for continuity. However, the geometric mean in general converges too slowly with vanishing values to prevent negative values under all conditions. This led us further to derive a sufficient condition for non-negativity and restrict the function values accordingly by these upper bounds.

This non-negative geometric-mean-based algorithm, however, still violates monotonicity. Therefore, we further introduced a (conservative) filter for regions where the remapping function takes an M- or W-like shape, requiring a second run through the data (IA1). Alternatively, the filter can be applied during the first sweep immediately after the construction of the next interval (IA1s). We also showed how this basic idea of the monotonicity filter can be directly incorporated into the construction of an algorithm (IA2). As in this case the algorithm is not symmetric, we apply it a second time in the other direction and average the results (IA2m).

The evaluation, consisting of verification and validation, confirmed the advantages of the new algorithms IA1 (including IA1s) and IA2m. After the verification of our requirements, each evaluation step revealed a significant improvement by the new algorithms as compared to the algorithm currently used for the FLEXPART model. Nevertheless, the soft requirement of monotonicity has not been fulfilled perfectly, but the deviation is considered to be acceptable. The modified version of IA1 (IA1s) yields identical results to IA1 and is quite fast. However, the results of the quantitative statistical validation would slightly favour the IA2m algorithm, whose computational performance is still acceptable, even though in this modified form the original IA2 algorithm is applied twice.

The next steps will be the integration of the method into the preprocessing of the meteorological input data for the FLEXPART Lagrangian dispersion model and the model itself for the temporal interpolation of precipitation. The application to two dimensions, intended for spatial interpolation, is also under investigation. Options include the straightforward operator-splitting approach as well as an extension based on bilinear interpolation with additional supporting points. As the monotonicity filter appears to be not yet perfect, this may also be revisited.

It may be noted that there is a wide range of useful applications of such conservative reconstructions. Interestingly, at least in the geoscientific modelling community, they have largely remained restricted to the specific problem of semi-Lagrangian advection schemes. Therefore, we sketch out more possible use cases below.

Other extensive quantities used in typical LPDMs, apart from precipitation, are the surface fluxes of heat and momentum, which enter the boundary-layer parameter calculations and could be treated similarly, especially for temporal interpolation. The often-used 3-hourly input interval is quite coarse and may, for example, clip the peak values of the turbulent heat flux.

In many applications, output is required for single points representing
measurement stations or, in the case of backward runs

This latter example could be easily extended to all kinds of model output
postprocessing, where currently methods that are too simple often prevail. It
should be clear that applying naive bilinear interpolation to gridded output
of precipitation and other extensive quantities, including fluxes, introduces
systematic errors as highlighted in Sect.

Finally, this also includes contouring software. Contouring involves interpolation between neighbouring supporting points to determine where a contour line intersects the cell boundaries. It is obvious that linear interpolation is inadequate for extensive quantities whose values represent grid averages. This holds in particular for precipitation, energy fluxes, and trace species concentrations. While we cannot expect the many contouring packages to be rewritten with an option for conservative interpolation, our method (once extended to two dimensions) offers an easy workaround: a preprocessing step yields an auxiliary grid with triple (one-dimensional) resolution, which can then be interpolated linearly without violating mass conservation and thus be used with standard contouring software.
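In one dimension, this auxiliary grid is simply the full set of sub-grid supporting points; a sketch of assembling the triple-resolution time axis from the original interval boundaries (the function name is our own):

```python
import numpy as np

def auxiliary_grid(t_edges):
    """Auxiliary axis with triple resolution: the original interval
    boundaries plus the two additional supporting points at the interval
    thirds, as used by the reconstruction."""
    t_edges = np.asarray(t_edges, dtype=float)
    pts = [t_edges[0]]
    for a, b in zip(t_edges[:-1], t_edges[1:]):
        pts += [a + (b - a) / 3.0, a + 2.0 * (b - a) / 3.0, b]
    return np.array(pts)
```

Linear interpolation between these points then reproduces the conservative reconstruction exactly, so standard (linearly interpolating) contouring software can consume the auxiliary grid directly.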

The piecewise linear reconstruction routines IA1, IA2, and
IA1s are written in Python 2. The code is included in the Supplement and is
licensed under the Creative Commons Attribution 4.0 International License.
For IA2m, IA2 has to be called with the original and the reversed time series,
and the results have to be averaged. The software for the statistical
evaluation (written in Python 2) is available on request from the
second author, A. Philipp (anne.philipp@univie.ac.at). It relies on the NumPy

The precipitation data used for evaluating the
interpolation algorithms were extracted from ECMWF through MARS retrievals

We show in the following that the monotonicity filter as introduced in
Sect.

The M-shape: this corresponds to the case

The W-shape: this corresponds to the case

The supplement related to this article is available online at:

PS formulated the problem in Sect.

The authors declare that they have no conflict of interest.

Sabine Hittmeir thanks the Austrian Science Fund (FWF) for the support via the Hertha Firnberg project T-764-N32, which also finances the open-access publication. We thank the Austrian Meteorological Service ZAMG for access to the ECMWF data. The second and third authors are grateful to Christoph Erath (then Department of Mathematics, University of Vienna; now at TU Darmstadt, Germany) for an early-stage discussion of the problem and useful suggestions about literature from the semi-Lagrangian advection community. We also thank the reviewers for careful reading, thus helping to improve the paper.

Edited by: Simon Unterstrasser
Reviewed by: Wayne Angevine and one anonymous referee