Strategies for Conservative and Non-Conservative Monotone Remapping on the Sphere
- 1Department of Mathematics, University of California-Davis, Davis, CA 95616
- 2Department of Land, Air and Water Resources, University of California-Davis, Davis, CA 95616
Abstract. Monotonicity is an important property of remapping operators for coupled weather and climate models. However, it is often challenging to design highly accurate operators that avoid the generation of new extrema or keep a remapped field between physically prescribed bounds. To that end, this paper explores several traditional and novel approaches for both conservative and non-conservative monotone remapping on the sphere. The accuracy and effectiveness of these algorithms are evaluated in the context of several different real and idealized fields and meshes.
David H. Marsico and Paul A. Ullrich
Status: final response (author comments only)
-
RC1: 'Comment on gmd-2022-248', Colin Cotter, 11 Nov 2022
This paper provides a useful evaluation of remapping schemes between grids on the sphere, which is important for model intercomparison.
A few questions and comments:
1. The largest errors occur when interpolating to the lat-long grid. Do we see particularly large errors at any point on the grid (e.g. poles or equator).
2. I think that it might also be relevant to mention conservative mass fixers developed for semi-Lagrangian schemes, such as the variants of Zerroukat, which make local corrections to fix both conservation and bounds.
3. Please provide some more information about how R is constructed for each combination of grids.
4. I'm confused by the reference of supermeshing in the nonconservative section - this technique is introduced usually to ensure conservation.
-
AC1: 'Reply on RC1', David Marsico, 20 Nov 2022
1. “The largest errors occur when interpolating to the lat-long grid. Do we see particularly large errors at any point on the grid (e.g. poles or equator).”
In general, the errors are not the largest at the equator or poles for latitude-longitude meshes. They are largest in regions of high curvature, where we would expect there to be overshoots or undershoots. See the attached plot for the errors on a lat/lon grid for two of the test cases.
2. “I think that it might also be relevant to mention conservative mass fixers developed for semi-Lagrangian schemes, such as the variants of Zerroukat, which make local corrections to fix both conservation and bounds.”
The CAAS algorithm is similar to the one described in the paper “A simple mass conserving semi-Lagrangian scheme for transport problems” and we will include a mention of it in our revised manuscript.
3. “Please provide some more information about how R is constructed for each combination of grids.”
For the conservative remapping, R is constructed according to a two-stage procedure for finite-volume meshes. First, a fitting procedure is used to construct a local nth-degree polynomial. This polynomial is then integrated over each overlap face that intersects a given target face (details can be found in the papers “Arbitrary-Order Conservative and Consistent Remapping and a Theory of Linear Maps: Parts I/II”). CAAS is then applied as a post-processing operation once R has been applied to the source field.
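For readers unfamiliar with the CAAS post-processing step, it can be sketched roughly as follows. This is a minimal illustrative sketch, not the paper's implementation: the function name `caas`, its arguments, and the proportional redistribution shown here are our own illustrative choices, assuming cell weights `w` (e.g. face areas) and elementwise bounds `lo`/`hi`.

```python
import numpy as np

def caas(x, lo, hi, w):
    """Sketch of Clip-And-Assured-Sum: clip x to [lo, hi] elementwise,
    then restore the weighted mass sum(w*x) by distributing the
    discrepancy in proportion to each cell's remaining capacity.
    Illustrative only; not the paper's actual code."""
    m_target = np.dot(w, x)            # mass to preserve
    y = np.clip(x, lo, hi)             # enforce bounds
    dm = m_target - np.dot(w, y)       # mass lost/gained by clipping
    if dm > 0:                         # need to add mass back
        cap = w * (hi - y)             # per-cell room below upper bound
    else:                              # need to remove mass
        cap = w * (y - lo)             # per-cell room above lower bound
    total = cap.sum()
    if total > 0:
        # each cell absorbs a share of dm proportional to its capacity,
        # so bounds are respected and the weighted sum is restored
        y += np.sign(dm) * (cap / w) * (abs(dm) / total)
    return y
```

Applied to a remapped field with overshoots, this returns a field inside the bounds with the same weighted mass, provided the bounds leave enough capacity to absorb the clipped mass.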
For the non-conservative remapping, the entries of each row of R are determined by approximating each value on the target mesh as a weighted sum of values on the source mesh, as in equation (19). We consider both the non-integrated and integrated versions.
For the non-conservative non-integrated remapping, the non-zero entries of each row of R correspond to the faces on the source mesh whose centers form the nodes of a polygon that contains a given point on the target mesh. The values are then determined by whatever weights we’re using, i.e. bilinear, generalized barycentric, or Delaunay triangulation interpolation.
For the non-conservative integrated remapping, R is constructed by way of the overlap mesh. By using the overlap mesh, we ensure that every face on the source mesh is sampled. Specifically, we approximate the value of the field on each target mesh face by integrating the source field over that target face by applying numerical quadrature to each intersecting overlap face (see equations (27) and (28) for this written out fully).
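The per-face integration described above can be sketched as follows, assuming a triangulated overlap mesh and a reference-triangle quadrature rule. The names `integrated_value`, `overlaps`, and `sample_src` are illustrative placeholders, not names from the actual code.

```python
import numpy as np

def integrated_value(target_face, overlaps, sample_src, quad_pts, quad_wts):
    """Sketch of the integrated weighting: the value on a target face is
    the area-normalized integral of the source field, computed by
    quadrature over each (triangulated) overlap face intersecting the
    target face. `overlaps[target_face]` yields (triangle vertices, area)
    pairs; `sample_src` evaluates the source reconstruction at a point;
    `quad_wts` are reference-triangle weights summing to 1."""
    total, area = 0.0, 0.0
    for tri, a in overlaps[target_face]:
        for (u, v), w in zip(quad_pts, quad_wts):
            # map the reference quadrature point into this triangle
            p = tri[0] + u * (tri[1] - tri[0]) + v * (tri[2] - tri[0])
            total += w * a * sample_src(p)
        area += a
    return total / area
```

With the one-point centroid rule (`quad_pts = [(1/3, 1/3)]`, `quad_wts = [1.0]`), this reproduces face averages of linear fields exactly.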
4. “I'm confused by the reference of supermeshing in the nonconservative section - this technique is introduced usually to ensure conservation.”
While the overlap mesh is generally used to ensure conservation, we have adapted it to be used in a non-conservative context. For non-conservative remapping, the overlap mesh provides a systematic way of ensuring that every point on the source mesh is sampled, because the overlap mesh consists of all faces common to both the source and target meshes. The sample points used are obtained by triangulating the supermesh faces.
-
RC2: 'Comment on gmd-2022-248', Robert Oehmke, 18 Nov 2022
+ General comments:
This was an interesting paper and I thought that overall the ideas were good and potentially useful. (There are definitely some things in this paper that I'm interested in trying in our code.) However, I thought that some of the sections could be expanded a bit to make the overall algorithm clearer (e.g. 4.1.2). The one thing that I saw that should be changed is under "Requested minor revisions" below. The comments below that in the "Questions and comments" section are just suggestions.
+ Requested minor revisions:
- The one thing that I saw that should probably be changed is that you make a general statement about how it was shown that the integrated versions are capable of maintaining accuracy across arbitrary source mesh resolutions (line 318). However, in section 4.4 test 2 you only show the bilinear results for integrated vs. non-integrated for a source refined beyond the resolution of the target mesh. Given this, I think that you should either add graphs for the other 2 non-conservative methods in that second test or just mention the bilinear in that conclusion sentence. It could be that I'm misunderstanding what's being shown in that section (4.4.), if so a bit more explanation in there about why just bilinear is being shown in the second test would be useful. Also, I think "arbitrary" is a bit strong for that sentence, maybe something like "wide range" would be better to describe what you show.
+ Questions and comments:
- Line 26: I wondered if you meant "non-conservative" at the end of this line, since you talk about conservative in the next part.
- Line 191: ESMF also supports regridding where the data values are on the nodes, so dual conversion isn't always necessary.
- Section 4.1.1. It would be useful to have a diagram showing how this algorithm works (e.g. with the 6 panels on the sphere showing a coarse triangulation and destination point.)
- Line 220: What happens if a set of source point spans two panels? (e.g. do you have an overlap region so that a destination point can't land between two panels)
- Line 221: You could add a sentence or two about how you find the triangle that contains the point (e.g. do you just loop or is there a search structure involved)
- Section 4.1.2: I thought that the broader algorithm could be fleshed out a bit more so that the description was at a similar level to other sections. Even a few sentences describing how you find the polygon that would contain the point (or a pointer if you're doing it the same as in another section)
- Line 245: Does this scheme for calculating the weights work if the polygon is concave?
- Line 318 it says " second accuracy" should it be "second order accuracy"?
-
AC2: 'Reply on RC2', David Marsico, 23 Nov 2022
- The one thing that I saw that should probably be changed is that you make a general statement about how it was shown that the integrated versions are capable of maintaining accuracy across arbitrary source mesh resolutions (line 318). However, in section 4.4 test 2 you only show the bilinear results for integrated vs. non-integrated for a source refined beyond the resolution of the target mesh. Given this, I think that you should either add graphs for the other 2 non-conservative methods in that second test or just mention the bilinear in that conclusion sentence. It could be that I'm misunderstanding what's being shown in that section (4.4.), if so a bit more explanation in there about why just bilinear is being shown in the second test would be useful. Also, I think "arbitrary" is a bit strong for that sentence, maybe something like "wide range" would be better to describe what you show.
All of the integrated versions of the schemes are capable of maintaining second-order accuracy when the source mesh resolution is refined significantly beyond that of the target mesh, not just the bilinear one. Thank you for pointing out a potential source of confusion; we’ll add a clarifying sentence or a graph showing the results for all of the integrated schemes. The reason we only included the bilinear results is that the figures look nearly identical in all cases, and we thought it would be sufficient to show one and then mention that the other schemes are similar. For example, we’ve included a figure that shows a comparison of the error for the integrated and non-integrated Delaunay triangulation weighting (in the figure “Delaunay_IntegratedDelaunay_error”). Note how similar it looks to Figure 12. We can provide the raw error norms for these tests if necessary.
+ Questions and comments:
- Line 26: I wondered if you meant "non-conservative" at the end of this line, since you talk about conservative in the next part.
We do mean conservative here. The next sentence, the one that begins with “In the conservative case…”, is meant to describe the way monotonicity in the conservative case has been achieved in other contexts, i.e. through limiters. In the sentence after that, we describe the specific way we achieve monotonicity, i.e. through the CAAS algorithm.
- Line 191: ESMF also supports regridding where the data values are on the nodes, so dual conversion isn't always necessary.
Thank you for letting us know. We’ll be sure to mention this in a revised manuscript.
- Section 4.1.1. It would be useful to have a diagram showing how this algorithm works (e.g. with the 6 panels on the sphere showing a coarse triangulation and destination point.)
We’ve attached a file with a potential figure to be included in the paper as an illustration of the algorithm (the file called “Delaunay_picture.pdf”). Is this what you had in mind? The figure shows a simple representation of how the source faces on each panel are projected onto a plane and then triangulated.
- Line 220: What happens if a set of source point spans two panels? (e.g. do you have an overlap region so that a destination point can't land between two panels)
Yes, we have such a region of about 10 degrees on each side of the panel to prevent a destination point from landing between two panels.
- Line 221: You could add a sentence or two about how you find the triangle that contains the point (e.g. do you just loop or is there a search structure involved)
We use a kd-tree to accomplish this. First, we use the tree to find the face whose center is nearest the target point. If the target point is in this source face, we stop. Otherwise we search through neighboring source faces until we find one that contains the target point. This algorithm works efficiently in practice.
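The search just described can be sketched roughly as follows. This is an illustrative sketch only: a brute-force nearest-center lookup stands in for the kd-tree (in practice a structure such as scipy.spatial.cKDTree would be used), and `neighbors` and `contains` are placeholders for the mesh adjacency structure and point-in-face test.

```python
import numpy as np

def locate_face(target_pt, centers, neighbors, contains):
    """Find the source face containing target_pt: start from the face
    whose center is nearest the point, then walk outward through
    neighboring faces until one contains it. `neighbors` maps a face
    index to adjacent face indices; `contains(f, p)` is a
    point-in-face predicate. Both are illustrative placeholders."""
    start = int(np.argmin(np.linalg.norm(centers - target_pt, axis=1)))
    visited, frontier = {start}, [start]
    while frontier:
        f = frontier.pop(0)            # breadth-first neighbor walk
        if contains(f, target_pt):
            return f
        for g in neighbors[f]:
            if g not in visited:
                visited.add(g)
                frontier.append(g)
    return None                        # point not inside any face
```

Since the nearest face center usually belongs to the containing face or one of its immediate neighbors, the walk terminates after only a few point-in-face tests in practice.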
- Section 4.1.2: I thought that the broader algorithm could be fleshed out a bit more so that the description was at a similar level to other sections. Even a few sentences describing how you find the polygon that would contain the point (or a pointer if you're doing it the same as in another section)
Thank you for the suggestion. We will add more of a description, and perhaps another figure to provide an intuitive understanding of these weights (similar to Figure 8). And yes, the way we find a polygon that contains a given point is the same for each method (see the previous comment for a description). We’ll be sure to point this out.
- Line 245: Does this scheme for calculating the weights work if the polygon is concave?
The scheme will not necessarily work for concave faces. However, it can be adapted by triangulating the face and then applying the weighting formula within each sub-triangle.
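That adaptation can be sketched as follows, assuming a triangulation of the concave face is already available (e.g. from ear clipping). The function `concave_interp` and its arguments are illustrative names, not from the paper's code.

```python
import numpy as np

def concave_interp(p, verts, tris, values):
    """Interpolate at p inside a (possibly concave) polygon: locate the
    sub-triangle of the supplied triangulation `tris` (index triples
    into `verts`) that contains p, then apply ordinary barycentric
    weights within it. Illustrative sketch only."""
    for i, j, k in tris:
        a, b, c = verts[i], verts[j], verts[k]
        T = np.column_stack((b - a, c - a))     # 2x2 edge matrix
        l2, l3 = np.linalg.solve(T, p - a)      # barycentric coords
        l1 = 1.0 - l2 - l3
        if min(l1, l2, l3) >= -1e-12:           # p lies in this triangle
            return l1 * values[i] + l2 * values[j] + l3 * values[k]
    raise ValueError("point not inside the polygon")
```

Because the barycentric weights within each sub-triangle are non-negative and sum to one, the combined scheme remains linear-exact and monotone in the sampled vertex values.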
- Line 318 it says " second accuracy" should it be "second order accuracy"?
Thank you for pointing out this typo.
-
RC4: 'Reply on AC2', Robert Oehmke, 29 Nov 2022
Thanks for the response to my comments.
Looking at the graph you sent, I can see what you mean about the similarity between the bilinear and the Delaunay. If the other looks like that as well, I think that just adding a sentence or two indicating that the rest are very similar to the bilinear (or just putting in the graphs if that seems better to you) would answer my comment.
The picture illustrating the Delaunay does look like what I was thinking of. (I just thought a high level visual would help illustrate what's happening.)
All the rest sounds good.
Looking forward to seeing the revised paper.
-
RC3: 'Comment on gmd-2022-248', Anonymous Referee #3, 19 Nov 2022
This study was an inspiring and interesting paper on remapping. We wanted to experiment with some of the ideas from this paper on our system. In the process, I thought it would be nice if there were additional explanations to help understand them.
1. Many readers of this paper, including myself, will want to compare which remapping method is best from a monotonicity perspective. The figures need to be improved to help them understand intuitively. For example, it is proposed to unify the axes (the y-axis in Figures 1-4, 9-11, and 12) and color ranges (Figures 5 and 7) for comparison between figures. In addition, in order to distinguish the dense lines in Figs. 9-11, more distinct markers should be used, or a table containing the actual error values should be included (suggested as supplementary material).
2. In the remapping result of the vortex case (Fig. 5), I would like to see a comment on the reason why a very conspicuous irregular pattern occurs around (0˚E, 90˚N). In particular, I wonder why these errors are weakened (Fig 5b) when a local bound is applied.
3. It could be outside the scope of this study, but I am curious about efficiency, another desirable property of the remapping operator. I wonder how long each of the remapping methods used in this study takes to calculate. Also, if possible, I would like to hear whether there is a dependency on the data type of the variable.
- Additional minor typos.
L51: In the second part we, we show >> In the second part, we show
L61: a a set of discrete nodes >> a set of ...
-
AC3: 'Reply on RC3', David Marsico, 23 Nov 2022
1. Many readers of this paper, including myself, will want to compare which remapping method is best from a monotonicity perspective. The figures need to be improved to help them understand intuitively. For example, it is proposed to unify the axes (the y-axis in Figures 1-4, 9-11, and 12) and color ranges (Figures 5 and 7) for comparison between figures. In addition, in order to distinguish the dense lines in Figs. 9-11, more distinct markers should be used, or a table containing the actual error values should be included (suggested as supplementary material).
Thank you for the suggestion. I’ve attached an updated version of Figure 3, but with a consistent y-axis for each subfigure. Is this what you had in mind? The figure is called “FvtoFV_Errors.pdf”. We can also include the actual error norms for the figures 9-11 as a supplement.
Can you clarify what you mean by the color ranges for figures 5 and 7? Figures 5 and 7 use the same color schemes and are bounded between 0 and 1 (except figure 5a, in order to show the overshoots and undershoots).
2. In the remapping result of the vortex case (Fig. 5), I would like to see a comment on the reason why a very conspicuous irregular pattern occurs around (0˚E, 90˚N). In particular, I wonder why these errors are weakened (Fig 5b) when a local bound is applied.
Yes, there are significant errors in the remapped field when CAAS is not applied. This is because high-order methods lead to overshoots and undershoots of the global bounds. Applying CAAS with local bounds preservation will prevent these overshoots/undershoots because the remapped fields are constrained to lie between 0 and 1.
3. It could be outside the scope of this study, but I am curious about efficiency, another desirable property of the remapping operator. I wonder how long each of the remapping methods used in this study takes to calculate. Also, if possible, I would like to hear whether there is a dependency on the data type of the variable.
The non-integrated schemes are significantly faster than their integrated counterparts. This is because the non-integrated schemes do not rely on the overlap mesh, but the integrated versions do. The overlap mesh can be very large, and it therefore takes much longer to iterate over.
It took approximately half an hour to generate the offline maps for the integrated schemes for a cubed sphere source mesh with 1,382,400 faces and for a target mesh with 16,200 faces. It never takes more than a few minutes to generate the maps for the non-integrated schemes. There does not seem to be a dependency on the data-type.
- Additional minor typos.
L51: In the second part we, we show >> In the second part, we show
L61: a a set of discrete nodes >> a set of ...
Thank you for pointing these out.
-
RC5: 'Reply on AC3', Anonymous Referee #3, 04 Dec 2022
>> Thanks for the response.
1. Many readers of this paper, including myself, will want to compare which remapping method is best from a monotonicity perspective. The figures need to be improved to help them understand intuitively. For example, it is proposed to unify the axes (the y-axis in Figures 1-4, 9-11, and 12) and color ranges (Figures 5 and 7) for comparison between figures. In addition, in order to distinguish the dense lines in Figs. 9-11, more distinct markers should be used, or a table containing the actual error values should be included (suggested as supplementary material).
Thank you for the suggestion. I’ve attached an updated version of Figure 3, but with a consistent y-axis for each subfigure. Is this what you had in mind? The figure is called “FvtoFV_Errors.pdf”. We can also include the actual error norms for the figures 9-11 as a supplement.
Can you clarify what you mean by the color ranges for figures 5 and 7? Figures 5 and 7 use the same color schemes and are bounded between 0 and 1 (except figure 5a, in order to show the overshoots and undershoots).
>> It is hard to tell that the color scheme in figure 5a is unbounded on purpose. It would be nice if there was a mention of this in the text or caption. The modified figure reflects the reviewer's intention well.
2. In the remapping result of the vortex case (Fig. 5), I would like to see a comment on the reason why a very conspicuous irregular pattern occurs around (0˚E, 90˚N). In particular, I wonder why these errors are weakened (Fig 5b) when a local bound is applied.
Yes, there are significant errors in the remapped field when CAAS is not applied. This is because high-order methods lead to overshoots and undershoots of the global bounds. Applying CAAS with local bounds preservation will prevent these overshoots/undershoots because the remapped fields are constrained to lie between 0 and 1.
>> I'm still wondering why the patterns that are removed from the CAAS local bound (fig 5b) are not removed when using the local-p bound (fig 5c).
3. It could be outside the scope of this study, but I am curious about efficiency, another desirable property of the remapping operator. I wonder how long each of the remapping methods used in this study takes to calculate. Also, if possible, I would like to hear whether there is a dependency on the data type of the variable.
The non-integrated schemes are significantly faster than their integrated counterparts. This is because the non-integrated schemes do not rely on the overlap mesh, but the integrated versions do. The overlap mesh can be very large, and it therefore takes much longer to iterate over.
It took approximately half an hour to generate the offline maps for the integrated schemes for a cubed sphere source mesh with 1,382,400 faces and for a target mesh with 16,200 faces. It never takes more than a few minutes to generate the maps for the non-integrated schemes. There does not seem to be a dependency on the data-type.
>> I think this answer will help me a lot. Thank you.
-
AC4: 'Reply on RC5', David Marsico, 16 Dec 2022
>> It is hard to tell that the color scheme in figure 5a is unbounded on purpose. It would be nice if there was a mention of this in the text or caption. The modified figure reflects the reviewer's intention well.
Thank you for the suggestion. We will unify the colorbars for all 4 subplots in figure 5a.
>> I'm still wondering why the patterns that are removed from the CAAS local bound (fig 5b) are not removed when using the local-p bound (fig 5c).
CAAS with local bounds enforces a much tighter form of bound preservation than CAAS with local-p bounds, which is why the oscillatory patterns appear in 5c but not 5b.
-
Model code and software
tempestremap: v2.1.6 Ullrich, P. A., Vijay Mahadevan, Rajeev Jain, Mark Taylor, David Hall, Iulian Grindeanu, Walter Hannah, David Marsico, Matthew Thompson, Andrew Bradley, Jake Bolewski, Jason Sarich, Simon Byrne https://doi.org/10.5281/zenodo.7121451