Fast Barnes Interpolation
 Federal Office of Meteorology and Climatology MeteoSwiss, Zurich, Switzerland
Abstract. Barnes interpolation is a method that is widely used in geospatial sciences like meteorology to remodel data values recorded at irregularly distributed points into a representative analytical field. When implemented naively, the computational complexity of Barnes interpolation depends directly on both the number of sample points and the number of grid points. In the era of highly resolved grids and overwhelming numbers of sample points, which originate e.g. from the Internet of Things or from crowdsourced data, this computation can be quite demanding even on high-performance machines.
This paper presents new approaches for implementing very good approximations of Barnes interpolation with fast algorithms. Two use cases in particular are considered, namely (1) where the grid is embedded in the Euclidean plane and (2) where the grid is located on the unit sphere.
Bruno K. Zürcher
Status: open (extended)

RC1: 'Comment on gmd-2022-116', Anonymous Referee #1, 31 Aug 2022
It is a good paper. The idea of approximating Gaussian weights using the central limit theorem (CLT) is very good. Approximation 3 is a happy result, mainly because of the cancellation of the common factors in the numerator and the denominator. The algorithms for implementation are also nice and thorough. It is surprising to read how much the computing time is reduced compared to a naive Barnes interpolation. Nevertheless, I have some comments and questions, basically for a better presentation.
1. It uses a special kind of uniform random variable on an interval with a threshold. Hope the author describes why this threshold, sqrt(3/n)*sigma, was chosen.
2. Page 3, line 54: hope the author provides the definition of the n-fold convolution of p(x) with itself.
3. Page 5, line 92: hope the author explains more concretely what *_x^n and *_y^n are, because some readers may only understand them after reading Algorithms 2, 3, and 4.
4. In Section 5, the author applies a 4-fold convolution. It is known that a summation of uniform random variables converges slowly to the CLT limit compared to other unimodal random variables, even though convergence in distribution and convergence in pdf are different. Thus I think only 4-fold is too small. I would recommend applying at least 10-fold convolutions in real applications to ensure a good approximation.
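To put this convergence question in numerical terms, a small sketch (my own illustration, not taken from the manuscript; the kernel half-width T = 4 is an arbitrary choice) comparing the n-fold self-convolution of a discrete uniform kernel with a Gaussian of matching variance:

```python
import numpy as np

def nfold_uniform_kernel(T, n):
    """n-fold self-convolution of a discrete uniform kernel of half-width T."""
    box = np.ones(2 * T + 1) / (2 * T + 1)
    kernel = box
    for _ in range(n - 1):
        kernel = np.convolve(kernel, box)
    return kernel

def max_gauss_error(T, n):
    """Max absolute deviation from a Gaussian kernel of matching variance."""
    kernel = nfold_uniform_kernel(T, n)
    # Variance of the discrete uniform on {-T,...,T} is T*(T+1)/3;
    # the n-fold convolution (sum of n iid variables) has n times that.
    sigma2 = n * T * (T + 1) / 3
    x = np.arange(len(kernel)) - (len(kernel) - 1) / 2
    gauss = np.exp(-x ** 2 / (2 * sigma2))
    gauss /= gauss.sum()
    return np.abs(kernel - gauss).max()
```

For T = 4, the deviation does shrink as n grows, but n = 4 already lands within a few permille of the Gaussian peak value, which is consistent with the visually stable isolines reported below.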
5. The result in this paper is only a nice approximation to Barnes interpolation. I think the title could be changed to 'Fast Approximation to Barnes Interpolation'.
6. It uses a special kind of uniform random variable and a numerical integration for the convolution in Algorithm 2. It is still good, but I guess one may consider other random variables which may accelerate the convergence to the CLT limit. Then the numerical integration may be a little more complex than that in this paper. This may increase the computing time in Algorithm 2, but may be alright with a smaller n.
7. In Algorithms 2 and 3, I was a bit confused by the notations g, r_T, and F[i,j]. Hope the author kindly gives short descriptions of these notations to improve readers' understanding.
8. Finally, I think that the idea of approximating Gaussian weights using the CLT is very good, and so it can be applied to other areas of numerical computation which use Gaussian weights. Hope the author searches the literature to see if anybody has already published or applied this idea.

AC1: 'Reply on RC1', Bruno Zürcher, 08 Oct 2022
Dear Referee,
Thank you very much for reading the manuscript carefully and for your valuable comments. Below I give a first response to these comments, point after point.
1. Page 3, line 65: The calculation of why the interval has half-width sqrt(3/n)*sigma is indeed not obvious. It is basically chosen in this way such that the variance of the resulting uniform distribution u_n(x) is sigma^2/n.
I will add a short comment about that to the manuscript, together with a calculation that verifies this.
2. Page 3, line 54: I propose to provide the definition of the n-fold convolution of p(x) with itself in an appendix to the manuscript.
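The variance check for point 1 can be written out as follows (assuming the interval is [-a, a] with half-width a = sqrt(3/n)*sigma):

```latex
\operatorname{Var}(u_n)
  = \int_{-a}^{a} x^2 \, \frac{dx}{2a}
  = \frac{a^2}{3}
  = \frac{1}{3}\cdot\frac{3}{n}\,\sigma^2
  = \frac{\sigma^2}{n},
```

so the n-fold convolution of u_n(x) with itself, being the density of a sum of n independent such variables, has variance n * (sigma^2/n) = sigma^2, matching the target Gaussian.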
3. Page 5, line 92: I agree, the explanation of the operators *_x^n and *_y^n in the text is minimalistic. I propose to give a thorough derivation of them in the appendix.
4. Section 5, rate of convergence: Maybe one has to differentiate from application to application. When Barnes interpolation is used to visualize data by means of isolines, as done in the manuscript, I noticed that the resulting isolines more or less reached their stable form already after applying a 3-fold convolution. For other applications, which require a higher precision, a 10-fold convolution or even more might make sense.
I will extend the corresponding remark on p. 18, line 295, accordingly.
5. I can change the title to something like "Fast Approximative Barnes Interpolation". I did not use the term "approximative" in the title because other _feasible_ methods to compute Barnes interpolation are approximations as well, but do not emphasize this in an obvious way.
6. Yes, there are many other PDFs that have a faster CLT convergence to a Gaussian than the uniform PDF; in the extreme case one could even take a normal distribution itself, for which we would have n = 1.
But when using a non-trivial PDF, it is clear that placing and moving the weight window (as described on p. 9, line 161) over the data to be convolved requires in general 2T+1 multiplications and 2T additions per element. Thus, the simplicity of Algorithm 2 is largely lost, and its algorithmic complexity grows to O(L*T) instead of O(L). Overall, we then could not claim a complexity of O(N+W*H) anymore; it would rather be O(N+W*H*T).
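For contrast, the uniform kernel is exactly what makes the O(L) pass possible: the window sum can be updated with one addition and one subtraction per element. A minimal running-sum sketch (a hypothetical helper for illustration, not the author's code):

```python
import numpy as np

def uniform_convolve(f, T):
    """Convolve a 1-D array f with a discrete uniform kernel of half-width T
    (window length 2*T+1), zero-padded, using a running window sum so that
    each output element costs O(1) instead of O(T)."""
    L = len(f)
    g = np.zeros(L)
    padded = np.concatenate([np.zeros(T), f, np.zeros(T)])
    norm = 2 * T + 1
    window_sum = padded[:norm].sum()   # window for the first element
    g[0] = window_sum / norm
    for i in range(1, L):
        # Slide the window one step: add the entering sample,
        # drop the leaving one.
        window_sum += padded[i + 2 * T] - padded[i - 1]
        g[i] = window_sum / norm
    return g
```

A general (non-uniform) kernel admits no such constant-time update, which is where the extra factor T in the complexity comes from.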
Your comment leads me to consider adding a remark on p. 6, line 128, that Approximation 3 is valid in a much more general context, i.e. also for other (non-uniform) PDFs. In the case of a normal distribution, even equality holds.
7. I will try to write this more clearly.
8. This is a good question. I know that the CLT is used by some applications to quickly generate the weights of a normal distribution (as above, of course only in an approximative way). Furthermore, I assume that some image processing programs like Photoshop employ this technique in order to apply a Gaussian blur filter to an image. This process can be regarded as a special case of the method presented in this manuscript, where the observation points in question are given by the image matrix. In this case the points are located on a regular grid and the denominator of equation (8) simplifies to 1.
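This image-blur special case can be sketched as repeated separable box filters (a hypothetical illustration; `box_blur` is not from any real image-processing library):

```python
import numpy as np

def box_blur(img, T, n):
    """Apply n passes of a separable (2T+1) x (2T+1) box (uniform) filter
    to a 2-D image; by the CLT, repeated box blurs approximate a Gaussian
    blur. Borders are zero-padded via np.convolve's 'same' mode."""
    box = np.ones(2 * T + 1) / (2 * T + 1)
    out = img.astype(float)
    for _ in range(n):
        # Blur along rows, then along columns (separability).
        out = np.apply_along_axis(
            lambda r: np.convolve(r, box, mode='same'), 1, out)
        out = np.apply_along_axis(
            lambda c: np.convolve(c, box, mode='same'), 0, out)
    return out
```

Since every grid cell carries exactly one value, no normalization by summed weights is needed, mirroring the denominator reducing to 1.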
I will try to find references for such applications and will mention them in the text of the conclusions section.
Best regards and thank you once more,
Bruno
