Submitted as: development and technical paper | 27 Jan 2021

Review status: a revised version of this preprint is currently under review for the journal GMD.

Fast and accurate learned multiresolution dynamical downscaling for precipitation

Jiali Wang1, Zhengchun Liu2, Ian Foster2, Won Chang3, Rajkumar Kettimuthu2, and V. Rao Kotamarthi1
  • 1Environmental Science Division, Argonne National Laboratory, Lemont, IL, USA
  • 2Data Science and Learning Division, Argonne National Laboratory, Lemont, IL, USA
  • 3Division of Statistics and Data Science, University of Cincinnati, Cincinnati, OH, USA

Abstract. This study develops a neural-network-based approach for emulating high-resolution modeled precipitation data with comparable statistical properties but at greatly reduced computational cost. The key idea is to use a combination of low- and high-resolution simulations to train a neural network to map from the former to the latter. Specifically, we define two types of CNNs, one that stacks variables directly and one that encodes each variable before stacking, and we train each CNN type both with a conventional loss function, such as mean squared error (MSE), and with a conditional generative adversarial network (CGAN), for a total of four CNN variants. We compare the four new CNN-derived high-resolution precipitation results with precipitation generated from the original high-resolution simulations, a bilinear interpolator, and the state-of-the-art CNN-based super-resolution (SR) technique. Results show that the SR technique produces results similar to those of the bilinear interpolator, with smoother spatial and temporal distributions and smaller data variabilities and extremes than the high-resolution simulations. While the new CNNs trained with MSE generate better results over some regions than the interpolator and the SR technique do, their predictions still deviate from the ground truth. The CNNs trained with CGAN generate more realistic and physically reasonable results, better capturing not only data variability in time and space but also extremes such as intense and long-lasting storms. Once the network is trained (training takes 4 hours on one GPU), the proposed CNN-based downscaling approach can downscale 30 years of precipitation from 50 km to 12 km in 14 minutes, whereas conventional dynamical downscaling would take about one month on 600 CPU cores to generate simulations at 12 km resolution over the contiguous United States.
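The bilinear interpolator that serves as a baseline in the comparison above can be sketched in a few lines. This is a minimal, self-contained illustration (not the authors' code), using pure NumPy and an integer upscaling factor of 4 as a rough stand-in for the paper's 50 km → 12 km refinement:

```python
import numpy as np

def bilinear_upsample(coarse, factor):
    """Bilinearly interpolate a 2-D field onto a grid `factor` times finer."""
    h, w = coarse.shape
    H, W = h * factor, w * factor
    # Map fine-grid pixel centers back into coarse-grid coordinates,
    # clipping so edge pixels reuse the boundary values.
    ys = np.clip((np.arange(H) + 0.5) / factor - 0.5, 0, h - 1)
    xs = np.clip((np.arange(W) + 0.5) / factor - 0.5, 0, w - 1)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None]
    wx = (xs - x0)[None, :]
    # Weighted average of the four surrounding coarse cells.
    top = coarse[np.ix_(y0, x0)] * (1 - wx) + coarse[np.ix_(y0, x1)] * wx
    bot = coarse[np.ix_(y1, x0)] * (1 - wx) + coarse[np.ix_(y1, x1)] * wx
    return top * (1 - wy) + bot * wy

# Example: upsample a toy 4x4 "precipitation" field by 4x.
coarse = np.arange(16, dtype=float).reshape(4, 4)
fine = bilinear_upsample(coarse, 4)
print(fine.shape)  # → (16, 16)
```

As the abstract notes, such interpolation (like the SR technique) smooths the field, which is exactly why it underestimates the variability and extremes that the CGAN-trained CNNs recover.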


Status: final response (author comments only)

Comment types: AC – author | RC – referee | CC – community | EC – editor | CEC – chief editor
  • RC1: 'Comment on gmd-2020-412', Anonymous Referee #1, 29 Mar 2021
  • CEC1: 'Comment on gmd-2020-412', Juan Antonio Añel, 03 Apr 2021
    • CC1: 'Reply on CEC1', Zhengchun Liu, 30 Apr 2021
  • RC2: 'Comment on gmd-2020-412', Anonymous Referee #2, 25 May 2021
  • AC1: 'Comment on gmd-2020-412', V. Rao Kotamarthi, 08 Jul 2021


Total article views: 673 (including HTML, PDF, and XML)
  • HTML: 453
  • PDF: 205
  • XML: 15
  • Total: 673
  • BibTeX: 9
  • EndNote: 7
Views and downloads (calculated since 27 Jan 2021)

Viewed (geographical distribution)

Total article views: 520 (including HTML, PDF, and XML) Thereof 519 with geography defined and 1 with unknown origin.


Latest update: 17 Sep 2021
Short summary
Downscaling, the process of generating a dataset with higher spatial or temporal resolution from a coarser observational or model dataset, is a widely used technique. The two common methodologies for downscaling are dynamical (physics-based) and statistical (empirical). Here we develop a novel methodology, using a conditional generative adversarial network (CGAN), to downscale model precipitation forecasts, and describe the advantages of this method compared with the others.