Articles | Volume 16, issue 20
https://doi.org/10.5194/gmd-16-5895-2023
Model evaluation paper | 20 Oct 2023

Key factors for quantitative precipitation nowcasting using ground weather radar data based on deep learning

Daehyeon Han, Jungho Im, Yeji Shin, and Juhyun Lee

Short summary
To identify the key factors affecting quantitative precipitation nowcasting (QPN) using deep learning (DL), we carried out a comprehensive evaluation and analysis. We compared four key factors: the DL model, the length of the input sequence, the loss function, and the ensemble approach. Overall, U-Net outperformed ConvLSTM. The loss function and the ensemble approach showed potential to improve performance when combined effectively, whereas the length of the input sequence did not significantly affect the results.
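
As an illustration of the ensemble approach evaluated in the paper, the sketch below averages the gridded predictions of two nowcasting models pixel-wise. This is a minimal, hypothetical example (the array values and variable names are invented, not taken from the study), assuming each model outputs a 2D field of precipitation intensity for the same forecast frame:

```python
import numpy as np

# Hypothetical predictions from two nowcasting models (e.g. a U-Net and a
# ConvLSTM) for one forecast frame: 2D arrays of precipitation rate (mm/h).
pred_unet = np.array([[0.2, 1.0],
                      [0.0, 3.4]])
pred_convlstm = np.array([[0.4, 0.8],
                          [0.2, 3.0]])

# Simple equal-weight ensemble: average the member predictions pixel-wise.
ensemble = np.mean([pred_unet, pred_convlstm], axis=0)
print(ensemble)
```

Weighted averaging or selecting members per lead time are common variants; the equal-weight mean shown here is only the simplest case.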