Received: 01 Nov 2022 – Discussion started: 19 Dec 2022
Abstract. The Single Column Atmospheric Model (SCAM) is an essential tool for analyzing and improving the physics schemes of CAM. Although it greatly reduces the computational cost compared with a complete CAM, the exponentially growing parameter space still makes a combined analysis or tuning of multiple parameters difficult. In this paper, we propose a hybrid framework that combines parallel execution with a learning-based surrogate model to support large-scale sensitivity analysis (SA) and tuning of combinations of multiple parameters. We start with a workflow (with modifications to the original SCAM) that supports the execution and assembly of a large number of sampling, sensitivity analysis, and tuning tasks. By reusing 3,840 instances spanning variations of 11 parameters, we train a neural network (NN) based surrogate model that achieves both accuracy and efficiency, reducing the computational cost by several orders of magnitude. This improved balance between cost and accuracy enables us to integrate NN-based grid search into traditional optimization methods, achieving better optimization results with fewer compute cycles. Using this hybrid framework, we explore the joint sensitivity of multi-parameter combinations across multiple cases using sets of three parameters, identify the most sensitive three-parameter combination out of the eleven parameters, and perform a tuning process that reduces the precipitation error by 5 % to 15 % in different cases.
To further improve the efficiency of experiments using SCAM, we train a neural network-based surrogate model to support large-scale sensitivity analysis and tuning of combinations of multiple parameters. Using this hybrid method, we explore the joint sensitivity of multi-parameter combinations across typical cases, identify the most sensitive three-parameter combination out of eleven parameters, and perform a tuning process that reduces the precipitation error in these cases.