Empirical values and assumptions in the convection of numerical models

Convection influences climate and weather events over a wide range of spatial and temporal scales. Therefore, accurate predictions of the time and location of convection and of its development into severe weather are of great importance. Convection has to be parameterized in Numerical Weather Prediction models, Global Climate Models, and Earth System Models (NWPs, GCMs, and ESMs) because the key physical processes occur at scales much smaller than the model grid size. The convection schemes described in the literature represent the physics by simplified models that require assumptions about the processes and the use of a number of parameters based on empirical values. The present paper examines these choices and their impacts on model outputs and emphasizes the importance of observations to improve our current understanding of the physics of convection.

The paper is organized as follows. A brief note on model parameterization, tuning, and the importance of convection follows (Sect. 1.1 and 1.2). Then, the main strategies to model cumulus convection are briefly presented to provide the framework for the rest of the paper (Sect. 2). The core of the review is in the following three sections, which present the assumptions and empirical values in the trigger (Sect. 3), the cloud model (Sect. 4), and the closure of the scheme (Sect. 5). The paper concludes with notes and considerations on the topic, bringing together the most important results. The acronyms used throughout the paper may be found in Table 1.

Model parameterizations
Parameterizations in numerical models address the fact that some significant physical processes in nature occur at scales much smaller than the grid size used in models (Arakawa and Schubert, 1974; Stensrud, 2007; McFarlane, 2011). That is the case of convection, where spatial resolutions of at least 100 m are required to realistically resolve its dynamics (Bryan et al., 2003).
However, typical horizontal grid resolutions in current models range from the kilometer scale for high-resolution NWP applied to a particular area, to dozens of kilometers in global NWPs, GCMs, and ESMs. With these model grids, convection is a subgrid-scale process that is not explicitly resolved. The physics is instead represented by a simplified model that requires assumptions about the processes and the use of several parameters based on empirical values. These are used as thresholds, constraints, or mean values of a number of processes, and the simplification itself requires a compromise between reducing complexity and providing a fair representation of the atmosphere.
While sometimes neglected and seldom made explicit, tuning is an integral part of modeling (Hourdin et al., 2017; Schmidt et al., 2017; Tapiador et al., 2019a, b). It consists of estimating sensible values for the empirical parameters to reduce the discrepancies between model outputs and observations. An example of these discrepancies is shown in Fig. 1. Hence, tuning may have a significant influence on model results and can help identify the parts of the model that need further attention.
However, blind tuning can mask fundamental problems within the parameterization, leading to non-realistic physical states of the system, compensating for errors in a way that translates into an inappropriate budget equilibrium, or affecting other metrics (Tapiador et al., 2019a). This is particularly important for climate models, since projections and simulations of future climates always include the ceteris paribus assumption (Smith, 2002). Indeed, parameters that work well for the present climate may not do so for the future. Understanding the range of validity of the choices and the logical steps behind the selections can help produce stronger and more robust simulations.
Such a wealth of papers illustrates the strength of this research topic across a vast number of fields. Of these, developing parameterization schemes for models is a thriving subfield, with several teams advancing the field (see Sect. 2 below).

Difficulties persist, however. Convective processes have been identified as a major source of uncertainty in the latest decadal survey (National Academies of Sciences, Engineering, and Medicine, 2018), and dedicated efforts are needed to fill the gaps in our present knowledge of the processes involved. Owing to the influence of convection on climate and weather events over a large range of spatial and temporal scales, one of the most important objectives of the latest decadal survey is to improve the predictions of the timing and location of convective storms, and of their evolution into severe weather. Besides the drawbacks associated with spatial resolution, the multiscale interactions leading to the organization and evolution of convective systems are difficult to observe and represent.
Improving the observed and modeled representation of natural, low-frequency modes of weather and climate variability was identified in the survey as one of the most important challenges of the coming decade. Including interactions between the large-scale circulation and the organization of convection, such as the Madden-Julian Oscillation (MJO) or the El Niño-Southern Oscillation (ENSO), aims to improve predictions by 50 % at lead times of 1 week to 2 months, which would have a high societal impact.
It is essential to further understand the physics and dynamics of the underlying processes, which are currently crudely parameterized in the majority of models. Advanced observations of atmospheric convection and high-resolution models are also needed. While models will likely increase their nominal resolution in the next decade, it is also likely that global, century-long simulations from multiple ensembles under different assumptions will need to resort to parameterizing the most computing-intensive tasks.

Overview of the main schemes in cumulus convection modeling
Soon after Charney and Eliassen (1964) and Ooyama (1964) introduced the idea of cumulus parameterization, two approaches emerged: the convergence and the adjustment schemes (Arakawa, 2004). Later, a new scheme, the mass flux approach, was introduced by Ooyama (1971). The main assumptions in convective parameterizations concern the trigger model, the representation of the mutual interaction between cumulus clouds and the large-scale environment (cloud model), and the closure of the scheme. As of 2020, the main cumulus convection schemes publicly available for NWPs are convergence schemes, adjustment schemes, mass flux schemes, cloud system resolving models (CSRM), and super-parameterization (SP). The purpose of this paper is not to compare the performances of the schemes but to investigate their empirical values and assumptions, so the focus of the following sections is on these.

Convergence schemes: the key role of the total moisture convergence parameter
Convergence schemes consider that synoptic-scale convergence destabilizes the atmosphere, while the heat released through condensation in cumulus clouds stabilizes it. Typical examples of this approach are Charney and Eliassen (1964), Ooyama (1964), and Kuo (1974). Charney and Eliassen (1964) did not use cloud models to explain these interactions. Instead, the concept of conditional instability of the second kind (CISK) was introduced. Ooyama (1964) used a similar formulation, but represented the heating released through condensation in cumulus clouds in terms of a mass flux and considered the entrainment of ambient air. Kuo (1965, 1974) used a simple cloud model to describe the interaction between a large-scale environment and cumulus clouds. One of the key assumptions in this scheme is that the total moisture convergence can be divided into a fraction b, which is stored in the atmosphere, and the remaining fraction (1 − b), which precipitates and heats the atmosphere. This parameter was further modified by Anthes (1977), who proposed a relationship between b and the mean relative humidity (RH) in the troposphere, with b ≤ 1. In the evaluation of rainfall rates using data from phase III of the Global Atmospheric Research Program Atlantic Tropical Experiment (GATE), Krishnamurti et al. (1980) obtained the most realistic precipitation rates for b ≈ 0. In a later paper, Krishnamurti et al. (1983) introduced an additional subgrid-scale moisture supply to account for the observed vertical distributions of heat and moisture. The total moisture supply was expressed as I = (1 + η) I_L, with I_L the large-scale moisture supply and η an additional, empirically determined parameter. The authors used a multiple regression approach to find the values of η and b. Another approach consists of using the wet-bulb characteristics to locally determine the partition between precipitation and moistening (Geleyn, 1985).
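For illustration, the minimal Python sketch below applies this partition to a column moisture supply; the RH-dependent form of b and all numerical values are assumptions made for the example, not the exact expressions of any of the cited schemes.

```python
# Minimal sketch of the Kuo-type partition of the moisture supply.
# The RH-dependent form of b is only illustrative of the Anthes (1977) idea;
# operational implementations use their own calibrated expressions.
L_V = 2.5e6  # latent heat of vaporization [J kg-1]

def moistening_fraction(mean_rh, n=1.0):
    """Fraction b (<= 1) of the moisture supply stored in the atmosphere,
    decreasing as the tropospheric mean relative humidity increases."""
    return min(1.0, max(0.0, (1.0 - mean_rh) ** n))

def kuo_partition(moisture_supply, mean_rh):
    """Split the total moisture supply [kg m-2 s-1] into a stored fraction b
    and a precipitating fraction (1 - b) that heats the column."""
    b = moistening_fraction(mean_rh)
    moistening = b * moisture_supply                  # stays in the column as vapor
    precipitation = (1.0 - b) * moisture_supply       # rained out
    column_heating = L_V * precipitation              # W m-2 released by condensation
    return moistening, precipitation, column_heating

# Example: a moist column (mean RH = 0.85) supplied with 1e-4 kg m-2 s-1 of moisture
print(kuo_partition(1.0e-4, 0.85))
```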
Due to its formulation, the Kuo scheme cannot produce a realistic moistening of the atmosphere and cannot represent shallow convection. Moreover, it assumes that convection consumes water and not energy, which violates causality (Raymond and Emanuel, 1993). Despite these drawbacks, it can produce acceptable results in various applications (Kuo and Anthes, 1984; Molinari, 1985; Pezzi et al., 2008), such as in GCMs and NWP models (Rocha and Caetano, 2010; Mbienda et al., 2017). This convective parameterization scheme demands the least computational power and is thus sometimes used for large, centennial simulations.

Adjustment schemes: two strategies to remove instability
In adjustment schemes, the atmospheric instability is removed through an adjustment towards a reference state. Therefore, the physical properties of clouds are implicit and no cloud models are needed. The first proposed adjustment scheme was the moist convective adjustment by Manabe et al. (1965), also known as the hard adjustment. In this parameterization, moist convection occurs if the air is supersaturated and conditionally unstable. The instability is removed through an instantaneous adjustment of the temperature to a moist-adiabatic lapse rate, and of the water vapor mixing ratio to saturation. Moreover, all the water condensed in this process precipitates immediately. The main problems of this scheme are the production of very large precipitation rates and its saturated final state after convection, which is rarely observed in nature. The so-called soft or relaxed adjustment schemes attempt to alleviate these problems by assuming that the hard adjustment occurs only over a fraction of the grid area, or by specifying the final mean RH (Cotton and Anthes, 1992). For example, Miyakoda et al. (1969) defined saturation as 80 % RH, while Kurihara (1973) performed the adjustment based on the buoyancy condition of a hypothetical cloud element instead of the saturation criterion.
Further improvements to the adjustment schemes were introduced by Betts and Miller (1986), whose scheme is also known as a penetrative adjustment scheme. The authors proposed an adjustment of the large-scale atmospheric temperature and humidity to reference profiles over a specified time scale (the adjustment timescale). The reference profiles, different for shallow and deep convection, are quasi-equilibrium states based on observational data from GATE, the Barbados Oceanographic and Meteorological Experiment (BOMEX), and the Atlantic Trade-Wind EXperiment (ATEX). For the construction of the temperature reference profile, Betts (1986) used a mixing line model (Betts, 1982, 1985). Then, the moisture reference profile was calculated from the temperature profile by specifying the pressure difference between the air parcel saturation level and the pressure level at cloud base, freezing level, and cloud top. Therefore, the three adjustment parameters used in this scheme are the adjustment timescale τ, the stability weight Ws, and the saturation pressure departure Sp. The sensitivity of the scheme to the adjustment parameters has been evaluated by numerous authors. For instance, Baik et al. (1990) analyzed the influence of different values of each adjustment parameter on the simulation of a tropical cyclone, while Vaidya and Singh (1997) did the same for the simulation of a monsoon depression using four sets of values, including those from Betts and Miller (1986) and Slingo et al. (1994). In all cases, the adjustment parameters had to be modified depending on the climate regime. While Baik et al. (1990) set Ws = 0.95 and Sp = (-30, -37.5, -38) hPa as the optimal parameters to simulate a tropical cyclone, Vaidya and Singh (1997) obtained the best forecast for a monsoon depression with Ws = 1.0 and Sp = (-60, -70, -50) hPa. Despite the improvements achieved through adjusting the parameters for different climate conditions, the original Betts-Miller scheme occasionally produced heavy spurious rainfall over warm water and light precipitation over oceanic regions (Janjić, 1994). To overcome this problem, Janjić (1994) proposed considering a range of reference equilibrium states, and characterizing the convective regimes by a parameter called "cloud efficiency", which is related to precipitation production and depends on cloud entropy. This parameter is the sort of empirical value that requires attention when future climates are to be simulated. The modified scheme, known as the Betts-Miller-Janjić (BMJ) scheme, is one of the most widely used adjustment schemes in NWP models (Vaidya and Singh, 2000; Fiori et al., 2014; Fonseca et al., 2015; García-Ortega et al., 2017), despite its large bias for light rainfall (Gallus and Segal, 2001; Jankov and Gallus, 2004; Jankov et al., 2005). Convective adjustment schemes are computationally efficient, which makes them suitable for large-scale simulations.
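The effect of the adjustment timescale can be illustrated with a minimal Python sketch; the relaxation form and the profile values below are illustrative assumptions, not the operational Betts-Miller formulation.

```python
import numpy as np

def relax_to_reference(t_profile, t_reference, dt, tau=3600.0):
    """Soft (relaxed) adjustment: nudge the temperature profile toward a reference
    profile over the adjustment timescale tau [s],
    dT/dt = (T_ref - T) / tau, discretized with a forward-Euler step dt [s]."""
    return t_profile + (dt / tau) * (t_reference - t_profile)

# Hypothetical 5-level column [K]: model profile and a convectively adjusted reference
t_model = np.array([300.0, 292.0, 284.0, 275.0, 264.0])
t_ref   = np.array([299.0, 291.5, 283.0, 273.5, 263.0])

# One 10-minute step with a 1-hour adjustment timescale moves the profile
# one sixth of the way toward the reference.
print(relax_to_reference(t_model, t_ref, dt=600.0, tau=3600.0))
```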

Mass flux schemes: assuming the rates of mass detrainment and entrainment
Because of the nature of both convergence and adjustment schemes, a cloud model is not needed to describe the interaction between cumulus clouds and the large-scale environment. This is not the case for the mass flux schemes, where convective instability is removed through the vertical transport of heat, moisture, and momentum. The first formulation of this type was introduced by Ooyama (1971). The author assumed that cumulus clouds of different sizes coexist, and that they could be represented by an ensemble of independent, non-interacting buoyant elements. The definition of the so-called dispatcher function would close the parameterization; however, the author left this question open. Yanai et al. (1973) and Arakawa and Schubert (1974), hereafter AS, considered that an ensemble of cumulus clouds in a large-scale system is confined to an area that is large enough to contain the ensemble, and small enough compared to the large-scale system. The equations of mass, heat, and moisture continuity are written in terms of s (the dry static energy), v (the horizontal velocity), w (the vertical velocity), QR (the heating rate due to radiation), L (the latent heat of vaporization), c (the rate of condensation per unit mass of air), e (the rate of evaporation of cloud water), and q (the water vapor mixing ratio), where the bar denotes horizontal averages over the hypothesized area. Using several assumptions, such as that mass exchange between a cumulus cloud and the large-scale environment takes place through detrainment of cloud air D and entrainment of environmental air E, and following the analysis performed by Gregory and Miller (1989) (the reader is referred to Bechtold (2009) for a detailed explanation), the budget equations for a single entraining plume (Eq. 2; see the sketch after this paragraph) are written in terms of the rates of mass detrainment and entrainment per unit pressure interval, the cumulus mass flux M, the mixing ratio of liquid water l, and the rate of rainwater generation r. Subscript i denotes the ith cumulus cloud, and subscript D the value in the detraining air. Mass flux convective parameterization schemes are still the most common convective parameterizations used in ESMs, Regional Climate Models (RCMs), and NWP models.
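Because the display equations could not be reproduced here, the following is only a hedged sketch of the form in which such single-plume budgets are commonly written (loosely following the notation of Gregory and Miller, 1989, and Bechtold, 2009; the sign convention of the pressure derivative varies between formulations):

$$\frac{\partial M_i}{\partial p} = E_i - D_i, \qquad \frac{\partial (M_i s_i)}{\partial p} = E_i\,\bar{s} - D_i\,s_{D,i} + L\,c_i,$$

$$\frac{\partial (M_i q_i)}{\partial p} = E_i\,\bar{q} - D_i\,q_{D,i} - c_i, \qquad \frac{\partial (M_i l_i)}{\partial p} = -D_i\,l_{D,i} + c_i - r_i,$$

where E_i and D_i denote the entrainment and detrainment rates of the ith plume per unit pressure interval; these expressions correspond to Eq. (2) referred to later in the text.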

Cloud System Resolving Models (CSRM)
The performance of the previous schemes prompted the search for new strategies to model convection. Krueger (1988) put forward the CSRM idea (also known as explicit convection, convection-permitting, or cloud ensemble modeling) to explicitly simulate convective processes at the kilometer scale, instead of using parameterizations. Convective parameterizations tend to produce too little heavy rain and too much light rain (Kooperman et al., 2018), and have problems representing diurnal precipitation cycles over land (Pritchard et al., 2011). The use of convection-permitting models can solve errors associated with convective parameterizations (Kendon et al., 2012; Prein et al., 2013; Brisson et al., 2016), but entails an extremely high computational cost, which limits its application in climate modeling (Wagner et al., 2018; Randall et al., 2019). However, it is also widely used in NWP (Kain et al., 2006; Gebhardt et al., 2011).

Super-Parameterization (SP)
Hybrid approaches also exist. SP (also known as cloud-resolving convective parameterization (CRCP) or the multiscale model framework (MMF)) is an approach between parameterized and explicit convection, which consists of replacing the convective parameterizations by 2D cloud resolving models (CRMs), or even a 3D LES model, at each grid cell of a GCM (Grabowski and Smolarkiewicz, 1999; Grabowski, 2016). SP is mostly applied in GCMs (Grabowski, 2001; Khairoutdinov and Randall, 2003; Khairoutdinov et al., 2005; Zhu et al., 2009; Jung and Arakawa, 2014; Sun and Pritchard, 2016). Several studies have compared the performance of SP with convective parameterizations, in particular using the Community Atmosphere Model (CAM).
Among the most notable improvements achieved by SP in CAM are simulations of heavy rainfall events that are much more similar to observations, a better diurnal precipitation cycle over land (Khairoutdinov et al., 2005; DeMott et al., 2007; Zhu et al., 2009; Holloway et al., 2012; Rosa and Collins, 2013), and the production of a realistic MJO (Thayer-Calder and Randall, 2009; Holloway et al., 2013). However, simulations with SP also have problems that need solving, such as the failure to simulate light rainfall rates reported by Zhu et al. (2009). The computational cost of this approach is also higher than that of convective parameterizations (Krishnamurthy and Stan, 2015), but smaller than the computational cost of global CSRMs used to perform climate simulations (Randall et al., 2003). This paper considers all the aforementioned convective parameterizations, with emphasis on the mass flux schemes.

Trigger function: assumptions and empiricisms
In a CP, the accurate simulation of convection greatly depends on the trigger function. The trigger function has to determine whether convectively unstable air in the boundary layer leads to the onset of convection and, if so, activate the CP.
There are as many strategies to initiate convection as there are convection schemes. This section focuses on the assumptions and empirical values of the most important trigger functions, the starting levels, and the impacts of the trigger formulations on the simulation of convective processes. Table 2 lists the most common choices used in the main trigger function types.
Table 2: A sample of empirical values and assumptions used in the main trigger function types.

Trigger function types
According to the physical variable used as the main trigger condition, the most commonly used trigger functions in CPs may be classified into (1) moisture convergence, (2) cloud work function (CWF), (3) convective available potential energy (CAPE), and (4) large-scale vertical velocity triggers. Other triggers used are (5) stochastic and (6) heated condensation framework (HCF) triggers. Table 3 lists the assumptions and empirical values used in the main trigger function types, which are discussed below.

Moisture convergence trigger
The main condition to activate convection, together with the existence of a deep layer of conditional instability, is exceeding a minimum threshold value of the vertically integrated moisture convergence. This is the case in the Anthes-Kuo scheme (Kuo, 1965; Anthes, 1977) and in the original Tiedtke scheme (Tiedtke, 1989). The latter has undergone several modifications since its publication. For instance, Gregory et al. (2000) substituted the condition of positive buoyancy to activate deep convection by a minimum cloud depth threshold in the ECMWF convective parameterization. Zhang et al. (2011) proposed a modified version of the Tiedtke scheme with the aim of improving the representation of marine boundary layer clouds over the southeast Pacific. Among these modifications, deep convection is allowed to occur only when the vertically averaged relative humidity (RH) exceeds 80 %. A newer modified Tiedtke scheme used in the Integrated Forecasting System (IFS) and in the Weather Research and Forecasting (WRF) model uses the trigger criteria from Jakob and Siebesma (2003) and Bechtold et al. (2004), which include the search for unstable parcels within the lowest 300 hPa above the ground. The simulation of the diurnal cycle of precipitation using this new trigger and new entrainment rates improved in comparison to previous versions of the IFS (Bechtold et al., 2004).
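As a schematic of such a trigger test, the following Python sketch integrates the moisture convergence over the column and combines it with an RH condition; the threshold values and variable names are placeholders, not the values used by the Tiedtke or Anthes-Kuo schemes.

```python
import numpy as np

def moisture_convergence_trigger(conv, dp, rh_mean,
                                 conv_threshold=3.0e-5, rh_threshold=0.80):
    """Schematic trigger test: activate deep convection only if (i) the vertically
    integrated moisture convergence exceeds a minimum threshold and (ii) the column
    mean relative humidity exceeds a minimum value (cf. the RH >= 80 % condition of
    Zhang et al., 2011).  Threshold values here are placeholders for illustration.
    conv : moisture convergence per level [kg kg-1 s-1]
    dp   : pressure thickness of each level [Pa]"""
    g = 9.81
    column_convergence = np.sum(conv * dp) / g   # kg m-2 s-1
    return bool(column_convergence > conv_threshold and rh_mean >= rh_threshold)

# Hypothetical 4-layer column
conv = np.array([2.0e-8, 1.5e-8, 1.0e-8, 5.0e-9])
dp   = np.array([5000.0, 10000.0, 10000.0, 10000.0])
print(moisture_convergence_trigger(conv, dp, rh_mean=0.85))   # True
```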

CWF trigger
The first CWF trigger was introduced by AS, who proposed that convection activation depends on a threshold value of the CWF, which is defined as the vertically integrated buoyancy force of each entraining cloud between cloud base and cloud top. Several variations of the original CWF trigger function have been suggested. In the relaxed Arakawa-Schubert scheme (RAS) (Moorthi and Suarez, 1992), the activation of convection depends on a critical value of the CWF, while the simplified Arakawa-Schubert scheme (SAS) (Pan and Wu, 1995) triggers convection if the CWF is positive, as shown in Table 2. Another condition to activate convection in SAS is based on the pressure difference between the starting point and the level of free convection (LFC), which defines a threshold value for the convective inhibition (CIN) factor. With the aim of decreasing convection in large-scale subsidence regions and increasing it in large-scale convergent regions, Han and Pan (2011) modified the limit to reach the LFC, which is now proportional to the large-scale vertical velocity w. Further improvements to the SAS activation criteria include a grid-spacing dependency in the convective trigger function (Lim et al., 2014), which accounts for the spatial resolution dependency, and a new definition of the CIN threshold value applying a scale-aware factor σ.

CAPE trigger
Many CPs have been proposed to simplify the formulation and implementation of the AS scheme. Among other assumptions, some CPs substitute the convection trigger based on the CWF by CAPE, defined in a similar way as the CWF but without including the dilution of the ascending parcel by entrainment. For instance, the BMJ scheme is based on empirical results, and the activation of convection requires the existence of CAPE. In this scheme, the cloud base is the lifting condensation level (LCL) of the lifted parcel with the largest CAPE in the lowest 130 hPa of the model. From there, the parcel is lifted moist adiabatically until the equilibrium level (EL) is reached. In general, the cloud top is at the level immediately beneath the EL.
Moreover, deep convection continues if the cloud depth is greater than a certain value and covers at least two model layers (Baldwin et al., 2002). Finally, deep convection activates if the adjustment using reference profiles of temperature (based on a moist adiabat) and moisture (based on imposed sub-saturation at the cloud base) results in the column drying. The BMJ scheme is currently used in the NCEP North American Mesoscale model (NAM), MM5, and WRF models. Another important convective parameterization also using a CAPE trigger is the Zhang-McFarlane scheme (Zhang and McFarlane, 1995). To improve climate simulations in the Canadian Climate Center GCM, the authors proposed a simplified version of the AS scheme that includes a positive-CAPE trigger. However, it initiates convection too often during the day, which led Xie and Zhang (2000) to modify the scheme. They kept the positive-CAPE condition and added a second condition based on the change of CAPE due to large-scale forcing (dCAPE). This new trigger improved the simulations of the Intertropical Convergence Zone (ITCZ) and the MJO (Zhang, 2002; Song and Zhang, 2009; Zhang and Song, 2010). Alternative formulations of the convection trigger include the addition of an RH threshold of 80 % (Zhang and Mu, 2005a, b) to suppress convection if the boundary layer air is too dry. Another modification is the inclusion of dilution in the CAPE calculation due to entrainment (dilute CAPE) by Neale et al. (2008) to reduce excessive precipitation over land in simulations of ENSO.
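The dCAPE-type logic can be sketched in a few lines of Python; the CAPE values, the threshold, and the use of the total CAPE change in place of the purely large-scale-forced change are simplifying assumptions made for the example.

```python
def dcape_trigger(cape_now, cape_prev, dt, dcape_threshold=65.0):
    """dCAPE-type trigger check: require positive CAPE and a CAPE generation rate
    above a threshold.  In the actual scheme of Xie and Zhang (2000) only the part
    of the CAPE change due to large-scale forcing is used; the total change is a
    stand-in here, and the threshold value is hypothetical.
    cape_now, cape_prev : CAPE [J kg-1] at the current and previous states
    dt                  : time between the two states [s]"""
    dcape = (cape_now - cape_prev) / dt * 3600.0   # generation rate [J kg-1 h-1]
    return cape_now > 0.0 and dcape > dcape_threshold

# CAPE grew from 400 to 520 J kg-1 over a 20-minute interval: 360 J kg-1 h-1 -> True
print(dcape_trigger(520.0, 400.0, dt=1200.0))
```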

Large-scale vertical velocity trigger
Drawing on the observations in Fritsch and Chappell (1980) suggesting a positive impact of background vertical motion on convective development, Kain and Fritsch (1990) (KF) proposed a trigger based on the large-scale vertical velocity. In this scheme, the first potential source layer for convection, also known as the updraft source layer (USL), is a layer of at least 60 hPa thickness that is constructed by mixing vertically adjacent layers, beginning at the surface. The temperature and pressure of the parcel at its LCL are calculated, as well as a temperature perturbation proportional to the grid-scale vertical velocity w (see Table 3). If the sum of the parcel temperature and the temperature perturbation is higher than the environmental temperature, the parcel is released from its LCL. Above the LCL, the parcel is lifted upwards with entrainment, detrainment, water loading, and a vertical velocity determined by the Lagrangian parcel method (Bechtold et al., 2001). Convection is activated if the vertical velocity remains positive over a minimum depth of 3-4 km. Otherwise, the USL is moved up one model level and the procedure starts again. This process continues until a suitable USL is found or the search has moved above the lowest 300 hPa of the atmosphere, where the search is terminated. To extend the application of the KF scheme to a broad range of scales, Bechtold et al. (2001) related the temperature perturbation to the grid-scale vertical velocity through a slightly different mathematical expression (see Table 3); this formulation is widely used at ECMWF. Other authors, such as Ma and Tan (2009), included moisture advection in the temperature perturbation to improve the KF scheme for cases of weak synoptic forcing. Berg et al. (2013) defined a probability density function (PDF) that generates a range of virtual potential temperature and water vapor mixing ratio values to substitute into the trigger function. With this new trigger, the scheme accounts more realistically for subgrid variability within the convective boundary layer. Both the modified version of the KF scheme and the KF scheme itself are used in the WRF model.
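The layer-by-layer USL search can be summarized algorithmically. The Python sketch below is a simplified stand-in for the KF procedure: the LCL estimate, the w-dependent perturbation formula, and all numerical values are illustrative assumptions rather than the operational code.

```python
import numpy as np

def find_usl(z, t, td, w_grid, depth_m=500.0, search_limit_m=3000.0):
    """Layer-by-layer search for an updraft source layer (USL), loosely following
    the Kain-Fritsch logic: mix a slab of adjacent layers, lift it dry-adiabatically
    to its LCL, add a temperature perturbation tied to the grid-scale vertical
    velocity, and accept the first slab whose perturbed parcel is warmer than the
    environment at the LCL.
    z, t, td : height [m], temperature [K], dew point [K] on model levels (bottom first)
    w_grid   : grid-scale vertical velocity [m s-1]."""
    k = 0
    while k < z.size and z[k] - z[0] <= search_limit_m:
        in_slab = (z >= z[k]) & (z <= z[k] + depth_m)
        t_mix, td_mix, z_mix = t[in_slab].mean(), td[in_slab].mean(), z[in_slab].mean()

        z_lcl = z_mix + 125.0 * max(t_mix - td_mix, 0.0)   # Espy approximation [m]
        t_parcel_lcl = t_mix - 0.0098 * (z_lcl - z_mix)    # dry-adiabatic ascent
        t_env_lcl = np.interp(z_lcl, z, t)                 # environment at the LCL
        dt_pert = 4.64 * max(w_grid, 0.0) ** (1.0 / 3.0)   # illustrative w-based boost [K]

        if t_parcel_lcl + dt_pert > t_env_lcl:
            return k            # index of the base of a suitable USL
        k += 1                  # otherwise move the slab up one level
    return None                 # no suitable USL: convection not triggered

# Hypothetical sounding on 8 levels, with strong grid-scale ascent (0.5 m s-1)
z  = np.array([50., 300., 600., 900., 1300., 1800., 2400., 3100.])
t  = np.array([301., 299., 297., 295., 292.5, 289.5, 286., 282.])
td = np.array([295., 294., 293., 291., 288., 284., 280., 275.])
print(find_usl(z, t, td, w_grid=0.5))   # returns 0: the lowest slab triggers
```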

Table 3: A sample of empirical values and assumptions used in the trigger.

Component: buoyancy threshold. Choices in the literature (and references):
- A temperature perturbation linked to the large-scale vertical velocity (Fritsch and Chappell, 1980)
- A temperature perturbation proportional to the grid-scale vertical velocity, normalized using a reference grid spacing of 25 km (Bechtold et al., 2001)
- A temperature perturbation depending on w, with w* = 2 cm s⁻¹ and zLCL the height (m) of the LCL above the ground (Kain, 2004)
- A constant perturbation of 0.65 K (Emanuel and Živković-Rothman, 1999) or 0.90 K (Bony and Emanuel, 2001)
- A perturbation composed of horizontal and vertical components with associated normalized moisture advections (Ma and Tan, 2009)
- A probability density function (PDF) that substitutes the trigger variables by a generated range of virtual potential temperature and water vapor mixing ratio values (Berg et al., 2013)

Stochastic trigger
The traditional convective triggers lead to deficiencies in the simulation of different atmospheric events, as stated in Sect. 2.
A promising strategy to reduce these deficiencies is the use of stochastic triggering (Rochetin et al., 2014a, b). Instead of using a deterministic parameterization in which the subgrid-scale response is fixed for a given resolved-scale state, the response is sampled from a suitable probability distribution (Dorrestijn et al., 2013). For example, Majda and Khouider (2002) proposed a stochastic model for convective inhibition.

HCF trigger
Unlike some of the trigger criteria already discussed, a more recent trigger function by Tawfik and Dirmeyer (2014), the HCF, is not based on the lifted parcel method, but uses vertical profiles of temperature and humidity. First, it finds the buoyant condensation level (BCL), which is the level at which saturation would occur through buoyant mixing as a result of sensible heating from the surface. To find the BCL, the near-surface potential temperature is increased in small increments and the specific humidity is mixed from the surface to the level of neutral buoyancy, i.e., the top of the potential mixed layer (PML).
If saturation does not occur at this level, the procedure is repeated until saturation is reached; once saturation occurs, several variables are determined. The first variable is the buoyant mixing potential temperature, θBM, also known as the convective threshold. This is the value that the 2 m potential temperature must reach for the mixed layer top to reach the BCL. The second variable, the potential temperature deficit, θdef, is defined as the difference between θBM and the 2 m potential temperature, or the sum of all the temperature increments needed to attain the BCL. Hence, it is a measure of convective inhibition similar to CIN in the parcel-based approach. In the HCF, convection activates when θdef ≤ 0. The HCF trigger reduces the number of false positives compared to the parcel-based trigger. When the HCF trigger is implemented in the NCEP Climate Forecast System version 2 (CFSv2), the representation of the Indian monsoon and of tropical cyclone intensity improves (Bombardi et al., 2016). In the Community Earth System Model (CESM), the strategy improves the frequency of heavy precipitation events and reduces the overactivation of convection in the model (Tawfik et al., 2017).
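The BCL search lends itself to a compact sketch. In the Python example below, the saturation formula, the heating increments, and the schematic treatment of the mixed layer are assumptions made for illustration; the actual HCF implementation of Tawfik and Dirmeyer (2014) handles the profiles more carefully.

```python
import numpy as np

def saturation_mixing_ratio(t, p):
    """Crude saturation mixing ratio [kg kg-1] from the Magnus formula."""
    es = 611.2 * np.exp(17.67 * (t - 273.15) / (t - 29.65))   # Pa
    return 0.622 * es / (p - es)

def hcf_theta_deficit(theta_2m, theta, q, p, dtheta=0.05, max_heating=20.0):
    """Schematic heated condensation framework: raise the near-surface potential
    temperature in small increments, mix humidity over the growing potential mixed
    layer (PML), and stop when the mixed air saturates at the PML top (the BCL).
    Returns (theta_bm, theta_def); convection is considered triggered when
    theta_def <= 0.  Profile handling is deliberately simplified."""
    for inc in np.arange(0.0, max_heating, dtheta):
        theta_mix = theta_2m + inc
        pml = theta <= theta_mix                 # levels mixed by the heated parcel
        if not pml.any():
            continue
        q_mix = q[pml].mean()                    # humidity mixed over the PML
        k_top = np.where(pml)[0][-1]             # top of the potential mixed layer
        t_top = theta_mix * (p[k_top] / 1.0e5) ** 0.286
        if q_mix >= saturation_mixing_ratio(t_top, p[k_top]):   # saturation: BCL found
            theta_bm = theta_mix                 # buoyant mixing potential temperature
            return theta_bm, theta_bm - theta_2m # and the potential temperature deficit
    return None, None                            # no BCL within the allowed heating

# Hypothetical profile: potential temperature [K], mixing ratio [kg kg-1], pressure [Pa]
theta = np.array([300.0, 301.0, 302.5, 304.0, 306.0])
q     = np.array([0.016, 0.015, 0.014, 0.013, 0.012])
p     = np.array([1000.0e2, 950.0e2, 900.0e2, 850.0e2, 800.0e2])
print(hcf_theta_deficit(300.0, theta, q, p))   # theta_def of about 6 K: no triggering
```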

Starting levels
The LFC, the USL, or the starting level for the updraft is located at, or near, the cloud base or at the top of the planetary boundary layer.
Different methods for calculating the LFC are applied in the literature, such as those used by KF and BMJ, already described in Sect. 3.1, or the one used by Grell (1993), who determined the USL as the level of the maximum value of the moist static energy, h. Table 4 lists a sample of the main assumptions and empirical values used to determine the starting levels.
While the starting level for the ascending currents (updrafts) is reasonably evident, the starting level for the descending currents (downdrafts), usually called the level of free sinking (LFS), may start at any vertical level no lower than the cloud base. Several convective parameterizations, such as those proposed by Tiedtke (1989) or Bechtold et al. (2001), follow the definition suggested by Fritsch and Chappell (1980), who assumed that the LFS is the level at which the temperature of a saturated mixture of equal amounts of updraft and environmental air becomes smaller than the environmental temperature. In contrast, Grell (1993) determined the LFS as the level of the minimum value of h, and Zhang and McFarlane (1995) matched the LFS with the lowest updraft detrainment level. However, if the minimum value of h is lower than the bottom level of updraft detrainment, the LFS is determined as in Grell (1993).

Table 4: A sample of assumptions and empirical values used to determine the starting levels.

USL / LFC
- Near-surface air selected as the source layer (Tiedtke, 1989; Donner, 1993; Bechtold et al., 2001; Tawfik and Dirmeyer, 2014)
- Layer of maximum moist static energy h (Arakawa and Schubert, 1974; Grell, 1993; Zhang and McFarlane, 1995; Wu, 2012)
- Convection allowed or suppressed according to large-scale vertical velocity thresholds ωmin and ωmax, with different values over land and over the ocean; the CIN threshold varies within the range 120−180 hPa (Han and Pan, 2011)
- ωmin and ωmax computed assuming that ω depends on the horizontal resolution of the model, through a scale-aware factor σ (Kwon and Hong, 2017)

LFS
- Level at which the temperature of a saturated mixture of equal amounts of updraft and environmental air becomes less than the environmental temperature (Fritsch and Chappell, 1980; Tiedtke, 1989; Bechtold et al., 2001)
- Level of minimum environmental saturated equivalent potential temperature between the LCL and the cloud top (Kain and Fritsch, 1990; Wu, 2012)
- Coincides with the level of minimum moist static energy h if it is lower than the base of the detrainment layer; otherwise it matches the detrainment level (Grell et al., 1991; Zhang and McFarlane, 1995)
- Level above the minimum moist static energy h (Grell, 1993; Pan and Wu, 1995)
- The highest level where equal parts of evaporatively cooled environmental air and cloudy air become unstable with respect to the environment (Nordeng, 1994)
- Located within the range 120−150 hPa above the USL (Kain, 2004)
- Level where the saturated updraft terminates, 150 hPa above the ground (Stratton and Stirling, 2012)
- Level of minimum moist static energy h (Baba, 2019)

Impact of trigger functions on convective models
Differences between trigger functions depend on the identification of the source layer of convective air and on how this layer of unstable air can give rise to convection. While near-surface air is selected as the source layer in some CPs (Tiedtke, 1989; Donner, 1993; Bechtold et al., 2001; Tawfik and Dirmeyer, 2014), in others the choice is the layer of maximum moist static energy, h (Arakawa and Schubert, 1974; Grell, 1993; Zhang and McFarlane, 1995; Wu, 2012). On the other hand, different convection triggers are used to determine whether unstable air turns into convection, as mentioned in the previous section.
However, the best way to construct a trigger function is still unknown and, in many cases, an ad hoc formulation leads to poor performance in the activation of convection at the right location and time (Suhas and Zhang, 2014; Song and Zhang, 2017).
Comparing the performance of different trigger functions against observations from different climates leads to improvements in the formulation of the activation criteria for convection. Suhas and Zhang (2014) used three intensive observation period (IOP) datasets from the Atmospheric Radiation Measurement (ARM) program and long-term single-column model (SCM) simulations to evaluate the performance of different trigger functions (the Arakawa-Schubert, Bechtold, Donner, Kain-Fritsch, and Tiedtke schemes, and four variants of the Zhang-McFarlane scheme). The dilute dCAPE trigger function showed the best performance in both the tropics and the midlatitudes, while the undilute dCAPE was as good as the dilute dCAPE only in the tropics. Furthermore, the Bechtold and the dilute CAPE trigger functions were among the best performing schemes. As a follow-up, Song and Zhang (2017) used observations from the Green Ocean Amazon (GOAmazon) field campaign to evaluate and improve the trigger functions selected in Suhas and Zhang (2014), with the addition of the HCF. In their study, the dCAPE-type triggers also ranked first, followed by the Bechtold and HCF triggers.
The dCAPE trigger improved with an optimization of the entrainment rate and the dCAPE threshold, while the undilute dCAPE trigger performed better with the inclusion of a 700-hPa upward-motion condition.
The convection trigger criterion plays a crucial role in the simulation of a wide range of atmospheric events. The impact of the trigger function on the correct simulation of the diurnal cycle of convection and precipitation in atmospheric models has been widely studied, especially over land (Bechtold et al., 2004; Knievel et al., 2004; Lee et al., 2007a, b, 2008; Hara et al., 2009; Evans and Westra, 2012). The common problem in the simulation of the diurnal cycle is that it peaks too early and its amplitude is too high (Yang and Slingo, 2001; Collier and Bowman, 2004). Moreover, the diurnal cycle of precipitation peaks too early over land (in general, 2 to 4 hours before the observed maxima) (Dai, 2006), which is related to the formulation of the trigger function (Betts and Jakob, 2002; Bechtold et al., 2004). Lee et al. (2008) performed a sensitivity analysis with four different trigger functions implemented in the relaxed Arakawa-Schubert scheme (RAS) and found significant differences in the diurnal cycle of precipitation over the Great Plains in the United States. Several studies have performed sensitivity analyses and found possible ways to improve the simulation of the diurnal cycle. Models with finer resolution provided a better simulation of the amplitude, variability, and timing of the diurnal cycle (Sato et al., 2009). The inclusion of the effect of moisture advection in the trigger function improved the distribution and intensity of convective precipitation in MM5 (Ma and Tan, 2009). The use of different initiation and termination conditions in the SAS scheme led to a better diurnal variation of precipitation (Han et al., 2019), although it increased the excessive precipitation and did not alleviate the bias in the phase of precipitation intensity. The modification of both the trigger and closure criteria by considering cold pools could minimize the bias in the diurnal cycle of convection (Rio et al., 2009, 2013). Another important case is the deficiencies in the simulation of the MJO (Lin et al., 2006), which are often improved by the modification of the trigger function. For example, Wang and Schlesinger (1999) found that a better representation of the MJO was possible by adding a moisture trigger to the convective parameterization used in the atmospheric general circulation model at the University of Illinois at Urbana-Champaign (UIUC). Zhang and Mu (2005b) used the same approach in the National Center for Atmospheric Research (NCAR) Community Climate Model version 3 (CCM3), as did Lin et al. (2008) in the Seoul National University (SNU) atmospheric general circulation model. Another example is a better representation of the Indian summer monsoon rainfall through the addition of the HCF to the trigger function in the Climate Forecast System version 2 (CFSv2) (Bombardi et al., 2015).
The lack of "convective memory" effects in models based on the quasi-equilibrium (QE) assumption causes a convective parameterization to be triggered, regardless of the convection stage, as long as the convection criteria are met. Different ways to include the memory effect have been proposed, such as using a prognostic cumulus kinetic energy (Pan and Randall, 1998) or an ensemble of cold pools (Grandpeix and Lafore, 2010; Del Genio et al., 2015).

Cloud model: types and choices
The cloud model represents the interaction between cumulus clouds and the large-scale environment. Thus, it determines the vertical distribution of convective heat and moisture through the parameterization of the mass flux profile, the entrainment/detrainment, and the microphysics. This section discusses the main types of mass flux and entrainment/detrainment schemes adopted in the literature, as well as the main assumptions and empirical values employed in the formulation of the cloud model.

Mass flux scheme types
According to the approach used to estimate the unknown quantities in Eq. (2), mass flux schemes are classified into bulk, spectral and episodic mixing models.

Bulk models
The ensemble of clouds within a grid box is represented by a single cloud model. Yanai et al. (1973) are the main representatives of this type of scheme. In their diagnostic study, clouds are classified according to their cloud tops, and the steady plume hypothesis (Morton et al., 1956) is applied. It is assumed that all clouds have a common cloud base height and that the values on detrainment are identical to the values inside the plume. In mesoscale models, Fritsch and Chappell (1980) and Kain and Fritsch (1992) also applied the steady hypothesis, as did Singh et al. (2019) in their study of the relationship between humidity, instability, and precipitation in the tropics. Tiedtke (1989) and Gregory and Rowntree (1990) applied the same approach as Yanai et al. (1973) in their schemes at ECMWF and at the U.K. Meteorological Office, respectively. The scheme used at ECMWF has undergone several modifications since then (Nordeng, 1994; Gregory et al., 2000; Li et al., 2007; Zhang et al., 2011; Kim and Kang, 2012; Stevens et al., 2013). Many mass flux parameterizations use the bulk-cloud approach (Siebesma and Holtslag, 1996; Bechtold et al., 2001; Neggers et al., 2009; Yano and Baizig, 2012; Loriaux et al., 2013) with different formulations of their cloud models (i.e., formulation of the mass flux at cloud base, entrainment, detrainment, and microphysics).

Spectral models
In contrast to bulk models, spectral models select a certain parameter to group the plumes into different types, each of them with its own cloud model. The majority of spectral approaches use a constant entrainment rate for each type, while other authors choose the pressure depth (Hack et al., 1984) or the radius and vertical velocity at cloud base (Nober and Graf, 2005). In contrast to Yanai et al. (1973), AS applied the quasi-equilibrium (QE) hypothesis, which assumes that convection is in quasi-equilibrium with the large-scale environment. Since the publication of the original version, the AS scheme has undergone several modifications. Moorthi and Suarez (1992) proposed a simplified version, called the relaxed Arakawa-Schubert (RAS) parameterization, with a simpler closure formulation. Grell (1993) replaced the spectrum of cloud sizes in AS by a single cloud top at a particular location and time. Pan and Wu (1995) developed the so-called simplified Arakawa-Schubert model (SAS), which is a modified version of the model proposed by Grell (1993). Han and Pan (2011) further modified SAS to overcome unrealistic grid-scale precipitation and to develop a mass flux parameterization for shallow convection.
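The spectral idea of labeling plume types by their entrainment rate can be illustrated with a short Python sketch; the moist-static-energy profiles and the three entrainment rates below are idealized assumptions, and the termination criterion is a simplification of the level of neutral buoyancy. Smaller entrainment rates yield deeper clouds, which is the behavior the spectrum is meant to capture.

```python
import numpy as np

def cloud_top_index(h_env, h_env_sat, eps, dz, k_base=0):
    """Ascend a plume with a constant fractional entrainment rate eps [m-1],
    dh_u/dz = -eps * (h_u - h_env), and terminate it (cloud top) where its moist
    static energy h_u falls below the saturated environmental value h_env_sat."""
    h_u = h_env[k_base]                        # the plume starts with low-level air
    for k in range(k_base + 1, h_env.size):
        h_u += -eps * (h_u - h_env[k]) * dz    # dilution by environmental air
        if h_u < h_env_sat[k]:                 # no longer buoyant: detrain here
            return k
    return h_env.size - 1

# Idealized profiles of moist static energy h and saturated h* [kJ kg-1], 500 m spacing
h_env     = np.array([345., 338., 332., 328., 326., 325., 326., 328., 331.])
h_env_sat = np.array([346., 342., 339., 337., 336., 336., 337., 339., 342.])

for eps in (3.0e-4, 1.5e-4, 0.5e-4):           # a small spectrum of plume types
    print(eps, cloud_top_index(h_env, h_env_sat, eps, dz=500.0))
```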

Episodic mixing models
In contrast to the continuous entrainment and average buoyancy used in entraining/detraining plume models in both bulk and spectral formulations, Emanuel (1991) proposed the so-called episodic mixing model, which is based on the stochastic mixing model of Raymond and Blyth (1986) and the observations of Taylor and Baker (1991), among others. Thus, Emanuel assumed that mixing is highly inhomogeneous and episodic, and applied the buoyancy sorting hypothesis, which is the basis of a number of cumulus parameterizations (James and Markowski, 2010; Park, 2014), especially those focused on shallow convection (Bretherton et al., 2004; De Rooy and Siebesma, 2008; Neggers et al., 2009; Pergaud et al., 2009). The Emanuel scheme and its modified versions (Emanuel and Živković-Rothman, 1999; Grandpeix et al., 2004; Peng et al., 2004) are widely used in RCMs (Zou et al., 2014; Raju et al., 2015; Bhatla et al., 2016; Gao et al., 2016; Kumar and Dimri, 2020). The aforementioned mass flux scheme types are described from the point of view of the ascending currents. However, convective downdrafts, i.e., descending currents caused by the evaporation of condensate and by rainwater loading, should also be taken into account; simply put, they may be considered as bottom-up updrafts. Downdrafts are of great importance in atmospheric convection and are commonly represented with budget equations analogous to Eq. (2) (Emanuel, 1991; Xu et al., 2002; Plant and Yano, 2015).

Entrainment and detrainment
The mixing of air masses due to the entrainment of environmental air into clouds and the detrainment of cloudy air into the environment are key processes in convective parameterizations (Blyth, 1993; Luo et al., 2010; Donner et al., 2016), as they modify the vertical profiles of heat and moisture within cloudy air. Sanderson et al. (2008) identified the entrainment rate as one of the dominant parameters affecting climate sensitivity after evaluating thousands of GCM simulations. Other authors, such as Rougier et al. (2009), Klocke et al. (2011), and Zhao (2014), have reached similar conclusions in their analyses. In addition, the convective detrainment of water vapor and hydrometeors from cumulus clouds is an important source of water that strongly impacts climate simulations (Ramanathan and Collins, 1991; Lindzen et al., 2001). The main assumptions and empirical values used in the entrainment and detrainment formulations are listed in Tables 5 and 6 and in Tables 7 and 8, respectively.

The choice of lateral vs cloud-top entrainment
Since Stommel (1947) provided the first description of cumulus cloud dilution by entrainment of environmental air, two conceptual models are still competing: the lateral entrainment model and the cloud-top entrainment model.
In the lateral entrainment model, Stommel (1947) considered that environmental air enters the cloud through the lateral cloud edges and continuously dilutes cloudy air during its ascent, regardless of whether it is considered a plume or a bubble. Several aircraft observations and experiments in water tanks (Turner, 1962; Morton, 1965) contributed to the formulation of the lateral entrainment theory. However, authors such as Warner (1970) pointed out the deficiencies of this theory in predicting the right profile of liquid water content (LWC).
In order to address these deficiencies, Squires (1958) proposed another entrainment model, the cloud-top entrainment model. This author suggested that environmental air enters the cloud predominantly at or near the cloud top, descends through penetrative downdrafts created by evaporative cooling, and dilutes the cloud by turbulent mixing. Paluch (1979) provided more evidence for cloud-top entrainment in her study on cumulus clouds over Colorado. The author found that, at a single level, the cloud water mixing ratio and the wet equivalent potential temperature follow a line, the so-called "mixing line", which connects cloud base and cloud top. Paluch interpreted this as evidence for a two-point mixing scenario. Further studies (Boatman and Auer, 1983; Lamontagne and Telford, 1983; Jensen et al., 1985; Reuter and Yau, 1987) confirmed Paluch's results. However, several authors have criticized the mixing line source levels (Blyth et al., 1988; Malinowski and Pawlowska-Mankiewicz, 1989; Raga et al., 1990; Grabowski and Pawlowska, 1993; Neggers et al., 2002; Zhao and Austin, 2005) and the interpretation of the mixing line (Betts and Albrecht, 1987; Taylor and Baker, 1991; Grabowski and Pawlowska, 1993; Siebesma, 1998; Böing et al., 2014).
Which of the two models predominates in cumulus convection remained unclear for many years. The increase in computational power in recent decades has promoted the use of LES to study entrainment and detrainment, mainly in shallow cumulus clouds. Several authors (e.g., Böing et al., 2014) have applied LES to identify the dominant mixing process in cumulus clouds, concluding that cloud-top entrainment is insignificant compared to lateral entrainment.

Main empirical values in entrainment and detrainment formulations
Aircraft observations and experiments in water tanks (Turner, 1962; Morton, 1965) led to the formulation of the lateral entrainment theory, which anticipates that the fractional entrainment rate (hereafter the entrainment rate) changes with the cloud radius (Malkus, 1959; Squires and Turner, 1962):

$$\varepsilon = \frac{1}{M}\frac{\partial M}{\partial z} = \frac{C}{R}, \qquad (3)$$

where M is the mass flux, z is the height, ε denotes the entrainment rate, C is a constant, and R is the radius of the rising plume.
As De Rooy et al. (2013) pointed out in their review article on entrainment and detrainment in cumulus convection, many cloud models still use this formulation (Arakawa and Schubert, 1974; Kain and Fritsch, 1990; Donner, 1993), sometimes assuming a constant entrainment rate. Houghton and Cramer (1951) improved this theory by taking into account the increase of vertical velocity due to buoyancy.
Thus, the authors distinguish between dynamical entrainment due to larger-scale organized inflow, εdyn, and turbulent entrainment caused by turbulent mixing, εturb (often described with an eddy diffusivity approach). Hence, the change of mass flux with height, including the detrainment, δ, of negatively buoyant mixtures, is given by

$$\frac{1}{M}\frac{\partial M}{\partial z} = \varepsilon_{\mathrm{dyn}} + \varepsilon_{\mathrm{turb}} - \delta. \qquad (4)$$

Tiedtke (1989) and Nordeng (1994) assumed that turbulent entrainment is inversely proportional to the cloud radius, as in Simpson and Wiggert (1969) and Simpson (1971). They used typical cloud sizes for different types of convection to fix the values of the entrainment rates. For penetrative and midlevel convection, the entrainment rate was fixed to εturb = 1 × 10⁻⁴ m⁻¹, which is a typical value for tropical clouds (Simpson, 1971). For shallow convection, the entrainment rate was based on typical values for large trade cumuli, εturb = 3 × 10⁻⁴ m⁻¹ (Nitta, 1975). Gregory and Rowntree (1990) also assumed a turbulent entrainment rate, but inversely proportional to the height, while in Bechtold et al. (2008), εturb depends on the saturation specific humidity (Table 5). Dynamical entrainment εdyn is proportional to moisture convergence and occurs only in the lower part of the cloud layer, up to the level of strongest vertical ascent, in Tiedtke (1989); in Nordeng (1994), it is based on momentum convergence, and the corresponding choices of Gregory and Rowntree (1990) are listed in Table 6. Kain and Fritsch (1990) introduced another type of parameterization based on buoyancy sorting. In their parameterization, homogeneous mixing of cloudy and environmental air is assumed, leading to mixtures with different buoyancy properties that have the same probability of occurrence. Moreover, the authors modified Eq. (3) to make it pressure-dependent. The fraction of environmental air that makes the mixture neutrally buoyant is the so-called critical mixing fraction χc, which determines whether a mixture entrains or detrains after mixing. Thus, entrainment of positively buoyant mixtures occurs if χ < χc, while χ > χc leads to immediate detrainment of negatively buoyant mixtures. Therefore, detrainment can occur at any level where χ > χc, unlike in the Arakawa-Schubert scheme, where only cloud-top detrainment is considered. Moreover, the maximum entrainment rate is proportional to pressure and inversely proportional to the updraft radius. However, the Kain-Fritsch scheme had deficiencies, such as excessive detrainment or the production of unrealistically deep saturated layers. To handle the excessive detrainment, Bretherton et al. (2004) modified χc by defining a critical eddy-mixing distance dc, based on observations and LES results that revealed fractions of negatively buoyant air in the updrafts (Taylor and Baker, 1991; Siebesma and Cuijpers, 1995). Thus, dc is the distance that negatively buoyant mixtures, in the absence of entrainment, can continue upwards before their velocity drops to zero, i.e., before detraining. Mixtures of this kind are included in the definition of χc together with positively buoyant mixtures, which leads to new definitions of the entrainment/detrainment rates. In newer versions of the KF scheme, a mitigation of the unrealistically deep saturated layers is achieved by assuming that the entrainment of environmental air cannot be lower than 50 % of the total environmental air involved in the mixing process in the updraft, and that the cloud radius depends on the convergence in the subcloud layer (Kain, 2004). Recently, Zheng et al. (2016) modified the minimum entrainment equation in Kain (2004) to include both organized and turbulent entrainment. The authors made the equation scale-dependent and expressed it in terms of the subcloud layer depth instead of the cloud radius. Another scheme based on the buoyancy-sorting hypothesis, but assuming episodic mixing, is the Emanuel scheme (Emanuel, 1991), where, in contrast to the KF scheme, the resulting mixtures simply ascend or descend to their level of neutral buoyancy, where they detrain.
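To make the buoyancy-sorting idea concrete, the Python sketch below mixes cloudy and environmental air in varying proportions, applies a deliberately crude saturation adjustment, and finds the critical environmental fraction at which the mixture stops being positively buoyant; all thermodynamic simplifications and input values are assumptions for illustration only, not the KF or Bretherton et al. (2004) formulations.

```python
import numpy as np

def mixture_virtual_temp(chi, t_cld, q_cld, l_cld, t_env, q_env):
    """Virtual temperature of a mixture containing an environmental fraction chi,
    after evaporating cloud liquid to offset the sub-saturation created by mixing.
    The saturation adjustment (q_cld taken as the saturation value) is a crude
    simplification used only for illustration."""
    L_over_cp = 2.5e6 / 1004.0
    t_mix = (1.0 - chi) * t_cld + chi * t_env
    q_mix = (1.0 - chi) * q_cld + chi * q_env
    l_mix = (1.0 - chi) * l_cld
    evap = min(l_mix, max(q_cld - q_mix, 0.0))   # liquid evaporated into the mixture
    t_mix -= L_over_cp * evap                    # evaporative cooling
    return t_mix * (1.0 + 0.61 * (q_mix + evap) - (l_mix - evap))

def critical_mixing_fraction(t_cld, q_cld, l_cld, t_env, q_env):
    """Smallest environmental fraction chi_c for which the mixture is no longer
    positively buoyant with respect to the environment: mixtures with chi < chi_c
    are kept (entrained), those with chi > chi_c are rejected (detrained)."""
    tv_env = t_env * (1.0 + 0.61 * q_env)
    for chi in np.linspace(0.0, 1.0, 101):
        if mixture_virtual_temp(chi, t_cld, q_cld, l_cld, t_env, q_env) <= tv_env:
            return chi
    return 1.0

# Saturated, liquid-laden updraft air mixed with drier, slightly cooler environment
print(critical_mixing_fraction(t_cld=288.0, q_cld=0.012, l_cld=0.0015,
                               t_env=287.0, q_env=0.008))   # roughly 0.14
```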
Apart from buoyancy, another environmental quantity that might influence entrainment, and therefore convection, is RH. Stochastic formulations have also been proposed in which the entrainment rate follows a stochastic Poisson process. Above the condensation level, Sušelj et al. (2013) used a stochastic approach similar to that of Romps and Kuang (2010), but adjusted for steady-state updrafts, while below the condensation level the entrainment rate is constant. Using this formulation, the authors achieved good results for several shallow cumulus convection events. Other fixed choices are collected in Tables 5 and 6: for example, an entrainment rate of 2 × 10⁻⁴ m⁻¹ (Tiedtke, 1989; Nordeng, 1994; Möbis and Stevens, 2012), or a rate constrained by the height of the detrainment level and the cloud base height (Zhang and McFarlane, 1995). Less attention has been paid to the parameterizations of the detrainment process. Many convection schemes set it to a constant value (see Tables 7 and 8), while others consider detrainment to be negligible (Lu et al., 2012). Tiedtke (1989) and Nordeng (1994) assumed a turbulent detrainment inversely proportional to the cloud radius and fixed its value to δturb = 1 × 10⁻⁴ m⁻¹ for penetrative and midlevel convection (see Table 7). On the other hand, Gregory and Rowntree (1990) assumed a turbulent detrainment rate inversely proportional to the height and smaller than εturb, while Bechtold et al. (2008) set δturb to a constant value. Dynamical detrainment δdyn occurs above the cloud top in Tiedtke (1989), while in Nordeng (1994) it is computed for a spectrum of clouds detraining at different heights. In Gregory and Rowntree (1990), it is activated when the buoyancy is less than 0.2 K, and in Bechtold et al. (2008), it is proportional to the decrease in updraft vertical kinetic energy at the top of the cloud. For downdrafts, Bechtold et al. (2014) set δturb = εturb and enforced δdyn over the lowest 50 hPa. As in the case of entrainment rates in downdrafts, a common practice in the definition of detrainment rates for downdrafts consists in assuming a parameterization similar to that used for updrafts (Table 8).

Tables 7 and 8 (detrainment formulations) include, among others, the following choices:
- Proportional to the decrease in updraft vertical kinetic energy at the top of the cloud (Bechtold et al., 2008; Zhang and Song, 2016)
- Proportional to the loss of buoyancy (Derbyshire et al., 2011)
- No distinction between turbulent and dynamical detrainment: it occurs only in a thin layer at cloud top (Arakawa and Schubert, 1974)
- Only at levels of neutral buoyancy (Emanuel, 1991; Moorthi and Suarez, 1992)
- Does not exist around cloud edges (Grell et al., 1994)
- Depends on a critical eddy-mixing distance dc and a critical mixing fraction (Emanuel, 1991)
- Only over a fixed layer of 60 hPa that extends from the DDL to the DBL, with zero detrainment outside the detrainment layer (Bechtold et al., 2001)
- Linear function of pressure between the top of the USL and the base of the downdraft (Kain, 2004)
- Proportional to the convergence of the updraft mass flux (Gerard and Geleyn, 2005)
- Activated when the downdraft becomes positively buoyant, with 75 % of its mass detraining at each subsequent level (Kim et al., 2013)
- Only in the lowest 1000 m above the ground or starting at the LFC, whichever is located higher above the ground (Grell and Freitas, 2014)

Impact of entrainment and detrainment on convective models
The discussion above illustrates the many nuances in the modeling of convection, the importance of empirical values in the final results, and the need for further research to disentangle the many details involved. It is accepted that the parameterizations of entrainment and detrainment still carry large uncertainties (Romps, 2010; Becker and Hohenegger, 2018) and have problems in producing a realistic representation of convection (Mapes and Neale, 2011). For example, Siebesma and Holtslag (1996) evaluated a mass-flux shallow cumulus scheme against BOMEX results and found that lateral entrainment and detrainment rates were one order of magnitude larger than those used in the Tiedtke scheme (Tiedtke, 1989). Using an RCM over the Maritime Continent region, Wang et al. (2007) demonstrated that changes in the values of the fractional entrainment/detrainment rates in the Tiedtke scheme affect the simulation of the tropical precipitation diurnal cycle. Over land, Del Genio and Wu (2010) used a CRM to study the transition from shallow to deep convection in diurnal cycles and inferred entrainment rates. Subsequently, the authors compared results from three different entrainment parameterizations to the results obtained with the CRM and concluded that the best results were achieved by the entrainment parameterization of Gregory (2001), in which the entrainment rate depends on the parcel buoyancy, the convective updraft speed, and a free parameter representing the fraction of the buoyant turbulent kinetic energy generation used for entrainment (a minimal sketch of this formulation is given below). On the other hand, Stratton and Stirling (2012) […].

Perhaps not surprisingly, MJO simulations are also sensitive to entrainment (Hannah and Maloney, 2011; Del Genio et al., 2012; Klingaman and Woolnough, 2014). Hannah and Maloney (2011) […]; an improved representation of the MJO was also achieved by […] using a GCM to evaluate the tropical subseasonal variability. However, this improvement was at the expense of an increased bias in the mean state, typical of other GCMs with a stronger MJO (Kim et al., 2011).
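As an illustration of the buoyancy-dependent formulation just described, the following is a minimal sketch in the spirit of Gregory (2001), ε = C·B/w², where B is the parcel buoyancy, w the updraft speed, and C a free parameter representing the fraction of the buoyant turbulent kinetic energy generation used for entrainment. The function name, the value of C, and the guard on w are assumptions for illustration, not the original implementation.

    # Minimal sketch of a Gregory (2001)-type entrainment rate: eps = C * B / w**2.
    GRAV = 9.81  # m s-2

    def entrainment_gregory(thetav_parcel, thetav_env, w, c_eps=0.5):
        """Fractional entrainment rate (m-1) from parcel buoyancy and updraft speed."""
        buoyancy = GRAV * (thetav_parcel - thetav_env) / thetav_env  # m s-2
        return max(c_eps * buoyancy, 0.0) / max(w, 0.1) ** 2  # guard against w ~ 0

    # Example: a parcel 0.5 K warmer than a 300 K environment rising at 2 m s-1
    # yields eps ~ 2e-3 m-1, i.e., much stronger dilution than a constant rate
    # of order 1e-4 m-1.
    print(entrainment_gregory(300.5, 300.0, 2.0))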
Other studies have evaluated the impact of the entrainment/detrainment formulation on large-scale features, such as the double Intertropical Convergence Zone (ITCZ) (Chikira, 2010; Chikira and Sugiyama, 2010; Möbis and Stevens, 2012; Oueslati and Bellon, 2013). Möbis and Stevens (2012) used both the Tiedtke and Nordeng schemes in an aquaplanet GCM to evaluate the sensitivity of the ITCZ to the choice of the convective parameterization. The Tiedtke scheme produced a double ITCZ, while the Nordeng scheme, with a higher lateral entrainment rate, led to a single ITCZ. In the works by Chikira (2010) and Chikira and Sugiyama (2010), the entrainment rate from AS was replaced by a formulation that depends on the surrounding environment following Gregory (2001) and Neggers et al. (2002). With this new formulation, variability and climatology improved, including the double ITCZ and the South Pacific Convergence Zone (SPCZ). Oueslati and Bellon (2013) obtained similar improvements in their study of the effects of entrainment on the ITCZ by increasing entrainment in a hierarchy of models (coupled ocean-atmosphere GCM, atmospheric GCM, and aquaplanet GCM), at the cost of an overestimation of precipitation in the center of the convergence zones. The role of entrainment in large-scale features was also underlined by Hirota et al. (2014) in their comparison of four atmospheric models with different entrainment formulations over tropical oceans.

Based on Zhang (2002) and using sounding data from the Coupled Ocean-Atmosphere Response Experiment (COARE), the 1997 Southern Great Plains experiment (SGP97), and the Tropical Warm Pool-International Cloud Experiment (TWP-ICE), Zhang (2009) concluded that the entrainment of environmental air also affects CAPE and the closure assumptions in CPs. The drier the entrained air, the stronger the dilution effect that acts to reduce CAPE. Moreover, dilute CAPE shows a better correlation with the consumption of CAPE than undilute CAPE.

As mentioned in Sect. 4.2.2, less attention has been paid to the parameterizations of the detrainment process. Based on LES results for shallow convection, De Rooy and Siebesma (2008) proposed a new detrainment parameterization that led to improvements for the ARM, BOMEX, and RICO shallow convection cases. Moreover, the authors revealed a greater variation in the detrainment rates from hour to hour and case to case than the variation in the entrainment rates. Derbyshire et al. (2011) confirmed this finding using a CRM and an adaptive detrainment model. Later, De Rooy and Siebesma (2010) showed that detrainment strongly influences the vertical structure of the mass flux.

Microphysics in convective clouds
The representation of microphysical processes in cumulus parameterizations is key to simulations of climate change (Ramanathan and Collins, 1991; Rennó et al., 1994; Lindzen et al., 2001). Convective microphysics greatly affects the representation of convective clouds due to its influence on the detrainment of water vapor and hydrometeors, and on the interaction between clouds and aerosols (Khain et al., 2005; Koren et al., 2005; Rosenfeld et al., 2008; Song and Zhang, 2011; Song et al., 2012; Tao et al., 2012). However, many convective parameterization schemes treat microphysical processes crudely, specifying an empirically determined conversion rate from cloud water to rainwater (Arakawa and Schubert, 1974; Tiedtke, 1989; Zhang and McFarlane, 1995; Han and Pan, 2011) or a certain precipitation efficiency, as in Emanuel (1991) (see Table 9). A brief description of the main assumptions and empirical values used in the representation of microphysics in CPs is presented here for the sake of completeness. For a detailed review of microphysics parameterizations, the reader is referred to Zhang and Song (2016) for convection and to Tapiador et al. (2019a) for a full account.

Entries from Table 9 (assumptions used for the precipitation efficiency and conversion) include:
- Function of wind shear and subcloud RH — Grell (1993); Grell and Dévényi (2002)
- Varies with the lower- and middle-troposphere RH
- Proportional to a maximum precipitation efficiency PEmax, involving the in-cloud temperature, the in-cloud condensed water mixing ratio, and a temperature-dependent threshold condensed water value above which precipitation occurs; PEmax = 1 (EZ99) and 0.999 (BE01) — Emanuel and Živković-Rothman (1999); Bony and Emanuel (2001)
- Function of wind shear and cloud base height — Bechtold et al. (2001)
- Proportional to CCN, depending on the total volume of condensed water accumulated over the cloud lifetime, the droplet concentration Nd, and two empirical exponents (1.9 and 1.13) — Jiang et al. (2010); Grell and Freitas (2014)

Conversion of cloud water to rainwater
Despite the importance of microphysical processes in the simulation of surface precipitation, radiation, or cloud cover, only a few convection schemes attempt to realistically represent these processes. A common approach is to assume that a specified fraction of the condensate is instantaneously removed as rain. In Yanai et al. (1973) and Tiedtke (1989), the conversion rate from cloud water to rainwater is assumed to be proportional to the cloud water mixing ratio ql, with an empirical conversion coefficient K(z) that depends on height, as shown in Table 10. Other assumptions include a constant conversion coefficient Cc (Arakawa and Schubert, 1974; Grell, 1993; Zhang and McFarlane, 1995) or a temperature-dependent threshold water content lwc above which all cloud water is converted to precipitation (Emanuel and Živković-Rothman, 1999). A minimal sketch contrasting these two approaches is given below.
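The following is a minimal sketch contrasting the two crude treatments just described: a conversion rate proportional to ql with a height-dependent coefficient K(z), and the instantaneous removal of all cloud water above a temperature-dependent threshold. The step-function K(z), the coefficient value, and the onset height are illustrative assumptions only; the 1.1 g kg⁻¹ and −55 °C values follow the Emanuel and Živković-Rothman (1999) entries in Table 10.

    # Minimal sketch of two treatments of cloud-water-to-rain conversion.
    def conversion_proportional(ql, z, k0=2.0e-3, z_onset=1500.0):
        """Rain production rate (kg kg-1 s-1): K(z)*ql, with K zero below z_onset."""
        k = k0 if z >= z_onset else 0.0   # hypothetical step-function K(z)
        return k * ql

    def conversion_threshold(ql, temp_c, lwc0=1.1e-3, t_crit=-55.0):
        """Cloud water (kg kg-1) removed at once: the excess over a threshold that
        vanishes below a critical temperature (all cloud water precipitates)."""
        threshold = lwc0 if temp_c > t_crit else 0.0
        return max(ql - threshold, 0.0)

    # Example: ql = 1.5 g kg-1 at z = 3 km and T = -10 degC
    print(conversion_proportional(1.5e-3, 3000.0))  # ~3e-6 kg kg-1 s-1
    print(conversion_threshold(1.5e-3, -10.0))      # 0.4 g kg-1 removed at once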
Few schemes with a more realistic treatment of the conversion of cloud water to rainwater can be found in the literature on convection. Autoconversion of cloud water in the convection scheme is considered in Sud and Walker (1999), following Sundqvist (1978), as well as in Zhang et al. (2005). The latter included the autoconversion of cloud water and other microphysical processes for both cloud water and ice in the Tiedtke scheme. However, neither the size nor the number concentration of the hydrometeors is considered explicitly. This makes it impossible to account for the aerosol-convection interaction, which is of great importance in climate simulations. To overcome this shortcoming, Song and Zhang (2011) and Song et al. (2012) added the mass mixing ratio and number concentration of each hydrometeor to their parameterization. Another more realistic treatment of condensation is that proposed by Bony and Emanuel (2001). In this scheme, the condensed water produced at the subgrid scale is predicted by the convection scheme, while its spatial distribution is predicted by a statistical cloud scheme through a probability distribution function of the total water. In this approach, the parameterization of the microphysics is more comprehensively devoted to this specific problem.

Entries from Table 10 (empirical values and assumptions used in the conversion of cloud water to rainwater; columns: empirical value or assumption, choices in the literature, and reference) include:
- The amount of condensate removed from the updraft depends on the mean vertical velocity in a layer of depth δz and the concentration of condensate at the bottom of the layer — Kain and Fritsch (1990)
- All water content in excess of a threshold cloud water content lwc is converted to precipitation, with a warm-cloud autoconversion threshold of 1.1 g kg⁻¹ and a critical temperature of −55 °C below which all cloud water is converted to precipitation — Emanuel and Živković-Rothman (1999)
- Both liquid and solid precipitation depend on a condensate-to-precipitation conversion factor cr (= 0.02 s⁻¹) and the in-cloud vertical velocity w, with a separate term for the generation of solid precipitation — Bechtold et al. (2001)
- Convective precipitation depends linearly on the cloud water content lw and on a function of the temperature T and the cloud droplet number concentration (CDNC); it forms only if the convective layer is at least 150 hPa deep — Nober et al. (2003)
- Function of temperature T: the conversion coefficient is constant (2.0·10⁻³ m⁻¹) above 0 °C and increases exponentially with decreasing temperature below 0 °C, with a rate of 0.07 °C⁻¹ — Han et al. (2016)

Evaporation in downdrafts
Downdrafts are greatly affected by the evaporation of hydrometeors and detrained cloud droplets, due to the associated latent cooling. Therefore, a realistic representation of this microphysical process is needed. However, only a limited number of convective parameterizations, such as Emanuel (1991), include an explicit calculation of this process, as shown in Table 11. Instead, crude assumptions can be found in the literature. For example, the evaporation of hydrometeors is ignored in Yanai et al. (1973), while Tiedtke (1989) assumed an instantaneous evaporation of detrained cloud water. Other authors have related the evaporation in the downdraft to the precipitation rate (Betts and Miller, 1986) or avoided any microphysical formulation by assuming that the evaporation of rain acts to maintain a constant RH at each level (Fritsch and Chappell, 1980; Zhang and McFarlane, 1995). This allows the evaporation to be calculated backwards (a minimal sketch of this shortcut follows the table entries below). More sophisticated formulations include those of Kreitzberg and Perkey (1976), based on Kessler (1969), and Song and Zhang (2011), based on Sundqvist (1988).

Table 11: A sample of empirical values and assumptions used in the evaporation in the downdraft.

Empirical value or assumption — choices in the literature — reference:
- Evaporation of detrained liquid water takes place at the same level where the water detrains — Arakawa and Schubert (1974)
- Related to the precipitation efficiency PE, with an empirical coefficient of −0.25 — Betts and Miller (1986)
- Detrained cloud condensates evaporate immediately — Tiedtke (1989)
- Function of the precipitation mixing ratio qprec and of environmental thermodynamic properties, involving the mixing ratio in the downdraft and the saturation mixing ratio — Emanuel (1991)
- Assumed to maintain a constant RH at each level, with RH = 100 % (ZM95) — Zhang and McFarlane (1995)
- Takes place when the RH is smaller than a certain threshold value
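To illustrate the "constant RH" shortcut used, e.g., in Zhang and McFarlane (1995), the following minimal sketch diagnoses the evaporation in a downdraft layer as whatever moisture source brings the layer back to a target RH, instead of computing rain evaporation explicitly. The saturation formula, the target-RH argument, and the numbers are illustrative assumptions.

    # Minimal sketch of evaporation diagnosed "backwards" to hold a target RH.
    import math

    def saturation_mixing_ratio(temp_k, press_pa):
        """Approximate saturation mixing ratio (kg kg-1) over liquid water."""
        es = 611.2 * math.exp(17.67 * (temp_k - 273.15) / (temp_k - 29.65))  # Pa
        return 0.622 * es / (press_pa - es)

    def evaporation_to_hold_rh(q, temp_k, press_pa, dt, rh_target=1.0):
        """Evaporation rate (kg kg-1 s-1) needed to bring q up to rh_target*qsat."""
        q_needed = rh_target * saturation_mixing_ratio(temp_k, press_pa)
        return max(q_needed - q, 0.0) / dt

    # Example: a downdraft layer at 280 K and 850 hPa with q = 6 g kg-1 over a
    # 20-minute time step needs ~1e-6 kg kg-1 s-1 of evaporation to reach 100 % RH.
    print(evaporation_to_hold_rh(6.0e-3, 280.0, 85000.0, 1200.0))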

Aerosols
Aerosols play a key role in the climate system due to their influence on the Earth's energy budget through the absorption and scattering of solar radiation. Focusing on microphysical processes, aerosols serve as cloud condensation nuclei (CCN) and ice nuclei (IN) and thus affect cloud properties, dynamics, and precipitation. However, aerosol-convection interactions are very complex processes, seldom included in convection microphysics. Zhang et al. (2005) developed a new parameterization accounting for the effects of aerosols in stratiform and convective clouds. This was later modified by Lohmann (2008) to include droplet activation by aerosols in terms of the updraft velocity w, temperature, aerosol number concentration, and size distribution, while ice nucleation is a function of w, aerosol properties, and air temperature. More recently, Grell and Freitas (2014) developed a new convective parameterization that includes an interaction with aerosols through an autoconversion of cloud water to rainwater dependent on CCN, parameterized in terms of the aerosol optical thickness (AOT) at 550 nm, as well as an aerosol-dependent evaporation of cloud drops. The authors also included tracer transport and wet scavenging in their parameterization. This convection scheme is currently available in WRF.

Closure: strategies to close the budget equation
Closure consists in defining the intensity or strength of convection, i.e., the amount of convection, as regulated by large-scale variables. The closure is therefore essential for closing the budget equations (Eq. (2)). Despite the number of hypotheses proposed in the literature, closure is still considered an unresolved problem (Yano et al., 2013). The following subsections discuss the main closure types, as well as their main assumptions and empirical values. The impact of the closure formulation on convective models concludes the section.

Closure types
Existing convective closures can be classified into diagnostic, prognostic, and stochastic. While diagnostic closures relate cumulus effects to the large-scale dynamics at a particular time scale, prognostic closures perform a time integration of explicitly formulated transient processes. Stochastic closures add randomness to closure schemes, such as the first-order Markov process in Lin and Neelin (2003) or the Gaussian white noise in Stechmann and Neelin (2011). In the following, we focus only on deterministic closures.

Diagnostic closures
Diagnostic closures include different types of closures based on a certain physical variable that expresses the intensity of convection. Table 12 shows a sample of the empirical values and assumptions used in the closure in the updraft; its entries are listed below, and a minimal sketch of the simplest (moisture-convergence) closure follows them. In moisture convergence schemes, moisture convergence or the vertical advection of moisture is selected as the closure variable (Kuo, 1974; Anthes, 1977; Krishnamurti et al., 1980, 1983; Kuo and Anthes, 1984; Molinari and Corsetti, 1985; Tiedtke, 1989), therefore assuming that convection consumes the moisture supplied by the large-scale processes.

Entries from Table 12 (closure variable, assumption, and reference) include:
- Moisture convergence: convection consumes the moisture supplied by the large-scale processes — Kuo (1974); Tiedtke (1989); Gerard (2007)
- CWF: QE assumption — Arakawa and Schubert (1974); Grell (1993)
- CWF: relaxed at a certain time scale τ (includes a factor depending on the vertical velocity at the cloud base) — Pan and Wu (1995); Lim et al. (2014)
- CWF: relaxed at a certain time scale τ and towards a CWF reference value of 10 J kg⁻¹ — Zhao et al. (2018)
- CAPE: consumed by convective activity at a certain time scale τ — Fritsch and Chappell (1980); Betts (1986); Betts and Miller (1986) (deep convection is suppressed if the precipitation rate is negative); Nordeng (1994); Gregory et al. (2000); Bechtold et al. (2001)
- CAPE: consumption proportional to heat and moisture sources — Donner (1993); Donner et al. (2001); Wilcox and Donner (2007)
- CAPE: consumed at an exponential rate by cumulus convection — Zhang and McFarlane (1995)
- CAPE: modified by the vertical velocity — Stratton and Stirling (2012)
- Boundary-layer QE (CAPE): QE between the increase in boundary-layer moist entropy and its decrease due to moist downdrafts — Emanuel (1995); Raymond (1995)
- Boundary-layer QE (CAPE): cloud-base upward mass flux relaxed toward subcloud-layer QE, with a fixed relaxation rate α = 0.02 kg (m² s K)⁻¹ and a convection buoyancy threshold δTk = 0.65 K (EZ99) or 0.90 K (BE01) — Emanuel and Živković-Rothman (1999); Bony and Emanuel (2001)
- Free-tropospheric QE (dCAPE): convective and large-scale processes in the free troposphere above the boundary layer are in balance, and the contribution from the free troposphere to changes in CAPE is negligible — Zhang (2002); Zhang and Mu (2005a); Zhang and Wang (2006); Song and Zhang (2009)
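As a minimal sketch of the moisture-convergence closure just described (in the spirit of Kuo, 1974), the convective precipitation can be taken as the fraction (1 − b) of the vertically integrated moisture supply, with the remaining fraction b moistening the column. The parameter value and the function below are illustrative assumptions, not Kuo's actual formulation in detail.

    # Minimal sketch of a Kuo-type moisture-convergence closure.
    def kuo_precipitation_rate(moisture_convergence, b=0.3):
        """Precipitation rate (kg m-2 s-1) from column-integrated moisture supply."""
        return max((1.0 - b) * moisture_convergence, 0.0)

    # Example: a moisture supply of 1e-4 kg m-2 s-1 (~8.6 mm/day) with b = 0.3
    # gives ~6 mm/day of convective rain.
    print(kuo_precipitation_rate(1.0e-4))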

The first parameterizations based on moisture convergence were too crude to produce results similar to those observed in nature, which led to the formulation of mass flux schemes. Early parameterizations lacked a theoretical framework to explain the interactions between the large-scale dynamics and convection, or were incomplete, as in Ooyama (1971). In an attempt to overcome this drawback, Arakawa and Schubert (1974) proposed a closed theory based on the QE of the CWF, which is similar to CAPE. Since then, many CPs have used CAPE-like closures, generally assuming that the adjustment occurs over a relaxed time scale, in contrast to the instantaneous adjustment proposed in Arakawa and Schubert (1974), among others; a minimal sketch of such a relaxed adjustment is given below. Table 13 lists the most important choices made for the relaxation time scale (entries include, e.g., Betts, 1986; Betts and Miller, 1986; Zhang and McFarlane, 1995; Zhang, 2002, 2003; Zhang and Mu, 2005b; Zhang and Wang, 2006; Song and Zhang, 2009).
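The following is a minimal sketch of a relaxed CAPE-type closure: the cloud-base mass flux is set so that the CAPE excess over a reference value is consumed over the relaxation time scale τ, given a CAPE consumption rate per unit cloud-base mass flux computed from the cloud model. All names and numbers are illustrative assumptions rather than any particular scheme's values.

    # Minimal sketch of a relaxed CAPE adjustment: Mb = (CAPE - CAPE_ref)/(tau * F).
    def cloud_base_mass_flux(cape, cape_ref, tau, f_per_mass_flux):
        """Cloud-base mass flux (kg m-2 s-1) from a relaxed CAPE closure."""
        excess = max(cape - cape_ref, 0.0)       # J kg-1 of CAPE to be consumed
        return excess / (tau * f_per_mass_flux)

    # Example: 1500 J kg-1 of CAPE, no reference value, a 2 h relaxation time, and
    # a consumption rate F of 100 J kg-1 s-1 per unit (kg m-2 s-1) of cloud-base
    # mass flux give Mb ~ 2e-3 kg m-2 s-1.
    print(cloud_base_mass_flux(1500.0, 0.0, 7200.0, 100.0))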

Following Lin et al. (2015), CAPE-like closures can be classified into two types according to the decomposition and constraints applied to the closure variable: the flux type and the state type. In the flux type, the change of the CAPE-like variable is decomposed into its large-scale and convective components, with a much smaller change in CAPE compared to any of the flux terms. Of these types of closures, CAPE is the most commonly used closure variable in CPs (Fritsch and Chappell, 1980; Kain and Fritsch, 1993; Zhang and McFarlane, 1995; Gregory et al., 2000; Bechtold et al., 2001), with adjustment time scales varying from constant values to functional forms (Bechtold et al., 2008). Another CAPE-related closure is dilute CAPE, which adds dilution effects due to entrainment to the definition of CAPE. It is currently available in an updated version of the Kain scheme in WRF (Kain, 2004), as well as in CAM5 (Neale et al., 2008; Wang and Zhang, 2013), CAM6, and the Met Office Unified Model Global Atmosphere 7.0 (GA7.0) (Walters et al., 2019). While the preceding schemes applied the convective closure to the full troposphere, Emanuel (1995) and Raymond (1995) proposed the so-called boundary-layer QE, where only the boundary layer component of the CAPE closure is considered. On the other hand, Zhang (2002) introduced a modified version of the QE assumption, in which only dCAPE is employed as the closure variable, without considering the effect of boundary layer forcing. This type of closure, known as the free-tropospheric QE or the parcel-environment QE, provides a better simulation of the diurnal cycle of precipitation than the boundary-layer QE (Zhang, 2003), as well as a better representation of the MJO and ITCZ than the QE assumption used in the Zhang-McFarlane scheme (Zhang and Mu, 2005b; Zhang and Wang, 2006; Song and Zhang, 2009; Zhang and Song, 2010). More recently, Bechtold et al. (2014) developed a modified version of the free-tropospheric QE hypothesis by adding a convective adjustment time scale for the free troposphere, as well as a time-scale-based coupling coefficient between the free troposphere and the boundary layer. The authors also replaced the dCAPE closure variable with PCAPE, defined as the integral over pressure of the buoyancy of an entraining ascending parcel with density scaling. The implementation of this closure in the ECMWF IFS led to a better representation of the diurnal cycle of precipitation.
In contrast to the previous flux-type closures, state-type closures decompose the change of the CAPE-like variable into its boundary layer component and its free troposphere component, with a much smaller change in CAPE compared to any of the state terms. The main representatives of state-type closures are the convective adjustment schemes of Betts (1986) and Betts and Miller (1986). Differences between these adjustment schemes lie in the adjustment time scale and the reference profiles selected for the adjustment. More recently, authors such as Khouider and Majda (2006, 2008) and Kuang (2008) applied this scheme only to the lower troposphere.
An alternative principle to QE is the so-called activation control proposed by Mapes (1997), in which the intensity of deep convection is controlled by inhibition and initiation processes at low levels, and the closure is formulated in terms of CIN and the turbulent kinetic energy (TKE) (Mapes, 2000; Fletcher and Bretherton, 2010). However, as highlighted in Yano and Plant (2012b), this formulation is not self-consistent, which is a requirement if models are intended to test physical hypotheses.
This section has presented the assumptions and empirical values used in the formulation of the closure for updrafts. However, the magnitude of the downdrafts should also be addressed. In the schemes where it is included, it is commonly expressed as a fraction of the closure of the corresponding updraft, with this fraction set to a certain value (Johnson, 1976; Tiedtke, 1989; Baba, 2019). Alternatively, other authors have related this fraction to the precipitation efficiency (Emanuel, 1995; Bechtold et al., 2001) or to the RH at the LFS (Kain, 2004), or have proposed a formula in terms of the total precipitation rate within the updraft (Zhang and McFarlane, 1995). Table 14 lists some of the empirical values and assumptions used in the closure in the downdraft. Its entries include:
- Function of the updraft mass flux Mu, the height z, and a maximum downdraft entrainment rate, with a proportionality factor that depends on the total precipitation and evaporation rates; the downdraft ensemble is constrained both by the availability of precipitation and by the requirement that the net mass flux at cloud base be positive — Zhang and McFarlane (1995)
- Downdraft mass flux proportional to the updraft mass flux at the LFS and to (1 − the mean fractional RH at the LFS), with a maximum downdraft entrainment rate of 5·10⁻⁴ m⁻¹ — Wu (2012)
- Other entries cite Grell (1993), Grell et al. (1994), and Pan and Wu (1995)

Prognostic closures
Compared to the QE assumption used in the majority of the diagnostic closures mentioned above, prognostic closures do not distinguish between large-scale and convective processes and substitute the QE assumption with the time integration of prognostic equations. These equations explicitly account for the time changes of physical variables such as the convective kinetic energy or h, which are related to the cloud-base mass flux through a dimensional parameter (a minimal sketch is given below). The energy dissipation rate is also included in this type of closure through a dissipation term, either determined by a second dimensional parameter called the dissipation time (Randall and Pan, 1993; Pan and Randall, 1998; Yano and Plant, 2012a) or expressed in terms of the entrainment rate and an aerodynamic friction coefficient (Gerard and Geleyn, 2005).
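A minimal sketch of a prognostic closure in the spirit of Randall and Pan (1993) and Pan and Randall (1998) follows: the convective kinetic energy K is integrated in time with a generation term and a dissipation term K/τd, and the cloud-base mass flux is diagnosed from K through a dimensional parameter α (assuming the commonly cited form K = α·Mb²). The time step, τd, α, and the generation rate below are illustrative assumptions, not the original scheme's values.

    # Minimal sketch of a prognostic (convective kinetic energy) closure.
    def step_prognostic_closure(k, generation, dt, tau_d=600.0, alpha=1.0e8):
        """Advance K one time step; return (K, diagnosed cloud-base mass flux)."""
        k_new = max(k + dt * (generation - k / tau_d), 0.0)
        mb = (k_new / alpha) ** 0.5   # from K = alpha * Mb**2
        return k_new, mb

    # Example: starting from K = 0 with a constant generation rate, K approaches
    # generation * tau_d and Mb settles near 8e-3 (in the units implied by alpha).
    k, mb = 0.0, 0.0
    for _ in range(10):
        k, mb = step_prognostic_closure(k, generation=10.0, dt=300.0)
    print(k, mb)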

Impact of closure on convective models
The closure problem is one of the major challenges in CPs. As well as being essential to close the budget equations (Eq. (2)), the closure plays an important role in the performance of CPs. For instance, when the CAPE closure used in the Zhang-McFarlane scheme is replaced by a dCAPE closure, together with the addition of an RH threshold for the convection trigger and the removal of the restriction on the convection originating level, the simulated MJO is more consistent with the observations in terms of variability in precipitation, outgoing longwave radiation, and zonal wind, and exhibits a clear eastward propagation (Zhang and Mu, 2005b). However, the precipitation signal and the time period of the MJO differ from the observations. This revision of the Zhang-McFarlane scheme used in the NCAR CCSM3 also alleviates the biases related to the double ITCZ in precipitation and the cold tongue in Sea Surface Temperature (SST) over the equator, among other benefits (Zhang and Wang, 2006; Song and Zhang, 2009; Zhang and Song, 2010). Other processes related to ENSO and the diurnal cycle of precipitation are also known to be sensitive to the convective closure used in CPs (Zhang, 2002; Neggers et al., 2004; Wu et al., 2007; Bechtold et al., 2014; Yang et al., 2018).

Conclusions
Numerical models need simplifications to be able to cope with the complexity of the physical processes actually occurring in the atmosphere. The degree of simplification in the physics evolves inversely with the availability of computational power.
Thus, early convective parameterizations (as well as parameterizations of radiation, turbulence, microphysics, etc.) were based on very simple assumptions, such as the conditional instability of the second kind (CISK), first presented by Charney and Eliassen (1964) and Ooyama (1964). CISK states that cyclones provide the moisture that maintains cumulus clouds, and cumulus clouds provide the heat that cyclones need. Despite its simplicity, this parameterization achieved acceptable results in the simulation of the life cycle of tropical cyclones (Ooyama, 1969). Simulations improved with further refinements of the interaction of cumulus clouds with the large-scale environment by, for instance, Ooyama (1971) (a statistical ensemble of bubbles represents cumulus convection), Yanai et al. (1973) (detrainment and cumulus-induced subsidence), and Arakawa and Schubert (1974) (cloud work function and adjustment towards quasi-equilibrium). With the increase in computational power, more complex parameterizations and new variables based on observations can be used to achieve better spatial and temporal resolutions within models. Thus, convective parameters require fine tuning, but there is no explicit methodology to do so. In some cases, the authors use the variables that are easiest to measure. In others, mean values describe processes that cannot be modeled in sufficient detail, or the values represent particular conditions for certain locations and atmospheric events (Mauritsen et al., 2012). For instance, Bony and Emanuel (2001) adjusted their water vapor and temperature prediction using the TOGA-COARE data measured in the Western Pacific Ocean in 1993, while Betts and Miller (1986) used GATE datasets measured over the tropical Atlantic Ocean in 1974 to develop their deep convection scheme. Hence, empirical values and assumptions selected this way might yield good results when compared to observations from certain locations and poorer results for others. Commonly, manual tuning of convective parameters is used, although various automatic methods have recently been applied to estimate parameters, including the variational method (Emanuel and Živković-Rothman, 1999), Bayesian calibration (Hararuk et al., 2014; Wu et al., 2018), the simulated annealing method (Jackson et al., 2004, 2008; Liang et al., 2014), genetic algorithms (Lee et al., 2006), or ensemble data assimilation (Ruiz et al., 2013; Li et al., 2018), among others.
Comparisons with observations were, and still are, crucial to the development of convective parameterizations. For instance, the underprediction of large-scale precipitation by dry adiabatic models compared to observations led to the inclusion of moist adiabatic processes in NWP models (Smagorinsky, 1956), and lake-effect snow observations (Niziol et al., 1995) forced a reduction of the minimum cloud-depth threshold in Kain and Fritsch (1993) to 2 km. Although observations can be used to tune parameters in convective schemes to reduce errors, it is unclear whether these tuned parameters, based on particular datasets, can improve model skill across different locations, model resolutions, or atmospheric events. Moreover, it is known that model results are sensitive to the empirical values in convection. Numerous sensitivity studies have reported that the location and intensity of precipitation are extremely sensitive to the cumulus parameterization (Bechtold et al., 2008; Ma and Tan, 2009; Chikira and Sugiyama, 2010). For instance, Wang et al. (2007) improved the simulated diurnal cycle over land and ocean by increasing the entrainment/detrainment rates for deep and shallow convection used in the Tiedtke scheme, which tends to simulate convective precipitation too early in the day and with an unrealistic amplitude over land. Thus, the choice of a convective scheme impacts the diurnal cycle (Bechtold et al., 2004; Wang et al., 2007), as well as the simulation of monsoon precipitation in climate models (Mukhopadhyay et al., 2010), the MJO (Lin et al., 2006), ENSO (Wu et al., 2007; Neale et al., 2008), and the ITCZ configuration (Liu et al., 2019). This topic has profound practical effects: it has also been shown that choices in the convective parameterization affect the prediction of the track, intensity, and associated rainfall of tropical cyclones (Mohandas and Ashrit, 2014). Indeed, timely provision of the correct amount of precipitation at the right location is still a challenge for models. Figure 2 is an example of how different the precipitation field may look depending on the cumulus parameterization used. All the a priori sensible choices locate the maxima and minima in different parts of Typhoon Chaba and predict different areas and total accumulations. In the climate model realm, validation exercises focusing on precipitation (Tapiador et al., 2012, 2017, 2018) have shown the importance and challenges of comparing model outputs with precipitation measurements in order to improve model performance. Indeed, the difficulties of quantitative precipitation estimation suggest precipitation as a privileged metric to gauge model performance (Tapiador et al., 2019b). The "ultimate test", as it has been described, makes precipitation science an active field of research. As discussed in that paper, there is no complete agreement even on the reference data, with datasets differing even in such an aggregated value as the global mean precipitation on Earth.
Advances in satellite precipitation estimation (Kummerow et al., 1998; Joyce et al., 2004; Okamoto et al., 2005; Ushio and Kachi, 2010; Watanabe et al., 2010, 2011; Kucera et al., 2013; Hou et al., 2014; Huffman et al., 2015; Xie et al., 2017; Levizzani and Cattani, 2019; Skofronick-Jackson et al., 2019) are indispensable to advance further, since direct estimates of precipitation (pluviometers, disdrometers) and ground radars are limited to land areas. These advances need to proceed in parallel with an explicit account of what is empirical in models in order to benefit both fields. Algorithm developers in the satellite realm are perhaps more used to specifying their assumptions through the Algorithm Theoretical Basis Documents (ATBD), but a full comparison between the physics and empirical values behind both algorithms and parameterizations is much needed to advance the field. On that note, it is clear that better access to climate model code would contribute to addressing scientific gaps in climate models and to improving their reliability (Añel et al., 2021). It would also be highly desirable that scientists not only specify the parameterizations they have used, but also the assumptions and empirical values they have actually selected within them.
Tables 2-11 can be used to easily identify and pinpoint their choices. The benefit will be immense, as some discrepancies could be readily attributed to known issues (e.g., heavy spurious rainfall over warm water in adjustment schemes) or identified as confounding variables. As in the case of the microphysics, making the codes, the assumptions, and the empiricisms transparent can only benefit the community and dispel any potential concerns.

Indeed, the focus of this paper is not to compare the publicly available convection schemes or to steer users towards one or another, but to explore the physics behind the modules, and to do so from an objective and independent point of view. Neither is the paper about criticizing the simplifications that are inherent to modeling the atmosphere, or the limitations of current methods. On the contrary, the research arises from the conviction that models are the way forward to advance climate research. Being aware of the potential misuse of the results shown here in attempts to discredit models, it is important to inoculate against uninformed criticism and discourage futile attempts: neither this paper nor Tapiador et al. (2019a) casts any shadow on model outputs. On the contrary, they display and celebrate the delicate intricacies, nuances, precise measurements, and careful choices made by the community to craft complex tools to forecast, simulate, and predict precipitation.

Code and data availability
There is no code or data relevant to this paper.

Competing interests
The authors declare that they have no conflict of interest. They have not participated in the development of any existing convection module or engaged in any collaboration or discussion with its developers in order to prepare this paper. Their review is an independent, purely objective analysis based on the literature and remains neutral on the suitability or performance of any of the parameterizations for any particular purpose.