the Creative Commons Attribution 4.0 License.
ICLASS 1.1, a variational Inverse modelling framework for the Chemistry Land-surface Atmosphere Soil Slab model: description, validation, and application
Peter J. M. Bosman
Maarten C. Krol
This paper provides a description of ICLASS 1.1, a variational inverse modelling framework for the Chemistry Land-surface Atmosphere Soil Slab model. This framework can be used to study the atmospheric boundary layer, surface layer, or the exchange of gases, moisture, heat, and momentum between the land surface and the lower atmosphere. The general aim of the framework is to allow the assimilation of various streams of observations (fluxes, mixing ratios at multiple heights, etc.) to estimate model parameters, thereby obtaining a physical model that is consistent with a diverse set of observations. The framework allows the retrieval of parameters in an objective manner and enables the estimation of information that is difficult to obtain directly by observations, for example, free tropospheric mixing ratios or stomatal conductances. Furthermore, it allows the estimation of possible biases in observations. Modelling the carbon cycle at the ecosystem level is one of the main intended fields of application. The physical model around which the framework is constructed is relatively simple yet contains the core physics to simulate the essentials of a well-mixed boundary layer and of the land–atmosphere exchange. The model includes an explicit description of the atmospheric surface layer, a region where scalars show relatively large gradients with height. An important challenge is the strong nonlinearity of the model, which complicates the estimation of the best parameter values. The constructed adjoint of the tangent linear model can be used to mitigate this challenge. The adjoint allows for an analytical gradient of the objective cost function, which is used for minimisation of this function. An implemented Monte Carlo way of running ICLASS can further help to handle nonlinearity and provides posterior statistics on the estimated parameters.
The paper provides a technical description of the framework, a validation of the adjoint code, tests of the full inverse modelling framework, and a successful example application for a grassland in the Netherlands.
Exchanges of heat, mass, and momentum between the land surface and the atmosphere play an essential role in weather, climate, air quality, and biogeochemical cycles. Surface heating under sunny daytime conditions usually leads to the growth of a relatively well-mixed layer close to the land surface, i.e. the convective boundary layer (CBL). This layer is directly impacted by exchange processes with the land surface and is also the layer in which humans often live. Modelling the composition and thermodynamic state of the CBL in its interaction with the land surface is the target of the Chemistry Land-surface Atmosphere Soil Slab model (CLASS; Vilà-Guerau de Arellano et al., 2015). This model and similar models have been applied frequently, e.g. for understanding the daily cycle of evapotranspiration (van Heerwaarden et al., 2010), studying the effects of aerosols on boundary layer dynamics (Barbaro et al., 2014), studying the effects of elevated CO_{2} on boundary layer clouds (Vilà-Guerau de Arellano et al., 2012), or for studying the ammonia budget (Schulte et al., 2021). Next to a representation of the CBL, the CLASS model includes a simple representation for the exchange of gases, heat, moisture, and momentum between the land surface and the lower atmosphere. The model explicitly accounts for the surface layer, which is, under sunny daytime conditions, a layer within the CBL that is close to the surface, with relatively strong vertical gradients of scalars (e.g. specific humidity and temperature) and momentum (Stull, 1988). Since the CBL model physics are relatively simple and only include the essential boundary layer processes, the model performs best during the convective daytime period, on what might be called golden days.
Those are days on which advection is either absent or uniform in time and space, deep convection and precipitation are absent, and sufficient incoming shortwave radiation heats the surface, allowing for the formation of a prototypical convective boundary layer. When these assumptions are met, the evolution of the budgets of heat, moisture, and gases is, to a large extent, determined by local land–atmosphere interactions. The aforementioned assumptions should ideally be valid for the whole modelled period. They should ideally hold on a spatial scale large enough that violations of the assumptions in the region do not influence conditions at the simulated location. In practice, days are often not ideal, e.g. a time-varying advection can be present. This does not necessarily mean that the model cannot be applied to that day, but the performance is likely to be worse.
To further our understanding of the land–atmosphere exchange, tall tower observational sites have been established, for instance at Cabauw, the Netherlands (Bosveld et al., 2020; Vermeulen et al., 2011), Hyytiälä, Finland (Vesala et al., 2005), and Harvard Forest, USA (Commane et al., 2015). These observational sites provide time series of different types of measurements (observation streams). Even so, many studies only use a (small) fraction of the different streams of observations available for a specific day and location (e.g. Vilà-Guerau de Arellano et al., 2012). A model like CLASS, containing both a mixed layer and land surface part, can be used to fit an extensive set of observation streams simultaneously. When model results are consistent with a diverse set of measurements, this gives more confidence that the internal physics are robust and that the model has been adequately parameterised to reliably simulate reality. However, an important difficulty in the application of a model like CLASS concerns parameter tuning to obtain a good fit to observations. Some parameters can be obtained quite directly from observations (for instance, initial mixed layer humidity), but, for example, estimating free tropospheric lapse rates or certain land surface parameters is often more challenging. When many parameters need to be determined, the feasible parameter space becomes vast. If this vast parameter space is not properly explored, the obtained parameters can be subjective and suboptimal. The estimation of parameters is further complicated by possible overfitting and the problem of parameter equifinality (Tang and Zhuang, 2008); the latter occurs especially when too few types of observations are used. Next to that, some of the available ecosystem/CBL-level observations may suffer from biases.
An example is the closure of the surface energy balance, where the available energy is often larger than the sum of the latent and sensible turbulent heat fluxes (Foken, 2008). This energy balance closure problem is a known issue with eddy covariance observations (Foken, 2008; Oncley et al., 2007; Renner et al., 2019), and various explanations have been suggested (Foken, 2008).
The above text illustrates the need for an objective optimisation framework capable of correcting observations for biases. We therefore present a description of ICLASS, an inverse modelling framework built around the CLASS model, including a bias correction scheme for specific bias patterns. This framework can estimate model parameters by minimising an objective cost function using a variational (Chevallier et al., 2005) framework. ICLASS uses a Bayesian approach, in the sense that it combines information, both from observations and from prior knowledge about the parameters, to come to a solution with a reduced uncertainty in the optimised parameters. A major strength of this framework is that it allows the incorporation of several streams of observations, for instance, chemical fluxes, mixing ratios, temperatures at multiple heights, and radiosonde observations of the boundary layer height. By optimising a number of predefined key parameters of the model, we aim to obtain a diurnal simulation that is consistent with a diverse set of measurements. Additionally, error statistics that are estimated provide information about the constraints the measurements place on the model parameters. Modelling the carbon cycle at ecosystem level is one of the main intended fields of application. As an example, with some extensions to the framework, ICLASS could be applied to ecosystem observations of the coupled exchange of CO_{2} and carbonyl sulfide (a tracer for obtaining stomatal conductance; Whelan et al., 2018).
Besides ICLASS, there have been extensive earlier efforts in the literature to estimate the parameters in land–atmosphere exchange models. For example, Bastrikov et al. (2018) and Mäkelä et al. (2019) optimised parameters of the land surface models ORCHIDEE and JSBACH, respectively. These models simulate additional processes not included in the CLASS model, which enables the calculation of additional land surface variables. For example, in contrast to ORCHIDEE, CLASS cannot simulate leaf phenology or the allocation of carbon to different biomass pools. However, a distinct advantage of the CLASS model is the coupling of a land surface model to a mixed layer model. This facilitates the inclusion of atmospheric observations such as mixing ratios in the optimisation of land–atmosphere exchange parameters. Next to that, simple models like CLASS have the advantage of requiring less computation time, and the output might be more easily understood. Kaminski et al. (2012) and Schürmann et al. (2016) also assimilated both land-surface-related and atmosphere-related observations. In those studies, a land surface model is coupled to an atmospheric transport model, but meteorology is not simulated. In ICLASS, meteorology adds an additional set of observation streams that can be used to optimise land-surface-related parameters that are linked both to gas fluxes and meteorology.
An important challenge for the optimisation framework is the strong nonlinearity of the model. As an example, the change in mixed-layer specific humidity (q) with time is a function of q itself; a stronger evapotranspiration flux leads to an increased specific humidity in the mixed layer, which in turn reduces the evapotranspiration flux again (van Heerwaarden et al., 2009). The nonlinearity causes numerically calculated cost function gradients to deviate from the true analytical gradients, since the cost function can vary irregularly with a changing model parameter value. This hampers the proper minimisation of the cost function when using numerically calculated gradients. An adjoint has been used in the past to optimise parameters, e.g. for land surface models (Raoult et al., 2016; Ziehn et al., 2012). Constructing the adjoint of the tangent linear model is a way to obtain more accurate gradient calculations, as the adjoint provides a locally exact analytical gradient of the cost function at the locations where the function is differentiable. This approach, furthermore, allows the efficient retrieval of the sensitivity of model output to model parameters. Also, using an analytical gradient is generally computationally less expensive compared to using a numerical gradient (Doicu et al., 2010, p. 17). Margulis and Entekhabi (2001a) constructed an adjoint model framework of a coupled land surface–boundary layer model, which they used to study differences in daytime sensitivity of surface turbulent fluxes for the same model in coupled and uncoupled modes (Margulis and Entekhabi, 2001b). However, their CBL model neither includes carbon dioxide nor does it allow the modelling of scalars at specific heights in the surface layer. We expect these to be important for our framework that aims to make optimal use of several information streams. To perform gradient calculations within the ICLASS framework, we constructed the adjoint of CLASS.
The paper is structured as follows. First, we give some information on the (slightly adapted) forward model CLASS (Sect. 2). The inverse modelling framework built around CLASS is described in Sect. 3. Information on how error statistics are employed and produced follows in Sect. 4, after which we provide a description of the model output (Sect. 5) and technical details of the code (Sect. 6). Afterwards, we present the adjoint and gradient tests that serve as validation for the constructed adjoint model (Sect. 7). Further information about the adjoint model is available in the Supplement. In Sect. 8, we perform observation system simulation experiments that validate the full inverse modelling framework. In the last section before the concluding discussion, we present an example application for a grassland site in the Netherlands (Cabauw), where a very comprehensive meteorological dataset is complemented with detailed measurements of CO_{2} mixing ratios and surface fluxes.
The employed forward model in our inverse modelling framework is the (slightly adapted) Chemistry Landsurface Atmosphere Soil Slab model (CLASS; VilàGuerau De Arellano et al., 2015). The model code is freely available on GitHub (https://classmodel.github.io/, last access: 9 December 2022). We made use of the Python version of CLASS to construct our inverse modelling framework. We will briefly describe the essentials of the model which are relevant for the inverse modelling framework.
The model consists of several parts, namely the mixed layer, the surface layer, and the land surface, including the soil (Fig. 1). It is a conceptual model that uses a relatively small set of differential equations (Wouters et al., 2019). The core of the model is a box model representation of an atmospheric mixed layer. Therefore, an essential assumption of the model is that, during the daytime, turbulence is strong enough to maintain wellmixed conditions in this layer (Ouwersloot et al., 2012). The mixed layer tendency equation for any scalar (e.g. CO_{2}, heat) is as follows:
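In standard slab-model notation, this tendency equation can be written as follows (a reconstruction based on the surrounding description; the published form in Vilà-Guerau de Arellano et al., 2015, may differ in detail):

$$\frac{\partial \langle \varphi \rangle}{\partial t} = \frac{\left(\overline{w'\varphi'}\right)_{\mathrm{s}} - \left(\overline{w'\varphi'}\right)_{\mathrm{e}}}{h} + \mathrm{adv}_{\varphi},$$

where $\langle \varphi \rangle$ is the mixed-layer mean of the scalar, $(\overline{w'\varphi'})_{\mathrm{s}}$ the surface flux, $(\overline{w'\varphi'})_{\mathrm{e}}$ the entrainment flux at the mixed-layer top, $h$ the mixed-layer height, and $\mathrm{adv}_{\varphi}$ an optional advection term.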
The surface flux is the exchange flux with the land surface (including vegetation and soil). The entrainment flux is the exchange flux between the mixed layer and the overlying free troposphere. For moisture and chemical species, a cloud mass flux can also be included in the equation. The mixed layer height is dynamic during the day and evolves under the driving force of the surface heat fluxes and largescale subsidence. Cloud effects on the boundary layer height and growth due to mechanical turbulence can also be accounted for.
Above the mixed layer, a discontinuity occurs in the scalar quantities, representing an infinitely small inversion layer. Above the inversion, the scalars are assumed to follow a linear profile with height in the free troposphere (Fig. 1). The entrainment fluxes are calculated as follows. First, the buoyancy entrainment flux is taken as a fixed fraction of the surface flux of this quantity (Stull, 1988, p. 478), to which the entrainment flux driven by shear can optionally be added. From this virtual heat entrainment flux, an entrainment velocity is calculated. The entrainment flux for a specific scalar (e.g. CO_{2}) is then obtained by multiplying the entrainment velocity with the value of the (inversion layer) discontinuity for the respective scalar.
The surface layer is defined in the model as the lowest 10 % of the boundary layer. In this (optional) layer, the Monin–Obukhov similarity theory (Monin and Obukhov, 1954; Stull, 1988) is employed. In the original CLASS surface layer, scalars such as temperature are evaluated at 2 m height. For some scalars, we have extended this to multiple userspecified heights. This allows the comparison of observations of chemical mixing ratios and temperatures at different heights (e.g. along a tower) to the model output. Since the steepness of vertical profiles depends on wind speed and roughness of the surface, these gradients reflect information about these quantities.
The (optional) land surface includes a simple soil representation and an A-gs (Jacobs, 1994; Ronda et al., 2001) module. The A-gs module is a big-leaf method (Friend, 2001) for calculating the exchange of CO_{2} between atmosphere and biosphere and the stomatal resistance. The latter is used for calculating the H_{2}O exchange. As an alternative to A-gs, a Jarvis–Stewart approach (Jarvis, 1976; Stewart, 1988) can also be used in the calculation of the H_{2}O exchange. The latter approach is simpler; herein, the stomatal conductance consists of a maximum conductance multiplied by a set of factors between 0 and 1 (Jacobs, 1994). In CLASS, four factors are included, which represent limitations due to the amount of incoming light, the temperature, vapour pressure deficit, and soil moisture. The land surface part is responsible for calculating the exchange fluxes of sensible heat, latent heat, and CO_{2} between the mixed layer and the land surface. The model has a module for calculating long- and shortwave radiation dynamically. In this module, shortwave radiation is calculated using the date and time, cloud cover, and albedo. For longwave radiation, surface temperature and the temperature at the top of the surface layer are used. The resulting net radiation is used implicitly in the calculation of the heat fluxes, thereby obtaining a closed energy balance in the model (simplified calculation). Soil temperature and moisture are also simulated based on a force–restore model. The soil heat flux to the atmosphere is calculated based on the gradient between soil and surface temperature, and the latter is obtained from a simplified energy balance calculation.
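The Jarvis–Stewart form described above can be sketched as follows (a minimal illustrative implementation with made-up values; not the CLASS code itself):

```python
def jarvis_stewart_conductance(g_max, f_light, f_temp, f_vpd, f_soil):
    """Jarvis-Stewart stomatal conductance: a maximum conductance
    reduced by multiplicative limitation factors, each between 0 and 1."""
    for f in (f_light, f_temp, f_vpd, f_soil):
        if not 0.0 <= f <= 1.0:
            raise ValueError("limitation factors must lie in [0, 1]")
    return g_max * f_light * f_temp * f_vpd * f_soil

# Illustrative values: slight light and temperature limitation, no
# soil moisture stress; g_max in m s-1.
gs = jarvis_stewart_conductance(0.02, f_light=0.9, f_temp=0.8,
                                f_vpd=0.7, f_soil=1.0)
```

A factor of 1 means no limitation from that process; the product can only reduce the conductance below its maximum.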
More details on the equations in the model can be found in VilàGuerau De Arellano et al. (2015). Note that some relatively small changes with respect to the original CLASS model have been implemented, as documented in the ICLASS manual, which is part of the material that can be downloaded via the Zenodo link in the code and data availability section.
3.1 General
Inverse modelling is based on using observations and, ideally, prior information to statistically optimise a set of variables driving a physical system (Brasseur and Jacob, 2017). The n variables to be optimised are contained in a state vector x. In our framework, this vector can be subdivided into two vectors. Those are x_{m}, containing state variables belonging to the input of CLASS (e.g. CO_{2} advection and albedo), and x_{b}, containing state variables belonging to our bias correction scheme (see Sect. 3.2). The forward model H (CLASS) projects the vector x_{m} to provide model output that can be compared to observations, e.g. temperatures at different heights (full list in the ICLASS manual). This output depends not only on x_{m} but also on model parameters that are not part of the state. Those are contained in a vector p. The result is contained in vector H(x_{m},p). Note that an overview of all inverse-modelling variables defined in this section is given in Table B1, including the dimensions and units. We initially define a cost function J (–) as follows (Brasseur and Jacob, 2017):
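In the notation used here, this cost function takes the form (reconstructed from the definitions that directly follow):

$$J(\mathbf{x}) = \frac{1}{2}\left(\mathbf{x}-\mathbf{x}_{\mathrm{A}}\right)^{\mathrm{T}}\mathbf{S}_{\mathrm{A}}^{-1}\left(\mathbf{x}-\mathbf{x}_{\mathrm{A}}\right) + \frac{1}{2}\left(\mathbf{y}-H(\mathbf{x}_{\mathrm{m}},\mathbf{p})\right)^{\mathrm{T}}\mathbf{S}_{\mathrm{O}}^{-1}\left(\mathbf{y}-H(\mathbf{x}_{\mathrm{m}},\mathbf{p})\right),$$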
where x_{A} represents the a priori estimate of the state vector, y is the vector of observations used within the modelled time window, and H(x_{m},p) is the vector of model results at the times of the observations. The latter vector is the model equivalent of the observation vector y. The superscript T means transpose, S_{A} represents the a priori error covariance matrix (defined in the Supplement), and S_{O} represents the matrix of the observational error covariances. This cost function quantifies two aspects, namely the fit between model output and observations and how well the posterior state matches prior information about the state. Regarding the observational error covariance matrix S_{O} (fully defined in Brasseur and Jacob, 2017), we assume, for simplicity, the observational errors to be uncorrelated (as in, e.g., McNorton et al., 2018; Chevallier et al., 2010; Ma et al., 2021). This simplifies the matrix S_{O} to a diagonal matrix, with the observational error variances as diagonal elements. In this way, Eq. (2) simplifies to the following:
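With a diagonal S_{O}, the cost function becomes (reconstruction):

$$J(\mathbf{x}) = \frac{1}{2}\left(\mathbf{x}-\mathbf{x}_{\mathrm{A}}\right)^{\mathrm{T}}\mathbf{S}_{\mathrm{A}}^{-1}\left(\mathbf{x}-\mathbf{x}_{\mathrm{A}}\right) + \frac{1}{2}\sum_{i=1}^{m}\frac{\left(y_{i}-H(\mathbf{x}_{\mathrm{m}},\mathbf{p})_{i}\right)^{2}}{\sigma_{\mathrm{O},i}^{2}}.$$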
Here, ${\mathit{\sigma}}_{\mathrm{O},i}^{\mathrm{2}}$ is the ith diagonal element of S_{O}, and m is the number of observations. It is customary to refer to the first term of this cost function as the background part and to the second part of this function as the data part. Note that if the a priori errors are also uncorrelated, then the first term in the equation can be simplified in a similar way as the second term. The observational error variance linked to the ith observation (${\mathit{\sigma}}_{\mathrm{O},i}^{\mathrm{2}}$) can be further split up as follows:
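Assuming independent error contributions (reconstruction from the definitions that follow):

$$\sigma_{\mathrm{O},i}^{2} = \sigma_{\mathrm{I},i}^{2} + \sigma_{\mathrm{M},i}^{2} + \sigma_{\mathrm{R},i}^{2},$$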
where ${\mathit{\sigma}}_{\mathrm{I},i}^{\mathrm{2}}$ is the instrument (measurement) error variance, ${\mathit{\sigma}}_{\mathrm{M},i}^{\mathrm{2}}$ is the model error variance, and ${\mathit{\sigma}}_{\mathrm{R},i}^{\mathrm{2}}$ is the representation error variance (see Brasseur and Jacob, 2017). These errors are assumed to be independent of each other and normally distributed.
At this point, we introduce two extra features to the cost function. First, we allow the user to specify a weight for each individual observation, when some observations are deemed less important than others. In principle, the observational error variances could also be adapted for this purpose, but, by using weights, we can keep realistic error estimations (important for Sect. 4.2). Those weights can also be used to manipulate the relative importance of the background term and the data term. This is similar to the regularisation factor explained in Brasseur and Jacob (2017). Second, we introduce part of our bias correction scheme in the data part of the cost function, namely scaling factors for observations. These factors can also be optimised. With the additions mentioned above, the cost function, as given in Eq. (3), modifies to the following:
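With weights and observation scaling factors included, the cost function reads (a reconstruction consistent with the definitions below):

$$J(\mathbf{x}) = \frac{1}{2}\left(\mathbf{x}-\mathbf{x}_{\mathrm{A}}\right)^{\mathrm{T}}\mathbf{S}_{\mathrm{A}}^{-1}\left(\mathbf{x}-\mathbf{x}_{\mathrm{A}}\right) + \frac{1}{2}\sum_{i=1}^{m} w_{i}\,\frac{\left(s_{i}\,y_{i}-H(\mathbf{x}_{\mathrm{m}},\mathbf{p})_{i}\right)^{2}}{\sigma_{\mathrm{O},i}^{2}},$$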
where w_{i} (–) is a weight for each individual observation in the cost function. s_{i} (–) is a scaling factor for observation y_{i}, which is identical for each time step but is allowed to differ between each observation stream. The background term in the cost function can be left out if the user so desires. The introduction of the scaling factors means we need to adapt the observational error variances as well. Thus, the ith observational error variance is now given by the following:
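A plausible form, consistent with the terms defined directly below, is the expected squared mismatch evaluated at the true state (this reconstruction is uncertain; the published equation may differ):

$$\sigma_{\mathrm{O},i}^{2} = E\left[\left(s_{i}^{\{\mathrm{t}\}}\,y_{i}-H\left(\mathbf{x}_{\mathrm{m}}^{\{\mathrm{t}\}},\mathbf{p}\right)_{i}\right)^{2}\right],$$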
where ${\mathit{x}}_{\mathrm{m}}^{\mathit{\left\{}\mathrm{t}\mathit{\right\}}}$ is the unknown vector of the “true” values of the model parameters in the state vector, and ${s}_{i}^{\mathit{\left\{}\mathrm{t}\mathit{\right\}}}$ is the true value of the scaling factor for the ith observation. The decomposition from Eq. (4) remains valid.
In the statistical optimisation, we attempt to find the values of the state vector x, such that the function in Eq. (5) reaches its absolute minimum. This is done by starting from an initial guess (x=x_{A}), after which the state vector is improved iteratively. The cost function and the gradient of the cost function (derivatives with respect to all parameters) are computed for different combinations of parameters in the state vector (Fig. 2). The framework uses, by default, a truncated Newton method, the tnc algorithm (The SciPy community, 2022; Nash, 2000), for the optimisations. Truncated Newton methods are suitable for nonlinear optimisation problems (Nash, 2000). The chosen algorithm allows the specification of hard bounds for the state vector parameters, thus preventing unphysical parameter values for individual parameters in the state vector. Raoult et al. (2016) used similar constraints in their inverse modelling system, and Bastrikov et al. (2018) used bound constraints as well. The analytical cost function gradient calculations are described in Sect. 3.4. A basic numerical derivative option (Sect. 3.5) is available as well, although we expect this, in general, to be outperformed by the analytical derivative.
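The elements above (a cost function with background and data terms, an analytical gradient, hard bounds, and the tnc algorithm) can be illustrated with a minimal SciPy sketch. The cost function, forward model, and all numbers below are toy stand-ins, not the ICLASS code:

```python
import numpy as np
from scipy.optimize import minimize

# Toy prior state, prior uncertainties, and observations.
x_a = np.array([1.0, 0.5])
sigma_a = np.array([0.5, 0.2])
y = np.array([2.0, 3.0, 4.0])

def h(x):
    # Hypothetical nonlinear forward model mapping state to observation space.
    return np.array([x[0] + x[1], x[0] * x[1] + 2.0, x[0] ** 2 + 3.0])

def cost_and_grad(x):
    # Background term + data term (unit observational errors assumed here).
    dx = (x - x_a) / sigma_a
    r = h(x) - y
    j = 0.5 * np.sum(dx ** 2) + 0.5 * np.sum(r ** 2)
    # Analytical gradient: background part plus Jacobian^T times the forcing.
    jac = np.array([[1.0, 1.0],
                    [x[1], x[0]],
                    [2.0 * x[0], 0.0]])
    grad = dx / sigma_a + jac.T @ r
    return j, grad

# Hard bounds prevent unphysical parameter values, as in ICLASS.
res = minimize(cost_and_grad, x_a, jac=True, method="TNC",
               bounds=[(0.0, 5.0), (0.0, 5.0)])
```

Passing `jac=True` tells SciPy that the function returns both the cost and its gradient, so no finite-difference approximation is needed.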
3.2 State vector parameters
As mentioned before, the state vector can be decomposed into vector x_{m} and vector x_{b}. Vector x_{m} contains state variables related to the input of CLASS, such as the initial conditions (e.g. initial mixed layer potential temperature), boundary conditions (e.g. CO_{2} advection), and uncertain model constants (e.g. roughness length for heat). The full list of model parameters that can be optimised is given in the ICLASS manual.
Vector x_{b} contains parameters belonging to our bias correction scheme in the data part of the cost function. There are two ways of bias correcting. The first one is by using observation scaling factors for observation streams. These scaling factors (s_{i}) have been introduced in Eq. (5). As an example, in the state vector to be optimised, one can include a single scaling factor for all CO_{2} surface flux observations. The second possible method of bias correcting (Sect. 3.3) is implemented specifically for the energy balance closure problem (Foken, 2008; Oncley et al., 2007; Renner et al., 2019), and it involves a parameter Frac_{H} (–) to be optimised.
3.3 Bias correction for energy balance closure
We first define an observational energy balance closure residual as follows (Foken, 2008):
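In symbols (reconstructed from the description that follows):

$$\epsilon_{\mathrm{eb}} = R_{\mathrm{n}} - H_{\mathrm{orig}} - \mathrm{LE}_{\mathrm{orig}} - G,$$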
where R_{n} is the time series of net radiation measurements, H_{orig} and LE_{orig} are the measured sensible and latent heat fluxes, and G is the measured soil heat flux. The difference between the measured net radiation and the sum of the measured heat fluxes is calculated for every time step, and this represents the energy balance closure residual. If desired, the user can easily specify an expression of their own for ε_{eb}. Subsequently, the observations for the sensible and latent heat flux are adapted as follows:
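Consistent with the partitioning described below, the adapted observations take the form (reconstruction):

$$y_{H} = H_{\mathrm{orig}} + \mathrm{Frac}_{H}\,\epsilon_{\mathrm{eb}}, \qquad y_{\mathrm{LE}} = \mathrm{LE}_{\mathrm{orig}} + \left(1-\mathrm{Frac}_{H}\right)\epsilon_{\mathrm{eb}}.$$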
This implies that the energy balance closure residual is added partly to the sensible heat flux and partly to the latent heat flux, thereby closing the energy balance in the observations. This partitioning is determined by parameter Frac_{H}, which is taken to be constant during the modelled period. This approach of closing the energy balance is similar to Renner et al. (2019), but we optimise a parameter, instead of using the evaporative fraction, for partitioning ε_{eb}. The limitations of this approach are that we assume the radiation and soil heat flux measurements to be bias-free and the Frac_{H} parameter constant.
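This correction can be sketched numerically as follows (a minimal illustration with invented half-hourly fluxes; the function and variable names are not from the ICLASS code):

```python
import numpy as np

def close_energy_balance(rn, h_orig, le_orig, g, frac_h):
    """Redistribute the energy balance closure residual over H and LE.

    frac_h is the (optimisable) fraction of the residual added to the
    sensible heat flux; the remainder goes to the latent heat flux.
    """
    eps_eb = rn - h_orig - le_orig - g      # closure residual per time step
    h_new = h_orig + frac_h * eps_eb
    le_new = le_orig + (1.0 - frac_h) * eps_eb
    return h_new, le_new

# Toy fluxes (W m-2); eddy-covariance fluxes undershoot Rn - G.
rn = np.array([400.0, 450.0])
h_orig = np.array([100.0, 120.0])
le_orig = np.array([200.0, 230.0])
g = np.array([40.0, 45.0])

h_new, le_new = close_energy_balance(rn, h_orig, le_orig, g, frac_h=0.3)
```

After the correction, the observed energy balance closes exactly at every time step, for any value of frac_h.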
3.4 Analytical derivative
For the optimisations, we compute not only the cost function but also the gradient of the cost function with respect to the state vector elements. This indicates the direction in which the cost function decreases. How the gradient with respect to an individual element is calculated depends on which state vector element is considered. In case the ith state vector element is a model input parameter, the gradient with respect to this element is computed as follows (similar to Brasseur and Jacob, 2017, their Eq. 11.105):
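A reconstruction consistent with the terms defined directly below:

$$\frac{\partial J}{\partial x_{i}} = \left[\mathbf{S}_{\mathrm{A}}^{-1}\left(\mathbf{x}-\mathbf{x}_{\mathrm{A}}\right)\right]_{i} + \left[\left(\boldsymbol{\nabla}_{\mathbf{x}_{\mathrm{m}}}H(\mathbf{x}_{\mathrm{m}},\mathbf{p})\right)^{\mathrm{T}}\mathbf{F}\right]_{i-l},$$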
where ${\mathbf{\nabla}}_{{\mathit{x}}_{\mathrm{m}}}\mathit{H}({\mathit{x}}_{\mathrm{m}},\mathit{p})$ is the local Jacobian matrix of H. l is the number of non-model-input parameters in the state x occurring before x_{i}. F is the forcing vector, with elements F_{k} defined as follows:
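A reconstruction of the forcing, consistent with the weighted, scaled cost function above (the sign convention is chosen so that the gradient expressions in this section are mutually consistent):

$$F_{k} = w_{k}\,\frac{H(\mathbf{x}_{\mathrm{m}},\mathbf{p})_{k} - s_{k}\,y_{k}}{\sigma_{\mathrm{O},k}^{2}}.$$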
There is one forcing element in the vector for every observation that is used, and F_{k} is the forcing related to the observation y_{k}. For calculating the model input part of the analytical derivative, we constructed the adjoint of the model, $\left({\mathbf{\nabla}}_{{\mathit{x}}_{\mathrm{m}}}\mathit{H}\right({\mathit{x}}_{\mathrm{m}},\mathit{p}){)}^{\mathrm{T}}$, which is used to obtain a locally exact analytical gradient (in specific cases the gradient is not exact; see the section on “if statements” in the Supplement). More information on the adjoint is given in the Supplement.
In case the ith state vector element is an observation scaling factor (Eq. 5) for observation stream j, the gradient of the cost function with respect to the ith state vector element is computed as follows:
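A reconstruction consistent with the scaled data term and the forcing defined above:

$$\frac{\partial J}{\partial x_{i}} = \left[\mathbf{S}_{\mathrm{A}}^{-1}\left(\mathbf{x}-\mathbf{x}_{\mathrm{A}}\right)\right]_{i} - \sum_{k=1}^{m_{j}} F_{k}\,y_{j,k},$$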
where m_{j} is the number of observations of the type (stream) to which the observation scaling factor is applied; e.g. if the scaling factor applies to surface CO_{2} flux observations, then m_{j} is the number of surface CO_{2} flux observations. y_{j,k} is the kth observation of this observation stream, and F_{k} is the forcing related to the observation y_{j,k}.
Finally, if the ith state vector element is Frac_{H}, then the gradient is calculated as follows:
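A reconstruction consistent with the terms defined directly below:

$$\frac{\partial J}{\partial x_{i}} = \left[\mathbf{S}_{\mathrm{A}}^{-1}\left(\mathbf{x}-\mathbf{x}_{\mathrm{A}}\right)\right]_{i} - \mathbf{F}_{H}\cdot\frac{\mathrm{d}\mathbf{y}_{H}}{\mathrm{dFrac}_{H}} - \mathbf{F}_{\mathrm{LE}}\cdot\frac{\mathrm{d}\mathbf{y}_{\mathrm{LE}}}{\mathrm{dFrac}_{H}},$$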
where F_{H} and F_{LE} are the forcing vectors for the sensible and latent heat fluxes, respectively, y_{H} and y_{LE} are the observation vectors for the sensible and latent heat fluxes, respectively, and ⋅ is the Euclidean inner product. Note that, when the Frac_{H} parameter is included in the state, the observation scaling factors for sensible and latent heat flux observations will be set equal to 1 and are not allowed to be included in the state. The terms $\frac{\mathrm{d}{\mathit{y}}_{H}}{{\mathrm{dFrac}}_{H}}$ and $\frac{\mathrm{d}{\mathit{y}}_{LE}}{{\mathrm{dFrac}}_{H}}$ represent, respectively, the derivatives of the sensible and latent heat flux observations to the Frac_{H} parameter, which follow from Eqs. (8) and (9) as follows:
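Differentiating the bias-corrected observations with respect to Frac_{H} gives (reconstruction):

$$\frac{\mathrm{d}\mathbf{y}_{H}}{\mathrm{dFrac}_{H}} = \boldsymbol{\epsilon}_{\mathrm{eb}}, \qquad \frac{\mathrm{d}\mathbf{y}_{\mathrm{LE}}}{\mathrm{dFrac}_{H}} = -\boldsymbol{\epsilon}_{\mathrm{eb}}.$$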
Note that Eqs. (10), (12), and (13) all have the same first term that originates from the background part of the cost function.
3.5 Numerical derivative
A simple numerical derivative is available as an alternative to the analytical gradient. The derivative of the cost function with respect to the ith state element is numerically calculated as follows:
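Assuming a one-sided (forward) difference (reconstruction; the exact stencil used in the code may differ):

$$\frac{\partial J}{\partial x_{i}} \approx \frac{J\left(\mathbf{x}+\alpha\,\mathbf{e}_{i}\right) - J(\mathbf{x})}{\alpha},$$

where $\mathbf{e}_{i}$ is the unit vector in the direction of the ith state element,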
where α is a very small perturbation to state parameter x_{i}, with a default value of 10^{−6}, and has the units of x_{i}.
3.6 Handling convergence challenges
The highly nonlinear nature of the optimisation problem can cause the optimisation to become stuck in a local minimum of the cost function (Santaren et al., 2014; Bastrikov et al., 2018; Ziehn et al., 2012). This means that the resulting posterior state vector can depend on the prior starting point (Raoult et al., 2016), and the resulting posterior state can remain far from optimal. In the worst case, the nonlinearity of the model can even lead to a crash of the forward model. This happens with certain combinations of input parameters that lead to unphysical situations or undesired numerical behaviour. After starting from a user-specified prior state vector, the tnc algorithm autonomously decides which parameter values are tested during the rest of the optimisation. It is possible to place hard bounds on individual parameters when using the tnc algorithm, but this does not always prevent all possible problematic combinations of parameters from being evaluated.
To obtain information about the uncertainty in the posterior solution, and to deal with the challenges described above, the framework also allows a Monte Carlo approach (Tarantola, 2005). This entails the framework not starting at a single state vector with prior estimates but instead using an ensemble of prior state vectors x_{A}, thus leading to an ensemble of posterior parameter estimates. The ensemble of optimisations can be executed in parallel on multiple processors, thereby reducing the time it takes to perform the total optimisation. More details on the Monte Carlo mode of ICLASS are given in Sect. 4.2.
As an additional way to improve the posterior solution, we have implemented a restart algorithm. If the optimisation results in a cost function that is higher than a user-specified number, then the framework will restart the optimisation from the best state reached so far. This fresh start of the tnc algorithm, whereby the algorithm's memory is cleared, often leads to a further lowering of the cost function. The maximum number of restarts (≥ 0) is specified by the user. If an ensemble is used, then every individual member with too high a posterior cost function will be restarted.
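The restart logic can be sketched as follows (an illustrative reimplementation with a toy quadratic cost, not the ICLASS code itself):

```python
import numpy as np
from scipy.optimize import minimize

def optimise_with_restarts(cost_and_grad, x0, bounds, j_threshold, max_restarts):
    """Restart a TNC optimisation from the best state reached so far
    whenever the posterior cost stays above a user-specified threshold."""
    best_x = np.asarray(x0, dtype=float)
    best_fun = np.inf
    for _ in range(max_restarts + 1):
        # A fresh minimize() call clears the algorithm's internal memory.
        res = minimize(cost_and_grad, best_x, jac=True, method="TNC",
                       bounds=bounds)
        if res.fun < best_fun:
            best_x, best_fun = res.x, res.fun
        if best_fun <= j_threshold:
            break
    return best_x, best_fun

# Toy cost function for illustration: minimum at x = 2.
def cost_and_grad(x):
    return float(np.sum((x - 2.0) ** 2)), 2.0 * (x - 2.0)

x_opt, j_opt = optimise_with_restarts(cost_and_grad, [0.0], [(-5.0, 5.0)],
                                      j_threshold=1e-6, max_restarts=3)
```

In an ensemble setting, the same loop would be applied independently to each member whose posterior cost exceeds the threshold.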
4.1 Prior and observations
From a Bayesian point of view, ICLASS can combine information, both from observations and from prior knowledge about the state vector, to come to a solution with a reduced uncertainty in the state parameters. In the derivation of Eq. (2), it is assumed that both prior and observational errors follow a (multivariate) normal distribution (Tarantola, 2005). However, some prior input parameters are bounded (Sect. 3.6), e.g. the albedo cannot be negative. Dealing with bounds is a known challenge in inverse modelling, and several approaches can be followed (Miller et al., 2014). As an example, Bergamaschi et al. (2009) had to deal with methane emissions in the state vector becoming negative. Their solution was to make the emissions a function of an emission parameter that is being optimised instead of optimising the emissions themselves. Through their choice of function, the emissions cannot become negative, even though the emission parameter is unbounded. In our case, such an approach is more difficult, as we have a diverse set of bounded parameters.
Our approach is to enforce hard bounds on state values via the tnc algorithm. This, however, means that the normality assumption will be violated to some extent, as the normal distribution (for which the user specifies the variance) for prior parameter values then becomes a truncated normal distribution. For a parameter following a truncated normal prior distribution, the prior variance used in the cost function is not (fully) equal to the variance of the actual prior distribution. The extent to which this is the case depends on the degree of truncation.
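The mismatch between the nominal and the truncated prior variance can be illustrated with `scipy.stats.truncnorm`; the albedo-like numbers below are hypothetical:

```python
from scipy.stats import truncnorm

# Nominal prior N(mu, sigma^2), truncated at hard bounds [0, 1]
# (e.g. an albedo, which cannot be negative or exceed 1).
mu, sigma = 0.2, 0.15
lo, hi = 0.0, 1.0
a, b = (lo - mu) / sigma, (hi - mu) / sigma   # standardised bounds
dist = truncnorm(a, b, loc=mu, scale=sigma)

# The actual (truncated) prior has a smaller spread than the nominal
# sigma used in the cost function; the gap grows with the truncation.
print(dist.std(), sigma)
```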
Our system also allows for specifying covariances between state elements in S_{A}. We assume observational errors to be uncorrelated (see Sect. 3.1). Equation (4) states that the observational error variance consists of an instrument error, a model error, and a representation error variance. The instrument and representation error standard deviations are taken from user input, while the model error standard deviation can either be specified by the user or estimated from a sensitivity analysis. In the latter case, an ensemble of forward-model runs is performed. For each of these runs, a set of parameters not belonging to the state vector is perturbed (these parameters are thus part of the vector p and not of x_{m}; see Sect. 3.1). These perturbed parameters are used together with the unperturbed prior vector x_{m} as model input. The user should specify which parameters are perturbed and the distribution from which random numbers are sampled and added to the default (prior) model parameters. For the latter, there is a choice between a normal, bounded normal, uniform, or triangular distribution. The model error standard deviation for each observation stream at each observation time is then obtained from the spread in the ensemble of model output for the observation stream at that time.
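The estimation of the model error standard deviation from such a sensitivity ensemble can be sketched as follows; the toy forward model and the parameter names are assumptions for illustration only:

```python
import numpy as np

rng = np.random.default_rng(0)

def forward_model(params, times):
    # Hypothetical toy stand-in for the forward model output of one
    # observation stream at the observation times.
    return params['amp'] * np.sin(times) + params['offset']

times = np.linspace(0.0, 3.0, 6)        # observation times
default = {'amp': 1.0, 'offset': 0.5}   # default (prior) parameter values

# Perturb parameters that are part of p (not of the state x_m): here a
# normal perturbation for 'amp' and a uniform one for 'offset'.
n_members = 200
outputs = np.empty((n_members, times.size))
for k in range(n_members):
    p = {'amp': default['amp'] + rng.normal(0.0, 0.1),
         'offset': default['offset'] + rng.uniform(-0.05, 0.05)}
    outputs[k] = forward_model(p, times)

# Model error standard deviation per observation time: ensemble spread.
sigma_model = outputs.std(axis=0, ddof=1)
```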
4.2 Ensemble mode to estimate posterior uncertainty
When ICLASS is run in Monte Carlo (ensemble) mode, it allows the estimation of the uncertainty in the posterior state. The principle for obtaining posterior error statistics bears a strong similarity to that of Chevallier et al. (2007), and our method is briefly explained here. We start with the prior state specified by the user. Then, for each ensemble member and for every state parameter, a random number is drawn from a normal distribution with mean 0 and the user-specified variance of the respective parameter. When covariances are also specified for the prior parameters, a multivariate normal distribution can be used to sample from. The sampled numbers are then added to the state vector. If nonzero covariances are used for the perturbation and the new state vector falls out of bounds (if specified), the entire vector is discarded, and the procedure is repeated for that ensemble member. If no covariance structure is used, only the parameter that falls out of its bounds has to be redrawn. Note that this effectively amounts to sampling from a truncated (multivariate) normal distribution.
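The sampling of perturbed prior state vectors under hard bounds can be sketched as follows; this is a simplified illustration with hypothetical parameters and bounds, showing the case with a covariance structure, in which the whole vector is redrawn:

```python
import numpy as np

rng = np.random.default_rng(1)

x_prior = np.array([0.2, 285.0])          # e.g. an albedo and a temperature
S_A = np.array([[0.01, 0.05],
                [0.05, 4.00]])            # prior covariance (nonzero covariances)
bounds = [(0.0, 1.0), (250.0, 320.0)]     # hard bounds per parameter

def sample_member(x_prior, S_A, bounds, rng, max_tries=1000):
    """Draw one perturbed prior state vector. With a covariance structure,
    an out-of-bounds draw discards the whole vector, which amounts to
    sampling from a truncated multivariate normal distribution."""
    for _ in range(max_tries):
        x = rng.multivariate_normal(x_prior, S_A)
        if all(lo <= v <= hi for v, (lo, hi) in zip(x, bounds)):
            return x
    raise RuntimeError('no in-bounds sample found')

member = sample_member(x_prior, S_A, bounds, rng)
```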
Similarly, for every ensemble member, a random number drawn from a normal distribution (without bounds) is added to the (scaled) observations. This modifies the cost function of an individual ensemble member (Eq. 5) as follows:
${x}_{\mathrm{A}}^{\{\mathrm{p}\}}$ is the perturbed prior state vector. The random numbers ε_{i} in the data part of the cost function are sampled from normal distributions with mean 0 and standard deviation σ_{O,i}. This can be seen as either a perturbation of the scaled observations or a perturbation of the model–data mismatch. As a consequence of the change in the cost function equation, the equation for the forcing vector (Eq. 11) is updated as well:
Furthermore, in Eqs. (10), (12), and (13), ${x}_{\mathrm{A}}^{\{\mathrm{p}\}}$ is used instead of x_{A} for perturbed ensemble members. The vector ε_{eb} (Eq. 7) is not perturbed and is kept identical for all members. For every ensemble member, an optimisation is performed. Each optimisation is classified as either successful or not successful. Here, "successful" is defined as having a final reduced chi-square (see Sect. 5; Eq. 19) smaller than or equal to a user-specified value. The posterior state parameters of the successful members are considered a sample of the posterior state space and are saved in a matrix. These parameters can be used to estimate the true posterior probability density functions (PDFs). A covariance and a correlation matrix are then constructed from the matrix of posterior state parameters. These matrices are used as estimates of the true posterior state covariance and correlation matrices, respectively. If the user prefers to do so, the numeric parameters not belonging to the state can also be perturbed in the ensemble, in a similar way to that outlined in Sect. 4.1.
Note that, next to the ensemble, there will also be a run with an unperturbed prior, just as is the case when no ensemble is used. We refer to this run as member 0. This member is, however, not included in the calculation of ensemblebased error statistics (e.g. correlation matrix), as the choice of prior values is not random for this member.
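The construction of posterior error statistics from the successful members can be sketched as follows; the ensemble results below are randomly generated stand-ins, and the threshold value is only an example:

```python
import numpy as np

rng = np.random.default_rng(2)

# Stand-in ensemble results: posterior states and reduced chi-square
# values for 50 perturbed members (member 0 is excluded).
post_states = rng.normal([0.25, 0.6], [0.02, 0.05], size=(50, 2))
chi2_r = rng.uniform(0.5, 6.0, size=50)

chi2_max = 4.5                       # example success threshold
success = chi2_r <= chi2_max
sample = post_states[success]        # sample of the posterior state space

# Estimates of the posterior error covariance and correlation matrices.
S_post = np.cov(sample, rowvar=False)
R_post = np.corrcoef(sample, rowvar=False)
```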
ICLASS can write several output files. The output includes the obtained parameters, the posterior reduced chi-square statistic, and the posterior and prior cost function. The reduced chi-square goodness-of-fit statistic is defined in ICLASS as follows:
In this equation, m is the number of observations, and n is the number of state parameters (see Sect. 3 for the other symbols). Note that if all the weights are set to 1, then the denominator simplifies to m+n. The n is only included in the denominator when the user chooses to include a background part in the cost function (default). In a simple case where the prior errors are uncorrelated, this ${\chi}_{\mathrm{r}}^{\mathrm{2}}$ statistic for the posterior solution should be around 1 when the optimisation converges well and priors and errors are properly specified (e.g. Michalak et al., 2005). In a case where all weights are 1, this can be understood as follows. The average value of the ith squared posterior observation residual, $(H({x}_{\mathrm{m,post}},{p})_{i}-{s}_{i}\,{y}_{i})^{\mathrm{2}}$, should be close to ${\sigma}_{\mathrm{O},i}^{\mathrm{2}}$, and the average value of the ith squared posterior data residual, $({x}_{\mathrm{post},i}-{x}_{\mathrm{A},i})^{\mathrm{2}}$, should be close to the ith diagonal element of the a priori error covariance matrix when the optimisation converges well and errors and prior parameters are properly specified. We have m observation residuals and n data residuals (if the background part is included). In this example with a diagonal S_{A} matrix, the residuals are assumed to be independent of each other. Each squared residual contributes on average a value of approximately 1 to the cost function, summing to approximately m+n and, thus, ${\chi}_{\mathrm{r}}^{\mathrm{2}}\approx \mathrm{1}$. Note, however, that the ${\chi}_{\mathrm{r}}^{\mathrm{2}}$ statistic can be misleading, in particular when observational errors are correlated (Chevallier, 2007). Furthermore, as mentioned in Sect. 4.1, prior parameters can follow a truncated normal distribution, violating the normality assumption. The impact of this depends on the degree of truncation and on the number of observations, and it can lead to an ideal ${\chi}_{\mathrm{r}}^{\mathrm{2}}$ value deviating from 1.
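For a diagonal S_{A} matrix and unit weights, the reduced chi-square computation can be sketched as follows; this is a minimal illustration with made-up residuals, not ICLASS code:

```python
import numpy as np

def reduced_chi_square(res_obs, sigma_O, res_prior=None, sigma_A=None):
    """Chi-square divided by (m + n), assuming unit weights and diagonal
    error covariance matrices; the background term (n) is included only
    when prior residuals are supplied."""
    chi2 = np.sum((res_obs / sigma_O) ** 2)
    denom = res_obs.size                      # m
    if res_prior is not None:
        chi2 += np.sum((res_prior / sigma_A) ** 2)
        denom += res_prior.size               # + n
    return chi2 / denom

# If each residual is on average about 1 standard deviation, chi2_r ~ 1.
r_obs = np.array([0.9, -1.1, 1.0, -0.95]) * 2.0
print(reduced_chi_square(r_obs, sigma_O=2.0))
```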
ICLASS also calculates a prior and posterior partial cost function and a posterior ${\mathit{\chi}}_{\mathrm{r}}^{\mathrm{2}}$ statistic for individual observation streams and for the correspondence to prior information (background). The partial cost function for observation stream j is calculated as follows:
where m_{j} is the number of observations of stream j, y_{j,i} is the ith observation of stream j, and H(x_{m},p)_{i} is the model output corresponding to y_{j,i}. If the scaled observations are perturbed in the ensemble (Sect. 4.2), ε_{i} is a random number; otherwise, it is 0. The ${\chi}_{\mathrm{r}}^{\mathrm{2}}$ statistic for observation stream j is defined here as follows:
This equation resembles Eq. (13) in Meirink et al. (2008), but note that the variable $\frac{{\mathit{\chi}}^{\mathrm{2}}}{{n}_{s}}$ in their paper is similar to ${\mathit{\chi}}_{\mathrm{r},j}^{\mathrm{2}}$ in our paper. The ${\mathit{\chi}}_{\mathrm{r}}^{\mathrm{2}}$ statistic for the background part is calculated as follows (similar to Eq. 26 of Michalak et al., 2005):
where J_{b} is the background part of the cost function, i.e. the first term from Eq. (5). The mean bias error, root mean squared error, and the ratio of model and observation variance for every observation stream is also calculated, both for the prior and posterior state. The mean bias error for the jth observation stream is defined here as follows:
In this equation, m_{j} is the number of observations of type (stream) j, $H({x}_{\mathrm{m}},{p})_{j,i}$ is the model output for observation stream j at time index i, y_{j,i} is an observation of stream j at time index i, and s_{j} is an observation scaling factor for observations of stream j. Note that, even when the scaled observations are perturbed, we do not include those perturbations here (compare Eq. 20). If the Frac_{H} parameter is used, the energy-balance-corrected observations (Eqs. 8 and 9) are used in the equation above. For the root mean squared error and the ratio of model and observation variance, the observation scales and energy-balance-corrected observations are used as well. ICLASS also calculates the normalised deviation of the posterior from the prior for every state parameter x_{i} as follows:
where σ_{A,i} is the square root of the prior variance of parameter i, i.e. the square root of diagonal element i from matrix S_{A}. Note that, also in case of an ensemble, we use the unperturbed prior, i.e. the prior of member 0.
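The per-stream diagnostics above can be sketched as follows; the function name and toy data are ours, and the variance-ratio convention (model variance divided by scaled-observation variance) is an assumption consistent with the text:

```python
import numpy as np

def stream_diagnostics(model_j, obs_j, s_j=1.0):
    """Mean bias error, RMSE, and ratio of model and (scaled) observation
    variance for one observation stream; perturbations are not included."""
    diff = model_j - s_j * obs_j
    mbe = diff.mean()
    rmse = np.sqrt(np.mean(diff ** 2))
    var_ratio = model_j.var() / (s_j * obs_j).var()
    return mbe, rmse, var_ratio

model = np.array([1.0, 2.0, 3.0, 4.0])
obs = np.array([1.1, 1.9, 3.2, 3.8])
mbe, rmse, var_ratio = stream_diagnostics(model, obs)
```

These statistics are computed both for the prior and for the posterior state.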
If run in ensemble mode, we additionally store estimates of the posterior error covariance matrix (Sect. 4), the posterior error correlation matrix, the ratio of the posterior and prior variance for each of the state parameters, the mean posterior state, and the optimised and prior states for every single ensemble member. When non-state parameters are perturbed in the ensemble, these parameters are part of the output. Moreover, the correlation of the posterior state parameters with these non-state parameters can also be written as output. When the model error standard deviations are estimated by ICLASS (Sect. 4.1), there is a separate file containing statistics on the estimated error standard deviations. Finally, there are additional output files with information about the optimisation process. For every model simulation (and for every ensemble member) in the iterative optimisation process, one can find the parameter values used in this simulation and the value of the cost function split up into a data and a background part. For the gradient calculations, one can find the parameter values used and the derivatives of the cost function with respect to every state parameter. The derivatives of the background part are also provided separately. More details on the output of ICLASS can be found in a separate section of the manual. An overview of the output variables defined in this section is given in Table B2.
We have written the entire code in Python 3 (https://www.python.org, last access: 9 December 2022). Using the framework requires a Python 3 installation, including the NumPy (van der Walt et al., 2011; https://numpy.org), SciPy (https://scipy.org, last access: 9 December 2022), and Matplotlib (Hunter, 2007; https://matplotlib.org, last access: 9 December 2022) libraries. The code is operating system independent and consists of the following three main files:

- forwardmodel.py, which contains the adapted CLASS model.

- inverse_modelling.py, which contains most of the inverse modelling framework, such as the function to be minimised in the optimisation and the model adjoint.

- optimisation.py, which is the file to be run by the user to perform an actual optimisation. In this file, the observations are loaded, the state vector is defined, and so on. The input paragraphs in this file should be adapted by the user to the optimisation performed.
There are three additional files. The first, optimisation_OSSE.py, is similar to optimisation.py but is meant specifically for observation system simulation experiments (in this file too, the user should adapt the input paragraphs to the optimisation to be performed). The second file is testing.py, which contains tests for the code, as described in Sect. 7. The file postprocessing.py is a script that can be run, after the optimisations are finished, for postprocessing the output data, e.g. to plot a coloured matrix of correlations. Using the Pickle module, the optimisation can store variables on disk near the end of the optimisation, and these variables can be read in again in the postprocessing.py script. This has the advantage that, for example, the formatting of plots can easily be adapted without redoing the optimisation. This script should be adapted by the user to the optimisation performed and the output desired.
When using the adjoint modelling technique, an extensive testing system is required to make sure that the analytical gradient of the cost function computed by the adjoint model is correct. There are two tests that are essential, which are described below.
The gradient test (for the tangent linear model) is a test to determine whether the derivatives with respect to the model input parameters in the tangent linear model are constructed correctly. The construction of the adjoint is based on the tangent linear model, so errors in the tangent linear propagate to the adjoint model. In the gradient test, model input state variables are perturbed, leading to a change in model output. The change in model output when employing the tangent linear model is compared to the change in model output when a numerical finite difference approximation is employed. Mathematically, the test for the ith model output element can be written as follows (similar to Honnorat et al., 2007; Elizondo et al., 2000):
where Δx_{m} is a vector of ones with the same length as vector x_{m}, α is a small positive number, H represents the forward-model operator, H(x_{m},p)_{i} is the ith model output element, x_{m} is the vector of the model input variables to be tested (model parameters part of the state), ⋅ is the Euclidean inner product, and vector p is the set of non-state parameters used by the model. Several increasingly smaller values are tested for α. When the tangent linear model is correct, the right-hand side of the equation converges to the left-hand side as α becomes progressively smaller, although for too small α values, numerical errors start to arise (Elizondo et al., 2000). Instead of using the full tangent linear model, individual model statements can also be checked. In this case, x_{m} and H(x_{m},p)_{i} can contain intermediate model variables (not part of the model input or output). The gradient test is considered successful if the ratio of the left- and right-hand sides of the equation lies in the interval [0.999, 1.001]. The results of a simple example of a gradient test, involving the derivative of the 2 m temperature with respect to the roughness length for heat (z_{0h}), are shown in Table 1.
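A gradient test of this kind can be sketched for a simple hand-written tangent linear model; the toy model H below is our own and merely illustrates the convergence check:

```python
import numpy as np

def H(x):
    # Hypothetical nonlinear "forward model" with a single output element.
    return x[0] ** 2 * np.sin(x[1])

def H_tl(x, dx):
    # Hand-coded tangent linear of H for perturbation dx.
    return 2.0 * x[0] * np.sin(x[1]) * dx[0] + x[0] ** 2 * np.cos(x[1]) * dx[1]

x_m = np.array([1.3, 0.7])
dx = np.ones_like(x_m)        # perturbation direction: a vector of ones

tl = H_tl(x_m, dx)
for alpha in [1e-2, 1e-4, 1e-6]:
    # Finite-difference approximation of the directional derivative;
    # the ratio should converge towards 1 as alpha shrinks.
    fd = (H(x_m + alpha * dx) - H(x_m)) / alpha
    ratio = fd / tl
    print(alpha, ratio)
```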
The dot product (adjoint) test checks whether the adjoint model code is correct for the given tangent linear code. It tests whether the identity
holds (Krol et al., 2008; Meirink et al., 2008; Honnorat et al., 2007; Claerbout, 2004). The equation can also be written as follows:
In this equation, ${\mathbf{H}}_{\mathrm{L}}={\mathbf{\nabla}}_{{x}_{\mathrm{m}}}H({x}_{\mathrm{m}},{p})$ represents the tangent linear model operator, i.e. a matrix (the local Jacobian of H) with the element on the ith row and jth column given by $\frac{\mathrm{d}H({x}_{\mathrm{m}},{p})_{i}}{\mathrm{d}{x}_{\mathrm{m},j}}$. x_{m} is, in this equation, a vector of perturbations to the model state, ${\mathbf{H}}_{\mathrm{L}}^{\mathrm{T}}$ represents the adjoint model operator, and ⟨⟩ is the Euclidean inner product. A very small deviation from 0 is acceptable due to machine rounding errors (Claerbout, 2004). Our criterion for passing the test, when using 64 bit floating point calculations, is that the absolute value of the left-hand side of Eq. (27) should be $\le \mathrm{5}\times {\mathrm{10}}^{-\mathrm{13}}$. This test can also be applied to individual statements or a block of statements in the adjoint model code. In this case, the definitions of the variables in Eq. (27) change slightly, e.g. ${\mathbf{H}}_{\mathrm{L}}^{\mathrm{T}}$ then represents a part of the adjoint model. The adjoint test will be illustrated by a (slightly simplified) example from the model code, namely the well-known Stefan–Boltzmann law for outgoing longwave radiation. The forward-model code reads as follows:
Lwout = bolz * Ts ** 4.
The tangent linear model code for this statement reads as follows:
dLwout = bolz * 4 * Ts ** 3. * dTs
And the corresponding adjoint code is as follows:
adTs += bolz * 4 * Ts ** 3. * adLwout
adLwout = 0.
In this case, x_{m} corresponds to dTs, and the H_{L}x_{m} variable is dLwout. The vector y is adLwout, and ${\mathbf{H}}_{\mathrm{L}}^{\mathrm{T}}{y}$ is adTs. In the test, x_{m} and y are assigned random numbers. When we evaluated Eq. (27) for this part of the code, the result was less than $\mathrm{1}\times {\mathrm{10}}^{-\mathrm{15}}$ (which corresponds to approximately 5 times the machine precision), meaning that the test passes.
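The dot-product test on this code fragment can be reproduced with a few lines of Python; the random-number seeding is our choice, and bolz denotes the Stefan–Boltzmann constant:

```python
import numpy as np

rng = np.random.default_rng(3)
bolz = 5.67e-8      # Stefan-Boltzmann constant (W m-2 K-4)
Ts = 290.0          # surface temperature (K)

# Assign random numbers to the tangent linear and adjoint inputs.
dTs = rng.standard_normal()      # x_m
adLwout = rng.standard_normal()  # y

# Tangent linear statement: H_L x_m
dLwout = bolz * 4 * Ts ** 3. * dTs

# Adjoint statement: H_L^T y
adTs = bolz * 4 * Ts ** 3. * adLwout

# <H_L x_m, y> - <x_m, H_L^T y> should vanish up to machine rounding.
lhs = dLwout * adLwout - dTs * adTs
print(lhs)
```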
We have constructed a separate script that performs a vast number of gradient tests and adjoint tests and informs the user whenever a test fails. This file (testing.py) is included with the code files. The number of time steps we specified for testing is small, as the computational burden increases with the number of time steps. The model passes the vast majority of the tests that are executed in this file. Closer inspection of the tests that fail reveals that numerical noise is a likely explanation for the small fraction of tests that are labelled as failing. Further validation of the inverse modelling framework follows in the next section.
To test the ability of the system to properly estimate model parameters, we show, in this chapter, the results of observation system simulation experiments (OSSEs) with artificial data. Such experiments are a classical way of testing parameter estimation systems; for example, similar tests have been performed by Henze et al. (2007) for their adjoint of a chemical transport model. In our experiments, we simulate a growing convective boundary layer (CBL) for a location at midlatitudes from 10 to 14 h, including surface layer calculations. In the chosen setup, the land surface is coupled to the boundary layer. The land surface provides heat and moisture and exchanges CO_{2} with the CBL.
8.1 Parameter estimation
We perform, in this section, a total of five main experiments of varying complexity. The experiments differ in the choice of observations and of the state vector. The procedure for the first four experiments is as follows. We first run the model with chosen values of the set of parameters we want to optimise. A set of model output data from this simulation then serves as the observations, while the parameters used to create these observations are referred to as the true parameters. We then perform an optimisation using these observations, starting from a perturbed prior state vector. In the cost function, we do not include the background part, to make sure that it is possible to retrieve the true parameters. This is because the background part of the cost function implies a penalty for deviating from the prior state. This penalty implies that, when the model is run with the true parameters, the cost function would still not be 0. In addition, the minimum of the cost function is (generally) shifted. When leaving out the background part, the framework should, if converging properly, be able to recover the parameter values that were used for creating the observations, starting from any other prior state vector.
Since real-life observations usually contain noise, the last experiment follows the same procedure, except that the observations are also perturbed. This is done by adding white Gaussian noise to the observations. For each observation y_{i}, a random number drawn from a normal distribution with mean 0 and standard deviation equal to a specified σ_{I,i} (Table 3) is added. The model and representation errors are set to 0 in all experiments. All weights for the cost function are set to 1. Table 2 lists, for every experiment, the prior, the true, and the optimised state variables. The complexity of the experimental setup increases from the top to the bottom of the table.
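The generation of synthetic observations for such an OSSE can be sketched as follows; the linear toy model is a stand-in for CLASS, and all numbers are illustrative:

```python
import numpy as np

rng = np.random.default_rng(4)

def forward_model(params, t):
    # Hypothetical linear toy model standing in for CLASS output.
    return params[0] + params[1] * t

t = np.linspace(10.0, 14.0, 9)     # observation times (h)
x_true = np.array([0.8, 0.35])     # "true" parameters

# Perfect-model observations, as used in the first four experiments ...
y_perfect = forward_model(x_true, t)

# ... and, for the last experiment, observations with white Gaussian
# noise of a specified measurement error standard deviation sigma_I.
sigma_I = 0.1
y_noisy = y_perfect + rng.normal(0.0, sigma_I, size=t.size)
```

An optimisation started from a perturbed prior should then recover x_true exactly in the noise-free case, but not in the noisy case.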
For the first experiment, we perturbed the initial boundary layer height and albedo. The rest of the parameters are left unchanged compared to the true parameters. The observation streams we used are specific humidity and boundary layer height. The specific humidity is expected to be influenced by the albedo, as the amount of available net radiation is relevant for the amount of evapotranspiration, which influences the specific humidity. The boundary layer height is relevant for specific humidity as well, as it determines the size of the mixing volume in which evaporated water is distributed. The state parameters are thus expected to have a profound influence on the cost function.
The optimised model shows a very good fit to the observations (Fig. 3), and the original parameter values are found with a precision of at least five decimal places (Table 2; fewer than five decimal places are shown). This result indicates that the optimisation works well in such simple situations.
To test the system for a more complex problem, we change from two parameters in the state to five (Table 2; second block), while keeping the same observations. Again, based on physical reasoning, the added parameters are expected to influence the cost function. However, one might expect that the observations might not be able to uniquely constrain the state, i.e. parameter interdependency issues might arise for this setup (equifinality). As an example, a reduction in soil moisture might influence the observations in a similar way as a reduction in stomatal conductance does. Mathematically, this means that multiple nearly equivalent local minima might be present in the cost function.
Also for this more complex case, we manage to obtain a very good fit (Fig. 3c and d). However, inspecting the obtained parameter values (Table 2; second block; column labelled “Two obs streams”), we see that the optimisation does not fully converge to the true state. This is likely caused by parameter interdependency, as described earlier. To solve this, extra observation streams need to be added that allow the disentangling of the effects of the different parameters in the state. We have tested this in the third experiment by adding the sensible and latent heat flux, the temperature at 2 m height, and the surface CO_{2} flux to the observations. With this more complete set of observations, the true parameter values are found with a precision of at least four decimal places (column labelled “Six obs streams” in Table 2). As a side experiment, not included in Table 2, we have run the same experiment as before but using the numerical derivative described in Sect. 3.5. In this case, convergence is notably slower, e.g. more than 6 times as many iterations were needed to reduce the cost function to less than 0.1 % of its prior value. This indicates that, at least for this more complex nonlinear optimisation case, an analytical gradient outperforms our simple numerical gradient calculation.
In the fourth main experiment, we increase the number of state parameters to 10 (Table 2). To further constrain the parameters, we expand our set of observation streams with the CO_{2} mixing ratio measured at 20 m height. Also, for this complex setup, the framework managed to find the true values, with a precision of one decimal place for the initial boundary layer height and more for the other parameters.
These tests show that (at least for the range of tests performed) the framework is able to find the minimum of the cost function well. Additionally, we performed another observation system simulation experiment to test the framework's usefulness in a more realistic situation with noise on the observations. The result can be seen in Fig. 3e–h. The framework finds a good fit to the observations, even though it has now become impossible to fit every single (perturbed) observation. The posterior root mean squared error is approximately equal to (slightly smaller than) the observational error standard deviation for all observation streams (Table 3). This confirms the visually good fit in Fig. 3e–h, given that these observational error standard deviations were used to create the random perturbations of the observations. The ${\chi}_{\mathrm{r}}^{\mathrm{2}}$ statistic of the optimisation is close to 1 (0.83), indicating no strong over- or underfitting of the observations.
8.2 Posterior uncertainties and bias correction
In this section, we describe two additional experiments that focus on the validation of the two bias correction methods (Sect. 3) and the posterior uncertainty in the optimised parameters. As a starting point, we take the 10-parameter-state experiment with unperturbed observations from the previous section. We add two additional state parameters related to bias correction (Table 4). We have tested that, for this setup, ICLASS is able to retrieve the true parameters well (Table 4; column labelled "Optimised unp."), including the parameters related to bias correction (Frac_{H} and ${s}_{{\mathrm{CO}}_{\mathrm{2}}}$). We now perturb the true observations by adding, to each observation y_{i}, a random number from a normal distribution with mean 0 and standard deviation 1.5 σ_{O,i}. The factor 1.5 means that we misspecify the observational errors in the framework, as can happen in real situations too. In this OSSE, we will not simply attempt to fit the observations as well as possible, but we want to employ both observations and prior information. We therefore include the background part of the cost function. The prior error standard deviations are given in Table 4; the a priori error covariance matrix was chosen to be diagonal. As we want an estimate of the posterior parameter uncertainty, we ran ICLASS using an ensemble of 100 perturbed members. For simplicity, we take the total observational error equal to the measurement error, as for the previous OSSEs.
From the 101 members in the ensemble, the unperturbed optimisation has the lowest posterior cost function. Figure 4 indicates that the optimised solution shows a strongly improved fit with the (adapted) observations compared to the prior fit. As expected, due to the perturbations on the observations and the background part of the cost function, the true parameters are not found anymore. For slightly more than half of the state parameters, the optimised parameter value is located less than 1 posterior standard deviation away from the true value (Table 4; rightmost column). The largest relative deviation from the truth is for w_{g}, for which the posterior value is approximately 2 standard deviations away from the true value. The parameter uncertainty, as expressed by the standard deviations of the state parameters in Table 4, is reduced for every parameter after the optimisation. In addition, the posterior root mean squared error (RMSE) is reduced for every observation stream after optimisation (Table 5). We see that the posterior RMSE now approaches 1.5 σ_{O}, which can be explained by the intentional misspecification of errors. For the same reason, the values of the partial reduced chi-square statistics (${\chi}_{\mathrm{r},j}^{\mathrm{2}}$) should be near 2.25 (1.5 × 1.5) for all observation streams, except CO$_{\mathrm{2},\mathrm{20}}$. The latter is more complex due to the presence of the bias correction scaling factor (${s}_{{\mathrm{CO}}_{\mathrm{2}}}$). Indeed, we find ${\chi}_{\mathrm{r},j}^{\mathrm{2}}$ values ranging between 2.09 and 2.17 for all streams, except CO$_{\mathrm{2},\mathrm{20}}$ (Table 5). Note that, for the analysis of the posterior uncertainty, we have only used members with a posterior ${\chi}_{\mathrm{r}}^{\mathrm{2}}$ equal to or lower than 4.5. Therefore, only 38 of the 100 perturbed members were used.
We conclude here that ICLASS performs well in all our tests with artificial data, and we present a more extensive application with real observations in the next section.
9.1 Setup
For this example of how the framework can be used, we used data obtained at Cabauw, the Netherlands, from 25 September 2003. A 213 m high measurement tower is present at this location (Bosveld et al., 2020). This day is chosen here since it has been used in several studies before (Vilà-Guerau de Arellano et al., 2012; Casso-Torralba et al., 2008), and we consider it to be a golden day (Sect. 1), for which the CLASS model is expected to perform at its best.
The set of observation streams we used is given in Table 6, and a description of these streams is given in Table A2. Some of the observations we used are not directly part of the Cabauw dataset; we have derived them from other observations in the same dataset. The parameters we optimise (the state) are given in Table 7, and a description of these parameters can be found in Table A1. The model settings, prior state, and non-state parameters are to a large extent based on Vilà-Guerau de Arellano et al. (2012). The modelled period is from 09:00 to 15:00 UTC, and we activated the surface layer option in the model. We ran ICLASS in the ensemble mode with 175 members, and an ensemble member was considered successful when having a final ${\chi}_{\mathrm{r}}^{\mathrm{2}}\le \mathrm{2.0}$ (Eq. 19). The prior error standard deviations are given in Table 7. The a priori error covariance matrix was assumed to be diagonal. The representation errors were set to 0. The chosen weights and model and measurement error standard deviations are given in the supplementary material, and further detailed settings of the optimisation can be found via the Zenodo link in the code and data availability section.
9.2 Computational costs
To give an idea of the computational costs involved, we added a timer to an optimisation. An optimisation like the one described in this section, but without the use of an ensemble, took in our specific case about 1000 times as long as an individual simulation with the original CLASS model. However, in our test, an individual simulation with this model took less than 1 s on a single CPU, so the computational cost is still relatively small. The total time an optimisation takes depends on how well the optimisation converges and on the configuration. Using an ensemble increases the computational costs, but multiple ensemble members can be run in parallel on multiprocessor computing systems. An optimisation with an ensemble, like the one described in this section, took in our specific case approximately 2 h using 32 cores of a high-performance computing cluster; the total summed CPU time was about 45 h.
9.3 Model fit
Figure 5 shows that the optimised run fits the shown subset of observations much better than the prior run does. For temperature, a total of seven observation heights are included, but we only show three for brevity. The cost function is reduced from 4702 to 126. The ${\mathit{\chi}}_{\mathrm{r}}^{\mathrm{2}}$ goodness-of-fit statistic has a value of 0.80 for the optimised run, indicating slight overfitting (or non-optimal error specifications). For all observation streams, the root mean squared error (Sect. 5; Table 6) and the absolute value of the mean bias error (Eq. 23; not shown) are reduced after optimisation. The ratio of model to observation variance moves closer to 1 for the vast majority of observation streams, except for three of the specific humidity streams and the sensible heat flux.
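The three fit diagnostics used above (RMSE, mean bias error, and the model-to-observation variance ratio) can be computed as follows; the arrays are illustrative, not Cabauw data:

```python
import numpy as np

def fit_metrics(model, obs):
    """RMSE, mean bias error, and model-to-observation variance ratio."""
    model, obs = np.asarray(model, float), np.asarray(obs, float)
    rmse = np.sqrt(np.mean((model - obs) ** 2))
    mbe = np.mean(model - obs)                 # Eq. (23)-style bias
    var_ratio = np.var(model) / np.var(obs)    # ideally close to 1
    return rmse, mbe, var_ratio

obs = np.array([1.0, 2.0, 3.0, 4.0])       # illustrative observations
sim = np.array([1.1, 1.9, 3.2, 4.1])       # illustrative model values

rmse, mbe, var_ratio = fit_metrics(sim, obs)
print(rmse, mbe, var_ratio)
```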
9.4 Bias correction
Let us now turn to the energy balance (Fig. 6). As described in Sect. 3.3, we forced the energy balance in the observations to close by partitioning the energy balance gap partly to H and partly to LE. From Fig. 6, it is clear that, in the original observations, the sum of H and LE is generally lower than in both the prior and posterior model runs. Both the model and the corrected observations obey energy balance closure. When the difference in R_{n}−G between model and observations is small (Eq. 7), we may therefore normally expect the model to fit the corrected observations more easily than the original ones. Also in this example, the posterior fit is improved compared to the prior, especially for the sensible heat flux (Table 6). We note that most of the additional energy is partitioned to LE (Table 7 and Fig. 6). The energy balance closure residual (LHS of Eq. 7) is important to account for in this case, as the mean value of the residuals is only 11 % smaller than the mean of the measured (without applying Eq. 8) sensible heat flux. Note that, for some of the data points around noon, the energy balance correction tends to decrease the heat fluxes (Fig. 6). Inspection of a satellite image of that day revealed that this is likely caused by the presence of high clouds, causing a fast drop in net radiation. The measured heat fluxes react more slowly, however, leading to a negative value for some elements of the vector ε_{eb} in Eq. (7).
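A sketch of this type of energy balance correction is given below. Partitioning the residual in proportion to the Bowen ratio is one common choice; the actual weights used by ICLASS (Eq. 8) may differ, and all numbers are hypothetical:

```python
def close_energy_balance(rn, g, h, le):
    """Close the surface energy balance by partitioning the residual over
    H and LE while preserving the Bowen ratio (one common choice; the
    actual weights of Eq. 8 in ICLASS may differ)."""
    residual = rn - g - h - le          # LHS of Eq. (7)
    frac_h = h / (h + le)               # fraction of the residual given to H
    return h + frac_h * residual, le + (1.0 - frac_h) * residual

# Hypothetical midday fluxes in W m-2.
h_corr, le_corr = close_energy_balance(rn=450.0, g=50.0, h=100.0, le=200.0)
print(h_corr, le_corr)  # their sum now equals Rn - G = 400 W m-2
```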
As is the case for the surface energy balance closure problem, there can also be biases in the CO_{2} flux eddy covariance observations (Liu et al., 2006). Here we have neglected this for simplicity.
^{*} The 2 m temperature and 2 m dew point observations (the latter used for deriving q_{2}) were actually taken at 1.5 m. We do our calculations as if the observations were taken at 2 m. This rounding might have introduced some bias into the T_{2} and q_{2} observations. We tested the impact of this by performing a corrected optimisation (without an ensemble). The state parameters resulting from this optimisation were all within the 2 standard deviation posterior uncertainty range of the uncorrected optimisation (x±2σ_{post}), and 9 out of 14 parameters were within the 1 standard deviation uncertainty range (x±σ_{post}).
9.5 Optimised parameters
The optimised state is shown in Table 7. Advection of heat remains relatively close to 0 after optimisation (−0.13 ± 0.06 K h^{−1}, where 0.06 is σ_{post}). This is slightly outside the range of 0.1–0.3 K h^{−1} found by Casso-Torralba et al. (2008), who analysed the same day (using a longer time period). The result can be considered in agreement with Bosveld et al. (2004), who concluded that large-scale heat advection is negligible for this day. For CO_{2}, however, we find an advection of 5.8 ± 1.9 ppm h^{−1} (where ppm is parts per million), which is higher than what was found by Casso-Torralba et al. (2008, their Fig. 9). We return to this briefly in Sect. 9.6. The two parameters that deviate the most from their prior values (normalised with the prior standard deviation; Table 7) are ${\mathit{\gamma}}_{{\mathrm{CO}}_{\mathrm{2}}}$ and R_{10}. Both parameters are linked to the CO_{2} budget: ${\mathit{\gamma}}_{{\mathrm{CO}}_{\mathrm{2}}}$ influences the amount of CO_{2} entrained from the free troposphere, while R_{10} influences the amount of CO_{2} entering the mixed layer via respiration. From Fig. 5c, it is clear that the prior run has a net surface CO_{2} flux that is much too strong compared to the observations, which explains why R_{10} is strongly increased in the optimised state. The two main pathways through which the CO_{2} flux can decrease (in magnitude) are either increasing R_{10} or decreasing the conductance (via α_{sto}). However, the latter also affects the model fit of the latent heat flux. Unfortunately, separate measurements of gross primary production (GPP) and respiration (e.g. derived from carbonyl sulfide observations; Whelan et al., 2018) are not available in our dataset; these could help to further constrain the CO_{2}-related parameters. In a more detailed study, the parameter estimates might still be improved by, for example, better estimating the vector p and the observational errors.
9.6 Posterior uncertainty and correlations
The discussion above illustrates that not all of the parameters in the state may be assumed to be fully independent. To analyse the posterior correlations, we constructed a correlation matrix (Sect. 4.2), shown in Fig. 7. Of the 174 perturbed ensemble members, 150 were successful and were used for the calculation of statistics. As expected, the correlation between α_{sto} and R_{10} is strongly positive (0.94), which can likely be explained by their opposite effects on the CO_{2} flux, as explained in the previous paragraph. Some of the correlations between parameters can be relatively complex; e.g. a correlation between two posterior parameters might involve a third parameter that correlates with both.
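Deriving a posterior correlation matrix from an ensemble of optimised states is straightforward with NumPy; the sketch below uses a synthetic 150-member ensemble with an imposed α_{sto}–R_{10} correlation (the parameter values are hypothetical, not ICLASS output):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic posterior states of 150 successful members, three parameters
# per member; r10 is constructed to correlate with alpha_sto.
n_members = 150
alpha_sto = rng.normal(0.0, 1.0, n_members)
r10 = 0.9 * alpha_sto + rng.normal(0.0, 0.4, n_members)
adv_co2 = rng.normal(0.0, 1.0, n_members)   # roughly independent of the others

states = np.column_stack([alpha_sto, r10, adv_co2])   # members in rows

# rowvar=False: columns are the variables (parameters), rows the members.
corr = np.corrcoef(states, rowvar=False)
print(np.round(corr, 2))
```

The same call on half of the members reproduces the subsampling robustness check described below.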
Coming back to the discrepancy in CO_{2} advection between our analysis and the results of Casso-Torralba et al. (2008), we note that the adv${}_{{\mathrm{CO}}_{\mathrm{2}}}$ parameter is relatively strongly correlated with both the ${\mathrm{\Delta}}_{{\mathrm{CO}}_{\mathrm{2}}}$ and ${\mathit{\gamma}}_{{\mathrm{CO}}_{\mathrm{2}}}$ parameters (Fig. 7; correlations of −0.65 and −0.80, respectively). This may indicate that, with the current set of observations we incorporate, entrainment from the free troposphere is hard to disentangle from advection in shaping the CO_{2} budget. Differences in how entrainment is handled by Casso-Torralba et al. (2008) might explain part of the difference in estimated CO_{2} advection.
To check to what extent the obtained correlations are robust and independent of the selected number of ensemble members, we reconstructed the correlation matrix using only half of the successful perturbed members (75 of the 150). The average absolute difference between the off-diagonal entries obtained with the subsample and those obtained with the full successful perturbed ensemble amounts to 0.05, with a maximum of 0.23 for one entry. This suggests that using 150 members in the correlation analysis leads to a reasonably robust estimate of the posterior correlations.
Another option we explore is to analyse the posterior probability density functions (pdfs) of the successful perturbed members in the ensemble. As an example, we show the pdfs for two parameters, adv_{θ} and γ_{q} (Fig. 8). For adv_{θ}, the posterior uncertainty is clearly reduced compared to the prior, as the posterior pdf is markedly narrower. For γ_{q} (the free-tropospheric specific humidity lapse rate), however, there is no clear reduction in uncertainty. The wide posterior pdf implies that similar results can be obtained over a relatively wide range of γ_{q}, possibly by perturbing other parameters with a similar effect. To further constrain this parameter, more specific observation streams would need to be added, possibly from radiosondes. ICLASS can therefore also help in the planning of observational campaigns, through observation system simulation experiments (similar to, e.g., Ye et al., 2022), by determining beforehand what types of observations are needed to better constrain the processes represented in the model.
10 Concluding discussion
We have presented a description of ICLASS, a variational Inverse modelling framework for the Chemistry Land-surface Atmosphere Soil Slab model. This framework serves as a tool to study the atmospheric boundary layer and/or land–atmosphere exchange. It avoids the need for manual trial-and-error in choosing parameter values for the model when fitting observations, thereby providing more objectivity. Some degree of subjectivity remains, however, as the proper specification of errors is not always simple (e.g. Rödenbeck et al., 2003); the use of more advanced error estimation methods can mitigate this. The strongly nonlinear model around which the framework is built makes the optimisation challenging. In ICLASS, this challenge is tackled in two main ways:

The use of an analytical gradient of the cost function, computed with the model adjoint, which allows for more precise gradient calculations.

The possibility of running ICLASS in a Monte Carlo fashion, which involves perturbing the prior state vector and the (scaled) observations. When a single optimisation does not converge, the use of an ensemble can provide a solution.
The latter way of running ICLASS also has the advantage that posterior error statistics can be obtained, which is of paramount importance in inverse modelling.
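The consistency of an adjoint-based gradient is typically verified against finite differences. The sketch below applies this standard test to a toy two-parameter model and quadratic cost; the real ICLASS model and adjoint are, of course, far larger:

```python
import numpy as np

obs = np.array([1.2, 0.7])                     # hypothetical observations

def model(x):
    # Toy nonlinear forward model with two parameters.
    return np.array([x[0] * x[1], np.sin(x[0])])

def cost(x):
    r = model(x) - obs
    return 0.5 * r @ r

def gradient(x):
    # Adjoint-style gradient: transpose of the tangent linear (Jacobian)
    # applied to the residual.
    jac = np.array([[x[1], x[0]],
                    [np.cos(x[0]), 0.0]])
    return jac.T @ (model(x) - obs)

x = np.array([0.8, 1.5])
eps = 1e-6
fd = np.array([(cost(x + eps * e) - cost(x - eps * e)) / (2.0 * eps)
               for e in np.eye(2)])

print(np.allclose(gradient(x), fd, atol=1e-5))  # True when adjoint and model agree
```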
The model is relatively simple yet contains the physics to model the essentials of the convective boundary layer and of land surface–atmosphere exchange. Its simplicity, however, means that the model does not perform well in every situation. The best performance can be expected for days when the boundary layer resembles a prototypical convective boundary layer. We have shown the usefulness of ICLASS by applying it to a golden day at a Dutch grassland site where extensive data are available. The fit to several streams of observations simultaneously was greatly improved in the posterior compared to the prior. We have to keep in mind, however, that we cannot expect a relatively simple model to capture all small-scale processes playing a role in the convective boundary layer and in land surface–atmosphere exchange (e.g. heterogeneous surface heating, heterogeneous evaporation, and the influence of individual thermals).
Key strengths of this framework are that observations from several information streams can be used simultaneously and that surface layer profiles can be taken into account. The framework allows the integration of knowledge from ecosystem-level studies (fluxes) and global studies (mixing ratios). It can be seen as a tool that maximises the use of available observational data at the CBL/ecosystem level (e.g. along tall towers like the Cabauw tower). Observation system simulation experiments using ICLASS can also help in the planning of observational campaigns, determining in advance which observation streams are needed to better constrain model processes. Another feature of the framework is its capacity to correct observations for biases. A specific bias correction system for the energy balance closure problem is implemented; the energy balance residual was shown to be substantial at the Dutch grassland site in our application example. Correcting for biases is critical in inverse modelling to prevent bias errors from propagating into parameter estimates. The correction of biases is, however, a very complex topic. There are limits to the level of complexity that our bias correction methods can handle, and ICLASS cannot be expected to deal completely with all bias issues.
ICLASS is computationally relatively cheap to run and can be extended in the future by, for example, incorporating a more detailed representation of the vegetation. Such an extension could further improve its ability to fit sets of observations at locations with a more complex vegetation structure.
The code is hosted on GitHub. The current version of ICLASS is available from https://github.com/PBosmanatm/ICLASS (last access: 9 December 2022) under a GNU General Public License, v3.0 or later. The adapted forward model and the ICLASS manual are also included in the GitHub repository. This repository is linked to Zenodo, which provides DOIs for released software. The release on which this reference paper is based can be found at https://doi.org/10.5281/zenodo.7239147 (Bosman and Krol, 2022) (GNU General Public License, v3.0 or later). The code and the data used for creating the plots with optimisation results in this paper, and for the content of the tables, are part of the downloadable material. The data used in this paper can differ somewhat from the most recent version of these data. Boundary layer height data were provided by Henk Klein Baltink (KNMI). These data (and all other data used) can be found via the Zenodo link. The newest version of the Cabauw data (except for boundary layer heights) can be found at the following locations: the CO_{2} mixing ratios are available from the ICOS (https://www.icos-cp.eu/data-products/ERE9-9D85, https://doi.org/10.18160/ERE9-9D85, Drought 2018 Team and ICOS Atmosphere Thematic Centre, 2020) and ObsPack (https://gml.noaa.gov/ccgg/obspack/, The Global Monitoring Laboratory of the National Oceanic and Atmospheric Administration, 2022) websites. Temperature, heat fluxes, and so on can be found at https://dataplatform.knmi.nl/dataset/?tags=Insitu&tags=CESAR (KNMI, 2022).
The supplement related to this article is available online at: https://doi.org/10.5194/gmd16472023supplement.
MCK and PJMB designed the study. PJMB performed the majority of the coding and adjoint model construction, with help from MCK. PJMB performed the numerical optimisations and wrote the paper, the latter with help from MCK.
The contact author has declared that neither of the authors has any competing interests.
Publisher's note: Copernicus Publications remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
This work is part of the COS-OCS project (http://cosocs.eu/, last access: 9 December 2022), which received funding from the European Research Council (ERC) under the European Union's Horizon 2020 Research and Innovation programme (grant no. 742798). Part of this work was carried out on the Dutch national e-infrastructure with the support of the SURF Cooperative. We are grateful to Jordi Vilà-Guerau de Arellano, Fred Bosveld, Arnoud Frumau, and Henk Klein Baltink for helping us with and/or providing (directly or indirectly) the Cabauw data. We would also like to thank the developers of the CLASS model, as well as the three anonymous reviewers for helping us to improve the paper. We furthermore thank Chiel van Heerwaarden, Linda Kooijmans, and Jordi Vilà-Guerau de Arellano for providing comments on an earlier version of this work.
This research has been supported by the European Research Council under the European Union's Horizon 2020 Research and Innovation programme (COS-OCS; grant no. 742798).
This paper was edited by Jinkyu Hong and reviewed by three anonymous referees.
Barbaro, E., Vilà-Guerau de Arellano, J., Ouwersloot, H. G., Schröter, J. S., Donovan, D. P., and Krol, M. C.: Aerosols in the convective boundary layer: Shortwave radiation effects on the coupled land-atmosphere system, J. Geophys. Res.-Atmos., 119, 5845–5863, https://doi.org/10.1002/2013JD021237, 2014.
Bastrikov, V., MacBean, N., Bacour, C., Santaren, D., Kuppel, S., and Peylin, P.: Land surface model parameter optimisation using in situ flux data: comparison of gradient-based versus random search algorithms (a case study using ORCHIDEE v1.9.5.2), Geosci. Model Dev., 11, 4739–4754, https://doi.org/10.5194/gmd-11-4739-2018, 2018.
Bergamaschi, P., Frankenberg, C., Meirink, J. F., Krol, M., Villani, M. G., Houweling, S., Dentener, F., Dlugokencky, E. J., Miller, J. B., Gatti, L. V., Engel, A., and Levin, I.: Inverse modeling of global and regional CH_{4} emissions using SCIAMACHY satellite retrievals, J. Geophys. Res.-Atmos., 114, 1–28, https://doi.org/10.1029/2009JD012287, 2009.
Bosman, P. and Krol, M.: PBosmanatm/ICLASS: ICLASS v1.1, Zenodo [code and data set], https://doi.org/10.5281/zenodo.7239147, 2022.
Bosveld, F., Van Meijgaard, E., Moors, E., and Werner, C.: Interpretation of flux observations along the Cabauw 200 m meteorological tower, in: 16th Symposium on Boundary Layers and Turbulence 6.18, 1–4, Portland, USA, https://ams.confex.com/ams/BLTAIRSE/webprogram/Paper78632.html (last access: 9 December 2022), 2004.
Bosveld, F. C., Baas, P., Beljaars, A. C. M., Holtslag, A. A. M., de Arellano, J. V.-G., and van de Wiel, B. J. H.: Fifty Years of Atmospheric Boundary-Layer Research at Cabauw Serving Weather, Air Quality and Climate, Bound.-Lay. Meteorol., 177, 583–612, https://doi.org/10.1007/s10546-020-00541-w, 2020.
Brasseur, G. and Jacob, D.: Inverse Modeling for Atmospheric Chemistry, in: Modeling of Atmospheric Chemistry, Cambridge University Press, Cambridge, 487–537, https://doi.org/10.1017/9781316544754.012, 2017.
Casso-Torralba, P., de Arellano, J. V.-G., Bosveld, F., Soler, M. R., Vermeulen, A., Werner, C., and Moors, E.: Diurnal and vertical variability of the sensible heat and carbon dioxide budgets in the atmospheric surface layer, J. Geophys. Res.-Atmos., 113, D12119, https://doi.org/10.1029/2007JD009583, 2008.
Chevallier, F.: Impact of correlated observation errors on inverted CO_{2} surface fluxes from OCO measurements, Geophys. Res. Lett., 34, L24804, https://doi.org/10.1029/2007GL030463, 2007.
Chevallier, F., Fisher, M., Peylin, P., Serrar, S., Bousquet, P., Bréon, F.-M., Chédin, A., and Ciais, P.: Inferring CO_{2} sources and sinks from satellite observations: Method and application to TOVS data, J. Geophys. Res., 110, D24309, https://doi.org/10.1029/2005JD006390, 2005.
Chevallier, F., Bréon, F.-M., and Rayner, P. J.: Contribution of the Orbiting Carbon Observatory to the estimation of CO_{2} sources and sinks: Theoretical study in a variational data assimilation framework, J. Geophys. Res., 112, D09307, https://doi.org/10.1029/2006JD007375, 2007.
Chevallier, F., Ciais, P., Conway, T. J., Aalto, T., Anderson, B. E., Bousquet, P., Brunke, E. G., Ciattaglia, L., Esaki, Y., Fröhlich, M., Gomez, A., Gomez-Pelaez, A. J., Haszpra, L., Krummel, P. B., Langenfelds, R. L., Leuenberger, M., Machida, T., Maignan, F., Matsueda, H., Morguí, J. A., Mukai, H., Nakazawa, T., Peylin, P., Ramonet, M., Rivier, L., Sawa, Y., Schmidt, M., Steele, L. P., Vay, S. A., Vermeulen, A. T., Wofsy, S., and Worthy, D.: CO_{2} surface fluxes at grid point scale estimated from a global 21 year reanalysis of atmospheric measurements, J. Geophys. Res., 115, D21307, https://doi.org/10.1029/2010JD013887, 2010.
Claerbout, J. F.: Earth soundings analysis: processing versus inversion, Blackwell Scientific Publications, Cambridge, http://sep.stanford.edu/sep/prof/pvi.pdf (last access: 9 December 2022), 2004.
Commane, R., Meredith, L. K., Baker, I. T., Berry, J. A., Munger, J. W., Montzka, S. A., Templer, P. H., Juice, S. M., Zahniser, M. S., and Wofsy, S. C.: Seasonal fluxes of carbonyl sulfide in a midlatitude forest, P. Natl. Acad. Sci. USA, 112, 14162–14167, 2015.
Doicu, A., Trautmann, T., and Schreier, F.: Numerical Regularization for Atmospheric Inverse Problems, Springer Praxis Books in Environmental Sciences, https://doi.org/10.1007/978-3-642-05439-6, 2010.
Drought 2018 Team and ICOS Atmosphere Thematic Centre: Drought2018 atmospheric CO_{2} Mole Fraction product for 48 stations (96 sample heights), https://doi.org/10.18160/ERE9-9D85, 2020.
Elizondo, D., Faure, C., and Cappelaere, B.: Automatic versus Manual differentiation for nonlinear inverse modeling, Tech. rep., INRIA (Institut National de Recherche en Informatique et en Automatique), https://hal.inria.fr/inria-00072666/document (last access: 9 December 2022), 2000.
Foken, T.: The Energy Balance Closure Problem: An Overview, Ecol. Appl., 18, 1351–1367, 2008.
Friend, A. D.: Modelling Canopy CO_{2} Fluxes: Are "Big-Leaf" Simplifications Justified?, Global Ecol. Biogeogr., 10, 603–619, 2001.
Henze, D. K., Hakami, A., and Seinfeld, J. H.: Development of the adjoint of GEOS-Chem, Atmos. Chem. Phys., 7, 2413–2433, https://doi.org/10.5194/acp-7-2413-2007, 2007.
Honnorat, M., Marin, J., Monnier, J., and Lai, X.: Dassflow v1.0: a variational data assimilation software for 2D river flows, Tech. rep., INRIA (Institut National de Recherche en Informatique et en Automatique), http://hal.inria.fr/inria-00137447 (last access: 9 December 2022), 2007.
Hunter, J.: Matplotlib: A 2D Graphics Environment, Comput. Sci. Eng., 9, 90–95, https://doi.org/10.1109/MCSE.2007.55, 2007.
Jacobs, C.: Direct impact of atmospheric CO_{2} enrichment on regional transpiration, PhD thesis, Wageningen University, ISBN 9789054852506, 1994.
Jarvis, P.: The interpretation of the variations in leaf water potential and stomatal conductance found in canopies in the field, Philos. T. Roy. Soc. Lond. B, 273, 593–610, https://doi.org/10.1098/rstb.1976.0035, 1976.
Kaminski, T., Knorr, W., Scholze, M., Gobron, N., Pinty, B., Giering, R., and Mathieu, P.-P.: Consistent assimilation of MERIS FAPAR and atmospheric CO_{2} into a terrestrial vegetation model and interactive mission benefit analysis, Biogeosciences, 9, 3173–3184, https://doi.org/10.5194/bg-9-3173-2012, 2012.
Krol, M. C., Meirink, J. F., Bergamaschi, P., Mak, J. E., Lowe, D., Jöckel, P., Houweling, S., and Röckmann, T.: What can ^{14}CO measurements tell us about OH?, Atmos. Chem. Phys., 8, 5033–5044, https://doi.org/10.5194/acp-8-5033-2008, 2008.
Liu, H., Randerson, J. T., Lindfors, J., Massman, W. J., and Foken, T.: Consequences of incomplete surface energy balance closure for CO_{2} fluxes from open-path CO_{2}/H_{2}O infrared gas analysers, Bound.-Lay. Meteorol., 120, 65–85, https://doi.org/10.1007/s10546-005-9047-z, 2006.
Ma, J., Kooijmans, L. M. J., Cho, A., Montzka, S. A., Glatthor, N., Worden, J. R., Kuai, L., Atlas, E. L., and Krol, M. C.: Inverse modelling of carbonyl sulfide: implementation, evaluation and implications for the global budget, Atmos. Chem. Phys., 21, 3507–3529, https://doi.org/10.5194/acp-21-3507-2021, 2021.
Mäkelä, J., Knauer, J., Aurela, M., Black, A., Heimann, M., Kobayashi, H., Lohila, A., Mammarella, I., Margolis, H., Markkanen, T., Susiluoto, J., Thum, T., Viskari, T., Zaehle, S., and Aalto, T.: Parameter calibration and stomatal conductance formulation comparison for boreal forests with adaptive population importance sampler in the land surface model JSBACH, Geosci. Model Dev., 12, 4075–4098, https://doi.org/10.5194/gmd-12-4075-2019, 2019.
Margulis, S. A. and Entekhabi, D.: A Coupled Land Surface–Boundary Layer Model and Its Adjoint, J. Hydrometeorol., 2, 274–296, https://doi.org/10.1175/1525-7541(2001)002<0274:ACLSBL>2.0.CO;2, 2001a.
Margulis, S. A. and Entekhabi, D.: Feedback between the land surface energy balance and atmospheric boundary layer diagnosed through a model and its adjoint, J. Hydrometeorol., 2, 599–620, https://doi.org/10.1175/1525-7541(2001)002<0599:FBTLSE>2.0.CO;2, 2001b.
McNorton, J., Wilson, C., Gloor, M., Parker, R. J., Boesch, H., Feng, W., Hossaini, R., and Chipperfield, M. P.: Attribution of recent increases in atmospheric methane through 3-D inverse modelling, Atmos. Chem. Phys., 18, 18149–18168, https://doi.org/10.5194/acp-18-18149-2018, 2018.
Meirink, J. F., Bergamaschi, P., and Krol, M. C.: Four-dimensional variational data assimilation for inverse modelling of atmospheric methane emissions: method and comparison with synthesis inversion, Atmos. Chem. Phys., 8, 6341–6353, https://doi.org/10.5194/acp-8-6341-2008, 2008.
Michalak, A. M., Hirsch, A., Bruhwiler, L., Gurney, K. R., Peters, W., and Tans, P. P.: Maximum likelihood estimation of covariance parameters for Bayesian atmospheric trace gas surface flux inversions, J. Geophys. Res.-Atmos., 110, D24107, https://doi.org/10.1029/2005JD005970, 2005.
Miller, S. M., Michalak, A. M., and Levi, P. J.: Atmospheric inverse modeling with known physical bounds: an example from trace gas emissions, Geosci. Model Dev., 7, 303–315, https://doi.org/10.5194/gmd-7-303-2014, 2014.
Monin, A. and Obukhov, A.: Basic laws of turbulent mixing in the atmosphere near the ground, Tr. Akad. Nauk SSSR Geophiz. Inst., 24, 163–187, 1954.
Nash, S. G.: A survey of truncated-Newton methods, J. Comput. Appl. Math., 124, 45–59, https://doi.org/10.1016/S0377-0427(00)00426-X, 2000.
Oncley, S. P., Foken, T., Vogt, R., Kohsiek, W., DeBruin, H. A. R., Bernhofer, C., Christen, A., van Gorsel, E., Grantz, D., Feigenwinter, C., Lehner, I., Liebethal, C., Liu, H., Mauder, M., Pitacco, A., Ribeiro, L., and Weidinger, T.: The Energy Balance Experiment EBEX-2000. Part I: overview and energy balance, Bound.-Lay. Meteorol., 123, 1–28, https://doi.org/10.1007/s10546-007-9161-1, 2007.
Ouwersloot, H. G., Vilà-Guerau de Arellano, J., Nölscher, A. C., Krol, M. C., Ganzeveld, L. N., Breitenberger, C., Mammarella, I., Williams, J., and Lelieveld, J.: Characterization of a boreal convective boundary layer and its impact on atmospheric chemistry during HUMPPA-COPEC-2010, Atmos. Chem. Phys., 12, 9335–9353, https://doi.org/10.5194/acp-12-9335-2012, 2012.
Raoult, N. M., Jupp, T. E., Cox, P. M., and Luke, C. M.: Land-surface parameter optimisation using data assimilation techniques: the adJULES system V1.0, Geosci. Model Dev., 9, 2833–2852, https://doi.org/10.5194/gmd-9-2833-2016, 2016.
Renner, M., Brenner, C., Mallick, K., Wizemann, H.-D., Conte, L., Trebs, I., Wei, J., Wulfmeyer, V., Schulz, K., and Kleidon, A.: Using phase lags to evaluate model biases in simulating the diurnal cycle of evapotranspiration: a case study in Luxembourg, Hydrol. Earth Syst. Sci., 23, 515–535, https://doi.org/10.5194/hess-23-515-2019, 2019.
Rödenbeck, C., Houweling, S., Gloor, M., and Heimann, M.: CO_{2} flux history 1982–2001 inferred from atmospheric data using a global inversion of atmospheric transport, Atmos. Chem. Phys., 3, 1919–1964, https://doi.org/10.5194/acp-3-1919-2003, 2003.
Ronda, R. J., de Bruin, H. A. R., and Holtslag, A.: Representation of the Canopy Conductance in Modeling the Surface Energy Budget for Low Vegetation, J. Appl. Meteorol., 40, 1431–1444, 2001.
Santaren, D., Peylin, P., Bacour, C., Ciais, P., and Longdoz, B.: Ecosystem model optimization using in situ flux observations: benefit of Monte Carlo versus variational schemes and analyses of the year-to-year model performances, Biogeosciences, 11, 7137–7158, https://doi.org/10.5194/bg-11-7137-2014, 2014.
Schulte, R., van Zanten, M., Rutledge-Jonker, S., Swart, D., Wichink Kruit, R., Krol, M., van Pul, W., and Vilà-Guerau de Arellano, J.: Unraveling the diurnal atmospheric ammonia budget of a prototypical convective boundary layer, Atmos. Environ., 249, 118153, https://doi.org/10.1016/J.ATMOSENV.2020.118153, 2021.
Schürmann, G. J., Kaminski, T., Köstler, C., Carvalhais, N., Voßbeck, M., Kattge, J., Giering, R., Rödenbeck, C., Heimann, M., and Zaehle, S.: Constraining a land-surface model with multiple observations by application of the MPI-Carbon Cycle Data Assimilation System V1.0, Geosci. Model Dev., 9, 2999–3026, https://doi.org/10.5194/gmd-9-2999-2016, 2016.
Stewart, J.: Modelling surface conductance of pine forest, Agr. Forest Meteorol., 43, 19–35, https://doi.org/10.1016/0168-1923(88)90003-2, 1988.
Stull, R. B.: An introduction to boundary layer meteorology, Kluwer Academic Publishers, Dordrecht, https://doi.org/10.1007/978-94-009-3027-8, 1988.
Tang, J. and Zhuang, Q.: Equifinality in parameterization of process-based biogeochemistry models: A significant uncertainty source to the estimation of regional carbon dynamics, J. Geophys. Res.-Biogeo., 113, https://doi.org/10.1029/2008JG000757, 2008.
Tarantola, A.: Inverse problem theory and methods for model parameter estimation, in: Other Titles in Applied Mathematics, Society for Industrial and Applied Mathematics (SIAM), Philadelphia, USA, https://doi.org/10.1137/1.9780898717921, 2005.
The Global Monitoring Laboratory of the National Oceanic and Atmospheric Administration: Observation Package (ObsPack) Data Products, https://gml.noaa.gov/ccgg/obspack/, last access: 9 December 2022.
The Royal Netherlands Meteorological Institute (KNMI): KNMI Data Platform, https://dataplatform.knmi.nl/dataset/?tags=Insitu&tags=CESAR, last access: 9 December 2022.
The SciPy community: scipy.optimize.fmin_tnc, https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.fmin_tnc.html, last access: 9 December 2022.
van der Walt, S., Colbert, S., and Varoquaux, G.: The NumPy Array: A Structure for Efficient Numerical Computation, Comput. Sci. Eng., 13, 22–30, https://doi.org/10.1109/MCSE.2011.37, 2011.
van Heerwaarden, C. C., Vilà-Guerau de Arellano, J., Moene, A. F., and Holtslag, A. A. M.: Interactions between dry-air entrainment, surface evaporation and convective boundary-layer development, Q. J. Roy. Meteor. Soc., 135, 1277–1291, https://doi.org/10.1002/qj.431, 2009.
van Heerwaarden, C. C., Vilà-Guerau de Arellano, J., Gounou, A., Guichard, F., and Couvreux, F.: Understanding the daily cycle of evapotranspiration: A method to quantify the influence of forcings and feedbacks, J. Hydrometeorol., 11, 1405–1422, https://doi.org/10.1175/2010JHM1272.1, 2010.
Vermeulen, A. T., Hensen, A., Popa, M. E., van den Bulk, W. C. M., and Jongejan, P. A. C.: Greenhouse gas observations from Cabauw Tall Tower (1992–2010), Atmos. Meas. Tech., 4, 617–644, https://doi.org/10.5194/amt-4-617-2011, 2011.
Vesala, T., Suni, T., Rannik, Ü., Keronen, P., Markkanen, T., Sevanto, S., Grönholm, T., Smolander, S., Kulmala, M., Ilvesniemi, H., Ojansuu, R., Uotila, A., Levula, J., Mäkelä, A., Pumpanen, J., Kolari, P., Kulmala, L., Altimir, N., Berninger, F., Nikinmaa, E., and Hari, P.: Effect of thinning on surface fluxes in a boreal forest, Global Biogeochem. Cy., 19, GB2001, https://doi.org/10.1029/2004GB002316, 2005.
Vilà-Guerau de Arellano, J., Van Heerwaarden, C. C., and Lelieveld, J.: Modelled suppression of boundary-layer clouds by plants in a CO_{2}-rich atmosphere, Nat. Geosci., 5, 701–704, https://doi.org/10.1038/ngeo1554, 2012.
Vilà-Guerau de Arellano, J., Van Heerwaarden, C. C., Van Stratum, B. J., and Van Den Dries, K.: Atmospheric boundary layer: Integrating air chemistry and land interactions, Cambridge University Press, https://doi.org/10.1017/CBO9781316117422, 2015.
Whelan, M. E., Lennartz, S. T., Gimeno, T. E., Wehr, R., Wohlfahrt, G., Wang, Y., Kooijmans, L. M. J., Hilton, T. W., Belviso, S., Peylin, P., Commane, R., Sun, W., Chen, H., Kuai, L., Mammarella, I., Maseyk, K., Berkelhammer, M., Li, K.-F., Yakir, D., Zumkehr, A., Katayama, Y., Ogée, J., Spielmann, F. M., Kitz, F., Rastogi, B., Kesselmeier, J., Marshall, J., Erkkilä, K.-M., Wingate, L., Meredith, L. K., He, W., Bunk, R., Launois, T., Vesala, T., Schmidt, J. A., Fichot, C. G., Seibt, U., Saleska, S., Saltzman, E. S., Montzka, S. A., Berry, J. A., and Campbell, J. E.: Reviews and syntheses: Carbonyl sulfide as a multi-scale tracer for carbon and water cycles, Biogeosciences, 15, 3625–3657, https://doi.org/10.5194/bg-15-3625-2018, 2018.
Wouters, H., Petrova, I. Y., van Heerwaarden, C. C., Vilà-Guerau de Arellano, J., Teuling, A. J., Meulenberg, V., Santanello, J. A., and Miralles, D. G.: Atmospheric boundary layer dynamics from balloon soundings worldwide: CLASS4GL v1.0, Geosci. Model Dev., 12, 2139–2153, https://doi.org/10.5194/gmd-12-2139-2019, 2019.
Ye, H., You, W., Zang, Z., Pan, X., Wang, D., Zhou, N., Hu, Y., Liang, Y., and Yan, P.: Observing system simulation experiment (OSSE)-quantitative evaluation of lidar observation networks to improve 3D aerosol forecasting in China, Atmos. Res., 270, 106069, https://doi.org/10.1016/j.atmosres.2022.106069, 2022.
Ziehn, T., Scholze, M., and Knorr, W.: On the capability of Monte Carlo and adjoint inversion techniques to derive posterior parameter uncertainties in terrestrial ecosystem models, Global Biogeochem. Cy., 26, GB3025, https://doi.org/10.1029/2011GB004185, 2012.
 Abstract
 Introduction
 Forward model
 Inverse modelling framework
 Error statistics
 Output
 Technical details of the code
 Adjoint model validation
 Inverse modelling validation: observation system simulation experiments
 Application example: CO_{2} and boundary layer meteorology at a Dutch grassland
 Concluding discussion
 Appendix A: Description of used parameters and observation streams
 Appendix B: Inverse-modelling variables
 Code and data availability
 Author contributions
 Competing interests
 Disclaimer
 Acknowledgements
 Financial support
 Review statement
 References
 Supplement