This work is distributed under the Creative Commons Attribution 4.0 License.
Estimation of above- and below-ground ecosystem parameters for the DVM-DOS-TEM v0.7.0 model using MADS v1.7.3: a synthetic case study
Abstract. The permafrost region contains a significant portion of the world's soil organic carbon, and its thawing, driven by accelerated Arctic warming, could lead to the substantial release of greenhouse gases, potentially disrupting the global climate system. Accurate predictions of carbon cycling in permafrost ecosystems hinge on the robust calibration of model parameters. However, manually calibrating numerous parameters in complex process-based models is labor-intensive and further complicated by equifinality – the presence of multiple parameter sets that can equally fit the observed data. Incorrect calibration can lead to unrealistic ecological predictions. In this study, we employed the Model Analysis and Decision Support (MADS) software package to automate and enhance the accuracy of parameter calibration for carbon dynamics within the coupled Dynamic Vegetation Model, Dynamic Organic Soil Model, and Terrestrial Ecosystem Model (DVM-DOS-TEM), a process-based ecosystem model designed for high-latitude regions. The calibration process involved adjusting rate-limiting parameters to accurately replicate observed carbon and nitrogen fluxes and stocks in both soil and vegetation. Gross primary production, net primary production, vegetation carbon, vegetation nitrogen, and soil carbon and nitrogen pools served as synthetic observations for a black spruce boreal forest ecosystem. To validate the efficiency of this new calibration method, we utilized model-generated synthetic observations. This study demonstrates the calibration workflow, offers an in-depth analysis of the relationships between parameters and synthetic observations, and evaluates the accuracy of the calibrated parameter values.
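The calibration strategy described in the abstract can be sketched in miniature: generate synthetic observations from a model with known "target" parameter values, perturb the initial guess, and recover the targets with a Levenberg-Marquardt optimizer. This is an illustrative toy, not the authors' actual DVM-DOS-TEM/MADS workflow; the model function, parameter names, and 50 % perturbation case are hypothetical stand-ins, and SciPy's `least_squares` is used in place of MADS.

```python
import numpy as np
from scipy.optimize import least_squares

def toy_model(params, t):
    """Hypothetical stand-in for an ecosystem model: a carbon flux that
    saturates with t, controlled by two rate-limiting parameters."""
    cmax, kn = params
    return cmax * t / (kn + t)

true_params = np.array([8.0, 2.5])          # known "target" values
t = np.linspace(0.1, 10.0, 50)
synthetic_obs = toy_model(true_params, t)   # model-generated synthetic observations

# Perturb the initial guess by 50 %, analogous to one of the paper's test cases
initial_guess = true_params * 1.5

result = least_squares(
    lambda p: toy_model(p, t) - synthetic_obs,  # residuals to minimize
    initial_guess,
    method="lm",                                # Levenberg-Marquardt
)
recovered = result.x  # should land back on true_params for this easy problem
```

Because the synthetic observations are noise-free and generated by the same model, the optimizer can recover the targets exactly; the interesting question, as in the paper, is how far the initial guess can be perturbed before convergence fails.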
Status: open (until 03 Jan 2025)
RC1: 'Comment on gmd-2024-158', Anonymous Referee #1, 25 Nov 2024
The manuscript by Jafarov et al. presents an automated parameter calibration method built on the MADS package, with the aim of improving the calibration of a set of carbon dioxide and nitrogen rate parameters in high-latitude regions. The method uses an initial guess as a seed for the calibration, and the manuscript discusses the accuracy of the parametric calibration for four cases in which the parameters are perturbed by 10 %, 20 %, 50 %, and 90 %. The accuracy of the obtained calibration is then assessed by comparison with the target values.
In my opinion, this manuscript presents an interesting and relevant approach to parameter calibration that can be widely used in pan-Arctic environments, where site information is scarce (particularly in winter) and the sensitivity of models to certain parameters is very high. The approach to the problem is clear, as are the methodology used and the steps followed to obtain the results. The conclusions are well supported by the results. The abstract reflects the content of the paper, and the title is also adequate to the content.
I do have, however, one major comment on the approach used. The manuscript underlines the importance of calibrating the model against data. Moreover, the methods section specifies that observational data are available for the site selected for the calibration. Despite this, the authors choose to calibrate against synthetic data. I can imagine the reason for this is to simplify the perturbation parametrization, but since the manuscript underlines the importance of the first guess for the convergence of the calibration, I wonder whether this choice was made to guarantee that the results converge. In any case, given its obvious importance, I think it is necessary to run the calibration method against the available observed data. If the results converge, a comparison with the synthetic-data calibration, and with the differences between the synthetic and observed data themselves, would be a valuable addition to the manuscript. If, on the contrary, the calibration with observed data diverges for all perturbations, that would limit the applicability of the method. Convergence for at least some of the perturbations would give valuable information about the applicable range of perturbation and the required accuracy of the first guess. In any case, I believe that in order to discuss the value of the method presented, the calibration with observed data is necessary.
In addition, I have some individual minor comments:
- In subsection 2.1, "Synthetic data for Black Spruce forest site", the setup is presented as a forest community type (CMT). I assumed this CMT is composed of four plant functional types (PFTs) and one "soil" type, based on the information in Tables 1 and 2. This classification is only introduced later, in Section 2.2 (lines 166 to 170). For clarity, I would recommend moving this paragraph, or a version of it, from 2.2 to 2.1, before the tables are referenced.
- The caption of Figure 1 includes a reference to the MADS website; I think it would be better to move this link to the data and model availability section (Section 6).
- On line 242, the acronym definition for the Levenberg-Marquardt algorithm is not necessary, as it was already defined on line 223.
- In the results section, the first results referenced (line 304) are located in the supplement, while the first results in the manuscript itself are presented at line 306. My understanding is that supplementary materials should be provided to support the information in the manuscript but should not be an integral and necessary part of it. I can understand that adding figures S2 to S5 to the main manuscript would extend it significantly and unnecessarily, as their information is an extension of the results presented in Figures 3, 4, and 5. I would therefore recommend reorganizing the text so that the supplementary figures are presented after the figures in the main manuscript, as an additional source of information rather than a result per se.
- Figures 3, 4, and 5 use the same colour range for widely different value ranges. In my opinion this is confusing: when comparing results across variance levels, higher values under a higher variance can visually appear lower than lower values under a lower variance (e.g., in Figure 3, the Cleaf value for the 20 % variance in column 1 appears lower than the values of the same parameter for the 90 % variance in columns 2 and 3). I would recommend using a unified colour scale so that values are clearly inter-comparable between variances. As the ranges of the colour scales vary strongly with the variance, I would recommend using value-separated colour ranges for the error score (e.g., for Figures 3a-d: 0-1 blue, 1-5 orange, 5-20 red).
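The value-separated colour scale suggested in the last comment can be expressed with a discrete colour mapping. The sketch below uses matplotlib's `BoundaryNorm` with the reviewer's example bins (0-1, 1-5, 5-20) and colours; the bin edges and colour choices are the reviewer's illustration, not values from the manuscript.

```python
from matplotlib.colors import BoundaryNorm, ListedColormap

# Fixed error-score bins shared by all panels, regardless of variance level
bounds = [0.0, 1.0, 5.0, 20.0]
cmap = ListedColormap(["tab:blue", "tab:orange", "tab:red"])
norm = BoundaryNorm(bounds, cmap.N)

# Each heat map would then share the same mapping, e.g.:
#   ax.imshow(error_scores, cmap=cmap, norm=norm)
```

With a shared `norm`, a score of 2 renders orange in every panel, so cells are directly comparable across the 10 %, 20 %, 50 %, and 90 % variance figures.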
Citation: https://doi.org/10.5194/gmd-2024-158-RC1