Effects of forcing differences and initial conditions on inter-model agreement in the VolMIP volc-pinatubo-full experiment
Claudia Timmreck
Myriam Khodri
Anja Schmidt
Matthew Toohey
Manabu Abe
Slimane Bekki
Jason Cole
Shih-Wei Fang
Wuhu Feng
Gabriele Hegerl
Ben Johnson
Nicolas Lebas
Allegra N. LeGrande
Graham W. Mann
Lauren Marshall
Landon Rieger
Alan Robock
Sara Rubinetti
Kostas Tsigaridis
Helen Weierbach
Interactive discussion
Status: closed
- CEC1: 'Comment on gmd-2021-372', Juan Antonio Añel, 26 Nov 2021
Dear authors,
After checking your manuscript, it has come to our attention that it does not comply with our Code and Data Policy.
https://www.geoscientific-model-development.net/policies/code_and_data_policy.html
The first problem comes from the statement that you make in the 'Code and data availability' section. It reads, "The time series used in the analysis will be made available via permanent public repository with doi upon final publication". We cannot accept this. You must permanently store all the files used for the review process. It does not matter that they are not the final files; this is necessary for transparency and for the public review that we follow during the Discussions stage.
Also, in your work, you use six models. You mention the generic name and version of each model and cite relevant references. However, you must be more precise about the exact version numbers of the models used and the submodels that are part of them (at minimum in the Code availability section). Moreover, you must include the necessary information to obtain the code of the models used. This means that the exact versions of the models used in the manuscript must be available in a permanent repository that complies with our above-mentioned code policy.
Remember that you must include the modified 'Code and Data Availability' section in a potentially revised version of your manuscript, with all the relevant new information.
Please reply as soon as possible to this comment with the link to it, so that it is available for the peer-review process, as it should be.
Juan A. Añel
Geosc. Mod. Dev. Executive Editor
Citation: https://doi.org/10.5194/gmd-2021-372-CEC1
- AC1: 'Reply on CEC1', Davide Zanchettin, 15 Dec 2021
Dear Editor,
Thank you for your comment, which prompts us to provide additional details about the models used in our study and to clarify the availability of data during the revision process.
All six models used in our study are the exact CMIP6 versions employed for the CMIP6-endorsed VolMIP initiative. Some model descriptions are available on ES-DOC:
https://explore.es-doc.org/cmip6/models/ipsl/ipsl-cm6a-lr
https://explore.es-doc.org/cmip6/models/miroc/miroc-es2l
https://explore.es-doc.org/cmip6/models/mohc/ukesm1-0-ll
Information about the output data for each model can be found at the following DOI links at the WDCC Data Portal CERA, which also provide details and references for all models:
MPI-ESM-1.2-LR: doi:10.22033/ESGF/CMIP6.742
CanESM5: doi:10.22033/ESGF/CMIP6.1303
MIROC-ES2L: doi:10.22033/ESGF/CMIP6.902
IPSL-CM6-LR: doi:10.22033/ESGF/CMIP6.1534
GISS-E2.1-G: doi:10.22033/ESGF/CMIP6.1400
UKESM1: doi:10.22033/ESGF/CMIP6.1569
In the revision of the manuscript, we will improve the description of all models, add relevant information about the exact version of each model that was used for CMIP6-VolMIP and provide a code availability statement for each model in the exact version employed.
The raw data used in this study are part of the CMIP6 output, hence they are available through the ESGF for both the piControl and the volc-pinatubo-full experiments. As stated in our manuscript, to facilitate the revision process and the public discussion, we have provided the spatially averaged data calculated from the gridded monthly output of each model in a long-term archive at: https://vesg.ipsl.upmc.fr/thredds/catalog/VOLMIP/volc-pinatubo-full/catalog.html
Sincerely,
Davide Zanchettin on behalf of all coauthors
Citation: https://doi.org/10.5194/gmd-2021-372-AC1
- RC1: 'Comment on gmd-2021-372', Anonymous Referee #1, 22 Dec 2021
This paper presents an overview of initial results from the VolMIP volc-pinatubo-full experiment and discusses future directions for an improved experiment. I find it generally well written, with useful details. However, some places need more information and/or clarification, which would strengthen the key messages.
No observed values are used when evaluating model climatology and responses to volcanic forcing. Although I agree that the aim of this paper is to provide an initial assessment based on idealized experiments, not historical transient experiments which are comparable to the observations, assessing the degree of inter-model agreement in volcanic influences without any relevant comparison with observed values could be misleading given that models may have systematic biases. I strongly suggest including observed values somehow in their plots and interpreting results accordingly.
This study aims at providing preliminary assessments but more efforts to quantify factors responsible for inter-model discrepancies would be useful. One way would be to add summary bar graphs or tables for some key variables (with observed estimates if possible, see my comment above) where readers can find actual values for individual models and how much differences exist between models and also between different ocean initial conditions (ENSO phases). Mostly, time series are displayed and it is inconvenient to identify specific model responses.
Some places need more explanations for better understanding. It's unclear how authors have selected samples for "equally distributed cold/neutral/warm states of ENSO and negative/neutral/positive states of NAO". Exact details of sampling methods look very important for interpreting results as well as for planning the next VolMIP protocol. Also, authors consider radiation feedbacks in their evaluations but its association with inter-model spreads needs to be explained more clearly. Another one is why ECS is considered here, which represents equilibrium sensitivity to doubled CO2.
Authors conclude that influence of ocean initial conditions is weak or even negligible but this conclusion can be dependent on how to measure ENSO-like responses. Other studies used relative SST as authors briefly mentioned, and results can be affected much by applying different metrics. Since understanding ENSO influence is one of major issues, I think that adding more discussion with appropriate sensitivity tests would be useful, e.g. comparing relative SST responses with Nino3.4 responses. In terms of NAO or AO responses, target season and region can be revised as boreal winter and high latitude areas, for better comparisons with previous findings.
Citation: https://doi.org/10.5194/gmd-2021-372-RC1
- AC2: 'Reply on RC1', Davide Zanchettin, 14 Jan 2022
We thank Referee #1 for her/his helpful comments on our manuscript. We report in italics relevant comments by the referee.
No observed values are used when evaluating model climatology and responses to volcanic forcing. Although I agree that the aim of this paper is to provide an initial assessment based on idealized experiments, not historical transient experiments which are comparable to the observations, assessing the degree of inter-model agreement in volcanic influences without any relevant comparison with observed values could be misleading given that models may have systematic biases. I strongly suggest including observed values somehow in their plots and interpreting results accordingly.
We agree with the referee that a comparison with observed climate anomalies around the Pinatubo eruption may be a worthy addition to our analysis. In the revised manuscript we will therefore include observed anomalies in some of the figures (for instance, those regarding the temperature and precipitation response) and discuss how our multi-model results compare with observations.
This study aims at providing preliminary assessments but more efforts to quantify factors responsible for inter-model discrepancies would be useful. One way would be to add summary bar graphs or tables for some key variables (with observed estimates if possible, see my comment above) where readers can find actual values for individual models and how much differences exist between models and also between different ocean initial conditions (ENSO phases). Mostly, time series are displayed and it is inconvenient to identify specific model responses.
We are in favor of adding some relevant information in a more quantitative way, for instance through tables. In the revised manuscript we plan to include supplementary tables with relevant statistics of the global-mean near-surface temperature response, including the ensemble mean, the ensemble spread, and the maximum cooling with its timing, also accounting for the effect of initial conditions and specifically the ENSO state. We would like to note that all time series used in the manuscript are publicly available, so quantitative estimates can easily be calculated in follow-up studies.
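As an illustration of the kind of summary statistics we have in mind for such tables, the minimal sketch below (in Python) computes the ensemble mean, the inter-member spread, and the peak cooling with its timing from an array of global-mean near-surface temperature anomalies. The array layout and names are illustrative assumptions, not the exact structure of the archived time series.

```python
# Minimal sketch (illustrative): summary statistics of the post-eruption
# global-mean near-surface temperature response, assuming anomalies
# (volc-pinatubo-full minus the piControl climatology) are available as a
# (n_members, n_months) array. The synthetic data below are for demonstration only.
import numpy as np

def gst_response_stats(tas_anom):
    """tas_anom: anomalies in K, shape (n_members, n_months)."""
    ens_mean = tas_anom.mean(axis=0)             # ensemble-mean evolution
    ens_spread = tas_anom.std(axis=0, ddof=1)    # inter-member spread per month
    peak_cooling = ens_mean.min()                # strongest ensemble-mean cooling
    peak_month = int(ens_mean.argmin()) + 1      # months after eruption (1-based)
    return ens_mean, ens_spread, peak_cooling, peak_month

# Demonstration with synthetic anomalies: 25 members, 60 post-eruption months
rng = np.random.default_rng(0)
signal = -0.5 * np.exp(-((np.arange(60) - 15) / 20.0) ** 2)
fake = signal + 0.1 * rng.standard_normal((25, 60))
_, _, cooling, month = gst_response_stats(fake)
print(f"peak ensemble-mean cooling: {cooling:.2f} K in month {month}")
```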
Some places need more explanations for better understanding. It's unclear how authors have selected samples for "equally distributed cold/neutral/warm states of ENSO and negative/neutral/positive states of NAO". Exact details of sampling methods look very important for interpreting results as well as for planning the next VolMIP protocol. Also, authors consider radiation feedbacks in their evaluations but its association with inter-model spreads needs to be explained more clearly. Another one is why ECS is considered here, which represents equilibrium sensitivity to doubled CO2.
In the revised manuscript we will provide further metadata regarding the simulations, including the initial states sampled by the different participating models. The VolMIP protocol was somewhat vague on how the "equally distributed cold/neutral/warm states of ENSO and negative/neutral/positive states of NAO" were to be sampled, so different groups proceeded differently, sometimes with a subjective approach. We will explain this better in the revised manuscript, where we will also put ECS in better context.
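To make the discussion of sampling more concrete, the sketch below shows one possible, simplified way to select restart dates with approximately equally distributed ENSO and NAO states: classify candidate piControl years by terciles of pre-computed Nino3.4 and NAO indices and draw an equal number of restarts from each combination. This is only an illustrative assumption, not the procedure followed by any particular modelling group.

```python
# Illustrative sketch of a tercile-based sampling of initial states; `nino34`
# and `nao` are assumed pre-computed seasonal indices, one value per candidate
# piControl restart year (names and method are assumptions, not the protocol).
import numpy as np

def tercile_class(index):
    """Return -1/0/+1 for the lower/middle/upper tercile of `index`."""
    lo, hi = np.quantile(index, [1.0 / 3.0, 2.0 / 3.0])
    return np.where(index < lo, -1, np.where(index > hi, 1, 0))

def sample_initial_states(nino34, nao, n_per_cell=1, seed=0):
    """Draw n_per_cell restart years from each ENSO x NAO tercile combination."""
    rng = np.random.default_rng(seed)
    enso_cls = tercile_class(np.asarray(nino34))
    nao_cls = tercile_class(np.asarray(nao))
    picks = []
    for e in (-1, 0, 1):              # cold / neutral / warm ENSO
        for n in (-1, 0, 1):          # negative / neutral / positive NAO
            candidates = np.where((enso_cls == e) & (nao_cls == n))[0]
            if len(candidates) >= n_per_cell:
                picks.extend(rng.choice(candidates, n_per_cell, replace=False))
    return sorted(int(i) for i in picks)
```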
Authors conclude that influence of ocean initial conditions is weak or even negligible but this conclusion can be dependent on how to measure ENSO-like responses. Other studies used relative SST as authors briefly mentioned, and results can be affected much by applying different metrics. Since understanding ENSO influence is one of major issues, I think that adding more discussion with appropriate sensitivity tests would be useful, e.g. comparing relative SST responses with Nino3.4 responses. In terms of NAO or AO responses, target season and region can be revised as boreal winter and high latitude areas, for better comparisons with previous findings.
We agree that relative SSTs are a better basis than absolute SSTs to calculate the Nino3.4 index and capture ENSO-like responses. In fact, we used the Nino3.4 index as defined by the VolMIP protocol. This is one of the aspects where there is room for improvement in a possible second phase of the initiative. Therefore, we will keep the original analysis based on absolute SSTs, but also add results corresponding to a Nino3.4 index calculated from relative SSTs. We will compare results and discuss the implications for the protocol in follow-up VolMIP activities.
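To clarify the difference between the two metrics, the sketch below computes the absolute Nino3.4 index (area mean over 5S-5N, 170W-120W) and a relative version obtained by subtracting a tropical-mean anomaly (here 20S-20N, an assumed band). It assumes the anomalies are provided as an xarray DataArray with ascending latitudes and 0-360 longitudes; coordinate names and conventions are illustrative.

```python
# Sketch (illustrative) of absolute vs. relative Nino3.4 indices; `tas_anom`
# is assumed to be an xarray.DataArray (time, lat, lon) of monthly anomalies
# with ascending `lat` and 0-360 `lon` coordinates.
import numpy as np

def area_mean(da, lat_min, lat_max, lon_min, lon_max):
    """Latitude-weighted mean over a lat/lon box."""
    box = da.sel(lat=slice(lat_min, lat_max), lon=slice(lon_min, lon_max))
    weights = np.cos(np.deg2rad(box.lat))
    return box.weighted(weights).mean(("lat", "lon"))

def nino34_indices(tas_anom):
    """Return (absolute, relative) Nino3.4 index time series."""
    nino34_abs = area_mean(tas_anom, -5, 5, 190, 240)      # 5S-5N, 170W-120W
    tropical_mean = area_mean(tas_anom, -20, 20, 0, 360)   # assumed tropical band
    return nino34_abs, nino34_abs - tropical_mean
```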
Citation: https://doi.org/10.5194/gmd-2021-372-AC2
- RC2: 'Comment on gmd-2021-372', Anonymous Referee #2, 23 Dec 2021
General comments and recommendation:
This paper describes preliminary results from the VOLMIP initiative, in particular a multi-model large ensemble of 5-year experiments including the forcing of a Pinatubo-like volcanic eruption based on a common protocol for all the models. The article is scientifically well constructed, well presented and very well written. Merging and analysing such a large dataset is a huge and valuable effort. One small weakness of the article is that some aspects could be detailed further for a better understanding, in particular some analyses that would benefit from more explanation. A list of comments and suggestions is presented below; these correspond to minor revisions that need to be addressed before publication. Most of them are discussion points for which it would be interesting to get the view of the authors. A list of technical corrections is also presented at the end of this document.
Discussion points
- I understand the benefits of building a multi-model protocol in which the volcanic forcing is commonly defined, to allow "good agreement between the different models" and to highlight the differences between the models in terms of response to external forcing, independently of the implementation of the forcing. Overall, is there any risk that the homogenisation of climate models encouraged by model intercomparisons supports the building of a single family of similar models, all of them showing the same uncertainties, which would therefore be more difficult to estimate? Should we encourage more contrasted model developments for a better understanding of the processes at play?
- Could we expect an impact of the anthropogenic forcings on the climate response to volcanic eruptions? In other words, would we expect different conclusions when starting the volc-pinatubo-full experiment from control experiments produced with constant anthropogenic forcings corresponding to those observed at the beginning of the 21st century, and/or in transient forcing experiments?
- P4, L.125-130: How do we deal with the fact that the modes of variability typically show different patterns in the different models? Why not consider an EOF approach, using for each model its mode of variability with its specific pattern?
- P5, Characteristics of volc-pinatubo-full, the multi-model ensemble: Would it be possible to give more details about the spectral resolution of the models? Does it differ among the models? The way the VOLMIP forcing is distributed over the spectral bands could be detailed. More information about the vertical distribution of the forcing in the different models would also be welcome: is the forcing vertically distributed in a stationary way, using monthly climatologies of the elevation of the atmospheric layers, or is it distributed vertically online? I saw that this weakness is discussed at the end of the paper, but why not include more information about these model features in this publication?
- Initial conditions: Why not sample the QBO in a similar way to the other modes? The impact of the QBO on the climate response to volcanic forcing has been evidenced in Thomas et al. (2009) (Thomas, M. A., Giorgetta, M. A., Timmreck, C., Graf, H.-F., and Stenchikov, G.: Simulation of the climate impact of Mt. Pinatubo eruption using ECHAM5 – Part 2: Sensitivity to the phase of the QBO and ENSO, Atmos. Chem. Phys., 9, 3001–3009, https://doi.org/10.5194/acp-9-3001-2009, 2009). At lower frequency, why not consider different states of the AMV, which might also affect the response of the modes of variability (Ménégoz, M., Cassou, C., Swingedouw, D., Ruprich-Robert, Y., Bretonnière, P.A. and Doblas-Reyes, F., 2018. Role of the Atlantic Multidecadal Variability in modulating the climate response to a Pinatubo-like volcanic eruption. Climate Dynamics, 51(5), pp.1863-1883)? This point is discussed at the end of the article. Nevertheless, we do not know the reasons why these modes have not been considered in the first edition of VOLMIP.
- P10: the ENSO differences among the models, based on a temperature average over the Niño 3.4 area only, might be affected by the specific position of ENSO in each model. The ENSO signature in models is often shifted southward/northward or eastward/westward compared to the observations, and it clearly differs from one model to another. This could be discussed in the article. The same issue can be highlighted for the NAO signature, and this might be a much more important issue considering the typical spatial biases of the NAO pattern in the current generation of AOGCMs.
- P13, L. 402: "dynamical responses may be masked by broad tropical radiative cooling effects" -> So why not consider a relative ENSO index (Nino3.4 tas minus tropical tas), as done in several publications (e.g. Khodri, M., Izumo, T., Vialard, J., Janicot, S., Cassou, C., Lengaigne, M., Mignot, J., Gastineau, G., Guilyardi, E., Lebas, N. and Robock, A., 2017. Tropical explosive volcanic eruptions can trigger El Niño by cooling tropical Africa. Nature Communications, 8(1), pp.1-13.)? This is discussed at the end of the article, but why not include such a "RENSO index" directly in the article?
- P13-14: feedbacks: more explanation of the LW and SW ratios would be welcome, to allow a better understanding of the sign of the feedbacks (negative versus positive) as well as of the processes that are suggested in this Section. In particular, it is difficult to understand whether the LW and SW changes are related to aerosol changes or to changes in the atmospheric temperature.
Technical corrections
- L71: "sensitivity experiments aimed" -> which ones? Maybe more information could be given here.
- P6, L. 165: there is a more recent description of the Orchidee surface scheme in Cheruy et al., 2020 (Cheruy, F., Ducharne, A., Hourdin, F., Musat, I., Vignon, É., Gastineau, G., Bastrikov, V., Vuichard, N., Diallo, B., Dufresne, J.L. and Ghattas, J., 2020. Improved near-surface continental climate in IPSL-CM6A-LR by combined evolutions of atmospheric and land surface physics. Journal of Advances in Modeling Earth Systems, 12(10), p.e2019MS002005.)
- P10, L. 294: why refer to Table 1 here?
- P10, L.302: It is stated that IPSL-CM6 is warmer in the tropical Pacific, but this is only verified in the Nino 3.4 domain, since it is cooler on average over the whole tropical area, if I understand Figure 3 correctly?
- Figures 4-5-6-7-8-9: A vertical line could indicate the exact timing of the eruption.
- Figure 10 caption: it is not totally clear whether the y axis simply shows LWt/LWs↑ for one experiment (volc-pinatubo-full) or an anomaly difference between this experiment and the control (as mentioned in the text at Line 409).
- P14, L. 414: "a tendential lowering" -> lowering with time after the eruption?
- P15: 4.5 effect of sampling strategy: Again, it could be relevant to consider a relative ENSO index (Nino 3.4 versus the whole tropical area) to disentangle the dynamical response of ENSO from the radiative cooling. The fact that the winter NAO does not affect the climate response to the volcanic forcing might also be explained by the relatively small persistence of this mode of variability compared to ENSO or AMV, for example.
- P15: Ensemble size: In Figure 12, what is the period considered to compute the GST? (First year post-eruption?)
- P16, L.480: compare -> compared
- P18, L. 560: to understanding -> to understand.
Citation: https://doi.org/10.5194/gmd-2021-372-RC2
- AC3: 'Reply on RC2', Davide Zanchettin, 14 Jan 2022
We thank Referee #2 for her/his helpful comments on our manuscript. We report in italics relevant comments by the referee.
Concerning the specific discussion points raised by the referee:
I understand the benefits of building a multi-model protocol in which the volcanic forcing is commonly defined, to allow "good agreement between the different models" and to highlight the differences between the models in terms of response to external forcing, independently of the implementation of the forcing. Overall, is there any risk that the homogenisation of climate models encouraged by model intercomparisons supports the building of a single family of similar models, all of them showing the same uncertainties, which would therefore be more difficult to estimate? Should we encourage more contrasted model developments for a better understanding of the processes at play?
This is a very useful comment. The key point of using a forcing that is consistent across models in terms of aerosol radiative properties is explained in the original VolMIP paper (Zanchettin et al., 2016). In brief, this allows us to focus on how models differ in the climate response, as uncertainties generated by aerosol chemical and microphysical properties are neglected. In this sense, VolMIP is a companion to the SPARC/SSiRC Interactive Stratospheric Aerosol Model Intercomparison Project (ISA-MIP; Timmreck et al., 2018), which covers the uncertainties in the pathway from the eruption source to the volcanic radiative forcing. Specifically, the aim of ISA-MIP is to constrain and improve global aerosol models by using a range of observations in order to reduce the forcing uncertainties. The two initiatives are meant to interact in order to advance our understanding of how climate responds to strong volcanic eruptions. Of course, model agreement does not necessarily imply agreement with observations, and we will include a comparison with observed anomalies to illustrate any potential bias in our multi-model ensemble.
Could we expect an impact of the anthropogenic forcings on the climate response to volcanic eruptions? In other words, would we expect different conclusions when starting the volc-pinatubo-full experiment from control experiments produced with constant anthropogenic forcings corresponding to those observed at the beginning of the 21st century, and/or in transient forcing experiments?
Several studies point to the fact that background climate conditions can affect the climate response to volcanic forcing. We consider the volc-pinatubo experiments idealized precisely because the forcing is realistic but the background climate state (from piControl) differs from the actual climate state during the 1991 Pinatubo eruption. In the original manuscript, we already illustrate some details of the mean climate state and variability in piControl simulated by the different models as a possible source of inter-model differences. Therefore, we agree with the referee that conclusions might differ if the background climate state differs and/or in transient conditions, i.e., in the presence of additional forcing agents. We will stress this more clearly in the revised manuscript.
P4, L.125-130: How do we deal with the fact that the modes of variability typically show different patterns in the different models? Why not consider an EOF approach, using for each model its mode of variability with its specific pattern?
We dealt with this in the definition phase of the VolMIP protocol, when we chose to use box-based indices over EOF-based ones because we expect total variability to be separated differently into principal components in different models, which would add another level of uncertainty. We agree that there is an intrinsic problem in the use of predefined indices based on mathematical constructs (be they EOF- or box-based) rather than on physical understanding, and we will expand the discussion in this regard in the revised manuscript.
P5, Characteristics of volc-pinatubo-full, the multi-model ensemble: Would it be possible to give more details about the spectral resolution of the models? Does it differ among the models? The way the VOLMIP forcing is distributed over the spectral bands could be detailed. More information about the vertical distribution of the forcing in the different models would also be welcome: is the forcing vertically distributed in a stationary way, using monthly climatologies of the elevation of the atmospheric layers, or is it distributed vertically online? I saw that this weakness is discussed at the end of the paper, but why not include more information about these model features in this publication?
We will improve the description of all models in the revised manuscript. Spectral bands differ across models, and EVA produces forcing input data for each model’s specifics.
Initial conditions: Why not sample the QBO in a similar way to the other modes? The impact of the QBO on the climate response to volcanic forcing has been evidenced in Thomas et al. (2009) (Thomas, M. A., Giorgetta, M. A., Timmreck, C., Graf, H.-F., and Stenchikov, G.: Simulation of the climate impact of Mt. Pinatubo eruption using ECHAM5 – Part 2: Sensitivity to the phase of the QBO and ENSO, Atmos. Chem. Phys., 9, 3001–3009, https://doi.org/10.5194/acp-9-3001-2009, 2009). At lower frequency, why not consider different states of the AMV, which might also affect the response of the modes of variability (Ménégoz, M., Cassou, C., Swingedouw, D., Ruprich-Robert, Y., Bretonnière, P.A. and Doblas-Reyes, F., 2018. Role of the Atlantic Multidecadal Variability in modulating the climate response to a Pinatubo-like volcanic eruption. Climate Dynamics, 51(5), pp.1863-1883)? This point is discussed at the end of the article. Nevertheless, we do not know the reasons why these modes have not been considered in the first edition of VOLMIP.
We agree that the state of the QBO is a potential influencing factor on the climate response to volcanic eruptions. Since not all models spontaneously generate a QBO, we decided not to include it as a sampling requirement in the final protocol. The original VolMIP paper states that "volcanic radiative sampling of an eastern phase of the Quasi-Biennial Oscillation (QBO), as observed after the 1991 Pinatubo eruption, is preferred for those models that spontaneously generate such mode of stratospheric variability." We will report this more explicitly in the revised manuscript. Concerning the AMV, this is certainly of interest. The problem is how increasing the number of variables used for the sampling affects the ensemble size. We need a balance, and for the short time scales that are the focus of the volc-pinatubo experiments we decided to opt for the NAO as a descriptor of the North Atlantic state. For the volc-long VolMIP simulations, with their focus on the multiannual-to-decadal climate response to volcanic forcing, we use the AMOC as the reference index for sampling initial conditions. We will elaborate further on this in the revised manuscript, including a perspective on the implications of the coupling between AMV and NAO/AMOC suggested in the literature.
P10: the ENSO differences among the models, based on a temperature average over the Niño 3.4 area only, might be affected by the specific position of ENSO in each model. The ENSO signature in models is often shifted southward/northward or eastward/westward compared to the observations, and it clearly differs from one model to another. This could be discussed in the article. The same issue can be highlighted for the NAO signature, and this might be a much more important issue considering the typical spatial biases of the NAO pattern in the current generation of AOGCMs.
We will elaborate further on the ENSO index, also following a comment by Referee #1. As highlighted in a point response above, inter-model differences and biases with respect to observations make it difficult to identify optimal indices that are expected to capture specific dynamics. We will discuss this better in the revised manuscript. At least for the NAO a recent paper using the same index definition employed in VolMIP suggests a marked consistency across CMIP6 models (Cusinato et al., 2021).
P13, L. 402: "dynamical responses may be masked by broad tropical radiative cooling effects" -> So why not consider a relative ENSO index (Nino3.4 tas minus tropical tas), as done in several publications (e.g. Khodri, M., Izumo, T., Vialard, J., Janicot, S., Cassou, C., Lengaigne, M., Mignot, J., Gastineau, G., Guilyardi, E., Lebas, N. and Robock, A., 2017. Tropical explosive volcanic eruptions can trigger El Niño by cooling tropical Africa. Nature Communications, 8(1), pp.1-13.)? This is discussed at the end of the article, but why not include such a "RENSO index" directly in the article?
We will use relative SSTs in addition to absolute SSTs to calculate the Nino3.4 index in the revised manuscript. We would like to remark here that the choice of absolute SSTs follows the original VolMIP protocol. We strongly believe that ENSO deserves a dedicated follow-up study beyond the initial results considered here, which are mostly focused on assessing the effectiveness of the VolMIP protocol.
P13-14: feedbacks: more explanation of the LW and SW ratios would be welcome, to allow a better understanding of the sign of the feedbacks (negative versus positive) as well as of the processes that are suggested in this Section. In particular, it is difficult to understand whether the LW and SW changes are related to aerosol changes or to changes in the atmospheric temperature.
We will improve the discussion of the LW and SW ratios in the revised manuscript. This analysis nonetheless is, and will remain, only a preliminary step towards a study that also uses the volc-pinatubo-strat/surf experiments to fully disentangle the feedbacks involved in the response.
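As a purely illustrative aid to that discussion, the sketch below computes a TOA-to-surface flux ratio and its anomaly with respect to the control; our reading of the plotted quantity as, e.g., rlut/rlus for the LW case is an assumption that we will make explicit in the revised caption and text.

```python
# Illustrative sketch only: ratio of a global-mean TOA flux to the corresponding
# global-mean surface flux (e.g., LW: rlut over rlus), optionally expressed as an
# anomaly relative to the control run. The exact definition used in Fig. 10 will
# be spelled out in the revision; this is not a statement of that definition.
import numpy as np

def flux_ratio(toa_flux, sfc_flux, toa_ctrl=None, sfc_ctrl=None):
    ratio = np.asarray(toa_flux, dtype=float) / np.asarray(sfc_flux, dtype=float)
    if toa_ctrl is not None and sfc_ctrl is not None:
        ratio = ratio - np.asarray(toa_ctrl, dtype=float) / np.asarray(sfc_ctrl, dtype=float)
    return ratio
```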
We will account for all technical corrections requested by the referee.
References
Cusinato, E., Rubino, A., and Zanchettin, D.: Winter Euro-Atlantic Climate Modes: Future Scenarios From a CMIP6 Multi-Model Ensemble, Geophys. Res. Lett., 48, e2021GL094532, https://doi.org/10.1029/2021GL094532, 2021.
Timmreck, C., Mann, G. W., Aquila, V., Hommel, R., Lee, L. A., Schmidt, A., Brühl, C., Carn, S., Chin, M., Dhomse, S. S., Diehl, T., English, J. M., Mills, M. J., Neely, R., Sheng, J., Toohey, M., and Weisenstein, D.: The Interactive Stratospheric Aerosol Model Intercomparison Project (ISA-MIP): motivation and experimental design, Geosci. Model Dev., 11, 2581–2608, https://doi.org/10.5194/gmd-11-2581-2018, 2018.
Zanchettin, D., Khodri, M., Timmreck, C., Toohey, M., Schmidt, A., Gerber, E. P., Hegerl, G., Robock, A., Pausata, F. S. R., Ball, W. T., Bauer, S. E., Bekki, S., Dhomse, S. S., LeGrande, A. N., Mann, G. W., Marshall, L., Mills, M., Marchand, M., Niemeier, U., Poulain, V., Rozanov, E., Rubino, A., Stenke, A., Tsigaridis, K., and Tummon, F.: The Model Intercomparison Project on the climatic response to Volcanic forcing (VolMIP): experimental design and forcing input data for CMIP6, Geosci. Model Dev., 9, 2701–2719, https://doi.org/10.5194/gmd-9-2701-2016, 2016.
Citation: https://doi.org/10.5194/gmd-2021-372-AC3