This work is distributed under the Creative Commons Attribution 4.0 License.
Monsoon Mission Coupled Forecast System Version 2.0: Model Description and Indian Monsoon Simulations
Deepeshkumar Jain
Suryachandra A. Rao
Ramu Dandi
Prasanth A. Pillai
Ankur Srivastava
Maheshwar Pradhan
Abstract. We describe the Monsoon Mission Coupled Forecast System version 2 (MMCFSv2) model, which substantially upgrades the present operational MMCFSv1 (version 1) at the India Meteorological Department. We evaluate MMCFSv2 based on the latest 25 years (1998–2022) of retrospective coupled hindcast simulations of the Indian summer monsoon with April initial conditions from the Coupled Forecast System Reanalysis. MMCFSv2 simulates the tropical wind, rainfall, and temperature structure reasonably well. MMCFSv2 captures surface winds well and reduces precipitation biases over land, except in India and North America; the dry bias over these regions remains similar to that in MMCFSv1. MMCFSv2 captures significant features of the Indian monsoon, including the intensity and location of the maximum precipitation centres and the large-scale monsoon circulation. MMCFSv2 improves the phase skill (anomaly correlation coefficient) of the interannual variation of Indian summer monsoon rainfall (ISMR) by 17 % and enhances the amplitude skill (normalized root mean square error) by 20 %. MMCFSv2 shows improved teleconnections of ISMR with the equatorial Indian and Pacific oceans. This 25-year hindcast dataset will serve as the baseline for future sensitivity studies of MMCFSv2.
Status: final response (author comments only)
-
CEC1: 'Comment on gmd-2023-53', Juan Antonio Añel, 05 May 2023
Dear authors,
Unfortunately, after checking your manuscript, it has come to our attention that it does not comply with our "Code and Data Policy".
https://www.geoscientific-model-development.net/policies/code_and_data_policy.html
In your manuscript, you state that "MMCFSv2 and the model data used for this study is available on Indian Institute of Tropical Meteorology High Performance Computer (IITM-HPC)". This does not comply with our policy. You must publish your code and data in one of the appropriate repositories that we list.
I should note that, actually, your manuscript should not have been accepted in Discussions, given this lack of compliance with our policy. Therefore, the current situation with your manuscript is irregular. Consequently, if you do not fix this problem promptly, we will have to reject your manuscript for publication in our journal.
Therefore, please, publish your code in one of the appropriate repositories, and reply to this comment with the relevant information (link and DOI), as it should be available for the Discussions stage. Also, please, include the relevant primary input/output data.
Also, you must include, in any potentially revised version of your manuscript, a modified 'Code and Data Availability' section containing the DOIs of the code and data repositories.
In this case, and given that currently the code is not shared in any repository, I have to emphasize that your manuscript will be rejected for publication if you fail to comply with our request.
Juan A. Añel
Geosci. Model Dev. Exec. Editor
Citation: https://doi.org/10.5194/gmd-2023-53-CEC1
-
AC1: 'Reply on CEC1', Anguluri Suryachandra rao, 05 May 2023
Dear Editor,
We have uploaded the model code to the public repository at
https://github.com/deepeshkumar-tropmet/MMCFSv2
We have also uploaded the model output used for the analysis presented in this manuscript at
https://zenodo.org/record/7900790#.ZFU-T5FBxcA (DOI 10.5281/zenodo.7900790)
The input data to the model is available at CFSR website
https://www.ncei.noaa.gov/data/climate-forecast-system/access/operational-analysis/initial-conditions-high-resolution/
I am unsure whether I can attach a revised version of the manuscript along with this reply, as the instructions here indicate that I should not.
Please let me know how to update the manuscript.
Thanking You,
Regards,
Deepeshkumar Jain
Citation: https://doi.org/10.5194/gmd-2023-53-AC1
-
CEC2: 'Reply on AC1', Juan Antonio Añel, 06 May 2023
Dear authors,
Thank you for your quick reply to our request. However, unfortunately, I again have to insist that you do not comply with our policy. I have to say that this is quite disappointing, given that I pointed you to the policy in my previous comment, which it is clear you have ignored rather than read carefully.
For example, you have archived your code on GitHub. However, GitHub is not a suitable repository for scientific publication. GitHub itself instructs authors to use other alternatives for long-term archival and publishing, such as Zenodo. Our policy clearly says "Project or institution websites and online revision control sites such as GitHub, GitLab or Bitbucket are made for code development but not suitable for archiving frozen code versions."
Therefore, please, publish your code in one of the appropriate repositories, and reply to this comment with the relevant information (link and DOI) as soon as possible.
Also, the input data are stored on noaa.gov servers, which is not a suitable repository. Given that the files there are only a few GB, there is no reason not to store them in one of the repositories that we can accept and that are listed in our policy. However, I have to note that, according to your manuscript, you perform hindcast experiments for the period 1998-2022, while the NOAA repository that you have linked contains data only from 2011 onwards. What is worse, for many of the years the repository says that the files have been removed and moved to a different web address. Please double-check this and provide a new repository, with its link and DOI, containing all the necessary input data.
To clarify things, it is not necessary at this stage that you upload a modified version of your manuscript. Simply post the information here in the Discussions. If, eventually, the Handling Topical Editor considers that your manuscript deserves to continue through the review process or acceptance for publication, you will be able to upload a new version of the manuscript, which should include the new information about the repositories, DOIs, links, etc.
Juan A. Añel
Geosci. Model Dev. Exec. Editor
Citation: https://doi.org/10.5194/gmd-2023-53-CEC2
-
CEC3: 'Reply on CEC2', Juan Antonio Añel, 06 May 2023
Dear authors,
I forgot to add in my other reply to this comment that in the GitHub repository where you have published your model, there is no license listed. If you do not include a license, despite what you intend, the code is not "free-libre open-source" (FLOSS); it continues to be your property, and nobody can use or test it (which precludes the replicability of your work). Therefore, when uploading the model's code to the new repository, you may want to choose a FLOSS license. We recommend the GPLv3; you only need to include the file 'https://www.gnu.org/licenses/gpl-3.0.txt' as LICENSE.txt with your code. You can also choose other options that Zenodo provides: GPLv2, Apache License, MIT License, etc.
Juan A. Añel
Geosci. Model Dev. Exec. Editor
Citation: https://doi.org/10.5194/gmd-2023-53-CEC3
-
AC2: 'Reply on CEC1', Anguluri Suryachandra rao, 09 May 2023
Dear Editor,
Please have a look at the coupled code structure in the supplementary file attached.
The part of the code that is essential for coupling MOM6, CICE5, and GFS-SL together using the NEMS framework is available on the Zenodo data server (https://doi.org/10.5281/zenodo.7914786).
The coupled model consists of MOM6, CICE5, and GFS-SL as its components. Each of these component codes (MOM6, CICE5, GFS-SL, and NEMS) is governed by its own license, and we cannot provide a license that overrides those licenses. The license files can be found in their respective folders.
The MOM6 model code is available under the LGPLv3 license at https://github.com/mom-ocean/MOM6.
The CICE model is available under the "CICE Consortium" license (the detailed license file is LICENSE.pdf inside the CICE folder). The code and license are available at github.com/CICE-Consortium (now upgraded to version 6.0). CICE version 5 (used in MMCFSv2) is available at https://github.com/COSIMA/cice5
To the best of our knowledge (from the code), the GFS-SL code comes under the GPLv3 license. The GFS-SL code used in MMCFSv2 for this study was downloaded from
https://www.nco.ncep.noaa.gov/pmb/codes/nwprod/
NCEP has since upgraded GFS to the finite-volume version of the model. I will need permission from NCEP if the GFS-SL code needs to be shared.
The NEMS framework (NOAA Environmental Modeling System), available at https://github.com/NOAA-EMC/NEMS, also comes under the Lesser General Public License v3.0.
The NEMS framework allows numerous component models to be coupled together (please refer to the supplementary image file). The complete list of allowed components can be found in NEMS/src/makefile. A choice has to be made regarding which component models are to be coupled together. Each component model is then coupled to the NEMS coupler using a CAP code.
The CICE_CAP code used for coupling the CICE code with the NEMS coupler can be found at
https://earthsystemmodeling.org/docs/nightly/develop/cice/structcice__cap__mod_1_1cice__internalstate__type.html
The MOM6 CAP code used for coupling the ocean model with the NEMS coupler can be found at
https://ncar.github.io/MOM6/APIs/mom__cap_8F90_source.html
The GSM CAP code for coupling GFS-SL with the NEMS coupler was developed by NCEP and can be found in the GSM directory of the uploaded code.
These CAP codes, once compiled along with their component model codes, are ready to be coupled to the NEMS framework. There is a *.mk file associated with each component (GSM-INSTALL/gsm.mk, MOM6-INSTALL/mom6.mk, ../CICE-INSTALL/cice.mk) that needs to be modified accordingly, and the paths of these files need to be specified in the NEMS/src/conf/configure.nems.NUOPC file.
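As a convenience, the presence of these makefile fragments can be verified before attempting a build with a small helper script. The sketch below is purely illustrative and is not part of NEMS or MMCFSv2; the install paths simply follow the examples quoted above and would need to be adjusted to the local layout.

```python
from pathlib import Path

# Component makefile fragments whose paths must be set in
# NEMS/src/conf/configure.nems.NUOPC (paths follow the examples above;
# adjust them to the local install layout).
COMPONENT_MK = {
    "GSM":  "GSM-INSTALL/gsm.mk",
    "MOM6": "MOM6-INSTALL/mom6.mk",
    "CICE": "../CICE-INSTALL/cice.mk",
}

def check_component_installs(base="."):
    """Report which component .mk files are present before attempting a NEMS build."""
    for name, rel in COMPONENT_MK.items():
        mk = (Path(base) / rel).resolve()
        status = "found" if mk.is_file() else "MISSING"
        print(f"{name:5s} {mk} ... {status}")

if __name__ == "__main__":
    check_component_installs()
```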
We would be happy to provide the complete coupled code for review purposes. However, putting a GPLv3 license on the entire MMCFSv2 code is beyond our powers, as the component models belong to their respective groups and are governed by their respective licenses.
The initial condition files for running the model are available at the UCAR website: from 1998-2010 with DOI 10.5065/D69K487J (https://rda.ucar.edu/datasets/ds093.0/dataaccess/) and from 2011-2022 with DOI 10.5065/D61C1TXF (https://rda.ucar.edu/datasets/ds094-0/).
We have run the model for 25 years, with 10 ensemble members per year. Each ensemble member's initial condition file is approximately 1.6 GB, so the total size of the initial files downloaded from NCEP-CFSR for the 250 ensemble members is about 400 GB. Since the data are freely and publicly available at the above-mentioned DOIs, it would be unreasonable (and might be restricted by NCEP) to upload them again to a public server. Most open data servers have size limits well below 400 GB. Should the necessity arise for review purposes, we will be able to make the files for the particular years of interest available.
The model output is compared with ERA5, GPCP, and IMD data. Each of these datasets has its own terms of use and belongs to its respective organization. The processed data used in the manuscript are made available for the years 1998-2022 on Zenodo's server (https://doi.org/10.5281/zenodo.7913886).
Thanks and Regards,
Deepeshkumar Jain
-
CEC4: 'Reply on AC2', Juan Antonio Añel, 10 May 2023
Dear authors,
Thanks for your new reply. We appreciate that you have published in Zenodo the code for MOM6, CICE5, and GFS-SL.
I should note again that the remainder of your reply, discussing GitHub repositories and other servers, does not add anything to this Discussion. It must be clear that they are not acceptable repositories, so discussing them only adds noise to a situation where we have to elucidate if your manuscript complies with our policy and if we can continue with the review process or if, on the other hand, we have to reject your manuscript. Therefore, for the remainder of this discussion, please, avoid mentioning such sites, as it only complicates the understanding of the situation regarding your work.
For the input data, 400 GB is not an unreasonable size to share. For example, Zenodo admits up to 50 GB. You can even create a repository and DOI for each member of the ensemble. Therefore, you should publish them. Also, we do not accept that data is accessible "if it is necessary for the review process". Actually, such a thing does not apply here. Our journal has an open discussion process, where anyone should be able to review a manuscript and access all the assets of the works anonymously. All the necessary assets (code and data) have to be published upfront at the moment of submission.
Also, in your reply you state that you cannot publish MMCFSv2, that it is "beyond our powers". I am not familiar with the MMCFSv2 code. Therefore, to be able to assess whether we can accept an exception regarding your manuscript and this model, we need clarification on several issues:
- How was it developed, and who are the developers and owners of its copyright?
- How is its license established or decided? This is to make sure that, for example, a law forbids you from publishing the model, or that there is another reasonable reason why you cannot publish it.
- What is the current license of MMCFSv2?
I make clear that we are not asking you to relicense the model. You could publish it with its current license.
Also, even if you are not publishing MMCFSv2, at a minimum please provide us with details about where you store the model's code internally.
Regards,
Juan A. Añel
Geosci. Model Dev. Executive Editor
Citation: https://doi.org/10.5194/gmd-2023-53-CEC4
-
AC3: 'Reply on CEC4', Anguluri Suryachandra rao, 19 May 2023
Dear Editor,
We are uploading the processed input data used for the MMCFSv2 simulations (processed to regrid it to model resolution) at Zenodo's server.
Please find below the links to the data from 1998 to 2012. The input data from 2013 to 2022 are still being uploaded; this is taking a lot of time, and we request a few more days to complete the upload.
Input data from 1998-2000 - https://doi.org/10.5281/zenodo.7935628
Input data from 2001-2003 - https://doi.org/10.5281/zenodo.7947318
Input data from 2004-2006 - https://doi.org/10.5281/zenodo.7947974
Input data from 2007-2009 - https://doi.org/10.5281/zenodo.7948155
Input data from 2010-2012 - https://doi.org/10.5281/zenodo.7949802
The model code in full is made available at https://doi.org/10.5281/zenodo.7905721
The model was built by bringing together four major components:
a) GFS-SL (GPLv3 license) - owner: NCEP
b) NEMS coupler (LGPLv3 license) - owner: NOAA (maintained on GitHub by them, unfortunately)
c) MOM6 ocean model (LGPLv3 license) - owner: GFDL, Princeton (maintained on GitHub by them)
d) CICE5 ice model (CICE Consortium license) - owner: LANL (again maintained on GitHub by LANL)
All the individual model component codes mentioned above allow code modification and redistribution as long as we include their respective license files along with the code. We have uploaded these license files along with the model code (both our own code and the full code). We are not the owners of the above-mentioned code bases; we downloaded these models and wrote/modified the scripts and code needed to make the coupled runs possible.
Our own contribution to the MMCFSv2 code is uploaded at https://doi.org/10.5281/zenodo.7914786, along with all the license files of the individual models. We are ready to put our own contributions under the GPLv3 license, and the above-mentioned components will continue under their respective groups' licenses.
Thanks and Regards,
Deepeshkumar Jain
Citation: https://doi.org/10.5194/gmd-2023-53-AC3
-
AC4: 'Reply on AC3', Anguluri Suryachandra rao, 20 May 2023
Please find the complete processed data that we used to initialize and run the MMCFSv2 simulations from 1998 to 2022 at the below-mentioned repositories:
Input data from 1998 to 2000 - https://doi.org/10.5281/zenodo.7935628
Input data from 2001 to 2003 - https://doi.org/10.5281/zenodo.7947318
Input data from 2004 to 2006 - https://doi.org/10.5281/zenodo.7947974
Input data from 2007 to 2009 - https://doi.org/10.5281/zenodo.7948155
Input data from 2010 to 2012 - https://doi.org/10.5281/zenodo.7949802
Input data from 2013 to 2015 - https://doi.org/10.5281/zenodo.7950855
Input data from 2016 to 2018 - https://doi.org/10.5281/zenodo.7949863
Input data from 2019 to 2021 - https://doi.org/10.5281/zenodo.7950964
Input data for 2022 - https://doi.org/10.5281/zenodo.7951983
Please note that the original raw data belongs to NCEP (Saha, Suranjana, and Coauthors, 2010: The NCEP Climate Forecast System Reanalysis. Bull. Amer. Meteor. Soc., 91, 1015–1057. https://doi.org/10.1175/2010BAMS3001.1).
Thanks and Regards,
Deepeshkumar Jain
Citation: https://doi.org/10.5194/gmd-2023-53-AC4
-
CEC5: 'Reply on AC3', Juan Antonio Añel, 21 May 2023
Dear authors,
Many thanks for your reply and for sharing the necessary code and data. We can now consider the current version of your manuscript compliant with our Code and Data Policy.
Please, do not forget to include in any potentially reviewed version of your manuscript all the information on repositories and versions posted here in Discussions.
Regards,
Juan A. Añel
Geosci. Model Dev. Executive Editor
Citation: https://doi.org/10.5194/gmd-2023-53-CEC5
-
AC5: 'Reply on CEC5', Anguluri Suryachandra rao, 24 May 2023
Thank you for considering the manuscript for discussions.
Thanks and Regards,
Deepeshkumar Jain
Citation: https://doi.org/10.5194/gmd-2023-53-AC5
-
AC6: 'Comment on gmd-2023-53', Anguluri Suryachandra rao, 13 Jun 2023
We have updated Figure 11 of the original manuscript to reflect the following modifications. These modifications do not change the conclusions drawn from the figure. The modifications are as follows:
- We have included the 2022 rainfall forecasts from 6 NMME models (viz. CCSM3, CCSM4, GFDL_aero, CFSV2, GMAO, and SPSIC3), which were not available at the time of writing the original manuscript. The updated figure now has rainfall data from 1998-2022 for MMCFSv1, MMCFSv2, and the above NMME models.
- In the study, we used GPCP and IMD as rainfall observations over India. The GPCP rainfall uses the same set of rain gauge stations for the entire period, while the IMD observations are updated according to the availability of rain gauge data for a particular date and station. Thus GPCP is more consistent than IMD over longer periods. In Figure 11, the normalized standard deviation, anomaly correlation coefficient, and normalized root mean square error for the various NMME models and MMCFS (v1 and v2) are calculated with respect to observations. While the MMCFS (v1 and v2) scores were evaluated against GPCP, the NMME model scores were unintentionally evaluated against the IMD observations. The GPCP and IMD observations have slightly different magnitudes and standard deviations for the reasons mentioned above (as can be seen in the XLSX file uploaded at https://doi.org/10.5281/zenodo.8024087). We have modified Figure 11 of the original manuscript so that the reference data for both MMCFS (v1 and v2) and the NMME models are the same (i.e., GPCP). Please find the figure attached as a supplement.
To reiterate, the conclusion drawn from the figure, namely that MMCFSv2 performs better than the other models when all three scores are considered, remains unchanged.
If given an opportunity to revise the manuscript, we would like to modify the lines in the manuscript discussing Figure 11 to reflect the above modifications.
Original Lines 299 to 315 of the manuscript read:
"Pillai et al., (2018) compared the seasonal prediction skill of ISMR in MMCFSv1 (T382) with the US National Multi-Model Ensemble (NMME) project for the simulation years of 1981-2009. They found that MMCFSv1 has better skill in reproducing interannual variability of ISMR (ACC=0.55) compared to the other NMME models (ACC<0.4) and MMCFSv1 is better at simulating the observed standard deviation of ISMR. The Taylor diagram (Taylor, 2000) in Fig. 11 compares the skill of MMCFS (v1 and v2) in reproducing observed Standard Deviation (SD, normalized), Root Mean Squared Error normalized with observed standard deviation (NRMSE), and the ACC of ISMR for the years of 1998-2022, with the NMME models for 1998-2021 (as 2022 data is not available for NMME models). Figure 11 only shows the NMME models which have data from 1998-2021. There are five models which simulate the observed SD reasonably well (normalized SD approximately 1.0), viz. MMCFS (v1 and v2), CFSv2, GFDL_FLORA, and FLORB. All the other models have lower standard deviations compared to observations. A 10 % deviation from the climatological mean is sufficient to have an excess or a drought monsoon over India (Singh et al., 2015). Hence, getting the NRMSE below 1.0 is crucial. Two models which stand out in terms of NRMSE are GFDL_Aero (0.69) and MMCFSv2 (0.82). All the other models simulate NRMSE larger than 0.85. MMCFSv2 reduces the NRMSE from 1.06 of MMCFSv1 to 0.82 with respect to GPCP, which is about 20 %. Though GFDL_Aero has the lowest NRMSE, it has lower than observed normalized SD of 0.83 compared to 0.96 of MMCFSv2. GFDL_Aero also has lower ACC of 0.46 compared to 0.72 of MMCFSv2. MMCFSv2 has the highest skill in capturing the interannual variability of ISMR (ACC=0.72) compared to all the other models. Hence, in terms of SD, NRMSE, and the ACC, MMCFSv2 stands out compared to all the other NMME models and the MMCFSv1."
These lines need to be modified to reflect the changes in the scores and in the classification of the NMME models on the Taylor diagram. The modified text will read as follows:
"Pillai et al., (2018) compared the seasonal prediction skill of ISMR in MMCFSv1 (T382) with the US National Multi-Model Ensemble (NMME) project for the simulation years of 1981-2009. They found that MMCFSv1 has better skill in reproducing interannual variability of ISMR (ACC=0.55) compared to the other NMME models (ACC<0.4) and MMCFSv1 is better at simulating the observed standard deviation of ISMR. The Taylor diagram (Taylor, 2000) in Fig. 11 compares the skill of MMCFS (v1 and v2), and NMME models in reproducing observed Standard Deviation (SD, normalized), Root Mean Squared Error normalized with observed standard deviation (NRMSE). Of these NMME models, GFDL_FLORA, GFDL_FLORB, and SPISV2 have data for the years of 1998-2021. We found that removal of year 2022 from other models does not change the scores significantly. There are five models which simulate the observed SD reasonably well (normalized SD approximately 1.0), viz. MMCFSv2, GFDL_Aero, SIPSv2, SPSIC3, GMAO. All the other models have comparatively larger or smaller standard deviations with respect to observations. A 10 % deviation from the climatological mean is sufficient to have an excess or a drought monsoon over India (Singh et al., 2015). Hence, getting the NRMSE below 1.0 is crucial. Two models which stand out in terms of NRMSE are MMCFSv2 (0.82) and GFDL_Aero (0.85). All the other models simulate NRMSE larger than 0.85. MMCFSv2 reduces the NRMSE from 1.04 of MMCFSv1 to 0.82 with respect to GPCP, which is about 20 %. GFDL_Aero also has lower ACC of 0.53 compared to 0.72 of MMCFSv2. MMCFSv2 has the highest skill in capturing the interannual variability of ISMR compared to all the other models. Hence, in terms of SD, NRMSE, and the ACC, MMCFSv2 stands out compared to all the other NMME models and the MMCFSv1."
Thanks and Regards,
Deepeshkumar Jain
-
CC1: 'Comment on gmd-2023-53', Arindam Chakraborty, 15 Jun 2023
a a
Citation: https://doi.org/10.5194/gmd-2023-53-CC1
-
RC1: 'Comment on gmd-2023-53', Anonymous Referee #1, 15 Jun 2023
Two versions of a coupled climate model were compared regarding their prediction skill of the mean climate, the Indian summer monsoon rainfall (ISMR), and the associated teleconnections. The model is the Monsoon Mission CFS, with version 1 and version 2. Twenty-five years of coupled seasonal hindcasts, starting in 1998, were performed from April to September. The v1 model uses 12 ensemble members, whereas the v2 model uses 10. MMCFSv1 used a horizontal resolution of T382, and that for MMCFSv2 was T574. Beyond this, there are many differences between the v1 and v2 versions, including the ocean model and the coupler. The spatial and vertical structures of different fields were presented for the mean conditions and for the year-to-year co-variability. The authors state that there is an improvement in the skill of the model in predicting the seasonal mean rainfall over Indian land. This is in spite of the fact that the well-known dry bias of the model did not improve from v1 to v2.
The major problem of this manuscript is that it compares two versions of the model with too many changes, including the physics, the coupling method, and the resolution. Previous studies have shown that changing the horizontal resolution can have impacts on the model's mean conditions and teleconnections. I believe the differences between MMCFSv1 and MMCFSv2 shown in this study will fall well within the differences of changing only the resolution of the model. Thus, I suggest simulations with exactly the same resolution so that the results can be interpreted from the point of view of the other components (version) and not merely from the resolution. My other specific comments are given below.
Specific Comments:
- Simulations used a time length of 25 years (1998-2022). It is appreciable that the authors used recent years. However, this choice does not include some critical years like 1994, 1997, and 1983 when the CFSv2 model is known to have difficulty predicting seasonal mean conditions. It is because, for example, despite an El Nino, the summer monsoon was normal due to the positive Indian Ocean Dipole.
- The above comment raises the question of whether a 25-year simulation is long enough to establish that one model (version) is better than another. A short simulation misses several years of extreme and usual conditions. Other studies using CFSv2 used data sets starting in 1981/1982 (e.g., Ramu et al. 2016, Pillai et al. 2018). It should be possible to extend the simulations back to the earliest available initial conditions for the robustness of the conclusions. Are the two correlation coefficients of the seasonal mean time series significantly different?
- Most of the results were presented and discussed through eyeball comparison. For example, Fig 2 shows horizontal wind vectors at 850 hPa. The text says there is an improvement of the Somali jet in MMCFSv2 relative to MMCFSv1. What is the definition of the Somali Jet? At what height is it maximum, and what is the three-dimensional wind structure? It is only possible to make a conclusive statement with a definition and quantitative assessment of the phenomenon.
- A similar argument can be given for Fig 4, where the zonal mean vertical structure of zonal wind is compared. The changes in MMCFSv2 from MMCFSv1 are both toward and away from the observations. It is better to show a pattern correlation between the total winds to demonstrate if one model is better than the other.
- Line 29: “The standard deviation of … high mean precipitation.” - The figure suggests that the standard deviation is high over regions with a high mean; over South Asia, this happens to be the northern Bay of Bengal. A scatter plot of mean vs variance would reveal this. In fact, over Indian land, the ratio of variance to mean will be higher than that over the north Bay of Bengal. Thus, the statement made here is not correct.
- Fig 6: The difference panels (right column) show most of the bias near the equator within 2-3 degrees. Please reduce the interval of the colour scale to capture details of these biases.
- In general, MMCFSv2 is particularly warmer than MMCFSv1 near the surface (Fig 6 and 7). A detailed reason needs to be presented to understand these changes. MMCFSv2 has several differences in modelling compared to MMCFSv1 (Table 1). Which of these components is responsible for this switch in surface temperature bias from largely cold to largely warm, especially in the northern hemisphere?
- Fig 10 suggests that overall there is an improvement (in correlation) between the model and observation in the latest version. However, there are years like 2000, 2004, and 2019 when v2's anomaly was opposite to that observed but v1's anomaly was correct. One key characteristic of a good seasonal forecast is to capture the seasonal mean extremes. A scatter plot of observation vs model would bring out if there's any difference between the two versions of the model in this respect.
- In Table 3, how is the correlation over the western parts of the equatorial Indian Ocean?
- It is unclear what is the reason behind improved skill (interannual correlation of seasonal mean). Does it come because of a better simulation of climate patterns or its teleconnection to the monsoon?
Citation: https://doi.org/10.5194/gmd-2023-53-RC1
-
RC2: 'Comment on gmd-2023-53', Anonymous Referee #2, 03 Jul 2023
This study introduced the Monsoon Mission Coupled Forecast System Version 2.0 and compared its hindcast results with those of the previous version for the recent 25 years. MMCFSv2 simulates the tropical wind and rainfall better than the v1 model, while the temperature fields became worse. MMCFSv2 captures significant features of the Indian monsoon, including the intensity and location of the maximum precipitation centers and the large-scale monsoon circulation. The 25-year hindcast results from the v2 model are compared with those from v1, and it is found that the v2 model improves the simulation skill in rainfall pattern and amplitude.
This manuscript is titled as a model description, but there is no detailed information about the key model configurations. There is no information on what has been changed relative to the original component models, or whether the v2 model is just an integration of existing component models. The manuscript describes the basic performance of the v2 model in simulating the mean states and the interannual variation, while the reasons for these improvements are not well presented.
Major comments:
- Based on the description of the MMCFSv2 model, it coupled the MOM6, GFS-SL, and CICE5 together. Is there any model tuning before the hindcast simulation?
- The evaluation of the mean states shows bias in SST, circulation, and precipitation. The linkage among those biases should be discussed in detail, especially for the v2 model.
- There are large differences between the v1 and v2 models. What is the major cause of those changes?
- The evaluation of the mean states of the MMCFS v2 show that it has a larger bias in the SST and surface temperature compared to the MMCFS v1 model. However, the v2 model shows better performance in simulating the 850 hPa and 200 hPa circulations and precipitation. Why and how can the v2 improve precipitation with degradation in SST and surface temperature?
- A skillful seasonal prediction relies on reliable data assimilation for the initial conditions. The ICs are all obtained from NCEP CFSR. Does the CFSR have the same component models and the same resolutions as MMCFS v2? What is the difference between the CFSR and MMCFS v2 in terms of model configurations? Do these two models share similar model performance and bias? If the two models have different mean states, why can the ICs from CFSR be used in the v2 model? What are the impacts of initial shock and model drift on the hindcast results?
- In Fig. 11, please also show the simulated ISMR anomaly for the v1 and v2 models. Based on these limited-period hindcast results, it is hard to say which one is better. In Fig. 11 and L285-286, by comparing the blue and purple bars, MMCFSv2 has the correct sign for 19 years, and MMCFS v1 predicted the correct sign for 18 years. The climate impacts for the extreme years (e.g., anomaly exceeding 10 %) are more significant, and a better prediction is more valuable for extreme years than for normal years. For those extreme dry (e.g., 2002, 2004, 2009, 2015) and wet (2019, 2020) years, the simulation from the v1 model looks better than the v2 model, as shown in Fig. 11.
- The authors claim that the v2 model has better skills in simulating the ISMR. However, why can the v2 model do better in the hindcast experiment? This physical explanation is missing in the current manuscript.
Minor comments:
- It is better to discuss the temperature bias first and then consider its impact on precipitation and circulation. In section 4.1.1, why is there an improvement in circulation? Is it due to the changes in model physics or model resolution?
- L114-115. The initialization of the two versions of models needs to be further documented. The MMCFSv1 and v2 have different resolutions. Does the CFSR system provide all the initial conditions for the v1 and v2 resolution? Is there any data assimilation in preparing the initial condition by the authors?
- L125 What are the initial dates for the 12-member prediction in MMCFSv1? It would be better to introduce it here than refer to a paper.
- In Fig. 6, where is the 0.5K contour line?
- Fig. 6-8 show the MMCFS v2 model has a large bias compared to the previous one. Is this due to the energy bias in the AGCM? Or the problem in air-sea coupling? It is better to reduce this apparent mean bias before the prediction.
- Table 3, please add the significant test for these numbers. What are the definitions of these modes?
- L396, ‘Fig. 14(c)’ should be Fig. 14(b).
- ‘MMCFSv2 captures these teleconnection patterns over the tropical Oceans and the eastern Indian Ocean (Fig. 14 (c)).’. This is not true.
- L403-414, Fig. 15 did not describe the impact of the IOD. Please do not use IOD in this context.
- L444-445, based on Fig. 6-8. This is not true for SST and surface temperature.
Citation: https://doi.org/10.5194/gmd-2023-53-RC2
-
AC7: 'Comment on gmd-2023-53', Anguluri Suryachandra rao, 16 Aug 2023
Please find below our response to the referees' comments.
We thank the reviewers for their useful comments, and we have incorporated those suggestions which can be addressed at this stage. One comment common to the two reviewers is the lack of attribution of the improvements to the resolution, parametrization, or components of the model. It is a well-known fact that attributing improvements or deterioration to any particular component is impossible without carrying out a sensitivity study in which all other configurations are kept the same except for that component; hence, this aspect could not be addressed in this revision. However, the improvement in the seasonal prediction skill of ISMR can be attributed to the improved mean state and its teleconnections.
Referees Comments 1 -Two versions of a coupled climate model were compared regarding their prediction skill of the mean climate, the Indian summer monsoon rainfall (ISMR), and associated teleconnection. The model is Monsoon Mission CFS with version 1 and version 2. Twenty-five years of coupled seasonal hindcasts, starting in 1998, were performed from April to September. The V1 model uses 12 ensemble members, whereas the V2 model uses 10. MMCFSv1 used a horizontal resolution of T382, and that for MMCFSv2 was T574. Despite this, there are many differences between the v1 and v2 versions, including the ocean model and the coupler. The spatial and vertical structure of different fields were presented of the mean conditions and of the year-to-year co-variability. The authors state that there is an improvement in the skill of the model in predicting the seasonal mean rainfall over Indian land. This is in spite of the fact that the well known dry bias of the model did not improve from v1 to v2.
The major problem of this manuscript is it compares two different versions of the model with too many changes, including physics, the coupling method, and resolution. Previous studies have shown that changing horizontal resolution can have impacts on the model’s mean conditions and teleconnection. I believe the differences between MMCFSv1 and MMCFSv2 shown in this study will fall well within the differences of changing only the resolution of the model. Thus, I suggest simulations with exactly the same resolution so that the results can be interpreted from the point of view of other components (version) and not merely from the resolution. My other specific comments are given below.
Reply – MMCFSv1 uses a Eulerian dynamical core while MMCFSv2 uses a semi-Lagrangian one. Thus, the two resolutions mentioned, T382 in MMCFSv1 (Eulerian) and T574 in MMCFSv2 (semi-Lagrangian), correspond to a similar physical resolution of 1152x576 grid points in the horizontal (equivalent to ~38 km).
MMCFSv2 is a major upgrade over MMCFSv1 (even a new model, considering the upgrades) in terms of the framework and the component models. Hence, the manuscript can be regarded as a comparison between simulations by two different models, rather than as a sensitivity study. MMCFSv2 will replace MMCFSv1 for future research work at IITM. We agree that the skill improvements, as well as the limitations shown, such as the high tropospheric temperature and SST, could have come from any of the component upgrades, and that this will need a thorough investigation (a study on its own). Our focus in the present paper was to establish the baseline performance of this new model in simulating the ISMR and the tropical climate relative to MMCFSv1. This baseline will be helpful in defining the scope of many future sensitivity studies.
Specific Comments:1) Simulations used a time length of 25 years (1998-2022). It is appreciable that the authors used recent years. However, this choice does not include some critical years like 1994, 1997, and 1983 when the CFSv2 model is known to have difficulty predicting seasonal mean conditions. It is because, for example, despite an El Nino, the summer monsoon was normal due to the positive Indian Ocean Dipole.
2) The above comment brings to the point that a 25-year simulation is long enough to establish a particular (version) model is better than another. A short simulation misses several years of extremes and usual conditions. Other studies using CFSv2 used data sets from 1981/1982 (e.g., Ramu et al. 2016, Pillai et al. 2018). It should be possible to extend the simulations back to the available initial conditions for the robustness of the conclusions. Are the two correlation coefficients of the seasonal mean time series significantly different?
Reply – We would like to reply to the above two comments together here. The choice of simulation duration was made based on various factors mentioned below.
The MMCFSv2 aims to improve IMD's operational forecast by replacing the old-generation model with a new one. As operational forecasts need verification of the model's performance during the recent period, we have carried out the hindcast experiment for 1998-2022. Recently, many operational centers (IMD, NCEP) have changed their climatology to include recent years; for example, NCEP-CFSv2 seasonal forecasts are now based on 1991-2020. We do agree that resolving the known problems of MMCFSv1 should be one of the major foci of future studies; however, extending the hindcast over such a long duration requires a lot of computational resources, which are not available at this moment. We will address these issues in the future once the required computational and storage resources are available.
Shi et al. (2015) used the Ratio of Predictable Components to determine the length of hindcast simulations sufficient for studying predictability over different global regions. They showed that over the tropical regions (including the South Asian region), a duration of 20 years is sufficient for studying hindcast predictability. Our simulation duration (25 years) satisfies this condition, confirming that the hindcast duration is long enough to include several of the instances mentioned in the comment.
MMCFSv1 is known to have difficulty in capturing critical years like 1994, 1997, and 1983. These years were characterized by El Nino and a positive IOD. The years 2012, 2015, and 2019 in our simulation period are similar El Nino years with a positive IOD. Out of these three years, MMCFSv1 could not capture 2012 and 2019, while MMCFSv2 could not capture 2019.
3) Most of the results were presented and discussed through eyeball comparison. For example, Fig 2 shows horizontal wind vectors at 850 hPa. The text says there is an improvement of the Somali jet in MMCFSv2 to MMCFSv1. What is the definition of the Somali Jet? At what height is it maximum, and what is the three-dimensional wind structure? It is only possible to make a conclusive statement with a definition and quantitive assessment of the phenomenon.
Reply – The Somali Jet (most intense at 800-900 hPa) is the low-level southwesterly jet over the Arabian Sea, off the coast of Somalia, in the summer months. The difference in wind speed at 850 hPa, averaged over the 10-15N, 45-50E box, between MMCFSv2 and ERA5 is smaller (0.09 m/s) than that of MMCFSv1 (-0.4 m/s). Similarly, all the results were analyzed both qualitatively and quantitatively (with various statistics).
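A box average of this kind can be obtained, for example, as in the sketch below; the xarray datasets (era5, model_v1, model_v2) and variable names (u, v, lat, lon) are placeholders and are not taken from the actual analysis scripts.

```python
import numpy as np
import xarray as xr

def box_mean_speed(ds, lat=(10, 15), lon=(45, 50)):
    """Area-weighted mean 850 hPa wind speed over a latitude-longitude box
    (assumes ascending latitude and longitude coordinates)."""
    speed = np.sqrt(ds["u"] ** 2 + ds["v"] ** 2)
    box = speed.sel(lat=slice(*lat), lon=slice(*lon))
    weights = np.cos(np.deg2rad(box["lat"]))          # simple cos(latitude) area weighting
    return box.weighted(weights).mean(("lat", "lon"))

# Somali jet strength bias of each model version relative to ERA5:
# bias_v2 = float(box_mean_speed(model_v2) - box_mean_speed(era5))
# bias_v1 = float(box_mean_speed(model_v1) - box_mean_speed(era5))
```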
The difference between the observed and simulated winds shown in the manuscript was used to conclude that the Findlater jet seen at 850 hPa is better simulated by MMCFSv2. We have included a zoomed-in picture of the winds at 925, 850, 700, and 500 hPa (Figure 1a-d), with colored wind magnitude, as a supplement to this reply. Both versions of MMCFS capture this jet at 850 hPa. The difference between the MMCFSv2 winds and ERA5 at 850 hPa is smaller than that of MMCFSv1 over most of the tropical Indian Ocean region.
4) A similar argument can be given for Fig 4, where the zonal mean vertical structure of zonal wind is compared. The changes in MMCFSv2 from MMCFSv1 are both toward and away from the observations. It is better to show a pattern correlation between the total winds to demonstrate if one model is better than the other.
Reply – We computed the pattern correlation between the observed and simulated total zonal-mean winds. Both MMCFSv1 and MMCFSv2 have a pattern correlation of 0.99; as can be inferred from this, both models have a zonal-mean wind pattern close to the observations. The difference plot (Fig. 4 in the manuscript) captures much more detail of the simulated wind biases relative to the observations.
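For completeness, a centred (optionally weighted) pattern correlation of the kind quoted above can be computed as in the following sketch; the 2-D arrays obs2d and sim2d (for example a pressure-latitude section of zonal-mean zonal wind) and the weights w are placeholders.

```python
import numpy as np

def pattern_correlation(obs2d, sim2d, w=None):
    """Centred pattern correlation between two 2-D fields, optionally weighted
    (e.g. by cos(latitude) or layer mass)."""
    w = np.ones_like(obs2d) if w is None else w
    oa = obs2d - np.average(obs2d, weights=w)
    sa = sim2d - np.average(sim2d, weights=w)
    cov = np.average(oa * sa, weights=w)
    return cov / np.sqrt(np.average(oa**2, weights=w) * np.average(sa**2, weights=w))
```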
5) Line 29: “The standard deviation of … high mean precipitation.” - The figure suggests that the standard deviation is high over regions with a high mean. It happens to be over the northern Bay of Bengal over south Asia. A scatter plot of mean vs variance would reveal this. In fact, over Indian land, the ratio of variance to mean will be higher than that over the north Bay of Bengal. Thus, the statement made here is not correct.
Reply – The ratio of variance (and of standard deviation) to mean (shown in the supplement) also suggests that, compared to the variability over the oceans (including the BoB), the variability over Indian land is low. This was shown in Fig. 1 of the manuscript. As suggested, we plotted the variance, the standard deviation, the ratio of variance to mean, and the ratio of standard deviation to mean. Please find these figures in the attached supplement (Figure 2a-c). Even the ratios suggest that the variability over Indian land is lower than over the oceans. Our aim in writing this (line 29) was to emphasize the difficulty faced by models in predicting the low variability of mean ISMR over land.
6) Fig 6: The difference panels (right column) show most of the bias near the equator within 2-3 degrees. Please reduce the interval of the colour scale to capture details of these biases.
Reply – Please find revised Fig. 6 (Figure 3 of the supplement) in which we have adjusted the contour interval to capture more details over the equator.
7) In general, MMCFSv2 is particularly warmer than MMCFSv1 near the surface (Fig 6 and 7). A detailed reason is necessary to present to understand the changes. MMCFSv2 has several differences in modelling compared to MMCFSv1 (Table 1). Which of these components is responsible for this switch in surface temperature bias from largely cold to largely warm, especially in the northern hemisphere?
Reply – Please refer to Figures 4a and 4b of the attached supplement. Except over the equatorial Pacific Ocean (EPO), MMCFSv1 simulates deeper mixed layer depths than observed (C-GLORS). MMCFSv2 improves on this bias of MMCFSv1, with a shallower MLD compared to observations, except over the EPO, where the bias remains similar.
Based on the reviewer's suggestion, we carried out a heat budget analysis of the ocean mixed layer (Figure 4) and found that the shallower MLD of MMCFSv2 is the major cause of the higher SST: for a given Qnet (net energy input to the mixed layer), a shallower MLD results in a warmer SST.
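The slab mixed-layer argument can be made explicit with the standard bulk relation dSST/dt ≈ Qnet / (rho * cp * h): for the same Qnet, a shallower mixed layer of depth h warms faster. A minimal numerical illustration is sketched below; the flux and depth values are purely illustrative and are not taken from the model output.

```python
# Slab mixed-layer warming rate: dSST/dt = Qnet / (rho * cp * h)
RHO = 1025.0   # sea-water density (kg m-3)
CP = 3985.0    # sea-water specific heat capacity (J kg-1 K-1)

def warming_rate(qnet, mld):
    """SST tendency (K per day) of a slab mixed layer of depth mld (m)
    receiving a net surface heat flux qnet (W m-2)."""
    return qnet / (RHO * CP * mld) * 86400.0

# Same Qnet, different mixed-layer depth: the shallower layer warms faster.
print(warming_rate(50.0, 30.0))   # ~0.035 K/day for a 30 m mixed layer
print(warming_rate(50.0, 60.0))   # ~0.018 K/day for a 60 m mixed layer
```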
8) Fig 10 suggests that overall there is an improvement (in correlation) between the model and observation in the latest version. However, there are years like 2000, 2004, and 2019 when v2's anomaly was opposite to that observed but v1's anomaly was correct. One key characteristic of a good seasonal forecast is to capture the seasonal mean extremes. A scatter plot of observation vs model would bring out if there's any difference between the two versions of the model in this respect.
Reply – The scatter plot in Figure 5a (of the supplement) will now be added to the manuscript. From the scatter plot it is evident that many observed normal years were predicted as extremes in v1. Hence, we calculated the false alarm rates and the hit rates for both models. We used two criteria for defining normal years, viz. 10 % and 5 % departure from the climatological mean. Table 5 in the supplement summarizes the false alarm and hit rates. As seen from the table, MMCFSv1 has a higher false alarm rate and a lower hit rate than MMCFSv2.
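The hit and false-alarm rates referred to here can be computed from the percentage departures of ISMR from the climatological mean roughly as sketched below. The array names are placeholders, extremes are defined by the ±10 % (or ±5 %) departure criterion mentioned above, and the false-alarm rate is taken as the fraction of observed non-extreme years predicted as extreme, which is one common definition among several.

```python
import numpy as np

def extreme_scores(obs_pct, model_pct, threshold=10.0):
    """Hit rate and false-alarm rate for extreme monsoon years, defined as
    years whose |percentage departure from climatology| exceeds the threshold."""
    obs_ext = np.abs(obs_pct) > threshold
    mod_ext = np.abs(model_pct) > threshold
    hits = np.sum(obs_ext & mod_ext)                 # extremes correctly predicted
    misses = np.sum(obs_ext & ~mod_ext)              # extremes predicted as normal
    false_alarms = np.sum(~obs_ext & mod_ext)        # normal years predicted as extreme
    hit_rate = hits / max(hits + misses, 1)
    false_alarm_rate = false_alarms / max(np.sum(~obs_ext), 1)
    return hit_rate, false_alarm_rate

# Example: hr, far = extreme_scores(obs_departure_pct, v1_departure_pct, threshold=10.0)
```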
9) In Table 3, how is the correlation over the western parts of the equatorial Indian Ocean?
Reply – In both models, the western part of the equatorial Indian Ocean is positively correlated. The MMCFSv1 skill (SST) over the western IOD box (10S-10N, 50-70E) is 0.44, and the MMCFSv2 skill (SST) over the same box is 0.40.
10) It is unclear what is the reason behind improved skill (interannual correlation of seasonal mean). Does it come because of a better simulation of climate patterns or its teleconnection to the monsoon?
Reply – The mean state of the atmosphere has improved, both in terms of precipitation and circulation (850 hPa winds). This has resulted in improved teleconnections (Figure 16). The pattern correlation between the spatial structures of the teleconnections (in Figure 16 of the manuscript) has improved from 0.38 in MMCFSv1 to 0.60 in MMCFSv2. Hence the interannual variability skill has improved.
Referees Comments 2 :
This study introduced the Monsoon Mission Coupled Forecast System Version 2.0 and compared its hindcast results with the previous version for the recent 25 years. The MMCFS v2 simulates better tropical wind and rainfall compared to the v1 model, while the temperature fields became worse. MMCFSv2 captures significant features of the Indian monsoon, including the intensity and location of the maximum precipitation centers and the large-scale monsoon circulation. The 25-y hindcast results from the v2 model are compared with that from v1, and found that the v2 model improves the simulation skill in rainfall pattern and amplitude.
This manuscript is titled with a model description, but there is no detailed information about the key model configurations. There is no information regarding what has been changed compared to the original component model, or the v2 model is just an integration of existing component models. The manuscript described the basic performance of the v2 model in simulating the mean states and interannual variation, while the reason for these improvements is not well present.
Reply – Details will be provided in Table 1 (revised) and in the corresponding description in the manuscript.
Major comments:
1) Based on the description of the MMCFSv2 model, it coupled the MOM6, GFS-SL, and CICE5 together. Is there any model tuning before the hindcast simulation?
Reply – There was no tuning done before carrying out the simulations. These are the very first hindcasts which will guide us in tuning the model in future simulations. This dataset will be the baseline for future sensitivity studies with MMCFSv2. The first of these will be correcting the SST and temperature bias.
2) The evaluation of the mean states shows bias in SST, circulation, and precipitation. The linkage among those biases should be discussed in detail, especially for the v2 model.
Reply – The biases in SST come from the shallower MLD simulated by MMCFSv2 (Figure 4a and 4b of the supplement). The improved circulation of MMCFSv2 is most likely the result of improved convective centers relative to MMCFSv1 (Figure 9, manuscript). The revised manuscript will have more details on the linkages between the biases. We would, however, like to note that establishing these linkages (cause and effect) in a highly non-linear coupled model is not trivial and is a study in its own right. Our focus in the present manuscript is to document the performance of MMCFSv2 and to explore the opportunities for future research.
3) There are large differences between the v1 and v2 models. What is the major cause of those changes?
Reply – The biggest change which, we believe, has contributed the most to the differences is the MOM6 ocean model. MOM6 is running at a higher resolution than MOM4. Improvements brought by MOM6 over MOM4 include the use of a C-grid stencil instead of a B-grid stencil; the C-grid stencil is preferred for simulations involving an active mesoscale eddy field. MOM6 uses scale-aware parameterizations for mesoscale eddy-permitting regimes. As we see from the results, MOM6 produces significantly different SST patterns. The difference in the SST pattern is the result of the shallower MLD. The better winds are the result of better convective centers of precipitation. The better ISMR skill of the model is the result of better teleconnections (Fig. 16 of the manuscript). However, it is impossible to attribute the improvements to any particular component of the model, as all the components are coupled to each other and, as a result, biases in one component can influence the others.
4) The evaluation of the mean states of the MMCFS v2 show that it has a larger bias in the SST and surface temperature compared to the MMCFS v1 model. However, the v2 model shows better performance in simulating the 850 hPa and 200 hPa circulations and precipitation. Why and how can the v2 improve precipitation with degradation in SST and surface temperature?
Reply – If we consider only the SST magnitudes, then there is a degradation in the MMCFSv2 simulations. However, the SST gradients (both zonal and meridional) show similar structures between v1 and v2. Please see the mean SST gradients in supplementary Figures 6(a-b). As much as the overall magnitudes, we believe the gradients and deep moist convection play a significant role in establishing global circulation patterns (Lindzen and Nigam 1987; Back and Bretherton 2009; Wallace 1989; Chelton et al. 2004). We also found that the convective centers are better simulated in MMCFSv2. This has resulted in better 850 hPa winds.
5) A skillful seasonal prediction relies on reliable data assimilation for the initial conditions. The ICs are all obtained from NCEP CFSR. Does the CFSR have the same component models and the same resolutions as MMCFS v2? What is the difference between the CFSR and MMCFS v2 in terms of model configurations? Do these two models share similar model performance and bias? If the two models have different mean states, why can the ICs from CFSR be used in the v2 model? What are the impacts of initial shock and model drift on the hindcast results?
Reply – CFSR uses the same model configuration as MMCFSv1 (viz. MOM4, GFS-EL, and SIS). MMCFSv2 has an upgraded setup (MOM6, GFS-SL, and CICE5) compared to CFSR. We re-gridded the ocean initial conditions from CFSR to the MOM6 grid, and we also re-gridded the atmospheric initial conditions for use in GFS-SL.
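One way to carry out such offline regridding of the CFSR initial conditions onto the MOM6 (or GFS-SL) grid is sketched below with xESMF; the file names and the choice of bilinear interpolation are assumptions made for illustration only and do not document the exact procedure used for the hindcasts.

```python
import xarray as xr
import xesmf as xe  # regridding package built on ESMF

# Placeholder file names: a CFSR ocean initial-condition file and a file
# describing the MOM6 target grid (both must provide lat/lon coordinates).
ds_in = xr.open_dataset("cfsr_ocean_ic.nc")
ds_target = xr.open_dataset("mom6_grid.nc")

# Build a bilinear regridder from the CFSR grid to the MOM6 grid and apply it
# to all spatial variables in the input dataset.
regridder = xe.Regridder(ds_in, ds_target, method="bilinear")
ds_out = regridder(ds_in)
ds_out.to_netcdf("mom6_ocean_ic.nc")
```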
The effects of the initial shock in MMCFSv1 were studied by Shukla et al. (2018). They used the latent heat flux over the Arabian Sea as one of the major variables for their analysis. Carrying out a similar analysis is not possible with the current MMCFSv2 setup. However, to address the reviewer's query, we have compared the effect of the initial shock using a two-member ensemble mean. Please note that we will not include this in the manuscript, as it is not a complete analysis.
We took three years, viz. 2002 (deficit), 2003 (normal), and 2010 (excess) ISMR years. We took the mean of the 1 April 00Z and 12Z initial-condition simulations and computed its difference from the mean of the 21 April 00Z and 12Z initial-condition simulations for the latent heat flux over the Arabian Sea (8-16N, 54-74E). To our surprise, the difference (Figure 8 of the supplement) shows a larger initial shock in LHF in MMCFSv1 than in MMCFSv2.
6) In Fig. 11, please also show the simulated ISMR anomaly for the v1 and v2 models. Based on this limited time period hindcast results, it is hard to say which one is better. In Fig. 11 and L285-286, by comparing the blue and purple bars, MMCFSv2 has the correct sign for 19 years. And MMCFS v1 predicted the correct sign for 18 years. The climate impacts for the extreme years (e.g., anomaly exceeding 10%) are more significant. A better prediction is more valuable for extreme years than normal years. For those extreme dry (e.g., 2002, 2004, 2009, 2015) and wet (2019, 2020) years, the simulation from the v1 model looks better than the v2 model, as shown in Fig. 11.
Reply – Figure 8 of the supplement shows the interannual variability of the observed and simulated ISMR anomalies. Yes, for the years 2000, 2004, and 2019, the v2 anomaly was opposite to that observed and v1 was correct. Conversely, for 2008, 2011, 2017, and 2018, v2 got the sign correct and v1 did not. The attached scatter plot in Figure 5a (percentage departure) highlights the years mentioned here as well as other extreme years such as 2007 (v2 is better than v1), 2010 (v1 and v2 give similar results), 2002, and 2009. Figure 5a of the supplement also shows many normal years that were wrongly predicted as extreme years by MMCFSv1. Hence, we calculated the false alarm rates and the hit rates for both models. We used two criteria for defining normal years, viz. 10 % and 5 % departure from the climatological mean. Table 5 of the supplement summarizes the false alarm and hit rates. As seen in the table, MMCFSv1 has a higher false alarm rate and a lower hit rate than MMCFSv2.
7) The authors claim that the v2 model has better skills in simulating the ISMR. However, why can the v2 model do better in the hindcast experiment? This physical explanation is missing in the current manuscript.
Reply – The mean state of the atmosphere has improved, both in terms of precipitation and circulation (850 hPa winds). This has resulted in improved teleconnections (Figure 16). The pattern correlation between the spatial structures of the teleconnections in Figure 16 (manuscript) has improved from 0.38 in MMCFSv1 to 0.60 in MMCFSv2. Hence the interannual variability skill has improved.
Minor comments:
8) It is better to discuss the temperature bias first and then consider its impact on precipitation and circulation. In section 4.1.1, why is there an improvement in circulation? Is it due to the changes in model physics or model resolution?
Reply – We thank the referee for this suggestion; it will improve the presentation of the results significantly. We will rearrange the discussion so that the temperature biases are discussed before discussing the circulation.
9) L114-115. The initialization of the two versions of models needs to be further documented. The MMCFSv1 and v2 have different resolutions. Does the CFSR system provide all the initial conditions for the v1 and v2 resolution? Is there any data assimilation in preparing the initial condition by the authors?
Reply – The same CFSR initial conditions were used for both models. Other than regridding, we have not done any data assimilation on our side before carrying out these simulations.
10) L125 What are the initial dates for the 12-member prediction in MMCFSv1? It would be better to introduce it here than refer to a paper.
Reply – The initial dates for these simulations were the same for v1 and v2 (mentioned in the manuscript), except that v1 had two additional ensemble members starting from 00Z and 12Z of 26 April.
11) In Fig. 6, where is the 0.5K contour line?
Reply – We have updated the Figure with more contour lines. Please see Figure 3 of supplementary.
12) Fig. 6-8 show the MMCFS v2 model has a large bias compared to the previous one. Is this due to the energy bias in the AGCM? Or the problem in air-sea coupling? It is better to reduce this apparent mean bias before the prediction.
Reply – All coupled climate models have biases, and it is not possible to carry out simulations only after the biases have been reduced or removed. CMIP6 models also suffer from warm SST biases (Zhang et al. 2023) similar to those of MMCFSv2. The warm SST bias is one of the first biases that will be addressed in future work on MMCFSv2. The warmer SST in MMCFSv2 can be explained in terms of the shallower MLD simulated by the model for a given Qnet (Figure 4a and 4b).
13) Table 3, please add the significant test for these numbers. What are the definitions of these modes?
Reply – Given an opportunity to revise the manuscript, we will add the significance test scores. We will also include the definitions of these modes.
14) L396, ‘Fig. 14(c)’ should be Fig. 14(b).
Reply – Will be corrected in the revised manuscript.
15) ‘MMCFSv2 captures these teleconnection patterns over the tropical Oceans and the eastern Indian Ocean (Fig. 14 (c)).’. This is not true.
Reply – Will be corrected in the revised manuscript.
16) L403-414, Fig. 15 did not describe the impact of the IOD. Please do not use IOD in this context.
Reply – Will be corrected in the revised manuscript.
17) L444-445, based on Fig. 6-8. This is not true for SST and surface temperature.
Reply – Will be corrected in the revised manuscript.
References -
1) Shi, W., et al.: "Impact of hindcast length on estimates of seasonal climate predictability." Geophysical Research Letters 42.5 (2015): 1554-1559.
2) Sridevi, Ch., et al.: "Rainfall forecasting skill of GFS model at T1534 and T574 resolution over India during the monsoon season." Meteorology and Atmospheric Physics 132 (2020): 35-52.
3) Shukla, R. P., Huang, B., Marx, L., et al.: "Predictability and prediction of Indian summer monsoon by CFSv2: implication of the initial shock effect." Climate Dynamics 50 (2018): 159-178. https://doi.org/10.1007/s00382-017-3594-0
4) Zhang, Qibei, et al.: "Understanding models' global sea surface temperature bias in mean state: from CMIP5 to CMIP6." Geophysical Research Letters 50.4 (2023): e2022GL100888.