The atmosphere-ocean general circulation model EMAC-MPIOM



Introduction
Coupled atmosphere-ocean general circulation models (AO-GCMs) are essential tools in climate research. They are used to project the future climate and to study the actual state of our climate system (Houghton et al., 2001). An AO-GCM comprises an atmospheric general circulation model (A-GCM), including a land-surface component, and an ocean general circulation model (O-GCM), including a sea-ice component. In addition, biogeochemical components can be added, for example, if constituent cycles, such as the carbon, sulfur or nitrogen cycle, are to be studied. Historically, the different model components have mostly been developed independently and connected at a later stage to create AO-GCMs (Valcke, 2006; Sausen and Voss, 1996). However, as indicated by the Fourth Assessment Report of the Intergovernmental Panel on Climate Change (IPCC AR4), no model used in the AR4 presented a complete and online calculation of atmospheric chemistry. The main motivation of this work is to provide such a model to the scientific community, which is essential to effectively study the intricate feedbacks between atmospheric composition, element cycles and climate.
Here, a new coupling method between the ECHAM/MESSy Atmospheric Chemistry (EMAC) model (Roeckner et al., 2006; Jöckel et al., 2006, ECHAM5 version 5.3.02) and the ocean model MPIOM (Marsland et al., 2003, version 1.3.0) is presented, with the coupling based on the Modular Earth Submodel System (MESSy2, Jöckel et al., 2010). In the present study, only the dynamical coupling is discussed. Hence, EMAC is so far only used as an AO-GCM, i.e. all processes relevant for atmospheric chemistry included in EMAC are switched off. This first step towards including an explicit calculation of atmospheric chemistry in a climate model is needed to test the coupling, i.e. the option to exchange a large amount of data between the model components, and to maintain optimal performance of the coupled system.
In Sect. 2, different coupling methods are briefly reviewed, followed (Sect. 3) by a technical description of the method used in this study. A run-time performance analysis of the model system is presented in Sect. 4, and in Sect. 5, results from EMAC-MPIOM are shown in comparison to other models and observations.
Coupling methods

As sketched in Fig. 1, at least two different methods exist to couple the components of an AO-GCM:

- internal coupling: the different components of the AO-GCM are part of the same executable and share the same parallel decomposition topology. In an operator splitting approach, the different components (processes) are calculated in sequence. This implies that each task collects the required information and performs the interpolation between the grids.
- external coupling: the different components (generally an atmosphere GCM and an ocean GCM) of the AO-GCM are executed as separate tasks, at the same time, i.e. in parallel. An additional external coupler program synchronises the different component models (w.r.t. simulation time) and organises the exchange of data between them. This involves the collection of data, the interpolation between different model grids, and the redistribution of data.
External coupling is the most widely used method, e.g. by the OASIS coupler (Valcke et al., 2006; Valcke, 2006). The OASIS coupler is used, for example, in the ECHAM5/MPIOM coupled climate model of the Max Planck Institute for Meteorology (Jungclaus et al., 2007) and in the Hadley Centre Global Environment Model (Johns et al., 2006). Also the Community Climate System Model 3 (CCSM3, Collins et al., 2006) adopts a similar technique for the information exchange between its different components. Internal coupling, in contrast, is widely used in the US, e.g. in the new version of the Community Climate System Model 4 (CCSM4, Gent et al., 2011) and in the Earth System Modeling Framework (ESMF, Collins et al., 2005).
Following the MESSy standard (Jöckel et al., 2005) and its modular structure, the internal coupling method is the natural choice to couple EMAC and MPIOM. In fact, the aim of the MESSy system is to implement the processes of the Earth System as submodels. Hence, the coupling routines have been developed as part of the MESSy infrastructure in a separate submodel (see the A2O submodel below).

MPIOM as MESSy submodel
According to the MESSy standard definition, a single time manager clocks all submodels (= processes) in an operator splitting approach. The MPIOM source code files are compiled and archived as a library. Minor modifications were required in the source code, all enclosed in preprocessor directives (#ifdef MESSY), which allows reproducing the legacy code if compiled without this definition. About 20 modifications in 11 different files were required. The majority of these modifications restrict write statements to one PE (processor), in order to reduce the output to the logfile. The main changes in the original source code modify the input of the initialisation fields (salinity and temperature from the Levitus climatology), with which the ocean model can now be initialised at any date. Another main modification is related to the selection of various parameters for coupled and non-coupled simulations. In the original MPIOM code, this selection was implemented with preprocessor directives, hence reducing the model flexibility at run-time. In the EMAC-MPIOM coupled system, the preprocessor directives have been substituted by a logical namelist parameter, and in one case (growth.f90) the routines for the coupled case were moved to a new file (growth_coupled.f90).
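The following minimal sketch illustrates the two modification patterns described above. All names are illustrative; it does not reproduce the actual MPIOM code:

```fortran
! Illustrative sketch only (names do not reproduce the actual MPIOM code):
! restricting log output to one task, and replacing a compile-time coupling
! switch by a run-time namelist logical. Requires preprocessing (.F90 file).
PROGRAM coupling_switch_sketch
  IMPLICIT NONE
  INTEGER, PARAMETER :: p_io = 0   ! task that is allowed to write log output
  INTEGER :: p_pe = 0              ! rank of this task (set by MPI in reality)
  LOGICAL :: lcoupled = .FALSE.    ! run-time switch, read from a namelist
  NAMELIST /octl/ lcoupled

#ifdef MESSY
  ! MESSy mode: only one PE writes to the logfile
  IF (p_pe == p_io) WRITE(*,*) 'initialising ocean'
#else
  ! legacy mode: every PE writes
  WRITE(*,*) 'initialising ocean'
#endif

  ! formerly selected at compile time via a preprocessor directive
  IF (lcoupled) THEN
     WRITE(*,*) 'coupled setup: fluxes received from the atmosphere'
  ELSE
     WRITE(*,*) 'stand-alone setup: fluxes read from forcing files'
  END IF
END PROGRAM coupling_switch_sketch
```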
The main program (mpiom.f90) is eliminated and substituted by a MESSy submodel interface (SMIL) module (messy_mpiom_e5.f90). This module mimics the time loop of MPIOM with calls to the main entry points of those subroutines which calculate the ocean dynamics. Three phases are distinguished for the entry points: initialisation, time integration and finalisation. The MPIOM library is linked to the model system, operating as the submodel core layer of the MPIOM submodel. Following the MESSy standard, a strict separation of the process formulations from the model infrastructure (e.g. time management, I/O, parallel decomposition, etc.) was implemented. I/O units, for example, are generated dynamically at run-time. In addition, the two model components (EMAC and MPIOM) use the same high-level API (application programmers interface) to the MPI (Message Passing Interface) library. This implies that the same subroutines (from mo_mpi.f90) are used for the data exchange between the tasks in MPIOM and EMAC, respectively.
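Schematically, the SMIL module provides one subroutine per entry point. The following skeleton is a sketch with simplified, assumed names; it is not the actual content of messy_mpiom_e5.f90:

```fortran
! Schematic SMIL skeleton (entry points only); names are illustrative.
MODULE messy_mpiom_e5_sketch
  IMPLICIT NONE
  PRIVATE
  PUBLIC :: mpiom_initialize, mpiom_global_start, mpiom_free_memory
CONTAINS
  SUBROUTINE mpiom_initialize        ! initialisation phase
    ! read namelists, set up grid, decomposition and Levitus start fields
  END SUBROUTINE mpiom_initialize

  SUBROUTINE mpiom_global_start      ! time integration phase
    ! perform one ocean time step by calling into the MPIOM library
    ! (the submodel core layer), replacing the time loop of mpiom.f90
  END SUBROUTINE mpiom_global_start

  SUBROUTINE mpiom_free_memory       ! finalising phase
    ! deallocate fields and close units generated at run-time
  END SUBROUTINE mpiom_free_memory
END MODULE messy_mpiom_e5_sketch
```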
The new MESSy interface (Jöckel et al., 2010) introduces the concept of "representations", which we make use of here. A "representation" is a basic entity of the submodel CHANNEL (Jöckel et al., 2010), and it allows an easy management of the memory, of the internal data exchange and of the output to files. New representations for the ocean variables (2-D and 3-D fields) have been introduced, consistent with the dimensioning of the original MPIOM arrays and compatible with the MPIOM parallel domain decomposition. Application of the CHANNEL submodel implies that no specific output routines are required for the ocean model any more; the output files now have the same format and contain the same meta-information for both the atmosphere and the ocean components. Furthermore, in the CHANNEL API, each "representation" is related to the high-level MPI API via a definition of the gathering (i.e. collecting a field from all tasks) and scattering (i.e. distributing a field to all tasks) subroutines. For the new MPIOM "representations", the original gathering and scattering subroutines from MPIOM are applied. As a consequence, the spatial coverage of each core is defined independently for the two AO-GCM components and constrained by the values of NPX and NPY set in the run-script, both for the atmosphere and for the ocean model. In fact, both models, EMAC and MPIOM, share the same horizontal domain decomposition topology for their grid-point-space representations, in which the global model grid is subdivided into NPX times NPY sub-domains (in North-South and East-West direction, respectively, for ECHAM5, and in East-West and North-South direction, respectively, for MPIOM). Hence, the same task which calculates a sub-domain in the atmosphere also calculates a sub-domain in the ocean, and the two sub-domains do not necessarily match geographically. An example is shown in Fig. 2, where possible parallel domain decompositions of EMAC and MPIOM are presented. A total of 16 tasks (specifically with NPX = 4 and NPY = 4) is used, and the colour indicates the task number in the atmosphere and ocean model, respectively. Other decompositions are possible, depending on the values of NPX and NPY.
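The swapped orientation of the decomposition can be illustrated with a small stand-alone sketch. The concrete index ordering used here is an assumption for illustration only; the point is that the same task owns geographically different sub-domains in the two components:

```fortran
! Sketch: mapping a task id onto its (row, column) sub-domain indices for
! both components; note the swapped role of NPX/NPY (see text). The exact
! ordering inside each model is an assumption made for illustration.
PROGRAM decomposition_sketch
  IMPLICIT NONE
  INTEGER, PARAMETER :: npx = 4, npy = 4     ! as in Fig. 2: 16 tasks
  INTEGER :: task, ia, ja, io, jo
  DO task = 0, npx*npy - 1
     ! atmosphere (ECHAM5): NPX counts North-South, NPY East-West
     ia = task / npy                         ! N-S index of the sub-domain
     ja = MOD(task, npy)                     ! E-W index
     ! ocean (MPIOM): NPX counts East-West, NPY North-South
     io = MOD(task, npx)                     ! E-W index
     jo = task / npx                         ! N-S index
     PRINT *, 'task', task, ': atm sub-domain (', ia, ',', ja, &
              ')  ocean sub-domain (', io, ',', jo, ')'
  END DO
END PROGRAM decomposition_sketch
```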

The A2O submodel
As described in Sect. 3.1, the two components of the AO-GCM (EMAC and MPIOM) run within the MESSy structure, sharing the same time manager. To couple the two model components physically, some gridded information has to be exchanged (see Table 1). For this purpose, a new submodel, named A2O, was developed. In EMAC, a quadratic Gaussian grid (corresponding to the chosen triangular spectral truncation) is used, whereas MPIOM operates on a curvilinear rotated grid. The exchanged gridded information must therefore be transformed between the different grids.
Additionally, because the period between two subsequent data exchange events generally differs from the GCMs' time steps, the variables needed for the coupling have to be accumulated and averaged before being transformed. The accumulation is performed at each time step by adding the instantaneous value, multiplied by the GCM time step length (in seconds), to the accumulated fields. The averaging is done at a coupling time step by dividing the accumulated fields by the coupling period (in seconds) and resetting the accumulated values to zero. This procedure also allows changing the GCMs' time steps and/or the coupling frequency during run-time.
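In code, the procedure corresponds to the following sketch (names are illustrative, not the actual A2O routines):

```fortran
! Sketch of the time-weighted accumulation and averaging described above.
MODULE a2o_accumulate_sketch
  IMPLICIT NONE
  REAL, ALLOCATABLE :: acc(:,:)        ! accumulated field [unit * s]
CONTAINS
  SUBROUTINE accumulate(field, dt)     ! called every GCM time step
    REAL, INTENT(IN) :: field(:,:)     ! instantaneous value
    REAL, INTENT(IN) :: dt             ! GCM time step length in seconds
    IF (.NOT. ALLOCATED(acc)) THEN
       ALLOCATE(acc(SIZE(field,1), SIZE(field,2)))
       acc = 0.0
    END IF
    acc = acc + field*dt               ! weight by the (possibly varying) dt
  END SUBROUTINE accumulate

  SUBROUTINE average(mean, cpl_period) ! called at a coupling time step
    REAL, INTENT(OUT) :: mean(:,:)     ! time average over the coupling period
    REAL, INTENT(IN)  :: cpl_period    ! coupling period in seconds
    mean = acc / cpl_period            ! divide by the elapsed coupled time
    acc  = 0.0                         ! reset for the next coupling interval
  END SUBROUTINE average
END MODULE a2o_accumulate_sketch
```

Because each contribution is weighted by its own time step length, the average remains correct even if the GCM time step changes between couplings.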
The submodel A2O (Atmosphere to Ocean, and vice versa) performs the required accumulation/averaging in time and the subsequent grid-transformation. The submodel implementation is such that three different setups are possible:

- EMAC and MPIOM are completely decoupled,
- EMAC or MPIOM is one-way forced, i.e. one component delivers the boundary conditions to the other, but not vice versa,
- EMAC and MPIOM are fully coupled, i.e. the boundary conditions are mutually exchanged in both directions.
The setup is controlled by the A2O CPL-namelist, which is described in detail in the Supplement. In Table 1, the variables required for the physical coupling are listed. The fields are interpolated between the grids with a bilinear remapping method for scalar fields, while a conservative remapping method is used for flux fields (see Sect. 3.3).
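As an illustration only, such a namelist could look as follows; the parameter names below are hypothetical, and the actual names and syntax are documented in the Supplement:

```fortran
! Hypothetical A2O CPL-namelist (illustrative names, see the Supplement
! for the real ones). The two logicals select between decoupled, one-way
! forced and fully coupled setups; one entry is given per exchanged field.
&CPL
  l_a2o = .TRUE.,                       ! atmosphere -> ocean exchange on
  l_o2a = .TRUE.,                       ! ocean -> atmosphere exchange on
  a2o_field(1)  = 'heatflux',           ! flux field ...
  a2o_method(1) = 'conservative',       ! ... conservative remapping
  o2a_field(1)  = 'sst',                ! scalar field ...
  o2a_method(1) = 'bilinear',           ! ... bilinear remapping
/
```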
For the interpolation, the respective weights between the different model grid-points (atmosphere and ocean) are calculated during the initialisation phase of the model (see also Sect. 3.3). Hence, any combination of grids and/or parallel decompositions can be used without additional preprocessing.
One of the main advantages of the coupling approach adopted in this study (internal coupling) is the implicit "partial" parallelisation of the coupling procedure. Generally, one problem of coupling routines is that the required information must first be collected from the different tasks of one model component, then processed (e.g. interpolated) and finally re-distributed to the tasks of the other model component. This requires a "gathering" of information from different tasks, a subsequent grid transformation, and a "scattering" of the results to the corresponding target tasks. This process is computationally expensive, in particular if many fields need to be exchanged (as is the case for interactive atmosphere-ocean chemistry). In the internal coupling approach, only the "gathering" (or collection) and the grid-transformation steps are required. During the initialisation phase of the model system, each task (in any of the AO-GCM components) stores the locations (indices) and the corresponding weights required for the transformation from the global domain of the other AO-GCM component. These weights are calculated for the global domain of the other AO-GCM component because the applied search algorithm (see Sect. 3.3) is sequential, and in order to reduce the complexity of the storage process. Then, within the time integration phase, each task collects the required information from the global field of the other AO-GCM component. With this procedure, the interpolation is performed simultaneously by all tasks (without the need to scatter, i.e. to distribute information), thus increasing the coupling performance (see Sect. 4). It must, however, be noted that the new version of the OASIS coupler (Version 4; Redler et al., 2010) supports a fully parallel interpolation, which means that the interpolation is performed in parallel for each intersection of source and target sub-domains. This will potentially increase the run-time performance of OASIS coupled parallel applications.
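In the time integration phase, this procedure reduces to a purely local, weight-based summation over the gathered global source field. A minimal sketch, assuming SCRIP-style index/weight arrays (the names are illustrative):

```fortran
! Sketch of the local interpolation with pre-computed weights: each task
! holds, for every point of its own sub-domain, the indices into the global
! source field of the other component and the corresponding weights.
SUBROUTINE apply_weights(src_global, idx, w, dst_local)
  IMPLICIT NONE
  REAL,    INTENT(IN)  :: src_global(:)  ! gathered global field (other model)
  INTEGER, INTENT(IN)  :: idx(:,:)       ! (nlinks, npoints) source indices
  REAL,    INTENT(IN)  :: w(:,:)         ! (nlinks, npoints) remapping weights
  REAL,    INTENT(OUT) :: dst_local(:)   ! target points of this task only
  INTEGER :: i, k
  DO i = 1, SIZE(dst_local)              ! loop over the local sub-domain
     dst_local(i) = 0.0
     DO k = 1, SIZE(w,1)                 ! e.g. 4 links for bilinear remapping
        dst_local(i) = dst_local(i) + w(k,i)*src_global(idx(k,i))
     END DO
  END DO                                 ! no scatter step needed afterwards
END SUBROUTINE apply_weights
```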

Grid-transformation utilising the SCRIP library
For the transformation of fields between the different grids (i.e. from the atmosphere grid to the ocean grid and vice versa), the SCRIP (Spherical Coordinate Remapping and Interpolation Package) routines (Jones, 1999) are used. These state-of-the-art transformation routines are widely used, for instance in the COSMOS model and the CCSM3 model. The SCRIP routines allow four types of transformations between two different grids:

- first- and second-order conservative remapping (in the MESSy system, only the first order is used),
- bilinear interpolation with local bilinear approximation,
- bicubic interpolation,
- inverse-distance-weighted averaging (with a user-specified number of nearest neighbour values).
The library has been embedded into the MESSy2 interface structure as an independent generic module (messy_main_gridtrafo_scrip.f90).
For the coupling of EMAC and MPIOM presented here, this module is called by the submodel A2O. It can, however, also be used for grid-transformations by other MESSy submodels. According to the MESSy standard, the parameters used by A2O for the SCRIP library routines can be modified from their default values via the A2O submodel CPL-namelist (see the Supplement).
In Fig. 3, an example of a grid transformation with conservative remapping from the atmosphere grid to the ocean grid is shown. The patterns are preserved and the fluxes are conserved, not only on the global scale but also on the local scale.
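Global conservation can be verified directly by comparing area-weighted integrals on both grids. A minimal sketch, assuming cell-area arrays are available (names are illustrative):

```fortran
! Sketch: checking global conservation of a first-order conservative
! remapping, as illustrated in Fig. 3.
SUBROUTINE check_conservation(f_src, area_src, f_dst, area_dst)
  IMPLICIT NONE
  REAL, INTENT(IN) :: f_src(:), area_src(:)  ! field / cell areas, source grid
  REAL, INTENT(IN) :: f_dst(:), area_dst(:)  ! field / cell areas, target grid
  REAL :: total_src, total_dst
  total_src = SUM(f_src*area_src)            ! global area-weighted integral
  total_dst = SUM(f_dst*area_dst)
  PRINT *, 'relative conservation error:', &
       ABS(total_dst - total_src) / MAX(ABS(total_src), TINY(1.0))
END SUBROUTINE check_conservation
```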

Analysis of the run-time performance
The run-time performance is a critical aspect for climate models, and the coupling as such must not drastically decrease the AO-GCM execution speed. In order to evaluate the run-time performance, we compare the EMAC-MPIOM performance with that of the COSMOS-1.0.0 model. Since both models share the same components (ECHAM5 and MPIOM), differences in the achieved efficiency can be attributed to the different coupling methods. In fact, the efficiency of the AO-GCM depends on the efficiency of the component models and on the load balancing between them.
For the comparison, we compiled and executed both model systems with the same setup on the same platform: a 64-bit Linux cluster with 24 nodes, each equipped with 32 GB RAM and 2 Intel 5440 (2.83 GHz, 4 cores) processors, i.e. a total of 8 cores per node. The Intel Fortran Compiler (version 11.1.046), together with the MPI library mvapich2-1.2, has been used with the optimisation option -O1 to compile both model codes. The two climate models were run without output for one month at T31L19 resolution for the atmosphere and at GR30L40 resolution for the ocean. The atmosphere and the ocean model used a 40 and 144 min time step, respectively. In both cases (EMAC-MPIOM and COSMOS), the same convective and large-scale cloud parameterisations were used for the atmosphere, and the same algorithms for advection and diffusion in the ocean, respectively. The radiation in the atmosphere was calculated every 2 simulation hours. In addition, the number of tasks requested in the simulations was equal to the number of cores allocated (i.e. one task per core).
Since in COSMOS the user can distribute a given number of tasks almost arbitrarily between ECHAM5 and MPIOM (one task is always reserved for OASIS), the wall-clock time required for a simulation with a given number of tasks is not uniquely determined. To find the task distribution with the optimum load balance, a number of test simulations is usually required for any given setup. Here, we report only the times achieved with the optimal task distribution. In contrast, EMAC-MPIOM does not require any task distribution optimisation, and the simulation is performed with the maximum possible computational speed.
Three factors contribute to the differences in the model performance: -The MESSy interface decreases the performance of EMAC in the "GCM-only mode" compared to ECHAM5 by ∼ 3-5 %, and therefore, EMAC-MPIOM is expected to be at least ∼ 3-5 % slower than COS-MOS (see the link "ECHAM5/MESSy Performance" at http://www.messy-interface.org).
-EMAC-MPIOM calculates the interpolation weights during its initialisation phase, whereas COSMOS reads pre-calculated values from files.This calculation is computationally expensive and depends on the AO-GCM component resolutions and on the number of tasks selected.In fact, as seen before in Sect.3.2, each task calculates the interpolation weights from the global domain of the other AO-GCM component, with the interpolation algorithm scanning the global domain for overlaps with the local domain.This calculation is performed only during the initialisation phase.
-The OASIS coupler requires a dedicated task to perform the grid transformations.Hence, for a very low core  number, the single core used by OASIS limits the overall performance of the COSMOS model.
The total wall-clock time required to complete the simulation of one month shows a constant bias of 58 s for EMAC-MPIOM compared to COSMOS. This bias is independent of the number of tasks used and results from non-parallel processes in EMAC-MPIOM, mainly caused by the different initialisation phases of the two climate models. To analyse the performance of the models, this constant bias has been subtracted from the data, so that only the wall-clock times of the model integration phase are investigated. In Fig. 4, the wall-clock times required to complete the integration phase of a one-month simulation are presented, depending on the number of cores (= number of tasks) used. The wall-clock times correlate very well between COSMOS and EMAC-MPIOM (see Fig. 4, R² = 0.998), showing that the model scalability is similar in both cases. Overall, the difference in performance can be quantified by the slope of the regression line (see Fig. 4). This slope (0.89) shows that EMAC-MPIOM is approximately 10 % faster than COSMOS. In general, the improvement in performance is due to a reduction of the gather/scatter operations between the different tasks. In fact, as described in Sect. 3.2, the EMAC-MPIOM model does not perform the transformation sequentially as a separate task, but instead performs the interpolation simultaneously on all tasks, each for its part of the domain.
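For reference, the slope and R² quoted above correspond to an ordinary least-squares fit of the bias-corrected wall-clock times. A sketch of this calculation (an analysis aid, not part of the model code):

```fortran
! Sketch of the regression in Fig. 4: after subtracting the constant 58 s
! bias, the EMAC-MPIOM integration times are regressed on the COSMOS times.
SUBROUTINE fit_slope(t_cosmos, t_emac, slope, r2)
  IMPLICIT NONE
  REAL, INTENT(IN)  :: t_cosmos(:), t_emac(:)  ! wall-clock times per run [s]
  REAL, INTENT(OUT) :: slope, r2
  REAL :: n, sx, sy, sxx, syy, sxy
  n   = REAL(SIZE(t_cosmos))
  sx  = SUM(t_cosmos);      sy  = SUM(t_emac)
  sxx = SUM(t_cosmos**2);   syy = SUM(t_emac**2)
  sxy = SUM(t_cosmos*t_emac)
  slope = (n*sxy - sx*sy) / (n*sxx - sx**2)      ! ~0.89 in Fig. 4
  r2    = (n*sxy - sx*sy)**2 / &                 ! ~0.998 in Fig. 4
          ((n*sxx - sx**2)*(n*syy - sy**2))
END SUBROUTINE fit_slope
```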
It must be stressed that this analysis does not allow a general conclusion valid for all model setups, resolutions, task numbers, etc. Most likely, the results obtained here are not even transferable to other machines/architectures or compilers. However, it is possible to conclude that the coupling method implemented here does not deteriorate the performance of the coupled model.

Evaluation of EMAC-MPIOM
In order to test whether the chosen coupling method technically works and does not deteriorate the climate of the physically coupled atmosphere-ocean system, we performed a number of standard climate simulations with EMAC-MPIOM and analysed the results. This analysis is not presented in full detail, because the dynamical components of EMAC-MPIOM (i.e. ECHAM5 and MPIOM) are the same as in the COSMOS model. Therefore, we refer to Jungclaus et al. (2007) for a detailed overview of the model climatology.
The model resolution applied here for the standard simulations is T31L19 for the atmosphere component EMAC and GR30L40 for the ocean component MPIOM. This resolution is coarser than the current state-of-the-art resolution used in climate models. However, near-future EMAC-MPIOM simulations with atmospheric and/or ocean chemistry included will be limited by the computational demands and will therefore have to be run at such lower resolutions. It is hence essential to obtain reasonable results at this rather coarse resolution, which has already been widely used to couple ECHAM5 with MPIOM. Following the Coupled Model Intercomparison Project (CMIP3) recommendations, three simulations have been performed with different greenhouse gas (GHG) forcings:

- a "preindustrial control simulation" with constant preindustrial conditions (GHG of the year 1850), hereafter referred to as PI,
- a "climate of the 20th century" simulation (varying GHG from 1850 to 2000), hereafter referred to as TRANS, and
- a "1 % yr⁻¹ CO2 increase to doubling" simulation (with other GHG of the year 1850), hereafter referred to as CO2×2.
These simulations have been chosen to allow some of the most important evaluations that can be conducted for climate models of this complexity. In addition, the output from a large variety of well tested and reliable climate models can be used for comparison with the results. Because these models had been run at higher resolutions and with slightly different set-ups, some differences in the results are expected; nevertheless, they provide important benchmarks. The time series of annual GHG values for the TRANS simulation have been obtained within the framework of the ENSEMBLES European project and include CO2 (Etheridge et al., 1998), CH4 (Etheridge et al., 2002), N2O (Machida et al., 1995) and CFCs (Walker et al., 2000).

Surface temperature
As shown by Jungclaus et al. (2007), the sea surface temperature (SST) and the sea ice are the most important variables for determining the atmosphere-to-ocean fluxes and for assessing the correctness of the coupling processes.
In Fig. 5, the SST of simulation TRANS is compared to the SST from the Atmospheric Model Intercomparison Project (AMIP, Taylor et al., 2000), compiled by Hurrell et al. (2008) based on monthly mean Hadley Centre sea ice and SST data (HadISST, version 1) and weekly optimum interpolation (OI) SST analysis data (version 2) of the National Oceanic and Atmospheric Administration (NOAA). Both datasets are averaged over the years 1960-1990. The correlation between the two datasets is high (R² = 0.97), which confirms that the model generally reproduces the observed SST correctly.
Although the correlation is high, it is interesting to analyse the spatial differences between the AMIP II data and the TRANS simulation. In Fig. 6, the spatial distribution of the differences corresponding to the data shown in Fig. 5 is presented. Although the deviation from the observed values is less than 1 K in most regions over the ocean, in some regions the deviation is larger. The largest biases (up to 6 K) are located in the North Atlantic and in the Irminger and Labrador Seas in the Northwestern Atlantic. Deviations of similar magnitude, but with opposite sign, are present in the Kuroshio region. Despite the low resolution applied for the simulations (T31L19 for the atmosphere model and GR30L40 for the ocean), these results are in line with what has been obtained with the coupled model COSMOS (Jungclaus et al., 2007), where biases of similar intensity are found in the same regions. Again, similarly to what has been obtained by Jungclaus et al. (2007), a warmer SST is observed at the west coasts of Africa and the Americas (see Fig. 6). This is probably due to an underestimation of stratocumulus cloud cover in the model atmosphere, which is also an issue with other models (e.g. Washington et al., 2000; Roberts et al., 2004), and, possibly, an underestimation of the coastal upwelling in that region. Additionally, the cold bias in the North Atlantic SST is related to a weak meridional overturning circulation and the associated heat transport. Finally, in the Southern Ocean, the too high SSTs near Antarctica and the too low SSTs on the northern flank of the Antarctic Circumpolar Current (ACC) are mostly due to a positioning error of the ACC.
The surface temperature changes during the 20th century have been compared with model results provided for the Fourth Assessment Report of the Intergovernmental Panel on Climate Change (IPCC AR4). In Fig. 7, the global average surface temperature increase with respect to the 1960-1990 average is shown for simulation TRANS in comparison to a series of simulations by other models, which participated in the third phase of the World Climate Research Programme (WCRP) Coupled Model Intercomparison Project (CMIP3, Meehl et al., 2007). The overall increase of the surface temperature is in line with what has been obtained by other climate models of the same complexity. The global surface temperature is somewhat lower than that of other models of the CMIP3 database in the 1850-1880 period, while the trend during the 1960-1990 period is very similar for all models.
The tropical ocean seasonal mean inter-annual variability is shown in Fig. 8. It is known that ENSO (El Niño-Southern Oscillation) is the dominating signal of the variability in the Tropical Pacific Ocean region. Although in the East Pacific the simulated variability correlates well with the observed one (see Fig. 8), in the western Tropical Pacific the model generates a somewhat higher inter-annual variability, which is absent in the observations. The cause is most probably the low resolution of the model. The ocean model, as applied here, has a curvilinear rotated grid with the lowest resolution in the Pacific Ocean (see also AchutaRao and Sperber, 2006, and references therein, for a review on ENSO simulations in climate models). Although the variability is generally higher in the model than in the observations, an ENSO signal is present, as shown in Fig. 9. In this figure, the monthly variability of the SST is depicted for the so-called ENSO region 3.4 (i.e. between 170° W and 120° W and between 5° S and 5° N). The model variability is confirmed to be higher than the observed one; nevertheless, the model reproduces the correct seasonal phase of El Niño, with a peak of the SST anomaly in boreal winter. Compared to the difficulties in representing the correct inter-annual variability in the Pacific Ocean, in the Indian Ocean the model reproduces the observed patterns in better agreement with the observations. During July, August and September, the model reproduces (with a slight overestimation) the correct variability in the central Indian Ocean, while the patterns produced by the model are qualitatively similar to the observed ones during April, May and June. The model does, however, strongly overestimate the variability during October, November and December in the Indian Ocean, especially in the southern part, while in January, February and March the simulated inter-annual variability is too high over the central-south Indian Ocean and too low near the northern coasts.
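For orientation, the quantity shown in Fig. 9 corresponds to the following calculation. This is a sketch, assuming an SST time series already area-averaged over the NINO3.4 box:

```fortran
! Sketch: inter-annual variability of the monthly SST averaged over the
! NINO3.4 box (170-120 W, 5 S-5 N), as in Fig. 9 (series not detrended).
SUBROUTINE nino34_variability(sst, sigma)
  IMPLICIT NONE
  REAL, INTENT(IN)  :: sst(:,:)    ! (12 months, nyears) box-mean SST in K
  REAL, INTENT(OUT) :: sigma(12)   ! per-month standard deviation in K
  REAL    :: clim
  INTEGER :: m, ny
  ny = SIZE(sst, 2)
  DO m = 1, 12
     clim     = SUM(sst(m,:)) / REAL(ny)            ! monthly climatology
     sigma(m) = SQRT(SUM((sst(m,:) - clim)**2) / REAL(ny - 1))
  END DO
END SUBROUTINE nino34_variability
```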

Ice coverage
The correct simulation of the ice coverage is essential for climate models, due to the albedo feedback. As shown by Arzel et al. (2006), there are large differences with respect to the simulated sea ice coverage between the models used for the IPCC AR4. Arzel et al. (2006) showed that, although the multi-model average sea ice extent may agree with the observations, differences by a factor of 2 can be found between individual model simulations. In Fig. 10, the polar sea ice coverage fractions for September and March are shown, calculated as a 1960-1990 average climatology from the TRANS simulation. In the same figure, the observations are also shown (Rayner et al., 2003), averaged over the same period.

In the Northern Hemisphere (NH) winter, the warm Norwegian Atlantic current is present, impeding the ice formation at the Norwegian coast. Nevertheless, the model clearly predicts a too high ice coverage, especially over the Barents Shelf and at the west coast of Svalbard. At the same time, the model overestimates the presence of ice around the coast of Greenland and at the coasts of Newfoundland and Labrador. The model reproduces, with better agreement, the retreat of the sea ice during summer, with a strong reduction of the sea ice in the Barents and Kara Seas. Again, a somewhat higher ice coverage is present at the east coast of Greenland and northern Iceland. In the Antarctic, the eastern coast of the Antarctic peninsula (Weddell Sea) is ice covered throughout the year. The model reproduces the right magnitude of the ice retreat during summer, although with some overestimation in the Ross Sea. During the Southern Hemisphere (SH) winter, an underestimation of the ice coverage is present at 30° E, while an overestimation occurs over the Amundsen Sea.

To compare the changes of the sea ice coverage during the 20th century, the annual sea ice coverage area has been calculated from the simulations TRANS and PI and compared with the observation-based dataset by Rayner et al. (2003) (see Fig. 11). The simulated sea ice coverage agrees with the observations, although with an overestimation (up to 8 %). In addition, the simulated inter-annual variability is much larger than observed. Nevertheless, the model is able to mimic the decrease in the sea ice area coverage observed after 1950, although with a general overestimation.

Thermohaline circulation and meridional overturning circulation
Deep water formation mainly takes place in the North Atlantic Ocean, and in the northern and southern parts of the Greenland-Scotland Ridge. The correct representation of deep water formation is important for climate models, to maintain the stability of the climate over a long time period. Figure 12 presents the maximum depth of convection, estimated as the deepest model layer where the diffusive vertical velocity is greater than zero. In the North Atlantic Ocean, convection is present between Greenland and Newfoundland (Labrador Sea), with convection deeper than 1500 m. Although the model simulation agrees with the observations in this region (Pickart et al., 2002), a deep convection feature (which is the main region of deep water formation in the model) is present at the east coast of Newfoundland, which is clearly in contrast to the observations. The reason is a weak MOC (Meridional Overturning Circulation), which, combined with the strong presence of ice during winter in the Labrador Sea (see Fig. 10), forces the deep water formation in the model to be located further to the south than observed. Nevertheless, strong convective movement occurs in the Greenland and Norwegian Seas, reaching up to the coast of Svalbard. This zone of deep water formation is well known and appears to be well simulated by the model. In the SH, convection occurs mainly outside the Weddell Sea and Ross Sea, with some small convective events all around the Southern Ocean and with the major events occurring between 0° and 45° E.

Jet streams
The jet streams are strong air currents concentrated within a narrow region of the upper troposphere. The predominant one, the polar-front jet, is associated with synoptic weather systems at mid-latitudes.
Hereafter, "jet stream" always refers to the polar-front jet. An adequate representation of the jet stream by a model indicates that the horizontal temperature gradient (the main cause of these thermal winds) is reproduced correctly. In Fig. 13, the results from simulation TRANS are compared with the NCEP/NCAR (National Centers for Environmental Prediction/National Center for Atmospheric Research) Reanalysis (Kalnay et al., 1996). The maximum zonal wind speed is reproduced well by the model, with the SH jet stream somewhat stronger than the NH jet stream (about 30 and 22 m s⁻¹, respectively). The location of the maximum wind, however, is slightly shifted poleward by 5°. The vertical position of the jet streams is also 50 hPa higher than observed. The NH jet stream has a meridional extension in line with the observations, while the simulated SH jet stream is narrower in the latitudinal direction compared to the NCEP re-analysis. In fact, an averaged zonal wind speed higher than 26 m s⁻¹ in the SH is located between 40° S and 30° S in the model results, while it is distributed over a larger latitudinal range (about 50° S to 25° S) in the NCEP re-analysis data. Finally, while the NCEP data show a change of direction between the tropical and extratropical zonal winds, the simulation TRANS reproduces this feature only in the lower troposphere and in the stratosphere, while in the upper troposphere (at around 200 hPa) westerly winds still dominate. Although some differences emerge from the comparison, the general features of the thermal winds are reproduced correctly by the model, despite the low resolution used for the atmosphere model (T31L19).

Precipitation
The representation of precipitation, a very important climate variable, is still challenging for coupled climate models (Dai, 2006). The data from the Global Precipitation Climatology Project (GPCP, Adler et al., 2003) are used to evaluate the capability of EMAC-MPIOM in reproducing this quantity. As for many other climate models, the results from simulation TRANS show two zonal bands of high-biased precipitation in the tropics, separated by a dry bias directly at the equator (see Fig. 14). These zonal bands (located over the Pacific Ocean) persist throughout the year, and their magnitude is independent of the season. In addition, the northern Intertropical Convergence Zone (ITCZ) is located slightly too far north compared to the observations during summer and autumn (see Fig. 15, JJA and SON), and too far south during winter and spring (see Fig. 15, DJF and MAM). For boreal autumn and winter, the simulation shows a distinct minimum at around 30° S, which is weaker in the observations. Finally, the model largely underestimates the precipitation over Antarctica throughout the year, and in the storm track during the NH winter. The latter is associated with the underestimation of the sea surface temperature in these regions.

Climate sensitivity
To estimate the climate sensitivity of the coupled model EMAC-MPIOM, the results from the CO2×2 simulation are analysed. The simulation yields a global average increase of the surface temperature of 2.8 K for a doubling of CO2. As mentioned in the IPCC AR4, the increase in the temperature for a CO2 doubling "is likely to be in the range 2 to 4.5 °C with a best estimate of about 3 °C". The value obtained in this study is thus in line with results from the CMIP3 multi-model dataset. For the same experiment, for example, the models ECHAM5/MPIOM (with OASIS coupler) and INGV-SX6 show an increase of the global mean surface temperature of 3.35 K and 1.86 K, respectively. To calculate the climate sensitivity of the model, the mean radiative forcing at the tropopause (simulation CO2×2) was calculated for the years 1960-1990 as 4.0 W m⁻². This implies a climate sensitivity of the model of 0.7 K W⁻¹ m², in line with what has been estimated by most models from the CMIP3 dataset (e.g. ECHAM5/MPIOM, INGV-SX6, INM-CM3 and IPSL-CM4 with 0.83, 0.78, 0.52 and 1.26 K W⁻¹ m², respectively). Despite the usage of the same dynamical components, EMAC-MPIOM and ECHAM5/MPIOM do not exhibit the same climate sensitivity, because of the different resolutions and boundary conditions (GHG vertical profiles) used in the simulations considered here.
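The quoted sensitivity follows directly from the two numbers given above:

```latex
\lambda \;=\; \frac{\Delta T_{2\times\mathrm{CO_2}}}{\Delta F_{2\times\mathrm{CO_2}}}
        \;=\; \frac{2.8\ \mathrm{K}}{4.0\ \mathrm{W\,m^{-2}}}
        \;=\; 0.7\ \mathrm{K\,W^{-1}\,m^{2}}
```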

Summary and outlook
A new internal coupling method between EMAC and MPIOM, based on the MESSy interface, has been presented. It shows a run-time performance comparable to that of the external COSMOS coupling approach using OASIS3, under comparable conditions and for the set-up tested here. Although the effective performance of the model components is not deteriorated by the new approach, it is hardly possible to estimate in general which coupling method yields the best performance of the climate model, because this is determined by the number of available tasks, the achievable load balance, the model resolution and complexity, and the scalability of the single components. Additionally, the scaling and load imbalance issues cannot be regarded separately, rendering a general statement about the performance and scaling features of the internal versus the external coupling method hardly possible. The effort of implementing either the internal or the external coupling approach primarily depends on the code structure of the legacy models to be coupled. In both cases, the legacy codes need to be equipped with additional infrastructure defining the interfaces. The external approach is by design potentially more favourable for less structured codes. Hence, in most cases, the external approach requires a smaller coding effort than the internal approach.
To evaluate the EMAC-MPIOM model system, we performed selected climate simulations showing that the EMAC-MPIOM climate is not deteriorated by the new approach, and that the new model system does not produce results that differ from those of other climate models under similar conditions and forcings.
Following the MESSy philosophy, a new submodel (named A2O) was developed to control the exchange of information (coupling) between the AO-GCM components. Since this submodel is flexibly controlled by a namelist, it can be used to transform any field present in one AO-GCM component to the grid of the other and vice versa. Thanks to this capability, A2O can be used not only to control the physical coupling between the two AO-GCM components, but also to exchange additional information/fields between the two domains of the AO-GCM, including physical and chemical (e.g. tracer mixing ratios) data. Hence, as a future model development, the ocean biogeochemistry will be included via the MESSy interface and coupled to the air chemistry submodels of EMAC, using the previously developed AIRSEA submodel (Pozzer et al., 2006). This will allow a complete interaction between the two AO-GCM domains, exchanging not only the quantities necessary for the physical coupling of EMAC and MPIOM (i.e. heat, mass and momentum, as shown here), but also chemical species of atmospheric or oceanic interest, leading to a significant advancement towards a more detailed description of biogeochemical processes in the Earth system.

Supplementary material related to this article is available online at: http://www.geosci-model-dev.net/4/771/2011/gmd-4-771-2011-supplement.pdf

Fig. 1. Coupling methods between the different model components (C1 and C2) of an AO-GCM (upper panel: "internal method", as implemented here; lower panel: "external method", as used for example in the OASIS coupler). The colours denote the different executables.

Fig. 2. Parallel (horizontal) "4 times 4" domain decomposition for a model setup with 16 tasks for the atmosphere model (upper panel) and the ocean model (lower panel). The colour code denotes the task number.

Fig. 3. Example of a grid transformation with the SCRIP library routines embedded in the generic MESSy submodel MAIN_GRIDTRAFO and called by A2O: the precipitation minus evaporation field on the EMAC grid (top) has been transformed to the MPIOM grid (bottom) using the conservative remapping.

Fig. 4. Scatter plot of the time (seconds wall-clock) required to simulate one month with the COSMOS-1.0.0 model (horizontal axis) and with the EMAC-MPIOM model with the same setup (vertical axis). The colour code denotes the number of tasks used (for clarity, the numbers of tasks are also shown above the points). In these simulations, one task per core has been used. The regression line is shown in red, and the result of the linear regression is denoted at the top left of the plot. The constant bias of 58 s has been subtracted from the data.

Fig. 5. Scatter plot of 1960-1990 average sea surface temperatures from the Taylor et al. (2000) dataset versus those resulting from simulation TRANS (in K).

Fig. 6. Surface temperature differences between the AMIP II (Taylor et al., 2000) dataset and the simulation TRANS (in K). Both datasets have been averaged over the years 1960-1990.

Fig. 8. Standard deviation of the seasonal mean inter-annual variability of the SST (in K). The left and right columns show results from the TRANS simulation and from the HadISST data (Rayner et al., 2003), respectively, both for the years 1900-1999 (not detrended).

Fig. 9. Standard deviation of the monthly mean inter-annual variability of the SST (in K), averaged over the NINO3.4 region. The black line shows results from the TRANS simulation, and the red line from the HadISST data (Rayner et al., 2003), both for the years 1900-1999 (not detrended).

Fig. 10. Simulated and observed polar ice coverage. The upper and lower rows show March and September, respectively. Observations and results from simulation TRANS are averaged over the years 1960-1990. Observations are from the HadISST (Rayner et al., 2003) dataset.

Fig. 11. Global sea ice coverage (in 10¹² m²). The black line shows the HadISST (Rayner et al., 2003) data, while the blue and the red lines represent the model results from simulations PI and TRANS, respectively. Dashed and solid lines represent annual and decadal running means, respectively.

Fig. 13. Climatologically averaged zonal wind. The colour denotes the wind speed in m s⁻¹ as calculated from simulation TRANS for the years 1968-1996, while the contour lines denote the wind speed calculated from the NCEP/NCAR Reanalysis 1 for the same years. The vertical axis is in hPa.

Fig. 14. Zonally averaged difference in the precipitation rate (in mm day⁻¹) between climatologies derived from simulation TRANS and from observations (Global Precipitation Climatology Project, 1979-2009, Adler et al., 2003).

Fig. 15. Seasonal zonal averages of the climatological precipitation rate (in mm day⁻¹). The red lines show observations from the Global Precipitation Climatology Project (1979-2009 climatology), the black lines represent results from the simulation TRANS (1950-2000 climatology).

Table 1. Variables to be exchanged by A2O for a physical coupling between EMAC and MPIOM.