<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD Journal Publishing with OASIS Tables v3.0 20080202//EN" "journalpub-oasis3.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:oasis="http://docs.oasis-open.org/ns/oasis-exchange/table" xml:lang="en" dtd-version="3.0">
  <front>
    <journal-meta><journal-id journal-id-type="publisher">GMD</journal-id><journal-title-group>
    <journal-title>Geoscientific Model Development</journal-title>
    <abbrev-journal-title abbrev-type="publisher">GMD</abbrev-journal-title><abbrev-journal-title abbrev-type="nlm-ta">Geosci. Model Dev.</abbrev-journal-title>
  </journal-title-group><issn pub-type="epub">1991-9603</issn><publisher>
    <publisher-name>Copernicus Publications</publisher-name>
    <publisher-loc>Göttingen, Germany</publisher-loc>
  </publisher></journal-meta>
    <article-meta>
      <article-id pub-id-type="doi">10.5194/gmd-12-233-2019</article-id><title-group><article-title>Toward modular in situ visualization in Earth system models: <?xmltex \hack{\break}?>the regional modeling system RegESM 1.1</article-title><alt-title>In situ visualization with integrated RegESM 1.1</alt-title>
      </title-group><?xmltex \runningtitle{In situ visualization with integrated RegESM 1.1}?><?xmltex \runningauthor{U.~U. Turuncoglu}?>
      <contrib-group>
        <contrib contrib-type="author" corresp="yes" rid="aff1">
          <name><surname>Turuncoglu</surname><given-names>Ufuk Utku</given-names></name>
          <email>ufuk.turuncoglu@itu.edu.tr</email>
        <ext-link ext-link-type="uri" xlink:href="https://orcid.org/0000-0001-5499-7326">https://orcid.org/0000-0001-5499-7326</ext-link></contrib>
        <aff id="aff1"><institution>Informatics Institute, Istanbul Technical University, Istanbul, Turkey</institution>
        </aff>
      </contrib-group>
      <author-notes><corresp id="corr1">Correspondence: Ufuk Utku Turuncoglu (ufuk.turuncoglu@itu.edu.tr)</corresp></author-notes><pub-date><day>16</day><month>January</month><year>2019</year></pub-date>
      
      <volume>12</volume>
      <issue>1</issue>
      <fpage>233</fpage><lpage>259</lpage>
      <history>
        <date date-type="received"><day>13</day><month>July</month><year>2018</year></date>
           <date date-type="rev-request"><day>27</day><month>July</month><year>2018</year></date>
           <date date-type="rev-recd"><day>6</day><month>December</month><year>2018</year></date>
           <date date-type="accepted"><day>21</day><month>December</month><year>2018</year></date>
      </history>
      <permissions>
        
        
      <license license-type="open-access"><license-p>This work is licensed under the Creative Commons Attribution 4.0 International License. To view a copy of this license, visit <ext-link ext-link-type="uri" xlink:href="https://creativecommons.org/licenses/by/4.0/">https://creativecommons.org/licenses/by/4.0/</ext-link></license-p></license></permissions><self-uri xlink:href="https://gmd.copernicus.org/articles/12/233/2019/gmd-12-233-2019.html">This article is available from https://gmd.copernicus.org/articles/12/233/2019/gmd-12-233-2019.html</self-uri><self-uri xlink:href="https://gmd.copernicus.org/articles/12/233/2019/gmd-12-233-2019.pdf">The full text article is available as a PDF file from https://gmd.copernicus.org/articles/12/233/2019/gmd-12-233-2019.pdf</self-uri>
      <abstract>
    <p id="d1e79">The data volume produced by regional and global multicomponent Earth system
models is rapidly increasing because of the improved spatial and temporal
resolution of the model components and the sophistication of the numerical
models in terms of the represented physical processes and their complex non-linear
interactions. In particular, very small time steps need to be defined in
non-hydrostatic high-resolution modeling applications to represent the
evolution of fast-moving processes such as turbulence, extratropical
cyclones, convective lines, jet streams, internal waves, vertical turbulent
mixing and surface gravity waves. Consequently, these small time steps
cause extra computation and disk input–output overhead in the modeling system
even if today's most powerful high-performance computing and data storage
systems are considered. Analyzing the high volume of data from multiple
Earth system model components at different temporal and spatial resolutions
also poses a challenging problem for performing efficient integrated data
analysis when relying on traditional
postprocessing methods. This study mainly aims to explore the
feasibility and added value of integrating existing in situ visualization and
data analysis methods within the model coupling framework. The objective is
to increase interoperability between Earth system multicomponent code and
data-processing systems by providing an easy-to-use, efficient, generic and
standardized modeling environment. The new data analysis approach enables
simultaneous analysis of the vast amount of data produced by multicomponent
regional Earth system models at runtime. The presented methodology
also aims to create an integrated modeling environment for analyzing
fast-moving processes and their evolution in both time and space to support a
better understanding of the underlying physical mechanisms. The
state-of-the-art approach can also be employed to solve common problems in the
model development cycle, e.g., designing a new subgrid-scale parameterization
that requires inspecting the integrated model behavior at higher temporal
and spatial scales simultaneously, and supporting visual debugging of
multicomponent modeling systems; these tasks are usually not facilitated by
existing model coupling libraries and modeling systems.</p>
  </abstract>
    </article-meta>
  </front>
<body>
      

<sec id="Ch1.S1" sec-type="intro">
  <title>Introduction</title>
      <p id="d1e89">The multiscale and inherently coupled Earth system models (ESMs) are
challenging to study and understand. Rapid developments in Earth system
science, as well as in high-performance computing and data storage systems,
have enabled fully coupled regional or global ESMs to better represent
relevant processes, complex climate feedbacks and interactions among the
coupled components. In this context, regional ESMs are employed when the
spatial and temporal resolutions of the global climate models are not
sufficient to resolve local features such as complex topography, land–sea
gradients and the influence of human activities on a smaller spatial scale.
Along with the development of the modeling systems, specialized software
libraries for model coupling have become increasingly critical to reduce the
complexity of coupled model development and to increase the
interoperability, reusability and efficiency of existing modeling
systems. The existing model coupling software libraries fall into two
main categories: couplers and coupling frameworks.</p>
      <p id="d1e92">Couplers are mainly specialized in performing specific operations more
efficiently and quickly, such as<?pagebreak page234?> coordination of components and interpolation
among model components. For example, Ocean Atmosphere Sea Ice Soil (OASIS3) <xref ref-type="bibr" rid="bib1.bibx61" id="paren.1"/> uses
a multiple-executable approach for coupling model components, but its sequential
internal algorithms, such as the sparse matrix multiplication (SMM)
operation used for interpolation among model grids, become a bottleneck as
the spatial resolution of the model components increases. To overcome this
problem, OASIS4 uses parallelism in its internal algorithms
<xref ref-type="bibr" rid="bib1.bibx43" id="paren.2"/>, and OASIS3-MCT <xref ref-type="bibr" rid="bib1.bibx12" id="paren.3"/>, interfaced with the
Model Coupling Toolkit <xref ref-type="bibr" rid="bib1.bibx26 bib1.bibx29" id="paren.4"><named-content content-type="pre">MCT;</named-content></xref>, provides a parallel
implementation of interpolation and data exchange. Besides generic couplers
like OASIS, domain-specific couplers have also been introduced, such as the Oceanographic Multi-purpose
Software Environment <xref ref-type="bibr" rid="bib1.bibx42" id="paren.5"><named-content content-type="pre">OMUSE;</named-content></xref>, which aims to provide a
homogeneous environment for ocean modeling and to ease the verification of simulation
models with different codes and numerical methods, and the Community Surface
Dynamics Modeling System <xref ref-type="bibr" rid="bib1.bibx45" id="paren.6"><named-content content-type="pre">CSDMS;</named-content></xref>, which aims to develop integrated
software modules for modeling Earth surface processes.</p>
      <p id="d1e120">A coupling framework is an environment for coupling model components through
a standardized calling interface and aims to reduce the complexity of regular
tasks such as performing spatial interpolation across different computational
grids and transferring data among model components to increase the efficiency
and interoperability of multicomponent modeling systems. Besides the
synchronization of the execution of individual model components, a coupling
framework can simplify the exchange of metadata related to model components
and exchanged fields through the use of existing conventions such as the CF
(climate and forecast) convention. The Earth System Modeling Framework (ESMF)
is one of the best-known examples of this approach <xref ref-type="bibr" rid="bib1.bibx50" id="paren.7"/>.
ESMF consists of a standardized superstructure for coupling components of
Earth system applications through a robust infrastructure of high-performance
utilities and data structures that ensure consistent component behavior
<xref ref-type="bibr" rid="bib1.bibx24" id="paren.8"/>. ESMF is also extended to include the
National Unified Operational Prediction Capability (NUOPC) layer. The NUOPC
layer simplifies component synchronization and run sequence by providing
an additional programming interface between the coupled model and ESMF
through the use of a NUOPC “cap”. In this case, a NUOPC cap is a
Fortran module that serves as the interface to a model when it is used in a
NUOPC-based coupled system. The term “cap” is used because it is a small
software layer that sits on top of a model code, making calls into it and
exposing model data structures in a standard way. In addition to generic
modeling frameworks like ESMF, the Modular System for Shelves and Coasts
<xref ref-type="bibr" rid="bib1.bibx30" id="paren.9"><named-content content-type="pre">MOSSCO;</named-content></xref> creates a state-of-the-art domain and process
coupling system for the marine coastal Earth system community by taking advantage of both ESMF and the Framework for Aquatic
Biogeochemical Models <xref ref-type="bibr" rid="bib1.bibx10" id="paren.10"><named-content content-type="pre">FABM;</named-content></xref>.</p>
      <p id="d1e139">The recent study of <xref ref-type="bibr" rid="bib1.bibx5" id="text.11"/> to investigate the degree of
modularity and design of the existing global climate models reveals that the
majority of the models use central couplers to support data exchange, spatial
interpolation and synchronization among model components. In this approach,
direct interaction does not have to occur between individual model components
or modules, since the specific coupler component manages the data transfer.
This approach is also known as the hub-and-spoke method of building a
multicomponent coupled model. A key benefit of using a hub-and-spoke
approach is that it creates a more flexible and efficient environment for
designing multicomponent modeling systems that are sophisticated in terms of the represented
physical processes and their interactions. The development of more
complex and high-resolution modeling systems leads to an increased demand for
both computational and data storage resources. In general, the high volume of
data produced by the numerical modeling systems may not allow storing all the
critical and valuable information to use later, despite recent advances in
storage systems. As a result, the simulation results are stored at a limited
temporal resolution (e.g., monthly averages) and are processed after the
numerical simulations are finished (postprocessing). This coarse representation of
the results of numerical model simulations prevents analyzing
fast-moving processes such as extreme precipitation events, convection,
turbulence and non-linear interactions among the model components at high
temporal and spatial scales with the traditional postprocessing approach.</p>
      <p id="d1e146">The analysis of leading high-performance computing systems reveals that the
rate of disk input–output (I/O) performance is not growing at the same speed
as the peak computational power of the systems <xref ref-type="bibr" rid="bib1.bibx1 bib1.bibx2" id="paren.12"/>.
The recent report of the US Department of Energy (DOE) also indicates that the
expected rate of increase in I/O bandwidth (100 times) will be slower than
the peak system performance (500 times) of the new generations of exascale
computers <xref ref-type="bibr" rid="bib1.bibx8" id="paren.13"/>. Besides, moving large volumes of data
across servers with relatively slow network bandwidth fails to match the
data-processing and archiving demands of the present
high-resolution multicomponent ESMs. As a result, the traditional
postprocessing approach has become a bottleneck in monitoring and analysis
of fast-moving processes that require very high spatial resolution, due to
the present technological limitations in high-performance computing and
storage systems <xref ref-type="bibr" rid="bib1.bibx4" id="paren.14"/>. In the upcoming computing era,
new state-of-the-art data analysis and visualization methods are needed to
overcome the above limitations effectively.</p>
      <p id="d1e158">Besides the traditional data analysis approach, the so-called in situ
visualization and co-processing approaches allow researchers to analyze the
output while running the numerical simulations simultaneously. The coupling
of<?pagebreak page235?> computation and data analysis helps to facilitate efficient and optimized
data analysis and visualization pipelines and boosts the data analysis
workflow. Recently, a number of in situ visualization systems for analyzing
numerical simulations of Earth system processes have been implemented. For
instance, the ocean component of the Model for Prediction Across Scales (MPAS)
has been integrated with an image-based in situ visualization tool to examine
the critical elements of the simulations and reduce the data needed to
preserve those elements by creating a flexible work environment for data
analysis and visualization <xref ref-type="bibr" rid="bib1.bibx4 bib1.bibx40" id="paren.15"/>. Additionally, the
same modeling system (MPAS-Ocean) has been used to study eddies in
large-scale, high-resolution simulations. In this case, the in situ
visualization workflow is designed to perform eddy analysis at higher spatial
and temporal resolutions than are available with traditional postprocessing,
which faces storage size and I/O bandwidth constraints <xref ref-type="bibr" rid="bib1.bibx62" id="paren.16"/>.
Moreover, a regional weather forecast model (Weather Research and Forecasting
model; WRF) has been integrated with an in situ visualization tool to track
cyclones based on an adaptive algorithm <xref ref-type="bibr" rid="bib1.bibx31" id="paren.17"/>. Despite the
lack of generic and standardized implementation for integrating model
components with in situ visualization tools, the previous studies have shown
that in situ visualization can produce analyses of simulation results,
revealing many details in an efficient and optimized way. It is evident that
more generic implementations could facilitate smooth integration of the
existing stand-alone and coupled ESMs with available in situ visualization
tools <xref ref-type="bibr" rid="bib1.bibx3 bib1.bibx9 bib1.bibx11" id="paren.18"/> and improve interoperability
between such tools and non-standardized numerical simulation codes.</p>
      <p id="d1e173">The main aim of this paper is to explore the added value of integrating
in situ analysis and visualization methods with a model coupling framework
(ESMF) to provide in situ visualization for easy-to-use, generic,
standardized and robust scientific applications of Earth system modeling. The
implementation allows existing ESMs coupled with the ESMF library to take
advantage of in situ visualization capabilities without extensive code
restructuring and development. Moreover, the integrated model coupling
environment allows sophisticated analysis and visualization pipelines by
combining information coming from multiple ESM components (e.g., atmosphere,
ocean, wave, land surface) in various spatial and temporal resolutions.
Detailed studies of fundamental physical processes and interactions among
model components are vital to the understanding of complex physical processes
and could potentially open up new possibilities for the development of ESMs.</p>
</sec>
<sec id="Ch1.S2">
  <title>The design of the modeling system</title>
      <p id="d1e182">RegESM (Regional Earth System Model, version 1.1) modeling system can use five
different model components to support many different modeling applications
that might require detailed representation of the interactions among
different Earth system processes (Fig. <xref ref-type="fig" rid="Ch1.F1"/>a–b). The implementation
of the modeling system follows the hub-and-spoke architecture. The driver
that is responsible for the orchestration of the overall modeling system
resides in the middle and acts as a translator among model components
(atmosphere, ocean, wave, river routing and co-processing). In this case,
each model component introduces its NUOPC cap to plug into the modeling
system. The modeling system is validated in different model domains such as
the Caspian Sea <xref ref-type="bibr" rid="bib1.bibx59" id="paren.19"/>, Mediterranean Basin
<xref ref-type="bibr" rid="bib1.bibx48 bib1.bibx56" id="paren.20"/> and Black Sea Basin.</p>

      <?xmltex \floatpos{t}?><fig id="Ch1.F1" specific-use="star"><caption><p id="d1e195">Design of the RegESM coupled modeling system: <bold>(a)</bold> model
components including the co-processing component and <bold>(b)</bold> their interactions
(orange arrows represent the redistribution and green arrows represent
interpolation).</p></caption>
        <?xmltex \igopts{width=341.433071pt}?><graphic xlink:href="https://gmd.copernicus.org/articles/12/233/2019/gmd-12-233-2019-f01.png"/>

      </fig>

<sec id="Ch1.S2.SS1">
  <title>Atmosphere models (ATMs)</title>
      <p id="d1e215">The flexible design of the RegESM modeling system allows choosing a different
atmospheric model component (ATM) in the configuration of the coupled model
for various types of applications. Currently, two different atmospheric
models are compatible with the RegESM modeling system: (1) RegCM4 <xref ref-type="bibr" rid="bib1.bibx19" id="paren.21"/>,
which is developed by the Abdus Salam International Centre for Theoretical
Physics (ICTP), and (2) the Advanced Research WRF model <xref ref-type="bibr" rid="bib1.bibx47" id="paren.22"><named-content content-type="pre">ARW;</named-content></xref>, which is developed by the
National Center for Atmospheric Research (NCAR). In this study, RegCM 4.6 is
selected as the atmospheric model component because the current implementation
of the WRF coupling interface is still experimental and does not support coupling
with the co-processing component yet, but the next version of the modeling system
(RegESM 1.2) will be able to couple the WRF atmospheric model with the co-processing
component. The NUOPC cap of atmospheric model components defines state
variables (e.g., sea surface temperature, surface wind components), rotates
the winds relative to Earth, applies unit conversions and performs vertical
interpolation to interact with the newly introduced co-processing component.</p>
<sec id="Ch1.S2.SS1.SSS1">
  <title>RegCM</title>
      <p id="d1e231">The dynamical core of RegCM4 is based on the primitive equation,
hydrostatic version of the NCAR
and Pennsylvania State University mesoscale model MM5 <xref ref-type="bibr" rid="bib1.bibx20" id="paren.23"/>. The
latest version of the model (RegCM 4.6) also allows the non-hydrostatic
dynamical core to support applications with high spatial resolutions (<inline-formula><mml:math id="M1" display="inline"><mml:mrow><mml:mo>&lt;</mml:mo><mml:mn mathvariant="normal">10</mml:mn></mml:mrow></mml:math></inline-formula> km). The model includes two different land surface models:
(1) Biosphere-Atmosphere Transfer Scheme <xref ref-type="bibr" rid="bib1.bibx14" id="paren.24"><named-content content-type="pre">BATS;</named-content></xref> and
(2) Community Land Model (CLM), version 4.5 <xref ref-type="bibr" rid="bib1.bibx49" id="paren.25"/>. The model
also includes specific physical parameterizations to define air–sea
interaction over the sea and lakes <xref ref-type="bibr" rid="bib1.bibx25" id="paren.26"><named-content content-type="pre">one-dimensional lake
model;</named-content></xref>. The Zeng ocean air–sea parameterization
<xref ref-type="bibr" rid="bib1.bibx63" id="paren.27"/> is extended to introduce the atmosphere model as<?pagebreak page236?> a
component of the coupled modeling system. In this way, the atmospheric model
can exchange both two- and three-dimensional fields with other model
components such as ocean, wave and river-routing components that are
active in an area inside of the atmospheric model domain as well as the in situ
visualization component.</p>
</sec>
<sec id="Ch1.S2.SS1.SSS2">
  <title>WRF</title>
      <p id="d1e270">The WRF model consists of fully compressible non-hydrostatic equations, and
the prognostic variables include the three-dimensional wind, perturbation
quantities of pressure, potential temperature, geopotential, surface
pressure, turbulent kinetic energy and scalars (e.g., water vapor mixing
ratio, cloud water). The model is suitable for a broad range of applications
and has a variety of options to choose parameterization schemes for the
planetary boundary layer (PBL), convection, explicit moisture, radiation and
soil processes to support analysis of different Earth system processes. The
PBL scheme of the model has a significant impact on the exchange of moisture,
momentum and energy between air and sea (and land), depending on the
alternative surface layer options (e.g., drag coefficients) used in the model
configuration. A few modifications are made in the WRF (version 3.8.1) model
itself to couple it with the RegESM modeling system. These modifications include
rearranging the WRF time-related subroutines, which are inherited from an
older version of the ESMF time manager API (Application Programming Interface)
that was available in 2009, so that the model can be compiled with the newer version of the ESMF
library (version 7.1.0) alongside the older one, which requires mapping
time manager data types between the old and new versions.</p>
</sec>
</sec>
<sec id="Ch1.S2.SS2">
  <title>Ocean models (OCNs)</title>
      <p id="d1e280">The current version of the coupled modeling system supports two different
ocean model components (OCNs): (1) Regional Ocean Modeling System <xref ref-type="bibr" rid="bib1.bibx46 bib1.bibx23" id="paren.28"><named-content content-type="pre">ROMS
revision 809;</named-content></xref>, which is developed and
distributed by Rutgers University, and (2) MIT general circulation model
<xref ref-type="bibr" rid="bib1.bibx32 bib1.bibx33" id="paren.29"><named-content content-type="pre">MITgcm version c63s;</named-content></xref>. In this case, the ROMS
and MITgcm models are selected due to their large user communities and
different vertical grid representations. Although the selection of ocean
model components depends on user experience and application, often the choice
of vertical grid system has a determining role in some specific applications.
For example, the ROMS ocean model uses a terrain-following (namely
<inline-formula><mml:math id="M2" display="inline"><mml:mi>s</mml:mi></mml:math></inline-formula> coordinates) vertical grid system that allows a better representation of
the coastal processes, but MITgcm uses <inline-formula><mml:math id="M3" display="inline"><mml:mi>z</mml:mi></mml:math></inline-formula> levels generally used for
applications that involve open oceans and seas. Similar to the atmospheric
model component, both ocean models are slightly modified to allow data
exchange with the other model components. In the current version of the
coupled modeling system, there is no interaction between wave and ocean model
components, which could be crucial for some applications (e.g., surface ocean
circulation and wave interaction) that need to consider the two-way
interaction between waves and ocean currents. The exchange fields defined in
the coupled modeling system between ocean and atmosphere strictly depend on
the application and the studied problem. In some studies, the ocean model
requires heat, freshwater and momentum fluxes to be provided by the
atmospheric component, while in others, the ocean component retrieves surface
atmospheric conditions (e.g., surface<?pagebreak page237?> temperature, humidity, surface
pressure, wind components, precipitation) to calculate fluxes internally, by
using bulk formulas <xref ref-type="bibr" rid="bib1.bibx59" id="paren.30"/>. In the current design of the
coupled modeling system, the driver allows selecting the desired exchange
fields from the predefined list of the available fields. The exchange field
list is a simple database with all fields that can be exported or imported by
the component. In this way, the coupled modeling system can be adapted to
different applications without any code customizations in both the driver and
individual model components.</p>
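      <p>To illustrate this idea, the following minimal sketch shows the kind of lookup table that such an exchange field database amounts to and how the driver could select the requested subset of fields. All entries, names and attributes below are hypothetical examples chosen for illustration; they do not reproduce the actual RegESM source code.</p>
      <preformat><![CDATA[
#include <iostream>
#include <string>
#include <vector>

// One entry of a hypothetical exchange field database; the driver activates
// only the subset of fields requested in its configuration file.
struct ExchangeField {
  std::string shortName;    // name used in the coupling configuration
  std::string standardName; // CF-style standard name (metadata exchange)
  std::string units;        // units expected by the destination component
  std::string source;       // exporting component, e.g., "ATM"
  std::string destination;  // importing component, e.g., "OCN"
};

int main() {
  // A few illustrative entries (not the full RegESM field list).
  const std::vector<ExchangeField> db = {
      {"sst", "sea_surface_temperature", "K", "OCN", "ATM"},
      {"taux", "surface_downward_eastward_stress", "N m-2", "ATM", "OCN"},
      {"prec", "precipitation_flux", "kg m-2 s-1", "ATM", "OCN"},
  };

  // The driver picks the desired fields by short name from its configuration.
  const std::vector<std::string> requested = {"sst", "taux"};
  for (const auto& name : requested)
    for (const auto& f : db)
      if (f.shortName == name)
        std::cout << f.source << " -> " << f.destination << ": "
                  << f.standardName << " [" << f.units << "]\n";
  return 0;
}
]]></preformat>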
<sec id="Ch1.S2.SS2.SSS1">
  <title>ROMS</title>
      <p id="d1e315">The ROMS is a three-dimensional, free-surface, terrain-following numerical
ocean model that solves the Reynolds-averaged Navier–Stokes equations using
the hydrostatic and Boussinesq assumptions. The governing equations are in
flux form, and the model uses Cartesian horizontal coordinates and
vertical sigma coordinates with three different stretching functions. The model
also supports second-, third- and fourth-order horizontal and vertical
advection schemes for momentum and tracers via its preprocessor flags.</p>
</sec>
<sec id="Ch1.S2.SS2.SSS2">
  <title>MITgcm</title>
      <p id="d1e324">The MIT general circulation model (MITgcm) is a generic and widely used ocean
model that solves the Boussinesq form of Navier–Stokes equations for an
incompressible fluid. It supports both hydrostatic and non-hydrostatic
applications with a spatial finite-volume discretization on a curvilinear
computational grid. The model has an implicit free surface and uses a
partial-step topography formulation to define the vertical depth layers. The
MITgcm model supports different advection schemes for momentum and tracers
such as centered second-order, third-order upwind and second-order flux
limiters for a variety of applications. The model used in the coupled
modeling system was slightly modified by ENEA (Italian National Agency
for New Technologies, Energy and Sustainable Economic Development) to allow data exchange with
other model components. Detailed information about the regional
applications of the MITgcm ocean model is given in the study of
<xref ref-type="bibr" rid="bib1.bibx7" id="text.31"/>, which uses the PROTHEUS modeling system specifically developed for
the Mediterranean Sea.</p>
</sec>
</sec>
<sec id="Ch1.S2.SS3">
  <title>Wave model (WAV)</title>
      <p id="d1e338">Surface waves play a crucial role in the dynamics of the PBL in the atmosphere
and the currents in the ocean. Therefore, the wave component is included in
the coupled modeling system to have a better representation of atmospheric
PBL and surface conditions (e.g., surface roughness, friction velocity and wind
speed). In this case, the wave component is based on WAM Cycle 4 (4.5.3-MPI).
WAM is a third-generation model without any assumption on the spectral
shape <xref ref-type="bibr" rid="bib1.bibx37" id="paren.32"/>. It considers all the main processes that
control the evolution of a wave field in deep water, namely the generation by
wind, the non-linear wave–wave interactions and white capping. The
model was initially developed by GKSS (now the Helmholtz-Zentrum Geesthacht, HZG)
in Germany. The original version of the WAM model was slightly modified to
retrieve surface atmospheric conditions (e.g., wind speed components or
friction velocity and wind direction) from the RegCM4 atmospheric model and
to send back calculated surface roughness. In the current version of the
modeling system, the wave component cannot be coupled with the WRF model due to
the missing modifications on the WRF side. In RegCM4, the received
surface roughness is used to calculate air–sea transfer coefficients and
fluxes over sea using Zeng ocean air–sea parameterization <xref ref-type="bibr" rid="bib1.bibx63" id="paren.33"/>.
In this design, it is also possible to define a threshold for maximum
roughness length (the default value is 0.02 m) and friction velocity (the
default value is 0.02 m) in the configuration file of RegCM4 to ensure the
stability of the overall modeling system. Initial investigation of the added
value of atmosphere–wave coupling in the Mediterranean Sea can be found in
<xref ref-type="bibr" rid="bib1.bibx48" id="text.34"/>.</p>
</sec>
<sec id="Ch1.S2.SS4">
  <title>River-routing model (RTM)</title>
      <p id="d1e356">To simulate the lateral freshwater fluxes (river discharge) at the land
surface and to provide river discharge to the ocean model component, the RegESM
modeling system uses the Hydrological Discharge (HD, version 1.0.2) model
developed by the Max Planck Institute <xref ref-type="bibr" rid="bib1.bibx21 bib1.bibx22" id="paren.35"/>. The
model is designed to run on a fixed global regular grid with 0.5<inline-formula><mml:math id="M4" display="inline"><mml:msup><mml:mi/><mml:mo>∘</mml:mo></mml:msup></mml:math></inline-formula>
horizontal resolution using daily time series of surface runoff and drainage
as input fields. In that case, the model uses the pre-computed river channel
network to simulate the horizontal transport of the runoff within model
watersheds using different flow processes such as overland flow, baseflow and
river flow. The river-routing model (RTM) plays an essential role in the
freshwater budget of the ocean model by closing the water cycle between the
atmosphere and ocean model components. The original version of the model was
slightly modified to support interaction with the coupled model components.
To close the water cycle between land and ocean, the model retrieves surface and
subsurface runoff from the atmospheric component (RegCM or WRF) and provides
estimated river discharge to the selected ocean model component (ROMS or
MITgcm). In the current design of the driver, rivers can be represented in
two different ways: (1) as individual point sources that are vertically
distributed over the model layers and (2) as a freshwater surface boundary
condition like precipitation (<inline-formula><mml:math id="M5" display="inline"><mml:mi>P</mml:mi></mml:math></inline-formula>) or evaporation minus precipitation
(<inline-formula><mml:math id="M6" display="inline"><mml:mrow><mml:mi>E</mml:mi><mml:mo>-</mml:mo><mml:mi>P</mml:mi></mml:mrow></mml:math></inline-formula>). In this case, the driver configuration file is used to select the
river representation type (1 or 2) for each river individually. The first
option is preferred if river plumes need to be defined correctly by
distributing river discharge vertically among the<?pagebreak page238?> ocean model vertical
layers. The second option is used to distribute river discharge to the ocean
surface when there is a need to apply river discharge to a large areal extent
close to the river mouth. In this case, a special algorithm implemented in
the NUOPC cap of the ocean model components (ROMS and MITgcm) is used to find
the affected ocean model grid points based on the effective radius (in kilometers) defined in
the configuration file of the driver.</p>
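      <p>As a rough sketch of how such an algorithm could work, the code below flags all sea points of a flattened ocean grid that lie within the effective radius of a river mouth, using the haversine great-circle distance; the discharge would then be spread over the flagged points. The data layout, names and linear search are illustrative assumptions, not the actual implementation in the NUOPC caps.</p>
      <preformat><![CDATA[
#include <cmath>
#include <cstddef>
#include <vector>

// Minimal description of one ocean grid point (assumed layout).
struct GridPoint { double lon, lat; bool isSea; };

// Great-circle distance in kilometers (haversine formula).
double distanceKm(double lon1, double lat1, double lon2, double lat2) {
  const double kEarthRadiusKm = 6371.0;
  const double kDegToRad = 3.14159265358979323846 / 180.0;
  const double dLat = (lat2 - lat1) * kDegToRad;
  const double dLon = (lon2 - lon1) * kDegToRad;
  const double a = std::sin(dLat / 2) * std::sin(dLat / 2) +
                   std::cos(lat1 * kDegToRad) * std::cos(lat2 * kDegToRad) *
                       std::sin(dLon / 2) * std::sin(dLon / 2);
  return 2.0 * kEarthRadiusKm * std::asin(std::sqrt(a));
}

// Return the indices of all sea points within the effective radius (km)
// of the river mouth; the river discharge would be distributed over them.
std::vector<std::size_t> affectedPoints(const std::vector<GridPoint>& grid,
                                        double mouthLon, double mouthLat,
                                        double effectiveRadiusKm) {
  std::vector<std::size_t> result;
  for (std::size_t i = 0; i < grid.size(); ++i)
    if (grid[i].isSea &&
        distanceKm(grid[i].lon, grid[i].lat, mouthLon, mouthLat) <=
            effectiveRadiusKm)
      result.push_back(i);
  return result;
}
]]></preformat>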
</sec>
<sec id="Ch1.S2.SS5">
  <title>The driver: RegESM</title>
      <p id="d1e396">RegESM (version 1.1) is completely redesigned and improved version of the
previously used and validated coupled atmosphere–ocean model (RegCM-ROMS) to
study the regional climate of the Caspian Sea and its catchment area
<xref ref-type="bibr" rid="bib1.bibx59" id="paren.36"/>. To simplify the design and to create a more generic,
extensible and flexible modeling system that aims to support easy integration
of multiple model components and applications, RegESM uses a driver to
implement the hub-and-spoke approach. In this case, all the model components
are combined using ESMF (version 7.1.0) to structure the coupled
modeling system. ESMF is selected because of its unique online
regridding capability, which allows the driver to perform different
interpolation types (e.g., bilinear, conservative) over the exchange fields
(e.g., sea surface temperature, heat and momentum fluxes), and because of the NUOPC
layer. The NUOPC layer is a software layer built on top of ESMF. It
refines the capabilities of ESMF by providing a more precise definition of a
component model and how components should interact and share data in a
coupled system. ESMF also provides the capability of transferring
computational grids in the model component memory, which has critical
importance in the integration of the modeling system with a co-processing
environment (see also Sect. <xref ref-type="sec" rid="Ch1.S3"/>). The RegESM modeling system also
uses ESMF and the NUOPC layer to support various configurations of component
interactions such as defining multiple coupling time steps among the model
components. An example configuration of the four-component (ATM, OCN, RTM
and WAV) coupled modeling system can be seen in Fig. <xref ref-type="fig" rid="Ch1.F2"/>. In this
case, the RTM component runs with a daily coupling time step (slow) and interacts with
the ATM and OCN components, while the ATM and OCN components can interact with each other
more frequently (fast), e.g., every 3 h.</p>
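      <p>The nesting of slow and fast coupling intervals can be sketched with the following schematic loop. The function names and time step values are placeholders for illustration; the actual sequencing in RegESM is generated by the NUOPC run sequence rather than hand-written loops.</p>
      <preformat><![CDATA[
#include <iostream>

// Placeholder component actions; in RegESM these are ESMF/NUOPC component
// phases invoked through connector components.
void advanceFastComponents(int hours) {
  std::cout << "  ATM/OCN/WAV advance " << hours << " h and exchange fields\n";
}
void coupleRiverRouting() {
  std::cout << "RTM exchange: runoff in, river discharge out\n";
}

int main() {
  const int kSlowDtHours = 24;    // slow coupling step (RTM), e.g., daily
  const int kFastDtHours = 3;     // fast coupling step (ATM-OCN-WAV), e.g., 3 h
  const int kRunLengthHours = 48; // total run length for this sketch

  // Each slow interval contains an integer number of fast intervals.
  for (int t = 0; t < kRunLengthHours; t += kSlowDtHours) {
    coupleRiverRouting(); // slow interaction once per daily interval
    for (int tf = 0; tf < kSlowDtHours; tf += kFastDtHours)
      advanceFastComponents(kFastDtHours); // fast ATM-OCN(-WAV) interactions
  }
  return 0;
}
]]></preformat>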

      <?xmltex \floatpos{t}?><fig id="Ch1.F2" specific-use="star"><caption><p id="d1e408">The run sequence of model components in the case of explicit type
coupling. In this case, the fast coupling time step is used for the
interaction between the atmosphere, ocean and wave components. The slow
coupling time step is only used to interact with the river-routing
component.</p></caption>
          <?xmltex \igopts{width=384.112205pt}?><graphic xlink:href="https://gmd.copernicus.org/articles/12/233/2019/gmd-12-233-2019-f02.png"/>

        </fig>

      <p id="d1e417">The interaction (also called as run sequences) among the model components and
driver is facilitated by the connector components provided by the NUOPC layer.
Connector components are mainly used to create a link between individual
model components and the driver. In this case, the number of active components
and their interaction determines the number of connector components created in
the modeling system. The interaction between model components can be either
(1) bidirectional, such as atmosphere and ocean coupled modeling systems,
or (2) unidirectional, such as atmosphere and co-processing modeling systems.
In the unidirectional case, the co-processing component does not send any
information back to the atmosphere model and only processes the retrieved information; thus, there
is only one connector component.</p>
      <p id="d1e420">The RegESM modeling system can use two different types of time-integration
coupling schemes between the atmosphere and ocean components: (1) explicit and
(2) semi-implicit (or leapfrog) (Fig. <xref ref-type="fig" rid="Ch1.F3"/>). In the explicit type
coupling, two connector components (ATM-OCN and OCN-ATM directions) are
executed concurrently at every coupling time step, and model components start
and stop at the same model time (Fig. <xref ref-type="fig" rid="Ch1.F3"/>a). In the semi-implicit
coupling type (Fig. <xref ref-type="fig" rid="Ch1.F3"/>b), the ocean model receives surface boundary
conditions from the atmospheric model at one coupling time step ahead of the
current ocean model time. The semi-implicit coupling aims at lowering the
overall computational cost of a simulation by increasing stability for longer
coupling time steps.</p>
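      <p>The difference between the two schemes can be reduced to which atmospheric state the ocean component receives at each coupling interval, as in the following schematic; this illustrates the lag only and is not actual RegESM code.</p>
      <preformat><![CDATA[
#include <iostream>

// Schematic coupling loop; "state" stands for the surface fields exported
// by the atmosphere at a given coupling time.
int main() {
  const int kSteps = 4;
  const bool kSemiImplicit = true; // false: explicit coupling

  int atmState = 0; // index of the atmospheric state at the coupling step
  for (int step = 0; step < kSteps; ++step) {
    // Explicit: both connectors run with states valid at the same time.
    // Semi-implicit (leapfrog): the atmosphere runs ahead, so the ocean
    // receives surface conditions one coupling step ahead of its own clock.
    const int stateSeenByOcean = kSemiImplicit ? atmState + 1 : atmState;
    std::cout << "coupling step " << step << ": OCN receives ATM state "
              << stateSeenByOcean << "\n";
    ++atmState; // both components then advance one coupling interval
  }
  return 0;
}
]]></preformat>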

      <?xmltex \floatpos{t}?><fig id="Ch1.F3" specific-use="star"><caption><p id="d1e432">Schematic representation of <bold>(a)</bold> explicit and
<bold>(b)</bold> semi-implicit model coupling between two model components
(atmosphere and ocean). The numbers indicate the execution orders, which are
initialized in each coupling interval.</p></caption>
          <?xmltex \igopts{width=355.659449pt}?><graphic xlink:href="https://gmd.copernicus.org/articles/12/233/2019/gmd-12-233-2019-f03.png"/>

        </fig>

      <p id="d1e447">As described earlier, the execution of the model components is controlled by
the driver. Both sequential and concurrent execution of the model components
is allowed in the current version of the modeling system. If the model
components and the driver are configured to run in sequence on the same set
of persistent execution threads (PETs), then the modeling system executes in
a sequential mode. This mode is a much more efficient way to run the modeling
system in the case of limited computing resources. In the concurrent type of
execution, the model components run in mutually exclusive sets of PETs, but
the NUOPC connector component uses a union of available computational
resources (or PETs) of interacted model components. This way, the modeling
system can support a variety of computing systems ranging from local servers
to large computing systems that could include high-speed performance
networks, accelerators (i.e., graphics processing unit or GPU) and parallel
I/O capabilities. The main drawback of the concurrent execution approach is
that assigning the correct amount of computing resources to the individual model components
is not an easy task and might require an extensive performance
benchmark of a specific configuration of the model components to achieve the
best available computational performance. In this case, a load-balancing
analysis of the individual components and the driver plays a critical role in the
performance of the overall modeling system. For example, the LUCIA
(Load-balancing Utility and Coupling Implementation Appraisal) tool can be
used to collect all required information such as waiting time and calculation
time of each system component for a load-balancing analysis in the
OASIS3-MCT-based coupled system.</p>
      <p id="d1e450">In general, the design and development of the coupled modeling systems
involve a set of technical difficulties that arise due to the usage of the
different computational grids in the model components. One of the most common
examples is the mismatch between the land–sea masks of the model components
(e.g., atmosphere and ocean models). In this case, the unaligned land–sea
masks might produce artificial or unrealistic surface heat and momentum
fluxes around the coastlines, narrow bays, straits and seas. The simplest
solution is to modify the land–sea masks of the individual model<?pagebreak page239?> components
manually to align them; however, this requires time and is complex (especially
when the horizontal grid resolution is high). Besides, the procedure needs to
be repeated each time the model domain (e.g., a shift or change in the model
domain) or horizontal grid resolution is changed.</p>
      <p id="d1e453">The RegESM modeling system uses a customized interpolation technique that also
includes extrapolation to overcome the mismatched land–sea mask problem for
the interaction between the atmosphere, ocean and wave components. This approach
helps to create a more generic and automated solution for the remapping of
the exchange fields among the model components and enhances the flexibility of
the modeling system to adapt to different regional modeling applications.
There are three main stages in the customized interpolation technique:
(1) finding destination grid points whose land–sea mask type does not
completely match that of the source grid (unmapped grid points;
Fig. <xref ref-type="fig" rid="Ch1.F4"/>), (2) performing bilinear interpolation to transfer the
exchange field from the source to the destination grid and (3) performing extrapolation
in the destination grid to fill the unmapped grid points that are found in the first
step.</p>

      <?xmltex \floatpos{t}?><fig id="Ch1.F4" specific-use="star"><caption><p id="d1e460">Processing flowchart of the algorithm to find mapped and unmapped
grid points for two-step interpolation.</p></caption>
          <?xmltex \igopts{width=341.433071pt}?><graphic xlink:href="https://gmd.copernicus.org/articles/12/233/2019/gmd-12-233-2019-f04.png"/>

        </fig>

      <p id="d1e470">To find the unmapped grid points, the algorithm first interpolates the field
from source to destination grid (just over the sea) using a nearest-neighbor-type
interpolation (from Field_A to Field_B). Similarly, the same operation
is repeated by using a bilinear-type interpolation (from Field_A to
Field_C). Then, the results of both interpolations (Field_B and Field_C)
are
compared to identify unmapped grid points for the bilinear interpolation
(Fig. <xref ref-type="fig" rid="Ch1.F4"/>).</p>
      <p id="d1e475">The field can then be interpolated from the source to the destination grid
using a two-step interpolation approach. In the first step, the field is
interpolated from the source to the destination grid using a bilinear interpolation.
Then, a nearest-neighbor-type interpolation is used on the destination grid to
fill the unmapped grid points. One of the main drawbacks of this method is that
the resulting field might include unrealistic values and sharp gradients in
areas with a complex land–sea mask structure (e.g., channels, straits). The
artifacts around the coastlines can be fixed by applying a light<?pagebreak page240?> smoothing
after interpolation or using more sophisticated extrapolation techniques such
as the sea-over-land approach <xref ref-type="bibr" rid="bib1.bibx27 bib1.bibx15" id="paren.37"/>, which are not
included in the current version of the modeling system. Also, the usage of
a mosaic grid along with the second-order conservative interpolation method,
which gives smoother results when the ratios between the horizontal grid
resolutions of the source and destination grids are high, can overcome the
unaligned land–sea mask problem. The next major release of the ESMF library (8.0)
will include the creep fill strategy <xref ref-type="bibr" rid="bib1.bibx27" id="paren.38"/> to fill unmapped grid
points.</p>
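      <p>A compact sketch of the detection and fill steps on a flattened destination grid is given below. It assumes that points the interpolation cannot reach keep a large sentinel value and that the fill simply copies the nearest mapped destination point; both are illustrative assumptions rather than the ESMF implementation.</p>
      <preformat><![CDATA[
#include <cmath>
#include <cstddef>
#include <limits>
#include <vector>

const double kUnmapped = 1.0e20; // sentinel for points the interpolation
                                 // could not reach (assumed convention)

struct Point { double x, y; };

// Step 1 (detection): a destination point is unmapped when the
// nearest-neighbor result (Field_B) is valid but the bilinear result
// (Field_C) is not.
std::vector<bool> findUnmapped(const std::vector<double>& fieldB,
                               const std::vector<double>& fieldC) {
  std::vector<bool> unmapped(fieldC.size(), false);
  for (std::size_t i = 0; i < fieldC.size(); ++i)
    unmapped[i] = (fieldB[i] != kUnmapped) && (fieldC[i] == kUnmapped);
  return unmapped;
}

// Step 2 (fill): copy each unmapped point from the nearest mapped
// destination point (nearest-neighbor extrapolation on the destination grid).
void fillUnmapped(std::vector<double>& field,
                  const std::vector<bool>& unmapped,
                  const std::vector<Point>& coords) {
  for (std::size_t i = 0; i < field.size(); ++i) {
    if (!unmapped[i]) continue;
    double best = std::numeric_limits<double>::max();
    for (std::size_t j = 0; j < field.size(); ++j) {
      if (unmapped[j] || field[j] == kUnmapped) continue; // mapped points only
      const double d = std::hypot(coords[i].x - coords[j].x,
                                  coords[i].y - coords[j].y);
      if (d < best) { best = d; field[i] = field[j]; }
    }
  }
}
]]></preformat>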
</sec>
</sec>
<sec id="Ch1.S3">
  <title>Integration of a co-processing component in the RegESM modeling system</title>
      <p id="d1e491">The newly designed modeling framework is a combination of the ParaView
co-processing plugin – which is called Catalyst <xref ref-type="bibr" rid="bib1.bibx18" id="paren.39"/> – and the
ESMF library, which is specially designed for coupling different ESMs to create
more complex regional and global modeling systems. In conventional
co-processing-enabled simulation systems (a single physical model component
such as the atmosphere along with co-processing support), Catalyst is used to
integrate the ParaView visualization pipeline with the simulation code to support
in situ visualization through the use of application-specific custom adaptor
code <xref ref-type="bibr" rid="bib1.bibx31 bib1.bibx4 bib1.bibx40 bib1.bibx62" id="paren.40"/>. A visualization
pipeline is defined as a data flow network in which computation is described
as a collection of executable modules that are connected in a directed graph
representing how data move between modules <xref ref-type="bibr" rid="bib1.bibx38" id="paren.41"/>. There are
three types of modules in a visualization pipeline: sources (file readers and
synthetic data generators), filters (for transforming data) and sinks (file
writers and rendering module that provide images to a user interface). The
adaptor code acts as a wrapper layer and transforms information coming from
simulation code to the co-processing component in a compatible format that is
defined using ParaView/Catalyst and VTK (Visualization Toolkit) APIs.
Moreover, the adaptor code is responsible for defining the underlying
computational grids and associating them with the multidimensional fields.
After defining computational grids and fields, ParaView processes the
received data to perform co-processing to create desired products such as
rendered visualizations, added-value information (e.g., spatial and temporal
averages, derived fields) as well as writing raw data to the disk storage
(Fig. <xref ref-type="fig" rid="Ch1.F5"/>a).</p>

      <?xmltex \floatpos{t}?><fig id="Ch1.F5" specific-use="star"><caption><p id="d1e507">Comparison of the <bold>(a)</bold> conventional and <bold>(b)</bold> ESMF-integrated in situ visualization system.</p></caption>
        <?xmltex \igopts{width=327.206693pt}?><graphic xlink:href="https://gmd.copernicus.org/articles/12/233/2019/gmd-12-233-2019-f05.png"/>

      </fig>

      <p id="d1e522">The implemented novel approach aims to create a more generic and standardized
co-processing environment designed explicitly for Earth system science
(Fig. <xref ref-type="fig" rid="Ch1.F5"/>b). With this approach, existing ESMs, which are coupled
with the ESMF library using the NUOPC interface, may benefit from<?pagebreak page241?> the use of an
integrated modeling framework to analyze the data flowing from the
multicomponent and multiscale modeling system without extensive code
development and restructuring. In this design, the adaptor code interacts
with the driver through the use of the NUOPC cap and provides an abstraction
layer for the co-processing component. As discussed previously, ESMF uses a standardized interface (initialization, run and finalize
routines) to plug new model components into an existing modeling system such as
RegESM in an efficient and optimized way. To that end, the new approach will
benefit from the standardization of common tasks in the model components to
integrate the co-processing component with the existing modeling system. In this
case, all information (grids, fields and metadata) required by
ParaView/Catalyst is received from the driver, and direct interaction between
other model components and the co-processing component is not allowed
(Fig. <xref ref-type="fig" rid="Ch1.F5"/>b). The implementation logic of the adaptor code is very
similar to the conventional co-processing approach (Fig. <xref ref-type="fig" rid="Ch1.F5"/>a).
However, in this case, it uses the standardized interface of ESMF and the NUOPC layer to define the computational grid and associated
two- and three-dimensional fields of model components. The adaptor layer maps the
field (i.e., <italic>ESMF_Field</italic>) and grid (i.e., <italic>ESMF_Grid</italic>)
objects to their VTK equivalents through the use of VTK and co-processing
APIs, which are provided by ParaView and the co-processing plugin (Catalyst).
With the new approach, the interoperability between the
simulation code and the in situ visualization system is enhanced and
standardized. The new design provides an easy-to-develop, extensible and
flexible modeling environment for Earth system science.</p>
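      <p>A minimal sketch of such an adaptor, written against the legacy ParaView/Catalyst C++ API (<italic>vtkCPProcessor</italic> and related classes), is shown below. The function names, the single <italic>temperature</italic> field and the way the grid is rebuilt from longitude, latitude and level arrays are simplified assumptions for illustration and do not reproduce the actual RegESM adaptor.</p>
      <preformat><![CDATA[
#include <vtkCPDataDescription.h>
#include <vtkCPInputDataDescription.h>
#include <vtkCPProcessor.h>
#include <vtkCPPythonScriptPipeline.h>
#include <vtkDoubleArray.h>
#include <vtkNew.h>
#include <vtkPointData.h>
#include <vtkPoints.h>
#include <vtkStructuredGrid.h>

namespace {
vtkCPProcessor* processor = nullptr; // lives for the whole simulation
}

// Initialize phase: start Catalyst and register a Python pipeline script
// (e.g., one exported from the ParaView co-processing plugin).
void adaptorInitialize(const char* pipelineScript) {
  processor = vtkCPProcessor::New();
  processor->Initialize();
  vtkNew<vtkCPPythonScriptPipeline> pipeline;
  pipeline->Initialize(pipelineScript);
  processor->AddPipeline(pipeline.GetPointer());
}

// Run phase: wrap the (already redistributed) component data as a
// vtkStructuredGrid and hand it to the co-processing pipelines.
void adaptorCoProcess(double time, unsigned int step, int ni, int nj, int nk,
                      const double* lon, const double* lat, const double* lev,
                      const double* temperature) {
  vtkNew<vtkCPDataDescription> desc;
  desc->AddInput("input"); // port name expected by the pipeline script
  desc->SetTimeData(time, step);
  if (processor->RequestDataDescription(desc.GetPointer()) == 0)
    return; // no pipeline needs data at this time step

  vtkNew<vtkStructuredGrid> grid;
  grid->SetDimensions(ni, nj, nk);
  vtkNew<vtkPoints> points;
  for (int k = 0; k < nk; ++k)       // VTK expects the x index to vary fastest
    for (int j = 0; j < nj; ++j)
      for (int i = 0; i < ni; ++i)
        points->InsertNextPoint(lon[j * ni + i], lat[j * ni + i], lev[k]);
  grid->SetPoints(points.GetPointer());

  vtkNew<vtkDoubleArray> field;
  field->SetName("temperature");
  // save=1: VTK must not free the memory, which the simulation still owns.
  field->SetArray(const_cast<double*>(temperature),
                  static_cast<vtkIdType>(ni) * nj * nk, /*save=*/1);
  grid->GetPointData()->AddArray(field.GetPointer());

  desc->GetInputDescriptionByName("input")->SetGrid(grid.GetPointer());
  processor->CoProcess(desc.GetPointer());
}

// Finalize phase: release Catalyst resources.
void adaptorFinalize() {
  if (processor) {
    processor->Finalize();
    processor->Delete();
    processor = nullptr;
  }
}
]]></preformat>
      <p>In the integrated design, the arrays passed to such routines would be the exchange fields already redistributed by the driver rather than data owned by a single model component.</p>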
      <p id="d1e537">The development of the adaptor component plays an essential role in the
overall design and performance of the integrated modeling environment. The
adaptor code mainly includes a set of functions to initialize
(defining computational grids and associated input ports), run and finalize
the co-processing environment. Similarly, ESMF also uses the
same approach to plug new model components into the modeling system as ESMF
components. In ESMF, the simulation code is separated into three
essential phases (initialization, run and finalize), and the calling
interfaces are triggered by the driver to control the simulation codes (e.g.,
atmosphere and<?pagebreak page242?> ocean models). In this case, the initialization phase includes
definition and initialization of the exchange variables, reading input
(initial and boundary conditions) and configuration files and defining the
underlying computational grid (step 1 in Fig. <xref ref-type="fig" rid="Ch1.F6"/>). The run phase
includes a time stepping loop to run the model component in a defined period
and continues until simulation ends (step 4 in Fig. <xref ref-type="fig" rid="Ch1.F6"/>). The time
interval to exchange data between the model and co-processing component can be
defined using a coupling time step just like the interaction among other model
components. According to the ESMF convention, the model and co-processing
components are defined as gridded components, while the driver is a coupler
component. In each coupling loop, the coupler component prepares exchange
fields according to the interaction among components by applying regridding
(except coupling with the co-processing component), performing a unit conversion
and common operations over the fields (i.e., rotation of wind field).</p>
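      <p>As an example of one of these common operations, the grid-to-Earth rotation of the wind components can be written as a small helper. The sign convention depends on how the grid rotation angle is defined, so the sketch below is a generic form of the standard transformation rather than the exact RegESM code.</p>
      <preformat><![CDATA[
#include <cmath>

// Rotate grid-relative wind components (uGrid, vGrid) to Earth-relative
// components (uEast, vNorth), given the angle (radians) between the grid
// x axis and geographic east, measured counterclockwise.
void rotateWind(double uGrid, double vGrid, double angle,
                double& uEast, double& vNorth) {
  uEast  = uGrid * std::cos(angle) - vGrid * std::sin(angle);
  vNorth = uGrid * std::sin(angle) + vGrid * std::cos(angle);
}
]]></preformat>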

      <?xmltex \floatpos{t}?><fig id="Ch1.F6" specific-use="star"><caption><p id="d1e547">The interaction between the driver defined by ESMF/NUOPC and
the co-processing component (ParaView/Catalyst).</p></caption>
        <?xmltex \igopts{width=327.206693pt}?><graphic xlink:href="https://gmd.copernicus.org/articles/12/233/2019/gmd-12-233-2019-f06.png"/>

      </fig>

      <p id="d1e556">In the new version of the RegESM modeling system (1.1), the driver is
extended to redistribute two- and three-dimensional fields from physical model
components to allow interaction with the co-processing component. In the
initialization phase, the numerical grids of the ESMF components are transformed
into their VTK equivalents using the adaptor code (step 3 in Fig. <xref ref-type="fig" rid="Ch1.F6"/>).
In this case, the <italic>ESMF_Grid</italic> object is used to create a
<italic>vtkStructuredGrid</italic> object along with its modified parallel two-dimensional
decomposition configuration, which is supported by the ESMF/NUOPC grid transfer
capability (Fig. <xref ref-type="fig" rid="Ch1.F7"/>). According to the design, each model component
transfers its numerical grid representation to the co-processing component at
the beginning of the simulation (step 1 in Fig. <xref ref-type="fig" rid="Ch1.F6"/>) while assigning
an independent two-dimensional decomposition ratio to the retrieved grid
definitions. The example configuration in Fig. <xref ref-type="fig" rid="Ch1.F7"/> demonstrates
mapping of the <inline-formula><mml:math id="M7" display="inline"><mml:mrow><mml:mn mathvariant="normal">2</mml:mn><mml:mo>×</mml:mo><mml:mn mathvariant="normal">3</mml:mn></mml:mrow></mml:math></inline-formula>
decomposition ratio (in <inline-formula><mml:math id="M8" display="inline"><mml:mi>x</mml:mi></mml:math></inline-formula> and <inline-formula><mml:math id="M9" display="inline"><mml:mi>y</mml:mi></mml:math></inline-formula> directions) of the ATM component to <inline-formula><mml:math id="M10" display="inline"><mml:mrow><mml:mn mathvariant="normal">2</mml:mn><mml:mo>×</mml:mo><mml:mn mathvariant="normal">2</mml:mn></mml:mrow></mml:math></inline-formula>
in the co-processing component. Similarly, the ocean model transfers its
numerical grid with the <inline-formula><mml:math id="M11" display="inline"><mml:mrow><mml:mn mathvariant="normal">4</mml:mn><mml:mo>×</mml:mo><mml:mn mathvariant="normal">4</mml:mn></mml:mrow></mml:math></inline-formula> decomposition ratio to the co-processing component
with <inline-formula><mml:math id="M12" display="inline"><mml:mrow><mml:mn mathvariant="normal">2</mml:mn><mml:mo>×</mml:mo><mml:mn mathvariant="normal">2</mml:mn></mml:mrow></mml:math></inline-formula> (Fig. <xref ref-type="fig" rid="Ch1.F7"/>). In this case, ATM and OCN model
components do not need to have the same geographical domain. The only
limitation is that the domain of the ATM model component must cover the entire
OCN model domain for an ATM-OCN coupled system to provide the surface
boundary condition for the OCN component. The main advantage of the generic
implementation of the driver component is the ability to assign different computational
resources to the components. Computational resources with accelerator
support (GPU) can be independently used by the co-processing component to perform
rendering (e.g., isosurface extraction, volume rendering and texture
mapping) and to process the high volume of data in an efficient and optimized
way. The initialization phase is also responsible for defining exchange
fields that will be transferred among the model components and maps
<italic>ESMF_Field</italic> representations as <italic>vtkMultiPieceDataSet</italic> objects
in the co-processing component (steps 2–3 in Fig. <xref ref-type="fig" rid="Ch1.F6"/>). Due to the
modified two-dimensional domain decomposition structure of the numerical
grids of the simulation codes, the adaptor code also modifies the definition
of ghost regions – a small subset of the global domain that is used to
perform numerical operations around edges of the decomposition elements. In
this case, the ghost regions (or halo regions in ESMF convention) are updated
by using specialized calls, and after that, the simulation data are passed
(as <italic>vtkMultiPieceDataSet</italic>) to the co-processing component. During the
simulation, the co-processing component of the modeling system also
synchronizes with the simulation code and retrieves updated data (step 5 in
Fig. <xref ref-type="fig" rid="Ch1.F6"/>) to process and analyze the results (step 6 in
Fig. <xref ref-type="fig" rid="Ch1.F6"/>). The interaction between the driver and the adaptor continues
until the simulation ends (steps 4, 5 and 6 in Fig. <xref ref-type="fig" rid="Ch1.F6"/>) and the
driver continues to redistribute the exchange fields using
<italic>ESMF_FieldRedist</italic> calls. The NUOPC cap of model components also
supports vertical interpolation of the three-dimensional exchange fields to
height (from terrain-following coordinates of the RegCM atmosphere model) or
depth coordinate (from <inline-formula><mml:math id="M13" display="inline"><mml:mi>s</mml:mi></mml:math></inline-formula> coordinates of the ROMS ocean model) before passing
information to the co-processing component. In this design, the vertical
interpolation is introduced to ensure consistency in the vertical scales and
units of the data coming from the atmosphere and ocean components. Then,
finalizing routines of the model and co-processing components are called to
stop the model simulations and the data analysis pipeline that destroy the
defined data structure(s) and free the memory (steps 7–8 in
Fig. <xref ref-type="fig" rid="Ch1.F6"/>).</p>

      <?xmltex \floatpos{t}?><fig id="Ch1.F7" specific-use="star"><caption><p id="d1e671">Two-component (atmosphere and ocean) representation of the grid transfer
and remapping feature of the ESMF/NUOPC interface.</p></caption>
        <?xmltex \igopts{width=341.433071pt}?><graphic xlink:href="https://gmd.copernicus.org/articles/12/233/2019/gmd-12-233-2019-f07.jpg"/>

      </fig>

</sec>
<sec id="Ch1.S4">
  <title>Use case and performance benchmark</title>
      <p id="d1e686">To test the capability of the newly designed integrated modeling system
described briefly in the previous section, the three-component (atmosphere,
ocean and co-processing) configuration of the RegESM 1.1 modeling system is
implemented to analyze Category 5 Hurricane Katrina. Hurricane Katrina was
the costliest natural disaster and one of the five deadliest
hurricanes in the history of the United States, and the storm currently
ranks as the third most intense tropical cyclone to make landfall in the United States.
After making landfall on the southern Florida coast as a weak Category 1
storm around 22:30 UTC, 25 August 2005, it strengthened to a Category 5 storm
by 12:00 UTC, 28 August 2005, as the storm entered the central Gulf of Mexico
(GoM). The model simulations are performed over a 3-day period, i.e.,
27–30 August 2005, which is the most intense period of the cyclone, to
observe the evolution of Hurricane Katrina and understand the importance
of air–sea interaction regarding its development and predictability. The next
section mainly details the three-component configuration of the modeling
system as well as the computing environment, preliminary benchmark results
performed with limited<?pagebreak page243?> computing resources (without GPU support) and analysis
of the evolution of Hurricane Katrina.</p>
<sec id="Ch1.S4.SS1">
  <title>Working environment</title>
      <p id="d1e694">The model simulations and performance benchmarks are done on a cluster
(SARIYER) provided by the National Center for High-Performance Computing
(UHeM) in Istanbul, Turkey. Each compute node runs the CentOS 7.2 operating
system and is equipped with two Intel Xeon E5-2680 v4 (2.40 GHz)
processors (28 cores in total) and 128 GB of RAM. In addition to the compute nodes,
the cluster is connected to a high-performance parallel disk system (Lustre)
with 349 TB storage capacity. The performance network, which is based on
Infiniband FDR (56 Gbps), is designed to give the highest performance for the
communication among the servers and the disk system. Due to the lack of GPU
accelerators in the entire system, the in situ visualization integrated
performance benchmarks are done with the support of software rendering
provided by the Mesa library. Mesa is an open-source OpenGL implementation that
supports a wide range of graphics hardware, each with its own back end, called a
renderer. Mesa also provides several software-based renderers for use on
systems without graphics hardware. In this case, ParaView is installed with
Mesa support to render information without hardware-based accelerators.</p>
</sec>
<sec id="Ch1.S4.SS2">
  <title>Domain and model configurations</title>
      <p id="d1e703">RegESM 1.1 is configured to couple
atmosphere (ATM; RegCM) and ocean (OCN; ROMS) models with the newly introduced
co-processing component (ParaView/Catalyst version 5.4.1) to analyze the
evolution of Hurricane Katrina and to assess the overall performance of the
modeling system. In this case, two atmospheric model domains were designed
for RegCM simulations using an offline nesting approach, as shown in
Fig. <xref ref-type="fig" rid="Ch1.F8"/>. The outer atmospheric model domain (low resolution, LR)
with a resolution of 27 km is centered at 25.0<inline-formula><mml:math id="M14" display="inline"><mml:msup><mml:mi/><mml:mo>∘</mml:mo></mml:msup></mml:math></inline-formula> N, 77.5<inline-formula><mml:math id="M15" display="inline"><mml:msup><mml:mi/><mml:mo>∘</mml:mo></mml:msup></mml:math></inline-formula> W,
and covers almost the entire United States, the western part of the Atlantic
Ocean and northeastern part of the Pacific Ocean for better representation of
the large-scale atmospheric circulation systems. The outer domain is enlarged
as much as possible to minimize the effect of the lateral boundaries of the
atmospheric model in the simulation results of the inner model domain. The
horizontal grid spacing of the inner domain (high resolution, HR) is 3 km and
covers the entire GoM and the western Atlantic Ocean to provide
high-resolution atmospheric forcing for coupled atmosphere–ocean model
simulations and perform cloud-resolving simulations. Unlike the outer domain,
the model for the inner domain is configured to use the non-hydrostatic
dynamical core (available in RegCM 4.6) to allow better representation of local-scale
vertical acceleration and essential pressure features.</p>

      <?xmltex \floatpos{t}?><fig id="Ch1.F8" specific-use="star"><caption><p id="d1e728">Domain for the RegESM simulations with topography and bathymetry of
the region. The solid white boxes represent boundaries of the atmosphere
(both outer and inner) and ocean model domains.</p></caption>
          <?xmltex \igopts{width=341.433071pt}?><graphic xlink:href="https://gmd.copernicus.org/articles/12/233/2019/gmd-12-233-2019-f08.png"/>

        </fig>

      <?pagebreak page244?><p id="d1e737">The lateral boundary condition for the outer domain is obtained from the European
Centre for Medium-Range Weather Forecasts (ECMWF) latest global atmospheric
reanalysis <xref ref-type="bibr" rid="bib1.bibx13" id="paren.42"><named-content content-type="pre">ERA-Interim project;</named-content></xref>, which is available at 6 h
intervals at a resolution of <inline-formula><mml:math id="M16" display="inline"><mml:mrow><mml:mn mathvariant="normal">0.75</mml:mn><mml:msup><mml:mi/><mml:mo>∘</mml:mo></mml:msup><mml:mo>×</mml:mo><mml:mn mathvariant="normal">0.75</mml:mn><mml:msup><mml:mi/><mml:mo>∘</mml:mo></mml:msup></mml:mrow></mml:math></inline-formula> in the
horizontal and 37 pressure levels in the vertical. On the other hand, the
lateral boundary condition of the inner domain is specified by the results of
the outer model domain. The Massachusetts Institute of Technology-Emanuel
convective parameterization scheme <xref ref-type="bibr" rid="bib1.bibx16 bib1.bibx17" id="paren.43"><named-content content-type="pre">MIT-EMAN;</named-content></xref>
for the cumulus representation along with the subgrid explicit moisture
<xref ref-type="bibr" rid="bib1.bibx41" id="paren.44"><named-content content-type="pre">SUBEX;</named-content></xref> scheme for large-scale precipitation are used for
the low-resolution outer domain.</p>
      <?pagebreak page245?><p id="d1e775">As can be seen in Fig. <xref ref-type="fig" rid="Ch1.F8"/>, the ROMS ocean model is configured to
cover the entire GoM to allow better tracking of Hurricane Katrina. The
ocean model configuration is very similar to the
configuration used by the Physical Oceanography Numerical Group (PONG) at Texas
A&amp;M University (TAMU); the original model configuration can be
accessed from their THREDDS (Thematic Real-time Environmental Distributed
Data Services) data server (TDS). THREDDS is a service that aims to provide
access to an extensive collection of real-time and archived datasets, and TDS
is a web server that provides metadata and data access for scientific
datasets using a variety of remote data access protocols. The ocean model
has a spatial resolution of <inline-formula><mml:math id="M17" display="inline"><mml:mrow><mml:mn mathvariant="normal">1</mml:mn><mml:mo>/</mml:mo><mml:mn mathvariant="normal">36</mml:mn></mml:mrow></mml:math></inline-formula><inline-formula><mml:math id="M18" display="inline"><mml:msup><mml:mi/><mml:mo>∘</mml:mo></mml:msup></mml:math></inline-formula>, which corresponds to a
non-uniform resolution of around 3 km (<inline-formula><mml:math id="M19" display="inline"><mml:mrow><mml:mn mathvariant="normal">655</mml:mn><mml:mo>×</mml:mo><mml:mn mathvariant="normal">489</mml:mn></mml:mrow></mml:math></inline-formula> grid points) with
the highest grid resolution in the northern part of the domain. The model has 60
vertical sigma layers (<inline-formula><mml:math id="M20" display="inline"><mml:mrow><mml:msub><mml:mi mathvariant="italic">θ</mml:mi><mml:mi mathvariant="normal">s</mml:mi></mml:msub><mml:mo>=</mml:mo><mml:mn mathvariant="normal">10.0</mml:mn></mml:mrow></mml:math></inline-formula>, <inline-formula><mml:math id="M21" display="inline"><mml:mrow><mml:msub><mml:mi mathvariant="italic">θ</mml:mi><mml:mi mathvariant="normal">b</mml:mi></mml:msub><mml:mo>=</mml:mo><mml:mn mathvariant="normal">2.0</mml:mn></mml:mrow></mml:math></inline-formula>)
to provide a detailed representation of the main circulation patterns of the
region and vertical tracer gradients. The bottom topography data of the GoM
are constructed using the ETOPO1 dataset <xref ref-type="bibr" rid="bib1.bibx6" id="paren.45"/>, and minimum
depth (<inline-formula><mml:math id="M22" display="inline"><mml:mrow><mml:msub><mml:mi>h</mml:mi><mml:mi mathvariant="normal">c</mml:mi></mml:msub></mml:mrow></mml:math></inline-formula>) is set to 400 m. The bathymetry is also modified so
that the ratio of depth of any two adjacent columns does not exceed 0.25 to
enhance the stability of the model and ensure hydrostatic consistency that
prevents pressure gradient error. The Mellor–Yamada level 2.5 turbulent
closure <xref ref-type="bibr" rid="bib1.bibx36" id="paren.46"><named-content content-type="pre">MY;</named-content></xref> is used for vertical mixing, while rotated
tensors of the harmonic formulation are used for horizontal mixing. The
lateral boundary conditions for the ROMS ocean model are provided by Naval
Oceanographic Office Global Navy Coastal Ocean Model (NCOM) during
27–30 August 2005.</p>
      <p id="d1e863">The model coupling time step between the atmosphere and ocean model components is
set to 1 h but a 6 min coupling time step is used to provide one-way
interaction with the co-processing component to study Hurricane Katrina in a very
high temporal resolution. In the coupled model simulations, the ocean model
provides sea surface temperature (SST) data to the atmospheric model in the region where their
numerical grids overlap. In the rest of the domain, the atmospheric model
uses SST data provided by the ERA-Interim dataset (prescribed SST). The results
of the performance benchmark also include additional tests with a smaller
coupling time step such as 3 min for the interaction with the co-processing
component. In this case, the model simulations for the analysis of Hurricane
Katrina run over 3 days, but only 1 day of simulation length is chosen
in the performance benchmarks to reduce the compute time.</p>
</sec>
<sec id="Ch1.S4.SS3">
  <title>Performance benchmark</title>
      <p id="d1e872">A set of simulations is performed with different model configurations to
assess the overall performance of the coupled modeling system by focusing on
the overhead of the newly introduced co-processing component
(Table <xref ref-type="table" rid="Ch1.T1"/>). The performance benchmarks include analysis of the
extra overhead provided by the co-processing component, coupling interval
between physical models and the co-processing component under different rendering
loads such as various visualization pipelines (Table <xref ref-type="table" rid="Ch1.T1"/>). In this
case, same model domains that are described in the previous section
(Sect. <xref ref-type="sec" rid="Ch1.S4.SS2"/>) are also used in the benchmark simulations. The LR
atmospheric model domain includes around 900 000 grid points, while the HR
domain contains 25 million grid points to test scaling up to a large number
of processors. In both cases, the ocean model configuration is the same, and
it has around 19 million grid points. Besides the use of a non-hydrostatic
dynamical core in the atmospheric model component in the HR case, the rest of
the model configuration is preserved. To isolate the overhead of the driver
from that of the co-processing component, the individual model
components (ATM and OCN) are first run in stand-alone mode; the best-scaling
two-dimensional decomposition configurations are then used in the coupled
model simulations: CPL (two-component case:
atmosphere–ocean) and COP (three-component case: atmosphere, ocean and
the co-processing component). Due to the current limitation in the integration of
the co-processing component, the coupled model only supports sequential-type
execution when the co-processing component is activated, but this limitation
will be removed in the future version of the modeling system (RegESM 2.0). As
mentioned in the previous section, the length of the simulations is kept
relatively short (1 day) in the benchmark analysis to perform many
simulations with different model configurations (i.e., coupling interval,
visualization pipelines and domain decomposition parameters).</p>

<?xmltex \floatpos{t}?><table-wrap id="Ch1.T1" specific-use="star"><caption><p id="d1e884">Tested model configurations for benchmark simulations. Note that the
dimensions of vertical coordinates of ATM and OCN components are shown here
after vertical interpolation from sigma to height and <inline-formula><mml:math id="M23" display="inline"><mml:mi>s</mml:mi></mml:math></inline-formula> coordinates to
depth. The visualization pipelines are also given in the Supplement.</p></caption><oasis:table frame="topbot"><oasis:tgroup cols="4">
     <oasis:colspec colnum="1" colname="col1" align="justify" colwidth="79.667717pt"/>
     <oasis:colspec colnum="2" colname="col2" align="justify" colwidth="119.501575pt"/>
     <oasis:colspec colnum="3" colname="col3" align="justify" colwidth="119.501575pt"/>
     <oasis:colspec colnum="4" colname="col4" align="justify" colwidth="119.501575pt"/>
     <oasis:thead>
       <oasis:row rowsep="1">
         <oasis:entry colname="col1"/>
         <oasis:entry colname="col2">P1: Case I</oasis:entry>
         <oasis:entry colname="col3">P2: Case II</oasis:entry>
         <oasis:entry colname="col4">P3: Case III</oasis:entry>
       </oasis:row>
     </oasis:thead>
     <oasis:tbody>
       <oasis:row rowsep="1">
         <oasis:entry colname="col1">Visualization</oasis:entry>
         <oasis:entry colname="col2"><?xmltex \igopts{width=119.501575pt}?><inline-graphic xlink:href="https://gmd.copernicus.org/articles/12/233/2019/gmd-12-233-2019-g01.png"/></oasis:entry>
         <oasis:entry colname="col3"><?xmltex \igopts{width=119.501575pt}?><inline-graphic xlink:href="https://gmd.copernicus.org/articles/12/233/2019/gmd-12-233-2019-g02.png"/></oasis:entry>
         <oasis:entry colname="col4"><?xmltex \igopts{width=119.501575pt}?><inline-graphic xlink:href="https://gmd.copernicus.org/articles/12/233/2019/gmd-12-233-2019-g03.png"/></oasis:entry>
       </oasis:row>
       <oasis:row rowsep="1">
         <oasis:entry colname="col1">Pipeline</oasis:entry>
         <oasis:entry colname="col2"><?xmltex \igopts{width=119.501575pt}?><inline-graphic xlink:href="https://gmd.copernicus.org/articles/12/233/2019/gmd-12-233-2019-g04.png"/></oasis:entry>
         <oasis:entry colname="col3"><?xmltex \igopts{width=119.501575pt}?><inline-graphic xlink:href="https://gmd.copernicus.org/articles/12/233/2019/gmd-12-233-2019-g05.png"/></oasis:entry>
         <oasis:entry colname="col4"><?xmltex \igopts{width=119.501575pt}?><inline-graphic xlink:href="https://gmd.copernicus.org/articles/12/233/2019/gmd-12-233-2019-g06.png"/></oasis:entry>
       </oasis:row>
       <oasis:row rowsep="1">
         <oasis:entry colname="col1">Primitives</oasis:entry>
         <oasis:entry colname="col2">ATM: contour for topography polyline for coastline and direct volume rendering for clouds</oasis:entry>
         <oasis:entry colname="col3">ATM: same as the previous case but it includes isosurface for wind speed and glyph for wind at specified level</oasis:entry>
         <oasis:entry colname="col4">ATM: contour for topography, isosurface for wind speed colored by relative humidity <?xmltex \hack{\hfill\break}?>OCN: contour for bathymetry, direct volume rendering for current</oasis:entry>
       </oasis:row>
       <oasis:row rowsep="1">
         <oasis:entry colname="col1">Domain size</oasis:entry>
         <oasis:entry colname="col2">ATM <?xmltex \hack{\hfill\break}?>LR: <inline-formula><mml:math id="M24" display="inline"><mml:mrow><mml:mn mathvariant="normal">170</mml:mn><mml:mo>×</mml:mo><mml:mn mathvariant="normal">235</mml:mn><mml:mo>×</mml:mo><mml:mn mathvariant="normal">27</mml:mn></mml:mrow></mml:math></inline-formula> <?xmltex \hack{\hfill\break}?>HR: <inline-formula><mml:math id="M25" display="inline"><mml:mrow><mml:mn mathvariant="normal">880</mml:mn><mml:mo>×</mml:mo><mml:mn mathvariant="normal">1240</mml:mn><mml:mo>×</mml:mo><mml:mn mathvariant="normal">27</mml:mn></mml:mrow></mml:math></inline-formula></oasis:entry>
         <oasis:entry colname="col3">ATM <?xmltex \hack{\hfill\break}?>Same as Case I</oasis:entry>
         <oasis:entry colname="col4">ATM <?xmltex \hack{\hfill\break}?>Same as Case I <?xmltex \hack{\hfill\break}?>OCN <?xmltex \hack{\hfill\break}?> <inline-formula><mml:math id="M26" display="inline"><mml:mrow><mml:mn mathvariant="normal">653</mml:mn><mml:mo>×</mml:mo><mml:mn mathvariant="normal">487</mml:mn><mml:mo>×</mml:mo><mml:mn mathvariant="normal">21</mml:mn></mml:mrow></mml:math></inline-formula></oasis:entry>
       </oasis:row>
       <oasis:row rowsep="1">
         <oasis:entry colname="col1">Number of fields</oasis:entry>
         <oasis:entry colname="col2"><inline-formula><mml:math id="M27" display="inline"><mml:mrow><mml:mn mathvariant="normal">1</mml:mn><mml:mo>×</mml:mo><mml:mn mathvariant="normal">3</mml:mn></mml:mrow></mml:math></inline-formula>-D ATM <?xmltex \hack{\hfill\break}?>Relative humidity</oasis:entry>
         <oasis:entry colname="col3"><inline-formula><mml:math id="M28" display="inline"><mml:mrow><mml:mn mathvariant="normal">4</mml:mn><mml:mo>×</mml:mo><mml:mn mathvariant="normal">3</mml:mn></mml:mrow></mml:math></inline-formula>-D ATM <?xmltex \hack{\hfill\break}?>Relative humidity <?xmltex \hack{\hfill\break}?>Wind (<inline-formula><mml:math id="M29" display="inline"><mml:mrow><mml:mi>u</mml:mi><mml:mo>,</mml:mo><mml:mi>v</mml:mi><mml:mo>,</mml:mo><mml:mi>w</mml:mi></mml:mrow></mml:math></inline-formula>)</oasis:entry>
         <oasis:entry colname="col4"><inline-formula><mml:math id="M30" display="inline"><mml:mrow><mml:mn mathvariant="normal">4</mml:mn><mml:mo>×</mml:mo><mml:mn mathvariant="normal">3</mml:mn></mml:mrow></mml:math></inline-formula>-D ATM <?xmltex \hack{\hfill\break}?>Relative humidity <?xmltex \hack{\hfill\break}?>Wind (<inline-formula><mml:math id="M31" display="inline"><mml:mrow><mml:mi>u</mml:mi><mml:mo>,</mml:mo><mml:mi>v</mml:mi><mml:mo>,</mml:mo><mml:mi>w</mml:mi></mml:mrow></mml:math></inline-formula>) <?xmltex \hack{\hfill\break}?> <inline-formula><mml:math id="M32" display="inline"><mml:mrow><mml:mn mathvariant="normal">4</mml:mn><mml:mo>×</mml:mo><mml:mn mathvariant="normal">3</mml:mn></mml:mrow></mml:math></inline-formula>-D OCN <?xmltex \hack{\hfill\break}?>Ocean current (<inline-formula><mml:math id="M33" display="inline"><mml:mrow><mml:mi>u</mml:mi><mml:mo>,</mml:mo><mml:mi>v</mml:mi><mml:mo>,</mml:mo><mml:mi>w</mml:mi></mml:mrow></mml:math></inline-formula>) <?xmltex \hack{\hfill\break}?>Land–sea mask</oasis:entry>
       </oasis:row>
       <oasis:row rowsep="1">
         <oasis:entry colname="col1">Data size <?xmltex \hack{\hfill\break}?>ATM<inline-formula><mml:math id="M34" display="inline"><mml:mo>+</mml:mo></mml:math></inline-formula>OCN (MB)</oasis:entry>
         <oasis:entry colname="col2">LR: 8.3 <?xmltex \hack{\hfill\break}?>HR: 224.0</oasis:entry>
         <oasis:entry colname="col3">LR: 33.2 <?xmltex \hack{\hfill\break}?>HR: 896.0</oasis:entry>
         <oasis:entry colname="col4">LR: <inline-formula><mml:math id="M35" display="inline"><mml:mrow><mml:mn mathvariant="normal">33.2</mml:mn><mml:mo>+</mml:mo><mml:mn mathvariant="normal">25.4</mml:mn><mml:mo>=</mml:mo><mml:mn mathvariant="normal">58.6</mml:mn></mml:mrow></mml:math></inline-formula> <?xmltex \hack{\hfill\break}?>HR: <inline-formula><mml:math id="M36" display="inline"><mml:mrow><mml:mn mathvariant="normal">896.0</mml:mn><mml:mo>+</mml:mo><mml:mn mathvariant="normal">25.4</mml:mn><mml:mo>=</mml:mo><mml:mn mathvariant="normal">921.4</mml:mn></mml:mrow></mml:math></inline-formula></oasis:entry>
       </oasis:row>
       <oasis:row>
         <oasis:entry colname="col1">Time (s)</oasis:entry>
         <oasis:entry colname="col2">LR: 2.3–3.7 <?xmltex \hack{\hfill\break}?>HR: 17.7–65.0</oasis:entry>
         <oasis:entry colname="col3">LR: 2.3–3.8 <?xmltex \hack{\hfill\break}?>HR: 18.4–79.3</oasis:entry>
         <oasis:entry colname="col4">LR: 6.8–14.6 <?xmltex \hack{\hfill\break}?>HR: 7.8–10.1</oasis:entry>
       </oasis:row>
     </oasis:tbody>
   </oasis:tgroup></oasis:table></table-wrap>

      <p id="d1e1274">In the benchmark results, the slightly modified version of the speedup is
used because the best possible sequential implementation of the utilized
numerical model (stand-alone and coupled) does not exist for the used
demonstration application and model configurations. In this case, the
speedup is defined as the ratio of the parallel execution time for the
minimum number of processors required to run the simulation
(<inline-formula><mml:math id="M37" display="inline"><mml:mrow><mml:msub><mml:mi>T</mml:mi><mml:mi mathvariant="normal">p</mml:mi></mml:msub><mml:mo>(</mml:mo><mml:msub><mml:mi>N</mml:mi><mml:mo>min⁡</mml:mo></mml:msub><mml:mo>)</mml:mo></mml:mrow></mml:math></inline-formula>; based on 140 cores in this study) to the parallel
execution time (<inline-formula><mml:math id="M38" display="inline"><mml:mrow><mml:msub><mml:mi>T</mml:mi><mml:mi mathvariant="normal">p</mml:mi></mml:msub><mml:mo>(</mml:mo><mml:mi>N</mml:mi><mml:mo>)</mml:mo></mml:mrow></mml:math></inline-formula>; see Eq. <xref ref-type="disp-formula" rid="Ch1.E1"/>).
            <disp-formula id="Ch1.E1" content-type="numbered"><mml:math id="M39" display="block"><mml:mrow><mml:mi>S</mml:mi><mml:mo>(</mml:mo><mml:mi>N</mml:mi><mml:mo>)</mml:mo><mml:mo>=</mml:mo><mml:mstyle displaystyle="true"><mml:mfrac style="display"><mml:mrow><mml:msub><mml:mi>T</mml:mi><mml:mi mathvariant="normal">p</mml:mi></mml:msub><mml:mo>(</mml:mo><mml:msub><mml:mi>N</mml:mi><mml:mo>min⁡</mml:mo></mml:msub><mml:mo>)</mml:mo></mml:mrow><mml:mrow><mml:msub><mml:mi>T</mml:mi><mml:mi mathvariant="normal">p</mml:mi></mml:msub><mml:mo>(</mml:mo><mml:mi>N</mml:mi><mml:mo>)</mml:mo></mml:mrow></mml:mfrac></mml:mstyle></mml:mrow></mml:math></disp-formula>
          <?xmltex \hack{\newpage}?>The measured wall-clock time and the calculated speedup of
stand-alone model components (ATM and OCN) can be seen in Fig. <xref ref-type="fig" rid="Ch1.F9"/>.
In this case, two different atmospheric model configurations are considered
to see the effect of the domain size and the non-hydrostatic dynamical core on
the benchmark results (LR and HR; Fig. <xref ref-type="fig" rid="Ch1.F8"/>). The results show that
the model scales well and, as expected, the HR case scales better than the LR
configuration of <?pagebreak page247?>the atmospheric component (ATM).
At around 588 processors, the largest
available compute resource, the communication among the processors dominates
the benchmark results of the LR case; this is not evident in the HR case, which
scales very well without any performance problems (Fig. <xref ref-type="fig" rid="Ch1.F9"/>a).
Similar to the atmospheric model component, the ocean model (OCN) is also
tested to find the best two-dimensional domain decomposition configuration
(tiles in the <inline-formula><mml:math id="M40" display="inline"><mml:mi>x</mml:mi></mml:math></inline-formula> and <inline-formula><mml:math id="M41" display="inline"><mml:mi>y</mml:mi></mml:math></inline-formula> directions). As can be seen from Fig. <xref ref-type="fig" rid="Ch1.F9"/>b,
the selection of the tile configuration affects the overall performance of
the ocean model. In general, the model scales better if the tile in the <inline-formula><mml:math id="M42" display="inline"><mml:mi>x</mml:mi></mml:math></inline-formula>
direction is larger than that in the <inline-formula><mml:math id="M43" display="inline"><mml:mi>y</mml:mi></mml:math></inline-formula> direction, and this is more
evident at small processor counts. The tile effect is mainly due to
the memory management of the Fortran programming language (column-major
order) as well as the total number of active grid points (not masked as land)
placed in each tile. The tile options must be selected carefully,
considering the dimension of the model domain in each direction. In some tile
configurations, it is not possible to run the model at all because of the underlying
numerical solver and the required minimum number of ghost points. To summarize, the
ocean model scales well up to 588 cores with the best tile configurations
indicated in Fig. <xref ref-type="fig" rid="Ch1.F9"/>b.</p>
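      <p>As a concrete illustration of Eq. <xref ref-type="disp-formula" rid="Ch1.E1"/>, the short Python sketch below computes the modified speedup from a set of wall-clock times; the timing values are placeholders, not the measured benchmark results.</p>
      <preformat># Illustration of Eq. (1): modified speedup relative to the smallest
# usable core count (140 cores in this study). The wall-clock times
# below are placeholders, not the measured benchmark values.
wall_clock = {140: 1000.0, 280: 520.0, 588: 260.0}   # cores -> seconds

n_min = min(wall_clock)
for n in sorted(wall_clock):
    speedup = wall_clock[n_min] / wall_clock[n]      # S(N) = T_p(N_min)/T_p(N)
    print(f"{n:4d} cores: S(N) = {speedup:.2f}")</preformat>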

      <?xmltex \floatpos{t}?><fig id="Ch1.F9" specific-use="star"><caption><p id="d1e1406">Benchmark results of stand-alone <bold>(a)</bold> atmosphere (ATM; both
LR and HR) and <bold>(b)</bold> ocean (OCN) models. Note that timing results of
the atmosphere model are in log axes to show both LR and HR cases in the same
figure. The black lines represent measured wall-clock times in seconds and red
lines show speedup. The envelope represents the timing and speedup results
that are done using the same number of cores but different two-dimensional
decomposition configuration. The best two-dimensional decomposition
parameters are also shown in the timing results for the ocean model case.</p></caption>
          <?xmltex \igopts{width=341.433071pt}?><graphic xlink:href="https://gmd.copernicus.org/articles/12/233/2019/gmd-12-233-2019-f09.png"/>

        </fig>

      <p id="d1e1421">The performance of the two-component modeling system (CPL) can be
investigated using the benchmark results of the stand-alone atmosphere and
ocean models. Similar to the benchmark results of the stand-alone model
components, the measured wall-clock time and the calculated speedup of the
coupled model simulations are also shown in Fig. <xref ref-type="fig" rid="Ch1.F10"/>. In this case,
the best two-dimensional decomposition parameters of the stand-alone ocean
model simulations are used in the coupled model simulations
(Fig. <xref ref-type="fig" rid="Ch1.F9"/>b). The overhead is calculated by comparing the CPL
wall-clock time with the sum of the stand-alone OCN and ATM wall-clock times,
since the components run sequentially. The comparison of the stand-alone and coupled model
simulations shows that the driver component introduces an additional
5 %–10 % overhead in the total execution time (on average 5 % for the LR and
6 % for the HR case), which increases slightly with the total
number of processors used. The extra overhead is mainly due to the interpolation
(a sparse matrix multiply performed by ESMF) and the extrapolation along the
coastlines that matches the land–sea masks of the atmosphere and ocean models and
fills the unmapped grid points for data exchange (Fig. <xref ref-type="fig" rid="Ch1.F4"/>); it
grows slightly with the increasing number of cores and thus the number of
MPI (Message Passing Interface) communications between the model components (Figs. <xref ref-type="fig" rid="Ch1.F9"/> and
<xref ref-type="fig" rid="Ch1.F10"/>a).</p>

      <?xmltex \floatpos{t}?><fig id="Ch1.F10" specific-use="star"><caption><p id="d1e1436">Benchmark results of the <bold>(a)</bold> CPL simulations, <bold>(b)</bold> COP
simulations with the P1 visualization pipeline, <bold>(c)</bold> COP simulations with
the P2 visualization pipeline and <bold>(d)</bold> COP simulations with the P3
visualization pipeline. CPL represents the two-component modeling system (ATM
and OCN), and COP indicates three-component modeling system (ATM, OCN and
co-processing). Note that the HR case requires at least 140 cores to run and
the speedup results are given based on 140 cores.</p></caption>
          <?xmltex \igopts{width=327.206693pt}?><graphic xlink:href="https://gmd.copernicus.org/articles/12/233/2019/gmd-12-233-2019-f10.jpg"/>

        </fig>

      <p id="d1e1457">To further investigate the overhead introduced by the newly designed
co-processing component, the three-component modeling system (COP) is tested
with three different visualization pipelines (P1, P2 and P3;
Table <xref ref-type="table" rid="Ch1.T1"/>) using two different atmospheric model configurations (LR
and HR) and coupling intervals (3 and 6 min with co-processing). In this
case, the measured total execution time during the COP benchmark results also
includes vertical interpolation (performed in the ESMF cap of the model
components) to map data from sigma coordinates to height (or depth)
coordinates for both physical model components (ATM and OCN).</p>
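      <p>Conceptually, this vertical interpolation maps each model column from terrain-following levels to fixed height (or depth) levels; a minimal single-column NumPy sketch is shown below (the actual implementation resides in the Fortran NUOPC caps, and the profile used here is an idealized placeholder).</p>
      <preformat># Minimal single-column NumPy sketch of the sigma-to-height mapping
# performed in the ESMF/NUOPC caps before data are passed to the
# co-processing component; the actual implementation is in Fortran.
import numpy as np

def sigma_to_height(field_sigma, z_sigma, z_target):
    # np.interp expects monotonically increasing coordinates.
    order = np.argsort(z_sigma)
    return np.interp(z_target, z_sigma[order], field_sigma[order])

# Hypothetical column: 27 model levels, idealized temperature profile.
z_model = np.linspace(50.0, 15000.0, 27)     # level heights (m)
temp = 300.0 - 0.0065 * z_model              # constant lapse rate (K)
print(sigma_to_height(temp, z_model, np.arange(0.0, 15001.0, 1000.0)))</preformat>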
      <p id="d1e1462">As shown in Fig. <xref ref-type="fig" rid="Ch1.F10"/>b–d, the co-processing components require
10 %–40 % extra execution time for both LR and HR cases depending on
the used visualization pipeline when it is compared with CPL simulations. The
results also reveal that the fastest visualization pipeline is P3 and the
slowest one is P1 for the HR case (Fig. <xref ref-type="fig" rid="Ch1.F10"/>b and d). In this case,
the components are all run sequentially, and the performance of the
co-processing component becomes a bottleneck for the rest of the modeling
system, especially for the computing environment without GPU support like the
system used in the benchmark simulations. It is evident that if the
co-processing were run concurrently in a dedicated computing resource, the
overall performance of the modeling system would be improved because of the
simultaneous execution of the physical models and co-processing components.
Table <xref ref-type="table" rid="Ch1.T1"/> also includes the execution time of the single
visualization pipeline (measured by using the <italic>MPI_Wtime</italic> call) isolated
from the rest of the tasks. In this case, each rendering task takes 2–4 s
for the P1 and P2 cases and 7–15 s for the P3 case in the LR atmospheric model
configuration. For the HR case, P1 and P2 take around 17–80 s, and the P3 case
is rendered in around 8–10 s. These results show that the time spent in the
co-processing component (sending data to ParaView/Catalyst and rendering
the output) fluctuates considerably and does not exhibit predictable, stable
behavior. This may be due to the particular ParaView configuration, which uses
software-based rendering on the CPUs, and to the load on the high-performance
computing system used (UHeM); the fluctuations persist even when the benchmark
tests are repeated multiple times.</p>
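      <p>The per-pipeline timings quoted above were measured with the <italic>MPI_Wtime</italic> call; a minimal mpi4py sketch of such instrumentation is given below, in which <italic>run_pipeline()</italic> is a hypothetical stand-in for the rendering call.</p>
      <preformat># Sketch of timing an isolated rendering task with MPI_Wtime, as done
# for the values quoted in Table 1. run_pipeline() is a hypothetical
# stand-in for handing data to ParaView/Catalyst and rendering.
from mpi4py import MPI

comm = MPI.COMM_WORLD
t0 = MPI.Wtime()
# run_pipeline()                       # hypothetical rendering call
t1 = MPI.Wtime()

# Report the slowest rank, since rendering finishes collectively.
elapsed = comm.reduce(t1 - t0, op=MPI.MAX, root=0)
if comm.rank == 0:
    print(f"rendering took {elapsed:.2f} s")</preformat>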
      <p id="d1e1475">In addition to the testing modeling system with various data-processing loads,
a benchmark with an increased coupling time step is also performed (see P23M in
Fig. <xref ref-type="fig" rid="Ch1.F10"/>c). In this case, the coupling time step between physical
model components and the co-processing component is decreased (from 6 to
3 min) to produce output in a doubled frame rate, but coupling intervals
between physical model components (ATM and OCN) are kept same (1 h). The
benchmark results show that increasing the coupling time step also raises overhead
due to the co-processing from 45 % to 60 % for the HR case and pipeline P2
when it is compared with the results of two-component simulations (CPL;
Fig. <xref ref-type="fig" rid="Ch1.F10"/>a). It is also shown that the execution time of
the co-processing-enabled coupled simulations increases but the difference between
the P2 and P23M cases is reduced from 66 % to 37 % when the number of
processors is increased from 140 to 588.</p>
      <p id="d1e1482">In addition to the analysis of timing profiles of modeling system under
different rendering loads, the amount of data exchanged and used in the
in situ visualization case can be compared with the amount of data that would
be required for offline visualization at the same temporal frequency to
reveal the added value of the newly introduced co-processing<?pagebreak page248?> component. For
this purpose, the amount of data exchanged with the co-processing component is
given in Table <xref ref-type="table" rid="Ch1.T1"/> for three different visualization pipelines (P1,
P2 and P3). In co-processing mode, the data retrieved from model component
memory (single time step) by the driver are passed to ParaView/Catalyst
for rendering. In addition to processing data concurrently with the
simulation on the co-processing component, the offline visualization
(postprocessing) consists of computations done after the model is run and
requires storing numerical results in a disk environment. For example,
a 3-day long simulation with a 6 min coupling interval produces around 160 GB
data (720 time steps) just for a single variable from high-resolution
atmosphere component (P1 visualization pipeline) in the case using offline
visualization. With co-processing, the same analysis can be done by applying
the same visualization pipeline (P1), which requires processing only 224 MB
data stored in the memory, in each coupling interval. Moreover, storing
results of the 3-day long, high-resolution simulation of the RegCM atmosphere
model (in NetCDF format) for offline visualization requires around 1.5 TB
data in the case using a 6 min interval in the default configuration (<inline-formula><mml:math id="M44" display="inline"><mml:mrow><mml:mn mathvariant="normal">7</mml:mn><mml:mo>×</mml:mo><mml:mn mathvariant="normal">3</mml:mn></mml:mrow></mml:math></inline-formula>-D fields and <inline-formula><mml:math id="M45" display="inline"><mml:mrow><mml:mn mathvariant="normal">28</mml:mn><mml:mo>×</mml:mo><mml:mn mathvariant="normal">2</mml:mn></mml:mrow></mml:math></inline-formula>-D  fields). It is evident that the usage of the co-processing component
reduces the amount of data stored in the disk and allows a more efficient data
analysis pipeline.</p>
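      <p>The storage figures quoted above follow from simple arithmetic, as the short check below illustrates (values taken from the text).</p>
      <preformat># Back-of-the-envelope check of the storage figures quoted above,
# using the values given in the text.
steps = 3 * 24 * 60 // 6       # 3-day run, 6 min interval -> 720 steps
per_step_mb = 224.0            # one HR 3-D field (pipeline P1), in MB

total_gb = steps * per_step_mb / 1024.0
print(f"{steps} steps -> {total_gb:.0f} GB")   # ~158 GB, i.e., ~160 GB</preformat>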
      <p id="d1e1511">Besides the minor fluctuations in the benchmark results, the modeling system
with the co-processing component scales pretty well to the higher number of
processors (or cores) without any significant performance pitfalls in the
current configuration. On the other hand, the usage of accelerator-enabled
ParaView configuration (i.e., using the NVIDIA EGL library) and ParaView plugins
with accelerator support such as the NVIDIA IndeX volume rendering plugin and new
VTK-m filters to process data on GPU will improve the benchmark result. The
NVIDIA IndeX for the ParaView plugin enables large-scale and high-quality volume
data visualization capabilities of the NVIDIA IndeX library inside
ParaView and might help to reduce time to process high-resolution spatial
data (HR case). In addition to NVIDIA IndeX plugin, VTK-m is a toolkit of
scientific visualization algorithms for emerging processor architectures such
as GPUs <xref ref-type="bibr" rid="bib1.bibx39" id="paren.47"/>. The model configurations used in the
simulations also write simulation results to the disk in NetCDF format. In
the case of disabling of writing data to disk or configuring the models to write
data with large time intervals (i.e., monthly), the simulations with an active
co-processing component will run much faster and make the analysis of the
model results in real time efficient especially in live mode (see
Sect. <xref ref-type="sec" rid="Ch1.S5.SS1"/>).</p>
</sec>
</sec>
<?pagebreak page249?><sec id="Ch1.S5">
  <title>Demonstration application</title>
      <p id="d1e1527">The newly designed modeling system can analyze numerical simulation results
in both in situ (or live) and co-processing modes. In this case, a Python
script, that defines the visualization pipeline, mainly controls the
selection of the operating mode and is generated using the ParaView
co-processing plugin. The user could also activate live visualization mode
just by changing a single line of code (they would need to set
coprocessor.EnableLiveVisualization as True) in Python script. This section
aims to give more detailed information about two different approaches by
evaluating the numerical<?pagebreak page250?> simulation of Hurricane Katrina in both models to reveal
the designed modeling system capability and its limitations.</p>
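      <p>For reference, the following minimal Python sketch shows the mode switch in a Catalyst co-processing script; the full pipeline definition generated by the ParaView co-processing plugin is omitted here, and the trivial <italic>CreateCoProcessor</italic> body is only a placeholder to keep the sketch self-contained.</p>
      <preformat># Minimal sketch of the mode switch in a Catalyst co-processing
# script. The plugin-generated CreateCoProcessor() builds the full
# visualization pipeline; an empty CoProcessor is used here only as a
# placeholder.
from paraview import coprocessing

def CreateCoProcessor():
    return coprocessing.CoProcessor()

coprocessor = CreateCoProcessor()
coprocessor.EnableLiveVisualization(True)  # set to False for batch mode</preformat>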
<sec id="Ch1.S5.SS1">
  <title>Live visualization mode</title>
      <p id="d1e1535">While the live visualization was designed to examine the simulation state at a
specific point in time, the temporal filters such as ParticlePath,
ParticleTracer and TemporalStatistics that are designed to process data using
multiple time steps cannot be used in this mode. However, live visualization
mode allows connecting to the running simulation anytime through the ParaView
graphical user interface (GUI) in order to make a detailed analysis by modifying existing visualization
pipelines defined by a Python script. In this case, the numerical simulation
can be paused while the visualization pipeline is modified and will continue
to run with the revised one. It is evident that the live visualization
capability gives full control to the user to make further investigation about
the simulation results and facilitate better insight into the underlying
physical process and its evolution in time.</p>
      <p id="d1e1538">The current version of the co-processing-enabled modeling system can process
data of multiple model components by using the multichannel input port feature
of ParaView/Catalyst. In this case, each model has two input channels based
on the rank of exchange fields. For example, the atmospheric model component has
<italic>atm_input2d</italic> and <italic>atm_input3d</italic> input channels to make
processing available for both two- and three-dimensional exchange fields. The
underlying adaptor code resides between the NUOPC cap of the co-processing
component and ParaView/Catalyst and provides two grid definitions (2-D and
3-D) for each model component for further
analysis. In this design, the ParaView co-processing plugin is used to
generate Python co-processing scripts, and the user needs to map data sources to
input channels by using predefined names such as <italic>atm_input2d</italic> and
<italic>ocn_input3d</italic>. Then, the adaptor provides the required data to
the co-processing component through each channel to perform rendering and data
analysis in real time. The fields used in the co-processing
component are defined by a generic ASCII-formatted driver configuration file
(<italic>exfield.tbl</italic>), which is also used to define the exchange fields among
the other model components, such as the atmosphere and ocean models.
Figure <xref ref-type="fig" rid="Ch1.F11"/> shows a screenshot of the live visualization of the
three-dimensional relative humidity field provided by the low-resolution
atmospheric model component, the underlying topography information and the vorticity
of the ocean surface provided by the ocean model component.</p>
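      <p>In a generated pipeline script, these input channels appear as named producers. A minimal sketch of the mapping (with the function body abbreviated) might look as follows; the channel names match those defined above, and <italic>datadescription</italic> is supplied by Catalyst at run time.</p>
      <preformat># Sketch: mapping predefined input channels to pipeline sources inside
# the pipeline-creation function of a generated Catalyst script.
# `coprocessor` and `datadescription` are provided by the surrounding
# script and by Catalyst at run time, respectively.
def _CreatePipeline(coprocessor, datadescription):
    # Channel names must match those advertised by the adaptor code.
    atm3d = coprocessor.CreateProducer(datadescription, "atm_input3d")
    ocn2d = coprocessor.CreateProducer(datadescription, "ocn_input2d")
    # ... filters, renderers and writers are attached to atm3d/ocn2d ...</preformat>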

      <?xmltex \floatpos{t}?><fig id="Ch1.F11" specific-use="star"><caption><p id="d1e1561">Volume rendering of atmospheric relative humidity field
(<italic>atm_input3d</italic>) as well as the vorticity field in the ocean surface
(<italic>ocn_input2d</italic>) from the COP_LR simulation using ParaView/Catalyst in
live mode.</p></caption>
          <?xmltex \igopts{width=398.338583pt}?><graphic xlink:href="https://gmd.copernicus.org/articles/12/233/2019/gmd-12-233-2019-f11.png"/>

        </fig>

</sec>
<sec id="Ch1.S5.SS2">
  <title>Co-processing mode</title>
      <p id="d1e1582">In addition to the live visualization mode that is described briefly in the
previous section, ParaView/Catalyst also allows rendering and storing data
using a predefined co-processing pipeline (in Python) for further analysis.
Co-processing mode can be used for three purposes: (1) the simulation output
can be directed to the co-processing component to render data in batch mode
and write image files to disk, (2) added-value information (e.g.,
vorticity from wind components, eddy kinetic energy from ocean currents) can
be calculated and stored on disk for further analysis, and (3) simulation
output can be stored at a higher temporal resolution for later processing
(postprocessing) or to create a representative dataset for designing
visualization pipelines for the co-processing or live visualization modes.
In this case, the newly designed modeling system can apply multiple
visualization and data-processing pipelines to the simulation results at each
coupling time step, performing different sets of analyses on the results of
the same numerical simulation for more efficient data analysis. The modeling
system also facilitates multiple input ports to process data flowing from
multiple ESM components. In this design, input ports are defined
automatically by the co-processing component based on activated model
components (ATM, OCN, etc.) and each model component has two ports to handle
two- and three-dimensional grids (and fields) separately such as
<italic>atm_input2d</italic>, <italic>atm_input3d</italic>, <italic>ocn_input2d</italic> and
<italic>ocn_input3d</italic>.</p>
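      <p>At run time, Catalyst drives such a script through two generic entry points that are called at every coupling step. A sketch of their usual form (as produced by the co-processing plugin) is given below; <italic>coprocessor</italic> is the module-level object shown earlier, and the host and port for live connections are examples only.</p>
      <preformat># Generic entry points of a Catalyst co-processing script (sketch).
# Catalyst calls these at every coupling step; `coprocessor` is the
# module-level CoProcessor object created by the script.
def RequestDataDescription(datadescription):
    # Tell Catalyst which fields and meshes are needed this step.
    coprocessor.LoadRequestedData(datadescription)

def DoCoProcessing(datadescription):
    # Update the producers, then write any requested data and images.
    coprocessor.UpdateProducers(datadescription)
    coprocessor.WriteData(datadescription)
    coprocessor.WriteImages(datadescription)
    # Hand control to a connected ParaView GUI when live mode is on;
    # host and port are examples only.
    coprocessor.DoLiveVisualization(datadescription, "localhost", 22222)</preformat>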
      <p id="d1e1597">To test the capability of the co-processing component, the evolution of
Hurricane Katrina is investigated by using two different configurations of
the coupled model (COP_LR and COP_HR) that are also used to analyze the
overall computational performance of the modeling system (see
Sect. <xref ref-type="sec" rid="Ch1.S4.SS3"/>). In this case, both model configurations use the same
configuration of the OCN model component, but different horizontal resolutions
of the ATM model are considered (27 km for the LR and 3 km for the HR case).</p>
      <p id="d1e1602">Figure <xref ref-type="fig" rid="Ch1.F12"/> shows 3-hourly snapshots of the model-simulated clouds
that are generated by processing the three-dimensional relative humidity field
calculated by the low-resolution version of the coupled model (COP_LR) using
the NVIDIA IndeX volume rendering plugin as well as streamlines of Hurricane
Katrina, which is calculated using a three-dimensional wind field. The
visualization pipeline also includes sea surface height and surface current
from the ocean model component to make an integrated analysis of the model
results. Figure <xref ref-type="fig" rid="Ch1.F12"/>a–b show the streamlines that are produced by
extracting the hurricane using the ParaView <italic>Threshold</italic> filter. In this
case, the extracted region is used as a seed to calculate backward and
forward streamlines. In Fig. <xref ref-type="fig" rid="Ch1.F12"/>c–e, sea surface height, sea
surface current and surface wind vectors (10 m) are shown together to give
insight about the interaction of ocean-related variables with the atmospheric
wind. Lastly, the hurricane reaches land and starts to dissipate owing
to the increased surface roughness and the lack of an energy source
(Fig. <xref ref-type="fig" rid="Ch1.F12"/>f). Although the low-resolution atmosphere model
configuration is used, the information produced by the new modeling system
enables investigating the evolution of the hurricane at a very high temporal
resolution, which was not possible before. A day-long animation that is also
used to create Fig. <xref ref-type="fig" rid="Ch1.F12"/> can be found in the supplemental video
<xref ref-type="bibr" rid="bib1.bibx53" id="paren.48"/>.</p>
      <?pagebreak page251?><p id="d1e1622">In addition to low-resolution model results revealing the evolution of the
hurricane in a very high temporal resolution, low- and high-resolution model
results are also compared to see the added value of the increased horizontal
resolution of the atmospheric model component regarding representation of the
hurricane and its structure. To that end, a set of visualization pipelines
is designed to investigate the vertical updraft in the hurricane, simulated
track, precipitation pattern and ocean state. In this case, two time
snapshots are considered: (1) 28 August 2005, 00:00 UTC, at the early stage
of the hurricane in Category 5, and (2) 29 August 2005, 00:00 UTC, just before
Katrina makes its third and final landfall near the Louisiana–Mississippi
border, where the surface wind is powerful and surface currents had a strong
onshore component <xref ref-type="bibr" rid="bib1.bibx34 bib1.bibx35" id="paren.49"/>. In the analysis of the vertical
structure, the hurricane is isolated based on the criterion of surface wind
speed exceeding 20 m s<inline-formula><mml:math id="M46" display="inline"><mml:msup><mml:mi/><mml:mrow><mml:mo>-</mml:mo><mml:mn mathvariant="normal">1</mml:mn></mml:mrow></mml:msup></mml:math></inline-formula>, and the seed (a set of points
defined as <italic>vtkPoints</italic>) for the ParaView
<italic>StreamTracerWithCustomSource</italic> filter is defined dynamically using
a <italic>ProgrammableFilter</italic> as a circular plane with a radius of 1.2<inline-formula><mml:math id="M47" display="inline"><mml:msup><mml:mi/><mml:mo>∘</mml:mo></mml:msup></mml:math></inline-formula>
and points distributed at 0.2<inline-formula><mml:math id="M48" display="inline"><mml:msup><mml:mi/><mml:mo>∘</mml:mo></mml:msup></mml:math></inline-formula> intervals in both directions (<inline-formula><mml:math id="M49" display="inline"><mml:mi>x</mml:mi></mml:math></inline-formula> and
<inline-formula><mml:math id="M50" display="inline"><mml:mi>y</mml:mi></mml:math></inline-formula>) around the center of mass of the isolated region. Then, forward and
backward streamlines of vorticity are computed separately to see inflow at
low and intermediate levels and outflow at upper levels for both low- (COP_LR;
Fig. <xref ref-type="fig" rid="Ch1.F13"/>a, b, d and e) and high-resolution (COP_HR;
Fig. <xref ref-type="fig" rid="Ch1.F14"/>a, b, d and e) cases. The analysis of simulations reveals
that the vertical air movement shows higher spatial variability in
high-resolution simulation (COP_HR) case even if the overall structure of
the hurricane is similar in both cases. As expected, the strongest winds
occur in a region forming a ring around the eyewall of the hurricane, which
is where the lowest surface pressure occurs.</p>
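      <p>A minimal Python sketch of such a <italic>ProgrammableFilter</italic> script is given below; the center coordinates and the seed height are hypothetical placeholders, whereas in the actual pipeline they are derived from the center of mass of the isolated hurricane region.</p>
      <preformat># Sketch of a ParaView ProgrammableFilter script that builds the seed
# disk of vtkPoints described above (radius 1.2 degrees, 0.2 degree
# spacing). The center (cx, cy) and height z0 are hypothetical
# placeholders; in the actual pipeline they come from the center of
# mass of the isolated hurricane region. Inside a ProgrammableFilter,
# `output` is provided by ParaView (output type set to vtkPolyData).
import numpy as np
import vtk

cx, cy, z0 = -89.0, 27.0, 1000.0   # placeholder center (deg) and height (m)
radius, step = 1.2, 0.2            # degrees, as described in the text

points = vtk.vtkPoints()
for dx in np.arange(-radius, radius + step, step):
    for dy in np.arange(-radius, radius + step, step):
        if dx * dx + dy * dy > radius * radius:
            continue               # keep only points inside the disk
        points.InsertNextPoint(cx + dx, cy + dy, z0)

output.SetPoints(points)           # `output` is supplied by ParaView</preformat>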

      <?xmltex \floatpos{t}?><fig id="Ch1.F12" specific-use="star"><caption><p id="d1e1689">Rendering of the multicomponent (ATM-OCN-COP) fully coupled
simulation using ParaView. The temporal interval for the processed data is
defined as 6 min.</p></caption>
          <?xmltex \igopts{width=426.791339pt}?><graphic xlink:href="https://gmd.copernicus.org/articles/12/233/2019/gmd-12-233-2019-f12.png"/>

        </fig>

      <?xmltex \floatpos{t}?><fig id="Ch1.F13" specific-use="star"><caption><p id="d1e1700">Rendering of three-dimensional vorticity streamlines (s<inline-formula><mml:math id="M51" display="inline"><mml:msup><mml:mi/><mml:mrow><mml:mo>-</mml:mo><mml:mn mathvariant="normal">1</mml:mn></mml:mrow></mml:msup></mml:math></inline-formula>),
total precipitation (mm day<inline-formula><mml:math id="M52" display="inline"><mml:msup><mml:mi/><mml:mrow><mml:mo>-</mml:mo><mml:mn mathvariant="normal">1</mml:mn></mml:mrow></mml:msup></mml:math></inline-formula>) and sea surface temperature anomaly
(<inline-formula><mml:math id="M53" display="inline"><mml:msup><mml:mi/><mml:mo>∘</mml:mo></mml:msup></mml:math></inline-formula>C) of the COP_LR simulation for 28 August 2005, 00:00 UTC
<bold>(a–c)</bold> and 29 August 2005, 00:00 UTC <bold>(d–f)</bold>. Streamlines
are calculated only from the eye of the hurricane. In this case, red- and
yellow-colored forward streamlines represent cloud liquid water content
(kg kg<inline-formula><mml:math id="M54" display="inline"><mml:msup><mml:mi/><mml:mrow><mml:mo>-</mml:mo><mml:mn mathvariant="normal">1</mml:mn></mml:mrow></mml:msup></mml:math></inline-formula>), and blue-colored backward streamlines indicate wind speed
(m s<inline-formula><mml:math id="M55" display="inline"><mml:msup><mml:mi/><mml:mrow><mml:mo>-</mml:mo><mml:mn mathvariant="normal">1</mml:mn></mml:mrow></mml:msup></mml:math></inline-formula>). The solid yellow line represents the best track of Hurricane
Katrina, which is extracted from the HURDAT2 database. The larger versions of
figures are also given in the Supplement.</p></caption>
          <?xmltex \igopts{width=341.433071pt}?><graphic xlink:href="https://gmd.copernicus.org/articles/12/233/2019/gmd-12-233-2019-f13.jpg"/>

        </fig>

      <?xmltex \floatpos{t}?><fig id="Ch1.F14" specific-use="star"><caption><p id="d1e1775">Same as Fig. <xref ref-type="fig" rid="Ch1.F13"/> but for the COP_HR simulation. The larger
versions of figures are also given in the Supplement. The comparison of low- and
high-resolution model results is shown in the supplemental video.</p></caption>
          <?xmltex \igopts{width=341.433071pt}?><graphic xlink:href="https://gmd.copernicus.org/articles/12/233/2019/gmd-12-233-2019-f14.jpg"/>

        </fig>

      <p id="d1e1786">Also, the analysis of cloud liquid water content shows that low and
intermediate levels of the hurricane have higher water content and spatial
distribution of precipitation is better represented in the high-resolution case
(Fig. <xref ref-type="fig" rid="Ch1.F14"/>a–b and d–e), which is consistent with the previous
modeling study of <xref ref-type="bibr" rid="bib1.bibx51" id="text.50"/>.</p>
      <p id="d1e1794">It is also seen that the realistic principal and secondary precipitation
bands around the eye of the hurricane are more apparent and well structured
in the high-resolution simulation, while the low-resolution case does not show
those small-scale features (Fig. <xref ref-type="fig" rid="Ch1.F13"/>a–b and d–e). On the ocean
side, the Loop Current – a warm ocean current that flows northward
between Cuba and the Yucatan Peninsula into the Gulf of
Mexico and loops east and south before exiting through the Florida
Straits to join the Gulf Stream – is well defined by the ocean model
component in both cases (Figs. <xref ref-type="fig" rid="Ch1.F13"/>c and f; <xref ref-type="fig" rid="Ch1.F14"/>c and f). The
track of the hurricane is also compared with the HURDAT2 second-generation
North Atlantic (NATL) hurricane database, which is the longest and most
complete record of tropical cyclone (TC) activity in any of the world's
oceans <xref ref-type="bibr" rid="bib1.bibx28" id="paren.51"/>. In this case, the eye of the hurricane is
extracted as a region with surface pressure anomaly greater than
15 mbar (shown as a circular region near the best track). As can be
seen from the figures, Katrina moves over the central Gulf, a region
mainly associated with the Loop Current and persistent warm and cold eddies,
and intensifies<?pagebreak page252?> as it passes over this region due to the high ocean heat
content in both simulations (Figs. <xref ref-type="fig" rid="Ch1.F13"/>c and f and <xref ref-type="fig" rid="Ch1.F14"/>c and
f). The comparison of the low- and high-resolution simulations also indicates
that the diameter of hurricane-force winds at peak intensity is bigger in the
high-resolution simulation case on 29 August 2005, 00:00 UTC
(Figs. <xref ref-type="fig" rid="Ch1.F13"/>f and <xref ref-type="fig" rid="Ch1.F14"/>f). An animation that shows the
comparison of low- and high-resolution model results can be found in the
supplemental video <xref ref-type="bibr" rid="bib1.bibx54" id="paren.52"/>.</p>
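      <p>For illustration, the eye extraction used for the track comparison can be expressed with a few lines of <italic>paraview.simple</italic> code; this is a sketch in which the producer name <italic>atm2d</italic> and the array name <italic>psa</italic> are hypothetical placeholders for the two-dimensional atmosphere channel and the surface pressure anomaly field.</p>
      <preformat># Sketch (paraview.simple): extracting the hurricane eye as the region
# where the surface pressure anomaly exceeds 15 mbar. The producer
# `atm2d` and the array name 'psa' are hypothetical placeholders for
# the 2-D atmosphere channel and the pressure anomaly field.
from paraview.simple import Threshold

eye = Threshold(Input=atm2d)
eye.Scalars = ['POINTS', 'psa']       # surface pressure anomaly (mbar)
eye.ThresholdRange = [15.0, 1.0e9]    # keep values above 15 mbar</preformat>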
      <p id="d1e1819">While the main aim of this paper is to give design details of the new in situ
visualization integrated modeling system and show its capability, the
performance of the coupled modeling system to represent one of the most
destructive hurricanes is very satisfactory especially for the high-resolution
case (COP_HR). Nonetheless, the individual components (atmosphere and ocean)
of the modeling system can be tuned to have better agreement with the
available observations and previous studies. Specifically for the analysis of
the hurricane, a better storm-tracking algorithm needs to be implemented
using ParaView the <italic>ProgrammableFilter</italic> by porting existing legacy
Fortran codes for more accurate storm tracking in both live and co-processing
modes.</p>
</sec>
</sec>
<sec id="Ch1.S6">
  <title>Discussion of the concepts associated with interoperability, portability and reproducibility</title>
      <?pagebreak page255?><p id="d1e1832">In the current design of the RegESM modeling system, the NUOPC cap of the
co-processing component is designed to work with regional modeling
applications that have specific horizontal grid (or mesh) types such as
rectilinear and curvilinear grids. The newly introduced co-processing
interface (NUOPC cap and adaptor code) now needs to be generalized to be
compatible with other regional and global modeling systems coupled through
ESMF and the NUOPC layer. Specifically, the following issues need to be addressed
to achieve better interoperability with existing modeling systems and model
components: (1) redesigning the NUOPC cap of the co-processing component to
support various global and regional mesh types such as cubed-sphere and
unstructured Voronoi meshes, (2) extending the adaptor code to represent mesh
and exchange fields provided by the NUOPC cap using VTK and ParaView/Catalyst
APIs, (3) adding support to the co-processing interface for models with online
nesting capability and (4) adding support to have common horizontal grid
definitions in the co-processing component and in the other components to
make integrated analysis of data (i.e., calculating air–sea temperature
difference and correlation) produced by the various model components.
Moreover, the co-processing interface can be tightly integrated with the
NUOPC layer to provide a simplified API for designing new in situ
visualization integrated modeling systems in an efficient and standardized
way. Besides the configuration used in this study, the RegESM modeling system
is also tested with different model configurations such as coupling RegCM,
MITgcm and the co-processing component to investigate air–sea interaction in the
Black Sea basin. Initial results show that the co-processing component can
also successfully process data flowing from different model configurations
supported by RegESM.</p>
      <p id="d1e1835">When the diverse nature of high-performance computing systems, their hardware
infrastructure (i.e., performance networks and storage systems) and software
stacks (i.e., operating systems, compilers, libraries for internode
communication and their different versions) is considered, realizing a fully
portable modeling system is becoming increasingly crucial for the scientific
community. In this case, the detailed examination of possible configurations
of the modeling system and existing computing
environments can help to improve the flexibility and portability of the
developed modeling system. Specifically for the RegESM modeling system, the
use case application and benchmark simulations revealed that the single
executable approach (combining all model components into one program) used in
the design of the modeling system can cause a portability problem when
visualization and simulation are run on concurrent resources. In the case of
a homogeneous computing environment (all nodes with or without GPU support),
the in situ enabled modeling system runs without any particular problem
because each MPI process has access to the same software and hardware
resources. In contrast, some computing systems may  not have homogeneous underlying hardware and software stacks
(e.g., mixed servers with and without GPU support). As a result, the
simulation with in situ visualization would fail due to missing shared
software libraries in the underlying GPU. In this case, two approaches can be
used to overcome the problem: (1) installation of required libraries on the
entire system even on servers that do not have GPU support and
(2) restructuring the modeling system to support two executables, one for the
co-processing component and one for the physical model component. The second
approach is considered a more generic and flexible solution and enhances the
portability of the modeling system. It also allows implementing a loosely
coupled in situ visualization system and enables the use of specialized
hardware (GPU and more memory) for rendering <xref ref-type="bibr" rid="bib1.bibx44" id="paren.53"/>. The main
drawback of the loosely coupled in situ visualization approach is that it
requires transferring data over the network. As a result, the network
performance can be a bottleneck for the modeling system, especially for
high-resolution multicomponent modeling applications.</p>
      <p id="d1e1841">When the complexity of regional and global ESMs is considered, developing
a fully reproducible and portable modeling system is a challenging task and
requires significant human interaction to keep track of detailed metadata and
provenance information about the model, simulation and computing environment
(in both software and hardware levels). The use of scientific workflows in
Earth system science has demonstrated advantages in terms of metadata,
provenance, error handling and reproducibility in an automatized and
standardized way <xref ref-type="bibr" rid="bib1.bibx57 bib1.bibx58 bib1.bibx52" id="paren.54"/>.
Additionally, the rapid development in the software container technology can
help to design flexible and portable computing environments. Hence, the
Docker container was implemented to examine the feasibility of using the
container approach for our newly developed in situ visualization integrated
modeling system. A container is a standard unit of software that helps to
create a software package including all its dependencies, which can then be
ported from one computing environment to another without worrying about the
underlying hardware infrastructure and software stack. It also enhances the
numerical reproducibility of simulations by creating a standardized computing
environment isolated from any dependencies. In this study, Docker is
selected as the container environment because it is widely adopted across the
software industry and has a very active user community. Despite the
flexibility and easy-to-use nature of Docker containers, using specialized
hardware such as NVIDIA GPUs, which require kernel modules and user-level
libraries to operate, is not supported natively. Therefore, the Docker container
cannot access the underlying GPU resource to perform hardware-level rendering
for visualization and data analysis. To enable portable GPU-based containers,
NVIDIA developed a specialized container runtime (NVIDIA Docker) that mounts the
host GPU driver into the container at launch. As a part of this study, the newly
developed RegESM modeling system was tested with both Docker (software rendering
through the use of the Mesa library) and NVIDIA Docker (hardware-based rendering). The initial results
show that RegESM can take advantage of the container approach to create a
portable and reproducible modeling system in both in situ and co-processing
modes without considerable performance loss
(<inline-formula><mml:math id="M56" display="inline"><mml:mrow><mml:mo>∼</mml:mo><mml:mn mathvariant="normal">5</mml:mn></mml:mrow></mml:math></inline-formula> %–10 %). The added value of using the NVIDIA Docker is that it enables
utilizing the underlying GPU resource to perform rendering (e.g., the
representation of clouds using a direct volume rendering method). More
information about the Docker container for the in situ visualization enabled
modeling system can be found in the dedicated GitHub repository (see the code
availability section).</p>
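<p>As a purely hypothetical illustration of the two container execution modes, the sketch below launches the containerized system with and without GPU support; the image tag, bind path and executable name are placeholders, not taken from the RegESM repository:</p>
<preformat>
# Illustrative launcher for the containerized modeling system; "regesm:1.1",
# the bind path and the executable name are hypothetical placeholders.
import subprocess

IMAGE = "regesm:1.1"      # hypothetical image tag
CASE = "/data/katrina"    # hypothetical host directory with the experiment

def launch(use_gpu):
    # With NVIDIA Docker the runtime mounts the host GPU driver into the
    # container, so Catalyst can render on the GPU; plain Docker falls back
    # to Mesa-based software rendering inside the image.
    docker = "nvidia-docker" if use_gpu else "docker"
    cmd = [docker, "run", "--rm", "-v", CASE + ":/run", IMAGE,
           "mpirun", "-np", "8", "regesm.x"]
    subprocess.run(cmd, check=True)

launch(use_gpu=True)  # hardware-based rendering
</preformat>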
</sec>
<sec id="Ch1.S7" sec-type="conclusions">
  <title>Summary and conclusions</title>
      <p id="d1e1863">In this study, the newly developed in situ visualization integrated modeling
system (RegESM 1.1) is used to demonstrate the feasibility and added value of
the integrated modeling environment to analyze the high volume of data coming
from a multicomponent ESM in an integrated way, which was not previously
possible. In this case, ParaView/Catalyst is used as a co-processing component
to process and render data. The results of the selected use case (evolution
of<?pagebreak page256?> Hurricane Katrina) show that the co-processing component provides an
easy-to-use and generic modeling and data analysis environment, which is
independent of the underlying physical model components used. Moreover, it
promotes the use of the co-processing capability with existing ESMs coupled
using ESMF and the NUOPC layer without significant code restructuring
and development, and it helps to increase the interoperability between ESMs and
the ParaView co-processing plugin (Catalyst). In the current implementation, the
prototype version of the adaptor code acts as an abstraction layer to
simplify and standardize the regular tasks to integrate the simulation code
with an in situ visualization and analysis environment. The driver is also
responsible for redistributing the data to the co-processing component while
preserving its numerical grid, with additional support for vertical
interpolation. Coupling of the co-processing component with the generic
driver facilitates the definition of custom data-processing pipelines
(defined by Python scripts) and allows analysis of data originating from
different components (e.g., the atmosphere and ocean models) of the RegESM
modeling system at very high temporal resolution. In this way, the RegESM
modeling system can be used to study various physical processes (e.g.,
extreme precipitation events, air–sea interaction, convection and
turbulence) that could not be analyzed with traditional postprocessing
approaches.</p>
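<p>For readers unfamiliar with such pipeline scripts, the following minimal sketch shows the general structure of a legacy ParaView/Catalyst Python pipeline of the kind the co-processing component loads; the channel name "atm_input" and the field "QV" are illustrative assumptions, not the names defined by the RegESM adaptor:</p>
<preformat>
# Minimal legacy Catalyst pipeline script (ParaView 5.x style); channel and
# field names are illustrative assumptions.
from paraview.simple import *
from paraview import coprocessing

def CreateCoProcessor():
    def _CreatePipeline(coprocessor, datadescription):
        class Pipeline:
            # Producer wraps the grid handed over by the adaptor at runtime.
            atm = coprocessor.CreateProducer(datadescription, "atm_input")
            # Example filter: isosurface of a 3-D field exposed by the model.
            iso = Contour(Input=atm, ContourBy=["POINTS", "QV"],
                          Isosurfaces=[0.01])
        return Pipeline()

    class CoProcessor(coprocessing.CoProcessor):
        def CreatePipeline(self, datadescription):
            self.Pipeline = _CreatePipeline(self, datadescription)

    coprocessor = CoProcessor()
    # Run the pipeline at every coupling step for this input channel.
    coprocessor.SetUpdateFrequencies({"atm_input": [1]})
    return coprocessor

coprocessor = CreateCoProcessor()
coprocessor.EnableLiveVisualization(False)

def RequestDataDescription(datadescription):
    # Catalyst asks which meshes/fields are needed at this time step.
    coprocessor.LoadRequestedData(datadescription)

def DoCoProcessing(datadescription):
    # Catalyst delivers fresh simulation data; update and write products.
    coprocessor.UpdateProducers(datadescription)
    coprocessor.WriteData(datadescription)
    coprocessor.WriteImages(datadescription)
</preformat>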
      <p id="d1e1866">While the results of the in situ visualization integrated modeling system are
encouraging, the co-processing component will be extended to support
different regional and global computational grid representations supported by
the ESMF library, such as unstructured meshes, to provide a generic adaptor for
various model applications. Additionally, we are currently exploring (1) ways
to optimize the transfer of grid features and the mapping of exchange fields
to enhance the overall performance of the modeling environment in terms of
memory usage and computational efficiency especially for very high-resolution
applications (<inline-formula><mml:math id="M57" display="inline"><mml:mrow><mml:mo>&lt;</mml:mo><mml:mn mathvariant="normal">3</mml:mn></mml:mrow></mml:math></inline-formula> km); (2) the possibility of automatic detection of
accelerators (GPUs) through the use of the driver component and assigning
available GPU resources automatically to the co-processing component for
rendering (a sketch of the detection step follows this paragraph); (3) improving the modeling system and co-processing component to allow
nested applications (both atmosphere and ocean); and (4) developing more
applications of the integrated modeling environment (possibly with other ocean
and atmosphere components such as WRF and MITgcm) to analyze different
physical processes such as air–sea interactions in upwelling regions under
extreme atmospheric forcing conditions.</p>
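<p>To illustrate how the automatic accelerator detection in item (2) might look, the following hypothetical Python sketch probes a node for GPUs with the standard nvidia-smi tool; it is only an illustration of the detection step, not part of the RegESM driver, which is written in Fortran:</p>
<preformat>
# Hypothetical sketch of item (2): probe a node for GPUs so the driver could
# place co-processing ranks on GPU-equipped servers; layout is illustrative.
import subprocess

def count_gpus():
    # "nvidia-smi -L" prints one line per GPU; a missing binary or a nonzero
    # exit status is treated as "no GPU on this node".
    try:
        out = subprocess.run(["nvidia-smi", "-L"], capture_output=True,
                             text=True, check=True).stdout
        return len([line for line in out.splitlines() if line.strip()])
    except (OSError, subprocess.CalledProcessError):
        return 0

print("GPUs available for rendering on this node:", count_gpus())
</preformat>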
</sec>

      
      </body>
    <back><notes notes-type="codeavailability">

      <p id="d1e1883">The RegESM modeling system is open source and available
under the MIT license, making it suitable for community usage. The license
allows modification, distribution, and private and commercial use. The source
code for all versions of the RegESM driver including 1.1 is distributed through
the public code repository hosted by GitHub
(<uri>https://github.com/uturuncoglu/RegESM</uri>, last access: 7 July 2018). The user guide and detailed information about the modeling
system are also distributed along
with the source code in the same code repository. The RegESM source code includes the required code
patches for the individual model components to be used as components in the modeling system. On the other hand, the
source code of the individual model components, such as the ocean, wave and river-routing components and
the co-processing tool (ParaView/Catalyst), is distributed mainly by their home institutions,
which may apply different license types. Readers who want more information about the
individual model components and their licenses can refer to the components' websites. The release version 1.1 is permanently
archived on Zenodo and accessible under the digital object identifier <ext-link xlink:href="https://doi.org/10.5281/zenodo.1307212" ext-link-type="DOI">10.5281/zenodo.1307212</ext-link> <xref ref-type="bibr" rid="bib1.bibx60" id="paren.55"/>.
The demo configuration of the modeling system that was used at the NVIDIA GPU Technology Conference (GTC) 2018
is also permanently archived on Zenodo and accessible under the digital object
identifier <ext-link xlink:href="https://doi.org/10.5281/zenodo.1474753" ext-link-type="DOI">10.5281/zenodo.1474753</ext-link> <xref ref-type="bibr" rid="bib1.bibx55" id="paren.56"/>. The repository also includes detailed information about the installation of
the individual components of the modeling system, third-party libraries and commands to create the Docker container.</p>
  </notes><notes notes-type="videosupplement">

      <p id="d1e1904">The comparison of vorticity streamline of low- and high-resolution
fully coupled RegESM simulations for Hurricane Katrina is
illustrated by a video, which is available online at <ext-link xlink:href="https://doi.org/10.5446/37219" ext-link-type="DOI">10.5446/37219</ext-link>. In
addition, the evolution of Hurricane Katrina indicating atmosphere and
ocean states and their interaction is shown by a video, which is available
online at <ext-link xlink:href="https://doi.org/10.5446/37227" ext-link-type="DOI">10.5446/37227</ext-link>.</p>
  </notes><app-group>
        <supplementary-material position="anchor"><p id="d1e1913">The supplement related to this article is available online at: <inline-supplementary-material xlink:href="https://doi.org/10.5194/gmd-12-233-2019-supplement" xlink:title="pdf">https://doi.org/10.5194/gmd-12-233-2019-supplement</inline-supplementary-material>.</p></supplementary-material>
        </app-group><notes notes-type="competinginterests">

      <p id="d1e1922">The author declares that there is no conflict of
interest.</p>
  </notes><ack><title>Acknowledgements</title><p id="d1e1928">This study has been supported by a research grant (116Y136) provided by The
Scientific and Technological Research Council of Turkey (TUBITAK). The
computing resources used in this work were provided by the National Center
for High Performance Computing of Turkey (UHeM) under grant nos. 5003082013
and 5004782017. The Quadro K5200 used for the development of the prototype
version of the modeling system was donated by the NVIDIA Corporation as part
of the Hardware Donation Program. The author extends his grateful thanks to
Rocky Dunlap and Robert Oehmke from NOAA/ESRL and CIRES (Boulder, Colorado),
Gerhard Theurich from Science Applications International Corporation (McLean,
Virginia), Andy C. Bauer from Kitware Inc., USA, and Mahendra Roopa from
NVIDIA for their very useful suggestions and
comments.<?xmltex \hack{\newline}?><?xmltex \hack{\newline}?>Edited by: Sophie Valcke
<?xmltex \hack{\newline}?> Reviewed by: Carsten Lemmen and Rocky Dunlap</p></ack><ref-list>
    <title>References</title>

      <ref id="bib1.bibx1"><label>Ahern(2012)</label><mixed-citation>
Ahern, S.: High Performance Visualization: Enabling Extreme-Scale Scientific
Insight, The Path to Exascale, CRC Press/Taylor &amp; Francis Group, 331–353,
2012.</mixed-citation></ref>
      <ref id="bib1.bibx2"><label>Ahrens(2015)</label><mixed-citation>
Ahrens, J.: Increasing scientific data insights about exascale class
simulations under power and storage constraints, IEEE Comput. Graph., 35,
8–11, 2015.</mixed-citation></ref>
      <ref id="bib1.bibx3"><label>Ahrens et al.(2005)</label><mixed-citation>
Ahrens, J., Geveci, B., and Law, C.: ParaView: An End-User Tool for Large
Data Visualization, Visualization Handbook, Elsevier, 2005.</mixed-citation></ref>
      <ref id="bib1.bibx4"><label>Ahrens et al.(2014)</label><mixed-citation>
Ahrens, J., Jourdain, S., O'Leary, P., Patchett, J., Rogers, D. H., Fasel,
P., Bauer, A., Petersen, M., and Samsel, F.: In Situ MPAS-Ocean Image-Based
Visualization. Proceedings of the International Conference for High
Performance Computing, Networking, Storage and Analysis SC14, 16–21
November 2014, New Orleans, LA, USA, 2014.</mixed-citation></ref>
      <ref id="bib1.bibx5"><label>Alexander and Easterbrook(2015)</label><mixed-citation>Alexander, K. and Easterbrook, S. M.: The software architecture of climate
models: a graphical comparison of CMIP5 and EMICAR5 configurations, Geosci.
Model Dev., 8, 1221–1232, <ext-link xlink:href="https://doi.org/10.5194/gmd-8-1221-2015" ext-link-type="DOI">10.5194/gmd-8-1221-2015</ext-link>, 2015.</mixed-citation></ref>
      <ref id="bib1.bibx6"><label>Amante and Eakins(2009)</label><mixed-citation>
Amante, C. and Eakins, B. W.: ETOPO1 1 Arc-Minute Global Relief Model:
Procedures, Data Sources and Analysis, NOAA Technical Memorandum NESDIS
NGDC-24, 19 pp., 2009.</mixed-citation></ref>
      <ref id="bib1.bibx7"><label>Artale et al.(2010)</label><mixed-citation>
Artale, V., Calmanti, S., Carillo, A., Dell'Aquila, A., Hermann, M.,
Pisacane, G., Ruti, P. M., Sannino, G., Struglia, M. V., Giorgi, F., Bi, X.,
Pal, J. S., and Rauscher, S.: An atmosphere ocean regional climate model for
the mediterranean area: assessment of a present climate simulation, Clim.
Dynam., 35, 721–740, 2010.</mixed-citation></ref>
      <ref id="bib1.bibx8"><label>Ashby et al.(2010)</label><mixed-citation>
Ashby, S., Beckman, P., Chen, J., Colella, P., Collins, B., Crawford, D.,
Dongarra, J., Kothe, D., Lusk, R., Messina, P., Mezzacappa, T., Moin, P.,
Norman, M., Rosner, R., Sarkar, V., Siegel, A., Streitz, F., White, A., and
Wright, M.: The opportunities and challenges of exascale computing: summary
report of the advanced scientific computing advisory committee (ASCAC)
subcommittee at the US Department of Energy Office of Science, 71 pp., 2010.</mixed-citation></ref>
      <ref id="bib1.bibx9"><label>Ayachit(2015)</label><mixed-citation>
Ayachit, U.: The ParaView Guide: A Parallel Visualization Application, Kitware, ISBN 978-1930934306, 2015.</mixed-citation></ref>
      <ref id="bib1.bibx10"><label>Bruggeman and Bolding(2014)</label><mixed-citation>
Bruggeman, J. and Bolding, K.: A general framework for aquatic biogeochemical
models, Environ. Modell. Softw., 61, 249–265, 2014.</mixed-citation></ref>
      <ref id="bib1.bibx11"><label>Childs et al.(2012)</label><mixed-citation>
Childs, H., Brugger, E., Whitlock, B., Meredith, J., Ahern, S., Pugmire, D.,
Biagas, K., Miller, M., Harrison, C., Weber, G. H., Krishnan, H., Fogal, T.,
Sanderson, A., Garth, C., Bethel, E. W., Camp, D., Rubel, O., Durant, M.,
Favre, J. M., and Navrátil, P.: VisIt: An End-User Tool For Visualizing and
Analyzing Very Large Data. High Performance Visualization-Enabling
Extreme-Scale Scientific Insight, Chapman and Hall/CRC, 520 pp., 2012.</mixed-citation></ref>
      <ref id="bib1.bibx12"><label>Craig et al.(2017)</label><mixed-citation>Craig, A., Valcke, S., and Coquart, L.: Development and performance of a new version of the OASIS coupler, OASIS3-MCT_3.0, Geosci. Model Dev., 10, 3297–3308, <ext-link xlink:href="https://doi.org/10.5194/gmd-10-3297-2017" ext-link-type="DOI">10.5194/gmd-10-3297-2017</ext-link>, 2017.</mixed-citation></ref>
      <ref id="bib1.bibx13"><label>Dee et al.(2011)</label><mixed-citation>
Dee, D. P., Uppala, S. M., Simmons, A. J., Berrisford, P., Poli, P.,
Kobayashi, S., Andrae, U., Balmaseda, M. A., Balsamo, G., Bauer, P.,
Bechtold, P., Beljaars, A. C. M., van de Berg, L., Bid- lot, J., Bormann, N.,
Delsol, C., Dragani, R., Fuentes, M., Geer, A. J., Haimberger, L., Healy, S.
B., Hersbach, H., Holm, E. V., Isaksen, L., Kallberg, P., Kohler, M.,
Matricardi, M., McNally, A. P., Monge-Sanz, B. M., Morcrette, J.-J., Park,
B.-K., Peubey, C., de Rosnay, P., Tavolato, C., Thepaut, J.-N., and Vitart,
F.: The ERA-Interim reanalysis: configuration and performance of the data
assimilation system, Q. J. Roy. Meteor. Soc., 137, 553–597, 2011.</mixed-citation></ref>
      <ref id="bib1.bibx14"><label>Dickinson et al.(1989)</label><mixed-citation>
Dickinson, R. E., Errico, R. M., Giorgi, F., and Bates, G. T.: A regional
climate model for the western United States, Climatic Change, 15, 383–422,
1989.</mixed-citation></ref>
      <ref id="bib1.bibx15"><label>Dominicis et al.(2014)</label><mixed-citation>
Dominicis, M. D., Falchetti, S., Trotta, F., Pinardi, N., Giacomelli, L.,
Napolitano, E., Fazioli, L., Sorgente, R., Haley, P., Lermusiaux, P.,
Martins, F., and Cocco, M.: A relocatable ocean model in support of
environmental emergencies. The Costa Concordia emergency case, Ocean Dynam.,
64, 667–688, 2014.</mixed-citation></ref>
      <ref id="bib1.bibx16"><label>Emanuel(1991)</label><mixed-citation>
Emanuel, K. A.: A scheme for representing cumulus convection in large-scale
models, J. Atmos. Sci., 48, 2313–2335, 1991.</mixed-citation></ref>
      <ref id="bib1.bibx17"><label>Emanuel and Zivkovic-Rothman(1999)</label><mixed-citation>
Emanuel, K. A. and Zivkovic-Rothman, M.: Development and evaluation of a
convection scheme for use in climate models, J. Atmos. Sci., 56, 1766–1782,
1999.</mixed-citation></ref>
      <ref id="bib1.bibx18"><label>Fabian et al.(2011)</label><mixed-citation>
Fabian, N., Moreland, K., Thompson, D., Bauer, A., Marion, P., Geveci, B.,
Rasquin, M., and Jansen, K.: The paraview coprocessing library: A scalable,
general purpose in situ visualization library, in: Large Data Analysis and
Visualization (LDAV), 2011 IEEE Symposium, 89–96, October 2011.</mixed-citation></ref>
      <ref id="bib1.bibx19"><label>Giorgi et al.(2012)</label><mixed-citation>
Giorgi, F., Coppola, E., Solmon, F., Mariotti, L., Sylla, M. B., Bi, X.,
Elguindi, N., Diro, G. T., Nair, V., Giuliani, G., Turuncoglu, U. U.,
Cozzini, S., Guttler, I., O'Brien, T. A., Tawfik, A. B., Shalaby, A., Zakey,
A. S., Steiner, A. L., Stordal, F., Sloan, L. C., and Brankovic, C.: RegCM4:
model description and preliminary tests over multiple CORDEX domains, Clim.
Res., 52, 7–29, 2012.</mixed-citation></ref>
      <ref id="bib1.bibx20"><label>Grell(1995)</label><mixed-citation>
Grell, G., Dudhia, J., and Stauffer, D. R.: A description of the
fifth-generation Penn State/NCAR mesoscale model (MM5), Technical Note
NCAR/TN-398+STR, NCAR, 117 pp., 1995.</mixed-citation></ref>
      <ref id="bib1.bibx21"><label>Hagemann and Dumenil(1998)</label><mixed-citation>
Hagemann, S. and Dumenil, L.: A parameterization of the lateral waterflow for
the global scale, Clim. Dynam., 14, 17–31, 1998.</mixed-citation></ref>
      <ref id="bib1.bibx22"><label>Hagemann and Lydia(2001)</label><mixed-citation>
Hagemann, S. and Lydia, D. G.: Validation of the hydrological cycle of ECMWF
and NCEP reanalyses using the MPI hydrological discharge model, J. Geophys.
Res., 106, 1503–1510, 2001.</mixed-citation></ref>
      <ref id="bib1.bibx23"><label>Haidvogel et al.(2008)</label><mixed-citation>
Haidvogel, D. B., Arango, H. G., Budgell, W. P., Cornuelle, B. D.,
Curchitser, E., DiLorenzo, E., Fennel, K., Geyer, W. R., Hermann, A. J.,
Lanerolle, L., Levin, J., McWilliams, J. C., Miller, A. J., Moore, A. M.,
Powell, T. M., Shchepetkin, A. F., Sherwood, C. R., Signell, R. P., Warner,
J. C., and Wilkin, J.: Ocean forecasting in terrain-following coordinates:
formulation and skill assessment of the Regional Ocean Modeling System, J.
Comput. Phys., 227, 3595–3624, 2008.</mixed-citation></ref>
      <ref id="bib1.bibx24"><label>Hill et al.(2004)</label><mixed-citation>
Hill, C., DeLuca, C., Balaji, V., Suarez, M., and Da Silva, A.: The
architecture of the Earth System Modeling Framework, Comput. Sci. Eng., 6,
18–28, 2004.</mixed-citation></ref>
      <ref id="bib1.bibx25"><label>Hostetler et al.(1993)</label><mixed-citation>
Hostetler, S. W., Bates, G. T., and Giorgi, F.: Interactive Coupling of a
Lake Thermal Model with a Regional Climate Model, J. Geophys. Res., 98,
5045–5057, 1993.</mixed-citation></ref>
      <ref id="bib1.bibx26"><label>Jacob et al.(2005)</label><mixed-citation>
Jacob, R., Larson, J., and Ong, E.: M X N Communication and Parallel
Interpolation in Community Climate System Model Version 3, Int. J. High
Perform. C., 19, 293–307, 2005.</mixed-citation></ref>
      <ref id="bib1.bibx27"><label>Kara et al.(2007)</label><mixed-citation>
Kara, B. A., Wallcraft, A. J., and Hurlburt, H. E.: A Correction for Land
Contamination of Atmospheric Variables near Land–Sea Boundaries, J. Phys.
Oceanogr., 37, 803–818, 2007.</mixed-citation></ref>
      <ref id="bib1.bibx28"><label>Landsea and Franklin(2013)</label><mixed-citation>
Landsea, C. W. and Franklin, J. L.: Atlantic Hurricane Database Uncertainty
and Presentation of a New Database Format, Mon. Weather Rev., 141,
3576–3592, 2013.</mixed-citation></ref>
      <?pagebreak page258?><ref id="bib1.bibx29"><label>Larson et al.(2005)</label><mixed-citation>
Larson, J., Jacob, R., and Ong, E.: The Model Coupling Toolkit: a new
Fortran90 toolkit for building multiphysics parallel coupled models, Int. J.
High Perform. C., 19, 277–292, 2005.</mixed-citation></ref>
      <ref id="bib1.bibx30"><label>Lemmen et al.(2018)</label><mixed-citation>Lemmen, C., Hofmeister, R., Klingbeil, K., Nasermoaddeli, M. H., Kerimoglu,
O., Burchard, H., Kösters, F., and Wirtz, K. W.: Modular System for Shelves
and Coasts (MOSSCO v1.0) – a flexible and multi-component framework for
coupled coastal ocean ecosystem modelling, Geosci. Model Dev., 11, 915–935,
<ext-link xlink:href="https://doi.org/10.5194/gmd-11-915-2018" ext-link-type="DOI">10.5194/gmd-11-915-2018</ext-link>, 2018.</mixed-citation></ref>
      <ref id="bib1.bibx31"><label>Malakar et al.(2012)</label><mixed-citation>
Malakar, P., Natarajan, V., and Vadhiyar, S.: Integrated parallelization of
computations and visualization for large-scale applications, IEEE 26th
International Parallel and Distributed Processing Symposium Workshops PhD
Forum (IPDPSW), Shanghai, China, 21–25 May 2012.</mixed-citation></ref>
      <ref id="bib1.bibx32"><label>Marshall et al.(1997a)</label><mixed-citation>
Marshall, J., Adcroft, A., Hill, C., Perelman, L., and Heisey, C.: A
finite-volume, incompressible Navier–Stokes model for studies of the ocean
on parallel computers, J. Geophys. Res.-Oceans, 102, 5753–5766, 1997a.</mixed-citation></ref>
      <ref id="bib1.bibx33"><label>Marshall et al.(1997b)</label><mixed-citation>
Marshall, J., Hill, C., Perelman, L., and Adcroft, A.: Hydrostatic,
quasi-hydrostatic, and non-hydrostatic ocean modeling, J. Geophys.
Res.-Oceans, 102, 5733–5752, 1997b.</mixed-citation></ref>
      <ref id="bib1.bibx34"><label>McTaggart-Cowan et al.(2007a)</label><mixed-citation>
McTaggart-Cowan, R., Bosart, L. F., Gyakum, J. R., and Atallah, E. H.:
Hurricane Katrina (2005), Part I: Complex Life Cycle of an Intense Tropical
Cyclone, Mon. Weather Rev., 135, 3905–3926, 2007a.</mixed-citation></ref>
      <ref id="bib1.bibx35"><label>McTaggart-Cowan et al.(2007b)</label><mixed-citation>
McTaggart-Cowan, R., Bosart, L. F., Gyakum, J. R., and Atallah, E. H.:
Hurricane Katrina (2005). Part II: Evolution and Hemispheric Impacts of a
Diabatically Generated Warm Pool, Mon. Weather Rev., 135, 3927–3949, 2007b.</mixed-citation></ref>
      <ref id="bib1.bibx36"><label>Mellor and Yamada(1982)</label><mixed-citation>
Mellor, G. L. and Yamada, T.: Development of a turbulence closure model for
geophysical fluid problems, Rev. Geophys., 20, 851–875, 1982.</mixed-citation></ref>
      <ref id="bib1.bibx37"><label>Monbaliu et al.(2000)</label><mixed-citation>
Monbaliu, J., Hargreaves, R., Albiach, J., Luo, W., Sclavo, M., and Gunther,
H.: The spectral wave model, WAM, adapted for applications with high
spatial resolution, Coast. Eng., 41, 41–62, 2000.</mixed-citation></ref>
      <ref id="bib1.bibx38"><label>Moreland(2013)</label><mixed-citation>Moreland, K.: A Survey of Visualization Pipelines, IEEE T. Vis. Comput. Gr.,
19, <ext-link xlink:href="https://doi.org/10.1109/TVCG.2012.133" ext-link-type="DOI">10.1109/TVCG.2012.133</ext-link>, 2013.</mixed-citation></ref>
      <ref id="bib1.bibx39"><label>Moreland(2016)</label><mixed-citation>Moreland, K., Sewell, C., Usher, W., Lo, L.-T., Meredith, J., Pugmire, D.,
Kress, J., Schroots, H., Ma, K.-L., Childs, H., Larsen, M., Chen, C.-M.,
Maynard, R., and Geveci, B.: VTK-m: Accelerating the Visualization Toolkit
for Massively Threaded Architectures, IEEE Comput. Graph., 36,
<ext-link xlink:href="https://doi.org/10.1109/MCG.2016.48" ext-link-type="DOI">10.1109/MCG.2016.48</ext-link>, 2016.</mixed-citation></ref>
      <ref id="bib1.bibx40"><label>O'Leary et al.(2016)</label><mixed-citation>
O'Leary, P., Ahrens, J., Jourdain, S., Wittenburg, S., Rogers, D. H., and
Petersen, M.: Cinema image-based in situ analysis and visualization of
MPAS-ocean simulations, Lect. Notes Comput. Sc., 55, 43–48, 2016.</mixed-citation></ref>
      <ref id="bib1.bibx41"><label>Pal et al.(2000)</label><mixed-citation>
Pal, J. S., Small, E. E., and Eltahir, E. A. B.: Simulation of regional-scale
water and energy budgets: Representation of subgrid cloud and precipitation
processes within RegCM, J. Geophys. Res.-Atmos., 105, 29579–29594, 2000.</mixed-citation></ref>
      <ref id="bib1.bibx42"><label>Pelupessy et al.(2017)</label><mixed-citation>Pelupessy, I., van Werkhoven, B., van Elteren, A., Viebahn, J., Candy, A.,
Portegies Zwart, S., and Dijkstra, H.: The Oceanographic Multipurpose
Software Environment (OMUSE v1.0), Geosci. Model Dev., 10, 3167–3187,
<ext-link xlink:href="https://doi.org/10.5194/gmd-10-3167-2017" ext-link-type="DOI">10.5194/gmd-10-3167-2017</ext-link>, 2017.</mixed-citation></ref>
      <ref id="bib1.bibx43"><label>Redler et al.(2010)</label><mixed-citation>Redler, R., Valcke, S., and Ritzdorf, H.: OASIS4 – a coupling software for
next generation earth system modelling, Geosci. Model Dev., 3, 87–104,
<ext-link xlink:href="https://doi.org/10.5194/gmd-3-87-2010" ext-link-type="DOI">10.5194/gmd-3-87-2010</ext-link>, 2010.</mixed-citation></ref>
      <ref id="bib1.bibx44"><label>Rivi et al.(2012)</label><mixed-citation>
Rivi, M., Calori, L., Muscianisi, G., and Salvnic, V.: In-situ Visualization:
State-of-the-art and Some Use Cases, Tech. rep., Whitepaper PRACE, 2012.</mixed-citation></ref>
      <ref id="bib1.bibx45"><label>Overeem et al.(2013)</label><mixed-citation>Overeem, I., Berlin, M. M., and Syvitski, J. P. M.: Strategies for integrated
modeling: The community surface dynamics modeling system example, Environ.
Modell. Softw., 39, <ext-link xlink:href="https://doi.org/10.1016/j.envsoft.2012.01.012" ext-link-type="DOI">10.1016/j.envsoft.2012.01.012</ext-link>, 2013.</mixed-citation></ref>
      <ref id="bib1.bibx46"><label>Shchepetkin and McWilliams(2005)</label><mixed-citation>
Shchepetkin, A. F. and McWilliams, J. C.: The regional oceanic modeling system (ROMS): a split-explicit,
free-surface, topography-following-coordinate oceanic model, Ocean Modell., 9, 347–404, 2005.</mixed-citation></ref>
      <ref id="bib1.bibx47"><label>Skamarock et al.(2005)</label><mixed-citation>
Skamarock, W. C., Klemp, J. B., Dudhia, J., Gill, D. O., Barker, D. M., Wang,
W., and Powers, J. G.: A description of the Advanced Research WRF version 2,
NCAR Tech. Note TN-468+STR, 88 pp., 2005.</mixed-citation></ref>
      <ref id="bib1.bibx48"><label>Surenkok and Turuncoglu(2015)</label><mixed-citation>
Surenkok, G. and Turuncoglu, U. U.: Investigating the Role of
Atmosphere–Wave Interaction in the Mediterranean Sea using coupled climate
model (RegESM), Geophys. Res. Abstr., 17, EGU2015-3644, EGU General Assembly
2015, Vienna, Austria, 2015.</mixed-citation></ref>
      <ref id="bib1.bibx49"><label>Tawfik and Steiner(2011)</label><mixed-citation>Tawfik, A. B. and Steiner, A. L.: The role of soil ice in land–atmosphere
coupling over the United States: a soil moisture precipitation winter
feedback mechanism, J. Geophys. Res., 116, D02113,
<ext-link xlink:href="https://doi.org/10.1029/2010JD014333" ext-link-type="DOI">10.1029/2010JD014333</ext-link>, 2011.</mixed-citation></ref>
      <ref id="bib1.bibx50"><label>Theurich et al.(2016)</label><mixed-citation>
Theurich, G., DeLuca, C., Campbell, T., Liu, F., Saint, K., Vertenstein, M.,
Chen, J., Oehmke, R., Doyle, J., Whitcomb, T., Wallcraft, A., Iredell, M.,
Black, T., Da Silva, A. M., Clune, T., Ferraro, R., Li, P., Kelley, M.,
Aleinov, I., Balaji, V., Zadeh, N., Jacob, R., Kirtman, B., Giraldo, F.,
McCarren, D., Sandgathe, S., Peckham, S., and Dunlap, R.: The Earth System
Prediction Suite: Toward a Coordinated U.S. Modeling Capability, B. Am.
Meteorol. Soc., 97, 1229–1247, 2016.</mixed-citation></ref>
      <ref id="bib1.bibx51"><label>Trenberth et al.(2007)</label><mixed-citation>Trenberth, K. E., Davis, C. A., and Fasullo, J.: Water and energy budgets of
hurricanes: Case studies of Ivan and Katrina, J. Geophys. Res., 112, D23106,
<ext-link xlink:href="https://doi.org/10.1029/2006JD008303" ext-link-type="DOI">10.1029/2006JD008303</ext-link>, 2007.</mixed-citation></ref>
      <ref id="bib1.bibx52"><label>Turuncoglu(2012)</label><mixed-citation>
Turuncoglu, U. U.: Tools for Configuring, Building and Running Models (Vol.
5) – Applying Scientific Workflow to ESM (Sec. 2), Earth System Modeling,
SpringerBriefs, in: Earth System Sciences, edited by: Ford, R., 70 pp., 2012.</mixed-citation></ref>
      <ref id="bib1.bibx53"><label>Turuncoglu(2018a)</label><mixed-citation>Turuncoglu, U. U.: Simulation of Hurricane Katrina using in situ
visualization integrated regional earth system model (RegESM), German
National Library of Science and Technology (TIB), AV Portal of TIB Hannover,
<ext-link xlink:href="https://doi.org/10.5446/37227" ext-link-type="DOI">10.5446/37227</ext-link>, 2018a.</mixed-citation></ref>
      <ref id="bib1.bibx54"><label>Turuncoglu(2018b)</label><mixed-citation>Turuncoglu, U. U.: In situ visualisation of Hurricane Katrina. German
National Library of Science and Technology (TIB), AV Portal of TIB Hannover,
<ext-link xlink:href="https://doi.org/10.5446/37219" ext-link-type="DOI">10.5446/37219</ext-link>, 2018b.</mixed-citation></ref>
      <ref id="bib1.bibx55"><label>Turuncoglu(2018c)</label><mixed-citation>Turuncoglu, U. U.: uturuncoglu/GTC2018_demo: Version that is used in GTC
2018 (Version 1.0), Zenodo, <ext-link xlink:href="https://doi.org/10.5281/zenodo.1474753" ext-link-type="DOI">10.5281/zenodo.1474753</ext-link>, 2018c.</mixed-citation></ref>
      <ref id="bib1.bibx56"><label>Turuncoglu and Sannino(2017)</label><mixed-citation>
Turuncoglu, U. U. and Sannino, G.: Validation of newly designed regional
earth system model (RegESM) for Mediterranean Basin, Clim. Dynam., 48,
2919–2947, 2017.</mixed-citation></ref>
      <ref id="bib1.bibx57"><label>Turuncoglu et al.(2011)</label><mixed-citation> Turuncoglu, U. U., Murphy,
S., DeLuca, C., and Dalfes, N.: A scientific workflow environment for earth
system related studies, Comput. Geosci., 37, 943–952, 2011.</mixed-citation></ref>
      <?pagebreak page259?><ref id="bib1.bibx58"><label>Turuncoglu et al.(2012)</label><mixed-citation>
Turuncoglu, U. U., Dalfes, N., Murphy, S., and DeLuca, C.: Towards
self-describing and workflow integrated Earth system models: a coupled
atmosphere-ocean modeling system application, Environ. Modell. Softw., 39,
247–262, 2012.</mixed-citation></ref>
      <ref id="bib1.bibx59"><label>Turuncoglu et al.(2013)</label><mixed-citation>Turuncoglu, U. U., Giuliani, G., Elguindi, N., and Giorgi, F.: Modelling the
Caspian Sea and its catchment area using a coupled regional atmosphere-ocean
model (RegCM4-ROMS): model design and preliminary results, Geosci. Model
Dev., 6, 283–299, <ext-link xlink:href="https://doi.org/10.5194/gmd-6-283-2013" ext-link-type="DOI">10.5194/gmd-6-283-2013</ext-link>, 2013.</mixed-citation></ref>
      <ref id="bib1.bibx60"><label>Turuncoglu et al.(2018)</label><mixed-citation>Turuncoglu, U. U., Giuliani, G., and Sannino, S.: uturuncoglu/RegESM: Official
COP enabled version (Version 1.1.0), Zenodo,
<ext-link xlink:href="https://doi.org/10.5281/zenodo.1307212" ext-link-type="DOI">10.5281/zenodo.1307212</ext-link>, 2018.
</mixed-citation></ref><?xmltex \hack{\newpage}?>
      <ref id="bib1.bibx61"><label>Valcke(2013)</label><mixed-citation>Valcke, S.: The OASIS3 coupler: a European climate modelling community
software, Geosci. Model Dev., 6, 373–388,
<ext-link xlink:href="https://doi.org/10.5194/gmd-6-373-2013" ext-link-type="DOI">10.5194/gmd-6-373-2013</ext-link>, 2013.</mixed-citation></ref>
      <ref id="bib1.bibx62"><label>Woodring et al.(2016)</label><mixed-citation>
Woodring, J., Petersen, M., Schmeisser, A., Patchett, J., Ahrens, J., and Hagen,
H.: In situ eddy analysis in a high-resolution ocean climate model, IEEE T.
Vis. Comput. Gr., 22, 857–866, 2016.</mixed-citation></ref>
      <ref id="bib1.bibx63"><label>Zeng et al.(1998)</label><mixed-citation>
Zeng, X., Zhao, M., and Dickinson, R. E.: Intercomparison of bulk aerodynamic
algorithms for computation of sea surface fluxes using TOGA COARE and TAO
data, J. Climate, 11, 2628–2644, 1998.</mixed-citation></ref>

  </ref-list></back>
</article>
