A parallel implementation of the confined–unconfined aquifer system model for subglacial hydrology: design, verification, and performance analysis (CUAS-MPI v0.1.0)
Yannic Fischler
Thomas Kleiner
Christian Bischof
Jeremie Schmiedel
Roiy Sayag
Raban Emunds
Lennart Frederik Oestreich
Angelika Humbert
- Final revised paper (published on 15 Sep 2023)
- Preprint (discussion started on 23 Jan 2023)
Interactive discussion
Status: closed
RC1: 'Comment on gmd-2022-312', Anonymous Referee #1, 20 Feb 2023
The authors present a novel HPC modelling tool for subglacial water fluxes, for which they produce a validation and a computational performance assessment on a case set up at the scale of the whole Greenland ice sheet. Finally, they discuss its prospective applications.
The paper is clearly organised and it includes a careful study of the parallel behaviour of the developed modelling tool, with a module-wise quantification of the scalability. This point is especially important, because excessively long computation times limit the applicability of modelling strategies. The potential outcomes of this work are important and adequately discussed. Nevertheless, the manuscript suffers from formal flaws and is sometimes too elliptical – see the general and specific comments below.
I therefore suggest that this manuscript be accepted for publication in GMD after minor revisions applying the recommended improvements.
General comments :
The section titles should be more explicit and informative. For instance, CUAS-MPI / model / workflow are a bit short as section titles. The English language also needs improvement.
Specific comments :
- l 23-24-25 : « For simulating large areas, like entire ice sheets in adequate spatial resolution and in a temporal resolution to allow to represent changes on short temporal change, such as seasonal melt water input, an efficient numerical code is indispensable. » Clumsy and imprecise (‘adequate’ suffers from vagueness – adequate for what ?). To be rephrased.
- l 26-30 : OK, but it is not said properly – the number of time steps depends on the length of the time step and on the time interval to be simulated. Maybe better to talk of ‘time discretization’ rather than ‘time step’? + English problems : an efficient code.
- l 33 : compute clusters → supercomputers. Add parentheses around the reference to Balay et al.
- l 37 : repetitions (then … then)
- l 41 : The title of section 2 should be changed : a simple acronym cannot be a proper section title. At least the full name of the CUAS-MPI tool should be detailed, along with its nature (e.g. : modeling tool for sub-glacial hydrology).
- l 51-52 « Areas with high permeability represent very efficient water transport, while low permeability represents an ineffective water system. » A bit tautological. Please be more precise (e.g., areas with high permeability are associated with a higher density of channels, or something like that; be more descriptive).
- l 55-56 : « To this end, an unconfined layer is incorporated, capturing the dynamics if the head is falling below the layer thickness allowing for further water drainage. » A small explanatory figure (a sketch with both configurations, confined and unconfined layer) would be helpful for the reader and would enhance the self-consistency of the paper. See also the comments on Appendix A.
- l 58 : « an evolution equation for the hydraulic head for Darcy flow » : a governing PDE for hydraulic head spatio-temporal variations, analogous to the diffusivity equation for groundwater (a tentative sketch of such an equation is given at the end of this list of comments).
- l 59-60 : « A second major equation is describing the change in transmissivity with time based on melting, creep and cavity formation. » This is a constitutive law for the hydraulic transmissivity of the equivalent porous medium, right ?
- l 62 : reference to Appendix B would be better placed in the « Software-Design » section (once again, the title should be improved). Besides, given its shortness, I think it could be introduced directly in the body of the text.
- l 62 : Since the equations of Appendix A are referred to explicitly in Figure 1, I think that they should be introduced directly in the body of the text.
- Figure 1 : How are the flux and effective pressure derived ? Operational links (e.g. : arrows) between the boxes would improve the schematization of the modeling tool, I think.
- l 66 : « such as Greenland » : Not specific enough. You probably mean « such as a set up for modelling the sub-glacial flows under the whole Greenland Ice Sheet » ?
- l 70-71 : « Time stepping parameters are optionally described by command line parameters or a time step file. » What time stepping parameters ? Is this only the time step length, or is there an adaptive time step strategy ? If it is only the former, I don’t feel that this sentence is really necessary.
- l 72 : The MPI version uses « the same command line parameters » as the serial one. If you specify this, then you should add a reference here for finding the serial command line parameters.
- l 74 : « We use the well-known PETSc parallel math library » Here a reference for that tool should be given.
- l 82 : « if the problem size allows for comparison. » Why do you say that? Please make explicit what you have in mind.
- l 86-87 : « which provides an uniform interface of the features we require in CUAS-MPI. » Unclear, to be rephrased.
- l 92 – section 2.3 Workflow : this section could be put in an Appendix, I think.
- l 136-137 : I don’t understand why the symbol infinity is used here. The boundary conditions are defined at the boundaries, and if using the method of image wells, these boundaries should be at finite distances from the real well ? (See also the image-well sketch at the end of this list of comments.)
- l 141 : equation 1 : Always give the units of all used variables and parameters.
- l 121 section 3 : this section should be rewritten by splitting it into two parts, one for the confined case and one for the unconfined one. A figure presenting the considered geometry (position of boundaries and of image wells) should be added.
- l 164-165 : « In case of a land terminating margin, the boundary condition is no flow ». Then rivers emerging from subglacial springs are neglected, right ? Adding a sentence about the consequences of this assumption would be interesting.
- l 193-194 : « high-throughput simulations ». I am not familiar with the throughput concept. It might be useful to recall its definition.
- l 198-199 : « We employ GCC 11.2, Open MPI 4.1.4, PETSc 3.17.4 and NetCDF 4.7.4 with HDF5 1.8.22 for our performance experiments. » Please state the nature of all these software packages (e.g. : the compiler GCC, the message-passing library Open MPI 4.1.4, etc.).
- l 202 : please give the topology of the network (e.g. : hypercube, fat tree, etc).
- Figure 3 : please add a scale for the shaded bed topography.
- l 213 : « full, half, quarter and one-eighth occupied compute nodes » I think that using populated instead of occupied would be more idiomatic → full, half, quarter and one-eighth populated computing nodes.
- l 216-218 : « The size of the circle indicates the hardware investment. The smallest circle indicates that only one node was used, the next size up indicates two nodes, then four nodes, and lastly 16 nodes. » This should also be specified directly on Figure 4.
- l 219-224 : I don’t understand the runtimes mentioned in the text. For instance, 4900 s is stated for 12 threads on 1 node, while on Figure 4 the vertical coordinate of the corresponding point is 140 s. 4900 s is not a CPU time either, since 140 s * 12 cores = 1680 CPU·s (or 140 * 96 = 13440 CPU·s, if we consider that the whole node is requested for the computation even if only 12 cores are really used), which is different from 4900.
- l 225-232 : Very interesting discussion, which concludes in favour of half-populated runs with CUAS-MPI on the considered cluster. I think that the CPU times associated with the Slurm requests (→ twice as large for a half-populated run compared to a fully populated one for a given number of MPI processes – if the run times are equal !) should also be discussed.
- l 234 : I think that ‘parts’ would be a better word choice than ‘categories’.
- l 249 : « which writes a single output of configuration "large" » : please state here the total size (in bytes) of the outputs written in each case by NetCDF.
- l 254 : « scales up to 2304 MPI processes » : according to Figure 5, I would rather say that it scales quasi-linearly up to 1536 MPI processes.
- l 259-262 : « In particular, we see approximately linear scaling also for the "CUAS-MPI system kernels" and the system matrix routine up to at least 768 MPI processes. Then the "CUAS-MPI system kernels" and thereafter the system matrix creation reach their scaling limit and their respective runtime increases.» This is correct for the CUAS-MPI system kernels, but not for the system matrix creation, which scales well up to 3072 MPI processes.
- l 264 : « grid data exchange communication caused by PETSc » : I think that this term is misleading, and so is the associated legend in Figure 6. The problem, as far as I understand, is not the kernel computations, but the communication to PETSc of the results obtained by these kernel computations, so that the PETSc linear solver can use them for solving the diffusivity equation governing the sub-glacial flow. So I would keep ‘kernel computations’ for the green curve in Figure 6, but use ‘kernel communications to PETSc’ instead of ‘PETSc communication’ for the pink one.
- l 267-268 : « In our studies of throughput, i.e. how many simulated system years we can run in a day of compute time (simulated years per day, SYPD). » I was not aware of this concept of throughput as a measure of result production rate – which does not mean that it does not already exist in the literature! But still, I think that because it has already been used earlier in the manuscript (even in the abstract), its definition should have been made explicit (or recalled) earlier as well (see also the short formula at the end of this list of comments).
- l 278 : The observation of a significant impact of the I/O rate on computation time is interesting. This question has, for instance, also been investigated in another porous-media-related context by Orgogozo et al., 2014 (http://dx.doi.org/10.1016/j.cpc.2014.08.004, see section 4.4).
- Figure 7 : the letters a, b and c used in the caption are missing on the figure.
- l 284-288 : This paragraph is hard to follow. I don’t think that one can say that larger grids need more cells per MPI process than smaller grids for efficient parallelism. When using a larger grid, one increases the computational load and thus the computation-over-communication ratio, which leads to better parallel efficiency for a given number of MPI processes. Also, I don’t understand what the sweet spot is.
- l 291-292 : « The minimum of the total runtime per iteration is increasing with spatial resolution ». Please discuss why.
- l 292-293 : « the spread is less than the total runtime per year, showing that the total runtime is driven by the increase in number of linear solver iterations with resolution. » : Hard to follow. Does panel b of Figure 7 present the total runtime per year ? If so, say it explicitly in the caption of the figure and refer explicitly to this figure in that discussion.
- l 295 : « but potentially not highest resolution. » Unclear.
- l 297 : « the amount of core-hours usually available for such runs. » Be more precise. What is the target of CPU hours for this kind of run, and why ?
- l 297-298 : « A simulation covering the 90 years from 2010 to 2100 in 600 m spatial and 1 hour temporal resolution requires a wall-clock time of 1350 hours (56 days) on 384 MPI processes. » Please specify again here that this quantification is obtained for the Lichtenberg HPC system. One could also discuss how it could be extrapolated to larger supercomputers, for instance European-level (‘tier-0’) ones. (A rough core-hour estimate is sketched at the end of this list of comments.)
- Section 6 : English language problems (‘might of interest’, ‘now have now’, …)
- l 322 : G250 is too cryptic ; either add a more informative name or no name for this resolution.
- l 326-333 : This discussion is a bit confusing; it should be rewritten more clearly. Also add physics-based arguments for the statement that there is no need for highly frequent data exchange.
- l 345 : « computational granularity » : please briefly recall the definition of this concept here.
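A few illustrative sketches related to the line-by-line comments above; these are my own tentative formulations, to be checked against the manuscript, and not necessarily the exact forms used by the authors. For l 58, the kind of governing PDE I have in mind is the confined groundwater (diffusivity-type) equation
S \, \partial h / \partial t = \nabla \cdot ( T \, \nabla h ) + Q ,
where h is the hydraulic head, S the storativity, T the transmissivity and Q a source term (e.g. basal melt input).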
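For l 136-137, the standard image-well construction (assuming a straight boundary) superposes the drawdown of the real well and of a well mirrored across the boundary,
s(r,t) = s_\mathrm{w}(r,t) \pm s_\mathrm{w}(r',t) ,
with the plus sign for a no-flow boundary and the minus sign for a constant-head boundary, where s_w is the single-well drawdown solution and r, r' are the distances to the real and image wells. A figure showing where these image wells are placed relative to the domain boundaries would make the boundary-condition discussion much clearer.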
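For l 267-268, the throughput measure as I understand it (my reading, to be confirmed by the authors) is
\mathrm{SYPD} = \text{simulated years} / \text{wall-clock time in days} ,
i.e. the number of simulated years obtained per day of wall-clock compute time.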
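For l 297-298, a rough hardware-cost estimate follows from simple arithmetic, assuming one core per MPI process:
1350\ \text{h wall-clock} \times 384\ \text{MPI processes} \approx 5.2 \times 10^{5}\ \text{core-hours} ,
which could be put in relation with the allocations typically available on tier-0 systems.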
Appendix A :
- Psi and b should be introduced in a small explanatory figure as proposed above (see also the comment on lines 55-56).
- The units of all the used variables and parameters should be given.
- l 376 : « To allow a smooth transition between the confined and unconfined system, a range d is introduced. » This parameter should be discussed a bit more. What is its physical significance ? What is its range of possible values ?
- l 382 : The difference between Ss and Sy should be made more explicit (a tentative reminder is sketched at the end of this Appendix A list).
- l 387 : There is surely (at least) one publication associated with this complex equation ; please refer to it here.
- l 392 : there should be a link between h and pw ? How is pi computed, or where does it come from if it is a forcing ? Please make these points explicit here (the relations I would expect are sketched at the end of this Appendix A list).
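On Ss versus Sy (standard hydrogeology definitions, to be checked against the manuscript's usage): the confined storage coefficient is
S = S_s \, b ,
with S_s the specific storage (m^{-1}) and b the aquifer thickness, whereas S_y (dimensionless) is the specific yield, i.e. the drainable porosity that governs storage in the unconfined case; S_s b is typically several orders of magnitude smaller than S_y.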
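On the link between h, pw and pi, the relations I would expect (assuming the usual definitions; to be confirmed against the model) are
p_w = \rho_w g (h - z_b), \qquad p_i = \rho_i g H, \qquad N = p_i - p_w ,
where z_b is the bed elevation, H the ice thickness, \rho_w and \rho_i the water and ice densities, and N the effective pressure; in that case p_i would be diagnosed from the prescribed ice geometry rather than being an independent forcing.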
Appendix B :
- l 399 – 400 : ‘equidistant’ ; maybe ‘regular’ would be more appropriate ? And why use rectangular grid cells instead of square ones ?
- l 405-406 : « If an iterative solver is used, convergence is decided by the decrease of the residual norm relative to the norm of the right hand side (rtol) and the absolute size of the residual norm (atol). » Here, an equation allowing one to identify precisely the quantities rtol and atol should be provided. Typical ranges to be used should also be given (a tentative sketch is given just below).
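As an illustration of what I mean (assuming the authors rely on PETSc's default KSP convergence test, which should be confirmed), the iterative solver would be declared converged at step k when
\| r_k \|_2 \le \max( \mathrm{rtol} \, \| b \|_2, \ \mathrm{atol} ) ,
with r_k the residual of the linear system and b its right-hand side. To my knowledge the PETSc defaults are rtol = 10^{-5} and atol = 10^{-50}; the values actually used, and whether tighter tolerances are required for this application, should be stated.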
Citation: https://doi.org/10.5194/gmd-2022-312-RC1
AC1: 'Reply on RC1', Yannic Fischler, 18 Apr 2023
The comment was uploaded in the form of a supplement: https://gmd.copernicus.org/preprints/gmd-2022-312/gmd-2022-312-AC1-supplement.pdf
RC2: 'Comment on gmd-2022-312', Anonymous Referee #2, 24 Feb 2023
General comments:
The manuscript by Fischler et al. discusses the development and performance of an MPI-aided parallel implementation of a pre-existing subglacial hydrology model (Beyer et al, 2018). The method and some specificities of the parallel implementation are briefly presented (including its reliance on external libraries such as PETSc). The model is then validated using several test cases, some of which possess an analytical solution. Finally, runtime results for a series of bigger test cases (the size of the entire Greenland ice-sheet) are analyzed in order to derive tendencies concerning the scalability and throughput of the modeling tool, with the intent of comparing these to the performance of the well-known ice-sheet model ISSM (in view of a future coupling).
Overall, I would not recommend this paper for publication in its present format. The development of an efficient subglacial hydrology modeling tool capable of scaling on supercomputers is of interest to the community, so there is no doubt that CUAS-MPI has scientific relevance. However, the overall quality of the manuscript is poor and the scientific significance and originality of the results presented is only marginal. To be considered for publication, improvements should concern how the paper is structured – to be able to better distinguish what will be discussed in each Section; but there should also be a rethink of the general content. Indeed, my biggest concern is that the manuscript does not present any novel concepts or introduce any new methodologies and that most conclusions drawn can already be found in previous work (see Beyer et al. 2018 and Fischler et al. 2022). The English quality also makes the manuscript difficult to read at times and many sentences/explanations would benefit from a rewrite. Finally, I do not believe that the Introduction is complete with all relevant citations and references.
Due to the large number of major concerns, I have gathered all specific comments and technical corrections below, following the current structure of the paper. For each Section, I start with a paragraph summarizing my concerns before I provide more technical, line by line, suggestions.
Specific comments:
On a general note and as suggested by reviewer #1, I would strongly advise making the titles of Sections more informative.
- Abstract
The abstract suffers from a lot of repetition (CUAS-MPI). Acronyms (such as MPI) should be spelled out at least once. Beyer et al. 2018 should be cited.
- l. 1-2: what is meant by “as well as marginal lakes and rivers”? Please rephrase this sentence
- l. 2: “has been developed” makes it sound like CUAS is new, when this tool is pre-existing. Rephrase.
- l. 7: repetition of throughput could be avoided
- Introduction
My main issue with the introduction is that it lacks a proper state of the art. Context about the need for subglacial hydrology modeling should be provided, and different existing modeling approaches/proper citations for other similar modeling tools should also be acknowledged. The place and goal of the present study are also not clearly stated and should be elaborated upon. Finally, a sentence underlining the main findings should be included.
- l. 13: consider giving a ref to accompany the first sentence (especially for Antarctica)
- l. 15: use the same units (choose m a⁻¹ or mm d⁻¹)
- l. 19: “hundreds to thousands of metres thick ice” → “meters of thick ice”
- The sentence starting at the end of l. 20 should be rephrased, and the citations should be put in parentheses. I suggest transforming this sentence into a full paragraph with a proper state of the art of subglacial hydrology modeling.
- The sentence starting at the end of l. 23 is too vague, consider quantifying “adequate spatial resolution” and “temporal resolution”
- l. 26: same issue, consider quantifying “the desired amount of timesteps”
- l. 27: “ An example of relevance is the projection … until 2100…” → why is that? (consider providing a reference)
- l. 30: “This can only be achieved with a efficient codes”
- l. 31: again and unless I am missing something, CUAS is not new. The use of “developed” is misleading here. I suggest merging this short sentence with the following one.
- l. 33 “compute clusters” → supercomputers?
- l. 34-35: provide more information about these “pumping tests”. A short description with a proper reference would do.
- l. 35: “we employed CUAS-MPI then” → “we then employed CUAS-MPI”
- l. 36: perhaps expand on the “performance data” that you gather and provide a short description of the nature of Score-P
- l. 37: “first we explain the underlying model …” → “We explain the underlying model in Sect. 2”?
- Section 2
Section 2 appears a bit disorganized and could benefit from a complete makeover. The purpose of each subsection is not clear to me. Based on what I understand, I believe Appendix A should replace most of Section 2.1 (see below). It also sounds like Section 2.2 could be a good place to describe your original work: how did you make the pre-existing CUAS solver better, what did you change/improve? However, the description provided is too vague to be impactful (e. g. l.74-90: “the distributed memory features of PETSc”, “a PETSc feature”, “list of available PETSc solvers”, “the iterative GMRES solver”, “an adapter which provides a uniform interface to the features we require” etc.). I suggest rewriting this entire section with appropriate references to the external libraries (PETSc, NetCDF, HDF5) that you use, including a more specific description of the features that you select and why you chose to do so. Consider providing a more detailed layout of the code and its submodules in Fig 1 to accompany these developments. Finally, the point of Section 2.3 is also a little bit difficult for me to understand. I do not follow what is the relation between, and purpose of, the various scripts mentioned. Here also, I cannot differentiate what is original work when compared to the pre-existing CUAS workflow. I suggest rewriting this subsection, and accompanying the text of l. 109 - 118 (where “typical use cases” are employed as examples), perhaps sketching the workflow?
- l. 43: “different perspectives on the purpose of” → “different reasons for”? This sentence is awkward and could use a couple of references to illustrate your point.
- End of l. 45, starting with “In the past …” is odd in this context. If a proper state of the art is placed in the Introduction, it should be removed altogether.
- l. 47: “the code that we present” → “the code that we employ/use”
- l. 50: “the void space is fully saturated” is confusing when it is said later in l. 55 that “it may happen that water supply is not sufficient to keep the water system fully saturated”. Please clarify.
- l. 54: “inefficient water transport”, but the entire sentence should be rephrased to be more impactful and precise.
- l. 58 - 63: As suggested by reviewer 1 I would strongly suggest moving Appendix A here, along with providing proper citations (to Beyer et al 2018 and de Fleurian et al 2014). Describing equations is always a difficult task, and given the length of Appendix A and the fact that the equations are referenced in the body of the text I do not believe it would make the paper unnecessarily longer (especially if the rest of Section 2.1 is rewritten in a more concise manner). Consider using an image to describe the model? Or at least a reference to Fig. 1 of Beyer et al 2018.
- l. 66: “its performance was too low for larger setups such as Greenland” is too vague. Please clarify
- l. 68: please provide a description of the “physics kernels”
- l. 74: provide a reference to PETSc
- I feel like Table 1 and the associated discussion in Section 2.2 (l. 87-90) are unnecessary
- l. 96-97: “a setup script’, “the mask”, “any environment”... please be more precise
- The usefulness of l. 100-104 is questionable (restarting a run is common practice)
- l. 103: “has be remapped” → “has been remapped”
- l. 111 -112: “Here the seasonal … and others with seasonal forcing” should be rephrased
- l. 113: can you elaborate on this “nesting approach” mentioned?
- l. 120: what is the meaning of that last sentence? (what are “these numbers”?)
- Section 3
This Section is interesting and the work conducted is solid but my impression is that the CUAS-MPI equations and numerical methods employed to solve them have not changed since what is described in the paper by Beyer et al., 2018. This makes the relevance of the proposed validations questionable. I would suggest either to elaborate on the relevance of these test cases, describing them in more depth, or moving them to an Appendix. If kept, consider providing a Table summarizing the various test cases performed (from the body of the text it is unclear how many tests are actually designed), and clearly identifying what code feature they are testing. Finally, I cannot emphasize enough how much a sketch or a picture would help the reader understand what these “pumping tests” are. There is a lot of math involved here that is not trivial, and a proper introduction of the terminology would be helpful.
- l. 127: consider providing a more detailed description of these “pumping tests”.
- l. 134: I am unfamiliar with the method of images in this context; I believe a short justification or a reference would be helpful
- l. 138: please provide a brief description of what an “image well” is
- l. 140: define “drawdown” (a tentative definition is sketched at the end of this Section 3 list)
- l. 145: what is “the non-linear unconfined case” ? (again a Table summarizing the test cases and giving them names would help)
- l. 153: what is “the specific yield” (another case for describing the model’s equations in a previous section)?
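To illustrate the terminology asked about above (standard well-hydraulics definitions; whether CUAS-MPI uses exactly this solution should be checked against the manuscript): the drawdown is the decrease of the hydraulic head relative to its initial value, s(r,t) = h_0 - h(r,t), and the classical transient response to a single pumping well in a confined aquifer is the Theis solution,
s(r,t) = \frac{Q}{4 \pi T} W(u), \qquad u = \frac{r^2 S}{4 T t} ,
with Q the pumping rate, T the transmissivity, S the storativity and W(u) the well function (exponential integral). Image wells are fictitious wells mirrored across a boundary whose superposed drawdown enforces the boundary condition at a finite distance.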
- Section 4
I feel like this section should be a subsection of Section 5. It could be useful to see the various grid resolutions, perhaps as a Supplementary Material?
- l. 162: the first sentence should be rephrased.
- Fig 3: the bed topography is not visible, consider changing the color scheme or the transparency. Also, there should be a scale for it.
- l. 165: please elaborate on the “Dirichlet boundary condition for the head”. A pressure is not dimensionally homogeneous to a hydraulic head (see the relation sketched at the end of this Section 4 list).
- l. 168: the citation Christmann et al. 2021 is odd. What are you trying to back up with it?
- l. 168: “To summarise it”, what is “it”? Also, I am not sure what you mean by this sentence.
- l. 171: “Ice sheet basal melt” can you clarify what you mean here, and where you are using this information in your equation system (what are the units)? This sentence seems redundant with l. 182
- l. 176: remove the comma
- l. 184: you defined a symbol for the specific yield earlier, use it
- l. 191: remove the “residuum”
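Regarding the head/pressure comment above (the usual definition, assuming the bed elevation is taken as the datum; the authors' convention should be stated):
h = z_b + p_w / ( \rho_w g ) ,
so a Dirichlet condition on the head at the margin fixes the water pressure only once the elevation z_b is accounted for (e.g. h = z_b corresponds to p_w = 0 at a land-terminating margin).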
- Section 5
Section 5.1 is actually well written, and Fig. 4 is a good illustration, but the conclusions lack generality: do they hold for other node architectures? Can we identify directions for potential code improvement? It would help to more thoroughly quantify the memory requirements and their nature to answer these questions. It sounds like the point that is made is simply to provide users with recommendations when running this specific code on this specific supercomputer, which is not extremely useful.
In Section 5.2 the contributions from various pieces of the solver to the total runtime are quantified. The first comment that can be made here is that if outputting distorts the analysis that much, and you choose not to discuss this phenomenon, why include it in the curves? It is very distracting, particularly for the lower resolutions, since “NetCDF output” is seen to dominate runtime. More importantly, the conclusion here seems to be that there is a tipping point where scalability breaks, which depends on the relative costs of the various “subcategories” and the general amount of work per MPI process (G600 experiences scalability issues sooner than G150 because it has fewer elements per thread). Not surprisingly, the curves presented in Fig. 5 and the explanation behind the observed tendencies are similar to what is discussed in depth in Fischler et al., 2022. The culprit for the inflexion is the same: both the system matrix creation and the PETSc communications experience scalability issues due to the overhead costs of MPI communications (citation from the 2022 paper: “with an increasing number of MPI processes, the communication overhead of the assembly starts dominating at some point.”). Unfortunately, the same can be said of Sect. 5.3: the two conclusions starting l. 279 “In general we see that for smaller grids there is no sense in using a large number of processes, as there is not enough work to be done for an efficient parallelization” and l. 285 “...more MPI processes, in general, cause more communication and synchronization overhead and more computation is needed to offset this” are similar to those drawn by Fischler et al. 2022, limiting the impact and importance of this study. Is there a way to differentiate this work from the 2022 paper? Perhaps perspectives on code improvements could be discussed?
- The first paragraph starting l. 193 should be rephrased
- l. 195: “()” should be removed around Section 4
- l. 201: is there a link to learn more about that particular supercomputer?
- l. 203: what is “turbo-mode”?
- l. 205: “relative standard deviation” of …? (the time)
- I agree with reviewer #1 that the vertical coordinate of Fig. 4 is not consistent with the runtimes reported in the body of the text.
- l. 234 - 242: I would refrain from using too many “”, as they clutter the text. The terminology is awkward at times: these so-called functional categories should be put in the context of a workflow akin to Fig. 1 in Fischler et al 2022 (references to Fig 1 are not helping until the layout in Fig 1 is made more descriptive)… but also, it turns out that these functional categories are not directly the categories used in Fig. 5, so why bother? Explain what is shown in Fig 5 without the confusion of the “functional categories”. I also have no idea what the “NETCDF output” is part of: is it the I/O interface or is it the post-processing of the solution vector, which ultimately would be a sub-category of the “CUAS-MPI solver”?
- l. 248: what does the “They” refer to here?
- l. 267: consider rephrasing that first sentence (why using the past tense here? Maybe I am missing the point).
- l. 272: the choice to output a solution here is odd, since this cost is obviously going to dominate the runtime for all coarser grids, based on the results from G600 of Fig 5.
- l. 276: I wouldn’t say “profitably”
- Fig 7a: why is there no longer a log scale for the x axis of Fig 7a?
- Fig 7b and c are difficult to read and the accompanying text is not very easy to understand. Are we supposed to “read” them as going from right to left = increasing the number of MPI processes? Also, why does the inflexion (breaking of scalability) start at a higher number of cells per MPI process for finer resolutions?
- l. 279: “Smaller grids”= coarser resolution? (consider changing “larger grids” too)
- l. 288: what is the “sweet spot”?
- l. 292: that last sentence needs rephrasing, not sure what is said here (what is “the spread”?)
- Section 6
I suggest merging part of this section with the Conclusions, and including the rest (the discussion about runtime and throughput) in the previous section.
- l. 296: First sentence should be rewritten. Give specifics – what is high, highest? What does “the code performs well” mean? What is the number of time steps required for a seasonal cycle?
- The requirement of 56 days for a 90-year run at 600 m spatial resolution seems off, based on the results from Fig 7a. Please clarify. Also, this is very much supercomputer-dependent and this should be specified.
- l. 301: what are “panGreenland simulations”?
- l. 302: what do you mean by that last sentence “The costs are also …”?
- What is the paragraph starting l. 307 doing here? It sounds like it would be better placed in the Introduction
- l. 313: “is” at the end of the sentence should be removed
- l. 315 “In out work, the next step …” → “The next step …”
- l. 316: the ISSM citation is odd, the website should be a footnote or a real citation altogether?
- l. 320: odd citation for preCICE too
- Paragraph starting l. 315 can be a perspective for, say, the Conclusions section but its placement here is odd. You could merge it with the next paragraph but the emphasis should be on the test case results
- l. 326-333: what is said here? Is it an extension of the paragraph starting l. 315?
- Conclusions
This section is too incomplete. It should either be augmented with the suggested paragraphs of Sect. 6, or merged with the discussion section altogether.
- l. 349: CUAS-MPI is not a new code – or if it is, it has not been demonstrated properly
- l. 355-356: too vague - consider describing what these enhancements and model adaptation should entail. The last sentence should be impactful.
- References
The doi for the paper by Young, T. J., Christoffersen, P., Bougamont, M., and Stewart, C. L. should read https://doi.org/10.1073/pnas.2116036119
Citation: https://doi.org/10.5194/gmd-2022-312-RC2
AC2: 'Reply on RC2', Yannic Fischler, 18 Apr 2023
The comment was uploaded in the form of a supplement: https://gmd.copernicus.org/preprints/gmd-2022-312/gmd-2022-312-AC2-supplement.pdf