Australian tidal currents – assessment of a barotropic model (COMPAS v1.3.0 rev6631) with an unstructured grid
Mike Herzfeld
Mark Hemer
Darren Engwirda
Download
- Final revised paper (published on 09 Sep 2021)
- Preprint (discussion started on 14 Apr 2021)
Interactive discussion
Status: closed
RC1: 'Comment on gmd-2021-51', Anonymous Referee #1, 25 May 2021
This paper discusses the validation of a new tide model for the waters surrounding Australia. The model is based on a new implementation of shallow water dynamics on an unstructured grid using the EMS modeling system which they have open-sourced. The authors provide a new compilation of tidal current observations in their domain, which should be quite useful for others. They provide nuanced and intelligent discussion of their process of model development (emphasizing details such as the hand-adjustment of topography and implementation of open boundary conditions) which should also help others. They systematically discuss the model-data intercomparison, emphasizing locations where tidal currents are relatively large in comparison with sub-tidal currents, which is appropriate considering the aimed-at operational uses for the model. Overall, the authors have produced a well-organized and thoughtful comparison, with the appropriate level of detail provided, and I think this paper requires only very minor adjustments before publication.
Detailed notes, itemized by line number:
L14: Should this read "Root Mean Square Error (RMSE)"? Otherwise, why capitals?
L15: Two periods.
Up to L70: This discussion of the grid development will be useful for others. Very good.
L91: Indeed this is unusual, but it is an indication that you have achieved a necessary level of accuracy. Interesting.
L100: When I first read this, I did not understand that the tidal synthesis was only used at the preliminary stage of model tuning. Later, at line 155, this is explained. I think this should be explained right away when the tidal synthesis is mentioned.
L106: Capitalize "TPXO".
L140: This is a clear explanation of the current meters and ADCP dataset.
L175: Are D and C in the same units, or is C a measure of area? If you believe the model errors are related to this quantity, perhaps it would be better to plot the error statistics as a function of J. It does not seem that this J is used later, so maybe it can be omitted.
L182: Please write out the expression for the relative error that includes the sub_o velocity.
Table 1: Please format the text so that the lower parts of letters are visible. Note, for example, how the "p", "y", and "g" are truncated from several of the place names.
Up to L230: This is a good overview of the errors. Appropriate detail.
To L305: A good explanation of why the discussion focusses on only certain stations.
L374: It would be useful to label Van Diemen Gulf, although I guess it is the large body of water enclosing Christine Reef?
Fig 11: I cannot read the place names here. Can you please label Broad Sound?
L387: I think I know the location of this gauge, but I don't understand what we are supposed to observe from Fig 3.
L465: Good to see this basic comparison with TPXO here. You might wish to look at Zaron and Elipot JGR 2021, who compare currents from an earlier version of this model with drifter-derived currents. Alternately, you might find drifter-derived currents are another useful validation dataset.
L472: I don't have the expertise to comment on whether the model currents are operationally useful. Instead of saying they are "arguably" useful, it would be better if you can describe alternate viewpoints in more detail. Are there definitions or criteria which would be useful for arguing this question? What criteria should be used to decide if a model is "good enough" to be useful for current predictions vs tidal energy site evaluation?
Citation: https://doi.org/10.5194/gmd-2021-51-RC1
AC1: 'Reply on RC1', David Griffin, 01 Jul 2021
Foreword
We thank the two referees and the one community member for their thoughtful and constructive comments on our paper. We have revised our manuscript in response to referee comments as described below and think the paper is now definitely improved, and hope that the Editor invites us to submit it. Our responses to comments are below in red, with new or altered snippets of the revised paper in green.
Referee 1
This paper discusses the validation of a new tide model for the waters surrounding Australia. The model is based on a new implementation of shallow water dynamics on an unstructured grid using the EMS modeling system which they have open-sourced. The authors provide a new compilation of tidal current observations in their domain, which should be quite useful for others. They provide nuanced and intelligent discussion of their process of model development (emphasizing details such as the hand-adjustment of topography and implementation of open boundary conditions) which should also help others. They systematically discuss the model-data intercomparison, emphasizing locations where tidal currents are relatively large in comparison with sub-tidal currents, which is appropriate considering the aimed-at operational uses for the model. Overall, the authors have produced a well-organized and thoughtful comparison, with the appropriate level of detail provided, and I think this paper requires only very minor adjustments before publication.
Thank you.
Detailed notes, itemized by line number:
L14: Should this read "Root Mean Square Error (RMSE)"? Otherwise, why capitals?
No, Root Sum Square is correct, because it is over 8 constituents, and we want to know the total error. ‘(RSS)’ could be added, but it is not used again in the abstract.
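For readers less familiar with the distinction, here is a minimal sketch of a root-sum-square error taken over constituents (rather than a root-mean-square error over time); the constituent list and the per-constituent error values are hypothetical placeholders, not numbers from the paper.

```python
import numpy as np

# Hypothetical per-constituent "vector" errors (cm) at one site; the paper's
# eight constituents and actual error values are not reproduced here.
constituents = ["M2", "S2", "N2", "K2", "K1", "O1", "P1", "Q1"]
vector_err_cm = np.array([3.1, 1.4, 0.8, 0.4, 0.9, 0.6, 0.3, 0.2])

# Root Sum Square: the total error accumulated over all constituents
rss = np.sqrt(np.sum(vector_err_cm ** 2))
print(f"RSS over {len(constituents)} constituents: {rss:.2f} cm")
```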
L15: Two periods.
Oops. Thank you.
Up to L70: This discussion of the grid development will be useful for others. Very good.
Thank you.
L91: Indeed this is unusual, but it is an indication that you have achieved a necessary level of accuracy. Interesting.
Agreed. See below for further discussion of this point.
L100: When I first read this, I did not understand that the tidal synthesis was only used at the preliminary stage of model tuning. Later, at line 155, this is explained. I think this should be explained right away when the tidal synthesis is mentioned.
Sorry, but it seems you have overlooked lines 108-110, which say that the analyses presented in the paper use constituents analysed from a long model run. We have clarified this point by saying “These trial model runs were too short for accurate decomposition into constituents, so we assessed them against….”
L106: Capitalize "TPXO".
Oops. Thank you.
L140: This is a clear explanation of the current meters and ADCP dataset.
Thank you.
L175: Are D and C in the same units, or is C a measure of area? If you believe the model errors are related to this quantity, perhaps it would be better to plot the error statistics as a function of J. It does not seem that this J is used later, so maybe it can be omitted.
Thank you - this was unclear (and noted by another referee). C has the same units as D. We now say: “where D is the distance (km) to the model grid point, C is the characteristic size (km) of the cell (see Fig. 1),…”. Errors are not strongly related to J, and the form of J has little impact on the average error. But if it were omitted, people would ask ‘how did you interpolate the model to the obs?’
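As an aside for readers of this discussion, the sketch below is only an illustrative stand-in for the kind of matching penalty being discussed: it keeps a D/(5C) distance term of the sort mentioned later in this thread, but the depth-mismatch term, the weighting, and the candidate values are assumptions, not the J defined in the manuscript.

```python
# Illustrative stand-in only (not the paper's J): a penalty for matching an
# observation site to a nearby model cell, combining distance normalised by
# cell size with a hypothetical depth-mismatch term.
def penalty(D_km, C_km, depth_obs_m, depth_model_m):
    return D_km / (5.0 * C_km) + abs(depth_obs_m - depth_model_m) / depth_obs_m

# hypothetical candidate cells near one site: (D km, C km, model depth m)
candidates = [(1.2, 2.0, 48.0), (0.4, 0.6, 35.0), (2.5, 4.0, 52.0)]
depth_obs_m = 50.0
best = min(candidates, key=lambda c: penalty(c[0], c[1], depth_obs_m, c[2]))
print("selected cell:", best)
```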
L182: Please write out the expression for the relative error that includes the sub_o velocity.
Done.
Table 1: Please format the text so that the lower parts of letters are visible. Note, for example, how the "p", "y", and "g" are truncated from several of the place names.
Done
Up to L230: This is a good overview of the errors. Appropriate detail.
Thank you
To L305: A good explanation of why the discussion focusses on only certain stations.
Thank you
L374: It would be useful to label Van Diemen Gulf, although I guess it is the large body of water enclosing Christine Reef?
Done
Fig 11: I cannot read the place names here. Can you please label Broad Sound?
Done (mentioned current meter sites are in bigger, brighter text and Broad Sound is labelled)
L387: I think I know the location of this gauge, but I don't understand what we are supposed to observe from Fig 3.
We agree that the reference to Fig. 3 was too cryptic, and have inserted an extra Figure here in support of the comment about how the modelled tidal height compares with the observations, which, on closer inspection, is actually better than we had written before. The new text reads “The tide gauge (at McEwin Islet) near the head of the Sound (Fig. 12) suggests that the second amplification process is also quite well modelled, since the modelled M2 amplitude there is nearly (within about 10%) as great as the observed value”.
L465: Good to see this basic comparison with TPXO here. You might wish to look at Zaron and Elipot JGR 2021, who compare currents from an earlier version of this model with drifter-derived currents. Alternately, you might find drifter-derived currents are another useful validation dataset.
Thank you. We will consider using drifters and gliders for validating the next version of this model, but first we wish to get access to all the other current meter time series that exist.
L472: I don't have the expertise to comment on whether the model currents are operationally useful. Instead of saying they are "arguably" useful, it would be better if you can describe alternate viewpoints in more detail. Are there definitions or criteria which would be useful for arguing this question? What criteria should be used to decide if a model is "good enough" to be useful for current predictions vs tidal energy site evaluation?
This is a very hard question and we are quite sure there is not a unique answer - because there are so many potential applications. So we hope that our paper will equip users to assess the adequacy themselves of our tidal model for their application, with us making as few limiting decisions as possible.
Referee 2
This paper discusses the comparison of a new tide model for the waters surrounding Australia against both tidal height and current observations, with a dedicated focus on the added value for future operational tidal current prediction (from model simulations).
The compilation of tidal observations, especially tidal currents, is rather impressive and will provide a very useful database for further studies and/or model validation. The comparisons between the model's simulations and observations are exhaustive and detailed, with a very informative focus on regions of special interest. Current data processing and inherent limitations are well presented and discussed.
Thank you
The figures showing model and observed current ellipses are very interesting; however, the red-coloured observed ellipses are sometimes hardly distinguishable from the background current-amplitude pixels.
Yes, that is true in some cases, and is why we do not rely on the reader being able to see all the observed ellipses in every Figure. To deal with this problem, we have 1) shown the comparisons at either 2 or 3 scales: national (5000km, e.g. Fig 5), regional (500km, e.g. Fig 6) and local (100km, e.g. Fig.7), 2) chosen velocity scales carefully for each Figure to reach a compromise between overlapping ellipses in strong current regions and invisible ones where the amplitude is low, 3) listed region-averaged tabulated statistics of the model-obs comparisons both on the Figures and in the Tables, and 4) listed site-specific model-obs comparisons for all current meters. We don’t think there is much more we can do without including very many local-scale Figures.
I might suggest showing the model grid itself in an additional figure.
This is what Figures 1 and 2 are.
The same remark applies to tidal height vector errors, in addition to the superimposed modelled/observed amplitudes and phases.
Tidal heights are not the principal focus of this paper (as is made clear in several places, starting with the title of the paper). Nevertheless, we have included model-obs height comparisons for completeness. Fig 3 and 4 show the model-obs comparisons for amplitude and phase separately, which we think is more illuminating than showing just the vector error (the combination of both components of the error). Table 2 lists statistics of amplitude, phase and also the combined (vector) error, averaged over regions. There are too many sites to include a heights-equivalent to Table 3.
The model is based on a new implementation of shallow water dynamics on an unstructured grid. As far as I understood, the COMPAS model is a local evolution of the MPAS one, or at least inspired by it. Unlike the work done on the tidal observation compilation and processing, I find the modelling work not sufficiently convincing.
We are sorry to hear that, and have tried to make it more convincing, without repeating too much material from Herzfeld (2020) that documented the details of the model. We have emphasized to readers (at the beginning of the Model configuration section, see below) that this paper focusses on our assessment of the model, not its construction.
My first remarks concern the model grid design and setting. The COMPAS developers chose a basically hexagonal grid (and the subsequent finite-volume discretization). Despite some flexibility to tune the model resolution, it is much less flexible than triangular element grids, especially in following the coastal geometry precisely. The authors may comment on their choice.
Sorry, we disagree. COMPAS uses the dual of a Delaunay triangulation (a Voronoi diagram). Compared to using triangles, this is less prone to spurious short wave generation on a C grid. It can boundary-fit coastlines to the same degree as triangles (rays toward infinity in the Voronoi dual are truncated to the coast). COMPAS and MPAS-O are quite different in the way the coastline geometry/discretisation is treated, with COMPAS able to conform to the shoreline directly, while MPAS-O cannot. An example of a COMPAS coastline-fitted mesh is included below.
We have added text stating that certain aspects of COMPAS differ from MPAS in that they are coastally optimized.
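To illustrate the duality being described (this is not the COMPAS mesh generator, just a minimal sketch with scipy), the code below builds the Voronoi diagram of a set of generating points, i.e. the dual of their Delaunay triangulation; in the model, the unbounded cells along the hull are the ones truncated to the coastline.

```python
import numpy as np
from scipy.spatial import Delaunay, Voronoi

# Random generating points stand in for the mesh's cell centres
rng = np.random.default_rng(0)
pts = rng.uniform(0.0, 1.0, size=(200, 2))

tri = Delaunay(pts)   # primal triangulation
vor = Voronoi(pts)    # dual diagram: its vertices are triangle circumcentres

# Bounded regions are interior cells; unbounded ones (containing index -1)
# extend to infinity and, in a coastal mesh, would be clipped to the shoreline.
bounded = [r for r in vor.regions if r and -1 not in r]
print(len(tri.simplices), "triangles,", len(bounded), "bounded dual cells")
```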
The model resolution constraints (depth and current magnitude) are also a bit surprising to me. In tidal applications, coastal geometry complexity, tidal wavelength (theoretically related to the square root of depth, but possibly strongly controlled by local coastal geometry/dynamical resonance) and the scales of tidal current variability related to the depth slope are the most effective constraints for setting the appropriate local resolution, especially when tidal currents are specifically targeted. I'd like the authors to comment on that.
We do indeed use the sqrt(gH) wavelength in setting resolution. We also add higher resolution as distance-to-coast, so again, more agreement in terms of 'coastal geometry'. We've used the magnitude of tidal currents, rather than grad(H) as an additional refinement metric, to give more detail in the high-speed areas of particular interest. We have clarified this in the manuscript.
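A minimal sketch of a wavelength-based sizing rule of the kind described in this reply is given below; the cells-per-wavelength count, the speed threshold, and the size limits are illustrative assumptions, not the values used to build the COMPAS grid.

```python
import numpy as np

G = 9.81                # gravity (m s-2)
T_M2 = 12.42 * 3600.0   # M2 period (s)

def target_cell_size(depth_m, tidal_speed_ms, cells_per_wavelength=60.0,
                     fast_current_ms=1.0, dx_min=330.0, dx_max=4000.0):
    """Target edge length (m) from the shallow-water M2 wavelength sqrt(gH)*T,
    refined where previously modelled tidal currents are fast."""
    wavelength = np.sqrt(G * np.maximum(depth_m, 1.0)) * T_M2
    dx = wavelength / cells_per_wavelength
    dx = np.where(tidal_speed_ms > fast_current_ms, 0.5 * dx, dx)  # extra refinement
    return np.clip(dx, dx_min, dx_max)

print(target_cell_size(np.array([10.0, 200.0, 4000.0]),
                       np.array([1.5, 0.2, 0.05])))
```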
The bathymetry is mostly set from the best available global datasets for Australian waters; still, I wonder about the choice to extend the uncovered areas with DBDB2, which is a rather ancient bathymetry database. The authors may comment on their choice.
We agree that bathymetry choice is vital to improving performance, and is a priority for future model development. To this end, we hope to capitalise on the results of the ausSeabed initiative (http://www.ausseabed.gov.au/about). We have emphasized this more in the manuscript.
The setting of the minimum model depth suggests to me that wetting/drying capabilities were not available/used in the tidal simulations. This is by itself an annoying limitation, but minimum depth settings can also significantly change the model results and, in cases where the original bathymetry dataset is accurate enough, deteriorate the simulation accuracy (conversely, a 5 to 10 m minimum depth setting can help to partly compensate for bathymetry inaccuracy in nearshore regions). I'd like the authors to comment on that.
We now say: “COMPAS can be run with wetting and drying activated, not only for entire water columns, but also for individual layers as sea level falls or rises. For the present application, however, neither of these capabilities were exercised to any degree; the latter because the model was run in 2D mode. Lacking adequate near-shore bathymetry for much of this large country, we chose not to attempt to properly model the tides in the inter-tidal zone, and set the minimum depth (at zero tide) to 4 m for most of the grid, but 8 m where the tides are large in the NW, NE and in Gulf St Vincent. A channel of 12 m was manually included in King Sound (in the NW) to correct an obvious error there. A similar bathymetry correction was also made in Western Port (near Melbourne). These two manual corrections had significant effect on the local tidal response, and it is anticipated that further model improvement will follow from corrections throughout the domain based on a more complete set of observations of the real topography.”
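The regional depth floor described in this new text can be expressed very simply; the sketch below assumes the macrotidal regions (NW, NE and Gulf St Vincent) are available as a boolean mask, which is a placeholder for the actual polygons used.

```python
import numpy as np

def apply_min_depth(depth_m, in_macrotidal_region):
    """Clip cell depths (at zero tide) to 4 m generally, 8 m in macrotidal regions."""
    floor = np.where(in_macrotidal_region, 8.0, 4.0)
    return np.maximum(depth_m, floor)

print(apply_min_depth(np.array([1.0, 6.0, 30.0]), np.array([True, True, False])))
```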
My second set of remarks concerns the tidal forcing and dissipation. First, having the best performance with the tidal potential left off is not a good indicator of model performance. Also, tidal loading and self-attraction forcing terms are not mentioned at all; I guess they are simply not considered in COMPAS. If I am right, this is a very annoying omission for accurate tidal modelling.
Simulations were trialled with tidal potential included (equilibrium tide + self-loading/attraction). Results were found not to differ significantly from when they were absent. There is a cost to including these terms, as computation of the right ascension of the ascending node for the moon is expensive when computed at every grid point. Any changes to the solution did not warrant this additional expense. It appears that when the ratio of open boundary length (where the tide is imposed) to surface area is large, the effect of tidal potential on the solution is diminished, with the major contributor to forcing being the boundary forcing.
We have added that self-attraction/loading was trialled, and a reference to the tidal potential method used. “Tidal potential forcing and tidal self-attraction/loading (using the method of Sakamoto et al., 2013) is optionally applied in the model but we found that it made very little difference (excepting the run time) compared with other parameters such as friction, so we have omitted it for the long (1 year) run of the model described here.”
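For orientation only, the sketch below shows the textbook form of the terms being discussed: a semidiurnal equilibrium-tide elevation and a scalar self-attraction/loading correction. It is not the Sakamoto et al. (2013) scheme referred to above, and the Love-number factor and SAL coefficient are typical literature values, not the model's settings.

```python
import numpy as np

OMEGA_M2 = 2.0 * np.pi / (12.4206 * 3600.0)  # M2 frequency (rad s-1)
A_M2 = 0.2423                                 # equilibrium amplitude (m), approx.
LOVE = 0.693                                  # ~(1 + k2 - h2), typical value
BETA_SAL = 0.1                                # scalar SAL coefficient, typical value

def equilibrium_eta(lat_deg, lon_deg, t_sec, phase0=0.0):
    """Semidiurnal equilibrium-tide elevation (m) at a point and time."""
    lat, lon = np.radians(lat_deg), np.radians(lon_deg)
    return LOVE * A_M2 * np.cos(lat) ** 2 * np.cos(OMEGA_M2 * t_sec + 2.0 * lon + phase0)

def forcing_surface(eta_model, eta_eq):
    """Surface whose gradient drives the flow: eta minus equilibrium and scalar SAL."""
    return eta_model - eta_eq - BETA_SAL * eta_model

print(equilibrium_eta(-20.0, 130.0, 3600.0))
```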
Equally important, the barotropic tides generate internal tides when their energy fluxes propagate across the shelf slope, and are then partly dissipated by the subsequent barotropic-to-baroclinic energy conversion. This is quite a large contributor to barotropic tide dissipation, and it must be implemented through a parameterization in depth-averaged tidal models to reach the best accuracy, even at regional scales. Again, this point is not mentioned in the paper; I can only guess that such a parameterization is not available in COMPAS.
The model was run in barotropic mode only. Baroclinic energy conversion is currently not available in 2D COMPAS simulations. We have now mentioned in the manuscript that a 3D baroclinic version is under development, which would address these issues explicitly: “In this paper, we assess the ability of this model to simulate barotropic tides (both currents and sea level) as a first step towards a baroclinic model of the tides, and then a baroclinic model with non-tidal flows as well.”
Many places in Australian waters are very challenging in terms of tidal dynamics, and will require raising the COMPAS tidal capabilities to a more comprehensive level, or at least a discussion of the impact of the missing tidal ingredients. I'd like the authors to comment on these critical issues.
We certainly agree that our diverse tidal environment provides a significant challenge, especially since the bathymetry is uncertain in places, and there are inevitably some errors remaining in both the parent model and the validation data set. The importance of baroclinic processes cannot be denied either. The paper now has a new final sentence: “We conclude by reminding readers that the work reported here is just an initial step towards a more complete description of Australia’s tides, which will potentially include 1) the variation in the vertical dimension of the tidal currents, 2) finer horizontal resolution, 3) more accurate sea-floor topography, 4) more accurate offshore boundary conditions, and 5) within-domain tidal potential forcing and self-attraction.”
Last but not least, the open boundary condition settings can be potentially critical to the overall simulation accuracy; their discussion in section 2 could be complemented with a domain-wide vector difference between the forcing atlas (TPXO) and the COMPAS results.
Thank you for the suggestion, but we think this comparison with another model (the one we are nesting inside), while interesting to some readers, would be a distraction from the main emphasis of the paper, which is the assessment of our model against observations. There is also the question of which version of TPXO we should compare to: the one we nest inside (1/6°), or one or more versions of the 1/30° ‘Atlas’ product? We looked at this and decided to add just a short sentence summarising the salient facts (see the paragraph at the end of section 7, now slightly edited to remind readers that tidal potential forcing is inactive in the present version of our model).
In summary, the observational and comparison sections are very informative and well organized, and I think they are fully suited for publication. Conversely, the modelling part really needs to be augmented/revised/strengthened. Consequently, I encourage the authors to make the necessary changes to the modelling sections to reach the same level of scientific value as the observational ones. In consequence, I will consider publication after a major revision of the modelling discussion, with no doubt that the authors will be successful in submitting a more appropriate version. I will be happy to review any new submission, and will provide a more detailed review on that occasion, as the present version is likely to vary significantly from the revised one.
We have made some small augmentations to the modelling section of this paper but, as mentioned above, we have avoided repeating too much material from Herzfeld et al. (2020), which documents the details of the model. The present paper focusses on our assessment of the model, not its construction. To clarify the scope of the paper, we have added the following text at the beginning of the Model configuration section:
As mentioned above, the work reported here was done for two reasons: 1) to identify regions where tidal currents are prospective from a renewable energy point of view, and 2) to lay the foundations of a more general-purpose national model of the tidal currents of Australia. The model we used is called COMPAS (Coastal Ocean Marine Prediction Across Scales). It is a fully non-linear 3D model that has been described in full by Herzfeld et al. (2020). In this paper, we assess the ability of this model to simulate barotropic tides (both currents and sea level) as a first step towards a baroclinic model of the tides, and then a baroclinic model with non-tidal flows as well.
Community Comment (Roger Proctor)
This paper describes the results of tidal simulations using a new unstructured grid model for Australian coastal waters, initially developed for a tidal renewable energy project. The model results, from depth-averaged simulations, are compared with observations from an unprecedented collection of tidal height and tidal current locations at which a minimum of 11 tidal constituents are available. This assembly of observed tidal constituents is valuable in its own right, and the published model tidal constituents form a useful dataset. The paper is divided into sections describing the model setup and preliminary experiments, the two observational datasets, the model-observation analysis methodology, followed by the results and a discussion. A comprehensive set of statistics is offered, resulting in a regional approach to assessing the quality of the model results. Overall the paper offers the reader several new perspectives: on the observation coverage of the tides around Australia; on the diversity of its tidal regimes; and on the ability of this new model to accurately represent these regimes. As such it is a valuable contribution to the journal and the published datasets of value to the community.
Thank you for the kind words
Some thoughts and suggested minor modifications are discussed below.
The discussion of model configuration suggests the use of the unstructured grid is a computational saving, indicating a regular grid model of similar resolution would require 1.5 million points to match the ‘mean resolution’ (not defined). This is not a large array for a simple 2D model, so the saving, if any, may not be great. The smallest cell in the unstructured mesh is ~330 m, which is relatively large for some of the areas in question. I wondered if the computational constraints of the explicit scheme were limiting the calculation.
A model using 1.5 million surface cells is tractable; however, it will always run slower than one using just 12% as many cells, all other things being equal. Given that over 70 simulations were performed during the optimization procedure using a very modest number of processors, this saving in wall-time or CPU cost is non-negligible.
Although certain regions of the model are likely under-resolved, we considered this first attempt at a national model a good balance between accurately capturing the broad tidal circulation patterns and model throughput. We have added text to this effect in the manuscript, and also added the mean distance between centres (2100 m).
Since the simulations were conducted in 2D mode, semi-implicit approaches (essentially an implicit model in 2D mode) would be expected to increase throughput due to increased timesteps. However, the semi-implicit approach does have its drawbacks, notably, it is difficult to modularize open boundary conditions that can be ‘mixed and matched’, due to the explicit coding of these schemes as source terms into the matrix inversion procedure. Such models typically have quite a limited array of open boundary conditions, which may hinder optimization of the open boundary problem.
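To make the time-step point concrete, a back-of-envelope CFL estimate for an explicit barotropic scheme is sketched below; the depth chosen for the smallest (~330 m) cell is an assumption for illustration, not a value from the model.

```python
import numpy as np

g = 9.81
dx_min = 330.0   # metres, smallest cell edge mentioned above
depth = 50.0     # metres, an assumed depth at such a cell

c = np.sqrt(g * depth)   # shallow-water gravity-wave speed
dt_max = dx_min / c      # explicit-scheme time step limited by the fastest wave
print(f"wave speed {c:.1f} m/s -> CFL-limited dt ~ {dt_max:.1f} s")
```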
Lines 75-80 discuss the bathymetry used, and point to the use of minimum depths, which would limit any wetting and drying and may affect results where the tidal range is large; was this tested in the preliminary experiments?
Yes – see discussion above.
Line 90+ describes the open boundary set up which is indeed quite unusual. A sentence or two to explain why this works would be helpful, particularly on how internally generated motions reaching the open boundary are handled.
Agreed, we now say: “This situation is quite unusual, and suggests that the TPXO values at the boundary are largely in tune with the interior dynamics of the model (even though TPXO and COMPAS have their differences), obviating the need for strategies to make the boundary transmissive to outgoing signals.”
Line 100+ describes the initial experiments conducted to arrive at the finally chosen parameter settings (e.g. drag coefficient). Given that later in the paper, in discussing the results, there are several assertions as to discrepancies between model and observation, e.g. line 375, line 388, could these initial experiments offer any explanations?
There were 72 simulations performed during the optimization process. There were some step changes towards convergence to a skilful solution. Using TPXO on its native grid was one such step. These optimizations have led us to believe that friction modifications have negligible impact, the tidal body force has a very small impact, and open boundary configuration and bathymetry changes have a large impact. The open boundaries are now well optimized, and it is expected that further bathymetry improvements would decrease model-data discrepancies. We have emphasized the need for improved bathymetry in the manuscript.
Line 135-140 … how close to the island? The text seems to suggest that the model cell size may also need refining to capture the variability.
Table 3 lists all instrument positions. Distances to reefs on the GBR may be as low as 1km but this is uncertain due to bathymetry errors (see the differences between modelled and recorded depths) so we chose not to try and define ‘close’. The point is that islands or reefs are close enough to matter.
Line 155 … ‘for all the usual reasons’ might need an explanation.
We have added: “, some of which are 1) the nature of model (and observation) errors is likely to differ significantly depending on the constituent frequency and amplitude, 2) errors of the ellipse orientation are then easily distinguished from errors of the phase and major axis length, all of which impact differently on various users, 3) it is the most succinct way of describing the data set.”
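As a companion to point 2) in this added text, the sketch below converts per-constituent u,v amplitudes and phases into ellipse parameters using one common rotary-component convention (u(t) = Re(U e^{iwt}) with U = A_u e^{-i phi_u}); it is illustrative only and does not reproduce the processing used for the paper.

```python
import numpy as np

def ellipse_params(amp_u, pha_u_deg, amp_v, pha_v_deg):
    """Semi-major axis, signed semi-minor axis (+ve = anticlockwise rotation)
    and major-axis inclination (deg) from u,v constituent amplitude and phase."""
    U = amp_u * np.exp(-1j * np.radians(pha_u_deg))
    V = amp_v * np.exp(-1j * np.radians(pha_v_deg))
    wp = 0.5 * (U + 1j * V)          # anticlockwise rotary component
    wm = 0.5 * np.conj(U - 1j * V)   # clockwise rotary component
    semi_major = np.abs(wp) + np.abs(wm)
    semi_minor = np.abs(wp) - np.abs(wm)
    inclination = np.degrees(np.angle(wp) + np.angle(wm)) / 2.0 % 180.0
    return semi_major, semi_minor, inclination

# example: a 90-degree phase difference between u and v gives an axis-aligned ellipse
print(ellipse_params(0.5, 30.0, 0.2, 120.0))
```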
Line 174, the penalty function; this is dimensionally imbalanced and needs an explanation for the D/5C component.
Sorry, this was not clear, as discussed above
Many of the figures, e.g. Figure 3, include tables of percentiles. Provide a sentence explaining these.
Sorry, we thought the caption to Figure 3 was sufficient. We have added (‘%’) to explain the use of that symbol.
Similarly, some tables (e.g. Table 2), have ‘%obs’ values which need an explanation.
We’ve added: The %obs row expresses the RSS values in the line above as a percentage of the observed RSS.
Line 281 refers to sites in Banks Strait but in the table they are labelled Bass.
Table 3 is now fixed, thank you.
Line 356, spell out RIB.
We’ve changed this to ‘speedboats’
Line 380 … it would be helpful to have Broad Sound marked on Figure 11.
Done, see above
Line 384 … explain why you query the mechanical current meters.
Did you overlook the next sentence, or want it expanded on? “Due to limited storage capacity, the flow direction was only sampled instantaneously once an hour, so short-period changes of direction were not averaged.” We’ve now added “To minimise noise due to waves (i.e., rectified orbital velocities spinning the rotor even when the current velocity is zero - Griffin, 1988)…”
Griffin (1988): Mooring Design to minimize Savonius rotor overspeeding due to wave action.
Mark Lady Musgrave on a figure.
Done, see above
Line 394 … suggest changing ‘the amplitude of S2 exceeds that of M2 (barely),’ to say 'the amplitude of S2 is of similar magnitude to that of M2,'.
Good idea. We’ve changed it to “the amplitudes of S2 and M2 are nearly the same,”
Line 408 … ‘and thus underestimates the errors’. How do you know?
Fair point. Neglecting the internal tide does little damage to the depth-mean velocity. We were thinking of users who will use our prediction of the depth-mean as a prediction of the tide at all depths. We have removed “and thus underestimates the errors”.
Line 415 … Given that the official predictions are available, might be a useful addition if you did compare. Even to demonstrate the adequacy, or otherwise, of the official predictions.
We will propose this to BoM (who issue the official predictions).
Line 416+ This doesn't offer an explanation of why you think the tidal currents are poorly predicted in this region.
That is because 1) we are not sure of the reason, but have now added “It appears that this problem is largely inherited from the boundary conditions”, 2) it is a low-priority mystery, for the reason given (tidal currents are very small compared to non-tidal).
Line 430 … As we know, M4 and other higher harmonics are generated internally through non-linear model terms. Do you have anything to say on this generation mechanism within the model?
Lacking any evidence that the mechanism in the model is faithful to the real world, we’d rather not speculate on this. We’ve reworded this: “…where amplitudes up to 5.9 cm s-1 were observed (Fig. 13). Model amplitudes are comparable (up to 4.3 cm s-1) but there is not much correspondence with the observations. Given the complexity of both the observed and the modelled currents, and the relatively small contribution to the total, we can’t be confident that the modelled M4 velocities are accurate enough to warrant inclusion of these constituents when making predictions.”
Line 441 … Can I suggest rewriting this sentence ‘Over the continental shelf, this is the case for the southern half of the continent from Ningaloo Reef in the west to Fraser Island in the east, excepting Bass Strait and the South Australian gulfs (i.e. the sections where the shelf is narrow).’ as “ Over the continental shelf, this is the case for the southern half of the continent from Ningaloo Reef in the west to Fraser Island in the east (i.e. the sections where the shelf is narrow).’ Exceptions are Bass Strait and the South Australian gulfs.”
Hmm, we’re not sure that’s any better. So we’ve removed the bit in brackets, leaving “Over the continental shelf, this is the case for the southern half of the continent from Ningaloo Reef in the west to Fraser Island in the east, excepting Bass Strait and the South Australian gulfs.”
Line 480 … Whilst the focus of the paper is on tidal currents, the statement that non-tidal currents play an important role in many parts of the Australian coastal domain leads the reader to wonder whether future versions of the model will attempt to provide this missing component. In this context, lessons learnt by Wijeratne et al. (2018) may be useful. Also, some insight into what improvements are intended (or are in development) and why these are seen as improvements would be useful.
Non-tidal currents, as you know, are a totally different modelling problem, and not one that we want to discuss in this paper.
Ref: Wijeratne, S., Pattiaratchi, C., & Proctor, R. (2018). Estimates of surface and subsurface boundary current transport around Australia. Journal of Geophysical Research: Oceans, 123, 3444–3466. https://doi.org/10.1029/2017JC013221
Citation: https://doi.org/10.5194/gmd-2021-51-AC1
RC2: 'Comment on gmd-2021-51', Anonymous Referee #2, 08 Jun 2021
This paper discusses the comparison of a new tide model for the waters surrounding Australia against both tidal height and current observations, with a dedicated focus on the added value for future operational tidal current prediction (from model simulations).
The compilation of tidal observations, especially tidal currents, is rather impressive and will provide a very useful database for further studies and/or model validation. The comparisons between the model's simulations and observations are exhaustive and detailed, with a very informative focus on regions of special interest. Current data processing and inherent limitations are well presented and discussed. The figures showing model and observed current ellipses are very interesting; however, the red-coloured observed ellipses are sometimes hardly distinguishable from the background current-amplitude pixels. I might suggest showing the model grid itself in an additional figure. The same remark applies to tidal height vector errors, in addition to the superimposed modelled/observed amplitudes and phases.
The model is based on a new implementation of shallow water dynamics on an unstructured grid. As far as I understood, the COMPAS model is a local evolution of the MPAS one, or at least inspired by it. Unlike the work done on the tidal observation compilation and processing, I find the modelling work not sufficiently convincing.
My first remarks concern the model grid design and setting. The COMPAS developers chose a basically hexagonal grid (and the subsequent finite-volume discretization). Despite some flexibility to tune the model resolution, it is much less flexible than triangular element grids, especially in following the coastal geometry precisely. The authors may comment on their choice. The model resolution constraints (depth and current magnitude) are also a bit surprising to me. In tidal applications, coastal geometry complexity, tidal wavelength (theoretically related to the square root of depth, but possibly strongly controlled by local coastal geometry/dynamical resonance) and the scales of tidal current variability related to the depth slope are the most effective constraints for setting the appropriate local resolution, especially when tidal currents are specifically targeted. I'd like the authors to comment on that. The bathymetry is mostly set from the best available global datasets for Australian waters; still, I wonder about the choice to extend the uncovered areas with DBDB2, which is a rather ancient bathymetry database. The authors may comment on their choice. The setting of the minimum model depth suggests to me that wetting/drying capabilities were not available/used in the tidal simulations. This is by itself an annoying limitation, but minimum depth settings can also significantly change the model results and, in cases where the original bathymetry dataset is accurate enough, deteriorate the simulation accuracy (conversely, a 5 to 10 m minimum depth setting can help to partly compensate for bathymetry inaccuracy in nearshore regions). I'd like the authors to comment on that.
My second set of remarks concerns the tidal forcing and dissipation. First, having the best performance with the tidal potential left off is not a good indicator of model performance. Also, tidal loading and self-attraction forcing terms are not mentioned at all; I guess they are simply not considered in COMPAS. If I am right, this is a very annoying omission for accurate tidal modelling. Equally important, the barotropic tides generate internal tides when their energy fluxes propagate across the shelf slope, and are then partly dissipated by the subsequent barotropic-to-baroclinic energy conversion. This is quite a large contributor to barotropic tide dissipation, and it must be implemented through a parameterization in depth-averaged tidal models to reach the best accuracy, even at regional scales. Again, this point is not mentioned in the paper; I can only guess that such a parameterization is not available in COMPAS. Many places in Australian waters are very challenging in terms of tidal dynamics, and will require raising the COMPAS tidal capabilities to a more comprehensive level, or at least a discussion of the impact of the missing tidal ingredients. I'd like the authors to comment on these critical issues. Last but not least, the open boundary condition settings can be potentially critical to the overall simulation accuracy; their discussion in section 2 could be complemented with a domain-wide vector difference between the forcing atlas (TPXO) and the COMPAS results.
In summary, the observational and comparison sections are very informative and well organized, and I think they are fully suited for publication. Conversely, the modelling part really needs to be augmented/revised/strengthened. Consequently, I encourage the authors to make the necessary changes to the modelling sections to reach the same level of scientific value as the observational ones. In consequence, I will consider publication after a major revision of the modelling discussion, with no doubt that the authors will be successful in submitting a more appropriate version. I will be happy to review any new submission, and will provide a more detailed review on that occasion, as the present version is likely to vary significantly from the revised one.
Citation: https://doi.org/10.5194/gmd-2021-51-RC2
AC1: 'Reply on RC1', David Griffin, 01 Jul 2021
Foreword
We thank the two referees and the one community member for their thoughtful and constructive comments on our paper. We have revised our manuscript in response to referee comments as described below and think the paper is now definitely improved, and hope that the Editor invites us to submit it. Our responses to comments are below in red, with new or altered snippets of the revised paper in green.
Referee 1
This paper discusses the validation of a new tide model for the waters surrounding Australia. The model is based on a new implementation of shallow water dynamics on an unstructured grid using the EMS modeling system which they have open-sourced. The authors provide a new compilation of tidal current observations in their domain, which should be quite useful for others. They provide nuanced and intelligent discussion of their process of model development (emphasizing details such as the hand-adjustment of topography and implementation of open boundary conditions) which should also help others. They systematically discuss the model-data intercomparison, emphasizing locations where tidal currents are relatively large in comparison with sub-tidal currents, which is appropriate considering the aimed-at operational uses for the model. Overall, the authors have produced a well-organized and thoughtful comparison, with the appropriate level of detail provided, and I think this paper requires only very minor adjustments before publication.
Thank you.
Detailed notes, itemized by line number:
L14: Should this read "Rood Mean Square Error (RMSE)"? Otherwise, why captials?
No, Root Sum Square is correct, because it is over 8 constituents, and we want to know the total error. ‘(RSS)’ could be added, but it is not used again in the abstract.
L15: Two periods.
Oops. Thank you.
Up to L70: This discussion of the grid development will be useful for others. Very good.
Thank you.
L91: Indeed this is unusual, but it is an indication that you have achieved a necessary level of accuracy. Interesting.
Agreed. See below for further discussion of this point.
L100: When I first read this, I did not understand that the tidal synthesis was only used at the preliminary stage of model tuning. Later, at line 155, this is explained. I think this should be explained right away when the tidal synathesis is mentioned.
Sorry, but it seems you have overlooked lines 108-110, which says that the analyses presented in the paper use constituents analysed from a long model run. We have clarified this point by saying “These trial model runs were too short for accurate decomposition into constituents, so we assessed them against….”
L106: Capitalize "TPXO".
Oops. Thank you.
L140: This is a clear explaination of the current meters and ADCP dataset.
Thank you.
L175: Are D and C in the same units, or is C a measure of area? If you believe the model errors are related to this quantity, perhaps it would be better to pliot the error statistics as a function of J. It does not seem that this J is used later, so maybe it can be omitted.
Thank you - this was unclear (and noted by another referee). C has the same units as D. We now say: “where D is the distance (km) to the model grid point, C is the characteristic size (km) of the cell (see Fig. 1),…”. Errors are not strongly related to J, and the form of J has little impact on the average error. But if it were omitted, people would ask ‘how did you interpolate the model to the obs?’
L182: Pleaase write out the expression for the relative error that includes the sub_o velocity.
Done.
Table 1: Please format the text so that the lower parts of letters are visible. Note, for example, how the "p", "y", and "g" are truncated from several of the place names.
Done
Up to L230: This is a good overview of the errors. Appropriate detail.
Thank you
To L305: A good explanation of why the discussion focusses on only certain stations.
Thank you
L374: It would be useful to label Van Diemen Gulf; although, I guess it is the large body of water enclosing Christine Reef?
Done
Fig 11: I cannot read the place names here. Can you please label Broad Sound?
Done (mentioned current meter sites are in bigger, brighter text and Broad Sound is labelled)
L387: I think I know the location of this gauge, but I don't understand what we are supposed to observe from Fig 3.
We agree that the reference to Fig. 3 was too cryptic, and have inserted an extra Figure here in support of the comment about how the modelled tidal height compares with the observations, which, on closer inspection, is actually better than we had written before. The new text reads “The tide gauge (at McEwin Islet) near the head of the Sound (Fig. 12) suggests that the second amplification process is also quite well modelled, since the modelled M2 amplitude there is nearly (within about 10%) as great as the observed value”.
L465: Good to see this basic comparison with TPXO here. You might wish to look at Zaron and Elipot JGR 2021, who compare currents from an earlier version of this model with drifter-derived currents. Alternately, you might find drifter-derived currents are another useful validation dataset.
Thank you. We will consider using drifters and gliders for validating the next version of this model, but first we wish to get access to all the other current meter time series that exist.
L472: I don't have the expertise to comment on whether the model currents are operationally useful. Instead of saying they are "arguably" useful, it would be better if you can describe alernate viewpoints in a more detail. Are there defintions or criteria which would be useful for arguing this question? What criteria should be used to decide if a model is "good enough" to be useful for current predictions vs tidal energy site evaluation?
This is a very hard question and we are quite sure there is not a unique answer - because there are so many potential applications. So we hope that our paper will equip users to assess the adequacy themselves of our tidal model for their application, with us making as few limiting decisions as possible.
Referee 2
This paper discusses the comparison of a new tide model for the waters surrounding Australia and both tidal heights and currents observations, with a dedicated focus on future operational tidal currents prediction (from model simulations) added value.
The compilation of tidal observations, especially tidal currents, is rather impressive and will provide a very useful database for further studies and/or model validation. The comparisons between the model’s simulations and observations are exhaustive and detailed, with very informative focus on regions of special interest. Currents data processing and inherent limitations are well presented and discussed.
Thank you
The figures where model and observed currents ellipses are very interesting, however the red colored observed ellipses are sometimes hardy distinguishable of the background currents amplitude pixels.
Yes, that is true in some cases, and is why we do not rely on the reader being able to see all the observed ellipses in every Figure. To deal with this problem, we have 1) shown the comparisons at either 2 or 3 scales: national (5000km, e.g. Fig 5), regional (500km, e.g. Fig 6) and local (100km, e.g. Fig.7), 2) chosen velocity scales carefully for each Figure to reach a compromise between overlapping ellipses in strong current regions and invisible ones where the amplitude is low, 3) listed region-averaged tabulated statistics of the model-obs comparisons both on the Figures and in the Tables, and 4) listed site-specific model-obs comparisons for all current meters. We don’t think there is much more we can do without including very many local-scale Figures.
I might suggest showing the model grid itself in an additional figure.
This is what Figures 1 and 2 are.
Same remark about tidal heights vector errors in addition to the modelled/observed amplitude and phase superimposed ones.
Tidal heights are not the principal focus of this paper (as is made clear in several places, starting with the title of the paper). Nevertheless, we have included model-obs height comparisons for completeness. Fig 3 and 4 show the model-obs comparisons for amplitude and phase separately, which we think is more illuminating than showing just the vector error (the combination of both components of the error). Table 2 lists statistics of amplitude, phase and also the combined (vector) error, averaged over regions. There are too many sites to include a heights-equivalent to Table 3.
The model is based on a new implementation of shallow water dynamics on an unstructured grid. As far as I understood, COMPAS model is a local evolution of the MPAS one, or at least inspired from it. Unlike the work made on the tidal observation compilation and processing, I find the modelling work rather not sufficiently convincing.
We are sorry to hear that, and have tried to make it more convincing, without repeating too much material from Herzfeld (2020) that documented the details of the model. We have emphasized to readers (at the beginning of the Model configuration section, see below) that this paper focusses on our assessment of the model, not its construction.
My first remarks concern the model grid design and setting. COMPAS developers made the choice of a basically hexagonal grid (and subsequent finite volume discretization). Despite some flexibility to tune the model resolution, it is much less flexible than triangle element grids, especially in following precisely the coastal geometry. Authors may comment on their choice.
Sorry, we disagree. COMPAS uses the dual of a Delaunay triangulation (a Voronoi diagram). Compared to using triangles, this is less prone to spurious short wave generation on a C grid. It can boundary-fit coastlines to the same degree as triangles (rays toward infinity in the Voronoi dual are truncated to the coast). COMPAS and MPAS-O are quite different in the way the coastline geometry/discretisation is treated, with COMPAS able to conform to the shoreline directly, while MPAS-O cannot. An example of a COMPAS coastline-fitted mesh is included below.
We have added text stating that certain aspects of COMPAS differ to MPAS in that they are coastally optimized.
The model resolution constraints (depth and currents magnitude) are also a bit surprising to me. In tidal applications, coastal geometry complexity, tidal wavelength (theoretically related to square root of depth, but possibly strongly controlled by local coastal geometry/dynamical resonance) and depth’s slope related tidal currents variability scales are the most efficient constraints in setting the appropriate local resolution, especially when tidal currents are specifically targeted. I’d like authors to comment on that.
We do indeed use the sqrt(gH) wavelength in setting resolution. We also add higher resolution as distance-to-coast, so again, more agreement in terms of 'coastal geometry'. We've used the magnitude of tidal currents, rather than grad(H) as an additional refinement metric, to give more detail in the high-speed areas of particular interest. We have clarified this in the manuscript.
The setting of bathymetry is mostly set from the best available global datasets for Australian Waters, still I wonder about the choice to extend the uncovered areas with DBDB2, which is a rather ancient bathymetry database. Authors may comment on their choice.
We agree that bathymetry choice is vital to improving performance, and is a priority for future model development. To this end, we hope to capitalise on the results of the ausSeabed initiative (http://www.ausseabed.gov.au/about). We have emphasized this more in the manuscript.
The setting of the minimum model depth suggests to me that wetting/drying capabilities were not available/used in the tidal simulations. This is by itself an annoying limitation, but also minimum depth settings can significantly change the model results and, in case where the original bathymetry dataset is accurate enough, deteriorate the simulation accuracy (reversely, a 5 to 10 m minimum depth setting can help to partly compensate for bathymetry inaccuracy in nearshore regions). I’d like authors to comment on that.
We now say: “COMPAS can be run with wetting and drying activated, not only for entire water columns, but also for individual layers as sea level falls or rises. For the present application, however, neither of these capabilities were exercised to any degree; the latter because the model was run in 2D mode. Lacking adequate near-shore bathymetry for much of this large country, we chose not to attempt to properly model the tides in the inter-tidal zone, and set the minimum depth (at zero tide) to 4 m for most of the grid, but 8 m where the tides are large in the NW, NE and in Gulf St Vincent. A channel of 12 m was manually included in King Sound (in the NW) to correct an obvious error there. A similar bathymetry correction was also made in Western Port (near Melbourne). These two manual corrections had significant effect on the local tidal response, and it is anticipated that further model improvement will follow from corrections throughout the domain based on a more complete set of observations of the real topography.”
My second set of remarks concerns the tidal forcing and dissipation. First having the best performances with the tidal potential left off is not a good indicator of the model performances. Also tidal loading and self-attraction forcing terms are not mentioned at all, I guess they are just no considered in COMPAS. If I am right, this is a very annoying omission for accurate tidal modelling.
Simulations were trialled with tidal potential included (equilibrium tide + self-loading/attraction). Results were found not to differ significantly from when they were absent. There is a cost to including these terms, as computation of the right ascension of the ascending node for the moon is expensive when computed at every grid point. Any changes to the solution did not warrant this additional expense. It appears that when the ratio of open boundary length (where the tide is imposed) to surface area is large, the effect of tidal potential on the solution is diminished, with the major contributor to forcing being the boundary forcing.
We have added that self-attraction/loading was trialled, and a reference to the tidal potential method used. “Tidal potential forcing and tidal self-attraction/loading (using the method of Sakamoto et al., 2013) is optionally applied in the model but we found that it made very little difference (excepting the run time) compared with other parameters such as friction, so we have omitted it for the long (1 year) run of the model described here.”
Equally important, the barotropic tides generate internal tides when their energy fluxes propagate across the shelf slope, and then are partly dissipated by the subsequent barotropic to baroclinic energy conversion. This is a quite large contributor to the barotropic tides dissipation, and it must be implemented through a parameterization in depth-averaged tidal models to reach the best accuracy, even at regional scales. Again, this point is not mentioned in the paper, I just can guess that such a convenient parameterization is not available in COMPAS.
The model was run in barotropic mode only. Baroclinic energy conversion is currently not available in 2D COMPAS simulations. We have now mentioned in the manuscript that a 3D baroclinic version is under development, which would address these issues explicitly: “In this paper, we assess the ability of this model to simulate barotropic tides (both currents and sea level) as a first step towards a baroclinic model of the tides, and then a baroclinic model with non-tidal flows as well.”
Many places in the Australian Waters are very challenging in terms of tidal dynamics, and will require raising the COMPAS tidal capabilities to a more comprehensive level, or at least discuss the impact of the missing tidal ingredients. I’d like authors to comment on these critical issues.
We certainly agree that our diverse tidal environment provides a significant challenge, especially since the bathymetry is uncertain in places, and there are inevitably some errors remaining in both the parent model and the validation data set. The importance of baroclinic processes can not be denied either. The paper now has a new final sentence: We conclude by reminding readers that the work reported here is just an initial step towards a more complete description of Australia’s tides, which will potentially include 1) the variation in the vertical dimension of the tidal currents, 2) finer horizontal resolution, 3) more accurate sea-floor topography, 4) more accurate offshore boundary conditions, and 5) within-domain tidal potential forcing and self-attraction.
Last but not least, the open boundary condition settings can be critical to the overall simulation accuracy; their discussion in section 2 could be complemented with a domain-wide vector difference between the forcing atlas (TPXO) and the COMPAS results.
Thank you for the suggestion, but we think this comparison with another model (the one we are nesting inside), while interesting to some readers, would be a distraction from the main emphasis of the paper, which is the assessment of our model against observations. There is also the question of which version of TPXO we should compare with: the one we nest inside (1/6°), or one or more versions of the 1/30° ‘Atlas’ product. We looked at this and decided to add just a short sentence summarising the salient facts (see the paragraph at the end of section 7, now slightly edited to remind readers that tidal potential forcing is inactive in the present version of our model).
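For concreteness, the comparison the referee asks for is usually computed per constituent as the modulus of the difference between the two solutions' complex amplitudes, combined across constituents as a root-sum-square. A minimal sketch follows; the function and variable names are illustrative only and are not tied to either model's output formats.

```python
import numpy as np

def vector_difference(amp_a, pha_a_deg, amp_b, pha_b_deg):
    """Per-constituent vector difference |A_a e^{i g_a} - A_b e^{i g_b}|."""
    z_a = amp_a * np.exp(1j * np.radians(pha_a_deg))
    z_b = amp_b * np.exp(1j * np.radians(pha_b_deg))
    return np.abs(z_a - z_b)

def rss_over_constituents(per_constituent_diffs):
    """Root-sum-square of the per-constituent differences at each grid point."""
    return np.sqrt(np.sum(np.stack(per_constituent_diffs) ** 2, axis=0))
```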
In summary, the observational and comparison sections are very informative and well organized, and I think they are fully suited for publication. Conversely, the modelling part really needs to be augmented/revised/strengthened. Consequently, I encourage the authors to make the necessary changes to the modelling sections to reach the same level of scientific value as the observational ones. I will therefore consider publication after a major revision of the modelling discussion, with no doubt that the authors will be successful in submitting a more appropriate version. I will be happy to review any new submission, and will provide a more detailed review on that occasion, as the present version is likely to change significantly in revision.
We have made some small augmentations to the modelling section of this paper but, as mentioned above, we have avoided repeating too much material from Herzfeld et al. (2020), which documents the details of the model. The present paper focusses on our assessment of the model, not its construction. To clarify the scope of the paper, we have added the following text at the beginning of the Model configuration section:
As mentioned above, the work reported here was done for two reasons: 1) to identify regions where tidal currents are prospective from a renewable energy point of view, and 2) to lay the foundations of a more general-purpose national model of the tidal currents of Australia. The model we used is called COMPAS (Coastal Ocean Marine Prediction Across Scales). It is a fully non-linear 3D model that has been described in full by Herzfeld et al. (2020). In this paper, we assess the ability of this model to simulate barotropic tides (both currents and sea level) as a first step towards a baroclinic model of the tides, and then a baroclinic model with non-tidal flows as well.
Community Comment (Roger Proctor)
This paper describes the results of tidal simulations using a new unstructured grid model for Australian coastal waters, initially developed for a tidal renewable energy project. The model results, from depth-averaged simulations, are compared with observations from an unprecedented collection of tidal height and tidal current locations at which a minimum of 11 tidal constituents are available. This assembly of observed tidal constituents is valuable in its own right, and the published model tidal constituents form a useful dataset. The paper is divided into sections describing the model setup and preliminary experiments, the two observational datasets, and the model-observation analysis methodology, followed by the results and a discussion. A comprehensive set of statistics is offered, resulting in a regional approach to assessing the quality of the model results. Overall, the paper offers the reader several new perspectives: on the observation coverage of the tides around Australia; on the diversity of its tidal regimes; and on the ability of this new model to accurately represent these regimes. As such it is a valuable contribution to the journal, and the published datasets are of value to the community.
Thank you for the kind words
Some thoughts and suggested minor modifications are discussed below.
The discussion of model configuration suggests the use of the unstructured grid is a computational saving, indicating a regular grid model of similar resolution would require 1.5 million points to match the ‘mean resolution’ (not defined). This is not a large array for a simple 2D model, so the saving, if any, may not be great. The smallest cell in the unstructured mesh is ~330 m, which is relatively large for some of the areas in question. I wondered if the computational constraints of the explicit scheme were limiting the calculation.
A model using 1.5 million surface cells is tractable; however, it will always run slower than one using just 12% as many cells, all other things being equal. Given that over 70 simulations were performed during the optimization procedure using a very modest number of processors, this saving in wall-time or CPU cost is non-negligible.
Although certain regions of the model are likely under-resolved, we considered this first attempt at a national model a good balance between accurately capturing the broad tidal circulation patterns and model throughput. We have added text to this effect in the manuscript, and also added the mean distance between centres (2100 m).
Since the simulations were conducted in 2D mode, semi-implicit approaches (essentially an implicit model in 2D mode) would be expected to increase throughput due to larger timesteps. However, the semi-implicit approach does have its drawbacks, notably that it is difficult to modularize open boundary conditions so they can be ‘mixed and matched’, because these schemes are coded explicitly as source terms in the matrix inversion procedure. Such models typically offer quite a limited array of open boundary conditions, which may hinder optimization of the open boundary problem.
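The timestep constraint at issue is the external-wave CFL limit of an explicit 2D scheme, which scales with the smallest cell size. The sketch below only illustrates that scaling; the ~330 m cell size comes from the comment above, while the local depth and safety factor are assumed values, not numbers from the paper.

```python
import numpy as np

g = 9.81          # gravity [m/s^2]
dx_min = 330.0    # smallest cell size quoted in the comment [m]
h_local = 200.0   # assumed depth near the finest cells [m]
courant = 0.8     # assumed safety factor

dt_explicit = courant * dx_min / np.sqrt(g * h_local)
print(f"explicit barotropic timestep limit ~ {dt_explicit:.1f} s")
# A semi-implicit (or mode-split) treatment of the surface gravity wave is not
# bound by this limit, which is the throughput trade-off discussed above.
```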
Lines 75-80 discuss the bathymetry used, and point to the use of minimum depths, which would limit any wetting and drying and may affect results where the tidal range is large; was this tested in the preliminary experiments?
Yes – see discussion above.
Line 90+ describes the open boundary set-up, which is indeed quite unusual. A sentence or two to explain why this works would be helpful, particularly on how internally generated motions reaching the open boundary are handled.
Agreed, we now say: This situation is quite unusual, and suggests that the TPXO values at the boundary are largely in tune with the interior dynamics of the model (even though TPXO and COMPAS have their differences), obviating the need for strategies to make the boundary transmissive to outgoing signals.
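The boundary condition described here amounts to imposing an elevation synthesised from the TPXO constituents at each open-boundary node. A minimal sketch of such a synthesis is given below; nodal corrections and the actual COMPAS open-boundary machinery are omitted, and the function name and arguments are illustrative assumptions.

```python
import numpy as np

def boundary_elevation(t_seconds, omegas, amps, phases_deg):
    """eta(t) = sum_k A_k * cos(omega_k * t - g_k) at one boundary node."""
    t = np.atleast_1d(np.asarray(t_seconds, dtype=float))
    phases = np.radians(np.asarray(phases_deg))[:, None]
    amps = np.asarray(amps, dtype=float)[:, None]
    omegas = np.asarray(omegas, dtype=float)[:, None]
    return np.sum(amps * np.cos(omegas * t[None, :] - phases), axis=0)
```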
Line 100+ describes the initial experiments conducted to arrive at the finally chosen parameter settings (e.g. drag coefficient). Given that later in the paper, in discussing the results, there are several assertions as to discrepancies between model and observation, e.g. line 375, line 388, could these initial experiments offer any explanations?
There were 72 simulations performed during the optimization process. There were some step changes towards convergence to a skilful solution; using TPXO on its native grid was one such step. These optimizations have led us to believe that friction modifications have negligible impact, the tidal body force has a very small impact, and open boundary configuration and bathymetry changes have a large impact. The open boundaries are now well optimized, and it is expected that further bathymetry improvements would decrease model-data discrepancies. We have emphasized the need for improved bathymetry in the manuscript.
Line 135-140 … how close to the island? The text seems to suggest that the model cell size may also need refining to capture the variability.
Table 3 lists all instrument positions. Distances to reefs on the GBR may be as low as 1 km, but this is uncertain due to bathymetry errors (see the differences between modelled and recorded depths), so we chose not to try to define ‘close’. The point is that islands or reefs are close enough to matter.
Line 155 … ‘for all the usual reasons’ might need an explanation.
We have added: “, some of which are 1) the nature of model (and observation) errors is likely to differ significantly depending on the constituent frequency and amplitude, 2) errors of the ellipse orientation are then easily distinguished from errors of the phase and major axis length, all of which impact differently on various users, 3) it is the most succinct way of describing the data set.”
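For readers unfamiliar with point 2), the ellipse description comes from the usual rotary decomposition of each constituent's (u, v) amplitudes and phases into counter-rotating components. A minimal sketch is given below; sign and phase conventions vary between packages, and this is not the code used in the paper.

```python
import numpy as np

def ellipse_params(u_amp, u_pha_deg, v_amp, v_pha_deg):
    """Rotary decomposition of one constituent's (u, v) into ellipse parameters."""
    pu, pv = np.radians(u_pha_deg), np.radians(v_pha_deg)
    # Counter-clockwise and clockwise rotating components of w = u + i*v
    w_ccw = 0.5 * (u_amp * np.exp(-1j * pu) + 1j * v_amp * np.exp(-1j * pv))
    w_cw = 0.5 * (u_amp * np.exp(1j * pu) + 1j * v_amp * np.exp(1j * pv))
    semi_major = np.abs(w_ccw) + np.abs(w_cw)
    semi_minor = np.abs(w_ccw) - np.abs(w_cw)      # sign gives rotation sense
    inclination = np.degrees(0.5 * (np.angle(w_ccw) + np.angle(w_cw))) % 180.0
    phase = np.degrees(0.5 * (np.angle(w_cw) - np.angle(w_ccw))) % 360.0
    return semi_major, semi_minor, inclination, phase
```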
Line 174, the penalty function; this is dimensionally imbalanced and needs an explanation for the D/5C component.
Sorry, this was not clear, as discussed above
Many of the figures, e.g. Figure 3, include tables of percentiles. Provide a sentence explaining these.
Sorry, we thought the caption to Figure 3 was sufficient. We have added (‘%’) to explain the use of that symbol.
Similarly, some tables (e.g. Table 2), have ‘%obs’ values which need an explanation.
We’ve added: The %obs row expresses the RSS values in the line above as a percentage of the observed RSS.
Line 281 refers to sites in Banks Strait but in the table they are labelled Bass.
Table 3 is now fixed, thank you.
Line 356, spell out RIB.
We’ve changed this to ‘speedboats’
Line 380 … it would be helpful to have Broad Sound marked on Figure 11.
Done, see above
Line 384 … explain why you query the mechanical current meters.
Did you overlook the next sentence, or want it expanded on? “Due to limited storage capacity, the flow direction was only sampled instantaneously once an hour, so short-period changes of direction were not averaged.” We’ve now added “To minimise noise due to waves (i.e., rectified orbital velocities spinning the rotor even when the current velocity is zero – Griffin, 1988) …”.
Griffin (1988): Mooring Design to minimize Savonius rotor overspeeding due to wave action.
Mark Lady Musgrave on a figure.
Done, see above
Line 394 … suggest changing ‘the amplitude of S2 exceeds that of M2 (barely),’ to say 'the amplitude of S2 is of similar magnitude to that of M2,'.
Good idea. We’ve changed it to “the amplitudes of S2 and M2 are nearly the same,”
Line 408 … ‘and thus underestimates the errors’. How do you know?
Fair point. Neglecting the internal tide does little damage to the depth-mean velocity. We were thinking of users who will use our prediction of the depth-mean as a prediction of the tide at all depths. We have removed “and thus underestimates the errors”.
Line 415 … Given that the official predictions are available, might be a useful addition if you did compare. Even to demonstrate the adequacy, or otherwise, of the official predictions.
We will propose this to BoM (who issue the official predictions).
Line 416+ This doesn't offer an explanation of why the you think the tidal currents are poorly predicted in this region.
That is because 1) we are not sure of the reason, but have now added “It appears that this problem is largely inherited from the boundary conditions”, 2) it is a low-priority mystery, for the reason given (tidal currents are very small compared to non-tidal).
Line 430 … As we know, M4 and other higher harmonics are generated internally through non-linear model terms. Do you have anything to say on this generation mechanism within the model?
Lacking any evidence that the mechanism in the model is faithful to the real world, we’d rather not speculate on this. We’ve reworded this as: “where amplitudes up to 5.9 cm s-1 were observed (Fig. 13). Model amplitudes are comparable (up to 4.3 cm s-1) but there is not much correspondence with the observations. Given the complexity of both the observed and the modelled currents, and the relatively small contribution to the total, we can’t be confident that the modelled M4 velocities are accurate enough to warrant inclusion of these constituents when making predictions.”
Line 441 … Can I suggest rewriting this sentence ‘Over the continental shelf, this is the case for the southern half of the continent from Ningaloo Reef in the west to Fraser Island in the east, excepting Bass Strait and the South Australian gulfs (i.e. the sections where the shelf is narrow).’ as “ Over the continental shelf, this is the case for the southern half of the continent from Ningaloo Reef in the west to Fraser Island in the east (i.e. the sections where the shelf is narrow).’ Exceptions are Bass Strait and the South Australian gulfs.”
Hmm, we’re not sure that’s any better. So we’ve removed the bit in brackets, leaving “Over the continental shelf, this is the case for the southern half of the continent from Ningaloo Reef in the west to Fraser Island in the east, excepting Bass Strait and the South Australian gulfs.”
Line 480 … Whilst the focus of the paper is on tidal currents, the statement that non-tidal currents play an important role in many parts of the Australian coastal domain leads the reader to wonder whether future versions of the model will attempt to provide this missing component. In this context, lessons learnt by Wijeratne et al. (2018) may be useful. Also, some insight into what improvements are intended (or are in development) and why these are seen as improvements would be useful.
Non-tidal currents are, as you know, a totally different modelling problem, and not one that we want to discuss in this paper.
Ref: Wijeratne, S., Pattiaratchi, C., & Proctor, R. (2018). Estimates of surface and subsurface boundary current transport around Australia. Journal of Geophysical Research: Oceans, 123, 3444–3466. https://doi.org/10.1029/2017JC013221
Citation: https://doi.org/10.5194/gmd-2021-51-AC1
-
CC1: 'Comment on gmd-2021-51', Roger Proctor, 09 Jun 2021
This paper describes the results of tidal simulations using a new unstructured grid model for Australian coastal waters, initially developed for a tidal renewable energy project. The model results, from depth-averaged simulations, are compared with observations from an unprecedented collection of tidal height and tidal current locations at which a minimum of 11 tidal constituents are available. This assembly of observed tidal constituents is valuable in its own right, and the published model tidal constituents form a useful dataset. The paper is divided into sections describing the model setup and preliminary experiments, the two observational datasets, and the model-observation analysis methodology, followed by the results and a discussion. A comprehensive set of statistics is offered, resulting in a regional approach to assessing the quality of the model results. Overall, the paper offers the reader several new perspectives: on the observation coverage of the tides around Australia; on the diversity of its tidal regimes; and on the ability of this new model to accurately represent these regimes. As such it is a valuable contribution to the journal, and the published datasets are of value to the community. Some thoughts and suggested minor modifications are discussed below.
The discussion of model configuration suggests the use of the unstructured grid is a computational saving, indicating a regular grid model of similar resolution would require 1.5 million points to match the ‘mean resolution’ (not defined). This is not a large array for a simple 2D model, so the saving, if any, may not be great. The smallest cell in the unstructured mesh is ~330 m, which is relatively large for some of the areas in question. I wondered if the computational constraints of the explicit scheme were limiting the calculation.
Lines 75-80 discuss the bathymetry used, and point to the use of minimum depths, which would limit any wetting and drying and may affect results where the tidal range is large; was this tested in the preliminary experiments?
Line 90+ describes the open boundary set-up, which is indeed quite unusual. A sentence or two to explain why this works would be helpful, particularly on how internally generated motions reaching the open boundary are handled.
Line 100+ describes the initial experiments conducted to arrive at the finally chosen parameter settings (e.g. drag coefficient). Given that later in the paper, in discussing the results, there are several assertions as to discrepancies between model and observation, e.g. line 375, line 388, could these initial experiments offer any explanations?
Line 135-140 … how close to the island? The text seems to suggest that the model cell size may also need refining to capture the variability.
Line 155 … ‘for all the usual reasons’ might need an explanation.
Line 174, the penalty function; this is dimensionally imbalanced and needs an explanation for the D/5C component.
Many of the figures, e.g. Figure 3, include tables of percentiles. Provide a sentence explaining these. Similarly, some tables (e.g. Table 2), have ‘%obs’ values which need an explanation.
Line 281 refers to sites in Banks Strait but in the table they are labelled Bass.
Line 356, spell out RIB.
Line 380 … it would be helpful to have Broad Sound marked on Figure 11.
Line 384 … explain why you query the mechanical current meters. Mark Lady Musgrave on a figure.
Line 394 … suggest changing ‘the amplitude of S2 exceeds that of M2 (barely),’ to say 'the amplitude of S2 is of similar magnitude to that of M2,'.
Line 408 … ‘and thus underestimates the errors’. How do you know?
Line 415 … Given that the official predictions are available, might be a useful addition if you did compare. Even to demonstrate the adequacy, or otherwise, of the official predictions.
Line 416+ This doesn't offer an explanation of why you think the tidal currents are poorly predicted in this region.
Line 430 … As we know, M4 and other higher harmonics are generated internally through non-linear model terms. Do you have anything to say on this generation mechanism within the model?
Line 441 … Can I suggest rewriting this sentence ‘Over the continental shelf, this is the case for the southern half of the continent from Ningaloo Reef in the west to Fraser Island in the east, excepting Bass Strait and the South Australian gulfs (i.e. the sections where the shelf is narrow).’ as “ Over the continental shelf, this is the case for the southern half of the continent from Ningaloo Reef in the west to Fraser Island in the east (i.e. the sections where the shelf is narrow).’ Exceptions are Bass Strait and the South Australian gulfs.”
Line 480 … Whilst the focus of the paper is on tidal currents, the statement that non-tidal currents play an important role in many parts of the Australian coastal domain leads the reader to wonder whether future versions of the model will attempt to provide this missing component. In this context, lessons learnt by Wijeratne et al. (2018) may be useful. Also, some insight into what improvements are intended (or are in development) and why these are seen as improvements would be useful.
Ref: Wijeratne, S., Pattiaratchi, C., & Proctor, R. (2018). Estimates of surface and subsurface boundary current transport around Australia. Journal of Geophysical Research: Oceans, 123, 3444–3466. https://doi.org/10.1029/2017JC013221
Citation: https://doi.org/10.5194/gmd-2021-51-CC1