This work is distributed under the Creative Commons Attribution 4.0 License.
A regional physical-biogeochemical ocean model for marine resource applications in the Northeast Pacific (MOM6-COBALT-NEP10k v1.0)
Abstract. Regional ocean models enable generation of computationally affordable and regionally tailored ensembles of near-term forecasts and long-term projections of sufficient resolution to serve marine resource management. Climate change, however, has created marine resource challenges, such as shifting stock distributions, that cut across domestic and international management boundaries and have pushed regional modeling efforts toward “coastwide” approaches. Here we present and evaluate a multidecadal hindcast with a Northeast Pacific (NEP) regional implementation of the Modular Ocean Model version 6 with sea ice and biogeochemistry that extends from the Chukchi Sea to the Baja California Peninsula at 10-km horizontal resolution (MOM6-COBALT-NEP10k, or “NEP10k”). This domain includes an Arctic-adjacent system with a broad shallow shelf seasonally covered by sea ice (the Eastern Bering Sea, EBS), a sub-Arctic system with upwelling in the Alaska Gyre and predominant downwelling winds and large freshwater forcing along the coast (the Gulf of Alaska, GoA), and a temperate, eastern boundary upwelling ecosystem (the California Current Ecosystem, CCE). The coastwide model was able to recreate seasonal and cross-ecosystem contrasts in numerous ecosystem-critical properties including temperature, salinity, inorganic nutrients, oxygen, carbonate saturation states, and chlorophyll. Spatial consistency between modeled quantities and observations generally extended to plankton ecosystems, though small to moderate biases were also apparent. Fidelity with observed zooplankton biomass, for example, was limited to first-order seasonal and cross-system contrasts. Temporally, simulated monthly surface and bottom temperature anomalies in coastal regions (< 500 m deep) closely matched estimates from data-assimilative ocean reanalyses.
Performance, however, was reduced in some nearshore regions coarsely resolved by the model’s 10-km resolution grid, and the time series of satellite-based chlorophyll anomaly estimates proved more difficult to match than temperature. System-specific ecosystem indicators were also assessed. In the EBS, NEP10k robustly matched observed variations, including recent large declines, in the area of the summer bottom water “cold pool” (< 2 °C) which exerts a profound influence on EBS fisheries. In the GoA, the simulation captured patterns of sea surface height variability and variations in thermal, oxygen and acidification risk associated with local modes of inter-annual to decadal climate variability. In the CCE, the simulation robustly captured variations in upwelling indices and coastal water masses, though discrepancies in the latter were evident in the Southern California Bight. Enhanced model resolution may reduce such discrepancies, but any benefits must be carefully weighed against computational costs given the intended use of this system for ensemble predictions and projections. Meanwhile, the demonstrated NEP10k skill level herein, particularly in recreating cross-ecosystem contrasts and the time variation of ecosystem indicators over multiple decades, suggests considerable immediate utility for coastwide retrospective and predictive applications.
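The EBS summer bottom-water “cold pool” (< 2 °C) discussed above reduces to a simple area integral once bottom temperature is on the model grid. A minimal sketch of that index, assuming hypothetical array names (`bottom_temp` in °C, `cell_area` in km²; neither is taken from the model code):

```python
import numpy as np

def cold_pool_area(bottom_temp, cell_area, threshold=2.0):
    # Total area of shelf cells whose summer bottom temperature is below
    # the threshold (2 degC by convention). NaN cells (land/masked) never
    # count, since NaN < threshold evaluates to False.
    mask = bottom_temp < threshold
    return float(np.sum(np.where(mask, cell_area, 0.0)))
```

Tracking this scalar through time is what allows a hindcast to be scored against the observed interannual swings in cold-pool extent.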
Status: final response (author comments only)
RC1: 'Comment on gmd-2024-195', Anonymous Referee #1, 14 Feb 2025
General Comments
The paper describes and evaluates a regional physical-biogeochemical model of the Northeast Pacific. The model is intended for use in projections and predictions related to living marine resource management. This model is very useful for fisheries applications, particularly because it provides a unified framework along the coast from the CCS to the GoA. Ecologists studying species at risk (like Pacific Salmon) often struggle to find suitable environmental data. Therefore, this model represents a substantial advance for resource management.
Specific Comments
- My main concern is that the use of coarse gridded data products for model evaluation is not ideal for a regional model of this scale. These products (e.g., WOA, CODAP-NA, OISSTv2.1) are coarser than the model being evaluated, which can make direct comparisons misleading. They are interpolated from sparse observations, which can introduce biases, particularly in regions with strong gradients (upwelling zones). As a result, the differences we see in many figures may not be due to model deficiencies. Moreover, comparisons with coarse gridded products do not highlight the added value of the model. I recommend further evaluation using ship-sampled data (e.g., CTD and bottle data) or Argo data to provide a more thorough evaluation, particularly of the biogeochemistry in the model. The use of direct in situ observations will be appreciated by ecologists who wish to use these data on the shelf.
- I recommend including a single composite metric like the Kling-Gupta efficiency (KGE; see Jackson et al. 2019, https://doi.org/10.1016/j.envsoft.2019.05.001) and its components. This single metric could be compared to other models. There are other options (e.g., the Willmott skill score), but KGE has variability as one of its components, and that is something you do not assess. I like that you consider bias separately to provide a clear, explicit measure of error, but the analysis could benefit from a holistic assessment of how the bias interacts with the variability and correlation.
- The clarity of the writing in the manuscript could be improved by rewriting several sentences that have unclear antecedents (examples listed):
- L55 “This includes [...]” suggested rewrite -> “These ecosystems include valuable fisheries that represent [...]”
- L170: “This was ...” This overmixing?
- L315: “This ...”
- L325:
- L415: “This ...” -> “This division...”
- L517 “This ...” These biases?
- L525
- L550 “This gradient?”
- L638
- L913
- L935
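The Kling-Gupta efficiency suggested above decomposes skill into correlation (r), a variability ratio (alpha), and a bias ratio (beta). A minimal sketch of the metric (the function name and interface are illustrative, not from the manuscript or from Jackson et al.):

```python
import numpy as np

def kge(sim, obs):
    """Kling-Gupta efficiency (Gupta et al., 2009) and its components.

    r     : Pearson correlation between simulated and observed series
    alpha : ratio of standard deviations (variability component)
    beta  : ratio of means (bias component)
    A perfect simulation scores KGE = 1.
    """
    sim = np.asarray(sim, dtype=float)
    obs = np.asarray(obs, dtype=float)
    r = np.corrcoef(sim, obs)[0, 1]
    alpha = sim.std() / obs.std()
    beta = sim.mean() / obs.mean()
    kge_val = 1.0 - np.sqrt((r - 1.0) ** 2 + (alpha - 1.0) ** 2 + (beta - 1.0) ** 2)
    return kge_val, r, alpha, beta
```

For example, a simulation that is perfectly correlated with the observations but doubles both their mean and their variability scores 1 − √2 ≈ −0.41, making compensating-error structure explicit in a way a bias map alone cannot.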
Technical Corrections
- Lines 60-61: consider referencing Christian and Holmes 2016 (https://doi.org/10.1111/fog.12171) and Thompson et al. 2023 (https://doi.org/10.1098/rstb.2022.0191)
- L63 and elsewhere: check that your citations are in chronological order
- L100: Revise this sentence for clarity. I find the words “have contributed to” to be unclear. Climate modes such as the NPGO and PDO result from a variety of different processes (e.g., Newman et al. 2016). They are associated with (correlated with) ecosystem regime shifts, but they are not phenomena in and of themselves and cannot, therefore, cause anything.
- L112: there is evidence that CTWs can propagate the ENSO signal to the GoA (Amaya et al. 2023; https://doi.org/10.1038/s41467-023-36567-0)
- L149 - “time step”
- L255: how long did it take for the model to “converge”? How do you know?
- L377: “We compared...” Show me, don’t tell me: what did you find?
- L404: “We also assessed the long-term trends [...]” where is this? what did you find? How did the bottle data compare to the model?
- L419 – in the caption of Fig. 1 you said that the white part was not in the computational domain. But here you say that you omit grid cells that contain only land. These can’t both be true; there are grid cells that contain both land and water.
- L501 space needed at start of paragraph
Citation: https://doi.org/10.5194/gmd-2024-195-RC1
RC2: 'Comment on gmd-2024-195', Anonymous Referee #2, 24 Mar 2025
Dear editor,
Thank you for forwarding me this paper to review.
In this paper, a biogeochemical model is developed for the Northeast Pacific region and assessed as to its suitability as a tool to assist in fisheries management.
I want to thank the authors for an interesting study and what I believe will become a useful contribution to their field. This paper is generally a solid piece of work and well-suited for eventual publication, but I have a few suggestions I would like the authors to consider before I recommend publication.
My biggest concerns are about the data products used in model-data assessments. In a few instances the authors use the same product for model initialisation, boundaries, and model assessment (GLORYS, TPXO, WOA), which is not an independent comparison. I do see these sorts of non-independent comparisons as a useful tool, as they show how faithfully the model downscales the original product. However, on their own and without more independent assessments, we don’t know whether the biases we are seeing are the model degrading the initial/boundary products or the model actually improving on them.
My next, related concern has to do with the Aleutian Islands. It seems likely to me that there are some very fine-scale processes occurring around these islands that could have a noticeable effect at the 10-20 km scales that you are assessing. If your model is higher resolution than the data products, then it is likely that your model is better capturing these processes than your data products, so you will need some higher-resolution data products to properly assess the model performance in this region. I am not too familiar with the oceanography of the region, so it could be that the authors have already considered this, but I would like to see a short discussion on how well they expect this region around the island chain to be represented in their model and in the observational datasets.
A few minor comments
There are quite a few acronyms, which reduces readability, particularly in the abstract. I suggest carefully assessing which of these are really needed and which can be written out in full.
Fig. 1 is on a different projection from the rest of the figures. I am less familiar with this region, and it made it more difficult to locate the different regions in the subsequent figures.
Lines 115-120: What sort of effect do land-use changes have on this system?
Line 200: NEP domain -> NEP10k
Line 206: Suggest also including nudging timescales here.
Lines 305-306: It seems that for both of the mixed layer depth comparisons you are using the same calculation for model and observations (which is a positive), but I suggest rephrasing to make this clearer.
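The like-for-like treatment endorsed above (one MLD definition applied to both model and observations) is easy to make explicit. A sketch using a common density-threshold criterion (the 0.03 kg m⁻³ threshold, 10 m reference depth, and function name are illustrative assumptions, not necessarily the manuscript's choices):

```python
import numpy as np

def mld_density_threshold(depth, sigma0, dthresh=0.03, ref_depth=10.0):
    # Mixed layer depth: shallowest depth at which potential density
    # exceeds its value at the reference depth by dthresh (kg m-3).
    # Applying this single routine to both model and observed profiles
    # keeps the comparison like-for-like.
    depth = np.asarray(depth, dtype=float)
    sigma0 = np.asarray(sigma0, dtype=float)
    target = np.interp(ref_depth, depth, sigma0) + dthresh
    for k in range(1, len(depth)):
        if sigma0[k] > target:
            # linear interpolation between the bracketing levels
            frac = (target - sigma0[k - 1]) / (sigma0[k] - sigma0[k - 1])
            return float(depth[k - 1] + frac * (depth[k] - depth[k - 1]))
    return float(depth[-1])  # profile mixed to its deepest sampled level
```

Any criterion works for this purpose; the point is that the same function, threshold, and reference depth are used on both sides of the comparison.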
Lines 345-350: It is hard to see where the 500 m contour is on Fig. 1, and it is hard to assess how many grid cells you have on the shelf. I suggest adding the 500 m contour to Figure 1 and including some text to say how many grid cells wide this continental shelf region is (perhaps max and min widths?). The shelf is quite narrow, and for most of your domain the area less than 500 m deep appears to be only a few grid cells wide. Is the 10 km resolution model a good enough tool to describe the coastal region? If shelf conditions are a key fisheries-critical variable, then nesting into the coastal region may be required (I am not suggesting you need to do this for the current publication). Your global observation products may also struggle to capture coastal processes. I think this paper could be complete without the coastal assessments, and a short discussion about how to approach this in future would work instead (for example, using in situ products that resolve shelf scales combined with higher-resolution models).
Line 394: What does CMEMS stand for?
Section 2.5.3: This section (and the similar results section) could be of value, as it shows some of the good features that help your model code run faster. However, as written, I don’t think it is essential to include, as some of these results will be system-specific (i.e., of interest to you, but not to a broader audience). If you include this section, then I would like to know more about where your model was run. I note that you mention the computer at the end of the paper, but this needs to come earlier. I would also like to see a description of the computing system used, as these results will likely vary across different computing architectures (e.g., inter-PE communication speeds will vary across different computers).
Line 428: How did you conclude that the 400s tracer time step was best?
Line 486: Can you indicate the Aleutian island chain on Figure 1?
Lines 500-505: I’m not convinced by this statement. It is possible that these biases represent your model improving on the TPXO dataset. In my experience (in other parts of the world), a locally produced model tends to compare better to tide gauges than TPXO for partially enclosed areas. Is there tide gauge data for this area that you can compare to? If resolution is to blame for the bias, how does the TPXO resolution compare to your model’s?
Figure 18: Note here that some of the apparently poorer comparison on the shelf relative to offshore could also be because the offshore comparisons were log-transformed.
Line 734: Suggest writing CPA in full, as it is not used often and readers will have forgotten what it stands for.
Line 879: Your Fig. 18 suggests that the coastal biases are not small.
Line 968: Quite often when you increase resolution you also need to decrease the timestep. This could potentially be much more than an 8-fold increase in computational cost!
Citation: https://doi.org/10.5194/gmd-2024-195-RC2