This work is distributed under the Creative Commons Attribution 4.0 License.
A Twenty-Year Analysis of Winds in California for Offshore Wind Energy Production Using WRF v4.1.2
Abstract. Offshore wind resource characterization in the United States relies heavily on simulated winds from numerical weather prediction (NWP) models, given the lack of hub-height observations offshore. One such NWP data set used extensively by U.S. stakeholders is the Wind Integration National Dataset (WIND) Toolkit, a 7-year time-series data set produced in 2013 by the National Renewable Energy Laboratory. In this study, we present an update to that data set for offshore California that leverages recent advancements in NWP modeling capabilities and extends the period of record to a full 20 years. The data set predicts a significantly larger wind resource (0.25–1.75 m s−1 stronger), including in three Call Areas that the Bureau of Ocean Energy Management is considering for commercial activity. We conduct a set of yearlong simulations to study factors that contribute to this increase in the modeled wind resource. The largest impact arises from a change in the planetary boundary layer parameterization from the Yonsei University scheme to the Mellor-Yamada-Nakanishi-Niino scheme and their diverging wind profiles under stable stratification. Additionally, we conduct a refined wind resource assessment at the three Call Areas, characterizing distributions of wind speed, shear, veer, stability, frequency of wind droughts, and power production. We find that, depending on the attribute, the new data set can show substantial disagreement with the WIND Toolkit, thereby driving important changes in predicted power.
This preprint has been withdrawn.
Interactive discussion
Status: closed
RC1: 'Comment on gmd-2021-50', Anonymous Referee #1, 19 Apr 2021
General comments:
This paper deals with the differences between modelled datasets produced by two different model setups. I would expect two different setups to produce different results; however, it is currently unclear which of these setups is better, as a comparison with observations is not yet available. Regrettably, the performance of a PBL scheme near the surface (e.g. when compared to buoys) is not indicative of its performance at hub height. Moreover, even if we look at the verification results for the buoys (Table A1), it is hard to argue that one setup is better than the other. The paper argues that the differences in wind-speed results from the different PBL schemes can be explained by differences in the frequency of the atmospheric stability classes. Do we know which of the PBL schemes provides a better (closer to observations) description of stability? Not at this point, regrettably. In summary, I am afraid that the lack of comparison with hub-height observations diminishes the applicability of the conclusions drawn in this paper.
Specific comments (major):
- Changes in the MYNN PBL scheme: “The WIND Toolkit was developed using a 7-year (2007–2013) simulation with WRF 3.4.1. CA20 builds upon this by using WRF 4.1.2 across a 20-year period (2000–2019).” (Lines 76-77). CA20 uses the MYNN parametrization scheme, whereas the WIND Toolkit uses the YSU scheme (Lines 85-89). The problem is that the MYNN scheme underwent significant changes in WRF version 3.7, and indeed the authors acknowledge this (Line 192). What I do not understand is this: if the WRF version is one of the factors analyzed in the sensitivity study, why aren't the changes in the MYNN parametrization scheme also included in the sensitivity analysis (as a separate parameter)? There are no methodological difficulties; one would just need to run both WRF versions with the MYNN scheme instead of the YSU scheme, as was already done for the latter. The reason I would like to see such an analysis is that the changes in the MYNN scheme can lead to significantly worse verification results at hub height when compared to observations (see Figure 8, Section 5.3 in Hahmann et al. 2020).
- Low-level jet: I am not a specialist in the wind climate of North America, but the coastal low-level jet along the coast of California seems to be a well-known phenomenon. Could it be associated with the large differences in results seen during May–July? I understand that an investigation into meteorological processes is beyond the scope of this paper, but a link to the low-level jet (or the lack thereof) could help interpret the results, especially as the low-level jet is linked to upwelling, which the authors already link to the strongly stable atmosphere (Line 176). Furthermore, the presence of low-level jets and an understanding of their typical height can help with the interpretation of the shear results (Figure 13).
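For context on the shear discussion above: the shear exponent α referred to in Figure 13 is conventionally obtained from a power-law fit between two heights. A minimal sketch follows; the function name and the 10 m / 100 m heights are illustrative, not taken from the paper:

```python
import math

def shear_exponent(u_lower, u_upper, z_lower=10.0, z_upper=100.0):
    """Power-law shear exponent alpha between two measurement heights (m).

    Heights and the function name are illustrative, not from the manuscript.
    """
    return math.log(u_upper / u_lower) / math.log(z_upper / z_lower)

# Example: a profile constructed to have alpha = 0.14 between 10 m and 100 m.
alpha = shear_exponent(8.0, 8.0 * (100.0 / 10.0) ** 0.14)
```

A low-level jet complicates exactly this picture: above the jet nose, wind speed decreases with height, so α computed across the jet can be near zero or negative even in otherwise strong winds.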
Specific comments (minor):
- Figures 2–5 show 100 m winds, while Figure 6 speaks of hub-height winds. Does “hub height” mean “100 m” in Figure 6?
- “At all three sites, the relative frequency distribution of hub-height wind speeds bears resemblance to the Weibull distribution (Fig. 10)” (Line 251). I would argue that these distributions, especially those seen in Figure 10a, are very different from the Weibull distribution. That the distributions differ from the Weibull distribution is not bad in itself; it is simply a feature of the region. But if the authors would like to claim closeness to the Weibull distribution, then I would ask them to fit the data to a Weibull distribution, estimate the coefficients, and show the fitted distribution in the figure.
- Figure 14 is very hard to interpret, because the eye is drawn to the distribution of wind directions (the wind rose) and it is hard to distinguish between the α values inside each sector. Maybe the α distributions for only certain key sectors could be shown, plotted the same way as in Figure 13?
- The wind-drought results, especially at Humboldt, are quite counterintuitive. CA20 shows much higher wind speeds on average, but the number of wind droughts also seems larger, at least for droughts that are 6–12 h long. Maybe the authors would like to comment on this?
- If the authors would like to stress the differences in model performance between regions (description of Table A1), I would suggest plotting the bias for each station on a map as a larger circle colored according to the bias. That would aid comprehension of the results.
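The Weibull fit requested above is a standard maximum-likelihood estimate. A minimal sketch with SciPy, using synthetic wind speeds in place of the actual Figure 10 data (the sample parameters are assumptions for illustration only):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Synthetic hub-height wind speeds (m/s); stands in for the real time series.
speeds = rng.weibull(2.2, size=10_000) * 10.0

# Two-parameter Weibull fit (location fixed at 0, as is usual for wind speed).
shape, loc, scale = stats.weibull_min.fit(speeds, floc=0)

# The fitted pdf could then be overlaid on the histograms of Fig. 10.
x = np.linspace(0.0, speeds.max(), 200)
pdf = stats.weibull_min.pdf(x, shape, loc=loc, scale=scale)
```

With the fitted shape and scale in hand, a goodness-of-fit measure (or simply the visual overlay) would let readers judge how "Weibull-like" each site's distribution actually is.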
Citation: https://doi.org/10.5194/gmd-2021-50-RC1
AC1: 'Reply on RC1', Alex Rybchuk, 23 Jun 2021
Dear Reviewer 1, thank you for taking the time to review our manuscript and thank you for the insightful feedback. In order to help simplify the review process, we have attached our feedback in the supplement. There, you will find your original comments as well as our responses marked in red.
RC2: 'Comment on gmd-2021-50', Anonymous Referee #2, 21 Apr 2021
Review of "A Twenty-Year Analysis of Winds in California for Offshore Wind Energy Production Using WRF V4.1.2" by Alex Rybchuk et al., GMD-2021-50
The manuscript describes a new model-generated dataset of wind resources for the California coast and compares it to an older dataset used by the wind industry. The manuscript is well written and well organized. It presents many interesting statistics of the wind climatology in these two datasets, e.g., the climatology of wind shear, wind veer, and "wind droughts". There are no observational datasets of wind in this region above the surface (buoys); thus, the comparison is made purely between two model-generated datasets. One could probably argue that the most recent one is more accurate, but the manuscript shows no evidence that this is true. The appendix contains a short evaluation against buoy data, but the authors well know that this is insufficient, because the different surface and PBL schemes could give very different wind profiles (see, e.g., Draxl et al. 2014). My usual question for this type of manuscript is: "what new information does the manuscript provide that will help the scientific community in future investigations?" I cannot find any. Two datasets are compared; they differ in many aspects, but they cannot guide future WRF simulations for wind resource assessment. The information is perhaps valuable for wind farm developers and policymakers but, in my opinion, not to the reader of GMD.
Therefore, my recommendation is that the manuscript is rejected for publication in GMD but perhaps transferred to Wind Energy Science.
Also, I have a few more editorial comments:
- Please revise the figure captions. Most figure captions need further clarification. Many of them lack information on the averaging period. Also, please add (a), (b) labels to all sub-panels. These labels are a requirement from GMD.
- Abstract L5-6: "The data set predicts a significantly larger wind resource (0.25–1.75 m s−1 stronger)". Since the units are m s−1, this is wind speed, not wind resource.
- Page 4, bottom of page: "CA20 applies spectral nudging on a 6-km domain every 6 hours". This statement is incorrect. In WRF, the nudging terms are applied at every time step, but the tendencies used to compute this term could come from 6-hourly data.
- Page 10, L180. Is the MYNN simulation used as the basis for the stability classes? I think this needs to be clarified.
- L210: You write, "The updated product contains higher horizontal resolution (31 km vs 79 km), higher temporal resolution (1 hour vs 6 hours)". But I understand that you use the 6-hour ERA5 data for the nudging, so this fact is inconsequential.
- I don't see the point of including section 3.4.6. The different factors are clearly interrelated and nonlinear. So why analyze their sum?
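The nudging behaviour discussed in the Page 4 comment above can be made concrete with a namelist sketch. This is a hedged illustration of the relevant WRF `&fdda` options, not the actual CA20 configuration; the 360-minute interval is an assumption based on the 6-hourly ERA5 forcing mentioned in the review:

```
&fdda
 grid_fdda        = 2,                    ! 2 = spectral nudging
 gfdda_inname     = "wrffdda_d<domain>",  ! per-domain analysis tendencies file
 gfdda_interval_m = 360,                  ! input fields every 360 min (6 h)
 fgdt             = 0,                    ! 0 = apply nudging at every time step
/
```

This is why "nudging every 6 hours" is a misleading description: the 6-hour value sets only the interval of the input analyses, while the nudging term itself is evaluated at every model time step.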
References:
Draxl et al. 2014: Evaluating winds and vertical wind shear from Weather Research and Forecasting model forecasts using seven planetary boundary layer schemes. Wind Energy https://doi.org/10.1002/we.1555
Citation: https://doi.org/10.5194/gmd-2021-50-RC2
AC2: 'Reply on RC2', Alex Rybchuk, 23 Jun 2021
Dear Reviewer 2, thank you for taking the time to review our manuscript and thank you for the insightful feedback. In order to help simplify the review process, we have attached our feedback in the supplement. There, you will find your original comments as well as our responses marked in red.
Model code and software
Optis, Mike, Rybchuk, Alex, Bodini, Nicola, Rossol, Michael, and Musial, Walt: Namelists for the CA20 Dataset and Figure Notebooks, https://doi.org/10.5281/zenodo.4597548