the Creative Commons Attribution 4.0 License.
Improved objective identification of meteorological fronts: a case study with ERA-Interim
Philip G. Sansom
Jennifer L. Catto
Abstract. Meteorological fronts are important for their associated surface impacts, including extreme precipitation and extreme winds. Objective identification of fronts is therefore of interest in both operational and research settings. We have implemented a number of changes in a widely used objective front identification algorithm, and present the improvements associated with these changes. First, we show that changing the order of operations, from applying a mask and then joining frontal points to contouring the thermal field and then applying the mask, yields smoother fronts with fewer breaks. Next we address the selection of the identification parameters, including the thresholds and the number of smoothing passes. This allows a comparison between datasets of differing resolutions. Finally, we have made a number of numerical improvements to the implementation of the algorithm, such as more accurate finite differencing, direct calculation of the wet-bulb potential temperature, and better handling of short fronts, which yield further benefits in smoothness and number of breaks. This updated version of the algorithm has been made fully portable and scalable to different datasets in order to enable future climatological studies of fronts and their impacts.
Philip G. Sansom and Jennifer L. Catto
Status: final response (author comments only)
RC1: 'Comment on gmd-2022-255', Anonymous Referee #1, 07 Dec 2022
The authors present an updated version of an established automated front identification method geared toward reanalysis datasets that are available with ever higher spatial resolution, along with an objective calibration method for the algorithm to make the identified fronts comparable between datasets with different spatial resolutions. They make a convincing case that their adaptations improve the efficacy of the algorithm, especially for moderate-resolution datasets like ERA-Interim that are still widely used, as well as climate models run at comparable resolutions. Given the high degree of subjectivity involved in automated front identification methods (choices of variables, of the degree of smoothing, of parameter thresholds), any successful effort to introduce more objectivity is welcome, and the presented calibration method appears to work well for the datasets in question. The manuscript is well structured and well written, and in my opinion only requires minor adaptations. I therefore recommend it for publication pending minor revisions.
A list of questions, requests and recommendations is provided in the attached document.
RC2: 'Comment on gmd-2022-255', Anonymous Referee #2, 30 Jan 2023
Summary
This paper looks into the issue of simplifying and speeding up objective front identification in re-analyses, with simple application across a range of numerical model types/resolutions in mind, to facilitate intercomparison of (e.g.) resolution impact. The goals are OK, but in my view they have not really been addressed or reached in any useful or useable way. And I do not agree that the main method chosen is the right one. In tandem, the paper has many other weak areas: textual clarity, figure clarity, disingenuous scale selection, result misinterpretation, unsubstantiated claims, etc. This all means that the manuscript does not unfortunately contain sufficient new science, new methods or new results to warrant publication in my view. With regard to the claimed innovations (e.g. in the abstract) these are either minor adjustments, or have been taken from another publication without acknowledgement. This also raises questions of scientific integrity, which is clearly disappointing. There are a few areas where, with further investigative work, a wholly restructured manuscript could potentially reach a publishable standard.
Detailed comments
Major Points
- In highlighting changing the ‘order of operations’ in the abstract you are merely copying the method of Hewson (1998) (his Fig 2), without acknowledging that. Similarly the recommendation to use a contouring algorithm (L111) is what Hewson (1998) proposed and implemented, and you don’t acknowledge that either. So whilst these updates do deliver better results, the authors are wrong to imply it is their innovation. Then referencing ‘more accurate finite differencing’ in the abstract would mean, one would think, that this is something new and very different, whereas it is a second order centred finite difference, which to my mind would ordinarily be the default way to compute del-squared on a grid. This is also rather trivial compared to the extensive work done to compute derivatives correctly in Hewson (1998) – ref: P46 and appendices 1 and 2. So I can’t really see how any of these aspects can be justifiably referenced in the abstract. Likewise the other methodological changes mentioned in the abstract - direct calculation of wet-bulb potential temperature and better handling of short fronts – are to my mind small adaptations that should appear only in the main text as minor points, and not in the abstract as if they represented major progress. So virtually all the “key points” of the abstract are not key points at all, but either copies of previously published work or small algorithm changes.
- L159-170. Here you describe the new approach of making different resolution datasets comparable from a front perspective, and this lies at the heart of the paper’s aims, particularly with regard to code provision. It is therefore of pivotal importance. You opt to adjust the smoothing whilst keeping the masking thresholds the same. I fundamentally disagree with this strategy, because, as comparison of Figs 3c and 3d shows, you can easily end up ‘smoothing to death’ in a way that results in the two input fields (and therefore resulting fronts) after smoothing looking almost identical, thereby destroying the whole point of having the higher resolution in the first place. A much better, much more scientifically justifiable strategy would be to do it the other way round; limit the smoothing, but adjust the thresholds, to give somewhat similar front lengths. Then any subsequent intercomparison will show where frontal frequencies differ because of resolution impacts. In that case maybe there would need to be a bit of latitude on the amount of smoothing, perhaps a bit more for the higher resolution datasets to get rid of contaminating non-frontal noise (and there some subjectivity becomes inevitable I feel). But the very big increase from 8 smoothing passes for ERA-Interim to 96 for ERA5 you use goes well beyond, to my mind, what any such latitude should allow. One might argue 96 is on a par with the number of smoothing passes (100) used in Jenkner et al (2010). Whilst one could contest that too, importantly they were using a model with a 7km horizontal resolution, much less than the 31km of ERA5. To conclude my comments here I quote from the manuscript under review: “When comparing analyses from different weather and climate datasets, the most common approach is to interpolate all the datasets to a common resolution, usually the lowest resolution among the datasets of interest. For some features such as fronts that are more easily identified in higher resolution data, this can be limiting”. To me applying very heavy smoothing is just as limiting.
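[As a point of reference for the finite-differencing discussion in the major points above: the second-order centred five-point stencil is indeed the textbook way to evaluate del-squared on a regular grid. A minimal NumPy sketch follows; the function name and the uniform grid spacings `dx`, `dy` are illustrative choices, not taken from the paper's code.]

```python
import numpy as np

def laplacian(field, dx, dy):
    """Second-order centred five-point Laplacian of a 2-D field.

    Interior points use the standard (f[i+1] - 2*f[i] + f[i-1]) / h**2
    stencil in each direction; boundary points are left as zero.
    Illustrative sketch only, not the paper's implementation.
    """
    lap = np.zeros_like(field, dtype=float)
    lap[1:-1, 1:-1] = (
        (field[1:-1, 2:] - 2.0 * field[1:-1, 1:-1] + field[1:-1, :-2]) / dx**2
        + (field[2:, 1:-1] - 2.0 * field[1:-1, 1:-1] + field[:-2, 1:-1]) / dy**2
    )
    return lap
```

[Applied to a quadratic field f(x, y) = x**2 on a unit grid, the interior values come out as the exact second derivative, 2, which is the sense in which this stencil is "second-order accurate".]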
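[The ‘smoothing to death’ concern above can be illustrated numerically: repeated passes of any local averaging operator drive small-scale variance toward zero, so a large enough pass count makes fields of different native resolutions nearly indistinguishable. A hedged sketch using a simple 1-2-1 binomial smoother, an illustrative stand-in rather than the algorithm's actual smoothing operator:]

```python
import numpy as np

def smooth_pass(field):
    """One pass of a 1-2-1 binomial smoother in each direction
    (edge rows/columns are carried over unsmoothed in that direction).
    Illustrative stand-in for the algorithm's smoothing operator."""
    out = field.copy()
    out[1:-1, :] = 0.25 * field[:-2, :] + 0.5 * field[1:-1, :] + 0.25 * field[2:, :]
    tmp = out.copy()
    out[:, 1:-1] = 0.25 * tmp[:, :-2] + 0.5 * tmp[:, 1:-1] + 0.25 * tmp[:, 2:]
    return out

def smooth(field, passes):
    """Apply the smoother repeatedly, as in the pass counts under discussion."""
    out = field
    for _ in range(passes):
        out = smooth_pass(out)
    return out

# Small-scale structure (here, white noise) decays rapidly with pass count:
rng = np.random.default_rng(0)
field = rng.standard_normal((64, 64))
v0 = np.var(field)
v8 = np.var(smooth(field, 8))    # pass count quoted for ERA-Interim
v96 = np.var(smooth(field, 96))  # pass count quoted for ERA5
```

[The variance ordering v96 < v8 < v0 illustrates the reviewer's point: 96 passes leave far less small-scale structure than 8, which is precisely what makes heavily smoothed high-resolution fields resemble low-resolution ones.]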
Other Points
- Title: Does not seem to reflect the contents of the paper. It gives the impression, to me, that this is a case study paper, which it is not.
- L32-33: Why is placing the front in the middle of a frontal zone, and on the warm air side of it, “very similar”? Seems pretty different to me.
- L55: What is a “contemporary high resolution re-analysis”. ERA5 at 31km resolution or, say, CERRA at 5.5km? Or even higher still. I would be inclined to say that means convection-resolving, which might mean of order 2km or less. So this evidently needs clarification.
- L65: How do you use the u and v values at 850mb to compute front speed? If it is as per a previous publication then cite and say so as a minimum, but ideally expand here.
- L75-77: Hewson shows this option but then highlights the limitations of this approach for curved fronts, and accordingly discards it. Please take care not to imply otherwise.
- L75: “..as used in Berry..” – how do you know? You need to say. There is no reference in this Berry et al paper to what method they have used. The same comment applies to lines 171-172: "used repeated applications...". Again, how do you know?
- L78 & L84 & L91: “For a one-dimensional front (Type 1 front in Hewson, 1998)” would be better than “in one dimension”.
- L91: Not necessarily 1/root(2).
- L91-93: Sentence means nothing. Please re-write from scratch.
- L97: Why the superscripted T?
- L109: Degrees of what?
- L117: “Moderately high resolution analyses such as ERA-Interim” – by today’s standard this is low resolution. See point 3 also.
- L118: “..is often narrow, frequently only one grid box wide..”. I suspect this would apply across many resolutions. If you don’t agree you have to provide clear evidence to justify this statement.
- L110-125: This is very jumbled. It looks from the figure like you are using contouring-based colour-fill to mask out potential fronts that don’t meet the masking criteria, but in the discussion you focus on using contouring to represent the locating diagnostic. The reader is left confused. And for a fair comparison – versus Berry et al – surely you should include the “search radius larger than one gridlength” on Fig. 1?
- Figure 1b: This is nothing new. It looks basically the same as in Hewson (1998). If you think otherwise then a detailed and convincing description of why it is different needs to be added.
- L128: “key parameters” is vague. Need to be much more specific; presumably you mean “tuning thresholds for masking diagnostics”?
- L135: “local minima and maxima”. This may be noise, or it may be a function of you having used the Hewson (1998) del-squared (eqn 5) approach, which is known to amplify frontal curvature, rather than the Hewson (1998) eqn 6, which does not. Needs further investigation and comment.
- L139-141: Also discussed in the following reference (which is listed in the Jenkner paper you quote). So please acknowledge. - Hewson TD. 2001. Objective identification of fronts, frontal waves and potential waves. In Cost Action 78 Final Report – Improvement of Nowcasting Techniques, Lagouvardos K, Liljas E, Conway B, Sunde J (eds). European Commission EUR 19544. Cambridge University Press: Luxembourg: 285–290. At the same time a simple illustration of the ‘cusp’ behaviour you describe as smoothing passes increase would add quite a lot I think.
- L144-145: There seems to me to be overall as much seasonal variation as there is latitudinal variation (whilst quantifying any difference is of course difficult given the different units, I am referring to differences across the range of values encountered for the two metrics, latitude and season).
- L146: “relatively constant” – in time or in space?
- L148-152: 25th and 50th percentiles seem rather arbitrary values; they also feel higher than I would have expected – giving 25% and 50% acceptance rates across the domain which is a lot. It would be nice to get a fuller picture of different percentile behaviours in some way, with also some example map plots of the diagnostics in two or more colours to show where the proposed criteria are satisfied and how they relate to the input fields. Use of percentile references is one of the genuinely newer parts of the paper and you should expand by showing more data and more related discussion.
- L155-157: discussion of K2 = 0 in the Berry paper and the current paper is muddled and not possible for me to follow. One aspect is that (so far as I can tell) Berry et al did not use the second mask, which K2 refers to, so to say they set K2=0 is a bit misleading.
- Figure 2 caption: Why zonal TFP? What does this mean? Surely you should use the full TFP here?
- Figure 2: Poor colour selection. Yellow is a bad colour to choose on a white background, and red and orange are a bit too similar for my liking.
- Figure 3c: there are two of these.
- L177-178: “is valuable due to the small scale of the quantities of interest, e.g. K1=….” This statement makes no sense to me. In what way is quoting a threshold indicative of small scale?
- L195-196: “…tends to relate to pre-frontal troughs…”. No it doesn’t. These are generally dynamically inert humidity gradients on the leading edge of the warm conveyor belt. They are discussed in Hewson (1998) and Hewson and Titley (2010). The latter paper indicates that, for operational implementation purposes, a third front mask based on theta only (so no humidity impact) is included with the aim of erasing these features. The authors might like to consider improving their study, and the resulting climatological frequencies, by doing the same.
- L201-202: The sentence spanning these lines does not reflect, at all, what the figure 4d shows.
- L202: “far southern ocean” – in large part this is actually sea ice.
- Figures 4d-f and discussion thereof: It would be far more informative for the reader to see percentage change in frequency on these plots, and have that discussed instead.
- Figure 4: Colour schemes are poor. One should be able to read off values without manual counting. If you are going to use colours why use monochrome red shades – the whole spectrum is available to you.
- L214: “Climatology” means “climatological quantile values”.
- Figs 6 and 7: Again a poorly chosen colour scheme. Only reds are used, which is bad as in EE above, besides which the scale selected highlights very little on panels 6a, b, c. The frequencies of cold and warm fronts would be of fundamental interest and utility, but this is all but lost due to the scales and colours selected.
- L219-229: Yes, interesting, but are such aspects not already in the Berry paper(s)?
- L221: Gradients? Of what? Do you mean light winds?
- L227: “Somewhat surprisingly” – it would be helpful and probably illuminating in the context of this discussion to look at seasonally average SST contours and gradients (and sea ice distribution).
- L235: It is vital to understand what you mean by aggregated. There are several possible meanings and because you don’t say it’s impossible for me to comment on Fig 8 or your inferences from that.
- L238: “Increased ability to resolve the required derivatives”. I have no idea what this means.
- L240: “where there fronts” is bad English
- L241: “a result of the increased resolution”. Really? But with extreme smoothing you make the input fields look virtually the same (Fig 3c and d)? And if what you state were the case then why would an increase not be seen elsewhere?
- L245: What do you mean by stable? Is this something to do with convection? And how does quasi stationary front frequency increase as a result? I don’t understand that.
- L247-248: What does this sentence mean? The scale on Figure 10 is very different to the scale on Figure 7, which is worrying in itself, and besides which there is so much smoothing applied for ERA5 that I don’t think you could legitimately say anything about the resolution impact anyway.
- L250: “clearly visible”. Yes, of course it will be if the scale has been adjusted in such a way as to make it more visible than on the counterpart plot (Fig. 7)! And what about other SST gradient regions – edge of Kuroshio etc?
- L258: “single core of a 2 year old laptop” is not a very professional or durable way to describe computational requirements.
- L265: This sounds like an unsubstantiated comment and should be removed or demonstrated.
- L279-280: “Modest performance increases”. I don’t know what the basis for saying this is.
- L286: “No performance benefit”. Likewise what is the metric used here? This needs to be introduced much earlier, and substantiated, rather than being just dropped into the conclusions from nowhere. For a broadscale picture the del-squared approach you have used is simpler and of itself is likely to be adequate in my view, because errors arising from exaggerated frontal curvature that you get with del-squared will probably not be so critical as they might be in real-time forecasting applications, but you don’t discuss this at all.
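[On the percentile-based thresholds queried at the L148-152 point above, a minimal sketch of the idea follows. This is illustrative only: the paper derives its thresholds from climatological quantiles rather than a single field, and the exact acceptance convention is precisely what the reviewer asks to be spelled out. Keeping points at or above the pth percentile of a diagnostic accepts roughly (100 - p)% of the domain:]

```python
import numpy as np

def percentile_mask(diagnostic, pct):
    """Retain grid points where the masking diagnostic is at or above
    the given percentile of its own values. Illustrative toy version of
    the percentile-threshold idea; not the paper's implementation."""
    threshold = np.percentile(diagnostic, pct)
    return diagnostic >= threshold

# Example on a random 10x10 diagnostic field (hypothetical data):
rng = np.random.default_rng(1)
diag = rng.random((10, 10))
mask25 = percentile_mask(diag, 25)  # keeps ~75% of points
mask50 = percentile_mask(diag, 50)  # keeps ~50% of points
```

[Plotting such masks over the input fields, as the reviewer requests, would show directly how the chosen percentiles translate into domain-wide acceptance rates.]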
Citation: https://doi.org/10.5194/gmd-2022-255-RC2
EC1: 'Comment on gmd-2022-255', Jatin Kala, 09 Mar 2023
In reviewing the manuscript, I found a result which is very intriguing. The authors have acknowledged this result, but not really explained or carried out the further analysis that I think is warranted. I refer to this result:
"Somewhat surprisingly, cold fronts are slightly more common though less widely distributed in the Southern Hemisphere during southern summer (DJF, Figure 7(a)) than in southern winter (JJA, Figure 7(c))." This is indeed very surprising, and runs counter to intuition. When I focus on the region of southwest Western Australia (where I live and regularly check MSLP charts), the analysis shows a higher frequency of cold fronts in DJF as compared to JJA. I find this very odd, and would like the authors to dig a little further.
Regions with Mediterranean climates, such as southwest WA, get most rainfall in winter (JJA), and the heaviest rain events are most commonly associated with cold fronts. Yet Figure 7 suggests there are more cold fronts in summer than winter, which is very counter-intuitive. It is generally accepted that cold fronts bring rain, and it would not be unreasonable to assume, at least from first principles, that where you have more frequent cold fronts one might expect more rainfall. Your results suggest the opposite.
Citation: https://doi.org/10.5194/gmd-2022-255-EC1
Viewed
HTML | PDF | XML | Total | BibTeX | EndNote
---|---|---|---|---|---
554 | 171 | 14 | 739 | 10 | 5