This is a timely attempt at overhauling the UWG model. The topic is highly relevant, in particular for practitioners (architects, urban planners) wishing to incorporate urban microclimatic effects in their building energy models. Therefore, the effort of the authors is to be commended.
Generally speaking, the manuscript is not yet ready for publication. Several of the points raised in the previous review round have not been addressed convincingly in the current version, and doing so may require major rework. More fundamentally, the validity/superiority of the proposed VCWG model over the original UWG model is not clearly established, despite the improvements proposed in this revised version—most notably, the incorporation of the Monin-Obukhov parameterization in the rural model.
1. New rural model: Why do the authors think that the Monin-Obukhov rural model is superior to the one in their original manuscript? This change seems to be mainly triggered by the first-round review comments highlighting “unjustified parameters” incorporated in the rural model of the previous manuscript. However, this is a major change, and the transition from one parameterization to the other merits detailed discussion and justification in the final manuscript. Both parameterizations are imperfect, and the choice of one versus the other should be based on ultimate model accuracy, rather than the satisfaction of a review comment.
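For concreteness, the core of the Monin-Obukhov parameterization at issue is the stability-corrected logarithmic wind profile. The sketch below is my own illustration, not the authors' implementation; the Businger-Dyer stability functions and all numerical values are assumptions on my part:

```python
import math

def psi_m(zeta):
    """Integrated Businger-Dyer stability correction for momentum.
    zeta = z / L with L the Obukhov length: unstable for zeta < 0,
    stable for zeta > 0 (assumed forms, for illustration only)."""
    if zeta < 0:  # unstable
        x = (1.0 - 16.0 * zeta) ** 0.25
        return (2.0 * math.log((1.0 + x) / 2.0)
                + math.log((1.0 + x * x) / 2.0)
                - 2.0 * math.atan(x) + math.pi / 2.0)
    return -5.0 * zeta  # stable

def wind_speed(z, u_star, z0, L, kappa=0.4):
    """Mean wind speed at height z from the MOST similarity profile:
    u(z) = (u*/kappa) [ln(z/z0) - psi_m(z/L) + psi_m(z0/L)]."""
    return (u_star / kappa) * (math.log(z / z0) - psi_m(z / L) + psi_m(z0 / L))

# Illustrative values: u* = 0.3 m/s, z0 = 0.1 m, weakly unstable (L = -50 m)
print(round(wind_speed(10.0, 0.3, 0.1, -50.0), 2))  # ≈ 3.11 m/s
```

The point of the illustration is that the profile depends on the surface fluxes only through u* and L, which is precisely why the quality of the flux estimates, rather than the elegance of the similarity theory, determines the ultimate accuracy of the rural model.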
2. Assumption of constant specific humidity in the vertical direction in the rural model
* The authors’ replies to both reviewers’ comments on this topic are not convincing. Their reply to the first reviewer is that the assumption is valid as long as vapour pressure is below its saturation value, and they proceed to show that this condition is indeed verified, at least over the limited two-week period of analysis. Their reply to the second reviewer, meanwhile, seems to accept the reviewer’s viewpoint that even if this condition is verified, it does not constitute a sufficient basis for the validity of said assumption. The authors’ final rationale seems to be that there is no other feasible way to approach the matter: “This assumption is made, for lack of a better assumption”.
* Why not use the Monin-Obukhov parameterization also for humidity? The authors mention the lack of surface latent flux measurement in the EPW file, but a basic soil water diffusion model similar to the one implemented in ENVI-met could overcome this problem. Of course, precipitation measurements would be required. The authors mention that this measurement is often missing in EPW files, but I don’t think that the authors should limit their methodology on the basis of such considerations. Even if precipitation happens to be missing, daily or monthly values are generally not difficult to obtain even for the most remote locations, and are probably sufficient to feed said soil model. Such a model would also help with the necessary incorporation of the evapotranspiration phenomenon (see my comment below).
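The kind of soil model I have in mind need not be elaborate. A minimal bucket-type soil moisture accounting, driven by daily precipitation and potential evaporation, would suffice to close the surface latent flux. All parameter values below are hypothetical, for illustration only:

```python
def bucket_soil_moisture(precip_mm, pet_mm, capacity_mm=150.0, w0=75.0):
    """Daily bucket model: soil water w (mm) is filled by precipitation
    and depleted by evaporation scaled with relative saturation; storage
    is capped at capacity (excess treated as runoff). Returns the daily
    actual-evaporation series (mm/day), from which a surface latent heat
    flux can be estimated."""
    w, evap = w0, []
    for p, pet in zip(precip_mm, pet_mm):
        e = pet * (w / capacity_mm)                # evaporation limited by soil water
        w = min(max(w + p - e, 0.0), capacity_mm)  # update and cap storage
        evap.append(e)
    return evap

# Illustrative run: one rain day between two dry days, constant PET
evap = bucket_soil_moisture([0.0, 5.0, 0.0], [3.0, 3.0, 3.0])
```

Daily evaporation converts directly to a mean latent heat flux (1 mm/day corresponds to roughly 28–29 W/m²), which is what the rural similarity model needs.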
3. Lack of a proper evapotranspiration model to account for the evaporative cooling provided by vegetation: this was a major shortcoming of the original UWG and does not seem to have been addressed in this updated version (or at least, it is not mentioned in the manuscript).
4. Validation procedure and model accuracy
* Why not use one year instead of two weeks? The main advantage of UWG-like models is their ability to conduct annual analyses. The validation should therefore also be conducted on an annual time scale. A validation period of two weeks is more appropriate for mesoscale or CFD models.
* Why not use the highly reliable and comprehensive Basel (BUBBLE) or Toulouse (CAPITOUL) observations, which would also make comparison to the original UWG more straightforward? The authors instead prefer to undertake their own measurement campaign in Guelph, which is limited in duration to only two weeks.
* The average bias of the reference variables is actually quite high (−1.43 K, 1.06 m/s, 5 g/kg). The temperature bias in particular is of the same magnitude as the UHI intensity. The temperature RMSE is also quite high at 1.56 K. When it comes to UHI intensity, what is lacking is the RMSE between measured and modelled values—i.e., a measure of the goodness of fit. Calculating the standard deviation of UHI intensity with respect to its own average is not informative. Furthermore, using the proximity of the standard deviations of UHI measurements and UHI model predictions (respectively 1.23 K and 1.53 K) as an indication of the accuracy of the model is questionable.
* After the cursory and unconvincing model validation, the study attempts a sensitivity analysis (“model exploration”, Section 3.2), which is now conducted using Vancouver rural weather data. The most important sensitivity analyses, those pertaining to plan area index, frontal area index, leaf area density, building energy configuration and radiation configuration, are, again, conducted over a clearly insufficient period of two weeks. This is all the more surprising given that, for the sensitivity analysis, no urban measurements are required and computation time is not a major issue. In Sections 3.2.5 and 3.2.6 the authors consider model variability for different seasons and different locations. This time the model is simulated for a full year, which shows that computation time is not an issue. Given the weakness of the model validation (my comments above) and of the methodology underlying the sensitivity analysis, I shall not discuss the outcome of the sensitivity study in any detail.
* Please explain the values of C_k (= 2 for unstable, = 1 for stable conditions). They do not seem to be taken from the cited paper by Nazarian et al. (2019); in that paper, the product C_k·l_k is parameterized, not C_k separately.
* Why is the waste heat fraction set to 0.3? Please provide a justification for this value or conduct a sensitivity analysis. This parameter can have an important impact on UHI.
* The urban and rural measurement stations (both within the University of Guelph campus) are quite close to each other, separated by about 2 km. Please explain why the rural station is not more distant. Also, the rural station is northeast of the urban station, i.e., downstream of it given that the predominant wind direction is from the west/southwest. Usually, an upstream rural station is preferred.
* Why do you combine the Guelph rural station data with that of London, Ontario? What do you mean by “combine”? There is also mention of “assembled EPW dataset” which is even more puzzling.
* The HMP60 is a Vaisala sensor, not a Campbell Scientific one.
* Please explain why an average building height of 20 m was selected. This seems quite high for that urban location.
* Similarly, a plan area index of 0.55 (page 17) seems high. That urban area contains many empty (green) spaces. Additionally, this plan area index value largely exceeds the maximum value considered in the CFD-based parameterization of Nazarian et al. (2019), which this paper seems to use extensively, so the authors probably had to extrapolate Nazarian’s parameterization. This deserves some discussion. Incidentally, the plan area index value in Table 1 is given as 0.44 (exactly the maximum value considered by Nazarian) while the frontal area index becomes 0.55. Which is correct?
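To make the goodness-of-fit point above concrete: two series can share nearly identical standard deviations while disagreeing strongly point by point, which is exactly why the RMSE between measured and modelled UHI intensity must be reported, and why the proximity of the two standard deviations proves little. A minimal illustration with made-up numbers (not the paper's data):

```python
import math

def rmse(modelled, measured):
    """Root-mean-square error between paired modelled and measured values."""
    return math.sqrt(sum((m - o) ** 2 for m, o in zip(modelled, measured))
                     / len(measured))

def std(x):
    """Standard deviation of a series about its own mean."""
    mean = sum(x) / len(x)
    return math.sqrt(sum((v - mean) ** 2 for v in x) / len(x))

# Same spread, anti-correlated time behaviour (illustrative numbers):
measured = [1.0, 2.0, 3.0, 2.0, 1.0]
modelled = [3.0, 2.0, 1.0, 2.0, 3.0]
print(round(std(measured), 2), round(std(modelled), 2))  # 0.75 0.75
print(round(rmse(modelled, measured), 2))                # 1.55
```

The standard deviations match perfectly, yet the model is wrong at almost every time step; only the RMSE (or a similar paired statistic) exposes this.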
In conclusion, the present manuscript does not succeed in unambiguously establishing the superiority of the proposed model over the original UWG. A direct comparison of the performance of the two models is not provided. Without a doubt, the original UWG methodology presents shortcomings that need to be addressed in a demonstrably superior way. There is a key sentence in the abstract: “The results obtained from the explorations are reasonably consistent with previous studies in the literature, justifying the reliability and computational efficiency of VCWG for operational urban development projects”. Rather than being “reasonably consistent with previous studies”, the authors should demonstrate that their approach is superior. This is something that still remains to be established.