Feedback on the author’s response:
This reviewer was generally very pleased with the detailed nature of the response to this reviewer’s comments related to edge-hitting parameters and equifinality. Below are a few follow-up questions based on the author response.
“As shown in equation 48, observation error in the reflectance data was not estimated a priori based on the instrument itself, but was modeled as the residual error between the model and the data, analogous to what is done for any linear or nonlinear regression model.”….
“ Furthermore, because the variance slope and intercept are fit parameters, whose parametric uncertainty is being quantified and propagated, this makes it even less likely that our uncertainty estimate is overconfident. That said, the current approach does not formally account for any possible systematic errors in the observations, which could have a more serious impact on inferences. However, we would note that we are unaware of any derived data products that account for these systematic errors either.”
Response: This reviewer is well aware of this challenge, and recognizes that, in the absence of uncertainty estimates provided by the data product, end users are forced to make assumptions, or guess at how this may influence their assimilation. It is a bit concerning that this approach conflates potential instrument error and (known) model structural error. Perhaps use this as a recommendation, or call, for data providers to give more quantitative estimates of uncertainty.
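To make the conflation concrete: the residual-error approach the authors quote (observation-error variance modeled with a fit slope and intercept, as in their equation 48) can be sketched roughly as below. The function name and the exact linear form of the variance model are illustrative assumptions for this review, not the authors’ implementation; the point is that a single residual variance term absorbs instrument error and model structural error together.

```python
import numpy as np

def reflectance_log_likelihood(obs, pred, intercept, slope):
    """Gaussian log-likelihood where the observation-error variance is
    modeled as a linear function of the predicted reflectance, with the
    variance intercept and slope treated as fit parameters.

    Note: the residual (obs - pred) lumps instrument error and model
    structural error into one term; neither is separately identifiable.
    """
    var = intercept + slope * pred  # heteroscedastic residual variance
    if np.any(var <= 0):
        return -np.inf  # reject parameter values implying non-positive variance
    resid = obs - pred
    return float(np.sum(-0.5 * (np.log(2.0 * np.pi * var) + resid**2 / var)))
```

Under this formulation, any systematic bias in the data product simply inflates the fitted variance parameters rather than being corrected, which is why a quantitative uncertainty estimate from the data provider would be valuable.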
Specific comments on the edited manuscript, using line numbers as shown in the tracked-changes manuscript:
Line 4-7: “In addition, parameters to which vegetation models are known to be highly sensitive to…..”
Maybe simplify these 2-3 lengthy sentences to something more concise, for example: “In addition, certain parameters (e.g., SLA) that have an outsized influence on vegetation model behavior can be constrained by observations of shortwave radiation, thus reducing model forecast uncertainty.”
Line 16-17: ‘Successfully constrained’ is a bit vague; maybe say something like ‘significantly reduced the parameter uncertainty’.
Line 25: “In addition, we also highlight that our specific implementation is only valid for hemispherical reflectance data (a.k.a., albedo), whereas most surface reflectance products actually estimate the directional reflectance factor. Fortunately, the assumptions and parameters that define our hemispherical reflectance model and many others in the vegetation modeling community are readily adaptable to the prediction of directional reflectance, and we recommend that these adaptations be incorporated into the next generation of vegetation models.”
This paragraph seems a bit strange without clarification. It needs something like: “In this work the reflectance product was converted to hemispherical reflectance in order to compare directly with the model; however, in future work, we recommend that vegetation models add the capability to predict directional reflectance.”
Understandably, bringing the observations closer to the model output goes against the grain of the manuscript and tempers some of its ‘novelty’, but it is necessary.
Line 465: “our work is novel because it uses a canopy radiative transfer formulation that already exists inside the model itself.”
It’s unclear in this context whether you consider PROSPECT-5 internal or external to the model. Clearly it is internal to EDR but external to ED2. In this context you are referring to the two-stream approach within ED2 as internal to the model, with PROSPECT-5 tacked on to simulate leaf reflectance and transmittance. There is nothing ‘magical’ about being internal or external to a particular model, but the radiative transfer formulation must be internal to the data assimilation system, which in this case includes both ED2 and PROSPECT-5. I suggest reframing the internal/external terminology to mean internal to the data assimilation system; ‘internal to the model’ is a bit confusing.
Line 515-530: This is an appropriate discussion in response to edge-hitting parameters.
Line 565-585: I am generally satisfied with this explanation for how AVIRIS data is valid for comparison with reflectance simulated by EDR. I do think a schematic comparing the extra steps required to bring AVIRIS data to something resembling reflectance would be helpful. Also, bringing the observations closer to the model goes against the main advice posed by the authors, namely to include as much of the model as possible within the data assimilation system, bringing it closer to the observations.