This work is distributed under the Creative Commons Attribution 4.0 License.
Virtual joint field campaign: a framework of synthetic landscapes to assess multiscale measurement methods of water storage
Abstract. The major challenge of multiscale measurement methods beyond the point scale is their complex interpretation in the light of landscape heterogeneity. For example, methods like cosmic-ray neutron sensing, remote sensing, or hydrogravimetry are all able to provide an integral value of water storage that is representative of their individual measurement volume. A rigorous assessment of their performance is often hindered by the lack of knowledge about the truth at the corresponding scale, given the high complexity and detail of natural landscapes.
In this study we suggest a synthetic virtual landscape that allows for an exact definition of all variables of interest and, consequently, constitutes the so-called "virtual truth", free of knowledge gaps. Such a landscape can be explored in various "virtual field campaigns" using "virtual sensors" that mimic the response and characteristics of actual devices. We use dedicated physically based models to simulate the signal a sensor would receive. These model outputs, termed "virtual observations", can be explored and also allow the reconstruction of water storage, which can then readily be compared to the "virtual truth". Insights from this comparison could help to better understand real measurements and their uncertainties, and to challenge accepted knowledge about signal processing and data interpretation.
The "Virtual Joint Field Campaign" is an open collaborative framework for constructing such landscapes. It comprises data and methods to create and combine different compartments of the landscape (e.g. atmosphere, soil, vegetation). The present study demonstrates virtual observations with Cosmic Ray Neutron Sensing, Hydrogravimetry, and Remote Sensing in three exemplary landscapes. It enables unprecedented opportunities for the systematic assessment of the sensor’s strengths and weaknesses and even their combined use.
Status: closed
RC1: 'Comment on gmd-2024-106', Anonymous Referee #1, 17 Sep 2024
OVERVIEW
The paper describes the development of a virtual framework to generate synthetic landscapes and measurement network simulations. The so-called "virtual joint field campaign" can be used to assess multiscale measurement methods of soil moisture and biomass.
GENERAL COMMENTS
The paper is well written, well structured and clear. The topic is surely of interest to the readers of Geoscientific Model Development (GMD), as the paper introduces a new virtual framework to assess measurement methods of soil moisture and biomass in controlled experiments. However, I have three major comments that, in my opinion, need to be addressed before publication.
MAJOR COMMENTS
- The virtual framework is, in principle, suitable for any measurement technique. The paper analyses three methods: cosmic-ray, remote sensing and gravimetric measurements. However, while cosmic-ray sensing is well developed and described, this is not the case for remote sensing and gravimetric measurements.
Remote sensing and gravimetric measurements are only used in the hexland_tracks experiment, whereas cosmic-ray sensing is used in all the experiments. In addition, optical remote sensing data are considered, but currently no soil moisture products from optical data are available, only results from scientific papers. The use of microwave observations (e.g. from SAR data) would have been more appropriate. I suggest that the authors restructure the text to make it more balanced. Some parts can simply be put in the supplementary material. The remote sensing case study is very weak.
- While reading the paper, I wondered how the virtual truth was developed. This only becomes clear after carefully reading the methodology; I would suggest adding a paragraph at the end of the introduction that clearly describes it. For example, I was expecting virtual experiments with time-varying soil moisture, but this is not the case. This should be mentioned, and I also wondered if this might be a strong limitation of the current design of the framework. I would suggest that the authors discuss this point.
- The second experiment (sierra-neutronica) is described only briefly. While potentially interesting, in its current version it is too sketchy. I would suggest either removing the experiment or improving its description and relevance. What understanding do we gain from such an experiment?
SPECIFIC COMMENTS (L: line or lines)
L37: For remote sensing, active and passive microwave are mentioned here. In the paper, optical remote sensing is considered, and thermal data are also used for retrieving soil moisture. Please improve the text here.
L39: Also, the gamma-ray technique is worth mentioning here.
L46: “(reference welcome)” something is missing here
L52: Just a comment: it is interesting to see that such a "virtual campaign" is similar to the concept of a "digital twin" for developing scenarios on the potential behaviour of the Earth system. The connection between the two concepts might be mentioned here.
L72: The free availability of scripts and data is very welcome, but it cannot be considered a requirement for developing a virtual landscape.
L118: The spatial scale of remote sensing “10^-1…10^2 m” is not correct; it should be, at least, “10^1…10^3 m”, if high-resolution data for soil moisture and biomass are considered.
L124: "single fixed point in time". This is mentioned here for the first time; it would likely be better to underline it earlier (see also the second major comment).
L138: The “pattern” concept is not clear here. I would suggest clarifying.
L146-151: Also here, the different combination techniques are not fully clear. I would suggest clarifying.
L281: I would add “spatial”, i.e., “its spatial variability”.
Table 2: The combination for soil moisture should be 12 (4x3), not 8. Why?
Figure 3: In the caption, “top” and “bottom” should be “left” and “right”. “tops layer” should be “top layer”.
L367-368: Indeed, no soil moisture products from optical data are currently being developed.
Figure 6: The y-label is missing in the plot on the right.
RECOMMENDATION
On this basis, I found the topic of the paper quite relevant, and I suggest a major revision before its publication in GMD.
Citation: https://doi.org/10.5194/gmd-2024-106-RC1
- AC1: 'Reply on RC1', Till Francke, 27 Sep 2024
RC2: 'Comment on gmd-2024-106', Anonymous Referee #2, 01 Nov 2024
This paper presents a framework for simulating multiscale measurements of soil moisture in a virtual landscape. The justification for the idea is very convincing and likely of interest to GMD readers. The paper is well written and structured but there are a few issues that need addressing prior to publication – I don’t believe any are particularly difficult to address.
The abstract would benefit from a sentence setting out the application context (e.g. soil moisture sensing). This is very nicely explained in the introduction, but the abstract starts on, for me, a rather technical note. Furthermore, I appreciate that the idea of the vJFC is to be quite generic (sentences around line 100), but I wonder if the title of the framework could be more specific to water storage applications? I think this would more accurately reflect what the code does at this stage – happy for you to argue against this if other applications are in the pipeline.
My main concern about the paper is that there is no overarching explanation or justification for the choice of case studies. Some are more detailed than others, and the conclusions are essentially written without reference to the case studies. I appreciate that the test cases are simply illustrations of a huge number of potential applications, but a rationale should be given for the choices, especially as some but not all of the measurement types are presented for each test case. How do the test cases come together to demonstrate the codebase effectively? And on that point, do you do any tests (e.g. unit tests) to check the code is working as expected?
I’m also not convinced I properly understood section 2.2.2 (L142). When merging, if a compartment's properties are replaced, how is that different from stacking? Could some form of visualization be added to help with understanding how the landscapes are built in 2.2.2?
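For illustration only, here is one possible reading of the stack/merge distinction raised above, sketched under the assumption that a landscape is a list of compartments, each a dictionary of gridded property fields; all names are hypothetical and not taken from the manuscript's code.

```python
# Illustrative sketch of "stacking" vs "merging" compartments; hypothetical names only.
import numpy as np

base_soil = {"soil_moisture": np.full((100, 100), 0.20)}


def stack(landscape, compartment):
    """Stacking: append a new compartment; existing compartments remain untouched."""
    return landscape + [compartment]


def merge(landscape, index, updates):
    """Merging: overwrite selected property fields of an existing compartment."""
    merged = dict(landscape[index])   # shallow copy, then replace the given fields
    merged.update(updates)
    return landscape[:index] + [merged] + landscape[index + 1:]


landscape = [base_soil]
landscape = stack(landscape, {"biomass": np.full((100, 100), 1.5)})            # adds a vegetation compartment
landscape = merge(landscape, 0, {"soil_moisture": np.full((100, 100), 0.35)})  # replaces the soil moisture field
```

In this reading, stacking adds information alongside what is already there, while merging replaces parts of it in place; a visualization of this difference could be what the comment above asks for.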
More specific comments:
I think GMD requires version numbers in the title?
L46: Check ‘(reference Welcome)’
L373: This case study is introduced due to its importance for all three observation types, but only CRNS is presented. Why not present results from all three instrument types? At the very least a clear justification for not doing this is needed, especially as the justification is currently broader than what’s presented. If the results are not particularly interesting for some reason, could they nevertheless be included in a supplement? I appreciate it’s likely not necessary to go into great detail about results from every test case, as they are primarily illustrative, but a rationale is needed for the choices about what to present (see my main comment). To be clear, I’m not advocating for more test cases and sensor examples, but the explanation of why the various virtual sensor examples have been chosen needs to be much stronger.
L485: When the ‘aspects that merit further analysis’ are presented, these should be linked, where relevant, with the case study where they emerged. This basically links in with my main critique of the paper: the purpose of the test cases and the rationale for their choice are not well summarized and then not well used to support the conclusions. The conclusions could disaggregate between perceived applications and those illustrated by case studies as a way to link the case studies and conclusions.
L510: I’d like to see the limitations around assuming no measurement error introduced before the case studies, and certainly not in the conclusions (apologies if I missed this earlier in the manuscript). I was thinking about this issue while reading the case studies.
L517: Computational expense is often mentioned as a barrier, but I don’t think estimates of the computational expense are ever given. This would be very useful practical information for anyone using the package. Do I need an HPC system for this, or how expensive is it in a single-computer context? As with the measurement error limitations, this needs presenting before the conclusions.
Citation: https://doi.org/10.5194/gmd-2024-106-RC2
AC2: 'Reply on RC2', Till Francke, 05 Nov 2024
The comment was uploaded in the form of a supplement: https://gmd.copernicus.org/preprints/gmd-2024-106/gmd-2024-106-AC2-supplement.pdf
Viewed
- HTML: 285
- PDF: 68
- XML: 170
- Total: 523
- BibTeX: 6
- EndNote: 6