A Bayesian method for predicting background radiation at environmental monitoring stations
Abstract. Detector networks that measure environmental radiation serve as radiological surveillance and early warning networks in many countries across Europe and beyond. Their goal is to detect anomalous radioactive signatures that indicate the release of radionuclides to the environment. Often, the background ambient dose equivalent rate Ḣ*(10) is predicted using meteorological information. However, in dense detector networks the correlation between different detectors is expected to contain markedly more information. In this work, we investigate how the joint observations by neighbouring detectors can be leveraged to predict the background Ḣ*(10). Treating it as a stochastic vector, we show that its distribution can be approximated as multivariate normal. We reframe the question of background prediction as a Bayesian inference problem including priors and likelihood. Finally, we show that the conditional distribution can be used to make predictions. To perform the inferences we use PyMC. All inferences are performed using real data for the nuclear sites in Doel and Mol, Belgium. We validate our calibrated model on previously unseen data. Application of the model to a case with known anomalous behaviour – observations during the operation of the BR1 reactor in Mol – highlights the relevance of our method for anomaly detection and quantification.
Status: final response (author comments only)
RC1: 'Comment on gmd-2024-137', Anonymous Referee #1, 12 Sep 2024
The authors address an interesting topic by developing a Bayesian method aimed at the prediction of background in atmospheric monitoring. The paper is well-written and the proposed methodology is clear. What I see as the main drawback of the paper is that it focuses mainly on local monitoring networks, which rules out application at a larger scale. I suggest the authors reformulate the title/abstract/introduction of the paper so that it clearly states that the aim is local networks. This limitation is logical since the method is purely statistical, with no additional information from the environment such as terrain or dispersion modelling, as also mentioned by the authors in the conclusion, which is appreciated by this reviewer. Also, the authors made several strong assumptions that need further clarification, see the following comments.
Major comments:
line 72: the first paragraph of Section 2.1 motivates the paper regarding data and monitoring networks. Are these networks rather standard in other facilities or are they Belgium-specific? This should somehow be understood from the very beginning of the abstract/introduction.
line 97: without any further comment, the authors selected specific periods of data; however, it is not clear why the authors made this choice. Do the periods cover "standard" periods or do they have specific reasons? Please, clarify these choices.
line 160: The assumption of independence between S and R seems wrong for the defined model. The matrix R is defined based on \sigma_l (line 137) while the matrix S has \sigma_l elements on its diagonal (line 138). Hence, this assumption is not justified and should be either reformulated or omitted. However, this assumption seems crucial for further estimation. Please, comment on this or reformulate the estimation procedure.
line 222: I understand the assumption of Gaussianity on line 129 due to the tractability of the model. However, the assumption on line 222 is quite difficult to follow and accept since it seems very strong to the reviewer considering the complexity of the atmospheric environment. The errors may be huge and to accept such an assumption seems (maybe) possible for concrete and very compact networks, but it is not general.
Minor comments:
line 3: the notation H*(10) is used in the abstract but it is defined later. Please, define it here or remove it from the abstract.
line 21: ...what a normal really means. - add a
line 64: ...how how the Bayesian... - remove how
line 134: Subsubsect. --> Sect.
eq. (3): here, \Sigma_{lm} is defined using R_{lm} which is defined, again, using \Sigma_{lm}. Please, clarify.
line 144: I suggest stating that the \mathcal{M} is the vector here for clarity and better understanding.
eq. (5, 7, 8): please, use brackets together with exp function for clarity.
line 171: I suggest defining LKJ correlation distribution since it is not standard and general knowledge. Also, its property related to the choice of \eta equal to 1 should be discussed.
line 186: I suggest removing the computer specifics since they are not of interest, computational complexity is not studied here.
eq. (10): use \widehat instead of the \hat symbol.
Citation: https://doi.org/10.5194/gmd-2024-137-RC1
AC1: 'Reply on RC1', Jens Peter K.W. Frankemölle, 18 Dec 2024
Dear referee,
Thank you for taking the time to study our manuscript in detail and for making valuable comments. In this reply, we provide responses on a comment-by-comment basis.
Kind regards,
on behalf of all the authors,
Jens Peter Frankemölle
The authors address an interesting topic by developing a Bayesian method aimed at the prediction of background in atmospheric monitoring. The paper is well-written and the proposed methodology is clear. What I see as the main drawback of the paper is that it focuses mainly on local monitoring networks, which rules out application at a larger scale. I suggest the authors reformulate the title/abstract/introduction of the paper so that it clearly states that the aim is local networks. This limitation is logical since the method is purely statistical, with no additional information from the environment such as terrain or dispersion modelling, as also mentioned by the authors in the conclusion, which is appreciated by this reviewer. Also, the authors made several strong assumptions that need further clarification, see the following comments.
Our model is only valid at a rather local scale, as you rightly point out, and we concur that the title might suggest otherwise. We have therefore changed the title to "A Bayesian method for predicting background radiation at environmental monitoring stations in local-scale networks".
Major comments:
(1.1) line 72: the first paragraph of Section 2.1 motivates the paper regarding data and monitoring networks. Are these networks rather standard in other facilities or are they Belgium-specific? This should somehow be understood from the very beginning of the abstract/introduction.
Such networks are in fact rather standard across Europe as can be seen on the EURDEP map (https://remap.jrc.ec.europa.eu/Advanced.aspx) which we also reference in the introduction (lines 15 & 16 of the original manuscript).
We believe, however, that the short statement made at the beginning of the abstract (lines 1 & 2 of the original manuscript) that "Detector networks that measure environmental radiation serve as radiological surveillance and early warning networks in many countries across Europe and beyond" as well as the longer statement at the beginning of the introduction (lines 13 through 17 of the original manuscript) that "Networks that measure environmental radiation are operational in countries across Europe and beyond. Such networks monitor the environment for aberrant radioactivity that could, e.g., indicate the anomalous release of radionuclides from a nuclear facility. Within Europe, observations of national networks are collected on the EUropean Radiological Data Exchange Platform EURDEP (Sangiorgi et al., 2020), including those of the Belgian radiological surveillance network and early warning system Telerad (Sonck et al., 2010)" cover this point to satisfaction.
We do grant that we did not refer back to these statements in Sect. 2.1 (line 72). The revised manuscript, however, includes the following statement: "Such a network is not unique to Belgium, as described in the Introduction (cf. Sect. 1), but the Belgian network is among the densest networks in the world. Hourly values for the IMN and IMA, dating back many years, are also publicly available on a Belgian national platform (Telerad, 2024) similar to EURDEP (European Commission, 2024)." In the Introduction (line 16 of the original manuscript), we also added a direct reference to the EURDEP website, in addition to the existing Sangiorgi et al. (2020) paper that describes the platform, so that readers may more easily access EURDEP.
(1.2) line 97: without any further comment, the authors selected specific periods of data; however, it is not clear why the authors made this choice. Do the periods cover "standard" periods or do they have specific reasons? Please, clarify these choices.
The window from 6 through 13 August was selected initially (sometime during 2023) because we were considering the SCK CEN context. Data selection was subject to two conditions: the BR1 should not be running and it should not be raining. Thankfully, in the July–August period of the preceding year (2022), during which the BR1 is not operated, it did not rain for eight days straight, so we selected this period. On the left, that period was bounded by an episode of rain; on the right, by the reactor starting up again. The two other periods were chosen several weeks after this initial period to check the quality of the calibrated model at a later time, one with the reactor not running and the other with the reactor running.
We have described this rationale in the revised manuscript (line 98 of the original manuscript).
(1.3) line 160: The assumption of independence between S and R seems wrong for the defined model. The matrix R is defined based on \sigma_l (line 137) while the matrix S has \sigma_l elements on its diagonal (line 138). Hence, this assumption is not justified and should be either reformulated or omitted. However, this assumption seems crucial for further estimation. Please, comment on this or reformulate the estimation procedure.
There are actually two layers to this comment that we would like to disentangle. One relates to (A) an ambiguity in Eq. 3 that you also point out in comment (1.9), and the other to (B) the role of the prior information in estimating the posterior distribution.
(A) For one, we agree that there is some ambiguity in the text following Eq. 3. There is apparent circular reasoning in Eq. 3 and the following text. Line 138 of the original manuscript correctly defines the matrix S in terms of σ_l. Meanwhile, line 137 of the original manuscript wrongly seems to suggest that we also use σ_l to define R_{lm}. In reality, this is not the case. The matrix R is directly inferred from the measurements M, along with S and μ, as is encoded in the formulation of the posterior: f(μ,S,R|M) ∝ f(M|μ,S,R) f(μ,S,R).
We have resolved this by dropping the "Σ_{lm}/(σ_l σ_m)" part from line 137 of the original manuscript, so that it no longer seems as if R is defined in terms of σ_l.
(B) Regardless of the discussion above, however, the fact that we factorise the prior in Eq. 6 under the assumption of a priori independence between μ, S and R is quite acceptable. Independent priors for μ, S and R do not imply independent posteriors, i.e. f(μ,S,R) = f(μ)f(S)f(R) does not imply f(μ,S,R|M) = f(μ|M)f(S|M)f(R|M). In fact, posteriors will usually be joint distributions with, in principle, non-zero covariances across the board. So the assumption made in line 160 – relating to the prior rather than the posterior – is actually not wrong.
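To make this concrete, a minimal PyMC sketch of such a model could read as follows. This is an illustration only, not the exact specification used in the manuscript: the prior families, hyperparameters and the input file name are assumptions.

    import numpy as np
    import pymc as pm

    # Hypothetical input: an N x k matrix of dose-rate measurements (nSv/h),
    # one column per detector; the file name is an assumption for illustration.
    M = np.loadtxt("dose_rates.csv", delimiter=",")
    N, k = M.shape

    with pm.Model() as background_model:
        # Independent prior on the mean vector mu (illustrative hyperparameters).
        mu = pm.Normal("mu", mu=M.mean(axis=0), sigma=10.0, shape=k)

        # LKJCholeskyCov combines an independent prior on the per-detector scales S
        # (sd_dist) with an LKJ prior on the correlation matrix R; eta=1 makes all
        # valid correlation matrices a priori equally likely. It returns the
        # Cholesky factor of Sigma = S R S.
        chol, corr, stds = pm.LKJCholeskyCov(
            "chol", n=k, eta=1.0,
            sd_dist=pm.Exponential.dist(1.0),
            compute_corr=True,
        )

        # Multivariate normal likelihood; sampling then yields the joint posterior
        # f(mu, S, R | M), whose components are in general no longer independent.
        pm.MvNormal("M_obs", mu=mu, chol=chol, observed=M)

        idata = pm.sample(1000, tune=1000, chains=4)

In a sketch like this, independent priors are placed on μ, on the scales (via sd_dist) and on R (via the LKJ part), while the multivariate normal likelihood couples them, so that the sampled posterior is a joint distribution.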
(1.4) line 222: I understand the assumption of Gaussianity on line 129 due to the tractability of the model. However, the assumption on line 222 is quite difficult to follow and accept since it seems very strong to the reviewer considering the complexity of the atmospheric environment. The errors may be huge and to accept such an assumption seems (maybe) possible for concrete and very compact networks, but it is not general.
The assumption that we make in line 222 is actually not as strange on closer inspection, when considering the way in which we split up the measurement vector M in Eq. 1, i.e. M = H + E, where H relates to all the correlated effects due to – as you rightly point out – the complexity of the atmospheric environment, and E to the instrumental error, which in turn is mainly determined by counting statistics. Since the radiation detectors are very precise – the 1-sigma counting error for the spectroscopic detectors is on the order of 0.5 nSv/h – the correlated effects are typically larger.
So the only assumption that we really make is that the noise E can be ignored for the predictive estimation. The atmospheric complexity, meanwhile, does not enter through E but through H, and is retained. Under the assumption that these atmospheric processes drive changes in the entire local network at the same time – which we elaborate on in the Introduction to the revised manuscript (cf. response 2.2 to the other reviewer) – we can capture quite significant excursions of the dose rate. This holds even to the extent, as can be seen in Figs. 5 and 6, that we are able to model a rain peak simply as an 'atmospherically driven error'.
We have added a comment to that effect directly succeeding the statement in line 222 of the original manuscript.
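For completeness, the predictive step alluded to above uses the standard conditioning identity for the multivariate normal distribution. In generic notation (ours, not the manuscript's equation numbering), partitioning H into an unobserved block H_a and an observed block H_b = h_b, with matching blocks of μ and Σ,

H_a | H_b = h_b ~ N( μ_a + Σ_ab Σ_bb^{-1} (h_b − μ_b), Σ_aa − Σ_ab Σ_bb^{-1} Σ_ba ),

so the conditional mean is shifted according to the jointly observed detectors and the conditional covariance shrinks accordingly.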
Minor comments:
(1.5) line 3: the notation H*(10) is used in the abstract but it is defined later. Please, define it here or remove it from the abstract.
We duly corrected this to "background ambient dose equivalent rate \dot{H}*(10)" in line 3.
(1.6) line 21: ...what a normal really means. - add a
We did not incorporate this remark. This sentence should be read as an inquiry into the meaning of the adjective "normal" in the preceding excerpt "normal behaviour", so inclusion of an indefinite article is not warranted on this occasion.
(1.7) line 64: ...how how the Bayesian... - remove how
We duly removed this.
(1.8) line 134: Subsubsect. --> Sect.
We duly corrected this on line 134 of the original manuscript, as well as in the numerous other locations where 'Subsect.' and 'Subsubsect.' appear throughout the manuscript.
(1.9) eq. (3): here, \Sigma_{lm} is defined using R_{lm} which is defined, again, using \Sigma_{lm}. Please, clarify.
We refer back to our response to comment (1.3), which describes the resolution of this comment.
(1.10) line 144: I suggest stating that the \mathcal{M} is the vector here for clarity and better understanding.
\mathcal{M} is in fact an N x k matrix. We duly clarified this in line 144 by writing: "Given a dataset \mathcal{M} = [M_1, ..., M_N], with \mathcal{M} an N x k matrix of measurements by the entire network of k detectors described in Sect. 2.1 at N different points in time, and given the random variables of interest described in Sect. 2.2.1, we can write ..."
(1.11) eq. (5, 7, 8): please, use brackets together with exp function for clarity.
Brackets were added accordingly.
(1.12) line 171: I suggest defining LKJ correlation distribution since it is not standard and general knowledge. Also, its property related to the choice of \eta equal to 1 should be discussed.
We replaced Eq. 9 from the original manuscript by the formulation of the LKJ distribution function given in the original work by Lewandowski et al. (2009). This results in Eqs. 9–11 in the revised manuscript. Additionally, we included Eq. 12 in the new manuscript, which is the special form of the LKJ distribution function for η = 1, from which it becomes clear that the distribution no longer depends on the correlation matrix R, i.e. that all R are equally likely. We also included a reference to the work by Barnard et al. (2000) to motivate the decomposition of Σ into SRS (both here and at the first mention of the decomposition at the end of Sect. 2.2.1).
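For readers of this discussion: the LKJ density in question is, up to a normalizing constant that depends only on η and the matrix dimension (Lewandowski et al., 2009),

f(R | η) ∝ det(R)^{η − 1},

so that for η = 1 the density is constant, i.e. all valid correlation matrices R are a priori equally likely, which is the special case written out as Eq. 12 of the revised manuscript.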
(1.13) line 186: I suggest removing the computer specifics since they are not of interest, computational complexity is not studied here.
While we agree that the computer details are not of central importance to the present work, we do not feel that their presence is detrimental to the quality of the manuscript. In addition, we think that it is interesting for readers to know that this type of modelling can be done using a PC-grade CPU and RAM rather than requiring cluster computations.
(1.14) eq. (10): use \widehat instead of the \hat symbol.
We duly corrected the occurrences of the regular hat appearing over the \mathcal{M} in favour of the proposed \widehat.
Citation: https://doi.org/10.5194/gmd-2024-137-AC1
-
RC2: 'Comment on gmd-2024-137', Anonymous Referee #2, 28 Nov 2024
The study proposes a new Bayesian inference method for predicting background radiation. This method considers the correlation between different detectors. The authors use observations from two nuclear sites in Belgium to validate and verify the new method and investigate its potential application in radiation anomaly detection and quantification. The manuscript is well-organized, but the authors could improve the language further. I also have some detailed comments below.
Line 3: Please define H*(10) at its first appearance, the same as BR1 in Line 11.
Lines 21-22 and 34-36: Do you mean the normal equals background radiation? According to the descriptions in the second paragraph, precipitation and cosmic radiation, which are background factors, can suddenly change the dose equivalent rate much. Are they considered normal or abnormal? Please clarify the concepts (normal, background, and anomalous).
Line 64: Delete the second “how”
Line 98: Delete “for”
Line 149: Which four?
Line 155: This assumption limits the application of the method to situations with significant temporal variations, which is consistent with the statement in Line 165. But Figure 2 shows the excellent performance of the method even if apparent temporal variations exist. Please explain it.
Lines 206-208: This is what I am very concerned about. Under such an application, we must know which detectors are influenced by the local radiation source. As the example shown in Section 5, the results will be entirely different if you select different sectors to construct the Bayesian model. Please discuss the limitations.
Line 274: What do you mean by an order of magnitude more uncertainty? Compared to even-numbered stations? Does Figure 4 show that?
Line 287: Could you please provide some statistics to verify it (variance is overestimated)?
Citation: https://doi.org/10.5194/gmd-2024-137-RC2
AC2: 'Reply on RC2', Jens Peter K.W. Frankemölle, 18 Dec 2024
Dear referee,
Thank you for taking the time to study our manuscript in detail and for making valuable comments. In this reply, we provide responses on a comment-by-comment basis.
Kind regards,
on behalf of all the authors,
Jens Peter Frankemölle
The study proposes a new Bayesian inference method for predicting background radiation. This method considers the correlation between different detectors. The authors use observations from two nuclear sites in Belgium to validate and verify the new method and investigate its potential application in radiation anomaly detection and quantification. The manuscript is well-organized, but the authors could improve the language further. I also have some detailed comments below.
We took your language suggestions into account (see below).
(2.1) Line 3: Please define H*(10) at its first appearance, the same as BR1 in Line 11.
We duly corrected this to "background ambient dose equivalent rate \dot{H}*(10)" in line 3.
(2.2) Lines 21-22 and 34-36: Do you mean the normal equals background radiation? According to the descriptions in the second paragraph, precipitation and cosmic radiation, which are background factors, can suddenly change the dose equivalent rate much. Are they considered normal or abnormal? Please clarify the concepts (normal, background, and anomalous).
In the first paragraph of the Introduction, we start out from the dichotomy of 'normal' versus 'anomalous' to ask the question: "What is normal in our context and what is not?" In the second paragraph we then list the factors that you mention – precipitation, cosmic radiation – and several others. In lines 34 and 35 we then conclude that "we refer to the sum of these processes as background radiation". So, as you rightly point out, we equate 'normal' with 'background' at that point. Afterwards, we drop the term 'normal' entirely and only refer to either 'background' or 'anomalies'.
In the revised manuscript, we added the following sentence at the end of the second paragraph (after line 35 of the original manuscript) to make this clear: "In the rest of this work, we will exclusively refer to background radiation to mean these normally occurring processes and anomalous radiation to be everything other than these normally occurring processes."
What we try to show in this work is in fact that also the (sudden) changes in these background processes can be modelled using spatial correlations, because they are largely constant over the scale of the local network. Meanwhile, processes like the BR1 emitting Ar-41 are not constant over the scale of the network and can hence be detected as anomalous. This is the point that we are trying to make throughout the rest of the introduction and, in fact, the rest of the work. However, we agree that this is actually only made clear in the method section (lines 125 through 133 of the original manuscript), which will be confusing to the readers. To get this point better across, we have now modified the penultimate paragraph of the introduction, replacing lines 53 through 55 by:
"In the current work, we present a Bayesian inference framework for the estimation of the background ambient dose equivalent rate observed in densely packed local detector networks. We assume that the processes that drive changes in the background occur at a scale that is larger than the typical scale of the local networks under consideration, and model the response to such an external driver by looking at the effect that it has on all detectors. What sets our work apart from other work is the fact that we allow for correlations between the different detectors in the network, so that the external driver does not necessarily affect all detectors equally. The Bayesian approaches..."
(2.3) Line 64: Delete the second “how”
We duly removed this.
(2.4) Line 98: Delete “for”
We removed this accordingly.
(2.5) Line 149: Which four?
The four distributions refer to the posterior, likelihood, prior and evidence, but we see how the statement in the original manuscript could have been confusing. We rewrote the entire paragraph, replacing lines 149 through 151 of the original manuscript by: "Strictly speaking, subscripts are required to indicate that Eq. 4 involves four different distributions. Instead, we denote all four distribution functions – of the posterior, likelihood, prior and evidence – as f to avoid cluttering the equations. Their varying arguments are, after all, sufficient to tell them apart."
(2.6) Line 155: This assumption limits the application of the method to situations with significant temporal variations, which is consistent with the statement in Line 165. But Figure 2 shows the excellent performance of the method even if apparent temporal variations exist. Please explain it.
Line 155 of the original manuscript was – if not factually wrong – at least cutting corners somewhat. What we really mean to say is that we are simply not interested in representing any deterministic relationship (causality) in the time series. In essence, we treat time as another random variable that we ‘integrate out’ through marginalization.
We can illustrate this for one detector. As described in Eq. 1, the measured dose, represented by the stochastic variable (SV) M ~ f(m), is the sum of two SVs H ~ f(h) and E ~ f(e). Setting E = 0 for this example (i.e. M = H), we may introduce time t as a realization of some fourth SV T ~ f(t). In the most extreme situation of perfect correlation in time of H, we know that there is some function H = g(T) that transforms T to H. From statistics, we know that we may obtain the distribution of H as follows:
f(h) = ∫ f(h|t) f(t) dt = ∫ δ(h-g(t)) f(t) dt.
Thus even for a perfectly correlated signal, we may define the distribution H ~ f(h) as we do in the manuscript when we consider the time series as a distribution f(t) from which we may randomly draw samples. The main assumption that we need to make is that the time series is sufficiently long that the function g(T) covers all the possible realizations of H. As you have noticed, the posterior predictive in Fig. 2 confirms that this is in fact a suitable assumption to make.
In the revised manuscript, we fleshed out the original statement in line 155 to better reflect the above.
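As a toy numerical illustration of the marginalization argument above (the sinusoidal form of g, the sampling interval and all numbers are assumptions for illustration only):

    import numpy as np

    rng = np.random.default_rng(0)

    def g(t):
        # Hypothetical, perfectly time-correlated background signal (nSv/h):
        # a diurnal oscillation around 90 nSv/h, purely for illustration.
        return 90.0 + 5.0 * np.sin(2.0 * np.pi * t / 24.0)

    # The 'recorded' time series: hourly values over one week.
    t_grid = np.arange(0.0, 7 * 24.0)
    h_series = g(t_grid)

    # Treating time as a random variable T, uniform over the calibration window,
    # the samples h_i = g(t_i) are draws from the marginal distribution f(h).
    t_samples = rng.choice(t_grid, size=10_000, replace=True)
    h_samples = g(t_samples)

    # The recorded series and the marginal samples share the same distribution,
    # which is the sense in which the time series defines f(h).
    print(h_series.mean(), h_series.std())
    print(h_samples.mean(), h_samples.std())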
(2.7) Lines 206-208: This is what I am very concerned about. Under such an application, we must know which detectors are influenced by the local radiation source. As the example shown in Section 5, the results will be entirely different if you select different sectors to construct the Bayesian model. Please discuss the limitations.
You are correct. In our current implementation, the selection of the detectors still requires 'expert knowledge'. We added a paragraph between the first and second paragraphs of Sect. 2.3 to discuss this (directly after line 208 of the original manuscript), which states:
"This does, however, still require `expert knowledge' in the sense that the model itself does not know which detectors are affected. Physics wise, this is related to the fact that the model is, by construction, time independent (cf. Sect. 2.2.2. Correlations in time of the individual detectors, a strong drop in which could point at atypical excursions of the dose rate (like the ones discussed later in Sect. 5) are not taken into account. While there may be engineering solutions even within the frame of the currently discussed method that could allow for this method to autonomously detect anomalies, consideration of those is beyond the scope of this work"
(2.8) Line 274: What do you mean by an order of magnitude more uncertainty? Compared to even-numbered stations? Does Figure 4 show that?
The even-numbered stations are equipped with spectroscopic detectors that – at low doses – calculate the ambient dose equivalent by integrating over the full gamma energy spectrum. These detectors have better counting characteristics than the cheaper Geiger–Müller (GM) detectors that are used in the odd-numbered stations. Likely, the remaining 'uncertainty' in the background on both occasions is driven by counting statistics. So yes, we compared the uncertainty between odd- and even-numbered detectors, which is shown by the relatively large size of the shaded area for the odd-numbered compared to the even-numbered detectors. We have made this somewhat more explicit in the text, replacing lines 274 through 276 of the original manuscript by:
"Meanwhile, the odd-numbered detectors have almost an order of magnitude more uncertainty compared to the even-numbered detectors – likely owing to considerably worse counting statistics – in their model `predictions', which is in excellent agreement with the actual spread in the Telerad time traces, which is evidenced by the fact that the shaded areas in Fig. 4 (which represents the uncertainty of the measurements) are much larger for the odd- than for the even-numbered detectors."
(2.9) Line 287: Could you please provide some statistics to verify it (variance is overestimated)?
Certainly. Since the signal is not steady-state, however, we cannot simply calculate the standard deviations and compare them. Instead, we decided to calculate one-hour running variances for the Telerad data. From this time series of Telerad variances, we then calculated the mean variance as well as the standard deviation of that mean variance. We did the same for the model-predicted variances, starting at time stamp 100 to avoid the period with rain. Doing so, we obtain:
Time-averaged variance of the observations: (0.15 ± 0.10) nSv² h⁻²
Time-averaged variance of the predictions: (0.21 ± 0.01) nSv² h⁻²
So we indeed see that the variances (and by extension the standard deviations) are slightly overestimated by the model. We included this quantitative comparison directly after line 287 of the original manuscript.
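A sketch of how such a comparison can be computed is given below, assuming 10-minute Telerad data, hypothetical file and column names, and trimming both series after time stamp 100 (our reading of the comparison above):

    import pandas as pd

    # Hypothetical inputs: observed Telerad dose rates (assumed 10-minute resolution)
    # and model-predicted standard deviations, both indexed by time stamp. File and
    # column names are assumptions for illustration.
    obs = pd.read_csv("telerad_obs.csv", index_col=0, parse_dates=True)["dose_rate"]
    pred_sd = pd.read_csv("model_pred.csv", index_col=0, parse_dates=True)["sd"]

    # One-hour running variance of the observations.
    obs_var = obs.rolling("1h").var()

    # Summarise from time stamp 100 onwards to avoid the rain episode
    # (here applied to both series).
    obs_var = obs_var.iloc[100:]
    pred_var = (pred_sd ** 2).iloc[100:]

    print(f"observations: {obs_var.mean():.2f} +/- {obs_var.std():.2f} nSv^2 h^-2")
    print(f"predictions:  {pred_var.mean():.2f} +/- {pred_var.std():.2f} nSv^2 h^-2")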
Citation: https://doi.org/10.5194/gmd-2024-137-AC2