NUREG-1507, New Section 9 - Final Report 091225
9 MINIMUM DETECTABLE CONCENTRATION FOR CONTINUOUSLY COLLECTED DATA

9.1 Introduction

The calculation of an a priori minimum detectable concentration (MDC) for continuously collected data (CCD) requires considerations and evaluations different from those assumed for the two-stage scanning process, as described in Section 3, for surveys with vigilance. As described in NUREG-1575, Revision 2, Multi-Agency Radiation Survey and Site Investigation Manual (MARSSIM), issued 2025, a two-stage scan survey with vigilance calls for pausing or stopping to investigate further when audio click data from a ratemeter indicate potential areas of concern. CCD can be described as data collected in real time at designated, nearly regular time intervals and geospatial distances. As applied for radiation scanning measurements, the geospatial (location) and temporal (date and time) information accompanies each recorded measurement. For CCD, there is no assumed surveyor vigilance. Without surveyor vigilance, there is no response to increased clicks and no stopping to count longer to ensure full-scale response or further scanning to identify the area of highest measurement. The recorded data alone are the basis for determining the scan MDC.
This section describes the process for establishing an a priori MDC for CCD, based on surveys performed in areas with characteristics similar to the areas for which the evaluation or investigation is being planned. Establishing an a priori MDC ensures that the design and conduct of scanning surveys using CCD for areas of the survey units will meet the defined objectives (i.e., being able to detect levels of contamination above background levels at an acceptable fraction of identified limits or regulatory criteria).
Postprocessing CCD scan measurements for a survey area, as opposed to a reference area, serves as the primary method for identifying areas for follow-up investigation. Section 6.3 describes different statistical methods that may be used for this survey unit postprocessing.
However, if instrumentation and methods are not capable of achieving a suitable MDC for CCD, then survey results may be inadequate. To this end, the a priori MDC is primarily a planning tool, a Data Quality Objective (DQO), to ensure that the selection of scanning instrumentation and the determination of scanning methods will provide appropriate detection capability to support the final status survey and dose compliance evaluations as required for license termination.
The two-stage scanning process described in Section 6 for surveys conducted with surveyor vigilance is characterized by (1) a surveyor listening to audible click data and (2) the surveyor pausing to count longer if an audible increase in counts occurs. This two-stage technique enables the surveyor to identify a finite area of elevated activity by audio response, with follow-up focused scanning measurement(s) to identify and quantify a steady-state detector response in standard units of measurement, such as counts per minute (cpm). For CCD, there is no assumption of the surveyor's attentiveness to identify contamination in the environment using a detector's audio response (i.e., audible clicks).
With automated or semiautomated scanning systems, the time sequencing and geospatial distances can be reasonably controlled. When the surveyor manually controls the movement, scanning speed is likely to vary due to normal human reactions and controls. Although the method of survey with vigilance may have some advantage for characterization and remediation surveys, it should be avoided when performing background reference area surveys for the purpose of determining an a priori MDC. Reasonable attention is needed to avoid overly biasing the data and statistical analysis. It is generally assumed that postprocessing and evaluation of the CCD will be used to identify a need for follow-up investigations (discussed in Section 6.3).
Data collection frequency should be automated, with no surveyor attention needed other than to initiate and terminate. In other words, the determination of the a priori scan MDC should be made with a constant scan speed to the extent practical. This is not to imply that surveyor vigilance cannot be conducted when performing CCD surveys for areas being evaluated.
Each recorded measurement must stand on its own: its recorded value along with any temporal and spatial information. This situation requires full understanding of the characteristics of recorded measurements. Postprocessing of the CCD, during which geospatial methods can be used to identify anomalies and spatial variations, can lead to resurveys or investigations. The design of the survey, including the selection of instruments and scan method (scan height and speed), becomes important to ensure an acceptable MDC that meets the MARSSIM criteria.
However, MARSSIM does not address MDC for CCD. Section 6.3.3.1 of MARSSIM Revision 2 states the following:
The scan MDC calculations include an index of sensitivity, surveyor efficiency, and observation interval for an assumed elevated area size and scan speed.
These equations do not necessarily apply to the CCD data collected without audible surveyor vigilance, and depending on the elevated area size and scan speed, the detector output may not pick up the full response rate.
Marianno et al. 2003 studied the impact of scan speed, depth of residual radioactivity, and selected response time on efficiency, with faster scan speeds, depth of residual radioactivity, and longer response times leading to lower efficiencies. Falkner and Marianno 2019 went on to develop a well-defined relationship between MDA [minimum detectable activity] and detector speed based on Monte Carlo N-Particle [MCNP] and model fits to modified four-parameter logistic function. Scan parameters and detector response time should be carefully selected as part of the DQO process and considered when calculating scan MDC.
In MARSSIM, the methods for calculating an MDC for scan surveys with vigilance make certain assumptions regarding the statistical characteristics of the scan data (from Pacific Northwest National Laboratory [PNNL] 2023):
Data distributions associated with each hypothesis are determined by making the following assumptions (NUREG-1507 [Revision 1]).
Data are assumed to follow Poisson distributions, which are adequately approximated by normal distributions in both the null and alternative hypotheses.
The data distribution under the null hypothesis (H0) is normal and centered at zero, representing the net noise distribution when no net activity is present.
The data distribution under the alternative hypothesis (HA) is normal and centered at a point greater than zero, representing the net signal distribution when net activity is present.
As discussed in Section 3.1, the application of MARSSIM, Revision 2, Equation 6-2, for calculating the a priori scan MDC for surveys with vigilance assumes that the measurements can be represented as a Poisson distribution, which at counts above about 70 can be represented as a normal distribution. For most environmental conditions, however, it may not be reasonable to assume that the CCD measurements will follow a normal distribution. Due to variation in material composition or geological conditions, the spatial distribution may be more random with potentially significant variation, even representing different distributions. Selection of a suitable reference area with representative characteristics (i.e., geology and background radiation) becomes an initial screening/evaluation step in the process for calculating a suitable a priori MDC for CCD.
As described in Section 6.2, the a priori scan MDC for surveys with vigilance is determined by (1) estimating the net minimum detectable count rate (MDCR) that a surveyor can distinguish from the background detector response, and (2) applying efficiency factors that relate to the surveyor, instrumentation, and source of radiation. As shown in Equation 6.3 of this document, this method includes consideration of an observational interval. As defined in Section 6.2.3, "The observation interval during scanning is the actual time that the detector can respond to the contamination source. It depends on the scan speed, detector orientation, and size of the hotspot." Furthermore, a value of 0.25 square meter (m2) serves as a nominal area of concern for this calculation.
For CCD, the same process applies. First is the calculation of the MDCR, and second the correlation of the MDCR to an MDC. Essentially, all the same steps for correlating an MDCR to the MDC, as described in Section 4, can be applied to CCD. The primary difference is that with vigilance, the surveyor responds to an audible or visual increase, allowing for immediate follow-up scanning to identify an area of elevated activity. An observation time is included, representing the time that the source is detectable, that is, the time the detector is located over the 0.25 m2 area. For CCD, there is no observation time and no surveyor vigilance; instead, there is an accumulation time, as further described below.
The use of CCD does not rely on this two-stage scanning (although it is not necessarily excluded from the scan technique). The surveyor can still view the data as they are collected and make observations that can be used to further inform and develop the survey process.
The determination of the MDCR requires consideration of three main characteristics of CCD. As noted below, the first two reflect instrument/detector signal and scan method, while the third indicates how the resulting recorded measurement relates spatially:
(1) detector signal processing, or characteristics of the instrument response (signal) for the recorded value
(2) scan time interval and the corresponding area of coverage
(3) detector response (and the recorded measurement) considering the spatial characteristics of potential elevated areas

Each of these three characteristics is discussed further below.
9.1.1 Detector Signal Processing

For CCD, the recorded measurement is a single value, typically in units of counts or count rate, such as counts per minute. The value also reflects how the instrument processes the signal (e.g., a pulse) from the detector to then reflect the instrument output, which is the recorded measurement. With CCD, it is necessary to understand the detector output signal as the recorded measurement. Ideally, the output, as recorded, should reflect the detector interactions (counts) over the defined time interval for recording the measurements (seconds). This may not always be the case for instruments whose outputs are expressed in different units or time intervals, such as counts per minute or equivalent exposure rate in units of microroentgen (µR) per hour (h). As discussed further in Section 9.1.2, calculation of the MDC is dependent on the relationship of the recorded measurement to the scanned area.
Without pausing, and depending on instrumentation signal processing, the electronically captured count (or count rate) data may not accurately reflect a representative measurement at a given location, as the instrument may not reflect the maximum or full-scale response, particularly if the observation interval over an area of elevated direct radiation is less than 2 to 4 seconds, equivalent to an area of 3 to 12 m2 at a scan speed of 1 meter (m) per second (s).
With CCD, another consideration is instrument dead time, meaning that the instrument may not be capable of processing a signal from the detector during signal processing and data transfer.
This is analogous to increased dead time created by increased count rate.
9.1.2 Accumulation Time Interval

The accumulation time interval simply reflects the interval between recorded values. As discussed in Section 6.2.3, for surveys with vigilance, the observation interval during scanning is the actual time that the detector can respond to the contamination source. For CCD, this observation interval is better defined as the time interval between the recorded scan measurements. It is typically defined by an instrument setting before conducting the survey. It is independent of the scan speed and size of any area of contamination, such as an elevated area, sometimes referred to as a hotspot. However, the resulting magnitude of the recorded measurement can be a function of the scan speed and the size of the contaminated areas, which when combined can reflect the time interval during which the detector is over an area with elevated activity or not. While several recorded measurements can be combined to reflect a measurement over a longer time interval, and consequently reflect a larger area, such averaging can mask small elevated areas. Therefore, for CCD, the accumulation time interval reflects an instrument setting that specifies the time interval between the recorded measurements. During postprocessing, multiple measurements (frequency being a function of accumulation time) may be combined to provide an adjusted accumulation time to improve statistics, but at the risk of smoothing the data and masking small elevated areas. Section 9.7 discusses the effect that different intervals can have on the ability to identify (detect) elevated areas.
Unlike the MDCR for two-stage scanning with vigilance, the MDCR for CCD is independent of the scan speed but dependent on the accumulation time, which then correlates to a scan area; that is, the measurement corresponds to an area whose activity level depends on the accumulation time interval for the scan. Increasing the time interval can improve detection due to an increase in the total number of counts (detections). This can be done by either increasing the time between individual recorded measurements or combining multiple measurements to represent an increased time interval, assuming the CCD reflects the actual counts occurring within the time interval, recorded as counts or an equivalent count rate. However, if the recorded CCD reflects an average or smoothed count rate, then the corresponding time interval will need to reflect the time over which the averaging or smoothing occurred.
Therefore, it is the combination of scan speed and time interval that gives the resulting scan measurement correlated to a scanned area. This combination reflects the scanned area covered (i.e., the counts over a scanned area). Without vigilance, there is no pausing to ensure full-scale meter response. The result is that the CCD reflect an average value for the area, as defined by the scan speed and the recorded time interval, along with the characteristics of the instrument data discussed in Section 9.1.1. This averaging can negatively affect the ability to identify small areas of elevated activity, which is discussed in the next section.
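As a simple illustration of how scan speed, accumulation interval, and detector width combine to define the area each recorded measurement represents, the short sketch below uses hypothetical values (a 1-second accumulation interval, 0.5 m/s scan speed, and a 0.1 m effective detector width); none of these values are prescribed by this section.

```python
# Sketch: area represented by one recorded CCD measurement (hypothetical values).
accumulation_interval_s = 1.0   # instrument setting: time between recorded values
scan_speed_m_per_s = 0.5        # assumed constant scan speed
detector_width_m = 0.1          # assumed effective detector width transverse to travel

# Distance traveled, and approximate ground area averaged, during one interval.
path_length_m = scan_speed_m_per_s * accumulation_interval_s
area_per_measurement_m2 = path_length_m * detector_width_m

# Combining N consecutive measurements lengthens the effective accumulation time,
# improving counting statistics but averaging over a proportionally larger area.
n_combined = 4
combined_area_m2 = n_combined * area_per_measurement_m2

print(area_per_measurement_m2, combined_area_m2)  # 0.05 and 0.2 (square meters)
```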
9.1.3 Detector Response to Spatial Variability

Field of view is a common term for describing a detector's response to a source, in this case a contaminated area. The field of view refers to the area or region that can be observed or captured by a particular instrument or sensor (i.e., the extent of the observable space or landscape within the range of a specific device or equipment). The concept of a field of view is described in more detail in Section 7.2 for a collimated gamma detector, such as a sodium iodide (NaI) detector. For most alpha and beta detectors, the field of view can be approximated as the active detection surface area of the detector due to the limited penetration and path lengths for these particles.
For a typical background environment (common materials and natural conditions), the recorded CCD measurements can be expected to reflect a Poisson distribution, and a corresponding MDCR can be calculated to reflect a uniform radiation field (with expected variability). However, the ability to detect spatial variability (i.e., an elevated area) is a function of the size of an elevated area, as well as the scan speed and recording time. Therefore, for CCD, the MDCR also becomes inversely proportional to the size of an elevated area as the area decreases. In other words, as the dimensions of an elevated area decrease, the MDCR is likely to increase, depending on the observation interval for the elevated area.
Information collected during a historical site assessment may provide insight into physical characteristics of likely areas of contamination. Scoping and characterization survey data can be used to better define these characteristics. These information sources should be used for selecting scan survey instrumentation and survey methods, including whether a multidetector array is needed to identify and quantify small areas of elevated activity.
From an application standpoint, many of the instrument and source characteristics discussed in Sections 4 and 5 remain applicable for determining the instrument response and the calculated scan MDC for CCD. Section 9.6 illustrates how the instrument efficiency and source efficiency can be applied for calculating the MDC from the MDCR. Similarly, the findings discussed in Section 5, related to source effects and signal degradation, can be applied to scan CCD measurements in converting the MDCR to MDC. Section 9.9 illustrates how the dimensions of an elevated area and scan methods, including observation interval, can affect the MDCR and the resulting MDC.
9.2 Key Statistical Foundations

For CCD, the accurate and defensible identification of radioactive contamination during scanning surveys relies fundamentally on the ability to distinguish an increased instrument response using rigorous statistical principles. Central to this process are two complementary frameworks: Currie's Classical Detection Theory and Signal Detection Theory (Currie, 1968).
Together, these frameworks define how thresholds for detection are established, balancing the competing risks of false positives and false negatives in survey data interpretation.
As discussed in Section 3.1, applying the seminal methodology in Currie (1968) to the quantification of detection thresholds in radiometric measurements provides a statistically rigorous method to define two critical decision points when the data can be defined by a Poisson distribution:
- critical level (LC): the net signal above background that must be exceeded to infer that contamination is possibly present, controlling the false positive (Type I) error rate
- detection limit (LD): the true net signal level such that contamination will be detected with a specified probability, typically controlling the false negative (Type II) error rate

In mathematical terms, the critical level (LC) is defined as follows:

$L_C = z_{1-\alpha} \sigma_B$  (Eq. 9.1)

where:

$z_{1-\alpha}$ = the critical value from the standard normal distribution corresponding to the desired Type I error rate
$\sigma_B$ = the standard deviation of the background measurement distribution

The detection limit (LD) is expressed as follows:

$L_D = L_C + z_{1-\beta} \sigma_D$  (Eq. 9.2)

where:

$z_{1-\beta}$ = the critical value from the standard normal distribution corresponding to the desired Type II error rate
$\sigma_D$ = the standard deviation of the signal-plus-background distribution

In most practical radiological scanning scenarios, particularly under high-count conditions, it is reasonable to assume $\sigma_D \approx \sigma_B$. Under this assumption, the detection limit simplifies to the following:

$L_D = (z_{1-\alpha} + z_{1-\beta}) \sigma_B$  (Eq. 9.3)
This form is especially practical in field conditions where measurements are numerous and rapid decisions about potential contamination are needed.
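A minimal numerical sketch of Equations 9.1 through 9.3, assuming an illustrative background standard deviation and 5 percent Type I and Type II error rates (values chosen only for demonstration):

```python
from scipy.stats import norm

sigma_b = 30.0         # assumed background standard deviation (counts per interval)
alpha = beta = 0.05    # assumed Type I and Type II error rates

z_alpha = norm.ppf(1 - alpha)   # critical value for the Type I error rate
z_beta = norm.ppf(1 - beta)     # critical value for the Type II error rate

L_C = z_alpha * sigma_b                        # Eq. 9.1
L_D = L_C + z_beta * sigma_b                   # Eq. 9.2 with sigma_D ~= sigma_B
L_D_simplified = (z_alpha + z_beta) * sigma_b  # Eq. 9.3

print(round(L_C, 1), round(L_D, 1), round(L_D_simplified, 1))  # 49.3, 98.7, 98.7
```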
9.3 Lag-k Differencing Methods

While classical detection approaches assume a mostly stationary, homogenous background radiation field, real-world scanning surveys often encounter spatial variability in background levels. Changes in characteristics due to variability in surface geology or anthropogenic fallout depositional patterns for land surveys, and variability in building materials for indoor surveys, can lead to widely varying recorded measurements. Without appropriate correction, such variability can mask localized contamination or produce false detections. As a statistical method, lag-k differences provide one way to address these challenges.
Lag-k differencing techniques can be broadly categorized into one-sided and centered approaches. In one-sided differencing, each observation is compared to a value either k steps ahead of (forward lag-k) or behind (backward lag-k) the point of interest.1 In contrast, centered lag-k differencing evaluates the difference between the observed point and the midpoint of points symmetrically spaced k steps around the current position. These techniques enhance detection by suppressing low-frequency background variations and amplifying localized anomalies. However, they exhibit distinct tradeoffs: one-sided methods are easy to implement and useful for directional scans but introduce asymmetry, potentially biasing detection depending on scan direction. Centered differencing is spatially balanced, offering improved accuracy near central regions of anomalies, but suffers from data loss at the boundaries and increased variance.
To address these limitations, different types of lag-k smoothing have been evaluated.
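The snippet below sketches the distinction between one-sided (forward or backward) and centered lag-k differencing described above using NumPy array slicing; the lag value and simulated data are placeholders chosen only for illustration.

```python
import numpy as np

x = np.random.default_rng(1).poisson(1000, size=200).astype(float)  # placeholder scan data
k = 5

# One-sided differences: compare each point to a value k steps behind or ahead.
backward = x[k:] - x[:-k]   # x_i - x_{i-k}, defined for i >= k
forward = x[:-k] - x[k:]    # x_i - x_{i+k}, defined for i <= n - k - 1

# Centered (midpoint) difference: compare each point to the mean of the two
# values located k steps on either side; undefined within k points of each end.
centered = x[k:-k] - (x[:-2 * k] + x[2 * k:]) / 2.0

print(backward.shape, forward.shape, centered.shape)
```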
9.3.1 Values for k

The selection of the lag distance k is a critical step in applying lag-k differencing methods. The lag parameter directly affects the sensitivity and specificity of detection; a smaller k may retain excessive background variation, while an excessively large k may smooth over localized contamination. Thus, selecting an optimal k under real survey conditions requires a principled statistical approach.
One common technique comes from time series analysis, particularly the Autoregressive Integrated Moving Average (ARIMA) modeling framework, in which selecting appropriate lag parameters is essential for capturing the structure of serial dependence in the data. Specifically, Autocorrelation Function (ACF) and Partial Autocorrelation Function (PACF) plots can be used to guide the selection of k. The bibliography in Section 9.12 contains more detailed information on these techniques.
The ACF measures the linear relationship between observations of a time series at different lags. Specifically, it quantifies the correlation between xt and xt-k, where k is the lag. The ACF helps identify the presence of temporal dependence in a series and is especially useful in diagnosing patterns such as seasonality and the presence of autoregressive (AR) or moving average (MA) components.

1 For CCD, it is assumed that measurements are recorded at a set frequency (e.g., every 1 or 2 seconds) and that scan speed remains constant (e.g., meters per second for NaI detector scanning and detector width for beta scanning). Therefore, the lag-k distance can be assumed to be represented by an equivalent number of recorded measurements reflecting the distance between measurements.
The PACF measures the degree of association between a time series and its lagged values, after controlling for the values at all shorter lags. While the ACF considers direct and indirect effects, the PACF isolates the direct effect of xt on xt-k.
The PACF is particularly useful for identifying the order p of an AR(p) model. In such cases, the PACF typically shows a sharp cutoff after lag-p, indicating that the necessary structure has been accounted for by including terms to lag-p.
To determine k, the ACF and PACF are first calculated for a given sequence of background radiation measurements. The ACF quantifies the correlation between a time series and its lagged versions, offering insight into how strongly current values are related to past values at varying distances. In contrast, the PACF isolates the direct correlation between the series and a specific lag, removing the influence of all intermediate lags. Both functions are computed across a range of lag distances (k = 1, 2, ..., K), where the choice of K is informed by the maximum spatial extent over which nearby observations are expected to be correlated. This enables characterization of the underlying spatial structure in the radiation background, providing a foundation for selecting model parameters or smoothing scales.
The resulting ACF and PACF plots are then analyzed to determine the optimal lag-k. This is identified as the point where the autocorrelation structure becomes negligible; specifically, where the ACF or PACF crosses zero or falls within statistical insignificance for all higher lags.
The first lag at which this occurs indicates that serial dependence is no longer meaningful beyond that distance and can thus be used to define a cutoff for modeling or differencing.
Selecting this optimal k ensures that any subsequent spatial filtering or anomaly detection method is calibrated to reflect the actual extent of background correlation, reducing the risk of under- or over-smoothing the data.
This approach to selecting k ensures that the differencing process effectively removes correlated background noise while preserving signal anomalies due to contamination. While traditionally used in time-series forecasting, these diagnostic tools translate well to spatially ordered data sequences, such as those collected in scanning surveys, where measurements exhibit serial correlation based on traversal paths or environmental gradients.
In field applications, the choice of k may also be informed by domain-specific knowledge (e.g., detector integration time, scan speed, or expected contamination plume size) and can be refined iteratively using model validation techniques or simulated performance assessments.
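A sketch of this lag-selection step using the statsmodels ACF/PACF functions; the cutoff rule applied here (the first lag whose estimate falls inside an approximate 95 percent white-noise confidence band) is one reasonable implementation of the guidance above, and the background series is a placeholder.

```python
import numpy as np
from statsmodels.tsa.stattools import acf, pacf

rng = np.random.default_rng(0)
background = rng.poisson(1000, size=1000).astype(float)  # placeholder background scan

max_lag = 50
acf_vals = acf(background, nlags=max_lag)
pacf_vals = pacf(background, nlags=max_lag)

# Approximate 95 percent confidence band for a white-noise series.
band = 1.96 / np.sqrt(len(background))

def first_insignificant_lag(values, band):
    """Return the first lag (>= 1) whose correlation falls inside the band."""
    for lag in range(1, len(values)):
        if abs(values[lag]) < band:
            return lag
    return len(values) - 1

k_from_acf = first_insignificant_lag(acf_vals, band)
k_from_pacf = first_insignificant_lag(pacf_vals, band)
print(k_from_acf, k_from_pacf)
```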
9.3.2 Generalized Midpoint Lag-k Differencing (Arbitrary k)
Let xi be the ith observation along the transect. Generalized midpoint (MP) differencing extends the concept by allowing flexible lag distances:
$D_{i,MP} = x_i - \frac{x_{i-k} + x_{i+k}}{2}$  (Eq. 9.4)
Statistical Properties

Assuming that the background data are random variables from a Poisson distribution where the criteria are met to approximate them as $x_i \sim N(\mu_B, \sigma_B^2)$, the statistical properties of the differenced signal $D_{i,MP}$ can be analyzed.

Expected Value

Each individual component of the expected value has an expected value $\mu_B$. Therefore,

$E[D_{i,MP}] = E[x_i] - E\left[\frac{x_{i-k} + x_{i+k}}{2}\right] = \mu_B - \mu_B = 0$  (Eq. 9.5)

Variance

The midpoint average is defined as follows:

$\bar{x}_{MP} = \frac{x_{i-k} + x_{i+k}}{2}$  (Eq. 9.6)

This average is based on two independent observations, each with variance $\sigma_B^2$. Thus, the variance of the average is as follows:

$Var(\bar{x}_{MP}) = \frac{\sigma_B^2 + \sigma_B^2}{4} = \frac{\sigma_B^2}{2}$  (Eq. 9.7)

Assuming independence between $x_i$ and its surrounding values (a reasonable approximation for sufficiently large k or weak autocorrelation), the variance of the differenced signal is as follows:

$Var(D_{i,MP}) = Var(x_i) + Var(\bar{x}_{MP}) = \sigma_B^2 + \frac{\sigma_B^2}{2} = 1.5\sigma_B^2$  (Eq. 9.8)

This leads to the assumption that $D_{i,MP} \sim N(0, 1.5\sigma_B^2)$. Now, let T* be the multiplying factor on the background standard deviation to yield the standard deviation of the differences. For example, in this case, $T^* = \sqrt{1.5}$. The critical limit becomes the following:

$L_{C-MP} = z_{1-\alpha} T^* \sigma_B = z_{1-\alpha} \sqrt{1.5}\, \sigma_B$  (Eq. 9.9)

And, as derived in PNNL (2023), it follows that the detection limit becomes the following:

$L_{D-MP} = z_{1-\alpha} T^* \sigma_B + \frac{z_{1-\beta}^2}{2}\left[1 + \sqrt{1 + \frac{4\, z_{1-\alpha} T^* \sigma_B}{z_{1-\beta}^2} + \frac{4\, (T^*)^2 \sigma_B^2}{z_{1-\beta}^2}}\right]$  (Eq. 9.10)

This can be simplified by setting alpha (α) and beta (β) equal to each other. Then, the limit of detection for the lag-k MP becomes the following:

$L_{D-MP} = z^2 + 2\, z\, T^* \sigma_B = z^2 + 2\, z \sqrt{1.5}\, \sigma_B$  (Eq. 9.11)
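The sketch below applies Equations 9.4, 9.9, and 9.11 to a vector of recorded counts; estimating the background standard deviation from the scan itself, and the choices of lag and error rate, are illustrative assumptions rather than requirements.

```python
import numpy as np
from scipy.stats import norm

def mp_lag_k_limits(x, k, alpha=0.05):
    """Generalized midpoint lag-k differencing with Currie-style limits (alpha = beta)."""
    x = np.asarray(x, dtype=float)
    # Differenced signal D_i = x_i - (x_{i-k} + x_{i+k}) / 2 (Eq. 9.4), away from the edges.
    d = x[k:-k] - (x[:-2 * k] + x[2 * k:]) / 2.0

    sigma_b = np.std(x, ddof=1)            # estimated background standard deviation
    t_star = np.sqrt(1.5)                  # variance inflation factor for MP differencing
    z = norm.ppf(1 - alpha)

    L_C = z * t_star * sigma_b             # Eq. 9.9
    L_D = z**2 + 2 * z * t_star * sigma_b  # Eq. 9.11 (alpha = beta)
    flagged = np.flatnonzero(d > L_C) + k  # indices whose difference exceeds the critical limit
    return d, L_C, L_D, flagged

rng = np.random.default_rng(2)
counts = rng.poisson(1000, size=500).astype(float)
_, L_C, L_D, flagged = mp_lag_k_limits(counts, k=5)
print(round(L_C, 1), round(L_D, 1), flagged)
```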
Strengths and Limitations

A notable strength of this approach is its flexibility to adjust spatial sensitivity through the tuning of the parameter k, allowing users to tailor the analysis based on the expected size of the contamination footprint. This adaptability makes the method suitable for a wide range of scenarios, from detecting localized hotspots to identifying broader spatial trends. However, this same flexibility introduces a challenge: the choice of k requires either a priori knowledge of the spatial structure of the contamination or an optimization process that aligns with the specific objectives of the survey. Without careful selection, the method may under- or over-smooth the data, potentially obscuring key spatial features.
To further refine detection in environments with spatially variable background radiation, the investigation is expanded beyond fixed-lag differencing to consider more flexible summarization techniques that evaluate each measurement xi in the context of its local neighborhood.
Specifically, two smoothing strategies are examined: centered MA and centered exponentially weighted moving average (EWMA). These methods allow for dynamic, data-adaptive background estimation, which can improve the robustness of detection under the nonstationary background conditions typically encountered in CCD-based scanning surveys. By evaluating each technique's mathematical behavior (particularly the impact on bias, variance, and detection sensitivity), the tradeoffs are assessed between responsiveness to localized anomalies and suppression of background trends. This comparison offers valuable insight into which summarization strategies provide optimal performance for real-world CCD applications.
9.3.3 Differencing with a Centered Moving Average

An alternative to using the MP in the smoother is to evaluate each observation relative to the average of surrounding points, thereby forming a smoothed local background estimate. This approach, referred to as differencing with a centered MA, is defined as follows:

$D_{i,MA} = x_i - \frac{1}{2k}\sum_{j=1}^{k}\left(x_{i-j} + x_{i+j}\right)$  (Eq. 9.12)

This formula subtracts the mean of the 2k neighboring observations (i.e., k observations before and after xi) from the central value. It is symmetrical, centered, and particularly effective at emphasizing deviations from a local background trend.
Statistical Properties under Background-Only Conditions

Assuming that the background data are random variables from a Poisson distribution where the criteria are met to approximate them as $x_i \sim N(\mu_B, \sigma_B^2)$, the statistical properties of the differenced signal $D_{i,MA}$ can be analyzed.

Expected Value

Each individual component of the expected value has expected value $\mu_B$. Therefore,

$E[D_{i,MA}] = E[x_i] - E\left[\frac{1}{2k}\sum_{j=1}^{k}\left(x_{i-j} + x_{i+j}\right)\right] = \mu_B - \mu_B = 0$  (Eq. 9.13)

Variance

The MA is defined as follows:

$\bar{x}_{MA} = \frac{1}{2k}\sum_{j=1}^{k}\left(x_{i-j} + x_{i+j}\right)$  (Eq. 9.14)

This average is based on 2k independent observations, each with variance $\sigma_B^2$. Thus, the variance of the average is as follows:

$Var(\bar{x}_{MA}) = \frac{2k\,\sigma_B^2}{(2k)^2} = \frac{\sigma_B^2}{2k}$  (Eq. 9.15)

Assuming independence between $x_i$ and its surrounding values (a reasonable approximation for sufficiently large k or weak autocorrelation), the variance of the differenced signal is as follows:

$Var(D_{i,MA}) = Var(x_i) + Var(\bar{x}_{MA}) = \sigma_B^2 + \frac{\sigma_B^2}{2k} = \sigma_B^2\left(1 + \frac{1}{2k}\right)$  (Eq. 9.16)

Therefore, the inflation factor for the variance due to this differencing is equivalent to the variance for the generalized MP lag-k differencing when k=1, and it becomes smaller as k increases.

Implications for Detection Limit Calculation

Given the assumptions above and that the distribution of the difference is of the form $D_{i,MA} \sim N(0, T^2\sigma_B^2)$, the same ideas can be used for the critical value and detection limit as for the generalized MP lag-k. In this case, $T = \sqrt{1 + \frac{1}{2k}}$. Now the critical limit (LC) becomes the following:

$L_{C-MA} = z_{1-\alpha} T \sigma_B = z_{1-\alpha} \sigma_B \sqrt{1 + \frac{1}{2k}}$  (Eq. 9.17)

The detection limit (LD), assuming α = β, for this differenced signal is as follows:

$L_{D-MA} = z^2 + 2\, z\, T \sigma_B = z^2 + 2\, z\, \sigma_B \sqrt{1 + \frac{1}{2k}}$  (Eq. 9.18)
This formulation allows adaptive smoothing; by increasing k, one can reduce the variance of the background estimate at the cost of reduced localization sensitivity.
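A sketch of the centered moving-average differencing of Equation 9.12 together with the limits of Equations 9.17 and 9.18; the window size, error rate, and the way the background standard deviation is estimated are all illustrative choices.

```python
import numpy as np
from scipy.stats import norm

def centered_ma_limits(x, k, alpha=0.05):
    """Centered MA differencing (Eq. 9.12) with limits from Eqs. 9.17 and 9.18."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    d = np.full(n, np.nan)
    for i in range(k, n - k):
        neighbors = np.concatenate((x[i - k:i], x[i + 1:i + k + 1]))  # 2k surrounding points
        d[i] = x[i] - neighbors.mean()

    sigma_b = np.std(x, ddof=1)          # estimated background standard deviation
    t = np.sqrt(1.0 + 1.0 / (2.0 * k))   # variance inflation factor for the MA method
    z = norm.ppf(1 - alpha)
    L_C = z * t * sigma_b                # Eq. 9.17
    L_D = z**2 + 2 * z * t * sigma_b     # Eq. 9.18 (alpha = beta)
    return d, L_C, L_D

rng = np.random.default_rng(3)
counts = rng.poisson(1000, size=500).astype(float)
d, L_C, L_D = centered_ma_limits(counts, k=10)
print(round(L_C, 1), round(L_D, 1), int(np.sum(d > L_C)))
```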
Strengths and Limitations

One advantage of the proposed method is its ability to suppress local fluctuations caused by stochastic noise, which enhances the stability and interpretability of spatial patterns. By smoothing these short-term or highly localized variations, the method allows underlying structures to emerge more clearly. Additionally, it is designed to adapt to gradual spatial trends, making it well suited for capturing broad-scale gradients without overfitting to transient or erratic variations in the data. Finally, the use of symmetric differencing helps reduce directional bias, ensuring that changes are treated consistently in each direction along the transect, thereby preserving isotropy in the analysis and improving the robustness of inference.
Despite its strengths, the method has several limitations. First, it assumes independence between each observation and its neighboring values, an assumption that may not strictly hold in practice, especially in datasets with strong local spatial autocorrelation. This can lead to biased estimates if nearby values are, in fact, correlated. Additionally, larger values of the smoothing parameter k tend to dampen fine-scale variability, which, while helpful for noise reduction, may reduce the methods sensitivity to subtle or small-scale contamination events that are important in certain applications. Finally, the approach requires complete and evenly spaced data, limiting its applicability in settings with missing observations or irregular spatial sampling, where interpolation or imputation may be needed before analysis.
9.3.4 Differencing with Centered Exponentially Weighted Moving Average Smoothing

To improve spatial balance while retaining the adaptive smoothing benefits of exponential weighting, a centered EWMA approach is considered. Unlike the recursive, backward-looking EWMA used in control charts, the centered EWMA applies symmetric exponential weights around each observation $x_i$. This makes it more suitable for retrospective analysis of CCD survey data, where a full sequence of measurements is available post hoc.

The centered EWMA-smoothed background around $x_i$ is defined as follows:

$s_i = \frac{\sum_{j=1}^{k} w_j \left(x_{i-j} + x_{i+j}\right)}{2\sum_{j=1}^{k} w_j}$  (Eq. 9.19)

where the weights are $w_j = (1 - \lambda)^j$. The differenced signal is then as follows:

$D_{i,EWMA} = x_i - s_i$  (Eq. 9.20)

Here, λ is the smoothing constant that controls the decay of weights. Smaller values of λ apply broader smoothing (i.e., more influence from distant neighbors), while larger values emphasize local information.

Value for λ

To align EWMA smoothing with lag-based differencing techniques, the smoothing parameter λ can be interpreted as a function of the effective window size k. A common empirical relationship is as follows:

$\lambda = \frac{2}{2k + 1}$  (Eq. 9.21)

which approximates the smoothing behavior of a centered MA with window width (2k + 1). This formulation provides a practical guideline for choosing λ based on the desired level of background suppression: smaller values of λ (associated with larger k) yield greater smoothing and slower responsiveness, while larger values emphasize more recent data and enable quicker detection of transients. This correspondence facilitates side-by-side comparisons of EWMA and lag-k differencing techniques within a unified analytical framework.

Statistical Properties under Background-Only Conditions

Assuming that the background data are random variables from a Poisson distribution where the criteria are met to approximate them as $x_i \sim N(\mu_B, \sigma_B^2)$, the statistical properties of the differenced signal $D_{i,EWMA}$ can be analyzed.

Expected Value

Under the conditions above, all components in the smoothing window have the same expected value, $\mu_B$, as follows:

$E[D_{i,EWMA}] = E[x_i] - E[s_i] = \mu_B - \mu_B = 0$  (Eq. 9.22)

Variance of the Centered EWMA-Differenced Signal

Let $W = \sum_{j=1}^{k} w_j$, and assume independence among the measurements in the smoothing window. The variance of the weighted average becomes the following:

$Var(s_i) = \frac{2\sigma_B^2 \sum_{j=1}^{k} w_j^2}{(2W)^2} = \frac{\sigma_B^2 \sum_{j=1}^{k} w_j^2}{2W^2}$  (Eq. 9.23)

Assuming $x_i$ is independent of the surrounding values, the variance of $D_{i,EWMA}$ is the following:

$Var(D_{i,EWMA}) = Var(x_i) + Var(s_i) = \sigma_B^2 + \frac{\sigma_B^2 \sum_{j=1}^{k} w_j^2}{2W^2} = \sigma_B^2\left(1 + \frac{\sum_{j=1}^{k} w_j^2}{2W^2}\right)$  (Eq. 9.24)

This expression reflects the variance inflation introduced by the centered EWMA smoothing. As λ → 1, the weights concentrate at the nearest neighbors, approaching the behavior of the centered lag-k MP differencing. As λ → 0, the smoothing window broadens and the variance inflation approaches that of a uniform MA.

Detection Limit for Centered EWMA Differencing

Again, given the assumptions above and that the distribution of the difference is of the form $D_{i,EWMA} \sim N(0, T^2\sigma_B^2)$, the same ideas can be used for the critical value and detection limit as were used for the generalized MP lag-k. In this case,

$T = \sqrt{1 + \frac{\sum_{j=1}^{k} w_j^2}{2W^2}}$

Now the critical limit (LC) becomes the following:

$L_{C-EWMA} = z_{1-\alpha} T \sigma_B = z_{1-\alpha} \sigma_B \sqrt{1 + \frac{\sum_{j=1}^{k} w_j^2}{2W^2}}$  (Eq. 9.25)

The detection limit (LD) for this differenced signal is as follows:

$L_{D-EWMA} = z^2 + 2\, z\, T \sigma_B = z^2 + 2\, z\, \sigma_B \sqrt{1 + \frac{\sum_{j=1}^{k} w_j^2}{2W^2}}$  (Eq. 9.26)
This generalizes the classical detection limit framework to accommodate exponentially weighted background estimation, providing a flexible and data-adaptive detection scheme.
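A sketch of the centered EWMA differencing described above; the weight definition w_j = (1 − λ)^j and the mapping λ = 2/(2k + 1) are the forms used in this subsection, and the specific numerical settings below are assumptions for illustration rather than a unique prescription.

```python
import numpy as np
from scipy.stats import norm

def centered_ewma_limits(x, k, lam=None, alpha=0.05):
    """Centered EWMA differencing (Eqs. 9.19-9.20) with limits from Eqs. 9.25 and 9.26."""
    x = np.asarray(x, dtype=float)
    if lam is None:
        lam = 2.0 / (2.0 * k + 1.0)      # empirical mapping to a window of width 2k + 1
    offsets = np.arange(1, k + 1)
    w = (1.0 - lam) ** offsets           # symmetric exponential weights w_j
    W = w.sum()

    n = len(x)
    d = np.full(n, np.nan)
    for i in range(k, n - k):
        s_i = np.sum(w * (x[i - offsets] + x[i + offsets])) / (2.0 * W)  # Eq. 9.19
        d[i] = x[i] - s_i                                                # Eq. 9.20

    sigma_b = np.std(x, ddof=1)                      # estimated background standard deviation
    t = np.sqrt(1.0 + np.sum(w**2) / (2.0 * W**2))   # variance inflation factor (Eq. 9.24)
    z = norm.ppf(1 - alpha)
    L_C = z * t * sigma_b                            # Eq. 9.25
    L_D = z**2 + 2 * z * t * sigma_b                 # Eq. 9.26 (alpha = beta)
    return d, L_C, L_D

rng = np.random.default_rng(4)
counts = rng.poisson(1000, size=500).astype(float)
_, L_C, L_D = centered_ewma_limits(counts, k=10)
print(round(L_C, 1), round(L_D, 1))
```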
One advantage of this approach is its ability to adapt smoothly to spatial gradients in background radiation levels, making it effective for environments where background conditions change gradually across space. The method also retains spatial symmetry, which ensures that differencing is balanced across all directions, reducing potential bias introduced by directional filtering. Additionally, the technique supports fine-tuning of parameters, such as the weighting scheme or window width, to align with specific characteristics of the measurement noise or the anticipated size of anomalies. This flexibility makes the method broadly applicable to a range of survey conditions and anomaly detection goals.
However, the approach is not without limitations. First, it requires access to surrounding data points during postprocessing, which may pose challenges in real-time or edge-computing applications where only local information is available. Moreover, interpretation of the filter's weights and window width depends on the context, requiring a clear understanding of the spatial structure of both background noise and target anomalies. Finally, the method assumes independence in the calculation of variance, an assumption that may not hold in the presence of spatially autocorrelated backgrounds, potentially leading to underestimation or mischaracterization of uncertainty in such cases.
9.3.5 Summary of Differencing Methods and Currie Comparison

Table 9-1 summarizes the key properties of each lag-k method relative to Currie's classical approach (Currie 1968) used in MARSSIM.

Table 9-1 Key Properties of Different Lag-k Methods and the Currie Approach

| Method | Difference Definition | Variance | T |
|---|---|---|---|
| Currie (Classic) (MARSSIM) | None | $\sigma_B^2$ | Not applicable |
| Generalized MP Lag-k | $x_i - \frac{x_{i-k} + x_{i+k}}{2}$ | $1.5\sigma_B^2$ | $\sqrt{1.5}$ |
| MA | $x_i - \frac{1}{2k}\sum_{j=1}^{k}(x_{i-j} + x_{i+j})$ | $\sigma_B^2\left(1 + \frac{1}{2k}\right)$ | $\sqrt{1 + \frac{1}{2k}}$ |
| Centered EWMA | $x_i - s_i$, where $s_i$ is the exponentially weighted average of neighbors | $\sigma_B^2\left(1 + \frac{\sum_{j=1}^{k} w_j^2}{2W^2}\right)$ | $\sqrt{1 + \frac{\sum_{j=1}^{k} w_j^2}{2W^2}}$ |

When applying these methods in practice, each measure within the variance column is replaced with the actual sample variance of the differences from the observed data. For example, for the generalized MP lag-k, $1.5\sigma_B^2$ in the equation is replaced with the estimate of the variance of the differences, or $\sigma_{D-MP}^2$.
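The variance and T entries in Table 9-1 can be evaluated numerically for a given lag and weight scheme; the helper below is a sketch using the tabulated expressions, with the EWMA weights and λ following the forms used earlier in this section.

```python
import numpy as np

def inflation_factors(k, lam):
    """Standard-deviation multipliers (T) from Table 9-1 for a given lag k."""
    t_mp = np.sqrt(1.5)                     # generalized MP lag-k
    t_ma = np.sqrt(1.0 + 1.0 / (2.0 * k))   # centered moving average
    w = (1.0 - lam) ** np.arange(1, k + 1)  # assumed EWMA weights w_j
    t_ewma = np.sqrt(1.0 + np.sum(w**2) / (2.0 * w.sum() ** 2))
    # The Currie (classic) approach involves no differencing, so no T factor applies.
    return {"MP": t_mp, "MA": t_ma, "EWMA": t_ewma}

print(inflation_factors(k=10, lam=2.0 / 21.0))
```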
9.3.6 Choosing the Appropriate Detection Method

The choice of differencing method should be informed by the nature of the background radiation, the expected spatial characteristics of anomalies, and the operational priorities of the survey. The Currie (1968) method is ideal when the background is stable and measurements are globally consistent, but it does not account for local variability. Generalized MP lag-k differencing offers flexibility for targeting small elevated areas of varying sizes and balances spatial sensitivity with reasonable variance inflation. MA smoothing is useful for surveys with moderate local variation, providing a fixed-window background estimate that improves robustness against noise. When background gradients are present or dynamically changing, centered EWMA smoothing provides an adaptive, data-driven solution that symmetrically weighs surrounding observations, making it particularly effective for environments with gradual background shifts. Ultimately, method selection should consider not only statistical performance, but also the scanning geometry, data availability, and acceptable tradeoffs between detection sensitivity and the risk of false positives.
9.3.7 Bias and Precision Checks on Detection Limit

In many real-world applications of radiological detection, particularly in field environments, the assumptions underlying classical calculations of the critical limit (LC) and detection limit (LD) are often violated. Traditional methods, which rely on idealized assumptions such as normality or Poisson-distributed counting statistics with constant variance, may yield inaccurate or misleading results when confronted with overdispersed, skewed, autocorrelated, or otherwise nonconforming background data. In such cases, alternative statistical approaches are necessary to maintain the validity and defensibility of detection decisions.

Beyond addressing violations of model assumptions, these alternative techniques also have an important role in assessing the precision and bias of LD estimates derived from standard methods. Techniques such as robust estimation, nonparametric inference, and resampling procedures, including the bootstrap, offer valuable tools for quantifying uncertainty and generating alternative LD estimates. By leveraging these methods, practitioners can obtain not only more resilient thresholds under nonideal conditions, but also independent verification of results obtained under classical assumptions.
This work focuses on the bootstrap technique as a flexible and powerful approach for estimating LD in the presence of complex data behavior, while providing a means of evaluating the variability and potential bias of conventional estimates.
9.3.8 Bootstrap Technique

The bootstrap technique is a nonparametric resampling method used to estimate the sampling distribution of a statistic by drawing repeated samples from the observed data. In the context of the situation above, the bootstrap begins with a set of background measurements assumed to represent the underlying population variability. A large number of bootstrap samples (typically 1,000 or more) are generated by randomly sampling with replacement from the original dataset, each sample being the same size as the original. For each bootstrap replicate, a statistic of interest (e.g., the standard deviation of the background) is computed and used to calculate derived quantities, such as the LC or LD, using standard parametric formulas. This process produces a distribution of values that reflect the empirical variability in the background data, enabling the analyst to derive robust point estimates and precision intervals.
Assessing Bias in the Estimated Detection Limit

One potential limitation in conventional LD estimation is bias, especially when background variation is estimated from small or noisy samples. Using the bootstrap, the empirical bias of the LD estimator can be assessed directly:

(1) Compute the LD from the original dataset:

$\hat{L}_{D,orig} = z^2 + 2\, z\, T\, \hat{\sigma}_B$  (Eq. 9.27)

(2) Generate M bootstrap samples of the background data (with replacement) and compute $\hat{\sigma}_B^{(m)}$ and the corresponding $\hat{L}_D^{(m)}$ for each:

$\hat{L}_D^{(m)} = z^2 + 2\, z\, T\, \hat{\sigma}_B^{(m)}, \quad m = 1, \ldots, M$  (Eq. 9.28)

where M is the number of bootstrap samples.

(3) Compute the average bootstrap estimate:

$\bar{L}_D = \frac{1}{M}\sum_{m=1}^{M} \hat{L}_D^{(m)}$  (Eq. 9.29)

(4) Estimate the bias:

$\widehat{Bias}\left(\hat{L}_D\right) = \bar{L}_D - \hat{L}_{D,orig}$  (Eq. 9.30)

If bias is substantial, then corrections or alternative estimation strategies may be warranted.

Bootstrap Precision Intervals for the Detection Limit

The bootstrap provides a powerful, assumption-light method for constructing a precision interval around the LD:

$PI_{Boot}(L_D) = \left[\hat{L}_D^{(\alpha/2)},\; \hat{L}_D^{(1-\alpha/2)}\right]$  (Eq. 9.31)

where the endpoints are the empirical α/2 and (1 − α/2) quantiles of the $\hat{L}_D$ distribution.

Interpretation and Application

This precision interval provides a quantitative measure of the precision associated with the estimated detection limit. It is especially relevant when doing the following:

- reporting LD estimates in regulatory submissions or technical reports
- comparing LD values across different survey methods (e.g., lag-k versus classical)
- conducting sensitivity analyses or validation studies
- communicating uncertainty in support of risk-informed decision-making

Importantly, this precision interval is not used to adjust or redefine the detection threshold. The LD remains a fixed rule for deciding whether a given signal warrants further investigation.
Instead, the precision interval conveys how precisely that threshold is estimated based on available data, which can be particularly useful in quality assurance, method comparison, and transparent reporting contexts.
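A sketch of the bootstrap procedure in steps (1) through (4) and the percentile interval of Equation 9.31; the detection-limit formula follows the alpha-equals-beta form used earlier in this section, and the inflation factor T, sample size, and number of replicates are assumptions passed in for illustration.

```python
import numpy as np
from scipy.stats import norm

def bootstrap_detection_limit(background, t_factor, alpha=0.05, n_boot=2000, seed=0):
    """Bootstrap bias and percentile precision interval for L_D (Eqs. 9.27-9.31)."""
    rng = np.random.default_rng(seed)
    x = np.asarray(background, dtype=float)
    z = norm.ppf(1 - alpha)

    def l_d(sample):
        return z**2 + 2 * z * t_factor * np.std(sample, ddof=1)    # Eq. 9.27 / 9.28 form

    l_d_orig = l_d(x)                                               # step (1)
    boot = np.array([l_d(rng.choice(x, size=len(x), replace=True))  # step (2)
                     for _ in range(n_boot)])
    l_d_bar = boot.mean()                                           # step (3), Eq. 9.29
    bias = l_d_bar - l_d_orig                                       # step (4), Eq. 9.30
    interval = np.percentile(boot, [100 * alpha / 2, 100 * (1 - alpha / 2)])  # Eq. 9.31
    return l_d_orig, bias, interval

rng = np.random.default_rng(5)
bkg = rng.poisson(1000, size=300).astype(float)
print(bootstrap_detection_limit(bkg, t_factor=np.sqrt(1.5)))
```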
9.3.9 Simulation Example

This example is inspired by the methodology presented in Section 4.3.1 of PNNL (2023), which evaluates the performance of different statistical tests for detecting localized contamination in radiological surveys. That study compared traditional, non-localized detection approaches with localized hypothesis testing methods, specifically the MP test, under both constant and spatially varying background radiation conditions. The example below extends that framework by incorporating additional smoothing-based methods, including centered MA and EWMA, to assess their ability to identify a point-source contamination event. Detection probabilities are simulated across a range of mean source intensities, and detection limits are compared to evaluate the sensitivity of each method.
The simulation constructs a synthetic radiological survey consisting of n=1,000 equally spaced spatial locations along a transect. At each location, background radiation levels are modeled as Poisson-distributed counts, with the mean background defined by a sinusoidal function:
$\mu(x) = 1000 + A \sin(2\pi x)$  (Eq. 9.32)

where A controls the amplitude of background variation. When A=0, the background is constant; when A > 0, the background exhibits spatial fluctuation. A point-source contamination is simulated by injecting an additional Poisson-distributed signal at a single location, specifically where the background is at its minimum, to represent a localized increase in radiation. This contamination signal is varied across a range of mean values (e.g., 0 to 250 cpm) to estimate detection probabilities. The resulting dataset mimics realistic field measurements, including spatial trends and stochastic measurement noise.
For each method, the simulation is repeated 1,000 times per source level. The detection probability is computed as the proportion of simulations where the observed centered count or difference at the contamination event exceeds a calculated critical limit (LC). The detection limit (LD) is determined as the source strength at which the method achieves a 95 percent detection probability. This setup provides a fair and controlled comparison of each method's sensitivity under idealized background conditions.
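A condensed sketch of this simulation protocol (sinusoidal background, injected Poisson point source, empirical detection probability against the MP critical limit); the exact background function, source placement, and repetition counts used in PNNL (2023) may differ, so the specifics below are assumptions for illustration.

```python
import numpy as np
from scipy.stats import norm

def detection_probability(source_mean, amplitude, n=1000, k=5, alpha=0.05, reps=500, seed=0):
    """Empirical probability that MP lag-k differencing flags an injected point source."""
    rng = np.random.default_rng(seed)
    i = np.arange(n)
    mu = 1000.0 + amplitude * np.sin(2.0 * np.pi * i / n)   # assumed sinusoidal background mean
    loc = int(np.argmin(mu)) if amplitude > 0 else n // 2   # lowest background (center if flat)
    z = norm.ppf(1 - alpha)
    hits = 0
    for _ in range(reps):
        x = rng.poisson(mu).astype(float)
        x[loc] += rng.poisson(source_mean)                  # localized contamination signal
        d_loc = x[loc] - (x[loc - k] + x[loc + k]) / 2.0    # MP difference at the source location
        L_C = z * np.sqrt(1.5) * np.std(x, ddof=1)          # MP critical limit (Eq. 9.9)
        hits += d_loc > L_C
    return hits / reps

for s in (0, 60, 120, 180):
    print(s, detection_probability(s, amplitude=0.0))
```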
Constant Background Scenario (A=0)
In this initial scenario, the background radiation level is assumed to be spatially uniform, simulating an idealized survey environment devoid of background drift or natural spatial variability. Each location along the transect is modeled as an independent realization from a Poisson distribution with a constant mean of 1,000 cpm. This controlled setup provides a baseline for evaluating detection-method performance under stationary conditions, where global estimates are expected to perform well and localized methods can be assessed for potential variance inflation. Before exploring detection results, the simulated background data are visualized (Figure 9-1) to establish a reference for subsequent comparisons.
Figure 9-1 Time-Series Plot of Simulated Background Count Data for A=0

Figure 9-1 offers a baseline view of a statistically stationary background, representing a scenario with no inherent spatial variability. Such a configuration isolates the performance of detection methods under idealized conditions, where any observed anomalies can be attributed solely to statistical fluctuations or deliberately injected signals, rather than to environmental or structural background changes. This controlled setting allows for a clear evaluation of each method's sensitivity and precision in the absence of confounding background trends.
With this context established, Table 9-2 gives the detection limit statistics for each method under these stationary conditions.
Table 9-2 Comparison of Standard Deviations, Critical Limits, and Detection Limits for Simulated Background Data with A=0

| Technique | Standard Deviation | Critical Limit (LC) | Detection Limit (LD) |
|---|---|---|---|
| MARSSIM | 30.77 | 50.61 | 103.93 |
| MP | 37.34 | 61.42 | 125.55 |
| MA | 31.36 | 51.58 | 105.86 |
| EWMA | 31.42 | 51.68 | 106.07 |

Under constant background conditions, the MARSSIM method demonstrated the lowest observed variability, resulting in the smallest estimated critical and detection limits. This outcome aligns with expectations, as the method leverages a global background estimate that is optimal in stationary environments with no spatial structure. The MA and EWMA methods both performed comparably well, offering a favorable tradeoff between noise reduction and sensitivity. Their localized smoothing techniques maintained low detection limits while providing flexibility for potential future application to more variable backgrounds. In contrast, the MP method exhibited reduced effectiveness in this scenario, as indicated by its higher variability and correspondingly larger detection limit. These findings highlight the importance of aligning the choice of detection method with the background characteristics of the survey environment.
Figure 9-2 presents the simulation results for this scenario.
Figure 9-2 Probability of Detection as a Function of Source Count Rate for Each Method Using Simulated Background Data with A=0

Figure 9-2 presents a summary plot illustrating the probability of detection as a function of the mean source count rate (in cpm), which represents the strength of the injected point-source signal. The x-axis corresponds to the mean signal intensity, while the y-axis shows the empirical probability of detection, calculated as the proportion of simulations in which the method successfully identified the source as exceeding background levels. A horizontal dotted line is drawn at 0.95 to indicate the target detection probability threshold (95 percent), reflecting the commonly used criterion for acceptable sensitivity. Vertical dashed lines mark the estimated detection limits (LDs) for each method, defined as the lowest source strength at which the 95 percent detection probability is achieved. The methods are color-coded for clarity: the red line corresponds to the non-localized MARSSIM approach, black represents the MP method, blue denotes the MA method, and green indicates the EWMA method. This visualization facilitates a direct comparison of the sensitivity performance across methods, highlighting both the shape of the detection curves and the specific thresholds at which each method attains the desired detection probability.

The simulation results presented in Figure 9-2 provide a comparative evaluation of the detection performance for each method under constant background conditions. The non-localized method (red line), which relies on a global estimate of the background, demonstrates the highest sensitivity, achieving a 95 percent detection probability at the lowest mean source strength. This is reflected by its leftmost vertical dashed line, indicating the smallest detection limit (LD) among the methods. Such performance is consistent with expectations, as global background estimation is optimal when the background is flat and homogeneous.
MA and EWMA methods perform similarly well. Their detection probability curves nearly coincide with that of the non-localized method across the entire source-strength range, with only a slight rightward shift in their LDs. This suggests that although localized averaging introduces a modest increase in variance, it does not substantially degrade sensitivity under constant conditions. The minimal difference in performance supports their utility, particularly if some degree of background variability may be present.
In contrast, the MP method exhibits reduced sensitivity under these conditions. It requires a noticeably higher source strength to reach the 0.95 detection probability threshold, as evidenced by its rightmost vertical LD line. This diminished performance is expected, as the local differencing inherent in the MP method introduces additional variability when applied to flat backgrounds, thereby inflating the detection threshold. Overall, these results highlight the importance of aligning the choice of method with the underlying background structure.
This figure reinforces the conclusion that, under uniform background conditions, global or smoothed local methods (specifically, the MA and EWMA approaches) demonstrate superior sensitivity compared to the MP method. All three of these techniques achieve the target 95 percent detection probability at comparatively lower source strengths, underscoring their effectiveness in identifying low-level signals within stable and well-controlled environments.
Their ability to leverage either global estimates or localized smoothing allows them to minimize noise while maintaining detection accuracy. However, the MP method exhibits reduced performance in this setting, requiring substantially higher source strengths to reach the same detection threshold.
Non-Constant-Background Scenario (A=50)
In this simulation scenario, the background radiation level varies spatially, representing a more realistic survey environment in which ambient radiation naturally fluctuates across different locations. To emulate this variation, the background mean at each point along the transect is modulated using a sine wave with an amplitude of 50, introducing a smooth, periodic rise and fall in the expected count rates. This structured variability captures the type of background drift that might be encountered in practical field settings, such as surveys conducted over heterogeneous terrain or in environments with subtle environmental gradients. Before assessing detection performance, the background data are first examined to characterize the extent and nature of this spatial fluctuation.
Figure 9-3 Time-Series Plot of Simulated Background Count Data for A=50

Figure 9-3 depicts a background radiation field characterized by spatially varying mean values, resulting in a nonstationary signal that more accurately reflects real-world survey conditions. The observed smooth, wavelike fluctuations in background counts represent natural environmental variation, which can either obscure true anomalies or mimic contamination, thereby complicating detection. In such contexts, traditional detection methods face increased challenges, as they must effectively account for background drift to minimize the risk of false negatives or overly conservative detection thresholds. This scenario serves as a critical test of each method's capacity to adapt to local trends in the background signal and accurately isolate genuine point-source anomalies in the presence of structured noise. With this complexity in mind, Table 9-3 evaluates the detection limit statistics.
Table 9-3 Comparison of Standard Deviations, Critical Limits, and Detection Limits for Simulated Background Data with A=50

| Technique | Standard Deviation | Critical Limit (LC) | Detection Limit (LD) |
|---|---|---|---|
| MARSSIM | 46.29 | 76.14 | 154.98 |
| MP | 37.09 | 61.01 | 124.73 |
| MA | 31.31 | 51.50 | 105.71 |
| EWMA | 31.43 | 51.70 | 106.10 |

The results, when considered alongside the standard deviations, underscore the importance of employing local smoothing or local comparison techniques in nonstationary environments. These methods (i.e., MA, EWMA, and MP) are better equipped to adapt to spatial fluctuations in background radiation, as evidenced by their lower standard deviation values and stronger detection performance under variable conditions. In contrast, global approaches such as the MARSSIM method, while effective under uniform backgrounds, become increasingly unreliable as structured variation in the background intensifies. The inability of global methods to account for localized trends can lead to inflated detection limits or missed anomalies. To further illustrate these performance differences, the plot in Figure 9-4 presents the simulation results across methods in this spatially structured background scenario.
Figure 9-4 Probability of Detection as a Function of Source Count Rate for Each Method Using Simulated Background Data with A=50

The simulation results under spatially structured background conditions highlight notable differences in method performance, as illustrated in Figure 9-4. Among all approaches, EWMA demonstrates the strongest performance. It achieves a 95 percent detection probability at the lowest source strength, indicated by the leftmost vertical dashed line. By assigning greater weight to nearby observations while still smoothing the background, EWMA effectively adapts to local structure, making it particularly well suited for environments with gradual background variation.
MA performs nearly as well, with a detection limit only slightly higher than that of EWMA. Its uniform local averaging effectively suppresses background trends while avoiding excessive sensitivity to random fluctuations, offering a balanced approach to managing structured noise.
MP outperforms the non-localized approach and maintains reasonable sensitivity, although it remains slightly less effective than MA or EWMA. Its use of symmetric differencing reduces the impact of slow background drift by canceling out gradual changes, thereby enhancing robustness in nonstationary settings.
The non-localized method exhibits the weakest performance in this scenario, with the highest detection limit among all methods. This decline in sensitivity is expected, as the global estimation of background variance does not account for spatial trends. Consequently, the structured variability inflates the method's variance estimate, raising the detection threshold and requiring stronger signals for reliable detection. These results collectively emphasize the advantage of localized techniques in handling complex, spatially variable backgrounds.
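The critical limits and detection limits in Table 9-3 appear consistent with Currie-type relations applied to the standard deviation of the (differenced) background data, namely LC = 1.645·σ and LD = k² + 2·LC ≈ 2.71 + 3.29·σ. A minimal sketch of that calculation, which reproduces the tabulated values to within rounding, is shown below.

```python
def critical_and_detection_limits(sigma, k=1.645):
    """Currie-type limits from the standard deviation of the background
    (or differenced background) data: LC = k*sigma, LD = k**2 + 2*LC.
    With k = 1.645, LD reduces to the familiar 2.71 + 3.29*sigma."""
    lc = k * sigma
    ld = k ** 2 + 2.0 * lc
    return lc, ld

# Standard deviations from Table 9-3 (A = 50 scenario)
for name, sigma in [("MARSSIM", 46.29), ("MP", 37.09), ("MA", 31.31), ("EWMA", 31.43)]:
    lc, ld = critical_and_detection_limits(sigma)
    print(f"{name:8s} LC = {lc:6.2f}  LD = {ld:6.2f}")
```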
To further explore method performance under more pronounced background variation, an additional simulation scenario is considered in which the amplitude of the background modulation is increased to A=100. This introduces a stronger and more clearly defined periodic structure in the background signal, amplifying the spatial variability across the transect. Such a scenario serves to stress-test the detection methods, revealing their ability to distinguish genuine point-source signals from increasingly dominant background fluctuations. Before examining the detection results, the background data are presented first to visualize the extent and nature of this enhanced structural variation.
Figure 9-5 Time-Series Plot of Simulated Background Count Data for A=100

As shown in Figure 9-5, the underlying curve exhibits even less variability in the residuals than the scenario with A=50. The amplified periodic structure introduced by A=100 results in a smoother and more predictable background trend, which may facilitate detection for methods that effectively model or adjust for such a structure. This setup provides a clearer signal-to-noise environment, allowing for a more distinct evaluation of each method's capacity to identify point-source anomalies. Figure 9-6 presents the corresponding detection results.
Figure 9-6 Probability of Detection as a Function of Source Count Rate for Each Method Using Simulated Background Data with A=100

Figure 9-6 confirms that the overall performance pattern observed in previous scenarios remains consistent for the locally differenced and smoothed curves. Specifically, the MP, MA, and EWMA methods continue to exhibit relatively strong detection characteristics in the presence of structured background variation. In contrast, the non-localized MARSSIM method again performs poorly, with its detection probability curve failing to reach the 95 percent threshold and barely approaching 75 percent coverage across the range of simulated source strengths. Notably, the predefined upper bound of 250 cpm for the mean source strength, selected based on earlier simulations, proves insufficient in this case, as the red curve (representing MARSSIM) does not intersect the threshold line within this range. This suggests that under conditions of strong background structure, global methods may require substantially higher signal intensities to achieve reliable detection.
In summary, the results depicted in this plot underscore the importance of accounting for spatial background variation when selecting detection methods. Localized smoothing approaches, specifically the EWMA and MA techniques, demonstrate markedly superior performance compared to global methods such as MARSSIM. These localized techniques effectively adapt to structured background fluctuations, thereby enhancing sensitivity to weak point-source signals.
The MP method offers an intermediate solution, leveraging local differencing to mitigate background trends while maintaining computational simplicity. Among all approaches, EWMA consistently emerges as the most sensitive under conditions of spatially structured noise.
Collectively, these findings highlight the need to incorporate background nonstationarity into detection strategies to ensure robust and reliable identification of low-level sources in complex survey environments.
9.3.10 Field Survey Example

Section 9.3.9 provides a simple baseline comparison of the different methods within a well-defined condition; actual field measurements are not expected to follow such idealized behavior.² In a manner similar to that described above for simulated data, CCD from a scan survey performed in an open land area have been evaluated. The area was approximately 350 meters by 350 meters, for a total of approximately 30 acres. More than 10,000 survey scan measurements with a 2 inch by 2 inch (2x2) NaI detector, coupled with a ratemeter and global positioning system (GPS), were recorded at a nominal frequency of once every 2 seconds.
Figure 9-7 displays the raw count data collected along the transect. This visualization provides an initial overview of the spatial distribution of radiation measurements along the transect. The plot reveals underlying patterns and fluctuations that reflect a varying background area. The apparent segments of higher and lower count rates reflect how the data appear when plotted in transect order, with multiple passes into and out of the elevated area appearing as separate segments. A heatmap of the data could be used for further evaluation with advanced geostatistical methods (as discussed in Section 6.3). Examining the raw data in this form is essential for identifying regions of interest, assessing data quality, and informing the selection of appropriate detection and smoothing techniques for subsequent analysis.
Figure 9-7 Time-Series Plot of Field Survey Count Data

² Scanning surveys for uniform surfaces, such as flooring and walls with a single, common material composition, may follow similar data trends. However, gamma measurements in the outdoor environment can be expected to have higher levels (e.g., 6,000 to 12,000 cpm) and variability.
Given the observed patterns and inconsistencies in the raw data, localized smoothing methods are anticipated to provide the most effective approach for enhancing signal detection. The presence of irregular fluctuations and potential structural features suggests that techniques capable of adapting to local trends, such as MA or EWMA methods, are better suited for this dataset than global approaches.
To further assess the underlying structure in the data, the ACF and PACF plots for the field survey dataset, shown in Figure 9-8, are examined. These plots provide insights into the degree of persistence and potential lag-based relationships, which are critical for selecting and tuning appropriate modeling strategies.
Figure 9-8 ACF and PACF Plots for Field Survey Data

In interpreting the correlation structure of the field survey data, greater emphasis is placed on the PACF plot, as the ACF displays a persistent correlation across many lags. This sustained autocorrelation suggests a strong underlying structure but offers limited guidance in determining an appropriate cutoff point for modeling. In contrast, the PACF plot provides more informative insights into the AR behavior of the process by isolating the direct influence of each lag.
Analysis of the PACF reveals a noticeable tapering of partial autocorrelations, with values diminishing substantially beyond lag k = 5. This pattern suggests that the data may be reasonably modeled with an AR process incorporating up to six lag terms. Beyond this point, the incremental contribution of earlier observations becomes negligible. This behavior is consistent with a moderately complex AR structure, supporting the use of localized modeling techniques that account for a finite memory of past values.
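A lag-structure check of this kind can be scripted with standard time-series tools. The sketch below uses the statsmodels package (an assumed dependency) to compute the ACF and PACF and to report the largest lag whose partial autocorrelation exceeds the approximate 95 percent sampling bound; this is only a rough heuristic for the visual inspection described above, and the synthetic series stands in for the actual field data.

```python
import numpy as np
from statsmodels.tsa.stattools import acf, pacf

def suggest_ar_order(counts, nlags=30):
    """Return ACF/PACF values and the largest lag whose partial
    autocorrelation falls outside the approximate 95% sampling bounds."""
    counts = np.asarray(counts, dtype=float)
    bound = 1.96 / np.sqrt(len(counts))            # approximate sampling bound
    acf_vals = acf(counts, nlags=nlags, fft=True)
    pacf_vals = pacf(counts, nlags=nlags)
    significant = [lag for lag in range(1, nlags + 1) if abs(pacf_vals[lag]) > bound]
    order = max(significant) if significant else 0
    return acf_vals, pacf_vals, order

if __name__ == "__main__":
    rng = np.random.default_rng(7)
    # Illustrative stand-in for field data: a drifting series with noise.
    x = 9000 + 0.1 * np.cumsum(rng.normal(0, 5, 5000)) + rng.normal(0, 50, 5000)
    _, _, order = suggest_ar_order(x)
    print("Suggested AR order from PACF:", order)
```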
While the localized methods are anticipated to provide superior performance, particularly given the spatial complexity and variability observed in the data, the results from the non-localized MARSSIM approach are included for comparison. Including MARSSIM allows for a baseline evaluation of how global background estimation performs relative to more adaptive, locally focused techniques. Table 9-4 presents the standard deviations of the differenced values for each method, alongside their corresponding detection limits (LD). These metrics offer a quantitative summary of variability reduction and sensitivity, facilitating a direct comparison of method effectiveness under the conditions represented by the field survey dataset.
Table 9-4 Standard Deviations and Detection Limits for Each Method for Field Survey Data

| Technique | Standard Deviation (counts/interval) | Detection Limit (LD) (counts/interval) | Detection Limit (LD) (cpm) |
|---|---|---|---|
| MARSSIM-Non CCD | NA | 74 | 2,220 |
| MARSSIM-CCD | 48.69 | 160 | 4,792 |
| MP | 32.31 | 109 | 3,270 |
| MA | 25.92 | 88 | 2,639 |
| EWMA | 25.38 | 86 | 2,587 |

As expected, the standard deviation of the raw data is considerably higher than that of the localized differenced datasets, reflecting the effectiveness of smoothing techniques in reducing variability. Among the methods evaluated, the EWMA-differenced data exhibit the lowest variability, highlighting its strength in suppressing background noise while preserving signal structure. A similar trend is observed in the LD estimates; consistent with results from simulation scenarios involving nonconstant background structure, the EWMA method yields the lowest LD, indicating the highest sensitivity to weak signals. This is followed closely by the MA and MP techniques, with the non-localized MARSSIM method producing the highest LD. These findings reinforce the advantage of localized approaches in environments with spatial heterogeneity, where global methods tend to suffer from inflated variance and diminished detection power.
Given that this analysis is based on real-world data, the bootstrap procedures outlined in Section 9.3.8 were applied to assess both the bias and the precision associated with the LD estimates. Bootstrapping provides a robust, nonparametric means of quantifying uncertainty, particularly in complex settings where analytical solutions may be difficult to derive. Through repeated resampling, distributions of the LD estimates were obtained from which bias values were calculated and precision intervals were derived. Table 9-5 summarizes the corresponding results, including point estimates of bias, lower precision limits (LPLs), upper precision limits (UPLs), and ranges of the 95 percent bootstrap precision intervals. These metrics offer insight into the reliability and stability of each method when applied to data with real-world variability.
Table 9-5 Bias and 95 Percent Bootstrap Confidence Intervals for Each Method for Example Field Data

| Type | Bias | LPL (cpm) | UPL (cpm) | UPL-LPL (cpm) |
|---|---|---|---|---|
| MARSSIM | -0.531 | 4,844.65 | 4,926.72 | 82.07 |
| MP | 0.155 | 3,238.60 | 3,302.13 | 63.53 |
| MA | 0.124 | 2,616.02 | 2,667.53 | 51.51 |
| EWMA | -0.026 | 2,562.03 | 2,613.83 | 51.80 |

As summarized in the table, the EWMA method exhibits the smallest bias in absolute value among the evaluated techniques, while the MARSSIM method shows the largest. Nevertheless, the absolute magnitude of bias in all cases remains below 1, which is negligible in the context of measurement values on the order of thousands. Thus, from a practical standpoint, the differences in bias across methods are unlikely to influence decision-making. In terms of precision, the MA method yields the narrowest confidence interval, indicating greater stability in the LD estimates. The EWMA method performs comparably, further supporting its reliability in scenarios involving moderate background variability. These results reinforce the suitability of localized smoothing approaches for achieving both low bias and high precision in LD estimation.
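As an illustration, a nonparametric bootstrap of this general kind can be sketched as shown below. The function estimate_ld stands in for whichever detection-limit calculation is being evaluated, the simple Currie-type estimator and the independent resampling scheme (which ignores serial correlation) are illustrative assumptions, and Section 9.3.8 defines the procedure actually used.

```python
import numpy as np

def bootstrap_ld(counts, estimate_ld, n_boot=2000, seed=0):
    """Nonparametric bootstrap of a detection-limit estimator.

    counts      : 1-D array of recorded CCD counts per interval
    estimate_ld : callable returning an LD estimate for a given sample
    Returns the point estimate, bootstrap bias, and 95% percentile interval.
    """
    rng = np.random.default_rng(seed)
    counts = np.asarray(counts, dtype=float)
    ld_hat = estimate_ld(counts)
    boot = np.array([estimate_ld(rng.choice(counts, size=counts.size, replace=True))
                     for _ in range(n_boot)])
    bias = boot.mean() - ld_hat
    lpl, upl = np.percentile(boot, [2.5, 97.5])
    return ld_hat, bias, (lpl, upl)

if __name__ == "__main__":
    # Example with a simple Currie-type LD estimator applied to raw counts.
    currie_ld = lambda x: 2.71 + 3.29 * np.std(x, ddof=1)
    rng = np.random.default_rng(3)
    data = rng.poisson(300, size=3000)
    ld, bias, (lpl, upl) = bootstrap_ld(data, currie_ld)
    print(f"LD = {ld:.1f}, bias = {bias:+.3f}, 95% interval = ({lpl:.1f}, {upl:.1f})")
```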
Considering its superior sensitivity and consistent precision in this dataset, the EWMA technique emerges as the most effective method for estimating the LD. Its ability to adapt to local background structure while maintaining low variability in the estimates makes it particularly well suited for real-world applications where spatial heterogeneity is present. These strengths position EWMA as the preferred approach for reliable and sensitive detection in complex survey environments.
By systematically evaluating each of the simulation datasets alongside the field survey dataset, the performance of the proposed detection techniques was assessed across a spectrum of data complexities. This progression was intentionally designed to mirror increasing levels of structural and statistical complexity. It began with a dataset composed of constant values, providing a baseline for performance under ideal, stationary conditions. Subsequent simulations introduced structured variation, first with moderate noise (A=50) and then with more pronounced, periodic structure and reduced noise (A=100). The final phase of evaluation used the field survey dataset, representing a real-world scenario characterized by overlapping spatial patterns, nonstationarity, and elevated variability.³ This staged framework enabled a comprehensive analysis of how each method responds to escalating signal complexity and noise, ultimately testing the methods' robustness under realistic field conditions. Across all scenarios, the EWMA technique consistently matched or outperformed the alternative methods, demonstrating superior adaptability, sensitivity, and precision. These results strongly support the use of EWMA as a reliable and effective approach for detection in both controlled and real-world survey settings.

³ The absolute values for the MDCR between the simulated dataset and the field survey dataset cannot be directly compared, as each has a different basis. The simulated dataset starts with a nominal 1,000 cpm mean, whereas the field survey dataset has a mean value around 9,000 cpm.
9.4 Approach for Determining Statistical Method

As described in Section 9.3.9, a structured approach is recommended for determining which method will yield valid and reliable detection limit (LD) estimates (i.e., an a priori MDCR):
(1) The data should be examined to assess the underlying structure. If the background appears constant and can be represented as a normal distribution, then the MARSSIM method is appropriate. Otherwise, the EWMA technique should be employed to accommodate variability.

(2) The bias and precision intervals of the LD estimates should be determined using a bootstrap resampling procedure (refer to Section 9.3.8), providing a robust assessment of the standard method's validity. If the resulting bias and precision are within acceptable bounds, then the LD derived from the standard approach can be retained.

(3) However, if the bootstrap analysis reveals significant bias or imprecision, then the LD estimate obtained from the bootstrap procedure should be used instead.
This workflow, illustrated in the sketch below, ensures that the chosen detection limits are both statistically sound and tailored to the specific characteristics of the dataset.
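A minimal sketch of this decision workflow follows. The normality screen, the acceptance bounds on bias and interval width, and the bias-corrected fallback are placeholders; the supporting LD estimators and bootstrap routine are assumed to follow the formulations in Sections 9.3.8 and 9.3.9.

```python
import numpy as np
from scipy import stats

def choose_ld(counts, ld_marssim, ld_ewma, bootstrap_ld,
              max_bias=1.0, max_interval_width=100.0):
    """Illustrative sketch of the Section 9.4 workflow.

    ld_marssim, ld_ewma : callables returning an LD (cpm) for a sample
    bootstrap_ld        : callable returning (ld, bias, (lpl, upl)),
                          e.g., the bootstrap sketch shown earlier
    max_bias, max_interval_width : placeholder acceptance bounds (cpm)
    """
    counts = np.asarray(counts, dtype=float)

    # Step 1: screen for a constant, approximately normal background.
    # (A trend or stationarity check could be added alongside this screen.)
    _, p_normal = stats.shapiro(counts[:5000])
    estimator = ld_marssim if p_normal > 0.05 else ld_ewma

    # Step 2: bootstrap the chosen estimator to quantify bias and precision.
    ld_hat, bias, (lpl, upl) = bootstrap_ld(counts, estimator)

    # Step 3: retain the standard estimate when bias and precision are
    # acceptable; otherwise fall back to a bootstrap-based (here,
    # bias-corrected) estimate as a placeholder for the Section 9.3.8 value.
    if abs(bias) <= max_bias and (upl - lpl) <= max_interval_width:
        return ld_hat
    return ld_hat - bias
```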
9.5 Scanning with an Energy Window or Spectroscopy for Gamma-Emitting Radionuclides

Section 7 discusses the use of spectrometric techniques to assess radioactivity but is mainly focused on static measurements and not scanning data. However, many of the characteristics and advantages discussed have applicability for CCD. A significant increase in sensitivity can be achieved using energy-based windows compared with gross instrument counting techniques.
Energy windows and spectrometry allow a specific radionuclide to be measured by relying on the characteristic energies of the radionuclide of concern to discriminate it from all other sources present.
As discussed, in situ techniques can detect much lower concentrations of individual radionuclides. Applying these techniques in a scanning mode also overcomes a limitation of spot measurements, where information on spatial dependency may otherwise be lost due to the limited field of view of a localized field measurement. Section 7 also discusses the use of window and spectroscopic analytical methods.
As discussed in Section 9.2, variability in count rate for a background survey can negatively affect the calculated detection limits. This is especially true for measurements reporting gross gamma counts. An increase in gross gamma count rate could be due to a number of factors, including encountering an area with elevated levels of naturally occurring radionuclides, encountering an area contaminated with a radionuclide other than what the surveyor is looking for, or encountering an area contaminated with the radionuclide of interest. Collecting data within an energy window, rather than gross gamma data, provides information to determine when the increased count rate is due to the radionuclide of interest.
Focusing on the count rate associated with a specific gamma energy (i.e., energy window) for the radionuclide of interest, rather than a broad energy spectrum gross measurement, may reduce the degree of variability observed in background data, especially if the radionuclide of interest does not exist in normal background. If the radionuclide of interest does also exist within background, the benefits of energy window or spectroscopic data may be less, but there is still an advantage to focusing on a specific energy window rather than the broader gross gamma window.
Some gamma detection systems allow for a complete gamma spectrum to be recorded at regular intervals for CCD, reported as counts within each channel. Having the appropriate conversion between channel number and gamma energy would allow for the one set of CCD to be analyzed for several different energies of interest. This would also allow for other background subtraction techniques (using the Compton continuum) to further reduce the influence of variability within background radiation on the overall variance of the dataset, and in turn result in a lower MDC.
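As an illustration of this idea, the sketch below sums the counts of a single recorded spectrum within an energy window around 662 keV, given a linear channel-to-energy calibration, and shows how two 1-second intervals can be combined into a 2-second accumulation. The calibration coefficients, window width, and spectra are illustrative assumptions, not values from this study.

```python
import numpy as np

def window_counts(spectrum, slope_kev_per_ch, intercept_kev,
                  center_kev=661.7, half_width_kev=45.0):
    """Sum counts in an energy window of a single recorded spectrum.

    spectrum : array of counts per channel (e.g., 4,096 channels per interval)
    slope_kev_per_ch, intercept_kev : linear energy calibration (assumed)
    """
    spectrum = np.asarray(spectrum)
    channels = np.arange(spectrum.size)
    energies = intercept_kev + slope_kev_per_ch * channels
    in_window = (energies >= center_kev - half_width_kev) & \
                (energies <= center_kev + half_width_kev)
    return spectrum[in_window].sum()

if __name__ == "__main__":
    rng = np.random.default_rng(5)
    spectra = rng.poisson(0.02, size=(10, 4096))   # ten 1-second intervals (illustrative)
    # Hypothetical calibration: ~0.75 keV per channel, zero offset.
    windowed = [window_counts(s, slope_kev_per_ch=0.75, intercept_kev=0.0) for s in spectra]
    # Two consecutive 1-second intervals can be summed for a 2-second accumulation.
    two_second = [windowed[i] + windowed[i + 1] for i in range(0, len(windowed) - 1, 2)]
    print(two_second)
```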
9.5.1 Benchtop Testing

A benchtop testing system was constructed to evaluate the effect of background radiation, scan speed, and data collection frequency on CCD. The testing system consisted of a conveyor belt measuring 4 meters by 25 millimeters. A 2x2 NaI detector was held static at the midpoint of the conveyor belt with a surface-to-detector distance of 10 centimeters. A 1 microcurie (µCi) cesium (Cs)-137 button source was placed on the conveyor belt. Sensors were affixed to the conveyor belt to log the beginning and end of each full-length run. Detector output (counts) was directly recorded every 0.5 second using a standalone microprocessor, thereby eliminating any signal processing, such as time averaging, that might otherwise have been introduced by a commercially available instrument. The conveyor belt was run at three different speeds: 1 m/s, 0.5 m/s, and 0.25 m/s. For each belt speed, gross gamma data were collected for 10 full-length runs of the source on the conveyor belt. Data were also collected, using these parameters, with an energy window centered around the 662 kiloelectron volt (keV) gamma from the Cs-137 short half-life progeny barium (Ba)-137m. Background data were collected using these same parameters, also as gross gamma data and with an energy window focused on 662 keV.
Figures 9-9 through 9-14 display the count rate data as the Cs-137 source was moved along the conveyor belt. The x-axis ranges from −2 meters to 2 meters, and 0 represents the midpoint where the detector was located. Each line on the graph represents a separate run of the source along the length of the conveyor belt. As can be seen by comparing pairs of graphs for each scan speed (Figures 9-9 and 9-10, Figures 9-11 and 9-12, and Figures 9-13 and 9-14), the influence of the variability of natural background is much less when an energy window is used, allowing for easier identification of the Cs source passing underneath the detector.
Figure 9-15 shows the average gross gamma counts from the 10 runs for each scan speed as the source moved past the detector. Similarly, Figure 9-16 shows the average counts within the 662 keV energy window from the 10 runs for each scan speed as the source moved past the detector. As can be seen in Figure 9-15, the increase in gross gamma counts from the Cs source is more prominent for slower scan speeds (e.g., 0.25 m/s) than for faster scan speeds (e.g., 1 m/s). This is also true for the energy window count data, as Figure 9-16 illustrates.
Falkner and Marianno (2019) investigated the effect of increased scan speeds on the geometric efficiency of a typical 2x2 NaI gamma scintillation detector. In that investigation, MCNP was used to model the solid angle subtended by a cylindrical detector from a Cs-137 source as the scan speed increased and to determine the detector efficiency. The study clearly demonstrated that the calculated MDA increases as scan speed increases due to the decrease in the geometric efficiency of the detector. Falkner and Marianno (2019) also highlight the need to plan for methods to produce a consistent scan speed during surveys, as slight variations in the speed can affect the detection system's ability to achieve the desired MDA.
Figure 9-9 Benchtop Gross Gamma Cs-137 Data for Belt Speed of 0.25 m/s for 10 Runs

Figure 9-10 Benchtop 662 keV Window Data for Belt Speed of 0.25 m/s for 10 Runs

Figure 9-11 Benchtop Gross Gamma Cs-137 Data for Belt Speed of 0.5 m/s for 10 Runs

Figure 9-12 Benchtop 662 keV Window Data for Belt Speed of 0.5 m/s for 10 Runs

Figure 9-13 Benchtop Gross Gamma Cs-137 Data for Belt Speed of 1 m/s for 10 Runs

Figure 9-14 Benchtop 662 keV Window Data for Belt Speed of 1 m/s for 10 Runs

Figure 9-15 Averaged Benchtop Gross Gamma Background Data and Cs-137 Data for 0.25 m/s, 0.5 m/s and 1 m/s
Figure 9-16 Averaged Benchtop Background Data and Cs-137 Data within 662 keV Window for 0.25 m/s, 0.5 m/s and 1 m/s

9.6 Comparing Gross Gamma, Energy Window and Spectroscopic Continuously Collected Data for a Common Reference Area

An additional reference area was selected for evaluating the above-described methods and examining how the type of data collected could influence the results and method for determining the MDCR. The same 2x2 NaI detector used in the laboratory experiments was used to collect outdoor gross gamma walkover data in a grassy area with no known radiological contamination.
The surveyor moved at approximately 0.5 m/s while holding the detector at a fixed height above the ground. GPS coordinates were collected during the survey and reported with the gross gamma count rate measurements every second. A second set of data was collected in the same manner, but with the energy window for 662 keV applied. A third set of data was also collected in the same area with the same detector, but the detector was set up such that a 4,096-channel spectrum was collected every second during the survey. Two 1-second intervals were added to give a 2-second accumulation time for the data analysis.
An evaluation of the common reference area data was performed using the same framework applied to the previous examples. The analysis follows the same structure: visualization of raw data, evaluation of correlation structure, and comparison of localized smoothing methods against CCD and non-CCD MARSSIM approaches where appropriate.
9.6.1 Gross Gamma Evaluation

Figure 9-17 presents the time-series of raw counts for gross gamma data. The ordered gross-gamma series is nonstationary, with a gradual, near-monotonic decline in central tendency across the traverse. Superimposed on this background trend are meso-scale fluctuations, spanning segments of several dozen to a few hundred observations, suggesting spatial clustering beyond counting noise. The distribution also includes sporadic high-amplitude excursions (positive outliers) and isolated deficits. Dispersion is not constant: variability increases modestly in the mid-sequence and tightens toward the ends, indicating mild nonconstant variance.
Overall, the structure is that of a declining background field with patch-like departures and occasional outliers, rather than a homoscedastic, mean-stationary process.
Figure 9-17 Time-Series Plot of Gross Gamma Data

To further define the structure in the data, this study assessed the ACF and PACF plots to determine the lag number for this dataset, as shown in Figure 9-18.

Figure 9-18 ACF and PACF Plots for Gross Gamma Data

In Figure 9-18, greater emphasis is placed on the PACF plot, as the ACF displays a persistent correlation across many lags. Analysis of the PACF reveals a noticeable tapering of partial autocorrelations, with values diminishing substantially beyond lag k = 5. This pattern suggests that the data may be reasonably modeled with an AR process incorporating up to five lag terms.
While the localized methods are anticipated to provide superior performance, particularly given the spatial complexity and variability observed in the data, the results from the CCD-type MARSSIM and the non-CCD MARSSIM approaches are included for comparison. Table 9-6 presents the standard deviations of the differenced values for each method, alongside their corresponding detection limits (LDs).
Table 9-6 Standard Deviations and Detection Limits for Each Method for the Gross Gamma Data

| Technique | Standard Deviation (counts/interval) | Detection Limit (LD) (counts/interval) | Detection Limit (LD) (cpm) |
|---|---|---|---|
| MARSSIM-Non CCD | N/A | 81.21 | 2,435 |
| MARSSIM-CCD | 36.71 | 120.40 | 3,612 |
| MP | 32.81 | 110.65 | 3,319 |
| MA | 27.12 | 91.93 | 2,758 |
| EWMA | 27.03 | 91.63 | 2,748 |
9.6.2 Energy Window Gamma Evaluation

Figure 9-19 presents the raw count information for the windowed data. Windowed counts look largely stationary across the traverse, with no clear left-to-right drift. Values cluster in distinct horizontal bands (discrete counting from fixed dwell time), with most observations in the low hundreds and occasional spikes reaching several hundred. That pattern suggests a stable background with intermittent elevations rather than a broad gradient. In other words, the level of Cs-137 across the reference area appears relatively uniform, with an expected variability attributable to statistical variation, and no delineated areas of differing activity levels.
Figure 9-19 Time-Series CCD Using a 662 keV Energy Window

To better characterize the underlying dependency, the ACF and PACF plots are examined next.
These diagnostics provide insight into the strength and persistence of lagged relationships in the series and help identify the appropriate lag order for AR modeling. Figure 9-20 shows the ACF and PACF plots for windowed data.
Figure 9-20 ACF and PACF Plots for the Window Data

Figure 9-20 displays the autocorrelation functions for the windowed counts. All ACF and PACF coefficients lie within +/-0.10 across lags, indicating negligible serial dependence and behavior consistent with approximate covariance-stationarity. Accordingly, localized smoothing is not needed and should not necessarily be applied for the detection-limit estimation. As stated, the variation of the data is more indicative of a normal distribution; therefore, the MARSSIM-based calculation for the minimum detectable count rate is suitable. The calculated value is 364 cpm. Uncertainty was quantified using a nonparametric bootstrap (resampling locations), yielding a bias of 0.054 cpm and a 95 percent bootstrap percentile interval of (353, 376) cpm. The minimal bias and narrow interval support the validity of the detection-limit estimate.
9.6.3 Spectral Gamma Evaluation

The third scan survey method involved collecting spectral measurements with a region of interest reflecting the 0.662 megaelectron volt (MeV) primary gamma from the Cs-137/Ba-137m decay. Applying Compton background subtraction to spectral data would further reduce the ambient background component of the data. However, when Compton background subtraction techniques were applied to these data for the 0.662 MeV peak, the result was net counts that were near zero and even negative. Therefore, the statistics for the methodologies described here would not necessarily apply and require further investigation.
Figure 9-21 Time-Series Plot of Spectral Data

Figure 9-21 presents the raw count data for spectral data. Similar to the energy window data, the ordered gross-spectrum series appears approximately stationary across the traverse: there is no discernible left-to-right drift in central tendency, and variability is broadly consistent. Counts cluster in discrete horizontal bands in the low double digits, as expected from integer counting at fixed dwell, while a small number of isolated positive excursions (reaching ~20-30 counts) occur without persistence. No mesoscale structure (sustained lifts or dips) is evident, and heteroscedasticity, if present, is minimal. Overall, the pattern is consistent with a stable background field punctuated by occasional spikes, rather than a trend-dominated process.
To characterize serial dependence in the series, the ACF and PACF were examined, which summarize the strength and persistence of lagged relationships and guide selection of AR lag order. Figure 9-22 presents the ACF and PACF.
Figure 9-22 ACF and PACF Plots for Spectral Data

Figure 9-22 presents the ACF and PACF plots for the gross-spectrum series. All coefficients remain within the approximate sampling bounds, indicating no material serial dependence and, therefore, no need for localized smoothing before estimation. Using a MARSSIM-based procedure, the MDCR was 10.8 cpm. Uncertainty was quantified through nonparametric bootstrap resampling, yielding an estimated bias of 0.002 cpm and a 95 percent bootstrap percentile interval of (10.1, 11.6) cpm. The near-zero bias and narrow interval indicate a precise and reliable detection-limit estimate.
In summary, the above scan data using three different instrument/detector set-ups (gross, energy window, and spectral) illustrate how different distributions lend themselves to determining the most appropriate method for analysis and how techniques, such as energy window and spectral measurements, can be used to minimize background variability and improve detection capability.
9.7 Determining Minimum Detectable Concentrations from the Minimum Detectable Count Rate

9.7.1 Background

Sections 6.2 and 6.3 present guidance on radiological surveys conducted with data logging systems that record GPS location information along with the radiological survey data. As discussed in Section 9.1, the a priori scan MDC used for surveys conducted with audible surveyor vigilance does not apply to the alternative paradigm in which radiological data are collected without the surveyor listening to the audible count rate or pausing to count longer upon an audible increase in counts.
This section addresses the methods, as discussed in Section 4, for relating a measurement from an instrument to a corresponding activity or concentration. It provides a review of certain fundamental concepts presented in MARSSIM and discussed in other parts of this document, as the Oak Ridge Institute for Science and Education (ORISE) has published new considerations for surveying discrete radioactive particles that are applicable to land scanning (ORISE 2023). This section also develops initial considerations for the a priori lag-k scan MDC with the ORISE considerations.
This section evaluates CCD survey techniques through the ORISE methodology, noting that the classic MDC methodology is a two-step process with vigilance, while the CCD has no second step and no vigilance. However, the development of a scan MDC has two stages:
(1) development of the MDCR, which is addressed in Section 9.2, and (2) a correlation of the MDCR to MDC. This second step is the correlation of the detector response (counts over some time period) to the corresponding concentration, be it picocuries per gram (pCi/g) of soil or disintegrations per minute (dpm) per 100 square centimeters (cm2) for surface contamination.
This correlation is reflected in two parameters in the MDC equation: the source efficiency and the detector efficiency.
As identified in Section 9.2, once the a priori MDCR has been calculated, the correlation to the MDC is a function of the source efficiencies and the instrument efficiency. Simply, the equation is as follows:
MDC = MDCR / (εi × εs)     (Eq. 9.33)

where:

MDC = the a priori minimum detectable concentration, in units appropriate for the type of instrument and source media, such as pCi/g for soil and dpm/100 cm2 for beta surface measurements

MDCR = the minimum detectable count rate, as derived from the methodology described in Section 9.2, typically in units of an equivalent cpm representative of the scan time interval (accumulation time) for the desired representative area

εi = the instrument or detector efficiency

εs = the efficiency of the potential contamination source

The combination of εi and εs gives the total efficiency, which converts the MDCR to MDC.
For beta detection, the source efficiency and detector efficiency are typically determined separately. For gamma detection, they are more typically determined as a combined term. The surveyor efficiency factor (p), which is included for the two-stage scanning technique without recorded data, can be excluded. The application for beta and gamma detector efficiency is discussed below.
9.7.2 Beta Detector Efficiency

Instrument efficiency (εi) is defined as the ratio of the net count rate of the instrument to the emission rate of a source for a specified geometry. For beta detection, this typically represents the surface emission rate. Since surface measurements are more typically expressed in units of dpm/100 cm2, an additional factor is needed, which is the area of the detector window (W, cm2). If W does not equal 100 cm2, probe area corrections are necessary to convert the detector response to units of dpm/100 cm2.
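A minimal numerical sketch of this conversion for a hypothetical beta measurement is given below; the MDCR, efficiencies, and window area are illustrative values only.

```python
def beta_mdc_dpm_per_100cm2(mdcr_cpm, eff_instrument, eff_source, window_area_cm2):
    """Convert an MDCR (cpm) to an a priori MDC in dpm/100 cm^2 (Eq. 9.33),
    with a probe-area correction when the window area is not 100 cm^2."""
    return mdcr_cpm / (eff_instrument * eff_source * (window_area_cm2 / 100.0))

# Hypothetical example: MDCR of 250 cpm, 0.30 instrument efficiency,
# 0.25 source efficiency, 126 cm^2 detector window.
print(round(beta_mdc_dpm_per_100cm2(250.0, 0.30, 0.25, 126.0)))  # about 2,646 dpm/100 cm^2
```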
Section 4 provides guidance on determining source efficiency for beta measurements and surface-deposited activity, with reference to International Organization for Standardization (ISO) standards ISO 7503-1:1988, Evaluation of Surface ContaminationPart 1: Beta Emitters and Alpha Emitters, and ISO 7503-3:2016, Measurement of RadioactivityMeasurement and Valuation of Surface ContaminationPart 3: Apparatus Calibration.
Sections 5.3 and 5.4 provide an in-depth evaluation of various source configurations and detector responses that can be used to derive the source efficiency. These sections discuss the differences in beta measurements for various surface types (e.g., sealed concrete, scabbled concrete, stainless steel, untreated wood) and how the surface condition affects the total efficiency. These sections also discuss the attenuation effects of overlaying material (e.g., paint, water, dust). Appendix A provides case studies for how instrument efficiency can be derived for various radionuclide mixtures, directed mostly toward beta detection and surface contamination.
9.7.3 Gamma Detector Efficiency

As described above for beta detectors, the instrument efficiency (εi) is defined as the ratio between the net count rate of the instrument and the emission rate of a source for a specified geometry. For gamma measurements, instrument efficiency reflects the source geometry, considering the effective scan area and effective depth of contamination.
Instrument efficiency for gamma detectors is simply the ratio of counts recorded to the fluence of the incident gammas. Determining this ratio can be quite complex, as it depends on characteristics of the incident gamma energy, the detector chemical composition (attenuation coefficient), and the direction of incidence and path length.
Section 6.2.5 presents one method for modeling the detector's response as a function of gamma energy and a corresponding source geometry. As described, this approach uses the concepts of a count-rate-to-exposure-rate ratio (CPMR) and an exposure-rate-to-concentration ratio (ERC).
The CPMR may be provided by the detector manufacturer or be available in the literature, but it is dependent on the gamma energy and may need to be derived. Table 6-3 lists the CPMR as a function of energy for four different types of NaI detectors. The ERC is usually estimated by modeling the exposure rate above a source using MicroShield. Other methods may prove as effective or better, such as the use of MCNP or source samples and radiochemical analysis for correlating detector response to concentration.
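As a hedged illustration of that style of correlation, the sketch below chains a count-rate-to-exposure-rate ratio and an exposure-rate-to-concentration ratio to convert an MDCR into a soil-concentration MDC. All numerical values are hypothetical, and in practice the CPMR and ERC would come from manufacturer data, MicroShield or MCNP modeling, or site-specific correlation.

```python
def gamma_mdc_pci_per_g(mdcr_cpm, cpmr_cpm_per_uR_hr, erc_uR_hr_per_pci_g):
    """Convert MDCR to a soil MDC by chaining CPMR (cpm per uR/h) and
    ERC (uR/h per pCi/g), in the style of the Section 6.2.5 correlation."""
    exposure_rate_uR_hr = mdcr_cpm / cpmr_cpm_per_uR_hr   # minimum detectable exposure rate
    return exposure_rate_uR_hr / erc_uR_hr_per_pci_g      # corresponding concentration

# Hypothetical example: MDCR of 900 cpm, CPMR of 900 cpm per uR/h,
# ERC of 0.4 uR/h per pCi/g -> MDC of about 2.5 pCi/g.
print(gamma_mdc_pci_per_g(900.0, 900.0, 0.4))
```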
9.8 Bench Testing Detector Response Accumulation Time

The amount of time that the detector may accumulate counts before registering a measurement, in conjunction with the scan speed, also affects the likelihood of a source being seen within the CCD. The background data that were collected, as described in Section 9.5.1, were averaged across the 10 runs and then binned accordingly for accumulation times of 0.5 second, 1 second, and 2 seconds to show the difference in detector response. The data were then normalized to counts per second. Figures 9-23, 9-24, and 9-25 illustrate the effect on the detector response as a function of accumulation time for gross gamma data at speeds of 0.25 m/s, 0.5 m/s, and 1 m/s, respectively. The time reported for each data point is the end of the time interval used for the accumulation time. Increasing the accumulation time for the detector results in less variability in the measured background data by effectively averaging the counts over longer periods, thus masking short, intermittent changes in count rate. This is also demonstrated in the background data collected within the 662 keV energy window, shown in Figures 9-26, 9-27, and 9-28. As discussed in Section 9.5.1, the variability in the data collected within an energy window is less than the variability in the gross gamma data. Increasing the accumulation time for the windowed data reduces the variability even further.
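The rebinning described above can be expressed compactly. The sketch below sums consecutive 0.5-second detector readings into longer accumulation intervals and normalizes each to counts per second, mirroring the treatment applied to the benchtop data; the simulated readings are illustrative stand-ins for the recorded data.

```python
import numpy as np

def rebin_to_cps(counts_per_half_second, accumulation_s):
    """Bin raw 0.5-second counts into longer accumulation intervals and
    normalize to counts per second. The time stamp of each binned value
    corresponds to the end of its accumulation interval."""
    counts = np.asarray(counts_per_half_second, dtype=float)
    n_per_bin = int(round(accumulation_s / 0.5))
    n_bins = counts.size // n_per_bin                 # drop any partial bin at the end
    binned = counts[:n_bins * n_per_bin].reshape(n_bins, n_per_bin).sum(axis=1)
    return binned / accumulation_s

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    raw = rng.poisson(75, size=120)                   # 60 s of 0.5-s background readings
    for dt in (0.5, 1.0, 2.0):
        cps = rebin_to_cps(raw, dt)
        print(f"{dt:>3} s accumulation: std of cps = {cps.std(ddof=1):.2f}")
```

Longer accumulation intervals yield fewer, smoother values, which is the averaging effect discussed above.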
Figure 9-23 Benchtop Average Gross Gamma Background Count Rate Data for a Belt Speed of 0.25 m/s for Varying Accumulation Times

Figure 9-24 Benchtop Average Gross Gamma Background Count Rate Data for a Belt Speed of 0.5 m/s for Varying Accumulation Times

[Each benchtop figure plots average counts per second versus source distance from detector (m) for 0.5-second, 1-second, and 2-second accumulation times.]
Figure 9-25 Benchtop Average Gross Gamma Background Count Rate Data for a Belt Speed of 1 m/s for Varying Accumulation Times

Figure 9-26 Benchtop Average Background Count Rate Data Within 662 keV Energy Window for a Belt Speed of 0.25 m/s for Varying Accumulation Times
Figure 9-27 Benchtop Average Background Count Rate Data Within 662 keV Energy Window for a Belt Speed of 0.5 m/s for Varying Accumulation Times

Figure 9-28 Benchtop Average Background Count Rate Data Within 662 keV Energy Window for a Belt Speed of 1 m/s for Varying Accumulation Times

The data collected with the Cs-137 source, as described in Section 9.5.1, were averaged across 10 runs and then binned accordingly for accumulation times of 0.5 second, 1 second, and 2 seconds to show the difference in detector response. The data were then normalized to counts per second. For each measurement, the position of the source was registered in relation to the detector at the end of the accumulation period. Figures 9-29, 9-30, and 9-31 illustrate the effect on the detector response as a function of accumulation time for gross gamma data at speeds of 0.25 m/s, 0.5 m/s, and 1 m/s.
As the accumulation time increases, the magnitude of the peak in the detector's count rate when the source is nearest to the detector decreases. Longer accumulation times lead to the detector including counts from an area farther from the source in the reported measurement; when expressed as an average count rate for the time interval, this leads to a lower perceived count rate. This is most clearly seen in Figure 9-29 for a speed of 0.25 m/s, where the height of the count rate peak decreases for longer accumulation times. With actual field surveys, this could lead to masking of an increase in count rate from a source by the averaging effect from longer accumulation times. The potential for a source to be missed due to this masking effect increases as the source decreases in size. If the size of the source is larger than the distance that the detector would traverse during the accumulation time, then the accumulation time has less of an effect on the detectability of the source. This relationship between accumulation time and the magnitude of the peak count rate from the source is also seen with the 662 keV energy window data in Figures 9-32, 9-33, and 9-34.
Figure 9-29 Benchtop Average Gross Gamma Cs-137 Count Rate Data for a Belt Speed of 0.25 m/s for Varying Accumulation Times
Figure 9-30 Benchtop Average Gross Gamma Cs-137 Count Rate Data for a Belt Speed of 0.5 m/s for Varying Accumulation Times

Figure 9-31 Benchtop Average Gross Gamma Cs-137 Count Rate Data for a Belt Speed of 1 m/s for Varying Accumulation Times
Figure 9-32 Benchtop Average Cs-137 Count Rate Data Within 662 keV Energy Window for a Belt Speed of 0.25 m/s for Varying Accumulation Times

Figure 9-33 Benchtop Average Cs-137 Count Rate Data Within 662 keV Energy Window for a Belt Speed of 0.5 m/s for Varying Accumulation Times
Figure 9-34 Benchtop Average Cs-137 Count Rate Data Within 662 keV Energy Window for a Belt Speed of 1 m/s for Varying Accumulation Times

Having lower variability in the background data would lead to a lower MDCR, so longer accumulation times would seem to be ideal. However, the benefit of having lower variability in the data due to increased accumulation times must be balanced against the potential for a source to be missed during the actual survey, due to the averaging effect of increased accumulation times. The increase in count rate due to smaller distributed sources and point sources may only last for a few seconds, depending on the source strength and scan speed. By increasing the accumulation time, this short increase in count rate could be masked by averaging the counts over a longer period and a larger survey area. For sources distributed over a larger area, the increase in count rate as the detector moves over the source could last longer.
Therefore, it is possible for the increase in count rate to still be large enough to be detectable, even with increased accumulation times.
Longer accumulation times also lead to fewer total points collected during the survey. For smaller reference background areas and smaller survey units, this could result in too few data points being collected to provide confidence in the data. The characteristics of the sources that may be encountered during the surveys (i.e., point sources versus small or large distributed sources), as well as the overall size of the areas to be surveyed, should be considered when determining the appropriate accumulation time to use.
9.9 Impacts of Varying Radiological Background Within an Area

As described in Section 9.1, a reference area should be selected that is representative (e.g., geologically, radiologically) of the areas to be surveyed for determination of the a priori MDC. It is possible that after collecting CCD within the reference area, the distribution of the count rate appears bimodal, suggesting more than one radiological background within the area.
Example gamma walkover data were obtained to show this case for a potential reference area.
The data were collected using a towed array of six 2x2 NaI detectors mounted to a vehicle, along with a GPS unit to log geographic coordinates together with the gamma count rate data.
Figure 9-35 displays the time series plot of the example dataset. This visualization shows a distinct segmentation in the radiological backgrounds between the two subareas included in the data. A histogram of the gamma count rate data, given in Figure 9-36, shows a bimodal distribution. To determine whether there is a spatial difference in the radiological characteristics within the area, the data were plotted using the GPS coordinates collected during the survey.
Figure 9-35 Time Series Plot of Example Data with Varying Radiological Background
Figure 9-36 Gamma Count Rate Histogram for SU1B and SU1C Example Data Combined

As shown in Figure 9-37, there is a clear spatial difference within the overall area, as the northern section appears to have lower count rates than the southern section. The radiological differences between the two subareas result in increased variability within the combined dataset.
Figure 9-37 Map of Gamma Count Rate Data for SU1B and SU1C Example Data Combined

Plotting the data with the GPS coordinates allows for the proposed reference area to be divided into two smaller areas with differing radiological characteristics. Mapping each subarea separately can allow for a more accurate depiction of the distribution of activity. As shown in Figures 9-38 and 9-39, more detail about the spatial distribution of activity within each subarea is visible.
Figure 9-38 Map of Gamma Count Rate Data for SU1B Example Data

Figure 9-39 Map of Gamma Count Rate Data for SU1C Example Data

In the example shown below, the SU1B and SU1C datasets were separated to be analyzed independently.
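Where the histogram and map suggest two background populations, the logged records can be split programmatically before detection limits are computed. The sketch below groups the records into two candidate subareas using a simple two-cluster grouping of position and count rate; the column names, the clustering choice, and the synthetic records are illustrative assumptions and are not the procedure used to produce Figures 9-38 and 9-39.

```python
import numpy as np
import pandas as pd
from sklearn.cluster import KMeans

def split_reference_area(df):
    """Split CCD records into two candidate subareas using position and
    count rate; 'easting', 'northing', and 'cpm' are hypothetical column
    names for the logged GPS and count data."""
    features = df[["easting", "northing", "cpm"]].to_numpy(dtype=float)
    # Standardize so that position and count rate carry comparable weight.
    features = (features - features.mean(axis=0)) / features.std(axis=0)
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)
    return [df[labels == k] for k in (0, 1)]

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    n = 1000
    area_a = pd.DataFrame({"easting": rng.uniform(0, 100, n),
                           "northing": rng.uniform(50, 100, n),
                           "cpm": rng.normal(1243, 90, n)})
    area_b = pd.DataFrame({"easting": rng.uniform(0, 100, n),
                           "northing": rng.uniform(0, 50, n),
                           "cpm": rng.normal(1984, 120, n)})
    combined = pd.concat([area_a, area_b], ignore_index=True)
    for i, sub in enumerate(split_reference_area(combined)):
        print(f"Subarea {i}: n = {len(sub)}, mean cpm = {sub['cpm'].mean():.0f}")
```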
Figure 9-40 presents the time series of raw counts for SU1B. The average count is approximately 1,984 cpm, with substantial variability and an upward run toward the middle of the sequence.
Figure 9-40 Time Series Plot of SU1B Example Data

To further define the structure in the data, the ACF and PACF plots were assessed for each unit to determine the lag number to use for each dataset. Examination of the ACF and PACF plots for the SU1B dataset (Figure 9-41) shows persistent autocorrelation across many lags in the ACF, while the PACF indicates tapering beyond lag 18. This suggests the data may be reasonably modeled using an AR structure with up to 18 lag terms.
Figure 9-41 ACF and PACF Plots for SU1B Example Data

In interpreting the correlation structure of these data, greater emphasis is placed on the PACF plot, as the ACF displays a persistent correlation across many lags. Analysis of the PACF reveals a noticeable tapering of partial autocorrelations, with values diminishing substantially beyond lag k = 18. This pattern suggests that the data may be reasonably modeled with an AR process incorporating up to 18 lag terms.
While the localized methods are anticipated to provide superior performance, particularly given the spatial complexity and variability observed in the data, the results from the CCD-type MARSSIM and the non-CCD-type MARSSIM approaches are included for comparison. Table 9-8 presents the standard deviations of the differenced values for each method, along with their corresponding detection limits (LDs).
Table 9-8 Standard Deviations and Detection Limits for Each Method for the SU1B Example Data

| Technique | Standard Deviation (counts/interval) | Detection Limit (LD) (counts/interval) | Detection Limit (LD) (cpm) |
|---|---|---|---|
| MARSSIM-Non CCD | NA | 37.73 | 1,131 |
| MARSSIM-CCD | 14.91 | 48.91 | 1,467 |
| MP | 6.63 | 24.52 | 736 |
| MA | 7.01 | 25.77 | 773 |
| EWMA | 7.07 | 25.97 | 779 |

Table 9-8 summarizes the standard deviations of differenced values and their corresponding detection limits (LDs) for each method. Consistent with previous examples, the localized methods produce lower variability and lower LDs than the global MARSSIM approaches. For this dataset, the MP method yields the lowest LD, followed closely by MA and EWMA, while the MARSSIM-CCD approach produces the highest.
Given that this analysis is based on real-world data, the bootstrap procedures outlined in Section 9.3.8 were applied to assess both the bias and the precision associated with the LD estimates. Table 9-9 summarizes the corresponding results, including point estimates of bias, LPLs, UPLs, and ranges of the 95 percent bootstrap precision intervals.
Table 9-9 Bias and 95 Percent Bootstrap Confidence Intervals for Each Method for SU1B Example Data

| Type | Bias | LPL (cpm) | UPL (cpm) | UPL-LPL (cpm) |
|---|---|---|---|---|
| MARSSIM | 0.268 | 1,529 | 1,577 | 47 |
| MP | 0.430 | 719 | 753 | 34 |
| MA | 0.164 | 762 | 785 | 23 |
| EWMA | 0.138 | 769 | 791 | 22 |

Bootstrap analysis of the LD estimates (Table 9-9) shows that all methods produce negligible bias (<1), with EWMA offering the narrowest precision interval, reinforcing its stability and sensitivity.
Figure 9-42 Time Series Plot of SU1C Example Data

Figure 9-42 presents the raw count data for SU1C, with an observed mean of approximately 1,243 cpm. Overall, the distribution appears relatively stable, though it exhibits a gradual downward trend across much of the sequence, followed by a distinct upward shift toward the end. This temporal structure suggests potential autocorrelation in the data.
To better characterize the underlying dependency, the ACF and PACF plots were examined.
These diagnostics provide insight into the strength and persistence of lagged relationships in the series and help identify the appropriate lag order for AR modeling. Figure 9-43 shows the ACF and PACF plots for SU1C.
Figure 9-43 ACF and PACF Plots for SU1C Example Data

Figure 9-43 illustrates the autocorrelation patterns for SU1C. The ACF demonstrates a gradual decay, while the PACF exhibits a distinct cutoff after lag 12. Together, these patterns are consistent with an AR structure of order 12 (AR(12)), suggesting that an AR model incorporating 12 lag terms would appropriately capture the temporal dependence in this dataset.
Table 9-10 summarizes the standard deviations of the differenced values obtained for SU1C under each smoothing method, along with the corresponding detection limits (LDs). This comparison provides a direct assessment of method performance in terms of variability reduction and sensitivity to elevated signals.
Table 9-10 Standard Deviations and Detection Limits for Each Method for the SU1C Example Data

| Technique | Standard Deviation (counts/interval) | Detection Limit (LD) (counts/interval) | Detection Limit (LD) (cpm) |
|---|---|---|---|
| MARSSIM-Non CCD | NA | 30 | 896 |
| MARSSIM-CCD | 5.30 | 17 | 521 |
| MP | 2.34 | 10 | 312 |
| MA | 4.16 | 16 | 515 |
| EWMA | 4.40 | 17 | 492 |
As shown in Table 9-10, the application of localized smoothing methods once again yields substantial improvements relative to global MARSSIM. Among these approaches, the MP method produces the lowest standard deviation and the most favorable detection limit (LD), outperforming the MA, EWMA, and MARSSIM procedures.
Because this analysis is based on empirical field data, the bootstrap resampling procedure described in Section 9.3.8 was employed to evaluate the stability of the LD estimates.
Specifically, the bootstrap was used to quantify both the potential bias and the precision of each method, thereby providing a more rigorous assessment of their comparative performance.
The bootstrap results presented in Table 9-11 further reinforce these findings. Specifically, the MP method exhibits the smallest bias and the narrowest confidence interval among the approaches considered, underscoring its superior performance for SU1C.
Table 9-11 Bias and 95 Percent Bootstrap Confidence Intervals for Each Method for SU1C Data

| Type | Bias | LPL (cpm) | UPL (cpm) | UPL-LPL (cpm) |
|---|---|---|---|---|
| MARSSIM | -0.131 | 596.67 | 612.98 | 16.31 |
| MP | -0.072 | 306.39 | 318.99 | 12.60 |
| MA | -0.024 | 486.01 | 498.04 | 12.03 |
| EWMA | -0.023 | 508.96 | 521.39 | 12.43 |

Taken together, the analyses of SU1B and SU1C highlight both the consistency and the site-specific nuances in method performance. For SU1B, the EWMA approach provided the most favorable balance of reduced variability, low detection limits, and stable precision, while for SU1C, the MP method demonstrated clear superiority, yielding the lowest standard deviations, minimal bias, and the tightest confidence intervals. These results emphasize that, although localized smoothing methods consistently outperform global MARSSIM, the optimal choice of technique may depend on the characteristics of the dataset under study. In practice, this suggests that tailoring the smoothing method to the structure of individual survey units can maximize sensitivity and reliability in real-world monitoring applications.
9.10 Scanning Localized Elevated Areas Versus Wide Areas: Techniques, Considerations, and Effect on Minimum Detectable Concentration

MARSSIM recognizes the application of scanning methods and the resulting ability to detect small elevated levels of contamination. Additionally, the potential for discrete radioactive particles (DRPs) has been recognized, with specific guidance provided in DUWP-ISG-03, Contamination Control, Radiological Survey, and Dose Modeling Considerations to Support License Termination at Sites with Environmental Discrete Radioactive Particle Contamination, issued September 2024 (NRC 2024). The study by ORISE (ORISE 2023) describes the survey technique and effect on detection ability.
Applying a modeling and evaluation approach similar to that used by ORISE (2023), this section examines how the MDCR for CCD can be impacted by small or isolated areas of elevated activity.
The optimistic and pessimistic scenarios developed by ORISE for DRPs can be applied to a MARSSIM-defined small elevated area, or hotspot. There are two scan paths to consider for CCD: application of the serpentine path that is typically used for scans with vigilance to the CCD scan path, and, alternatively, a straight path such as for detectors in a towed array over a survey area. The serpentine and straight-scan paths each have an optimistic and a pessimistic view, due to the location of an assumed volumetric source.
Serpentine Path

In accordance with the ORISE technique, the physical setting of the surveyor and surrounding environment was modeled in a three-dimensional Cartesian coordinate system (i.e., x, y, and z axes). The hypothetical surveyor is standing in the +z direction and walking in the +y direction with constant velocity (v) in m/s. As the surveyor progresses in the forward (+y) direction, the detector moves side to side along the +/-x-axis, maintaining a constant ground-to-detector distance. Detector position through the surveyor's transect is modeled as a sine curve, which represents the detector's flat, serpentine motion during a survey transect.
To adequately represent the sine curve (ORISE 2023), coordinates were generated specifying the location of 100 evenly spaced points along the sine curve as the surveyor moves 1 meter in the y direction. Viewing the detector position in the xy plane (looking along the z-axis), the position along a portion of the surveyor's transect is illustrated in Figures 9-44 and 9-45. The detector position (represented by each dot in Figures 9-44 and 9-45) occurs at a specific time based on the surveyor's forward velocity, assuming the time between each dot is equivalent. For example, assume that the detector starts at location (0.0 m, 0.0 m); the surveyor velocity is 0.5 m/s; and the surveyor will traverse the 1-meter interval displayed in each of Figures 9-44 and 9-45 in 2 seconds. With 100 points depicted, the time between each point represents 0.02 second. Certain points are numbered to show relative locations of the detector position during the scans.
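As an illustrative sketch only, the detector positions along the serpentine transect can be generated directly from the sine-curve model. The lane width, forward speed, and one-sweep-per-meter assumption below are example values, not prescribed parameters.

```python
import numpy as np

# Assumed illustrative parameters: 1-meter lane width, 0.5 m/s forward speed,
# and one full side-to-side sweep of the detector per meter of forward travel.
lane_width = 1.0        # m
speed = 0.5             # m/s, surveyor forward speed in the +y direction
points_per_meter = 100  # evenly spaced modeling points over 1 m of forward travel

y = np.linspace(0.0, 1.0, points_per_meter, endpoint=False)   # forward position (m)
x = (lane_width / 2.0) * np.sin(2.0 * np.pi * y)              # side-to-side position (m)
t = y / speed                                                 # elapsed time (s)

# With these assumptions, consecutive points are 0.01 m and 0.02 s apart.
print(f"spacing: {y[1] - y[0]:.2f} m, {t[1] - t[0]:.2f} s")
```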
Figure 9-44 illustrates the optimistic (serpentine) path, where the detector eventually passes directly over the hotspot. Figure 9-45 illustrates the pessimistic (offset) path, where the detector follows a similar path, but the detector-to-hotspot distance is maximized.
For the pessimistic evaluation shown in Figure 9-45 for the offset path, the hotspot is centered with half of its area in the survey lane and half in the adjacent lane, and the detector's center comes close to, but never passes immediately over, the center of the hotspot.
Towed Array Example

Similarly, Figures 9-46 and 9-47 depict optimistic and pessimistic views of three detectors moving in the same direction over the same size elevated area. The optimistic scenario places the elevated area at the center of the 1-meter path, while the pessimistic scenario places half of the elevated area in the 1-meter path and the other half in the next survey lane. The paths of the three detectors are labeled A, B, and C in both figures. A towed array will have a straight-line path with no vigilance but with more control of speed and the resulting distance covered during each accumulation time interval.
Figure 9-44 Optimistic Pathway of CCD Walkover Scan for 0.25 m²
Figure 9-45 Pessimistic Pathway of CCD Walkover Scan for 0.25 m²
Figure 9-46 Optimistic Array Configuration for 0.25 m²
Figure 9-47 Pessimistic Array Configuration for 0.25 m²
9.10.1 Modeling of Small Elevated Area

Evaluating a Small Elevated Area

For evaluating detector response, it is not unusual to assign an efficiency assuming the detector is positioned over the source center. This assumption is consistent with the MARSSIM two-stage scanning technique, in which an audible increase is used to identify an increase in count rate, followed by a stationary measurement positioned over the identified area. This assumption may not be reasonable for CCD and small elevated areas of contamination. As shown in Figures 9-44 through 9-47, depending on the detector location within the 1-meter-wide scan path in the y direction, the over-center event may happen only once, if it happens at all.
Therefore, the approach described in this section considers the possibility that the detector will pass to the side of the elevated area (i.e., by some offset).
Modeling considered the standard MARSSIM area size of 0.25 m² as follows:

- Modeling (using MicroShield Version 8.02) of a small area of elevated activity in the soil is used to determine the net exposure rate produced by a radionuclide concentration at a distance of 10 centimeters above the source. This position is selected because it relates to the average height of the 2x2 NaI scintillation detector above the ground during scanning.
- Modeling analysis of the hotspot is for Cs-137 at a concentration of 1 pCi/g. The same approach can be applied to other gamma-emitting radionuclides, such as cobalt-60. The other factors are held constant, in accordance with the classical model: the depth of the area of elevated activity is 15 centimeters, and the dose point is 10 centimeters above the surface, at varying distances from the center of the hotspot at ground level. A soil density of 1.6 grams per cubic centimeter (g/cm³) is assumed.
The MicroShield geometry of a cylinder source with side shields was used, which allowed for consideration of shielding provided by the surrounding uncontaminated soil as the detector passes to locations outside the perimeter. Figure 9-46 illustrates this geometry.
Figure 9-46 MicroShield Configuration of Shielding and Measurement Locations

For this modeling, the radius of the elevated area is set to 28 centimeters, as shown in Figures 9-44 and 9-45. The size of the area provides an observation interval of at least 1 second. An interval assessment frequency of every 0.02 second has been used to provide a sufficiently fine mesh for evaluating the detector position and response as the spatial configuration changes. This configuration places the detector in the optimistic view directly over the source during the scan. The exposure rates at each 0.02-second interval are evaluated, and counts per second and counts per minute are evaluated for 1- and 2-second accumulation times.
The counts per interval (0.02 second) data for each distance from the elevated area surface center were obtained by applying the nominal 900 cpm per µR/h factor from Section 6.2.5.
Table 9-12 presents the total counts over 1- and 2-second intervals converted to cpm values.
Recorded counts over an accumulation time interval must be whole numbers; a fraction of a count cannot exist or be recorded. Therefore, for this example, all calculated fractional counts resulting from the interval modeling are rounded according to conventional rounding methods.
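The conversion from modeled exposure rate to recorded counts can be expressed compactly. The sketch below is an illustrative simplification of the approach described above: the exposure-rate-versus-distance values would come from the MicroShield output (none are included here), and the 900 cpm per µR/h conversion is the nominal factor from Section 6.2.5.

```python
import numpy as np

CPM_PER_URH = 900.0   # nominal NaI conversion factor (cpm per uR/h), Section 6.2.5
DT = 0.02             # time step between modeled detector positions (s)

def counts_per_step(exposure_rates_urh):
    """Convert exposure rate at each 0.02-s detector position to counts in that step."""
    cps = np.asarray(exposure_rates_urh) * CPM_PER_URH / 60.0   # counts per second
    return cps * DT                                             # fractional counts per step

def accumulated_counts(exposure_rates_urh, accumulation_s=1.0):
    """Sum fractional counts over an accumulation interval and round to whole counts."""
    step_counts = counts_per_step(exposure_rates_urh)
    steps_per_interval = int(round(accumulation_s / DT))
    totals = [step_counts[i:i + steps_per_interval].sum()
              for i in range(0, len(step_counts), steps_per_interval)]
    return [int(round(c)) for c in totals]   # recorded counts must be whole numbers

# Placeholder usage with an exposure-rate profile taken from the modeled
# distance-response curve (values not shown here):
# profile = np.array([...])   # uR/h at successive 0.02-s detector positions
# print(accumulated_counts(profile, accumulation_s=2.0))
```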
Figure 9-47 presents the exposure rates at varying distances from the center for a MARSSIM standard elevated area (0.25 m²), at 10 centimeters above the land surface, with Cs-137 at 1 pCi/g.

Figure 9-47 MicroShield Results for 0.25 m² Elevated Area with 1 pCi/g Cs-137 [plot of exposure rate (µR/h) versus distance from center (m)]

The resulting counts in Table 9-12 (and the corresponding count rates) reflect the increase expected from 1 pCi/g Cs-137 in a 0.25 m² elevated area above background. As shown, under the optimistic scenario, the detector response over a 2-second accumulation time for the standard walking survey is 14 percent (3/22) of the response if the contaminated area were widespread. For the pessimistic scenario, this value is 9 percent (2/22).
Table 9-12 Total Count over 1- and 2-Second Accumulation Intervals for a 0.25 m² Elevated Area with 1 pCi/g Cs-137 versus Wide-Area Contamination

| Type | Accumulation Interval | Optimistic (counts) | Optimistic Count Rate (cpm) | Pessimistic (counts) | Pessimistic (cpm) | Wide Area* (counts) | Wide Area (cpm) |
|---|---|---|---|---|---|---|---|
| Walkover | 1st second | 2 | 120 | 1 | 60 | 22 | 1,323 |
| Walkover | 2nd second | 1 | 60 | 1 | 60 | | |
| Walkover | 0-2 seconds | 3 | 180 | 2 | 120 | 44 | |
| Array | 1st second | 3 | 180 | 3 | 180 | 22 | 1,323 |
| Array | 2nd second | 1 | 60 | 1 | 60 | | |
| Array | 0-2 seconds | 4 | 240 | 4 | 240 | 44 | |

* Wide area denotes an area of relatively uniform contamination covering the entire area for the accumulation time.
Evaluating detectability for small elevated areas requires consideration of the actual field condition and its varying background. As an example, Table 9-6 gives an LD value of 92 counts in a 2-second accumulation time interval (corresponding to 2,748 cpm) for EWMA. Using the ERC value of 0.247 µR/h per pCi/g and the calibration factor of 900 cpm per µR/h (both from Section 6.2.5), this MDCR value corresponds to an MDC of approximately 12 pCi/g Cs-137.
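The conversion from a count-rate detection level to a concentration can be written directly. The following is an illustrative sketch using the values quoted above (2,748 cpm, 900 cpm per µR/h, and an ERC of 0.247 µR/h per pCi/g); the function name and structure are for illustration only.

```python
def mdc_from_mdcr(mdcr_cpm, cal_cpm_per_urh, erc_urh_per_pci_g):
    """Convert a minimum detectable count rate (cpm) to an MDC (pCi/g)."""
    exposure_rate_urh = mdcr_cpm / cal_cpm_per_urh      # uR/h at the detector
    return exposure_rate_urh / erc_urh_per_pci_g        # pCi/g

# Values from the EWMA example above:
print(mdc_from_mdcr(2748.0, 900.0, 0.247))   # ~12.4 pCi/g Cs-137
```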
As discussed in Section 9.8 for small elevated areas, the accumulation time while the detector is over the area becomes important. For the optimistic scenario and a 2-second time interval, the 3 counts in the 2-second accumulation time (180 cpm) account for 0.033, or 3.3 percent (3/92), of the MDCR. With the MDC of 12 pCi/g for wide-area Cs-137 contamination, the MDC for this defined small elevated area is 368 pCi/g (12 pCi/g divided by 0.033). For the pessimistic scenario, the calculated 2 counts in the 2-second interval, corresponding to 120 cpm, give a corresponding MDC of about 550 pCi/g for Cs-137.
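The scaling described above amounts to dividing the wide-area MDC by the fraction of the detection-level counts contributed by the elevated area during the accumulation interval. A minimal sketch, using the values quoted above (92-count LD, 12 pCi/g wide-area MDC, and 3 or 2 counts for the optimistic and pessimistic cases), follows.

```python
def small_area_mdc(wide_area_mdc, counts_over_area, ld_counts):
    """Scale a wide-area MDC to a small elevated area.

    wide_area_mdc    : MDC for uniform wide-area contamination (pCi/g)
    counts_over_area : net counts accumulated while passing the elevated area
    ld_counts        : detection level L_D in counts for the same interval
    """
    fraction = counts_over_area / ld_counts
    return wide_area_mdc / fraction

# Values from the example above (0.25 m2 area, 2-second accumulation, EWMA L_D = 92 counts)
print(small_area_mdc(12.0, 3, 92))   # optimistic: ~368 pCi/g
print(small_area_mdc(12.0, 2, 92))   # pessimistic: ~552 pCi/g
```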
Performing the same evaluation as above using the energy window data from Section 9.6.2, the MDCR presented is 364 cpm, or 12 counts in a 2-second accumulation time. For the optimistic scenario and a 2-second time interval, the 3 counts in the 2-second accumulation time (180 cpm) correspond to 0.25, or 25 percent (3/12), of the MDCR. With the MDC of 12 pCi/g for wide-area Cs-137 contamination, the MDC for this defined small elevated area is 48 pCi/g (12 pCi/g divided by 0.25). Using the spectral data (Section 9.6.3) with an MDCR of 11 cpm (10.8 rounded), the corresponding MDC is approximately the same as that for wide-area contamination (i.e., 11 cpm for spectral data versus 12 cpm).
Performing similar evaluations using the pessimistic scenario results in an MDCR and corresponding MDC about a factor of 1.5 higher, reflecting the decrease in counts from 3 for the optimistic scenario to 2 for the pessimistic scenario (i.e., 3/2, or 1.5).
As illustrated by the data in Table 9-12, for a towed array, an increase in counts is modeled, reflecting a modest improvement in detection level. For the optimistic scenario, the array has a calculated count of 4 versus 3 for the single detector serpentine motion in a 0-2 second accumulation time. For the pessimistic scenario, the comparison is 4 counts for the array versus 2 counts for the serpentine scan technique.
Evaluating a Larger Elevated Area

As the elevated area increases, the detector will approach its full response to the default wide-area contamination. For this example, the area is assumed to be 0.8 m² with a radius of 50 centimeters. Figures 9-48, 9-49, 9-50, and 9-51 illustrate the scanning survey pattern. Each dot represents a 0.02-second modeling segment. The optimistic path contains about 43 dots over the first second and about 39 more dots over the next second. The pessimistic path over the source contains about 36 dots. The exposure rates at each 0.02-second interval are evaluated, and counts per second and counts per minute are evaluated for 1- and 2-second accumulation time intervals.
Figure 9-48 Optimistic Pathway of CCD Walkover Scan for 0.8 m²
Figure 9-49 Pessimistic Pathway of CCD Gamma Walkover Scan for 0.8 m²
Figure 9-50 Optimistic Pathway of CCD Towed Scan for 0.8 m²
Figure 9-51 Pessimistic Pathway of CCD Towed Scan for 0.8 m²
Similar to the 0.25 m² modeling, MicroShield was used to calculate the exposure rate at various locations for Cs-137 at a concentration of 1 pCi/g. Except for the change in areal dimension to 0.8 m² (radius of 50 centimeters), the other factors were assumed to be the same as for the 0.25 m² modeling: the depth of the area of elevated activity is 15 centimeters, and the dose point is 10 centimeters above the surface, at varying distances from the center of the elevated area at ground level. A soil density of 1.6 g/cm³ is assumed.
Figure 9-52 presents the results of the MicroShield calculations for the larger area, holding the other modeling parameters constant as assumed for the 0.25 m² area while increasing the surface area from 0.25 m² to 0.8 m².
Figure 9-52 MicroShield Results for 0.8 m² Elevated Area with 1 pCi/g Cs-137 [plot of exposure rate (µR/h) versus vertical distance from the center of the source top (m)]

Table 9-13 presents the results of this modeling of source and detector response. The calculated counts in Table 9-13 (and the corresponding count rates) reflect the increase expected from 1 pCi/g Cs-137 in a 0.8 m² elevated area above background. As indicated, under the optimistic scenario, the detector response over a 2-second accumulation time is 32 percent (7/22) of the response if the contaminated area were widespread. For the pessimistic scenario, the value is approximately 23 percent (5/22).
Table 9-13 Count Outputs over 1- and 2-Second Intervals for 0.8 m² Elevated Area with 1 pCi/g Cs-137

| Type | Interval | Optimistic (counts) | Optimistic (cpm) | Pessimistic (counts) | Pessimistic (cpm) | Wide Area (counts) | Wide Area (cpm) |
|---|---|---|---|---|---|---|---|
| Walkover | 1st second | 3 | 180 | 3 | 180 | 22 | 1,323 |
| Walkover | 2nd second | 4 | 240 | 1 | 60 | | |
| Walkover | 0-2 seconds | 7 | 210 | 5* | 150 | | |
| Array | 1st second | 16 | 960 | 15 | 900 | | |
| Array | 2nd second | 16 | 960 | 15 | 900 | | |
| Array | 0-2 seconds | 32 | 960 | 30 | 900 | 44 | 1,323 |

* The rounded value of 5 for 0-2 seconds includes fractional counts that are excluded when the first and second intervals are rounded separately.
Using the same example as above for the 0.25 m² elevated area, the detectability for a 0.8 m² elevated area can be evaluated. For the optimistic scenario and a 2-second time interval, the 7 counts in the 2-second accumulation time (210 cpm) correspond to 0.076, or 7.6 percent (7/92), of the MDCR. With the MDC of 12 pCi/g for wide-area Cs-137 contamination, the MDC for this defined elevated area is 158 pCi/g (12 pCi/g divided by 0.076). For the pessimistic scenario, the calculated 5 counts in the 2-second interval, corresponding to 150 cpm, give a corresponding MDC of about 221 pCi/g for Cs-137.
Performing the same evaluation as above using the energy window data from Section 9.6.2, the MDCR presented is 364 cpm, or 12 counts in a 2-second accumulation time. For the optimistic scenario and a 2-second time interval, the 7 counts in the 2-second accumulation time (210 cpm) correspond to 0.58, or 58 percent (7/12), of the MDCR. With the MDC of 12 pCi/g for wide-area Cs-137 contamination, the MDC for this defined elevated area is 21 pCi/g (12 pCi/g divided by 0.58). Using the spectral data (Section 9.6.3) with an MDCR of 11 cpm (10.8 rounded), the corresponding MDC is approximately the same as that for wide-area contamination (i.e., 11 cpm for spectral data versus 12 cpm).
Performing similar evaluations using the pessimistic scenario results in an MDCR and corresponding MDC about a factor of 1.4 higher, reflecting the decrease in counts from 7 for the optimistic scenario to 5 for the pessimistic scenario (i.e., 7/5, or 1.4).
As illustrated by the data in Table 9-13, for a towed array, an increase in counts is modeled, reflecting a significant improvement in detection level. For the optimistic scenario, the array has a calculated count of 32 versus 7 for the single-detector serpentine motion in a 0-2 second accumulation time. For the pessimistic scenario, the comparison is 30 counts for the array versus 5 for the serpentine scan technique.
As illustrated in Section 9.8, decreasing scan speed and increasing the accumulation time yields better detection capability; however, increasing accumulation alone runs the risk of masking very small areas of elevated activity. As the size of the elevated area increases, the increased accumulation time improves detection capability with a decreased risk of missing small elevated areas.
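The tradeoff between residence time over the elevated area and the background counts added by longer accumulation can be illustrated with a simple calculation. The sketch below is a bounding illustration only, using assumed background and net count rates and the well-known Currie (1968) detection-level formula rather than the MicroShield-based evaluation above; once the accumulation interval exceeds the residence time over a small area, the signal fraction of the detection level declines.

```python
import math

def currie_ld_counts(background_cpm, accumulation_s):
    """Currie detection level L_D (counts) for a paired blank, L_D = 2.71 + 4.65*sqrt(B)."""
    b_counts = background_cpm / 60.0 * accumulation_s
    return 2.71 + 4.65 * math.sqrt(b_counts)

def signal_counts(net_cpm_over_area, residence_s, accumulation_s):
    """Net counts from a small elevated area within one accumulation interval."""
    return net_cpm_over_area / 60.0 * min(residence_s, accumulation_s)

# Assumed illustrative values: 3,000 cpm background, 180 net cpm while over the
# area, and about 1 second of residence time over a small area at walking speed.
for acc in (1.0, 2.0, 4.0, 8.0):
    ld = currie_ld_counts(3000.0, acc)
    sig = signal_counts(180.0, 1.0, acc)
    print(f"accumulation {acc:3.0f} s: L_D = {ld:5.1f} counts, "
          f"signal = {sig:3.1f} counts, signal/L_D = {sig / ld:.2f}")
```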
9.11 Additional Variables Affecting Minimum Detectable Concentrations

Several factors pertaining to the instrumentation selected for a given survey, and how the instruments are used to collect data, have a direct impact on the measured count rate, as discussed in Section 4. The detector must be properly calibrated for the sources that are expected in the field. This includes calibrating at the source-to-detector distance planned for field surveys. Selecting the appropriate source-to-detector distance should be part of the DQO process, as differences in source-to-detector distance alter the detector's field of view and the resulting count rate. Additionally, the detector response for lower energy beta emitters is more susceptible to changes in the source-to-detector distance, as described in Section 4.2.
The characteristics of survey units that could affect survey performance should be identified and factored into the survey planning process. For example, it may not be appropriate to apply a CCD scan MDC based on a reference area survey conducted on a flat area to a survey conducted within a trench, as the detector response within the trench will be influenced by the floor and sidewalls. The effect of differing geometry is more pronounced for smaller, narrower trenches and when the survey is near corners. Another characteristic that should be considered when planning surveys is moisture content. Increased water content can lead to increased attenuation, especially for low-energy gammas, and may need to be corrected for in the source efficiency or in the modeling of the source-to-detector response. Ideally, the expected conditions of a survey unit and the intended reference background area should be as similar as possible.
As described in Section 5.1, it is widely expected that different types of material will be encountered when surveying a given site or facility, and that the background count rates for each material may vary. Therefore, initial evaluations of the site should aim to identify the different material types present and characterize them appropriately for determining the MDC. It is possible for different material types to be visually similar but exhibit noticeably different radiological characteristics. This can be a challenge when determining the MDC from CCD that unknowingly covers material with two different background radiation levels. A histogram of all the data may show a bimodal distribution, and a time-series plot of the data may show temporal differences in the count rates as the detector traversed between the two material types. If the MDC were calculated using the entire dataset, then the increased variability would drive the MDC higher. However, if areas with the different radiological backgrounds can be delineated and separated using time-series data, geographic coordinates, or both, then the data can also be separated and multiple MDCs could be calculated. This approach should be considered if it is possible to determine, in the planning stages of the field surveys, which type of background material should be assumed for a given area. This could be difficult if there are no visible differences.
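Where a CCD dataset spans two background populations, the separation described above can be prototyped simply once the material regions are delineated (e.g., from geographic coordinates or time-series breakpoints). The sketch below is an illustrative example with hypothetical labels; it computes background statistics for each delineated region rather than a single set of statistics for the pooled data, supporting separate MDC calculations.

```python
import numpy as np

def stats_by_region(count_rates_cpm, region_labels):
    """Mean, standard deviation, and sample size of CCD count rates for each region.

    count_rates_cpm : 1-D array of logged count rates (cpm)
    region_labels   : matching array of region identifiers (e.g., material type
                      assigned from GPS coordinates or time-series breakpoints)
    """
    count_rates_cpm = np.asarray(count_rates_cpm, dtype=float)
    region_labels = np.asarray(region_labels)
    results = {}
    for region in np.unique(region_labels):
        values = count_rates_cpm[region_labels == region]
        results[region] = {
            "n": values.size,
            "mean_cpm": values.mean(),
            "sd_cpm": values.std(ddof=1),
        }
    return results

# Hypothetical usage: pooled data covering two visually similar materials.
# cpm = np.array([...]); material = np.array(["soil", "soil", "gravel", ...])
# print(stats_by_region(cpm, material))
```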
As described in Section 9.1, CCD does not require the surveyor to monitor and react to the detector response during a survey. In most cases, however, CCD is collected by a surveyor who likely still has the ability to see the measured count rate as the survey progresses. It is possible that during a survey, the surveyor could pause in one location for a prolonged period, either due to a potential area of interest or other factors, while the detector is still registering and logging the count rate. If the surveyor pauses on an area with an elevated count rate, this could greatly affect the statistics (i.e., the variance) of the overall dataset, as a larger number of readings with a higher count rate would be introduced than if the surveyor continued the survey without pausing. If GPS data are available, then it is possible to plot these data on a map and determine where clusters exist due to pauses during the survey. It may be necessary to identify and remove the excess data points collected in a single location to obtain a more accurate dataset and, in turn, more accurate statistics to use in the MDC calculations.
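One simple way to flag and thin readings logged while the surveyor was paused is to compare successive positions against a minimum movement threshold. The sketch below is a rough illustration with assumed thresholds; an actual application would use projected coordinates or a proper geodesic distance and a site-specific definition of a pause.

```python
import numpy as np

def flag_paused_readings(x_m, y_m, min_step_m=0.05, max_consecutive=3):
    """Flag CCD readings collected while the detector was essentially stationary.

    x_m, y_m        : detector positions in a projected coordinate system (m)
    min_step_m      : movement below this distance is treated as 'not moving'
    max_consecutive : keep at most this many consecutive stationary readings;
                      flag the rest for review or removal
    Returns a boolean array; True marks excess stationary readings.
    """
    x_m = np.asarray(x_m, dtype=float)
    y_m = np.asarray(y_m, dtype=float)
    steps = np.hypot(np.diff(x_m), np.diff(y_m))
    flagged = np.zeros(len(x_m), dtype=bool)
    run = 0
    for i, step in enumerate(steps, start=1):
        run = run + 1 if step < min_step_m else 0
        if run > max_consecutive:
            flagged[i] = True   # excess reading at essentially the same location
    return flagged

# Example usage: retain only readings not flagged as excess pause data
# keep = ~flag_paused_readings(easting, northing)
# cleaned_cpm = cpm[keep]
```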
9.12 References

Currie, L.A., Limits for Qualitative Detection and Quantitative Determination, Analytical Chemistry, Vol. 40, Issue 3, March 1, 1968.
Falkner, J., and C. Marianno, Modeling minimum detectable activity as a function of detector speed, Radiation Detection Technology and Methods, Vol. 3, article number 25, pp. 1-8, April 9, 2019.
International Organization for Standardization, ISO 7503-1:1988, Evaluation of Surface Contamination – Part 1: Beta Emitters and Alpha Emitters, 1988.
International Organization for Standardization, ISO 7503-3:2016, Measurement of Radioactivity – Measurement and Evaluation of Surface Contamination – Part 3: Apparatus Calibration, 2016.
Marianno, C.M., K.A. Higley, S.C. Moss, and T.S. Palmer, An experimental determination of FIDLER scanning efficiency at specific speeds, Health Physics, 84(2):197-202, 2003. doi: 10.1097/00004032-200302000-00007.
Oak Ridge Institute for Science and Education (ORISE), Estimating Scan Minimum Detectable Activities of Discrete Radioactive Particles, Final Technical Report, N. Altic and D. King, December 2023. NRC Agencywide Documents Management and Access System Accession No. ML24004A133.
Pacific Northwest National Laboratory (PNNL), Overview of a Methodology for Calculating the A Priori Scan Minimum Detectable Concentration for Post Processed Radiological Surveys, June 2023.
U.S. Nuclear Regulatory Commission, NUREG-1507, Revision 1, Minimum Detectable Concentrations with Typical Radiation Survey Instruments for Various Contaminants and Field Conditions, August 2020.
U.S. Nuclear Regulatory Commission, DUWP-ISG-03, Interim Staff Guidance, Contamination Control, Radiological Survey, and Dose Modeling Considerations to Support License Termination at Sites with Environmental Discrete Radioactive Particle Contamination, September 2024.
U.S. Nuclear Regulatory Commission, NUREG-1575, Revision 2, Multi-Agency Radiation Survey and Site Investigation Manual (MARSSIM), U.S. Nuclear Regulatory Commission, U.S. Environmental Protection Agency, U.S. Department of Energy, U.S. Department of Defense, 2025.
9.13 Bibliography

Brockwell, P.J., and R.A. Davis, Introduction to time series and forecasting (3rd ed.), Springer, 2016. https://doi.org/10.1007/978-3-319-29854-2

Box, G.E.P., and G.M. Jenkins, Some recent advances in forecasting and control, Journal of the Royal Statistical Society: Series C (Applied Statistics), 19(2), 91-109, 1970. https://doi.org/10.2307/2344836
Box, G.E.P., G.M. Jenkins, G.C. Reinsel, and G.M. Ljung, Time series analysis: Forecasting and control (5th ed.), Wiley, 2015.
Chatfield, C., The analysis of time series: An introduction (6th ed.), Chapman and Hall/CRC, 2003.
Hyndman, R.J., and G. Athanasopoulos, Forecasting: Principles and Practice (3rd ed.), May 2021. https://otexts.com/fpp3/
Makridakis, S., and M. Hibon, The M3-Competition: Results, conclusions and implications, International Journal of Forecasting, 16(4), 451-476, 2000. https://doi.org/10.1016/S0169-2070(00)00057-1

Shumway, R.H., and D.S. Stoffer, Time series analysis and its applications: With R examples (4th ed.), Springer, 2017. https://doi.org/10.1007/978-3-319-52452-8

U.S. Nuclear Regulatory Commission, NUREG/CR-4007, Lower Limit of Detection: Definition and Elaboration of a Proposed Position for Radiological Effluent and Environmental Measurements, September 1984.
U.S. Nuclear Regulatory Commission, NUREG/CR-6364, Human Performance in Radiological Survey Scanning, BNL-NUREG-52474, March 1998.