ML23193A782

Site: Electric Power Research Institute
Issue date: 08/11/2023
From: Licensing Processes Branch
Shared Package: ML23193A749
References: EPID L-2021-TOP-0006
FINAL SAFETY EVALUATION BY THE OFFICE OF NUCLEAR REACTOR REGULATION

ELECTRIC POWER RESEARCH INSTITUTE TECHNICAL REPORT 3002018337, USE OF DATA VALIDATION AND RECONCILIATION METHODS FOR MEASUREMENT UNCERTAINTY RECAPTURE: TOPICAL REPORT

ELECTRIC POWER RESEARCH INSTITUTE

EPID NO. L-2021-TOP-0006

TABLE OF CONTENTS

1. INTRODUCTION .......... 1
2. REGULATORY EVALUATION .......... 1
2.1. Applicable Regulations .......... 4
2.2. Applicable Guidance .......... 5
3. TECHNICAL EVALUATION .......... 6
3.1. Risk Evaluation of Impact of Errors in the Output from the DVR Methodology .......... 8
3.1.1. What Can Go Wrong? .......... 8
3.1.1.1. Failure Modes .......... 9
3.1.1.2. Failure Scenarios .......... 11
3.1.2. How Likely Is It? .......... 12
3.1.3. What Are the Consequences? .......... 13
3.1.3.1. Consideration of DVR Penalty in ATP Error Determination .......... 14
3.1.3.2. Failure Scenario 1 .......... 17
3.1.3.3. Failure Scenario 2 .......... 17
3.1.3.4. Failure Scenario 3 .......... 20
3.1.4. What Is the Risk in the Answers to Questions (1) - (3) Being Incorrect? .......... 22
3.1.4.1. Risk Evaluation of What Can Go Wrong? .......... 23
3.1.4.2. Risk Evaluation of How Likely Is It? .......... 23
3.1.4.3. Risk Evaluation of What Are the Consequences? .......... 23
3.1.5. Risk Evaluation Summary .......... 24
3.2. Data Validation and Reconciliation as an Application of a Digital Twin .......... 24
3.3. NRC Review of Modeling and Simulation .......... 25
3.4. NRC Review of Instrumentation System Contribution to Uncertainty .......... 26
3.5. DVR Method Overview .......... 26
3.5.1. Calculating the Statistical Representation of Each Parameter Measurement .......... 27
3.5.2. Modifying the Statistics to Account for Process Measurement and Instrument Channel Uncertainties .......... 28
3.5.3. The Constraint Equations .......... 32
3.5.4. Calculating the Reconciled Means .......... 35
3.5.5. Calculating the Reconciled Uncertainties .......... 39
3.5.6. DVR Risk Evaluation .......... 42
3.6. NRC Review of Feedwater Flow Rate Uncertainty from Previous MUR Applications .......... 42
4. DVR Conditions and Limitations .......... 44
4.1.1. DVR Condition and Limitation 1 .......... 46
4.1.1.1. Sources of Instrument Channel Error .......... 46
4.1.1.2. Adjustment for Systematic and Random Error .......... 48
4.1.1.3. Staff Observations .......... 49
4.1.2. DVR Condition and Limitation 2 .......... 50
4.1.3. DVR Condition and Limitation 3 .......... 51
4.1.4. DVR Condition and Limitation 4 .......... 52
4.1.5. DVR Condition and Limitation 5 .......... 52
4.1.6. DVR Condition and Limitation 6 .......... 53
4.1.7. DVR Condition and Limitation 7 .......... 53
4.1.8. DVR Condition and Limitation 8 .......... 54
4.1.9. DVR Condition and Limitation 9 .......... 55
4.1.10. DVR Condition and Limitation 10 .......... 57
4.1.11. DVR Condition and Limitation 11 .......... 57
5. Conclusions .......... 58
6. REFERENCES .......... 58
7. Appendix A - DVR Example .......... 60
7.1. Redundant Flow Measurements in a Pipe .......... 61
7.2. Combining Redundant Measurements .......... 62
7.3. Introducing Constraints .......... 63
7.4. Calculating the Reconciled Mean .......... 66
7.5. Calculating the Reconciled Variance .......... 67
7.6. Example Conclusion .......... 70
7.7. Sensitivity Study - Means With Different .......... 71
1. INTRODUCTION

By letter dated January 27, 2021 (Ref. 1), the Electric Power Research Institute (EPRI) submitted topical report (TR) EPRI Technical Report 3002018337, Use of Data Validation and Reconciliation Methods for Measurement Uncertainty Recapture (Ref. 3), to the U.S. Nuclear Regulatory Commission (NRC) for review and approval. The purpose of the report is to establish the technical basis for using Data Validation and Reconciliation (DVR) to perform measurement uncertainty recapture (MUR) power uprates and to substantiate the uncertainty claims associated with the DVR process.
The complete list of correspondence between the NRC and EPRI is provided in Table 1 below.
This includes Requests for Additional Information (RAIs), responses to RAIs, and any other correspondence relevant to this review.
Table 1: List of Key Correspondence

| Sender | Document | Document Date | Reference |
|---|---|---|---|
| EPRI | Submittal Letter | January 27, 2021 | 1 |
| EPRI | Topical Report | November 2020 | 3 |
| NRC | Acceptance Letter | March 16, 2021 | 4 |
| NRC | RAI - Round 1 | May 2, 2022 | 5 |
| EPRI | RAI Response - Round 1 | August 8, 2022 | 6 |
| NRC | RAI - Round 2 | December 8, 2022 | 7 |
| EPRI | RAI Response - Round 2 | March 8, 2023 | 8 |
2. REGULATORY EVALUATION

Nuclear power plants are licensed to operate at a specified maximum core thermal power, called rated thermal power (RTP). To ensure that a plant does not exceed its RTP, each licensee must monitor the core thermal power (CTP) of the reactor and ensure that reactor operation is maintained within the licensed RTP. The CTP is an estimated value based on a steam plant calorimetric heat balance calculation. To estimate the CTP (i.e., to obtain an acceptably accurate estimate of the true value of CTP), the plant power is calculated under steady-state operating conditions by recording data from the key plant process instruments that contribute to the reactor heat balance calculation and that measure the net heat and power generated by the nuclear steam supply system. The accuracy of the CTP estimate depends on the accuracy of all the measurements from the instrumentation and equipment providing heat and power data to the calorimetric heat balance calculation. The CTP estimate is most heavily influenced by the uncertainty in the measurements of feedwater flow rate and feedwater net temperature; it depends to a lesser extent on the uncertainties of the other measured parameters contributing to the heat balance calorimetric calculation.
The neutron flux instrumentation is used to continuously monitor reactor power. It provides an input to the reactor protection system (RPS) to trip the reactor during transient events that could result in fuel design limits being exceeded. An assumption in the design of the RPS is that, at the onset of a design basis transient or accident, the reactor is operating at its maximum licensed RTP. Therefore, to ensure the power level indication from the neutron flux instruments is as accurate as possible, the neutron flux channels are frequently calibrated to accommodate the effects of fuel burnup, flux pattern changes, and neutron flux instrumentation channel drift. This calibration relies on comparing the reactor power reading based on neutron flux to the CTP estimate calculated from the plant calorimetric heat balance data.
In summary, accurate measurements of feedwater flow rate and temperature are needed to accurately estimate the CTP, which is used in the periodic calibration of the nuclear flux instrumentation. The uncertainties in the feedwater flow rate and feedwater temperature measurements are the dominant contributors to the total CTP uncertainty.
Nuclear power plants have typically measured the feedwater flow rate by measuring the differential pressure across one of two types of flow elements: a precision calibrated venturi flow piping section or a flow nozzle. For both types of flow elements, the differential pressure impulse signal is proportional to the square of the feedwater velocity in the pipe. Of the two flow element types, the venturi is more commonly used because of its relatively low unrecoverable head loss; higher unrecoverable head losses require a greater energy contribution to the motive forces associated with feedwater delivery to the reactor system. However, flow element fouling (typically greater in a venturi than in a flow nozzle) introduces errors into the feedwater flow measurement. These errors, along with errors due to the transmitter, the analog signal conversion, and any analog-to-digital conversion, must be accounted for in the feedwater flow measurement uncertainty analysis.
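The square-root relationship between the differential pressure signal and the flow rate can be sketched with the classical venturi equation. This is a generic illustration only: the discharge coefficient, density, and geometry below are hypothetical placeholders, not values from the TR or any plant.

```python
import math

def venturi_mass_flow(dp_pa, density, throat_area_m2, beta, discharge_coeff=0.98):
    """Mass flow rate (kg/s) from the differential pressure across a venturi.

    Classical venturi relation: m_dot = C * A_t * sqrt(2 * rho * dP) / sqrt(1 - beta^4).
    Because dP is proportional to the square of the velocity, the inferred
    flow varies with the square root of the dP signal.
    """
    return (discharge_coeff * throat_area_m2
            * math.sqrt(2.0 * density * dp_pa)
            / math.sqrt(1.0 - beta**4))

# Quadrupling the dP signal corresponds to doubling the inferred flow:
m1 = venturi_mass_flow(dp_pa=40_000, density=900.0, throat_area_m2=0.05, beta=0.6)
m2 = venturi_mass_flow(dp_pa=160_000, density=900.0, throat_area_m2=0.05, beta=0.6)
```

This square-root dependence is why errors in the dP measurement (e.g., from fouling or transmitter drift) propagate nonlinearly into the flow measurement uncertainty.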
To ensure the instrumentation uncertainties used in determining the CTP were adequately accounted for, Appendix K, ECCS [Emergency Core Cooling System] Evaluation Models, to Title 10 of the Code of Federal Regulations (10 CFR) Part 50 formerly required licensees to assume that the reactor has been operating continuously at a power level at least 2 percent higher than the RTP when performing loss-of-coolant accident and ECCS analyses.
A change to 10 CFR Part 50, Appendix K, was published in the Federal Register on June 1, 2000 (65 FR 34913), and became effective July 31, 2000. This change allows licensees to use an uncertainty less than 2 percent, provided the alternative value has been demonstrated to still account for uncertainties in determining the CTP. Many plants have justified a reduction in this uncertainty by using a more precise feedwater flow measurement and have installed leading-edge feedwater flow meters (LEFMs), which are capable of operation with a smaller measurement uncertainty (and no additional unrecoverable head loss) than venturi flow meters. In some cases, the licensee uses the reading from the LEFM to directly measure the feedwater mass flow; in other cases, the licensee uses the reading from the LEFM to periodically calibrate the venturi flow measurement before the venturi flow is used in the calculation of CTP.
This change to 10 CFR Part 50, Appendix K, did not authorize increases in licensed power levels for individual nuclear power plants. Therefore, any licensee wishing to increase its licensed power level and take credit for this lower measurement uncertainty must request an amendment to its license in accordance with 10 CFR 50.90, Application for amendment of license, construction permit, or early site permit. Such requests for power uprate associated with the reduction of uncertainty in the measured CTP have been labeled measurement uncertainty recapture power uprates and have been very common.
For example, a licensee may determine that the use of LEFM measurements of feedwater mass flow could reduce the CTP uncertainty from +/-2.0 to +/-0.6 percent. Suppose such a licensee has a current licensed RTP of 3500 MW (100 percent power). That licensee would have had to perform its safety analysis at 3570 MW (102 percent power) to account for the original impact of measurement uncertainty on the CTP estimate. For simplicity, we will call this the Analytical Thermal Power (ATP) and define it using the following equation.
ATP = RTP x (1 + u_CTP)    (2.1)

Where:

ATP - the Analytical Thermal Power; the thermal power used in the safety analysis for the nuclear power plant.

RTP - the Rated Thermal Power; the maximum thermal power at which the plant can operate.

u_CTP - the expected (design) uncertainty in the CTP estimate.
For this example, assume the ATP is 3570 MW, the RTP is 3500 MW, and the original impact of measurement uncertainty in the CTP estimate is assumed to be 2 percent, and therefore is 70 MW. This is displayed in Figure 1.
Figure 1: Example with 2% measurement uncertainty impact on Core Thermal Power

However, if the measurement uncertainty could be decreased from 2 percent (70 MW) to 0.6 percent (21 MW), there are two options: reduce the ATP or increase the RTP. Reducing the ATP is not necessary because the plant's safety analysis demonstrates that the plant remains safe at the current ATP. Therefore, it is feasible to justify an increase in RTP from 3500 MW to 3549 MW (i.e., 3570 MW minus 0.6 percent of 3500 MW). This is displayed in Figure 2.
Figure 2: Example with 0.6% measurement uncertainty impact on Core Thermal Power

While a licensee would need to submit a license amendment request (LAR) to the NRC to obtain such an increase in RTP and to substantiate the basis for the reduction in measurement uncertainty, the licensee may not need to change any of its safety analyses, as the initial safety analysis would remain bounding. The licensee would simply need to document in a LAR its basis for requesting a license amendment to increase its licensed RTP value. An example of a justification for a reduced measurement uncertainty can be found in Reference 10. An example of a MUR LAR can be found in Reference 11.
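The arithmetic of this worked example can be sketched as follows. The function name is illustrative, and, following the text's numbers, the recaptured margin is taken as the new uncertainty fraction times the original RTP (3570 MW minus 0.6 percent of 3500 MW).

```python
def mur_uprate(rtp_mw, old_unc, new_unc):
    """Power recapturable from a reduced CTP measurement uncertainty.

    The ATP (the power level used in the safety analysis) is held fixed;
    the new RTP recaptures the difference between the old and new margins.
    """
    atp_mw = rtp_mw * (1.0 + old_unc)       # e.g., 3500 * 1.02 = 3570 MW
    new_rtp_mw = atp_mw - new_unc * rtp_mw  # 3570 - 0.006 * 3500 = 3549 MW
    return atp_mw, new_rtp_mw

atp, new_rtp = mur_uprate(3500.0, 0.02, 0.006)
```

With the example inputs, this reproduces the 3570 MW ATP of Figure 1 and the 3549 MW uprated RTP of Figure 2.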
2.1. Applicable Regulations

Appendix K to 10 CFR Part 50, ECCS Evaluation Models, requires licensees to assume that the reactor has been operating continuously at a CTP level at least 1.02 times the licensed power level to allow for instrumentation measurement error when performing safety analyses.
An assumed power level lower than 1.02 times the licensed power level (but not less than the licensed power level) may be used in these safety analyses provided the proposed alternative value has been demonstrated to account for uncertainties due to power level instrumentation error.
Paragraph 50.36(c)(2)(i) of 10 CFR prescribes limiting conditions for operation (LCOs), which are defined in the technical specifications (TSs) of each licensee. Many of the LCOs are given as a function of the RTP. The RTP typically does not include the CTP uncertainty; therefore, this value would increase if a lower CTP uncertainty were used.
Appendix A to 10 CFR Part 50, General Design Criteria (GDC) for Nuclear Power Plants, establishes minimum requirements for the principal design criteria for water-cooled nuclear power plants. GDC 10, Reactor design, requires, in part, that the RPS be designed to assure that specified acceptable fuel design limits are not exceeded during any condition of normal operation, including the effects of anticipated operational occurrences (AOOs). In accordance with GDC 20, the protection system shall be designed (1) to automatically initiate the operation of appropriate systems, including the reactivity control systems, to assure that specified acceptable fuel design limits are not exceeded as a result of AOOs, and (2) to sense accident conditions and to initiate the operation of systems and components important to safety.
Appendix B to 10 CFR Part 50, Quality Assurance, provides the criteria for a licensee's quality assurance program (QAP), which comprises all those planned and systematic actions necessary to provide adequate confidence that a structure, system, or component will perform satisfactorily in service. Two main aspects of the CTP uncertainty fall under this program. The first is the quality assurance aspects of the calculation of the CTP and the CTP uncertainty. The second is the quality assurance requirements associated with the inputs to the CTP uncertainty (i.e., the uncertainties of each instrument channel used in the calculation of the CTP uncertainty).
2.2. Applicable Guidance

There is no regulatory guidance written specifically for the assessment of the CTP uncertainty calculation. While the NRC does have specific guidance on licensee LARs associated with requests for MURs, much of that guidance assumes that the MUR is accomplished using LEFMs (i.e., the use of a new or independent instrument with higher accuracy to decrease the feedwater flow measurement uncertainty). The DVR methodology does not use a new, higher accuracy instrument to decrease the feedwater flow measurement uncertainty; rather, it uses a different method to calculate the feedwater flow measurement and its corresponding uncertainty from data supplied by several other existing plant instruments that monitor related steam cycle parameters. Thus, while there is no regulatory guidance specifically on this topic, several references provide general guidance. For this review, the NRC primarily considered the following references:
| Title | Date | Reference |
|---|---|---|
| NUREG/CR-3659, A Mathematical Model for Assessing the Uncertainties of Instrumentation Measurements for Power and Flow of PWR [Pressurized Water Reactor] Reactors | February 1985 | 12 |
| Regulatory Guide 1.105, Setpoints for Safety-Related Instrumentation | February 2021 | 13 |
| Regulatory Issue Summary (RIS) 2002-03, Guidance on the Content of Measurement Uncertainty Recapture Power Uprate Applications | January 31, 2002 | 14 |
| JCGM 100:2008 (GUM 1995 with minor corrections), Evaluation of Measurement Data - Guide to the Expression of Uncertainty in Measurement | September 2008 | 15 |
| JCGM 101:2008, Evaluation of Measurement Data - Supplement 1 to the Guide to the Expression of Uncertainty in Measurement - Propagation of Distributions Using a Monte Carlo Method | September 2008 | 16 |
| Experimentation, Validation, and Uncertainty Analysis for Engineers | 2018 | 17 |
| ASME PTC 19.1-2018, Test Uncertainty | June 2019 | 18 |
| ANSI/International Society of Automation (ISA) 67.04.01-2018, Setpoints for Nuclear Safety-Related Instrumentation | December 2018 | 19 |
| NUREG-1475, Rev. 1, Applying Statistics | March 2011 | 31 |
| ISA RP67.04.02-2010, Methodologies for the Determination of Setpoints for Nuclear Safety-Related Instrumentation | December 2010 | 33 |
3. TECHNICAL EVALUATION

In submitting its TR (Ref. 3) to the NRC staff for approval, EPRI requested that the NRC staff evaluate the use of the DVR methodology to arrive at a better estimate of the feedwater flow rate and its associated uncertainty. DVR, or simply data reconciliation, is a process that takes advantage of multiple existing independent power plant steam cycle parameter measurements and their corresponding uncertainties to improve the estimate of the true value of CTP. By recognizing and using known physical relationships among those measurements (i.e., physical constraints), many parameter measurements can be treated as redundant measurements of the same process parameter. In the DVR methodology, multiple measurements of the same parameter, called redundant measurements, can be prudently combined to create a single mean value estimate of that parameter. This estimate is called the reconciled value of that parameter. Further, the uncertainty of the reconciled value of a parameter of interest is based on the uncertainties of the multiple redundant measurements. Since there is redundancy in these multiple measurements, the reconciled uncertainty will be lower than the measurement uncertainty of the same parameter obtained from a single instrument channel measurement.
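The core idea that combining redundant measurements reduces uncertainty can be sketched with standard inverse-variance weighting. This is a textbook statistical result, not the TR's full algorithm, and the flow values and uncertainties below are hypothetical.

```python
def combine_redundant(means, sigmas):
    """Inverse-variance weighted combination of independent redundant measurements.

    The combined standard deviation, sqrt(1 / sum(1/sigma_i^2)), is never
    larger than the smallest individual standard deviation.
    """
    weights = [1.0 / s**2 for s in sigmas]
    w_sum = sum(weights)
    mean = sum(w * m for w, m in zip(weights, means)) / w_sum
    sigma = (1.0 / w_sum) ** 0.5
    return mean, sigma

# Two hypothetical independent measurements of the same feedwater flow (kg/s):
mean, sigma = combine_redundant([1000.0, 1010.0], [8.0, 6.0])
```

For these inputs the combined standard deviation is 4.8 kg/s, below both of the individual values of 8.0 and 6.0, which is the effect the DVR methodology exploits on a larger scale.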
Once the reconciled value has been calculated, the Taylor Series Method [59] could be used to quantify the uncertainty in that value (generally referred to as the reconciled uncertainty). The uncertainty of the reconciled value is mathematically guaranteed to be less than or equal to the uncertainty in the individual measurements. This is because the constraint equations can effectively turn the measurements from the many instruments into redundant equivalent measurements. To satisfy the constraint equations, the uncertainties associated with each of the redundant measurements can partially offset one another. Whenever there is redundancy in a measurement, those measurements can be combined into a single value, and the uncertainty of that combined value calculated using the Taylor Series Method is always less than the uncertainty of each of the individual measurements.
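How a physical constraint turns separate measurements into effective redundancy can be sketched with a single linear mass-balance constraint. This is a standard linear least-squares reconciliation step assuming independent measurements; the single constraint and the numbers are hypothetical and are not the TR's full model.

```python
def reconcile_mass_balance(m, var):
    """Single-constraint data reconciliation sketch: m[0] + m[1] - m[2] = 0.

    Each measurement is adjusted in proportion to its variance so that the
    reconciled values satisfy the constraint exactly, and each reconciled
    variance is reduced because every measurement now also carries
    information from the others through the constraint.
    """
    a = (1.0, 1.0, -1.0)                       # constraint coefficients
    r = m[0] + m[1] - m[2]                     # constraint residual
    denom = sum(ai * ai * vi for ai, vi in zip(a, var))
    m_rec = [mi - vi * ai * r / denom for mi, vi, ai in zip(m, var, a)]
    v_rec = [vi - (ai * vi) ** 2 / denom for vi, ai in zip(var, a)]
    return m_rec, v_rec

# Two hypothetical branch flows and a header flow that should sum (kg/s),
# with variances (kg/s)^2:
m_rec, v_rec = reconcile_mass_balance([500.0, 510.0, 1020.0], [25.0, 25.0, 16.0])
```

The reconciled flows satisfy the mass balance exactly, and every reconciled variance is strictly smaller than its measured counterpart, illustrating the claim that the reconciled uncertainty is less than or equal to the individual measurement uncertainties.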
EPRI states that it plans to support licensees in applying the DVR method to reduce the uncertainty in estimating CTP. It plans to do this by using DVR to generate a reconciled value for the mean value of the feedwater flow rate measurement and its associated uncertainty.
Because the feedwater flow rate uncertainty is typically the largest contributor to the CTP uncertainty, the smaller reconciled feedwater flow rate uncertainty derived using the DVR methodology (compared to the feedwater flow rate measurement uncertainty from individual feedwater flow measurements) may reduce the CTP uncertainty to such an extent that a plant can justify a MUR power uprate.
MUR power uprate LARs have typically been approved in the past because the plant's licensee has installed a new device that measures feedwater flow rate with reduced uncertainty.
Application of the DVR methodology would allow plants to obtain a similar reduction in feedwater flow uncertainty and justify an MUR power uprate LAR but would do so analytically without the need to purchase new equipment.
While the NRC staff has reviewed and approved multiple MUR power uprate LARs, the DVR methodology described in EPRI's TR is a new technology for this purpose.
Therefore, the NRC staff focused its review of EPRI's DVR TR on the efficacy and reliability of the DVR methodology in predicting CTP. The staff also focused on determining the necessary conditions and limitations on the use of EPRI's DVR methodology that would provide reasonable assurance that application of the reconciled values would result in a prediction of the CTP and CTP uncertainty that could be relied upon during plant operation.
Similar to the criteria applied in the evaluation of other types of MUR power uprates, any licensee requesting to apply the DVR methodology at its plant to obtain approval of an MUR power uprate should be prepared to demonstrate that each of the DVR conditions and limitations discussed in this safety evaluation has been satisfied for its plant. The staff's focus in identifying these conditions and limitations was on ensuring that licensees maintain the same level of rigor in the calculation of the CTP uncertainty as is applied in the currently approved process, without adding unnecessary regulatory burden. Thus, the staff's goal was to determine DVR conditions and limitations that would result in a CTP uncertainty calculation with the same level of trustworthiness as (i.e., on par with) the current CTP uncertainty calculation.
To create the DVR conditions and limitations, the NRC staff performed the following steps:
- 1) The NRC staff assessed the risk significance of using the DVR methodology for the purpose of estimating CTP uncertainty and for using the methodology to justify a license amendment to increase licensed RTP. This assessment enabled the staff to ensure that it was treating the DVR methodology appropriately when determining any necessary conditions and limitations. This assessment is provided in Section 3.1 below.
- 2) The NRC staff recognized that the DVR methodology could be thought of as a digital twin. This recognition allowed the DVR methodology to be separated into areas in which the NRC has substantial review experience: modeling and simulation (M&S) and digital instrumentation and control (I&C). This discussion is provided in Sections 3.2, 3.3, and 3.4.
- 3) The NRC staff analyzed the DVR methodology described in the TR, focusing specifically on the derivation of the mathematical models. This derivation enabled the staff to identify key assumptions (i.e., DVR conditions and limitations) in the derivation process which would need to be justified before applying the DVR methodology. This discussion is provided in Section 3.5.
- 4) The NRC staff analyzed past reviews of measurement uncertainty's impact on CTP, specifically focusing on guidance which provided review criteria. The NRC staff evaluated and dispositioned those criteria to determine if they would also be applicable to the DVR methodology. This discussion is provided in Section 3.6.
- 5) Finally, the NRC staff analyzed each condition and limitation, providing further details and specifying information which would be important in demonstrating that each condition and limitation has been satisfied. This discussion is provided in Section 4.
In Appendix A of this safety evaluation (SE), the NRC staff performed a sample calculation that illustrates the principles of the proposed DVR methodology. This example calculation shows how the uncertainties of two measurements would be combined using a traditional uncertainty combination approach by averaging the uncertainties and demonstrates how they could be combined using the DVR methodology.
3.1. Risk Evaluation of Impact of Errors in the Output from the DVR Methodology

To determine the potential significance of possible errors in the DVR prediction of the reconciled feedwater flow rate and its corresponding reconciled uncertainty, the NRC staff considered the risk triplet and included a fourth question, as suggested by Vose (Ref. 23). Specifically, these are:
- 1) What can go wrong?
- 2) How likely is it?
- 3) What are the consequences?
- 4) What is the risk in answering the first three questions incorrectly?
The last question supports the qualitative identification of uncertainty in the evaluation of risk from DVR methodology errors. The staff's evaluation of these questions is presented below.
3.1.1. What Can Go Wrong?
DVR is used to determine two values: (1) a reconciled uncertainty associated with the reconciled mean feedwater flow rate, and (2) a reconciled mean feedwater flow rate. Therefore, DVR could go wrong by incorrectly predicting either of these values (i.e., a failure mode). DVR experiencing a failure mode would lead to a scenario in which there would be an incorrect estimation1 of the CTP uncertainty and CTP (i.e., a failure scenario). The failure modes and failure scenarios are discussed below.
1 This SE focuses on the calculation of four variables: the reconciled mean feedwater flow rate, the reconciled uncertainty in the feedwater flow rate, the CTP, and the CTP uncertainty. The CTP is a function of the reconciled mean feedwater flow rate, and the CTP uncertainty is a function of the reconciled uncertainty in the feedwater flow rate. To help the reader keep these terms separate, this SE uses the word "predict" when referencing the results of the DVR method (e.g., the reconciled mean feedwater flow rate, the reconciled uncertainty in the feedwater flow rate) and the word "estimate" when referencing a value calculated from the results of the DVR method (e.g., the CTP and the CTP uncertainty). There is no difference in meaning between "predict" and "estimate"; they are used in this SE only to signify whether the calculation being discussed is a direct result of the DVR method or uses a result from the DVR method to estimate feedwater flow rate or CTP.
3.1.1.1. Failure Modes

Failure Mode 1 is defined as the DVR method underpredicting the uncertainty in the reconciled feedwater flow rate. This is displayed in Figure 3.
Figure 3: Failure Mode 1

Because the feedwater flow rate uncertainty is one of the major contributors to the calculated CTP uncertainty, Failure Mode 1 can result in an underestimation of the CTP uncertainty. The CTP uncertainty is used to determine the margin needed between the RTP and the ATP. If the calculated CTP uncertainty is underpredicted, then there may not be sufficient margin between the RTP and ATP to account for instrumentation uncertainties, and the plant may be operating in an unanalyzed condition (i.e., a condition not enveloped by the analyses of design basis accidents and anticipated transients). Further, if plant operating procedures are based on a periodically calculated CTP with an underpredicted uncertainty, the result may be continuous reactor power operation in excess of the plant's RTP, which would be a violation of the plant's license restricting plant operations to within a specified maximum rated power.
Note that overpredicting the reconciled feedwater flow rate uncertainty would result in an overestimation of the CTP uncertainty and a resulting increase in the operational margin maintained between the RTP and ATP. Hence, overpredicting the reconciled feedwater flow rate uncertainty produces an error in the conservative direction, and therefore the risk from such failures is not evaluated.
Failure Mode 2 is defined as the DVR method underpredicting the reconciled mean feedwater flow rate, as displayed in Figure 4.
Figure 4: Failure Mode 2

Since the mean feedwater flow rate is one of the major contributors to the CTP, Failure Mode 2 can result in an underestimation of the calculated CTP. If the plant operators believe the plant is operating at a lower power than in reality, then they could raise the power level until the estimated CTP is equal to the RTP. However, if the calculated CTP is being underestimated, this means that the reactor could be enabled to continuously operate above its RTP, which would be a violation of the plant license. If the error is large enough, the plant may even be operating above its ATP and the reactor would be in an unanalyzed condition.
Overpredicting the reconciled mean feedwater flow rate would result in an overestimation of the mean CTP. Thus, even if the operators believed the plant were at the RTP, an overestimation error would result in the incorporation of additional plant safety margin. Hence, overpredicting the reconciled mean feedwater flow rate is conservative, and the risk from such failures is not evaluated.
The failure modes are summarized in Table 2.

Table 2: Failure Modes

Failure Mode | Description | Effect
1 | Reconciled feedwater flow rate uncertainty is underpredicted | Estimated CTP uncertainty is underestimated
2 | Reconciled mean feedwater flow rate is underpredicted | Estimated mean CTP is underestimated
3.1.1.2. Failure Scenarios

Failure Mode 1 (i.e., underpredicting the reconciled feedwater flow uncertainty) results in an underestimation of the CTP uncertainty; this is defined as Failure Scenario 1 and is displayed in Figure 5.
Figure 5: Failure Scenario 1

In Failure Scenario 1, the true CTP may be higher than its estimated value because the uncertainty in the estimated CTP is smaller than the true CTP uncertainty. In general, we expect the distance between the estimated CTP and the ATP to be at least as large as the design basis CTP uncertainty. Since the true CTP is assumed to fall somewhere within the CTP uncertainty band around the estimated CTP, this ensures that the true CTP is bounded by the ATP.
However, if we have underestimated the CTP uncertainty, the true CTP (which still falls somewhere within that uncertainty band) may exceed the ATP, resulting in the plant being in an unanalyzed condition. The region of concern would be the difference between the estimated and true CTP uncertainties, as this is the region above the ATP in which the true CTP may be found.
Failure Mode 2 (i.e., underpredicting the reconciled mean feedwater flow rate) results in an underestimation of the CTP; this is defined as Failure Scenario 2 and is displayed in Figure 6.
Figure 6: Failure Scenario 2

In Failure Scenario 2, the true CTP would be well above its estimated value and could be outside the uncertainty band. Thus, if operators were operating the reactor core at what they believe to be the licensed RTP, the continuous true CTP could exceed this value (a license violation) and may even exceed the ATP, resulting in the plant being in an unanalyzed condition. The region of concern would be the difference between the estimated and true CTP values.
Finally, if both Failure Modes 1 and 2 were to occur, this would result in Failure Scenario 3, in which both the CTP uncertainty and the CTP mean value are underestimated. The failure scenarios are summarized in Table 3.
Table 3: Failure Scenarios

Failure Scenario | Failure Modes | Cause | Effect
1 | 1 | Reconciled feedwater flow rate uncertainty is underpredicted | Estimated CTP uncertainty is underestimated
2 | 2 | Reconciled mean feedwater flow rate is underpredicted | Estimated CTP is underestimated
3 | 1 & 2 | Reconciled feedwater flow rate uncertainty and reconciled mean feedwater flow rate are underpredicted | Estimated CTP uncertainty and estimated CTP are underestimated

3.1.2. How Likely Is It?
The NRC staff was not able to quantitatively determine the likelihood of the failure modes and their resulting failure scenarios. However, the staff could make some qualitative observations. In the NRC staff's experience with instrumentation and uncertainty for measuring heat balance parameters, small errors in measuring key heat balance parameters are more likely to occur than larger ones. Therefore, if the reconciled feedwater flow rate uncertainty is underpredicted, it is more likely to be underpredicted by a small amount than by a very large amount.
Additionally, as described in the EPRI TR and detailed in Condition and Limitation 11, the NRC staff notes that the results from the DVR methodology will be monitored by ensuring that the difference between the DVR reconciled value and the instrument measured value is limited to a fixed value based on the measurement uncertainty (i.e., DVR penalty). While a major failure of the DVR methodology seems unlikely, it is not possible to assign a probability to this event given the lack of knowledge of the probabilities of each of the precursors which could cause such an event.
In summary, the lack of a quantitative estimate of the likelihood of the failure modes or scenarios of the DVR methodology does not preclude the NRC staff's use of a risk evaluation, as the staff focused on the risk evaluation of a reasonable worst-case scenario. Thus, the lack of quantified probabilities makes the consequence determination (discussed below) conservative because it does not include the decreased risk of more probable events.
3.1.3. What Are the Consequences?
The NRC staff determined a quantitative bounding estimate of the consequences of each failure scenario. To quantify the bounds of the consequences, the NRC staff defined a figure of merit for the DVR method, called the ATP error, which represents how far the plant is (with respect to power) from an analyzed condition. The ATP error is defined in Equation 3.1.
ATP_Error = ATP − CTP_true    (3.1)

Where:
- ATP_Error represents the Analytical Thermal Power Error.2
- ATP represents the Analytical Thermal Power, the CTP value which is used in the safety analysis.
- CTP_true represents the true CTP.
If the ATP error is greater than or equal to 0, it means the plant's current state is bounded by its safety analysis (i.e., the plant is not in an unanalyzed condition). If this error is less than 0, it means the plant's current state is not bounded by its safety analysis (i.e., the plant is in an unanalyzed condition). Further, the magnitude of the ATP error is a metric representing the distance between the plant's current power and the power which was assumed in accident and transient analysis. The staff further derived this error to see how the DVR methodology's three failure scenarios can impact it. First, we substitute Equation 2.1 into 3.1 for ATP.
ATP_Error = RTP + U_CTP − CTP_true    (3.2)

Where U_CTP is the design uncertainty in the CTP estimate (from equation 2.1, ATP = RTP + U_CTP). We want to consider the realistic situation in which the plant is operating at its RTP. However, we also want to recognize that the CTP value is obtained from an estimation process and will therefore call this the estimated CTP (i.e., RTP = CTP_est).

ATP_Error = CTP_est + U_CTP − CTP_true    (3.3)

2 This error was defined to be consistent with the GUM (Ref. 15), where it is defined as estimated value − true value; however, this is a preference. The error could have been defined as true value − estimated value, and the resulting derivations would have produced the same overall result.
Finally, we recognize that the difference between the estimated CTP value and the true CTP value can be defined as the CTP error.

CTP_Error = CTP_est − CTP_true    (3.4)

Substituting the equation for the CTP error into the equation for the ATP error, we obtain its final form.

ATP_Error = CTP_Error + U_CTP    (3.5)

In its final form, the ATP error is a function of only two variables: the CTP error and the CTP uncertainty. In this form, we can more easily see how the different failure scenarios would impact the ATP error. This enables us to develop limiting estimates of each of these values. However, there is one aspect of the DVR methodology, called the penalty, which will greatly impact how we estimate these values.
3.1.3.1. Consideration of DVR Penalty in ATP Error Determination

As described in EPRI's TR, the DVR method uses a term called a penalty which ensures that the DVR's prediction of a value cannot be very far from the indicated measurement of that same value. This penalty is used as a performance indicator that detects and flags the possibility that the difference between a measured value and its reconciled value exceeds a reasonable maximum. Equation 3-43 from the TR defines this penalty,3 and a simplified form of that equation is given below for convenience.
p = (FW_rec − FW_meas)² / (σ_meas² − σ_rec²) ≤ 1.0    (3.6)

Where:
- FW_rec is the reconciled mean feedwater flow rate value based on the sampled data set
- FW_meas is the measured mean feedwater flow rate value of the sampled data set
- σ_rec is the reconciled uncertainty (i.e., the standard deviation of the reconciled value)
- σ_meas is the measurement uncertainty (i.e., the standard deviation of the measured value)
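To illustrate, the simplified penalty can be evaluated numerically. The following Python sketch is our own illustration (the function name and sample values are hypothetical, not from the TR), using the corrected definition of the correction uncertainty (measurement uncertainty² − reconciled uncertainty²):

```python
import math

def dvr_penalty(fw_rec, fw_meas, sigma_meas, sigma_rec):
    """Simplified DVR penalty: squared correction divided by the
    correction variance (measurement variance minus reconciled
    variance). A result above 1.0 flags an unreasonably large gap
    between the measured and reconciled flow rates."""
    correction_var = sigma_meas**2 - sigma_rec**2
    return (fw_rec - fw_meas)**2 / correction_var

# Hypothetical values: 7.4 Mlb/hr measured flow with the standard
# deviations developed later in Section 3.1.3.3, and a reconciled
# value 0.05 Mlb/hr below the measurement.
p = dvr_penalty(fw_rec=7.35, fw_meas=7.4,
                sigma_meas=0.148 / 1.96, sigma_rec=0.0444 / 1.96)
print(round(p, 2))  # 0.48 -- below the acceptance limit of 1.0
```

Any reconciled value far enough from the measurement to drive this ratio above 1.0 would be flagged by the performance indicator.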
The goal of this section is to further derive the penalty until we clearly see how it will impact the ATP error. First, we assume the penalty is at its maximum value of 1 (as this will result in the largest difference between the measured and reconciled mean flow rates allowable) and move the denominator to the right-hand side (RHS).
(FW_rec − FW_meas)² ≤ σ_meas² − σ_rec²    (3.7)

3 The definition of correction uncertainty given in the topical report is incorrect. It should be: correction uncertainty² = measurement uncertainty² − reconciled uncertainty².
Next, we take the square root of both sides. However, because this is an inequality, taking the square root results in two limits instead of one.

−√(σ_meas² − σ_rec²) ≤ FW_rec − FW_meas ≤ √(σ_meas² − σ_rec²)    (3.8)

Additionally, we changed the order of the difference (from FW_meas − FW_rec to FW_rec − FW_meas). This did not change the inequality and was performed so we can maintain consistency with the ATP error and have this difference term be represented as non-conservative when it is negative.
For this analysis, we assume that there is an error in the reconciled mean feedwater flow rate (FW_rec), but we do not assume that the measured mean feedwater flow rate (FW_meas) is in error. Thus, we would reasonably expect to find the true value of the mean feedwater flow rate within the uncertainty bounds which are centered on the measured mean feedwater flow value. Hence, we are only interested in the non-conservative case (i.e., those cases where the reconciled mean feedwater flow rate is an underprediction of the measured mean feedwater flow rate, FW_rec < FW_meas). Thus, we only need to focus on the inequality on the left-hand side of Equation 3.8.

FW_rec − FW_meas ≥ −√(σ_meas² − σ_rec²)    (3.9)

Next, we introduce the feedwater flow rate delta (δ_FW), which is the percentage difference between the reconciled mean feedwater flow and the measured mean feedwater flow. Note, we have defined this delta to be consistent with the ATP error in that it is only non-conservative when the value is negative, as this is the instance which could result in underestimation of the CTP.
δ_FW = (FW_rec − FW_meas) / FW_meas    (3.10)

We can re-write our penalty from Equation 3.9 in terms of the feedwater flow delta by dividing both sides by the mean measured feedwater flow rate, FW_meas.

δ_FW ≥ −√(σ_meas² − σ_rec²) / FW_meas    (3.11)

Finally, we assume that the feedwater flow delta is a reasonable estimate for the CTP error. The assumption is that the difference between the true CTP and the estimated CTP would be similar to the difference between the measured feedwater flow rate and the reconciled feedwater flow rate. In general, a 1 percent change in feedwater flow will cause a similar change in CTP. In actuality, the relationship between feedwater flow and CTP is not 1:1; it may be closer to 1:0.8.
However, assuming a 1:1 relationship is conservative, as it results in a calculation of the CTP error that is greater than what we would realistically expect. Hence, for conservatism, we will assume a 1:1 ratio between the two and assume that the feedwater flow rate delta (as a percentage) is equal to the CTP error (as a percentage).

CTP_Error = δ_FW    (3.12)

Equation 3.12 represents the major assumption of this analysis. This equation equates a value we can calculate (the feedwater flow rate delta) to a term we do not know (the CTP error).
Substituting equation 3.12 into equation 3.11, we obtain the following.

CTP_Error ≥ −√(σ_meas² − σ_rec²) / FW_meas    (3.13)

Up until this point, the equations using reactor power could employ any unit for power. However, because we are using the feedwater flow rate error for the CTP error, we will need to use percentages in all subsequent equations. This equation could be used to determine the maximum CTP error; however, we want the maximum ATP error. Therefore, we use equation 3.5 and substitute the difference between the ATP error and the uncertainty in the CTP estimate for the CTP error.
ATP_Error − U_CTP ≥ −√(σ_meas² − σ_rec²) / FW_meas    (3.14)

Finally, we can add the uncertainty in the CTP estimate to both sides to obtain the following.
ATP_Error ≥ −√(σ_meas² − σ_rec²) / FW_meas + U_CTP    (3.15)

Where:
- ATP_Error represents the Analytical Thermal Power Error
- FW_rec is the reconciled mean feedwater flow rate value based on the sampled data set
- FW_meas is the measured mean feedwater flow rate value of the sampled data set
- σ_rec is the reconciled uncertainty (i.e., the standard deviation of the reconciled value)
- σ_meas is the measurement uncertainty (i.e., the standard deviation of the measured value)
- U_CTP is the expected (design) uncertainty in the CTP estimate
Equation 3.15 shares many similarities with equation 3.9. In equation 3.9, we know that the reconciled value cannot be lower than the measured value by an amount greater than the RHS of the equation. Thus, the penalty expressed through this inequality limits the potential difference between the reconciled and measured values. By assuming that the feedwater flow error (i.e., the difference between the measured and reconciled values) is a reasonable estimate for the CTP error (i.e., the difference between the estimated CTP and the true CTP), we reach the conclusion that this same RHS can be used to limit the ATP error. This is because the ATP error is defined as the CTP error plus the uncertainty in the CTP estimate.
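The bound can be captured in a short Python sketch (our own illustration; the function name and argument names are not from the TR). It evaluates the most negative ATP error that the penalty would permit:

```python
import math

def atp_error_bound(fw_meas, sigma_meas, sigma_rec, u_ctp_pct):
    """Most negative ATP error (in percent) allowed by the penalty,
    following Equation 3.15.

    fw_meas    -- measured mean feedwater flow rate (Mlb/hr)
    sigma_meas -- standard deviation of the measured value (Mlb/hr)
    sigma_rec  -- standard deviation of the reconciled value (Mlb/hr)
    u_ctp_pct  -- signed CTP estimate uncertainty, in percent
    """
    # Penalty-limited flow underprediction, expressed as a percent of flow.
    flow_term = -100.0 * math.sqrt(sigma_meas**2 - sigma_rec**2) / fw_meas
    return flow_term + u_ctp_pct

# With the Failure Scenario 2 inputs developed below, this evaluates
# to roughly -1.6 percent.
print(round(atp_error_bound(7.4, 0.148 / 1.96, 0.0444 / 1.96, -0.6), 2))
```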
3.1.3.2. Failure Scenario 1

In Failure Scenario 1, we assume the reconciled mean feedwater flow rate has been correctly predicted, but its uncertainty is underpredicted. That is, we believe the uncertainty is lower than its actual value. To bound this underprediction, we need to determine the lowest possible value we could believe this uncertainty to be and compare it to the largest its actual value could be. The smallest possible value we could calculate for this uncertainty is limited by zero percent; therefore, zero percent is a conservative bound for the value we believe the uncertainty to be.

Estimating the actual value for the reconciled uncertainty is more challenging. It is always possible to increase the true uncertainty by intentionally using a wrong constraint or by placing a bug in the code; however, those situations seem excessive. Therefore, we will estimate the maximum likely uncertainty under reasonable operating conditions.
Because the reconciled uncertainty in the feedwater flow rate is based on the uncertainty of the measurements which are used to generate the reconciled value, we can reasonably assume that the actual value for the reconciled uncertainty could not exceed the measurement uncertainty of the feedwater flow rate. Hence, a reasonable upper bound of the actual value for the reconciled uncertainty is 2 percent.
Therefore, in Failure Scenario 1, we believe the reconciled uncertainty is 0 percent when it is in fact 2 percent. This results in another form of the ATP error, given below.

ATP_Error = ATP_est − ATP_true    (3.16)

We can then substitute equation 2.1 for both ATP values.
ATP_Error = (RTP + U_CTP,est) − (RTP + U_CTP,true)    (3.17)

For this scenario, we have assumed there is no error in the RTP value; therefore, these values cancel each other, and we are left with the final form of the ATP error for Scenario 1.
ATP_Error = U_CTP,est − U_CTP,true    (3.18)

Next, we can substitute 0 percent for the assumed uncertainty and 2 percent for the true uncertainty, which results in an ATP error of −2 percent (i.e., a non-conservative error with a magnitude of 2 percent).
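The Scenario 1 arithmetic can be checked with a one-line sketch of Equation 3.18, using the bounding values from the text:

```python
# Equation 3.18: ATP error = estimated CTP uncertainty - true CTP uncertainty.
u_ctp_est = 0.0   # percent: lowest value the predicted uncertainty could take
u_ctp_true = 2.0  # percent: bounded by the feedwater measurement uncertainty
atp_error = u_ctp_est - u_ctp_true
print(atp_error)  # -2.0, i.e., a 2 percent non-conservative ATP error
```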
3.1.3.3. Failure Scenario 2

In Failure Scenario 2, the reconciled uncertainty has been correctly predicted, but the reconciled mean feedwater flow rate has been underpredicted. Therefore, we need to estimate the maximum possible underprediction. This underprediction would be limited by the penalty as described in Section 3.1.3.1 of this SE. Therefore, we can use equation 3.15 to estimate the ATP error.
We assume that the mean measured feedwater flow rate, FW_meas, has a nominal value of 7.4 Mlb/hr.

FW_meas = 7.4 Mlb/hr    (3.19)

We assume the uncertainty of the mean measured flow rate is 2 percent. However, this uncertainty is not the standard deviation of the mean measured flow rate; we can convert this uncertainty into a standard deviation by assuming it is related to a tolerance interval. That is, we assume that 95 percent of mean measured flow rates fall within ±2 percent of the mean value with 95 percent confidence. The following is one equation for the tolerance interval.
TB_95% = k · σ    (3.20)

Where:
- TB_95% represents the values about the mean used to construct the tolerance bound
- k is the tolerance limit factor for a two-sided tolerance interval and is based on the number of samples
- σ is the standard deviation of the measurement

First, we need to convert the tolerance bound in the non-conservative direction from a percentage to a value with units. This is accomplished by multiplying the assumed uncertainty (2 percent) by the nominal value.

TB_95% = 2% × 7.4 Mlb/hr = 0.148 Mlb/hr    (3.21)

Next, we assume that a sufficiently large number of samples are taken such that the two-sided tolerance limit factor of 1.96 can be used. Thus, re-arranging equation 3.20 and substituting in the value for the tolerance bound from equation 3.21, we can solve for the standard deviation.

σ_meas = TB_95% / k = 0.148 / 1.96 = 0.0755 Mlb/hr    (3.22)

We assume the uncertainty of the reconciled flow rate is at its nominal value of 0.6 percent. While we would expect a difference between the mean measured feedwater flow rate and the reconciled mean flow rate, we assume this difference is small enough that we can use the measured feedwater flow rate (and not the reconciled flow rate) to convert the tolerance bound from a percentage to a value with units.

TB_95%,rec = 0.6% × 7.4 Mlb/hr = 0.0444 Mlb/hr    (3.23)
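The percent-to-standard-deviation conversion used here (and in the reconciled-flow step that follows) can be sketched as a small helper function (the function name is ours, not from the TR):

```python
def tolerance_to_std(nominal, pct_uncertainty, k=1.96):
    """Convert a percent tolerance bound about a nominal value into a
    standard deviation, assuming the large-sample two-sided tolerance
    limit factor k = 1.96 (Equation 3.20 rearranged)."""
    tolerance_bound = (pct_uncertainty / 100.0) * nominal
    return tolerance_bound / k

print(round(tolerance_to_std(7.4, 2.0), 4))  # 0.0755 Mlb/hr (measured)
print(round(tolerance_to_std(7.4, 0.6), 4))  # 0.0227 Mlb/hr (reconciled)
```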
Next, we assume that a sufficiently large number of samples are taken such that the two-sided tolerance limit factor of 1.96 can be used, and we can solve for the reconciled standard deviation.

σ_rec = TB_95%,rec / k = 0.0444 / 1.96 = 0.0227 Mlb/hr    (3.24)

We assume that the estimated CTP uncertainty is equal to the reconciled mean feedwater flow rate uncertainty. This is a reasonable assumption, as the estimated CTP uncertainty is primarily driven by the feedwater flow rate uncertainty. However, because we plan to use equation 3.15, this uncertainty should be expressed as a percentage, and since uncertainties can be positive or negative, we need to use the negative value to maximize the ATP error.
U_CTP = −0.6%    (3.25)

This results in the following ATP error.

ATP_Error ≥ −√(σ_meas² − σ_rec²) / FW_meas + U_CTP = −√(0.0755² − 0.0227²) / 7.4 + (−0.6%)    (3.26)

One final adjustment is needed. Consistent with U_CTP, the first term in the equation also needs to be expressed in percent; therefore, we multiply this term by 100 percent to obtain the following:

ATP_Error ≥ −(100% / FW_meas) √(σ_meas² − σ_rec²) + U_CTP ≈ −1.58%    (3.27)

A sensitivity study was also performed in which the reconciled uncertainty was varied from 0 percent to 2 percent. The results of that study are given in Figure 7 below.

Figure 7: Sensitivity Study for Failure Scenario 2

The maximum ATP error was −1.6 percent, which occurred when the reconciled uncertainty was assumed to be zero. This value for the reconciled uncertainty is unrealistically low and, due to the way the ATP error is calculated, results in an overly conservative estimate; therefore, the ATP error of −1.6 percent is a reasonable bound for Failure Scenario 2.
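The sensitivity study can be approximated with a short sketch (our own reconstruction of the Equation 3.27 evaluation; the variable names and the sampled grid are illustrative):

```python
import math

fw_meas = 7.4              # Mlb/hr, measured mean feedwater flow rate
sigma_meas = 0.148 / 1.96  # Mlb/hr, from the 2 percent tolerance bound
u_ctp = -0.6               # percent, CTP estimate uncertainty (adverse sign)

for rec_pct in (0.0, 0.3, 0.6, 1.0, 2.0):  # reconciled uncertainty, percent
    sigma_rec = (rec_pct / 100.0) * fw_meas / 1.96
    # Guard against tiny negative round-off when sigma_rec equals sigma_meas.
    var_diff = max(0.0, sigma_meas**2 - sigma_rec**2)
    atp_error = -100.0 * math.sqrt(var_diff) / fw_meas + u_ctp
    print(f"{rec_pct:4.1f}%  ->  ATP error {atp_error:6.2f}%")
```

The largest-magnitude value, about −1.6 percent, occurs when the reconciled uncertainty is zero, consistent with the limiting value reported for Figure 7.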
3.1.3.4. Failure Scenario 3

In Failure Scenario 3, both the reconciled uncertainty and the reconciled mean feedwater flow rate have been underpredicted. To perform this analysis, we start with equation 3.17 for the ATP error and re-arrange the terms.

ATP_Error = (RTP_est − RTP_true) + (U_CTP,est − U_CTP,true)    (3.28)

Because we are assuming that we are operating at the RTP, we can re-write this equation using CTP in place of RTP.

ATP_Error = (CTP_est − CTP_true) + (U_CTP,est − U_CTP,true)    (3.29)

The difference between the assumed and true values of CTP is the CTP error; therefore, we can incorporate the CTP error term into this equation for the ATP error.

ATP_Error = CTP_Error + (U_CTP,est − U_CTP,true)    (3.30)

We can use equation 3.13 to create a bound on the CTP error in the non-conservative direction.
CTP_Error ≥ −(100% / FW_meas) √(σ_meas² − σ_rec²)    (3.31)

Therefore, the non-conservative ATP error for Failure Scenario 3 is bounded by the following equation.

ATP_Error ≥ −(100% / FW_meas) √(σ_meas² − σ_rec²) + (U_CTP,est − U_CTP,true)    (3.32)

We can use the values from Failure Scenario 1 for the assumed CTP uncertainty (U_CTP,est = 0 percent) and the true uncertainty (U_CTP,true = 2 percent). We can use the values from Failure Scenario 2 for the reconciled and measured standard deviations. Once again, we can vary the reconciled uncertainty from 0 percent to 2 percent, and the resulting ATP error is given in Figure 8.
Figure 8: Sensitivity Study for Failure Scenario 3

The limiting cases from the Failure Scenarios are given in Table 4.
Table 4: Failure Scenario Consequence Input Examples

Failure Scenario | Measured Mean Feedwater Flow Rate (Mlb/hr) | Measured Mean Feedwater Flow Uncertainty, % (Mlb/hr) | Reconciled Mean Feedwater Flow Uncertainty, Estimated, % (Mlb/hr) | Reconciled Mean Feedwater Flow Uncertainty, Actual, % (Mlb/hr) | Non-Conservative ATP Error (%)
1 | 7.4 | 2 (0.0755) | 0 (0) | 2 (0.0755) | 2
2 | 7.4 | 2 (0.0755) | 0.6 (0.023) | 0.6 (0.023) | 1.6
3 | 7.4 | 2 (0.0755) | 0 (0) | 2 (0.0755) | 3
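The bounding Scenario 3 value in Table 4 can be reproduced with a short sketch of Equation 3.32 (the function name is ours; the inputs are the limiting values from Failure Scenarios 1 and 2):

```python
import math

def atp_error_s3(fw_meas, sigma_meas, sigma_rec, u_est, u_true):
    """Non-conservative ATP error bound (percent) per Equation 3.32:
    the penalty-limited CTP error plus the difference between the
    estimated and true CTP uncertainties."""
    ctp_error = -100.0 * math.sqrt(sigma_meas**2 - sigma_rec**2) / fw_meas
    return ctp_error + (u_est - u_true)

# Limiting case: reconciled uncertainty assumed to be zero, true
# uncertainty of 2 percent, measured values from Failure Scenario 2.
worst = atp_error_s3(7.4, 0.148 / 1.96, 0.0, u_est=0.0, u_true=2.0)
print(round(worst, 2))  # about -3 percent, consistent with Table 4
```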
Based on the staff's analysis, the DVR penalty would seem to limit the maximum ATP error (i.e., how far above the power level assumed in the safety analysis the plant could be operating) to 3 percent. Because the NRC staff was unable to estimate a likelihood for each failure scenario, the staff focused its review on the most bounding case.
Further, the NRC staff notes that its analysis included multiple assumptions which could not be conservatively bounded. The staff recognized that uncertainties were introduced into its consequence analysis given that it is a new analysis (e.g., it required the creation of a new figure of merit, the ATP error) and required a number of assumptions (e.g., regarding the difference between the true value of the feedwater flow and the reconciled value). Therefore, the staff determined it was necessary to address the uncertainties in this calculated value. However, the staff did not have data on these uncertainties, their likelihoods, or their potential impacts, as many of the uncertainties were epistemic in nature and caused by a lack of knowledge. To ensure that these uncertainties were adequately accounted for, the staff focused on developing a factor of safety using engineering judgment based on its previous experience reviewing models and simulations associated with reactor safety analysis, previous experience reviewing and performing similar uncertainty quantification analyses, and familiarity with the uncertainties associated with reactor instrumentation.
The staff considered a factor of safety of two but believed that value to be too large. Given the very conservative assumptions made in Failure Scenario 1, and given how simple that analysis was to perform, the staff did not find that a factor of safety was needed to bound that scenario. The staff determined that the major assumption was made in Failure Scenarios 2 and 3, where it was assumed that the CTP error was equal to the difference between the measured and reconciled feedwater flow rates.
The staff notes that while Failure Scenario 3 has the largest ATP error, the majority of that error (2 percent of the total 3 percent), was due to the difference between the assumed and actual reconciled feedwater flow rate uncertainty. As already stated, the NRC staff finds that this difference is conservatively calculated, and therefore, does not require a factor of safety.
However, the remaining third of the error is due to the main assumption on the CTP error. Given that this assumption results in an ATP error of 1 percent, the NRC staff determined that this error can be conservatively bounded by a factor of safety of 1.5 on the final ATP error, which effectively increases the CTP error from 1 percent (as calculated) to 2.5 percent. In summary, the staff finds that the factor of safety of 1.5, which results in a maximum ATP error of 4.5 percent, is a reasonable and conservative estimate of the ATP error.
Therefore, even with multiple bounding assumptions, the possible worst-case consequence of the failure scenarios identified for the DVR method would be the plant operating about 4.5 percent above the power level used in the plant's safety analysis. It should be noted that the likelihood of this bounding consequence is expected to be very low. The staff notes that while this very low likelihood worst-case estimated power level is not an intended or desired state and would represent unintended reactor operations in violation of TS and license condition limits, it does not represent an immediate safety concern. Further, there are several conservatisms built into the safety analysis that incorporate a degree of safety margin that would contribute toward mitigating the effects of the consequences of possible limited power level encroachment above analyzed limits. Finally, the safety factor of 1.5 that was employed by the staff was considered only as a bounding value to address unknown uncertainties.
3.1.4. What Is the Risk in the Answers to Questions (1)-(3) Being Incorrect?
As a final step in this risk evaluation, the NRC staff evaluated the risk of answering each question incorrectly, focusing on both the probability of answering incorrectly as well as the consequences. This question could also be understood as "How much uncertainty is there in our answers to the first three questions?" or "How much confidence do we have in our answers to the first three questions?"
3.1.4.1. Risk Evaluation of What Can Go Wrong?
Answering the question what can go wrong is focused on identifying the failure modes (i.e., the ways in which the DVR method may fail) and then on understanding how those failure modes create failure scenarios (i.e., scenarios in which a failure mode occurs). Thus, for this evaluation to be wrong, either a failure mode could have been ignored or a failure scenario could have not been considered. While the consequences of ignoring a failure mode or not considering a failure scenario could always be significant, the likelihood of incorrectly determining what can go wrong seems very low.
The DVR methodology only provides two inputs (feedwater flow rate uncertainty and mean feedwater flow rate) to the calculation of the CTP and the CTP uncertainty; therefore, the possible failure modes are well understood. Further, there are only certain ways in which an error in each input could non-conservatively impact the calculation of the CTP and the CTP uncertainty; therefore, the possible scenarios are well understood.
Because the possibility of ignoring a failure mode would be extremely unlikely given the limited use of the DVR methodology, and the possibility of not considering a failure scenario would be extremely unlikely given the few failure modes, the NRC staff concludes that the risk of answering what can go wrong incorrectly is very low.
3.1.4.2. Risk Evaluation of How Likely Is It?
Answering the question how likely is it is focused on estimating the probability of a failure mode occurring which leads to a failure scenario. Because the NRC staff was not able to generate a reasonable estimate of these probabilities and the staff did not use them in its risk evaluation, the NRC staff concludes that the risk of answering how likely is it incorrectly is very low.
3.1.4.3. Risk Evaluation of What Are the Consequences?
Answering the question what are the consequences is focused on identifying the consequences which result from each failure scenario. Thus, for this evaluation to be wrong, the staff would need to have underestimated the potential impacts of an error in the DVR methodology. The major assumption in the staff's analysis was that the difference between the true value of the feedwater flow and the reconciled value resulting from the DVR methodology could be approximated as the difference between the measured value of the feedwater flow and the reconciled value. To account for uncertainties in this assumption, the NRC staff applied a factor of safety of 1.5 to the ATP error.
While it is possible for the staff's analysis to be in error, the analysis is simple from a mathematical perspective, and therefore any major error should be easily observed. Additionally, while the analysis did make several assumptions, the assumptions seem reasonable and are made conservative by the application of the factor of safety. Finally, the staff's analysis did not attempt to credit the likelihood of any failure scenario, and instead used the bounding scenario.
Because the staff performed a simple consequence analysis, the assumptions made in that analysis were reasonable, the staff chose to apply a factor of safety of 1.5, and the staff ignored the likelihood of the consequence occurring and instead considered a reasonable worst-case scenario, the NRC staff concludes that the risk of answering what are the consequences incorrectly is low.
3.1.5. Risk Evaluation Summary

The evaluation of risk insights for the DVR methodology demonstrates that, even with multiple bounding assumptions, the reasonable worst-case consequence of the failure scenarios identified for the DVR method would be the plant operating about 4.5 percent above the power level used in the plant's safety analysis. The staff notes that there is a very low likelihood of the worst-case consequence materializing. While operating 4.5 percent above the power level used in a plant's safety analysis is recognized as an unanalyzed condition, operation at that power level would not directly result in fuel failure. For normal operation, the fuel may experience increased oxidation at the higher powers, which could result in loss of margin to operational limits. If an accident or transient were to occur, the plant would be in an unanalyzed space. However, plants typically maintain some amount of margin to design limits, and therefore it is likely that operation at powers only slightly higher than those used in the safety analysis would only result in a reduction of margin and not exceedance of a design limit.
Based on this risk assessment, the NRC staff has determined that the risk significance of the DVR cannot be justified to be high, as the consequences of failure of the method would not immediately result in fuel or reactor damage, and failure of the method may not even result in exceedance of a design limit if an accident or transient were to occur. However, for the DVR method, the NRC staff is not able to determine whether the method should be considered to have either a medium or low risk significance. Part of the inability to make this determination is the need for additional information regarding the probabilities of such failures of the DVR method. Currently, the probabilities of such failures have not been measured. Another part of this inability is in understanding what constitutes a meaningful distinction between medium and low risk in the case of DVR (i.e., what magnitude of a characteristic would constitute a medium-risk DVR method, and what would constitute a low-risk DVR method). While the NRC staff agrees that the focus on the reasonable worst-case scenario, the use of bounding assumptions, the application of a factor of safety to address unknown uncertainties, and the performance monitoring of the DVR methodology would indicate that the risk could likely be low, the staff does not believe it has enough information to make such a certain conclusion. Therefore, the NRC staff has chosen to address the risks associated with application of the DVR methodology by demonstrating they can be mitigated through the application of certain DVR criteria, as discussed in Section 4.
3.2. Data Validation and Reconciliation as an Application of a Digital Twin

The concept of digital twins has become a recent focus across multiple engineering disciplines (Refs. 20, 21, and 22). One useful definition of a digital twin is given by VanDerHorn and Mahadevan (Ref. 20) as the following:
A Digital twin includes:
(1) A virtual representation of a single instance of a physical system, [and]
(2) Data/information from the physical system used to update the states of the virtual representation over time.
Thus, a digital twin is a computational model whose inputs are primarily measurements obtained from the real-world system being simulated (as opposed to inputs which are chosen by an analyst), and which is supposed to simulate a specific state of the system which changes as those inputs change. It is helpful to understand DVR as a digital twin because the NRC staff has performed multiple regulatory reviews in various aspects of digital twins. For instance, the NRC staff has significant experience determining the credibility of virtual representations of physical systems, more commonly called computational models. The NRC staff has performed these reviews at various significance levels, up to and including the highest significance level of computational models used for plant safety analysis. The NRC staff also has significant experience ensuring that data/information from a physical system is accurately measured, transmitted, and applied in determining process safety setpoints, primarily in its reviews associated with I&C and evaluation of settings and indication uncertainties for safety-related systems. Because DVR is a digital twin, the staff's review of EPRI's DVR methodology was focused on two main topics: (1) modeling and simulation and (2) instrumentation.
3.3. NRC Review of Modeling and Simulation

The NRC routinely regulates M&S for various purposes. The NRC staff has observed (Ref. 25) that reviews of M&S vary based on the perceived risk significance of the results. In general, these reviews fall into one of three categories: high, medium, and low.
For M&S which are considered to have a high significance, the NRC staff will evaluate each computational model individually and ensure that the application of that model results in a credible simulation. This is a very intensive review of the details of the model and is generally reserved for high significance safety analysis. The criteria applied when performing such reviews are based on regulations provided in the CFR (e.g., 10 CFR Part 50, Appendix K, Section II, Required Documentation) and further refined in additional guidance (e.g., RG 1.203, Standard Review Plan Section 15.0.2).
For M&S which are considered to have a medium significance, the NRC staff will typically review and endorse the guidance which controls the M&S (e.g., external industry consensus standards) instead of reviewing the entirety of the M&S. However, the staff may still review specific portions of the M&S which are deemed more significant. For example, M&S for Probabilistic Risk Assessment generally falls into this category (e.g., American Society of Mechanical Engineers (ASME)/American Nuclear Society (ANS) RA-S-2008 (R2019)).
For M&S which are considered to have a low significance, the NRC staff will review the QAP under which the simulations are performed. In such situations, the NRC staff may also review specific portions of the M&S which are deemed more significant. The vast majority of M&S performed by a licensee generally fall into this category. An even lower risk category could be defined as those M&S which are performed by the licensee, but which are not performed under its QAP.
Using the risk significance assessed in Section 3.1, the DVR computational model would fall into the medium to low category. Thus, the NRC staff focused its review of EPRI's TR on the significant portions of the DVR model, such as the specification and importance of the physical constraint equations, the method for prediction of the reconciled mean feedwater flow rate, and the method for prediction of the reconciled mean feedwater flow uncertainty. However, the staff did not review other details which are generally reviewed in high consequence simulations (such as ensuring the user manual is complete).
3.4. NRC Review of Instrumentation System Contribution to Uncertainty

The NRC staff has significant experience reviewing licensee and applicant estimates of the uncertainties associated with instrument channel performance at nuclear power plants. The DVR method presents a new challenge, as previously only a handful of measurements were relied upon to determine CTP and CTP uncertainty using the reactor calorimetric heat balance calculation. For example, a rigorous method for evaluating instrument channel performance uncertainties, such as the one used for evaluating limiting safety system settings in plant TSs, could be used in modeling and quantifying the uncertainty of each of the many additional process parameter measurements that will be used for the DVR method of estimating CTP.
However, since the DVR method makes use of more measurements, if the same rigorous method were to be applied to quantify the uncertainty of all these additional measurements, use of the DVR method would not be practical.
While the DVR method can make use of potentially hundreds of measurements in the prediction of the reconciled mean feedwater flow rate and its uncertainty, not all of these measurements contribute equally toward the total CTP uncertainty. Measurements considered to be significant are those which contribute at least 0.5 percent to the prediction of the reconciled feedwater flow uncertainty. Hence, the measurements can be separated into two groups:
(A) Major contributor - a measurement which contributes at least 0.5 percent to the prediction of the reconciled feedwater flow uncertainty.
(B) Minor contributor - a measurement which contributes less than 0.5 percent to the prediction of the reconciled feedwater flow uncertainty.
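This grouping can be expressed as a simple screening step. The sketch below is illustrative only; the measurement names and contribution percentages are hypothetical, not values from the TR:

```python
# Hedged sketch: classify measurements as major or minor contributors to the
# reconciled feedwater flow uncertainty. All names and values are hypothetical.
THRESHOLD_PCT = 0.5  # contribution threshold from the staff's grouping

contributions = {          # measurement -> % contribution to flow uncertainty
    "fw_venturi_dp": 42.0,
    "fw_temperature": 3.1,
    "steam_pressure": 0.7,
    "blowdown_flow": 0.2,
    "cond_temperature": 0.05,
}

major = {m: c for m, c in contributions.items() if c >= THRESHOLD_PCT}
minor = {m: c for m, c in contributions.items() if c < THRESHOLD_PCT}

print("major contributors:", sorted(major))
print("minor contributors:", sorted(minor))
```

Major contributors would then receive the rigorous instrument channel uncertainty modeling discussed below, while minor contributors could be treated with bounding estimates.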
Thus, the NRC staff focused its review of the TR on consideration of the most significant portions of the DVR model which are likely to contribute to the ability of the method to accurately predict its results. These include the requirements for appropriately setting up the constraint equations, the method and process for prediction of the reconciled mean feedwater flow rate, and the method and process for prediction of the reconciled mean feedwater flow uncertainty. In contrast, the staff did not review other details which are generally reviewed in high consequence simulations (e.g., ensuring the user manual is complete).
3.5. DVR Method Overview

In this section, the NRC staff provides an overview of the DVR method described in EPRI's TR.
This overview provides technical details about the DVR method, as well as the key assumptions in the process which informed the staff's development of certain DVR conditions and limitations that any licensee wishing to apply the DVR methodology at its plant should satisfy to obtain approval of an MUR power uprate. First, the NRC staff discusses how the measurement of each process parameter monitored by an instrument channel is determined and presented in a statistical form (e.g., mean and standard deviation). Then the staff discusses how those statistical components are affected when adjusted to account for measurement uncertainties.
Next, the staff addresses the constraint equations and their role in the DVR process. This is followed by a discussion of how the reconciled values are calculated and finally a discussion on the calculation of the reconciled uncertainties. The NRC staff notes that its goal in its review of EPRI's TR is to understand whether the DVR process can result in an estimate of CTP and CTP uncertainty that is more accurate than current methods of determining CTP and CTP uncertainty when using only a handful of plant measurements that feed into the reactor calorimetric heat balance.
Additionally, the NRC staff has provided Appendix A of this SE, which provides a very simple example of the DVR methodology and compares the use of the DVR methodology to the more traditional approach in calculating the reduction in measurement uncertainty when multiple instruments are present.
3.5.1. Calculating the Statistical Representation of Each Parameter Measurement

Assume that the nuclear power plant has $n$ instrument channels and therefore $n$ parameter measurements that are used in the reactor heat balance calorimetric estimate (calculation) of CTP. As previously noted, this heat balance calculation of CTP is required to be taken over a period of time with the reactor operating at steady state power conditions. Multiple samples of measurements of each parameter are recorded under steady state conditions until enough data is collected to yield an adequate representation of the various processes involved in the reactor heat balance. Consider the $i$-th parameter measurement.4 For this measurement, we will have a sample of individual successive measurements ($x_{i,j}$), each of which is obtained at some specific time ($t_j$), and we will have $N_i$ such successive measurements.

$X_i = \left[ x_{i,1}, x_{i,2}, \ldots, x_{i,N_i} \right]$   (3.33)

Where:
$X_i$ is the vector of all samples of the $i$-th measurement
$N_i$ is the number of samples of the $i$-th measurement
$x_{i,j}$ is the $j$-th sample of the $i$-th measurement

We will initially assume that the variation in these recorded measurements is due entirely to random effects and therefore we can use the mean of these recorded values as the best estimate of the expected mean value of the recorded sampling of the variable being measured.5
4 The term parameter measurement is used consistently with its use in plant measurement. Thus, a parameter measurement is the measurement of a parameter or variable in the plant. The term should not be interpreted in a statistical sense (i.e., as a parameter of a population).
5 The staff recognizes that each measurement representing the process parameter has both systematic and random error effects, but accounting for these effects will be discussed later. This part of the discussion is only describing how the DVR method processes the recorded set of data.
$\bar{x}_i = \frac{1}{N_i} \sum_{j=1}^{N_i} x_{i,j}$   (3.34)

Where:
$\bar{x}_i$ is the mean of the recorded samples of the $i$-th measurement
$N_i$ is the number of recorded samples of the $i$-th measurement
$x_{i,j}$ is the $j$-th sample of the $i$-th measurement

Once the mean is calculated, the next step is to calculate the uncertainty6 ($s_i$) of the recorded measurements. The recorded measurements' standard deviation would be calculated as follows.
$s_i = \sqrt{\frac{1}{N_i - 1} \sum_{j=1}^{N_i} \left( x_{i,j} - \bar{x}_i \right)^2}$   (3.35)

Where:
$s_i$ is the standard deviation of the samples of the $i$-th measurement
$N_i$ is the number of samples of the $i$-th measurement
$x_{i,j}$ is the $j$-th sample of the $i$-th measurement
$\bar{x}_i$ is the mean of the samples of the $i$-th measurement

3.5.2. Modifying the Statistics to Account for Process Measurement and Instrument Channel Uncertainties

The mean of the sample ($\bar{x}_i$) and the standard deviation of the sample ($s_i$) are calculated only from the recorded sample values and assume the samples have no errors. However, there are multiple systematic and random errors which are included within the measurement, such as errors resulting from process variations, sensing element precision in reflecting the characteristics of the process being measured (typically referred to as sensor accuracy), instrument calibration uncertainties, propagation of errors resulting from signal conversion and transmission, and errors resulting from analog-to-digital conversion, digital sampling, and data storage. For this reason, to determine a more realistic representation of the true measured parameters, the mean and standard deviation of these samples may need to be adjusted to address (account for) the process and instrument channel uncertainties which are present in the recorded sample values.
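The sample statistics of Equations 3.34 and 3.35 are the ordinary sample mean and (unbiased) sample standard deviation. A minimal sketch, using hypothetical recorded samples:

```python
import math

# Hypothetical recorded samples of one parameter measurement (e.g., a flow
# signal sampled at steady state); the values are illustrative only.
samples = [100.2, 99.8, 100.5, 99.9, 100.1, 100.0, 99.7, 100.3]

n = len(samples)
mean = sum(samples) / n                                # Equation 3.34
var = sum((x - mean) ** 2 for x in samples) / (n - 1)  # unbiased sample variance
stdev = math.sqrt(var)                                 # Equation 3.35

print(f"mean = {mean:.4f}, stdev = {stdev:.4f}")
```

Note the $N_i - 1$ divisor: the standard deviation is estimated from the sample, not the population, which is why the document uses $s$ rather than $\sigma$.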
For the standard deviation of the measurement, this adjustment would always result in a value which is greater than the standard deviation of the sample.

6 Uncertainty definition and notation vary based on the reference. Often, the variable $\sigma$ is used to represent the uncertainty, as the standard deviation is commonly used to represent the uncertainty. In this document and in the TR, the uncertainty will be represented by the standard deviation, and because this uncertainty is often calculated from a sample (and not the population) the variable $s$ is used.

$s_{M,i} \geq s_i$   (3.36)

Where:
$s_{M,i}$ is the standard deviation in the $i$-th measurement that accounts for instrument channel performance errors
$s_i$ is the standard deviation of the recorded samples of the $i$-th measurement

However, for the mean of the measurement, an adjustment is not so easily made. Increasing the standard deviation will always increase the uncertainty in the measurement, and the increased uncertainty will more likely account for any random errors. However, it is not possible to a priori increase or decrease the mean value to account for systematic errors, since often the magnitude or the direction (i.e., conservative or non-conservative) of the systematic error may be unknown or difficult to estimate. Such a change to a mean value could add an additional systematic error. This is why it is important to perform accurate instrument channel modeling to identify all the important sources of systematic error. For a given measurement, if a specific error direction were known to result in a more limiting case (e.g., it is more limiting to have a higher mean value), then the systematic error adjustment could be conservatively accounted for (adjusted) in the limiting direction. However, this conservative adjustment can often only be made under very specific circumstances (e.g., linear behavior), and there are many instances of real-world scenarios where increasing or decreasing the mean does not result in a more or less conservative answer; it just results in a different answer. Hence, the mean of the measurements may be greater than, less than, or equal to the mean of the samples.
$\bar{x}_{M,i} \lesseqgtr \bar{x}_i$   (3.37)

Where:
$\bar{x}_{M,i}$ is the mean value of the $i$-th measurement that accounts for instrument channel error
$\bar{x}_i$ is the mean of the recorded samples of the $i$-th measurement

Any measurement contains error. These errors consist of systematic errors and random errors.
Systematic errors are associated with errors which directly impact the mean of the measured values. Thus, the systematic error of the measurement could be calculated using the following equation.
$e_{\bar{x},i} = \bar{x}_{M,i} - x_i^{true}$   (3.38)

Where:
$e_{\bar{x},i}$ is the error in the mean of the $i$-th measurement
$\bar{x}_{M,i}$ is the mean value of the $i$-th measurement
$x_i^{true}$ is the true value of the $i$-th measurement
Random errors are associated with errors which impact the variance in the measured values.
Thus, one way to calculate the random error of the measurement would be in determining the error in the standard deviation which could be calculated using the following equation.
$e_{s,i} = s_{M,i} - s_i^{true}$   (3.39)

Where:
$e_{s,i}$ is the error in the standard deviation of the $i$-th measurement
$s_{M,i}$ is the standard deviation of the $i$-th measurement
$s_i^{true}$ is the true value of the standard deviation of the $i$-th measurement

Because the true values of the mean and the standard deviation are unknown, the staff notes that it is not possible to calculate these errors exactly. Hence, there will always be some uncertainty in both the measurement mean and standard deviation. The I&C community within the nuclear industry has significant experience in performing instrument channel modeling and identification of the sources and magnitudes of errors that contribute to instrument channel performance. This modeling and error estimation is needed when accounting for the uncertainty in these values.
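One common way to make such an adjustment for independent random error terms, assumed here purely for illustration (the TR's actual adjustment procedure may differ), is a root-sum-square combination, which always yields a measurement standard deviation at least as large as the sample value, consistent with Equation 3.36:

```python
import math

# Hedged sketch: combine the sample standard deviation with independent random
# instrument-channel error terms by root-sum-square. The channel error values
# are hypothetical; a real analysis would take them from an instrument channel
# uncertainty model.
s_sample = 0.27                      # stdev of the recorded samples
channel_errors = [0.15, 0.10, 0.05]  # e.g., sensor, calibration, A/D (1-sigma)

s_meas = math.sqrt(s_sample ** 2 + sum(e ** 2 for e in channel_errors))

print(f"s_meas = {s_meas:.4f}")
assert s_meas >= s_sample  # Equation 3.36: adjustment never shrinks the spread
```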
Because the DVR method relies on measurements which may have multiple systematic and random errors from various sources, the staff finds that any applicant or licensee applying the DVR method should appropriately identify the statistical representation of the measurement data (i.e., mean and standard deviation) and then demonstrate how those statistical representations may need to be modified (increased) to account for uncertainties due to process measurement and instrument channel performance. This accounting needs to be reasonably accurate, so the use of analytical tools and industry standards pertinent to instrument channel modeling, identification and estimation of potential error contribution, and validation is warranted. Input data from instrument channels that contribute the most toward feedwater flow uncertainty (i.e., those with uncertainty influence of greater than or equal to 0.5 percent of feedwater flow uncertainty) should be identified. For each of these channels, the instrument channel performance uncertainty should be accurately modeled to identify and quantify (estimate) the systematic and random errors contributing to that measurement channel uncertainty. Estimates of the instrument channel uncertainty should employ appropriate instrument channel modeling and industry standards and practices for instrument uncertainty estimation or employ justified conservative bounding estimates of that uncertainty with validation. The use of any bounding estimates should be validated through tests of the DVR process and examination of test results. This is DVR Condition and Limitation 1.
There are many ways to describe the uncertainty of a measured value. The example given here will assume that we are able to obtain a good estimate of the tolerance interval for the population. That is, we know the bounds of the population such that we have 95 percent confidence that 95 percent of the population falls within those bounds. This example is for illustrative purposes, and Condition and Limitation 1 is meant to ensure that whatever method is used to determine the estimates of the means and variances, they are accurately or conservatively estimated.
While using the standard deviation or the variance is common in statistics, measurements are often given as a range, such as the following interval.

$I_i = \left[ \bar{x}_i - A_{i,95\%},\; \bar{x}_i + A_{i,95\%} \right]$   (3.40)

Where:
$I_i$ is the interval that contains at least $p$ of the population with a confidence level $1 - \alpha$. In this case we assume both values are 95%.
$\bar{x}_i$ is the mean value of the $i$-th measurement.
$A_{i,95\%}$ is the half-width of the 95% tolerance interval around the mean which encompasses the central 95% of the population of interest with $1 - \alpha$ = 95% confidence.
Equation 3.40 is a tolerance interval in that if the population is described by a normal distribution whose mean is $\bar{x}_i$ and whose standard deviation is $s_i$, then at least 95 percent of the population would fall between the endpoints of the interval with a 95 percent confidence.7 The relationship between the tolerance interval and the standard deviation is given as the following.
$s_i = \frac{A_{i,95\%}}{k}$   (3.41)

Where:
$s_i$ is the standard deviation in the $i$-th measurement
$A_{i,95\%}$ represents the values about the mean used to construct the tolerance bound
$k$ is the tolerance limit factor for a two-sided tolerance interval and is based on $N_i$ (the number of samples of the $i$-th measurement)
While $k$ is a function of $N_i$, in instances where it is possible to have a very high number of samples, the value of $k$ may be assumed to be 1.96. This value corresponds to having an infinite number of sample data points used in calculating the tolerance interval and may be a conservative value to use under certain situations. Values of $k$ may be found in statistics handbook tables, such as the statistical tables in the appendix to NUREG-1475 (Ref. 32).
For example, if the standard deviation is calculated from the tolerance limit ($A_{i,95\%}$), as in Equation 3.41, then using the 1.96 value is conservative, as it would result in a larger standard deviation than actual. Consider the case in which $N_i = 100$. The actual tolerance limit factor for that case would be 2.234. Because using the actual tolerance limit factor will always result in calculating a lower standard deviation, and having a greater standard deviation is always more conservative, using 1.96 as the tolerance limit factor is acceptable in calculating the standard deviation. However, this is not true if the equation were re-arranged.
7 As an example of confidence level in establishing the endpoints, consider such a tolerance interval of a thermocouple measurement. Suppose we took a batch of 1000 sample measurements. And then took another batch of 1000 sample measurements. And then took another batch of 1000 sample measurements and repeated this process until we had 5000 such batches. The tolerance interval would define the interval in which at least 950 samples from the total set of 1000 would lie in (95%) for at least 4750 of the 5000 batches (95% confidence).
$A_{i,95\%} = k \, s_i$   (3.42)

Where:
$A_{i,95\%}$ represents the values about the mean used to construct the tolerance bound
$k$ is the tolerance limit factor for a two-sided tolerance interval and is based on $N_i$ (the number of samples of the $i$-th measurement)
$s_i$ is the standard deviation in the $i$-th measurement

If the tolerance limit is calculated from the standard deviation, as in Equation 3.42, then using the 1.96 value is non-conservative, as it would result in a smaller tolerance limit than actual. Consider the case in which $N_i = 100$. The actual tolerance limit factor for that case would be 2.234. Because using the actual tolerance limit factor will always result in calculating a higher tolerance limit, the lower tolerance limit calculated using 1.96 would be non-conservative.
Therefore, the appropriate tolerance limit factor should be considered when determining the statistics of the measurements.
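The directional argument above can be checked numerically using the tolerance limit factors cited in the text (1.96 for an infinite sample size versus 2.234 for N = 100); the half-width and standard deviation values below are hypothetical:

```python
# Demonstration of the conservative/non-conservative directions discussed
# above, using the two-sided 95/95 tolerance limit factors quoted in the text.
k_inf = 1.96    # factor for an infinite number of samples
k_100 = 2.234   # actual factor for N = 100 (from the text)

A = 1.0                     # hypothetical tolerance limit half-width
s_with_kinf = A / k_inf     # Equation 3.41 direction
s_with_k100 = A / k_100
# Using 1.96 overstates the standard deviation -> conservative
assert s_with_kinf > s_with_k100

s = 1.0                     # hypothetical standard deviation
A_with_kinf = k_inf * s     # Equation 3.42 direction
A_with_k100 = k_100 * s
# Using 1.96 understates the tolerance limit -> non-conservative
assert A_with_kinf < A_with_k100

print(f"{s_with_kinf:.4f} > {s_with_k100:.4f}; "
      f"{A_with_kinf:.3f} < {A_with_k100:.3f}")
```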
Because the DVR method assumes the measurements are normally distributed and relies on this assumption when calculating the reconciled values and reconciled variances, the NRC staff finds that any applicant or licensee applying the DVR method should demonstrate that the assumption that measurements are normally distributed is reasonable. This is DVR Condition and Limitation 2.
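A simple screen for the normality assumption of Condition and Limitation 2 can be sketched with sample skewness and excess kurtosis; this is illustrative only, and a real demonstration would apply a formal goodness-of-fit test (e.g., Anderson-Darling or Shapiro-Wilk) to actual plant data:

```python
import math
import random

# Hedged sketch of a normality screen using sample skewness and excess
# kurtosis on synthetic (seeded) data; both statistics should be near zero
# for normally distributed measurements and a large sample.
random.seed(1)
samples = [random.gauss(100.0, 0.3) for _ in range(2000)]  # synthetic data

n = len(samples)
mean = sum(samples) / n
m2 = sum((x - mean) ** 2 for x in samples) / n  # central moments
m3 = sum((x - mean) ** 3 for x in samples) / n
m4 = sum((x - mean) ** 4 for x in samples) / n

skew = m3 / m2 ** 1.5
excess_kurt = m4 / m2 ** 2 - 3.0

print(f"skewness = {skew:.3f}, excess kurtosis = {excess_kurt:.3f}")
```

Large departures of either statistic from zero would flag a measurement whose distribution should be examined before the DVR normality assumption is credited.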
Finally, the steps described above are repeated for every measurement used in the DVR method. In general, when the measurement standard deviations are calculated, each measurement is assumed to be independent of the other measurements. This assumption is carried forward in the DVR analysis and dependent measurements would impact both the predicted reconciled values and reconciled standard deviations. Because the DVR method assumes the measurement uncertainties are statistically independent of each other and relies on this assumption when calculating the reconciled values and reconciled variances, the NRC staff finds that any applicant or licensee applying the DVR method should demonstrate that all measurement uncertainties are statistically independent of one another. This is DVR Condition and Limitation 3.
3.5.3. The Constraint Equations

The constraint equations provide the relationships between the various means of the measurements in the plant. When calculating the reconciled values, the constraint equations act to limit the possible solutions to only those values which satisfy the constraint equations. For example, consider a situation in which there are two flow meters, and one is downstream from the other, as given in Figure 9.
Figure 9: Two Flow Meters, A and B

Consider the difference in the mean values between these two flow meters.
$\Delta = \bar{x}_A - \bar{x}_B$   (3.43)

Where:
$\Delta$ is the difference between the mean measured values
$\bar{x}_A$ is the mean measured value from flow meter A
$\bar{x}_B$ is the mean measured value from flow meter B

Assume that both flow meters have no systematic errors and that the true flow rate at each flow meter is the same.
$x_A^{true} = x_B^{true}$   (3.44)

Where:
$x_A^{true}$ is the true flow rate at flow meter A
$x_B^{true}$ is the true flow rate at flow meter B

Even under these conditions, we would still expect the difference in the measured mean values to be non-zero. However, because we know that the true values should be equal, we can create a constraint on our solutions which would force the reconciled flow rates to be the same.
$\hat{x}_A = \hat{x}_B$   (3.45)

Where:
$\hat{x}_A$ is the reconciled mean value for flow meter A
$\hat{x}_B$ is the reconciled mean value for flow meter B

Hence, when the DVR method is used to calculate the reconciled values, only those values for $\hat{x}_A$ and $\hat{x}_B$ which satisfy the constraint of Equation 3.45 would be considered. Thus, the reconciled values are constrained such that only values which satisfy the constraint equations are possible.
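For this two-flow-meter example, the reconciliation has a well-known closed form: minimizing the variance-weighted squared distance subject to the constraint of Equation 3.45 yields the inverse-variance weighted mean. The sketch below is a standard data reconciliation result under those assumptions, not necessarily the exact algorithm in EPRI's TR, and the numbers are hypothetical:

```python
# Hedged sketch: reconcile two flow meters that must read the same true flow
# (constraint x_A = x_B). Minimizing the variance-weighted squared distance
# gives the inverse-variance weighted mean. Numbers are hypothetical.
x_A, s_A = 1002.0, 5.0   # mean and standard deviation, flow meter A
x_B, s_B = 996.0, 10.0   # mean and standard deviation, flow meter B

w_A, w_B = 1.0 / s_A ** 2, 1.0 / s_B ** 2
x_rec = (w_A * x_A + w_B * x_B) / (w_A + w_B)  # reconciled mean
s_rec = (1.0 / (w_A + w_B)) ** 0.5             # reconciled standard deviation

print(f"reconciled flow = {x_rec:.2f}, reconciled stdev = {s_rec:.2f}")
# The reconciled value lies between the two measurements, closer to the more
# accurate meter, and its uncertainty is smaller than either input uncertainty.
assert min(x_A, x_B) < x_rec < max(x_A, x_B)
assert s_rec < min(s_A, s_B)
```

The reduction of the reconciled uncertainty below either input uncertainty is the effect described in the next paragraph, where the constraint acts as a redundant measurement.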
Further, when calculating the reconciled uncertainty, the constraint equations can be thought of as providing redundant measurements of the measured values. For this simple example, flow meter A can be thought of as a redundant measurement of the flow at flow meter B, and therefore it can be used to reduce the uncertainty of the flow rate. This is further discussed in Section 3.5.5. An example of a set of constraints is given below.
$\hat{x}_1 - \hat{x}_2 = 0$
$\hat{x}_3 + \hat{x}_4 - \hat{x}_5 = 0$
$\hat{x}_{12} - 1.5\,\hat{x}_{19} = 0$   (3.46)

Where:
$\hat{x}_i$ is the reconciled mean value of the $i$-th measurement (assuming there are $n$ measurements in total)
Each constraint equation represents a physical relationship between two or more measured values in the plant. These physical relationships can be based on known first principles (e.g., conservation of mass, conservation of energy), well-established engineering correlations (e.g., friction factors with pressure drop), commonly assumed functions (e.g., efficiencies), or can be defined by the user. In general, the DVR method does not specify a process whereby uncertainties in the constraints can be assessed; hence, each constraint is assumed to represent the physical relationship completely and accurately among the variables it contains.
Because the DVR method assumes no errors or uncertainties in the constraints and yet heavily relies on the constraints in predicting the reconciled values, the NRC staff finds that any applicant or licensee applying the DVR method should justify that each constraint is correct.
This is DVR Condition and Limitation 4.
In general, many constraints have an inherent assumption that the only changes in the measurements are due to random variation and that the plant is not changing state. For example, the constraint described in Equation 3.45 would not be true if the flow rate were either increasing or decreasing with time. Because the DVR method assumes that the variation in a parameter's measurement is only due to random fluctuation, the NRC staff finds that any applicant or licensee applying the DVR method should demonstrate that the measurements were obtained at a steady state. This is DVR Condition and Limitation 5.
Along with each constraint being correct, the entire set of constraints used in the DVR must be appropriate. For any measurement, the addition of a constraint which involves that measurement would impact the reconciled value of the measurement and would reduce the reconciled uncertainty of the measurement. Likewise, the removal of a constraint would impact the reconciled value of the measurement and would increase the reconciled uncertainty of the measurement. Because there is no current basis for choosing a specific set of constraints, and because the DVR method would work with any number of constraints, the NRC staff finds that any applicant or licensee applying the DVR method should justify that the set of chosen constraints is appropriate. This is DVR Condition and Limitation 6.
Finally, while the physical relationships between different measurements can take many mathematical forms (e.g., linear, quadratic, exponential, logarithmic), the DVR method assumes that these relationships are linear in the calculation of the reconciled uncertainty. However, some constraints are non-linear (e.g., conservation of energy). Because the DVR method assumes that all constraints are linear in the calculation of the reconciled uncertainty, the NRC staff finds that any applicant or licensee applying the DVR method should justify that the set of chosen constraints is linear or behaves linearly in the region of interest. This is DVR Condition and Limitation 7.
3.5.4. Calculating the Reconciled Means

The DVR method can be thought of as a function whose inputs are the mean measurement values and the measurement variances, and which outputs the reconciled mean measurement values ($\hat{x}_i$).
$\hat{x}_i = f\left( \bar{x}_1, \ldots, \bar{x}_n;\; s_1, \ldots, s_n \right)$   (3.47)

Where:
$\hat{x}_i$ is the reconciled mean value of the $i$-th measurement
$f$ is the computational model which maps the measurement mean values and their uncertainties to the reconciled mean values
$\bar{x}_i$ is the mean value of the $i$-th measurement
$s_i$ is the uncertainty for the $i$-th measurement
$n$ is the total number of measurements

While the mathematics behind this function is somewhat complex, the operation of the DVR method is relatively simple. The goal of the DVR method is to generate a reconciled mean value ($\hat{x}_i$) for each of the measured means ($\bar{x}_i$), such that the reconciled values are the closest point to the original mean values. In other words, the reconciled values both satisfy the constraints (given in Equation 3.46) and minimize the distance to the measured mean values, using a metric to determine that distance. However, different distance metrics would result in different calculated distances and hence different reconciled values. In general, the most common distance metric is the Euclidean distance.
$d_E = \sqrt{\sum_{i=1}^{n} \left( \hat{x}_i - \bar{x}_i \right)^2}$   (3.48)

Where:
$d_E$ is the Euclidean distance between the measured mean point (in the $n$-dimensional space) and the reconciled mean point.
$\hat{x}_i$ is the reconciled mean value of the $i$-th measurement
$\bar{x}_i$ is the mean value of the $i$-th measurement
$n$ is the total number of measurements

The Euclidean distance provides a distance between the reconciled and measured mean values, but it treats all differences the same. In other words, it does not matter if one measurement
has a much higher uncertainty than another (i.e., a higher standard deviation); all distances between the reconciled and measured mean values are weighted equally. However, just because the numerical difference between the reconciled and measured mean values is high doesn't mean the two points are very far from each other. For example, consider the scenario in which the uncertainty of a measurement is also very large. Then the numerical difference between the reconciled and measured mean may be due to the uncertainty in the measurement itself. Hence, we may wish to divide the difference between the reconciled and measured mean values by the uncertainty of the measurement in order to scale the difference. This is defined as the generalized statistical distance (Ref. 29), also known as the Mahalanobis distance, $d_M$, and given in the equation below.
$d_M = \sqrt{\sum_{i=1}^{n} \frac{\left( \hat{x}_i - \bar{x}_i \right)^2}{s_i^2}}$   (3.49)

Where:
$d_M$ is the generalized statistical distance (i.e., the Mahalanobis distance) between the measured mean point (in the $n$-dimensional space) and the reconciled mean point.
$\hat{x}_i$ is the reconciled mean value of the $i$-th measurement
$\bar{x}_i$ is the mean value of the $i$-th measurement
$s_i$ is the uncertainty for the $i$-th measurement
$n$ is the total number of measurements

The Mahalanobis distance is commonly used to determine a distance between a value from a population and the mean of that population. It is the distance metric chosen for EPRI's DVR methodology; however, it is not the only metric that could be chosen. Aside from a metric which contains less information (e.g., the Euclidean distance), it is also possible to choose a metric which contains more information. For example, instead of dividing the difference between the reconciled and measured mean values by the variance of the population, we could divide this difference by the variance of the measured mean itself.
$d_N = \sqrt{\sum_{i=1}^{n} \frac{\left( \hat{x}_i - \bar{x}_i \right)^2}{s_i^2 / N_i}}$   (3.50)

Where:
$d_N$ is a different form of generalized statistical distance.
$\hat{x}_i$ is the reconciled mean value of the $i$-th measurement
$\bar{x}_i$ is the mean value of the $i$-th measurement
$s_i$ is the uncertainty for the $i$-th measurement
$N_i$ is the number of samples of the $i$-th measurement
$n$ is the total number of measurements

Just as the Mahalanobis distance becomes a scalar multiple of the Euclidean distance if all measurement variances are equal, the new distance $d_N$ becomes a scalar multiple of the Mahalanobis distance if all measurements have the same number of samples. Similarly, just as the Euclidean distance is insensitive to measurements with different variances where the Mahalanobis distance accounts for those differences, the Mahalanobis distance is insensitive to measurements with different standard errors where the new distance accounts for those differences.
Further, there are likely other distance metrics which could make use of more information about the problem (e.g., a distance metric which makes use of both the variance of the measured mean value and the variance of the reconciled mean value). Ultimately, there is no best choice of the distance metric used in the optimization process and any choice requires justification in the final application of the DVR method as demonstrated through its validation.
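The three metrics discussed above can be compared side by side. The measured means, candidate reconciled means, uncertainties, and sample counts below are hypothetical:

```python
import math

# Hedged comparison of the three distance metrics discussed above, for a
# hypothetical two-measurement (n = 2) case.
x_bar = [100.0, 50.0]   # measured means
x_hat = [100.8, 49.6]   # candidate reconciled means
s     = [0.4, 0.8]      # measurement standard deviations
N     = [400, 100]      # number of samples per measurement

# Euclidean distance (Eq. 3.48): all differences weighted equally.
d_E = math.sqrt(sum((h - m) ** 2 for h, m in zip(x_hat, x_bar)))
# Mahalanobis distance (Eq. 3.49): differences scaled by the uncertainty.
d_M = math.sqrt(sum(((h - m) / si) ** 2
                    for h, m, si in zip(x_hat, x_bar, s)))
# "New" distance (Eq. 3.50): differences scaled by the variance of the mean.
d_N = math.sqrt(sum((h - m) ** 2 / (si ** 2 / ni)
                    for h, m, si, ni in zip(x_hat, x_bar, s, N)))

print(f"d_E = {d_E:.3f}, d_M = {d_M:.3f}, d_N = {d_N:.3f}")
```

Note how the first measurement, with its smaller uncertainty and larger sample count, dominates the scaled metrics even though both raw differences are sub-unit.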
While the DVR method has many assumptions, one of the assumptions raises a question about the possible variation in the reconciled values. The DVR method assumes that each measurement is normally distributed and that the mean value of the measurements is the true mean of that distribution. This can be expressed in the following equation.
$$X_i \sim N(\bar{x}_i, \sigma_i^2) \quad (3.51)$$

Where:
$X_i$ is the random variable which represents a single value of the $i$th measurement
$\bar{x}_i$ is the mean value of the $i$th measurement
$\sigma_i$ is the uncertainty for the $i$th measurement

If each measurement is a random variable from a normal distribution, then the mean of each measurement must also be a random variable which has the following standard deviation.
$$\sigma_{\bar{x}_i} = \frac{\sigma_i}{\sqrt{n_i}} \quad (3.52)$$

Where:
$\sigma_{\bar{x}_i}$ is the standard deviation of the mean value of the $i$th measurement (i.e., the standard error)
$\sigma_i$ is the standard deviation for the $i$th measurement
$n_i$ is the number of samples of the $i$th measurement

The standard deviation of the mean is called the standard error. The DVR method assumes that each measured mean value is normally distributed with a standard deviation equal to the standard error, as expressed in the following equation.
$$\bar{X}_i \sim N(\bar{x}_i, \sigma_{\bar{x}_i}^2) \quad (3.53)$$

Where:
$\bar{X}_i$ is the random variable which represents the mean value of the $i$th measurement
$\bar{x}_i$ is the mean value of the $i$th measurement
$\sigma_{\bar{x}_i}$ is the standard deviation of the mean value of the $i$th measurement (i.e., the standard error)
Hence, while the DVR method generally calculates the reconciled values assuming nominal conditions for the mean values (i.e., all mean values of measurements are assumed to be at their measured mean values), there is some inherent variability expected in these mean values, and that variability is described by Equation 3.53. One concern is what impact this known variability would have on the variability in the prediction of the reconciled mean values. In other words, would the reconciled values change greatly, somewhat, or slightly when the random behavior of the mean of the measurement values is considered? Because application of the DVR method requires the mean measurement values to be random variables with a known variability, the NRC staff finds that any applicant or licensee applying the DVR method should demonstrate that expected variability in the inputs to the DVR methodology does not greatly impact the resulting reconciled values. This is DVR Condition and Limitation 8.
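The effect contemplated by DVR Condition and Limitation 8 can be illustrated with a minimal Monte Carlo sketch. The reconciliation below is a toy problem, not the EPRI DVR algorithm: two hypothetical redundant channels measuring the same quantity, constrained to agree, reconciled by inverse-variance weighting; all numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two hypothetical redundant measurements of the same flow, with the
# constraint x1 = x2.  Under that constraint the reconciled value is
# the inverse-variance weighted mean of the two measured means.
mu1, mu2 = 100.0, 101.0          # measured mean values
se1, se2 = 0.5, 1.0              # standard errors of those means

def reconcile(m1, m2):
    w1, w2 = 1.0 / se1**2, 1.0 / se2**2
    return (w1 * m1 + w2 * m2) / (w1 + w2)

nominal = reconcile(mu1, mu2)

# Perturb the inputs per Eq. 3.53 (each mean is normal with its standard
# error as standard deviation) and watch the spread of reconciled values.
samples = [reconcile(rng.normal(mu1, se1), rng.normal(mu2, se2))
           for _ in range(10000)]
print(f"nominal={nominal:.3f}, spread of reconciled values = {np.std(samples):.3f}")
```

For this linear toy problem the output spread is modest and predictable; an applicant's demonstration would need to show the same insensitivity for the full, nonlinear plant model.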
Finally, an inherent assumption in applying the results from the DVR method is that the error in the reconciled means is smaller than the error in the measured means.
$$\left| \epsilon_{\hat{x}_i} \right| < \left| \epsilon_{\bar{x}_i} \right| \quad (3.54)$$

Where:
$\epsilon_{\hat{x}_i}$ is the error in the reconciled mean value, defined in Equation 3.55
$\epsilon_{\bar{x}_i}$ is the error in the measured mean value, defined in Equation 3.56

$$\epsilon_{\hat{x}_i} = \hat{x}_i - \mu_i \quad (3.55)$$

$$\epsilon_{\bar{x}_i} = \bar{x}_i - \mu_i \quad (3.56)$$

Where:
$\epsilon_{\hat{x}_i}$ is the error in the reconciled mean value
$\hat{x}_i$ is the reconciled mean value of the $i$th measurement
$\mu_i$ is the true mean of the $i$th measurement
$\epsilon_{\bar{x}_i}$ is the error in the measured mean value
$\bar{x}_i$ is the measured mean value of the $i$th measurement
While the NRC staff believes it is possible for the reconciled error to be less than the measured error, the staff believes it is also possible for the reconciled error to be greater than the measured error. The NRC staff believes the overall DVR methodology is logically developed, but the staff is unaware of a first principle which would mean the method must be correct (i.e., always results in a smaller error). Further, the NRC staff believes that there are a number of choices (e.g., using the Mahalanobis distance over another distance metric, using or not using specific constraints) which are not necessarily right or wrong, but would impact the predicted reconciled values.
While future work may demonstrate that the DVR method will always result in a more accurate prediction of the true mean when compared to the measured mean, that work has yet to be performed. Therefore, the NRC staff finds that any applicant or licensee applying the DVR method should provide validation data which justifies the assumption that the reconciled error is less than the measurement error. This is DVR Condition and Limitation 9.
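A validation exercise of the kind contemplated here might compare reconciled and measured errors against a known truth by simulation. In the synthetic sketch below, inverse-variance weighting of two invented channels stands in for the full DVR optimization; the reconciled estimate has a smaller error variance, yet in a nontrivial fraction of individual trials its error exceeds that of the more accurate single channel.

```python
import numpy as np

rng = np.random.default_rng(1)
true_mu = 100.0                   # the (known, synthetic) true mean
se1, se2 = 0.5, 1.0               # standard errors of the two channels
w1, w2 = 1.0 / se1**2, 1.0 / se2**2

wins = 0
trials = 20000
for _ in range(trials):
    m1 = rng.normal(true_mu, se1)            # measured mean, channel 1
    m2 = rng.normal(true_mu, se2)            # measured mean, channel 2
    rec = (w1 * m1 + w2 * m2) / (w1 + w2)    # "reconciled" mean
    if abs(rec - true_mu) < abs(m1 - true_mu):
        wins += 1

print(f"reconciled beat the better channel in {wins / trials:.1%} of trials")
```

This is the staff's point in miniature: the reconciled error is smaller on average, but not in every instance, so the assumption in Equation 3.54 requires empirical validation rather than appeal to first principles.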
3.5.5. Calculating The Reconciled Uncertainties The reconciled uncertainties used in the DVR method are obtained by applying the Taylor Series Method (TSM) for uncertainty quantification. This method is commonly applied to propagate uncertainties and is discussed in both the Guide to the Expression of Uncertainty in Measurement (GUM) (Ref. 15) and ASME PTC 19.1 (Ref. 18), but for this evaluation we will follow the discussion provided in Section 3-3 of Coleman and Steele (Ref. 17). Assume a variable $r$ is the result of a function of $N$ independent random variables.
$$r = f(x_1, x_2, \ldots, x_N) \quad (3.57)$$

Where:
$r$ is the result of function $f$
$f$ is the function which is used to calculate $r$
$x_i$ is the $i$th input, which is a random variable
$N$ is the total number of inputs

The combined uncertainty in the result $r$ can be obtained using the following TSM uncertainty propagation equation.
$$\sigma_r = \sqrt{\sum_{i=1}^{N} \left(\frac{\partial f}{\partial x_i}\right)^2 \sigma_{x_i}^2} \quad (3.58)$$

Where:
$\sigma_r$ is the standard deviation (i.e., combined uncertainty) in $r$
$\partial f / \partial x_i$ is the partial derivative of $f$ with respect to the $i$th input
$\sigma_{x_i}$ is the standard deviation of the $i$th input, which is a random variable
$N$ is the total number of inputs
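Equation 3.58 is straightforward to apply numerically. The sketch below propagates input standard deviations through a hypothetical function using central-difference partial derivatives; the function, its coefficient, and the uncertainty values are all invented for illustration.

```python
import numpy as np

# Hypothetical result function: a toy flow relation r = C * sqrt(dP * rho).
def f(x):
    dp, rho = x
    return 100.0 * np.sqrt(dp * rho)

x0    = np.array([25.0, 0.75])     # nominal input values (invented)
sig_x = np.array([0.25, 0.004])    # input standard deviations (invented)

def tsm_uncertainty(func, x0, sig_x, h=1e-6):
    """First-order TSM propagation (Eq. 3.58) with numerical partials."""
    var = 0.0
    for i in range(len(x0)):
        xp, xm = x0.copy(), x0.copy()
        xp[i] += h
        xm[i] -= h
        dfdx = (func(xp) - func(xm)) / (2.0 * h)  # central-difference df/dx_i
        var += (dfdx * sig_x[i]) ** 2             # one term of Eq. 3.58
    return np.sqrt(var)

print(tsm_uncertainty(f, x0, sig_x))
```

For differentiable functions the numerical and analytic partials agree closely; a production analysis would typically use analytic sensitivities where available.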
The combined uncertainty can be calculated for the DVR method given in Equation 3.47. The combined uncertainty in the reconciled value can be calculated using the following TSM uncertainty propagation equation.
$$\sigma_{\bar{x}_{rec}} = \sqrt{\sum_{i=1}^{N} \left(\frac{\partial f}{\partial \bar{x}_i}\right)^2 \sigma_{\bar{x}_i}^2} \quad (3.59)$$

Where:
$\sigma_{\bar{x}_{rec}}$ is the standard deviation (i.e., combined uncertainty) in the reconciled mean value, also called the reconciled standard error
$\partial f / \partial \bar{x}_i$ is the partial derivative of the function $f$ with respect to the mean value of the $i$th measurement
$\sigma_{\bar{x}_i}$ is the standard deviation of the mean value of the $i$th measurement
$N$ is the total number of measurements

When applying the results of the DVR uncertainty quantification process, we need the standard deviation in the reconciled value ($\sigma_{rec}$), as we will use this standard deviation (specifically for the feedwater flow rate) to determine the CTP uncertainty. However, one issue that arises in applying the TSM is that it does not map the measurement standard deviations ($\sigma_i$) to reconciled standard deviations ($\sigma_{rec}$), but instead maps the measurement standard errors ($\sigma_{\bar{x}_i}$) to the reconciled standard errors ($\sigma_{\bar{x}_{rec}}$). This is because the DVR method does not map the measurements (or their means) to a reconciled measurement, but to the mean of the reconciled measurement.
The problem which arises is that to calculate the reconciled standard deviation, we need to know the number of samples of the reconciled measurement, $n_{rec}$, as given in the equation below.

$$\sigma_{rec} = \sigma_{\bar{x}_{rec}} \sqrt{n_{rec}} \quad (3.60)$$

Where:
$\sigma_{rec}$ is the standard deviation for the reconciled measurement
$\sigma_{\bar{x}_{rec}}$ is the standard deviation of the reconciled mean value of the measurement (i.e., the reconciled standard error)
$n_{rec}$ is the number of samples of the reconciled measurement

However, there is no way to determine the number of reconciled measurements, $n_{rec}$. To better understand the problem, we can re-write the TSM equation in terms of standard deviations instead of standard errors.
$$\frac{\sigma_{rec}^2}{n_{rec}} = \sum_{i=1}^{N} \left(\frac{\partial f}{\partial \bar{x}_i}\right)^2 \frac{\sigma_i^2}{n_i} \quad (3.61)$$

Where:
$\sigma_{rec}$ is the standard deviation for the reconciled measurement
$n_{rec}$ is the number of samples of the reconciled measurement
$\partial f / \partial \bar{x}_i$ is the partial derivative of the function $f$ with respect to the mean value of the $i$th measurement
$\sigma_i$ is the standard deviation for the $i$th measurement
$n_i$ is the number of samples of the $i$th measurement
$N$ is the total number of measurements

If we assume that the same number of samples, $n$, is taken for each measurement, then we can factor the common $1/n$ out of the right-hand side to obtain the following equation.
$$\frac{\sigma_{rec}^2}{n_{rec}} = \frac{1}{n} \sum_{i=1}^{N} \left(\frac{\partial f}{\partial \bar{x}_i}\right)^2 \sigma_i^2 \quad (3.62)$$

Where:
$\sigma_{rec}$ is the standard deviation for the reconciled measurement
$n_{rec}$ is the number of samples of the reconciled measurement
$\partial f / \partial \bar{x}_i$ is the partial derivative of the function $f$ with respect to the mean value of the $i$th measurement
$\sigma_i$ is the standard deviation for the $i$th measurement
$n$ is the number of samples of any measurement (all measurements have the same number of samples)
$N$ is the total number of measurements

Even if all measurements have the same number of samples, we must still determine how many samples we have of each reconciled value ($n_{rec}$). In this case, it seems to be a natural assumption that we have as many samples of each reconciled value as we did of the measured values (i.e., $n_{rec} = n$), and hence the equation reduces to the following.
$$\sigma_{rec} = \sqrt{\sum_{i=1}^{N} \left(\frac{\partial f}{\partial \bar{x}_i}\right)^2 \sigma_i^2} \quad (3.63)$$

The NRC staff notes that while the TSM is a well-established process, its application in DVR, especially its ability to predict accurate uncertainties of the reconciled value with a DVR model as complex as that proposed for MURs, has not been established. There are multiple assumptions and choices made when performing the analysis which may add uncertainties and impact the final results. Therefore, the NRC staff finds that any applicant or licensee applying the DVR method should provide validation data which demonstrates that the reconciled uncertainties have been accurately or conservatively predicted. Because this is directly related to the validation, this is added to DVR Condition and Limitation 9.
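The chain from Equation 3.61 to Equation 3.63 can be verified numerically: with equal sample counts and the assumption $n_{rec} = n$, the sample count cancels and the reconciled standard deviation follows directly from the measurement standard deviations. The sensitivities and sigmas below are invented for illustration.

```python
import numpy as np

theta = np.array([0.6, 0.4])   # hypothetical partials df/dx_i of the reconciliation
sigma = np.array([2.0, 3.0])   # hypothetical measurement standard deviations
n = 50                         # samples per measurement (same for all)

# Route 1 (Eqs. 3.52, 3.59, 3.60): convert to standard errors, propagate,
# then convert back assuming n_rec = n.
se = sigma / np.sqrt(n)                          # standard errors
se_rec = np.sqrt(np.sum((theta * se) ** 2))      # reconciled standard error
sigma_rec_via_se = se_rec * np.sqrt(n)           # back to a standard deviation

# Route 2 (Eq. 3.63): direct form after the n's cancel.
sigma_rec_direct = np.sqrt(np.sum((theta * sigma) ** 2))

print(sigma_rec_via_se, sigma_rec_direct)
```

The two routes agree exactly, which is the algebraic point of Equation 3.63; they diverge as soon as the $n_i$ differ or $n_{rec} \ne n$, which is why Condition and Limitation 10 requires justification of the sample counts.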
Application of the DVR method assumes that every measurement uses the same number of samples when calculating its mean. Further, the method assumes that this same number of samples applies to the reconciled measurement when converting the standard error produced by the TSM uncertainty propagation into the standard deviation of the reconciled measurement, which is used in determining the CTP uncertainty.
Therefore, the NRC staff finds that any applicant or licensee applying the DVR method should demonstrate that the same number of samples have been used for each measurement; or, if a different number of samples is used for the measurements, the reconciled uncertainty has been calculated in an accurate or conservative manner. This is DVR Condition and Limitation 10.
3.5.6. DVR Risk Evaluation As detailed in the NRC staff's risk evaluation of the DVR method (Section 3.1 of this SE), the staff's evaluation relied heavily on estimating a maximum possible error in the results of the DVR methodology. The NRC staff could only generate a reasonable estimate of this error due to the use of the penalty factor in the DVR methodology, as this penalty factor provides a high degree of assurance that the maximum possible error is bounded by a reasonable value.
Therefore, the NRC staff finds that any applicant or licensee applying the DVR method should use a method to ensure that the difference between the reconciled value of each major contributor and the initial measured value of that same major contributor is monitored and does not exceed some reasonable limit. This is DVR Condition and Limitation 11.
3.6. NRC Review of Feedwater Flow Rate Uncertainty from Previous MUR Applications Historically, licensees have provided the calculation of the uncertainty analysis for CTP with the MUR LAR. This calculation relies on applying the TSM to combine uncertainties of the various measurements that were used to generate the CTP. While there are multiple variables which are considered in the calculation of CTP and its uncertainty, the feedwater flow uncertainty generally contributes close to 80 percent or more of the total CTP uncertainty. Therefore, one way to significantly reduce the CTP uncertainty is to reduce the feedwater flow uncertainty, and hence licensees began to use LEFMs, which result in a lower feedwater flow rate uncertainty compared to measurements using venturi nozzles.
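The arithmetic behind that observation can be sketched with an SRSS combination of hypothetical contributors; the numbers below are invented for illustration and are not from any plant record.

```python
import numpy as np

# Hypothetical relative uncertainty contributions to CTP (percent of
# rated power), combined by square-root-sum-of-squares per the TSM.
contrib = {
    "feedwater flow":        1.6,   # venturi-class measurement, invented
    "feedwater temperature": 0.5,
    "steam pressure":        0.3,
    "other":                 0.4,
}

def ctp_uncertainty(c):
    return np.sqrt(sum(v ** 2 for v in c.values()))

base = ctp_uncertainty(contrib)
share = contrib["feedwater flow"] ** 2 / base ** 2
print(f"CTP uncertainty = {base:.2f}%, feedwater variance share = {share:.0%}")

# Swap in a much more accurate flow measurement and the combined
# uncertainty drops sharply, since the dominant term shrinks.
contrib["feedwater flow"] = 0.3
print(f"improved CTP uncertainty = {ctp_uncertainty(contrib):.2f}%")
```

Because the terms combine in quadrature, shrinking the dominant feedwater term is far more effective than improving any of the minor contributors.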
In 1999, the NRC staff reviewed a new method for determining the feedwater flowrate uncertainty using LEFMs (Ref. 26). In that SE, the NRC staff described how it determined that the licensee's selected manufacturer's LEFMs were capable of providing improved thermal power measurement capability through feedwater flow measurement accuracy. In 2010, the NRC staff reviewed a supplement to the initial report (Ref. 27). In that supplement, the NRC staff identified six items as being of key importance to the methodology and used that list as review criteria. Each item is listed below, as well as a brief discussion by the NRC staff about whether that specific item would also be applicable to the DVR methodology.
- 1. Consistency with the guidelines in NRC Regulatory Issue Summary (RIS), 2002-03, Guidance on the Content of Measurement Uncertainty Recapture Power Uprate Applications (Ref. 14).
The NRC staff finds that this criterion would still apply. While the RIS was specifically written assuming that the change in CTP uncertainty would be using more accurate flow meters, the staff notes that much of what is written in the RIS for MURs would apply independently
of the method of obtaining the CTP uncertainty reduction. For example, the RIS discusses the feedwater flow measurement technique and the NRC staff notes that the DVR methodology is such a technique.
- 2. Substantiation that flatness ratio, defined as the ratio of the measured average axial velocity at the outside chords to the average axial velocity at the inside chords, can be correlated to the ultrasonic flow meter (UFM) correction factor or calibration coefficient to address Reynolds number differences between the Alden Research Laboratory test and in-plant conditions if the flow profile is not significantly distorted by upstream piping components, such as elbows located within a few pipe diameters of the LEFM. Where significant distortion occurs, as can be determined from Alden Research Laboratory test results, then a Reynolds number extrapolation is necessary and acceptable.
The NRC staff finds that this criterion would not directly apply since one driving factor behind using the DVR methodology is that it can serve as a back-up method of estimating feedwater flow rate when the LEFM equipment is not functioning properly or is inoperable altogether. However, the NRC staff notes that the underlying concept does apply in that there should be some assurance that the validation obtained under a specific set of conditions is applicable to the real-world application. The NRC staff concludes that this criterion would be satisfied through satisfying Condition and Limitation 9.
- 3. Acceptability of the theoretical description of the LEFM and its operation.
The NRC staff finds that this criterion would still apply with the change of LEFM to DVR Method. The NRC staff notes that this criterion would be met by the information contained in EPRIs TR, as it contains the theoretical description of the DVR methodology, and the further information which must be submitted to satisfy the conditions and limitations of this SE, as that information would complete the description of the methodology. Therefore, the NRC staff concludes that no new DVR condition and limitation is needed.
- 4. Substantiation that the uncalibrated CheckPlus (trade name for the licensee's selected manufacturer and model of LEFM for the referenced SE) is typically within a fraction of a percent of the flow rate measured at Alden Research Laboratory.
The NRC staff finds that this criterion would not directly apply; however, the NRC staff does believe that the underlying concept should be maintained, which is that there should be empirical evidence presented which demonstrates that the predicted feedwater flowrate and its uncertainty are accurate with respect to independent measurement of both variables (i.e., validation). The NRC staff concludes that this criterion would be satisfied through satisfying Condition and Limitation 9.
- 5. Substantiation that the CheckPlus is typically relatively unaffected by flow profile distortion and swirl and, further, that the CheckPlus will provide an approximation of the flow profile.
The NRC staff finds that this criterion would not directly apply; however, the NRC staff notes that the underlying concept should be maintained. That concept is that there should be checks to ensure that expected deviations in input will not greatly impact the results of the predicted feedwater flowrate or its uncertainty. The NRC staff notes that this criterion would be met by satisfying Condition and Limitation 8 which requires an input sensitivity study to
ensure the DVR results do not greatly vary due to expected changes in input values.
Therefore, the NRC staff concludes that no additional conditions or limitations are needed.
- 6. Substantiation that downstream geometry does not have a significant influence on CheckPlus calibration.
The NRC staff finds that this criterion would not directly apply; however, the NRC staff notes that the underlying concept should be maintained. That concept is that there should be checks to ensure that the site-specific application of the DVR method would not invalidate the DVR method. The NRC staff notes that this criterion would be met by satisfying Condition and Limitation 9 which requires validation to justify the DVR process. Therefore, the NRC staff concludes that no additional condition and limitation is needed to address this concern, other than that which was described above under Condition and Limitation 9.
In summary, the NRC staff concludes that an applicant who satisfies the DVR conditions and limitations would also be satisfying the relevant criteria from the staffs previous SEs.
4. DVR CONDITIONS AND LIMITATIONS The DVR conditions and limitations are listed in Table 5. The sub-sections below the table provide additional discussion describing the details of what should be provided to satisfy each condition and limitation.
Table 5: DVR Conditions and Limitations

Number Description
1
Any applicant or licensee applying the DVR method should appropriately calculate the statistical representation of the measurement data (i.e., mean and standard deviation) and demonstrate how those statistical representations have been adjusted to account for uncertainties due to process measurement and instrument channel errors. Estimates of the instrument channel uncertainty should employ appropriate instrument channel modeling and industry standards and practices for instrument uncertainty estimation or employ justified conservative bounding estimates of that uncertainty with validation. The use of any bounding estimates should be validated through tests and examination of test results.
2 Any applicant or licensee applying the DVR method should demonstrate that the assumption that measurements are normally distributed is reasonable.
3 Any applicant or licensee applying the DVR method should demonstrate that all measurement uncertainties are statistically independent.
4 Any applicant or licensee applying the DVR method should justify that each constraint is correct.
5 Any applicant or licensee applying the DVR method should demonstrate that the measurements were obtained at a steady state.
6 Any applicant or licensee applying the DVR method should justify that the set of chosen constraints is appropriate.
7 Any applicant or licensee applying the DVR method should justify that the set of chosen constraints is linear or behaves linearly in the region of interest.
8 Any applicant or licensee applying the DVR method should demonstrate that expected variability in the inputs to the DVR methodology does not greatly impact the resulting reconciled values.
9 Any applicant or licensee applying the DVR method should provide validation data which justifies the assumption that the reconciled error is less than the measurement error and that the reconciled uncertainties have been accurately or conservatively predicted.
10 Any applicant or licensee applying the DVR method should demonstrate that the same number of samples have been used for each measurement; or, if a different number of samples is used for the measurements, the reconciled uncertainty has been calculated in an accurate or conservative manner.
11 Any applicant or licensee applying the DVR method should use a method to ensure that the difference between the reconciled value and the initial measured value is monitored and does not exceed some reasonable limit.
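Several of the conditions in Table 5 lend themselves to simple numerical screens before the more formal demonstrations an applicant would submit. The sketch below illustrates crude screens for Conditions 2 (normality) and 5 (steady state) on synthetic data; the statistics and thresholds are assumptions for illustration, not NRC acceptance criteria.

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(430.0, 1.5, size=600)   # hypothetical measurement window

# Condition 2 sketch: screen for normality using sample skewness and
# excess kurtosis (both near zero for normal data).  A real submittal
# would use a formal test and engineering review, not this screen alone.
z = (x - x.mean()) / x.std()
skew = np.mean(z ** 3)
ex_kurt = np.mean(z ** 4) - 3.0

# Condition 5 sketch: screen for steady state via the slope of a linear
# fit over the window; a drifting process shows a slope that is large
# relative to the measurement noise.
t = np.arange(x.size)
slope = np.polyfit(t, x, 1)[0]

print(f"skew={skew:.3f}, excess kurtosis={ex_kurt:.3f}, slope={slope:.2e}")
```

For this well-behaved synthetic window all three statistics are near zero; the same screens applied to skewed or trending plant data would flag the window for further evaluation.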
4.1.1. DVR Condition and Limitation 1 DVR Condition and Limitation 1 Any applicant or licensee applying the DVR method should appropriately calculate the statistical representation of the measurement data (i.e., mean and standard deviation) and demonstrate how those statistical representations have been adjusted to account for uncertainties due to process measurement and instrument channel errors. Estimates of the instrument channel uncertainty should employ appropriate instrument channel modeling and industry standards and practices for instrument uncertainty estimation or employ justified conservative bounding estimates of that uncertainty. The use of any bounding estimates should be validated through tests and examination of test results.
SE for EPRI Technical Report 3002018337

Sections 2, 5, and 7 of the EPRI DVR TR describe how licensees who elect to use the DVR methodology will be collecting plant process measurement information from a plant process information data historian that will be used to feed into a client server system running the DVR system software to compute the reconciled mean and standard deviation of the feedwater flow rate uncertainty (or the CTP uncertainty directly). Section 2.5 of the TR provides an example configuration (Figure 2-1) of plant equipment which could be implemented by licensees to validate and reconcile this data from the plant data historian.
4.1.1.1. Sources of instrument channel error Before the plant process data can be stored in the data historian, it must be transmitted there from some plant instrumentation source. That source can be the plant process computer, a plant data acquisition system other than the plant process computer, or some other distributed control and data acquisition system. Often, the data is put into the plant historian via a network connection from one of these data acquisition sources. Instrument channels from around the plant feed information into each of those data acquisition systems. Those instrument channels are typically composed of sensing elements, sensing lines, transducers, transmitters, converters, signal processing devices, signal isolators, analog-to-digital converters, digital processors, and other signal transmission and conversion devices. Each one of these devices within the instrument channel contributes its own source of error (both systematic and random) into the reading (measurement) of the process parameter that is eventually stored on the plant data historian. Further, the output of the sensing and converting devices is influenced by local process and ambient condition variations, and by the accuracy of the measurement and test equipment that is used to periodically calibrate these instrument channels such that its representation of that process can be compared to a calibration standard with known but acceptably small uncertainty. In other words, the value of the plant parameter that is stored in the data historian includes error not only from the measurement sensing devices, but also from every device along the instrument channel signal path before it gets to the data acquisition systems and ultimately, to the data historian. Once inside the data historian, the data may be further influenced by the effects of digital sampling, data compression and storage routines, algebraic truncation, and other digital signal processing factors.
Hence, when estimating the uncertainties associated with the measurements from the devices which feed into the client server that ultimately determines the reconciled feedwater flow rate, the mean and standard deviation of the measurements need to account for these errors. Thus, the standard deviation of the true parameter measurement (which includes these uncertainties) will always be greater than or equal to the standard deviation of the recorded samples.
Section 3.2 of the EPRI DVR TR describes a process which may be used to adjust the measurements from the recorded data in a manner that, in part, accounts for the effects of systematic and random errors that are embedded within the measured data. The report states that there are two ways to address systematic error. The first is to increase the uncertainty of the measured value such that the Gaussian uncertainty distribution encompasses the true value. But for that method to be implemented, one must have a great deal of knowledge about the size and shape of each of the distributions of the error components, and about how they may convolve to determine the resulting shape of the combined error distributions.
The EPRI DVR TR states: The second method is to evaluate the systematic error when assessing the test results to determine appropriate test data corrections to be applied to raw, sampled data. Systematic errors with the measured data may cause the test results to be consistently low or high as compared to expectations, when assessing many test results.
Systematic errors often may be more easily detected by analyzing the test result data than by investigating the error contributors of the measurement. However, the NRC staff recognizes that while there is a possibility that the existence of systematic errors may be detectable through examination of reconciled results and comparing them to expectations, the proper magnitude and degree of contribution of the systematic error may not be discernable through such examination. The staff recognizes that good instrument channel modeling and evaluation of the types of process effects that can influence the transmitted signal can contribute to the validation of the results by identifying and quantifying the approximate magnitude of systematic errors that contribute to biasing the result of the reconciled measurements. The NRC staff finds that DVR Condition and Limitation 1 addresses this concern because such instrument channel modeling and uncertainty evaluation provides a more realistic estimate of both the bias and the systematic uncertainties in process conditions as measured and transmitted by the instrument channel, which should lead to a more accurate reconciled mean and uncertainty produced by the DVR methodology.
The EPRI DVR TR further states: Once the systematic errors are detected, and the source(s) of the error are determined, corrections can be applied, and the test results can be recalculated.
It may also be possible to reduce the uncertainty of the measurement related to the random errors to improve the Gaussian distribution of the measurement. Figure 3-5 in the EPRI DVR TR (reproduced below) illustrates the relationship between the location of the mean (including biases) and standard deviation of the measured data (right-hand distribution) as compared to the location of the mean for the true value and its corrected Gaussian distribution (left-side distribution). Hence the area between these distributions is where correction of the measured data for systematic error would bring the measured data closer to the true value. The DVR method described in the TR proposes to implement a model tuning process that is part of the DVR methodology. The TR states: Provided the error terms used for the TSM are estimated conservatively, the resulting distribution will bound the results which would be obtained using the MCM [Monte Carlo Method]. Systematic errors can then be identified and analyzed by reviewing the test results. (Emphasis added by the NRC staff.)
Section 6.2.5 of the EPRI DVR TR further elaborates on the model tuning process. It states that the DVR input measurement accuracy should encompass any plant calibration records, loop accuracies, systematic errors, localization errors, or other measurement phenomena. It also states: Refinement to the accuracy value may include accounting for systematic errors, localization errors, or other measurement phenomena that may bias the results. Finally, the TR states: When available[,] manufacturer performance specifications for the in-situ field performance and/or plant calibration records are used, and the overall loop accuracies of the measurement through the plant computer output are calculated or estimated. If these data are not available, conservative estimates of the in-situ field accuracies of instruments in question, based upon industry experience with similar devices, are developed. When difficulties are encountered resolving the accuracy of an instrument that is a significant DVR CTP results contributor, additional measures will need to be taken to identify the source of the error. Cross comparisons of measurements in question in the field using instrumentation with a known, lab-traceable accuracy may be used as a means to identify measurement errors. Otherwise, conservative bounding [limits] of the accuracy estimate may be used, and/or, additional margins to account for the errors may be added to the DVR CTP accuracies when evaluating the DVR CTP overall accuracy.
4.1.1.2. Adjustment for Systematic and Random Error The staff notes that while the EPRI DVR TR acknowledges that corrections to the recorded parameter measurement data need to be made so that it is adjusted to account for a variety of sources of uncertainty such as plant calibration records, loop accuracies, systematic errors, localization errors, or other measurement phenomena, the TR does not seem to account for the other types of instrument channel measurement errors that are not readily observable under operations or surveillance tests. These include some of the sources of error that could have the greatest contribution to systematic error, such as localized process and ambient errors due to incorrect calibration scaling to adjust for head correction, stratification of temperature within
larger pipes, differential pressure, and flow biases due to internal geometries of pressure vessels, etc. While the TR rightly points out that it is important that the error terms be estimated conservatively, it does not stress the importance of accurately modeling the instrument channels that will be contributing the most (i.e., 0.5 percent or greater) toward total uncertainty in CTP.
Such accurate instrument channel and sampling system modeling would serve to best identify and therefore account for the systematic errors that are the most difficult to determine. These corrections could then be applied during the model tuning process to arrive at a more truly representative recording of true process parameter. The NRC staff finds that DVR Condition and Limitation 1 addresses this concern because such accurate instrument channel and sampling system modeling serves to provide a more accurate and realistic estimate of both the bias and the systematic uncertainties in process conditions as measured and transmitted by the instrument channel, which should lead to a more accurate reconciled mean and uncertainty produced by the DVR methodology.
4.1.1.3. Staff Observations Each plant measurement can be categorized into one of two significance groups. Either it is considered to be a major contributor to the DVR prediction (e.g., it contributes greater than or equal to 0.5 percent toward the estimate of total CTP uncertainty), or it is considered to be a minor contributor (Section 3.4). While the calculation of the mean and standard deviation of each measurement parameter that is used in the DVR process should be reasonably accurate, it is expected that a focus will be placed on the determination of the statistical representation for the instrument channels which are categorized as major contributors to the DVR prediction of CTP mean and uncertainty.
One area of such evaluation would be the degrees of fidelity and conservatism that are applied to such modeling. For example, ISA Recommended Practice ISA RP67.04.02-2010 (Ref. 33) outlines several considerations and modeling techniques that should be applied to this type of analysis, and describes methods for combining uncertainties that have been standardized by the ISA. These methods are presented in contrast to the methods identified in the ASME Performance Test Code PTC 19.1 (Ref. 18). For example, in the PTC, contrary to the method described in the Recommended Practice, the square-root-of-the-sum-of-the-squares combination of biases is used and justified on the basis that all the biases are random. The ISA Recommended Practice notes that the PTC approach includes the application of a bias concept that differs from (i.e., is less conservative than) the Recommended Practice, and therefore the calculation of an uncertainty coverage interval would be less conservative than the ISA methods. Also, the Recommended Practice states that the reader is cautioned that ANSI/ASME PTC 19.1 is intended for application to activities different from the purpose of this Recommended Practice. Therefore, the NRC staff recommends that users of the DVR methodology exercise caution when modeling and estimating the true measurement uncertainty coverage interval.
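The gap between the two combination approaches is easy to quantify. A minimal sketch with invented bias terms:

```python
import numpy as np

# Three hypothetical systematic (bias) limits for one instrument channel,
# in percent of span (values invented for illustration).
biases = [0.25, 0.10, 0.15]

# PTC 19.1-style combination: square-root-of-the-sum-of-the-squares,
# justified when the biases can be treated as independent and random.
srss = np.sqrt(sum(b ** 2 for b in biases))

# More conservative treatment, in the spirit of the ISA Recommended
# Practice's caution: an arithmetic sum of the bias magnitudes bounds
# the worst-case alignment of all biases in the same direction.
arith = sum(biases)

print(f"SRSS = {srss:.3f}%, arithmetic sum = {arith:.3f}%")
```

The SRSS result is always smaller than or equal to the arithmetic sum, which is the sense in which the PTC combination produces a narrower, less conservative coverage interval.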
When considering the calculation of the mean, users of the EPRI DVR methodology should ensure that they have appropriately considered and addressed any contribution of biases in the measurements of the instrument channels that contribute to CTP and its uncertainty. If there are previously unidentified and unaccounted-for biases in the mean, the computed mean of the measurement may be biased in a non-conservative direction (e.g., either higher or lower, depending on the specific parameter), which would serve to mask adverse plant performance. However, the conservative direction cannot always be discerned, which makes the treatment of unknown systematic errors more difficult. Likely contributors
to such biases may include factors that are not readily apparent during an instrument sensor or channel calibration process and hence will not be identified through examination of historical plant calibration data. These include process effects, such as thermal stratification in a pipe where important temperature sensors are used to determine enthalpy at that location in the steam cycle. To identify such biases, it may be necessary to model the instrument sensing and transmission process for the entire channel, from the process through storage in the data historian.
Such biases may also be present due to a previously inadequate determination of the proper instrument scaling parameters used for establishing the proper calibration of the sensor/transmitter. This is likely because many of the additional existing instrument channels proposed for use in the DVR methodology may not previously have received the same degree of modeling and engineering evaluation for channel performance uncertainty as the instrument channels historically used for the CTP calorimetric heat balance, or instrument channels that also serve safety-related functions.
Similarly, when considering the calculation of the standard deviation of the measurement, users of the DVR methodology should ensure that all portions of the measurement process have been considered in the calculation of the measurement variability. This includes variability associated with the process and ambient conditions, the accuracy of the measurement device (i.e., the instrument), variability associated with the signal traveling from the device to a centralized computer, variability associated with compressing data and storing the results in a data logging system, and any other variabilities in the generation, transmission, storage, and use of the measurement data. These variabilities must be appropriately calculated such that the standard deviation of the measurement is not underpredicted. As with the bias errors discussed above, careful instrument channel modeling and evaluation would help to identify these error contributions so that they may be estimated more accurately.
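As a minimal sketch, the independent variance components described above combine by root-sum-square to give the channel's total standard deviation. The component values below are purely illustrative, not values from the TR:

```python
import math

def combined_sigma(components):
    """Root-sum-square combination of the standard deviations of
    independent error sources in a measurement chain."""
    return math.sqrt(sum(s * s for s in components))

# Hypothetical error budget for one instrument channel (illustrative values,
# in percent of span): process/ambient variability, instrument accuracy,
# signal transmission, and data compression/storage.
sigma_total = combined_sigma([0.30, 0.25, 0.10, 0.05])
```

Note that omitting any component can only shrink the result, which is why an incomplete error budget underpredicts the standard deviation.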
Finally, if the standard deviation is calculated from the tolerance limit that has a 95 percent confidence level or higher, then it is acceptable and conservative to use a tolerance limit factor of 1.96. However, if the tolerance limits are being calculated from the sample standard deviation, it is not appropriate or acceptable to use a tolerance limit factor of 1.96 (unless the sample size considered is in the range of thousands of data points) and the tolerance limit factor should be calculated based on the actual number of samples included in the DVR process.
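The effect described above can be illustrated with a small Monte Carlo sketch (sample sizes and trial counts are illustrative assumptions): when the sample standard deviation from a small sample is paired with the fixed factor 1.96, the resulting interval contains a fresh observation from the same normal population less often than the nominal 95 percent:

```python
import random
import statistics

def coverage_of_196_interval(n, trials=30000, seed=7):
    """Monte Carlo sketch: how often does the interval xbar +/- 1.96*s,
    with xbar and s computed from a sample of size n drawn from N(0, 1),
    contain a fresh observation from the same population?"""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        sample = [rng.gauss(0.0, 1.0) for _ in range(n)]
        xbar = statistics.fmean(sample)
        s = statistics.stdev(sample)
        if abs(rng.gauss(0.0, 1.0) - xbar) <= 1.96 * s:
            hits += 1
    return hits / trials

small_n = coverage_of_196_interval(10)                  # noticeably below 0.95
large_n = coverage_of_196_interval(2000, trials=2000)   # close to 0.95
```

This is consistent with the condition above: with thousands of samples the sample standard deviation is close to the population value and 1.96 is approximately adequate, but for small samples a larger, sample-size-dependent factor is needed.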
4.1.2. DVR Condition and Limitation 2

DVR Condition and Limitation 2: Any applicant or licensee applying the DVR method should demonstrate that the assumption that measurements are normally distributed is reasonable.
SE for EPRI Technical Report 3002018337

While there are statistical tests that are often used to determine whether a set of values has a normal distribution (e.g., D'Agostino, Shapiro-Wilk, Anderson-Darling), licensees should use caution when applying such tests, as it would be easy to incorrectly conclude that the data sets are not normal when, in fact, they are. Statistical tests have two types of errors. For a normality test, a type 1 error occurs if the test says that the set of values is not normally distributed when in fact it is normally distributed. A type 2 error occurs if the test says that the set of values is normally distributed when in fact it is not normally distributed. By choosing a
significance level, we can control the rate of type 1 errors. For example, using the common significance level of 5 percent would mean that the test would result in a type 1 error only five times in one hundred. However, if as few as 14 tests were performed at the same 5 percent significance level, the probability of obtaining at least one false rejection is over 50 percent.
Thus, even if the data in each of those 14 tests came from normal distributions, it is more likely than not that at least one test for normality would say otherwise. Given that there are hundreds of measurements (i.e., data sets),
the NRC staff concludes that it would not be feasible to use normality tests to demonstrate that all measurements were obtained from a normal distribution.
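The multiple-testing arithmetic behind the 14-test figure quoted above can be sketched in a few lines:

```python
def prob_at_least_one_false_rejection(num_tests, alpha=0.05):
    """Probability that at least one of `num_tests` independent normality
    tests rejects at significance level `alpha`, even though every data
    set really was drawn from a normal distribution."""
    return 1.0 - (1.0 - alpha) ** num_tests

p_14 = prob_at_least_one_false_rejection(14)    # just over 0.5
p_300 = prob_at_least_one_false_rejection(300)  # essentially 1.0
```

With hundreds of independent tests, at least one false rejection is a near-certainty, which is the basis for the staff's feasibility conclusion.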
In general, the NRC staff has observed that normal distribution of a measurement is a common assumption when considering plant instruments. It is an assumption currently made in the calculation of the CTP and CTP uncertainty, and therefore the NRC staff does not believe it is unwarranted to make this assumption when applying the DVR method. However, the NRC staff notes that while it may be reasonable to assume measurements can be considered random samples from a normal distribution, this assumption should be confirmed for each measurement, as some measurements may be known to have a different type of distribution and would therefore need to be treated separately. The staff also notes that while previous methods assumed that measurements were normally distributed, the DVR method makes this assumption many more times, as it relies on many more measurements.
Because it is unlikely that each measurement can be proven to be normally distributed, the NRC staff finds that performing validation of the DVR method is necessary to ensure that even if some of the measurements are not normally distributed, the DVR method still results in an accurate prediction of the true feedwater flow rate and an accurate prediction of its uncertainty.
This validation is discussed in DVR Condition and Limitation 9.
4.1.3. DVR Condition and Limitation 3

DVR Condition and Limitation 3: Any applicant or licensee applying the DVR method should demonstrate that all measurement uncertainties are statistically independent.
SE for EPRI Technical Report 3002018337

One of the key assumptions in the DVR method and the use of the TSM to determine the combined uncertainty is the assumption of statistical independence for each input (i.e., each measurement). In general, the uncertainties of any two measurements would be expected to be independent if they are measured by different instruments. For example, a flow meter and a thermocouple would generally produce measurements with statistically independent uncertainties. However, it is always possible for dependencies between measurement uncertainties to appear where there are commonalities between the measurements. The following is a list of examples of commonalities that may result in statistical dependence.
- 1. Using the same model of instrument or instruments from the same manufacturer.
- 2. Using the same signal processing software to transmit information from the instruments.
- 3. Using the same software to reduce/store the readings from the instruments.
- 4. Using the results of two or more measurements to calculate a pseudo-measurement.
In general, items 1, 2, and 3 are common to many instrumented systems and would result in a dependence only if there was an error in the process (e.g., all flow meters from a specific company were improperly built or calibrated and therefore share the same error).
The treatment of a pseudo-measurement is more complicated. A direct calculation of a pseudo-measurement would almost always depend on the measurements used in its generation. Therefore, the covariance between the pseudo-measurement uncertainty and the uncertainties of the other measurements used to generate it may not be zero (i.e., it is not statistically independent).
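A small simulation sketch (all values invented for illustration) shows how a pseudo-measurement inherits dependence on its parent measurements even when the parents are mutually independent:

```python
import random
import statistics

def sample_cov(xs, ys):
    """Unbiased sample covariance of two equal-length sequences."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (len(xs) - 1)

rng = random.Random(0)
n = 50_000
a = [rng.gauss(100.0, 2.0) for _ in range(n)]  # measurement A
b = [rng.gauss(40.0, 1.0) for _ in range(n)]   # measurement B, independent of A
p = [ai - bi for ai, bi in zip(a, b)]          # pseudo-measurement P = A - B

cov_ab = sample_cov(a, b)  # near zero: A and B are independent
cov_pa = sample_cov(p, a)  # near var(A) = 4: P is NOT independent of A
```

The nonzero covariance between P and A is exactly the term that the TSM combination would miss if independence were assumed.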
4.1.4. DVR Condition and Limitation 4

DVR Condition and Limitation 4: Any applicant or licensee applying the DVR method should justify that each constraint is correct.
SE for EPRI Technical Report 3002018337

Ensuring the correctness of the constraints is vital to the DVR method, as an incorrect constraint would result in a reconciled value which is further from the true value than the measured value.
In general, constraints are based on first principles (e.g., conservation of mass and energy) or well-known physical laws (e.g., friction factors). However, constraints may also be based on assumed relationships (e.g., component efficiencies) or user-defined relationships. The DVR method has no mechanism to address uncertainties or errors in the constraints; therefore, each constraint must be correct. While all constraints would have some impact on the reconciled feedwater flow rate and its uncertainty, only a subset of constraints would be major contributors to those values. Errors in the constraints related to major contributors (see Section 3.4 of this SE) would be a larger concern than errors in the constraints related to minor contributors, as errors from minor contributors would have a minimal impact on the reconciled values.
Because it is unlikely that each constraint can be proven to be correct, the NRC staff concludes that performing validation of the DVR method is necessary to ensure that, even if a constraint is incorrect, the DVR method still results in an accurate prediction of the true feedwater flow rate and an accurate prediction of its uncertainty. This validation is discussed in DVR Condition and Limitation 9.
4.1.5. DVR Condition and Limitation 5

DVR Condition and Limitation 5: Any applicant or licensee applying the DVR method should demonstrate that the measurements were obtained at a steady state.
SE for EPRI Technical Report 3002018337

Because the DVR method assumes that changes in the values of each measurement are due solely to random fluctuations, the plant should not be changing state during the period over which the measurements are taken. There are multiple ways to demonstrate that the
plant is at a steady state. While each measurement used in the DVR method should be at a steady state, those measurements considered major contributors should receive additional focus.
Because it is unlikely that it can be proven that every measurement is at a steady state, the NRC staff finds that performing validation of the DVR method is necessary to ensure that even if a measurement is not at a perfect steady state, the DVR method still results in an accurate prediction of the true feedwater flow rate and an accurate prediction of its uncertainty. This validation is discussed in DVR Condition and Limitation 9.
4.1.6. DVR Condition and Limitation 6

DVR Condition and Limitation 6: Any applicant or licensee applying the DVR method should justify that the set of chosen constraints is appropriate.
SE for EPRI Technical Report 3002018337

Because the choice of constraints affects which values the DVR method can choose for the reconciled mean feedwater flow rate, the set of constraints chosen must be appropriate. This includes ensuring that a constraint is not included or excluded simply because doing so makes the resulting reconciled value more desirable. For example, if the use or non-use of a constraint related to the efficiency of a heat exchanger makes the resulting reconciled value more desirable, but there is no basis for the change in the reconciled value (i.e., there is no engineering justification for the impact the heat exchanger's efficiency has on the reconciled mean feedwater flow rate), then this is an example of model tuning and is not permitted.
In general, as more constraints are added to the DVR method, the reconciled mean feedwater flow rate should approach its true value and the reconciled uncertainty should decrease. Likewise, if constraints are removed, the reconciled mean feedwater flow rate should approach the measured value and the reconciled uncertainty should increase toward the measurement uncertainty.
Because there will likely not be a detailed sensitivity analysis on the constraints used, and constraints could be added or removed and the DVR method would still generate predictions of reconciled values, the NRC staff finds that performing validation of the DVR method is necessary to ensure that the set of constraints used is reasonable such that the DVR method results in an accurate prediction of the true feedwater flow rate and an accurate prediction of its uncertainty. This validation is discussed in DVR Condition and Limitation 9.
4.1.7. DVR Condition and Limitation 7

DVR Condition and Limitation 7: Any applicant or licensee applying the DVR method should justify that the chosen constraints are linear or behave linearly in the region of interest.
SE for EPRI Technical Report 3002018337
While many of the constraints are based on linear relationships between the measurements, some constraints have non-linear relationships. However, one of the DVR method's key assumptions in predicting the reconciled value and its uncertainty is that the constraints are all linear, or at least behave linearly in the region of interest (i.e., the region around which they are used to predict the reconciled value and its uncertainty). This assumption arises primarily from the use of the TSM method for uncertainty quantification, which assumes linear behavior in the calculation of the reconciled variance. Thus, while the constraints can display strongly non-linear behavior far from that region, they must behave linearly in the region of interest. In summary, each constraint should either (a) be linear or (b) behave linearly in the region of interest.
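One way to check the "behaves linearly in the region of interest" condition is to compare a constraint against its tangent line over a few standard deviations around the operating point. The sketch below uses a hypothetical orifice-type relation q = C·sqrt(dp); the coefficient and operating values are invented for illustration:

```python
import math

def max_linearization_error(f, x0, sigma, k=3.0, steps=200):
    """Largest relative deviation between f and its tangent line at x0,
    scanned over the region of interest x0 +/- k*sigma."""
    h = 1e-6 * max(abs(x0), 1.0)
    slope = (f(x0 + h) - f(x0 - h)) / (2.0 * h)  # central-difference slope
    worst = 0.0
    for i in range(steps + 1):
        x = x0 - k * sigma + (2.0 * k * sigma) * i / steps
        tangent = f(x0) + slope * (x - x0)
        worst = max(worst, abs(f(x) - tangent) / abs(f(x)))
    return worst

# Hypothetical orifice-type constraint q = C * sqrt(dp) with an assumed C = 10.
q = lambda dp: 10.0 * math.sqrt(dp)
err_narrow = max_linearization_error(q, x0=100.0, sigma=0.5)  # tiny: effectively linear
err_wide = max_linearization_error(q, x0=100.0, sigma=20.0)   # larger: non-linear here
```

A small maximum deviation over the measurement's plausible range supports the linearity assumption; a large one indicates the constraint needs closer scrutiny before use with the TSM.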
4.1.8. DVR Condition and Limitation 8

DVR Condition and Limitation 8: Any applicant or licensee applying the DVR method should demonstrate that the expected variability in the inputs to the DVR methodology does not greatly impact the resulting reconciled values.
SE for EPRI Technical Report 3002018337

The DVR method uses the mean and standard deviation of each measurement as inputs.
While the standard deviation is generally assumed to be constant, the mean value is a random variable and is therefore expected to vary. Therefore, applicants should perform a sensitivity study to ensure that the known variation in the mean values does not greatly impact the reconciled mean feedwater flow rate or feedwater uncertainty. One example of an acceptable sensitivity study would be a Monte Carlo analysis in which each measurement's mean value is randomly selected from a normal distribution, where the mean of that distribution is the mean of the measurement and the standard deviation of the distribution is the standard error of the measurement (i.e., the standard deviation or uncertainty of the measurement divided by the square root of the number of samples used to calculate the mean of the measurement). This analysis should use enough samples to demonstrate that the expected variation in the inputs to the DVR method will not result in large variations in the predicted reconciled values (both reconciled mean values and reconciled uncertainties).
While some variability in the reconciled mean value is expected, the standard deviation of the reconciled mean should be much less than the predicted standard error (i.e., the reconciled uncertainty divided by the square root of the number of samples used to calculate the mean). If it is not, that is evidence that the predicted reconciled uncertainty is not large enough to account for known variabilities in the DVR input values.
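The Monte Carlo study described above can be sketched on a deliberately simplified reconciliation: two redundant measurements with the single constraint that they are equal, for which the reconciled value is the inverse-variance weighted mean. All numbers are illustrative assumptions, not plant data:

```python
import math
import random
import statistics

def reconcile(m1, s1, m2, s2):
    """Simplest DVR case: two redundant measurements with the single
    constraint m1 = m2; reconciliation is the inverse-variance weighted mean."""
    w1, w2 = 1.0 / s1**2, 1.0 / s2**2
    mean = (w1 * m1 + w2 * m2) / (w1 + w2)
    return mean, math.sqrt(1.0 / (w1 + w2))

def sensitivity_study(m1, s1, m2, s2, n_samples, trials=20_000, seed=3):
    """Perturb each input mean by its standard error and record the spread
    of the reconciled mean over many trials."""
    rng = random.Random(seed)
    se1, se2 = s1 / math.sqrt(n_samples), s2 / math.sqrt(n_samples)
    recon = [reconcile(rng.gauss(m1, se1), s1, rng.gauss(m2, se2), s2)[0]
             for _ in range(trials)]
    return statistics.stdev(recon)

# Illustrative inputs: two redundant flow readings (arbitrary units).
spread = sensitivity_study(1000.0, 8.0, 1004.0, 6.0, n_samples=400)
se_recon = reconcile(1000.0, 8.0, 1004.0, 6.0)[1] / math.sqrt(400)
# `spread` should be comparable to, and not larger than, `se_recon`.
```

In this toy case the observed spread matches the predicted standard error because the reconciliation is exactly the weighted-mean estimator; for a full DVR model, a spread well above the predicted standard error would signal an underpredicted reconciled uncertainty.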
4.1.9. DVR Condition and Limitation 9

DVR Condition and Limitation 9: Any applicant or licensee applying the DVR method should provide validation data which justifies the assumption that the reconciled error is less than the measurement error and that the reconciled uncertainties have been accurately or conservatively predicted.
SE for EPRI Technical Report 3002018337

The DVR method makes use of multiple physical models in the constraint equations; however, the way in which these models are combined is an optimization of the generalized statistical distance between the initial measured mean values and the reconciled mean values. While this optimization is logical, it is not a physical model, as there are no physical laws which have been shown to demonstrate that nature behaves in this manner. Therefore, validation of the DVR method is vital in demonstrating the credibility of the model (i.e., whether the model can be trusted for its intended use).
In general, validation for DVR could be performed using one of two approaches. In the first approach, there must be a direct validation that the DVR method as applied (including all assumptions generally made by the users) results in a more accurate prediction of the true mean feedwater flow rate than the measured mean feedwater flow rate, and that the reconciled uncertainty has been accurately calculated. This validation would require an independent measurement of the feedwater flow rate (i.e., a measurement not used in the DVR method) that also has an uncertainty of a magnitude similar to the reconciled uncertainty from the DVR method.
Such validation data could be obtained at a plant with LEFMs, provided the LEFM measurements were not used8 in the DVR method. In such a situation, the DVR prediction and the LEFM mean value, with their uncertainties (i.e., the 95 percent tolerance intervals), should have significant overlap; if there is no overlap, either the DVR method or the LEFM measurements must be in error.
If such validation data is obtained at the same plant in which the DVR method would be applied, there would be no question as to the applicability of that data. However, such direct validation is limited to plants in which a more accurate measurement of the feedwater flow rate is available.
For plants that do not have an alternate and more accurate measurement of the feedwater flow rate available, such direct validation is not possible, and therefore a two-stage approach to validation would be required. First, the applicant or licensee using the DVR method should use the same DVR methodology to generate a DVR model for a similar plant that does have more accurate measurements of the feedwater flow rate with an uncertainty similar to the reconciled prediction (e.g., LEFMs). This validation should demonstrate that the DVR prediction (with its uncertainties) has significant overlap with the more accurate measurements (with their uncertainties).
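As a sketch of the overlap criterion (the numbers and the notion of "fractional overlap" are illustrative assumptions, not values or acceptance criteria from the TR), one could quantify how much two 95 percent intervals intersect:

```python
def interval_overlap_fraction(mean_a, u_a, mean_b, u_b, k=1.96):
    """Fraction of the narrower k-sigma interval that intersects the other
    interval; 0.0 means the two intervals do not overlap at all."""
    lo_a, hi_a = mean_a - k * u_a, mean_a + k * u_a
    lo_b, hi_b = mean_b - k * u_b, mean_b + k * u_b
    overlap = min(hi_a, hi_b) - max(lo_a, lo_b)
    narrower = min(hi_a - lo_a, hi_b - lo_b)
    return max(0.0, overlap / narrower)

# Illustrative check: a DVR prediction against an independent LEFM reading
# (all numbers invented for the sketch; units arbitrary).
frac = interval_overlap_fraction(13450.0, 25.0, 13460.0, 20.0)  # near 1.0
disjoint = interval_overlap_fraction(0.0, 1.0, 100.0, 1.0)      # 0.0
```

A result near 1.0 supports "significant overlap"; a result of 0.0 indicates that either the DVR model or the independent measurement is in error.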
Second, the applicant or licensee using the DVR method should use that same methodology to generate the DVR model that will be used at their own plant. This model should be used to predict the reconciled mean feedwater flow rate and uncertainty without the DVR model making use of the measured feedwater flow rate.8 The DVR model that is implemented at the plant and the measured feedwater flow rate should have significant overlap. Because this use of the DVR method entirely ignores the measurement of the feedwater flow rate, the reconciled mean feedwater flow value will be different and the reconciled uncertainty will be higher than the values used during plant operation. However, this validation demonstrates that the DVR method is able to accurately predict the feedwater flow rate. When the method is applied for operation, it is expected that the plant would use the feedwater flow measurement as an input to the DVR method to generate the best estimate of the CTP.

8 In some instances, the LEFMs may be used as a calibration or correction of the venturi flow meters. This use would create a dependence on the LEFM value and therefore make the use of the LEFMs as an independent measurement questionable. Ideally, the results of the LEFMs should in no way inform the results of the DVR method.
Finally, DVR is a computational model and as such could be reviewed as either a physics-based model or a data-driven model. The staff's review focused primarily on DVR as a physics-based model, ensuring that each element of the modeling process was correct in that the assumptions of the mathematical models matched physical reality. To ensure this correctness, the staff created the conditions and limitations discussed in this SE. However, the staff recognizes that, as with other engineering analyses, it will not be possible to demonstrate that each of the assumptions made in the mathematical models is certain to be true. While each condition and limitation should be demonstrated to be satisfied using reasonable efforts, the NRC staff recognized that an alternative approach was to treat the DVR model as a data-driven model (as opposed to a physics-based model). Treating DVR as a data-driven model would not rely on demonstrating the veracity of the assumptions made in the mathematical models, but instead on demonstrating that DVR can accurately predict empirical (measured) data. While the validation data from this condition and limitation is necessary for both approaches, if DVR is treated as a data-driven model, the validation data becomes the main evidence demonstrating the credibility of the model.
Thus, if treated as a data-driven model, the much higher reliance on validation would require a significant expansion of the amount of validation data. For many models, such extra validation would pose a large challenge, in that there would need to be a source for the empirical data used for that validation. However, for DVR models there is the possibility of using real-time plant data to also act as validation. Similar to the second approach described above, a modified version of the DVR model that does not use the feedwater flow measurement as an input could continuously be used to predict the feedwater flow. Such a modified DVR model would be very similar to the model used to predict core power (the only difference being the absence of the feedwater flow measurement itself), and the model's prediction could be compared to the feedwater flow measurement. Assuming that the reconciled feedwater flow value and its uncertainty had significant overlap with the feedwater flow measurement and its uncertainty, this comparison would constitute a real-time validation, would demonstrate the ability of the DVR model to accurately predict the feedwater flow value, and would provide reasonable assurance that the unmodified version of the DVR model (i.e., the version that does use the feedwater flow rate as an input) would also result in an accurate prediction of the true feedwater flow value.
4.1.10. DVR Condition and Limitation 10

DVR Condition and Limitation 10: Any applicant or licensee applying the DVR method should demonstrate that the same number of samples has been used for each measurement or, if a different number of samples is used for the measurements, that the reconciled uncertainty has been calculated in an accurate or conservative manner.
SE for EPRI Technical Report 3002018337

The DVR method results in the calculation of a reconciled mean feedwater flow rate and a reconciled uncertainty. The reconciled uncertainty is given in the form of a standard deviation; however, it is not the standard deviation of the reconciled feedwater flow rate but the standard deviation of the mean of the reconciled feedwater flow rate (i.e., the standard error of the reconciled feedwater flow rate). The standard deviation of the reconciled value is what is needed to calculate the uncertainty in the CTP. In general, the standard deviation of the reconciled mean can be used to estimate the standard deviation of the reconciled measurement, but the number of samples of the reconciled value is needed to perform this calculation.
For the DVR method, there are no reconciled samples, and therefore there is no sample count for the reconciled values. Hence, some number of samples must be assumed. If the mean of every measurement is calculated using the same number of samples, then the NRC staff considers it reasonable to assume that this number could also be used as the number of samples of the reconciled values. While the NRC staff considers this a reasonable assumption, it also recognizes it as an assumption that cannot be easily proven or justified, as the same standard deviation of the reconciled mean could correspond to infinitely many standard deviations of the reconciled values (one for each assumed sample count). Therefore, the NRC staff finds that further justification is needed to demonstrate that the DVR method results in an accurate prediction of the true feedwater flow rate and an accurate prediction of its uncertainty. This validation is discussed in DVR Condition and Limitation 9.
If a different number of samples is used to calculate the mean of different measurements, it is not clear which number of samples should be used when converting the standard deviation of the mean reconciled value to the standard deviation of the reconciled value. Therefore, any applicant using a different number of samples would need to demonstrate that the calculation of the standard deviation in the reconciled value was performed in an accurate or conservative manner.
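The conversion itself is simple; the difficulty discussed above is justifying the sample count n. A sketch, with an assumed common sample count (the numbers are illustrative):

```python
import math

def sigma_of_value_from_sigma_of_mean(sigma_mean, n):
    """Convert the standard deviation of the reconciled mean (the standard
    error) back to the standard deviation of the reconciled value, under
    the assumption that n samples underlie every measurement."""
    return sigma_mean * math.sqrt(n)

# Illustrative: a reconciled standard error of 0.05 (arbitrary units)
# with an assumed common sample count of 900.
sigma_value = sigma_of_value_from_sigma_of_mean(0.05, 900)
```

Because the same standard error is consistent with any (sigma, n) pair satisfying sigma = se·sqrt(n), the assumed n directly scales the resulting standard deviation, which is why mismatched sample counts require separate justification.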
4.1.11. DVR Condition and Limitation 11

DVR Condition and Limitation 11: Any applicant or licensee applying the DVR method should use a method to ensure that the difference between the reconciled value and the initial measured value is monitored and does not exceed some reasonable limit.
SE for EPRI Technical Report 3002018337
The penalty factor defined in equation 3.42 of the TR is one factor that could be used to satisfy this condition and limitation. Other forms of the penalty factor would also be acceptable, provided it can be demonstrated that they ensure the difference between the reconciled value and the initial measured value does not exceed some reasonable limit.
5. CONCLUSIONS

Based on the NRC staff's risk assessment of the DVR results (Section 3.1), the staff's previous treatment of similar models and simulations (Section 3.3), the staff's previous evaluation of nuclear power plant process measurement uncertainty (Section 3.4), the staff's understanding of the DVR methodology (Section 3.5), and the staff's previous treatment of the calculation of the feedwater flow rate and its uncertainty in relation to the calculation of the CTP and CTP uncertainty (Section 3.6), the NRC staff concludes that there is reasonable assurance that the DVR method as described in EPRI TR 3002018337 can be used to determine the CTP and the CTP uncertainty, provided all DVR conditions and limitations (Section 4) have been satisfied.
6. REFERENCES
- 1. Greene, J., EPRI, letter to NRC, Transmittal of Use of Data Validation and Reconciliation Methods for Measurement Uncertainty Recapture: Topical Report, EPRI Report 3002018337, January 27, 2021, ADAMS Accession No. ML21053A028.
- 2. Swilley, S., EPRI, letter to NRC, Request for Withholding of the following Proprietary Document: Use of Data Validation and Reconciliation Methods for Measurement Uncertainly Recapture, EPRI Technical Report 3002018337, January 27, 2021, ADAMS Accession No. ML21053A029.
- 3. EPRI, Use of Data Validation and Reconciliation Methods for Measurement Uncertainty Recapture, TR 3002018337, November 2020, ADAMS Accession Nos. ML21053A031 (Proprietary Version, Non-Publicly Available) and ML21053A030 (Nonproprietary Version, Publicly Available).
- 4. Holonich, J., NRC, e-mail, to Crytzer, K., EPRI, NRC Staff Acceptance and Withholding Determinations for EPRI Data Validation Topical Report, March 16, 2021, ADAMS Accession No. ML21048A004.
- 5. James, L., NRC, e-mail, to Crytzer, K., EPRI and Pimentel, F., NEI, Request for Additional Information - EPRI Report 3002018337, Use of Data Validation and Reconciliation Methods for Measurement Uncertainty Recapture: Topical Report (EPID No. L-2021-TOP-0006), May 2, 2022, ADAMS Accession No. ML22118A055.
- 6. Crytzer, K. EPRI, letter to James, L., NRC, Docket No. 99902021 - EPRI Responses to RAI-01, RAI-03, RAI-09, and RAI-10 for Topical Report 3002018337-P, August 8, 2022, ADAMS Accession No. ML22223A052.
- 7. James, L., NRC, e-mail to Greene, J., EPRI, Final Set 2 Request for Additional Information - EPRI Report 3002018337, Use of Data Validation and Reconciliation Methods for Measurement Uncertainty Recapture: Topical Report (EPID No. L-2021-TOP-0006), December 8, 2022, ADAMS Accession No. ML22341A076.
- 8. Crytzer, K., EPRI, letter to James, L., NRC, Docket No. 99902021 - EPRI Responses to RAI-13, RAI-14, RAI-15, RAI-16 and RAI-17 for Topical Report 3002018337-P, March 8, 2023, ADAMS Accession No. ML23066A242.
- 9. NRC, Thermal and Hydraulic Design, Section 4.4 of NUREG-0800, Standard Review Plan for the Review of Safety Analysis Reports for Nuclear Power Plants: LWR [Light Water Reactor] Edition, Revision 2, March 2007, ADAMS Accession No. ML070550060.
- 10. Exelon Generation Company, LLC, Calculation LE-0113, Rev. 0, Reactor CTP Uncertainty Calculation 1, Attachment 11, December 2, 2009, ADAMS Accession No. ML100850406.
- 11. Tennessee Valley Authority, License Amendment Request for Measurement Uncertainty Recapture Power Uprate (WBN-TS-19-06), October 19, 2019, ADAMS Accession No. ML19283G117.
- 12. Hesson, G.M., Cliff, W.C., and D.L. Stevens, A Mathematical Model for Assessing the Uncertainties of Instrumentation Measurement for Power and Flow of PWR Reactors, NUREG/CR-3659, February 1985, ADAMS Accession No. ML081550335.
- 13. NRC, Setpoints for Safety-Related Instrumentation, RG 1.105, Revision 4, February 2021, ADAMS Accession No. ML20330A329.
- 14. NRC, Guidance on the Content of Measurement Uncertainty Recapture Power Uprate Applications, RIS 2002-03, January 31, 2002, ADAMS Accession No. ML013530183.
- 15. JCGM 100:2008, Evaluation of Measurement Data - Guide to the Expression of Uncertainty in Measurement, 2008.
- 16. JCGM 101:2008 Supplement 1 - Propagation of distributions using a Monte Carlo Method, 2009. https://www.bipm.org/documents/20126/2071204/JCGM_101_2008_E.pdf/325dcaad-c15a-407c-1105-8b7f322d651c?version=1.12&t=1659082897489&download=true
- 17. H. W. Coleman and W. G. Steele, Experimentation, Validation, and Uncertainty Analysis for Engineers, 4th ed., Hoboken, NJ: John Wiley & Sons, 2018.
- 19. ANSI/ISA, Standard 67.04.01-2018, Setpoints for Nuclear Safety-Related Instrumentation, ISA, Research Triangle Park, NC, 2018.
- 20. VanDerHorn, E., and S. Mahadevan, Digital Twin: Generalization, characterization and implementation, Decision Support Systems, Volume 145, June 2021.
- 21. Jones, D., Snider, C., Nassehi, A., Yon, J., and Hicks, B., Characterizing the Digital Twin: A systematic literature review, CIRP Journal of Manufacturing Science and Technology, Vol 29, pp 36-52, 2020.
- 22. Kochunas, B., and Huan, X., Digital Twin Concepts with Uncertainty for Nuclear Power Applications, Energies, Vol. 14, 2021.
- 23. Vose, D., Risk Analysis: A Quantitative Guide, 3rd Edition, John Wiley & Sons, Ltd., England, 2008.
- 24. ASME VVUQ 1, Verification, Validation, and Uncertainty Quantification Terminology in Computational Model and Simulation, The ASME, New York, 2022.
- 25. Moorcroft, D., Kaizer, J., and K. Aycock, Module 6: Regulatory Agency Perspectives: Examples and Lessons Learned, Verification, Validation, and Uncertainty Quantification in Computational Model of Materials and Structures - Online Course, The Minerals, Metals, & Materials Society, August 22-23, 2022.
- 26. NRC, SE of TR ER-80P, Improving Thermal Power Accuracy and Plant Safety While Increasing Operating Power Level Using the LEFM System, March 8, 1999, ADAMS Accession No. ML11353A016.
- 27. NRC, SE of TR ER-157P, Supplement to Topical Report ER-80P: Basis for a Power Uprate with the LEFM Check or CheckPlus System, August 16, 2010, ADAMS Accession Nos. ML102160694 (original SE) and ML102160713 (comment resolution).
- 28. ASME VVUQ 10.2, The Role of Uncertainty Quantification in Verification and Validation of Computational Solid Mechanics Models, The ASME, New York, 2021.
- 29. Mahalanobis, P.C., On the Generalized Distance in Statistics, Proceedings of the National Institute of Sciences (Calcutta), 1936.
- 30. https://www.khanacademy.org/math/multivariable-calculus/applications-of-multivariable-derivatives/lagrange-multipliers-and-constrained-optimization/v/lagrange-multiplier-example-part-1 (retrieved on March 27, 2023)
- 31. Arras, K.O., An Introduction to Error Propagation: Derivation, Meaning and Examples of Equation, Technical Report of the Autonomous Systems Lab, Institute of Robotic Systems, Swiss Federal Institute of Technology Lausanne (EPFL), No. EPFL-ASL-TR-98-01 R3, September 1998.
- 32. Lurie, D., Abramson, L., and J. Vail, Applying Statistics, NUREG-1475, Revision 1, Second Edition, March 2011, ADAMS Accession No. ML11102A076.
- 33. International Society of Automation Recommended Practice ISA-RP67.04.02-2010, Methodologies for the Determination of Setpoints for Nuclear Safety-Related Instrumentation, December 2010, Section 6 and Annex J.
7. APPENDIX A - DVR EXAMPLE

This appendix provides a simple example of the DVR methodology provided in EPRI's DVR TR.
The example uses two flow meters in the same pipe that make redundant measurements of the flow rate. The normal method, which averages the two measurements to generate a reduced uncertainty, is compared to the DVR method.
7.1. Redundant Flow Measurements in a Pipe

Consider two flow meters on the same pipe, where one flow meter is downstream of the other. While the flow meters may be independent of one another, the variable they are measuring should be the same (i.e., the flow rate in the pipe should be the same at both locations, assuming no leakage). Thus, we can take advantage of this redundant measurement and determine an estimate of the true flow rate in the pipe which has a lower uncertainty than either of the individual measurements. To demonstrate this, assume that we have a pipe with flow meters A and B as displayed in Figure 10.
Figure 10: Pipe with two flow meters A and B

Further, assume that we know the values from flow meters A and B, and their respective uncertainties. In engineering, there is some disagreement as to what a measurement uncertainty means mathematically. The GUM (Ref. 15) defines the standard uncertainty as the standard deviation of a measurement. However, it is common in engineering to consider the uncertainty as the value which can be added to and subtracted from the mean to encompass a large percentage (often 95 percent) of the uncertainty population for that value. To ensure clarity, in this SE we will not assign any specific mathematical meaning to uncertainty. The means and associated uncertainties of each measurement are given in Table 6.
Table 6: Example pipe flow rates and uncertainties

Flow Meter   | mean flow rate (kg/sec) | 95% tolerance interval (kg/sec) | standard deviation (kg/sec) | variance (kg²/sec²)
Flow Meter A | 245.00                  | 12.25                           | 6.25                        | 39.06
Flow Meter B | 250.00                  | 12.50                           | 6.38                        | 40.67

7.2. Combining Redundant Measurements

We can combine these measurements to obtain a single estimate for the flow rate through the pipe that has a lower uncertainty than either of the individual measurements. We do this by assuming there is a relationship between the flow rates at meter A and meter B. We could assume that the best estimate of the true flow rate in the pipe is the average of the flow rates.
m̂ = (m_A + m_B) / 2    (7.1)

Where:
m̂ is the best estimate of the true flow rate in the pipe
m_A is the flow rate measured by flow meter A
m_B is the flow rate measured by flow meter B

Using the TSM as illustrated in Section 3-3.1 of Coleman and Steele (Ref. 17), we can determine the best estimate of the variance using Equation 3.9.

σ_m̂² = (∂m̂/∂m_A)² σ_A² + (∂m̂/∂m_B)² σ_B²    (7.2)

We can directly evaluate the partial derivatives in Equation 7.2. The resulting variance is given as follows.

σ_m̂² = (1/2)² σ_A² + (1/2)² σ_B² = (σ_A² + σ_B²) / 4    (7.3)
Using Equation 7.1 and letting m_A = 245.00 kg/sec as well as m_B = 250.00 kg/sec, we can generate our predicted value for the mean of the flow rate. Using Equation 7.3, we generate new uncertainties. All of this information is updated in Table 7.
Table 7: Example pipe flow rates - with averaged flow rate

Flow Meter   | mean flow rate (kg/sec) | 95% tolerance interval (kg/sec) | standard deviation (kg/sec) | variance (kg²/sec²)
Flow Meter A | 245.00                  | 12.25                           | 6.25                        | 39.06
Flow Meter B | 250.00                  | 12.50                           | 6.38                        | 40.67
Averaging    | 247.50                  | 8.75                            | 4.46                        | 19.93

Notice that by taking the average of flow meters A and B, we were able to reduce the uncertainty in our estimate of the flow rate in the pipe. This type of uncertainty reduction is due to having redundant measurements of the same parameter and assuming that the best estimate of the true value is the average of both measurements. This average is our model of the flow rate in the pipe. However, we could use a different model to determine another estimate of the flow rate in the pipe; for example, we could use a weighted average of the flow rates.
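The averaging arithmetic above is simple enough to check directly. The short Python sketch below (our own illustration; the variable names are not from the TR) reproduces the averaged-flow-rate row of Table 7 from the Table 6 inputs:

```python
import math

# Table 6 inputs: means and 95% tolerance intervals for flow meters A and B
mean_a, ti_a = 245.00, 12.25   # kg/sec
mean_b, ti_b = 250.00, 12.50   # kg/sec

# Standard deviations recovered from the 95% tolerance intervals (TI = 1.96*sd)
sd_a, sd_b = ti_a / 1.96, ti_b / 1.96
var_a, var_b = sd_a**2, sd_b**2        # ~39.06 and ~40.67 kg^2/sec^2

# Equation 7.1: best estimate of the flow rate as the simple average
mean_avg = (mean_a + mean_b) / 2

# Equation 7.3: TSM variance, with both partial derivatives equal to 1/2
var_avg = (0.5**2) * var_a + (0.5**2) * var_b
sd_avg = math.sqrt(var_avg)
ti_avg = 1.96 * sd_avg

print(round(mean_avg, 2), round(var_avg, 2), round(sd_avg, 2), round(ti_avg, 2))
# 247.5 19.93 4.46 8.75
```

The variance of the average is roughly half of each individual variance, which is the familiar 1/n reduction for averaging redundant measurements.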
7.3. Introducing Constraints

Instead of developing new models to provide better estimates of the flow rate in the pipe, DVR focuses on determining physical equations which the variables in the pipe should satisfy. These physical equations, while technically models, are models in which we have a high degree of belief, as there is no easy way to assess any errors or uncertainties in the constraints.
The most obvious constraint for our example is that the mass flow rates from flow meters A and B should be equal. This is represented in the equation below.
m_A = m_B    (7.4)

Unlike Equation 7.1, which introduces a new way to estimate the flow rate in the pipe, Equation 7.4 is a constraint we place on the system. However, introducing this constraint reveals an inconsistency. According to our measured data, the mass flow rate of A does not equal the mass flow rate of B. So, which is correct? Should we trust the measured data (which is the same as saying the constraint must be wrong) or should we trust the constraint (which is the same as saying the measured data must be wrong)? Using the DVR methodology, we trust the constraint and assume that the measured data must be in error. This is because the constraint is often based on a principle of physics which is believed to be true, while measured data are known to have biases and uncertainties that are not always captured. Thus, after determining that we need to adjust our measured values, we next must determine the best way to adjust them.
We could assume that flow meter A is completely correct and add a bias to B to match A. Or we could assume flow meter B is completely correct and add a bias to A to match B. Because such assumptions would be arbitrary, we assume that both A and B are incorrect. However, in assuming that both A and B are incorrect, we still need a fair means of determining how incorrect each is. We could decide that both flow meters are incorrect by the same amount; however, this also seems arbitrary when fully considered. Saying that both flow meters are incorrect by the same amount is akin to saying that 50 percent of the incorrectness is in flow meter A and 50 percent in flow meter B. This seems reasonable, but maybe we believe that 90 percent of the incorrectness is in flow meter A and 10 percent in flow meter B (or vice versa). Therefore, we need an objective basis for determining how much incorrectness to assign to each flow meter, and for that we will use a distance metric.
For our pipe example, we wish to determine new mean values for flow meters A and B, and we will represent these values as m̂_A and m̂_B, respectively. These new mean values are called the reconciled mean values. Consider the 2D space of all possible values of m̂_A and m̂_B. Before we apply the constraint equation, this space of all possible solutions is the entire plane. This plane is represented in Figure 11 below, where the gray area (i.e., the entire plane) represents the possible solutions.
Figure 11: Solution space - no constraint
However, we know these new mean values should satisfy the constraint equation in 7.4.

m̂_A = m̂_B    (7.5)

Thus, we can use this constraint to rule out many of the possible solutions given in Figure 11.
By applying the constraint, we are saying that only those solutions which satisfy the constraint are deemed possible. Thus, our new solution space is given in Figure 12 where the gray line represents the possible solutions.
Figure 12: Solution space - with constraint

The difference between Figure 11 and Figure 12 demonstrates the impact of applying the constraint. While there are an infinite number⁹ of possible solutions in each case, applying the constraint allows us to rule out most solutions and enables us to focus on those remaining.
Thus, instead of asking what is the best solution we can choose in the plane of Figure 11, we can ask what is the best solution we can choose on the line of Figure 12. However, we need additional information to determine that solution.
While we could choose any point along the line as our solution, we want to choose new mean values that are closest to the original mean values. However, to determine which values are closest, we need a distance metric. The distance metric we have chosen is the Mahalanobis distance. (Note that other distance metrics are also available to be chosen; however, the Mahalanobis distance is the metric that was selected for use in the EPRI DVR TR.)

⁹ There is a natural tendency to say that the constraint reduces the number of possible solutions. However, given that a line has an infinite number of points and a plane also has an infinite number of points, it seems incorrect to say that the number of possible solutions has been reduced. This is further justified by the proofs which demonstrate that the cardinalities of ℝ and ℝ² are the same.
7.4. Calculating the Reconciled Mean

The Mahalanobis distance (Ref. 29) is a generalized statistical distance. For a single sample from a given population, the Mahalanobis distance provides a measure of how far a sample value is from the mean value of that population, with the squared distance from the mean divided by the variance of the population. The general form of the metric is given as the following.

d = sqrt( Σ_i (x_i − μ_i)² / σ_i² )    (7.6)

Where:
d is the Mahalanobis distance
x_i is the sample point from the ith population
μ_i is the mean of the ith population
σ_i² is the variance of the ith population

Effectively, the Mahalanobis distance is a distance metric that weights how far a particular point is from the mean of a population by the variance of that population. In other words, it provides a weighted measure of the distance from the means, where the weight is based on the variance.
Consider a simple case where we only have one population (i = 1), and ask the question: What point in that population would have a Mahalanobis distance of 0, 1, or 2?

For the Mahalanobis distance to be zero, the term (x_1 − μ_1) must be zero. This only occurs if the point we are calculating the distance to is the mean. Thus, if d = 0, the point in question must be the mean. For the Mahalanobis distance to be one, (x_1 − μ_1)² must be equal to σ_1². Further, for the Mahalanobis distance to be two (or any integer n), (x_1 − μ_1)² must be equal to n²σ_1². Thus, if d = n, where n is an integer, the point in question must be located n standard deviations away from the mean.
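This interpretation of the Mahalanobis distance as a count of standard deviations can be illustrated with a few lines of Python (our own sketch, not part of the TR, using the flow meter A population from Table 6):

```python
import math

def mahalanobis_1d(x, mu, var):
    """Mahalanobis distance of a single point x from a population (mu, var)."""
    return math.sqrt((x - mu) ** 2 / var)

mu, sd = 245.0, 6.25   # flow meter A population from Table 6

# Points located 0, 1, and 2 standard deviations from the mean
for k in (0, 1, 2):
    d = mahalanobis_1d(mu + k * sd, mu, sd ** 2)
    print(k, d)   # d equals k: the metric counts standard deviations
```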
For our example problem, we have two populations to consider: that of flow meter A and that of flow meter B. Therefore, the form of the Mahalanobis distance metric is the following.

d = sqrt( (m̂_A − m_A)²/σ_A² + (m̂_B − m_B)²/σ_B² )    (7.7)

In summary, we believe that better estimates of the new mean values (i.e., the reconciled mean values) are those values (m̂_A and m̂_B) which satisfy the constraint (i.e., are on the gray line of Figure 12) and which also minimize the distance metric (i.e., the Mahalanobis distance of Equation 7.7). One common way to solve this problem is by using Lagrange multipliers.
Lagrange multipliers are a method for finding local maxima or minima of one function (i.e., the Mahalanobis distance) subject to the constraints of another function (i.e., the constraint that the new mean values must be equal). An example of using Lagrange multipliers can be found in Reference 30.
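For this simple constraint, the Lagrange conditions can be solved in closed form: the reconciled value is an inverse-variance weighted average of the two measured means. The sketch below (our own check, with the Table 6 values; not from the TR) confirms the closed form against a brute-force scan of the constraint line:

```python
# Minimize the squared Mahalanobis distance of Equation 7.7 subject to the
# constraint of Equation 7.5 (both reconciled means equal a common value m).
mean_a, var_a = 245.00, 39.06
mean_b, var_b = 250.00, 40.67

def d_squared(m):
    """Squared Mahalanobis distance along the constraint line."""
    return (m - mean_a) ** 2 / var_a + (m - mean_b) ** 2 / var_b

# Closed form from the Lagrange conditions: inverse-variance weighting
m_star = (mean_a / var_a + mean_b / var_b) / (1 / var_a + 1 / var_b)

# Brute-force check: scan the constraint line between the two measurements
grid = [240 + i * 0.001 for i in range(10001)]   # 240.000 .. 250.000
m_best = min(grid, key=d_squared)

print(round(m_star, 2), round(m_best, 2))
# 247.45 247.45
```

The minimum lies slightly closer to meter A, which has the smaller variance, as expected for a variance-weighted reconciliation.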
While we used the constraint and the Mahalanobis distance with Lagrange multipliers to generate a new reconciled mean value, we will use the same TSM method to determine the uncertainty. Thus, we will be using the same equation (7.2) to calculate the new variance, however, the partial derivatives will be obtained from the function which represents the DVR process.
7.5. Calculating the Reconciled Variance

Determining the new variance due to averaging the mass flow rates was a straightforward application of the TSM. Determining the reconciled variance resulting from the DVR method can also be understood as an application of the TSM, but it is not as simple or straightforward. This is primarily because we no longer have a single equation which can be used to determine the new mean value; instead, that determination involves a constraint and the minimization of a metric.
To illustrate this difference, consider a function f which uses inputs x_1, x_2, …, x_n to generate a new estimate of the mean value of the flow rate.

m̂ = f(x_1, x_2, …, x_n)    (7.8)

We can further refine the inputs by recognizing that our function will only use two types of inputs: either the input will be a mean value (m̄_1, m̄_2, …, m̄_n) or it will be a variance (σ_1², σ_2², …, σ_n²). Thus, we can re-write our function as the following:

m̂ = f(m̄_1, σ_1², m̄_2, σ_2², …, m̄_n, σ_n²)    (7.9)

We can write the averaging process, ignoring the variance terms, as follows.

m̂ = (m̄_A + m̄_B) / 2    (7.10)

We can also write the DVR method in a similar format as a function of specific inputs. Because we will calculate two different reconciled values (m̂_A, m̂_B), we will have two equations as follows.

m̂_A = (m̄_A/σ_A² + m̄_B/σ_B²) / (1/σ_A² + 1/σ_B²)
m̂_B = (m̄_A/σ_A² + m̄_B/σ_B²) / (1/σ_A² + 1/σ_B²)    (7.11)
Because the TSM can be used for both processes, the NRC staff can use the same equation to determine the new variance.

σ_m̂² = Σ_i (∂f/∂m̄_i)² σ_m̄_i² + Σ_i (∂f/∂σ_i²)² Var(σ_i²)    (7.12)

Where:
σ_m̂² is the variance in the new mean value
∂f/∂m̄_i is the first derivative of the function with respect to the ith mean
σ_m̄_i² is the variance in the ith mean
∂f/∂σ_i² is the first derivative of the function with respect to the ith variance
Var(σ_i²) is the variance in the ith variance

For the averaging process, this equation reduces to the following.
σ_m̂² = (∂f/∂m̄_A)² σ_m̄_A² + (∂f/∂m̄_B)² σ_m̄_B²    (7.13)

Where:
σ_m̂² is the variance in the new mean
∂f/∂m̄_A is the first derivative of the function with respect to m̄_A
σ_m̄_A² is the variance in the mean of measurement A
∂f/∂m̄_B is the first derivative of the function with respect to m̄_B
σ_m̄_B² is the variance in the mean of measurement B
The result of these equations is similar to, but slightly different from, the TSM equation used earlier in Equations 7.2 and 7.3. The result of the earlier equations was the variance of the averaged measurement (σ²). However, the result of these equations is the variance of the mean of the averaged measurement (σ_m̄²). The variance of the averaged measurement is the variance in the population of measurements, while the variance of the mean of the averaged measurement is the variance in a mean drawn from that population. While these two values are related to each other, they have very different magnitudes. These variances are related using the following equation.

σ_m̄² = σ² / n    (7.14)

Where:
σ_m̄² is the variance in the sample mean
σ² is the variance in the sample
n is the number of elements in the sample used to generate the sample mean
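Equation 7.14 is the standard relation between a population variance and the variance of a sample mean. A quick Monte Carlo check in Python (our own illustration; the particular mean, sample size, and seed are arbitrary choices):

```python
import random
import statistics

random.seed(12345)                 # fixed seed so the check is repeatable
sigma, n, trials = 6.25, 50, 2000  # population sd, sample size, repetitions

# Draw many samples of size n and observe the spread of their sample means
sample_means = [
    statistics.fmean(random.gauss(245.0, sigma) for _ in range(n))
    for _ in range(trials)
]
observed = statistics.pvariance(sample_means)
predicted = sigma ** 2 / n          # Equation 7.14: sigma^2 / n, ~0.78

print(round(observed, 2), round(predicted, 2))
```

The observed variance of the sample means should land close to σ²/n, which is far smaller than the population variance of roughly 39.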
The reason there is a difference in the TSM equation is that there is a difference in the underlying functions which map the input to the output. The original function (Equation 7.1) mapped two measurements to an averaged measurement. Hence, the variance used in the TSM was the variance of each measurement. However, the new functions (Equations 7.10 and 7.11) mapped two mean values to a new mean value. Hence, the variance used in the TSM is the variance of each mean. It is helpful to re-write the TSM equation for variance and express it as a function, not of the mean variances (i.e., the standard errors), but of the measurement variances.

σ_new² = N [ Σ_i (∂f/∂m̄_i)² (σ_i²/n_i) + Σ_i (∂f/∂σ_i²)² Var(σ_i²) ]    (7.15)

Where:
σ_new² is the variance in the new measurement
N is the number of values of new measurements which are combined to generate the mean of the new measurements
σ_i² is the variance in the ith measurement
n_i is the number of samples used to generate the ith mean

In general, we will make the following two assumptions. First, we assume that all n_i values are equal. That is, we use the same number of samples to generate m̄_1, m̄_2, …, m̄_n. Further, the same number of samples is used to generate the new mean. In other words, N = n_1 = n_2 = … = n_n. Second, we assume that the variance of the variance is zero. That is, we assume that the variance in each population is a constant and therefore Var(σ_i²) = 0. Using these two assumptions, the TSM variance equation reduces to the following.
σ_new² = Σ_i (∂f/∂m̄_i)² σ_i²    (7.16)

It should be noted that Equation 7.16 is not specific to the DVR method. It is the equation which results from applying the TSM to any function which uses the means and variances as inputs and results in a new mean as an output. To obtain this simplified version, we only needed to make two assumptions: that the variances are constants (i.e., they are not random variables with their own variances), and that all mean values have been generated using the same number of samples. Thus, we can use this equation for both the averaging and the DVR example. For the case in which we are using an average value, we obtain the following equation for the new variance.

σ_avg² = (∂f/∂m̄_A)² σ_A² + (∂f/∂m̄_B)² σ_B²    (7.17)

For the case in which we are using DVR, we obtain the following equation for the new variance.

σ_DVR² = (∂m̂_A/∂m̄_A)² σ_A² + (∂m̂_A/∂m̄_B)² σ_B²    (7.18)

This results in the following equations for the variances for the averaging and DVR methods.

σ_avg² = (1/2)² σ_A² + (1/2)² σ_B²    (7.19)

σ_DVR² = (σ_B²/(σ_A² + σ_B²))² σ_A² + (σ_A²/(σ_A² + σ_B²))² σ_B² = σ_A² σ_B² / (σ_A² + σ_B²)    (7.20)

Using these new variances, we can update the table with the new values from the DVR method.
The completed data is given in Table 8.
Table 8: Example pipe flow rates - with averaged flow rate and DVR

Flow Meter   | mean flow rate (kg/sec) | 95% tolerance interval (kg/sec) | standard deviation (kg/sec) | variance (kg²/sec²)
Flow Meter A | 245.00                  | 12.25                           | 6.25                        | 39.06
Flow Meter B | 250.00                  | 12.50                           | 6.38                        | 40.67
Averaging    | 247.50                  | 8.75                            | 4.46                        | 19.93
DVR Model    | 247.45                  | 8.75                            | 4.46                        | 19.92

7.6. Example Conclusion

This example demonstrates how DVR can be used to reduce uncertainties. For this example, when compared with the averaging approach, the use of DVR resulted in a very similar mean value (247.5 for averaging vs. 247.45 for DVR) and a very similar variance (19.93 for averaging vs. 19.92 for DVR). However, the benefit of using the DVR methodology over averaging can be seen by considering sensitivities of this case. For example, consider the case where the standard deviations of flow meters A and B are drastically different, as given in Table 9.
Table 9: Example pipe flow rates with different variances - with averaged flow rate and DVR

Flow Meter   | mean flow rate (kg/sec) | 95% tolerance interval (kg/sec) | standard deviation (kg/sec) | variance (kg²/sec²)
Flow Meter A | 245.00                  | 50.00                           | 25.51                       | 650
Flow Meter B | 250.00                  | 12.50                           | 6.38                        | 40.67
Averaging    | 247.50                  | 25.76                           | 13.14                       | 172.7
DVR Model    | 249.70                  | 12.13                           | 6.19                        | 38.28

While the estimate of the mean flow from the averaging model did not change, the estimate of the mean flow from the DVR model shifted towards flow meter B, as that was the flow meter with the dramatically lower uncertainty. One of the advantages of using a DVR model instead of averaging is also demonstrated in this example. Notice that the variance of the averaging model is larger than the variance of flow meter B. Thus, for this example, it would be better to ignore the results from the averaging model and from flow meter A and use only the measurement from flow meter B. However, the variance from the DVR model will always be calculated to be lower than the variances of the individual flow meters due to the equations used in calculating the variance. Thus, the DVR model results in a variance which is slightly lower than the variance of flow meter B.
The major advantage of using a DVR model instead of averaging is that, through constraints, it can make use of other relationships between instrumentation-based process parameter measurements besides just redundant measurements.
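The two-meter DVR result can be wrapped in a small helper (our own sketch, based on Equations 7.11 and 7.20 above) that reproduces the Table 8 and Table 9 rows and shows the reconciled variance landing below the smaller individual variance:

```python
def dvr_two_meters(mean_a, var_a, mean_b, var_b):
    """Reconciled mean and variance for two redundant flow meters."""
    w_a, w_b = 1.0 / var_a, 1.0 / var_b      # inverse-variance weights
    mean_r = (w_a * mean_a + w_b * mean_b) / (w_a + w_b)   # Equation 7.11
    var_r = 1.0 / (w_a + w_b)                # Equation 7.20, harmonic form
    return mean_r, var_r

# Base case (Table 8) and the different-variance sensitivity case (Table 9)
base = dvr_two_meters(245.00, 39.06, 250.00, 40.67)
sens = dvr_two_meters(245.00, 650.0, 250.00, 40.67)

print([round(v, 2) for v in base])   # roughly [247.45, 19.92]
print([round(v, 2) for v in sens])   # mean pulled toward meter B, variance ~38.28

# The reconciled variance is always below both individual variances
assert base[1] < min(39.06, 40.67) and sens[1] < min(650.0, 40.67)
```

Because var_r is the harmonic combination var_a·var_b/(var_a + var_b), it is strictly less than either input variance, which is exactly the property discussed above.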
7.7. Sensitivity Study - Means with Different Numbers of Samples

To demonstrate the impact of different numbers of measurement data samples used to calculate the mean, the NRC staff analyzed the example problem assuming that 50 samples were used to obtain the mean of flow meter A and 100 samples were used to obtain the mean of flow meter B. The staff further assumed that these samples were taken over the same time interval and that the means and variances given in Table 6 apply.
If we wish to use the averaging approach, we can no longer use Equation 7.1, because we will have twice as many samples from flow meter B as from flow meter A. However, we can use Equation 7.10, as this equation relies only on the calculated means and not the individual measured values. However, because the same number of samples was not used for both measurements, we cannot use Equation 7.17 to determine the uncertainty, but must use Equation 7.13. Recognizing that the σ_m̄_i² terms in the equation can be substituted with σ_i²/n_i, we can write the final form of the uncertainty equation as the following.
σ_m̂² = (∂f/∂m̄_A)² σ_A²/n_A + (∂f/∂m̄_B)² σ_B²/n_B    (7.21)

Using the function given in Equation 7.10, we can calculate the magnitude of the partial derivatives in Equation 7.21 to obtain the following:

σ_m̂² = (1/2)² σ_A²/n_A + (1/2)² σ_B²/n_B    (7.22)

Next, we can substitute the values of the known variables.

σ_m̂² = (1/2)² (39.06/50) + (1/2)² (40.67/100) = 0.2970    (7.23)

The variance calculated in 7.23 is the variance in the mean of the reconciled value. However, we do not want the variance in the mean, but the variance in the reconciled value. This raises the question of what value we should assume for N. Table 10 demonstrates that the resulting variance is directly proportional to the assumed value of N.
Table 10: Reconciled variances as a function of N with 50 and 100 samples

N   | reconciled variance (kg²/sec²)
50  | 14.85
75  | 22.275
100 | 29.70

For comparison, the variance for the equal-sample case (Table 7) is 19.93. If the numbers of samples are very large compared to the difference between them, the impact on the variance is small. For example, we will use the same problem, but assume that flow meter A has 1000 samples and flow meter B has 1100 samples. Again, we are not sure which value to use for N, but we can reasonably assume that it should be between 1000 and 1100, and there is not much difference in the variances, as demonstrated in Table 11.
Table 11: Reconciled variances as a function of N with 1000 and 1100 samples

N    | reconciled variance (kg²/sec²)
1000 | 19.01
1050 | 19.96
1100 | 20.91

For comparison, the variance for the equal-sample case (Table 7) is 19.93.
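The arithmetic behind Equations 7.21 through 7.23 and Tables 10 and 11 can be checked with a short script (our own illustration, using the Table 6 variances):

```python
# Variance of the averaged mean when meters A and B have different sample
# counts (Equation 7.21 with both partial derivatives equal to 1/2), then
# scaled back to a measurement variance by an assumed N (Equation 7.14).
var_a, var_b = 39.06, 40.67

def var_of_mean(n_a, n_b):
    return 0.25 * var_a / n_a + 0.25 * var_b / n_b

print(round(var_of_mean(50, 100), 4))       # ~0.2970, as in Equation 7.23

for n in (50, 75, 100):                     # Table 10 cases
    print(n, round(n * var_of_mean(50, 100), 2))

for n in (1000, 1050, 1100):                # Table 11 cases
    print(n, round(n * var_of_mean(1000, 1100), 2))
```

As the tables show, the choice of N matters greatly when the two sample counts are small and far apart, and barely at all when both counts are large relative to their difference.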
1000 19.01 1050 19.96 1100 20.91 19.93 Principal Contributors:
J.S. Kaizer D.L. Rahn Date: August 11, 2023