RS-05-062, Additional Information Supporting the Request for License Amendment Related to 24-Month Fuel Cycle

ML051470192
Person / Time
Site: Clinton
Issue date: 05/23/2005
From: Jury K, AmerGen Energy Co
To: Document Control Desk, Office of Nuclear Reactor Regulation
References: GL-91-004, RS-05-062
Download: ML051470192 (231)


Text

AmerGen Energy Company, LLC
An Exelon Company
4300 Winfield Road
Warrenville, IL 60555
www.exeloncorp.com

RS-05-062
10 CFR 50.90

May 23, 2005

U. S. Nuclear Regulatory Commission
ATTN: Document Control Desk
Washington, DC 20555-0001

Clinton Power Station, Unit 1
Facility Operating License No. NPF-62
NRC Docket No. 50-461

Subject:

Additional Information Supporting the Request for License Amendment Related to 24-Month Fuel Cycle

Reference:

Letter from Keith R. Jury (AmerGen Energy Company, LLC) to U. S. NRC, "Request for Amendment Related to Technical Specification Surveillance Requirement Frequencies to Support 24-Month Fuel Cycles in Accordance with the Guidance of Generic Letter 91-04, 'Changes in Technical Specification Surveillance Intervals to Accommodate a 24-Month Fuel Cycle'," dated May 20, 2004

In the referenced letter, AmerGen Energy Company, LLC (AmerGen) submitted a request for a change to Appendix A, Technical Specifications (TS), of Facility Operating License No. NPF-62 for Clinton Power Station (CPS), Unit 1. Specifically, the change addresses certain TS Surveillance Requirement (SR) frequencies that are specified as "18 months" by revising them to "24 months" in accordance with the guidance of Generic Letter (GL) 91-04, "Changes in Technical Specification Surveillance Intervals to Accommodate a 24-Month Fuel Cycle."

Additional revisions to the CPS TS were proposed to support the change to a 24-month fuel cycle.

The NRC, in support of their review of the referenced amendment request, has requested additional information. This request was provided electronically from Douglas V. Pickett (U. S. NRC) to Timothy A. Byam (AmerGen) on November 15, 2004. The attachment to this letter provides the requested information.

There are no regulatory commitments associated with this letter.


AmerGen has reviewed the information supporting a finding of no significant hazards consideration that was previously provided to the NRC in the referenced letter. The supplemental information provided in this submittal does not affect the bases for concluding that the proposed license amendment does not involve a significant hazards consideration.

If you have any questions concerning this letter, please contact Mr. Timothy A. Byam at (630) 657-2804.

I declare under penalty of perjury that the foregoing is true and correct. Executed on the 23rd day of May 2005.

Respectfully,

Keith R. Jury
Director - Licensing and Regulatory Affairs
AmerGen Energy Company, LLC

Attachment:

Additional Information Supporting the Request for License Amendment Related to 24-Month Fuel Cycle

ATTACHMENT

Additional Information Supporting the Request for License Amendment Related to 24-Month Fuel Cycle

Instrumentation and Controls Section

Questions 1 through 11 refer to the page numbers of the licensee's application dated May 20, 2004 (ADAMS Accession No. ML041460522).

I&C Request 1:

On page 21, Attachment 1, AmerGen states that the Clinton Power Station (CPS) setpoint calculations were based on Instrument Society of America (ISA) Standard 67.04, Part II and that "Method 3" was not utilized. The staff is aware that both ISA Methods 2 and 3 have been used at CPS for setpoint calculations in another TS amendment request. Provide details of the setpoint calculation methodology used in this amendment request including some typical sample calculations. Also, please confirm that this amendment request only incorporates ISA Method 2.

I&C Response 1:

The Clinton Power Station (CPS) 24-Month Cycle License Amendment Request (Reference 1) states that Instrument Society of America (ISA) RP67.04, Part II (Reference 2), Method 3 was not utilized in performing the revised setpoint calculations that support the revised allowable values. The revised allowable values proposed in the license application are all supported by Reference 2 Method 1 calculations or Channel Error (CE) calculations. The CE calculations are applied for those setpoints that do not have a safety analysis analytical limit, as described in CPS Nuclear Engineering Standard CI-01.00, Revision 3, "Instrument Setpoint Calculation Methodology," Section 4.5.3. This standard is provided as Appendix A to this attachment. For these CE calculations, all applicable uncertainty is placed between the allowable value (AV) and the nominal trip setpoint based on the Square Root Sum of Squares (SRSS).
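For illustration only, a minimal sketch of the SRSS combination described above is shown below; the device uncertainty terms and values are hypothetical and are not taken from a CPS calculation.

```python
import math

def srss(*terms):
    """Combine independent random uncertainty terms by Square Root Sum of Squares."""
    return math.sqrt(sum(t ** 2 for t in terms))

# Hypothetical channel uncertainties, in percent of span.
transmitter_accuracy = 0.25
trip_unit_accuracy = 0.20
calibration_uncertainty = 0.10
drift_30_month = 0.35

channel_error = srss(transmitter_accuracy, trip_unit_accuracy,
                     calibration_uncertainty, drift_30_month)

# For a setpoint with no analytical limit, the channel error is placed between
# the allowable value (AV) and the nominal trip setpoint (NTSP).
ntsp = 85.0                  # hypothetical NTSP, percent of span
av = ntsp + channel_error    # increasing-process example
print(f"Channel error (SRSS) = {channel_error:.3f} % span, AV = {av:.3f} % span")
```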

Regardless of the calculation method used, after the as-found readings are taken, the setpoint is always calibrated to be within the As-Left Tolerance (ALT) limits. Restoration of the setpoint to within the ALT provides adequate margin to the AV to account for 30 months of drift in addition to other channel uncertainties.

There are, however, two existing Method 3 calculations that support current allowable values. As part of the AmerGen review of these calculations, it was determined that changes to the calculated AVs were not necessary to support the change in calibration frequency to 24 months. In addition to the two Method 3 calculations, proposed changes in calibration frequency are also supported by setpoint calculations performed in accordance with Reference 2 Method 1 and Method 2, and General Electric (GE) "Method 2" as defined in NEDC-32889P (Reference 3).

As a sample calculation, a copy of Method 3 calculation IP-C-0059, "Setpoint Calculation for RPV Level 3 and Level 8 (NR); Transmitter 1B21N095A, B," is provided as Appendix B to this attachment. This calculation supports the AV for Technical Specification (TS) Section 3.3.5.1, "Emergency Core Cooling System (ECCS) Instrumentation," Table 3.3.5.1-1, Function 4.d, "Reactor Vessel Water Level - Low, Level 3 (Confirmatory)," Table 3.3.5.1-1, Function 5.d, "Reactor Vessel Water Level - Low, Level 3 (Confirmatory)," and TS Section 3.3.5.2, "Reactor Core Isolation Cooling (RCIC) System Instrumentation," Table 3.3.5.2-1, Function 2, "Reactor Vessel Water Level - High, Level 8." In addition, a copy of Method 1 calculation IP-C-0067, "Setpoint Calculation for Main Steam Line Pressure - Low; Transmitters 1B21N076A, B, C, D," is provided as Appendix C to this attachment. This calculation supports the proposed new AV for TS Section 3.3.6.1, "Primary Containment and Drywell Isolation Instrumentation," Table 3.3.6.1-1, Function 1.b, "Main Steam Line Pressure - Low."

I&C Request 2:

On page 17, Attachment 1, Outlying and Pooling Requirements, AmerGen proposes to limit the number of outliers excluded from any dataset to one datum. This excluded datum is above and beyond any and all data that are excluded according to the seven (7) criteria listed in pages 16 and 17 of Attachment 1. The practice of excluding a datum on statistical grounds without a plausible explanation, however, may be unwarranted.

The statistical test for outliers serves to identify a potential outlier and, as such, the offending datum is investigated for cause. The seven criteria listed in pages 16 - 17 appear to have covered all plausible causes. Exclusion of an outlier, therefore, robs the data of real information and makes any measure of variability smaller than it has to be.

Identify all (if any) outliers that surfaced in the CPS study and their disposition.

I&C Response 2:

An outlier is a data point that is significantly different from the rest of the sample. The presence of an outlier in a sample of instrument data will result in the calculation of a larger sample standard deviation. In the small sample sizes available for CPS, outlier identification is more likely and its contribution to the calculated standard deviation will be more pronounced.

The resulting drift calculations after removal of the outliers are anticipated to more accurately reflect actual device performance. Inclusion of data that is significantly different than the general data population will result in applying a broader range of acceptable as-found instrumentation settings. In this case, marginally performing instruments, or instruments that should be more closely evaluated for corrective action, may be overlooked. By eliminating a single outlier, the resultant more restrictive As-Found / As-Left (AFAL) acceptance criteria will facilitate identification and allow the ongoing trend program to detect this condition and appropriately initiate design action, maintenance action, or both to address the problem. According to American Society for Testing and Materials (ASTM) Standard E 178-80, "Standard Practice for Dealing With Outlying Observations," the Critical-T Test is the best one to use to identify a single outlier.
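For illustration only, the following is a minimal sketch of a single-outlier extreme studentized deviate test of the kind described in ASTM E 178 (a Grubbs-type criterion); the drift values are made up, and the critical-value expression shown is the standard t-distribution-based formula, not an excerpt of the CPS work plan.

```python
import numpy as np
from scipy import stats

def critical_t(n, alpha=0.05):
    """Two-sided critical value for a single extreme studentized deviate
    (Grubbs/ASTM E178 style single-outlier test) at significance level alpha."""
    t = stats.t.ppf(1 - alpha / (2 * n), n - 2)
    return (n - 1) / np.sqrt(n) * np.sqrt(t ** 2 / (n - 2 + t ** 2))

def single_outlier(sample, alpha=0.05):
    """Return (index, T value, critical value, flagged) for the most extreme point."""
    x = np.asarray(sample, dtype=float)
    n = len(x)
    deviations = np.abs(x - x.mean())
    i = int(deviations.argmax())
    t_value = deviations[i] / x.std(ddof=1)
    crit = critical_t(n, alpha)
    return i, t_value, crit, t_value > crit

# Hypothetical drift data (percent of span) with one suspect value.
drift = [0.02, -0.01, 0.03, 0.00, -0.02, 0.01, 0.04, -0.03, 0.02, 0.35]
idx, t_val, crit, flagged = single_outlier(drift)
print(f"Point {idx}: T = {t_val:.2f}, critical T = {crit:.2f}, outlier = {flagged}")
```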

Beyond the explicit seven criteria specified in Attachment 1 to Reference 1, where investigation has justified removal of data, data may be corrupt (i.e., not reflecting actual performance) for a number of unverifiable causes (e.g., personnel error). As such, the allowance for exclusion of a single outlier, after addressing the seven criteria, attempts to focus the data on what should be the expected performance of the instrumentation and results in triggering future evaluations at more conservative levels.

Table 1 below summarizes the instrument drift groups with a single outlier removed beyond the data that were excluded according to the seven criteria listed in Attachment 1 to Reference 1, pages 16 and 17.


Table 1 - Instrument Drift Groups with Single Outlier Removed

Drift Analysis Group | Number of Valid Data Points (after Outlier Removal) | Critical T Value @ 2.5% Significance Level | T Value for Outlier | No. of Outliers
Group 8A | 45 | 3.04 | 3.25 | 1
Group 13 | 89 | 3.28 | 8.22 | 1
Group 15 | 27 | 2.71 | 2.76 | 1
Group 16 | 47 | 3.04 | 4.40 | 1
Group 17 | 28 | 2.71 | 3.10 | 1
Group 18 | 27 | 2.71 | 3.36 | 1
Group 19 | 51 | 3.13 | 3.29 | 1
Group 20 | 26 | 2.71 | 4.99 | 1
Group 24A | 29 | 2.91 | 3.38 | 1
Group 32 | 255 | 4.00 | 14.43 | 1
Group 35 | 26 | 2.71 | 4.64 | 1
Group 40 | 51 | 3.13 | 3.21 | 1
Group 41 | 27 | 2.71 | 3.15 | 1

I&C Request 3:

On page 18, Normality, AmerGen states that "The Chi-Square Goodness of Fit test or either the W or D Prime test is used, depending on.... " However, the Chi-Square test is known for having low sensitivity for testing goodness of fit, especially for small to moderate sample sizes. Additionally, the result of the test of fit is a function of the binning scheme used. For these reasons, the Chi-Squared test should not be used to test normality. Furthermore, when more than one test is available, the testing procedure must be declared in advance of the data collection and not left up to the engineer.

Identify instances where the Chi-squared test was used (either by itself or in combination with other tests) to test normality, and the results of such tests.

I&C Response 3:

The CPS drift analysis work plan requires the following tests for normality to be performed (as applicable to sample size):

  • Chi-Squared
  • D-Prime (D') for moderate to large sample sizes
  • W Test, for sample sizes less than 50
  • Coverage Analysis Histogram
  • Probability Plot

None of the above tests was used alone to confirm normality.
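For illustration only, a minimal sketch of the W (Shapiro-Wilk) and Chi-Squared checks is shown below using SciPy; the data and the ten-bin scheme are assumptions for the example, not the binning used in the CPS drift analysis.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
drift = rng.normal(loc=0.0, scale=0.05, size=27)   # hypothetical drift sample

# W test (Shapiro-Wilk), appropriate for small samples (n < 50).
w_stat, w_p = stats.shapiro(drift)

# Chi-Squared goodness-of-fit against a fitted normal, using an assumed
# 10-bin scheme (9 degrees of freedom before fitting adjustments).
edges = np.quantile(drift, np.linspace(0, 1, 11))
observed, _ = np.histogram(drift, bins=edges)
cdf = stats.norm(loc=drift.mean(), scale=drift.std(ddof=1)).cdf(edges)
expected = np.diff(cdf) * len(drift)
chi2 = ((observed - expected) ** 2 / expected).sum()

print(f"W = {w_stat:.3f} (p = {w_p:.3f}), Chi-squared = {chi2:.3f}")
```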

Table 2 below provides a listing of the drift analysis groups using the Chi-Squared test to show normality and what other tests were performed to confirm that normality.


Table 2 - Drift Analysis Groups Using Chi-Squared Test

Drift Analysis Group | Number of Valid Data Points (N) | Degrees of Freedom | Chi-Squared Computation | Chi-Squared Result | Confirming Test(s)
Group 39A | 24 | 9 | 5.079 | Satisfied | W and Coverage Analysis Histogram
Group 15 | 27 | 9 | 5.528 | Satisfied | W and Coverage Analysis Histogram
Group 20 | 26 | 9 | 5.566 | Satisfied | W and Coverage Analysis Histogram
Group 14 | 26 | 9 | 6.294 | Satisfied | W and Coverage Analysis Histogram
Group 18 | 27 | 9 | 7.506 | Satisfied | Coverage Analysis Histogram
Group 23 | 30 | 9 | 7.920 | Satisfied | W and Coverage Analysis Histogram
Group 40 | 51 | 9 | 7.925 | Satisfied | D' and Coverage Analysis Histogram
Group 39 | 67 | 9 | 8.336 | Satisfied | D' and Coverage Analysis Histogram
Group 17 | 28 | 9 | 8.397 | Satisfied | W and Coverage Analysis Histogram and Normal Probability Plot

I&C Request 4:

Page 19, Time Dependency. Justify the use of R-squared thresholds of 0.3 and 0.1.

I&C Response 4:

The R-squared value thresholds of 0.3 and 0.1 are provided in Exelon Generation Company, LLC (Exelon) Nuclear Engineering Standard NES-EIC-20.04 (Reference 4), Appendix J, pages J17-18, which was previously reviewed by the NRC as part of their review of the LaSalle 24-month cycle submittal (Reference 5). The R-squared test is not intended to be supportable independently, but as one diverse check among several. As described in Reference 1, Attachment 1, page 19, the conclusion of the Time Dependency evaluation is determined by the collective evaluation of the results of the Scatter Plot, Binning Analysis, Drift Regression, and Absolute Value of the Drift Regression analyses.
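For context, the R-squared figure used in this check is the coefficient of determination from an ordinary least-squares regression of drift against the time between calibrations; a minimal sketch with made-up data follows.

```python
import numpy as np
from scipy import stats

# Hypothetical (time between calibrations in days, observed drift in % span) pairs.
days = np.array([180, 200, 365, 400, 540, 560, 700, 720], dtype=float)
drift = np.array([0.01, -0.02, 0.03, 0.02, -0.01, 0.04, 0.02, 0.05])

result = stats.linregress(days, drift)
r_squared = result.rvalue ** 2

# A low R-squared (e.g., below a 0.1 or 0.3 screening threshold) indicates the
# regression explains little of the scatter, i.e., weak time dependence; the
# conclusion is drawn from this check together with the other evaluations.
print(f"slope = {result.slope:.2e} %/day, R^2 = {r_squared:.3f}")
```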

I&C Request 5:

Pages 19 - 20, Tolerance Interval and Drift Characterization. Describe, or give formula for the "extrapolated standard deviation." Please indicate how the extrapolated standard deviation is used for the extrapolated prediction.

I&C Response 5:

The phrase "extrapolated standard deviation" is from page 20 of Attachment 1 to Reference 1 and is referring to how the time dependent random drift is established for 915 days. The extrapolated standard deviation is a linear extrapolation developed from the slope and intercept of the plotted bin standard deviations from the regression analysis. The equation for extrapolated standard deviation (S) is as follows.


S = m × t + b

Where:
m is the slope of the drift line
b is the intercept with the y-axis
t is 915 days

The time dependent random drift is then calculated by the following formula:

Time Dependent Random Drift = ± K × N × S

Where:
K is the required confidence factor from the K-Values Worksheet
N is the normality adjustment factor from the Histogram Adjustment Worksheet
S is the extrapolated standard deviation

In summary, the extrapolated standard deviation is used to determine the time dependent random drift. Multiplying the extrapolated standard deviation by the confidence factor (based on the sample size and 95/95 confidence) and the normality adjustment factor determines the time dependent random drift.
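Restated as a worked example, with hypothetical slope, intercept, and adjustment factors (these are placeholders, not values from a CPS calculation):

```python
# Extrapolated standard deviation at the 915-day surveillance interval:
#   S = m*t + b, where m and b come from the regression of the binned
#   standard deviations against time.
m = 2.0e-5       # hypothetical slope, % span per day
b = 0.030        # hypothetical intercept, % span
t = 915          # days (24 months plus the 25% scheduling allowance)
S = m * t + b

# Time Dependent Random Drift = +/- K * N * S
K = 2.26         # hypothetical 95/95 confidence factor for the sample size
N = 1.05         # hypothetical normality adjustment factor
time_dependent_random_drift = K * N * S

print(f"S = {S:.4f} % span, drift = +/- {time_dependent_random_drift:.4f} % span")
```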

I&C Request 6:

Page 17, Attachment 1. Clarify the statement in the first paragraph, "These changes were only eliminated where insufficient as-found or as-left data was available."

I&C Response 6:

The discussion provided on page 17 of Attachment 1 to Reference 1 is a continuation of the "Data Collection and Conditioning" section that describes the methodology used to adjust or eliminate data points during the data conditioning process.

The first paragraph on page 17 describes how scaling or setpoint changes can be used as a basis for eliminating a data point. When scaling or setpoint changes are incorporated into a revision of the calibration procedure, and that procedure is performed at the subsequent calibration, the initial as-found data reflects a different test point than the test point data available from the previous as-left. In instances where the as-found data did not correlate to the same test point as the previous as-left, the data was eliminated. This is the intent of the statement that changes were eliminated only where "insufficient as-found or as-left data was available."

I&C Request 7:

Page 18, Attachment 1. Clarify the statement in the second paragraph, "For the instances where statistical analysis could not be performed, CPS setpoint methodology assumptions for drift values are utilized to support 30 month (i.e. 24 months plus 25% scheduling allowance of TS SR 3.0.2) calibration intervals." Provide the basis for acceptability of the assumptions.


I&C Response 7:

In the absence of a statistical analysis of drift, the CPS setpoint methodology (i.e., Appendix A to this attachment) requires the use of vendor supplied drift data in the setpoint calculation. In the absence of vendor supplied drift data, the standard conservatively assumes that drift will occur; however, it is not required to be modeled as time dependent. The standard provides two alternatives for the drift value. The first alternative is the assumption that the drift is equal to the vendor stated accuracy for the device involved. A second alternative provided in the standard is to use 0.5% of span for electrical devices and 1.0% of span for mechanical devices in the absence of vendor data. Selection of these drift values is the result of engineering review of typical Reference Accuracy and industry practices for these device types. The setpoint drift value is based on the SRSS of the individual device drift values (e.g., vendor accuracy for each device in the loop).
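For illustration only, the sketch below combines per-device drift allowances by SRSS in the order of preference described above; the loop composition and numbers are hypothetical.

```python
import math

# Fallback drift allowances quoted above when no vendor or statistical data
# exist: 0.5% of span for electrical devices, 1.0% of span for mechanical devices.
FALLBACK_DRIFT = {"electrical": 0.5, "mechanical": 1.0}   # percent of span

def device_drift(vendor_drift=None, vendor_accuracy=None, device_type="electrical"):
    """Pick the drift allowance for one device, in the order described above."""
    if vendor_drift is not None:
        return vendor_drift
    if vendor_accuracy is not None:      # first alternative: drift = vendor accuracy
        return vendor_accuracy
    return FALLBACK_DRIFT[device_type]   # second alternative: generic span allowance

# Hypothetical loop: transmitter with vendor drift, trip unit with accuracy only,
# and a mechanical switch with neither.
loop_devices = [
    device_drift(vendor_drift=0.25),
    device_drift(vendor_accuracy=0.20),
    device_drift(device_type="mechanical"),
]
loop_drift = math.sqrt(sum(d ** 2 for d in loop_devices))
print(f"Loop drift (SRSS) = {loop_drift:.2f} % span")
```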

In order to confirm adequate drift modeling, whether by one of these alternatives or by statistical analysis of historical performance values, CPS is committed to performing drift trending as documented in Attachment 4 to Reference 1 (i.e., commitment 2). This program requires a condition report to be written for any instrument found out of tolerance (OOT) (i.e., outside the As-Found Tolerance (AFT)). The AFT includes the assigned drift, accuracy, and calibration uncertainties. During calibration, as-found readings are taken. If the readings are found outside the AFT, a condition report is written. If the readings are also beyond the AV, the instrument is declared inoperable. In either case, the calibration is always reset to within the As-Left Tolerance (ALT) limits. The condition report documents the occurrence and provides for drift performance trending, including proper setpoint modeling and equipment performance.
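For illustration only, the as-found disposition described above can be summarized in a short decision sketch; the tolerance values are hypothetical and this is not a plant procedure.

```python
def disposition_as_found(as_found, setpoint, aft, av, increasing_process=True):
    """Disposition an as-found trip setting per the logic described above.

    aft: as-found tolerance about the setpoint (absolute units).
    av:  allowable value in the direction of the analytical limit.
    """
    actions = []
    if abs(as_found - setpoint) > aft:
        actions.append("write condition report (out of tolerance)")
    beyond_av = as_found > av if increasing_process else as_found < av
    if beyond_av:
        actions.append("declare channel inoperable")
    actions.append("reset setting to within the as-left tolerance")
    return actions

# Hypothetical increasing-process trip: setpoint 85.0, AFT 0.5, AV 86.0 (% span).
print(disposition_as_found(as_found=85.8, setpoint=85.0, aft=0.5, av=86.0))
```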

I&C Request 8:

Page 6, Attachment 5, fourth paragraph. Two failures were identified for Electroswitch 20K. Provide justification how 95/95 percent confidence level was achieved.

I&C Response 8:

This request indicates that two Electroswitch 20K switch failures were identified during the review of CPS surveillance history of the logic system components. However, as documented on page 6 of Attachment 5 to Reference 1, there was one Electroswitch 20K failure and one GE Type CR2940 switch failure. The conclusion of this surveillance history evaluation was that since the switch types were unique and only two failures were identified in a large population of control switches over the evaluation period, these failures were not indicative of a repetitive or time based failure problem.

The failures addressed in this request were associated with TS Surveillance Requirement (SR) 3.3.3.2.2, which requires verification that each required control circuit and transfer switch in the remote shutdown panel is capable of performing its intended function. This SR is not a calibration surveillance. The guidance provided in Regulatory Guide (RG) 1.105, "Setpoints for Safety-Related Instrumentation," Revision 3, indicates that the 95/95 percent confidence level is the criterion for combining uncertainties in determining a trip setpoint and its AV to assure that there is a 95% probability that the constructed limits contain 95% of the population of interest. Since SR 3.3.3.2.2 is not a calibration surveillance, contains no trip setpoints or AVs, and has no measured uncertainties to combine, the 95/95 percent confidence level is not applicable to the components discussed in this request.

I&C Request 9:

Page 32, Attachment 5, last paragraph. Three failures were identified. Provide justification how 95/95 percent confidence level was achieved.

I&C Response 9:

The failures addressed in this request were associated with TS SR 3.3.3.2.3 which requires performance of a channel calibration for each required instrumentation channel.

As stated on page 32 of Attachment 5 to Reference 1, no allowable value is applicable to these functions and a separate drift evaluation was not performed for the Remote Shutdown System instrument channels based on the design function and equipment history. The guidance provided in RG 1.105 indicates that the 95/95 percent confidence level is the criterion for combining uncertainties in determining a trip setpoint and its AV to assure that there is a 95% probability that the constructed limits contain 95% of the population of interest. Since SR 3.3.3.2.3 contains no trip setpoints or AVs, and has no measured uncertainties to combine, the 95/95 percent confidence level is not applicable to the failures discussed in this request.

I&C Request 10:

Page 33, Attachment 5, fourth paragraph. Two failures were identified. Provide justification how 95/95 percent confidence level was achieved.

I&C Response 10:

The failures addressed in this request were associated with TS SR 3.3.4.1.2 which requires performance of a channel calibration for each required instrumentation channel.

As stated on page 33 of Attachment 5 to Reference 1, drift evaluations were not performed for the turbine stop valve limit switches since they are mechanical devices that require mechanical adjustment only. Drift is not applicable to these devices. The two identified failures were the only limit switch failures that occurred during a review period from 1992 to 2002. Only one of the two failures was corrected by adjusting the setting; the second was strictly a mechanical failure. In lieu of attempting to analyze this single failure as reflective of a statistical uncertainty to be evaluated against the 95/95 criterion, engineering judgment is used to apply margin from the setpoint to the AV and from the AV to the AL. The limit switches are part of the Maintenance Rule condition monitoring program, which tracks the devices for failure trends. As such, any identified adverse trend requires an action plan to correct the deficiency. In addition, to provide assurance that mechanical failures have not occurred, the switches are functionally tested on a quarterly basis (i.e., SR 3.3.4.1.1) to verify operation.

I&C Request 11:

Page J16, Attachment 6, second paragraph. Clarify the statement, "The 46 to 135 day and 46 to 135 day bins.......


I&C Response 11:

Attachment 6 of Reference 1 provides Appendix J to Reference 4. Page J16 shows an example of a Time Dependence Evaluation. In the example, the first table indicates the data count and percent of total count for each Bin. As noted in this request, the paragraph below the table states "The 46 to 135 day and 46 to 135 day bins are thrown out due to less than 5 data points and..." This is a typographical error.

The statement should read "The 46 to 135 day and 651 to 800 day bins are thrown out due to less than 5 data points and..." Reference 4 will be corrected and CPS has written an Issue Report to track resolution of the error in this standard.

Questions 12 and 13 refer to the Clinton Power Station Instrument Setpoint Calculation Methodology included in the licensee's letter dated April 16, 2004 (ADAMS Accession No. ML041120059).

I&C Request 12:

Appendix L. Indicate the setpoint calculations for which the graded approach to Categories 2, 3, and 4 of this Appendix has been used and provide sample calculations, indicating the confidence level achievable.

I&C Response 12:

All the calculations supporting the 24-month cycle amendment request have been prepared to the same level of rigor. No attempt has been made to establish whether they are category 1 or 2 because they both require the highest level of rigor.

Appendix L to the CPS setpoint methodology provides the CPS graded approach to uncertainty analysis (see Appendix A to this attachment). Graded approaches are based on the fact that all the rigor and conservatism established in Reference 2 may not be warranted for all setpoints in a nuclear power plant. In accordance with Reference 2, a nuclear plant licensee may establish a multilevel classification scheme by documenting the rationale used to establish the classification. Implementation of a graded approach to setpoints requires the user to identify how critically important each setpoint is.

Therefore, a graded approach, with classification for setpoints, will help ensure proper maintenance of safety grade nuclear instrumentation without compromising the safe and reliable operation of the plant.

I&C Request 13:

Appendix N. Has this Appendix been applied for any setpoint calculation? If yes, justify how 95/95 confidence level has been achieved and provide sample setpoint calculations.

I&C Response 13:

Appendix N to the CPS setpoint methodology (see Appendix A to this attachment) addresses the potential interaction of setpoints due to the uncertainty tolerances about the different setpoints. An example process would be the high and low level setpoints for a tank. None of the calculations supporting the proposed amendment request in Reference 1 needed to utilize Appendix N to the CPS setpoint methodology to assure the low likelihood of overlap. The setpoints in the calculations that contain two setpoints were not close enough to each other to require consideration of potential overlap.

Electrical Engineering Section

Electrical Request 1:

Surveillance Requirement (SR) 3.8.1.18, Diesel Generator (DG) load sequence timer calibration.

This SR requires each timer to be within +/- 10% of its design setpoint. Please provide the basis to demonstrate that the change in frequency from 18 months to 24 months does not require a closer tolerance for the as-left setpoint for the timer.

Electrical Response 1:

CPS TS SR 3.8.1.18 states "Verify the sequence time is within ± 10% of design for each load sequence timer." This SR does not require calibration of any instrument; therefore, the ± 10% value is not a calibration tolerance. The SR is performed as part of CPS procedures 9080.21, "Diesel Generator 1A - ECCS Integrated," and 9080.22, "Diesel Generator 1B - ECCS Integrated," rather than as a calibration procedure. The surveillance is currently performed on an 18-month frequency consistent with the recommendations of RG 1.108, "Periodic Testing of Diesel Generator Units Used as Onsite Electric Power Systems at Nuclear Power Plants." AmerGen has proposed in Reference 1 to revise the frequency for this surveillance from 18 months to 24 months consistent with the guidance in RG 1.9, "Selection, Design, Qualification, and Testing of Emergency Diesel Generator Units Used as Class 1E Onsite Electric Power Systems at Nuclear Power Plants," plant conditions required to perform the SR, and the expected fuel cycle length. Historically, there have been no failures of the timing sequence verification while performing this surveillance and while employing the current calibration intervals for the time delay devices involved. There are three types of time delays checked in the procedure: the Nuclear System Protection System (NSPS) circuit card timer (5 seconds), the Westinghouse TD-5 time delay relay (10 seconds), and the Agastat E7000 time delay relay (40 seconds). The calibration frequency of the NSPS circuit card timer is the only one that will be impacted by the new fuel cycle duration. This frequency will be increasing from 18 to 24 months. The as-left setting requirement during calibration of this timer is ± 1% of setpoint. Review of the calculation, which evaluates drift on this device, indicates that no change to this as-left value is required for this device when increasing the calibration interval from 18 to 24 months.
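For illustration only, the two acceptance checks noted above (sequence time within ± 10% of design, and timer as-left setting within ± 1% of setpoint) reduce to simple percentage comparisons; the measured values below are hypothetical.

```python
def within_percent(measured, nominal, percent):
    """True if measured is within +/- percent of the nominal value."""
    return abs(measured - nominal) <= abs(nominal) * percent / 100.0

# Hypothetical NSPS load sequence timer with a 5 second design sequence time.
print(within_percent(measured=5.3, nominal=5.0, percent=10))   # SR 3.8.1.18 check
print(within_percent(measured=5.04, nominal=5.0, percent=1))   # as-left calibration check
```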

Electrical Request 2:

SR 3.8.11.2, System functional test of the Static VAR compensator (SVC) protection subsystem.

Please identify the signals and components in the SVC protection subsystem whose function may be affected by increasing the test frequency from 18 to 24 months.

Describe what measures you plan to take to detect and compensate for any degraded performance between surveillance intervals. Please provide copies of drawings M01-1103-1 and E02-IAP03 describing the SVC.


Electrical Response 2:

TS SR 3.8.11.2 requires performance of a system functional test of each static VAR compensator (SVC) protection subsystem, including breaker actuation. This SR requires a functional test of the reserve auxiliary transformer (RAT) SVC and the emergency reserve auxiliary transformer (ERAT) SVC to ensure that each SVC protection subsystem will actuate to automatically open the associated SVC's main circuit breakers in response to signals associated with SVC failure modes that could potentially damage or degrade plant equipment. System function testing should thus include satisfactory operation of the associated relays and testing of the sensors for which failure modes would be undetected. The functional checks of the SVC protection subsystems are performed by procedures CPS 9384.01, "ERAT SVC Protective Relays Functional Test," and CPS 9384.02 "RAT SVC Protective Relays Functional Test."

These procedures identify the 18-month test frequency from the TS SR for performing the functional check. The 18-month frequency was selected to correspond with the CPS fuel cycle length. Performing the functional checks of these devices requires operating the breakers that isolate the SVC from the associated 4.16 kilovolt (kV) bus and therefore, require a plant outage for testing the RAT SVC protection devices. Testing the ERAT SVC protection devices does not require a plant outage, however, the ERAT SVC functional testing is performed on the same frequency as the RAT SVC for consistency, to conform to the fuel cycle length, and to allow analysis of all the SVC test data on the same basis for trending purposes.

The devices functionally tested as part of this SR are electronic protective relays monitoring the output of the SVC for changes in voltage, current, and harmonic content.

Since they are electronic relays, they are programmed rather than being adjusted by dial settings and movement of induction disks. Their function is to serve as the redundant protective system to the programmable high speed controller and isolate the SVC before the SVC output could negatively affect the voltage supplied to the safety related buses.

The inputs to these relays are from current transformers (CTs) and potential transformers (PTs) located at the SVC connection to the associated 4.16 kV bus. CTs and PTs are static devices with no adjustments and no expected change to their output ratio. Based on the types of devices tested as part of TS SR 3.8.11.2, there is no need to take additional actions to detect and compensate for any degraded performance between surveillance intervals as a result of the extended test frequency.

Based on clarification provided by the NRC during a February 3, 2005 teleconference, the SVC systems single line diagrams and protection single line diagrams for the RAT and ERAT SVCs are provided in Appendix D. The SVC system description is also provided as Appendix E to this attachment. This system description provides a description of the operation and function of the CPS SVC protection subsystem devices.

Electrical Request 3:

Table 3.3.8.1-1, Loss of Power Instrumentation, indicates a change in the loss of offsite power (LOOP) time delay from 10 seconds to 5 seconds. FSAR (Rev. 10), Section 8.3.1.1.2, Unit Class 1E A-C Power Systems, indicates (on page 8.3-7) that the starting time of the largest Class 1E motor is approximately 10 seconds when the offsite voltages are at their minimum expected value. It is our understanding that the 5 second delay corresponds to a complete loss of voltage (0 Volts). Please confirm that the decrease in the time delay for the LOOP trip to 5 seconds does not challenge the voltage-time trip characteristic of the LOOP relay by any motor starting at minimum expected voltage.

Electrical Response 3:

There were no changes to the setpoints for the loss of voltage relays. The operating times of the relays during Loss of Offsite Power (i.e., 0 bus volts) events or during voltage transients (i.e., most severe dip during motor starting) are unchanged.

Therefore, there is no change to the relay/bus/system response to motor starting transients as a result of changing the value listed in TS Table 3.3.8.1-1, "Loss of Power Instrumentation," Item 1.b, Loss of Voltage - Time Delay, from 10 to 5 seconds.

Electrical Request 4:

The TS Bases statements for the change request for SR 3.8.1.8, Transfer of Offsite Power from Normal source to Alternate source, SR 3.8.1.12, DG auto start and load on ECCS signal, and SR 3.8.1.13, DG automatic trip bypass, indicate the change can be justified by operating experience that has shown that these components usually (emphasis added) pass the SR (and removed "when performed on the 18 month frequency"). Please provide the data that supports the justification that, even with some failures at the 18 month surveillance frequency, the frequency can be extended to 24 months.

Electrical Response 4:

As stated in Attachment 5 of Reference 1, a review of the applicable CPS surveillance history for the AC Sources demonstrated there have been no previous failures of these three SRs that would have been detected solely by the required 18-month periodic performance. Additionally, the more frequent testing required by SRs 3.8.1.1, 3.8.1.2, 3.8.1.3, and 3.8.1.7 provides additional assurance that offsite power and diesel generator availability and proper functioning will be promptly detected. The commitment to trend ongoing performance at CPS will also identify any potential unanticipated degradation resulting from extending these tests from 18 to 24 months.

The phrase "usually pass the SR when performed on the 18 month frequency" is a common generic Bases statement (which occurs in 49 instances in the CPS TS Bases).

In these instances, the proposed Bases revisions that coordinate with the change in Surveillance Frequencies from 18 to 24 months have simply deleted the portion "when performed on the 18 month frequency." The word "usually" is not intended to necessarily reflect that there have been failures, but is simply a generic statement that would encompass occasional failures. The three Bases changes addressed in this request are also made consistently in each of the other 46 occurrences.

Electrical Request 5:

The TS Bases statements for SR 3.8.1.15, DG hot restart test, SR 3.8.1.16, DG synchronizing test, SR 3.8.1.17, DG protective trip bypass and SR 3.8.1.18, DG load sequence timer calibration, state that the surveillances are consistent with Regulatory Guide (RG) 1.108. This RG had been withdrawn and replaced with Revision 3 to RG 1.9 in 1993. Please explain the continued reference to RG 1.108.


Electrical Response 5:

The TS Bases for the SRs specified in this request provide separate RG cross-reference citations for (1) testing acceptance criteria and (2) testing frequency. The intent of the TS Bases discussions is to provide a basis for the requirements addressed by a given Limiting Condition for Operation (LCO) or SR. There is no intent to imply a broader commitment to these RGs than the context in which the citation is made.

In the surveillances referenced in this request, the testing acceptance criteria are not proposed for change, and therefore, the current licensing basis for these tests continues to reference RG 1.108. However, the frequency of testing specified in RG 1.108 was 18 months, while RG 1.9 supports the proposed 24-month testing frequency. As such, only the portion of the Bases associated with the frequency is revised to reflect its support within RG 1.9. CPS is committed to portions of RG 1.108, Revision 1, dated August 1977, as well as portions of RG 1.9, Revision 2, dated December 1979, and Revision 3, dated July 1993, as indicated in the Updated Safety Analysis Report (USAR) Section 1.8.

Electrical Request 6:

No justification has been provided in the TS Bases statements for a 24 month surveillance frequency for SR 3.8.1.19, DG auto start on a combined LOOP and ECCS signal, and SR 3.8.4.2, Battery charger full load and recharge capability. Please provide the basis for this requested change.

Electrical Response 6:

Based on clarification provided by the NRC Staff in a February 3, 2005 teleconference, the following additional justification is provided. However, AmerGen notes that it is inappropriate for the TS Bases to contain justification for past changes. The Bases provide standard wording related to the Frequency basis, consistent with the content and format of NUREG-1434, "Standard Technical Specifications General Electric Plants, BWR/6."

The diesel generator (DG) is started numerous times during the operating cycle in accordance with various surveillance requirements. Performance of SR 3.8.1.19 encompasses portions of the logic and starting relays that are more frequently tested, such that the surveillance uniquely tests only a small number of items that are not tested during the monthly and semi-annual tests of the diesel generator. These include the bus and offsite source loss of power relays, the LOCA signal to the DG start logic, and the contacts of the auxiliary relays for these inputs to the DG start logic. These relays are located in mild environmental zones of the plant. The increased interval between calibrations for the loss of power relays and the sensing circuits for the LOCA signals has been evaluated in other portions of Reference 1 for satisfactory performance to support extension to 24-month calibration intervals. The auxiliary relays will age an additional 6 months before being operated during the integrated test. This additional aging will, however, have no impact on the condition of the relay coils since they are de-energized during this period. Any small amount of increased oxidation on the relay contact surfaces, assumed to occur during the additional 6 months of aging, would not be expected to be capable of maintaining its integrity when exposed to the 125 VDC potential of the circuit, nor would it provide sufficient resistance to prevent pickup of the auto start relay. Accordingly, the increase in the surveillance interval for SR 3.8.1.19 is not expected to impact successful performance of this surveillance.

The battery charger provides power to the DC bus continuously during the operating cycle so the capability of the charger to provide the required voltage is continuously demonstrated. The battery charger full load and recharge capability surveillance required by SR 3.8.4.2 verifies the ability of the charger to produce its nameplate output for a specified duration. The charger output is checked by feeding a load bank, which is adjusted to produce the required current output from the charger. This does not require the charger to operate any differently than during normal operation since the charger automatically adjusts its output to maintain the selected voltage level. Aging of internal components of the charger is adequately addressed by preventive maintenance tasks, which inspect the charger and dictate periodic replacement of age sensitive components (such as capacitors on a 6 year interval). Accordingly the increase in the surveillance interval is not expected to impact successful performance of this surveillance.

Electrical Request 7:

The TS Bases statements for SR 3.8.4.3, Battery service test, indicate the change request is an exception to RGs 1.32 and 1.129 without any explanation. Please provide the justification why the extension to 24 months is acceptable.

Electrical Response 7:

AmerGen is committed to RG 1.32, "Criteria for Safety-Related Electric Power Systems for Nuclear Power Plants," and RG 1.129, "Maintenance, Testing, and Replacement of Large Lead Storage Batteries for Nuclear Power Plants," which include commitments to perform a battery "service test" (i.e., SR 3.8.4.3) during refueling outages, or at some other outage, with intervals between tests "not to exceed 18 months." Since the battery service test is required to be performed during outage conditions in accordance with Note 2 to SR 3.8.4.3, and the expected fuel cycle lengths are nominally 24 months, this exception is required.

A battery service test is a special as found test of the battery's capability to satisfy the design requirements (i.e., battery duty cycle) of the DC electrical power system. Note 1 to SR 3.8.4.3 allows the performance of a modified performance discharge test (i.e., SR 3.8.6.6) in lieu of the battery service test. As explained in the CPS TS Bases for SR 3.8.4.3, this substitution is acceptable because the modified performance test of SR 3.8.6.6 represents an equivalent test of battery capability as SR 3.8.4.3.

The battery performance test is a test of the constant current capacity of a battery, normally done in the as-found condition, after having been in service, to detect any change in the capacity determined by the acceptance test. The modified performance test utilizes current values that bound the battery duty cycle of the service test. The test is intended to determine overall battery degradation due to age and usage. Based on trending the battery capacity determined by the performance discharge test, the battery will be replaced prior to its capacity dropping below 80% of the manufacturer's rating. A capacity of 80% shows that the battery rate of deterioration is increasing, even though the battery is sized to meet the assumed duty cycle loads when the battery design capacity reaches this 80% limit. Replacement of the battery prior to the capacity dropping below 80% of the manufacturer's rating will ensure that the battery continues to meet the requirements of SR 3.8.6.6.

The Surveillance Frequency for the performance discharge test is normally 60 months.

If the battery shows degradation, or if the battery has reached 85% of its expected life, the Surveillance Frequency required by SR 3.8.6.6 is reduced to either 24 months or 12 months. This 12-month Frequency is not being extended to 24 months. As such, when the battery begins to show degradation or has reached 85% of its expected life with capacity < 100% of manufacturer's rating, the increased testing frequency of 12 months will continue to appropriately monitor the battery condition. Use of the modified performance test will assure capability to meet the design required battery duty cycle (i.e., service test acceptance criteria).

As such, extending the periodic battery service test required by SR 3.8.4.3 will not result in any increased potential for battery age related degradation to impact continued ability of the battery to perform its assumed duty cycle since any additional monitoring will continue to be imposed by SR 3.8.6.6.

References:

1. Letter from Keith R. Jury (AmerGen Energy Company, LLC) to U. S. NRC, "Request for Amendment to Technical Specification Surveillance Requirement Frequencies to Support 24-Month Fuel Cycles in Accordance with the Guidance of Generic Letter 91-04, 'Changes in Technical Specification Surveillance Intervals to Accommodate a 24-Month Fuel Cycle'," dated May 20, 2004
2. Instrument Society of America (ISA) RP67.04, "Methodologies for the Determination of Setpoints for Nuclear Safety-Related Instrumentation," Part II, 1994
3. GE Nuclear Energy Report NEDC-32889P, "General Electric Methodology for Instrumentation Technical Specification and Setpoint Analysis," Revision 2, dated February 2000
4. Exelon Nuclear Engineering Standard NES-EIC-20.04, "Analysis of Instrument Channel Setpoint Error and Instrument Loop Accuracy," Revision 3
5. Letter from U. S. NRC to Oliver D. Kingsley (Exelon Generation Company, LLC), Amendment Nos. 147 and 133 for LaSalle County Station Units 1 and 2, dated March 30, 2001

Appendix A

CI-01.00, Revision 3
Clinton Power Station Instrument Setpoint Calculation Methodology

NUCLEAR STATION ENGINEERING STANDARD CI-01.00
INSTRUMENT SETPOINT CALCULATION METHODOLOGY
Revision 3

TITLE: INSTRUMENT SETPOINT CALCULATION METHODOLOGY

SCOPE OF REVISION:

1. Updated references to current procedures, standards and revisions.
2. Incorporated revisions necessary to produce setpoint calculations using the results of the drift analysis prepared for implementation of NRC Generic Letter 91-04, "Changes in Technical Specification Surveillance Intervals to Accommodate a 24-Month Fuel Cycle"
3. Incorporated guidance from NES-EIC-20.04 "Analysis of Instrument Channel Setpoint Error and Instrument Loop Accuracy" providing additional reasonable assumptions for drift in lieu of better data.

4. Incorporated guidance acknowledging that calculations may be prepared in accordance with other methodologies, such as ISA Methods 2 and 3, after consulting with the Electrical / Instrument and Control Design Manager.

INFORMATION USE

Procedure Owner: Paul Marcum
Approval Date: 04-21-04

TABLE OF CONTENTS

1.0 PURPOSE ... 3
2.0 DISCUSSION/DEFINITIONS ... 3
  2.1 Discussion ... 3
  2.2 Definitions ... 9
3.0 RESPONSIBILITY ... 21
4.0 STANDARD ... 21
  4.1 Setpoint Calculation Guidelines ... 21
  4.2 Definition of Input Data and Requirements ... 23
  4.3 Determining Individual Device Error Terms ... 35
  4.4 Determining Loop/Channel Values (Input to Setpoint Calculation) ... 39
  4.5 Calculation of Nominal Trip Setpoints and Indication/Control Loops ... 54
5.0 REFERENCES ... 60
6.0 APPENDICES ... 64
  Appendix A, Guidance on Device Specific Accuracy and Drift Allowances ... 66
  Appendix B, Sample Calculation Format ... 76
  Appendix C, Uncertainty Analysis Fundamentals ... 94
  Appendix D, Effect of Insulation Resistance on Uncertainty ... 131
  Appendix E, Flow Measurement Uncertainty Effects ... 147
  Appendix F, Level Measurement Temperature Effects ... 155
  Appendix G, Static Head and Line Loss Pressure Effects ... 165
  Appendix H, Measuring and Test Equipment Uncertainty ... 167
  Appendix I, Negligible Uncertainties / CPS Standard Assumptions ... 175
  Appendix J, Digital Signal Processing Uncertainties ... 181
  Appendix K, Propagation of Uncertainty Through Signal Conditioning Modules ... 184
  Appendix L, Graded Approach to Uncertainty Analysis ... 190
  Appendix M, Using the Results of Statistical Drift Analysis ... 196
  Appendix N, Statistical Analysis of Setpoint Interaction ... 199
  Appendix O, Instrument Loop Scaling ... 201
  Appendix P, Radiation Monitoring Systems ... 209
  Appendix Q, Rosemount Letters ... 212
  Appendix R, Record of Coordination for Computer Point Accuracy ... 214

1.0 PURPOSE

1.1 The purpose of this Engineering Standard is to provide a methodology for the determination of instrument loop uncertainties and setpoints for the Clinton Power Station.

The methodology described in this standard applies to uncertainty calculations for setpoint, control, and indication applications.

1.2 This document provides guidelines for the calculation of instrumentation setpoints, control, and indication applications for the Clinton Power Station.

1.3 These guidelines are applicable to all instrument setpoints. They include guidance for calculation of both Allowable Values and Nominal Trip Setpoints for setpoints included in plant Technical Specifications and calculation of Nominal Trip Setpoints for instruments not covered in the plant Technical Specifications. This document also includes guidance for determination of all input data applicable to the calculations as well as important topics concerning the interfaces with surveillance and calibration procedures and practices.

2.0 DISCUSSION/DEFINITIONS

2.1 Discussion

2.1.1 This document is structured to progress through a complete calculation process, from the most detailed level of individual device characteristics (drift, accuracy, etc.), through determination of loop characteristics, and finally to calculation of setpoints and related topics, as outlined in the following figure:

  • Definition of Input Data and Requirements
  • Calculation of Individual Device Terms (device accuracy, drift, etc.)
  • Combination of Individual Device Terms into Loop Terms (loop accuracy, etc.)
  • Calculation of Total Channel/Loop Values (Setpoint, Allowable Value, etc.)
  • Evaluation of Results and Resolution of Problem Areas
  • Supporting Information

FIGURE 1. THE SETPOINT CALCULATION PROCESS

a. DETERMINE SETPOINT OR CHANNEL ERROR VALUE TO BE CALCULATED
b. DEFINE INSTRUMENT CHANNEL CHARACTERISTICS (INSTRUMENT DEFINITION, PROCESS & PHYSICAL INTERFACES, EXTERNAL INTERFACES)
c. DETERMINE INSTRUMENT CHANNEL DESIGN REQUIREMENTS (REGULATORY REQUIREMENTS, FUNCTIONAL REQUIREMENTS)
d. CALCULATE DEVICE SPECIFIC ERROR TERMS (ACCURACY, DRIFT, CALIBRATION)
e. CALCULATE CHANNEL SPECIFIC ERROR TERMS (ACCURACY, DRIFT, CALIBRATION, PMA/PEA, OTHERS)

For setpoints with Analytical Limits:
f. CALCULATE AV
g. CALCULATE NTSP
h. SELECT ACTUAL SETPOINT

For setpoints/indications with no Analytical Limit:
i. CALCULATE CHANNEL ERROR
j. CALCULATE SETPOINT

k. COMPARE NTSP, AV, CHANNEL ERROR TO EXISTING REQUIREMENTS (TECHNICAL SPECIFICATIONS, FUNCTIONAL REQUIREMENTS, OTHER REGULATORY REQUIREMENTS)
l. OPTIMIZE CHANNEL TO MEET REQUIREMENTS
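For illustration only, the overall flow of Figure 1 from analytical limit to allowable value to nominal trip setpoint can be sketched as follows; the grouping of uncertainty terms and the values are generic assumptions for the example and do not reproduce the Method 1 equations given later in this standard.

```python
import math

def srss(*terms):
    """Combine independent random uncertainty terms by Square Root Sum of Squares."""
    return math.sqrt(sum(t ** 2 for t in terms))

# Hypothetical increasing-process trip, all values in percent of span.
analytical_limit = 90.0

# Hypothetical channel uncertainty terms (steps d and e of Figure 1).
accuracy, drift, calibration, pma = 0.3, 0.4, 0.2, 0.5

# Generic illustration: back the allowable value off the analytical limit by
# uncertainties not observed during surveillance testing, then back the nominal
# trip setpoint off the allowable value by the remaining terms (steps f and g).
allowable_value = analytical_limit - srss(accuracy, pma)
nominal_trip_setpoint = allowable_value - srss(drift, calibration)

print(f"AV = {allowable_value:.2f} % span, NTSP = {nominal_trip_setpoint:.2f} % span")
```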

2.1.2 Instrument setpoint uncertainty allowances and setpoint discrepancies are issues that have led to a number of operational problems throughout the nuclear industry.

Historically, CPS instrument loop uncertainty and setpoint determination had been based upon varying setpoint methodologies. Instrument channel uncertainty and setpoint determination had been established by two different methods, depending on whether or not they applied to the Reactor Protection System and Engineered Safeguards Functions developed by GE or to other safety related systems.

These methods involved:

1. Legacy S&L setpoint calcs which conservatively added accuracy errors to drift errors rather than SRSS. These calculations rarely recognized an Analytical Limit and as such did not calculate a Tech Spec Allowable Value.
2. GE setpoint calcs which are similar to "ISA Method 2" (Ref. 5.3).

A third methodology was used to verify that an allowance for instrument uncertainty was contained in the allowable value for Technical Specifications indicating instruments (i.e., "Channel Error" as defined in this standard). All three methodologies were rigid in recommendation and differed in both process and application. This resulted in CPS instrument uncertainty and setpoint calculations lacking a consistent definition of allowable value and an improper understanding of the relationship of the allowable value to earlier setpoint methodologies, procedures, and operability criteria. Beginning with Rev. 1, this Engineering Standard is intended to provide consistency among all CPS instrument setpoint calculations by incorporating the common strengths of CPS historical methodologies and ISA into one common standard with common terms.

This Standard provides a mechanism for the uniform development of new and revised CPS instrument setpoint and channel error calculations.

This standard does not prohibit the use of ISA recommended practice Methods 2 and 3, but it strongly prefers Method 1 for setpoints with analytical limits, and, as such, Method 1 is the method prescribed within this standard. This prescribed method should be used unless there is an infringement on operating margin to the point where the increase in nuisance alarms / actuations could cause more harm than the added conservatism gained. Because Methods 2 and 3 calculate the setpoint directly from the analytical limit, more operating margin can be attained. The Electrical/Instrument and Control Design Manager should be consulted prior to using methods other than the one preferred in this standard.


2.1.3 This standard provides flexibility, then, in the precise method by which a setpoint is determined, allowing for variations in calculation rigor dependent upon the significance of the function of the setpoint or operator decision point. The intent is to provide a format and systematic method, in contrast with a prescriptive method, of identifying and combining instrument uncertainties. As such, this standard provides guidelines to statistically combine uncertainties of components in a measurement and perform comparisons to ensure that there is adequate margin between the setpoint and a given limit to account for measurement error. This descriptive systematic method provides a consistent criterion for assessing the magnitude of uncertainties associated with each uncertainty component, thereby ensuring plant safety.

2.1.4 A systematic method of identifying and combining instrument uncertainties is necessary to ensure that adequate margin has been provided for safety related instrument channels that perform protective functions and for instrument channels that are important to safety, thus ensuring that vital plant protective features are actuated at the appropriate time during transient and accident conditions. Analytical Limits have been established through the process of accident analysis, which assumed that plant protective features would intervene to limit the magnitude of a transient. Limiting Safety System Settings (LSSS) are established in accordance with 10 CFR 50.36. Ensuring that these protective features actuate as they were assumed in the accident analysis provides assurance that safety limits will not be exceeded. The methodology presented by this revision is based on the industry standard ANSI/ISA S67.04, "Setpoints for Nuclear Safety Related Instrumentation," Parts I and II (Ref. 5.3), which is endorsed by Regulatory Guide 1.105 (Ref. 5.11). Clinton Power Station (CPS) has invoked RG 1.105 as a basis for meeting the requirements of 10 CFR 50, Appendix A, General Design Criteria 13 and 20.

2.1.5 Relation to ISA Standards and Regulatory Guides

2.1.5.1 The applicable ISA Standard for setpoint calculations is ISA S67.04. That standard was prepared by a committee of the ISA, which included some representatives who also participated in preparation of the CPS Setpoint Methodology. The CPS Setpoint Methodology is consistent with ISA Standard S67.04. More specifically, this standard, as it applies to setpoints with analytical limits, strongly prefers the use of ISA Recommended Practice Method 1. It is recognized that maintenance of operational margin has not been possible in rare cases using Method 1. It is also recognized that GE normally uses a method similar to ISA Recommended Practice Method 2. CPS currently uses Method 3 for reactor water level setpoints, and GE provided several Method 2 calculations when power uprate was implemented.

2.1.5.2 There are three Regulatory Guides related to setpoint methodology: RG 1.105 (Ref. 5.11), RG 1.89 (Ref. 5.35), and RG 1.97 (Ref. 5.34). RG 1.105 covers setpoint methodology.

This Setpoint Methodology complies with RG 1.105. RG 1.89 covers equipment qualification. This Setpoint Methodology does not directly address equipment qualification, beyond the basic assumption that instrumentation is qualified for its intended service. This Setpoint Methodology may be used to determine instrument errors under various conditions as part of the process of demonstrating that instruments are qualified to perform specified functions, in accordance with RG 1.89. RG 1.97 covers the topic of post accident instrumentation. This Setpoint Methodology also does not address RG 1.97. However, as is the case with RG 1.89, the methods of determining instrument performance inherent in this Setpoint Methodology may be used when demonstrating that a particular instrument channel satisfies the guidance of RG 1.97.

2.1.6 In summary, this standard, based upon ISA-S67.04, provides an acceptable method to calculate instrument loop accuracy and setpoints, and applies to NSED as well as any technical staff members involved in the modification of instrument loops at CPS. The results of an uncertainty analysis might be applied to the following types of calculations:

  • Parameters and setpoints that have Analytical Limits
  • Evaluation or justification of previously established setpoints
  • Parameters and setpoints that do not have Analytical Limits
  • Determination of instrument indication uncertainties

2.1.7 Setpoints without Analytical Limits Many setpoints are important for reliable power generation and equipment protection. Because these setpoints may not be derived from a safety limit tied to an accident analysis, the basis for the setpoint calculation is typically developed from process limits providing either equipment protection or maintaining generation capacity. As defined in Appendix L, "Graded Approach to Uncertainty Analysis," the criteria in this Engineering Standard may also be used as a guide for setpoints that do not have Analytical Limits to improve plant reliability, but the calculation may not be as rigorous.

2.1.8 These guidelines are applicable to all instrument setpoints. They include guidance for calculation of both Allowable Values and Nominal Trip Setpoints for setpoints included in plant Technical Specifications, and calculation of Nominal Trip Setpoints for instruments not covered in plant Technical Specifications.

2.1.9 Indication Uncertainty (Channel Error)

Uncertainty associated with process parameter indication is also important for safe and reliable plant operation.

Allowing for indication uncertainty supports compliance with the Technical Specifications and the various operating procedures. The methodology presented in this Engineering Standard is applicable to determining indication uncertainty.

2.1.10 Mechanical Equipment Setpoints This Engineering Standard was developed specifically for instrumentation components and loops. This Engineering Standard does not specifically apply to mechanical equipment setpoints (i.e. safety and relief valve setpoints) or protective relay applications. However, guidance presented herein may be useful to predict the performance of other non-instrumentation-type devices.

2.1.11 Rounding Conventions Normal rounding conventions (rounding up or down depending on the last digit in the calculated result) do not apply to error calculations or setpoints. All rounding of results should be done in the direction that is conservative relative to plant safety (upward for error terms, away from the Analytical Limit for Allowable Values and Nominal Trip Setpoints). Additionally, all output values to calibration procedures should be expressed to the precision required by the calibration procedure.
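For illustration only, the conservative-rounding convention above can be captured in a small helper. This is a minimal sketch; the function names, the example setpoint of 1054.37, and the Analytical Limit of 1060 are hypothetical and are not taken from any CPS calculation.

```python
import math

def round_error_up(value, decimals):
    """Round an error term upward (more conservative) to the given precision."""
    factor = 10 ** decimals
    return math.ceil(value * factor) / factor

def round_setpoint_conservative(setpoint, analytical_limit, decimals):
    """Round a setpoint or Allowable Value away from the Analytical Limit."""
    factor = 10 ** decimals
    if setpoint <= analytical_limit:
        # Trip approaches the limit from below: rounding down moves away from it.
        return math.floor(setpoint * factor) / factor
    # Trip approaches the limit from above: rounding up moves away from it.
    return math.ceil(setpoint * factor) / factor

# Hypothetical example: error term reported to two decimals, setpoint to whole units.
print(round_error_up(2.314, 2))                       # 2.32 (rounded upward)
print(round_setpoint_conservative(1054.37, 1060, 0))  # 1054.0 (rounded away from the AL)
```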

2.2 Definitions

NOTE: Many of the following definitions are based on the methodology of NEDC-31336 (Ref. 5.1).

Where the terms defined are equivalent to terms used in ISA Standard S67.04 (Ref. 5.3), the equivalence is noted.

2.2.1 AS-FOUND TOLERANCE (AFTL): The tolerance on the As-Found error in the instrument loop (AFTL); if the As-Found error exceeds this tolerance, calibration is required to restore the loop to within the As-Left Tolerance. An as-found tolerance (AFTi) is also developed for each device in the channel.

2.2.2 ACCURACY TEMPERATURE EFFECT (ATE): The change in instrument output for a constant input when exposed to different ambient temperatures.

2.2.3 ALLOWABLE VALUE (AV): (Technical Specifications Limit):

The limiting value of the sensed process variable at which the trip setpoint may be found during instrument surveillance. Usually prescribed as a license condition.

Equivalent to the term Allowable Value as used in ISA Standard S67.04.

2.2.4 ANALYTICAL LIMIT (AL): The value of the sensed process variable established as part of the safety analysis prior to or at the point which a desired action is to be initiated to prevent the safety process variable from reaching the associated licensing safety limit. Equivalent to the term Analytical Limit as used in ISA Standard S67.04.

2.2.5 AS-LEFT TOLERANCE (ALTi): This tolerance is the precision with which the technician should be able to set the device during surveillance. Additionally, if the As-Found value is within the (ALTi) then re-calibration is not required.

The As-Left Tolerance is determined by the organization responsible for defining the surveillance procedures (recommendations are provided in this document). A loop as-left tolerance (ALTL) is also developed for all devices in the channel.

2.2.6 BIAS (B): A systematic or fixed instrument uncertainty, which is predictable for a given set of conditions because of the existence of a known direction (positive or negative). See Appendix C, Section C.1.2, for additional discussion.

2.2.7 BOUNDING VALUE (BV): The extreme value of the conservatively calculated process variable that is to be compared to the licensing safety limit during the transient or accident analysis. This value may be either a maximum or minimum value, depending upon the safety variable.

2.2.8 CALIBRATION TOOL ERROR (Ci): The accuracy of the device (multimeter, etc.) being used to perform the calibration or surveillance test. Also referred to as M&TE (MTE). For typical precision equipment CPS recommends that this error term be considered to be a 3 sigma value, provided that the calibration of these devices is to NIST traceable standards and minimizes the effects of hysteresis, linearity and repeatability.

2.2.9 CALIBRATION STANDARD ERROR (CSTD): The error in the calibration of the calibrating tool. Per CPS standard CI-01.00 assumptions, this value is considered negligible relative to the overall calibration error term and can be ignored.

2.2.10 CHANNEL CALIBRATION ACCURACY (CL): The quality of freedom from error to which the nominal trip setpoint of a channel can be calibrated with respect to the true desired setpoint, considering only the errors introduced by the inaccuracies of the calibrating equipment used as the standards or references and the allowances for errors introduced by the calibration procedures. The accuracy of the different devices utilized to calibrate the individual channel instruments is the degree of conformity of the indicated values or outputs of these standards or references to the true, exact, or ideal values. The value specified is the requirement for the combined accuracies of all equipment selected to calibrate the actual monitoring and trip devices of an instrument channel, plus allowances for inaccuracies of the calibration procedures. Channel calibration accuracy does not include the combined accuracies of the individual channel instruments that are actually used to monitor the process variable and provide the channel trip function.

2.2.11 CHANNEL INSTRUMENT ACCURACY (AL): The quality of freedom from error of the complete instrument channel with respect to acceptable standards or references. The value specified is the requirement for the combined accuracies of all components in the channel that are used to monitor the process variable and/or provide the trip functions and includes the combined conformity, linearity, hysteresis and repeatability errors of all these devices. The accuracy of each individual component in the channel is the degree of conformity of the indicated values of that instrument to the values of a recognized and acceptable standard or reference device (usually National Bureau of Standards traceable) that is used to calibrate the instrument.

Channel instrument accuracy, channel calibration accuracy, and channel instrument drifts are considered to be independent variables. This definition encompasses the terms Vendor Accuracy, Hysteresis, and Repeatability defined in ISA Standard S67.04.

2.2.12 CHANNEL INSTRUMENT DRIFT (DL): The change in the value of the process variable at which the trip action will occur between the time the nominal trip setpoint is calibrated and a subsequent surveillance test. The initial design data considers drift to be an independent variable. As field data is acquired, it may be substituted for the initial design information. This term is equivalent to the Drift Uncertainty (DR) term used in the ISA Standard S67.04.

2.2.13 CHANNEL INDICATION UNCERTAINTY (CE): This is a prediction of error in an indicator or data supply channel resulting from all causes that could reasonably be expected during the time the channel is performing its function. This term is not used in setpoint calculations.

2.2.14 CONFIDENCE LEVEL: The relative frequency that the calculated statistic is correct.

2.2.15 CONFIDENCE INTERVAL: The frequency that an interval estimate of a parameter may be expected to contain the true value. For example, 95% coverage of the true value means, that in a repeated sampling, when 95% uncertainty interval is constructed for each sample, over the long run, the intervals will contain the true value 95% of the time.

2.2.16 CPS STANDARD CI-01.00 ASSUMPTIONS: Assumptions established by the Setpoint Program that are considered to be defendable and should be used without modification in any new or revised calculation performed under this methodology, as applicable. See Appendix I, Section I.11 for the current standard assumptions. However, it should be noted that specific assumptions germane to the individual calculation shall follow all standard assumptions.

2.2.17 DEADBAND: The range within which the input signal can vary without experiencing a change in the output.

2.2.18 DESIGN BASIS EVENT (DBE): The limiting abnormal transient or an accident which is analyzed using the analytical limit value for the setpoint to determine the bounding value of a process variable.

2.2.19 DRIFT TOLERANCE INTERVAL (DTIC): Defined herein as the calculated drift, based on As-Found/As-Left data, for the calibration interval and tolerance interval of interest from a statistical drift study.

2.2.20 FULL SPAN/SCALE (FS): The highest value of the measured variable that a device is adjusted to measure.

2.2.21 HARSH ENVIRONMENT: This term refers to the worst environmental conditions to which an instrument is exposed during normal, transient, accident or post-accident conditions, out to the point in time when the device is no longer called upon to serve any monitoring or trip function. This term may be used in Equipment Qualification to define the qualification conditions.

From the standpoint of establishing setpoints, Harsh Environment does not apply. This distinction is made to avoid confusion between the long-term functional requirements for the devices, which includes post-trip operation, and the operational requirements during the initial period leading to the first trip.

2.2.22 HUMIDITY EFFECT (HE): Error due to humidity.

2.2.23 HYSTERESIS: An instrument's change in response as the process input signal increases or decreases (see Fig. C-5).

2.2.24 INDICATOR READING ERROR (IRE): The error associated with the accuracy with which personnel can read the analog and digital indications in an instrument loop or on M&TE. This value will normally be one quarter of the smallest division of the scale. IRE is not required if the device ALT is rounded to the nearest conservative half-minor division.

For non-linear scales, the IRE may be evaluated for the area of interest. Appendix C provides an in-depth discussion and usage guidelines for IRE.
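As a simple illustration of the one-quarter-of-a-minor-division convention above, the following sketch computes an IRE for a hypothetical analog indicator; the instrument range and division size are assumptions made only for this example.

```python
def indicator_reading_error(minor_division, span):
    """IRE taken as one quarter of the smallest scale division, expressed
    both in process units and as a percent of calibrated span."""
    ire = minor_division / 4.0
    return ire, 100.0 * ire / span

# Hypothetical 0-1500 psig indicator with 10 psig minor divisions.
ire_units, ire_pct = indicator_reading_error(minor_division=10.0, span=1500.0)
print(ire_units, ire_pct)  # 2.5 psig, ~0.17% of span
```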

2.2.25 INSTRUMENT CHANNEL: An arrangement of components required to generate a protective signal, or, in the case of monitoring channels, to deliver the signal to the point at which it is monitored. Unless otherwise stated, it is assumed that the channel is the same as the loop.

Equivalent to the term Instrument Channel in ISA Standard S67.04.

2.2.26 INSTRUMENT RESPONSE TIME EFFECTS: The delay in the actuation of a trip function following the time when a measured process variable reaches the actual trip setpoint due to time response characteristics of the instrument channel.

2.2.27 INSULATION RESISTANCE ACCURACY ERROR (IRA): This is the error effect produced by degradation of insulation resistance (IR), for the various cables, terminal boards and other components in the instrument loop, exclusive of other defined error terms (Accuracy, Calibration, Drift, Process Measurement Accuracy, Primary Element Accuracy).

Since the effect of current leakage associated with IRA is predictable and will act only in one direction for a given loop, IRA is always treated as a bias term in calculations.

2.2.28 LICENSEE EVENT REPORT (LER): A report which must be filed with the NRC by the utility when a technical specifications limit is known to be exceeded, as required by 10CFR50.73.

2.2.29 LICENSING SAFETY LIMIT (LSL): The limit on a safety process variable that is established by licensing requirements to provide conservative protection for the integrity of physical barriers that guard against uncontrolled release of radioactivity. Events of moderate frequency, infrequent events, and accidents use appropriately assigned licensing safety limits. Overpressure events use appropriately selected criteria for upset, emergency, or faulted ASME category events. Equivalent to Safety Limit in ISA Standard S67.04.

2.2.30 LIMITING SAFETY SYSTEMS SETTING (LSSS): A term used in the Technical Specifications, and in ISA Standard S67.04, to refer to Reactor Protection System (nominal) trip setpoints and allowable values.

2.2.31 LIMITING NORMAL OPERATING TRANSIENT: The most severe transient event affecting a process variable during normal operation for which trip initiation is to be avoided.

2.2.32 LINEARITY: The ability of the instrument to provide a linear output in response to a linear input (see Fig. C-6).

2.2.33 MEAN VALUE: The average value of a random sample or population. For n measurements Xi, where i ranges from 1 to n, the mean is given by M = (Σ Xi) / n.

2.2.34 MEASURED SIGNAL: The electrical, mechanical, pneumatic, or other variable applied to the input of a device.

2.2.35 MEASURED VARIABLE: A quantity, property, or condition that is measured, e.g., temperature, pressure, flow rate, or speed.

2.2.36 MEASUREMENT: The present value of a variable such as flow rate, pressure, level, or temperature.

2.2.37 MEASUREMENT AND TEST EQUIPMENT EFFECT (MTE): The uncertainty attributed to measuring and test equipment that is used to calibrate the instrument loop components. Also called Calibration Tool Error (Ci).

2.2.38 MILD ENVIRONMENT: An environment that at no time is more severe than the expected environment during normal plant operation, including anticipated operational occurrences.

2.2.39 MODELING ACCURACY: The modeling accuracy may consist of modeling bias and/or modeling variability. Modeling bias is the result of comparing analysis models used in event analysis to actual plant test data or more realistic models. Modeling variability is the uncertainty in the ability of the model to predict the process or safety variable.

2.2.40 MODULE: Any assembly of interconnecting components, which constitutes an identifiable device, instrument or piece of equipment. A module can be removed as a unit and replaced with a spare. It has definable performance characteristics, which permit it to be tested as a unit. A module can be a card, a drawout circuit breaker or other subassembly of a larger device, provided it meets the requirements of this definition.

2.2.41 MODULE UNCERTAINTY (As): The total uncertainty attributable to a single module. The uncertainty of an instrument loop through a display or actuation device will include the uncertainty of one or more modules.

2.2.42 NOISE: An unwanted component of a signal or variable. It causes a fluctuation in a signal that tends to obscure its information content.

2.2.43 NOMINAL TRIP SETPOINT (NTSP): The limiting value of the sensed process variable at which a trip may be set to operate at the time of calibration. This is equivalent to the term Trip Setpoint in ISA Standard S67.04.

2.2.44 NOMINAL VALUE: The value assigned for the purpose of convenient designation but existing in name only; the stated or specified value as opposed to the actual value.

2.2.45 NONLINEAR: A relationship between two or more variables that cannot be described as a straight line. When used to describe the output of an instrument, it means that the output is of a different magnitude than the input, e.g., a square-root relationship.

2.2.46 NORMAL DISTRIBUTION: The density function of the normal random variable x, with mean μ and variance σ², is:

n(x; μ, σ) = (1 / (σ √(2π))) e^(−(x − μ)² / (2σ²))

2.2.47 NORMAL PROCESS LIMIT (NPL): The safety limit, high or low, beyond which the normal process parameter should not vary.

Trip setpoints associated with non-safety-related functions might be based on the normal process limit.

2.2.48 NORMAL ENVIRONMENT: The environmental conditions expected during normal plant operation.

2.2.49 OPERATIONAL LIMIT (OL): The value of a process variable established to enable determination of trip avoidance margin (operating margin) for the limiting normal operating transient.

2.2.50 OVERPRESSURE EFFECT (OPE): Error due to overpressure transients (if any).

2.2.51 POWER SUPPLY EFFECT (PSE): Error due to power supply fluctuations.

2.2.52 PRIMARY ELEMENT ACCURACY (PEA): The accuracy of the device (exclusive of the sensor) which is in contact with the process, resulting in some form of interaction (e.g., in an orifice meter, the orifice plate, adjacent parts of the pipe, and the pressure connections constitute the primary element).

2.2.53 PROBABILITY: The relative frequency with which an event occurs over the long run.

2.2.54 PROCESS MEASUREMENT ACCURACY (PMA): Process variable measurement effects (e.g., the effect of changing fluid density on level measurement) aside from the primary element and the sensor.

2.2.55 RADIATION EFFECT (RE): Error due to radiation.

2.2.56 RANDOM: Describing a variable whose value at a particular future instant cannot be predicted exactly, but can only be estimated by a probability distribution function. See Appendix C, Section C.1.1, for additional discussion.

2.2.57 RANGE: The region between the limits within which a quantity is measured, received, or transmitted, expressed by stating the lower and upper range values.

2.2.58 REPEATABILITY: The ability of an instrument to produce exactly the same result every time it is subjected to the same conditions (see Figure C-4).

2.2.59 REQUIRED LIMIT (RL): A criterion sometimes applied to As-Found surveillance data for judging whether or not the channel's Allowable Value could be exceeded in a subsequent surveillance interval.

2.2.60 REVERSE ACTION: An increasing input to an instrument producing a decreasing output.

2.2.61 RFI/EMI EFFECT (REE): Error due to RFI/EMI influences (if any).

2.2.62 RISE TIME: The time it takes a system to reach a certain percentage of its final value when a step input is applied.

Common reference points are 50%, 63%, and 90% rise times.

2.2.63 RPS: Reactor Protection System.

2.2.64 RTD: Resistance Temperature Detector.

2.2.65 SAFETY LIMIT (Licensing Safety Limit): A limit on an important process variable that is necessary to reasonably protect the integrity of physical barriers that guard against the uncontrolled release of radioactivity.

2.2.66 SAFETY-RELATED INSTRUMENTATION: Instrumentation that is essential to the following:

  • Provide emergency reactor shutdown
  • Provide containment isolation
  • Provide reactor core cooling
  • Provide for containment or reactor heat removal
  • Prevent or mitigate a significant release of radioactive material to the environment or is otherwise essential to provide reasonable assurance that a nuclear power plant can be operated without undue risk to the health and safety of the public

Other instrumentation, such as certain Regulatory Guide 1.97 instrumentation, may be treated as safety related even though it may not meet the strict definition above.

2.2.67 SEISMIC EFFECT (SE): The change in instrument output for a constant input when exposed to a seismic event of specified magnitude.

2.2.68 SENSOR (TRANSMITTER): The portion of the instrument channel that converts the process parameter value to an electrical signal. This is equivalent to the corresponding term in ISA Standard S67.04.

2.2.69 SIGMA: The value specified is the maximum value of a standard deviation of the probability distribution of the parameter based on a normal distribution.

2.2.70 SIGNAL CONVERTER: A transducer that converts one transmission signal to another.

2.2.71 SPAN: The algebraic difference between the upper and lower values of a range.

2.2.72 SPAN SHIFT: An undesired shift in the calibrated span of an instrument (see Figure C-8). Span shift is one type of instrument drift that can occur.

2.2.73 SQUARE-ROOT EXTRACTOR: A device whose output is the square root of its input signal.

2.2.74 SQUARE-ROOT-SUM-OF-SQUARES METHOD (SRSS): A method of combining uncertainties that are random, normally distributed, and independent:

C = √(a² + b²)

2.2.75 STANDARD DEVIATION (POPULATION): A measure of how widely values are dispersed from the population mean, given by

σ = √[ (n Σx² − (Σx)²) / n² ]

2.2.76 STANDARD DEVIATION (SAMPLE): A measure of how widely values are dispersed from the sample mean, given by

s = √[ (n Σx² − (Σx)²) / (n(n − 1)) ]
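For illustration, the shortcut forms above can be evaluated numerically as shown below. The drift values are hypothetical and the functions are only a sketch of the arithmetic, not the drift-study procedure itself.

```python
import math

def sample_std(x):
    """Sample standard deviation: sqrt((n*sum(x^2) - (sum(x))^2) / (n*(n-1)))."""
    n = len(x)
    return math.sqrt((n * sum(v * v for v in x) - sum(x) ** 2) / (n * (n - 1)))

def population_std(x):
    """Population standard deviation: sqrt((n*sum(x^2) - (sum(x))^2) / n^2)."""
    n = len(x)
    return math.sqrt((n * sum(v * v for v in x) - sum(x) ** 2) / (n * n))

# Hypothetical As-Found minus As-Left drift values, in percent of span.
drift = [0.12, -0.05, 0.30, 0.08, -0.21, 0.15]
print(sample_std(drift), population_std(drift))
```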

2.2.77 STATIC PRESSURE: The steady-state pressure applied to a device.

2.2.78 STATIC PRESSURE EFFECT (SPE): The change in instrument output, generally applying only to differential pressure measurements, for a constant input when measuring a differential pressure and simultaneously exposed to a static pressure. May consist of three effects:

(SPEs) Static Pressure Span Effect (random)
(SPEz) Static Pressure Zero Effect (random)
(SPEBS) Bias Span Effect (bias)

2.2.79 STEADY-STATE: A characteristic of a condition, such as value, rate, periodicity, or amplitude, exhibiting only a negligible change over an arbitrarily long period of time.

2.2.80 STEADY-STATE OPERATING VALUE (X0): The maximum or minimum value of the process variable anticipated during normal steady-state operation.

2.2.81 SUPPRESSED-ZERO RANGE: A range in which the zero value of the measured variable is less than the lower range value.

2.2.82 SURVEILLANCE INTERVAL: The elapsed time between the initiation or completion of successive surveillances or surveillance checks on the same instrument, channel, instrument loop, or other specified system or device.

2.2.83 TEST INTERVAL: The elapsed time between the initiation or completion of successive tests on the same instrument, channel, instrument loop, or other specified system or device.

2.2.84 TIME CONSTANT: For the output of a first-order system forced by a step or impulse, the time constant T is the time required to complete 63.2% of the total rise or decay.

2.2.85 TIME-DEPENDENT DRIFT: The tendency for the magnitude of instrument drift to vary with time.

2.2.86 TIME-INDEPENDENT DRIFT: The tendency for the magnitude of instrument drift to show no specific trend with time.

2.2.87 TIME RESPONSE: An output expressed as a function of time, resulting from the application of a specified input under specified operating conditions.

2.2.88 TOLERANCE: The allowable variation from a specified or true value.

2.2.89 TOLERANCE INTERVAL: An interval that contains a defined proportion of the population to a given probability.

2.2.90 TOTAL HARMONIC DISTORTION (THD): The distortion present in an AC voltage or current that causes it to deviate from an ideal sine wave.

2.2.91 TRANSFER FUNCTION: The ratio of the transformation of the output of a system to the input to the system.

2.2.92 TRANSMITTER (SENSOR): A device that measures a physical parameter such as pressure or temperature and transmits a conditioned signal to a receiving device.

2.2.93 TRANSIENT OVERSHOOT: The difference in magnitude of a sensed process variable taken from the point of trip actuation to the point at which the magnitude is at a maximum or minimum.

2.2.94 TRIP ENVIRONMENT: The environment that exists up to and including the time when the instrument channel performs its initial safety (trip) function during an event.

2.2.95 TRIP UNIT: The portion of the instrument channel which compares the converted process value of the sensor to the trip value, and provides the output "trip" signal when the trip value is reached.

2.2.96 TURNDOWN RATIO: The ratio of maximum span to calibrated span for an instrument.

2.2.97 UNCERTAINTY: The amount to which an instrument channel's output is in doubt (or the allowance made therefor) due to possible errors, either random or systematic, which have not been corrected for. The uncertainty is generally identified within a probability and confidence level.

2.2.98 UPPER RANGE LIMIT (URL): The maximum upper calibrated span limit for the device.

2.2.99 VENDOR ACCURACY (VA): A number or quantity that defines the limit that errors will not exceed when the device is used under reference operating conditions (see Figure C-3). In this context, error represents the change or deviation from the ideal value.

2.2.100 VENDOR DRIFT (VD): The drift value identified in vendor specifications or device testing (history) data.

2.2.101 ZERO: The point that represents no variable being transmitted (0% of the upper range value).

2.2.102 ZERO ADJUSTMENT: Means provided in an instrument to produce a parallel shift of the input-output curve.

2.2.103 ZERO ELEVATION: For an elevated-zero range, the amount the measured variable zero is above the lower range value.

2.2.104 ZERO SHIFT: An undesired shift in the calibrated zero point of an instrument (see Figure C-7). Zero shift is one type of instrument drift that can occur.

2.2.105 ZERO SUPPRESSION: For a suppressed-zero range, the amount the measured variable zero is below the lower range value.

2.2.106 The following Abbreviations and Acronyms are used:

AFTi = As-Found Tolerance
Ai = Device Accuracy
AF/AL = As Found/As Left Data
AL = Analytical Limit
AL = Loop/Channel Accuracy
ALT = As-Left Tolerance
ATE = Accuracy Temperature Effect
AV = Allowable Value
B = Bias Effect
BV = Bounding Value
BWR = Boiling Water Reactor
Ci = Calibration Device Error
CE = Channel Indication Uncertainty
CU = Channel Uncertainty
CL = Loop/Channel Calibration Accuracy Error
CSTD = Calibration Standard Error
D = Device Drift
DBE = Design Basis Event
DL = Loop/Channel Drift
DTIC = Calculated Drift Tolerance Interval
ECCS = Emergency Core Cooling System
FS = Full Span/Scale Value
g = Acceleration of gravity
HE = Humidity Effect
IR = Insulation Resistance
IRA = Insulation Resistance Accuracy Error
IRE = Indicator Reading Error
ISA = Instrument Society of America
LER = Licensee Event Report
LOCA = Loss of Coolant Accident
LSL = Licensing Safety Limit
LSSS = Limiting Safety Systems Setting
N, n = The number of Standard Deviations (sigma values) used
NIST = National Institute of Standards and Technology
NPL = Normal Process Limit
NTSP = Nominal Trip Setpoint
OL = Operational Limit
OPE = Overpressure Effect
PEA = Primary Element Accuracy
PMA = Process Measurement Accuracy
PSE = Power Supply Effect
RE = Radiation Effect
REE = RFI/EMI Effect
RFI/EMI = Radio Frequency/Electromagnetic Interference
RG = Regulatory Guide
RL = Required Limit
RPS = Reactor Protection System
RTD = Resistance Temperature Detector
SE = Seismic Effect
SL = Safety Limit
SP = Span
SPE = Static Pressure Effect
SPEBS = Bias Span Effect
SPEs = Random Span Effect
SPEz = Random Zero Effect
SRSS = Square root of the sum of the squares
T = Temperature
THD = Total Harmonic Distortion
URL = Upper Range Limit
USNRC = United States Nuclear Regulatory Commission
VA = Vendor Accuracy
VD = Vendor Drift
Z = Measure of Margin in Units of Standard Deviations
ZPA = Zero Period Acceleration
σ = Sigma

3.0 RESPONSIBILITY The Supervisor - C&I Design Engineering is responsible for the implementation of this Standard.

4.0 STANDARD 4.1 Setpoint Calculation Guidelines The overall process for evaluating instrumentation is depicted in Figure 1 and described in the sections of this document that follow.

4.1.1 Overview 4.1.1.1 Summary of Setpoint Methodology The Clinton Power Station (CPS) Setpoint Methodology is a statistically based methodology. It recognizes that most of the uncertainties that affect instrument performance are subject to random behavior, and utilizes statistical (probability) estimates of the various uncertainties to achieve conservative, but reasonable, predictions of instrument channel uncertainties. The objective of the statistical approach to setpoint calculations is to achieve a workable compromise between the need to ensure instrument trips when needed, and the need to avoid spurious trips that may unnecessarily challenge safety systems or disrupt plant operation. With special approval, methods 2 or 3 of ref. 5.3 may be used to gain small increases in operating margin to avoid spurious trips or nuisance alarms. See section 2.1.2.

4.1.2 Fundamental Assumptions 4.1.2.1 Treatment of Uncertainties The first fundamental assumption of the CPS Setpoint Methodology is that all uncertainties related to instrument channel performance may be treated as a combination of bias and/or independent random uncertainties. It is assumed that, although all random uncertainties might not exhibit the characteristics of a normal random distribution, the random terms may be approximated by a random normal distribution, such that statistical methods may be used to combine the individual uncertainties. Thus, a key aspect of properly applying this methodology is to examine the various error terms of interest and properly classify each term as to whether it represents a bias or random term, and then to assign adequately conservative values to the terms.
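A minimal sketch of the combination described above is shown below, assuming each uncertainty has already been classified as random (combined statistically by SRSS, per Section 2.2.74) or bias (added algebraically, per Section 2.2.6). The numerical values and variable names are illustrative assumptions only and are not taken from a CPS calculation.

```python
import math

def channel_uncertainty(random_terms, bias_terms):
    """Combine independent random uncertainties by SRSS and add bias terms
    algebraically, all expressed on a common basis (e.g., percent of span)."""
    random_part = math.sqrt(sum(t ** 2 for t in random_terms))
    bias_part = sum(bias_terms)
    return random_part + bias_part

# Illustrative random terms on a common 2-sigma basis:
# device accuracy, drift, and calibration (M&TE) error, in percent of span.
random_terms = [0.50, 0.75, 0.25]
# Illustrative bias term, e.g., an insulation resistance (IRA) effect.
bias_terms = [0.10]
print(channel_uncertainty(random_terms, bias_terms))  # ~1.04% of span
```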

4.1.2.2 Trip Timing The second fundamental assumption of the CPS Setpoint Methodology is that the automatic trip functions associated with setpoints are optimized to function on their first trip during an event, the point in time when they (and they alone) are most relied upon for plant safety. Additional or subsequent trip functions are permitted to be less accurate because their importance to plant safety (relative to the importance of operator action) is less. Worst-case environmental conditions that assume failure of protective equipment, or conditions that would only exist after the point in time where manual operator action is expected, are not applicable to the automatic trip functions that are expected or relied upon to occur in the early part of an event. This assumption is necessary to ensure that overly conservative environmental assumptions are not permitted to inflate error estimates, producing overly conservative setpoints, which may themselves lead to spurious trips and unnecessary challenges to safety systems. Paragraph 4.2.4.2(d) discusses determination of trip timing.

4.1.2.3 Instrument Qualification The third fundamental assumption of the CPS Setpoint Methodology is that safety related instrumentation has been qualified to function in the environment expected as a result of plant events. This relates to the second assumption, above. Specifically, although the setpoint is optimized for the first trip expected in an event, the instrumentation might be required to function after the first trip. In optimizing the setpoint for the first automatic function, it is expected that later automatic functions will occur, but with potentially poorer accuracy (see paragraph 4.2.4.2.(d) for further discussion on trip timing). The later automatic functions of the instrumentation can only be expected if the instrumentation has been qualified for the expected environmental conditions.

4.1.3.1 Probability Criteria 4.1.3.2 Because the CPS Setpoint Methodology is statistically based, it is necessary to establish a desired probability for the various actions associated with the setpoints. The probability target is 95%. This value has been accepted by the USNRC. Appendix C, "Uncertainty Analysis Fundamentals," and Reference 5.32, EPRI TR-103335, provide a detailed discussion of the systematic methodology.

4.1.3.3 In applying the 95% probability limit, it is important to recognize the form of the data and the objective of the calculation. For the case of test data or vendor data, the 95% probability limit corresponds to plus or minus two (2) standard deviations (i.e., 2 sigma). This represents a normal distribution with 95% of the data in the center, and 2.5% each at the upper and lower edges of the distribution.

In the case of a setpoint calculation, we are usually not interested in a plus-or-minus situation. Instead, since the purpose of the trip setpoint is to ensure a trip only when approaching a potentially unsafe condition (one direction only), CPS is interested in a distribution in which 95% is below the trip point and 5% is beyond the trip point, all at one end of the normal distribution.

This is called a one-sided normal distribution. The point at which 5% of the cases lie beyond the trip point corresponds to 1.645 standard deviations (i.e., 1.645 sigma).

4.1.3.4 In performing the setpoint or channel error calculations, it is important that the probabilities associated with the various elements of the calculation be known and properly accounted for. Vendor and calibration data will generally be 2 or 3 sigma values. In determining channel accuracies and other errors, the data will generally be adjusted to a common 2 sigma basis. Subsequently, in setpoint calculations, the probability limits will be adjusted from 2 sigma to the particular probability limit of interest. Scaling and the design requirements necessary for implementing process measurement will be evaluated and controlled in a device calculation.
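The adjustment between sigma bases can be illustrated as follows. The vendor drift number is a hypothetical assumption, and the 1.645 factor is the one-sided 95% value discussed in Section 4.1.3.3; this is only a sketch of the arithmetic, not a prescribed calculation step.

```python
def to_two_sigma(value, sigma_basis):
    """Normalize an uncertainty quoted at 1, 2, or 3 sigma to a common 2-sigma basis."""
    return value * 2.0 / sigma_basis

def to_one_sided_95(value_2sigma):
    """Scale a 2-sigma (two-sided 95%) value to the one-sided 95% level (1.645 sigma)."""
    return value_2sigma * 1.645 / 2.0

# Hypothetical vendor drift of 0.9% of span quoted as a 3-sigma value.
drift_2sigma = to_two_sigma(0.9, 3.0)            # 0.6% of span at 2 sigma
print(drift_2sigma, to_one_sided_95(drift_2sigma))  # 0.6, ~0.49
```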

4.2 Definition of Input Data and Requirements This section of this document provides detailed discussion of the input data and requirements that may apply to a given calculation, in terms of information on the characteristics of the instrument channel and the applicable design requirements. Additional guidance is provided in Appendix C, and in detailed Appendices, as indicated.

4.2.1 Defining Instrument Channel Characteristics, Overview The instrument characteristics to be defined depend on the nature of the instrument channel. Generally, the following information should be included in the instrument channel design characteristics:

4.2.1.1 Instrument Definition
  Manufacturer
  Model
  Range
  Vendor Performance Specifications
  Tag Number
  Instrument Channel Arrangement

4.2.1.2 Process and Physical Interfaces
  Environmental Conditions
  Seismic Conditions
  Process Conditions

4.2.1.3 External Interfaces
  Calibration Methods
  Calibration Tolerances
  Installation Information
  Surveillance Intervals
  External Contributions (Process Measurement, Primary Element, Special Terms and Biases)

Each of these aspects is discussed in more detail in the following Section 4.2.2, Defining Instrument Channel Characteristics.

4.2.2.1 Instrument Definition

a. Manufacturer, Model, Tag Number, Instrument Arrangement The instrument tag number, Manufacturer and model number are determined from controlled design information or by examination of the actual instruments. Instrument channel arrangement refers to the schematic layout of the channel, including both the physical layout and the electrical connections.

The physical layout is important for devices that may be exposed to static head or local environmental conditions, so that the conditions can be properly accounted for in the calculations. The electrical connections are of importance because the actual manner in which the devices in a channel are connected affects the combination of error terms, particularly with regard to estimating calibration errors.


b. Instrument Range The instrument range for each device in the instrument channel includes at least four terms.

The first two are the upper range limit (URL) of the instrument and the calibrated span (SP) of the device. The last two are the range of the input signal to the device and the corresponding range of output signal produced in response to the input.

As an illustration, consider a typical channel consisting of a pressure transmitter connected to a trip unit and a signal conditioner leading to an indicator channel:

The maximum pressure range over which the transmitter is capable of operating is the URL. The process pressure range for which the transmitter is calibrated is the SP.

The output signal range of the transmitter is the electrical output(volts or milliamps) corresponding to the calibrated span.

The input to the trip unit and the signal conditioner would be the electrical input corresponding to the electrical output of the transmitter. In a similar fashion, the input and output ranges for every device in the instrument channel are defined by establishing the electrical signal that corresponds to the calibrated span of the transmitter.

c. Vendor Performance Specifications Vendor performance specifications are the terms that identify how the individual devices in an instrument channel are expected to perform, in terms of accuracy, drift, and other errors. All error terms identified in manufacturer's performance data should be considered for potential applicability to the calculation of errors. In addition, the results of plant-specific or generic Equipment Qualification (EQ) programs should be considered. When EQ program data applicable to a particular application indicates different performance characteristics than those published in open vendor data, the limiting or most conservative data will be used. If additional margin is required, then the differences should be resolved. In order to assure consistency in combining errors in an instrument channel, vendor performance specifications must be expressed as a percentage of Upper Range, Calibrated Span, or the electrical input or output ranges of the devices.
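When a vendor quotes an effect in percent of Upper Range Limit, it can be restated in percent of calibrated span by multiplying by the turndown ratio (URL divided by calibrated span, see Section 2.2.96). The sketch below is illustrative only; the transmitter range and the 0.25% of URL effect are hypothetical values.

```python
def url_error_to_span(error_pct_url, url, calibrated_span):
    """Convert an error quoted in percent of URL to percent of calibrated span."""
    turndown = url / calibrated_span
    return error_pct_url * turndown

# Hypothetical transmitter: URL = 3000 psig, calibrated span 0-1500 psig,
# with a static pressure effect quoted as 0.25% of URL.
print(url_error_to_span(0.25, 3000.0, 1500.0))  # 0.5% of calibrated span
```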

4.2.2.2 Process and Physical Interfaces

a. Environmental Conditions Up to four distinct sets of environmental conditions must be defined for a given instrument channel.
  • The first of these is the set of environmental conditions that applies at the time the instruments are calibrated. Under normal conditions, the only environmental condition of interest during calibration is the possible range of temperatures.

This is of interest because temperature changes between subsequent calibrations can introduce a temperature error, which becomes part of the apparent drift of the device.

  • The second distinct set of environmental conditions is the plant normal conditions. These are the combination of radiation, temperature, pressure and humidity that are expected to be present at the mounting locations of each of the devices during normal plant operation under conditions where the instrument is in use. These conditions are used to estimate normal errors, particularly in the spurious trip margin evaluation.
  • The third distinct set of environmental conditions to be identified is the trip environmental conditions. These are the combination of radiation, temperature, pressure and humidity expected to be present at the mounting location of each device at the point in time that the device is relied upon to perform its automatic trip function. These environmental conditions are generally those that may exist at the first trip of an automatic system, before the operator takes control of an event.
  • The fourth distinct set of environmental conditions that may be needed is the long-term post-accident environmental conditions. These conditions do not apply to most setpoints, but may apply for evaluations of channel error for post-accident monitoring and long-term core cooling (or similar) functions.


  • In all cases, it should be noted that the environmental conditions of importance are those seen by all the devices in the instrument channel.

This includes equipment that connects to the instrument, such as instrument lines. For example, instrument lines that pass through multiple areas (particularly the Drywell) will experience static head variations due to the temperature effects on the fluid in the lines (see Process Measurement Accuracy of Appendix C).

b. Seismic Conditions
  • Seismic conditions ("g" loads) apply to setpoints associated with events that may occur during or after an earthquake. Depending on the type of instrument (and the manufacturer's definition of how seismic loads affect the devices) two different seismic conditions may be of interest. These are the seismic loads that may occur prior to the time the instrument performs its function, and the seismic loads that may be present while the instrument is performing its function. In general, the seismic loading of interest is the Zero Period Acceleration at the point the instrument is mounted.
c. Process Conditions As discussed in Appendix C, three sets of process conditions may be of importance for most instrument channels.
  • The first of these is the calibration conditions that may be present at the time the device is calibrated. This is generally of interest for devices such as differential pressure transmitters, which are calibrated at zero static pressure, but then operated when the reactor is at normal operating pressure. The change in static pressure conditions must be known and accounted for in calibration and/or channel error calculations.
  • The second set of process conditions of interest is the set of worst case conditions that may be imposed on the instrument from within the process. Certain types of pressure transmitters, for example, are subject to overpressure errors if subjected to pressures above a specified value.


  • The third set of process conditions of interest is the conditions expected to be present when the instrument is performing its function. Conceivably, this can be more than one set of conditions. These process conditions determine the errors that may exist when the instruments are calibrated at different process conditions, and may also affect the magnitude of Process Measurement Accuracy and Primary Element Accuracy terms in the setpoint or channel error calculations.

4.2.2.3 External (outside world) Interfaces

a. Calibration Methods and Tolerances Calibration methods and tolerances are of importance because they have an effect on many aspects of the setpoint or channel error evaluations. They determine the channel calibration error, and may also be used to determine As-Found and As-Left tolerances. Calibration tolerances can be identified in a number of different ways. If the plant operating personnel have evaluated their calibration procedures and established an overall channel calibration error for each channel, then this information may be used directly in setpoint calculations. If not the following information should be obtained, so that the channel calibration error can be determined:
1. A list of the instruments used to calibrate the channel.
2. A calibration diagram, showing the locations in the instrument channel where calibration signals are input or measured, the type and accuracy of instruments used at each location, and values of calibration signals.
3. If known, accuracy of the NIST or equivalent Calibration standards used to calibrate devices such as pressure gauges used in the calibration.
4. If established, As-Left and As-Found tolerances used in calibration of each of the devices.
b. Installation Information Installation information of interest includes the installed instrument arrangement, including all connections to the process, instrument line routings, panel and rack locations and elevations, etc.

Elevations and instrument line routings are important for determining head corrections, Process Measurement Accuracy and Primary Element Accuracy, and other effects associated with instrument physical arrangement.


c. Surveillance Intervals The surveillance interval associated with each device in the instrument channel should be determined from the plant surveillance documents. In general, the surveillance interval assumed for the setpoint or channel error calculations should be the longest normal surveillance interval of any device in the channel (e.g., 18 months, due to the transmitter). In cases where the calibration interval can be delayed, the maximum interval should be used (e.g., CPS Technical Specifications allow calibration intervals to be delayed for up to 125% of the required interval, or 18 months × 1.25 = 22.5 months).

However, for devices in the instrument channel that are calibrated on a shorter interval, inaccuracies need not be extrapolated to the maximum interval.

Refer to Section 4.3.2 for more detail.
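The 25% allowance can be illustrated as below. The linear scaling of a per-interval drift allowance shown here is an assumption made only for this sketch; the governing extrapolation rules are those of Section 4.3.2 (not reproduced here), and the drift value is hypothetical.

```python
def max_surveillance_interval(nominal_months, allowance=1.25):
    """Maximum calibration interval including the 25% Technical Specifications allowance."""
    return nominal_months * allowance

def extrapolated_drift(drift_per_interval, nominal_months):
    """Illustrative linear scaling of a per-interval drift allowance to the
    maximum interval (an assumption of this sketch only; see Section 4.3.2)."""
    return drift_per_interval * max_surveillance_interval(nominal_months) / nominal_months

print(max_surveillance_interval(18))  # 22.5 months
print(extrapolated_drift(1.0, 18))    # 1.25 (hypothetical drift, percent of span)
```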

d. External Error Contributions The final step in determining instrument channel characteristics is to determine whether the instrument channel of interest may be subject to any additional error contributions beyond those normally associated with the instruments themselves. If any of these effects may apply to a particular channel, data necessary to define the effect must be obtained.

Potential External Error Contributions may include:

  • Process Measurement Accuracy (PMA)
  • Primary Element Accuracy (PEA)
  • Indicator Reading Error (IRE)
  • Insulation Resistance Accuracy (IRA)
  • Unique error terms

4.2.3 Instrument Channel Design Requirements Design requirements applicable to the instrument channel should be defined, including, as applicable:

4.2.3.1 Regulatory Requirements

  • Technical Specifications
  • Safety Analysis Reports
  • NRC Safety Evaluation Reports
  • Regulatory Guides 1.89, 1.97 and 1.105

4.2.3.2 Functional Requirements

  • Instrument function
  • Analytical and Safety Limits
  • Operational Limits
  • Function Times
  • Requirements imposed by plant procedures, Emergency Operating Procedures (EOPs), etc.
  • For indicator or computer channels, allowable channel error (CE)

Each of these aspects is discussed below.

4.2.4 Defining Instrument Channel Design Requirements 4.2.4.1 Regulatory Requirements

a. Technical Specifications Technical Specifications requirements are of importance for setpoints and instrument channels covered within the Technical Specifications.

Requirements of importance are Surveillance intervals, Allowable Values and Nominal Trip Setpoints specified in the Technical Specifications. Existing values in the Technical Specifications should be reviewed, even for new setpoint calculations, because it is usually desirable to preserve the existing Technical Specifications values if they can be supported by the setpoint calculations. Thus, the Technical Specifications values (particularly the Allowable Value and Nominal Trip Setpoint) are used in evaluating the acceptability of calculation results, and may also be used in the evaluation of As-Found and As-Left Tolerances and determination of Required Limits (if used).

b. Safety Analysis Reports, NRC SERs, 10 CFR 50, Regulatory Guides While the Technical Specifications are the key documents to examine for regulatory commitments or requirements, the balance of the plant licensing documentation may contain commitments or agreements reached with the NRC, as well as system-specific requirements that may affect setpoint calculations.

Normally, all such commitments or requirements should also be reflected in the applicable plant specifications and documents. However, the licensing documentation should be considered in assuring commitments are known.

4.2.4.2 Functional Requirements

a. Instrument Function Instrument functional requirements are normally contained in system Design Specifications, Design Specification Data Sheets, Instrument Data Sheets and similar documents. The functional requirements to be determined should not only include the purpose of the setpoint, but also the plant operating conditions or operating modes under which the trip is required to be operable, and identification of the most severe conditions under which the trip should be avoided.

The plant operating conditions under which a trip must be operable should be correlated to the licensing basis events so that the questions of trip environment, absence or presence of seismic loads, etc., can be answered.

b. Analytical and Safety Limits
  • The Licensing Safety Limit (LSL) is the value of a safety parameter that must not be violated in order to assure plant safety. In the case of a safety situation for which there is an accident or transient analysis, the safety limit is the limit that the analysis is intended to support. For situations where there is no transient analysis, such as the pressure limit for a section of pipe, the Safety Limit or Normal Process Limit (NPL) would be the limit assumed in design (the design pressure and temperature of the pipe, for example).

  • The Analytical Limit (AL) is a slightly different concept. The Analytical Limit is the value at which the trip is assumed to occur as part of the analyses that prove the Safety Limit is satisfied. For the example of pipe pressure, if there is a stress analysis that assumes a particular event is terminated, by instrument action, at or before a certain pressure is reached, then the pressure at which the instrument is assumed to react, to terminate the event, is the Analytical Limit for that event, even if it is different from the design pressure of the piping.

  • The section of this document dealing with the actual setpoint calculations gives more specific guidance on how to select the Analytical Limit to be used.


c. Operational Limits (OL)

Operational Limits are the values of the measured parameter which may occur during plant operation, and at which it would be undesirable to have a trip occur.

Usually, there is one limiting Operational Limit for a given setpoint. In certain cases, such as High Drywell Pressure, there may be no credible operating condition, short of the design basis accident (which requires a trip). In such situations, there would be no Operational Limit.

d. Function Times
  • Function times should be identified for every instrument channel requiring either a setpoint calculation or channel error calculation. The function time is important because it is used to determine the worst rational environmental conditions for use in determining instrument error.

Caution should be exercised in determining function times. This is because the function time selected for a particular case can have a very large impact on instrument error calculations, and this in turn can have a significant impact on the setpoint and the risk of spurious trip. That is, over-conservative function times lead to over-conservative setpoints and higher spurious trip risk. Since spurious trips can themselves lead to safety system challenges, the ultimate result of over-conservative function times can be a situation that is counterproductive to overall safety.

In determining the function time for a particular setpoint, attention should be given to the conditions under which the operator depends most on the automatic actions triggered by the setpoint.

For example, consider a reactor water level signal intended to start the ECCS system in the event of a Loss of Coolant Accident. The operator depends most on the automatic function during the first 10 minutes of the event, before reactor power is significantly reduced and before the operator has had an opportunity to take control of the situation. During this early period of a LOCA, the core is not yet uncovered, and therefore no core damage or major radioactive release would be expected. The operator could reset the water level trip devices after the event, but since the reactor would then be shut down, and rapidly changing water levels would no longer be credible, the need for trip accuracy would be considerably reduced. Thus, it is appropriate to base the trip setpoint on the conditions existing in the first 10 minutes, without assuming core damage (it should be noted, however, that environmental conditions used for Equipment Qualification might indicate otherwise, since they assume failures).

Note: All setpoints, controls or indications need only be evaluated to the worst environmental conditions present at the time their function is required.


e. Requirements Imposed by Plant Procedures (EOPs, etc.)

As defined in Appendix L, plant operating procedures, particularly Emergency Operating Procedures, should be considered in defining the functions of instruments.

This is particularly important in connection with instrument function times, since the plant procedures define the extent to which the operator may depend on the instrumentation, and the events for which this dependence is most important. Engineering judgment must be exercised in evaluating the effect of operating procedures. For example, while a particular procedure may require the operator to reset a particular trip device, the reset requirement does not necessarily imply that the instrument must react as accurately in a subsequent trip. Thus, the first trip, prior to the operator taking control, may still be the appropriate basis for the setpoint calculation.

Engineering judgment and a good understanding of the design bases of the plant must be applied to identifying the impact of Plant Procedures on the functional requirements applicable to the instrumentation.

f. Allowable Channel Error (CE)

As defined in Section 2.2, Channel Error Indication Uncertainty, for certain types of channels, particularly indicator channels and channels which supply signals to computers and data collection systems, there may be requirements on the maximum allowable error in the channel. Such requirements may be imposed by the purpose of the indicating functions (such as a Plant procedure requirement), or by the use that is made of the data. The manner in which the instrument data is used should be evaluated to determine if there are any inherent limits on acceptable channel error, independent of the setpoint calculation.

4.2.5 Data Collection

All data collected should be referenced to its source (document number, title, and revision level) and recorded in the Input, Output, or Reference Section of the calculation, so that the basis for the setpoint or channel error calculations will be traceable to the proper plant documents.

4.3 Determining Individual Device Error Terms

4.3.1 Determining Individual Device Accuracies

As defined in Section 2.2, the overall accuracy error for any individual device is developed by combining all the individual error contributions identified by vendor performance specifications or device qualification tests.

As a means of assuring consideration of all terms, it is useful to view the accuracy error of the device in terms of the factors that might cause the device to exhibit errors.

That is, what external or internal effects might affect the performance of the device? The answer to this question is straightforward: device accuracy may be influenced by the inherent precision of the internal components, plus errors caused by each and every external (environmental) influence on the device. Specifically, the following potential causes of accuracy error should be considered for any given device:

a. Vendor Accuracy (VA)
b. Accuracy Temperature Effect (ATE)
c. Overpressure Effect (OPE)
d. Static Pressure Effect (SPE)
e. Seismic Effect (SE)
f. Radiation Effect (RE)
g. Humidity Effect (HE)
h. Power Supply Effect (PSE)
i. RFI/EMI Effect (REE)

The identification of these potential effects is not intended to indicate that they apply to all devices. First of all, some suppliers of instrumentation provide a single value of accuracy error, which may already include all or many of the external environmental effects listed above (within some bounding environment specified by the vendor).

Guidance and information for some common devices is provided in Appendices A and C to this document. Additionally, Appendix L, Graded Approach to Uncertainty Analysis, provides guidance on the rigor with which elements of device uncertainty should be considered during a calculation.

Following identification of potential effects, each of the error terms should be examined to determine if it may be treated as a random term, or whether dependencies may exist which would include systematic or bias error as described in Appendix C, Sections C.1.1 and C.1.2.

Once all the accuracy error contributions for a particular instrument are identified, they should be combined using the SRSS method to determine total device accuracy. In performing the SRSS combination, the individual level of confidence of each term (sigma level) should be accounted for such that the resultant device accuracy error is a 2 sigma value. Refer to Section C.4 for cases where instruments are calibrated together as a rack.

Ai = ±N[(VAi/n)^2 + (ATEi/n)^2 + (OPEi/n)^2 + (SPEi/n)^2 + (SEi/n)^2 + (REi/n)^2 + (HEi/n)^2 + (PSEi/n)^2 + (REEi/n)^2]^1/2 ± any bias term associated with the above random errors   (2σ)

Where the values of 'n' are the sigma values associated with each individual effect (i.e., 1, 2, 3) and N is 2 for a 2 sigma value of Ai.
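
For illustration only (this sketch is not part of the methodology), the SRSS combination above can be expressed in Python. The error values and sigma levels below are hypothetical, not taken from any CPS calculation; the sketch simply normalizes each random term to 1 sigma, combines the terms by SRSS, and scales the result to 2 sigma, with any bias carried separately.

    import math

    # Hypothetical random error terms for one device, as (value in % span, sigma level).
    # Terms that do not apply to a given device are simply omitted.
    random_terms = {
        "VA": (0.25, 2), "ATE": (0.50, 2), "OPE": (0.10, 3),
        "SPE": (0.20, 2), "SE": (0.15, 2), "PSE": (0.05, 3),
    }
    bias = 0.10   # hypothetical bias term, handled outside the SRSS
    N = 2         # report the device accuracy as a 2 sigma value

    # Normalize each term to 1 sigma (value / n), combine by SRSS, and scale to N sigma.
    srss_1sigma = math.sqrt(sum((value / n) ** 2 for value, n in random_terms.values()))
    Ai = N * srss_1sigma
    print(f"Ai = +/-{Ai:.3f} % span (2 sigma), plus a bias of {bias:+.2f} % span")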

Generally, two accuracy terms are required for setpoint calculations: accuracy under normal plant operating conditions (Ai(normal)) and accuracy under the conditions for which the circuit will be required to trip (Ai(accident/seismic)).

The Setpoint Program Coordinator can provide sample calculations.

4.3.2 Determining Individual Device Drift

Drift for individual devices is determined in a manner similar to that of accuracy.

Vendor Drift (VD): Refer to Section 2.2 for definition.

The Vendor Drift term should be adjusted to the surveillance interval for that device. In accordance with References 5.1 and 5.3 this adjustment is made by multiplying the value of VD by the square root of the ratio of the surveillance interval (M) to the drift interval associated with the vendor data.

Example (six-month drift interval specification):

VDM = (M/6)^1/2 × VD(6-month)

Refer to Appendix I, Standard Assumptions, for the sigma value.
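
For illustration only, the square-root-of-time extrapolation above can be sketched as follows. The 6-month vendor drift value and the 30-month surveillance interval (a 24-month cycle plus a 25% grace period) are hypothetical.

    import math

    vd_6month = 0.5      # hypothetical vendor drift, % span per 6 months
    drift_interval = 6   # months covered by the vendor drift specification
    M = 30               # assumed surveillance interval, months

    # VDM = (M / drift interval)**(1/2) * VD, per the extrapolation above
    vdm = math.sqrt(M / drift_interval) * vd_6month
    print(f"VD({M} months) = +/-{vdm:.3f} % span")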

Further information on drift for specific types of commonly used instruments, is provided in Appendix A.

Several cautions should be noted concerning drift calculations, specifically:

The functional life of the device must exceed the assumed surveillance interval. This is because the extrapolation of drift to longer surveillance intervals fundamentally assumes the instrument is qualified for, and expected to perform normally for, the intended length of service. The drift allowance is intended to account for natural long-term variations in the performance of a basically 'healthy' instrument, not instrument failures.

Drift calculations should be consistent with observed performance. Surveillance testing (As-Found and As-Left data) gives an indication of apparent drift. The surveillance test data is not pure drift, since it is masked by accuracy, calibration errors, and other contributors as described in Section C.3.4; however, calculation models exist that permit evaluating drift performance. Conversely, good apparent performance in surveillance testing may be used to justify improvements in the assumed drift values used in setpoint or channel error calculations. This is a very important consideration, since the setpoint calculation methods assume drift is a random variable, such that drift for longer intervals is determined using the SRSS method. The USNRC may require that drift assumptions be validated based on field data (the use of field data to validate drift assumptions is discussed in Appendices A and C).

4.3.3 Determining Device Calibration Tolerances

Four key considerations have been introduced in other sections of these guidelines concerning calibration tolerances. These are:

a. As Found Tolerance (AFTi): Refer to Section 2.2 for definition.
b. As-Left Tolerance (ALTi): Refer to Section 2.2 for definition.
c. The Calibration Tool Error (Ci): Refer to Section 2.2 for definition and Appendix H for guidance.
d. The Calibration Standard Error (CSTD): Refer to Section 2.2 for definition. Per Standard Assumptions in Appendix I,Section I.11, this value is considered negligible.

The first two of these terms are arbitrary. That is, AFT is typically calculated as shown below; however, it may be rounded in a conservative manner to force a more limiting value in order to preserve an existing setpoint (see Section 4.4.5 for Loop AFT). ALT is established by the personnel who develop the calibration and surveillance procedures. Once established, these values should be used in the setpoint and channel error calculations.

Generally, ALT is set equal to VA; however, ALT will be considered a 2 sigma value. In the absence of other guidance, this methodology recommends that the terms be established as follows:

AFTi = ±(N)[(ALTi/n)^2 + (Ci/n)^2 + (Di/n)^2]^1/2   (2σ)

ALTi = ±VAi   (2σ)

Where N represents the number of standard deviations with which the value is evaluated to (normally 2 standard deviations) and n represents the sigma value for each device.
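
For illustration only, the two tolerance equations above can be sketched as below. The ALT, C, and D values are hypothetical 2 sigma numbers, and the rounding to M&TE precision described in the NOTE later in this section is shown only schematically.

    import math

    ALT_i, C_i, D_i = 0.25, 0.10, 0.61   # hypothetical 2 sigma values, % span
    n = 2                                # sigma level of each input term
    N = 2                                # report AFT as a 2 sigma value

    AFT_i = N * math.sqrt((ALT_i / n) ** 2 + (C_i / n) ** 2 + (D_i / n) ** 2)
    AFT_i = round(AFT_i, 2)   # round to an assumed 0.01 % span calibration procedure resolution
    print(f"ALT = +/-{ALT_i} % span, AFT = +/-{AFT_i} % span")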

Refer to Section 2.2 for definitions and Sections C.3.16 and C.3.17 for additional guidance.

Typically, ALT was established in calibration procedures equal to VA. However, per Sections 2.2.5 and 4.2.2.3, the ALT established in plant procedures should be used. If a smaller tolerance is needed in order to preserve a setpoint, then plant personnel should be contacted for concurrence prior to its use in the calculation. If the ALT established in calibration procedures is smaller than VA, then the calculation should use VA, so that plant personnel could relax the tolerance if desired.

NOTE: The AFT and ALT values should be converted to the engineering units required by the calibration procedure and rounded to the precision of the M&TE used. In cases where values are established for indication, the values should consider the readability of the device and be rounded to the next minor division.

These guidelines have been established because they permit surveillance procedure error bands, which are consistent with the types of errors that may be present during calibration.

4.4 Determining Loop/Channel Values

4.4.1 Determining Loop Accuracy (AL)

Loop Accuracy must be determined in such a way as to be compatible with the various setpoint and channel error calculations. Loop Accuracy shall be determined to a level of confidence corresponding to 2 standard deviations (2σ).

In order to determine Loop Accuracy, the accuracy of all devices in the loop must be determined (with a known or assumed sigma value associated with each), adjusted to a common sigma value (2), and then combined to produce the value of Loop Accuracy. All bias effects related to any of the devices shall be separated from the random portion of the accuracy data and will be dealt with separately, such that the individual device accuracy values may be assumed to be approximately random, independent, and normally distributed.

All individual device errors shall be determined on the basis of the environmental conditions (normal, trip, post accident, etc.) applicable to the event (and function time) for which the Loop Accuracy applies.

Once the individual device accuracy errors have been identified and characterized to a common sigma value (2), they are combined by the SRSS method to find the Loop Accuracy:

AL = ±(A1^2 + A2^2 + ... + Ai^2)^1/2 ± any bias terms   (2σ)

Normally, two distinct values of loop accuracy must be determined using the equation above. These are the normal loop accuracy (AL(normal)) and the accuracy under accident or seismic conditions or both (AL(accident/seismic)).
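
For illustration only, the loop combination can be sketched as below; the two sets of device accuracies (normal and accident/seismic, both assumed to already be 2 sigma values) are hypothetical.

    import math

    def loop_accuracy(device_accuracies):
        # SRSS of individual 2 sigma device accuracies; any bias is carried separately.
        return math.sqrt(sum(a ** 2 for a in device_accuracies))

    A_normal = [0.6, 0.3]     # hypothetical transmitter and trip unit accuracies, % span
    A_accident = [2.4, 0.3]   # the same devices under assumed accident/seismic conditions

    print(f"AL(normal) = +/-{loop_accuracy(A_normal):.3f} % span")
    print(f"AL(accident/seismic) = +/-{loop_accuracy(A_accident):.3f} % span")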

Two important cautions must be noted concerning Loop Accuracy. First, the devices included in Loop Accuracy must be consistent with the signal path of interest (i.e., every device from the signal source to the point at which the setpoint trip is produced or the channel output utilized).

Secondly, the term 'devices' is not intended to restrict the calculation to hardware, or to include hardware that is treated uniquely elsewhere in the setpoint calculations.

'Devices' may include software.

4.4.1.1 The following devices are typically included in Loop Accuracy:

(1) Transmitters
(2) Trip Units
(3) Signal Conditioners/Multiplexers/Network Resistors
(4) Software errors associated with signal processing
(5) Anything that introduces a random, non-time-dependent error in the signal from source to point of use, unless handled elsewhere in the setpoint calculations.

4.4.1.2 The following are exceptions, which are normally not included in determination of loop accuracy:

(1) Process measurement errors (PMA) and the errors of the Primary Element (PEA) are treated separately.

(2) Errors due to Insulation Degradation (IRA) are treated separately.

4.4.2 Determining Loop As-Left Calibration Tolerances (ALTL)

Refer to Section 2.2 for definition and Section 4.3.3 for component As-Left Tolerance (ALTi).

Loop As-Left Tolerance (ALTL) is calculated by combining the individual component As-Left tolerances (ALTi). Once the calculated Loop As-Left Tolerance has been determined by the SRSS of component As-Left Tolerances, this value should be compared to existing calibration procedure Loop As-Left Tolerances. If feasible, it is desired to retain existing procedural Loop As-Left Tolerances. Selection and use of existing procedural As-Left Tolerances is desired since these values already consider readability of test equipment.

If the procedural Loop As-Left Tolerance is retained, this value shall be used in the development of CL and AFTL and listed in the calculation results summary. Likewise, if the calculated Loop As-Left Tolerance is selected, this value shall be used in the development of CL and AFTL and will be listed in the calculation results summary. If selecting the calculated Loop As-Left Tolerance, consideration should be given to the readability of the test equipment. The selected As-Left Tolerance shall be considered a 2σ value.

If it is desired to implement an ALTL less than the existing procedural ALTL, I&C Maintenance should be contacted for concurrence.

NOTE: The ALTL value shall be converted to the engineering units required by the calibration procedure and rounded to the precision of the M&TE used. In cases where values are established for indication, the values should consider the readability of the device and be rounded to the next minor division.

The formula is shown as follows:

ALTL = ±(N)[(ALT1/n)^2 + (ALT2/n)^2 + ... + (ALTi/n)^2]^1/2   (2σ)

Where N represents the number of standard deviations with which the value is evaluated to (normally 2 standard deviations) and n represents the sigma value for each device.

4.4.3 Loop Calibration Error (CL)

Loop Calibration Errors may be established by the organization responsible for calibration. Generally, Loop Calibration Error shall be calculated at a 2 sigma confidence level as shown in Section 4.4.3.1.

There are three basic components of Loop Calibration error, see Section 2.2 for definitions. These are the following:

a. ALT±
b. Ci
c. CSTD

It is important to note that Ci and CSTD are controlled by 100% testing per procedure CPS 1512.01, Reference 5.24. For these reasons, it is assumed that the Ci and CSTD values represent 3 sigma values.

4.4.3.1 The process of determining Loop Calibration Error is performed in two steps. The first step is to review the loop diagram and calibration procedures to determine what calibration tools are used and how many times each is used in establishing the calibration of the loop. This is a function of the plant-specific calibration procedures.

Typically, the calibration of a particular loop containing a transmitter and trip unit involves the use of only one pressure source and the alarm indication at the ATM. Once the device usage is determined, the loop calibration tool error is determined by combining the errors by SRSS. In the above example, there would be 4 terms in the SRSS calculation (ALTi for each instrument, and a Ci and CSTD value for the pressure source gauge).

CL = ±N[Σ(ALTi/n)^2 + Σ(Ci/n)^2 + Σ(CSTD/n)^2]^1/2   (2σ)

Where N represents the number of standard deviations with which the value is evaluated to (normally 2 standard deviations) and n represents the sigma value for each device.

Further discussion on M&TE is provided in Appendix H.
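
For illustration only, the CL combination for the transmitter-plus-trip-unit example above can be sketched as follows. The tolerance and M&TE values are hypothetical; Ci and CSTD are treated as 3 sigma values per the assumption stated earlier in this section, and there are four terms in the SRSS, as described above.

    import math

    N = 2
    alt_terms = [(0.25, 2), (0.25, 2)]   # (ALTi, sigma) for transmitter and trip unit, hypothetical
    c_terms = [(0.15, 3)]                # (Ci, sigma) for the single pressure source, hypothetical
    cstd_terms = [(0.05, 3)]             # (CSTD, sigma) for its calibration standard, hypothetical

    CL = N * math.sqrt(sum((v / n) ** 2 for v, n in alt_terms + c_terms + cstd_terms))
    print(f"CL = +/-{CL:.3f} % span (2 sigma)")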

4.4.4 Determining Loop Drift (DL)

Loop Drift must be determined in such a way as to be compatible with the various setpoint and channel error calculations.

In order to determine Loop Drift, the drift of all devices in the loop must be determined (with a known or assumed sigma value associated with each) and then combined to produce the value of Loop Drift. Any bias effects related to any of the devices shall be separated from the drift data and dealt with separately, such that the individual device drift values may be assumed to be approximately random, independent, and normally distributed.

All individual device drifts must be determined on the basis of the environmental conditions applicable to the initial and subsequent surveillance tests and device calibrations (generally, temperature variations between subsequent calibrations).

DL = ±N[(D1/n)^2 + (D2/n)^2 + ... + (Di/n)^2]^1/2 ± any bias terms   (2σ)

Where N represents the number of standard deviations with which the value is evaluated to (normally 2 standard deviations) and n represents the sigma value for each device.

Two important cautions must be noted concerning Loop Drift. First, the devices included in Loop Drift must be consistent with the signal path of interest (i.e., every device from the signal source to the point at which the setpoint trip is produced or the channel output utilized).

Secondly, the term 'devices' is not intended to restrict the calculation to hardware, or to include hardware that is treated uniquely elsewhere in the setpoint calculations.

4.4.4.1 The following devices are typically included in Loop Drift:

(1) Transmitters
(2) Trip Units
(3) Signal Conditioners/Multiplexers/Network Resistors (if these devices exhibit drift)
(4) Anything that introduces a time-dependent change in the signal from source to point of use.

4.4.5 Determining Loop As-Found Calibration Tolerances (AFTL)

Key considerations have been introduced in other sections of these guidelines concerning individual loop errors used to calculate AFTL. These are:

1. Loop Calibration Error (CL): Defined in Section 2.2 and calculated in Section 4.4.3.
2. Loop Drift Error (DL): Defined in Section 2.2 and calculated in Section 4.4.4.

To calculate AFTL, loop calibration equipment and drift tolerances should be combined using the SRSS methodology.

AFTL is calculated as follows:

AFTL = ±(N)[(CL/n)^2 + (DL/n)^2]^1/2   (2σ)

NOTE: The AFTL value shall be converted to the engineering units required by the calibration procedure and rounded to the precision of the M&TE used. In cases where values are established for indication, the values should consider the readability of the device and be rounded to the next minor division.

Where N represents the number of standard deviations with which the value is evaluated to (normally 2 standard deviations) and n represents the sigma value for each device.
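
For illustration only, a hypothetical loop calibration error and loop drift can be combined into the loop As-Found Tolerance as sketched below, including a schematic conversion and rounding to assumed calibration-procedure units (inches of water).

    import math

    CL, DL = 0.29, 1.12    # hypothetical 2 sigma loop values, % span
    n, N = 2, 2
    span_inches = 400.0    # hypothetical loop span, inches of water

    AFTL_pct = N * math.sqrt((CL / n) ** 2 + (DL / n) ** 2)
    AFTL_eng = round(AFTL_pct / 100.0 * span_inches, 1)   # convert to engineering units and round
    print(f"AFTL = +/-{AFTL_pct:.3f} % span = +/-{AFTL_eng} inches of water")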

This provides assurance that the loop is functional and the AV is protected.

These guidelines have been established because they permit surveillance procedure error bands, which are consistent with the types of errors that may be present during calibration.

4.4.6 Determining Process Measurement Accuracy and Primary Element Accuracy (PMA/PEA)

Per the definitions in Section 2.2 and the discussion in Appendix C, Process Measurement Accuracy (PMA) and Primary Element Accuracy (PEA) are generalized terms used in channel error calculations and setpoint calculations to account for measurement errors which lie outside the normal calibration bounds of the channel. For example, consider the case of a venturi flow meter connected to a differential pressure transmitter and trip unit. The normal surveillance testing of the instrument channel would concern itself with the transmitter and trip unit. The flow meter might have been calibrated by some sort of test, but it is not part of the instrument channel. On the other hand, it very definitely is part of the measurement process.

The use of PMA and PEA in the channel evaluation is a matter of engineering judgment. These two categories are defined as a means of reminding the engineer to account for everything that affects the performance of the instrument loop. Since both PMA and PEA are treated identically in the setpoint and channel error calculations, it is not important which effects are assigned to each value, as long as the effects are assigned in such a way that there is a proper separation/combination of independent and dependent effects. This point is best illustrated by a few examples.

Keep the definitions (Section 2.2) of the terms in mind:

The following paragraphs illustrate various instrument systems and application of these two definitions.

4.4.6.1 Flow Measurement

As discussed in Appendix E, Flow Measurement Uncertainty Effects, consider a flow measurement system consisting of a flow meter, such as a venturi, instrument lines connecting the flow meter to a differential pressure transmitter, and the transmitter itself. The device in contact with the process is the flow meter itself. The flow meter is therefore the Primary Element. There is some fundamental error or uncertainty in the differential pressure at the instrument line connections on the meter, due to the design of the flow meter, as-built dimensions, etc. This error may consist of both a bias term and a random component. These random and bias errors are both components of Primary Element Accuracy (PEA).

The connection between the flow meter (primary element) and the transmitter (sensor) is made using instrument lines. The density of the fluid in these lines will vary with the ambient temperatures in the spaces through which the lines are routed. These density changes will affect the pressure transmitted from the primary element to the sensor. This effect can be considered negligible if the sensing lines of a differential pressure transmitter are routed together and can be shown to be affected by the same ambient temperature. These errors inherent in the use of the instrument lines are Process Measurement Accuracy.

4.4.6.2 Water Level Measurement

Refer to Appendix F, Level Measurement Temperature Effects. A water level measurement system, particularly in a BWR, may consist of a condensing chamber, sensing lines (variable and reference leg), and differential pressure transmitters. In a manner similar to that in paragraph 4.4.6.1, we would normally classify the elevation uncertainty associated with the condensing chamber as PEA. The errors due to ambient temperature fluctuations, and their effects on instrument line fluid density, would be considered to be PMA.

4.4.6.3 Temperature Measurement

A typical temperature measurement system may consist of a temperature detector, such as a thermocouple or resistance temperature detector, and a temperature switch. In this case, the temperature detector could be treated as a sensor, much in the same fashion as a pressure detector.

However, the temperature detector is generally not calibrated with the channel. For this reason, the errors of the temperature detector are usually treated as PEA.

There is no PMA in this case.

4.4.6.4 General Guidance

In general, PMA and PEA are treated in the calculations as random, independent variables. Therefore, random effects assigned to PEA and PMA should be independent of each other. However, if they are determined to be a bias, then they will be dealt with separately. The boundaries between PMA and PEA are a matter of convenience and judgment. The most important factor is that all potential error sources arising anywhere in the process, from the true variable desired to be measured all the way to the sensor in the instrument channel, must be considered in error calculations, as PMA, PEA, or as some other error term.

4.4.7 Determining Other Error Terms

The fundamental objective of the calculation of setpoints or channel errors is to incorporate all reasonably expected error sources, as well as any that are part of the licensing commitments applicable to the plant. As part of the design or calculation process, the responsible engineer should consider whether additional error terms should be considered. The following paragraphs discuss several potential error sources. It is up to the responsible engineer to determine whether these are applicable, and, if applicable, to define the error values.

4.4.7.1 Indicator Reading Error (IRE)

As defined in Section 2.2 and further discussed in Appendix C, Section C.3.13, if a particular channel error calculation is intended to define the potential errors in data which is manually recorded based on reading indicators or gauges, the error in reading the scale on the indicator must be considered. This error must be established on a case-by-case basis. In general, it is a question of the scale divisions, scale curvature, etc. (see Section 4.3.3 for the discussion of AFT and ALT).

4.4.7.2 Resistors, Multiplexers, etc.

The signal processing hardware is not the only source of significant error in some types of instrument channels. Channels that supply signals to computer inputs, recorders, etc., are sometimes set up to measure the voltage drop across a resistor in the circuit. The resistor accuracy (1%, for example) may introduce a significant error into the voltage measurement. Similar signal transmission devices, such as multiplexers, may introduce errors which must be considered.

4.4.7.3 Software Errors

With the increased use of instrument channels which provide data to microprocessors and computers, where that data is manipulated and then used to trigger some action or provide data, the software used becomes important.

Software that influences the use of data may introduce errors which should be considered for applicability.

4.4.7.4 Degradation of Insulation Resistance Accuracy Error (IRA)

References 5.22, 5.23, and 5.24 may provide a bounding IRA value to use if the device is identified in these calculations. However, if a more precise IRA value is needed for an identified device, or a non-identified device requires an IRA to be established, then the guidance provided in Appendix D shall be used. Appendix D addresses the effect of Insulation Resistance (IR) on uncertainty under certain accident conditions, particularly steam environments, where the insulation resistance of cables, terminal blocks, and other devices may be reduced, producing larger than expected leakage currents which degrade signals. This error (IRA) is defined in Section 2.2. The applicability of IRA depends on both the accident environment and the time of function. Many reactor protection setpoints, which are intended to prevent accident consequences, are not subject to IRA because of timing considerations. IRA, on the other hand, may significantly affect certain post-accident monitoring functions. These types of errors are generally determined as part of equipment qualification programs.

4.4.8 Channel Error Calculation

As defined in Section 2.2, Channel Error Indication Uncertainty, Channel Error is determined when there are requirements for channel uncertainty independent of a Safety Related Setpoint. Typically, there are three situations where Channel Error is of interest. These are (1) Non-Safety Related Setpoints, (2) when the channel serves an indicator/recorder/control function and the accuracy must be known (RG 1.97 indicators, information for operators, etc.), and (3) channels which supply information to data collection systems, computer systems, etc.

The channel error is determined by:

CE = ±(1.645/N)(SRSS OF RANDOM TERMS) ± BIAS TERMS

Typically calculated and shown as below:

CU = ±N[PMA^2 + PEA^2 + AL^2 + (CL/n)^2 + (DL/n)^2]^1/2 ± B   (2σ)

Where N represents the number of standard deviations to which the value is evaluated (normally 2 standard deviations) and n represents the sigma value for each device.

And CE = ±(1.645/N)(CU^2 + IRE^2)^1/2 ± Bias Terms

Note: A (1.645/N) adjustment to channel error is applicable to non-safety setpoints and required indicator readings that have a limit approached in one direction (single sided interest).
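
For illustration only, the channel uncertainty and channel error equations above can be evaluated as sketched below. All input values are hypothetical, their sigma conventions are assumed to follow the definitions of this standard, and the (1.645/N) factor is applied only because the example assumes a single-sided, non-safety indication limit.

    import math

    PMA, PEA, AL = 0.50, 0.25, 1.20   # hypothetical random terms, % span
    CL, DL = 0.29, 1.12               # hypothetical loop calibration error and drift
    IRE, B = 0.50, 0.10               # hypothetical indicator reading error and bias
    n, N = 2, 2

    CU = N * math.sqrt(PMA**2 + PEA**2 + AL**2 + (CL / n) ** 2 + (DL / n) ** 2)
    CE = (1.645 / N) * math.sqrt(CU**2 + IRE**2) + B
    print(f"CU = +/-{CU:.3f} % span, CE = +/-{CE:.3f} % span (single sided)")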

4.4.8.1 The RANDOM TERMS that should be considered include the following:

(1) Loop Accuracy (AL) under the worst environmental conditions applicable to the channel function
(2) Loop Calibration Error (CL)

(3) Loop Drift (DL)

(4) Process Measurement Accuracy (PMA)

(5) Primary Element Accuracy (PEA)

(6) Indicator Reading Error (IRE) if applicable.

(7) Any other random terms expected to be present for the indicator and/or computer channel function (such as software errors)

Refer to definitions in Section 2.2.

4.4.8.2 The BIAS TERMS that should be considered include:

(1) Any bias associated with Process Measurement or the Primary Element (PMA/PEA)

(2) The bias component of Insulation Resistance Accuracy Error (IRA)

(3) The bias portion of readout errors (IRE).

(4) The bias portion of any other unique terms known to exist (including drift and software bias).

4.4.9 Setpoints with no Analytical Limit or Allowable Value

In some cases it is necessary to determine setpoints when there are no Tech. Spec. Allowable Values or Analytical Limits. As discussed in Section 2.2.47, the NPL is a limit, high or low, beyond which the normal process parameter should not vary.

NTSP(INC) = NPL - CE
NTSP(DEC) = NPL + CE

Note: A (1.645/N) adjustment should be made when calculating CE for non-safety setpoints and required indicator readings (single sided interest).
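
For illustration only, a hypothetical non-safety high alarm is sketched below; the Normal Process Limit and channel error values are illustrative only.

    NPL = 92.0   # hypothetical Normal Process Limit, % span (high limit)
    CE = 2.6     # hypothetical channel error, % span, already including the (1.645/N) adjustment

    NTSP_inc = NPL - CE   # increasing process: set the trip below the NPL by the channel error
    NTSP_dec = NPL + CE   # decreasing process: set the trip above the NPL
    print(f"NTSP(INC) = {NTSP_inc} % span, NTSP(DEC) = {NTSP_dec} % span")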

4.4.10 Determining Analytical Limits (AL)

Analytical Limits are used in calculating the Nominal Trip Setpoint and Allowable Value (if required). Methods of calculating Analytical Limits are not within the scope of these guidelines. However, the process by which the designer determines an Analytical Limit is of interest.

Per Section 2.2, the Analytical Limit is "the value of the sensed process variable established as part of the safety analysis, prior to or at the point which a desired action is to be initiated to prevent the safety process variable from reaching the associated licensing safety limit".

NEDC-31336, Reference 5.1, includes a discussion of the source of the Analytical Limits applicable to the set of key setpoints for which direct credit is taken in the Safety Analysis Report. For setpoints not discussed in Reference 5.1, the following guidelines are provided for determining Analytical Limits:

a. The first step for determination of an Analytical Limit is to determine the purpose of the particular setpoint. That is, what event is the setpoint intended to mitigate, prevent or initiate?
b. Once the event of interest is identified, determine what assumptions have been made in the system design or analysis regarding the setpoint. These assumptions may be explicit in the design or implicit.
c. The value of the sensed process variable which corresponds to the design assumptions for that event is the Analytical Limit.

The key question is what value of the sensed variable corresponds to the design assumptions. This correspondence may be indirect. For example, a setpoint intended to isolate a line on high flow would have a design basis in terms of flow rate, whereas the Analytical Limit and setpoint calculations would be done in terms of the differential pressure across the flow measurement device, corresponding to the flow rate at which the isolation is assumed to occur. As another example, consider a setpoint intended to limit pressurization of a pipe. In this case, the Analytical Limit may be the design pressure of the pipe, but not always. If the stress analysis of the pipe assumes some peak pressure in the pipe different from the design pressure, the assumed peak pressure corresponding to the event for which the setpoint is intended, less any transient overshoot, would be the Analytical Limit. When in doubt, the organization that provided the design bases and/or analyses of the system or component should be consulted to ensure proper identification of the Analytical Limit. Trip setpoints associated with non-safety related functions are typically based on the process limit, high or low, beyond which the normal process parameter should not vary.

This limit is defined as the Normal Process Limit (NPL).

4.4.11 Allowable Value Calculation (AV)

If the setpoint in question is contained in Technical Specifications and is required to have an Allowable Value, the Allowable Value (AV) should be calculated using one of the following equations, depending on the direction of process variable change when approaching the Analytical Limit. The first equation is for process variables that increase to trip, and the second equation is for process variables that decrease to trip.

AV(INC) = AL - (1.645/N)(SRSS OF RANDOM TERMS) - BIAS TERMS
AV(DEC) = AL + (1.645/N)(SRSS OF RANDOM TERMS) + BIAS TERMS

Or, as further described by Sections 4.4.11.1 and 4.4.11.2:

AV(INC) = AL - (1.645/N)(PMA^2 + PEA^2 + AL^2)^1/2 - B
AV(DEC) = AL + (1.645/N)(PMA^2 + PEA^2 + AL^2)^1/2 + B

Where N represents the number of standard deviations to which the value is calculated (normally 2 standard deviations).

Note: A (1.645/N) adjustment is applicable to setpoints that have a limit approached in one direction (single sided interest).
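
For illustration only, the increasing-process Allowable Value equation can be evaluated as sketched below, using hypothetical values in psig; the bias is applied in the conservative (safety) direction as required by Section 4.4.11.2.

    import math

    analytical_limit = 1045.0            # hypothetical Analytical Limit, psig
    PMA, PEA, AL_trip = 2.0, 1.0, 8.5    # hypothetical random terms under trip conditions
    B = 1.5                              # hypothetical bias, applied conservatively
    N = 2

    srss = math.sqrt(PMA**2 + PEA**2 + AL_trip**2)
    AV_inc = analytical_limit - (1.645 / N) * srss - B   # process increases toward the limit
    print(f"AV(INC) = {AV_inc:.1f} psig")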

Per Sections 4.5.1.l and 4.4.13.a, if the existing Tech. Spec. AV is conservative relative to the calculated AV and is therefore preserved, then the existing AV should be used in any other sections requiring the AV, unless a change in the AV is desired.

4.4.11.1 The RANDOM TERMS that should be considered for particular AV calculations include the following:

(1) Loop Accuracy under Trip conditions (AL(trip))

(2) Process Measurement Accuracy (PMA)

(3) Primary Element Accuracy (PEA)

(4) The random portion of any other unique terms known to exist for a particular instrument application, excluding Drift.

4.4.11.2 BIAS TERMS that should be considered are:

(1) Any Biases associated with Process Measurement or the Primary Element (PMA/PEA).

(2) The bias component of Insulation Resistance Error (IRA).

(3) The bias portion of any other unique terms known to exist (including drift and software bias).

It should be noted that the sign applied to bias terms should be conservative relative to plant safety (i.e., credit should not be taken for a beneficial bias unless it can be assured that the beneficial bias will always be present).

4.4.12 Setpoints with Allowable Values

The NTSP should be calculated using one of the equations below, depending on the direction of process variable change when approaching the Analytical Limit. The first equation is for process variables that increase to trip, and the second equation is for process variables that decrease to trip.

NTSP(INC) = AV - AFTL
NTSP(DEC) = AV + AFTL
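
For illustration only, the relationship between the Allowable Value, the loop As-Found Tolerance, and an existing setpoint can be sketched as below for an increasing process; all values are hypothetical.

    AV = 1036.0                  # hypothetical Allowable Value, psig
    AFTL = 6.0                   # hypothetical loop As-Found Tolerance, psig
    existing_setpoint = 1025.0   # hypothetical setpoint already in the calibration procedure

    NTSP_inc = AV - AFTL
    # Per Section 4.4.12.1, an existing setpoint that is conservative to the calculated
    # NTSP (here, at or below it for an increasing process) may be retained.
    retain_existing = existing_setpoint <= NTSP_inc
    print(f"NTSP(INC) = {NTSP_inc} psig; retain existing setpoint: {retain_existing}")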

4.4.12.1 Selecting Actual Setpoints

The actual setpoint used in calibrating instrumentation may not be the value of the NTSP calculated. The choice of the actual setpoint to be used in the plant is a matter of evaluating setpoint conservatism compared to the AV and operational preferences. In other words, the existing plant setpoint may be conservative to the calculated setpoint and AV and pose limited impact on plant operations or spurious trips. This in-plant (existing) setpoint would satisfy both the calculation requirements and plant operation; as such, the channel would not require a setpoint revision. The existing setpoint becomes the NTSP and is used in any other sections requiring the NTSP.

4.4.12.2 Evaluation of Trip Reset Value

The reset setting is a variable percent-of-span adjustment of the trip setpoint. CPS calibration procedures typically set it at 3% of span (i.e., if the trip is set at 100%, the reset is shown as 97%). The same AFT and ALT are placed on the trip setpoint as well as the reset; however, it is not possible for the trip to be found low in its band while the reset is found high. Areas to consider are as follows:

a. The loop has both a high and low setpoint, with the resets overlapping, thus potentially both alarms at the same time.
b. When calculated AFT is greater than the reset in calibration procedure.
c. Both trip and reset require a NTSP calculation to provide different functions.

The reset value may require an adjustment different from the typical setting of 3% of span.
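
For illustration only, the reset check described above can be sketched as below, assuming a hypothetical high trip at 100% of span, the typical 3% of span reset, and a calculated loop As-Found Tolerance; the sketch simply flags the situation in item b, where the AFT band is wider than the reset deadband.

    trip_setpoint = 100.0   # hypothetical high trip, % span
    reset_deadband = 3.0    # typical reset setting, % of span below the trip
    AFTL = 3.8              # hypothetical calculated loop As-Found Tolerance, % span

    reset_value = trip_setpoint - reset_deadband
    if AFTL > reset_deadband:
        print(f"AFT band (+/-{AFTL}% span) exceeds the reset deadband ({reset_deadband}% span); "
              f"a reset setting other than {reset_value}% of span may be required")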

4.4.13 Evaluating Results and Resolving Problems

The evaluation of results depends to some extent on the ultimate goal of the setpoint calculations. If there is no existing setpoint in use, no evaluation may be necessary.

However, in the more normal case, there is already an existing setpoint and, in some cases, Technical Specifications requirements. In this case, the evaluation of results should include:

a. Evaluate the calculated Nominal Trip Setpoint and Allowable Value against existing values. If existing values are not supported by the calculations, determine whether or not it is desirable to preserve the existing values.


b. If existing values are to be preserved, investigate iteration opportunities and revise the calculations.

4.4.13.1 Iteration to Resolve Setpoint Problems

There are usually opportunities for iteration as a means of resolving problems with a calculated setpoint, short of modifying instrument installations or hardware. As a minimum, the following alternatives should be considered:

(1) Modify the Analytical Limit. Frequently, the analyses that are the source of the analytical limit have margin. Changing the analytical limit to take credit for existing analysis margins is a powerful way to optimize setpoint calculations, since it has no impact on instrumentation or instrument error allowances. Further, there are many situations (even in plant transient or accident analyses) where relatively simple parameter studies can be used to adjust the analytical limit without re-doing the actual transient or accident analyses.

(2) Re-evaluate environmental assumptions. Many environmental assumptions are driven by worst case licensing assumptions, which may not be appropriate to instrument error analyses. For example, it makes no sense to use an environment that assumes plant conditions that the instrument setpoint of interest is designed to prevent. Environmental assumptions may also be optimized by careful consideration of trip timing, and by refining the analyses that predict environmental conditions.

(3) Re-evaluate calibration errors. Use of different calibration instruments, modified As-Found or As-Left Tolerances can be used to change calibration error allowances and improve setpoint calculations.

(4) Re-evaluate drift assumptions. Consider using statistical analyses of actual as-found and as-left data from surveillance testing to justify improved drift allowances.

(5) Evaluate other assumptions in setpoint calculations, such as function requirements for the instrumentation, trip timing, surveillance intervals, etc.

(6) Examine instrument applications. For example, for setpoints heavily impacted by a predicted radiation dose, a change from a standard model to a radiation resistant model of the same instrument can have major benefits (changing from a Rosemount 1153B "PI' output to an 1153B "R" output, for example).

4.5 Calculation of Nominal Trip Setpoints and Indication/Control Loops

The individual calculations associated with setpoint and channel error evaluations are outlined below. The engineer performing the calculations should determine which calculations apply to the particular situation, based on the guidance provided.

4.5.1 Setpoint with Analytical Limit

The following steps shall be performed for a Setpoint with Analytical Limit:

a. Calculate the individual device accuracy (Ai) per Section 4.3.1.
b. Calculate the individual device As-Left Tolerance (ALTi) per Section 4.3.3.
c. Calculate the loop As-Left Tolerance (ALTL) per Section 4.4.2.
d. Calculate the individual device Calibration Error (Ci) per Section 4.3.3.
e. Calculate the loop Calibration Error (CL) per Section 4.4.3.
f. Calculate the individual device drift error (Di) per Sections 4.3.2.
g. Calculate the loop Drift Error (DL) per Section 4.4.4
h. Calculate the individual device As-Found Tolerance (AFTi) per Section 4.3.3.
i. Calculate the loop As-Found Tolerance (AFTL) per Section 4.4.5
j. Develop PMA, PEA, IRA, and other error terms per Sections 4.4.6 and 4.4.7 as applicable.
k. Calculate the Allowable Value (AV) from the Analytical Limit (AL) per Sections 4.4.10 and 4.4.11.
l. Compare the calculated Allowable Value to the existing Technical Specification AV. Use the existing AV if conservative, unless it is desired to revise the existing Technical Specifications.
m. Calculate the Nominal Trip Setpoint (NTSP) from the Allowable Value per Section 4.4.12.
n. Consider whether adequate separation exists between the Nominal Trip Setpoint and Allowable Value to avoid LERs.


o. Use the existing setpoint if conservative, unless it is desired to revise it. Then select a setpoint to be used in the calibration procedure that is bounded by the Nominal Trip Setpoint.
p. Evaluate the Trip Reset Value
q. Optimize calculations, if necessary, to validate existing Technical Specifications, designs, etc.

4.5.2 Indication/Control Loop

The following steps shall be performed for an Indication/Control Loop:

a. Calculate values per Section 4.5.1.a through 4.5.1.j.
b. Calculate the channel uncertainty (CU) and channel error (CE) per Section 4.4.8.
c. Optimize calculations, if necessary, to validate existing Technical Specifications, designs, etc.

Note: If the indication loop also provides indication of a specific reading as required by the Tech. Specs, then Sections 4.5.1.k through 4.5.1.o should be addressed for that indicated reading (in lieu of a setpoint).

4.5.3 Setpoint without Analytical Limit

The following steps shall be performed for a Setpoint without Analytical Limit:

a. Calculate values per Section 4.5.1.a through 4.5.1.j.
b. Calculate the channel uncertainty (CU) and channel error (CE) per Section 4.4.8.

c. Identify the Nominal Process Limit (NPL) per Section 4.4.9. This might also be given as an Allowable Value.

d. Calculate the Nominal Trip Setpoint (NTSP) from the Nominal Process Limit using the channel error per Section 4.4.9.
e. Use the existing setpoint if conservative, unless it is desired to revise it. Then select a setpoint to be used in the calibration procedure that is bounded by the Nominal Trip Setpoint.
f. Optimize calculations, if necessary, to validate existing designs, etc.

4.5.4 The following table lists the equations developed in Sections 4.3 and 4.4 for the different calculation scenarios described above.

Setpoint/Indication/Control Calculation Formulas

Section 4.3.1, Device Accuracy (Ai):
Ai = ±N[(VAi/n)^2 + (ATEi/n)^2 + (OPEi/n)^2 + (SPEi/n)^2 + (SEi/n)^2 + (REi/n)^2 + (HEi/n)^2 + (PSEi/n)^2 + (REEi/n)^2]^1/2 ± any bias term associated with the above random errors   (2σ)

Section 4.4.1, Loop Accuracy (AL):
AL = ±(A1^2 + A2^2 + ... + Ai^2)^1/2 ± any bias terms   (2σ)

Section 4.3.3, Device As-Left Tolerance (ALTi):
ALTi = ±VAi   (2σ)
(See the discussion in Section 4.3.3 on whether to use the ALT from calibration procedures or establish it as VA.)

Section 4.4.2, Loop As-Left Tolerance (ALTL):
ALTL = ±(N)[(ALT1/n)^2 + (ALT2/n)^2 + ... + (ALTi/n)^2]^1/2   (2σ)

Section 4.3.3, Device Calibration Tolerances:
Guidance for M&TE is given in Appendix H.

Section 4.4.3, Loop Calibration Error (CL):
CL = ±N[Σ(ALTi/n)^2 + Σ(Ci/n)^2 + Σ(CSTD/n)^2]^1/2   (2σ)

Section 4.3.2, Device Drift (Di):
VDM = (M/6)^1/2 × VD(6-month)
(Refer to Appendix I, Standard Assumptions, for the sigma value.)

Section 4.4.4, Loop Drift (DL):
DL = ±N[(D1/n)^2 + (D2/n)^2 + ... + (Di/n)^2]^1/2 ± bias terms   (2σ)

Section 4.3.3, Device As-Found Tolerance (AFTi):
AFTi = ±(N)[(ALTi/n)^2 + (Ci/n)^2 + (Di/n)^2]^1/2   (2σ)

Section 4.4.5, Loop As-Found Tolerance (AFTL):
AFTL = ±(N)[(CL/n)^2 + (DL/n)^2]^1/2   (2σ)

Sections 4.4.6 and 4.4.7:
Determine PMA, PEA, IRA, and other error terms.

For Setpoint Calculations with Analytical Limit:

Sections 4.4.10 and 4.4.11, Allowable Value (AV):
AV(INC) = AL - (1.645/N)(SRSS OF RANDOM TERMS) - BIAS TERMS
AV(DEC) = AL + (1.645/N)(SRSS OF RANDOM TERMS) + BIAS TERMS
Typically calculated and shown as below:
AV(INC) = AL - (1.645/N)(PMA^2 + PEA^2 + AL^2)^1/2 - B
AV(DEC) = AL + (1.645/N)(PMA^2 + PEA^2 + AL^2)^1/2 + B
Note: A (1.645/N) adjustment is applicable to setpoints that have a limit approached in one direction (single sided interest).

Section 4.4.12, Nominal Trip Setpoint (NTSP):
NTSP(INC) = AV - AFTL
NTSP(DEC) = AV + AFTL

For Indication/Control Calculations only:

Section 4.4.8, Channel Error (CE):
CE = ±(SRSS OF RANDOM TERMS) ± BIAS TERMS
Typically calculated and shown as below:
CU = ±N[PMA^2 + PEA^2 + AL^2 + (CL/n)^2 + (DL/n)^2]^1/2 ± B   (2σ)
And CE = ±(CU^2 + IRE^2)^1/2 ± Bias Terms

For Setpoints without Analytical Limit and/or Indication/Control:

Section 4.4.8, Channel Error (CE):
CE = ±(1.645/N)(SRSS OF RANDOM TERMS) ± BIAS TERMS
Typically calculated and shown as below:
CU = ±N[PMA^2 + PEA^2 + AL^2 + (CL/n)^2 + (DL/n)^2]^1/2 ± B   (2σ)
And CE = ±(1.645/N)(CU^2 + IRE^2)^1/2 ± Bias Terms
Note: A (1.645/N) adjustment to channel error is applicable to non-safety setpoints or required indicator readings that have a limit approached in one direction (i.e., increasing or decreasing only, but not both; single sided interest).

Section 4.4.9, Nominal Trip Setpoint (NTSP):
NTSP(INC) = NPL - CE, or NTSP(DEC) = NPL + CE

In each of the above formulas, N represents the number of standard deviations to which the value is evaluated (normally 2 standard deviations) and n represents the sigma value for each device.


5.0 REFERENCES

5.1 NEDC-31336, General Electric Improved Setpoint Methodology, October 1986 (GE Proprietary information).

5.2 NEDC-32889P, Rev. 2, General Electric Methodology for Instrumentation Technical Specification and Setpoint Analysis, February 2000. GE reference for use in Extended Power Uprate calculations.

5.3 ANSI/ISA S67.04, Setpoints for Nuclear Safety-Related Instrumentation, Parts I and II. Part I is the Standard and Part II is the Recommended Practice; see Part II, page 46, for a description of "Methods." Also, ISA dTR 67.04.09, Graded Approaches to Setpoint Determination, Draft Technical Report, 1994, and the subsequent version, Draft 4, May 2000.

5.4 GE Nuclear Energy internal procedures.

5.5 General Electric Document EDE-40-1189 (Rev. 0).

5.6 ANSI/ASME PTC 19.1-1985, Measurement Uncertainty. Establishes a basis for the principles of uncertainty analysis.

5.7 ASME MFC-3M-1989, Measurement of fluid Flow in Pipes Using Orifice, Nozzle, and Venturi Provides information regarding expected uncertainties and errors associated with flow measurement.

5.8 ASME 1967 Steam Tables Provides the basis for water density as a function of temperature and pressure. When used, the appropriate pages should be copied and made as an attachment to the calculation.

5.9 ANSI N42.18, American National Standard for Specification and Performance of On-Site Instrumentation for Continuously Monitoring Radioactivity in Effluents This standard establishes minimum expected performance standards for certain types of radiation monitoring equipment.

5.10 The Institute for Nuclear Power Operations (INPO) Good Practice TS-405, Setpoint Change Control Program.

Provides guidance for setpoint change control and implementation practice.

5.11 Regulatory Guide 1.105, Rev. 01, Setpoints for Safety-Related Instrumentation.

CPS has committed to Regulatory Guide 1.105, Rev. 01, for guidance relative to instrument setpoint preparation and control. Regulatory Guide 1.105 establishes the NRC's proposed endorsement of ISA-67.04. The discussion also provides the NRC's perspective on various technical areas related to setpoint methodologies and statistical analysis.

5.12 NRC Information Notice 92-12, Effects of Cable Leakage Currents on Instrument Settings and indications Information Notice 92-12 describes a potential problem related to instrument loop current leakage. During the high humidity and temperature conditions of a LOCA or HELB, insulation resistance can be degraded, thereby contributing to the measurement uncertainty of affected instrument loops.

5.13 ER-AA-520, Rev. 3, "Instrument Performance Trending" T&RM.

5.14 CPS 1512.01, Rev. 18a, Calibration and Control of Measuring and Test Equipment (M&TE), and MA-AA-716-040, Rev. 2, Control of Portable Measurement and Test Equipment Program.

These procedures establish generic requirements and controls for calibration and verification of Test Equipment and Reference Standards. Additionally, the administrative requirements for controlling M&TE are provided. These procedures establish the minimum requirements for M&TE control. This Engineering Standard assumes that M&TE is controlled in accordance with these directives.

5.15 CPS 8801.01, Rev. 13, Instrument Calibrations This procedure provides instructions for performing operations verification and calibration of single and multiple input devices as an individual instrument. It also includes instructions for development of Instrument Data Sheets.

5.16 CPS 8801.02, Rev. 12, Loop Calibrations This procedure provides instructions for performing operations verification and calibration of instrument loops. It also includes instructions for development of Loop Calibration Data Sheets.

5.17 CPS 8801.05, Rev. 15a, Corrections to Instrument Calibrations. This procedure provides instructions for scaling and applying corrections to setpoint data obtained from Engineering.

5.18 Not Used.

5.19 Assessment EA # 2003-06220, r/2, "Performance of Instrument Drift Analyses In Support of the Clinton Power Station 24 Month Refuel Cycle Project," dated 3/19/04.

5.20 CC-AA-309-1001, Rev. 0, Guidelines for Preparation and Processing of Design Analysis.

5.21 CC-AA-309, Rev. 3, Control of Design Analysis. This procedure establishes requirements and controls for preparation, review, documentation and approval of design analyses.

5.22 Calculation 01ME127, Rev.0, DBA Influence On Insulation-Resistance Related Instrument Errors This calculation determines the influence of design basis accident (DBA) conditions on containment instrumentation loop signal transmission systems (i.e., penetrations, cabling, splices, and conduit seals) and the consequent effect on the accuracy of measurement of safety-related process parameters. The calculation addresses those instrument loops which have the primary devices located inside containment and for which S&L has prepared instrument setpoint accuracy calculations per the requirements of Reg. Guide 1.105.

5.23 Calculation 01ME128, Rev. 0, DBA Influence On Insulation-Resistance Related Instrument Errors For GE RG 1.105 Instruments This calculation determines the influence of design basis accident (DBA) conditions on containment instrumentation loop signal transmission systems (i.e., penetrations, cabling, splices, and conduit seals) and the consequent effect on the accuracy of measurement of safety-related process parameters. The calculation addresses those instrument loops which have the primary devices located inside containment and for which GE has prepared instrument setpoint accuracy calculations per the requirements of RG 1.105.

5.24 Calculation CI-CPS-187, Rev. 0, DBA Influence On Insulation-Resistance Related Instrument Errors. This calculation provides information similar to that in Calculations 01ME127 and 01ME128. Also, this calculation determines the bounding influence on instrumentation loops for each generic circuit type (current source, voltage source, and bridge current source) that can be applied to similar circuits under harsh conditions. This calculation addresses instrument loops that have the primary devices located outside containment and for which Sargent & Lundy prepared Reg. Guide 1.105 instrument setpoint calculations.

5.25 Not Used.

5.26 NRC Generic Letter 91-04, "Changes in Technical Specification Surveillance Intervals to Accommodate a 24-Month Fuel Cycle," dated April 2, 1991.

5.27 NES-EIC-20.04, Rev. 3, "Analysis of Instrument Channel Setpoint Error and Instrument Loop Accuracy."

5.28 Honeywell 4450 Extended Analog System Input 4400 AG-T, Termination Assembly, K2801-0116A, Tab 15, and Analog Input Subsystem, K2801-0116B, Book 1, Tab 2. Vendor Manual and Specifications.

5.29 Record of Teleconference from Carl M. Ingram to J. Miller. File Nos. 126.5, S/U 33.1. 10/16/81.

5.30 IP-C-0089, Rev. 0, "M&TE Uncertainty Calculation."

5.31 ASTM Standard D257-91, Standard Test Methods for D-C Resistance or Conductance of Insulating Materials, Appendix XI.

5.32 EPRI TR-103335, Rev. 1, Statistical Analysis of Instrument Calibration Data. Guidelines for Instrument Calibration Extension/Reduction Programs.

5.33 EPRI TR-102644, Calibration of Radiation Monitors at Nuclear Power Plants
5.34 Regulatory Guide 1.97, Rev. 3, Instrumentation for Light-Water-Cooled Nuclear Power Plants to Assess Plant and Environs Conditions During and Following an Accident

5.35 Regulatory Guide 1.89, Rev. 0, Qualification of Class 1E Equipment For Nuclear Power Plants
5.36 DC-ME-09-CP, Rev. 11, "Equipment Environmental Design Conditions, Design Criteria"

5.37 CC-AA-103-2001, Rev. 0, "Setpoint Change Control"

6.0 APPENDICES This Engineering Standard includes Appendices organized to provide all required technical information necessary to prepare a CPS Instrument Setpoint Calculation. The Appendices are listed as follows:

Appendix A, GUIDANCE ON DEVICE SPECIFIC ACCURACY AND DRIFT ALLOWANCES
Appendix B, SAMPLE CALCULATION FORMAT
Appendix C, UNCERTAINTY ANALYSIS FUNDAMENTALS
Appendix D, EFFECT OF INSULATION RESISTANCE ON UNCERTAINTY
Appendix E, FLOW MEASUREMENT UNCERTAINTY EFFECTS
Appendix F, LEVEL MEASUREMENT TEMPERATURE EFFECTS
Appendix G, STATIC HEAD AND LINE LOSS PRESSURE EFFECTS
Appendix H, MEASURING AND TEST EQUIPMENT UNCERTAINTY
Appendix I, NEGLIGIBLE UNCERTAINTIES / CPS STANDARD ASSUMPTIONS
Appendix J, DIGITAL SIGNAL PROCESSING UNCERTAINTIES
Appendix K, PROPAGATION OF UNCERTAINTY THROUGH SIGNAL CONDITIONING MODULES
Appendix L, GRADED APPROACH TO UNCERTAINTY ANALYSIS
Appendix M, NOT USED
Appendix N, STATISTICAL ANALYSIS OF SETPOINT INTERACTION
Appendix O, INSTRUMENT LOOP SCALING
Appendix P, RADIATION MONITORING SYSTEMS
Appendix Q, Rosemount Letters
Appendix R, RECORD OF COORDINATION FOR COMPUTER POINT ACCURACY

Figure 2. Setpoint Relationships (diagram). From the most limiting to the least limiting value: Safety Limit; Analytical Limit (established by the transient analysis); Allowable Value; Loop As-Found Tolerance; Loop As-Left Tolerance; Selected Setpoint (NTSP); Loop As-Left Tolerance; Loop As-Found Tolerance; Operating Limit; Normal Operating Value (transient analysis).

APPENDIX A  GUIDANCE ON DEVICE SPECIFIC ACCURACY AND DRIFT ALLOWANCES

A.1 Overview In general, there are three parameters relating to Accuracy and Drift, which must be determined for any given device. These are Accuracy under normal conditions (Ai(normal)), Accuracy under trip conditions (Ai(trip)), and Drift (Di). There are two steps that must be taken to determine these values.

a. Identify the individual effects that may contribute to these errors.
b. Obtain numerical data on the identified individual effects.

In determining the effects that may contribute and in identifying the numerical values, consideration should be given to the following sources of information (in order of importance):

c. Clinton specific data from testing of actual instruments, surveillance records, qualification programs, etc.
d. Generic data from testing of actual instruments, surveillance data, qualification programs, etc.
e. Vendor supplied data sheets and data.
f. Purchase specifications for equipment
g. Generally accepted assumptions.

The purpose of this appendix is to provide guidance for the process described above.

A.2 Effects Expected to be Present in Accuracy and Drift Values

A.2.1 Accuracy As discussed in paragraph 4.3.1 and defined in Section 2.2, the following effects may typically be part of instrument accuracy (potentially, for both normal and trip conditions):

a. Vendor Accuracy (VA)
b. Accuracy Temperature Effect (ATE)
c. Overpressure Effect (OPE)
d. Static Pressure Effect (SPE)
e. Seismic Effect (SE)
f. Radiation Effect (RE)
g. Humidity Effect (HE)
h. Power Supply Effect (PSE)
i. RFI/EMI Effect (REE)

It may not be possible, in many cases, to determine all of the above effects. Qualification testing or vendor performance specifications may simply state a value for accuracy, and then stipulate a range of temperatures, radiation levels, seismic loads, humidity and other boundaries within which the value of accuracy is applicable. In such cases, there is no need to determine the separate effects.

A.2.1.a Rosemount Transmitter Devices In the absence of suitable vendor data, Clinton specific qualification data, or surveillance test data, GE recommends that the information in the following paragraphs be used. For a selected group of Rosemount devices GE has determined recommended accuracy assumptions based on generic qualification testing. This information has been provided to the USNRC (Reference 2.1) and used for many setpoint calculations accepted by the NRC.

A.2.1.a.(1) Rosemount Transmitters GE recommends that the following be used as a basis for determining normal and trip environment accuracies for Rosemount transmitters (models 1151, 1152-T0280, 1153 Series B, and 1154).

A.2.1.a.(1).(a) Vendor Accuracy (VA), Accuracy Temperature Effect (ATE), Power Supply Effect (PSE), Humidity Effect (HE) and RFI/EMI Effect (REE)

VA = 0.25% SP (3 Sigma)

ATE = (0.75% UR + 0.5% SP) (delta Ta)/100 (3 Sigma)

(double this value for Range Code 3)

PSE = 0.005% SP per volt (3 Sigma)

HE = 0 (included in VA)

REE = 0 (Normally negligible)

Determination of 'delta Ta' is discussed in paragraph A.2.3.
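The relationships above lend themselves to a small calculation aid. The Python sketch below evaluates the GE-recommended normal-condition terms (VA, ATE, and PSE) for a Rosemount transmitter from an assumed upper range limit, calibrated span, ambient temperature change, and supply voltage variation; the numeric inputs are illustrative placeholders, not values taken from this standard.

```python
def rosemount_normal_terms(ur, sp, delta_ta, delta_volts, range_code=5):
    """Return (VA, ATE, PSE) in percent of calibrated span, all treated as 3-sigma values."""
    va = 0.25                                           # VA = 0.25% SP
    ate = (0.75 * (ur / sp) + 0.5) * delta_ta / 100.0   # ATE = (0.75% UR + 0.5% SP)(delta Ta)/100
    if range_code == 3:
        ate *= 2.0                                      # value is doubled for Range Code 3
    pse = 0.005 * delta_volts                           # PSE = 0.005% SP per volt
    return va, ate, pse

# Illustrative placeholders only: a 0-150 psig calibrated span on a 0-300 psig upper range,
# a 30 degF ambient change from calibration conditions, and a 1 V supply variation.
print(rosemount_normal_terms(ur=300.0, sp=150.0, delta_ta=30.0, delta_volts=1.0))
```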

A.2.1.a.(1).(b) Overpressure Effect (OPE)

This effect varies depending on the instrument range, and is identified in Rosemount product data sheets. GE treats the resulting values as 3 Sigma values based on experience with the Rosemount data.

A.2.1.a.(1).(c) Static Pressure Effect (SPE)

As discussed in paragraph 4.3.1, SPE sometimes consists of several effects, some of which are random and some of which are bias. This is particularly the case with Rosemount differential pressure transmitters (note, SPE does not apply to absolute pressure or gage pressure transmitters). In the case of Rosemount transmitters, there are three SPE components: (1) a random zero point error, (2) a random span error, and (3) a bias span error. The bias span error is easily adjusted for as part of the calibration process (this is often done). If accommodated in the calibration process, it need not be included in the accuracy error calculations.

GE has found that the Rosemount manuals may be difficult to interpret concerning SPE. For this reason, the following summary is provided to describe definition of the Rosemount SPE.

The components of SPE are calculated as follows:

Random Zero Effect; SPEz = (Zero)% UR (delta P)/1000 (3 Sigma)

Random Span Effect; SPES = (Span)% SP (delta P)/1000 (3 Sigma)

Bias Span Effect; SPEBS = (BS)% SP (delta P)/1000 (3 Sigma)

Where 'delta P' is the pressure difference between the system pressure at calibration and the system pressure under trip conditions, and the terms SPEz, SPEs, and SPEBs are shown in Table A.1.

TABLE A.1  ROSEMOUNT STATIC PRESSURE EFFECT

Effect                          Range Code     1151DP    1152-T0280    1153B    1154
Random Zero Error (SPEz), %     3              0.25      0.25          0.50     N/A
                                4, 5           0.125     0.125         0.2      0.2
                                6, 7, 8        0.125     0.25          0.5      0.5
Random Span Error (SPEs), %     3              0.5       0.25          0.5      N/A
                                4, 5, 6, 7, 8  0.25      0.25          0.5      0.5
Bias Span Error (SPEBs), %      3              1.75      1.5           1.5      N/A
                                4              0.87      1.0           0.75     0.75
                                5              0.81      1.0           0.75     0.75
                                6              1.45      1.0           1.25     1.25
                                7              1.05      1.0           1.25     1.25
                                8              0.55      1.0           0.75     0.75
CPS Vendor Manual                              4256/57 (3/87)   K2801-091, Tab 1   K2801-091, Tab 2   M008-0002

NOTE: Rosemount manuals supplied with purchased instrumentation should be checked to determine if any changes apply to this information.
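As a worked illustration of the SPE relationships, the sketch below looks up the Table A.1 coefficients for one hypothetical case (a 1153 Series B, Range Code 4 differential pressure transmitter) and evaluates the random zero, random span, and bias span components for an assumed difference between the calibration and trip static pressures; the numbers are placeholders, not plant values.

```python
# Table A.1 coefficients for the assumed case (1153 Series B, Range Code 4); other
# model/range combinations would be read from the table above in the same way.
ZERO_PCT, SPAN_PCT, BS_PCT = 0.2, 0.5, 0.75

def static_pressure_effects(ur, sp, delta_p):
    """Return (SPEz, SPEs, SPEbs) in percent of calibrated span.

    ur      -- upper range limit (the zero effect is specified in percent of UR)
    sp      -- calibrated span
    delta_p -- static pressure difference between calibration and trip conditions (psi)
    """
    spe_z = ZERO_PCT * (delta_p / 1000.0) * (ur / sp)   # random zero effect (3 sigma), % UR converted to % span
    spe_s = SPAN_PCT * (delta_p / 1000.0)               # random span effect (3 sigma), % span
    spe_bs = BS_PCT * (delta_p / 1000.0)                # bias span effect; may be removed by calibration
    return spe_z, spe_s, spe_bs

# Illustrative only: an assumed 1000 psi difference between calibration and trip static pressure.
print(static_pressure_effects(ur=750.0, sp=300.0, delta_p=1000.0))
```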

A.2.1.a.(1).(d) Seismic Effect (SE)

Based on an evaluation of Rosemount test data, GE recommends the following:

SE = 0.23% UR (2 Sigma)

This equation applies to situations in which the Zero Period Acceleration (ZPA) at the mounting location of the transmitter does not exceed 1 "g" for the event of interest, and in which the transmitter is expected to be performing its trip function simultaneous with the seismic event.

SE = (0.03 ZPA + 0.20)% UR (2 Sigma)

Where ZPA exceeds 1 "g", but not 10 "g", and the transmitter is expected to be performing its trip function simultaneous with the seismic event.

SE = 0.25% UR (2 Sigma)

Where ZPA exceeds 2 "g", but the seismic event is expected to occur between the time of the last calibration and the time of trip, but not simultaneously.

If the seismic event ZPA does not exceed 2 "g", and the event is not simultaneous with the trip event, the effect on transmitter accuracy is negligible.
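The ZPA-dependent selection above can be captured in a small helper function. The thresholds and coefficients below are taken from the preceding paragraphs; the function is only a sketch of that selection logic.

```python
def rosemount_seismic_effect(zpa_g, simultaneous_with_trip):
    """Seismic effect (SE) in percent of UR (2 sigma), per the cases described above."""
    if simultaneous_with_trip:
        if zpa_g <= 1.0:
            return 0.23                        # SE = 0.23% UR
        if zpa_g <= 10.0:
            return 0.03 * zpa_g + 0.20         # SE = (0.03 ZPA + 0.20)% UR
        raise ValueError("the guidance above does not address ZPA > 10 g with a simultaneous trip")
    # Seismic event occurs between the last calibration and the trip, but not simultaneously.
    return 0.25 if zpa_g > 2.0 else 0.0        # negligible at or below 2 g

print(rosemount_seismic_effect(zpa_g=1.5, simultaneous_with_trip=True))
```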

A.2.1.a.(1).(e) Radiation Effect (RE)

GE does not recommend use of Rosemount model 1151 transmitters for trip applications for which the gamma Total Integrated Dose (TID) to time of trip exceeds approximately 10^4 RAD. Up to this value, the radiation effect on 1151 transmitters is negligible (plant specific EQ program data should be used to support use of 1151 transmitters in a radiation environment, if such data is available).

For the 1152-T0280 transmitter:

RE = (1.25X + 1.25)% UR (2 Sigma)

Where TID exceeds 0.1 MRAD, but does not exceed 0.4 MRAD. This effect should be multiplied by 1.68 for Range Code 3. There is no effect at or below 0.1 MRAD.

RE = (4.5X + 4.5)% UR (2 Sigma)

Where TID exceeds 0.4 MRAD, but not 20 MRAD. This effect should also be multiplied by 1.68 for Range Code 3.

The term "X" is defined as:

X = (setpoint of interest - instrument zero)/calibrated span

For the 1153 Series B transmitter with a "P" output:

RE = (3.0X + 3.0)% UR (2 Sigma)

Where TID exceeds 0.1 MRAD, but not 22 MRAD. There is no effect at or below 0.1 MRAD. This effect should also be multiplied by 1.68 for Range Code 3.

For the 1153 Series B transmitter with an "R" output:

RE = (1.5X + 1.5)% UR (2 Sigma)

Where TID exceeds 0.1 MRAD, but not 22 MRAD. There is no effect at or below 0.1 MRAD. This effect should also be multiplied by 1.68 for Range Code 3.

For the 1154 transmitter:

RE = (1.0X + 1.0)% UR (2 Sigma)

Where TID exceeds 0.5 MRAD, but not 50 MRAD. There is no effect at or below 0.5 MRAD. This effect should also be multiplied by 1.68 for Range Code 3.
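The model- and dose-dependent relationships above can likewise be written as a selection function. The sketch below covers only the 1152-T0280 case, with the fraction X computed from the setpoint, instrument zero, and calibrated span; extending it to the 1153 Series B and 1154 expressions would follow the same pattern. It is an illustration, not a substitute for the governing text.

```python
def re_1152_t0280(tid_mrad, setpoint, instrument_zero, calibrated_span, range_code=5):
    """Radiation effect for a Rosemount 1152-T0280, in percent of UR (2 sigma)."""
    x = (setpoint - instrument_zero) / calibrated_span    # X = (setpoint of interest - zero)/span
    if tid_mrad <= 0.1:
        re = 0.0                                          # no effect at or below 0.1 MRAD
    elif tid_mrad <= 0.4:
        re = 1.25 * x + 1.25                              # 0.1 MRAD < TID <= 0.4 MRAD
    elif tid_mrad <= 20.0:
        re = 4.5 * x + 4.5                                # 0.4 MRAD < TID <= 20 MRAD
    else:
        raise ValueError("the guidance above does not cover TID > 20 MRAD for this model")
    if range_code == 3:
        re *= 1.68                                        # multiplier for Range Code 3
    return re

# Illustrative only: a hypothetical 80 psig setpoint on a 0-100 psig span at 0.3 MRAD TID.
print(re_1152_t0280(tid_mrad=0.3, setpoint=80.0, instrument_zero=0.0, calibrated_span=100.0))
```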

A.2.1.a.(2) Rosemount Trip Units For unmodified Rosemount model 510DU and 710DU trip units use vendor specified data for instrument uncertainties. For trip units modified by GE (model number 147D8505G005), use GE Performance Specification 22A7866 for instrument uncertainties.

A.2.2 Drift As discussed in paragraph 4.3.2, there are two terms of interest in determining device drift. These are Vendor Drift (VD) and some time interval associated with VD (usually 6 months). These effects should be determined from vendor data, field data, or qualification data, if available.

A.2.2.a Rosemount Devices For a selected group of Rosemount devices GE has determined recommended drift assumptions based on generic qualification testing. This information has been provided to the USNRC (Reference 5.1) and used for many setpoint calculations accepted by the NRC. In the absence of suitable Clinton specific qualification data or surveillance test data GE recommends that the information in the following paragraphs be used.

A.2.2.a.(1) Rosemount Transmitters For Rosemount model 1151, 1152-T0280, 1153 Series B and 1154 transmitters refer to vendor supplied information for the appropriate drift term. Due to Rosemount correspondences in the year 2000, the Rosemount drift terms will conservatively be considered to be 2 sigma.

A.2.2.a.(2) Rosemount Trip Units For Rosemount model 510DU and 710DU trip units use the vendor specified data. For trip units modified by GE (model number 147D8505G005), use the GE Performance Specification 22A7866.

A.2.3 (Deleted)

A.2.4 Interpreting Vendor Data For many devices, it may be necessary to use vendor data sheets or specifications as the source of accuracy and drift information for setpoint calculations. However, vendors commonly use many different terms to describe the performance of their equipment. In addition, most vendors do not specify their data in terms of a probability of error (i.e., they don't say how many standard deviations their values represent). Therefore, interpretation is necessary.

When interpreting terminology, the definitions in Section 2.2 of this document should be used to ensure consistent interpretation.

For example, the definition of Channel Instrument Accuracy, paragraph 2.2.11, states that accuracy, as referred to in the CPS Setpoint Methodology, includes "the combined conformity, hysteresis and repeatability errors". Paragraph 2.2.11 also indicates certain terms, which are not considered to be part of accuracy.

Care should be exercised to relate the vendor-defined errors to the functions of the instrument channel. For example, a Rosemount trip unit with an analog indicator has two distinct sets of errors.

There are errors associated with the trip circuitry, which apply to a trip setpoint calculation. There are also errors associated with the analog indicator, which do not apply to the trip function, but which would apply if the purpose of the calculation is to define the error associated with readings taken using the analog indicator.

In some cases, vendors may not identify all errors of interest.

For some types of devices, vendors identify accuracy errors but no drift effects. In such cases, it is necessary to first determine whether or not there is satisfactory evidence that the omitted item (drift, for example) does not apply to this type of device. If available information is not convincing, it may be necessary to assume a value. Paragraphs A.2.5 and A.2.6 contain recommendations for establishing error terms on the basis of field data and/or conservative assumptions.

The final aspect of importance when interpreting vendor data is determining how many standard deviations (sigma values) the data represents. In general, this is an issue of how much confidence we have in the vendor data. Data may be qualitatively classified into three categories: (1) best estimate data, (2) worst case data which is backed by limited testing, and (3) worst case data backed by extensive qualification testing or testing of every delivered device. In the absence of information from a vendor specifying the sigma value associated with the data, GE recommends treating data as follows:

a. Best Estimates: Assume they are (1) sigma values.
b. Worst case data backed by limited testing: Assume two (2) sigma.
c. Worst case data extensively backed: Assume three (3) sigma.

Under normal circumstances, all vendor data will be one of the latter two cases (i.e., 2 or 3 sigma). This is because most vendors specify instrument performance in terms of guaranteed performance. In order to guarantee performance, the vendor must have considerable confidence in the data. A two (2) sigma value corresponds to a 95% probability value, while three (3) sigma corresponds to slightly greater than 99%. Thus, assignment of the sigma value to be assumed in the calculations is a question of the confidence placed in the vendor data.
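One way to make this assignment auditable is to record it explicitly when vendor data is entered into a calculation, as in the short sketch below; the category names are informal labels for the three cases described above.

```python
SIGMA_BY_DATA_QUALITY = {
    "best_estimate": 1,        # best estimate data
    "limited_testing": 2,      # worst case data backed by limited testing
    "extensive_testing": 3,    # worst case data backed by extensive qualification testing
}

def to_one_sigma(value, data_quality):
    """Normalize a vendor +/- value to a 1-sigma equivalent for later combination."""
    return value / SIGMA_BY_DATA_QUALITY[data_quality]

print(to_one_sigma(0.5, "limited_testing"))   # a +/-0.5% (2-sigma) spec becomes 0.25% at 1 sigma
```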

A.2.5 Interpreting Surveillance Test Data Surveillance test data can be a valuable source of information with which to improve the database and refine setpoint calculations.

The primary use of surveillance test data is in validating and/or refining drift assumptions, and in extending instrument surveillance intervals. The primary limitation associated with use of field data is that there must be a valid basis for assumptions as to what the data contains. For example, surveillance data is normally valid as a source of improved drift information, and may be used to estimate other surveillance test related errors, but is not a good source for validating accuracy assumptions. Instrument accuracies may be quite different under trip conditions than during surveillance testing.

The basic approach to use of surveillance test data is a three part approach:

a. Define, in terms of the values of interest (drift, etc.), what the surveillance data represents, as a means of defining how you will interpret the data.
b. Collect the surveillance data needed to provide a strong statistical basis.
c. Perform a statistical analysis of the data, and establish the desired values along with the associated sigma level for use in channel error calculations or setpoint calculations.

The areas of greatest potential benefit associated with surveillance test data analyses are the use of test data to validate reduced drift assumptions for existing surveillance test intervals, and the use of the data to predict revised drift values for longer surveillance test intervals. The latter is particularly useful in preparing justifications for temporary surveillance interval extensions in order to avoid undesired plant shutdowns for surveillance testing.

Detailed calculation models and methods for evaluating surveillance test data are beyond the scope of this document. Standard statistical methods may be used. In addition, References 5.1, 5.3, and 5.32 contain a detailed discussion of validating drift assumptions from surveillance test data.
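As a minimal illustration of the statistical step described above, the sketch below computes the sample mean and standard deviation of hypothetical as-found minus as-left differences and reports a 2-standard-deviation drift estimate. An actual analysis per References 5.1, 5.3, and 5.32 would also address outliers, normality, time dependence, and tolerance intervals; the data values here are invented.

```python
import statistics

# Hypothetical as-found minus as-left differences (% of span) from successive surveillances.
drift_samples = [0.12, -0.05, 0.20, 0.08, -0.11, 0.15, 0.02, -0.03, 0.09, 0.06]

mean_drift = statistics.mean(drift_samples)    # average drift (a potential bias component)
sigma_drift = statistics.stdev(drift_samples)  # sample standard deviation (random component)
drift_2sigma = 2.0 * sigma_drift               # 2-sigma random drift allowance

print(f"mean = {mean_drift:+.3f}% span, 2-sigma drift = {drift_2sigma:.3f}% span")
```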

A.2.6 Recommended Assumptions in the Absence of Data In the absence of better information, the following assumptions can be used in channel error and setpoint calculations:

a. Calibrating equipment accuracies are taken as 3 sigma values provided that the calibration of these devices is to NIST traceable standards and minimizes the effects of hysteresis, linearity and repeatability. The accuracies of the standards themselves are also taken to be 3 sigma values.
b. If Vendor Drift (VD) is not specified by the vendor or available from other sources, and if there is no basis for assuming drift is zero or negligible, assume VD equals Vendor Accuracy (VA) over the entire calibration period.

OR

If Vendor Drift (VD) is not specified by the vendor or available from other sources, and if there is no basis for assuming drift is zero or negligible, the following default values may be included for additional conservatism when preparing the analysis. The default drift effect values that will be used in these cases are:

  • Mechanical Components: ±1.0% of span per refueling cycle
  • Electronic Components: ±0.5% of span per refueling cycle

The intent of these default drift effect values (Reference 5.27, Appendix A) is to establish consistent values for this type of error for inclusion into the calculations to achieve additional conservatism when this data is not available, applicable, or published. Selection of these default drift effect values is the result of engineering review and judgment of industry practices, typical Reference Accuracy for these device types, and industry experience.

Choosing between these two involves a balance of the margins desired to the AL and the margins available to the operating limit.
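The choice between the two approaches can be captured in a small helper that returns whichever default is being applied; the component classification and the VA value are assumptions supplied by the analyst, and the percentages are the default values quoted above.

```python
def default_vendor_drift(vendor_accuracy, component_type=None):
    """Return a drift allowance (% of span per refueling cycle) when no vendor drift value exists.

    vendor_accuracy -- VA in % of span; used when drift is assumed equal to VA
    component_type  -- 'mechanical' or 'electronic' to apply the fixed default values instead
    """
    defaults = {"mechanical": 1.0, "electronic": 0.5}   # % of span per refueling cycle
    if component_type in defaults:
        return defaults[component_type]
    return vendor_accuracy                              # VD assumed equal to VA

print(default_vendor_drift(0.25, component_type="electronic"))
```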

A.2.7 Cautions Concerning Use of Qualification Program Data Plant specific data from Equipment Qualification programs is a valuable source of data on instrument performance, particularly regarding the various accident related accuracy error terms (Radiation Effect, Seismic Effect, etc.). However, care should be exercised in use of this data.

In many cases, Equipment Qualification programs have been conducted to prove that Class 1E equipment will function throughout its intended lifetime. Because the post-accident functions include indications for operator use, the environmental conditions used in EQ programs may include long term post-accident conditions, which do not apply to most setpoint calculations. Use of EQ results without taking into account less severe trip conditions can result in extreme conservatism. Overly conservative setpoints can impact plant operations and lead to unnecessary challenges to safety systems.


APPENDIX B  SAMPLE CALCULATION FORMAT

This sample presents the format used for a setpoint and indication/control calculation. An example of these types of calculations can be obtained from the Setpoint Program Coordinator. The calculation cover sheets are produced using Attachment 1 or 2 from Reference 5.20, depending on whether the calculation is a major or minor revision. The calculation shall reflect the name and order of major sections as shown in the Table of Contents below; however, it is only recommended that sections within each major section be presented as shown in this Attachment. For other types of calculations, such as NIs, APRMs, and Radiation Monitors, the major sections of this sample should be used along with the guidance of Appendix P. The Setpoint Program Coordinator can provide examples of what is shown within each major section.

TABLE OF CONTENTS

CALCULATION COVER SHEET (PAGE #)
TABLE OF CONTENTS (PAGE #)
1.0 OBJECTIVE (PAGE #)
2.0 ASSUMPTIONS (PAGE #)
3.0 METHODOLOGY (PAGE #)
4.0 INPUTS (PAGE #)
5.0 OUTPUTS (PAGE #)
6.0 REFERENCES (PAGE #)
7.0 ANALYSIS AND COMPUTATION SECTION(S) (PAGE #)
8.0 RESULTS (PAGE #)
9.0 CONCLUSIONS (PAGE #)

ATTACHMENTS ATTACHMENT 1, Scaling (# of pages)

ATTACHMENT 2, Results Summary (# of pages)

ATTACHMENT 3 (etc. as required) (# of pages)

1.0 OBJECTIVE Should state the purpose, functions, and objectives of the calculation, including the category that determines the amount of rigor required.

2.0 ASSUMPTIONS Other than CPS Standard Assumptions, there are two types that can be made: an assumption as to a value; or an assumption as to the quality of input information.

For each assumption, a judgment must be made as to whether confirmation is required or justification is provided to show it is reasonable. Refer to CC-AA-309 and CC-AA-309-1001 for further guidance.

All standard assumptions (see Appendix I, Section I.11) required by this calculation will be listed first. Any additional assumptions, as discussed above, will follow the standard assumptions.

3.0 METHODOLOGY Typical:

This calculation will determine the instrument uncertainty associated with the (Function - Description). The evaluation will determine the loop setpoint and Allowable Value for the (Function). Instrument uncertainty will be determined in accordance with CI-01.00, "Instrument Setpoint Calculation Methodology". The evaluation will then compare the current setpoint and Allowable Value with the results determined by this calculation.

M&TE error will be determined from the results of Calculation IP-C-0089, which uses building temperature minimum and maximums to develop the uncertainty, and review of the corresponding loop and device calibration procedures. Any changes to the calibration procedures will be shown in Attachment 2.

Per CI-01.00, Head Correction is determined by evaluating design drawings, survey data, and/or walk down data as applicable and calculated in Attachment 1.

4.0 INPUTS Inputs that cannot be easily retrieved from the CPS Document System should also be added as attachments. Typical: (Number, Revision Level, Title)

5.0 OUTPUTS Typical: (Number, Revision Level, Title)

Calibration procedures and other calculations as required.

6.0 REFERENCES

Typical: (Number, Revision Level, Title).

7.0 ANALYSIS AND COMPUTATION SECTION(S)

This section should list all of the equations identified in Section 4.5.11 of CI-01.00 for the type of calculation to be performed. All inputs, outputs, and references should be identified as required within the document (e.g., Input 4.1, Output 5.1, Ref 6.1). Titles can be shown in the document (typically not shown); however, revision levels shall only be identified in Sections 4.0, 5.0, and 6.0.

From CI-01.00, Section 4.5.11. Note: The individual terms and acronyms are defined in CI-01.00, Section 2.2.

7.1 Loop Function 7.2 Loop Diagram 7.3 Equations 7.3.1 Loop Accuracy (AL):

For component, A= i_) + (A iE)2+ (O___ iE) ( _+)2+

_; (-) + (-) + i) +( i )2+(R ;) +/-B n n n ) n n ( n +(n n (2o)

For the loop,

AL = ±√[ A1² + A2² + ... + An² ] ± B   (2σ)

7.3.2 Calculation of As-Left Values For each component, ALTi = (existing ALT or VA)   (2σ)

The loop As-Left Tolerance (ALT) will be calculated as follows:

ALTL = ±N √[ (ALT1/n1)² + (ALT2/n2)² + ... + (ALTn/nn)² ]   (2σ)

Where N represents the number of standard deviations with which the value is evaluated to (normally 2 standard deviations) and n represents the sigma value for each device.

7.3.3 Loop Calibration Error (CL):

CL = ±N √[ (ALTL/N)² + (C1/n1)² + ... + (Cn/nn)² + (CSTD/n)² ]   (2σ)

Where N represents the number of standard deviations with which the value is evaluated to (normally 2 standard deviations) and n represents the sigma value for each device.

7.3.4 Loop Drift (DL):

DL = ±N √[ (D1/n1)² + (D2/n2)² + ... + (Dn/nn)² ]   (2σ)

Where N represents the number of standard deviations with which the value is evaluated to (normally 2 standard deviations) and n represents the sigma value for each device.

7.3.5 Calculation of As-Found Values For each component,

AFTi = ±N √[ (ALTi/ni)² + (Ci/ni)² + (Di/ni)² ]   (2σ)

Where N represents the number of standard deviations with which the value is evaluated to (normally 2 standard deviations) and n represents the sigma value for each device.

The loop As-Found Tolerance (AFT) will be calculated as follows:

AFTL = ±N √[ (CL/N)² + (DL/N)² ]   (2σ)

Where N represents the number of standard deviations with which the value is evaluated to (normally 2 standard deviations) and n represents the sigma value for each device.

7.3.6 Channel Uncertainty (CU) and Channel Error (CE):

This Section is for non-safety setpoints, indication, and control loops, and need not be derived for Safety Related setpoints.

CU = ±N √[ (PMA/n)² + (PEA/n)² + (AL/N)² + (CL/N)² + (DL/N)² ] ± B   (2σ)

Where N represents the number of standard deviations with which the value is evaluated to (normally 2 standard deviations) and n represents the sigma value for each device.

And CE = ±√[ CU² + IRA² ] ± B

Note: An (1.645/N) adjustment to channel error is applicable to non-safety setpoints or required indicator readings that have a limit approached in one direction (single sided interest).

7.3.7 Setpoints with no Analytical Limits or Allowable Values

NTSP (INC) = NPL - CE
NTSP (DEC) = NPL + CE

7.3.8 Allowable Value Calculation

Allowable Value calculated for an increasing trip,

AV = AL - N √[ (PMA/n)² + (PEA/n)² + (AL/N)² ] - B

Allowable Value calculated for a decreasing trip,

AV = AL + N √[ (PMA/n)² + (PEA/n)² + (AL/N)² ] + B

Note: An (1.645/N) adjustment is applicable to setpoints that have a limit approached in one direction (single sided interest)

Note: The calculation of the AV does not include the CL and DL terms.

7.3.9 Nominal Trip Setpoint Calculation The Nominal Trip Setpoint (NTSP) should be calculated using the equations below depending on the direction of process variable change when approaching the Analytical Limit.

For process variables that increase to trip, NTSP = AV - AFTL

For process variables that decrease to trip, NTSP = AV + AFTL
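To make the flow of Sections 7.3.1 through 7.3.9 concrete, the sketch below chains the square-root-sum-of-squares combinations from device terms through the Allowable Value and NTSP for an increasing trip. It follows the reconstructed equations above with N = 2, neglects the calibration standard error, and uses made-up device values expressed in percent of span at the sigma levels shown in the comments; it is an illustration of the arithmetic, not a qualified calculation of record.

```python
from math import sqrt

def srss(terms):
    """Square root of the sum of the squares."""
    return sqrt(sum(t * t for t in terms))

N = 2.0  # number of standard deviations for the loop-level results (normally 2)

# Hypothetical device values, % of calibrated span, as (value, sigma level of that value).
accuracy = [(0.75, 2.0), (0.25, 2.0)]   # A_i for a transmitter and a trip unit (already 2 sigma)
as_left  = [(0.25, 2.0), (0.20, 2.0)]   # ALT_i
mte      = [(0.10, 3.0), (0.10, 3.0)]   # calibration tool errors C_i (3 sigma); C_STD neglected
drift    = [(0.50, 2.0), (0.20, 2.0)]   # D_i

A_L   = srss([v for v, _ in accuracy])                    # 7.3.1 loop accuracy (2 sigma)
ALT_L = N * srss([v / s for v, s in as_left])             # 7.3.2 loop as-left tolerance
C_L   = N * srss([ALT_L / N] + [v / s for v, s in mte])   # 7.3.3 loop calibration error
D_L   = N * srss([v / s for v, s in drift])               # 7.3.4 loop drift
AFT_L = N * srss([C_L / N, D_L / N])                      # 7.3.5 loop as-found tolerance

PMA, PEA, bias = (0.50, 2.0), (0.10, 2.0), 0.0            # process / primary element terms (assumed)
analytical_limit = 90.0                                   # % of span, made up for the example

AV = analytical_limit - N * srss([PMA[0] / PMA[1], PEA[0] / PEA[1], A_L / N]) - bias  # 7.3.8, increasing
NTSP = AV - AFT_L                                                                     # 7.3.9, increasing

print(f"A_L={A_L:.2f}  AFT_L={AFT_L:.2f}  AV={AV:.2f}  NTSP={NTSP:.2f}  (% span)")
```

For a decreasing trip the signs of the AV and NTSP adjustments reverse, and the single-sided 1.645/N factor noted above would apply where only one direction of approach matters.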

7.4 Determination of Uncertainties A section is required for each device in the loop as shown by the loop diagram in Section 7.2. In cases where there are multiple loops and one device depicted in the loop diagram has different manufacturer/model numbers (i.e., two channels where the sensor has two different model numbers), a section evaluating each manufacturer/model number is required and the worst case will be used in the Results, Section 8.0. Below is an example for a Rosemount transmitter:

7.4.1 Sensor/Transmitters; Calculations are typically performed in % Span and converted to engineering units as required in different sections of the calculation.

This is not a requirement; however, all values calculated for output to calibration procedures shall be in the units and precision necessary to support the calibration procedure.

7.4.1.1 Vendor Accuracy of pressure transmitters (VAPT)

Calculation or conversion if required. Refer to the Appendices for aid in developing the value.

VAPT = ± [ ] % Span   (?σ)

7.4.1.2 Accuracy Temperature Effect

7.4.1.2.1 Normal Accuracy Temperature Effect (ATEPT(Normal))

Calculation or conversion if required. Refer to the Appendices for aid in developing the value. Use the standard assumption when no vendor information is available.

ATEPT(Normal) = ± [ ] % Span   (?σ)

7.4.1.2.2 Accident Accuracy Temperature Effect (ATEPT(Accid))

This Section is based on the time when the function is required; it may need to be calculated. Refer to the Appendices for aid in developing the value.

Also, refer to the EQ manuals for more information.

ATEPT(Accid) = ± [ ] % Span   (?σ)

7.4.1.3 Humidity Effect (HEPT)

Calculation or conversion if required. Refer to the Appendices for aid in developing the value. Use the standard assumption when no vendor information is available.

HEPT = ± [ ] % Span   (?σ)

7.4.1.4 Radiation Effect

7.4.1.4.1 Normal Radiation Effect (REPT(Normal))

Calculation or conversion if required. Refer to the Appendices for aid in developing the value. Use the standard assumption when no vendor information is available.

REPT(Normal) = ± [ ] % Span   (?σ)

7.4.1.4.2 Accident Radiation Effect (REPT(Accid))

This Section is based on the time when the function is required; it may need to be calculated. Refer to the Appendices for aid in developing the value.

Also, refer to the EQ manuals for more information.

REPT(Accid) = ± [ ] % Span   (?σ)

7.4.1.5 Power Supply Effects of pressure transmitters (PSEPT)

Calculation or conversion if required. Refer to the Appendices for aid in developing the value. Use the standard assumption when no vendor information is available.

PSEPT = ± [ ] % Span   (?σ)

7.4.1.6 Static Pressure Effect (SPEPT)

Calculation or conversion if required. Refer to the Appendices for aid in developing the value.

SPEPT = ± [ ] % Span   (?σ)

7.4.1.7 Overpressure Effect (OPEPT)

Calculation or conversion if required. Refer to the Appendices for aid in developing the value.

OPEPT = ± [ ] % Span   (?σ)

7.4.1.8 Seismic Effect

7.4.1.8.1 Normal Seismic Effect (SEPT(Normal))

Use the standard assumption.

SEPT(Normal) = 0

7.4.1.8.2 Accident Seismic Effect (SEPT(Accid))

Per Section C3.14, a seismic event coincident with a LOCA is a design basis event per USAR 15.6.5. However, per USAR 15.6.5.1.1, there are no realistic, identifiable events which would result in a pipe break inside the containment of the magnitude required to cause a loss-of-coolant accident coincident with a safe shutdown earthquake. Therefore, each setpoint calculation should consider the larger effect of a seismic event or a loss-of-coolant accident.

SEPT(Accid) = 0

7.4.1.8.3 OBE/SSE Seismic Effect (SEPT(Seismic))

Refer to the Appendices for aid in developing the value. Also, refer to the SQ manuals for more information.

SEPT(Seismic) = ± [ ] % Span   (?σ)

7.4.1.9 RFI/EMI Effect (REEPT)

Use the standard assumption, if applicable, or review historical work packages and vendor data to build a justifiable assumption.

REEPT = 0

7.4.1.10 Bias (BPT)

Refer to Appendix C for guidance.

BPT = ± [ ] % Span   (?σ)

7.4.1.11 Pressure Transmitter Accuracy Refer to Section 7.3.1 for the formula.

7.4.1.11.1 Normal Pressure Transmitter Accuracy (APT(Normal))

APT(Normal) = ± [ ] % Span   (?σ)

7.4.1.11.2 Accident Pressure Transmitter Accuracy (APT(Accid))

Calculated the same as normal; however, the accident uncertainties replace the similar normal uncertainties.

APT(Accid) = ± [ ] % Span   (?σ)

7.4.1.11.3 Seismic Pressure Transmitter Accuracy (APT(Seismic))

Calculated the same as normal; however, the seismic uncertainty replaces the normal seismic uncertainty.

APT(Seismic) = ± [ ] % Span   (?σ)

7.4.1.11.4 Pressure Transmitter Accuracy (APT)

Based on the above, use the largest uncertainty calculated under [normal/accident/seismic] conditions to determine AV, NTSP, and CE. Therefore:

APT = ±APT(normal/accident/seismic)

APT = ± [ ] % Span   (?σ)

7.4.2 Loop Accuracy (AL)

Refer to Section 7.3.1 for the formula.

AL = ± [ ] % Span   (2σ)

7.5 As-Left Values (ALT)

Each device in the loop requires an ALTi.

For each component, ALTi = (existing ALT or VA) units   (3σ)

The loop As-Left Tolerance (ALTL) will be calculated as follows:

Refer to Section 7.3.2 for the formula.

ALTL = ± [ ] units   (2σ)

7.6 Loop Calibration Error (CL)

Refer to Section 7.3.3 for the formula.

7.6.1 As-Left Tolerance (ALTL)

Refer to Section 7.5 for values.

ALTL = ± [ ] % Span   (2σ)

7.6.2 Calibration Tool Error (Ci)

Each device requires a calibration tool error.

7.6.2.1 Transmitter Calibration Tool Error (CPT)

Refer to M&TE calculation IP-C-0089 for maximum values; however, if extra margin is required, refer to Appendix H for additional guidance.

CPT = ± [ ] % Span   (3σ)

7.6.3 Calibration Standard Error (CSTD):

Per Assumption [ ], Calibration Standard Error is considered negligible for the purposes of this analysis.

CSTD = 0

7.6.4 Loop Calibration Error (CL):

Calculate using the formula from Section 7.6 above. Only the M&TE required for the loop is used for calculating the Loop Calibration Error (CL).

CL = ± [ ] % Span   (2σ)

7.7 Loop Drift

Each device requires a drift evaluation.

7.7.1 Pressure Transmitter Drift (DPT):

Calculation or conversion if required. Refer to the Appendices for aid in developing the value. Use the standard assumption when no vendor information is available.

DPT = ± [ ] % Span   (?σ)

7.7.2 Loop Drift (DL):

Refer to Section 7.3.4 for the formula.

DL = ± [ ] % Span   (2σ)

7.8 Calculation of As-Found Values (AFT)

Each device in the loop requires an AFTi. Refer to Section 7.3.5 for the formulas.

For each component, AFTi = ± [ ] units   (2σ)

The loop As-Found Tolerance (AFTL) will be calculated as follows:

AFTL = ± [ ] units   (2σ)

7.9 Process Measurement Accuracy (PMA):

Discussion and calculation as required. Refer to the Appendices for aid in developing the value.

PMA = ± [ ] % Span   (?σ)

7.10 Primary Element Accuracy (PEA):

Discussion and calculation as required. Refer to the Appendices for aid in developing the value.

PEA = ± [ ] % Span   (?σ)

7.11 Insulation Resistance Accuracy Error (IRA):

References 5.22, 5.23, and 5.24 of CI-01.00 may provide a bounding IRA value to use, if the device is identified by these calculations.

However, if a more precise IRA value for the identified devices is needed, or a non-identified device requires IRA to be established, then the guidance provided in Appendix D shall be used.

8.0 RESULTS

8.1 Determine Channel Uncertainty (CU):

This section is only applicable to indication/control loop calculations.

Refer to Section 7.3.6 for the formula. N/A for safety related setpoint calculations.

CU = ± [ ] units   (2σ)

CE = ± [ ] units   (2σ)

8.2 Calculation of Setpoints with no Analytical Limits or Allowable Values

This section is only applicable to setpoint calculations. Refer to Section 7.3.7 for the formula. N/A for safety related setpoint calculations.

NTSP = [ ] units

8.3 Calculation of the Allowable Value (AV)

This section is only applicable to setpoint calculations. Refer to Section 7.3.8 for the formula. N/A for non-safety related setpoint, indication, and control loop calculations.

AV = [ ] units   (2σ)

8.4 Calculation of the Nominal Trip Setpoint (NTSP)

This section is only applicable to setpoint calculations. Refer to Section 7.3.9 for the formula. N/A for non-safety related setpoint, indication, and control loop calculations.

NTSP = [ ] units

8.5 Evaluation of Reset Value

Evaluate per the guidance given by Section 4.4.12.2.

9.0 CONCLUSIONS

Add discussion of results to verbalize that the objectives are met and that they are graphically presented; the figure should reflect the direction of the setpoint.


FIGURE 1 - [NAME] FUNCTION (setpoint relationship figure template). From top to bottom: Maximum Instr. Range [ ] UNITS; Analytical Limit (AL) [ ] UNITS; Calculated AV / Actual AV [ ] UNITS; +AFT [ ] UNITS; Calculated NTSP / Actual NTSP with +ALT and -ALT bands [ ] UNITS; -AFT [ ] UNITS; Minimum Instr. Range [ ] UNITS.

ATTACHMENT 1  SCALING OF THE [NAME] FUNCTION

There should be a discussion of whether head correction is applicable or not. If applicable, then it should be developed. CPS 8801.05 shall be used as guidance; however, only verified information (typically walkdowns) may be used from existing CPS 8801.05 head corrections.

Scaling shall be performed for each device in the loop as presently presented in the existing calibration procedures (Cardinal Points, Units, and precision).

Discussion with C&I maintenance shall be required when unable to support the existing calibration procedures.

1 Transmitter

EINs:
Manufacturer: Rosemount Inc.
Model No.:
Input:
Output:

Process Range: Min (p)   Max (P)   Units
Transmitter Output Range: Min (o)   Max (O)   Units

Transmitter Calibration (EINs)

Cal. Pt.   Input (Units)   Output (Volts DC)   AFT (units)   ALT (units)
0%         [ ]             [ ]                 ( to )        ( to )
25%        [ ]             [ ]                 ( to )        ( to )
50%        [ ]             [ ]                 ( to )        ( to )
75%        [ ]             [ ]                 ( to )        ( to )
100%       [ ]             [ ]                 ( to )        ( to )
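For the cardinal points in the table above, the expected output at each calibration point follows from linear scaling between the process range and the output range. The sketch below assumes a hypothetical 0-200 inches of water process range and a 1-5 VDC output (a 4-20 mA loop read across a 250-ohm resistor); these ranges are placeholders for the bracketed entries above, not plant values.

```python
def output_at(percent, out_min=1.0, out_max=5.0):
    """Expected output (VDC) at a cardinal calibration point, assuming a linear device."""
    return out_min + (percent / 100.0) * (out_max - out_min)

def process_at(percent, p_min=0.0, p_max=200.0):
    """Process input at a cardinal calibration point (assumed 0-200 in. w.c. range)."""
    return p_min + (percent / 100.0) * (p_max - p_min)

for pct in (0, 25, 50, 75, 100):
    print(f"{pct:>3}%  input = {process_at(pct):6.1f} in. w.c.   output = {output_at(pct):.2f} VDC")
```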


ATTACHMENT 2  RESULTS SUMMARY

The following tables list the applicable results of this calculation:

Primary Sensor Scaling Calibration
  Primary Sensor - Calibration Span:  0% | 25% | 50% | 75% | 100%  (units)

Individual Component Setting Tolerances
  Component EIN | As-Found (units) | As-Left (units)

Trip Setpoint and Loop Setting Tolerances
  Component EIN | As-Found (units) | As-Left (units)

M&TE Used In Calculation
  Manufacturer | Model Number | Range

USAR/Technical Specification Setpoint
  Component EIN | Allowable Value | USAR/Technical Specification Design Setpoint
  Section:
  Tech. Spec. Tables:
  ORM Tables:


APPENDIX C  UNCERTAINTY ANALYSIS FUNDAMENTALS

The ideal instrument would provide an output that accurately represents the input signal, without any error, time delay, or drift with time. Unfortunately, this ideal instrument does not exist. Even the best instruments tend to degrade with time when exposed to adverse environments. Typical stresses placed on field instruments include ambient temperature, humidity, vibration, temperature cycling, mechanical shock, and occasionally radiation.

These stressors may affect an instrument's reliability and accuracy. This Appendix discusses the various elements of uncertainty that should be considered as part of an uncertainty analysis. The methodology to be applied to uncertainty analysis and the determination of trip setpoints is also described in this Appendix.

Instrument loop uncertainty is a combination of individual instrument uncertainties and variations in the process that the loop is monitoring. Individual instrument uncertainty may vary with the environmental conditions around the instrument and with process variations.

There are five general categories of environmental and process conditions which need to be considered: (1) normal operations, (2) seismic event, (3) post seismic, (4) accident, which could be LOCA, MSLB, HELB, etc., and (5) post accident. This standard provides information for determining instrument uncertainties under each condition. The total instrument uncertainty may be used alone, as for indicators and recorders to provide an estimate of possible error between actual and indicated process conditions, or as a step toward determining instrument setpoints and operator decision points.

Not all categories of uncertainty described in this Appendix will apply to every configuration. But, the analyst should provide, in the body of the calculation, a discussion sufficient to explain the rationale for any uncertainty category that is not included.

C.1 Categories of Uncertainty The basic model used in this design standard requires that the user categorize instrument uncertainties as random, bias, or arbitrarily distributed. This section describes the various categories of instrument uncertainty and provides insight into the process of categorizing instrumentation based on performance specifications, test reports, and plant calibration data.

The estimation of uncertainty is an iterative process requiring the development of assumptions and, where possible, verification of assumptions based on actual data. Ultimately, the user is responsible for defending assumptions that affect the basis of uncertainty estimates.

It should not be assumed that, since this design standard addresses three categories of uncertainty, all three types must be used in each uncertainty calculation. Additionally, it should not be assumed that instrument characteristics would fit neatly into a single category. For example, the nature of some data may require that an instrument's static pressure effect be described as bimodal, which might best be represented as a random uncertainty with an associated bias.

C.1.1 Random Uncertainties When repeated measurements are taken of some fixed parameter, the measurements will generally not agree exactly. Just as these measurements do not precisely agree with each other, they also deviate by some amount from the true value. Uncertainties that fluctuate about the true value without any particular preference for a particular direction are said to be random.

Random uncertainties are sometimes referred to as a quantitative statement of the reliability of a single measurement or of a parameter, such as the arithmetic mean value, determined from a number of random trial measurements. This is often called the statistical uncertainty and is one of the so-called precision indices. The most commonly used indices, usually in reference to the reliability of the mean, are the standard deviation, the standard error (also called the standard deviation in the mean), and the probable error.

In the context of instrument uncertainty, it is generally accepted that random uncertainties are those instrument uncertainties that a manufacturer specifies as having a +/- magnitude and are defined in statistical terms. It is important to understand the manufacturer's data thoroughly and be prepared to justify the interpretation of the data. After uncertainties have been categorized as random, it is required that a determination be made whether there exists any dependency between the random uncertainties. Figure C-1 shows the expected nature of randomly distributed data. There is a greater likelihood that data will be located near the mean; the standard deviation defines the variation of data about the mean.
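A brief numerical illustration of this behavior: repeated readings of one fixed input scatter about their mean, and roughly 95% of them are expected to fall within two standard deviations of it (Figure C-1). The readings below are invented for the example.

```python
import statistics

# Hypothetical repeated readings (psig) of a process held constant at 100.0 psig.
readings = [100.2, 99.8, 100.1, 99.9, 100.3, 99.7, 100.0, 100.1, 99.9, 100.0]

mean = statistics.mean(readings)
sigma = statistics.stdev(readings)
within_2sigma = sum(abs(r - mean) <= 2 * sigma for r in readings) / len(readings)

print(f"mean = {mean:.2f} psig, sigma = {sigma:.3f} psig, "
      f"{within_2sigma:.0%} of readings within +/-2 sigma")
```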

Figure C-1  Random Behavior (normal distribution plotted from -3σ to +3σ about the mean; approximately 95.4% of the data lies within ±2σ)

C.1.2 Bias Uncertainties Suppose that a tank is actually 50% full, but a poorly designed level monitoring circuit shows the tank level as fluctuating randomly about 60%. As discussed in the previous section, the fluctuations about some central value represent random uncertainties. However, the fixed error of 10% in this case is called a systematic or bias uncertainty. In some cases, the bias error is a known and fixed value that can be calibrated out of the measurement circuit. In other cases, the bias error is known to affect the measurement accuracy in a single direction, but the magnitude of the error is not constant.

Bias is defined as a systematic or fixed instrument uncertainty, which is predictable for a given set of conditions because of the existence of a known direction (positive or negative). A very accurate measurement can be made to be inaccurate by a bias effect.

The measurement might otherwise have a small standard deviation (uncertainty), but read entirely different than the true value because the bias effectively shifts the measurement over from the true value by some fixed amount. Figure C-2 shows an example of bias; note that bias as shown in Figure C-2 shifts the measurement from the true process value by a fixed amount.

Figure C-2  Effect of Bias (the measured value is offset from the true value by a fixed bias)

Examples of bias include head correction, range offsets, reference leg heat-up or flashing, and changes in flow element differential pressure because of process temperature changes. A bias error may have a random uncertainty associated with the magnitude.

Some bias effects, such as static head of the liquid in the sensing lines, can be corrected by the calibration process. These bias effects can be left out of the uncertainty analysis if verified to be accounted for by the calibration process. Note that other effects, such as density variations of the static head, might still contribute to the measurement uncertainty.

C.1.3 Arbitrarily Distributed Uncertainty Some uncertainties do not have distributions that approximate the normal distribution. Such uncertainties may not be eligible for the rules of statistics or square root of the sum of the squares combinations and are categorized as arbitrarily distributed uncertainties. Because they are equally likely to have a positive or a negative deviation, worst-case treatment should be used.

It is important that the engineer recognize that the direction (sign) associated with a bias is known, whereas the sign associated with an arbitrarily distributed uncertainty is not known but is assumed based on a worst-case scenario.

C.1.4 Independent Uncertainties Independent uncertainties are all those uncertainties for which no common root cause exists. It is generally accepted that most instrument channel uncertainties are independent of each other.


Instrument Setpoint APPENDIX C - UNCERTAINTY Calculation Methodology ANALYSIS FUNDAMENTALS REVISION 3 C.1.5 Dependent Uncertainties Because of the complicated relationships that may exist between the instrument channels and various instrument uncertainties, it should be recognized that a dependency might exist between some uncertainties. The methodology presented here provides a conservative means for addressing these dependencies. If, in the engineer's judgment, two or more uncertainties are believed to be dependent, then these uncertainties should be added algebraically to create a new, larger independent uncertainty. For the purpose of this design standard, dependent uncertainties are those for which the user knows or suspects that a common root cause exists, which influences two or more of the uncertainties with a known relationship.

C.2 Interpretation of Uncertainty Data The proper interpretation of uncertainty information is necessary to ensure that high confidence levels are selected and that protective actions are initiated before safety limits are violated.

Also, proper interpretation is necessary for the valid comparison of instrument field performance with setpoint calculation allowances. This comparison confirms the bounding assumptions of the appropriate safety analysis.

Accuracy (uncertainty) values should be based on a common confidence level (interval) of at least two standard deviations (95% corresponds to approximately 2 standard deviations). The use of three or more standard deviations may be unnecessarily conservative, resulting in reduced operating margin. Some uncertainty values may need to be adjusted to 2-standard deviation values.

For example, if a vendor accuracy for a 99% level (3 standard deviations) is given as +/-6 psig, the 95% confidence level corresponds to +/-4 psig (= (2/3) x 6). This approach assumes that vendor data supports this 3 standard deviation claim.
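The example above is a simple ratio of sigma levels; a small helper makes the conversion explicit and keeps the stated confidence level attached to the number. The psig values repeat the example in the text.

```python
def rescale_sigma(value, from_sigma, to_sigma=2.0):
    """Rescale a normally distributed +/- uncertainty from one sigma level to another."""
    return value * (to_sigma / from_sigma)

# A +/-6 psig vendor accuracy quoted at 3 standard deviations, expressed at 2 standard deviations:
print(rescale_sigma(6.0, from_sigma=3.0))   # -> 4.0 psig
```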

Performance specifications should be provided by instrument or reactor vendors. Data should include vendor accuracy, drift, environmental effects and reference conditions. Since manufacturer performance specifications often describe a product line, any single instrument may perform significantly better than the group specification. If performance summary data is not available or if it does not satisfy the needs of the users, raw test data may need to be reevaluated or created by additional testing.


Instrument Setpoint APPENDIX C - UNCERTAINTY Calculation Methodology ANALYSIS FUNDAMENTALS REVISION 3 If an uncertainty is known to consist of both random and bias components, the components should be separated to allow subsequent combination of like components. Bias components should not be mixed with random components during the square root of the sum of the squares combination.

Historically, there have been many different methods of representing numerical uncertainty. Almost all suffer from the ambiguity associated with shorthand notation. For example, without further explanation, the symbol +/- is often interpreted as the symmetric confidence interval associated with a random, normally distributed uncertainty. Further, the level of confidence may be assumed to be 68% (standard error, 1 standard deviation), 95% (2 standard deviations) or 99% (3 standard deviations). Still others may assume that the +/- symbol defines the limits of error (reasonable bounds) of bias or non-normally distributed uncertainties. Vendors should be consulted to avoid any misinterpretation of their performance specifications or test results.

Reactor vendors typically utilize nominal values for uncertainties used in a setpoint analysis associated with initial plant operation. These generic values are considered conservative estimates, which may be refined if plant-specific data is available. Since plant-specific data may be less conservative than the bounding generic data, care should be taken to ensure that it is based on a statistically significant sample size.

One source of performance data that requires careful interpretation is that obtained during harsh environment testing. Often, such tests are conducted only to demonstrate the functional capability of a particular instrument in a harsh environment. This usually requires only a small sample size and invokes inappropriate rejection criteria for a probabilistic determination of instrument uncertainties. The meager data base typically results in limits of error (reasonable bounds) associated with bias or non-normally distributed uncertainties.

The limited database from an environmental qualification test also precludes adjusting the measured net effects for normal environmental uncertainties, vendor accuracies, etc. Thus, the results of such tests describe several mutually exclusive categories of uncertainty. For example, the results of a severe environment test may contain uncertainty contributions from the instrument vendor accuracy, measuring and test equipment uncertainty, calibration uncertainty and others, in addition to the severe environment effects. A conservative practice is to treat the measured net effects as only uncertainty contributions due to the harsh environment.


In summary, avoid improper use of vendor performance data. Just as important, do not apply overly conservative values to uncertainty effects to the point that a setpoint potentially limits normal operation or expected operational transients. Because of the diversity of data summary techniques, notational ambiguities, inconsistent terminology and ill-defined concepts that have been apparent in the past, it is recommended that vendors be consulted whenever questions arise. If a vendor-published value of an uncertainty term (source) is confirmed to contain a significant bias uncertainty, then the +/- value should be treated as an estimated limit of error. If the term is verified to represent only random uncertainties (no significant bias uncertainties), then the +/- value should be treated as the 2-standard deviation interval for an approximately normally distributed random uncertainty.

C.3 Elements of Uncertainty NOTE: The following sections may expand or add clarification for elements of uncertainty, but does not replace the definitions specified in Section 2.2.

C.3.1 Process Measurement Accuracy (PMA)

PMA are those effects that have a direct effect on the accuracy of a measurement. PMA variables are independent of the process instrumentation used to measure the process parameter. PMA can often be thought of as physical changes in the monitored parameter that cannot be detected by conventional instrumentation.

The following are examples of PMA variables:

  • Temperature stratification and inadequate mixing of bulk temperature measurements
  • Reference leg heatup and process fluid density changes from calibrated conditions
  • Piping configuration effects on level and flow measurements
  • Fluid density effects on flow and level measurements
  • Line pressure loss and pressure head effects
  • Temperature variation effect on hydrogen partial pressure
  • Gas density changes on radiation monitoring

Some PMA terms are easily calculated, some PMA terms are quite complex and are obtained from General Electric documents, and other PMA terms are allowances developed and justified by Design Basis Documents.


C.3.2 Primary Element Accuracy (PEA)

PEA is generally described as the accuracy associated with the primary element, typically a flow measurement device such as an orifice, venturi, or other devices from which a process measurement signal is developed. The following devices are typically considered to have a primary element accuracy that requires evaluation in an uncertainty analysis:

  • Flow venturi
  • Flow nozzle
  • Orifice plate
  • RTD or thermocouple thermowell
  • Sealed sensors such as a bellows unit to transmit a pressure signal

PEA can change over time because of erosion, corrosion, or degradation of the sensing device. Installation uncertainty effects can also contribute to PEA errors.

C.3.3 Vendor Accuracy (VA)

VA defines a limit that error will not exceed when a device is used under reference or specified operating conditions. An instrument's accuracy consists primarily of three instrument characteristics: repeatability, hysteresis, and linearity. These characteristics occur simultaneously, and their cumulative effects are denoted by a band that surrounds the true output (see Figure C-3). This band is normally specified by the manufacturer to ensure that their combined effects adequately bound the instrument's performance over its design life. Deadband is another attribute that is sometimes included within the vendor accuracy (see Section C.3.9).

[Figure C-3 Instrument Accuracy: an accuracy band surrounding the true output, from 4 mA at the zero point (P0) to 20 mA at the upper span limit (PS)]

Repeatability is an indication of an instrument's stability and describes its ability to duplicate a signal output for multiple repetitions of the same input. Repeatability is shown on Figure C-4 as the degree that signal output varies for the same process input.

Instrument repeatability can degrade with age as an instrument is subjected to more cumulative stress, thereby yielding a scatter of output values outside of the repeatability band.

[Figure C-4 Repeatability: repeatability band of output (mA) versus pressure input]

Hysteresis describes an instrument's change in response as the process input signal increases or decreases (see Figure C-5). The larger the hysteresis, the lower the corresponding accuracy of the output signal. Stressors can affect the hysteresis of an instrument.

[Figure C-5 Hysteresis: output (mA) versus pressure input, showing separate response curves for increasing and decreasing pressure]

All instrument transmitters preferably exhibit linear characteristics, i.e., the output signal should be linearly and proportionately related to the input signal. Linearity describes the ability of the instrument to provide a linear output in response to a linear input (see Figure C-6). The linear response of an instrument can change with time and stress.

[Figure C-6 Linearity: output (mA) versus pressure input, comparing the actual calibration curve with the desired calibration curve]

In cases in which the measurement process is not linear, the more appropriate term to use is conformity, meaning that the output follows some desired curve. Linearity and conformity are often used interchangeably.

As discussed, vendor accuracy is generally described as the combined effect of hysteresis, linearity, and repeatability. These three separate effects are sometimes combined to form the bounding estimate of vendor accuracy as follows:

VA = +/-(h² + l² + r²)^1/2

where,
VA = Vendor Accuracy
h = Hysteresis
l = Linearity
r = Repeatability

Accuracy cannot be adjusted, improved, or otherwise affected by the calibration process. Rather, accuracy is a performance specification against which the device is tested during calibration to determine its condition. A 5-point calibration check (0%, 25%, 50%, 75%, and 100%) of an instrument's entire span verifies linearity. If a 9-point check is performed, by checking up to 100% and back down to 0%, hysteresis is also verified. Finally, if the calibration check is performed a second time (or more), repeatability is verified. The calibration check process is rarely performed to a level of detail that also confirms repeatability, but if it is, per ISA S 67.04, the vendor accuracy and the calibration tolerance need not both be included in the uncertainty analysis. For this reason, the vendor accuracy term should be checked to verify that it includes the combined effects of linearity, hysteresis, and repeatability. If the vendor accuracy specification does not include all of these terms, the missing terms are included in the vendor accuracy specification as follows:

VA = +/-(va² + h² + l² + r²)^1/2

where,
VA = Revised estimate of vendor accuracy
va = Vendor's stated accuracy with some terms not included
h = Hysteresis (if not already included)
l = Linearity (if not already included)
r = Repeatability (if not already included)
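For illustration only, the SRSS assembly of the vendor accuracy terms described above can be sketched in Python; the 0.25% and 0.10% figures below are hypothetical and are not taken from any CPS vendor specification.

    import math

    def vendor_accuracy(va=0.0, hysteresis=0.0, linearity=0.0, repeatability=0.0):
        """Combine the vendor's stated accuracy with any effects it omits (SRSS).
        All inputs and the result are in percent of span; terms already included
        in the vendor specification should be passed as 0.0."""
        return math.sqrt(va**2 + hysteresis**2 + linearity**2 + repeatability**2)

    # Hypothetical transmitter: 0.25% span vendor accuracy that omits repeatability (0.10% span)
    print(f"VA = +/-{vendor_accuracy(va=0.25, repeatability=0.10):.3f}% of span")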

Vendor accuracy is considered an independent and random uncertainty component unless the manufacturer specifically states that a bias or dependent effect also exists. Vendor accuracy is normally expressed as a percent of instrument span, but this should be confirmed from the manufacturer's specifications.

Bistables, trip units, and pressure switches may not require a consideration of hysteresis and linearity because the calibration might be checked only at the setpoint. If the accuracy is checked at the setpoint for these devices, the accuracy elsewhere in the instrument's span is not directly verified.

The calibration process might not adequately confirm the vendor accuracy if the measuring and test equipment (M&TE) uncertainty significantly exceeds the accuracy of the device being calibrated.

For example, the calibration process cannot verify a 0.1% accuracy specification with M&TE having an uncertainty of 0.5%. If the M&TE uncertainty exceeds the specified vendor accuracy, then the vendor accuracy should be considered no better than the M&TE allowance.

C.3.4 Drift

Drift is commonly described as an undesired change in output over a period of time; the change is unrelated to the input, environment, or load. A shift in the zero setpoint of an instrument is the most common type of drift. This shift can be described as a linear displacement of the instrument output over its operating range, as shown in Figure C-7. Zero shifts can be caused by transmitter aging, an overpressure condition such as water hammer, or sudden changes in the sensed input that might stress or damage sensor components.

[Figure C-7 Zero Shift Drift: as-found condition at calibration compared with the original calibration; PZc = pressure zero at recalibration, PZo = pressure zero at original calibration, PSc = pressure span at recalibration, PSo = pressure span at original calibration]

Span shifts are less common than zero shifts and are detected by comparing the minimum and maximum current outputs to the corresponding maximum and minimum process inputs. Figure C-8 shows an example of forward span shift, in which the instrument remains in calibration at the zero point but has a deviation that increases with span. Reverse span shift is also possible, in which the deviation increases with decreasing span.

[Figure C-8 Span Shift Drift: as-found condition at calibration compared with the original calibration; PZo = pressure zero at original calibration, PSc = pressure span at recalibration, PSo = pressure span at original calibration]

The amount of drift allowed for an instrument depends on the manufacturer's drift specifications and the period of time assumed between calibrations. For safety-related devices, the drift allowance should be based on the Technical Specifications allowance for plant operation (i.e., 24 months) plus an additional allowance of 25%. Note that not all equipment is checked at this frequency; the Technical Specifications still state a shorter frequency for certain equipment, such as quarterly checks of trip units.

The manufacturer's specified drift is often based on a maximum interval of time between calibration checks. Several methods are available to adjust the drift allowance to match the calibration period of the instrument. If the instrument drift is assumed to be linear as a function of time and continuing in one direction once it starts, the drift allowance would be calculated as shown below:

For example, with a vendor drift specification of +/-0.5% per 6-month interval and a 30-month drift period:

DR30 = +/-0.5% x (30/6) = +/-2.5% of span

In the absence of other data, this is a conservative assumption.

However, if the vendor states that the drift during the calibration period is random and independent, then it is just as likely for drift to randomly change directions during the calibration period.

In this case, the square root of the sum of the squares of the drift over the individual drift periods between calibrations could be used, and the total drift allowance for 30 months would be:

DR30 = +/-(0.5%² + 0.5%² + 0.5%² + 0.5%² + 0.5%²)^1/2 = +/-1.12% of span

The approach in Section 4.3.2 assumes the drift is random and independent, as above.
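The two drift-extrapolation conventions above can be sketched in Python for illustration only; the 0.5% per 6-month figure is reused from the example, and the function names are not part of the methodology.

    import math

    def drift_linear(vendor_drift, vendor_interval, cal_interval):
        """Conservative linear extrapolation: drift accumulates in one direction."""
        return vendor_drift * (cal_interval / vendor_interval)

    def drift_srss(vendor_drift, vendor_interval, cal_interval):
        """Random, independent drift: SRSS over the successive vendor drift periods
        (variance scales linearly with time, so partial periods are handled too)."""
        periods = cal_interval / vendor_interval      # 30 / 6 = 5 periods
        return math.sqrt(periods * vendor_drift**2)

    print(drift_linear(0.5, 6, 30))   # 2.5   (% of span), matches the linear example
    print(drift_srss(0.5, 6, 30))     # ~1.12 (% of span), matches the SRSS example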

Some vendors have stated that the majority of drift tends to occur in the first several months following a calibration and that the instrument output will not drift significantly after this "settle-in period." In this case, a lower drift value might be acceptable provided that the vendor can supply supporting data for this type of drift characteristic. However, when the vendor-stated drift is for a period longer than the calibration period (e.g., Rosemount drift = 0.2% for 30 months), it is not acceptable to arbitrarily reduce the drift value. In this case, the data supporting a "settle-in period" drift characteristic must be evaluated.

VD30 = +/-[VDyr² + VDyr² + (VDyr²/2)]^1/2

In the above expression of drift, VDyr represents the annual drift estimate and the resultant drift, VD30, represents the 30-month drift estimate. If VDyr = 1%, the 30-month drift estimate is obtained by:

VD30 = +/-[1.0%² + 1.0%² + (1.0%²/2)]^1/2 = +/-1.58% of span

Drift can also be inferred from instrument calibration data by an analysis of as-found and as-left data. Typically, the variation between the as-found reading obtained during the latest calibration and the as-left reading from the previous calibration is taken to be indicative of the drift during the calibration interval. By evaluating the drift over a number of calibrations for functionally equivalent instruments, an estimate of the drift can be developed.

Typically, the calibration data is used to calculate the mean of drift, the standard deviation of drift, and the tolerance interval that contains a defined portion of the drift data to a certain probability and confidence level (typically 95%/95%). This statistically determined value of drift can be used to validate the vendor's performance specification and can also be used as the best estimate of drift in the uncertainty calculation. Assigning all of the statistically determined drift from plant-specific data is especially conservative because this drift allowance contains many other contributors to uncertainty, including:

  • Instrument hysteresis and linearity error present during the first calibration
  • Instrument hysteresis and linearity error present during the second calibration
  • Instrument repeatability error present during the first calibration
  • Instrument repeatability error present during the second calibration
  • Measurement and test equipment error present during the first calibration
  • Measurement and test equipment error present during the second calibration
  • Personnel-induced or human-related variation or error during the first calibration
  • Personnel-induced or human-related variation or error during the second calibration
  • Instrument temperature effects due to a difference in ambient temperature between the two calibrations (this is particularly true for 18-month cycle plants in which the first calibration is performed in the winter and the second calibration is performed in the summer)
  • Environmental effects on instrument performance, e.g., radiation, temperature, vibration, etc., between the two calibrations that cause a shift in instrument output

  • Misapplication, improper installation, or other operating effects that affect instrument calibration during the period between calibrations
  • True instrument "drift" representing a change, time-dependent or otherwise, in instrument output over the time period between calibrations

See Appendix M for information about how to incorporate the results of an As-Found/As-Left (AF/AL) drift analysis into a setpoint or channel error calculation.

Regardless of the approach taken for determining the drift allowance, the uncertainty calculation should provide the basis for the value used.
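The AF/AL statistics described above can be sketched in Python for illustration only; the drift sample and the 95%/95% tolerance factor k below are placeholders (the actual factor depends on sample size and is taken from standard tolerance-factor tables, and the CPS treatment is governed by Appendix M, not by this sketch).

    import statistics

    # Hypothetical AF/AL drift sample: (as-found this calibration) - (as-left previous
    # calibration), in percent of span, pooled from functionally equivalent instruments.
    drift = [-0.12, 0.05, 0.20, -0.08, 0.15, 0.02, -0.18, 0.11, 0.07, -0.03]

    mean = statistics.mean(drift)
    stdev = statistics.stdev(drift)   # sample standard deviation
    k = 3.38                          # placeholder two-sided 95%/95% tolerance factor for n = 10

    low, high = mean - k * stdev, mean + k * stdev
    print(f"mean = {mean:+.3f}% span, s = {stdev:.3f}% span")
    print(f"95/95 drift tolerance interval: {low:+.3f}% to {high:+.3f}% of span")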

C.3.5 Accuracy Temperature Effects (ATE)

The ambient temperature is expected to vary somewhat during normal operation. This expected temperature variation can influence an instrument's output signal, and the magnitude of the effect is referred to as the temperature effect. Using a maximum temperature that bounds the maximum observed temperature can reduce the conservatism of using the maximum temperature difference. Larger temperature changes associated with accident conditions are considered part of the environmental allowance, and the effect of larger temperature changes was determined as part of an environmental qualification test. The temperature effects described here relate only to the effect on instrument performance during normal operation.

The vendor normally provides an allowance for the predicted effect on instrument performance as a function of temperature. For example, a typical temperature effect might be +/-0.75% per 100°F change from the calibrated temperature. This vendor statement of the temperature effect would be correlated to plant-specific performance as follows:

ATE = +/-(|nt - ct|)(vte)

where,
ATE = Temperature effect to assume for the uncertainty calculation
nt = Normal expected maximum or minimum temperature (both sides should be checked)
ct = Calibration temperature (typically, the minimum zone temperature)
vte = Vendor's temperature effects expression

For example, if the vendor's temperature effects expression is +/-0.75% of span per 100°F, the calibration temperature is 65°F if known (otherwise use the minimum temperature for that zone), and the maximum expected temperature is 110°F, the vendor statement of the temperature effect would be correlated to plant-specific performance as follows:

ATE = +/-[|110°F - 65°F| x (0.75% / 100°F)] = +/-0.3375% of span

Notice that the above approach starts with the minimum zone temperature and then determines the maximum expected variation from the minimum zone temperature under normal operating conditions. Design Criteria DC-ME-09-CP, "Equipment Environmental Design Conditions," provides all normal and harsh environments for the plant.
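For illustration, the ATE computation above can be expressed directly in Python; the values are reused from the example and the function name is not part of the methodology.

    def accuracy_temperature_effect(normal_temp_f, cal_temp_f, vte_pct_per_100f):
        """ATE = |nt - ct| x (vendor temperature effect per 100 degF), percent of span."""
        return abs(normal_temp_f - cal_temp_f) * (vte_pct_per_100f / 100.0)

    # 110 degF maximum expected, 65 degF calibration (minimum zone) temperature,
    # vendor temperature effect of 0.75% of span per 100 degF
    print(accuracy_temperature_effect(110, 65, 0.75))   # 0.3375 (% of span)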

The above discussion applies to temperature effects on instrumentation in response to expected ambient temperature variations during normal plant operation. Some manufacturers have also identified accident temperature effects that describe the expected temperature effect on instrumentation for even larger ambient temperature variations. An accident temperature effect describes an uncertainty limit for instrumentation operating outside the normal environmental limits and in some cases may include normal temperature effects.

Temperature effect is considered a random error term unless otherwise specified by the manufacturer.

C.3.6 Radiation Effects (RE)

During normal operation, most plant equipment is exposed to relatively low radiation levels. Although low-dose-rate radiation effects might have a nonreversible effect on an instrument, the calibration process can eliminate them. If the dose rate is low enough, the ambient environment might be considered mild during normal operation and radiation effects can be considered negligible. Any effects of relatively low radiation levels are considered indistinguishable from drift and are calibrated out during routine calibration checks.

If the normal operation dose rate is high enough that radiation effects should be considered, the environmental qualification test report will provide the best source of radiation effect information. During the worst-case accident environment, radiation effects can be part of the simultaneous effect of temperature, pressure, steam, and radiation that was determined during the environmental qualification process. Other plant locations might experience a more benign temperature and pressure environment, but still be exposed to significant accident radiation. For each case, the determination of the radiation effects should rely on the data in the environmental qualification report. Environmental qualification test report data should usually be treated as an arbitrarily distributed bias unless the manufacturer has provided data supporting its treatment as a random contributor to uncertainty.

C.3.7 Static Pressure Effects (SPE)

Some devices exhibit a change in output because of changes in process or ambient pressure. A differential pressure transmitter might measure flow across an orifice with a differential pressure of a few hundred inches of water while the system pressure is over 1,000 psig. The system pressure is essentially a static pressure placed on the differential pressure measurement. The vendor usually specifies the static pressure effect; a typical example is shown below:

Static pressure effect = +/-0.5% of span per 1,000 psig

The static pressure effect is a consequence of calibrating a differential pressure instrument at low static pressure conditions but operating it at high static pressure conditions.

If the static pressure effect is considered a bias by the manufacturer, the operating manual usually provides instructions for calibrating the instrument to read correctly at the normal expected operating pressure, assuming that the calibration is performed at low static pressure conditions. This normally involves changing the zero and span adjustments by a manufacturer-supplied correction factor at the low-pressure (calibration) conditions so that the instrument will provide the desired output signal at the high-pressure (operating) conditions. The device could also be calibrated at the expected operating pressure to reduce or eliminate this effect, but this is not normally done because of the higher calibration cost and complexity.

Some static pressure effects act as a bias rather than randomly. For example, some instruments are known to read low at high static pressure conditions. If the calibration process does not correct the bias static pressure effect, the uncertainty calculation needs to include a bias term to account for this effect.

Ambient pressure variation can cause some gauge and absolute pressure instruments to shift up or down scale depending on whether the ambient pressure increases above or decreases below atmospheric pressure. Normally, this effect is only significant on 1) applications measuring very small pressures or 2) applications in which the ambient pressure variations are significant with respect to the pressure being measured. Gauge pressure instruments can be sensitive to this effect when the reference side of a sensing element is open to the atmosphere. If the direction of the ambient pressure change is known, the effect is a bias. If the ambient pressure can randomly change in either direction, the effect is considered random.

C.3.8 Overpressure Effect (OPE)

In cases where an instrument can be over-ranged by the process pressure without the process pressure exceeding system design pressure, an overpressure effect must be considered. Overpressure effects are often considered in low-range monitoring instruments in which the reading is expected to go off-scale high as the system shifts from shutdown to operating conditions. Some pressure switches may also be routinely over-ranged during normal operation.

The overpressure effect is normally considered random and is usually expressed as a percent uncertainty as a function of the amount of overpressure. The contribution of the overpressure effect on instrument uncertainty would only apply after the instrument has been over-ranged.

C.3.9 Deadband

Deadband represents the range within which the input signal can vary without experiencing a change in the output. The ideal instrument would have no deadband and would respond to input changes regardless of their magnitude. Instrument stressors can change the deadband width over time, effectively requiring a greater change in the input before an output response is achieved.

The vendor's instrument accuracy specification might include an allowance for deadband or it might be considered part of hysteresis (included in vendor accuracy). Recorders generally have a separate allowance for deadband to account for the amount the input signal can change before the pen physically responds to change.

Pressure switches are also susceptible to deadband. For this reason, a pressure switch setpoint near the upper or lower end of span should confirm that the setpoint allows for deadband. In extreme cases, the pressure switch might reach a mechanical stop with the deadband not allowing switch actuation.

C.3.10 Measuring and Test Equipment Uncertainty

Measuring and test equipment (M&TE) uncertainty is defined in Section 2.2 and further described in Appendix H.

C.3.11 Turndown Ratio Effect

If a transmitter has an adjustable span over some total range, the uncertainty expression may require adjustment by the turndown factor. For example, a transmitter may have a range of 3,000 psig with an uncertainty of 2% of the total range, sometimes referred to as the upper range limit (URL). If the span is adjusted such that only 1,000 psig of the entire 3,000 psig range is used, the transmitter has not somehow become more accurate. The 2% uncertainty of the 3,000 psig range is 60 psig, which equates to a 6% uncertainty for the 1,000 psig span. Transmitters with variable spans typically define performance specifications in terms of the total range and the calibrated span.

If the performance specifications are quoted as a percent of full span (FS), the uncertainty expression will not require an adjustment for the turndown factor.
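A short Python sketch of the turndown adjustment, reusing the numbers from the example above (illustrative only):

    def turndown_adjusted_uncertainty(pct_of_url, upper_range_limit, calibrated_span):
        """Convert an uncertainty quoted in percent of URL to percent of the calibrated span."""
        return pct_of_url * (upper_range_limit / calibrated_span)

    # 2% of a 3,000 psig URL applied to a 1,000 psig calibrated span
    print(turndown_adjusted_uncertainty(2.0, 3000.0, 1000.0))   # 6.0 (% of calibrated span)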

C.3.12 Power Supply Effects (PSE)

Power supply effects are the changes in an instrument's input-output relationship due to the power supply stability. For 2-wire current loop systems, AC supply variations must be considered for their effects on the loop's DC power supply. The consequential DC supply variations must then be considered for their effects on other components in the series loop, such as the transmitter.

Using the manufacturer's specifications, the power supply effect is typically calculated as follows:

PSE = (pss)(vpse)

where,
PSE = Power supply effect to assume for the uncertainty calculation
pss = Power supply stability
vpse = Vendor's power supply effect expression

Power supply stability refers to the variation in the power supply voltage under design conditions of supply voltage, ambient environment conditions, power supply accuracy, regulation, and drift. This effect can be neglected when it can be shown that the error introduced by power supply variation is <10% of the instrument's reference accuracy.

Harmonic distortion on the electrical system can also contribute to power supply uncertainty.

C.3.13 Indicator Reading Error (IRE)

An analog indicator can only be read to a certain accuracy. The uncertainty of an indicator reading depends on the type of scale and the number of marked graduations (see Section 4.4.7.1). An analog indicator can generally be read to a resolution of 1/2 of the smallest division on the scale. Figure C-9 shows an example of a linear analog scale. As shown, the indicator would be read to 1/2 of the smallest scale division. Anyone reading this scale is able to confirm that the indicator pointer is between 40 and 45. In this case, the estimated value would be 42.5. If an imaginary line is mentally drawn at the 1/2-of-smallest-scale-division point, an operator can also tell whether the pointer is on the high side or the low side of this line. Therefore, the uncertainty associated with this reading would be +/-1/4 of the smallest scale division, or +/-1.25 for the example shown in Figure C-9. Notice that this approach first defines the resolution to which the indicator could be read (1/2 of the smallest scale division), with an uncertainty of +/-1/4 of the smallest scale division about this reading resolution. In terms of an uncertainty analysis, it is not the reading resolution, but the uncertainty of the resolution that is of interest.

Per Section 4.3.3, the AFT and ALT values are rounded to the next 1/2 minor marking, which typically eliminates the need to include the 1/4-of-minor-division uncertainty. Also, for cases where calibration procedures require reverse calibration of devices, readability of the end device does not need to be taken into account. However, readability of the M&TE may need to be considered.

[Figure C-9 Analog Scale: linear scale graduated from 0 to 100]

The indication reading uncertainty depends on the type of scale:

Analog Linear - An uncertainty of +/-1/4 of the smallest division should be assigned as the indication reading error, if applicable. See the discussion above.

Analog Logarithmic or Exponential - Logarithmic or exponential scales allow the presentation of a wide process range on a single scale. Radiation monitoring instruments commonly use an exponential scale. An uncertainty of +/-1/4 of the specific largest division of interest should be assigned as the indication reading uncertainty. This requires an understanding of where on the scale the operators will be most concerned regarding the monitored process, if applicable. See the discussion above.

Analog Square Root - Square root scales show the correlation of differential pressure to flow rate. An uncertainty of +/-(1/4 of the specific largest division of interest) should be assigned as the indication reading uncertainty. This requires an understanding of where on the scale the operators will be most concerned regarding the monitored process, if applicable. See the discussion above.

Digital - The reading uncertainty is the uncertainty associated with the least significant displayed digit, which is usually negligible as an indication reading uncertainty. The digital display must be evaluated to confirm that the reading uncertainty is insignificant, if applicable. See the discussion above.

Analog Recorder - Analog recorders have the same reading uncertainties as analog indicators. The only potential difference is that the indicator scale is fixed in place, whereas the recorder chart paper can be readily replaced with paper having a different scale. The chart paper used for the recorder should be checked to verify that the indication reading uncertainty can be estimated, if applicable. See the discussion above.

C.3.14 Seismic Effects

Two types of seismic effects should be considered: 1) normal operational vibration and minor seismic disturbances, and 2) design basis seismic events in which certain equipment performs a safety function.

The effects of normal vibration (or a minor seismic event that does not cause an unusual event) are assumed to be calibrated out on a periodic basis and are considered negligible. Abnormal vibrations (vibration levels that produce noticeable effects) and more significant seismic events (severe enough to cause an unusual event) are considered abnormal conditions that require maintenance or equipment modification.

Design basis seismic events can cause a shift in an instrument's output. For equipment that must function during and following a design basis seismic or accident event, the environmental qualification test report should be reviewed to obtain the bounding uncertainty. The seismic effect may be specified as a separate effect or, in some cases, may be included in the overall environmental allowance. A seismic event coincident with a LOCA is a design basis event per USAR 15.6.5. However, per USAR 15.6.5.1.1, there are no realistic, identifiable events which would result in a pipe break inside the containment of the magnitude required to cause a loss-of-coolant accident coincident with a safe shutdown earthquake. Therefore, each setpoint calculation should consider the effects of a seismic event and a loss-of-coolant accident independently to establish the worst-case scenario for the instrumentation being evaluated. Consideration should be given to the accident that the equipment is required to mitigate. For example, it is not necessary to impose LOCA conditions as the worst case if no credit is taken for mitigating a LOCA (e.g., a trip function may actuate prior to any harsh environment, so a LOCA calculation is not required, whereas indication may be required during and after a LOCA, in which case both the seismic and LOCA values would be calculated and the worst value used). This consideration should be documented in the calculation.

For well-designed and properly mounted equipment, the seismic effect will often contribute no more than +/-0.5% to the overall uncertainty. This effect can be considered random and can be included within the uncertainty expression as a random term.

Including a small allowance for seismic effects is considered a conservative, but not required, approach to the uncertainty analysis.

C.3.15 Environmental Effects - Accident

The environmental allowance is intended to account for the effects of high temperature, pressure, humidity, and radiation that might be present during an accident, such as a LOCA or HELB event. This allowance should include an evaluation of the timing of the event, including the environmental condition existing at the time the function is designed to trip (see the example in C.3.14 above). Some manufacturers do not distinguish the uncertainties due to each of the accident effects. In such cases, the accident uncertainty may be a single +/- value given for all accident effects.

Qualification reports for safety-related instruments normally contain tables, graphs or both, of accuracy before, during and after radiation and steam/pressure environmental and seismic testing. Many times, manufacturers summarize the results of the qualification testing in their product specification sheets. More detailed information is available in the equipment qualification report. The manufacturer's specification sheet tends to be very conservative, as the worst-case performance result is normally presented.

Because of the limited sample size typically used in qualification testing, the conservative approach to assigning uncertainty limits is to use the bounding worst-case uncertainties. It is also recommended that discussions with the instrument manufacturer be conducted to gain insight into the behavior of the uncertainty (should it be considered random or bias?). This is important because if the uncertainty is random and of approximately the same magnitude as other random uncertainties, then SRSS methods might be used to combine the accident-induced uncertainty with other uncertainties. The environmental allowance should be of approximately the same size as the other random uncertainties if it is combined with other random terms in an SRSS expression. This consideration comes from the central limit theorem, which allows the combination of uncertainties by SRSS as long as they are of approximately the same magnitude. If not, then the accident uncertainty should be treated as an arbitrarily distributed uncertainty.

Using data from the qualification report in place of performance specifications, it is often possible to justify the use of lower uncertainty values that may occur at reduced temperatures or radiation dose levels. Typically, qualification tests are conducted at the upper extremes of simulated accident environments so that the results apply to as many plants as possible, each with different requirements. Therefore, it is not always practical or necessary to use the results at the bounding environmental extremes when the actual requirements are not as limiting. Some cautions are needed, however, to preclude possible misapplication of the data:


1. The highest uncertainties of all the units tested at the reduced temperatures or dose should be used. A margin should also be applied to the tested magnitude of the environmental parameter consistent with Institute of Electrical and Electronics Engineers 323-1975.
2. The units tested should have been tested under identical or equitable conditions and test sequences.
3. If data for a reduced temperature is used, ensure that sufficient "soak time" existed prior to the readings at that temperature, so that thermal equilibrium was reached within the instrument case.

The requirement in Item (1) above is a conservative method to ensure that bounding uncertainties are used in the absence of a statistically valid sample size. Item (2) above is an obvious requirement for validity of this method. Item (3) ensures that sufficient thermal lag time through the instrument case is accounted for in drawing conclusions of performance at reduced temperatures. In other words, if a transmitter case has a one-minute thermal lag time, then ensure that the transmitter was held at the reduced temperature at least one minute prior to taking readings.

Generally, the worst uncertainty is used from either the qualification report or the performance specification, unless more consideration is needed to preserve the existing AV or setpoint.

C.3.16 As-Left Tolerance Specification

The device as-left tolerance establishes the accuracy band within which a device or group of devices must be calibrated when periodically tested. If an instrument's as-found value is within the as-left tolerance, no further recalibration is required for the instrument, and calculations should assume that an instrument might be left anywhere within this tolerance.

See Section 4.3.3 for establishing the calibration as-left tolerance for a device. For all existing CPS instruments, an as-left tolerance is already specified by the applicable surveillance calibration procedure. CPS typically calibrates non-safety-related instruments to a generic calibration procedure with tolerances per the Instrument Data Sheet (IDS). This as-left tolerance is recommended for use in the calculation unless other conditions suggest that a different tolerance is warranted. For example, a tighter tolerance is easily achievable for most electronic equipment, and a tighter tolerance might provide needed margin for a setpoint calculation. Conversely, establishing a tighter tolerance than is achievable per the manufacturer's specifications ensures that the instrument will routinely be found out of calibration.

The as-left tolerance should be specified for all instruments covered by the associated calculation, even if the as-left tolerances are unchanged from the values already specified in the applicable calibration procedures. The as-left tolerance is treated as a random term in the uncertainty analysis.

For all instrument loops, the loop as-left tolerance is calculated per Section 4.4.5.

C.3.17 As-Found Tolerance Specification

The device as-found tolerance establishes the limit of error the defined devices can have and still be considered functional. The as-found tolerance will never be less than the as-left tolerance.

The purpose of the loop as-found tolerance is to establish a level of drift within which the instrument loop is still clearly functional, but not so large that an allowable value determination is required. An instrument or loop found outside the as-left tolerance but still within the as-found tolerance requires a recalibration but no further evaluation or response.

The as-found tolerance is generally defined to include the effects of M&TE, ALT, and vendor drift. Reference Section 4.3.3 for calculating the as-found tolerance.

The as-found tolerance should be specified for all instruments covered by the associated calculation.

For all instrument loops, the loop as-found tolerance is calculated per Section 4.4.5. For Technical Specifications, the loop as-found tolerance, as defined at CPS, impacts the setpoint determination.

C.4 Uncertainty Analysis Methodology

An uncertainty calculation establishes a statistical probability and confidence level that bounds the uncertainty in the measurement and signal processing of a parameter such as system pressure or flow. Knowledge of the uncertainty in the process measurement is then used to establish an instrument setpoint or provide operators with the expected limits for process measurement indication uncertainty.

The basic approach used to determine the overall uncertainty for a given channel or module is to combine all terms that are considered random using the Square Root of the Sum of the Squares (SRSS) methodology, then adding to the result any terms that are considered nonrandom.

Note that the bias terms do not all operate in the same direction.

Although it could be argued that some bias terms operate in opposite directions and therefore should be somewhat self-canceling, the standard practice is to treat the positive and negative channel uncertainty separately, if bias terms are present.

The reason for this approach is that the actual magnitude of the bias terms at a particular instant is generally not known; the bias terms are defined at bounding levels only. Accordingly, the maximum positive channel uncertainty is the SRSS of the random terms plus the algebraic sum of the positive bias terms, and the maximum negative channel uncertainty is the negative of the SRSS of the random terms minus the algebraic sum of the negative bias terms.

In the determination of the random portion of an uncertainty, situations may arise where two or more random terms are not totally independent of each other, but are independent of the other random terms (e.g., two instruments calibrated together as a rack). This dependent relationship can be accommodated within the SRSS methodology by algebraically summing the dependent random terms (e.g., VA1 + VA2) prior to calculating the SRSS. The uncertainty expression would be similar for all random terms for both devices developed by Section 4.3.1.
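As an illustration of this combination rule only (a Python sketch with hypothetical term values, not the CPS loop equation): dependent random terms are summed algebraically before squaring, independent random terms are combined by SRSS, and bias terms are added outside the radical, with the positive and negative channel uncertainties tracked separately.

    import math

    def channel_uncertainty(independent, dependent_groups=(), pos_bias=(), neg_bias=()):
        """Return (positive, negative) channel uncertainty in percent of span."""
        squares = sum(t**2 for t in independent)
        # Each dependent group is summed algebraically, then squared with the other random terms
        squares += sum(sum(group)**2 for group in dependent_groups)
        random_part = math.sqrt(squares)
        return (+random_part + sum(pos_bias), -(random_part + sum(neg_bias)))

    # Hypothetical loop: three independent random terms, two rack modules calibrated
    # together treated as one dependent group, and one known positive bias.
    pos, neg = channel_uncertainty(
        independent=[0.25, 0.50, 0.30],
        dependent_groups=[(0.20, 0.20)],
        pos_bias=[0.10],
    )
    print(f"+{pos:.2f}% / {neg:.2f}% of span")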

C.5 Propagation of Uncertainty through Modules

If signal conditioning modules such as scalars, summers, square root extractors, multipliers, or other similar devices are used in the instrument channel, the module's transfer function should be accounted for in the instrument uncertainty calculation. The uncertainty of a signal conditioning module's output can be determined when 1) the uncertainty of the input signal, 2) the uncertainty associated with the module, and 3) the module's transfer function are known. Equations have been developed to determine the output signal uncertainties for several types of signal conditioning modules. Refer to Appendix K for additional information.
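As a simple illustration of the idea only (the equations actually used are those of Appendix K; the first-order sensitivity approach below is a generic Python sketch with hypothetical numbers), consider a square root extractor whose output is the square root of its normalized input:

    import math

    def propagate_through_module(transfer, x, input_unc, module_unc, dx=1e-6):
        """First-order propagation: scale the input uncertainty by the local slope of the
        module transfer function, then SRSS with the module's own uncertainty."""
        slope = (transfer(x + dx) - transfer(x - dx)) / (2 * dx)   # numerical derivative
        return math.sqrt((slope * input_unc)**2 + module_unc**2)

    # Square root extractor on a normalized 0..1 span: output = sqrt(input)
    sqrt_extractor = math.sqrt

    # At 50% differential pressure, a 1%-of-span input uncertainty and a 0.25% module uncertainty
    print(propagate_through_module(sqrt_extractor, 0.50, 0.01, 0.0025))   # ~0.0075 of output span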

C.6 Calculating Total Channel Uncertainty

The calculation of an instrument channel uncertainty should be performed in a clear, straightforward process. The actual calculation can be completed with a single loop equation containing all potential uncertainty values or by a series of related term equations. Either way, a specific channel calculation should be laid out to coincide with a channel's layout from process measurement to final output module or modules, using the formulas described previously in Sections 4.4.9 and 4.4.12 (setpoints) and 4.4.8 (indication).

Depending on the loop, the uncertainty may be calculated for a setpoint(s), indication function, or control function. In some cases, all three functions may be calculated. Because each function will typically use different end-use devices, the channel uncertainty is calculated separately for each function.

Components for these equations are generally built as follows:

1. Per Section 4.3.1, an instrument loop may contain several discrete instruments (modules) that process the measurement signal from sensor to display, or from sensor to trip unit. An uncertainty calculation would determine the expected uncertainty for the selected instrument loop and each discrete component could have several uncertainty terms contributing to the overall expression. The overall uncertainty calculation for the device (Ai) may contain any or all (or other) of the following uncertainty terms.
2. Per Section 4.4.1, AL is determined from analysis of loop device error (Ai). All individual device errors must be determined on the basis of the environmental conditions (normal, trip, post-accident, etc.) applicable to the event and function time for which the loop accuracy applies. Once all the accuracy error contributions for a particular instrument are identified, they should be combined using the SRSS method to determine total device accuracy. In performing the SRSS combination, the individual level of confidence of each term (sigma level) should be accounted for to ensure the resultant device accuracy error is a 2-sigma value.
3. CL is determined from two basic components: the As-Left Tolerance (ALT) and the Measuring and Test Equipment (M&TE) uncertainty.

Per Appendix H, M&TE error consists of the error associated with each calibration tool or device used to calibrate the individual devices in the loop (including reading error) and the error associated with the Reference Standards used to calibrate the calibration tools.

Per Appendix I, all potential errors from M&TE are controlled by 100% testing and can therefore be assumed as 3 sigma values.

4. Per Section 4.4.4, DL is determined from analysis of loop device drift error. All individual device drift errors must be determined on the basis of the environmental conditions (normal, trip, post-accident, etc.) applicable to the event and function time for which the loop accuracy applies, and adjusted to a common drift interval. Once the drift error contribution for a particular instrument is identified, it is combined with each loop device drift term using the SRSS method to determine total loop drift. In performing the SRSS combination, the individual level of confidence of each term (sigma level) should be accounted for to ensure the resultant drift error is a 2-sigma value. DL is determined as shown in Section 4.4.4.


5. Per Sections 4.4.6, C.3.1, and C.3.2, PMA and PEA are established as uncertainties to account for measurement errors that lie outside the normal calibration bounds of the channel.
6. Per Section 4.4.8.2, the biases for all modules should be accounted for and combined outside the square root radical (see the sketch following this list).
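A hypothetical assembly of these components is sketched below in Python for illustration only; the term values are placeholders, and the exact grouping is defined by the Section 4.4 formulas, not by this sketch.

    import math

    def total_channel_uncertainty(al, cl, dl, pma, pea, biases=0.0):
        """SRSS of the 2-sigma random loop terms plus algebraic bias, percent of span."""
        return math.sqrt(al**2 + cl**2 + dl**2 + pma**2 + pea**2) + biases

    # Placeholder 2-sigma values (percent of span)
    print(total_channel_uncertainty(al=0.75, cl=0.35, dl=1.12, pma=0.50, pea=0.25, biases=0.10))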

Table C-1
Channel Uncertainty/Setpoint Calculation Checklist
(For each task, mark Completed? Yes or No)

(1) Are the purpose and objectives clearly defined?
(2) Are standard assumptions used as appropriate, and are any new assumptions clearly justified and/or identified as requiring confirmation?
(3) Are inputs/outputs/references appropriately used, identified to the latest revisions, and attached if required?
(4) Diagram the instrument channel.
(5) Identify functional requirements, including actuations and any EOP setpoint requirement.
(6) Identify operating times for functions.
(7) Identify the environment associated with the functions during the defined operating times.
(8) Identify the limiting environment and function.
(9) Identify the Process Measurement Accuracy (PMA) and Primary Element Accuracy (PEA) associated with each function, and identify all drawings/walkdowns/other references required to calculate the values.
(10) Identify biases due to linear approximations of nonlinear functions (RTDs). Determine if the biases are of concern over the region of interest for the setpoint.
(11) Identify any modules with non-unity gains.
(12) Identify the transfer function for each module with a non-unity gain.
(13) For each module, identify normal environment uncertainty effects, as applicable: Vendor Accuracy (VA), Vendor Drift (VD), Temperature effects (ATE), Radiation effects (RE), Power supply effects (PSE), Static pressure effects (SPE), Overpressure effects (OPE), Deadband (DB), Measuring and test equipment uncertainty (MTE), Turndown Ratio Effect (TD), and Indicator Reading Error (IRE).
(14) For each module, identify harsh environment uncertainty effects, as applicable: Accident temperature effects (ATE), Accident radiation effects (RE), Humidity effects (HE), Seismic effects (SE), and the worst case between seismic and harsh environment used to establish the AV and NTSP.
(15) For electrical penetrations, splices, terminal blocks, or sealing devices in a harsh environment, are current leakage effects (IRA) determined?
(16) Classify each module and process effect as random or bias. Determine if any of the random terms are dependent. Combine dependent random terms algebraically before squaring in the SRSS.
(17) Combine random effects for each module by SRSS. Add bias effects algebraically outside the SRSS.
(18) If the instrument channel has a module with non-unity gain, the total uncertainties in the input signal to the module must be determined, the module transfer function effect on this uncertainty calculated, and the result combined with the non-unity gain module and downstream module uncertainties to determine total channel uncertainty.
(19) Have the ALT and AFT been appropriately identified for each device?
(20) Has M&TE been appropriately identified and have the values been correctly calculated, using the guidance of calculation IP-C-0089 (Ref. 5.30), as a minimum?
(21) Does the drift interval meet or exceed the calibration interval for each device?
(22) Are the appropriate equations used for the type of calculation (i.e., setpoint or indication)?
(23) Have values such as AV, NTSP, ALT, AFT, etc. been converted to the units required by the calibration procedure?
(24) Have the existing AV and setpoint been preserved and, if not, have all efforts been made to minimize the terms that affect calculation of the AV and NTSP?
(25) Do the conclusions verbalize that the objectives were met, and are they graphically presented?
(26) Does Attachment 1 identify the head correction for the loop and identify all drawings/walkdowns/other references required to calculate the head correction?
(27) Does Attachment 2 present all the information required by C&I maintenance and calibration procedures? Examples are: M&TE model and ranges or equivalent identified; AV, NTSP, ALT, AFT given in the appropriate units and precision required by calibration procedures.
(28) Have the cover pages and table of contents been prepared correctly?

C.7 Nominal Trip Setpoint Calculation

An uncertainty calculation defines the instrument loop uncertainty through a specific arrangement of instrument modules. This calculation is then used to determine an instrument setpoint based upon the safety parameter of interest. The relationship between the setpoint, the uncertainty analysis, and normal system operation is shown in Figure C-10.

[Figure C-10 Setpoint Relationships: the process safety limit, reduced by analysis margin (transient response, analysis modeling error, response time, etc.), leads to the analytical limit; process uncertainties (accident environmental effects, process measurement effects, primary element effects, etc.) and device uncertainties (channel modules, temperature, environment, humidity effects, etc.) separate the analytical limit from the allowable value; LER avoidance margin (ALT, M&TE, drift) separates the allowable value from the nominal trip setpoint with its as-found and as-left tolerances; spurious trip avoidance margin separates the setpoint from the operating limit, the limits of the normal operating range (including transients), and the normal operating value]

The information provided in Figure C-10 prompts several observations:

  • The relationships shown can vary between applications or plants and are provided for illustrative purposes only.
  • The setpoint has a nominal value. The upper and lower limits for the setpoint shown represent the allowed AFT & ALT tolerances for the setpoint. Typically, an instrument found within the band defined by the as-left tolerance does not require an instrument reset.
  • The setpoint relationship shown assumes that the process increases to reach the setpoint. If the process decreased towards the setpoint, the relationships shown in Figure C-10 would be reversed around the setpoint.
  • The as-found tolerance is wider than the as-left tolerance and accounts for expected drift or certain other normal uncertainties during normal operation. Instruments found within the as-found tolerance, but outside the as-left tolerance require resetting with no further action. Instruments found outside the as-found tolerance require resetting and an evaluation to determine if the loop is functioning properly.

  • Safety limits are established to protect the integrity of systems or equipment that guard against the uncontrolled release of radioactivity. Process limits may also be established to protect against the failure, catastrophic or otherwise, of a system.

  • Analytical limits are established to ensure that the safety limit is not exceeded. The analytical limit includes the effects of system response times or actuation delays to ensure that the safety limit is not exceeded.
  • The allowable value is a value at or before which the trip setpoint should function when tested periodically, accounting for instrument drift or other uncertainties associated with the test, in order to protect the analytical limit. A calibrated or loop-verified setpoint found within the allowable value region, but outside the instrument's as-found tolerance, is usually considered acceptable with respect to the analytical limit and allowable value. The instrument must be reset to return it within the allowed as-left tolerance. A setpoint found outside its as-found tolerance but within the allowable value should be evaluated for functionality. A setpoint found outside the allowable value region requires an evaluation for operability. Normally, an allowable value is assigned to Technical Specifications parameters that also have an analytical limit.


  • The trip setpoint is the desired actuation point that ensures, when all known sources of measurement uncertainty are included, that an analytical limit is not exceeded. Depending on the setpoint, additional margin may exist between the trip setpoint and the analytical limit. The trip setpoint is selected to ensure the analytical limit is not exceeded while also minimizing the possibility of inadvertent actuations during normal plant operation.

APPENDIX D
EFFECT OF INSULATION RESISTANCE ON UNCERTAINTY

D.1 Background

Under the conditions of high humidity and temperature associated with either a Loss of Coolant Accident (LOCA) or high energy line break (HELB), the insulation resistance (IR) may decrease in instrument loop components such as cables, splices, connectors, containment penetrations, and terminal blocks. A decrease in IR results in an increase in instrument loop leakage current and a corresponding increase in measurement uncertainty of the process parameters, defined in Section 2.2 as IRA.

Degraded IR effects during a LOCA or HELB are a concern for instrumentation circuits due to the low signal current levels. A decrease in IR can result in substantial current leakage that should be accounted for in instrument setpoint and post accident monitoring uncertainty calculations. The NRC expressed concern with terminal block leakage currents in Information Notice 84-47. More recently, the NRC stated in Information Notice 92-12 (Ref. 5.12) that leakage currents should be considered for certain instrument setpoints and indication.

This Appendix provides an overview of IR effects on standard instrumentation circuits and provides examples of the effect of IRA on instrument uncertainty. Specifically, this Appendix addresses the following:

  • Qualitative effects of temperature and humidity on IR
  • Analytical methodology for evaluating IR effects on instrument loop performance
  • Technical information needed to perform an evaluation
  • Application of results to uncertainty calculations
  • Consideration of inherent margins in the analytical methodology

D.2 Environmental Effects on Insulation Resistance

IR is affected by changes in the environment. ASTM Standard D257-91 (Ref. 5.31) provides a discussion of the factors that affect the resistance of a material. This ASTM standard discusses material properties in general; it does not limit itself to cables or any other particular type of construction. Factors that affect the resistance or the ability to measure resistance include:

  • Temperature
  • Humidity
  • Time of electrification (electrical measurement of resistance)
  • Magnitude of voltage
  • Contour of specimen
  • Measuring circuit deficiencies
  • Residual charge

Temperature and humidity effects are of particular interest for circuits that may be exposed to an accident harsh environment. The resistance of an organic insulating material changes exponentially with temperature. Often, this variation can be represented in the form:

R = B e^(m/T)

where,
R = Resistance of an insulating material
B = Proportionality constant
m = Activation constant
T = Absolute temperature in degrees Kelvin

One manufacturer predicts a similar exponential variation of IR with respect to temperature for their cable; the manufacturer provides the following equation for determining IR at a given temperature:

IR = (4 × 10^15) log(D/d) e^(-0.079 T)

where:
IR = calculated cable insulation resistance, megohms per 1,000 ft
T = temperature, kelvin
d = diameter of conductor
D = diameter of conductor and insulation

Example D-1

Using the above expression, a sample IR will be calculated at 300°F (422 K). Cable heatup due to current flow is neglected for instrument cables since they carry no substantial current. Typical values for d and D are 0.051 in. and 0.111 in., respectively, for a 16 AWG conductor.

IR = (4 × 10^15) log(0.111/0.051) e^(-0.079 × 422) = 4.5 megohms per 1,000 ft

Using the above equation, a graph of the cable IR variation with temperature is provided in Figure D-1. This figure is illustrative only and does not necessarily apply to other configurations or materials.

Figure D-1  Typical Cable Insulation Resistance Variation with Temperature (cable IR, megohms per 1,000 ft, versus temperature, °F)
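The Example D-1 arithmetic can be reproduced with a short numerical sketch (Python, illustrative only); the 4 × 10^15 constant, the 0.079 coefficient, and the conductor diameters are the example values quoted above and are not generic cable data.

import math

def cable_ir_megohm_per_1000ft(temp_kelvin, d_in=0.051, D_in=0.111):
    # Illustrative cable IR model from Example D-1 (megohms per 1,000 ft)
    return 4e15 * math.log10(D_in / d_in) * math.exp(-0.079 * temp_kelvin)

# 300 degF = 422 K, as in Example D-1
print(round(cable_ir_megohm_per_1000ft(422.0), 1))   # approximately 4.5 megohms per 1,000 ft

Evaluating the same function over a range of temperatures generates the data behind Figure D-1.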

Insulation resistance of solid dielectric materials decreases with increasing temperature and with increasing humidity. Volume resistance of the insulating material is particularly sensitive to temperature changes. Surface resistance changes widely and very rapidly with humidity changes. In both cases, the change in IR occurs exponentially.

ASTM D257 (Reference 5.31) discusses temperature and humidity as a combined effect on IR. In some materials, a change from 25°C to 100°C may change IR by a factor of 100,000 due to the combined effects of temperature and humidity. The effect of temperature alone is usually much smaller.

IR is a function of the volume resistance as well as the surface resistance of the material. In the case of an EQ test that includes steam and elevated temperatures, the minimum IR is expected near the peak of the temperature transient in a steam environment.

Condensation of steam and chemical spray products will reduce the surface resistance substantially.


D.3 Analytical Methodology

D.3.1 Floating Instrument Loops (4 to 20 mA or 10 to 50 mA)

Instrument loops for pressure, flow or level measurement normally use a 4 to 20 mA (or 10 to 50 mA) signal. The instrument circuit typically consists, as a minimum, of a power supply, transmitter (sensor), and a precision load resistor from which a voltage signal is obtained for further signal processing. A typical current loop (without IR current leakage) is shown in Figure D-2.

Figure D-2  Typical Instrument Circuit

In a current loop, the transmitter adjusts the current flow by varying its internal resistance, RT, in response to the process. The transmitter functions as a controlled current source for a given process condition. The signal processor load resistor, RL, is a fixed precision resistor. Under ideal conditions, the voltage drop across RL is directly proportional to the loop current and normally provides the internal process rack signal.


If current leakage develops in an instrument loop due to a degraded insulation resistance, the path is represented as a shunt resistance, Rs, in parallel with the transmitter as shown in Figure D-3.


Figure D-3  Instrument Circuit with Current Leakage Path

Note that Figure D-3 applies only to floating instrument loops. In a floating instrument loop, the signal is not referenced to instrument ground. Thus, even if there is a low IR between cables or other instrument loop components to ground, the effect on instrument loop performance will be negligible as long as there is not a return path to ground for current flow. In this case, the only potential current leakage path is from conductor to conductor across the transmitter as shown in Figure D-3. See Section D.3.2 for the necessary analytical methodology if the signal negative is grounded.


Leakage current disrupts the one-to-one relationship between the transmitter current and load current, such that a measurement error is introduced at the load. For a standard 4 to 20 mA (or 10 to 50 mA) instrument loop, the error is always in the higher-than-actual direction, meaning that the load current will be higher than the transmitter output current. The magnitude of the error in percent span caused by leakage is defined as the ratio of leakage current to the 16 mA span of a 4 to 20 mA loop, or:

Is(% span) = (Is / 16 mA) × 100

where Is = shunt (leakage) current. From Figure D-3, Is can be expressed in terms of voltage, current, and resistance in the current loop consisting of a power supply, load resistance, and IR (shunt resistance) as follows:

Vp = IL RL + Is Rs

where:
Vp = power supply voltage
IL = current through the load resistor
Is = shunt current
RL = rack load resistance
Rs = equivalent shunt (IR) resistance

Solving for Is:

Is = (Vp - IL RL) / Rs

Converting mA to amps and normalizing for a 16 mA span yields the following result:

Is(% span) = [(Vp - IL RL) / (Rs × 0.016)] × 100

The error due to current leakage is inversely proportional to the IR, or Rs in the above equation. As Rs decreases, the loop error due to current leakage increases. Note that the equation has been simplified to provide the error directly in terms of percent span.

For this case, the total instrument span is 16 mA for a 4 to 20 mA instrument loop.

Rs is an equivalent shunt resistance obtained from several parallel shunt paths. A typical circuit inside containment, showing all potential parallel current leakage paths, is shown in Figure D-4.



Figure D-4  Potential Current Leakage Paths

As depicted in Figure D-4, the current leakage paths include the following:

Rsp1 = splice at sensor
Rc = field cable
Rsp2 = splice between the field cable and the containment penetration
Rp = containment penetration

Figure D-4 is intended to illustrate the various current leakage paths that might be present inside containment or a steam line break area; however, it is not necessarily complete. The containment penetrations might include the use of an extension (or jumper) cable to accomplish the transition from the field cable to the electrical penetration pigtail. Additional cables and splices may also be installed in the circuit, and each additional component should be included in the model.

Example D-2

Suppose we want to determine the IR that will introduce a 5% uncertainty into the instrument loop. The instrument loop conditions that yield the worst case for this example are as follows:

Vp = 50 VDC (highest typical loop power supply voltage)

IL = 4 mA (0.004 A) (lowest possible loop current)

RL = 250 ohms (lowest typical total load resistance)


Using the last equation from Section D.3.1 above:

5 = [(50 - (0.004 × 250)) / (Rs × 0.016)] × 100

Rs = 61,250 ohms

For a 10 to 50 mA loop, the result is as follows:

5 = [(50 - (0.010 × 100)) / (Rs × 0.040)] × 100

Rs = 24,500 ohms

The interpretation of the above result is that any combination of current leakage paths with an equivalent IR of 61,250 ohms can cause an error of 5% of span in a 4 to 20 mA loop. Note that the above example is based on a worst-case configuration. Any decrease in power supply voltage, or an increase in total load resistance or current, will result in a smaller percent error for a given shunt resistance. Note that leakage current is a bias, causing the load current to always be higher than the transmitter current.
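As a numerical sketch of the Section D.3.1 relationships (Python, illustrative only; the supply voltage, loop current, load resistance, and span values are the Example D-2 inputs above, and the function names are not from any plant procedure):

def leakage_error_pct_span(v_supply, i_loop_amps, r_load, r_shunt, span_amps=0.016):
    # Percent-span error from IR leakage current, per Section D.3.1
    return (v_supply - i_loop_amps * r_load) / (r_shunt * span_amps) * 100.0

def shunt_resistance_for_error(v_supply, i_loop_amps, r_load, error_pct, span_amps=0.016):
    # Equivalent shunt (IR) resistance that produces a given percent-span error
    return (v_supply - i_loop_amps * r_load) / (error_pct / 100.0 * span_amps)

print(shunt_resistance_for_error(50.0, 0.004, 250.0, 5.0))           # 61,250 ohms (4 to 20 mA loop)
print(shunt_resistance_for_error(50.0, 0.010, 100.0, 5.0, 0.040))    # 24,500 ohms (10 to 50 mA loop)
print(leakage_error_pct_span(50.0, 0.004, 250.0, 61250.0))           # 5.0% of span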

D.3.2 Ground Referenced Instrument Loops (4 - 20 mA or 10-50 mA)


The methodology provided in Section D.3.1 can be used if the signal negative is connected to ground; however, the circuit model is different in this case because there are more current leakage paths than in a floating circuit. As discussed in Section D.3.1, a floating circuit is not ground-referenced; therefore, current leakage to ground is not likely since there is no return path for current flow at the instrument power supply. In the case of an instrument loop with the signal negative grounded at the instrument power supply, leakage paths to ground are possible since there is a return path to ground. This configuration is shown in Figure D-6.




Figure D-6  Current Leakage Paths for a Ground-Referenced Instrument Loop

As shown in Figure D-6, the current leakage paths are as follows:

Rs1 = conductor-to-conductor equivalent IR per Section D.3.1
Rs2 = positive conductor-to-ground equivalent IR
Rs3 = negative conductor-to-ground equivalent IR

All of the above terms are parallel equivalent resistances that are calculated from cables, connectors, splices, etc., in accordance with the equations from Section D.3.1. Note that current leakage path Rs3 can be neglected since it is effectively grounded at each end. The final configuration for analysis purposes is shown in Figure D-7.



Figure D-7  Circuit Model for a Ground-Referenced Instrument Loop

The analysis of this circuit is identical to the methodology presented in Section D.3.1. Note that, since there are additional current leakage paths, a ground-referenced instrument loop may be more susceptible to instrument uncertainty when its components are exposed to high temperature and humidity.

D.3.3 Resistance Temperature Detector Circuits (RTDs)

Resistance temperature detectors (RTDs) provide input to the Reactor Protection System and the Engineered Safety Features Actuation System. RTDs are also used for several post-accident monitoring functions. Because of these applications, the effect of degraded insulation resistance must be considered for RTD circuits.

However, because of the difference in signal generation and processing, the analysis methodology is different from that used for 4 to 20 mA instrument loops.

An RTD circuit measures temperature by the changing resistance of a platinum RTD, rather than by a change in current. A typical 3-lead RTD circuit is shown in Figure D-8 (bridge and resistance-to-current [R/I] signal conditioner circuitry not shown for simplicity). Shunt resistances Rs and Rss represent possible leakage current paths for this configuration.


Figure D-8  RTD Circuit with Insulation Resistance Shown

The compensating lead wire resistance is approximately 0 ohms compared to the associated IR, Rss. Therefore, Rss is effectively shorted by the lead wire and will have no effect on the resistance signal received at the signal conditioner. This concept also applies to 4-lead RTD circuits. Shunt resistance (Rs) is in parallel with the RTD. The R/I signal conditioner will detect the equivalent resistance of the parallel resistances Rs and RRTD. For this configuration, the equivalent resistance is RE.

RE = (RRTD × Rs) / (RRTD + Rs)

The error, E, in °F introduced by the shunt resistance is defined as the difference between the temperature corresponding to the RTD resistance and the temperature corresponding to the equivalent resistance. In equation form:

E (°F) = Temp(RE) - Temp(RRTD)

Expressed in percent span:

E (%) = [(Temp(RE) - Temp(RRTD)) / Span] × 100

Because the equivalent resistance seen by the signal conditioner will always be less than the RTD resistance, the resulting error will always be in the lower-than-actual temperature direction. In other words, the indicated temperature will always be lower than the actual temperature by the error amount.


Example D-3

As an example, calculate the IR in an RCS wide-range RTD instrument loop that will cause a 5% error in temperature measurement. The instrument span is 700°F. Perform the evaluation at an RTD temperature of 700°F.

-5 = [(Temp(RE) - 700) / 700] × 100

or Temp(RE) = 665°F

From standard 200 Ω RTD tables, the corresponding resistance is approximately 466 ohms. This is the equivalent resistance RE. The RTD resistance at 700°F is approximately 480 ohms. So, the IR shunt resistance can be calculated by Equation D-6 (the equivalent resistance equation above):

466 = (480 × Rs) / (480 + Rs)

or Rs = 15,977 Ω
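The Example D-3 arithmetic can be checked with a short sketch (Python, illustrative only); the 480 ohm and 466 ohm values are the RTD table lookups quoted above, and the table interpolation itself is not reproduced here.

def equivalent_resistance(r_rtd, r_shunt):
    # Parallel combination of the RTD and the shunt (IR) path seen by the R/I signal conditioner
    return r_rtd * r_shunt / (r_rtd + r_shunt)

def shunt_for_equivalent(r_rtd, r_equiv):
    # Shunt (IR) resistance that produces a given equivalent resistance
    return r_rtd * r_equiv / (r_rtd - r_equiv)

print(round(shunt_for_equivalent(480.0, 466.0)))     # about 15,977 ohms
print(round(equivalent_resistance(480.0, 15977.0)))  # about 466 ohms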

D.4 Information Required to Perform Analysis

The following information is normally obtained to complete an analysis of current leakage effects:

  • Cable length and type in the area of interest
  • Number of splices in the area of interest
  • List of all potential current leakage sources, e.g., cables, containment penetrations, etc.
  • EQ test report information providing measured insulation resistance for each component
  • Instrument circuit power supply maximum rated output voltage
  • Total instrument loop loading for the circuits of interest
  • Instrument loop span (4 to 20 mA, 0-700°F, etc.)
  • Power supply configuration, e.g., floating or grounded

Example D-4

Assuming the following design inputs, calculate the maximum uncertainty associated with IR current leakage effects. Note: This is an example only and does not apply to a particular configuration.

Containment electrical penetration IR: 4.4 × 10^6 Ω (obtained from EQ file)

Cable IR: 120 × 10^6 Ω/ft (obtained from EQ file)

Cable length inside containment is 250 ft (from design documents)

Note that cable IR is modeled as parallel resistances; in this case, 250 parallel resistances, each with a resistance of 120 × 10^6 Ω, or cable IR = 120 × 10^6 / 250 = 0.48 × 10^6 Ω.

Cable splices: 2.9 × 10^6 Ω (obtained from EQ file)

Perform calculation at maximum power supply voltage (assume 48 VDC) and minimum loading (4 mA on a floating loop).

First, calculate equivalent shunt resistance due to all IR paths:

1/Rs = 1/(4.4 × 10^6) + 1/(0.48 × 10^6) + 1/(2.9 × 10^6)

or Rs = 0.38 × 10^6 Ω

The error in percent span is calculated by:

[48 - (0.004 × 250)] / [(0.38 × 10^6) × 0.016] × 100 = 0.77% of span

This is the worst-case configuration, consisting of the minimum IR values from EQ test reports at the minimum loop loading. The uncertainty could be improved by including the actual instrument loop load. Also, the uncertainty could be calculated at the setpoint, which often will have a higher loop current than the 4 mA assumed above.
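The Example D-4 combination of parallel leakage paths and the resulting percent-span error can be reproduced as follows (Python, illustrative only; the IR values are the example EQ-file numbers listed above).

def equivalent_shunt(ir_values_ohms):
    # Parallel combination of all IR leakage paths
    return 1.0 / sum(1.0 / r for r in ir_values_ohms)

cable_ir = 120e6 / 250            # 250 ft of cable modeled as 250 parallel 1-ft segments
paths = [4.4e6, cable_ir, 2.9e6]  # penetration, cable, splice
r_s = equivalent_shunt(paths)
error_pct = (48.0 - 0.004 * 250.0) / (r_s * 0.016) * 100.0
print(round(r_s / 1e6, 2), round(error_pct, 2))   # ~0.38 megohm and ~0.78% of span (0.77% when Rs is rounded to 0.38 megohm)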


D.5 Application of Results to Uncertainty Calculations

Current leakage due to IR is a bias defined as IRA in Section 2.2 and used in the equations described in Section 4.5.4. The direction of the bias depends on the type of circuit as follows:

  • Instrument loops, e.g., 4 to 20 mA or 10 to 50 mA circuits, will indicate higher than actual. The bias term is positive.
  • RTD circuits will indicate lower than actual. The bias term is negative.

D.6 Additional Considerations

Depending on the instrument loop components, the circuit configuration, and the existing margins in a calculation, the first pass on a calculation may indicate less-than-desired setpoint margins. In this case, the input parameters to the calculation can be reviewed for any inherent margin that can be justifiably removed from the analysis. The following should be considered:

  • Worst-case IR values from the EQ test report are typically used. If the worst-case IR values are based on IR-to-ground measurements and the instrument loop of concern is floating, then only conductor-to-conductor leakage need be considered.

This effectively doubles the IR to use for the calculation, since the current leakage depends on the series IR of both conductors' insulation.

  • If the EQ test attempted to envelope all plants and all postulated accidents with a high peak temperature, e.g., 450°F, but the plant requirement is a lesser value, such as 300°F, then margin is contained in the test report. The IR of an insulating material decreases exponentially with temperature. The EQ test report should be reviewed to determine the measured IR at lower temperatures.

  • The calculations, References 5.22, 5.23, & 5.24, may have been performed for the worst-case circuit configuration for the sake of simplicity. In this case, the calculation probably assumed the following circuit conditions:
  • Maximum power supply voltage
  • Minimum instrument loop loading
  • Minimum instrument loop current, e.g., 4 mA or 10 mA

If the actual circuit configuration and the current corresponding to the actual setpoint differ from the above assumptions, then the CI-01-00 calculation can calculate IRA per Appendix D for the actual loop configuration and required setpoint to eliminate unnecessary conservatism.

  • Consider the time during which the process parameter is required. If the instrument loop performs a trip function prior to the peak accident transient conditions or if the instrument loop provides a post-accident monitoring function after the peak accident transient conditions have passed, a lower value of IRA may be defendable based upon a review of the appropriate EQ test reports.
  • Consider the signal cable routing in each environmental zone. If the signal cable routes through multiple zones, each with a unique peak temperature, a lower value of IRA may be defendable based upon calculation of the effect for each zone.

D.7 Concluding Remarks

The effect of IRA on instrument uncertainty is easily included in a setpoint or indication uncertainty calculation. This Appendix provides an analytical basis for current leakage calculations and discusses options to consider when the calculated results exceed the available margin. If a bounding IRA value for a given device has been established per References 5.22, 5.23, and 5.24, and the values are acceptable for use in the setpoint or indication uncertainty calculation, then no further action is required.

Current leakage due to IR is not expected during normal operation.

However, the methodology presented in this Appendix D could be used to determine IR effects during normal environmental conditions.

Cable insulation resistance typically exceeds 1 megohm during normal operation, which results in a negligible contribution to the overall uncertainty.


APPENDIX E  FLOW MEASUREMENT UNCERTAINTY EFFECTS

E.1 Uncertainty of Differential Pressure Measurement

Differential pressure transmitters are generally used for flow measurement. The differential pressure measurement is normally obtained across a flow restriction such as a flow orifice, nozzle, or venturi. Each type of flow measurement device is briefly described below:

  • A flow orifice is a thin metal plate clamped between gaskets in a flanged piping joint. A circular hole in the center, smaller than the internal pipe diameter, causes a differential pressure across the orifice plate that is measured by the differential pressure transmitter. A flow orifice is inexpensive and easy to install, but it has the highest pressure drop of all flow restrictor types.
  • The flow nozzle is a metal cone clamped between gaskets in a flanged piping joint so that the cone tapers in the direction of fluid flow. The nozzle does not cause as large a permanent reduction in pressure as does the orifice because the entrance cone guides the flow into the constricted throat section, reducing the amount of turbulence and fluid energy loss.
  • A flow venturi is a shaped tube inserted in the piping as a short section of pipe. The venturi has entrance and exit cones that serve as convergent and divergent nozzles, respectively, guiding the flow out of, as well as into, the constricted throat area. The venturi design is the most efficient and accurate of the flow restrictors. However, it is also the most expensive and difficult to maintain.

Regardless of how the pressure drop is created, flow transmitters measure the differential pressure across the flow restrictor. The high-pressure connection is always made upstream of the flow restrictor. The low-pressure connection is made downstream of orifices and nozzles (the exact location can vary) or at the constricted throat section of a venturi.

Flow is proportional to the square root of the differential pressure. This means that flow and differential pressure have a nonlinear relationship. The uncertainty also varies as a function of the square root relationship. The following example considers flow accuracy as a function of flow rate.


Example E-1

This example is illustrative only and does not directly correlate to any particular system flow rates or designs. However, the relative change in accuracy as a function of flow is considered representative of expected performance. A flow transmitter is used to monitor system flow. The instrument loop diagram is shown in Figure E-1.

Figure E-1  Flow Monitoring Instrument Loop Diagram (flow element/orifice, flow indicator, and isolation signal shown)

The flow transmitter measures the differential pressure across the flow orifice. The relationship between the flow in gpm and the differential pressure in inches is given by:

Flow = k √(ΔP/ρ)

The constant, k, is the flow constant for a specified configuration, and the term ρ is the density of water at the design operating temperature (refer to ASME MFC-3M-1989, Reference 5.7, for a detailed explanation of the flow equation). If we assume that the fluid temperature is essentially constant, the density can be incorporated into the flow constant and the above expression simplifies to:

Flow = k √ΔP

For this example, and assuming constant fluid temperature, the maximum flow is given as 1,500 gpm at a differential pressure of 100 inches. Therefore, the flow constant is:

k = Flow / √ΔP = 1,500 / √100 = 150

Assume that the various manufacturers provided the following measurement uncertainties:

Flow Orifice Accuracy (PEA) = ±1.5%
Flow Transmitter Accuracy (VAT) = ±0.5%
Drift (VDT) = ±1.0%
Temperature Effects (ATET) = ±0.5%
Indicator Accuracy (VAI) = ±0.5%
Drift (VDI) = ±1.5%
Input Resistor Accuracy (VAR) = ±0.1%

Assume that all of the above uncertainty terms are random and independent for this example. The transmitter is providing an output signal proportional to the differential pressure across the flow orifice.

For this reason, we should first determine the uncertainty in our differential pressure measurement. The flow uncertainty can be estimated by taking the square root of the sum of the squares of the individual component uncertainties. The following equation is shown for example only and does not replace the equations presented in Section 4.5.4:

Z = (PEA² + VAT² + VDT² + ATET² + VAI² + VDI² + VAR²)^(1/2)

Z = ±(1.5² + 0.5² + 1.0² + 0.5² + 0.5² + 1.5² + 0.1²)^(1/2)

= ±2.5% = ±2.5 inches ΔP

Now, remember that our understanding of flow is based on the square root relationship between flow and differential pressure. Because the relationship is not linear, we must consider the flow uncertainty at specific points. We already determined that flow for this particular application is related to differential pressure by the following expression:

Flow = 150 (ΔP)^(1/2)

Table E-1 provides the flow-to-ΔP relationship at different flow points:

Percent of Full Scale Flow    Flow (gpm)    Differential Pressure (inches)
100%                          1,500         100.00
75%                           1,125         56.25
50%                           750           25.00
25%                           375           6.25
10%                           150           1.00

Table E-1  Flow Versus Differential Pressure for Example E-1

Now, let's estimate our uncertainty in flow for each of the above flow rates based on the ±2.5 inches of measurement uncertainty in differential pressure.

100%: Flow = 150 √(100.00 ± 2.5) = 1,500 +19/-19 gpm
75%:  Flow = 150 √(56.25 ± 2.5) = 1,125 +25/-25 gpm
50%:  Flow = 150 √(25.00 ± 2.5) = 750 +37/-38 gpm
25%:  Flow = 150 √(6.25 ± 2.5) = 375 +69/-85 gpm
10%:  Flow = 150 √(1.00 ± 2.5) = 150 +130/-150 gpm
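The flow uncertainties above can be reproduced with the following sketch (Python, illustrative only; k = 150 and the ±2.5 inch differential pressure uncertainty are the Example E-1 values). Negative differential pressures are clipped at zero, which is why the 10% point can read as low as zero indicated flow.

import math

K = 150.0      # flow constant from Example E-1 (gpm per square root of inches of water)
DP_UNC = 2.5   # differential pressure measurement uncertainty, inches of water

def indicated_flow_gpm(dp_inches):
    # Indicated flow for a measured differential pressure (negative dP clipped to zero)
    return K * math.sqrt(max(dp_inches, 0.0))

for dp in (100.0, 56.25, 25.0, 6.25, 1.0):
    nominal = indicated_flow_gpm(dp)
    plus = indicated_flow_gpm(dp + DP_UNC) - nominal
    minus = indicated_flow_gpm(dp - DP_UNC) - nominal
    print(f"{nominal:6.0f} gpm  +{plus:5.1f} / {minus:6.1f} gpm")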

If the flow versus the uncertainty of that flow measurement is graphed, the relative uncertainty at low flow conditions is readily apparent (see Figure E-2). This example shows the problem of obtaining accurate flow measurements by differential pressure at low flow conditions. The use of more accurate instrumentation would change the magnitude of the uncertainty, but would not affect the relative difference in uncertainty at low flow versus high flow conditions.

Figure E-2  Flow Uncertainty as a Function of Flow Rate (flow uncertainty in percent versus flow rate in percent of full flow)

E.2 Effects of Piping Configuration on Flow Accuracy

Bends, fittings, and valves in piping systems cause flow turbulence.

This turbulence can induce process measurement uncertainties in flow elements. ASME has published guidance for various types of installations showing the minimum acceptable upstream/downstream lengths of straight pipe before and after flow elements. Following this ASME guidance helps reduce the effect of this turbulence. The piping arrangement showing the locations of valves, bends, fittings, etc., can usually be obtained from piping isometric drawings. Reference 5.7, ASME MFC-3M-1989, states that, if the minimum upstream and downstream straight-pipe lengths are met, the resultant flow measurement uncertainty for the piping configuration (not including channel equipment uncertainty) should be assumed to be 0.5%. If the minimum criteria cannot be met, additional uncertainty (at least 0.5%) should be assumed for conservatism, based on an evaluation of the piping configuration and field measurement data, if available.

E.3 Varying Fluid Density Effects on Flow Orifice Accuracy In many applications, process liquid and gas flows are measured using orifice plates and differential pressure transmitters. The measurement of concern is either the volumetric flow rate or the mass flow rate. Many reference books and standards have been written using a wide variety of terminology to describe the mathematics of flow measurement, but in basic form, the governing equations are:

Q = k A (ΔP/ρ)^(1/2)  and  W = k A (ΔP × ρ)^(1/2)

where:
Q = volumetric flow rate
W = mass flow rate
A = cross-sectional area of the pipe
ΔP = differential pressure measured across the orifice
ρ = fluid density
k = constant related to the beta ratio, units of measurement, and various correction factors

As shown above, the density of the fluid has a direct influence on the measured flow rate. Normally, a particular flow-metering installation is calibrated or sized for an assumed normal operating density condition. As long as the actual flowing conditions match the assumed density, additional related process errors should not be present. If the flow-measuring system has been calibrated for the normal low-temperature condition, significant process uncertainties can be induced under accident conditions when higher-temperature (lower-density) water is flowing. Of course, the flow measurement could be automatically compensated for density variations, but this is not the usual practice except on systems such as steam flow measurement.

To examine the effects of changing fluid density conditions, a liquid flow process is discussed. For most practical purposes, k and A can be considered constant. Actually, temperature affects k and A due to thermal expansion of the orifice, but this is assumed to be constant for this discussion in order to quantify the effects of density alone. If the volumetric flow rate, Q, is held constant, it is seen that a decrease in density will cause a decrease in differential pressure (ΔP), causing a measurement uncertainty. This occurs because the differential pressure transmitter has been calibrated for a particular differential pressure corresponding to a specific flow rate. A lower ΔP due to a lower fluid density causes the transmitter to indicate a lower flow rate.

Assuming the actual flow remains constant between a base condition (the density at which the instrument is calibrated, ρ1) and an actual condition (ρ2), an equality may be written between the base flow rate (Q1) and the actual flow rate (Q2), as shown below:

Q1 = Q2
or k A (ΔP2/ρ2)^(1/2) = k A (ΔP1/ρ1)^(1/2)
or ΔP2/ρ2 = ΔP1/ρ1
ΔP2/ΔP1 = ρ2/ρ1
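A brief sketch of the effect described above (Python, illustrative only; the density values are arbitrary illustration inputs, not plant data). With the true volumetric flow held constant, the measured differential pressure scales with density, so the indicated flow scales with the square root of the density ratio.

import math

def indicated_flow_error_pct(rho_actual, rho_calibrated):
    # Percent error in indicated flow when the d/p instrument was calibrated at
    # rho_calibrated but the flowing fluid density is rho_actual (Q held constant):
    # dP_actual = dP_cal * (rho_actual / rho_calibrated), and flow is read as k*sqrt(dP)
    return (math.sqrt(rho_actual / rho_calibrated) - 1.0) * 100.0

# Example: calibrated for 62.0 lbm/ft3 water, actual fluid at 45.0 lbm/ft3
print(round(indicated_flow_error_pct(45.0, 62.0), 1))   # about -14.8% (indicates lower than actual)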

Density is the inverse of specific volume, SV. Accordingly, the relationship above can be restated in terms of specific volume.

ΔP2 = ΔP1 (SV1 / SV2)

E.4 Effects of Cavitating Flows, β Ratios, and Fluid Velocity on Flow Orifice Accuracy

There are three elemental considerations to analyze when evaluating errors in flow measurement. First is the uncertainty of the coefficients used to relate differential pressure to flow rate; this can be termed flow element error or accuracy.

Second is temperature variation during normal operation, which was discussed in Section E.3 for density effects but may also create material property effects such as pipe size variations from thermal expansion. The third is flow rate variation, which will cause the discharge coefficient to vary slightly.

The three primary components of flow element error are:

(1) uncertainty of the discharge coefficient, (2) bore diameter uncertainty, and (3) pipe diameter uncertainty. The diameter ratio is represented as the bore diameter relative to the pipe diameter, or β ratio, and is given as:

diameter ratio (β) = d/D

where d = orifice bore diameter and D = upstream pipe diameter.

As stated, the discharge coefficient can vary with flow rate and cause the flow coefficient to vary. Flow element installation assumes the design condition and therefore a constant flow coefficient (K). Flow variations below the design flow will lower the flow element Reynolds number, and as the Reynolds number falls, the discharge coefficient, C, will rise above the value that existed at design flow such that the relative error is predicted by:

ΔPA / ΔPD = (CA / CD)^(-2)

Therefore, flow below design flow induces a small negative bias error.


APPENDIX F  LEVEL MEASUREMENT TEMPERATURE EFFECTS

F.1 Level Measurement Overview

Differential pressure transmitters are typically used for level measurement involving an instrument loop. One side of a d/p cell is connected to a water column of fixed height (often called a reference leg) and the other side is connected to the fluid whose level is to be measured (see Figure F-1).

Figure F-1  Simplified Level Measurement in a Vented Tank (tank level, reference level, and level transmitter shown)

The measured level in Figure F-1 is determined by the pressure caused by the column of water in the reference leg minus the pressure caused by the water level in the tank:

ΔP = (Lref × γref) - (Ltank × γtank)

where:
Lref = height of liquid in the reference leg
γref = specific weight of liquid in the reference leg
Ltank = height of liquid in the tank
γtank = specific weight of liquid in the tank

Notice in this case that tank level and differential pressure are inversely related. Maximum differential pressure occurs at minimum tank level.


As implied by the above expression, the specific weight of the liquid in the reference leg may not equal the specific weight of the liquid in the tank. The two liquids might be at different temperatures (or might even be different liquids in the case of sealed reference legs).

F.2 Uncertainty Associated with Density Changes

Density changes in the reference leg fluid or the measured fluid can add to the uncertainty of a level measurement made by a differential pressure transmitter. Differential pressure transmitters respond to the hydrostatic (head) pressure caused by a height of a liquid column; for a given height, the response varies as the liquid density varies. The density changes as a function of temperature, which then potentially changes the differential pressure measured by the transmitter. The transmitter cannot distinguish between a difference caused by a level change and a difference caused by a fluid density change.

Two types of level measurement system uncertainties are presented here. Section F.2.1 provides the methodology if no temperature compensation is provided for the vessel level measurement. Section F.2.2 provides the methodology for those cases in which the vessel temperature is measured to provide automatic compensation of the vessel liquid density, but the reference leg is still not compensated.


F.2.1 Uncompensated Level Measurement Systems

The methodology developed and described in this section assumes that vessels are closed and contain a saturated mixture of vapor and water. For this discussion, the reference leg is water-filled and also saturated. Note that the reference leg liquid may well be compressed (subcooled). Figure F-3 shows a closed vessel containing a saturated vapor/water mixture. The symbols used to explain the effect of density variations are provided immediately below Figure F-3.

Figure F-3  Saturated Liquid/Vapor Level Measurement (showing the reference leg height HR and the 0% and 100% level heights HO and H100)

Table F-1 provides the list of symbols used in a level measurement analysis and their explanation.

HW: Height of water                        SVW: Specific volume of water at saturation temperature
HV: Height of vapor                        SVV: Specific volume of vapor
HR: Height of reference leg                SVR: Specific volume of reference leg fluid
HO: Height of 0% indicated level           SGW: Specific gravity of water at saturation temperature
H100: Height of 100% indicated level       SGV: Specific gravity of vapor
ΔP: Differential pressure (inches H2O)     SGR: Specific gravity of reference leg fluid

Note: Any vapor higher than the entrance to the reference leg has an equal effect on both sides of the differential pressure transmitter and can be ignored.

Table F-1  Symbols Used in a Level Measurement Density Effect Analysis

All heights in Table F-1 are referenced to the centerline of the lower level sensing line. HV and HR are measured to the highest possible water column that can be obtained by condensing vapor.

Specific gravity is calculated as the specific volume of water at 68°F divided by the specific volume of the fluid at the stated condition.

Referring to Figure F-3, the differential pressure applied to the transmitter is the difference between the high pressure and the low pressure inputs:

ΔP = Pressure (Hi) - Pressure (Lo)

The individual terms above are calculated by:

Pressure (Hi) = (HR)(SGR) + Static Pressure
Pressure (Lo) = (HW)(SGW) + (HS)(SGS) + Static Pressure

Substituting the above equations into the general expression for differential pressure yields:

ΔP = (HR)(SGR) - (HW)(SGW) - (HS)(SGS)

Referring to Figure F-3, it can be seen that the height of the vapor (HS) is equal to the height of the reference leg (HR) minus the height of the water (HW). Substituting (HR - HW) for HS yields:

ΔP = (HR)(SGR) - (HW)(SGW) - (HR - HW)(SGS)

or

ΔP = [(HR)(SGR - SGS)] + [(HW)(SGS - SGW)]     (Equation F.1)

Using Equation F.1 and substituting for HW the height of water at 0% level (HO) and at 100% level (H100), the differential pressures at 0% (ΔPO) and at 100% (ΔP100) can be determined. Note that HR, HO, and H100 are normally stated in inches above the lower sensing line tap centerline. It is normally assumed that the fluid in both sensing lines below the lower sensing line tap is at the same density if they contain the same fluid and are at equal temperature. The specific gravity terms (SGW, SGR, and SGS) are unitless quantities, which means that ΔP, ΔPO, and ΔP100 are normally stated in inches of water.

The transmitter is calibrated for proper performance at a given operating condition. Before the transmitter calibration requirements can be expressed, it is necessary to define the reference operating conditions in the vessel and reference leg, from which SGW, SGR, and SGS may be determined by the use of thermodynamic steam tables. After the specific gravity terms are known, they can be used in Equation F.1 along with HR, HO, and H100, and the equation solved for the minimum and maximum level conditions, ΔPO and ΔP100.

Provided that the actual vessel and reference leg conditions remain unchanged, the indicated level is a linear function of the measured differential pressure; no density error effects are present. Under this base condition, the following proportionality can be written.

(HW - HO) / (H100 - HO) = (ΔP - ΔPO) / (ΔP100 - ΔPO)

Solving for HW yields:

HW = [(H100 - HO)(ΔP - ΔPO) / (ΔP100 - ΔPO)] + HO

Now, assess the effects of varying the vessel and reference leg conditions from the assumed values. Let an erroneous differential pressure, ΔPU, and an erroneous water level, HU, be developed because of an operating condition different from that assumed for the transmitter calibration. The uncertainty in the water level is given by:

HW ± HU = [(H100 - HO)(ΔP ± ΔPU - ΔPO) / (ΔP100 - ΔPO)] + HO

Or, the uncertainty HU is given by:

HU = (H100 - HO)(ΔPU) / (ΔP100 - ΔPO)

And ΔP100 - ΔPO can be expressed by:

ΔP100 - ΔPO = [(HR)(SGR - SGS) + (H100)(SGS - SGW)] - [(HR)(SGR - SGS) + (HO)(SGS - SGW)]

or

ΔP100 - ΔPO = (H100 - HO)(SGS - SGW)

Thus, the uncertainty HU is given by:

HU = ΔPU / (SGS - SGW)

The term ΔPU is just the difference between the differential pressure measured at the actual conditions, ΔPA, and the differential pressure measured at the base condition, ΔPB:

ΔPU = ΔPA - ΔPB

Assuming that HR and HW are constant (only the density is changing, not the actual levels), ΔPA and ΔPB can be expressed as:

ΔPA = (HR)(SGRA - SGSA) + (HW)(SGSA - SGWA)

ΔPB = (HR)(SGRB - SGSB) + (HW)(SGSB - SGWB)

Substituting into the expression for ΔPU yields:

ΔPU = (HR)(SGRA - SGSA - SGRB + SGSB) + (HW)(SGSA - SGWA - SGSB + SGWB)

Returning to the expression for the uncertainty in measured level, HU, the substitution of the above expression for ΔPU yields:

HU = [(HR)(SGRA - SGSA - SGRB + SGSB) + (HW)(SGSA - SGWA - SGSB + SGWB)] / (SGSB - SGWB)

The above expression for level measurement uncertainty describes the uncertainty caused by liquid density changes in the vessel, reference leg, or both.

F.2.2 Temperature-Compensated Level Measurement System

The previous section describes the analysis methodology for the case in which no temperature compensation is provided to the level measurement system. The next section describes how to account for varying density effects on a differential pressure measurement.

This section clarifies the methodology for a system in which the vessel temperature is monitored and the level measurement system includes automatic temperature compensation to account for the vessel's liquid density changes.

If the temperature inside the vessel is monitored, then the specific gravity of the steam and the water inside the vessel can be corrected as a function of temperature. In the analysis methodology for the water level measurement uncertainty, HU, the following terms become effectively equal because of the automatic correction for temperature:

SGSA = SGSB and SGWA = SGWB

In this case, the vessel density effects are eliminated, but note that the reference leg density changes are not monitored and still require consideration. The uncertainty of the differential pressure measurement reduces to:

ΔPU = (HR)(SGRA - SGRB)

The above equation shows that the differential pressure uncertainty becomes increasingly negative as the actual temperature increases above the reference temperature. As the temperature in the reference leg increases above the reference temperature, the fluid density decreases, causing a negative ΔPU. Returning to Figure F-3, note that a lower differential pressure means that a higher level will be indicated, or a negative ΔPU will cause a positive level uncertainty HU. The magnitude of the error can be estimated by:

HU = (HR) (SGRA - SGRB)/(SGSB - SGWB)

If the transmitter connections were reversed (high-pressure connection swapped with the low-pressure connection to reverse the ΔP), the above discussion would still apply, but the uncertainty would change direction:

ΔPU = (HR)(SGRB - SGRA)

The above equations calculate uncertainties in actual engineering units. If desired, the quantities HU and ΔPU can be converted to percent span units by dividing each term by (H100 - HO) or (ΔP100 - ΔPO), respectively, and multiplying the results by 100%. As discussed above, the sign (or direction of the uncertainty) for ΔPU depends on which way the high- and low-pressure sides of the transmitter are connected to the vessel.


F.2.3 Example Calculation for Uncompensated System

For this example, assume that a level measurement is not compensated for density changes and has the following configuration:

1. HR = 150 in.
2. HO = 50 in.
3. H100 = 150 in.
4. HW = 100 in.
5. Reference conditions:

Vessel temperature = 532°F (saturated water)

Reference leg temperature = 68°F (assume saturated, but could be compressed)

6. Actual conditions:

Vessel temperature = 500°F (saturated water)

Reference leg temperature = 300°F (assume saturated, but could be compressed)

Determine the level measurement uncertainty for this operating condition.

First, calculate the specific gravity terms for each condition by using steam table specific volumes of water (SVW) and specific volumes of vapor (SVV). The following values are calculated:

SGWA = SVW(68°F) / SVW(500°F) = 0.016046 ft³/lbm / 0.02043 ft³/lbm = 0.78541

SGSA = SVW(68°F) / SVS(500°F) = 0.016046 ft³/lbm / 0.67492 ft³/lbm = 0.02377

SGRA = SVW(68°F) / SVW(300°F) = 0.016046 ft³/lbm / 0.01745 ft³/lbm = 0.91954

SGWB = SVW(68°F) / SVW(532°F) = 0.016046 ft³/lbm / 0.02123 ft³/lbm = 0.75582

SGSB = SVW(68°F) / SVS(532°F) = 0.016046 ft³/lbm / 0.50070 ft³/lbm = 0.03205

SGRB = SVW(68°F) / SVW(68°F) = 0.016046 ft³/lbm / 0.016046 ft³/lbm = 1.0

Next, substitute HW = 100 in. and HR = 150 in., as well as the above quantities, into the expression for HU:

HU = [(HR)(SGRA - SGSA - SGRB + SGSB) + (HW)(SGSA - SGWA - SGSB + SGWB)] / (SGSB - SGWB)

   = [150(0.91954 - 0.02377 - 1.0 + 0.03205) + 100(0.02377 - 0.78541 - 0.03205 + 0.75582)] / (0.03205 - 0.75582)

   = +20.2 inches

In percent of span, the uncertainty is given by:

HU% = [(HU) / (H100 - HO)](100%) = [(20.2) / (150 - 50)](100%) = +20.2% span
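The Example F.2.3 result can be checked numerically (Python, illustrative only); the specific gravity inputs are the steam-table ratios computed above, and the function simply evaluates the uncompensated-level uncertainty expression from Section F.2.1.

def level_uncertainty_inches(HR, HW, SGRA, SGSA, SGWA, SGRB, SGSB, SGWB):
    # Uncompensated level measurement uncertainty HU (Section F.2.1), in inches
    numerator = HR * (SGRA - SGSA - SGRB + SGSB) + HW * (SGSA - SGWA - SGSB + SGWB)
    return numerator / (SGSB - SGWB)

HU = level_uncertainty_inches(
    HR=150.0, HW=100.0,
    SGRA=0.91954, SGSA=0.02377, SGWA=0.78541,   # actual: 300 degF reference leg, 500 degF vessel
    SGRB=1.0, SGSB=0.03205, SGWB=0.75582)       # base: 68 degF reference leg, 532 degF vessel
print(round(HU, 1), round(HU / (150.0 - 50.0) * 100.0, 1))   # about +20.2 inches, +20.2% of span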

APPENDIX G  STATIC HEAD AND LINE LOSS PRESSURE EFFECTS

The flow of liquids and gases through piping causes a pressure drop from Point A to some Point B due to fluid friction (see Figure G-1). Many factors are involved, including piping length, piping diameter, pipe fittings, fluid viscosity, fluid velocity, etc. If a setpoint is based on pressure at a point in the system that is different from the point of measurement, the pressure drop between these two points must be taken into account.

Figure G-1  Line Pressure Loss Example (flow from Point A to Point B with a pressure drop between the two points)

Example G-1

Refer to Figure G-1 for this example. If protective action must be taken during an accident when the pressure at Point A exceeds the analysis limit (AL) = 1060 psig, the pressure switch setpoint needs to be adjusted to account for the line loss (30 psig) and channel equipment errors (10 psig) as shown below (it is assumed that the sensing line head effect for the accident condition is negligible in this case).

Setpoint = AL - Line Loss - Total Channel Equipment Uncertainty
         = 1060 - 30 - 10
         = 1020 psig

Note that if the line loss had been neglected and the setpoint adjusted to the analysis limit minus equipment error (1050 psig), the resultant setpoint would be non-conservative. In other words, when the trip occurred, the pressure at Point A could be equal to 1050 + 30 = 1080 psig, which non-conservatively exceeds the analysis limit.

Example G-2

If the pipe had dropped down vertically to Point B, the result would be a head effect plus line loss example. Assume the head pressure exerted by the column of water in the vertical section of piping is 5 psig and that the line loss from Point A to Point B is still equal to 30 psig. Also, assume that the pressure at Point A is not to drop below 1,500 psig without trip action. For this example, the setpoint is calculated as follows:

Setpoint = AL + Head + Total Channel Equipment Uncertainty
         = 1,500 + 5 + 10
         = 1,515 psig
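The two example setpoint adjustments reduce to simple sums and differences, sketched below (Python, illustrative only; the limits, losses, head effect, and channel uncertainty are the example values above).

def high_pressure_trip_setpoint(analytical_limit, line_loss, channel_uncertainty):
    # Example G-1: trip before the pressure at Point A exceeds the analytical limit,
    # accounting for the line loss between Point A and the measurement point
    return analytical_limit - line_loss - channel_uncertainty

def low_pressure_trip_setpoint(analytical_limit, head, channel_uncertainty):
    # Example G-2: trip before the pressure at Point A drops below the analytical limit;
    # the head effect and channel uncertainty are added, and line loss is conservatively neglected
    return analytical_limit + head + channel_uncertainty

print(high_pressure_trip_setpoint(1060, 30, 10))   # 1020 psig
print(low_pressure_trip_setpoint(1500, 5, 10))     # 1515 psig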

Note that the head effect/line loss errors are bias terms, unless they can be calibrated out in the transmitter, in which case this effect can be removed from the channel uncertainty calculation.

The CPS C&I department typically calibrates out head effects during transmitter calibration testing; this must be verified for each channel during the analysis. If head effects are included in the channel uncertainty calculation, the effect must be added to or subtracted from the analytical limit, depending on the particular circumstances, to ensure that protective action occurs before exceeding the analytical limit.


APPENDIX H  MEASURING AND TEST EQUIPMENT UNCERTAINTY

M&TE uncertainty is the inaccuracy introduced by the calibration process due to the limitations of the test instruments. M&TE uncertainty includes three principal components: (1) vendor accuracy of the test equipment, (2) the effect of temperature on the test equipment, and (3) the accuracy of the test equipment calibration process. The first two components are included directly in the M&TE uncertainty, and the third is assumed to be included in the conservatism of the vendor accuracy of the test equipment.

All (100%) of the test equipment is certified to pass the calibration requirements, not just 95%, the common confidence level used for uncertainty calculations. Discussion with vendors shows that the actual accuracy of the test equipment is better than the vendor published values. Both of these provide conservatism in the accuracy of the test equipment and, therefore, conservatism in the M&TE determination. As discussed in Section H.1 below, the standards used to calibrate the test equipment are generally rated 4:1 better than the equipment being calibrated. For these reasons, it is generally accepted that the published vendor accuracy of the test equipment includes the uncertainty of the calibration standard, since the vendor accuracy divided by 4 is negligible in relation to the other uncertainties. For the purposes of setpoint and uncertainty calculations, the total M&TE uncertainty for any module should be based on test equipment that has been calibrated using 4:1 reference standards.

The module calibration also includes an As-Left tolerance (ALT) which can be related to the test equipment uncertainty. An instrument does not provide an exact measurement of the true process value; there is always some level of uncertainty or error in our measurement. The As-Left tolerance is (1) a reflection of the best accuracy that we can realistically obtain or (2) the minimum accuracy that we feel is needed to assure that the process is properly controlled.

For example, a pressure transmitter may have vendor accuracy (VA) of +/-0.1%, but its As-Left tolerance may be allowed to be +/-0.5%.

Thus, the instrument technician is allowed to leave the instrument as-is if it is found anywhere within ±0.5% of the calibration check point. Without any other considerations, we would have to conclude that the calibrated condition of the instrument is only accurate to ±0.5% rather than the device's VA of ±0.1%. If greater accuracy is needed, the calibration procedure should be revised for a tighter As-Left tolerance.

Appendix H provides the details for calculation preparers to consider when evaluating the M&TE uncertainty for a module.


H.1 General Requirements

The control of measuring and test equipment (M&TE) is governed at CPS by procedure CPS 1512.01 (Reference 5.14). This procedure requires the Reference Standards used for calibration to be at least four times (4:1) as accurate as the M&TE being calibrated. In discussion with NSED, loop M&TE is specified as the statistical combination of all of the pieces of input and output M&TE. The instrument and loop calibration procedures, CPS 8801.01 and 8801.02 (References 5.15 and 5.16), require the M&TE to be at least as accurate as the device being calibrated (a 1:1 ratio). CPS does have an M&TE calculation (IP-C-0089, Ref. 5.30) supporting both maintenance selection activities and engineering assumptions used in calculations.

The following discusses specific requirements of this procedure:

1. Reference standards used for calibrating M&TE shall have an uncertainty (error) of not more than 1/4 of the tolerance of the M&TE being calibrated. A greater uncertainty may be acceptable as limited by the state of the art.
2. Total SRSS of M&TE accuracy used for calibrating a loop or component shall have an uncertainty (error) requirement of no more than a 1:1 ratio of the tolerance of the loop or component being calibrated.
3. No measurement and test equipment shall be used if the record date for recalibrating the test equipment has been exceeded.

CPS 1512.01 does not address the accuracy of M&TE with respect to the loop or component being checked for calibration. The accuracy of M&TE is addressed by CPS calculation IP-C-0089 (Reference 5.30). The SRSS of the M&TE devices' accuracy uncertainty will be considered in terms of the VA of the loop or component to be calibrated.

For the purposes of setpoint and uncertainty calculations, the total M&TE uncertainty should be based on the CPS Standard Assumption (Section I.11) that a 4:1 ratio exists between the M&TE and the reference standards; thus CSTD = 0. If the test equipment accuracy is not based on 4:1 reference standards, the required total M&TE uncertainty should be met by using better test equipment for calibration.

In general, it is desirable to minimize the contribution of M&TE to the uncertainty of the loop. Every effort should be made to use the most accurate M&TE available during calibration.


H.2 Uncertainty Calculations Based on Plant Calibration Practices

The M&TE uncertainty included in an uncertainty calculation is based on historical practices and the uncertainty assigned to the M&TE by calculation IP-C-0089 (Ref. 5.30). The implicit design assumption is that M&TE used in the future will be equal to or better than the M&TE used in the past (due to improvements in state-of-the-art test equipment). In order to ensure this assumption is not invalidated by future calibrations, review the M&TE specified in the applicable C&I procedures. Verify that the uncertainty of the M&TE specified (including calibration standards) is bounded by the VA used in the calculation, as shown in the following sections for each type of instrument or configuration.

NOTE: ALT does not have to equal VA. It can be greater or smaller based on the needs of C&I maintenance.

H.2.1 Loop Component

For all components, the M&TE reference accuracy used for calibration should be no greater than the VA of that component.

The calculation of Calibration uncertainty should include both the input and output M&TE. M&TE errors are present with the input signal provided to the input of the sensor as well as with the instrumentation used to measure the output of the sensor (see Figure H-1). The input M&TE is independent from the output M&TE.

Additionally, it should include any other effects on the M&TE equipment, such as ATE and/or IRE.


Figure H-1  Measuring and Test Equipment Uncertainty

An example is given for Figure H-1 for the case of a transmitter (sensor) where VA = ±0.5%. The 1:1 criterion for M&TE would be met by the statistical combination of the input and output M&TE reference accuracies.

VAsensor ≥ (MTEI² + MTEO²)^(1/2)

This comparison should be made for all components in the loop, regardless of whether they have M&TE on both input and output, or multiple M&TE on the input, output, or both.
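A minimal sketch of the comparison above (Python, illustrative only; the ±0.5% vendor accuracy is the transmitter example cited above, and the two M&TE accuracies are assumed values, not plant M&TE data).

import math

def mte_srss(*mte_terms_pct):
    # Square-root-sum-of-squares combination of the input and output M&TE accuracies (% span)
    return math.sqrt(sum(term ** 2 for term in mte_terms_pct))

va_sensor = 0.5                       # vendor accuracy of the transmitter, % span
combined_mte = mte_srss(0.25, 0.35)   # assumed input and output M&TE accuracies, % span
print(round(combined_mte, 3), combined_mte <= va_sensor)   # 0.43, True -> 1:1 criterion met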

H.2.2 Instrument Loops

For an entire instrument loop, the Calibration Error used should be the statistical combination of the As-Left Tolerance (ALT), the Calibration Device Error (Ci), and the Calibration Standard Error (CSTD).

Ci should be the statistical combination of all of the pieces of input and output M&TE, including all uncertainties associated with the M&TE (for example, temperature effect and readability). CPS calculation IP-C-0089, "M&TE Uncertainty Calculation," provides uncertainty values for the most commonly used M&TE.


H.2.3 Example Channel Loop Error Section for a Typical Transmitter/ATM Loop

7.6 Loop Calibration Error (CL)

Loop Calibration Error is determined by the SRSS of the As-Left Tolerance (ALTi), the Calibration Tool Error (Ci), and the Calibration Standards Error (CiSTD) for the individual devices in the loop.

The equation below is used to calculate this effect.

From Section 7.3.3:

CL = ±[Σ(ALTi)² + Σ(Ci)² + Σ(CiSTD)²]^(1/2)  (2σ)

7.6.1 As-Left Tolerance (ALTL)

From Section 7.5:

ALTi PT = ±0.25% span (2σ)

ALTi ATM = ±0.25% span (2σ)

ALTL = ±0.354% span (2σ)

7.6.2 Calibration Tool Error (Ci)

7.6.2.1 Transmitter Calibration Tool Error (CPT)

The IB2INXXXA, B, C, D transmitters located in the Aux. Bldg. (refer to Section 7.2) are calibrated with a Fluke Model 45 DC voltmeter on the slow response setting, capable of measuring 1-5 Vdc, and a 250-ohm precision resistor accurate to ±0.02 ohms. The calibration also requires a test gauge with a range of 0-2000 psig.

This information is from Section 7.0 of Output [calibration procedure listed in output section]. Per Assumption [ ], all M&TE uncertainties are 3σ values.

Per Section 7.4.1:

Transmitter span is 0-1500 psig.

VAPT = ±0.25% span (2σ)

Per Reference [IP-C-0089], the VA values for the M&TE devices are:

Heise (0-2000 psig) = 0.1% FS (3σ)
Fluke 45 (1-5 Vdc, Slow) = 0.065% reading, where the maximum reading is 5 Vdc (3σ)


Instrument Setpoint APPENDIX H - MEASURING AND TEST Calculation Methodology EQUIPMENT UNCERTAINTY REVISION 3 The accuracy of the precision resistor is calculated as follows:

CPR = +/-0.02/250 *100 CPR = +/-0.008% Span (3a)

Per Ref. [CI-01.00, Appendix H, Section H.2.1]:

VAPT ≥ (MTEI² + MTEO²)^1/2
0.25% span ≥ ((0.1% FS/SP)² + (0.065% R/SP)² + (0.008% Span)²)^1/2
(0.0025)(1500 psig) ≥ [((0.001)(2000 psig))² + ((0.00065)(5 V/4 V)(1500 psig))² + ((0.00008)(1500 psig))²]^1/2
3.75 psig ≥ 2.35 psig
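For illustration only, the comparison above can be checked numerically with a short Python sketch. The variable names are ours, not part of this standard, and the values are simply those quoted in this example:

    import math

    span = 1500.0                          # transmitter calibrated span, psig
    va_pt = 0.0025 * span                  # transmitter VA: 0.25% span = 3.75 psig

    heise = 0.001 * 2000.0                 # 0.1% FS of the 0-2000 psig test gauge = 2.0 psig
    fluke = 0.00065 * (5.0 / 4.0) * span   # 0.065% of a 5 Vdc reading over a 4 Vdc span, in psig
    resistor = 0.00008 * span              # 0.008% span precision resistor = 0.12 psig

    combined_mte = math.sqrt(heise**2 + fluke**2 + resistor**2)
    print(f"VAPT = {va_pt:.2f} psig, combined M&TE = {combined_mte:.2f} psig")
    assert va_pt >= combined_mte           # the 1:1 criterion of Section H.2.1 is satisfied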

The total M&TE error for the Heise gauge (CPG), per Reference [IP-C-0089], is:

CPG = ±1.187% FS

Converting to the 1500 psig span of the transmitter:

CPG = ±1.187% × (2000 psig / 1500 psig)
CPG = ±1.583% Span (3σ)

The M&TE error for the voltmeter (CVM) is therefore:

CVM = ±0.097% R/SP
    = ±0.097% × (5/4)
    = ±0.121% Span (3σ)

The M&TE error for the precision resistor (CPR) is therefore:

CPR = ±0.008% Span (3σ)

Substituting terms:

CPT = ±(CPG² + CVM² + CPR²)^1/2
CPT = ±((1.583% span)² + (0.121% span)² + (0.008% span)²)^1/2
CPT = ±1.588% Span (3σ)


7.6.3 ATM Calibration Tool Error (CATM)

The ATMs are calibrated using a DAC, which uses a readout assembly. This assembly does introduce some error into the calibration. Per Reference [IP-C-0089], the total error for the M&TE devices is 0.195% FS.

CRes = ±0.195% × (20 mA / 16 mA)
CATM = ±0.0901% Span (3σ)

7.6.4 Calibration Standard Error (CSTD):

Per Assumption [ ], Calibration Standard Error is considered negligible for the purposes of this analysis.

CSTD = 0

7.6.5 Loop Calibration Error (CL):

Per Outputs [ ], the loop calibration is performed using a pressure gauge only. Therefore, Ci for the loop will be CPG. From Section 7.6 above:

CL = ±[Σ(ALTi)² + Σ(Ci)² + Σ(CiSTD)²]^1/2

From above:

ALTL = 0.354% Span (2σ)    (Section 7.6.1)
CPG = 1.583% Span (3σ)     (Section 7.6.2.1)
CiSTD = 0                  (Section 7.6.4)

Substituting terms for the pressure loop:

CL = ±[(0.354% span)² + (1.583% span)² + 0²]^1/2
CL = ±1.622% Span (2σ)
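As a cross-check, the loop calibration error above can be reproduced with the following illustrative Python sketch (values taken from Sections 7.6.1 through 7.6.4; the variable names are not part of this standard):

    import math

    alt_loop = 0.354   # loop As-Left tolerance, % span (2 sigma), Section 7.6.1
    c_pg = 1.583       # Heise gauge M&TE error, % span (3 sigma), Section 7.6.2.1
    c_std = 0.0        # calibration standard error, negligible per assumption

    c_loop = math.sqrt(alt_loop**2 + c_pg**2 + c_std**2)
    print(f"CL = +/-{c_loop:.3f}% Span")   # prints 1.622, matching the result above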


H.2.4 Special Considerations

CL is used in the development of AFTL, which is used to calculate NTSP. In order to preserve an existing setpoint, CL can be reduced as follows:

1. Reduce the M&TE temperature uncertainty by reducing the temperature-band from maximum (Bldg Temp. Band) to a lower Room Temp. Band for the location of the component. This will require calculating new M&TE uncertainty values consistent with calculation IP-C-0089.

Discussion and agreement with C&I Maintenance is required for the options below, but these may be considered as well:

2. Specify more accurate M&TE, such as a digital Heise gauge, which is temperature compensated. Some standard Heise gauges are also temperature compensated.
3. Reduce or change the range specified for the M&TE. For the example above, specify a 1500 psig Heise gauge (if one exists).

However, the upper Cardinal Point (typically 100% span) used in the calibration procedure will have to be reduced such that the range of the M&TE is not exceeded when allowing for As-Found and As-Left calibration tolerances.


APPENDIX I  NEGLIGIBLE UNCERTAINTIES / CPS STANDARD ASSUMPTIONS

The uncertainties listed and discussed in Sections I.1 through I.10 below are considered negligible. The CPS Standard Assumptions are listed in Section I.11. Personnel performing an uncertainty calculation must evaluate the calculation with respect to this Appendix to verify that any special circumstances or unusual configurations do not invalidate any of these negligible uncertainties or CPS Standard Assumptions.

I.1 Normal Radiation Effects

DC-ME-09-CP, Ref. 5.36, defines the normal and harsh environments for areas within the plant. There is not a substantial increase in radiation during normal operating conditions. In these areas, radiation changes during normal operation do not exist and/or are minimal, with no impact on vendor equipment. Normal radiation-induced errors shall be incorporated when provided by the manufacturer. Otherwise, it is assumed that any cumulative effects of < 10^4 RAD TID are calibrated out on a periodic basis. For these reasons, the uncertainty introduced by any radiation effect during normal operation is assumed to be negligible.

I.2 Humidity Effects

Most manufacturers' literature and technical manuals do not address the effect of humidity (10% RH to 95% RH) on their equipment. The uncertainty introduced by humidity changes during normal operation is assumed to be negligible unless the manufacturer specifically discusses humidity effects in the technical manual. The effects of humidity changes are assumed to be calibrated out on a periodic basis. A condensing environment is considered an abnormal event that would require equipment maintenance. A humidity below 10% is considered to occur very infrequently.

I.3 Seismic/Vibration Effects

The effects of normal vibration (or a minor seismic event that does not cause an unusual event) on a component are assumed to be calibrated out on a periodic basis. As such, the uncertainty associated with this effect is assumed to be negligible. Abnormal vibrations, e.g., levels that produce noticeable effects on equipment, are considered abnormal events that require maintenance or equipment modification.


I.4 Normal Insulation Resistance Effects

The uncertainties associated with insulation resistance (IR) are assumed to be negligible during normal plant operating (non-accident) conditions. Typical insulation resistances are greater than 1,000 megohms. As an example, assume that the total IR is only 10 megohms and assume minimum instrument loop loading. Using the methodology provided in Appendix D, the expected uncertainty attributable to IR is given by:

(48 - (0.004)(250)) / ((10 x 10^6)(0.016)) = 0.03%

As can be seen, the IR can be considered negligible as long as the environment remains mild.
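The quick check above can be reproduced with a few lines of Python; this is illustrative only and simply re-evaluates the expression for the assumed 10 megohm insulation resistance and minimum loop loading:

    # IR-induced uncertainty (fraction of span) for the stated example:
    # (48 V - (4 mA)(250 ohm)) / ((10 Mohm)(16 mA))
    ir_effect = (48.0 - 0.004 * 250.0) / (10.0e6 * 0.016)
    print(f"IR effect = {ir_effect * 100:.3f}% of span")   # approximately 0.03%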

I.5 Lead Wire Effects

Since the resistance of a wire is equal to the resistivity times the length divided by the cross-sectional area, it is assumed that the very small differences in wire lengths between components do not contribute to any significant resistance differences between wires. The uncertainty associated with these insignificant resistance variations is assumed to be negligible.

If a system design includes lead wire effects that must be considered as a component of uncertainty, the requirement must be included in the design basis. The general design standard is to eliminate lead wire effects as a concern both in equipment design and installation. Failure to do so is a design fault that should be corrected. Unless specifically identified to the contrary, lead wire effects are to be assumed to be negligible. Exceptions to this are thermocouples and RTDs; these cases require individual evaluation of lead wire effects.

I.6 Calibration Temperature Effects

Calibration temperature is not recorded at CPS; however, the temperature at which an instrument is calibrated is within the normal operating range of the instrument and is generally reasonably consistent between calibrations. Although the ambient temperature effects cannot be determined, they are considered small. Therefore, the uncertainty associated with the temperature variations during calibration is assumed to be included within the instrument drift errors. Note that this applies only to temperature changes for calibration. Temperature effects over the expected range of equipment operation and M&TE temperature effects must be considered.


I.7 Atmospheric Pressure Effects

Assuming that the atmospheric pressure might change as much as one inch of mercury, this equates to approximately 0.5 psi. Because this change is small, this effect will be assumed negligible for pressures of 5 psi and larger, unless the pressure transmitter is measuring a relatively small pressure.

I.8 Dust Effects

Any uncertainties associated with dust are assumed to be compensated for during normal periodic calibration and are assumed to be negligible.

I.9 RTD Self-Heating Errors

To determine a typical RTD self-heating error, the following computation is provided:

RTD: Rosemount Model 104
Self-Heating Effect: 0.1 °C or less
Resistance @ 400 °C: 249.61 Ω
Resistance @ 380 °C: 242.58 Ω

Resistance/°C around 400 °C = (249.61 - 242.58)/20 = 0.35 Ω/°C
Self-Heating Error = 0.1 °C × 0.35 Ω/°C = 0.035 Ω
At 400 °C: 0.035/249.61 = 0.014%

The above results show that the RTD self-heating error can be assumed to be negligible.
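The arithmetic above can be verified with a short, illustrative Python sketch (the values are those quoted for the Rosemount Model 104 example):

    r_400 = 249.61      # resistance at 400 deg C, ohms
    r_380 = 242.58      # resistance at 380 deg C, ohms
    self_heating = 0.1  # self-heating effect, deg C

    ohms_per_deg_c = (r_400 - r_380) / 20.0            # about 0.35 ohm/deg C
    error_ohms = self_heating * ohms_per_deg_c         # about 0.035 ohm
    error_percent = error_ohms / r_400 * 100.0         # about 0.014% at 400 deg C
    print(f"{ohms_per_deg_c:.2f} ohm/degC, {error_ohms:.3f} ohm, {error_percent:.3f}%")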

I.10 Digital Signal Processing

An accuracy of 0.1% of full scale or less is often specified. Additionally, linearity and repeatability are often specified as 1 least significant bit (LSB). When this 0.1% uncertainty is compared to the percent uncertainty for the rest of the instrument loop, it is clear that this uncertainty can be neglected.


I.11 Assumptions

As defined in Section 2.2, these assumptions are considered to be defensible and should be used in Section 2.0 of any new or revised calculation performed under this methodology. All standard assumptions shall be listed first without modification, except where an assumption points to another assumption, which may not carry the same number as listed here (see Assumptions 2.10 and 2.11 below). The Setpoint Program Coordinator may provide corrections and/or new standard assumptions that have not yet been incorporated into the latest revision of CI-01.00. It may be necessary to modify some of the CPS Standard Assumptions listed below during the development or revision of calculations. The preparer and reviewer of a calculation must ensure the assumptions used are valid and applicable to their calculation.

2.1 Published instrument vendor specifications are considered to be 2σ values unless specific information is available to indicate otherwise.

2.2 Temperature, humidity, power supply, and ambient pressure errors have been incorporated when provided by the manufacturer. Otherwise, these errors are assumed to be included in the manufacturer's accuracy or repeatability specifications.

2.3 Changes in ambient humidity are assumed to have a negligible effect on the uncertainty of the instruments used in these loops.

2.4 Normal radiation induced errors have been incorporated when provided by the manufacturer. Otherwise, these errors are assumed to be small and capable of being adjusted out each time the instrument is calibrated. Therefore, unless specifically provided, normal radiation errors can be assumed to be included within the instrument drift errors.

2.5 If the manufacturer's instrument performance data does not specify Span, Calibrated Span, Upper Range Limit, etc., the calculation will assume URL because it will result in the most conservative estimate of instrument uncertainty. In all cases the URL is greater than or equal to the calibrated span (CS), and it is conservative to use the URL in calculating instrument uncertainties. This is because, by definition, the URL is the maximum upper calibrated span limit for the device.


2.6 This analysis assumes that the instrument power supply stability (PSS) is within ±5% (±1.2 Vdc) of a nominal 24 Vdc.

2.7 The effects of normal vibration (or a minor seismic event that does not cause an unusual event) on a component are assumed to be calibrated out on a periodic basis. As such, the uncertainty associated with this effect is assumed to be negligible and included within the instrument drift errors.

Abnormal vibrations, e.g., levels that produce noticeable effects on equipment, are considered abnormal events that require maintenance or equipment modification.

2.8 Evaluation of M&TE errors is based on the assumption that the test equipment listed in Analysis Section 7.0 is used. Use of test equipment less accurate than that listed will require evaluation of the effect on calculation results.

2.9 It is assumed that the M&TE listed in Section 7.0 is calibrated in accordance with the manufacturer's recommendations and within the manufacturer's required environmental conditions. Temperature-related errors are based on the difference between the Calibration Lab temperature and the worst case temperature at which the device is used.

2.10 It is assumed that the reference standards used for calibrating M&TE or calibration tools have uncertainty requirements of not more than 1/4 of the tolerance of the equipment being calibrated. A greater uncertainty may be acceptable as limited by the state of the art. It is generally accepted that the published vendor accuracy of the M&TE or calibration tool includes the uncertainty of the calibration standard M&TE when the 4:1 accuracy standard is satisfied. Hence, the Calibration Standard uncertainty is considered negligible relative to the overall calibration error term and can be ignored. This assumption is based primarily upon inherent M&TE conservatism built into the calculation. Per Assumption [2.11], this calculation considers that the combined M&TE vendor or reference accuracy used for calibration satisfies a 1:1 accuracy ratio to the instrument under calibration. This ratio bounds the upper accuracy limit on the calibration tool equal to the Vendor's Accuracy (VA) specification for the device under calibration. Use of M&TE more accurate than 1:1 is conservative to this assumption and thereby acceptable without impacting the results of this calculation.


2.11 It is assumed that when M&TE is not specified uniquely in a controlling calibration procedure (e.g., Surveillance Procedure or Preventive Maintenance Procedure), the combined M&TE vendor or reference accuracy used for calibration satisfies a 1:1 accuracy ratio to the instrument under calibration. This accuracy ratio establishes the limit on selected M&TE equal to the Vendor's Accuracy (VA) requirement.

Further, M&TE uncertainty assumed per this discussion is considered a 3σ value regardless of the confidence associated with the related VA term.

2.12 The effects of EMI and RFI are considered negligible for panel mounted meters in administratively controlled EMI/RFI environments, unless a specific uncertainty term is provided by the vendor.

2.13 If the instrument vendor provides no drift information and there is no clear basis for assuming drift is zero, it may be conservatively assumed that the drift over the entire calibration period equals the Vendor Accuracy (i.e., VD = VA, 2σ).

2.14 Data from comparable but different instruments may be used when the vendor specification is not available or is lacking. This comparison should evaluate like applications in a like environment, with the instrument analyzed being consistent in form, fit, and function.


APPENDIX J  DIGITAL SIGNAL PROCESSING UNCERTAINTIES

This Appendix presents a discussion of digital signal processing and the uncertainties involved with respect to determining instrument channel setpoints for a digital system. This Appendix assumes that a digital signal processing system exists that receives an analog signal and provides either a digital or analog output. In many respects, the digital processor is treated as a black box; therefore, the discussion that follows is applicable to many different types of digital processors.

The digital processor is programmed to perform a controlled algorithm. Basic functions performed are addition, subtraction, multiplication and division, as well as data storage. The digital processor is the most likely component to introduce rounding and truncation errors.

In general, an analog signal is received by the digital processor, filtered, digitized, manipulated, converted back into analog form, filtered again and sent out. The analog input signal is first processed by a filter to reduce aliasing noise introduced by the signal frequencies that are high relative to the sampling rate. The filtered signal is sampled at a fixed rate and the amplitude of the signal held long enough to permit conversion to a digital word. The digital words are manipulated by the processor based on the controlled algorithm. The manipulated digital words are converted back to analog form, and the analog output signal is smoothed by a reconstruction filter to remove high-frequency components.

Several factors affect the quality of the representation of analog signals by digitized signals. The sampling rate affects aliasing noise, the sampling pulse width affects analog reconstruction noise, the sampling stability affects jitter noise and the digitizing accuracy affects the quantization noise.

J.1 Sampling Rate Uncertainty

If the sampling rate is higher than twice the analog signal bandwidth, then the sampled signal is a good representation of the analog input signal and contains all the significant information.

If the analog signal contains frequencies that are too high with respect to the sampling rate, aliasing uncertainty will be introduced. Anti-aliasing band limiting filters can be used to minimize the aliasing uncertainty or else it should be accounted for in setpoint calculations.


J.2 Signal Reconstruction Uncertainty

Some information is lost when the digitized signal is sampled and held for conversion back to analog form after digital manipulation. This uncertainty is typically linear and about ±1/2 Least Significant Bit (LSB).

J.3 Jitter Uncertainty

The samples of the input signal are taken at periodic intervals. If the sampling periods are not stable, an uncertainty corresponding to the rate of change of the sampled signal will be introduced. The jitter uncertainty is insignificant if the clock is crystal controlled, which it is in the majority of cases.

J.4 Digitizing Uncertainty

When the input signal is sampled, a digital word is generated that represents the amplitude of the signal at that time. The signal voltage must be divided into a finite number of levels that can be defined by a digital word n bits long. This word can describe 2^n different voltage steps. The signal levels between these steps will go undetected. The digitizing uncertainty (also known as the quantizing uncertainty) can be expressed in terms of the total mean square error voltage between the exact and the quantized samples of the signal. An inherent digitizing uncertainty of ±1/2 of the least significant bit (LSB) typically exists. The higher the number of bits in the conversion process, the smaller the digitizing uncertainty.
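The relationship between converter word length and digitizing uncertainty can be illustrated with a brief Python sketch (the bit counts shown are arbitrary examples, not CPS design values):

    def half_lsb_percent_fs(n_bits):
        """Return +/-1/2 LSB expressed as a percent of full scale for an n-bit converter."""
        return 0.5 / (2 ** n_bits) * 100.0

    for bits in (8, 12, 16):
        print(f"{bits}-bit converter: +/-{half_lsb_percent_fs(bits):.4f}% FS")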

J.5 Miscellaneous Uncertainties

Analog-to-digital converters also introduce offset uncertainty, i.e., the first transition may not occur at exactly +1/2 LSB. Gain uncertainty is introduced when the difference between the values at which the first transition and the last transition occur is not equal to the ideal span. Linearity uncertainty is introduced when the differences between the transition values are not all equal.

As a rule of thumb, use ±1/2 LSB for the relative uncertainty of the analog-to-digital conversion. For digital-to-analog conversion, the maximum linearity uncertainty occurs at full scale when all bits are in saturation. The linearity determines the relative accuracy of the converters. Deviations from linearity, once the converters are calibrated, are absolute uncertainty. As a rule of thumb, use ±1/2 LSB for absolute uncertainty and ±1/2 LSB for linearity uncertainty.


J.6 Truncation and Rounding Uncertainties

The effect of truncation or rounding depends on whether fixed-point or floating-point arithmetic is used and how negative numbers are represented. For the sign-and-magnitude, one's complement, and two's complement methods, positive numbers are represented identically. The largest truncation error occurs when all discarded bits are ones.

For negative numbers, the effect of truncation depends on whether sign-and-magnitude, two's complement, or one's complement representation is used. Rounding is applied to the magnitude of the numbers, and the uncertainty is independent of the method of negative number representation.

For positive numbers and two's complement negative numbers, the truncation uncertainty is estimated by:

-2^-b < ET ≤ 0

For sign-and-magnitude and one's complement negative numbers, the truncation uncertainty is estimated by:

0 ≤ ET < 2^-b

where b is the number of bits to the right of the binary point after truncation or rounding.

The estimate for rounding uncertainty is:

(-1/2)(2^-b) < ER ≤ (1/2)(2^-b)

Truncation and rounding affect the mantissa in floating-point arithmetic, where the relative uncertainty is more important than the absolute uncertainty, i.e., floating-point errors are multiplicative.

For floating-point arithmetic, the relative uncertainty for rounding is estimated by:

-2·2^-b < E ≤ 0

For one's complement and sign-and-magnitude, the truncation uncertainty is estimated by:

-2·2^-b < E < 0, for X < 0
0 ≤ E < 2·2^-b, for X > 0

where X is the sign-and-magnitude value prior to truncation.
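As a generic numerical illustration of the fixed-point bounds discussed above (positive values only; this sketch is not taken from any CPS calculation), the following Python snippet confirms that truncation to b fractional bits introduces an error of less than one LSB, while rounding introduces at most half an LSB:

    import math
    import random

    B = 8                 # bits retained to the right of the binary point
    LSB = 2.0 ** -B

    def truncate(x, b=B):
        """Discard bits beyond position b (toward zero for positive values)."""
        scale = 2 ** b
        return math.floor(x * scale) / scale

    def round_fixed(x, b=B):
        """Round to b fractional bits."""
        scale = 2 ** b
        return round(x * scale) / scale

    samples = [random.uniform(0.0, 1.0) for _ in range(10_000)]
    max_truncation_error = max(abs(truncate(x) - x) for x in samples)
    max_rounding_error = max(abs(round_fixed(x) - x) for x in samples)
    print(max_truncation_error < LSB, max_rounding_error <= LSB / 2)   # True True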


APPENDIX K  PROPAGATION OF UNCERTAINTY THROUGH SIGNAL CONDITIONING MODULES

This Appendix discusses techniques for determining the uncertainty of a module's output when the uncertainty of the input signal and the uncertainty associated with the module are known. Using these techniques, equations are developed to determine the output uncertainties for several common types of functional modules.

For brevity, error propagation equations (See Table K-1) will not be derived for all types of signal-processing modules. Equations for only the most important signal-processing functions will be developed; however, the methods discussed can be applied to functions not specifically addressed here. The equations derived are applicable to all signal conditioners of that type regardless of the manufacturer.

The techniques presented here are not used to calculate the inaccuracies of individual modules; they are used to calculate uncertainty of the output of a module when the module inaccuracy, input signal uncertainty and module transfer function are known.

This section discusses only two classifications of errors or uncertainties: those that are random and independent and can be combined statistically, and those that are biases and must be combined algebraically. The methods discussed can be used for both random and biased uncertainty components.

It is important to note that the method of calibration or testing may directly affect the use of the information presented in this section. If, for example, all modules in the process electronics for a particular instrument channel are tested together, they may be considered one device. The uncertainty associated with the output of that device should be equal to or less than the uncertainty calculated by combining all individual modules.

K.1 Error Propagation Equations Using Partial Derivatives and Perturbation Techniques

There are several valid approaches for the derivation of equations that express the effect of passing an input signal with an error component through a module that performs a mathematical operation on the signal. The approaches discussed here, which are recommended for use in developing error-propagation equations, are based on the use of partial derivatives or perturbation techniques, i.e., changing the value of a signal by a small amount and evaluating the effect of the change on the output. Either technique is acceptable and the results, in most cases, are similar.


For simplicity, this discussion assumes that input errors consist of either all random or all biased uncertainty components. The more general case of uncertainties with both random and biased components is addressed later in this Appendix.

K.2 Propagation of Input Errors through a Summing Function

The summing function is represented by the equation:

C = k1A + k2B     (Equation K.1)

where,
C = Output signal
A, B = Input signals
k1, k2 = Constants representing gain or attenuation of the input signals

The summing function is shown in Figure K-1.

Figure K-1 Summing Function

The input signals are summed as shown above to provide an output signal. If the input signals A and B have errors, a and b, the output signal including the propagated error is given by:

C + c = k1(A + a) + k2(B + b)     (Equation K.2)

or

C + c = k1A + k1a + k2B + k2b

where c is the error of the output signal C. Subtracting Equation K.1 from Equation K.2 provides the following estimate of the output signal uncertainty:

c = k1a + k2b     (Equation K.3)

Equation K.3 is appropriate if the errors, a and b, are bias errors. If the input errors are random, they can be combined as the square root of the sum of the squares to predict the output error:

c = ((k1a)² + (k2b)²)^1/2

The above expressions for uncertainty can also be derived using partial derivatives. Start by taking the partial derivative of Equation K.1 with respect to each input:

ΔC = (∂C/∂A)ΔA + (∂C/∂B)ΔB

∂C/∂A = k1(∂A/∂A) + k2(∂B/∂A) = k1 + 0 = k1

∂C/∂B = k1(∂A/∂B) + k2(∂B/∂B) = 0 + k2 = k2

The input signals are independent. The input errors, a and b, represent the change in A and B, or ΔA = a and ΔB = b. If c represents the change in C, then ΔC = c, yielding:

c² = (k1a)² + (k2b)²

or

c = ((k1a)² + (k2b)²)^1/2
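The two ways of combining input errors through a summing module (algebraic for bias errors, SRSS for random errors) can be expressed as a short Python sketch; the gains and error values below are arbitrary illustrations, not values from any CPS calculation:

    import math

    def summing_output_error(k1, a, k2, b, random_errors=True):
        """Error propagated through C = k1*A + k2*B for input errors a and b."""
        if random_errors:
            return math.sqrt((k1 * a) ** 2 + (k2 * b) ** 2)   # SRSS combination (random)
        return k1 * a + k2 * b                                # algebraic combination (bias)

    # Example: unity gains, 1% and 2% input errors
    print(summing_output_error(1.0, 1.0, 1.0, 2.0, random_errors=True))    # about 2.24 (SRSS)
    print(summing_output_error(1.0, 1.0, 1.0, 2.0, random_errors=False))   # 3.0 (algebraic)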

K.3 Propagation of Input Errors through a Multiplication Function

The multiplication function is represented by the equation:

C = (k1A)(k2B)     (Equation K.10)

where,
C = Output signal
A, B = Input signals
k1, k2 = Constants representing gain or attenuation of the input signals

The multiplication function is shown in Figure K-2.

Figure K-2 Multiplication Function

The input signals are multiplied as shown above to provide an output signal. If the input signals A and B have errors, a and b, the output signal including the propagated error is given by:

C + c = k1(A + a) k2(B + b)     (Equation K.11)

where c is the error of the output signal C. Equation K.11 can be expanded as shown:

C + c = k1Ak2B + k1Ak2b + k1ak2B + k1ak2b     (Equation K.12)

Subtracting Equation K.10 from Equation K.12 provides the following estimate of the output signal uncertainty:

c = k1Ak2b + k1ak2B + k1ak2b

or

c = k1k2(Ab + aB + ab)

If a and b are small with respect to A and B, the term ab is usually neglected to obtain the final result:

c = k1k2(Ab + aB)

If the input errors are random, they can be combined as the square root of the sum of the squares to predict the output error:

c = k1k2((Ab)² + (aB)²)^1/2

K.4 Error Propagation Through Other Functions

Below are equations for other functions, derived by the same techniques presented in the previous sections. The algebraic expressions represent the more conservative approach assuming bias errors, and the SRSS expressions apply to random errors. Refer to Table 1 in Reference 5.3, ISA-RP67.04, Part II, for more information.

Function                  Module Equation         Treatment of Error

Division                  C = (k1A)/(k2B)         c = (k1/k2)[(Ba - Ab)/B²]                 Algebraic
                                                  c = (k1/k2)[((Ba)² + (Ab)²)^1/2 / B²]     SRSS

Logarithmic               C = k1 + k2·Log A       c = [k2·Log e / A]·a                      Algebraic
                                                  c = [k2·Log e / A]·a                      SRSS

Squaring                  C = A²                  c = 2Aa + a²                              Algebraic
                                                  c = 2Aa                                   SRSS

Square Root Extraction    C = A^1/2
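As an illustration of how the table entries are applied, the sketch below (arbitrary example values, not from any CPS calculation) evaluates the SRSS error expressions for the multiplication and squaring functions:

    import math

    def multiplication_error_srss(k1, k2, A, a, B, b):
        """SRSS error through C = (k1*A)*(k2*B) with random input errors a and b."""
        return k1 * k2 * math.sqrt((A * b) ** 2 + (a * B) ** 2)

    def squaring_error_srss(A, a):
        """SRSS error through C = A**2 with random input error a."""
        return 2.0 * A * a

    print(multiplication_error_srss(1.0, 1.0, A=10.0, a=0.1, B=5.0, b=0.05))   # about 0.71
    print(squaring_error_srss(A=10.0, a=0.1))                                  # 2.0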

APPENDIX L  GRADED APPROACH TO UNCERTAINTY ANALYSIS

L.1 Introduction

The methodology presented in this engineering standard is intended to establish a minimum 95% probability with a high confidence that a setpoint will actuate when required. The methodology is based, in part, on ISA-S67.04, Reference 5.3.

When a calculation is prepared in accordance with this engineering standard, it will accomplish a rigorous review of the instrument loop layout and design. Each element of uncertainty will be evaluated in detail and the estimated loop uncertainty justified at length. The setpoint will be carefully established with respect to the process analytical limit and channel uncertainty. A calculation prepared with this engineering standard will be comprehensive and can typically take an engineer at least two weeks to prepare. This level of effort is justified for those calculations involving reactor safety and integrity.

The importance of the various types of safety-related setpoints differs, and as such it may be appropriate to apply different setpoint determination requirements. As described in Reference 5.3, for automatic setpoints that have a significant importance to safety (for example, those required by the plant safety analyses and directly related to the Reactor Protection System, Emergency Core Cooling Systems, Containment Isolation, and Containment Heat Removal), a stringent setpoint methodology should consider all sources of instrument error. However, for setpoints that do not carry the same stringent requirements, for example, those that are not credited in the safety analyses or that do not have limiting values, the setpoint determination methodology can be less rigorous. The level of detail should be commensurate with the importance of the application.

Multiple setpoint methodologies for engineering calculations have been attributed to programmatic setpoint errors at other power stations. These stations have incorporated corrective actions that implement setpoint and loop uncertainty analyses that are balanced with the importance or significance of the related plant system safety function. This approach is acceptable and is consistent with a draft recommended practice of the Instrument Society of America (ISA) (ISA dTR 67.04.09, "Graded Approaches to Setpoint Determination," Draft Technical Report, 1994, and the subsequent version, Draft 4, May 2000). This Appendix provides guidance regarding how to satisfy the needs for proper setpoint control while allowing for simpler approaches for less critical applications.


The CPS setpoint methodology will establish the basis of a graded setpoint program by grouping the instrument loops according to their safety significance. The graded approach to setpoint determination provides the maximum available tolerance to optimize the safety and reliability of the plant.

Graded approaches are based on the fact that all of the rigor and conservatism established in RP67.04-1994, Part II may not be warranted for all setpoints in a nuclear power plant. Per RP67.04-1994, a nuclear plant licensee may establish a multilevel classification scheme by documenting the rationale used to establish the classification. Implementation of a graded approach to setpoints requires the users to identify how critically important each setpoint is. For example, setpoints for RPS and ESFAS are to be maintained with a high degree of conservatism and a high level of confidence. Setpoints for Reg. Guide 1.97 Type C variables for post-accident monitoring do not require the same level of confidence. Therefore, a graded approach, with classification of setpoints, will help ensure proper maintenance of safety-grade nuclear instrumentation without compromising the safe and reliable operation of the plant.

L.2 Graded Classifications

CPS Setpoint Control distinguishes between applications by providing the following classifications of setpoint categories in terms of safety significance. For example, Setpoint Category 1 instrument loops are deemed safety significant, and calculations for this class of instruments require the full rigor and conservatism established in RP67.04-1994, Part II for safety-related setpoints.

The Setpoint Category table is presented in order of descending safety significance and, therefore, calculation rigor.


CPS Graded Approach Recommendations

SETPOINT CATEGORY    FUNCTIONAL DESCRIPTION

1    RPS (Reactor Protection System).
     ESF (Engineered Safety Features).
     ECCS (Emergency Core Cooling System).
     PCIS (Primary Containment Isolation System).
     SCIS (Secondary Containment Isolation System).
     Emergency reactor shutdown, containment isolation, reactor core cooling, and containment and reactor heat removal.
     Prevent/mitigate a significant release of radioactivity.

2    Ensure compliance with Technical Specifications but are not Category 1 setpoints.
     Provide setpoints/limits for Reg. Guide 1.97 Type A variables.

3    Provide setpoints/limits for Reg. Guide 1.97 Type B, C, and D variables.
     Provide setpoints/limits for other regulatory requirements or operational commitments.
     Provide setpoints/limits that are associated with personnel safety or equipment protection.

4    Provide setpoints/limits not identified with Categories 1, 2, and 3 above. Require documentation that engineering judgement, industry or station experience, or other methods have been used to set or identify an operating limit.
     Provide setpoints/limits for station EOP requirements. The GE BWR methodology for EOPs does not require or desire treatment of uncertainties.


The following guidelines should be followed with regard to the level of rigor required for a setpoint determination.

Cat. 1 and 2: A calculation in accordance with CC-AA-309 and this standard is required. Setpoints must be prepared in accordance with this standard and must account for all known sources of uncertainty. The expected result of these calculations is a well-documented basis for the 95% probability that the setpoint will actuate as desired.

Cat. 3: A calculation in accordance with CC-AA-309 and this standard is required. Setpoints need not meet all the requirements of this engineering standard, including the required level of detail or depth of analysis, unless they involve nuclear safety-related setpoints that protect a safety limit or initial condition, or support a primary success path in any design basis accident or transient analysis function. Cat. 3 setpoints are normally associated with system control functions. Documented engineering judgement can be applied to those uncertainties that are not readily known or available.

Cat. 4: A documented basis for the setpoint or limit is required but may be captured in an ECN, an Engineering Evaluation, or a calculation. Engineering judgement can be applied to those uncertainties that are not readily known or available. Industry or station experience or other methods can be used to set the limit. Cat. 4 setpoints need not meet the requirements for accounting for all known sources of uncertainty, including the required level of detail or depth of analysis.


L.3 Correction for Single-Sided Setpoints

The methodology presented in this engineering standard is intended to establish a 95% probability with a high confidence that a setpoint will actuate when required. Without consideration of bias effects, the probability is two-sided and symmetric about the mean, as shown in Figure L-1.

Figure L-1 Typical Two-Sided Setpoint at 95% Level

Figure L-1 shows the configuration in which there may be high and low setpoints with a single process. In some cases, there will only be a single setpoint associated with a particular sensor. For example, a pressure switch may actuate a high setpoint when steam dome pressure is too high. In this case a 95% probability is desired for the high pressure setpoint only, as shown in Figure L-2.


Figure L-2 Typical One-Sided Setpoint at 95% Level

A two-sided, normally distributed probability at the 95% level will have 95% of the uncertainties falling within ±1.96σ (see Example L-1), with 2.5% below -1.96σ and 2.5% above +1.96σ. However, for one-sided, normally distributed uncertainties, 95% of the population will fall below +1.645σ (see Table M-2). If the concern is that a single value of the process parameter is not exceeded, and that single value is approached only from one direction, the appropriate limit to use for the 95% probability is +1.645σ (or -1.645σ, depending on the direction from which the setpoint is approached). Provided that the individual component uncertainties were determined at the 95% level or greater, the final calculated uncertainty result can be corrected for a single side of interest by the following expression:

1.645/1.96 = 0.839     (Equation L.1)

Example L-1

Suppose the calculated uncertainty for the High Steam Dome Pressure channel is ±2% of span and this represents a 95% probability for the expected uncertainty. Suppose the uncertainty applies only to the high pressure trip setpoint. In this case we are only concerned with what happens on the high end of span (near the setpoint). The setpoint can be established for a single side of interest by multiplying the Equation L.1 correction by the calculated channel uncertainty, or:

(0.839) (2%) = 1.68%

Hence, rather than require that the setpoint allowance include a 2% uncertainty value, only a 1.68% allowance needs to be considered.

This correction can provide additional margin for normal system operations.
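The 0.839 correction factor follows directly from the one-sided and two-sided 95% z-values of the normal distribution, as the following illustrative Python sketch (standard library only) shows:

    from statistics import NormalDist

    z_two_sided = NormalDist().inv_cdf(0.975)   # about 1.96 (95% two-sided)
    z_one_sided = NormalDist().inv_cdf(0.95)    # about 1.645 (95% one-sided)

    correction = z_one_sided / z_two_sided      # about 0.839
    channel_uncertainty = 2.0                   # % span, 95% two-sided (Example L-1)
    print(f"correction = {correction:.3f}, "
          f"one-sided allowance = {correction * channel_uncertainty:.2f}% of span")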


APPENDIX M  USING THE RESULTS OF A STATISTICAL DRIFT ANALYSIS

Section items M.1 to M.3 are adopted from Ref. 5.27, NES-EIC-20.04, Rev. 3, "Analysis of Instrument Channel Setpoint Error and Instrument Loop Accuracy," Appendix J.

The drift analyses herein intended for use in the setpoint and channel error calculations are those performed for the CPS transition to a 24-month refueling cycle (Ref. 19, Assessment EA # 2003-06220) and future updates in accordance with Ref. 5.13, ER-AA-520, Rev. 3, "Instrument Performance Trending." The analyses were done in accordance with Ref. 5.27, Appendix J, which is in compliance with Ref. 5.26, NRC Generic Letter 91-04, "Changes in Technical Specification Surveillance Intervals to Accommodate a 24-Month Fuel Cycle," dated April 2, 1991, and Ref. 5.32, EPRI TR-103335, Rev. 1, "Statistical Analysis of Instrument Calibration Data: Guidelines for Instrument Calibration Extension/Reduction Programs." The CPS surveillance AF/AL data is from loop calibrations for the nominal trip setpoint.

M.1 The data reduction has generated a "drift" value, but that number includes several uncertainties in addition to the classical drift. If the determined drift value is used in uncertainty calculations, the following uncertainties can normally be eliminated. To replace these values, state that they are included in the calculated drift tolerance interval value (DTIc) and set their individual values to zero.

1.1 Reference Accuracy - The reference accuracy of the instrument is included in the calibration data and can be removed from the uncertainty calculation.

1.2 M&TE - As long as the calibration process uses the same, or more accurate, test equipment, this uncertainty is included in the calibration data and can be removed from the uncertainty calculation.

1.3 Drift - The true drift is included in the determined drift and is included in the calibration data and can be removed from the uncertainty calculation.

1.4 Normal Environmental Effects - For the instruments that are included in the calibration, the effects of variations in radiation, humidity, temperature, vibration, etc. experienced during the calibration are included in the calibration data and can be removed from the uncertainty calculation. These terms cannot be removed from the uncertainty calculations if these components see different conditions or magnitudes of the parameter, such as vibration or temperature, while operating than during calibration.


1.5 Power Supply Effects - If the instruments are attached to the same power supply during calibration that is used during operation, then the effects are included in the calibration data and can be removed from the uncertainty calculation.

1.6 Setting Tolerance - If the setting tolerance is such that it is less than the determined drift then this tolerance will show up in that determined drift and can be removed from the uncertainty calculation. If the ST is much larger than the determined drift it will not normally be used in the calibration process and will not be seen in the determined drift. In this case the ST can be combined with the determined drift using SRSS.

M.2 For cases where there are time dependent drifts, the time frame used for determining the drift should be the normal surveillance interval plus twenty-five percent. Time dependent drift that is random is assumed to be normally distributed and can be combined using the Square Root Sum of the Squares method for intervals beyond the given interval.

M.3 Time independent drift can be assumed constant over the Valid Interval.

M.4 Loop As-Found Tolerance - Since the AFT is made up of drift, reference accuracy, and calibration errors including setting tolerances, the AFT will generally be set equal to the calculated Drift Tolerance Interval when valid drift results are available.

AFTL = DTIc

M.5 When applying DTIc to an existing Method 1 calculation (the preferred method in this standard for calculating a setpoint for a function with an analytical limit), the reference accuracy used to develop the AV may be zeroed out. CPS 24-month drift analysis experience, however, showed that it was typically not zeroed (conservatively), because a TS change to the AV would be required in order to take advantage of the increased operating margin it would provide to the setpoint.

M.6 Device As-Found Tolerance - Since the CPS AF/AL data is for loops, the device AFT values remain to be calculated in accordance with the Section 4.5.4 equations. Note that other plants' drift analyses are typically not based on loop calibrations.


M.7 The use of AF/AL data sets with fewer than 30 valid inputs is not allowed by Ref. 5.27 or by NRC RAI experience for extension of the surveillance interval to 24 months. Where fewer than 30 valid points were available, other means of estimating drift were used, such as those covered in Appendix Sections A.2.6 and C.3.4. In such cases the AF/AL data may, however, be used to validate assumptions for drift.

M.8 Existing calculations that have already calculated AFT per this standard were not revised to include the use of DTIc if the DTIc calculated from experience was less than the existing AFT.

M.9 Future generation of new or revised DTIc values will be treated similarly. If the DTIc is less than the existing AFT, the existing calculation will remain as is.


APPENDIX N  STATISTICAL ANALYSIS OF SETPOINT INTERACTION

Frequently, there is more than one setpoint associated with a process control system. For example, a tank may have high and low level setpoints that are designed to prevent overfilling or completely emptying the tank. Each setpoint has a lower and upper actuation uncertainty and, in some cases, two or more setpoints can be very close to one another (or overlap) when all uncertainties are included. A calculation that involves multiple setpoints should also confirm that the setpoints are adequate with respect to one another.

Setpoints that are prepared in accordance with this engineering standard represent a 95% probability with a high confidence (approximately 95%) that the setpoint will actuate within the defined uncertainty limit. The uncertainty variation about the setpoint is assumed to be approximately normally distributed. If two setpoints are close together, it could appear that they have an overlap region, as shown in Figure N-1.

Figure N-1 Distribution of Uncertainty about Two Setpoints

As shown in Figure N-1, setpoint overlap can occur when Setpoint 1 drifts high at the same time that Setpoint 2 drifts low. The probability of this occurrence can be estimated based on the behavior of the normal distribution. For a normal distribution, 68.3% of the total probability is contained within ±1.0σ of the mean, with 15.85% in either tail. Because the setpoints have been statistically determined, it is reasonable to evaluate the possibility of setpoint overlap statistically also. It is highly unlikely for one setpoint to drift by the 1.0σ value in the high direction when the other setpoint simultaneously drifts low by the 1.0σ value. The probability, PT, of this occurring is:

PT = (PA)(PB) = (0.1585)(0.1585) = 0.0251 = 2.51%

The above probability readily shows the low likelihood of setpoint overlap even at the 1.0σ level. The probability becomes virtually insignificant at the 1.5σ level. In this case, 86.64% of the total probability is contained within the ±1.5σ level, with 6.68% in either tail. The probability of one setpoint drifting high by 1.5σ while the other setpoint drifts low by 1.5σ is:

PT = (PA)(PB) = (0.0668)(0.0668) = 0.0045 = 0.45%

The above approach can be used to demonstrate the low likelihood of setpoint overlap. If setpoints appear to have a higher-than-desired probability of overlap, the electrical circuits should be reviewed to determine the possible consequences of the overlap.
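The tail probabilities quoted above can be reproduced with the Python standard library; the sketch below is illustrative only and assumes independent, normally distributed setpoint uncertainties:

    from statistics import NormalDist

    def overlap_probability(z):
        """Probability that one setpoint drifts beyond +z sigma while the
        other independently drifts beyond -z sigma."""
        tail = 1.0 - NormalDist().cdf(z)   # single-tail probability
        return tail * tail

    print(f"{overlap_probability(1.0) * 100:.2f}%")   # about 2.5%
    print(f"{overlap_probability(1.5) * 100:.2f}%")   # about 0.45%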


APPENDIX O  INSTRUMENT LOOP SCALING

O.1 Introduction

CPS calibration procedures and data sheets include head corrections and scaling. CPS procedure 8801.05, Reference 5.17, controls the method of instrument corrections. For calculations developed by this methodology, the scaling will be evaluated and documented in Attachment 1 of the calculation. Scaling instrument loops and developing calibration correction values should be done in a consistent and correct manner. This vital instrument engineering function must be deliberately integrated into maintenance and engineering activities. This Appendix provides guidance relative to the analysis of an instrument loop and the preparation of scaling calculations.

A process instrumentation loop (circuit) typically consists of three distinct sections:

1. Sensing: The parameter to be measured is sensed directly by some mechanical device. Examples include a flow orifice for flow, a differential pressure cell for level, a bourdon tube for pressure, and a thermocouple for temperature measurement.

The sensing element may include a transmitter that converts the process signal into an electrical signal for ease of transmission.

2. Signal Processing: The electrical signal sent by the sensor/transmitter may be amplified, converted, isolated, or otherwise modified for the end-use devices.
3. Display or Actuation: The process signal is used somehow, either as a display, an actuation setpoint above or below some threshold, or as part of some final actuation device logic.

Figure O-1 shows a typical instrument application. As shown, a level transmitter monitors a tank's water level. A power supply provides a constant voltage to the transmitter, and the transmitter outputs a current proportional to the tank level. The indicator displays a tank level corresponding to the electrical current. If the electrical current is above (below) a predetermined level, indicative of a high (low) tank level, the trip unit actuates. The current is provided to the controller for some control action.


Figure O-1 Simple Instrument Loop for Level Measurement

The above example of a tank level measurement illustrates the various elements of an instrument loop. Regardless of the application, an instrument loop measures some parameter - temperature, pressure, flow, level, etc. - and generates signals to monitor or aid in the control of the process. The instrument loop may be as simple as a single indicator for monitoring a process, or can consist of several sensor outputs combined to create a control scheme.

An instrument and control engineer will usually design an instrument circuit such that the transmitter (or other instrument) output is linearly proportional to the measured process. Consider the tank level instrument loop just described. As tank level varies from 0% to 100%, we want a transmitter electrical output that can be scaled in direct proportion to the actual tank level. A typical transmitter output signal is shown in Figure O-2. The output signal varies linearly with the measured process parameter, from a low value of 4 milliamps (mA) to a high limit of 20 mA. Under ideal conditions, a zero tank level would result in a 4 mA transmitter output and a 100% level would correspond to a 20 mA output (or 10 to 50 mA, respectively).


Figure O-2 Desired Relationship between Measured Process and Sensor/Transmitter Output

Example O-1

Referring to Figure O-2, what is the expected transmitter output signal if tank level is 50%? The tank level varies from 0% to 100% for a transmitter output span of 16 mA (4 to 20 mA). The transmitter output signal should be:

Transmitter Output = 4 mA + (0.50)(16 mA) = 12 mA

As expected, the transmitter output is at the half-way point of its total span. The above equation will be developed in more detail in the following section.


Example O-2

Referring again to Figure O-2, what is the expected tank level if the transmitter signal is 18 mA?

Tank Level = ((18 mA - 4 mA) / 16 mA span) × 100% = 87.5%
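Examples O-1 and O-2 amount to a linear conversion between percent of span and loop current. The following Python helpers are illustrative only (they are not part of any CPS procedure) and reproduce both results:

    def percent_to_ma(percent, low_ma=4.0, high_ma=20.0):
        """Convert percent of span (0-100) to loop current in mA."""
        return low_ma + (percent / 100.0) * (high_ma - low_ma)

    def ma_to_percent(ma, low_ma=4.0, high_ma=20.0):
        """Convert loop current in mA to percent of span."""
        return (ma - low_ma) / (high_ma - low_ma) * 100.0

    print(percent_to_ma(50.0))    # 12.0 mA (Example O-1)
    print(ma_to_percent(18.0))    # 87.5 % (Example O-2)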

O.2 Scaling Terminology

Instrument scaling, as applied to process instrumentation, is a method of establishing a relationship between a process sensor input and the signal conditioning devices that transmit/condition the sensor's output signal. The goal is to provide an accurate representation of the measured parameter throughout the measured span. In its simplest perspective, scaling converts process measurements (temperature, pressure, differential pressure, etc.) from engineering units (°F, psig, etc.) into analog electrical units (VDC, mADC, etc.).

A typical instrument loop consists of a sensor, power supply, and end-use instruments as shown in Figure O-3. Whereas Figure O-1 showed the functionality of the circuit, Figure O-3 shows the instrument loop as an actual circuit. All components are connected in a series arrangement. The power supply provides the necessary voltage for the pressure transmitter to function. In response to the measured process, the pressure transmitter provides a 4 to 20 mA output current.


Figure O-3 Simplified Instrument Loop Schematic

Suppose the pressure transmitter shown in Figure O-3 monitors tank pressure and is designed to operate over a process range of 1700 to 2500 psig. The transmitter has an elevated zero, or pedestal, of 1700 psig. The transmitter has an analog output signal of 4 to 20 mADC.

Other components in Figure O-3 include a pressure indicator and a trip unit, each sensing the same 4 to 20 mA signal from the transmitter. The loop signal is developed from the transmitter current via the voltage developed across a 250-ohm input resistor; this arrangement is typical. As the current through the input resistors varies from 4 to 20 mA, the voltage developed across each resistor varies from 1 V to 5 V, maintaining a linear relationship between the measured process and the resultant output signal. The only purpose of the resistors is to convert the current signal to a voltage signal.


As configured in this example, the 1700 to 2500 psig process signal has a span of 800 psig, which corresponds to the 1 to 5 VDC (or 4 VDC span) across the input resistor. The scale factor is defined as the ratio of the analog electrical signal span to the process span, or 4 VDC/800 psig = 0.005 VDC/psig. Accounting for the 1700 psig input pedestal and the 1 VDC output pedestal, the scaling equation that relates the input to the output is given by:

EP = (0.005 V/psig)(P - 1700 psig) + 1 V

where,
EP = Voltage corresponding to the input pressure
P = Input pressure value between 1700 and 2500 psig

The above scaling equation provides an exact relationship between the process variable and the voltage developed across an input resistor for the stated configuration.
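The scaling equation above can be captured as a small Python function for checking cardinal points; this sketch is illustrative only, using the span and pedestal values of this example:

    def pressure_to_volts(p_psig, low=1700.0, high=2500.0, v_low=1.0, v_high=5.0):
        """Voltage across the 250-ohm input resistor for a pressure within the calibrated range."""
        scale_factor = (v_high - v_low) / (high - low)   # 0.005 V/psig for this example
        return scale_factor * (p_psig - low) + v_low

    for p in (1700.0, 2100.0, 2500.0):
        print(p, "psig ->", pressure_to_volts(p), "V")   # 1.0 V, 3.0 V, 5.0 V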

O.3 Module Equations

Module equations are commonly referred to as transfer functions.

They define the relationship between a module's input and output signals and are just scaling equations that describe this input/output relationship. Transfer functions are typically classified as either static or dynamic.

Static transfer functions are time-independent and can be either linear or nonlinear. Modules that typically have static transfer functions include:

  • Input resistors (I/V modules)
  • Isolators
  • Summators

The module equation of a static device will sometimes include a gain adjustment also. For example, a simple summator may have the following module equation:

Eout = G(k1E1 + k2E2 + kBEB) + 1 V

where,
k1, k2 = Input signal gains
kB = Bias input gain
E1, E2 = Input voltages
EB = Bias voltage
G = Output gain
Eout = Output voltage

O.4 Scaling Calculation

After the process algorithm, module equations, and required ranges have been determined, the scaling calculation can be completed. The scaling factor is used with the scaling equation to derive the voltage equation from the process equation. An overall system equation can be developed by combining module equations, as applicable. For example, assume the use of two modules in an instrument loop.

The first module has two inputs, E1 and E2, that are summed together with a module gain of G1. The simplified equation for this module is given by:

EA = G1(E1 + E2)

Now, assume that the output, EA, is summed with another input, E3, which has a module gain of G2. The resulting module equation is:

Eout = G2(E3 + EA)

or, substituting for EA:

Eout = G2[E3 + G1(E1 + E2)]

Page 207 of 214

The expression for each voltage above can also be complex. The result, however, is an overall scaling equation that defines the system operation. Once a scaling equation has been developed and the scaling calculation performed, the equation should be checked by inputting typical process values and verifying that reasonable analog values are calculated. Each module should be tested separately to confirm its accuracy before it is combined with other modules. As part of the test process, include the minimum and maximum process values to ensure that the limits work as expected.
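Purely for illustration (the gains and signal values below are arbitrary, not plant data), the kind of module-combination and endpoint check described above might be sketched as:

    # Hypothetical module transfer functions for the two-module example above.
    def module_1(e1, e2, g1):
        """First summator: EA = G1 * (E1 + E2)."""
        return g1 * (e1 + e2)

    def module_2(e3, ea, g2):
        """Second summator: Eout = G2 * (E3 + EA)."""
        return g2 * (e3 + ea)

    def overall(e1, e2, e3, g1, g2):
        """Combined system equation: Eout = G2 * [E3 + G1 * (E1 + E2)]."""
        return module_2(e3, module_1(e1, e2, g1), g2)

    # Check the composed equation at illustrative minimum and maximum analog
    # signal values (1 V and 5 V here) to confirm the outputs look reasonable.
    g1, g2 = 0.5, 0.25           # arbitrary gains for the check
    low = overall(1.0, 1.0, 1.0, g1, g2)
    high = overall(5.0, 5.0, 5.0, g1, g2)
    print(low, high)             # 0.5 and 2.5 for these illustrative gains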

Page 208 of 214

APPENDIX P RADIATION MONITORING SYSTEMS

Radiation monitoring systems have unique features that complicate an uncertainty analysis. The system design, detector calibration, and display method can all reduce system accuracy. Whenever evaluating a radiation monitoring system, review References 5.9 and 5.33 for additional information, as well as:

  • The radiation monitoring system operation and maintenance manual
  • The radiation monitoring system calibration procedures

The following should be considered as part of any uncertainty analysis:

Detector Measurement Uncertainty

A radiation monitoring system detector's response varies with the following parameters:

  • Energy level of the incident particles.
  • Count rate of the detected particles.
  • Type of particle being counted (depending on application, the particles may be gamma photons, neutrons, or beta particles).

Detector Count Rate Measurement Uncertainty

The detector's measurement uncertainty can be affected by the following:

  • On the low end of scale, the uncertainty in count rate response is affected by signal to noise ratio effects.
  • On the high end of scale, the uncertainty in count rate is affected by pulse pile-up in which discrete pulses are missed.
  • Throughout the detection range, the source-to-detector alignment and geometry can impact the measurement uncertainty. For example, the containment high range radiation monitors need an unobstructed view of the containment dome. Blockages such as concrete walls can degrade the measurement capability of the detector.

Detector Energy Response Uncertainty

The detector energy response uncertainty can be affected by the following:

Page 209 of 214


  • On the low end, the discriminator setting and the energy sensitivity of the detector.

  • On the high end, the point at which a rise in incident particle energy does not result in a change in pulse height output.
  • Throughout the detection range, a degrading failure of the system.

For most permanently installed radiation detectors, the detector is designed to respond to incident particles over a certain range of energies. The count rate output is then correlated to a mR/hr or µCi/cc indication by applying a conversion factor, without regard to differing incident particle energies.

When the plant is shut down, the detector's indicated count rate is generally derived from lower energy particles. When the plant is operating, the particle energy tends to be higher. In this case, a typical detector will display a higher count rate even if the number of incident particles per unit time remains the same. As the incident particle energy level changes, the probability of detection changes for a given count rate. During initial calibration, this difference is accounted for by exposing the detector to sample streams of different radioisotopes and measuring the detector's response.

After in-plant installation, the calibration is checked by exposing the detector to fixed external sources of different radioisotopes.

The detector coefficient represents the sensitivity of the detector, which is typically specified in Amp/(R/hr). The sensitivity is provided by the vendor for each detector and can be different if the detectors are ever replaced.

Post-accident radiation measurement and indication accuracy for containment area monitoring is specified in Regulatory Guide 1.97, Table 2, Footnote 7: "Detectors should respond to gamma radiation photons within any energy range from 60 keV to 3 MeV with an energy response accuracy of +/-20% at any specific photon energy from 0.1 MeV to 3 MeV. Overall system accuracy should be within a factor of 2 over the entire range." Revision 3 of RG 1.97 revised the above footnote to omit the +/-20% accuracy requirement for the detector.

Now the containment area radiation monitors "should respond to gamma radiation photons within any energy range from 60 keV to 3 MeV with a dose rate response accuracy within a factor of 2 over the entire range." Considering the prior revision, it is clear that the intent of the current "factor of 2" requirement applies to the overall system accuracy and not to the detector accuracy alone. This interpretation is consistent with the requirements placed on other radiation monitoring devices in the same table.

The uncertainty terms identified in radiation monitoring technologies are either percent of reading or Equivalent Linear Full Scale (ELFS), which is the same as percent of span provided the span and full scale are equivalent. The method for converting percent of reading uncertainties to percent ELFS using the "error factor" concept is based on the model from an example radiation trip calculation in Reference 5.3, ISA S67.04, Part II.

Conversion of this error to an ELFS error permits combining the percent of reading error with other string errors.

Consider the following example: a containment area monitor indicates R/hr over an eight (8) decade range, and the uncertainty calculated for the detector is 12.2% of reading.

This detector accuracy error can be expressed as error factors of:

(1.0 + 0.122)/1.0 = 1.122 and (1.0 - 0.122)/1.0 = 0.878

ELFS is calculated for both factors using:

ERROR FACTOR = 10^(D*X)

where D = 8, the number of decades on the meter, and X = ELFS as a decimal value. Solving for X:

X(+) = (log(1.122)/8) x 100% = +0.62% ELFS
X(-) = (log(0.878)/8) x 100% = -0.71% ELFS

The error will be assumed to be symmetrical and set at the larger of the two values; thus, EDET(ref) = +/-0.71% ELFS.
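As a cross-check of this arithmetic (illustration only; the function name is hypothetical and not part of the CPS methodology), the percent-of-reading to ELFS conversion can be expressed as:

    import math

    def reading_error_to_elfs(pct_of_reading, decades):
        """Convert a percent-of-reading error to percent ELFS on a log-scale meter."""
        factor_hi = 1.0 + pct_of_reading / 100.0
        factor_lo = 1.0 - pct_of_reading / 100.0
        x_plus = math.log10(factor_hi) / decades * 100.0
        x_minus = math.log10(factor_lo) / decades * 100.0
        # Treat the error as symmetrical and keep the larger magnitude.
        return max(abs(x_plus), abs(x_minus))

    # Example from the text: 12.2% of reading over eight decades -> about 0.71% ELFS
    print(round(reading_error_to_elfs(12.2, 8), 2))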

Whenever evaluating the uncertainty of a radiation monitoring system, the periodic calibration methods are particularly important to consider. EPRI TR-102644, Reference 5.33, provides additional guidance. Also, the applicable system engineer should be contacted for additional expertise.

Page 211 of 214

APPENDIX Q ROSEMOUNT LETTERS

Rosemount Nuclear Instruments, Inc.

12001 Technology Drive, Eden Prairie, MN 55344 USA
Tel: (612) 828-8252  Fax: (612) 828-8280

4 April 2000

Ref: Grand Gulf Nuclear Station message on INPO plant reports, subject: Rosemount Instrument Setpoint Methodology, dated March 9, 2000

Dear Customer:

This letter is intended to eliminate any confusion that may have arisen as a result of the referenced message from Grand Gulf. The message was concerned with statistical variation associated with published performance variables and how the variation relates to the published specifications in Rosemount Nuclear Instruments, Inc. (RNII) pressure transmitter models 1152, 1153 Series B, 1153 Series D, 1154 and 1154 Series H. According to our understanding, the performance variables of primary concern are those discussed in GE Instrument Setpoint Methodology document NEDC 31336, namely:

1. Reference Accuracy
2. Ambient Temperature Effect
3. Overpressure Effect
4. Static Pressure Effects
5. Power Supply Effect

It is RNII's understanding that GE and the NRC have accepted the methodology of using transmitter testing to ensure specifications are met as a basis for confirming specifications are +/-3σ. The conclusions we draw regarding specifications being +/-3σ are based on manufacturing testing and screening, final assembly acceptance testing, periodic (e.g., every 3 months) audit testing of transmitter samples, and limited statistical analysis. Please note that all performance specifications are based on zero-based ranges under reference conditions. Finally, we wish to make clear that no inferences are made with respect to confidence levels associated with any specification.
1. Reference Accuracy.

All (100%) RNII transmitters, including models 1152, 1153 Series B, 1153 Series D, 1154 and 1154 Series H, are tested to verify accuracy to +/-0.25% of span at 0%, 20%, 40%, 60%, 80% and 100% of span. Therefore, the reference accuracy published in our specifications is considered +/-3σ.

2. Ambient Temperature Effect

All (100%) amplifier boards are tested for compliance with their temperature effect specifications prior to final assembly. All sensor modules, with the exception of model 1154, are temperature compensated to assure compliance with their temperature effect specifications. All (100%) model 1154, model 1154 Series H and model 1153 gage and absolute pressure transmitters are tested following final assembly to verify compliance with specification. Additionally, a review of audit test data performed on final assemblies of model 1152 and model 1153 transmitters not tested following final assembly indicates conformance to specification. Therefore, the ambient temperature effect published in our specifications is considered +/-3σ.

Page 212 of 214

3. Overpressure Effect

Testing of this variable is done at the module stage. All (100%) range 3 through 8 sensor modules are tested for compliance to specifications. We do not test range 9 or 10 modules for overpressure for safety reasons. However, design similarity permits us to conclude that statements made for ranges 3 through 8 would also apply to ranges 9 and 10. Therefore, the overpressure effect published in our specifications is considered +/-3σ.

4. Static Pressure Effects

All (100%) differential pressure sensor modules are tested for compliance with static pressure zero errors. Additionally, Models 1153 and 1154 Ranges 3, 6, 7 and 8 are 100% tested after final assembly for added assurance of specification compliance. Audit testing performed on ranges 4 and 5 has shown compliance to the specification. Therefore, static pressure effects published in our specifications are considered +/-3σ.

5. Power Supply Effect

Testing for conformance to this specification is performed on all transmitters undergoing sample (audit) testing. This variable has historically exhibited extremely small performance errors and a small standard deviation (essentially a mean error of zero with a standard deviation typically less than 10% of the specification). All transmitters tested were found in compliance with the specification. Therefore, the power supply effect published in our specifications is considered +/-3σ.

Should you have any further questions, please contact Jerry Edwards at (612) 828-3951.

Sincerely,

Jerry L. Edwards
Manager, Sales, Marketing and Contracts
Rosemount Nuclear Instruments, Inc.

Page 213 of 214

APPENDIX R RECORD OF COORDINATION FOR COMPUTER POINT ACCURACY

Computer Point Accuracy (using single point data)

Hardware and software (considering that digital displays involve compression limits) affect the accuracy of computer inputs. Taking into consideration the following errors, an accuracy of 0.25% of full range will be utilized (References 5.28 and 5.29).

  • Gain Error = +/- 0.025% Full Range
  • Repeatability Error = +/- 0.025% Full Range
  • Others (inaccuracy of the filter input card, reference junction compensation, and any other loss due to conversions and scan frequency) = +/- 0.2% Full Range

Total = +/- 0.25% Full Range
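As a minimal arithmetic check of this allocation (illustration only):

    # Component error allowances, in percent of full range (from the list above).
    gain_error = 0.025
    repeatability_error = 0.025
    other_errors = 0.2    # filter input card, reference junction compensation, etc.

    total = gain_error + repeatability_error + other_errors
    assert abs(total - 0.25) < 1e-12   # matches the 0.25% of full range used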

Page 214 of 214


_ _1 ATTACHMENT Additional Information Supporting the Request for License Amendment Related to 24-Month Fuel Cycle Instrumentation and Controls Section Questions I though 11 refer to the page number of the licensee's application dated May 20, 2004 (ADAMS Accession No. ML041460522).

I&C Request 1:

On page 21, Attachment 1, AmerGen states that the Clinton Power Station (CPS) setpoint calculations were based on Instrument Society of America (ISA) Standard 67.04, Part II and that "Method 3" was not utilized. The staff is aware that both ISA Methods 2 and 3 have been used at CPS for setpoint calculations in another TS amendment request. Provide details of the setpoint calculation methodology used in this amendment request including some typical sample calculations. Also, please confirm that this amendment request only incorporates ISA Method 2.

I&C Response 1:

The Clinton Power Station (CPS) 24-Month Cycle License Amendment Request (Reference 1) states that Instrument Society of America (ISA) RP67.04, Part II (Reference 2), Method 3 was not utilized in performing the revised setpoint calculations that support any revised allowable values. The revised allowable values proposed in the license application are all supported by Reference 2 Method 1 calculations or Channel Error (CE) calculations. The CE calculations are applied for those setpoints that do not have a safety analysis analytical limit, as described in CPS Nuclear Engineering Standard CI-01.00, Revision 3, "Instrument Setpoint Calculation Methodology," Section 4.5.3. This standard is provided as Appendix A to this attachment. For these CE calculations, all applicable uncertainty is placed between the allowable value (AV) and the nominal trip setpoint based on the Square Root Sum of the Squares (SRSS).

Regardless of the calculation method used, after the as-found readings are taken, the setpoint is always calibrated to be within the As-Left Tolerance (ALT) limits. Restoration of the setpoint to within the ALT provides adequate margin to the AV to account for 30 months of drift in addition to other channel uncertainties.
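The plant-specific values are contained in the calculations referenced below; purely as an illustration of the SRSS concept described above (the terms and numbers here are made up, not CPS results), the combination might be sketched as:

    import math

    def srss(*uncertainties):
        """Square root of the sum of the squares of independent random uncertainties."""
        return math.sqrt(sum(u ** 2 for u in uncertainties))

    # Illustrative channel uncertainty terms, in percent of span.
    accuracy, drift, m_and_te, cal_tolerance = 0.5, 0.6, 0.25, 0.25
    channel_error = srss(accuracy, drift, m_and_te, cal_tolerance)

    # For a CE-type calculation, the allowable value is offset from the nominal
    # trip setpoint by the combined uncertainty (direction depends on the trip).
    nominal_trip_setpoint = 100.0                 # arbitrary units for illustration
    allowable_value = nominal_trip_setpoint + channel_error
    print(round(channel_error, 3), round(allowable_value, 3))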

There are, however, two existing Method 3 calculations that support current allowable values. As part of the AmerGen review of these calculations, it was determined that changes to the calculated AVs were not necessary to support the change in calibration frequency to 24 months. In addition to the two Method 3 calculations, proposed changes in calibration frequency are also supported by setpoint calculations performed in accordance with Reference 2 Method 1 and Method 2, and General Electric (GE) "Method 2" as defined in NEDC-32889P (Reference 3).

As a sample calculation, a copy of Method 3 calculation IP-C-0059, "Setpoint Calculation for RPV Level 3 and Level 8 (NR); Transmitter 1B21N095A, B," is provided as Appendix B to this attachment. This calculation supports the AV for Technical Specification (TS) Section 3.3.5.1, "Emergency Core Cooling System (ECCS) Instrumentation," Table 3.3.5.1-1, Function 4.d, "Reactor Vessel Water Level - Low, Level 3 (Confirmatory)," Table 3.3.5.1-1, Function 5.d, "Reactor Vessel Water Level - Low, Level 3 (Confirmatory)," and TS Section 3.3.5.2, "Reactor Core Isolation Cooling (RCIC) System Instrumentation," Table 3.3.5.2-1, Function 2, "Reactor Vessel Water Level - High, Level 8." In addition, a copy of Method 1 calculation IP-C-0067, "Setpoint Calculation for Main Steam Line Pressure - Low; Transmitters 1B21N076A, B, C, D," is provided as Appendix C to this attachment. This calculation supports the proposed new AV for TS Section 3.3.6.1, "Primary Containment and Drywell Isolation Instrumentation," Table 3.3.6.1-1, Function 1.b, "Main Steam Line Pressure - Low."

Page 1 of 14

I&C Request 2:

On page 17, Attachment 1, Outlying and Pooling Requirements, AmerGen proposes to limit the number of outliers excluded from any dataset to one datum. This excluded datum is above and beyond any and all data that are excluded according to the seven (7) criteria listed in pages 16 and 17 of Attachment 1. The practice of excluding a datum on statistical grounds without a plausible explanation, however, may be unwarranted.

The statistical test for outliers serves to identify a potential outlier and, as such, the offending datum is investigated for cause. The seven criteria listed in pages 16 - 17 appear to have covered all plausible causes. Exclusion of an outlier, therefore, robs the data of real information and makes any measure of variability smaller than it has to be.

Identify all (if any) outliers that surfaced in the CPS study and their disposition.

I&C Response 2:

An outlier is a data point that is significantly different from the rest of the sample. The presence of an outlier in a sample of instrument data will result in the calculation of a larger sample standard deviation. In the small sample sizes available for CPS, outlier identification is more likely and its contribution to the calculated standard deviation will be more pronounced.

The resulting drift calculations after removal of the outliers are anticipated to more accurately reflect actual device performance. Inclusion of data that is significantly different from the general data population will result in applying a broader range of acceptable as-found instrumentation settings. In that case, marginally performing instruments, or instruments that should be more closely evaluated for corrective action, may be overlooked. By eliminating a single outlier, the resultant more restrictive As-Found / As-Left (AFAL) acceptance criteria will facilitate identification and allow the ongoing trend program to detect this condition and appropriately initiate design action, maintenance action, or both to address the problem. According to American Society for Testing and Materials (ASTM) Standard E 178-80, "Standard Practice for Dealing With Outlying Observations," the Critical-T Test is the best one to use to identify a single outlier.
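For illustration only (this is not the CPS drift analysis software, and the data below are made up), the statistic used by an ASTM E178-style critical-T screen can be computed along the following lines; the critical value it is compared against would come from the ASTM tables for the sample size and chosen significance level.

    import statistics

    def outlier_t_statistic(sample):
        """Return the most extreme value and its T statistic (its distance from the
        sample mean, in sample standard deviations)."""
        mean = statistics.mean(sample)
        stdev = statistics.stdev(sample)          # sample standard deviation
        suspect = max(sample, key=lambda x: abs(x - mean))
        return suspect, abs(suspect - mean) / stdev

    # Hypothetical drift data (percent of span); 0.95 is the suspect point.
    drift = [0.02, -0.05, 0.11, -0.08, 0.04, 0.01, -0.03, 0.95]
    value, t_stat = outlier_t_statistic(drift)
    print(value, round(t_stat, 2))   # compare t_stat to the tabulated critical T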

Beyond the seven explicit criteria specified in Attachment 1 to Reference 1, where investigation has justified removal of data, data may also be corrupt (i.e., not reflect actual performance) for a number of unverifiable causes (e.g., personnel error). As such, the allowance for exclusion of a single outlier after addressing the seven criteria attempts to focus the data on the expected performance of the instrumentation and results in triggering future evaluations at more conservative levels.

Table 1 below summarizes the instrument drift groups with a single outlier removed beyond the data that were excluded according to the seven criteria listed in Attachment 1 to Reference 1, pages 16 and 17.

Page 2 of 14

Table 1 - Instrument Drift Groups with Single Outlier Removed

Drift Analysis No. | Valid Data Points (after Outlier Removal) | Critical T Value @ 2.5% Significance Level | T Value for Outlier | No. of Outliers
Group 8A  | 45  | 3.04 | 3.25  | 1
Group 13  | 89  | 3.28 | 8.22  | 1
Group 15  | 27  | 2.71 | 2.76  | 1
Group 16  | 47  | 3.04 | 4.40  | 1
Group 17  | 28  | 2.71 | 3.10  | 1
Group 18  | 27  | 2.71 | 3.36  | 1
Group 19  | 51  | 3.13 | 3.29  | 1
Group 20  | 26  | 2.71 | 4.99  | 1
Group 24A | 29  | 2.91 | 3.38  | 1
Group 32  | 255 | 4.00 | 14.43 | 1
Group 35  | 26  | 2.71 | 4.64  | 1
Group 40  | 51  | 3.13 | 3.21  | 1
Group 41  | 27  | 2.71 | 3.15  | 1

I&C Request 3:

On page 18, Normality, AmerGen states that "The Chi-Square Goodness of Fit test or either the W or D Prime test is used, depending on.... " However, the Chi-Square test is known for having low sensitivity for testing goodness of fit, especially for small to moderate sample sizes. Additionally, the result of the test of fit is a function of the binning scheme used. For these reasons, the Chi-Squared test should not be used to test normality. Furthermore, when more than one test is available, the testing procedure must be declared in advance of the data collection and not left up to the engineer.

Identify instances where the Chi-squared test was used, (either by itself or in combination with other tests) of normality, and the results of such tests.

I&C Response 3:

The CPS drift analysis work plan requires the following tests for normality to be performed (as applicable to sample size):

  • Chi-Squared
  • D-Prime (D') for moderate to large sample sizes
  • W Test, for sample sizes less than 50
  • Coverage Analysis Histogram
  • Probability Plot

None of the above tests was used alone to confirm normality.
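For illustration only (not the CPS work plan software), comparable checks are available in standard statistics libraries; for example, a Shapiro-Wilk W test and a normal probability plot correlation using scipy:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    drift_sample = rng.normal(loc=0.0, scale=0.1, size=30)   # hypothetical drift data

    # Shapiro-Wilk W test (suited to small samples, e.g., n < 50).
    w_stat, p_value = stats.shapiro(drift_sample)

    # Normal probability plot; r close to 1 supports the normality assumption.
    (osm, osr), (slope, intercept, r) = stats.probplot(drift_sample, dist="norm")

    print(round(w_stat, 3), round(p_value, 3), round(r, 3))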

Table 2 below provides a listing of the drift analysis groups using the Chi-Squared test to show normality and what other tests were performed to confirm that normality.

Page 3 of 14

Table 2 - Drift Analysis Groups Using the Chi-Squared Test

Drift Analysis No. | Valid Data Points (N) | Degrees of Freedom | Chi-Squared Computation | Chi-Squared Result | Confirming Test(s)
Group 39A | 24 | 9 | 5.079 | Satisfied | W and Coverage Analysis Histogram
Group 15  | 27 | 9 | 5.528 | Satisfied | W and Coverage Analysis Histogram
Group 20  | 26 | 9 | 5.566 | Satisfied | W and Coverage Analysis Histogram
Group 14  | 26 | 9 | 6.294 | Satisfied | W and Coverage Analysis Histogram
Group 18  | 27 | 9 | 7.506 | Satisfied | Coverage Analysis Histogram
Group 23  | 30 | 9 | 7.920 | Satisfied | W and Coverage Analysis Histogram
Group 40  | 51 | 9 | 7.925 | Satisfied | D' and Coverage Analysis Histogram
Group 39  | 67 | 9 | 8.336 | Satisfied | D' and Coverage Analysis Histogram
Group 17  | 28 | 9 | 8.397 | Satisfied | W, Coverage Analysis Histogram, and Normal Probability Plot

I&C Request 4:

Page 19, Time Dependency. Justify the use of R-squared thresholds of 0.3 and 0.1.

I&C Response 4:

The R-squared value thresholds of 0.3 and 0.1 are provided in Exelon Generation Company, LLC (Exelon) Nuclear Engineering Standard NES-EIC-20.04 (Reference 4), Appendix J, pages J17-J18, which was previously reviewed by the NRC as part of their review of the LaSalle 24-month cycle submittal (Reference 5). The R-squared test is not intended to stand alone; it is one diverse check among several. As described in Reference 1, Attachment 1, page 19, the conclusion of the Time Dependency evaluation is determined by the collective evaluation of the results of the Scatter Plot, Binning Analysis, Drift Regression, and Absolute Value of the Drift Regression analyses.

I&C Request 5:

Pages 19 - 20, Tolerance Interval and Drift Characterization. Describe, or give formula for the "extrapolated standard deviation." Please indicate how the extrapolated standard deviation is used for the extrapolated prediction.

I&C Response 5:

The phrase "extrapolated standard deviation" is from page 20 of Attachment 1 to Reference 1 and is referring to how the time dependent random drift is established for 915 days. The extrapolated standard deviation is a linear extrapolation developed from the slope and intercept of the plotted bin standard deviations from the regression analysis. The equation for extrapolated standard deviation (S) is as follows.

Page 4 of 14

S = mt + b

where:

m = the slope of the drift line
b = the intercept with the y axis
t = 915 days

The time dependent random drift is then calculated by the following formula:

Time Dependent Random Drift = +/- KNS

where:

K = the required confidence factor from the K-Values Worksheet
N = the normality adjustment factor from the Histogram Adjustment Worksheet
S = the extrapolated standard deviation

In summary, the extrapolated standard deviation is used to determine the time dependent random drift. Multiplying the extrapolated standard deviation by the confidence factor (based on the sample size and 95/95 confidence) and the normality adjustment factor determines the time dependent random drift.
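Purely as an illustration of these formulas (the slope, intercept, and factors shown are made-up values, not CPS results), the extrapolation might be expressed as:

    def time_dependent_random_drift(slope, intercept, k_factor, n_factor, t_days=915.0):
        """Extrapolate the bin standard deviation to t_days (S = m*t + b) and apply
        the confidence (K) and normality adjustment (N) factors."""
        s_extrapolated = slope * t_days + intercept
        return k_factor * n_factor * s_extrapolated   # applied as +/- this value

    # Illustrative inputs: slope in %/day, intercept in %, 95/95 K factor, N factor.
    print(round(time_dependent_random_drift(0.0005, 0.2, 2.0, 1.05), 3))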

I&C Request 6:

Page 17, Attachment 1. Clarify the statement in the first paragraph, "These changes were only eliminated where insufficient as-found or as-left data was available."

I&C Response 6:

The discussion provided on page 17 of Attachment 1 to Reference 1 is a continuation of the "Data Collection and Conditioning" section, which describes the methodology used to adjust or eliminate data points during the data conditioning process.

The first paragraph on page 17 describes how scaling or setpoint changes can be used as a basis for eliminating a data point. When scaling or setpoint changes are incorporated into a revision of the calibration procedure, and that procedure is performed at the subsequent calibration, the initial as-found data reflects a different test point than the test point data available from the previous as-left. In instances where the as-found data did not correlate to the same test point as the previous as-left, the data was eliminated. This is the intent of the statement that changes were eliminated only where "insufficient as-found or as-left data was available."

I&C Request 7:

Page 18, Attachment 1. Clarify the statement in the second paragraph, "For the instances where statistical analysis could not be performed, CPS setpoint methodology assumptions for drift values are utilized to support 30 month (i.e., 24 months plus 25% scheduling allowance of TS SR 3.0.2) calibration intervals." Provide the basis for acceptability of the assumptions.

Page 5 of 14

I&C Response 7:

In the absence of a statistical analysis of drift, the CPS setpoint methodology (i.e., Appendix A to this attachment) requires the use of vendor-supplied drift data in the setpoint calculation. In the absence of vendor-supplied drift data, the standard conservatively assumes that drift will occur; however, it is not required to be modeled as time dependent. The standard provides two alternatives for the drift value. The first alternative is the assumption that the drift is equal to the vendor-stated accuracy for the device involved. A second alternative provided in the standard is to use 0.5% of span for electrical devices and 1.0% of span for mechanical devices in the absence of vendor data. Selection of these drift values is the result of engineering review of typical Reference Accuracy and industry practices for these device types. The setpoint drift value is based on the SRSS of the individual device drift values (e.g., vendor accuracy for each device in the loop).
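As a simple illustration of how such assumed device drift allowances could be selected and combined for a loop (the device list and values below are hypothetical, not a CPS calculation):

    import math

    def assumed_drift(vendor_drift=None, vendor_accuracy=None, device_type="electrical"):
        """Pick a drift allowance per the hierarchy described above: vendor drift,
        else vendor accuracy, else 0.5% of span (electrical) / 1.0% of span (mechanical)."""
        if vendor_drift is not None:
            return vendor_drift
        if vendor_accuracy is not None:
            return vendor_accuracy
        return 0.5 if device_type == "electrical" else 1.0

    # Hypothetical loop: transmitter, trip unit, indicator (drift in % of span).
    device_drifts = [assumed_drift(vendor_drift=0.76),
                     assumed_drift(vendor_accuracy=0.36),
                     assumed_drift(device_type="electrical")]
    loop_drift = math.sqrt(sum(d ** 2 for d in device_drifts))
    print(round(loop_drift, 2))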

In order to confirm adequate drift modeling, whether by one of these alternatives or by statistical analysis of historical performance values, CPS is committed to performing drift trending as documented in Attachment 4 to Reference 1 (i.e., commitment 2). This program requires that a condition report be written for any instrument found out of tolerance (OOT) (i.e., outside the As-Found Tolerance (AFT)). The AFT includes the assigned drift, accuracy, and calibration uncertainties. During calibration, as-found readings are taken. If the readings are found outside the AFT, a condition report is written. If the readings are also beyond the AV, the instrument is declared inoperable. In either case, the calibration is always reset to within the As-Left Tolerance (ALT) limits. The condition report documents the occurrence and provides for drift performance trending, including proper setpoint modeling and equipment performance.

I&C Request 8:

Page 6, Attachment 5, fourth paragraph. Two failures were identified for Electroswitch 20K. Provide justification how 95/95 percent confidence level was achieved.

I&C Response 8:

This request indicates that two Electroswitch 20K switch failures were identified during the review of CPS surveillance history of the logic system components. However, as documented on page 6 of Attachment 5 to Reference 1, there was one Electroswitch 20K failure and one GE Type CR2940 switch failure. The conclusion of this surveillance history evaluation was that since the switch types were unique and only two failures were identified in a large population of control switches over the evaluation period, these failures were not indicative of a repetitive or time based failure problem.

The failures addressed in this request were associated with TS Surveillance Requirement (SR) 3.3.3.2.2, which requires verification that each required control circuit and transfer switch in the remote shutdown panel is capable of performing its intended function. This SR is not a calibration surveillance. The guidance provided in Regulatory Guide (RG) 1.105, "Setpoints for Safety-related Instrumentation," Revision 3, indicates that the 95/95 percent confidence level is the criterion for combining uncertainties in determining a trip setpoint and its AV to assure that there is a 95% probability that the constructed limits contain 95% of the population of interest. Since SR 3.3.3.2.2 is not a calibration surveillance, contains no trip setpoints or AVs, and has no measured uncertainties to combine, the 95/95 percent confidence level is not applicable to the components discussed in this request.

Page 6 of 14

I&C Request 9:

Page 32, Attachment 5, last paragraph. Three failures were identified. Provide justification how 95/95 percent confidence level was achieved.

I&C Response 9:

The failures addressed in this request were associated with TS SR 3.3.3.2.3, which requires performance of a channel calibration for each required instrumentation channel.

As stated on page 32 of Attachment 5 to Reference 1, no allowable value is applicable to these functions and a separate drift evaluation was not performed for the Remote Shutdown System instrument channels based on the design function and equipment history. The guidance provided in RG 1.105 indicates that the 95/95 percent confidence level is the criterion for combining uncertainties in determining a trip setpoint and its AV to assure that there is a 95% probability that the constructed limits contain 95% of the population of interest. Since SR 3.3.3.2.3 contains no trip setpoints or AVs, and has no measured uncertainties to combine, the 95/95 percent confidence level is not applicable to the failures discussed in this request.

I&C Request 10:

Page 33, Attachment 5, fourth paragraph. Two failures were identified. Provide justification how 95/95 percent confidence level was achieved.

I&C Response 10:

The failures addressed in this request were associated with TS SR 3.3.4.1.2, which requires performance of a channel calibration for each required instrumentation channel.

As stated on page 33 of Attachment 5 to Reference 1, drift evaluations were not performed for the turbine stop valve limit switches since they are mechanical devices that require mechanical adjustment only. Drift is not applicable to these devices. The two identified failures were the only limit switch failures that occurred during a review period from 1992 to 2002. Only one of the two failures was corrected by adjusting the setting; the second was strictly a mechanical failure. In lieu of attempting to analyze this single failure as reflective of a statistical uncertainty to be evaluated against the 95/95 criterion, engineering judgment is used to apply margin from the setpoint to the AV and from the AV to the AL. The limit switches are part of the Maintenance Rule condition monitoring program, which tracks the devices for failure trends. As such, any identified adverse trend requires an action plan to correct the deficiency. In addition, to provide assurance that mechanical failures have not occurred, the switches are functionally tested on a quarterly basis (i.e., SR 3.3.4.1.1) to verify operation.

I&C Request 11:

Page J16, Attachment 6, second paragraph. Clarify the statement, "The 46 to 135 day and 46 to 135 day bins.......

Page 7 of 14

I&C Response 11:

Attachment 6 of Reference 1 provides Appendix J to Reference 4. Page J16 of Appendix J shows an example of a Time Dependence Evaluation. In the example, the first table indicates the data count and percent of total count for each bin. As noted in this request, the paragraph below the table states, "The 46 to 135 day and 46 to 135 day bins are thrown out due to less than 5 data points and..." This is a typographical error.

The statement should read "The 46 to 135 day and 651 to 800 day bins are thrown out due to less than 5 data points and..." Reference 4 will be corrected and CPS has written an Issue Report to track resolution of the error in this standard.

Questions 12 and 13 refer to the Clinton Power Station Instrument Setpoint Calculation Methodology included in the licensee's letter dated April 16, 2004 (ADAMS Accession No. ML041120059).

I&C Request 12:

Appendix L. Indicate the setpoint calculations for which the graded approach to Categories 2, 3, and 4 of this Appendix has been used and provide sample calculations, indicating the confidence level achievable.

I&C Response 12:

All the calculations supporting the 24-month cycle amendment request have been prepared to the same level of rigor. No attempt has been made to establish whether they are category 1 or 2 because they both require the highest level of rigor.

Appendix L to the CPS setpoint methodology provides the CPS graded approach to uncertainty analysis (see Appendix A to this attachment). Graded approaches are based on the fact that all the rigor and conservatism established in Reference 2 may not be warranted for all setpoints in a nuclear power plant. In accordance with Reference 2, a nuclear plant licensee may establish a multilevel classification scheme by documenting the rationale used to establish the classification. Implementation of a graded approach to setpoints requires the user to identify how critically important each setpoint is.

Therefore, a graded approach, with classification for setpoints, will help ensure proper maintenance of safety grade nuclear instrumentation without compromising the safe and reliable operation of the plant.

I&C Request 13:

Appendix N. Has this Appendix been applied for any setpoint calculation? If yes, justify how 95/95 confidence level has been achieved and provide sample setpoint calculations.

I&C Response 13:

Appendix N to the CPS setpoint methodology (see Appendix A to this attachment) addresses the potential interaction of setpoints due to the uncertainty tolerances about the different setpoints. An example process would be the high and low level setpoints for a tank. None of the calculations supporting the proposed amendment request in Reference 1 needed to utilize Appendix N to the CPS setpoint methodology to assure the low likelihood of overlap. The setpoints in the calculations that contain two setpoints were not close enough to each other to require consideration of potential overlap.

Page 8 of 14

Electrical Engineering Section

Electrical Request 1:

Surveillance Requirement (SR) 3.8.1.18, Diesel Generator (DG) load sequence timer calibration.

This SR requires each timer to be within +/- 10% of its design setpoint. Please provide the basis to demonstrate that the change in frequency from 18 months to 24 months does not require a closer tolerance for the as-left setpoint for the timer.

Electrical Response 1:

CPS TS SR 3.8.1.18 states, "Verify the sequence time is within +/- 10% of design for each load sequence timer." This SR does not require calibration of any instrument; therefore, the +/- 10% value is not a calibration tolerance. The SR is performed as part of CPS procedures 9080.21, "Diesel Generator 1A - ECCS Integrated," and 9080.22, "Diesel Generator 1B - ECCS Integrated," rather than as a calibration procedure. The surveillance is currently performed on an 18-month frequency consistent with the recommendations of RG 1.108, "Periodic Testing of Diesel Generator Units Used as Onsite Electric Power Systems at Nuclear Power Plants." AmerGen has proposed in Reference 1 to revise the frequency for this surveillance from 18 months to 24 months consistent with the guidance in RG 1.9, "Selection, Design, Qualification, and Testing of Emergency Diesel Generator Units Used as Class 1E Onsite Electric Power Systems at Nuclear Power Plants," plant conditions required to perform the SR, and the expected fuel cycle length. Historically, there have been no failures of the timing sequence verification while performing this surveillance and while employing the current calibration intervals for the time delay devices involved. There are three types of time delays checked in the procedure: the Nuclear System Protection System (NSPS) circuit card timer (5 seconds), the Westinghouse TD-5 time delay relay (10 seconds), and the Agastat E7000 time delay relay (40 seconds). The calibration frequency of the NSPS circuit card timer is the only one that will be impacted by the new fuel cycle duration. This frequency will be increasing from 18 to 24 months. The as-left setting requirement during calibration of this timer is +/- 1% of setpoint. Review of the calculation that evaluates drift on this device indicates that no change to this as-left value is required when increasing the calibration interval from 18 to 24 months.

Electrical Request 2:

SR 3.8.11.2, System functional test of the Static VAR compensator (SVC) protection subsystem.

Please identify the signals and components in the SVC protection subsystem whose function may be affected by increasing the test frequency from 18 to 24 months.

Describe what measures you plan to take to detect and compensate for any degraded performance between surveillance intervals. Please provide copies of drawings M01-1103-1 and E02-IAP03 describing the SVC.

Page 9 of 14

Electrical Response 2:

TS SR 3.8.11.2 requires performance of a system functional test of each static VAR compensator (SVC) protection subsystem, including breaker actuation. This SR requires a functional test of the reserve auxiliary transformer (RAT) SVC and the emergency reserve auxiliary transformer (ERAT) SVC to ensure that each SVC protection subsystem will actuate to automatically open the associated SVC's main circuit breakers in response to signals associated with SVC failure modes that could potentially damage or degrade plant equipment. System function testing should thus include satisfactory operation of the associated relays and testing of the sensors for which failure modes would be undetected. The functional checks of the SVC protection subsystems are performed by procedures CPS 9384.01, "ERAT SVC Protective Relays Functional Test," and CPS 9384.02 "RAT SVC Protective Relays Functional Test."

These procedures identify the 18-month test frequency from the TS SR for performing the functional check. The 18-month frequency was selected to correspond with the CPS fuel cycle length. Performing the functional checks of these devices requires operating the breakers that isolate the SVC from the associated 4.16 kilovolt (kV) bus and, therefore, requires a plant outage for testing the RAT SVC protection devices. Testing the ERAT SVC protection devices does not require a plant outage; however, the ERAT SVC functional testing is performed on the same frequency as the RAT SVC for consistency, to conform to the fuel cycle length, and to allow analysis of all the SVC test data on the same basis for trending purposes.

The devices functionally tested as part of this SR are electronic protective relays monitoring the output of the SVC for changes in voltage, current, and harmonic content.

Since they are electronic relays, they are programmed rather than being adjusted by dial settings and movement of induction disks. Their function is to serve as the redundant protective system to the programmable high speed controller and isolate the SVC before the SVC output could negatively affect the voltage supplied to the safety related buses.

The inputs to these relays are from current transformers (CTs) and potential transformers (PTs ) located at the SVC connection to the associated 4.16 kV bus. CTs and PTs are static devices with no adjustments and no expected change to their output ratio. Based on the types of devices tested as part of TS SR 3.8.11.2, there is no need to take additional actions to detect and compensate for any degraded performance between surveillance intervals as a result of the extended test frequency.

Based on clarification provided by the NRC during a February 3, 2005 teleconference, the SVC systems single line diagrams and protection single line diagrams for the RAT and ERAT SVCs are provided in Appendix D. The SVC system description is also provided as Appendix E to this attachment. This system description provides a description of the operation and function of the CPS SVC protection subsystem devices.

Electrical Request 3:

Table 3.3.8.1-1, Loss of Power Instrumentation, indicates a change in the loss of offsite power (LOOP) time delay from 10 seconds to 5 seconds. FSAR (Rev. 10), Section 8.3.1.1.2, Unit Class 1E A-C Power Systems, indicates (on page 8.3-7) that the starting time of the largest Class 1E motor is approximately 10 seconds when the offsite voltages are at their minimum expected value. It is our understanding that the 5 second delay corresponds to a complete loss of voltage (0 Volts). Please confirm that the decrease in the time delay for the LOOP trip to 5 seconds does not challenge the voltage-time trip characteristic of the LOOP relay by any motor starting at minimum expected voltage.

Page 10 of 14

Electrical Response 3:

There were no changes to the setpoints for the loss of voltage relays. The operating times of the relays during Loss of Offsite Power (i.e., 0 bus volts) events or during voltage transients (i.e., most severe dip during motor starting) are unchanged.

Therefore, there is no change to the relay/bus/system response to motor starting transients as a result of changing the value listed in TS Table 3.3.8.1-1, "Loss of Power Instrumentation," Item 1.b, Loss of Voltage - Time Delay, from 10 to 5 seconds.

Electrical Request 4:

The TS Bases statements for the change request for SR 3.8.1.8, Transfer of Offsite Power from Normal source to Alternate source, SR 3.8.1.12, DG auto start and load on ECCS signal, and SR 3.8.1.13, DG automatic trip bypass, indicate the change can be justified by operating experience that has shown that these components usually (emphasis added) pass the SR (and removed "when performed on the 18 month frequency'). Please provide the data that supports the justification that, even with some failures at the 18 month surveillance frequency, the frequency can be extended to 24 months.

Electrical Response 4:

As stated in Attachment 5 of Reference 1, a review of the applicable CPS surveillance history for the AC Sources demonstrated there have been no previous failures of these three SRs that would have been detected solely by the required 18-month periodic performance. Additionally, the more frequent testing required by SRs 3.8.1.1, 3.8.1.2, 3.8.1.3, and 3.8.1.7 provides additional assurance that offsite power and diesel generator availability and proper functioning will be promptly detected. The commitment to trend ongoing performance at CPS will also identify any potential unanticipated degradation resulting from extending these tests from 18 to 24 months.

The phrase "usually pass the SR when performed on the 18 month frequency" is a common generic Bases statement (which occurs in 49 instances in the CPS TS Bases).

In these instances, the proposed Bases revisions that coordinate with the change in Surveillance Frequencies from 18 to 24 months have simply deleted the portion "when performed on the 18 month frequency." The word "usually" is not intended to necessarily reflect that there have been failures, but is simply a generic statement that would encompass occasional failures. The three Bases changes addressed in this request are also made consistently in each of the other 46 occurrences.

Electrical Request 5:

The TS Bases statements for SR 3.8.1.15, DG hot restart test, SR 3.8.1.16, DG synchronizing test, SR 3.8.1.17, DG protective trip bypass and SR 3.8.1.18, DG load sequence timer calibration, state that the surveillances are consistent with Regulatory Guide (RG) 1.108. This RG had been withdrawn and replaced with Revision 3 to RG 1.9 in 1993. Please explain the continued reference to RG 1.108.

Page 11 of 14

Electrical Response 5:

The TS Bases for the SRs specified in this request provide separate RG cross-reference citations for (1) testing acceptance criteria and (2) testing frequency. The intent of the TS Bases discussions is to provide a basis for the requirements addressed by a given Limiting Condition for Operation (LCO) or SR. There is no intent to imply a broader commitment to these RGs than the context in which the citation is made.

In the surveillances referenced in this request, the testing acceptance criteria are not proposed for change, and therefore, the current licensing basis for these tests continues to reference RG 1.108. However, the frequency of testing specified in RG 1.108 was 18 months, while RG 1.9 supports the proposed 24-month testing frequency. As such, only the portion of the Bases associated with the frequency is revised to reflect its support within RG 1.9. CPS is committed to portions of RG 1.108, Revision 1, dated August 1977, as well as portions of RG 1.9, Revision 2, dated December 1979, and Revision 3, dated July 1993, as indicated in the Updated Safety Analysis Report (USAR) Section 1.8.

Electrical Request 6:

No justification has been provided in the TS Bases statements for a 24 month surveillance frequency for SR 3.8.1.19, DG auto start on a combined LOOP and ECCS signal, and SR 3.8.4.2, Battery charger full load and recharge capability. Please provide the basis for this requested change.

Electrical Response 6:

Based on clarification provided by the NRC Staff in a February 3, 2005 teleconference, the following additional justification is provided. However, AmerGen notes that it is inappropriate for the TS Bases to contain justification for past changes. The Bases provide standard wording related to the Frequency basis, consistent with the content and format of NUREG-1434, "Standard Technical Specifications General Electric Plants, BWR/6."

The diesel generator (DG) is started numerous times during the operating cycle in accordance with various surveillance requirements. Performance of SR 3.8.1.19 encompasses portions of the logic and starting relays that are more frequently tested, such that the surveillance uniquely tests only a small number of items that are not tested during the monthly and semi-annual tests of the diesel generator. This includes the bus and offsite source loss of power relays, the LOCA signal to the DG start logic, and the contacts of the auxiliary relays for these inputs to the DG start logic. These relays are located in mild environmental zones of the plant. The increased interval between calibrations for the loss of power relays and the sensing circuits for the LOCA signals has been evaluated in other portions of Reference 1 and found to perform satisfactorily in support of the extension to 24-month calibration intervals. The auxiliary relays will age an additional 6 months before being operated during the integrated test. This additional aging will, however, have no impact on the condition of the relay coils since they are de-energized during this period. Any small amount of increased oxidation on the relay contact surfaces, assumed to occur during the additional 6 months of aging, would not be expected to be capable of maintaining its integrity when exposed to the 125 VDC potential of the circuit, nor would it provide sufficient resistance to prevent pickup of the auto start relay. Accordingly, the increase in the surveillance interval for SR 3.8.1.19 is not expected to impact successful performance of this surveillance.

Page 12 of 14

The battery charger provides power to the DC bus continuously during the operating cycle so the capability of the charger to provide the required voltage is continuously demonstrated. The battery charger full load and recharge capability surveillance required by SR 3.8.4.2 verifies the ability of the charger to produce its nameplate output for a specified duration. The charger output is checked by feeding a load bank, which is adjusted to produce the required current output from the charger. This does not require the charger to operate any differently than during normal operation since the charger automatically adjusts its output to maintain the selected voltage level. Aging of internal components of the charger is adequately addressed by preventive maintenance tasks, which inspect the charger and dictate periodic replacement of age sensitive components (such as capacitors on a 6 year interval). Accordingly the increase in the surveillance interval is not expected to impact successful performance of this surveillance.

Electrical Request 7:

The TS Bases statements for SR 3.8.4.3, Battery service test, indicates the change request is an exception to RGs 1.32 and 1.129 without any explanation. Please provide the justification why the extension to 24 months is acceptable.

Electrical Response 7:

AmerGen is committed to RG 1.32, "Criteria for Safety-Related Electric Power Systems for Nuclear Power Plants," and RG 1.129, "Maintenance, Testing, and Replacement of Large Lead Storage Batteries for Nuclear Power Plants," which include commitments to perform a battery "service test" (i.e., SR 3.8.4.3) during refueling outages, or at some other outage, with intervals between tests "not to exceed 18 months." Since the battery service test is required to be performed during outage conditions in accordance with Note 2 to SR 3.8.4.3, and the expected fuel cycle lengths are nominally 24 months, this exception is required.

A battery service test is a special as found test of the battery's capability to satisfy the design requirements (i.e., battery duty cycle) of the DC electrical power system. Note 1 to SR 3.8.4.3 allows the performance of a modified performance discharge test (i.e., SR 3.8.6.6) in lieu of the battery service test. As explained in the CPS TS Bases for SR 3.8.4.3, this substitution is acceptable because the modified performance test of SR 3.8.6.6 represents an equivalent test of battery capability as SR 3.8.4.3.

The battery performance test is a test of the constant current capacity of a battery, normally done in the as-found condition, after having been in service, to detect any change in the capacity determined by the acceptance test. The modified performance test utilizes current values that bound the battery duty cycle of the service test. The test is intended to determine overall battery degradation due to age and usage. Based on trending the battery capacity determined by the performance discharge test, the battery will be replaced prior to its capacity dropping below 80% of the manufacturer's rating. A capacity of 80% shows that the battery rate of deterioration is increasing, even though the battery is sized to meet the assumed duty cycle loads when the battery design capacity reaches this 80% limit. Replacement of the battery prior to the capacity dropping below 80% of the manufacturer's rating will ensure that the battery continues to meet the requirements of SR 3.8.6.6.

Page 13 of 14

The Surveillance Frequency for the performance discharge test is normally 60 months.

If the battery shows degradation, or if the battery has reached 85% of its expected life, the Surveillance Frequency required by SR 3.8.6.6 is reduced to either 24 months or 12 months. This 12-month Frequency is not being extended to 24 months. As such, when the battery begins to show degradation or has reached 85% of its expected life with capacity < 100% of manufacturer's rating, the increased testing frequency of 12 months will continue to appropriately monitor the battery condition. Use of the modified performance test will assure capability to meet the design required battery duty cycle (i.e., service test acceptance criteria).

As such, extending the periodic battery service test required by SR 3.8.4.3 will not result in any increased potential for battery age related degradation to impact continued ability of the battery to perform its assumed duty cycle since any additional monitoring will continue to be imposed by SR 3.8.6.6.

References:

1. Letter from Keith R. Jury (AmerGen Energy Company, LLC) to U. S. NRC, "Request for Amendment to Technical Specification Surveillance Requirement Frequencies to Support 24-Month Fuel Cycles in Accordance with the Guidance of Generic Letter 91-04, 'Changes in Technical Specification Surveillance Intervals to Accommodate a 24-Month Fuel Cycle'," dated May 20, 2004
2. Instrument Society of America (ISA) RP67.04, "Methodologies for the Determination of Setpoints for Nuclear Safety-Related Instrumentation," Part II, 1994
3. GE Nuclear Energy Report NEDC-32889P, "General Electric Methodology for Instrumentation Technical Specification and Setpoint Analysis," Revision 2, dated February 2000
4. Exelon Nuclear Engineering Standard NES-EIC-20.04, "Analysis of Instrument Channel Setpoint Error and Instrument Loop Accuracy," Revision 3
5. Letter from U. S. NRC to Oliver D. Kingsley (Exelon Generation Company, LLC), Amendment Nos. 147 and 133 for LaSalle County Station, Units 1 and 2, dated March 30, 2001

Appendix A

CI-01.00, Revision 3
Clinton Power Station Instrument Setpoint Calculation Methodology

NUCLEAR STATION ENGINEERING STANDARD CI-01.00
INSTRUMENT SETPOINT CALCULATION METHODOLOGY
Revision 3

TITLE: INSTRUMENT SETPOINT CALCULATION METHODOLOGY

SCOPE OF REVISION:

1. Updated references to current procedures, standards and revisions.
2. Incorporated revisions necessary to produce setpoint calculations using the results of the drift analysis prepared for implementation of NRC Generic Letter 91-04, "Changes in Technical Specification Surveillance Intervals to Accommodate a 24-Month Fuel Cycle."
3. Incorporated guidance from NES-EIC-20.04, "Analysis of Instrument Channel Setpoint Error and Instrument Loop Accuracy," providing additional reasonable assumptions for drift in lieu of better data.

4. Incorporated guidance acknowledging that calculations may be prepared in accordance with other methodologies, such as ISA Methods 2 and 3, after consulting with the Electrical/Instrument and Control Design Manager.

INFORMATION USE
Procedure Owner: Paul Marcum
Approval Date: 04-21-04


TABLE OF CONTENTS

1.0 PURPOSE ... 3
2.0 DISCUSSION/DEFINITIONS ... 3
    2.1 Discussion ... 3
    2.2 Definitions ... 9
3.0 RESPONSIBILITY ... 21
4.0 STANDARD ... 21
    4.1 Setpoint Calculation Guidelines ... 21
    4.2 Definition of Input Data and Requirements ... 23
    4.3 Determining Individual Device Error Terms ... 35
    4.4 Determining Loop/Channel Values (Input to Setpoint Calculation) ... 39
    4.5 Calculation of Nominal Trip Setpoints and Indication/Control Loops ... 54
5.0 REFERENCES ... 60
6.0 APPENDICES ... 64
    Appendix A, Guidance on Device Specific Accuracy and Drift Allowances ... 66
    Appendix B, Sample Calculation Format ... 76
    Appendix C, Uncertainty Analysis Fundamentals ... 94
    Appendix D, Effect of Insulation Resistance on Uncertainty ... 131
    Appendix E, Flow Measurement Uncertainty Effects ... 147
    Appendix F, Level Measurement Temperature Effects ... 155
    Appendix G, Static Head and Line Loss Pressure Effects ... 165
    Appendix H, Measuring and Test Equipment Uncertainty ... 167
    Appendix I, Negligible Uncertainties / CPS Standard Assumptions ... 175
    Appendix J, Digital Signal Processing Uncertainties ... 181
    Appendix K, Propagation of Uncertainty Through Signal Conditioning Modules ... 184
    Appendix L, Graded Approach to Uncertainty Analysis ... 190
    Appendix M, Using the Results of Statistical Drift Analysis ... 196
    Appendix N, Statistical Analysis of Setpoint Interaction ... 199
    Appendix O, Instrument Loop Scaling ... 201
    Appendix P, Radiation Monitoring Systems ... 209
    Appendix Q, Rosemount Letters ... 212
    Appendix R, Record of Coordination for Computer Point Accuracy ... 214

1.0 PURPOSE

1.1 The purpose of this Engineering Standard is to provide a methodology for the determination of instrument loop uncertainties and setpoints for the Clinton Power Station.

The methodology described in this standard applies to uncertainty calculations for setpoint, control, and indication applications.

1.2 This document provides guidelines for the calculation of instrumentation setpoints, control, and indication applications for the Clinton Power Station.

1.3 These guidelines are applicable to all instrument setpoints. They include guidance for calculation of both Allowable Values and Nominal Trip Setpoints for setpoints included in plant Technical Specifications and calculation of Nominal Trip Setpoints for instruments not covered in the plant Technical Specifications. This document also includes guidance for determination of all input data applicable to the calculations as well as important topics concerning the interfaces with surveillance and calibration procedures and practices.

2.0 DISCUSSION/DEFINITIONS

2.1 Discussion

2.1.1 This document is structured to progress through a complete calculation process, from the most detailed level of individual device characteristics (drift, accuracy, etc.),

through determination of loop characteristics, and finally to calculation of setpoints and related topics, as outlined in the following figure:

  • Definition of Input Data and Requirements
  • Calculation of Individual Device Terms (device accuracy, drift, etc.)
  • Combination of Individual Device Terms into Loop Terms (loop accuracy, etc.)
  • Calculation of Total Channel/Loop Values (Setpoint, Allowable Value, etc.)
  • Evaluation of Results and Resolution of Problem Areas
  • Supporting Information

FIGURE 1. THE SETPOINT CALCULATION PROCESS

a. DETERMINE SETPOINT OR CHANNEL ERROR VALUE TO BE CALCULATED
b. DEFINE INSTRUMENT CHANNEL CHARACTERISTICS (INSTRUMENT DEFINITION, PROCESS & PHYSICAL INTERFACES, EXTERNAL INTERFACES)
c. DETERMINE INSTRUMENT CHANNEL DESIGN REQUIREMENTS (REGULATORY REQUIREMENTS, FUNCTIONAL REQUIREMENTS)
d. CALCULATE DEVICE SPECIFIC ERROR TERMS (ACCURACY, DRIFT, CALIBRATION)
e. CALCULATE CHANNEL SPECIFIC ERROR TERMS (ACCURACY, DRIFT, CALIBRATION, PMA/PEA, OTHERS)

For setpoints with Analytical Limits:
f. CALCULATE AV
g. CALCULATE NTSP
h. SELECT ACTUAL SETPOINT

For setpoints/indications with no Analytical Limit:
i. CALCULATE CHANNEL ERROR
j. CALCULATE SETPOINT

k. COMPARE NTSP, AV, CHANNEL ERROR TO EXISTING REQUIREMENTS (TECHNICAL SPECIFICATIONS, FUNCTIONAL REQUIREMENTS, OTHER REGULATORY REQUIREMENTS)
l. OPTIMIZE CHANNEL TO MEET REQUIREMENTS

2.1.2 Instrument setpoint uncertainty allowances and setpoint discrepancies are issues that have led to a number of operational problems throughout the nuclear industry.

Historically, CPS instrument loop uncertainty and setpoint determination had been based upon varying setpoint methodologies. Instrument channel uncertainties and setpoints had been established by two different methods, depending on whether they applied to the Reactor Protection System and Engineered Safeguards Functions developed by GE or to other safety-related systems.

These methods involved:

1. Legacy S&L setpoint calculations, which conservatively added accuracy errors to drift errors rather than combining them by SRSS. These calculations rarely recognized an Analytical Limit and, as such, did not calculate a Technical Specification Allowable Value.
2. GE setpoint calculations, which are similar to ISA Method 2 (Ref. 5.3).

A third methodology was used to verify that an allowance for instrument uncertainty was contained in the allowable value for Technical Specification indicating instruments (i.e., "Channel Error" as defined in this standard). All three methodologies were rigid in recommendation and differed in both process and application. This resulted in CPS instrument uncertainty and setpoint calculations lacking a consistent definition of the allowable value, and in an incomplete understanding of the relationship of the allowable value to earlier setpoint methodologies, procedures, and operability criteria. Beginning with Rev. 1, this Engineering Standard is intended to provide consistency among all CPS instrument setpoint calculations by incorporating the common strengths of the CPS historical methodologies and ISA into one common standard with common terms.

This Standard provides a mechanism for the uniform development of new and revised CPS instrument setpoint and channel error calculations.

This standard does not prohibit the use of ISA Recommended Practice Methods 2 and 3, but it strongly prefers Method 1 for setpoints with analytical limits; Method 1 is therefore the method prescribed within this standard. The prescribed method should be used unless it infringes on operating margin to the point where the resulting increase in nuisance alarms or actuations could cause more harm than the added conservatism gained. Because Methods 2 and 3 calculate the setpoint directly from the analytical limit, more operating margin can be attained with those methods. The Electrical/Instrument and Control Design Manager should be consulted prior to using methods other than the one preferred in this standard.


2.1.3 This standard provides flexibility, then, in the precise method by which a setpoint is determined, allowing for variations in calculation rigor dependent upon the significance of the function of the setpoint or operator decision point. The intent is to provide a format and systematic method, in contrast with a prescriptive method, of identifying and combining instrument uncertainties. As such, this standard provides guidelines to statistically combine the uncertainties of components in a measurement and to perform comparisons to ensure that there is adequate margin between the setpoint and a given limit to account for measurement error. This descriptive, systematic method provides a consistent criterion for assessing the magnitude of each uncertainty component, thereby ensuring plant safety.

2.1.4 A systematic method of identifying and combining instrument uncertainties is necessary to ensure that adequate margin has been provided for safety related instrument channels that perform protective functions and for instrument channels that are important to safety.

This ensures that vital plant protective features are actuated at the appropriate time during transient and accident conditions. Analytical Limits have been established through the process of accident analysis, which assumed that plant protective features would intervene to limit the magnitude of a transient. Limiting Safety System Settings (LSSS) are established in accordance with 10 CFR 50.36. Ensuring that these protective features actuate as they were assumed in the accident analysis provides assurance that safety limits will not be exceeded. The methodology presented by this revision is based on the industry standard ANSI/ISA S67.04, "Setpoints for Nuclear Safety Related Instrumentation," Parts I and II (Ref. 5.3), which is endorsed by Regulatory Guide 1.105 (Ref. 5.11). Clinton Power Station (CPS) has invoked RG 1.105 as a basis for meeting the requirements of 10 CFR 50, Appendix A, General Design Criteria 13 and 20.

2.1.5 Relation to ISA Standards and Regulatory Guides

2.1.5.1 The applicable ISA Standard for setpoint calculations is ISA S67.04. That standard was prepared by a committee of the ISA, which included some representatives who also participated in preparation of the CPS Setpoint Methodology. The CPS Setpoint Methodology is consistent with ISA Standard S67.04. More specifically, this standard, as it applies to setpoints with analytical limits, strongly prefers the use of ISA Recommended Practice Method 1.

It is recognized that maintenance of operational margin has not been possible in rare cases using Method 1. It is also recognized that GE normally uses a method similar to ISA Recommended Practice Method 2. CPS currently uses Method 3 for reactor water level setpoints, and GE provided several Method 2 calculations when power uprate was implemented.

2.1.5.2 There are three Regulatory Guides related to setpoint methodology: RG 1.105 (Ref. 5.11), RG 1.89 (Ref. 5.35), and RG 1.97 (Ref. 5.34). RG 1.105 covers setpoint methodology.

This Setpoint Methodology complies with RG 1.105. RG 1.89 covers equipment qualification. This Setpoint Methodology does not directly address equipment qualification, beyond the basic assumption that instrumentation is qualified for its intended service. This Setpoint Methodology may be used to determine instrument errors under various conditions as part of the process of demonstrating that instruments are qualified to perform specified functions, in accordance with RG 1.89. RG 1.97 covers the topic of post accident instrumentation. This Setpoint Methodology also does not address RG 1.97. However, as is the case with RG 1.89, the methods of determining instrument performance inherent in this Setpoint Methodology may be used when demonstrating that a particular instrument channel satisfies the guidance of RG 1.97.

2.1.6 In summary, this standard, based upon ISA-S67.04, provides an acceptable method to calculate instrument loop accuracy and setpoints, and applies to NSED as well as any technical staff members involved in the modification of instrument loops at CPS. The results of an uncertainty analysis might be applied to the following types of calculations:

  • Parameters and setpoints that have Analytical Limits
  • Evaluation or justification of previously established setpoints
  • Parameters and setpoints that do not have Analytical Limits
  • Determination of instrument indication uncertainties

2.1.7 Setpoints without Analytical Limits

Many setpoints are important for reliable power generation and equipment protection. Because these setpoints may not be derived from a safety limit tied to an accident analysis, the basis for the setpoint calculation is typically developed from process limits that either provide equipment protection or maintain generation capacity. As defined in Appendix L, "Graded Approach to Uncertainty Analysis," the criteria in this Engineering Standard may also be used as a guide for setpoints that do not have Analytical Limits to improve plant reliability, but the calculation may not be as rigorous.

2.1.8 These guidelines are applicable to all instrument setpoints. They include guidance for calculation of both Allowable Values and Nominal Trip Setpoints for setpoints included in plant Technical Specifications, and calculation of Nominal Trip Setpoints for instruments not covered in plant Technical Specifications.

2.1.9 Indication Uncertainty (Channel Error)

Uncertainty associated with process parameter indication is also important for safe and reliable plant operation.

Allowing for indication uncertainty supports compliance with the Technical Specifications and the various operating procedures. The methodology presented in this Engineering Standard is applicable to determining indication uncertainty.

2.1.10 Mechanical Equipment Setpoints This Engineering Standard was developed specifically for instrumentation components and loops. This Engineering Standard does not specifically apply to mechanical equipment setpoints (e.g., safety and relief valve setpoints) or protective relay applications. However, guidance presented herein may be useful to predict the performance of other non-instrumentation-type devices.

2.1.11 Rounding Conventions Normal rounding conventions (rounding up or down depending on the last digit in the calculated result) do not apply to error calculations or setpoints. All rounding of results should be done in the direction that is conservative relative to plant safety (upward for error terms, away from the Analytical Limit for Allowable Values and Nominal Trip Setpoints). Additionally, all output values provided to calibration procedures should be expressed to the precision required by the calibration procedure.
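As an illustration only (the function and parameter names below are hypothetical and not part of this standard), conservative rounding can be sketched as:

```python
import math

def round_conservative(value: float, decimals: int, direction: str) -> float:
    """Round toward the conservative side rather than to the nearest digit.

    direction: "up" for error terms (a larger error allowance is conservative),
    "down" for a value that must move away from an upper Analytical Limit.
    """
    scale = 10 ** decimals
    if direction == "up":
        return math.ceil(value * scale) / scale
    return math.floor(value * scale) / scale

# Hypothetical numbers: an error term of 1.2341% of span rounds up to 1.24%,
# while a setpoint of 101.237 psig below a high Analytical Limit rounds down to 101.23 psig.
print(round_conservative(1.2341, 2, "up"))     # 1.24
print(round_conservative(101.237, 2, "down"))  # 101.23
```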


2.2 Definitions

NOTE: Many of the following definitions are based on the methodology of NEDC-31336 (Ref. 5.1). Where the terms defined are equivalent to terms used in ISA Standard S67.04 (Ref. 5.3), the equivalence is noted.

2.2.1 AS-FOUND TOLERANCE (AFTL): The tolerance on the As-Found error of the instrument loop (AFTL); if this tolerance is exceeded, calibration is required to restore the loop to within the As-Left Tolerance. An as-found tolerance (AFTi) is also developed for all devices in the channel.

2.2.2 ACCURACY TEMPERATURE EFFECT (ATE): The change in instrument output for a constant input when exposed to different ambient temperatures.

2.2.3 ALLOWABLE VALUE (AV): (Technical Specifications Limit):

The limiting value of the sensed process variable at which the trip setpoint may be found during instrument surveillance. Usually prescribed as a license condition.

Equivalent to the term Allowable Value as used in ISA Standard S67.04.

2.2.4 ANALYTICAL LIMIT (AL): The value of the sensed process variable, established as part of the safety analysis, at or prior to the point at which a desired action is to be initiated to prevent the safety process variable from reaching the associated licensing safety limit. Equivalent to the term Analytical Limit as used in ISA Standard S67.04.

2.2.5 AS-LEFT TOLERANCE (ALTi): This tolerance is the precision with which the technician should be able to set the device during surveillance. Additionally, if the As-Found value is within the (ALTi) then re-calibration is not required.

The As-Left Tolerance is determined by the organization responsible for defining the surveillance procedures (recommendations are provided in this document). A loop as-left tolerance (ALTL) is also developed for all devices in channel.

2.2.6 BIAS (B): A systematic or fixed instrument uncertainty, which is predictable for a given set of conditions because of the existence of a known direction (positive or negative). See Appendix C, Section C.1.2, for additional discussion.


2.2.7 BOUNDING VALUE (BV): The extreme value of the conservatively calculated process variable that is to be compared to the licensing safety limit during the transient or accident analysis. This value may be either a maximum or minimum value, depending upon the safety variable.

2.2.8 CALIBRATION TOOL ERROR (Ci): The accuracy of the device (multimeter, etc.) being used to perform the calibration or surveillance test. Also referred to as M&TE (MTE). For typical precision equipment CPS recommends that this error term be considered to be a 3 sigma value, provided that the calibration of these devices is to NIST traceable standards and minimizes the effects of hysteresis, linearity and repeatability.

2.2.9 CALIBRATION STANDARD ERROR (CSTD): The error in the calibration of the calibrating tool. Per the CPS standard CI-01.00 assumptions, this value is considered negligible relative to the overall calibration error term and can be ignored.

2.2.10 CHANNEL CALIBRATION ACCURACY (CL): The quality of freedom from error to which the nominal trip setpoint of a channel can be calibrated with respect to the true desired setpoint, considering only the errors introduced by the inaccuracies of the calibrating equipment used as the standards or references and the allowances for errors introduced by the calibration procedures. The accuracy of the different devices utilized to calibrate the individual channel instruments is the degree of conformity of the indicated values or outputs of these standards or references to the true, exact, or ideal values. The value specified is the requirement for the combined accuracies of all equipment selected to calibrate the actual monitoring and trip devices of an instrument channel, plus allowances for inaccuracies of the calibration procedures. Channel calibration accuracy does not include the combined accuracies of the individual channel instruments that are actually used to monitor the process variable and provide the channel trip function.

2.2.11 CHANNEL INSTRUMENT ACCURACY (AL): The quality of freedom from error of the complete instrument channel with respect to acceptable standards or references. The value specified is the requirement for the combined accuracies of all components in the channel that are used to monitor the process variable and/or provide the trip functions and includes the combined conformity, linearity, hysteresis and repeatability errors of all these devices. The accuracy of each individual component in the channel is the degree of conformity of the indicated values of that instrument to the values of a recognized and acceptable standard or reference device (usually National Bureau of Standards

traceable) that is used to calibrate the instrument.

Channel instrument accuracy, channel calibration accuracy, and channel instrument drifts are considered to be independent variables. This definition encompasses the terms Vendor Accuracy, Hysteresis, and Repeatability defined in ISA Standard S67.04.

2.2.12 CHANNEL INSTRUMENT DRIFT (DL): The change in the value of the process variable at which the trip action will occur between the time the nominal trip setpoint is calibrated and a subsequent surveillance test. The initial design data considers drift to be an independent variable. As field data is acquired, it may be substituted for the initial design information. This term is equivalent to the Drift Uncertainty (DR) term used in the ISA Standard S67.04.

2.2.13 CHANNEL INDICATION UNCERTAINTY (CE): This is a prediction of error in an indicator or data supply channel resulting from all causes that could reasonably be expected during the time the channel is performing its function. This term is not used in setpoint calculations.

2.2.14 CONFIDENCE LEVEL: The relative frequency that the calculated statistic is correct.

2.2.15 CONFIDENCE INTERVAL: The frequency with which an interval estimate of a parameter may be expected to contain the true value. For example, 95% coverage of the true value means that, in repeated sampling, when a 95% uncertainty interval is constructed for each sample, over the long run the intervals will contain the true value 95% of the time.

2.2.16 CPS STANDARD CI-01.00 ASSUMPTIONS: Assumptions established by the Setpoint Program that are considered to be defensible and should be used without modification in any new or revised calculation performed under this methodology, as applicable. See Appendix I, Section I.11 for the current standard assumptions. However, it should be noted that specific assumptions germane to the individual calculation shall follow all standard assumptions.

2.2.17 DEADBAND: The range within which the input signal can vary without experiencing a change in the output.

2.2.18 DESIGN BASIS EVENT (DBE): The limiting abnormal transient or an accident which is analyzed using the analytical limit value for the setpoint to determine the bounding value of a process variable.


2.2.19 DRIFT TOLERANCE INTERVAL (DTIC): Defined herein as the calculated drift, based on As-Found/As-Left data, for the calibration interval and tolerance interval of interest from a statistical drift study.

2.2.20 FULL SPAN/SCALE (FS): The highest value of the measured variable that the device is adjusted to measure.

2.2.21 HARSH ENVIRONMENT: This term refers to the worst environmental conditions to which an instrument is exposed during normal, transient, accident or post-accident conditions, out to the point in time when the device is no longer called upon to serve any monitoring or trip function. This term may be used in Equipment Qualification to define the qualification conditions.

From the standpoint of establishing setpoints, Harsh Environment does not apply. This distinction is made to avoid confusion between the long-term functional requirements for the devices, which includes post-trip operation, and the operational requirements during the initial period leading to the first trip.

2.2.22 HUMIDITY EFFECT (HE): Error due to humidity.

2.2.23 HYSTERESIS: An instrument's change in response as the process input signal increases or decreases (see Fig. C-5).

2.2.24 INDICATOR READING ERROR (IRE): The error applied to the accuracy with which personnel can read the analog and digital indications in an instrument loop or on M&TE. This value will normally be one quarter of the smallest division of the scale. IRE is not required IF the device ALT is rounded to the nearest conservative half-minor division.

For non-linear scales the IRE may be evaluated for the area of interest. Appendix C provides in depth discussion and usage guidelines for IRE.
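As an illustration only (hypothetical values), an analog indicator with minor scale divisions of 2 psig would normally be assigned IRE = 2/4 = 0.5 psig.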

2.2.25 INSTRUMENT CHANNEL: An arrangement of components required to generate a protective signal, or, in the case of monitoring channels, to deliver the signal to the point at which it is monitored. Unless otherwise stated, it is assumed that the channel is the same as the loop.

Equivalent to the term Instrument Channel in ISA Standard S67.04.


2.2.26 INSTRUMENT RESPONSE TIME EFFECTS: The delay in the actuation of a trip function following the time when a measured process variable reaches the actual trip setpoint due to time response characteristics of the instrument channel.

2.2.27 INSULATION RESISTANCE ACCURACY ERROR (IRA): This is the error effect produced by degradation of insulation resistance (IR), for the various cables, terminal boards and other components in the instrument loop, exclusive of other defined error terms (Accuracy, Calibration, Drift, Process Measurement Accuracy, Primary Element Accuracy).

Since the effect of current leakage associated with IRA is predictable and will act only in one direction for a given loop, IRA is always treated as a bias term in calculations.

2.2.28 LICENSEE EVENT REPORT (LER): A report which must be filed with the NRC by the utility when a technical specifications limit is known to be exceeded, as required by 10CFR50.73.

2.2.29 LICENSING SAFETY LIMIT (LSL): The limit on a safety process variable that is established by licensing requirements to provide conservative protection for the integrity of physical barriers that guard against uncontrolled release of radioactivity. Events of moderate frequency, infrequent events, and accidents use appropriately assigned licensing safety limits. Overpressure events use appropriately selected criteria for upset, emergency, or faulted ASME category events. Equivalent to Safety Limit in ISA Standard S67.04.

2.2.30 LIMITING SAFETY SYSTEMS SETTING (LSSS): A term used in the Technical Specifications, and in ISA Standard S67.04, to refer to Reactor Protection System (nominal) trip setpoints and allowable values.

2.2.31 LIMITING NORMAL OPERATING TRANSIENT: The most severe transient event affecting a process variable during normal operation for which trip initiation is to be avoided.

2.2.32 LINEARITY: The ability of the instrument to provide a linear output in response to a linear input (see Fig. C-6).

2.2.33 MEAN VALUE: The average value of a random sample or population. For n measurements of xi, where i ranges from 1 to n, the mean is given by M = (Σ xi)/n.

2.2.34 MEASURED SIGNAL: The electrical, mechanical, pneumatic, or other variable applied to the input of a device.


2.2.35 MEASURED VARIABLE: A quantity, property, or condition that is measured, e.g., temperature, pressure, flow rate, or speed.

2.2.36 MEASUREMENT: The present value of a variable such as flow rate, pressure, level, or temperature.

2.2.37 MEASUREMENT AND TEST EQUIPMENT EFFECT (MTE): The uncertainty attributed to measuring and test equipment that is used to calibrate the instrument loop components. Also called Calibration Tool Error (Ci).

2.2.38 MILD ENVIRONMENT: An environment that at no time is more severe than the expected environment during normal plant operation, including anticipated operational occurrences.

2.2.39 MODELING ACCURACY: The modeling accuracy may consist of modeling bias and/or modeling variability. Modeling bias is the result of comparing analysis models used in event analysis to actual plant test data or more realistic models. Modeling variability is the uncertainty in the ability of the model to predict the process or safety variable.

2.2.40 MODULE: Any assembly of interconnecting components, which constitutes an identifiable device, instrument or piece of equipment. A module can be removed as a unit and replaced with a spare. It has definable performance characteristics, which permit it to be tested as a unit. A module can be a card, a drawout circuit breaker or other subassembly of a larger device, provided it meets the requirements of this definition.

2.2.41 MODULE UNCERTAINTY (As): The total uncertainty attributable to a single module. The uncertainty of an instrument loop through a display or actuation device will include the uncertainty of one or more modules.

2.2.42 NOISE: An unwanted component of a signal or variable. It causes a fluctuation in a signal that tends to obscure its information content.

2.2.43 NOMINAL TRIP SETPOINT (NTSP): The limiting value of the sensed process variable at which a trip may be set to operate at the time of calibration. This is equivalent to the term Trip Setpoint in ISA Standard S67.04.

2.2.44 NOMINAL VALUE: The value assigned for the purpose of convenient designation but existing in name only; the stated or specified value as opposed to the actual value.

2.2.45 NONLINEAR: A relationship between two or more variables that cannot be described as a straight line. When used to describe the output of an instrument, it means that the output is of a different magnitude than the input, e.g.,

square-root relationship.


2.2.46 NORMAL DISTRIBUTION: The density function of the normal random variable x, with mean μ and variance σ², is:

n(x; μ, σ) = (1 / (σ·sqrt(2π))) · e^(−(x − μ)² / (2σ²))

2.2.47 NORMAL PROCESS LIMIT (NPL): The safety limit, high or low, beyond which the normal process parameter should not vary.

Trip setpoints associated with non-safety-related functions might be based on the normal process limit.

2.2.48 NORMAL ENVIRONMENT: The environmental conditions expected during normal plant operation.

2.2.49 OPERATIONAL LIMIT (OL): The value of a process variable established to enable determination of trip avoidance margin (operating margin) for the limiting normal operating transient.

2.2.50 OVERPRESSURE EFFECT (OPE): Error due to overpressure transients (if any).

2.2.51 POWER SUPPLY EFFECT (PSE): Error due to power supply fluctuations.

2.2.52 PRIMARY ELEMENT ACCURACY (PEA): The accuracy of the device (exclusive of the sensor) which is in contact with the process, resulting in some form of interaction (e.g., in an orifice meter, the orifice plate, adjacent parts of the pipe, and the pressure connections constitute the primary element).

2.2.53 PROBABILITY: The relative frequency with which an event occurs over the long run.

2.2.54 PROCESS MEASUREMENT ACCURACY (PMA): Process variable measurement effects (e.g., the effect of changing fluid density on level measurement) aside from the primary element and the sensor.

2.2.55 RADIATION EFFECT (RE): Error due to radiation.

2.2.56 RANDOM: Describing a variable whose value at a particular future instant cannot be predicted exactly, but can only be estimated by a probability distribution function. See Appendix C, Section C.1.1, for additional discussion.

2.2.57 RANGE: The region between the limits within which a quantity is measured, received, or transmitted, expressed by stating the lower and upper range values.

2.2.58 REPEATABILITY: The ability of an instrument to produce exactly the same result every time it is subjected to the same conditions (see Figure C-4).


2.2.59 REQUIRED LIMIT (RL): A criterion sometimes applied to As-Found surveillance data for judging whether or not the channel's Allowable Value could be exceeded in a subsequent surveillance interval.

2.2.60 REVERSE ACTION: An increasing input to an instrument producing a decreasing output.

2.2.61 RFI/EMI EFFECT (REE): Error due to RFI/EMI influences (if any).

2.2.62 RISE TIME: The time it takes a system to reach a certain percentage of its final value when a step input is applied.

Common reference points are 50%, 63%, and 90% rise times.

2.2.63 RPS: Reactor Protection System.

2.2.64 RTD: Resistance Temperature Detector.

2.2.65 SAFETY LIMIT (Licensing Safety Limit): A limit on an important process variable that is necessary to reasonably protect the integrity of physical barriers that guard against the uncontrolled release of radioactivity.

2.2.66 SAFETY-RELATED INSTRUMENTATION: Instrumentation that is essential to the following:

  • Provide emergency reactor shutdown
  • Provide containment isolation
  • Provide reactor core cooling
  • Provide for containment or reactor heat removal
  • Prevent or mitigate a significant release of radioactive material to the environment or is otherwise essential to provide reasonable assurance that a nuclear power plant can be operated without undue risk to the health and safety of the public

Other instrumentation, such as certain Regulatory Guide 1.97 instrumentation, may be treated as safety related even though it may not meet the strict definition above.

2.2.67 SEISMIC EFFECT (SE): The change in instrument output for a constant input when exposed to a seismic event of specified magnitude.

2.2.68 SENSOR (TRANSMITTER): The portion of the instrument channel, which converts the process parameter value to an electrical signal. This is equivalent to ISA Standard S67.04.

2.2.69 SIGMA: The value specified is the maximum value of a standard deviation of the probability distribution of the parameter based on a normal distribution.

2.2.70 SIGNAL CONVERTER: A transducer that converts one transmission signal to another.


2.2.71 SPAN: The algebraic difference between the upper and lower values of a range.

2.2.72 SPAN SHIFT: An undesired shift in the calibrated span of an instrument (see Figure C-8). Span shift is one type of instrument drift that can occur.

2.2.73 SQUARE-ROOT EXTRACTOR: A device whose output is the square root of its input signal.

2.2.74 SQUARE-ROOT-SUM-OF-SQUARES METHOD (SRSS): A method of combining uncertainties that are random, normally distributed, and independent.

C = sqrt(a² + b²)

2.2.75 STANDARD DEVIATION (POPULATION): A measure of how widely values are dispersed from the population mean and is given by

sqrt[ (n·Σx² − (Σx)²) / (n(n − 1)) ]

2.2.76 STANDARD DEVIATION (Sample): A measure of how widely values are dispersed from the sample mean and is given by

sqrt[ (n·Σx² − (Σx)²) / n² ]

2.2.77 STATIC PRESSURE: The steady-state pressure applied to a device.
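As an illustration only of the SRSS combination (2.2.74) and the standard deviation expressions (2.2.75 and 2.2.76) above, written here as a minimal Python sketch (the function names are not part of this standard):

```python
import math
from typing import Iterable

def srss(*terms: float) -> float:
    """Square-root-sum-of-squares combination of independent random terms (2.2.74)."""
    return math.sqrt(sum(t ** 2 for t in terms))

def std_dev(samples: Iterable[float], population: bool = True) -> float:
    """Standard deviation using the expressions written in 2.2.75 / 2.2.76."""
    x = list(samples)
    n = len(x)
    sum_x = sum(x)
    sum_x2 = sum(v ** 2 for v in x)
    denom = n * (n - 1) if population else n ** 2  # denominators as written above
    return math.sqrt((n * sum_x2 - sum_x ** 2) / denom)

# Hypothetical device uncertainties, in percent of calibrated span:
print(srss(0.5, 0.25, 0.1))  # combined random uncertainty
```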

2.2.78 STATIC PRESSURE EFFECT (SPE): The change in instrument output, generally applying only to differential pressure measurements, for a constant input when measuring a differential pressure and simultaneously exposed to a static pressure. May consist of three effects:

(SPEs) Static Pressure Span Effect (random)

(SPEz) Static Pressure Zero Effect (random)

(SPEBS) Bias Span Effect (bias)

2.2.79 STEADY-STATE: A characteristic of a condition, such as value, rate, periodicity, or amplitude, exhibiting only a negligible change over an arbitrarily long period of time.

2.2.80 STEADY-STATE OPERATING VALUE (X0): The maximum or minimum value of the process variable anticipated during normal steady-state operation.


2.2.81 SUPPRESSED-ZERO RANGE: A range in which the zero value of the measured variable is less than the lower range value.

2.2.82 SURVEILLANCE INTERVAL: The elapsed time between the initiation or completion of successive surveillances or surveillance checks on the same instrument, channel, instrument loop, or other specified system or device.

2.2.83 TEST INTERVAL: The elapsed time between the initiation or completion of successive tests on the same instrument, channel, instrument loop, or other specified system or device.

2.2.84 TIME CONSTANT: For the output of a first-order system forced by a step or impulse, the time constant T is the time required to complete 63.2% of the total rise or decay.

2.2.85 TIME-DEPENDENT DRIFT: The tendency for the magnitude of instrument drift to vary with time.

2.2.86 TIME-INDEPENDENT DRIFT: The tendency for the magnitude of instrument drift to show no specific trend with time.

2.2.87 TIME RESPONSE: An output expressed as a function of time, resulting from the application of a specified input under specified operating conditions.

2.2.88 TOLERANCE: The allowable variation from a specified or true value.

2.2.89 TOLERANCE INTERVAL: An interval that contains a defined proportion of the population to a given probability.

2.2.90 TOTAL HARMONIC DISTORTION (THD): The distortion present in an AC voltage or current that causes it to deviate from an ideal sine wave.

2.2.91 TRANSFER FUNCTION: The ratio of the transformation of the output of a system to the input to the system.

2.2.92 TRANSMITTER (SENSOR): A device that measures a physical parameter such as pressure or temperature and transmits a conditioned signal to a receiving device.

2.2.93 TRANSIENT OVERSHOOT: The difference in magnitude of a sensed process variable taken from the point of trip actuation to the point at which the magnitude is at a maximum or minimum.

2.2.94 TRIP ENVIRONMENT: The environment that exists up to and including the time when the instrument channel performs its initial safety (trip) function during an event.

2.2.95 TRIP UNIT: The portion of the instrument channel which compares the converted process value of the sensor to the trip value, and provides the output "trip" signal when the trip value is reached.


2.2.96 TURNDOWN RATIO: The ratio of maximum span to calibrated span for an instrument.
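As an illustration only (hypothetical values), a transmitter with a maximum span of 3000 psig that is calibrated for a 0 to 1000 psig span has a turndown ratio of 3:1.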

2.2.97 UNCERTAINTY: The amount to which an instrument channel's output is in doubt (or the allowance made therefore) due to possible errors either random or systematic which have not been corrected for. The uncertainty is generally identified within a probability and confidence level.

2.2.98 UPPER RANGE LIMIT (URL): The maximum upper calibrated span limit for the device.

2.2.99 VENDOR ACCURACY (VA): A number or quantity that defines the limit that errors will not exceed when the device is used under reference operating conditions (see Figure C-3). In this context, error represents the change or deviation from the ideal value.

2.2.100 VENDOR DRIFT (VD): The drift value identified in vendor specifications or device testing (history) data.

2.2.101 ZERO: The point that represents no variable being transmitted (0% of the upper range value).

2.2.102 ZERO ADJUSTMENT: Means provided in an instrument to produce a parallel shift of the input-output curve.

2.2.103 ZERO ELEVATION: For an elevated-zero range, the amount the measured variable zero is above the lower range value.

2.2.104 ZERO SHIFT: An undesired shift in the calibrated zero point of an instrument (see Figure C-7). Zero shift is one type of instrument drift that can occur.

2.2.105 ZERO SUPPRESSION: For a suppressed-zero range, the amount the measured variable zero is below the lower range value.

2.2.106 The following Abbreviations and Acronyms are used:

AFTi = As-Found Tolerance
Ai = Device Accuracy
AF/AL = As Found/As Left Data
AL = Analytical Limit
AL = Loop/Channel Accuracy
ALT = As-Left Tolerance
ATE = Accuracy Temperature Effect
AV = Allowable Value
B = Bias Effect
BV = Bounding Value
BWR = Boiling Water Reactor
Ci = Calibration Device Error
CE = Channel Indication Uncertainty
CU = Channel Uncertainty
CL = Loop/Channel Calibration Accuracy Error
CSTD = Calibration Standard Error
D = Device Drift
DBE = Design Basis Event
DL = Loop/Channel Drift
DTIC = Calculated Drift Tolerance Interval
ECCS = Emergency Core Cooling System
FS = Full Span/Scale Value
g = Acceleration of gravity
HE = Humidity Effect
IR = Insulation Resistance
IRA = Insulation Resistance Accuracy Error
IRE = Indicator Reading Error
ISA = Instrument Society of America
LER = Licensee Event Report
LOCA = Loss of Coolant Accident
LSL = Licensing Safety Limit
LSSS = Limiting Safety Systems Setting
N, n = The number of Standard Deviations (sigma values) used
NIST = National Institute of Standards and Technology
NPL = Normal Process Limit
NTSP = Nominal Trip Setpoint
OL = Operational Limit
OPE = Overpressure Effect
PEA = Primary Element Accuracy
PMA = Process Measurement Accuracy
PSE = Power Supply Effect
RE = Radiation Effect
REE = RFI/EMI Effect
RFI/EMI = Radio Frequency Interference/Electromagnetic Interference
RG = Regulatory Guide
RL = Required Limit
RPS = Reactor Protection System
RTD = Resistance Temperature Detector
SE = Seismic Effect
SL = Safety Limit
SP = Span
SPE = Static Pressure Effect
SPEBS = Bias Span Effect
SPEs = Random Span Effect
SPEz = Random Zero Effect
SRSS = Square root of the sum of the squares
T = Temperature
THD = Total Harmonic Distortion
URL = Upper Range Limit
USNRC = United States Nuclear Regulatory Commission
VA = Vendor Accuracy
VD = Vendor Drift
Z = Measure of Margin in Units of Standard Deviations
ZPA = Zero Period Acceleration
σ = Sigma

3.0 RESPONSIBILITY

The Supervisor - C&I Design Engineering is responsible for the implementation of this Standard.

4.0 STANDARD

4.1 Setpoint Calculation Guidelines

The overall process for evaluating instrumentation is depicted in Figure 1 and described in the sections of this document that follow.

4.1.1 Overview

4.1.1.1 Summary of Setpoint Methodology

The Clinton Power Station (CPS) Setpoint Methodology is a statistically based methodology. It recognizes that most of the uncertainties that affect instrument performance are subject to random behavior, and it utilizes statistical (probability) estimates of the various uncertainties to achieve conservative, but reasonable, predictions of instrument channel uncertainties. The objective of the statistical approach to setpoint calculations is to achieve a workable compromise between the need to ensure instrument trips when needed and the need to avoid spurious trips that may unnecessarily challenge safety systems or disrupt plant operation. With special approval, Methods 2 or 3 of Ref. 5.3 may be used to gain small increases in operating margin to avoid spurious trips or nuisance alarms. See Section 2.1.2.

4.1.2 Fundamental Assumptions

4.1.2.1 Treatment of Uncertainties

The first fundamental assumption of the CPS Setpoint Methodology is that all uncertainties related to instrument channel performance may be treated as a combination of bias and/or independent random uncertainties. It is assumed that, although all random uncertainties might not exhibit the characteristics of a normal random distribution, the random terms may be approximated by a random normal distribution, such that statistical methods may be used to combine the individual uncertainties. Thus, a key aspect of properly applying this methodology is to examine the various error terms of interest and properly classify each term as to whether it represents a bias or random term, and then to assign adequately conservative values to the terms.
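As an illustrative sketch only (not the prescribed equations of this standard; Section 4.4 governs the actual determination of loop/channel values), the classification described above leads to a combination in which independent random terms are combined by SRSS and bias terms are added algebraically in their known direction:

```python
import math

def combine_uncertainties(random_terms, positive_biases=(), negative_biases=()):
    """Illustrative combination of channel uncertainty terms (hypothetical helper).

    random_terms: independent random uncertainties (accuracy, drift, calibration,
    etc.), all expressed on a common sigma basis and in common units.
    Bias terms are added algebraically in their known direction.
    """
    random_part = math.sqrt(sum(t ** 2 for t in random_terms))
    plus_uncertainty = random_part + sum(positive_biases)
    minus_uncertainty = -(random_part + sum(abs(b) for b in negative_biases))
    return plus_uncertainty, minus_uncertainty

# Hypothetical values in percent of calibrated span:
print(combine_uncertainties([0.5, 0.8, 0.25], positive_biases=[0.1]))
```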


4.1.2.2 Trip Timing

The second fundamental assumption of the CPS Setpoint Methodology is that the automatic trip functions associated with setpoints are optimized to function at their first trip during an event, the point in time when they (and they alone) are most relied upon for plant safety. Additional or subsequent trip functions are permitted to be less accurate because their importance to plant safety (relative to the importance of operator action) is less. Worst case environmental conditions that assume failure of protective equipment, or conditions that would only exist after the point in time where manual operator action is expected, are not applicable to the automatic trip functions that are expected or relied upon to occur in the early part of an event. This assumption is necessary to ensure that overly conservative environmental assumptions are not permitted to inflate error estimates, producing overly conservative setpoints, which may themselves lead to spurious trips and unnecessary challenges to safety systems. Paragraph 4.2.4.2(d) discusses determination of trip timing.

4.1.2.3 Instrument Qualification The third fundamental assumption of the CPS Setpoint Methodology is that safety related instrumentation has been qualified to function in the environment expected as a result of plant events. This relates to the second assumption, above. Specifically, although the setpoint is optimized for the first trip expected in an event, the instrumentation might be required to function after the first trip. In optimizing the setpoint for the first automatic function, it is expected that later automatic functions will occur, but with potentially poorer accuracy (see paragraph 4.2.4.2.(d) for further discussion on trip timing). The later automatic functions of the instrumentation can only be expected if the instrumentation has been qualified for the expected environmental conditions.

4.1.3.1 Probability Criteria

4.1.3.2 Because the CPS Setpoint Methodology is statistically based, it is necessary to establish a desired probability for the various actions associated with the setpoints. The probability target is 95%. This value has been accepted by the USNRC. Appendix C, "Uncertainty Analysis Fundamentals," and Reference 5.32, EPRI TR-103335, provide a detailed discussion of the systematic methodology.


4.1.3.3 In applying the 95% probability limit, it is important to recognize the form of the data and the objective of the calculation. For the case of test data or vendor data, the 95% probability limit corresponds to plus or minus two (2) standard deviations (i.e., 2 sigma). This represents a normal distribution with 95% of the data in the center, and 2.5% each at the upper and lower edges of the distribution.

In the case of a setpoint calculation, we are usually not interested in a plus or minus situation. Instead, since the purpose of the trip setpoint is to ensure a trip only when approaching a potentially unsafe condition (one direction only), CPS is interested in a distribution in which 95% of the cases lie below the trip point and 5% lie beyond the trip point, all at one end of the normal distribution.

This is called a normal one-sided distribution. The point at which 5% of the cases lie beyond the trip point corresponds to 1.645 standard deviations (i.e., 1.645 sigma).

4.1.3.4 In performing the setpoint or channel error calculations, it will be important that the probabilities associated with the various elements of the calculation be known and properly accounted for. Vendor and calibration data will generally be 2 or 3 sigma values. In determining channel accuracies and other errors, the data will generally be adjusted to a common 2 sigma basis. Subsequently, in setpoint calculations, etc., the probability limits will be adjusted from 2 sigma to the particular probability limit of interest. Scaling and the design requirements necessary for implementing process measurement will be evaluated and controlled in a device calculation.
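A minimal sketch (hypothetical helper names; the multipliers simply restate the sigma relationships above) of converting a published uncertainty to a common 2 sigma basis and then scaling to the one-sided 95% (1.645 sigma) limit:

```python
def to_two_sigma(value: float, stated_sigma: float) -> float:
    """Convert a published uncertainty (e.g., a 3 sigma vendor value) to a 2 sigma basis."""
    return value * 2.0 / stated_sigma

def to_one_sided_95(value_two_sigma: float) -> float:
    """Scale a 2 sigma value to the one-sided 95% probability limit (1.645 sigma)."""
    return value_two_sigma * 1.645 / 2.0

# Hypothetical vendor drift of 0.75% of span stated at 3 sigma:
drift_2s = to_two_sigma(0.75, 3.0)   # 0.50% of span at 2 sigma
print(to_one_sided_95(drift_2s))     # ~0.41% of span at 1.645 sigma
```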

4.2 Definition of Input Data and Requirements This section of this document provides detailed discussion of the input data and requirements that may apply to a given calculation, in terms of information on the characteristics of the instrument channel and the applicable design requirements. Additional guidance is provided in Appendix C, and in detailed Appendices, as indicated.


4.2.1 Defining Instrument Channel Characteristics, Overview

The instrument characteristics to be defined depend on the nature of the instrument channel. Generally, the following information should be included in the instrument channel design characteristics:

4.2.1.1 Instrument Definition
  • Manufacturer
  • Model
  • Range
  • Vendor Performance Specifications
  • Tag Number
  • Instrument Channel Arrangement

4.2.1.2 Process and Physical Interfaces
  • Environmental Conditions
  • Seismic Conditions
  • Process Conditions

4.2.1.3 External Interfaces
  • Calibration Methods
  • Calibration Tolerances
  • Installation Information
  • Surveillance Intervals
  • External Contributions (Process Measurement, Primary Element, Special Terms and Biases)

Each of these aspects is discussed in more detail in the following sections.

4.2.2 Defining Instrument Channel Characteristics

4.2.2.1 Instrument Definition

a. Manufacturer, Model, Tag Number, Instrument Arrangement The instrument tag number, manufacturer, and model number are determined from controlled design information or by examination of the actual instruments. Instrument channel arrangement refers to the schematic layout of the channel, including both the physical layout and the electrical connections.

The physical layout is important for devices that may be exposed to static head or local environmental conditions, so that the conditions can be properly accounted for in the calculations. The electrical connections are of importance because the actual manner in which the devices in a channel are connected affects the combination of error terms, particularly with regard to estimating calibration errors.



b. Instrument Range The instrument range for each device in the instrument channel includes at least four terms.

The first two are the upper range limit (URL) of the instrument and the calibrated span (SP) of the device. The last two are the range of the input signal to the device and the corresponding range of output signal produced in response to the input.

As an illustration, consider a typical channel consisting of a pressure transmitter connected to a trip unit and a signal conditioner leading to an indicator channel:

The maximum pressure range over which the transmitter is capable of operating is the URL. The process pressure range for which the transmitter is calibrated is the SP.

The output signal range of the transmitter is the electrical output (volts or milliamps) corresponding to the calibrated span.

The input to the trip unit and the signal conditioner would be the electrical input corresponding to the electrical output of the transmitter. In a similar fashion, the input and output ranges for every device in the instrument channel are defined by establishing the electrical signal that corresponds to the calibrated span of the transmitter.
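As an illustration only (the values and function name below are hypothetical), mapping a process value across the calibrated span to the transmitter's electrical output range can be sketched as:

```python
def transmitter_output_ma(process_value: float,
                          span_low: float, span_high: float,
                          out_low: float = 4.0, out_high: float = 20.0) -> float:
    """Map a process value within the calibrated span (SP) to the electrical output range."""
    fraction = (process_value - span_low) / (span_high - span_low)
    return out_low + fraction * (out_high - out_low)

# Hypothetical transmitter calibrated for a 0 to 1500 psig span with a 4-20 mA output:
print(transmitter_output_ma(750.0, 0.0, 1500.0))  # 12.0 mA at mid-span
```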

c. Vendor Performance Specifications Vendor performance specifications are the terms that identify how the individual devices in an instrument channel are expected to perform, in terms of accuracy, drift, and other errors. All error terms identified in manufacturers' performance data should be considered for potential applicability to the calculation of errors. In addition, the results of plant specific or generic Equipment Qualification (EQ) programs should be considered. When EQ program data applicable to a particular application indicates different performance characteristics than those published in open vendor data, the limiting or most conservative data will be used. If additional margin is required, then the differences should be resolved. In order to assure consistency in combining errors in an instrument channel, vendor performance specifications must be expressed as a percentage of Upper Range, Calibrated Span, or the electrical input or output ranges of the devices.


4.2.2.2 Process and Physical Interfaces

a. Environmental Conditions Up to four distinct sets of environmental conditions must be defined for a given instrument channel.
  • The first of these is the set of environmental conditions that applies at the time the instruments are calibrated. Under normal conditions, the only environmental condition of interest during calibration is the possible range of temperatures.

This is of interest because temperature changes between subsequent calibrations can introduce a temperature error, which becomes part of the apparent drift of the device.

  • The second distinct set of environmental conditions is the plant normal conditions. These are the combination of radiation, temperature, pressure and humidity that are expected to be present at the mounting locations of each of the devices during normal plant operation under conditions where the instrument is in use. These conditions are used to estimate normal errors, particularly in the spurious trip margin evaluation.
  • The third distinct set of environmental conditions to be identified is the trip environmental conditions. These are the combination of radiation, temperature, pressure and humidity expected to be present at the mounting location of each device at the point in time that the device is relied upon to perform its automatic trip function. These environmental conditions are generally those that may exist at the first trip of an automatic system, before the operator takes control of an event.
  • The fourth distinct set of environmental conditions that may be needed is the long-term post-accident environmental conditions. These conditions do not apply to most setpoints, but may apply for evaluations of channel error for post-accident monitoring and long-term core cooling (or similar) functions.



  • In all cases, it should be noted that the environmental conditions of importance are those seen by all the devices in the instrument channel.

This includes equipment that connects to the instrument, such as instrument lines. For example, instrument lines that pass through multiple areas (particularly the Drywell) will experience static head variations due to the temperature effects on the fluid in the lines (see Process Measurement Accuracy in Appendix C).

b. Seismic Conditions
  • Seismic conditions ("g" loads) apply to setpoints associated with events that may occur during or after an earthquake. Depending on the type of instrument (and the manufacturer's definition of how seismic loads affect the devices) two different seismic conditions may be of interest. These are the seismic loads that may occur prior to the time the instrument performs its function, and the seismic loads that may be present while the instrument is performing its function. In general, the seismic loading of interest is the Zero Period Acceleration at the point the instrument is mounted.
c. Process Conditions As discussed in Appendix C, three sets of process conditions may be of importance for most instrument channels.
  • The first of these is the calibration conditions that may be present at the time the device is calibrated. This is generally of interest for devices such as differential pressure transmitters, which are calibrated at zero static pressure, but then operated when the reactor is at normal operating pressure. The change in static pressure conditions must be known and accounted for in calibration and/or channel error calculations.
  • The second set of process conditions of interest is the set of worst case conditions that may be imposed on the instrument from within the process. Certain types of pressure transmitters, for example, are subject to overpressure errors if subjected to pressures above a specified value.



  • The third set of process conditions of interest is the conditions expected to be present when the instrument is performing its function. Conceivably, this can be more than one set of conditions. These process conditions determine the errors that may exist when the instruments are calibrated at different process conditions, and may also affect the magnitude of Process Measurement Accuracy and Primary Element Accuracy terms in the setpoint or channel error calculations.

4.2.2.3 External (outside world) Interfaces

a. Calibration Methods and Tolerances Calibration methods and tolerances are of importance because they have an effect on many aspects of the setpoint or channel error evaluations. They determine the channel calibration error, and may also be used to determine As-Found and As-Left tolerances. Calibration tolerances can be identified in a number of different ways. If the plant operating personnel have evaluated their calibration procedures and established an overall channel calibration error for each channel, then this information may be used directly in setpoint calculations. If not, the following information should be obtained so that the channel calibration error can be determined:
1. A list of the instruments used to calibrate the channel.
2. A calibration diagram, showing the locations in the instrument channel where calibration signals are input or measured, the type and accuracy of instruments used at each location, and values of calibration signals.
3. If known, accuracy of the NIST or equivalent Calibration standards used to calibrate devices such as pressure gauges used in the calibration.
4. If established, As-Left and As-Found tolerances used in calibration of each of the devices.
b. Installation Information Installation information of interest includes the installed instrument arrangement, including all connections to the process, instrument line routings, panel and rack locations and elevations, etc.

Elevations and instrument line routings are important for determining head corrections, Process Measurement Accuracy and Primary Element Accuracy, and other effects associated with instrument physical arrangement.



c. Surveillance Intervals The surveillance interval associated with each device in the instrument channel should be determined from the plant surveillance documents. In general, the surveillance interval assumed for the setpoint or channel error calculations should be the longest normal surveillance interval of any device in the channel (e.g., 18 months, due to the transmitter). In cases where the calibration interval can be delayed, the maximum interval should be used (e.g., CPS Technical Specifications allow for calibration intervals to be delayed for up to 125% of the required interval, or (18 months) x 1.25 = 22.5 months).

However, for devices in the instrument channel that are calibrated on a shorter interval, inaccuracies need not be extrapolated to the maximum interval.

Refer to Section 4.3.2 for more detail.
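
For illustration only (this sketch is not part of the standard), the maximum interval used for extrapolation follows directly from the nominal surveillance interval and the 25% grace allowance noted above; the function name and the 24-month case are illustrative assumptions.

    # Hypothetical sketch: maximum surveillance interval assumed for
    # error/drift extrapolation, applying the 25% grace allowance.
    def max_surveillance_interval(nominal_months, grace=1.25):
        return nominal_months * grace

    print(max_surveillance_interval(18.0))   # 22.5 months, matching the example above
    print(max_surveillance_interval(24.0))   # 30.0 months for a 24-month fuel cycle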

d. External Error Contributions The final step in determining instrument channel characteristics is to determine whether the instrument channel of interest may be subject to any additional error contributions beyond those normally associated with the instruments themselves. If any of these effects may apply to a particular channel, data necessary to define the effect must be obtained.

Potential External Error Contributions may include:

  • Process Measurement Accuracy (PMA)
  • Primary Element Accuracy (PEA)
  • Indicator Reading Error (IRE)
  • Insulation Resistance Accuracy (IRA)
  • Unique error terms

4.2.3 Instrument Channel Design Requirements Design requirements applicable to the instrument channel should be defined, including, as applicable:

4.2.3.1 Regulatory Requirements

  • Technical Specifications
  • Safety Analysis Reports
  • NRC Safety Evaluation Reports
  • Regulatory Guides 1.89, 1.97 and 1.105

4.2.3.2 Functional Requirements

  • Instrument function
  • Analytical and Safety Limits
  • Operational Limits
  • Function Times
  • Requirements imposed by plant procedures, Emergency Operating Procedures (EOPs), etc.
  • For indicator or computer channels, allowable channel error (CE)

Each of these aspects is discussed below.

4.2.4 Defining Instrument Channel Design Requirements
4.2.4.1 Regulatory Requirements

a. Technical Specifications Technical Specifications requirements are of importance for setpoints and instrument channels covered within the Technical Specifications.

Requirements of importance are Surveillance intervals, Allowable Values and Nominal Trip Setpoints specified in the Technical Specifications. Existing values in the Technical Specifications should be reviewed, even for new setpoint calculations, because it is usually desirable to preserve the existing Technical Specifications values if they can be supported by the setpoint calculations. Thus, the Technical Specifications values (particularly the Allowable Value and Nominal Trip Setpoint) are used in evaluating the acceptability of calculation results, and may also be used in the evaluation of As-Found and As-Left Tolerances and determination of Required Limits (if used).

b. Safety Analysis Reports, NRC SERs, 10CFR50, Regulatory Guides While the Technical Specifications are the key documents to examine for regulatory commitments or requirements, the balance of the plant licensing documentation may contain commitments or agreements reached with the NRC, as well as system specific requirements that may affect setpoint calculations.

Normally, all such commitments or requirements should also be reflected in the applicable plant specifications and documents. However, the licensing documentation should be considered in assuring commitments are known.


4.2.4.2 Functional Requirements

a. Instrument Function Instrument functional requirements are normally contained in system Design Specifications, Design Specification Data Sheets, Instrument Data Sheets and similar documents. The functional requirements to be determined should not only include the purpose of the setpoint, but also the plant operating conditions or operating modes under which the trip is required to be operable, and identification of the most severe conditions under which the trip should be avoided.

The plant operating conditions under which a trip must be operable should be correlated to the licensing basis events so that the questions of trip environment, absence or presence of seismic loads, etc., can be answered.

b. Analytical and Safety Limits
  • The Licensing Safety Limit (LSL) is the value of a safety parameter that must not be violated in order to assure plant safety. In the case of a safety situation for which there is an accident or transient analysis, the safety limit is the limit that the analysis is intended to support. For situations where there is no transient analysis, such as the pressure limit for a section of pipe, the Safety Limit or Nominal Process Limit (NPL) would be the limit assumed in design (the Design Pressure and Temperature of the pipe, for example).

  • The Analytical Limit (AL) is a slightly different concept. The Analytical Limit is the value at which the trip is assumed to occur, as part of the analyses that prove the Safety Limit is satisfied. For the example of pipe pressure, if there is a stress analysis that assumes a particular event is terminated, by instrument action, at or before a certain pressure is reached, then the pressure at which the instrument is assumed to react to terminate the event is the Analytical Limit for that event, even if it is different than the Design Pressure of the piping.

  • The section of this document dealing with the actual setpoint calculations gives more specific guidance on how to select the Analytical Limit to be used.



c. Operational Limits (OL)

Operational Limits are the values of the measured parameter which may occur during plant operation, and at which it would be undesirable to have a trip occur.

Usually, there is one limiting Operational Limit for a given setpoint. In certain cases, such as High Drywell Pressure, there may be no credible operating condition, short of the design basis accident (which requires a trip). In such situations, there would be no Operational Limit.

d. Function Times
  • Function times should be identified for every instrument channel requiring either a setpoint calculation or channel error calculation. The function time is important because it is used to determine the worst rational environmental conditions for use in determining instrument error.

Caution should be exercised in determining function times. This is because the function time selected for a particular case can have a very large impact on instrument error calculations, and this in turn can have a significant impact on the setpoint, and the risk of spurious trip. That is, over-conservative function times lead to over-conservative setpoints and higher spurious trip risk. Since spurious trips can themselves lead to safety system challenges, the ultimate result of over-conservative function times can be a situation that is counterproductive to overall safety.


In determining the function time for a particular setpoint, attention should be given to the conditions under which the operator depends most on the automatic actions triggered by the setpoint.

For example, consider a reactor water level signal intended to start the ECCS system in the event of a Loss of Coolant Accident (LOCA). The operator depends most on the automatic function during the first 10 minutes of the event, before reactor power is significantly reduced and before the operator has had an opportunity to take control of the situation. During this early period of a LOCA, the core is not yet uncovered and therefore no core damage or major radioactive release would be expected. The operator could reset the water level trip devices after the event, but since the reactor would then be shut down, and rapidly changing water levels would no longer be credible, the need for trip accuracy would be considerably reduced. Thus, it is appropriate to base the trip setpoint on the conditions existing in the first 10 minutes, without assuming core damage (it should be noted, however, that environmental conditions used for Equipment Qualification might indicate otherwise, since they assume failures).

Note: All setpoints, controls or indications need only be evaluated to the worst environmental conditions present at the time their function is required.



e. Requirements Imposed by Plant Procedures (EOPs, etc.)

As defined in Appendix L, plant operating procedures, particularly Emergency Operating Procedures, should be considered in defining the functions of instruments.

This is particularly important in connection with the topic of instrument function times, since the Plant Procedures define the extent to which the operator may depend on the instrumentation, and the events for which this dependence is most important. Engineering judgment must be exercised in evaluating the effect of operating procedures. For example, while a particular procedure may require the operator to reset a particular trip device, the reset requirement does not necessarily imply that the instrument must react as accurately in a subsequent trip. Thus, the first trip, prior to the operator taking control, may still be the appropriate basis for the setpoint calculation.

Engineering judgment and a good understanding of the design bases of the plant must be applied to identifying the impact of Plant Procedures on the functional requirements applicable to the instrumentation.

f. Allowable Channel Error (CE)

As defined in Section 2.2, Channel Error Indication Uncertainty, for certain types of channels, particularly indicator channels and channels which supply signals to computers and data collection systems, there may be requirements on the maximum allowable error in the channel. Such requirements may be imposed by the purpose of the indicating functions (such as a Plant procedure requirement), or by the use that is made of the data. The manner in which the instrument data is used should be evaluated to determine if there are any inherent limits on acceptable channel error, independent of the setpoint calculation.

4.2.5 Data Collection All data collected should be referenced to its source (document number, title, and revision level) and recorded in the Input, Output, or Reference Section of the calculation, so that the basis for the setpoint or channel error calculations will be traceable to the proper plant documents.


4.3 Determining Individual Device Error Terms
4.3.1 Determining Individual Device Accuracies As defined in Section 2.2, the overall accuracy error for any individual device is developed by combining all the individual error contributions identified by vendor performance specifications or device qualification tests.

As a means of assuring consideration of all terms, it is useful to view the accuracy error of the device in terms of the factors that might cause the device to exhibit errors.

That is, what external or internal effects might affect the performance of the device? The answer to this question is straightforward: Device accuracy may be influenced by the inherent precision of the internal components, plus errors caused by each and every external (environmental) influence on the device. Specifically, the following potential causes of accuracy error should be considered for any given device:

a. Vendor Accuracy (VA)
b. Accuracy Temperature Effect (ATE)
c. Overpressure Effect (OPE)
d. Static Pressure Effect (SPE)
e. Seismic Effect (SE)
f. Radiation Effect (RE)
g. Humidity Effect (HE)
h. Power Supply Effect (PSE)
i. RFI/EMI Effect (REE)

The identification of these potential effects is not intended to indicate that they apply to all devices. First of all, some suppliers of instrumentation provide a single value of accuracy error, which may already include all or many of the external environmental effects listed above (within some bounding environment specified by the vendor).

Guidance and information for some common devices is provided in Appendices A and C to this document; additionally, Appendix L, Graded Approach to Uncertainty Analysis, provides guidance on the rigor with which elements of device uncertainty should be considered during a calculation.

Following identification of potential effects, each of the error terms should be examined to determine if it may be treated as a random term, or whether dependencies may exist which would include systematic or bias error as described in Appendix C, Sections C.1.1 and C.1.2.


Once all the accuracy error contributions for a particular instrument are identified, they should be combined using the SRSS method to determine total device accuracy. In performing the SRSS combination, the individual level of confidence of each term (sigma level) should be accounted for such that the resultant device accuracy error is a 2 sigma value. Refer to Section C.4 for cases where instruments are calibrated together as a rack.

Ai = +/- N[(VAi/n)^2 + (ATEi/n)^2 + (OPEi/n)^2 + (SPEi/n)^2 + (SEi/n)^2 + (REi/n)^2 + (HEi/n)^2 + (PSEi/n)^2 + (REEi/n)^2]^(1/2) +/- any bias term associated with the above random errors   (2σ)

Where the values of 'n' are the sigma values associated with each individual effect (i.e., 1, 2, 3) and N is 2 for a 2 sigma value of Ai.

Generally, two accuracy terms are required for setpoint calculations: accuracy under normal plant operating conditions (AiN) and accuracy under the conditions for which the circuit will be required to trip (Ai(accident/seismic)).

The Setpoint Program Coordinator can provide sample calculations.
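
As a minimal illustration of the combination above (a sketch only, using hypothetical placeholder values rather than CPS data), each random error term is paired with the sigma level of its published value, and the result is scaled to N = 2 so that Ai is reported as a 2 sigma value:

    import math

    # Sketch of the device accuracy SRSS combination; bias terms, if any,
    # would be carried separately as described in the text above.
    def device_accuracy(random_terms, n_out=2.0):
        """random_terms: iterable of (value, sigma_level) for VA, ATE, OPE, ..."""
        return n_out * math.sqrt(sum((value / n) ** 2 for value, n in random_terms))

    # Example: vendor accuracy quoted at 2 sigma; temperature and radiation
    # effects quoted at 3 sigma (all in percent of calibrated span).
    print(round(device_accuracy([(0.25, 2.0), (0.50, 3.0), (0.30, 3.0)]), 3))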

4.3.2 Determining Individual Device Drift Drift for individual devices is determined in a manner similar to that for accuracy.

Vendor Drift (VD): Refer to Section 2.2 for definition.

The Vendor Drift term should be adjusted to the surveillance interval for that device. In accordance with References 5.1 and 5.3 this adjustment is made by multiplying the value of VD by the square root of the ratio of the surveillance interval (M) to the drift interval associated with the vendor data.

Example (six-month drift interval specification):

VDM = (M/6)^(1/2) x VD(6-month)

Refer to Appendix I, Standard Assumptions, for the sigma value.

Further information on drift for specific types of commonly used instruments is provided in Appendix A.
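
The square-root-of-time adjustment above can be sketched as follows (illustration only; the drift figure and intervals are hypothetical, not vendor data):

    import math

    # Scale a vendor drift specification to the assumed surveillance interval.
    def extrapolated_drift(vendor_drift, vendor_interval_months, surveillance_months):
        return vendor_drift * math.sqrt(surveillance_months / vendor_interval_months)

    # e.g., 0.2% of span per 6 months extrapolated to a 30-month maximum interval
    print(round(extrapolated_drift(0.2, 6.0, 30.0), 3))   # about 0.447% of span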


Several cautions should be noted concerning drift calculations, specifically:

The functional life of the device must exceed the assumed surveillance interval. This is because the extrapolation of drift to longer surveillance intervals fundamentally assumes the instrument is qualified for, and expected to perform normally for, the intended length of service. The drift allowance is intended to account for natural long-term variations in the performance of a basically 'healthy' instrument, not instrument failures.

Drift calculations should be consistent with observed performance. Surveillance testing (As-Found and As-Left data) gives an indication of apparent drift. The surveillance test data is not pure drift, since it is masked by accuracy, calibration errors, and other contributors as described in Section C.3.4. However, calculation models exist that permit evaluating drift performance. Conversely, good apparent performance in surveillance testing may be used to justify improvements in assumed drift values used in setpoint or channel error calculations. This is a very important consideration, since the setpoint calculation methods assume drift is a random variable, such that drift for longer intervals is determined using the SRSS method. The USNRC may require that drift assumptions be validated based on field data (the use of field data to validate drift assumptions is discussed in Appendices A and C).

4.3.3 Determining Device Calibration Tolerances Four key considerations have been introduced in other sections of these guidelines concerning calibration tolerances. These are:

a. As Found Tolerance (AFTi): Refer to Section 2.2 for definition.
b. As-Left Tolerance (ALTi): Refer to Section 2.2 for definition.
c. The Calibration Tool Error (Ci): Refer to Section 2.2 for definition and Appendix H for guidance.
d. The Calibration Standard Error (CSTD): Refer to Section 2.2 for definition. Per Standard Assumptions in Appendix I, Section I.11, this value is considered negligible.


The first two of these terms are arbitrary. That is, AFT is typically calculated as shown below; however, it can be rounded in a conservative manner to force a more limiting value in order to preserve an existing setpoint (see Section 4.4.5 for Loop AFT). It is up to the personnel establishing calibration and surveillance procedures to establish the ALT values. Once established, they should be used in the setpoint and channel error calculations.

Generally, ALT is set equal to VA; however, ALT will be considered a 2 sigma value. In the absence of other guidance, this methodology recommends that the terms be established as follows:

AFT+/- = +/- (N)[(ALTi/n)^2 + (Ci/n)^2 + (Di/n)^2]^(1/2)   (2σ)

ALT+/- = +/- VAi   (2σ)

Where N represents the number of standard deviations to which the value is evaluated (normally 2 standard deviations) and n represents the sigma value for each device.

Refer to Section 2.2 for definitions and Sections C.3.16 and C.3.17 for additional guidance.
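
A minimal sketch of the device As-Found tolerance combination above is shown below (illustration only; the (value, sigma level) pairs are hypothetical):

    import math

    # AFT for one device from its As-Left tolerance, M&TE error, and drift.
    def as_found_tolerance(alt, cal_tool_error, drift, n_out=2.0):
        terms = (alt, cal_tool_error, drift)
        return n_out * math.sqrt(sum((value / n) ** 2 for value, n in terms))

    # ALT taken equal to VA (2 sigma), M&TE error treated as a 3 sigma value,
    # and drift at 2 sigma, all in percent of calibrated span.
    print(round(as_found_tolerance((0.25, 2.0), (0.10, 3.0), (0.35, 2.0)), 3))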

Typically, ALT was established in calibration procedures equal to VA. However, per Sections 2.2.5 and 4.2.2.3, the ALT established in plant procedures should be used. If, in order to preserve a setpoint, a smaller tolerance is needed, then plant personnel should be contacted for concurrence prior to use in the calculation. If the ALT established in calibration procedures is smaller than VA, then the calculation should use VA, so that plant personnel could relax the tolerance, if desired.

NOTE: The AFT and ALT values should be converted to the engineering units required by the calibration procedure and rounded to the precision of the M&TE equipment used. In cases where values are established for indication, the values should consider the readability of the device and round to the next M minor division.

These guidelines have been established because they permit surveillance procedure error bands, which are consistent with the types of errors that may be present during calibration.


4.4 Determining Loop/Channel Values
4.4.1 Determining Loop Accuracy (AL)

Loop Accuracy must be determined in such a way as to be compatible with the various setpoint and channel error calculations. Loop Accuracy shall be determined to a level of confidence corresponding to 2 Standard Deviations (2σ).

In order to determine Loop Accuracy, the accuracy of all devices in the loop must be determined (with a known or assumed sigma value associated with each), adjusted to a common sigma value (2), and then combined to produce the value of Loop Accuracy. All bias effects related to any of the devices shall be separated from the random portion of the accuracy data and will be dealt with separately, such that the individual device accuracy values may be assumed to be approximately random, independent, and normally distributed.

All individual device errors shall be determined on the basis of the environmental conditions (normal, trip, post accident, etc.) applicable to the event (and function time) for which the Loop Accuracy applies.

Once the individual device accuracy errors have been identified and characterized to a common sigma value (2), they are combined by the SRSS method to find the Loop Accuracy.

AL = +/- (A1^2 + A2^2 + ... + Ai^2)^(1/2) +/- any bias terms   (2σ)

Normally, two distinct values of loop accuracy must be determined using the equation above. These are the normal loop accuracy (AL(normal)) and the accuracy under accident or seismic conditions or both (AL(accident/seismic)).

Two important cautions must be noted concerning Loop Accuracy. First, the devices included in Loop Accuracy must be consistent with the signal path of interest (i.e., every device from the signal source to the point at which the setpoint trip is produced or the channel output utilized).

Secondly, the term 'devices' is not intended to restrict the calculation to hardware, or to include hardware that is treated uniquely elsewhere in the setpoint calculations.

'Devices' may include software.
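
As an illustration of the loop-level combination above (a sketch with hypothetical values, not a calculation of record), device accuracies already adjusted to a common 2 sigma basis are combined by SRSS while bias effects are carried separately:

    import math

    # Combine device accuracies (already 2 sigma) into Loop Accuracy; report
    # the random part and the algebraic sum of biases separately.
    def loop_accuracy(device_accuracies_2sigma, bias_terms=()):
        random_part = math.sqrt(sum(a ** 2 for a in device_accuracies_2sigma))
        return random_part, sum(bias_terms)

    al_random, al_bias = loop_accuracy([0.65, 0.40], bias_terms=[0.10])
    print(round(al_random, 3), al_bias)   # e.g., transmitter plus trip unit, % span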


4.4.1.1 The following devices are typically included in Loop Accuracy:

(1) Transmitters
(2) Trip Units
(3) Signal Conditioners/Multiplexers/Network Resistors
(4) Software errors associated with signal processing
(5) Anything which introduces a random, non-time dependent error in the signal from source to point of use, unless handled elsewhere in setpoint calculations

4.4.1.2 The following are exceptions, which are normally not included in determination of loop accuracy:

(1) Process measurement errors (PMA) and the errors of the Primary Element (PEA) are treated separately.

(2) Errors due to Insulation Degradation (IRA) are treated separately.

4.4.2 Determining Loop As-Left Calibration Tolerances (ALTL)

Refer to Section 2.2 for definition and Section 4.3.3 for component As-Left Tolerance (ALTi).

Loop As-Left Tolerance (ALTL) is calculated by combining the individual component As-Left tolerances (ALTi). Once the calculated Loop As-Left Tolerance has been determined by the SRSS of component As-Left Tolerances, this value should be compared to existing calibration procedure Loop As-Left Tolerances. If feasible, it is desired to retain existing procedural Loop As-Left Tolerances. Selection and use of existing procedural As-Left Tolerances is desired since these values already consider readability of test equipment.

If the procedural Loop As-Left tolerance is retained, this value shall be used in the development of CL and AFTL and listed in the calculation results summary. Likewise, if the calculated loop As-Left tolerance is selected, this value shall be used in the development of CL and AFTL and will be listed in the calculation results summary. If selecting the calculated Loop As-Left Tolerance, consideration should be given to the readability of the test equipment. The selected As-Left tolerance shall be considered a 2 sigma value.


If it is desired to implement an ALTL less than the existing procedural ALTL, I&C Maintenance should be contacted for concurrence.

NOTE: The ALTL value shall be converted to the engineering units required by the calibration procedure and rounded to the precision of the M&TE equipment used. In cases where values are established for indication, the values should consider the readability of the device and round to the next M minor division.

The formula is shown as follows:

ALTL = +/-(N) [(ALT 1/n)2 +(ALT2 /n)2 + . .+(ALTi/n)2 J 1 /2 (20y)

Where N represents the number of standard deviations with which the value is evaluated to (normally 2 standard deviations) and n represents the sigma value for each device.

4.4.3 Loop Calibration Error (CL)

Loop Calibration Errors may be established by the organization responsible for calibration. Generally, Loop Calibration Error shall be calculated at a 2 sigma confidence level as shown in Section 4.4.3.1.

There are three basic components of Loop Calibration error, see Section 2.2 for definitions. These are the following:

a. ALT+/-
b. Ci
c. CSTD

It is important to note that Ci and CSTD are controlled by 100% testing per procedure CPS 1512.01, Reference 5.24. For these reasons it is assumed that the Ci and CSTD values represent 3 sigma values.


4.4.3.1 The process of determining Loop Calibration Error is performed in two steps. The first step is to review the loop diagram and calibration procedures to determine what calibration tools are used and how many times each is used in establishing the calibration of the loop. This is a function of the plant specific calibration procedures.

Typically, the calibration of a particular loop containing a transmitter and trip unit involves the use of only one pressure source and the alarm indication at the ATM. Once the device usage is determined, the loop calibration tool error is determined by combining the errors by SRSS. In the above example, there would be 4 terms in the SRSS calculation (ALTi for each instrument, and a Ci and CSTD value for the pressure source gauge).

CL = +/- N[Σ(ALTi/n)^2 + Σ(Ci/n)^2 + Σ(CSTD/n)^2]^(1/2)   (2σ)

Where N represents the number of standard deviations to which the value is evaluated (normally 2 standard deviations) and n represents the sigma value for each device.

Further discussion on M&TE is provided in Appendix H.
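
The two-step process above can be sketched as follows (illustration only; the entries are hypothetical (value, sigma level) pairs, with one entry per use of each calibration tool):

    import math

    # Loop Calibration Error from device As-Left tolerances plus M&TE and
    # calibration standard errors, combined by SRSS and reported at 2 sigma.
    def loop_calibration_error(alt_terms, tool_terms, std_terms, n_out=2.0):
        all_terms = list(alt_terms) + list(tool_terms) + list(std_terms)
        return n_out * math.sqrt(sum((value / n) ** 2 for value, n in all_terms))

    # Four-term example from the text: transmitter and trip unit As-Left
    # tolerances (2 sigma) plus a Ci and CSTD for the pressure source (3 sigma).
    print(round(loop_calibration_error([(0.25, 2.0), (0.20, 2.0)],
                                       [(0.10, 3.0)], [(0.02, 3.0)]), 3))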

4.4.4 Determining Loop Drift (DL)

Loop Drift must be determined in such a way as to be compatible with the various setpoint and channel error calculations.

In order to determine Loop Drift, the drift of all devices in the loop must be determined (with a known or assumed sigma value associated with each) and then combined to produce the value of Loop Drift. Any bias effects related to any of the devices shall be separated from the drift data and dealt with separately, such that the individual device drift values may be assumed to be approximately random, independent, and normally distributed.

All individual device drifts must be determined on the basis of the environmental conditions applicable to the initial and subsequent surveillance tests and device calibrations (generally, temperature variations between subsequent calibrations).

DL = +/- N[(D1/n)^2 + (D2/n)^2 + ... + (Di/n)^2]^(1/2) +/- any bias terms   (2σ)

Where N represents the number of standard deviations to which the value is evaluated (normally 2 standard deviations) and n represents the sigma value for each device.


Two important cautions must be noted concerning Loop Drift. First, the devices included in Loop Drift must be consistent with the signal path of interest (i.e., every device from the signal source to the point at which the setpoint trip is produced or the channel output utilized).

Secondly, the term 'devices' is not intended to restrict the calculation to hardware, or to include hardware that is treated uniquely elsewhere in the setpoint calculations.

4.4.4.1 The following devices are typically included in Loop Drift:

(1) Transmitters
(2) Trip Units
(3) Signal Conditioners/Multiplexers/Network resistors (if these devices exhibit drift)
(4) Anything which introduces a time dependent change in the signal from source to point of use

4.4.5 Determining Loop As-Found Calibration Tolerances (AFTL)

Key considerations have been introduced in other sections of these guidelines concerning individual loop errors used to calculate AFTL. These are:

1. Loop Calibration Error (CL): Defined in Section 2.2 and calculated in Section 4.4.3.
2. Loop Drift Error (DL): Defined in Section 2.2 and calculated in Section 4.4.4.

To calculate AFTL, loop calibration equipment and drift tolerances should be combined using the SRSS methodology.

AFTL is calculated as follows:

AFTL = +/- (N)[(CL/n)^2 + (DL/n)^2]^(1/2)   (2σ)

NOTE: The AFTL value shall be converted to the engineering units required by the calibration procedure and rounded to the precision of the M&TE equipment used. In cases where values are established for indication, the values should consider the readability of the device and round to the next M minor division.

Where N represents the number of standard deviations to which the value is evaluated (normally 2 standard deviations) and n represents the sigma value for each device.


This provides assurance that the loop is functional and the AV is protected.

These guidelines have been established because they permit surveillance procedure error bands, which are consistent with the types of errors that may be present during calibration.

4.4.6 Determining Process Measurement Accuracy and Primary Element Accuracy (PMA/PEA)

Per definition in Section 2.2 and discussion in Appendix C, Process Measurement Accuracy (PMA) and Primary Element Accuracy (PEA) are generalized terms used in channel error calculations and setpoint calculations to account for measurement errors which lie outside the normal calibration bounds of the channel. For example, consider the case of a venturi flow meter connected to a differential pressure transmitter and trip unit. The normal surveillance testing of the instrument channel would concern itself with the transmitter and trip unit. The flow meter might have been calibrated by some sort of test, but it is not part of the instrument channel. On the other hand, it very definitely is part of the measurement process.

The use of PMA and PEA in the channel evaluation is a matter of engineering judgment. These two categories are defined as a means of reminding the engineer to account for everything that affects the performance of the instrument loop. Since both PMA and PEA are treated identically in the setpoint and channel error calculations, it is not important which effects are assigned to each value, as long as the effects are assigned in such a way that there is a proper separation/combination of independent and dependent effects. This point is best illustrated by a few examples.

Keep the definitions (Section 2.2) of the terms in mind:

The following paragraphs illustrate various instrument systems and application of these two definitions.


4.4.6.1 Flow Measurement As discussed in Appendix E, Flow Measurement Uncertainty Effects, consider a flow measurement system consisting of a flow meter, such as a venturi, instrument lines connecting the flow meter to a differential pressure transmitter, and the transmitter itself. The device in contact with the process is the flow meter itself. The flow meter is therefore the Primary Element. There is some fundamental error or uncertainty in the differential pressure at the instrument line connections on the meter, due to the design of the flow meter, as-built dimensions, etc. This error may consist of both a bias term and a random component. These random and bias errors are both components of Primary Element Accuracy (PEA).

The connection between the flow meter (primary element) and the transmitter (sensor) is made using instrument lines. The density of the fluid in these lines will vary with ambient temperatures in the spaces through which these lines are routed. These density changes will affect the pressure transmitted from the primary element to the sensor. This effect can be considered negligible if the sensing lines of a differential pressure transmitter are routed together and can be shown to be affected by the same ambient temperature. These errors inherent in the use of the instrument lines are Process Measurement Accuracy.

4.4.6.2 Water Level Measurement Refer to Appendix F, Level Measurement Temperature Effects, and consider a water level measurement system, which, particularly in a BWR, may consist of a condensing chamber, sensing lines (variable and reference leg), and differential pressure transmitters. In a manner similar to that in paragraph 4.4.6.1, we would normally classify the elevation uncertainty associated with the condensing chamber as PEA. The errors due to ambient temperature fluctuations, and their effects on instrument line fluid density, would be considered to be PMA.

4.4.6.3 Temperature Measurement A typical temperature measurement system may consist of a temperature detector, such as a thermocouple or resistance temperature detector, and a temperature switch. In this case, the temperature detector could be treated as a sensor, much in the same fashion as a pressure detector.

However, the temperature detector is generally not calibrated with the channel. For this reason, the errors of the temperature detector are usually treated as PEA.

There is no PMA in this case.


4.4.6.4 General Guidance In general, PMA and PEA are shown in the calculations as random, independent variables. Therefore, random effects assigned to PEA and PMA should be independent of each other. However, if they are determined to be a bias, then they will be dealt with separately. The boundaries between PMA and PEA are a matter of convenience and judgment. The most important factor is that all potential error sources arising anywhere in the process, from the true variable desired to be measured all the way to the sensor in the instrument channel, must be considered in error calculations, as PMA, PEA, or as some other error term.

4.4.7 Determining Other Error Terms The fundamental objective of the calculation of setpoints or channel errors is to incorporate all reasonably expected error sources, as well as any that are part of the licensing commitments applicable to the plant. As part of the design or calculation process, the responsible engineer should consider whether additional error terms should be considered. The following paragraphs discuss several potential error sources. It is up to the responsible engineer to determine whether these are applicable, and, if applicable, to define the error values.

4.4.7.1 Indicator Reading Error (IRE)

As defined in Section 2.2 and further discussed in Appendix C, Section C.3.13, if a particular channel error calculation is intended to define the potential errors in data which is manually recorded based on reading indicators or gauges, the error in reading the scale on the indicator must be considered. This error must be established on a case-by-case basis. In general, it is a question of the scale divisions, scale curvature, etc. (see Section 4.3.3 for discussion on AFT and ALT).

4.4.7.2 Resistors, Multiplexers, etc.

The signal processing hardware is not the only source of significant error in some types of instrument channels. Channels that supply signals to computer inputs, recorders, etc., are sometimes set up to measure the voltage drop across a resistor in the circuit. The resistor accuracy (1%, for example) may introduce a significant error into the voltage measurement. Similar signal transmission devices, such as multiplexers, may introduce errors, which must be considered.


4.4.7.3 Software Errors With the increased use of instrument channels which provide data to microprocessors and computers, where that data is manipulated and then used to trigger some action or provide data, the software used becomes important. Software that influences the use of data introduces errors, which should be considered for applicability.

4.4.7.4 Degradation of Insulation Resistance Accuracy Error (IRA)

References 5.22, 5.23, and 5.24 may provide a bounding IRA value to use if the device is identified by these calculations. However, if a more precise IRA value for the identified devices is needed, or a non-identified device requires an IRA value to be established, then the guidance provided in Appendix D shall be used. Appendix D addresses the effect of Insulation Resistance (IR) on uncertainty under certain accident conditions, particularly steam environments, where the insulation resistance of cables, terminal blocks and other devices may be reduced, producing larger than expected leakage currents, which degrade signals. This error (IRA) is defined in Section 2.2. The applicability of IRA depends on both the accident environment and the time of function. Many reactor protection setpoints, which are intended to prevent accident consequences, are not subject to IRA because of timing considerations. IRA, on the other hand, may significantly affect certain post-accident monitoring functions. These types of errors are generally determined as part of equipment qualification programs.

4.4.8 Channel Error Calculation As defined in Section 2.2, Channel Error Indication Uncertainty, Channel Error is determined when there are requirements for channel uncertainty, independent of a Safety Related Setpoint. Typically, there are three situations where Channel Error is of interest. These are (1) Non-Safety Related Setpoints, (2) when the channel serves as an indicator/recorder/control function and where the accuracy must be known (RG 1.97 indicators, information for operators, etc.), and (3) channels which supply information to data collection systems, computer systems, etc.


The channel error is determined by:

CE = +/- (1.645/N)(SRSS OF RANDOM TERMS) +/- BIAS TERMS

Typically calculated and shown as below:

CU = +/- N[PMA^2 + PEA^2 + AL^2 + (CL/n)^2 + (DL/n)^2]^(1/2) +/- B   (2σ)

Where N represents the number of standard deviations to which the value is evaluated (normally 2 standard deviations) and n represents the sigma value for each device.

And CE = +/- (1.645/N)[CU^2 + IRE^2]^(1/2) +/- Bias Terms

Note: The (1.645/N) adjustment to channel error is applicable to non-safety setpoints and required indicator readings that have a limit approached in one direction (single-sided interest).
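
As a rough illustration of the general statement CE = +/- (1.645/N)(SRSS of random terms) +/- bias terms (a sketch only, with hypothetical values; the per-device n normalization shown in the CU equation is omitted here for brevity):

    import math

    def channel_error(random_terms_2sigma, bias_terms=(), single_sided=False, n_out=2.0):
        srss = math.sqrt(sum(t ** 2 for t in random_terms_2sigma))
        if single_sided:
            srss *= 1.645 / n_out   # single-sided adjustment, where applicable
        return srss + sum(bias_terms)

    # PMA, PEA, loop accuracy, loop calibration error, loop drift, IRE (% span)
    print(round(channel_error([0.50, 0.25, 0.75, 0.40, 0.60, 0.25],
                              bias_terms=[0.10], single_sided=True), 3))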

4.4.8.1 The RANDOM TERMS that should be considered include the following:

(1) Loop Accuracy (AL) under the worst environmental conditions applicable to the channel function

(2) Loop Calibration Error (CL)

(3) Loop Drift (DL)

(4) Process Measurement Accuracy (PMA)

(5) Primary Element Accuracy (PEA)

(6) Indicator Reading Error (IRE) if applicable.

(7) Any other random terms expected to be present for the indicator and/or computer channel function (such as software errors)

Refer to definitions in Section 2.2.

4.4.8.2 The BIAS TERMS that should be considered include:

(1) Any bias associated with Process Measurement or the Primary Element (PMA/PEA)

(2) The bias component of Insulation Resistance Accuracy Error (IRA)

(3) The bias portion of readout errors (IRE).

(4) The bias portion of any other unique terms known to exist (including drift and software bias).


4.4.9 Setpoints with no Analytical Limit or Allowable Value In some cases it is necessary to determine setpoints when there are no Tech. Spec. Allowable Values or Analytical Limits. As discussed in Section 2.2.47, the NPL is a limit, high or low, beyond which the normal process parameter should not vary.

NTSP(INC) = NPL - CE
NTSP(DEC) = NPL + CE

Note: A (1.645/N) adjustment should be made when calculating CE for non-safety setpoints and required indicator readings (single-sided interest).

4.4.10 Determining Analytical Limits (AL)

Analytical Limits are used in calculating the Nominal Trip Setpoint and Allowable Value (if required). Methods of calculating Analytical Limits are not within the scope of these guidelines. However, the process by which the designer determines an Analytical Limit is of interest.

Per Section 2.2, the Analytical Limit is "the value of the sensed process variable established as part of the safety analysis, prior to or at the point which a desired action is to be initiated to prevent the safety process variable from reaching the associated licensing safety limit".

NEDC-31336, Reference 5.1, includes a discussion of the source of the Analytical Limits applicable to the set of key setpoints for which direct credit is taken in the Safety Analysis Report. For setpoints not discussed in Reference 5.1, the following guidelines are provided for determining Analytical Limits:

a. The first step for determination of an Analytical Limit is to determine the purpose of the particular setpoint. That is, what event is the setpoint intended to mitigate, prevent or initiate?
b. Once the event of interest is identified, determine what assumptions have been made in the system design or analysis regarding the setpoint. These assumptions may be explicit in the design or implicit.
c. The value of the sensed process variable, which corresponds to the design assumptions for that event is the Analytical Limit.


The key question is what value of the sensed variable corresponds to the design assumptions. This correspondence may be indirect. For example, a setpoint intended to isolate a line on high flow would have a design basis in terms of flow rate, whereas the Analytical Limit and setpoint calculations would be done in terms of the differential pressure across the flow measurement device, corresponding to the flow rate at which the isolation is assumed to occur. As another example, consider a setpoint intended to limit pressurization of a pipe. In this case, the Analytical Limit may be the design pressure of the pipe, but not always. If the stress analysis of the pipe assumes some peak pressure in the pipe different from the design pressure, the assumed peak pressure corresponding to the event for which the setpoint is intended, less any transient overshoot, would be the Analytical Limit. When in doubt, the organization that provided the design bases and/or analyses of the system or component should be consulted to ensure proper identification of the Analytical Limit. Trip setpoints associated with non-safety related functions are typically based on the process limit, high or low, beyond which the normal process parameter should not vary.

This limit is defined as the Normal Process Limit (NPL).

4.4.11 Allowable Value Calculation (AV)

If the setpoint in question is contained in Technical Specifications and is required to have an Allowable Value, the Allowable Value (AV) should be calculated using either equation below, depending on the direction of process variable change when approaching the Analytical Limit. The first equation is for process variables that increase to trip, and the second equation is for process variables that decrease to trip.

AV(INC) = AL - (1.645/N)(SRSS OF RANDOM TERMS) - BIAS TERMS
AV(DEC) = AL + (1.645/N)(SRSS OF RANDOM TERMS) + BIAS TERMS

Or, as further described by Sections 4.4.11.1 and 4.4.11.2:

AV(INC) = AL - ((1.645/N)(PMA^2 + PEA^2 + AL^2)^(1/2) + B)
AV(DEC) = AL + ((1.645/N)(PMA^2 + PEA^2 + AL^2)^(1/2) + B)

(In the expanded expressions, the leading AL is the Analytical Limit, while the AL term inside the square root is the Loop Accuracy under trip conditions, per Section 4.4.11.1.)

Where N represents the number of standard deviations to which the value is calculated (normally 2 standard deviations).

Note: The (1.645/N) adjustment is applicable to setpoints that have a limit approached in one direction (single-sided interest).
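
For illustration, the AV(INC) expression above can be sketched as follows (hypothetical Analytical Limit and error values in process units; not a calculation of record):

    import math

    # Allowable Value for an increasing-to-trip process variable.
    def allowable_value_increasing(analytical_limit, random_terms_2sigma,
                                   bias_terms=(), n_out=2.0):
        srss = math.sqrt(sum(t ** 2 for t in random_terms_2sigma))
        return analytical_limit - (1.645 / n_out) * srss - sum(bias_terms)

    # e.g., AL = 1120.0 with PMA, PEA, and loop accuracy under trip conditions
    print(round(allowable_value_increasing(1120.0, [4.0, 2.0, 8.0], bias_terms=[1.5]), 2))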


Per Sections 4.5.1.l and 4.4.13.a, if the existing Tech. Spec. AV is conservative relative to the calculated AV, and is therefore preserved, then the existing AV should be used in any other sections requiring AV, unless a change in AV is desired.

4.4.11.1 The RANDOM TERMS that should be considered for particular AV calculations include the following:

(1) Loop Accuracy under Trip conditions (AL(trip))

(2) Process Measurement Accuracy (PMA)

(3) Primary Element Accuracy (PEA)

(4) The random portion of any other unique terms known to exist for a particular instrument application, excluding Drift.

4.4.11.2 BIAS TERMS that should be considered are:

(1) Any Biases associated with Process Measurement or the Primary Element (PMA/PEA).

(2) The bias component of Insulation Resistance Error (IRA).

(3) The bias portion of any other unique terms known to exist (including drift and software bias).

It should be noted that the sign applied to bias terms should be conservative relative to plant safety (i.e., credit should not be taken for a beneficial bias unless it can be assured that the beneficial bias will always be present).

4.4.12 Setpoints with Allowable Values The NTSP should be calculated using either equation below, depending on the direction of process variable change when approaching the Analytical Limit. The first equation is for process variables that increase to trip, and the second equation is for process variables that decrease to trip.

NTSP(INC) = AV - AFTL
NTSP(DEC) = AV + AFTL
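
A one-line sketch of the relationship above (hypothetical values only):

    # NTSP(INC) = AV - AFTL; NTSP(DEC) = AV + AFTL
    def nominal_trip_setpoint(allowable_value, aft_loop, increasing=True):
        return allowable_value - aft_loop if increasing else allowable_value + aft_loop

    print(nominal_trip_setpoint(1110.0, 12.5))                  # increasing to trip
    print(nominal_trip_setpoint(8.0, 1.2, increasing=False))    # decreasing to trip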

4.4.12.1 Selecting Actual Setpoints The actual setpoint used in calibrating instrumentation may not be the value of the NTSP calculated. The choice of the actual setpoint to be used in the plant is a matter of evaluating setpoint conservatism as compared to the AV and operational preferences. In other words, the existing plant setpoint may be conservative relative to the calculated setpoint and AV and pose limited impact on plant operations or spurious trips. This in-plant (existing) setpoint would satisfy both the calculation requirements and plant operation and, as such, the channel would not require a setpoint revision. The existing setpoint becomes the NTSP and is used in any other sections requiring NTSP.

4.4.12.2 Evaluation of Trip Reset Value The reset setting is a variable % span adjustment of the trip setpoint. CPS calibration procedures typically have it set at 3% span (i.e., trip is set at 100%, reset is shown as 97%). The same AFT and ALT are placed on the trip setpoint as well as the reset; however, it is not possible for the trip to be found low in its band while the reset is found high. Areas to consider are as follows:

a. The loop has both a high and a low setpoint, with the resets overlapping, thus potentially producing both alarms at the same time.
b. When the calculated AFT is greater than the reset in the calibration procedure.
c. Both trip and reset require an NTSP calculation to provide different functions.

The reset value may require an adjustment different from the typical setting of 3% span.

4.4.13 Evaluating Results and Resolving Problems The evaluation of results depends to some extent on the ultimate goal of the setpoint calculations. If there is no existing setpoint in use, no evaluation may be necessary.

However, in the more normal case, there is already an existing setpoint and, in some cases, Technical Specifications requirements. In this case, the evaluation of results should include:

a. Evaluate the calculated Nominal Trip Setpoint and Allowable Value against existing values. If existing values are not supported by the calculations, determine whether or not it is desirable to preserve the existing values.



b. If existing values are to be preserved, investigate iteration opportunities and revise the calculations.

4.4.13.1 Iteration to Resolve Setpoint Problems There are usually opportunities for iteration as a means of resolving problems with a calculated setpoint, short of modifying instrument installations or hardware. As a minimum, the following alternatives should be considered:

(1) Modify the Analytical Limit. Frequently, analyses that are the source of the analytical limit have margin. Changes to the analytical limit, to take credit for existing analysis margins, are a powerful way to optimize setpoint calculations, since they have no impact on instrumentation or instrument error allowances. Further, there are many situations (even in plant transient or accident analyses) where relatively simple parameter studies can be used to adjust the analytical limit without re-doing the actual transient or accident analyses.

(2) Re-evaluate environmental assumptions. Many environmental assumptions are driven by worst case licensing assumptions, which may not be appropriate to instrument error analyses. For example, it makes no sense to use an environment that assumes plant conditions that the instrument setpoint of interest is designed to prevent. Environmental assumptions may also be optimized by careful consideration of trip timing, and by refining the analyses that predict environmental conditions.

(3) Re-evaluate calibration errors. Use of different calibration instruments, modified As-Found or As-Left Tolerances can be used to change calibration error allowances and improve setpoint calculations.

(4) Re-evaluate drift assumptions. Consider using statistical analyses of actual as-found and as-left data from surveillance testing to justify improved drift allowances.

(5) Evaluate other assumptions in setpoint calculations, such as function requirements for the instrumentation, trip timing, surveillance intervals, etc.

(6) Examine instrument applications. For example, for setpoints heavily impacted by a predicted radiation dose, a change from a standard model to a radiation resistant model of the same instrument can have major benefits (changing from a Rosemount 1153B "PI" output to an 1153B "R" output, for example).


4.5 Calculation of Nominal Trip Setpoints and Indication/Control Loops The individual calculations associated with setpoint and channel error evaluations are outlined below. The engineer performing the calculations should determine which calculations apply to the particular situation, based on the guidance provided.

4.5.1 Setpoint with Analytical Limit The following steps shall be performed for a Setpoint with Analytical Limit:

a. Calculate the individual device accuracy (Ai) per Section 4.3.1.
b. Calculate the individual device As-Left Tolerance (ALTi) per Section 4.3.3.
c. Calculate the loop As-Left Tolerance (ALTL) per Section 4.4.2.
d. Calculate the individual device Calibration Error (Ci) per Section 4.3.3.
e. Calculate the loop Calibration Error (CL) per Section 4.4.3.
f. Calculate the individual device drift error (Di) per Section 4.3.2.
g. Calculate the loop Drift Error (DL) per Section 4.4.4.
h. Calculate the individual device As-Found Tolerance (AFTi) per Section 4.3.3.
i. Calculate the loop As-Found Tolerance (AFTL) per Section 4.4.5.
j. Develop PMA, PEA, IRA, and other error terms per Sections 4.4.6 and 4.4.7 as applicable.
k. Calculate the Allowable Value (AV) from the Analytical Limit (AL) per Sections 4.4.10 and 4.4.11.
l. Compare calculated Allowable Value to existing Technical Specification AV. Use the existing AV if conservative, unless it is desired to revise the existing Technical Specifications.
m. Calculate the Nominal Trip Setpoint (NTSP) from the Allowable Value per Section 4.4.12.
n. Consider whether adequate separation exists between the Nominal Trip Setpoint and Allowable Value to avoid LERs.



o. Use the existing setpoint if conservative, unless it is desired to revise it. Otherwise, select a setpoint to be used in the calibration procedure that is bounded by the Nominal Trip Setpoint.
p. Evaluate the Trip Reset Value
q. Optimize calculations, if necessary, to validate existing Technical Specifications, designs, etc.

4.5.2 Indication/Control Loop The following steps shall be performed for an Indication/Control Loop:

a. Calculate values per Section 4.5.1.a through 4.5.1.j.
b. Calculate the channel uncertainty (CU) and channel error (CE) per Section 4.4.8.
c. Optimize calculations, if necessary, to validate existing Technical Specifications, designs, etc.

Note: If the indication loop also provides indication for a specific reading as required by the Tech. Spec., then Sections 4.5.1.k through 4.5.1.o should be addressed for that indicated reading (in lieu of a setpoint).

4.5.3 Setpoint without Analytical Limit The following steps shall be performed for a Setpoint without Analytical Limit:

a. Calculate values per Section 4.5.1.a through 4.5.1.j.
b. Calculate the channel uncertainty (CU) and channel error (CE) per Section 4.4.8.

c. Identify the Nominal Process Limit (NPL) per Section 4.4.9. This might also be given as an Allowable Value.

d. Calculate the Nominal Trip Setpoint (NTSP) from the Nominal Process Limit using the channel error per Section 4.4.9.
e. Use the existing setpoint if conservative, unless it is desired to revise it. Then select a setpoint to be used in the calibration procedure that is bounded by the Nominal Trip Setpoint.
f. Optimize calculations, if necessary, to validate existing designs, etc.


4.5.4 The following table lists the equations developed in Sections 4.3 and 4.4 for the different calculation scenarios in Sections 4.5.1 through 4.5.3 above.

Setpoint/Indication/Control Calculation - Section 4 Formulas

Section 4.3.1, Device Accuracy (Ai):
Ai = ± N[(VAi/n)² + (ATEi/n)² + (OPEi/n)² + (SPEi/n)² + (SEi/n)² + (REi/n)² + (HEi/n)² + (PSEi/n)² + (REEi/n)²]^(1/2) ± any bias terms associated with the above random errors (2σ)

Section 4.4.1, Loop Accuracy (AL):
AL = ±(A1² + A2² + ... + Ai²)^(1/2) ± any bias terms (2σ)

Section 4.3.3, Device As-Left Tolerance (ALTi):
ALTi = ± VAi (2σ)
See the discussion on whether to use the ALT from calibration procedures or establish it as VA.

Section 4.4.2, Loop As-Left Tolerance (ALTL):
ALTL = ±(N)[(ALT1/n)² + (ALT2/n)² + ... + (ALTi/n)²]^(1/2) (2σ)
Where N represents the number of standard deviations to which the value is evaluated (normally 2 standard deviations) and n represents the sigma value for each device.

Section 4.3.3, Determining Device Calibration Tolerances:
Guidance for M&TE is given in Appendix H.

Section 4.4.3, Loop Calibration Error (CL):
CL = ± N[Σ(ALTi/n)² + Σ(Ci/n)² + Σ(Cs/n)²]^(1/2) (2σ)
Where N and n are as defined above.

Section 4.3.2, Device Drift (Di):
Refer to Appendix I, Standard Assumptions, for the sigma value.
VD(M) = (M/6)^(1/2) × VD(6-month)

Section 4.4.4, Loop Drift (DL):
DL = ± N[(D1/n)² + (D2/n)² + ... + (Di/n)²]^(1/2) ± bias terms (2σ)
Where N and n are as defined above.

Section 4.3.3, Device As-Found Tolerance (AFTi):
AFTi = ±(N)[(ALTi/n)² + (Ci/n)² + (Di/n)²]^(1/2) (2σ)
Where N and n are as defined above.

Section 4.4.5, Loop As-Found Tolerance (AFTL):
AFTL = ±(N)[(CL/n)² + (DL/n)²]^(1/2) (2σ)
Where N and n are as defined above.

Sections 4.4.6 and 4.4.7:
Determine PMA, PEA, IRA, and other error terms.

For Setpoint Calculations with an Analytical Limit:

Sections 4.4.10 and 4.4.11, Allowable Value (AV):
AV(INC) = AL - (1.645/N)(SRSS of random terms) - bias terms
AV(DEC) = AL + (1.645/N)(SRSS of random terms) + bias terms
Typically calculated and shown as below (the leading AL is the Analytical Limit; the AL inside the SRSS term is the Loop Accuracy):
AV(INC) = AL - [(1.645/N)(PMA² + PEA² + AL²)^(1/2) + B]
AV(DEC) = AL + [(1.645/N)(PMA² + PEA² + AL²)^(1/2) + B]
Where N represents the number of standard deviations to which the value is evaluated (normally 2 standard deviations).
Note: The (1.645/N) adjustment is applicable to setpoints that have a limit approached in one direction (single-sided interest).

Section 4.4.12, Nominal Trip Setpoint (NTSP):
NTSP(INC) = AV - AFTL
NTSP(DEC) = AV + AFTL

For Indication/Control Calculations only:

Section 4.4.8, Channel Error (CE):
CE = ±(SRSS of random terms) ± bias terms
Typically calculated and shown as below:
CU = ± N[PMA² + PEA² + AL² + (CL/n)² + (DL/n)²]^(1/2) ± B (2σ)
Where N and n are as defined above.
And CE = ±(CU² + IRE²)^(1/2) ± bias terms

For Setpoints without an Analytical Limit and/or Indication/Control:

Section 4.4.8, Channel Error (CE):
CE = ±(1.645/N)(SRSS of random terms) ± bias terms
Typically calculated and shown as below:
CU = ± N[PMA² + PEA² + AL² + (CL/n)² + (DL/n)²]^(1/2) ± B (2σ)
Where N and n are as defined above.
And CE = ±(1.645/N)(CU² + IRE²)^(1/2) ± bias terms
Note: The (1.645/N) adjustment to channel error is applicable to non-safety setpoints or required indicator readings that have a limit approached in one direction (i.e., increasing or decreasing only, but not both; single-sided interest).

Section 4.4.9, Nominal Trip Setpoint (NTSP):
NTSP(INC) = NPL - CE, or NTSP(DEC) = NPL + CE


5.0 REFERENCES

5.1 NEDC-31336, General Electric Improved Setpoint Methodology, October 1986 (GE Proprietary Information).

5.2 NEDC-32889P, Rev. 2, General Electric Methodology for Instrumentation Technical Specification and Setpoint Analysis, February 2000. GE reference for use in Extended Power Uprate calculations.

5.3 ANSI/ISA S67.04, Setpoints for Nuclear Safety-Related Instrumentation, Parts I and II. Part I is the Standard and Part II is the Recommended Practice. See Part II, page 46, for a description of "Methods." Also, ISA dTR 67.04.09, Graded Approaches to Setpoint Determination, Draft Technical Report, 1994, and the subsequent version, Draft 4, May 2000.

5.4 GE Nuclear Energy internal procedures.

5.5 General Electric Document EDE-40-1189 (Rev. 0).

5.6 ANSI/ASME PTC 19.1-1985, Measurement Uncertainty. Establishes a basis for the principles of uncertainty analysis.

5.7 ASME MFC-3M-1989, Measurement of Fluid Flow in Pipes Using Orifice, Nozzle, and Venturi. Provides information regarding expected uncertainties and errors associated with flow measurement.

5.8 ASME 1967 Steam Tables. Provides the basis for water density as a function of temperature and pressure. When used, the appropriate pages should be copied and made an attachment to the calculation.

5.9 ANSI N42.18, American National Standard for Specification and Performance of On-Site Instrumentation for Continuously Monitoring Radioactivity in Effluents This standard establishes minimum expected performance standards for certain types of radiation monitoring equipment.

5.10 The Institute for Nuclear Power Operations (INPO) Good Practice TS-405, Setpoint Change Control Program.

Provides guidance for setpoint change control and implementation practice.


5.11 Regulatory Guide 1.105, Rev. 01, Setpoints for Safety-Related Instrumentation. CPS has committed to Regulatory Guide 1.105, Rev. 01, for guidance relative to instrument setpoint preparation and control. Regulatory Guide 1.105 establishes the NRC's proposed endorsement of ISA-67.04. The discussion also provides the NRC's perspective on various technical areas related to setpoint methodologies and statistical analysis.

5.12 NRC Information Notice 92-12, Effects of Cable Leakage Currents on Instrument Settings and Indications. Information Notice 92-12 describes a potential problem related to instrument loop current leakage. During the high humidity and temperature conditions of a LOCA or HELB, insulation resistance can be degraded, thereby contributing to the measurement uncertainty of affected instrument loops.

5.13 ER-AA-520, Rev. 3, "Instrument Performance Trending" T&RM.

5.14 CPS 1512.01, Rev. 18a, Calibration and Control of Measuring and Test Equipment (M&TE), and MA-AA-716-040, Rev. 2, Control of Portable Measurement and Test Equipment Program. These procedures establish generic requirements and controls for calibration and verification of Test Equipment and Reference Standards. Additionally, the administrative requirements for controlling M&TE are provided. These procedures establish the minimum requirements for M&TE control. This Engineering Standard assumes that M&TE is controlled in accordance with these procedures.

5.15 CPS 8801.01, Rev. 13, Instrument Calibrations This procedure provides instructions for performing operations verification and calibration of single and multiple input devices as an individual instrument. It also includes instructions for development of Instrument Data Sheets.

5.16 CPS 8801.02, Rev. 12, Loop Calibrations This procedure provides instructions for performing operations verification and calibration of instrument loops. It also includes instructions for development of Loop Calibration Data Sheets.


5.17 CPS 8801.05, Rev. 15a, Corrections to Instrument Calibrations. This procedure provides instructions for scaling and applying corrections to setpoint data obtained from Engineering.

5.18 Not Used.

5.19 Assessment EA # 2003-06220 r/2, "Performance of Instrument Drift Analyses In Support of the Clinton Power Station 24 Month Refuel Cycle Project," dated 3/19/04.

5.20 CC-AA-309-1001, Rev. 0, Guidelines for Preparation and Processing of Design Analysis.

5.21 CC-AA-309, Rev. 3, Control of Design Analysis. This procedure establishes requirements and controls for preparation, review, documentation and approval of design analyses.

5.22 Calculation 01ME127, Rev.0, DBA Influence On Insulation-Resistance Related Instrument Errors This calculation determines the influence of design basis accident (DBA) conditions on containment instrumentation loop signal transmission systems (i.e., penetrations, cabling, splices, and conduit seals) and the consequent effect on the accuracy of measurement of safety-related process parameters. The calculation addresses those instrument loops which have the primary devices located inside containment and for which S&L has prepared instrument setpoint accuracy calculations per the requirements of Reg. Guide 1.105.

5.23 Calculation 01ME128, Rev. 0, DBA Influence On Insulation-Resistance Related Instrument Errors For GE RG 1.105 Instruments This calculation determines the influence of design basis accident (DBA) conditions on containment instrumentation loop signal transmission systems (i.e., penetrations, cabling, splices, and conduit seals) and the consequent effect on the accuracy of measurement of safety-related process parameters. The calculation addresses those instrument loops which have the primary devices located inside containment and for which GE has prepared instrument setpoint accuracy calculations per the requirements of RG 1.105.


5.24 Calculation CI-CPS-187, Rev. 0, DBA Influence On Insulation-Resistance Related Instrument Errors. This calculation provides similar information as Calculations 01ME127 or 01ME128. Also, this calculation determines the bounding influence on instrumentation loops for each generic circuit type (current source, voltage source, and bridge current source) that can be applied to similar circuits under harsh conditions. This calculation addresses instrument loops that have the primary devices located outside containment and for which Sargent & Lundy prepared Reg. Guide 1.105 instrument setpoint calculations.

5.25 Not Used.

5.26 NRC Generic Letter 91-04, "Changes in Technical Specification Surveillance Intervals to Accommodate a 24-Month Fuel Cycle," dated April 2, 1991.

5.27 NES-EIC-20.04, Rev. 3, "Analysis of Instrument Channel Setpoint Error and Instrument Loop Accuracy."

5.28 Honeywell 4450 Extended Analog System Input: 4400 AG-T Termination Assembly, K2801-0116A, Tab 15, and Analog Input Subsystem, K2801-0116B, Book 1, Tab 2. Vendor Manual and Specifications.

5.29 Record of Teleconference from Carl M. Ingram to J. Miller, File Nos. 126.5, S/U 33.1, 10/16/81.

5.30 IP-C-0089, Rev. 0, "M&TE Uncertainty Calculation."

5.31 ASTM Standard D257-91, Standard Test Methods for D-C Resistance or Conductance of Insulating Materials, Appendix X1.

5.32 EPRI TR-103335, Rev. 1, Statistical Analysis of Instrument Calibration Data: Guidelines for Instrument Calibration Extension/Reduction Programs.

5.33 EPRI TR-102644, Calibration of Radiation Monitors at Nuclear Power Plants.

5.34 Regulatory Guide 1.97, Rev. 3, Instrumentation for Light-Water-Cooled Nuclear Power Plants to Assess Plant and Environs Conditions During and Following an Accident.

5.35 Regulatory Guide 1.89, Rev. 0, Qualification of Class 1E Equipment for Nuclear Power Plants.

5.36 DC-ME-09-CP, Rev. 11, "Equipment Environmental Design Conditions, Design Criteria."

5.37 CC-AA-103-2001, Rev. 0, "Setpoint Change Control"

6.0 APPENDICES

This Engineering Standard includes Appendices organized to provide all required technical information necessary to prepare a CPS Instrument Setpoint Calculation. The Appendices are listed as follows:

Appendix A, GUIDANCE ON DEVICE SPECIFIC ACCURACY AND DRIFT ALLOWANCES
Appendix B, SAMPLE CALCULATION FORMAT
Appendix C, UNCERTAINTY ANALYSIS FUNDAMENTALS
Appendix D, EFFECT OF INSULATION RESISTANCE ON UNCERTAINTY
Appendix E, FLOW MEASUREMENT UNCERTAINTY EFFECTS
Appendix F, LEVEL MEASUREMENT TEMPERATURE EFFECTS
Appendix G, STATIC HEAD AND LINE LOSS PRESSURE EFFECTS
Appendix H, MEASURING AND TEST EQUIPMENT UNCERTAINTY
Appendix I, NEGLIGIBLE UNCERTAINTIES / CPS STANDARD ASSUMPTIONS
Appendix J, DIGITAL SIGNAL PROCESSING UNCERTAINTIES
Appendix K, PROPAGATION OF UNCERTAINTY THROUGH SIGNAL CONDITIONING MODULES
Appendix L, GRADED APPROACH TO UNCERTAINTY ANALYSIS
Appendix M, NOT USED
Appendix N, STATISTICAL ANALYSIS OF SETPOINT INTERACTION
Appendix O, INSTRUMENT LOOP SCALING
Appendix P, RADIATION MONITORING SYSTEMS
Appendix Q, ROSEMOUNT LETTERS
Appendix R, RECORD OF COORDINATION FOR COMPUTER POINT ACCURACY

Figure 2. Setpoint Relationships

[Figure placeholder: a vertical scale showing, from top to bottom, the Safety Limit; the transient analysis margin; the Analytical Limit; the Allowable Value; the loop as-found and as-left tolerances; the Selected Setpoint (NTSP); the loop as-left and as-found tolerances; the Operating Limit; the transient analysis margin; and the Normal Operating Value.]

APPENDIX A GUIDANCE ON DEVICE SPECIFIC ACCURACY AND DRIFT ALLOWANCES

A.1 Overview

In general, there are three parameters relating to accuracy and drift which must be determined for any given device. These are Accuracy under normal conditions (Ai(normal)), Accuracy under trip conditions (Ai(trip)), and Drift (Di). There are two steps that must be taken to determine these values.

a. Identify the individual effects that may contribute to these errors.
b. Obtain numerical data on the identified individual effects.

In determining the effects that may contribute, and in identifying the numerical values, consideration should be given to the following sources of information (in order of importance):

c. Clinton specific data from testing of actual instruments, surveillance records, qualification programs, etc.
d. Generic data from testing of actual instruments, surveillance data, qualification programs, etc.
e. Vendor supplied data sheets and data.
f. Purchase specifications for equipment
g. Generally accepted assumptions.

The purpose of this appendix is to provide guidance for the process described above.


A.2 Effects Expected to be Present in Accuracy and Drift Values

A.2.1 Accuracy

As discussed in paragraph 4.3.1 and defined in Section 2.2, the following effects may typically be part of instrument accuracy (potentially, for both normal and trip conditions):

a. Vendor Accuracy (VA)
b. Accuracy Temperature Effect (ATE)
c. Overpressure Effect (OPE)
d. Static Pressure Effect (SPE)
e. Seismic Effect (SE)
f. Radiation Effect (RE)
g. Humidity Effect (HE)
h. Power Supply Effect (PSE)
i. RFI/EMI Effect (REE)

It may not be possible, in many cases, to determine all of the above effects. Qualification testing or vendor performance specifications may simply state a value for accuracy and then stipulate a range of temperatures, radiation levels, seismic loads, humidity, and other boundaries within which the value of accuracy is applicable. In such cases, there is no need to determine the separate effects.

A.2.1.a Rosemount Transmitter Devices

In the absence of suitable vendor data, Clinton-specific qualification data, or surveillance test data, GE recommends that the information in the following paragraphs be used. For a selected group of Rosemount devices, GE has determined recommended accuracy assumptions based on generic qualification testing. This information has been provided to the USNRC (Reference 5.1) and used for many setpoint calculations accepted by the NRC.


A.2.1.a.(1) Rosemount Transmitters

GE recommends that the following be used as a basis for determining normal and trip environment accuracies for Rosemount transmitters (models 1151, 1152-T0280, 1153 Series B, and 1154).

A.2.1.a.(1).(a) Vendor Accuracy (VA), Accuracy Temperature Effect (ATE), Power Supply Effect (PSE), Humidity Effect (HE) and RFI/EMI Effect (REE)

VA = 0.25% SP (3 Sigma)

ATE = (0.75% UR + 0.5% SP) (delta Ta)/100 (3 Sigma)

(double this value for Range Code 3)

PSE = 0.005% SP per volt (3 Sigma)

HE = 0 (included in VA)

REE = 0 (Normally negligible)

Determination of 'delta Ta' is discussed in paragraph A.2.3.

A.2.1.a.(1).(b) Overpressure Effect (OPE)

This effect varies depending on the instrument range, and is identified in Rosemount product data sheets. GE treats the resulting values as 3 Sigma values based on experience with the Rosemount data.

A.2.1.a.(1).(c) Static Pressure Effect (SPE)

As discussed in paragraph 4.3.1, SPE sometimes consists of several effects, some of which are random and some of which are bias. This is particularly the case with Rosemount differential pressure transmitters (note, SPE does not apply to absolute pressure or gage pressure transmitters). In the case of Rosemount transmitters, there are three SPE components: (1) a random zero point error, (2) a random span error, and (3) a bias span error. The bias span error is easily adjusted for as part of the calibration process (this is often done). If accommodated in the calibration process, it need not be included in the accuracy error calculations.

GE has found that the Rosemount manuals may be difficult to interpret concerning SPE. For this reason, the following summary is provided to describe definition of the Rosemount SPE.


The components of SPE are calculated as follows:

Random Zero Effect; SPEz = (Zero)% UR (delta P)/1000 (3 Sigma)

Random Span Effect; SPES = (Span)% SP (delta P)/1000 (3 Sigma)

Bias Span Effect; SPEBS = (BS)% SP (delta P)/1000 (3 Sigma)

Where 'delta P' is the pressure difference between the system pressure at calibration and the system pressure under trip conditions, and the terms SPEz, SPEs, and SPEBs are shown in Table A.1.

TABLE A.1 - ROSEMOUNT STATIC PRESSURE EFFECT

Random Zero Error (SPEz), (Zero)%:
  Range 3:         1151DP 0.25   1152-T0280 0.25   1153B 0.50   1154 N/A
  Ranges 4, 5:     1151DP 0.125  1152-T0280 0.125  1153B 0.2    1154 0.2
  Ranges 6, 7, 8:  1151DP 0.125  1152-T0280 0.25   1153B 0.5    1154 0.5

Random Span Error (SPEs), (Span)%:
  Range 3:               1151DP 0.5   1152-T0280 0.25   1153B 0.5   1154 N/A
  Ranges 4, 5, 6, 7, 8:  1151DP 0.25  1152-T0280 0.25   1153B 0.5   1154 0.5

Bias Span Error (SPEBs), (BS)%:
  Range 3:  1151DP 1.75  1152-T0280 1.5  1153B 1.5   1154 N/A
  Range 4:  1151DP 0.87  1152-T0280 1.0  1153B 0.75  1154 0.75
  Range 5:  1151DP 0.81  1152-T0280 1.0  1153B 0.75  1154 0.75
  Range 6:  1151DP 1.45  1152-T0280 1.0  1153B 1.25  1154 1.25
  Range 7:  1151DP 1.05  1152-T0280 1.0  1153B 1.25  1154 1.25
  Range 8:  1151DP 0.55  1152-T0280 1.0  1153B 0.75  1154 0.75

CPS Vendor Manual: 1151DP - 4256/57 (3/87); 1152-T0280 - K2801-091, Tab 1; 1153B - K2801-091, Tab 2; 1154 - M008-0002.

NOTE: Rosemount manuals supplied with purchased instrumentation should be checked to determine if any changes apply to this information.
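As an illustration of how the SPE relations and Table A.1 are applied, the short sketch below evaluates the three components for an assumed 1153B Range Code 5 transmitter; the upper range limit, span, and static pressure change are hypothetical values, not plant data:

```python
# Table A.1 coefficients for an assumed Rosemount 1153B, Range Code 5 (percent values).
ZERO_PCT, SPAN_PCT, BIAS_SPAN_PCT = 0.2, 0.5, 0.75

UR = 750.0        # upper range limit, inches of water (assumed)
SP = 300.0        # calibrated span, inches of water (assumed)
delta_P = 950.0   # static pressure change between calibration and trip conditions, psi (assumed)

SPE_z  = ZERO_PCT / 100.0 * UR * delta_P / 1000.0        # random zero effect (3 sigma)
SPE_s  = SPAN_PCT / 100.0 * SP * delta_P / 1000.0        # random span effect (3 sigma)
SPE_bs = BIAS_SPAN_PCT / 100.0 * SP * delta_P / 1000.0   # bias span effect (often calibrated out)

print(f"SPEz = ±{SPE_z:.2f}  SPEs = ±{SPE_s:.2f}  SPEbs = {SPE_bs:.2f} (inches of water)")
```

If the bias span component is adjusted out during calibration, only the two random components would be carried into the accuracy combination.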


A.2.1.a.(1).(d) Seismic Effect (SE)

Based on an evaluation of Rosemount test data, GE recommends the following:

SE = 0.23% UR (2 Sigma)

Where equation applies to situations in which the Zero Period Acceleration (ZPA) at the mounting location of the transmitter does not exceed 1 "g" for the event of interest, and where the transmitter is expected to be performing its trip function simultaneous with the seismic event.

SE = (0.03 ZPA + 0.20)% UR (2 Sigma)

Where ZPA exceeds 1 "g", but not 10 "g", and the transmitter is expected to be performing its trip function simultaneous with the seismic event.

SE = 0.25% UR (2 Sigma)

Where ZPA exceeds 2 "g", but the seismic event is expected to occur between the time of the last calibration and the time of trip, but not simultaneously.

If the seismic event ZPA does not exceed 2 "g", and the event is not simultaneous with the trip event, the effect on transmitter accuracy is negligible.
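The selection among these cases depends only on the ZPA at the mounting location and on whether the trip function is credited coincident with the event. A minimal sketch of that decision logic, for illustration only, is:

```python
def rosemount_seismic_effect(zpa_g, coincident_with_trip):
    """Seismic Effect in % UR (2 sigma) per the guidance above; None means negligible.
    Illustrative sketch only."""
    if coincident_with_trip:
        if zpa_g <= 1.0:
            return 0.23
        if zpa_g <= 10.0:
            return 0.03 * zpa_g + 0.20
        raise ValueError("ZPA above 10 g is outside the stated guidance")
    # Event assumed to occur between the last calibration and the trip, not simultaneously.
    return 0.25 if zpa_g > 2.0 else None

print(rosemount_seismic_effect(0.8, True))    # 0.23 (% UR)
print(rosemount_seismic_effect(3.0, False))   # 0.25 (% UR)
```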

A.2.1.a.(1).(e) Radiation Effect (RE)

GE does not recommend use of Rosemount model 1151 transmitters for trip applications for which the gamma Total Integrated Dose (TID) to time of trip exceeds approximately 10^4 RAD. Up to this value, the radiation effect on 1151 transmitters is negligible (plant specific EQ program data should be used to support use of 1151 transmitters in a radiation environment, if such data is available).

For the 1152-T0280 transmitter:

RE = (1.25X + 1.25)% UR (2 Sigma)

Where TID exceeds 0.1 MRAD, but does not exceed 0.4 MRAD. This effect should be multiplied by 1.68 for Range Code 3. There is no effect at or below 0.1 MRAD.

RE = (4.5X + 4.5)% UR (2 Sigma)

Where TID exceeds 0.4 MRAD, but not 20 MRAD. This effect should also be multiplied by 1.68 for Range Code 3.


The term "X" is defined as:

X = (setpoint of interest - instrument zero)/calibrated span

For the 1153 Series B transmitter with a "P" output:

RE = (3.0X + 3.0)% UR (2 Sigma)

Where TID exceeds 0.1 MRAD, but not 22 MRAD. There is no effect at or below 0.1 MRAD. This effect should also be multiplied by 1.68 for Range Code 3.

For the 1153 Series B transmitter with an "R" output:

RE = (1.5X + 1.5)% UR (2 Sigma)

Where TID exceeds 0.1 MRAD, but not 22 MRAD. There is no effect at or below 0.1 MRAD. This effect should also be multiplied by 1.68 for Range Code 3.

For the 1154 transmitter:

RE = (1.0X + 1.0)% UR (2 Sigma)

Where TID exceeds 0.5 MRAD, but not 50 MRAD. There is no effect at or below 0.5 MRAD. This effect should also be multiplied by 1.68 for Range Code 3.
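Each of these radiation-effect relations follows the same pattern; the sketch below evaluates the 1153 Series B "P" output case for an assumed setpoint location and total integrated dose (the span and TID values are illustrative, not plant values):

```python
def re_1153b_p(tid_mrad, setpoint, instrument_zero, calibrated_span, range_code_3=False):
    """Radiation Effect in % UR (2 sigma) for an 1153 Series B "P" output transmitter,
    per the relations above.  Illustrative sketch only."""
    if tid_mrad <= 0.1:
        return 0.0                                   # no effect at or below 0.1 MRAD
    if tid_mrad > 22.0:
        raise ValueError("TID above 22 MRAD is outside the stated guidance")
    x = (setpoint - instrument_zero) / calibrated_span
    re = 3.0 * x + 3.0
    return re * 1.68 if range_code_3 else re

# Assumed example: setpoint at 60% of a 0-300 unit span, TID of 5 MRAD.
print(f"RE = {re_1153b_p(5.0, 180.0, 0.0, 300.0):.2f} % UR (2 sigma)")
```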

A.2.1.a.(2) Rosemount Trip Units For unmodified Rosemount model 510DU and 710DU trip units use vendor specified data for instrument uncertainties. For trip units modified by GE (model number 147D8505G005), use GE Performance Specification 22A7866 for instrument uncertainties.

A.2.2 Drift

As discussed in paragraph 4.3.2, there are two terms of interest in determining device drift. These are Vendor Drift (VD) and some time interval associated with VD (usually 6 months). These effects should be determined from vendor data, field data, or qualification data, if available.

A.2.2.a Rosemount Devices For a selected group of Rosemount devices GE has determined recommended drift assumptions based on generic qualification testing. This information has been provided to the USNRC (Reference 5.1) and used for many setpoint calculations accepted by the NRC. In the absence of suitable Clinton specific qualification data or surveillance test data GE recommends that the information in the following paragraphs be used.

A.2.2.a.(1) Rosemount Transmitters

For Rosemount model 1151, 1152-T0280, 1153 Series B, and 1154 transmitters, refer to vendor supplied information for the appropriate drift term. Due to Rosemount correspondence in the year 2000, the Rosemount drift terms will conservatively be considered to be 2 sigma.


A.2.2.a.(2) Rosemount Trip Units

For Rosemount model 510DU and 710DU trip units, use the vendor specified data. For trip units modified by GE (model number 147D8505G005), use GE Performance Specification 22A7866.

A.2.3 (Deleted)

A.2.4 Interpreting Vendor Data For many devices, it may be necessary to use vendor data sheets or specifications as the source of accuracy and drift information for setpoint calculations. However, vendors commonly use many different terms to describe the performance of their equipment. In addition, most vendors do not specify their data in terms of a probability of error (i.e., they don't say how many standard deviations their values represent). Therefore, interpretation is necessary.

When interpreting terminology, the definitions in Section 2.2 of this document should be used to ensure consistent interpretation.

For example, the definition of Channel Instrument Accuracy, paragraph 2.2.11, states that accuracy, as referred to in the CPS Setpoint Methodology, includes "the combined conformity, hysteresis and repeatability errors". Paragraph 2.2.11 also indicates certain terms, which are not considered to be part of accuracy.

Care should be exercised to relate the vendor-defined errors to the functions of the instrument channel. For example, a Rosemount trip unit with an analog indicator has two distinct sets of errors.

There are errors associated with the trip circuitry, which apply to a trip setpoint calculation. There are also errors associated with the analog indicator, which do not apply to the trip function, but which would apply if the purpose of the calculation is to define the error associated with readings taken using the analog indicator.

In some cases, vendors may not identify all errors of interest.

For some types of devices, vendors identify accuracy errors but no drift effects. In such cases, it is necessary to first determine whether or not there is satisfactory evidence that the omitted item (drift, for example) does not apply to this type of device. If available information is not convincing, it may be necessary to assume a value. Paragraphs A.2.5 and A.2.6 contain recommendations for establishing error terms on the basis of field data and/or conservative assumptions.


The final aspect of importance when interpreting vendor data is determining how many standard deviations (sigma values) the data represents. In general, this is an issue of how much confidence we have in the vendor data. Data may be qualitatively classified into three categories: (1) best estimate data, (2) worst case data which is backed by limited testing, and (3) worst case data backed by extensive qualification testing or testing of every delivered device. In the absence of information from a vendor that specifies the sigma value associated with the data, GE recommends treating data as follows:

a. Best estimates: Assume they are one (1) sigma values.
b. Worst case data backed by limited testing: Assume two (2) sigma.
c. Worst case data extensively backed: Assume three (3) sigma.

Under normal circumstances, all vendor data will be one of the latter two cases (i.e., 2 or 3 sigma). This is because most vendors specify instrument performance in terms of guaranteed performance. In order to guarantee performance, the vendor must have considerable confidence in the data. A two (2) sigma value corresponds to a 95% probability value, while three (3) sigma corresponds to slightly greater than 99%. Thus, assignment of the sigma value to be assumed in the calculations is a question of the confidence placed in the vendor data.
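Because vendor terms may therefore be quoted at one, two, or three sigma, they are normalized before being combined; the short sketch below (hypothetical values) shows the "divide by n, multiply by N" mechanics used throughout the Section 4 formulas to express an SRSS combination at a common 2-sigma level:

```python
import math

def combine_at_2_sigma(terms):
    """SRSS-combine (value, sigma) uncertainty terms and state the result at 2 sigma."""
    N = 2.0
    return N * math.sqrt(sum((value / sigma) ** 2 for value, sigma in terms))

# Vendor accuracy quoted at 3 sigma, a best-estimate term at 1 sigma, drift at 2 sigma.
terms = [(0.25, 3), (0.10, 1), (0.50, 2)]
print(f"Combined uncertainty = ±{combine_at_2_sigma(terms):.3f} % span (2 sigma)")
```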

A.2.5 Interpreting Surveillance Test Data Surveillance test data can be a valuable source of information with which to improve the database and refine setpoint calculations.

The primary use of surveillance test data is in validating and/or refining drift assumptions, and in extending instrument surveillance intervals. The primary limitation associated with use of field data is that there must be a valid basis for assumptions as to what the data contains. For example, surveillance data is normally valid as a source of improved drift information, and may be used to estimate other surveillance test related errors, but is not a good source for validating accuracy assumptions. Instrument accuracies may be quite different under trip conditions than during surveillance testing.

The basic approach to use of surveillance test data is a three part approach:

a. Define, in terms of the values of interest (drift, etc.), what the surveillance data represents, as a means of defining how you will interpret the data.
b. Collect the surveillance data needed to provide a strong statistical basis.
c. Perform a statistical analysis of the data, and establish the desired values along with the associated sigma level for use in channel error calculations or setpoint calculations.



The area of greatest potential benefit associated with surveillance test data analyses is the use of test data to validate reduced drift assumptions for existing surveillance test intervals, and the use of the data to predict revised drift values for longer surveillance test intervals. The latter is particularly useful in preparing justifications for temporary surveillance interval extensions in order to avoid undesired plant shutdowns for surveillance testing.

Detailed calculation models and methods for evaluating surveillance test data are beyond the scope of this document. Standard statistical methods may be used. In addition, References 5.1, 5.3, and 5.32 contain a detailed discussion of validating drift assumptions from surveillance test data.
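A minimal sketch of the kind of as-found/as-left analysis described above is shown below. The surveillance differences are made-up numbers, and a simple square-root-of-time extrapolation is assumed; the referenced EPRI TR-103335 methodology treats outlier screening, normality testing, and tolerance-interval selection in far more detail:

```python
import math
import statistics

# Hypothetical as-found minus as-left differences (% span) from past 18-month surveillances.
drift_samples = [0.12, -0.05, 0.20, 0.03, -0.10, 0.15, 0.08, -0.02, 0.11, 0.04]

bias = statistics.mean(drift_samples)                 # mean drift treated as a bias term
random_18m = 2.0 * statistics.stdev(drift_samples)    # 2-sigma random drift at the observed interval

# Extrapolate the random part to a 30-month interval (24-month cycle plus 25% grace),
# assuming drift grows with the square root of time, as in VD(M) = (M/6)^(1/2) * VD(6-month).
random_30m = random_18m * math.sqrt(30.0 / 18.0)

print(f"bias = {bias:+.3f} % span, random (18 mo, 2 sigma) = ±{random_18m:.3f} % span")
print(f"random drift extrapolated to 30 months = ±{random_30m:.3f} % span")
```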

A.2.6 Recommended Assumptions in the Absence of Data In the absence of better information, the following assumptions can be used in channel error and setpoint calculations:

a. Calibrating equipment accuracies are taken as 3 sigma values provided that the calibration of these devices is to NIST traceable standards and minimizes the effects of hysteresis, linearity and repeatability. The accuracies of the standards themselves are also taken to be 3 sigma values.
b. If Vendor Drift (VD) is not specified by the vendor or available from other sources, and if there is no basis for assuming drift is zero or negligible, assume VD equals Vendor Accuracy (VA) over the entire calibration period.

OR

If Vendor Drift (VD) is not specified by the vendor or available from other sources, and if there is no basis for assuming drift is zero or negligible, the following default values may be included for additional conservatism when preparing the analysis. The default drift effect values that will be used in these cases are:

  • Mechanical Components: ±1.0% of span per refueling cycle
  • Electronic Components: ±0.5% of span per refueling cycle

The intent of these default drift effect values (Reference 5.27, Appendix A) is to establish consistent values for this type of error for inclusion into the calculations to achieve additional conservatism when this data is not available, applicable, or published. Selection of these default drift effect values is the result of engineering review and judgment of industry practices, typical Reference Accuracy for these device types, and industry experience.

Choosing between these two approaches involves balancing the margin desired to the AL against the margin available to the operating limit.
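The choice can be captured in a small helper that returns the assumed drift term when no vendor or plant-specific value is available; this is an illustrative sketch of the two options described above, not a CPS procedure:

```python
def assumed_drift(vendor_accuracy=None, component_type=None, use_defaults=False):
    """Drift assumption (% span per refueling cycle) in the absence of data:
    either VD = VA over the calibration period, or the default values above.
    Illustrative sketch only."""
    if use_defaults:
        defaults = {"mechanical": 1.0, "electronic": 0.5}
        return defaults[component_type]
    if vendor_accuracy is None:
        raise ValueError("a Vendor Accuracy is needed when the defaults are not used")
    return vendor_accuracy

print(assumed_drift(vendor_accuracy=0.25))                             # VD = VA option
print(assumed_drift(component_type="electronic", use_defaults=True))   # default-value option
```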


A.2.7 Cautions Concerning Use of Qualification Program Data

Plant specific data from Equipment Qualification programs is a valuable source of data on instrument performance, particularly regarding the various accident related accuracy error terms (Radiation Effect, Seismic Effect, etc.). However, care should be exercised in use of this data.

In many cases, Equipment Qualification programs have been conducted to prove that Class 1E equipment will function throughout its intended lifetime. Because the post-accident functions include indications for operator use, the environmental conditions used in EQ programs may include long term post-accident conditions which do not apply to most setpoint calculations. Use of EQ results without taking into account less severe trip conditions can result in extreme conservatism. Overly conservative setpoints can impact plant operations and lead to unnecessary challenges to safety systems.


APPENDIX B SAMPLE CALCULATION FORMAT

This sample presents the format used for a setpoint and indication/control calculation. An example of these types of calculations can be obtained from the Setpoint Program Coordinator. The calculation cover sheets are produced using Attachment 1 or 2 from Reference 5.20, depending on whether the calculation is a major or minor revision. The calculation shall reflect the name and order of major sections as shown in the TOC below; however, it is only recommended that sections within each major section be presented as shown in this Attachment. For other types of calculations, such as NIs, APRMs, and Radiation Monitors, the major sections of this sample should be used, with Appendix P providing guidance. The Setpoint Program Coordinator can provide examples of what is shown within each major section.

TABLE OF CONTENTS

CALCULATION COVER SHEET ........................... (PAGE #)
TABLE OF CONTENTS ................................. (PAGE #)
1.0 OBJECTIVE ..................................... (PAGE #)
2.0 ASSUMPTIONS ................................... (PAGE #)
3.0 METHODOLOGY ................................... (PAGE #)
4.0 INPUTS ........................................ (PAGE #)
5.0 OUTPUTS ....................................... (PAGE #)
6.0 REFERENCES .................................... (PAGE #)
7.0 ANALYSIS AND COMPUTATION SECTION(S) ........... (PAGE #)
8.0 RESULTS ....................................... (PAGE #)
9.0 CONCLUSIONS ................................... (PAGE #)

ATTACHMENTS
ATTACHMENT 1, Scaling (# of pages)
ATTACHMENT 2, Results Summary (# of pages)
ATTACHMENT 3 (etc. as required) (# of pages)


1.0 OBJECTIVE

Should state the purpose, functions, and objectives of the calculation, including the category that determines the amount of rigor required.

2.0 ASSUMPTIONS Other than CPS Standard Assumptions, there are two types that can be made: an assumption as to a value; or an assumption as to the quality of input information.

For each assumption, a judgment must be made as to whether confirmation is required or justification is provided to show it is reasonable. Refer to CC-AA-309 and CC-AA-309-1 001, for further guidance.

All standard assumptions (see Appendix I, Standard Assumptions) required by this calculation will be listed first. Any additional assumptions, as discussed above, will follow the standard assumptions.

3.0 METHODOLOGY

Typical:

This calculation will determine the instrument uncertainty associated with the (Function - Description). The evaluation will determine the loop setpoint and Allowable Value for the (Function). Instrument uncertainty will be determined in accordance with CI-01.00, "Instrument Setpoint Calculation Methodology". The evaluation will then compare the current setpoint and Allowable Value with the results determined by this calculation.

M&TE error will be determined from the results of Calculation IP-C-0089, which uses building temperature minimum and maximums to develop the uncertainty, and review of the corresponding loop and device calibration procedures. Any changes to the calibration procedures will be shown in Attachment 2.

Per CI-01.00, Head Correction is determined by evaluating design drawings, survey data, and/or walk down data as applicable and calculated in Attachment 1.
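As a simple illustration of the head-correction step only (not the Appendix G or CPS 8801.05 method itself), the static head seen by a transmitter mounted below its process tap can be estimated from the verified elevation difference and the sensing-line fluid specific gravity; the numbers below are hypothetical:

```python
ELEVATION_DIFF_IN = 36.0   # verified elevation difference, process tap to transmitter (inches, assumed)
SPECIFIC_GRAVITY  = 0.98   # sensing-line fill fluid specific gravity at the assumed temperature

# A fluid column of height h and specific gravity SG exerts roughly h * SG inches of water column.
head_correction_inwc = ELEVATION_DIFF_IN * SPECIFIC_GRAVITY
print(f"Head correction = {head_correction_inwc:.1f} inches of water (applied during scaling)")
```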


4.0 INPUTS

Inputs that cannot be easily retrieved from the CPS Document System should also be added as attachments. Typical: (Number, Revision Level, Title)

5.0 OUTPUTS

Typical: (Number, Revision Level, Title)

Calibration procedures and other calculations as required.

6.0 REFERENCES

Typical: (Number, Revision Level, Title).

7.0 ANALYSIS AND COMPUTATION SECTION(S)

This section should list all of the equations identified in Section 4.5.11 of CI-01.00 for the type of calculation to be performed. All inputs, outputs, and references should be identified as required within the document (e.g., Input 4.1, Output 5.1, Ref. 6.1). Titles can be shown in the document (typically not shown); however, revision levels shall only be identified in Sections 4.0, 5.0, and 6.0.

From CI-01.00, Section 4.5.11. Note: The individual terms and acronyms are defined in CI-01.00, Section 2.2.

7.1 Loop Function

7.2 Loop Diagram

7.3 Equations

7.3.1 Loop Accuracy (AL):

For each component,

Ai = ± N[(VAi/n)² + (ATEi/n)² + (OPEi/n)² + (SPEi/n)² + (SEi/n)² + (REi/n)² + (HEi/n)² + (PSEi/n)² + (REEi/n)²]^(1/2) ± B (2σ)

For the loop,

AL = ±(A1² + A2² + ... + Ai²)^(1/2) ± B (2σ)

7.3.2 Calculation of As-Left Values

For each component, ALTi = (existing ALT or VA) (2σ)

The loop As-Left Tolerance (ALTL) will be calculated as follows:

ALTL = ±(N)[(ALT1/n)² + (ALT2/n)² + ... + (ALTi/n)²]^(1/2) (2σ)

Where N represents the number of standard deviations to which the value is evaluated (normally 2 standard deviations) and n represents the sigma value for each device.

7.3.3 Loop Calibration Error (CL):

CL = ± N[(ALTL/N)² + Σ(Ci/n)² + Σ(CSTD/n)²]^(1/2) (2σ)

Where N and n are as defined above.

7.3.4 Loop Drift (DL):

DL = ± N[(D1/n)² + (D2/n)² + ... + (Di/n)²]^(1/2) (2σ)

Where N and n are as defined above.

7.3.5 Calculation of As-Found Values

For each component, AFTi = ±(N)[(ALTi/n)² + (Ci/n)² + (Di/n)²]^(1/2) (2σ)

The loop As-Found Tolerance (AFTL) will be calculated as follows:

AFTL = ±(N)[(CL/n)² + (DL/n)²]^(1/2) (2σ)

Where N and n are as defined above.

7.3.6 Channel Uncertainty (CU) and Channel Error (CE):

This section is for non-safety setpoints, indication, and control loops, and need not be derived for safety-related setpoints.

CU = ± N[PMA² + PEA² + AL² + (CL/n)² + (DL/n)²]^(1/2) ± B (2σ)

Where N and n are as defined above.

And CE = ±(1.645/N)(CU² + IRE²)^(1/2) ± B

Note: The (1.645/N) adjustment to channel error is applicable to non-safety setpoints or required indicator readings that have a limit approached in one direction (single-sided interest).

7.3.7 Setpoints with no Analytical Limits or Allowable Values

NTSP(INC) = NPL - CE
NTSP(DEC) = NPL + CE

7.3.8 Allowable Value Calculation

Allowable Value calculated for an increasing trip:

AV = AL - (1.645/N)(PMA² + PEA² + AL²)^(1/2) - B

Allowable Value calculated for a decreasing trip:

AV = AL + (1.645/N)(PMA² + PEA² + AL²)^(1/2) + B

Note: The (1.645/N) adjustment is applicable to setpoints that have a limit approached in one direction (single-sided interest).

Note: The calculation of the AV does not include the CL and DL terms.

7.3.9 Nominal Trip Setpoint Calculation

The Nominal Trip Setpoint (NTSP) should be calculated using the equations below depending on the direction of process variable change when approaching the Analytical Limit.

For process variables that increase to trip, NTSP = AV - AFTL
For process variables that decrease to trip, NTSP = AV + AFTL

7.4 Determination of Uncertainties

A section is required for each device in the loop as shown by the loop diagram in Section 7.2. In cases where there are multiple loops and one device depicted in the loop diagram has different manufacturer/model numbers (e.g., two channels where the sensor has two different model numbers), a section evaluating each manufacturer/model number is required and the worst case will be used in the Results, Section 8.0. Below is an example for a Rosemount transmitter:


7.4.1 Sensor/Transmitters

Calculations are typically performed in % Span and converted to engineering units as required in different sections of the calculation. This is not a requirement; however, all values calculated for output to calibration procedures shall be in the units and precision necessary to support the calibration procedure.

7.4.1.1 Vendor Accuracy of Pressure Transmitters (VAPT)

Calculation or conversion if required. Refer to the Appendices for aid in developing the value.

VAPT = ± [ ] % Span (?σ)

7.4.1.2 Accuracy Temperature Effect

7.4.1.2.1 Normal Accuracy Temperature Effect (ATEPT(Normal))

Calculation or conversion if required. Refer to the Appendices for aid in developing the value. Use the standard assumption when no vendor information is available.

ATEPT(Normal) = ± [ ] % Span (?σ)

7.4.1.2.2 Accident Accuracy Temperature Effect (ATEPT(Accid))

This section is based on the time when the function is required; the value may need to be calculated. Refer to the Appendices for aid in developing the value. Also, refer to the EQ manuals for more information.

ATEPT(Accid) = ± [ ] % Span (?σ)

7.4.1.3 Humidity Effect (HEPT)

Calculation or conversion if required. Refer to the Appendices for aid in developing the value. Use the standard assumption when no vendor information is available.

HEPT = ± [ ] % Span (?σ)


7.4.1.4 Radiation Effect

7.4.1.4.1 Normal Radiation Effect (REPT(Normal))

Calculation or conversion if required. Refer to the Appendices for aid in developing the value. Use the standard assumption when no vendor information is available.

REPT(Normal) = ± [ ] % Span (?σ)

7.4.1.4.2 Accident Radiation Effect (REPT(Accid))

This section is based on the time when the function is required; the value may need to be calculated. Refer to the Appendices for aid in developing the value. Also, refer to the EQ manuals for more information.

REPT(Accid) = ± [ ] % Span (?σ)

7.4.1.5 Power Supply Effect of Pressure Transmitters (PSEPT)

Calculation or conversion if required. Refer to the Appendices for aid in developing the value. Use the standard assumption when no vendor information is available.

PSEPT = ± [ ] % Span (?σ)

7.4.1.6 Static Pressure Effect (SPEPT)

Calculation or conversion if required. Refer to the Appendices for aid in developing the value.

SPEPT = ± [ ] % Span (?σ)

7.4.1.7 Overpressure Effect (OPEPT)

Calculation or conversion if required. Refer to the Appendices for aid in developing the value.

OPEPT = ± [ ] % Span (?σ)


7.4.1.8 Seismic Effect

7.4.1.8.1 Normal Seismic Effect (SEPT(Normal))

Use the standard assumption.

SEPT(Normal) = 0

7.4.1.8.2 Accident Seismic Effect (SEPT(Accid))

Per Section C3.14, a seismic event coincident with a LOCA is a design basis event per USAR 15.6.5. However, per USAR 15.6.5.1.1, there are no realistic, identifiable events which would result in a pipe break inside the containment of the magnitude required to cause a loss-of-coolant accident coincident with a safe shutdown earthquake. Therefore, each setpoint calculation should consider the larger effect of a seismic event or a loss-of-coolant accident.

SEPT(Accid) = 0

7.4.1.8.3 OBE/SSE Seismic Effect (SEPT(Seismic))

Refer to the Appendices for aid in developing the value. Also, refer to the SQ manuals for more information.

SEPT(Seismic) = ± [ ] % Span (?σ)

7.4.1.9 RFI/EMI Effect (REEPT)

Use the standard assumption, if applicable, or review historical work packages and vendor data to build a justifiable assumption.

REEPT = 0

7.4.1.10 Bias (BPT)

Refer to Appendix C for guidance.

BPT = ± [ ] % Span (?σ)

7.4.1.11 Pressure Transmitter Accuracy

Refer to Section 7.3.1 for the formula.


7.4.1.11.1 Normal Pressure Transmitter Accuracy (APT(Normal))

APT(Normal) = ± [ ] % Span (?σ)

7.4.1.11.2 Accident Pressure Transmitter Accuracy (APT(Accid))

Calculated the same as normal; however, the accident uncertainties replace the corresponding normal uncertainties.

APT(Accid) = ± [ ] % Span (?σ)

7.4.1.11.3 Seismic Pressure Transmitter Accuracy (APT(Seismic))

Calculated the same as normal; however, the seismic uncertainty replaces the normal seismic uncertainty.

APT(Seismic) = ± [ ] % Span (?σ)

7.4.1.11.4 Pressure Transmitter Accuracy (APT)

Based on the above, use the largest uncertainty calculated under [normal/accident/seismic] conditions to determine the AV, NTSP, and CE. Therefore:

APT = ± APT(normal/accident/seismic)

APT = ± [ ] % Span (?σ)

7.4.2 Loop Accuracy (AL)

Refer to Section 7.3.1 for the formula.

AL = ± [ ] % Span (2σ)

7.5 As-Left Values (ALT)

Each device in the loop requires an ALTi.

For each component, ALTi = (existing ALT or VA) units (3σ)


The loop As-Left Tolerance (ALTL) will be calculated as follows:

Refer to Section 7.3.2 for the formula.

ALTL = ± [ ] units (2σ)

7.6 Loop Calibration Error (CL)

Refer to Section 7.3.3 for the formula.

7.6.1 As-Left Tolerance (ALTL)

Refer to Section 7.5 for values.

ALTL = ± [ ] % Span (2σ)

7.6.2 Calibration Tool Error (Ci)

Each device requires a calibration tool error.

7.6.2.1 Transmitter Calibration Tool Error (CPT)

Refer to M&TE calculation IP-C-0089 for maximum values; however, if extra margin is required, refer to Appendix H for additional guidance.

CPT = ± [ ] % Span (3σ)

7.6.3 Calibration Standard Error (CSTD):

Per Assumption [ ], Calibration Standard Error is considered negligible for the purposes of this analysis.

CSTD = 0

7.6.4 Loop Calibration Error (CL):

Calculate using the formula from Section 7.6 above. Only the M&TE required for the loop is used for calculating the Loop Calibration Error (CL).

CL = ± [ ] % Span (2σ)


7.7 Loop Drift

Each device requires a drift evaluation.

7.7.1 Pressure Transmitter Drift (DPT):

Calculation or conversion if required. Refer to the Appendices for aid in developing the value. Use the standard assumption when no vendor information is available.

DPT = ± [ ] % Span (?σ)

7.7.2 Loop Drift (DL):

Refer to Section 7.3.4 for the formula.

DL = ± [ ] % Span (2σ)

7.8 Calculation of As-Found Values (AFT)

Each device in the loop requires an AFTi. Refer to Section 7.3.5 for the formulas.

For each component, AFTi = ± [ ] units (2σ)

The loop As-Found Tolerance (AFTL) will be calculated as follows:

AFTL = ± [ ] units (2σ)

7.9 Process Measurement Accuracy (PMA):

Discussion and calculation as required. Refer to the Appendices for aid in developing the value.

PMA = ± [ ] % Span (?σ)

7.10 Primary Element Accuracy (PEA):

Discussion and calculation as required. Refer to the Appendices for aid in developing the value.

PEA = ± [ ] % Span (?σ)


7.11 Insulation Resistance Accuracy Error (IRA):

References 5.22, 5.23, and 5.24 from CI-01.00 may provide a bounding IRA value to use, if the device is identified by these calculations. However, if a more precise IRA value for the identified devices is needed, or a non-identified device requires an IRA to be established, then the guidance provided in Appendix D shall be used.

8.0 RESULTS

8.1 Determine Channel Uncertainty (CU):

This section is only applicable to indication/control loop calculations. Refer to Section 7.3.6 for the formula. N/A for safety-related setpoint calculations.

CU = ± [ ] units (2σ)

CE = ± [ ] units (2σ)

8.2 Calculation of Setpoints with no Analytical Limits or Allowable Values

This section is only applicable to setpoint calculations. Refer to Section 7.3.7 for the formula. N/A for safety-related setpoint calculations.

NTSP = [ ] units

8.3 Calculation of the Allowable Value (AV)

This section is only applicable to setpoint calculations. Refer to Section 7.3.8 for the formula. N/A for non-safety related setpoint, indication, and control loop calculations.

AV = [ ] units (2σ)

8.4 Calculation of the Nominal Trip Setpoint (NTSP)

This section is only applicable to setpoint calculations. Refer to Section 7.3.9 for the formula. N/A for non-safety related setpoint, indication, and control loop calculations.

NTSP = [ ] units

8.5 Evaluation of Reset Value

Evaluate per the guidance given by Section 4.4.12.2.

9.0 CONCLUSIONS

Add a discussion of the results to verbalize that the objectives are met. If the results are graphically presented, the figure should reflect the direction of the setpoint.


FIGURE 1 - [NAME] FUNCTION

[Figure placeholder: a vertical scale from the Maximum Instrument Range down to the Minimum Instrument Range showing, in order, the Analytical Limit (AL), the Calculated AV and Actual AV, the +AFT and +ALT bands, the Calculated NTSP and Actual NTSP, and the -ALT and -AFT bands, each with its [ ] UNITS entry to be filled in.]

ATTACHMENT 1 - SCALING OF THE [NAME] FUNCTION

There should be a discussion of whether head correction is applicable or not. If applicable, then it should be developed. CPS 8801.05 shall be used as guidance; however, only verified information (typically walkdowns) may be used from existing CPS 8801.05 head corrections.

Scaling shall be performed for each device in the loop as presently presented in the existing calibration procedures (Cardinal Points, Units, and precision). Discussion with C&I maintenance shall be required when unable to support existing calibration procedures.

1 Transmitter

EINs:
Manufacturer: Rosemount Inc.
Model No.:
Input:
Output:
Process Range: Min (p)   Max (P)   Units
Transmitter Output Range: Min (o)   Max (O)   Units

EINs:

Transmitter Calibration

Cal. Pt.   Input (Units)   Output (Volts DC)   AFT (units)   ALT (units)
   0%      [ ]             [ ]                 ( to )        ( to )
  25%      [ ]             [ ]                 ( to )        ( to )
  50%      [ ]             [ ]                 ( to )        ( to )
  75%      [ ]             [ ]                 ( to )        ( to )
 100%      [ ]             [ ]                 ( to )        ( to )
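The cardinal-point scaling behind this template is linear between the process range and the transmitter output. The sketch below, for illustration only, generates the 0/25/50/75/100% points for an assumed 0-300 inch-of-water range and a 1-5 VDC output (4-20 mA across a 250-ohm resistor), with a hypothetical ±0.25% span as-left tolerance:

```python
P_MIN, P_MAX = 0.0, 300.0   # process range, inches of water (assumed)
O_MIN, O_MAX = 1.0, 5.0     # output range, VDC across 250 ohms (assumed)
ALT_PCT_SPAN = 0.25         # as-left tolerance, % of span (assumed)

alt_volts = ALT_PCT_SPAN / 100.0 * (O_MAX - O_MIN)

print("Cal. Pt.   Input (in. w.c.)   Output (VDC)   ALT band (VDC)")
for pct in (0, 25, 50, 75, 100):
    process = P_MIN + pct / 100.0 * (P_MAX - P_MIN)
    output = O_MIN + pct / 100.0 * (O_MAX - O_MIN)
    print(f"{pct:>7}%   {process:>14.1f}   {output:>10.3f}   "
          f"({output - alt_volts:.3f} to {output + alt_volts:.3f})")
```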


ATTACHMENT 2 - RESULTS SUMMARY

The following tables list the applicable results of this calculation:

Primary Sensor Scaling/Calibration
  Primary Sensor - Calibration Span:  0%   25%   50%   75%   100%
                                      units units units units units

Individual Component Setting Tolerances
  Component EIN | As-Found (units) | As-Left (units)

Trip Setpoint and Loop Setting Tolerances
  Component EIN | As-Found (units) | As-Left (units)

M&TE Used in Calculation
  Manufacturer | Model Number | Range

USAR/Technical Specification Setpoint
  Component EIN | Allowable Value | Design Setpoint
  USAR/Technical Specification Section:
  Tech. Spec. Tables:
  ORM Tables:


APPENDIX C UNCERTAINTY ANALYSIS FUNDAMENTALS

The ideal instrument would provide an output that accurately represents the input signal, without any error, time delay, or drift with time. Unfortunately, this ideal instrument does not exist. Even the best instruments tend to degrade with time when exposed to adverse environments. Typical stresses placed on field instruments include ambient temperature, humidity, vibration, temperature cycling, mechanical shock, and occasionally radiation.

These stressors may affect an instrument's reliability and accuracy. This Appendix discusses the various elements of uncertainty that should be considered as part of an uncertainty analysis. The methodology to be applied to uncertainty analysis and the determination of trip setpoints is also described in this Appendix.

Instrument loop uncertainty is a combination of individual instrument uncertainties and variations in the process that the loop is monitoring. Individual instrument uncertainty may vary with the environmental conditions around the instrument and with process variations.

There are five general categories of environmental and process conditions which need to be considered: (1) normal operations, (2) seismic event, (3) post-seismic, (4) accident, which could be LOCA, MSLB, HELB, etc., and (5) post-accident. This standard provides information for determining instrument uncertainties under each condition. The total instrument uncertainty may be used alone, as for indicators and recorders, to provide an estimate of possible error between actual and indicated process conditions, or as a step toward determining instrument setpoints and operator decision points.

Not all categories of uncertainty described in this Appendix will apply to every configuration. But, the analyst should provide, in the body of the calculation, a discussion sufficient to explain the rationale for any uncertainty category that is not included.

C.1 Categories of Uncertainty The basic model used in this design standard requires that the user categorize instrument uncertainties as random, bias, or arbitrarily distributed. This section describes the various categories of instrument uncertainty and provides insight into the process of categorizing instrumentation based on performance specifications, test reports, and plant calibration data.


The estimation of uncertainty is an iterative process requiring the development of assumptions and, where possible, verification of assumptions based on actual data. Ultimately, the user is responsible for defending assumptions that affect the basis of uncertainty estimates.

It should not be assumed that, since this design standard addresses three categories of uncertainty, all three types must be used in each uncertainty calculation. Additionally, it should not be assumed that instrument characteristics would fit neatly into a single category. For example, the nature of some data may require that an instrument's static pressure effect be described as bimodal, which might best be represented as a random uncertainty with an associated bias.

C.1.1 Random Uncertainties When repeated measurements are taken of some fixed parameter, the measurements will generally not agree exactly. Just as these measurements do not precisely agree with each other, they also deviate by some amount from the true value. Uncertainties that fluctuate about the true value without any particular preference for a particular direction are said to be random.

Random uncertainties are sometimes referred to as a quantitative statement of the reliability of a single measurement or of a parameter, such as the arithmetic mean value, determined from a number of random trial measurements. This is often called the statistical uncertainty and is one of the so-called precision indices. The most commonly used indices, usually in reference to the reliability of the mean, are the standard deviation, the standard error (also called the standard deviation in the mean), and the probable error.

In the context of instrument uncertainty, it is generally accepted that random uncertainties are those instrument uncertainties that a manufacturer specifies as having a +/- magnitude and are defined in statistical terms. It is important to understand the manufacturer's data thoroughly and be prepared to justify the interpretation of the data. After uncertainties have been categorized as random, it is required that a determination be made whether there exists any dependency between the random uncertainties. Figure C-1 shows the expected nature of randomly distributed data. There is a greater likelihood that data will be located near the mean; the standard deviation defines the variation of data about the mean.


Figure C-1 Random Behavior (normal distribution about the mean, spanning -3 to +3 standard deviations; approximately 95.4% of the data falls within +/-2 standard deviations)

C.1.2 Bias Uncertainties

Suppose that a tank is actually 50% full, but a poorly designed level monitoring circuit shows the tank level as fluctuating randomly about 60%. As discussed in the previous section, the fluctuations about some central value represent random uncertainties. However, the fixed error of 10% in this case is called a systematic or bias uncertainty. In some cases, the bias error is a known and fixed value that can be calibrated out of the measurement circuit. In other cases, the bias error is known to affect the measurement accuracy in a single direction, but the magnitude of the error is not constant.

Bias is defined as a systematic or fixed instrument uncertainty, which is predictable for a given set of conditions because of the existence of a known direction (positive or negative). A very accurate measurement can be made to be inaccurate by a bias effect.

The measurement might otherwise have a small standard deviation (uncertainty), but read entirely differently from the true value because the bias effectively shifts the measurement away from the true value by some fixed amount. Figure C-2 shows an example of bias; note that bias as shown in Figure C-2 shifts the measurement from the true process value by a fixed amount.


Figure C-2 Effect of Bias (the measured value is offset from the true value by a fixed bias)

Examples of bias include head correction, range offsets, reference leg heat-up or flashing, and changes in flow element differential pressure because of process temperature changes. A bias error may have a random uncertainty associated with the magnitude.

Some bias effects, such as static head of the liquid in the sensing lines, can be corrected by the calibration process. These bias effects can be left out of the uncertainty analysis if verified to be accounted for by the calibration process. Note that other effects, such as density variations of the static head, might still contribute to the measurement uncertainty.

C.1.3 Arbitrarily Distributed Uncertainty

Some uncertainties do not have distributions that approximate the normal distribution. Such uncertainties may not be eligible for the rules of statistics or square root of the sum of the squares combinations and are categorized as arbitrarily distributed uncertainties. Because they are equally likely to have a positive or a negative deviation, worst-case treatment should be used.

It is important that the engineer recognize that the direction (sign) associated with a bias is known, whereas the sign associated with an arbitrarily distributed uncertainty is not known but is assumed based on a worst-case scenario.

C.1.4 Independent Uncertainties

Independent uncertainties are all those uncertainties for which no common root cause exists. It is generally accepted that most instrument channel uncertainties are independent of each other.


C.1.5 Dependent Uncertainties

Because of the complicated relationships that may exist between the instrument channels and various instrument uncertainties, it should be recognized that a dependency might exist between some uncertainties. The methodology presented here provides a conservative means for addressing these dependencies. If, in the engineer's judgment, two or more uncertainties are believed to be dependent, then these uncertainties should be added algebraically to create a new, larger independent uncertainty. For the purpose of this design standard, dependent uncertainties are those for which the user knows or suspects that a common root cause exists, which influences two or more of the uncertainties with a known relationship.

C.2 Interpretation of Uncertainty Data

The proper interpretation of uncertainty information is necessary to ensure that high confidence levels are selected and that protective actions are initiated before safety limits are violated.

Also, proper interpretation is necessary for the valid comparison of instrument field performance with setpoint calculation allowances. This comparison confirms the bounding assumptions of the appropriate safety analysis.

Accuracy (uncertainty) values should be based on a common confidence level (interval) of at least two standard deviations (95% corresponds to approximately 2 standard deviations). The use of three or more standard deviations may be unnecessarily conservative, resulting in reduced operating margin. Some uncertainty values may need to be adjusted to 2-standard deviation values.

For example, if a vendor accuracy for a 99% level (3 standard deviations) is given as +/-6 psig, the 95% confidence level corresponds to +/-4 psig (= (2/3) x 6). This approach assumes that vendor data supports this 3 standard deviation claim.
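The adjustment above can be expressed as a simple rescaling. The following is a minimal sketch (the function name and values are illustrative, not from this standard) of converting an uncertainty quoted at one sigma level to the 2-standard-deviation (approximately 95%) level assumed by this methodology, under the stated assumptions that the distribution is approximately normal and that the vendor data supports the quoted sigma level.

```python
# A minimal sketch (illustrative names and values): rescale an uncertainty quoted at one
# sigma level to the 2-standard-deviation (~95%) level used by this methodology.

def to_two_sigma(value, quoted_sigma_level):
    """Convert an uncertainty quoted at quoted_sigma_level standard deviations
    to an equivalent 2-standard-deviation value."""
    return value * (2.0 / quoted_sigma_level)

# Vendor accuracy of +/-6 psig quoted at 3 standard deviations (~99%):
print(to_two_sigma(6.0, 3.0))  # 4.0 -> +/-4 psig at ~95% confidence
```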

Performance specifications should be provided by instrument or reactor vendors. Data should include vendor accuracy, drift, environmental effects and reference conditions. Since manufacturer performance specifications often describe a product line, any single instrument may perform significantly better than the group specification. If performance summary data is not available or if it does not satisfy the needs of the users, raw test data may need to be reevaluated or created by additional testing.


If an uncertainty is known to consist of both random and bias components, the components should be separated to allow subsequent combination of like components. Bias components should not be mixed with random components during the square root of the sum of the squares combination.

Historically, there have been many different methods of representing numerical uncertainty. Almost all suffer from the ambiguity associated with shorthand notation. For example, without further explanation, the symbol +/- is often interpreted as the symmetric confidence interval associated with a random, normally distributed uncertainty. Further, the level of confidence may be assumed to be 68% (standard error, 1 standard deviation), 95% (2 standard deviations) or 99% (3 standard deviations). Still others may assume that the +/- symbol defines the limits of error (reasonable bounds) of bias or non-normally distributed uncertainties. Vendors should be consulted to avoid any misinterpretation of their performance specifications or test results.

Reactor vendors typically utilize nominal values for uncertainties used in a setpoint analysis associated with initial plant operation. These generic values are considered conservative estimates, which may be refined if plant-specific data is available. Since plant-specific data may be less conservative than the bounding generic data, care should be taken to ensure that it is based on a statistically significant sample size.

One source of performance data that requires careful interpretation is that obtained during harsh environment testing. Often, such tests are conducted only to demonstrate the functional capability of a particular instrument in a harsh environment. This usually requires only a small sample size and invokes inappropriate rejection criteria for a probabilistic determination of instrument uncertainties. The meager data base typically results in limits of error (reasonable bounds) associated with bias or non-normally distributed uncertainties.

The limited database from an environmental qualification test also precludes adjusting the measured net effects for normal environmental uncertainties, vendor accuracies, etc. Thus, the results of such tests describe several mutually exclusive categories of uncertainty. For example, the results of a severe environment test may contain uncertainty contributions from the instrument vendor accuracy, measuring and test equipment uncertainty, calibration uncertainty and others, in addition to the severe environment effects. A conservative practice is to treat the measured net effects as only uncertainty contributions due to the harsh environment.


In summary, avoid improper use of vendor performance data. Just as important, do not apply overly conservative values to uncertainty effects to the point that a setpoint potentially limits normal operation or expected operational transients. Because of the diversity of data summary techniques, notational ambiguities, inconsistent terminology, and ill-defined concepts that have been apparent in the past, it is recommended that vendors be consulted whenever questions arise. If a vendor-published value of an uncertainty term (source) is confirmed to contain a significant bias uncertainty, then the +/- value should be treated as an estimated limit of error. If the term is verified to represent only random uncertainties (no significant bias uncertainties), then the +/- value should be treated as the 2-standard-deviation interval for an approximately normally distributed random uncertainty.

C.3 Elements of Uncertainty

NOTE: The following sections may expand on or add clarification for elements of uncertainty, but they do not replace the definitions specified in Section 2.2.

C.3.1 Process Measurement Accuracy (PMA)

PMA terms are those effects that have a direct effect on the accuracy of a measurement. PMA variables are independent of the process instrumentation used to measure the process parameter. PMA can often be thought of as physical changes in the monitored parameter that cannot be detected by conventional instrumentation.

The following are examples of PMA variables:

  • Temperature stratification and inadequate mixing of bulk temperature measurements
  • Reference leg heatup and process fluid density changes from calibrated conditions
  • Piping configuration effects on level and flow measurements
  • Fluid density effects on flow and level measurements
  • Line pressure loss and pressure head effects
  • Temperature variation effect on hydrogen partial pressure
  • Gas density changes on radiation monitoring

Some PMA terms are easily calculated, some PMA terms are quite complex and are obtained from General Electric documents, and other PMA terms are allowances developed and justified by Design Basis Documents.


C.3.2 Primary Element Accuracy (PEA)

PEA is generally described as the accuracy associated with the primary element, typically a flow measurement device such as an orifice, venturi, or other devices from which a process measurement signal is developed. The following devices are typically considered to have a primary element accuracy that requires evaluation in an uncertainty analysis:

  • Flow venturi
  • Flow nozzle
  • Orifice plate
  • RTD or thermocouple thermowell
  • Sealed sensors such as a bellows unit to transmit a pressure signal

PEA can change over time because of erosion, corrosion, or degradation of the sensing device. Installation uncertainty effects can also contribute to PEA errors.


C.3.3 Vendor Accuracy (VA)

VA defines a limit that error will not exceed when a device is used under reference or specified operating conditions. An instrument's accuracy consists primarily of three instrument characteristics:

repeatability, hysteresis, and linearity. These characteristics occur simultaneously, and their cumulative effects are denoted by a band that surrounds the true output (see Figure C-3). This band is normally specified by the manufacturer to ensure that their combined effects adequately bound the instrument's performance over its design life. Deadband is another attribute that is sometimes included within the vendor accuracy (see Section C.3.9).

Figure C-3 Instrument Accuracy (accuracy band, in mA, surrounding the true output between the zero point, P0, and the upper span limit, PS)

Repeatability is an indication of an instrument's stability and describes its ability to duplicate a signal output for multiple repetitions of the same input. Repeatability is shown on Figure C-4 as the degree that signal output varies for the same process input.

Instrument repeatability can degrade with age as an instrument is subjected to more cumulative stress, thereby yielding a scatter of output values outside of the repeatability band.

Figure C-4 Repeatability (band of output values, in mA, obtained for repeated applications of the same pressure input)

Hysteresis describes an instrument's change in response as the process input signal increases or decreases (see Figure C-5). The larger the hysteresis, the lower the corresponding accuracy of the output signal. Stressors can affect the hysteresis of an instrument.

Figure C-5 Hysteresis (the output, in mA, differs between increasing and decreasing pressure inputs)

All instrument transmitters preferably exhibit linear characteristics, i.e., the output signal should be linearly and proportionately related to the input signal. Linearity describes the ability of the instrument to provide a linear output in response to a linear input (see Figure C-6). The linear response of an instrument can change with time and stress.

Figure C-6 Linearity (actual calibration curve of output, in mA, versus pressure input compared with the desired calibration curve)

In cases in which the measurement process is not linear, the more appropriate term to use is conformity, meaning that the output follows some desired curve. Linearity and conformity are often used interchangeably.

As discussed, vendor accuracy is generally described as the combined effect of hysteresis, linearity, and repeatability. These three separate effects are sometimes combined to form the bounding estimate of vendor accuracy as follows:

VA = +/-(h^2 + l^2 + r^2)^1/2

where,
VA = Vendor Accuracy
h = Hysteresis
l = Linearity
r = Repeatability

Accuracy cannot be adjusted, improved, or otherwise affected by the calibration process. Rather, accuracy is a performance specification against which the device is tested during calibration to determine its condition. A 5-point calibration check (0%, 25%, 50%, 75%, and 100%) of an instrument's entire span verifies linearity. If a 9-point check is performed, by checking up to 100% and back down to 0%, hysteresis is also verified. Finally, if the calibration check is performed a second time (or more), repeatability is verified. The calibration check process is rarely performed to a level of detail that also confirms repeatability, but if it is, per ISA S67.04, the vendor accuracy and the calibration tolerance do not both need to be included in the uncertainty analysis. For this reason, the vendor accuracy term should be checked to verify that it includes the combined effects of linearity, hysteresis, and repeatability. If the vendor accuracy specification does not include all of these terms, the missing terms are included in the vendor accuracy specification as follows:

VA = +/-(va^2 + h^2 + l^2 + r^2)^1/2

where,
VA = Revised estimate of vendor accuracy
va = Vendor's stated accuracy with some terms not included
h = Hysteresis (if not already included)
l = Linearity (if not already included)
r = Repeatability (if not already included)
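The following is a minimal sketch of the SRSS combination above (function name and numeric values are illustrative, not from this standard): the vendor's stated accuracy is supplemented with any of the hysteresis, linearity, or repeatability effects it does not already include.

```python
# A minimal sketch (illustrative names and numbers): supplement a vendor's stated accuracy
# with any effects it does not already include, combining them by SRSS.
from math import sqrt

def vendor_accuracy(va_stated=0.0, h=0.0, lin=0.0, r=0.0):
    """Return the +/- vendor accuracy (% of span) as the SRSS of the stated accuracy
    and any effects not already included in it."""
    return sqrt(va_stated**2 + h**2 + lin**2 + r**2)

# Vendor states +/-0.25% of span but excludes repeatability of +/-0.10% of span:
print(round(vendor_accuracy(va_stated=0.25, r=0.10), 3))  # ~0.269 (% of span)
```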

Vendor accuracy is considered an independent and random uncertainty component unless the manufacturer specifically states that a bias or dependent effect also exists. Vendor accuracy is normally expressed as a percent of instrument span, but this should be confirmed from the manufacturer's specifications.

Bistables, trip units, and pressure switches may not require a consideration of hysteresis and linearity because the calibration might be checked only at the setpoint. If the accuracy is checked at the setpoint for these devices, the accuracy elsewhere in the instrument's span is not directly verified.

The calibration process might not adequately confirm the vendor accuracy if the measuring and test equipment (M&TE) uncertainty significantly exceeds the accuracy of the device being calibrated.

For example, the calibration process cannot verify a 0.1% accuracy specification with M&TE having an uncertainty of 0.5%. If the M&TE uncertainty exceeds the specified vendor accuracy, then the vendor accuracy should be considered no better than the M&TE allowance.
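As a rough sketch of that screening rule (function name and values are illustrative), the vendor accuracy credited in the calculation is floored at the M&TE uncertainty when the M&TE is the less accurate of the two.

```python
# A minimal sketch (illustrative values): when the M&TE uncertainty exceeds the vendor
# accuracy, the calibration cannot confirm the vendor specification, so credit no better
# than the M&TE allowance.

def effective_vendor_accuracy(va, mte):
    """Vendor accuracy to credit (% of span), floored at the M&TE uncertainty."""
    return max(va, mte)

print(effective_vendor_accuracy(va=0.1, mte=0.5))  # 0.5 (% of span)
```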


C.3.4 Drift

Drift is commonly described as an undesired change in output over a period of time; the change is unrelated to the input, environment, or load. A shift in the zero setpoint of an instrument is the most common type of drift. This shift can be described as a linear displacement of the instrument output over its operating range as shown in Figure C-7. Zero shifts can be caused by transmitter aging, an overpressure condition such as water hammer, or sudden changes in the sensed input that might stress or damage sensor components.

Figure C-7 Zero Shift Drift (as-found condition at calibration compared with the original calibration; PZc = Pressure Zero at Recalibration, PSc = Pressure Span at Recalibration, PZo = Pressure Zero at Original Calibration, PSo = Pressure Span at Original Calibration)

Span shifts are less common than zero shifts and are detected by comparing the minimum and maximum current outputs to the corresponding maximum and minimum process inputs. Figure C-8 shows an example of forward span shift in which the instrument remains in calibration at the zero point, but has a deviation that increases with span. Reverse span shift is also possible in which the deviation increases with decreasing span.


Figure C-8 Span Shift Drift (as-found condition at calibration compared with the original calibration; PZo = Pressure Zero at Original Calibration, PSc = Pressure Span at Recalibration, PSo = Pressure Span at Original Calibration)

The amount of drift allowed for an instrument depends on the manufacturer's drift specifications and the period of time assumed between calibrations. For safety-related devices, the drift allowance should be based on the Technical Specifications allowance for plant operation (i.e., 24 months) plus an additional allowance of 25%. Note that not all equipment is checked at this frequency; the Technical Specifications still specify a shorter frequency for certain equipment, such as quarterly checks of trip units.

The manufacturer's specified drift is often based on a maximum interval of time between calibration checks. Several methods are available to adjust the drift allowance to match the calibration period of the instrument. If the instrument drift is assumed to be linear as a function of time and continuing in one direction once it starts, the drift allowance would be calculated as shown below:

For an example with a vendor drift interval of 6 months and a drift specification of 0.5%:

DR30 = +/-0.5% x (30/6) = +/-2.5% of span

In the absence of other data, this is a conservative assumption.

However, if the vendor states that the drift during the calibration period is random and independent, then it is just as likely for drift to randomly change directions during the calibration period.

In this case, the square root of the sum of the squares of the individual drift periods between calibrations could be used. In this case, the total drift allowance for 30 months would be:

DR30 = +/-(0.5%^2 + 0.5%^2 + 0.5%^2 + 0.5%^2 + 0.5%^2)^1/2 = +/-1.12% of span

The approach in Section 4.3.2 assumes the drift is random and independent, as above.
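The two extrapolation assumptions above can be compared numerically. The following is a minimal sketch (function names are illustrative, not from this standard) using the 0.5%-per-6-month example and a 30-month surveillance interval.

```python
# A minimal sketch (illustrative names): extrapolate a vendor drift specification to a
# longer interval under (a) a linear, unidirectional drift assumption and (b) a random,
# independent per-period drift assumption combined by SRSS.
from math import sqrt

def drift_linear(vd_per_period, period_months, interval_months):
    return vd_per_period * (interval_months / period_months)

def drift_srss(vd_per_period, period_months, interval_months):
    n = interval_months / period_months        # number of drift periods in the interval
    return sqrt(n) * vd_per_period             # SRSS of n equal, independent drift terms

# Vendor drift of 0.5% per 6 months, extrapolated to a 30-month interval:
print(drift_linear(0.5, 6, 30))                # 2.5  (% of span), linear assumption
print(round(drift_srss(0.5, 6, 30), 2))        # 1.12 (% of span), random/independent assumption
```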

Some vendors have stated that the majority of drift tends to occur in the first several months following a calibration and that the instrument output will not drift significantly after the "settle-in period." In this case, a lower drift value might be acceptable provided that the vendor can supply supporting data for this type of drift characteristic. However, when the vendor-stated drift is specified for a period as long as or longer than the calibration period (e.g., Rosemount drift = 0.2% for 30 months), it is not acceptable to arbitrarily reduce the drift value. In this case, the data supporting a "settle-in period" drift characteristic must be evaluated.

VD30 = +/-[VDyr^2 + VDyr^2 + (VDyr^2 / 2)]^1/2

In the above expression of drift, VDyr represents the annual drift estimate and the resultant drift, VD30, represents the 30-month drift estimate (two full years plus a half year, with the half year contributing half of the annual drift variance). If VDyr = 1%, the 30-month drift estimate is obtained by:

VD30 = +/-[1.0%^2 + 1.0%^2 + (1.0%^2 / 2)]^1/2 = +/-1.58% of span

Drift can also be inferred from instrument calibration data by an analysis of as-found and as-left data. Typically, the variation between the as-found reading obtained during the latest calibration and the as-left reading from the previous calibration is taken to be indicative of the drift during the calibration interval. By evaluating the drift over a number of calibrations for functionally equivalent instruments, an estimate of the drift can be developed.

Typically, the calibration data is used to calculate the mean of drift, the standard deviation of drift, and the tolerance interval that contains a defined portion of the drift data to a certain probability and confidence level (typically 95%/95%). This statistically determined value of drift can be used to validate the vendor's performance specification and can also be used as the best estimate of drift in the uncertainty calculation. Assigning all of the statistically determined drift from plant-specific data is especially conservative because this drift allowance contains many other contributors to uncertainty, including:

  • Instrument hysteresis and linearity error present during the first calibration
  • Instrument hysteresis and linearity error present during the second calibration
  • Instrument repeatability error present during the first calibration
  • Instrument repeatability error present during the second calibration
  • Measurement and test equipment error present during the first calibration
  • Measurement and test equipment error present during the second calibration
  • Personnel-induced or human-related variation or error during the first calibration
  • Personnel-induced or human-related variation or error during the second calibration
  • Instrument temperature effects due to a difference in ambient temperature between the two calibrations (this is particularly true for 18-month cycle plants in which the first calibration is performed in the winter and the second calibration is performed in the summer)
  • Environmental effects on instrument performance (e.g., radiation, temperature, vibration, etc.) between the two calibrations that cause a shift in instrument output

  • Misapplication, improper installation, or other operating effects that affect instrument calibration during the period between calibrations
  • True instrument "drift" representing a change, time-dependent or otherwise, in instrument output over the time period between calibrations

See Appendix M for information about how to incorporate the results of an As-Found/As-Left (AF/AL) drift analysis into a setpoint or channel error calculation.

Regardless of the approach taken for determining the drift allowance, the uncertainty calculation should provide the basis for the value used.
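The following is a minimal sketch, with hypothetical data, of the statistical treatment of as-found/as-left calibration history described above. Each drift sample is the as-found value from the latest calibration minus the as-left value from the previous calibration; the 95%/95% tolerance factor shown is only a placeholder, and the tabulated value for the actual sample size would be used in practice.

```python
# A minimal sketch (hypothetical data): estimate drift statistics from AF/AL history.
from statistics import mean, stdev

as_left_previous = [0.02, -0.01, 0.00, 0.03, -0.02, 0.01]   # % of span, hypothetical
as_found_latest  = [0.05, 0.01, -0.03, 0.06, 0.00, 0.02]    # % of span, hypothetical

# Drift sample = as-found (latest calibration) - as-left (previous calibration)
drift = [af - al for af, al in zip(as_found_latest, as_left_previous)]
m, s = mean(drift), stdev(drift)
k = 3.7   # placeholder tolerance factor; use the tabulated 95%/95% value for the sample size
print(f"mean drift = {m:.3f}% of span, std dev = {s:.3f}%, allowance ~ {m:.3f} +/- {k * s:.3f}%")
```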

C.3.5 Accuracy Temperature Effects (ATE)

The ambient temperature is expected to vary somewhat during normal operation. This expected temperature variation can influence an instrument's output signal, and the magnitude of the effect is referred to as the temperature effect. Using a maximum temperature that bounds the maximum observed temperature, rather than the full design maximum temperature difference, can reduce the conservatism of the analysis. Larger temperature changes associated with accident conditions are considered part of the environmental allowance, and the effect of larger temperature changes was determined as part of an environmental qualification test. The temperature effects described here relate only to the effect on instrument performance during normal operation.

The vendor normally provides an allowance for the predicted effect on instrument performance as a function of temperature. For example, a typical temperature effect might be +/-0.75% per 100°F change from the calibrated temperature. This vendor statement of the temperature effect would be correlated to plant-specific performance as follows:

ATE = +/-(|nt - ct|)(vte)

where,
ATE = Temperature effect to assume for the uncertainty calculation
nt = Normal expected maximum or minimum temperature (both sides should be checked)
ct = Calibration temperature (typically, the minimum zone temperature)

vte = Vendor's temperature effects expression

For example, suppose the vendor's temperature effects expression is +/-0.75% of span per 100°F, the calibration temperature is 65°F (if known; otherwise use the minimum temperature for that zone), and the maximum expected temperature is 110°F. This vendor statement of the temperature effect would be correlated to plant-specific performance as follows:

ATE = +/-[|110°F - 65°F| x (0.75% / 100°F)] = +/-0.3375% of span

Notice that the above approach starts with the minimum zone temperature and then determines the maximum expected variation from the minimum zone temperature under normal operating conditions. Design Criteria DC-ME-09-CP, "Equipment Environmental Design Conditions," provides all normal and harsh environments for the plant.
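As a minimal sketch of the ATE arithmetic above (the function name is illustrative, and the values are those from the example), the vendor effect is applied to the difference between the expected and calibration temperatures.

```python
# A minimal sketch (values from the example above): accuracy temperature effect for normal
# ambient temperature variation, with the vendor effect expressed as % of span per 100°F.

def accuracy_temperature_effect(normal_temp_f, cal_temp_f, vte_pct_per_100f):
    """ATE (% of span) = |normal temperature - calibration temperature| x vendor effect."""
    return abs(normal_temp_f - cal_temp_f) * (vte_pct_per_100f / 100.0)

print(accuracy_temperature_effect(110.0, 65.0, 0.75))  # 0.3375 (% of span)
```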

The above discussion applies to temperature effects on instrumentation in response to expected ambient temperature variations during normal plant operation. Some manufacturers have also identified accident temperature effects that describe the expected temperature effect on instrumentation for even larger ambient temperature variations. An accident temperature effect describes an uncertainty limit for instrumentation operating outside the normal environmental limits and in some cases may include normal temperature effects.

Temperature effect is considered a random error term unless otherwise specified by the manufacturer.

C.3.6 Radiation Effects (RE)

During normal operation, most plant equipment is exposed to relatively low radiation levels. Although lower dose-rate radiation effects might have a nonreversible effect on an instrument, the calibration process can eliminate them. If the dose rate is low enough, the ambient environment might be considered mild during normal operation and radiation effects can be considered negligible. Any effects of relatively low radiation levels are considered indistinguishable from drift and are calibrated out during routine calibration checks.

If the normal operation dose rate is high enough that radiation effects should be considered, the environmental qualification test report will provide the best source of radiation effect information. During the worst-case accident environment, radiation effects can be part of the simultaneous effect of temperature, pressure, steam, and radiation that was determined during the environmental qualification process. Other plant locations might experience a more benign temperature and pressure environment, but still be exposed to significant accident radiation. For each case, the determination of the radiation effects should rely on the data in the environmental qualification report. Environmental qualification test report data should usually be treated as an arbitrarily distributed bias unless the manufacturer has provided data supporting its treatment as a random contributor to uncertainty.

C.3.7 Static Pressure Effects (SPE)

Some devices exhibit a change in output because of changes in process or ambient pressure. A differential pressure transmitter might measure flow across an orifice with a differential pressure of a few hundred inches of water while the system pressure is over 1,000 psig. The system pressure is essentially a static pressure placed on the differential pressure measurement. The vendor usually specifies the static pressure effect; a typical example is shown below:

Static pressure effect = +/-0.5% of span per 1,000 psig

The static pressure effect is a consequence of calibrating a differential pressure instrument at low static pressure conditions but operating it at high static pressure conditions.


If the static pressure effect is considered a bias by the manufacturer, the operating manual usually provides instructions for calibrating the instrument to read correctly at the normal expected operating pressure, assuming that the calibration is performed at low static pressure conditions. This normally involves changing the zero and span adjustments by a manufacturer-supplied correction factor at the low-pressure (calibration) conditions so that the instrument will provide the desired output signal at the high-pressure (operating) conditions. The device could also be calibrated at the expected operating pressure to reduce or eliminate this effect, but this is not normally done because of the higher calibration cost and complexity.

Some static pressure effects act as a bias rather than randomly.

For example, some instruments are known to read low at high static pressure conditions. If the calibration process does not correct the bias static pressure effect, the uncertainty calculation needs to include a bias term to account for this effect.

Ambient pressure variation can cause some gauge and absolute pressure instruments to shift up or down scale depending on whether the ambient pressure increases above or decreases below atmospheric pressure. Normally, this effect is only significant on 1) applications measuring very small pressures or 2) applications in which the ambient pressure variations are significant with respect to the pressure being measured. Gauge pressure instruments can be sensitive to this effect when the reference side of a sensing element is open to the atmosphere. If the direction of the ambient pressure change is known, the effect is a bias. If the ambient pressure can randomly change in either direction, the effect is considered random.

C.3.8 Overpressure Effect (OPE)

In cases where an instrument can be over-ranged by the process pressure without the process pressure exceeding system design pressure, an overpressure effect must be considered. Overpressure effects are often considered in low-range monitoring instruments in which the reading is expected to go off-scale high as the system shifts from shutdown to operating conditions. Some pressure switches may also be routinely over-ranged during normal operation.

The overpressure effect is normally considered random and is usually expressed as a percent uncertainty as a function of the amount of overpressure. The contribution of the overpressure effect on instrument uncertainty would only apply after the instrument has been over-ranged.


C.3.9 Deadband

Deadband represents the range within which the input signal can vary without experiencing a change in the output. The ideal instrument would have no deadband and would respond to input changes regardless of their magnitude. Instrument stressors can change the deadband width over time, effectively requiring a greater change in the input before an output response is achieved.

The vendor's instrument accuracy specification might include an allowance for deadband or it might be considered part of hysteresis (included in vendor accuracy). Recorders generally have a separate allowance for deadband to account for the amount the input signal can change before the pen physically responds to change.

Pressure switches are also susceptible to deadband. For this reason, when a pressure switch setpoint is near the upper or lower end of span, it should be confirmed that the setpoint allows for deadband. In extreme cases, the pressure switch might reach a mechanical stop, with the deadband not allowing switch actuation.

C.3.10 Measuring and Test Equipment Uncertainty

Measuring and test equipment (M&TE) uncertainty is defined in Section 2.2 and further described in Appendix H.

C.3.11 Turndown Ratio Effect

If a transmitter has an adjustable span over some total range, the uncertainty expression may require adjustment by the turndown factor. For example, a transmitter may have a range of 3,000 psig with an uncertainty of 2% of the total range, sometimes referred to as the upper range limit (URL). If the span is adjusted such that only 1,000 psig of the entire 3,000 psig range is used, the transmitter has not somehow become more accurate. The 2% uncertainty of the 3,000 psig range is 60 psig, which equates to a 6% uncertainty for the 1,000 psig span. Transmitters with variable spans typically define performance specifications in terms of the total range and the calibrated span.

If the performance specifications are quoted as a percent of full span (FS), the uncertainty expression will not require an adjustment for the turndown factor.
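The turndown adjustment above can be sketched as follows (the function name is illustrative; the values are those from the example), converting an uncertainty quoted as a percent of the URL into a percent of the calibrated span.

```python
# A minimal sketch (values from the example above): convert an uncertainty quoted as a
# percent of the upper range limit (URL) into a percent of the calibrated span.

def turndown_adjusted(pct_of_url, url, calibrated_span):
    uncertainty_in_units = (pct_of_url / 100.0) * url        # e.g., psig
    return 100.0 * uncertainty_in_units / calibrated_span    # % of calibrated span

print(turndown_adjusted(2.0, 3000.0, 1000.0))  # 6.0 (% of span); 60 psig either way
```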


C.3.12 Power Supply Effects (PSE)

Power supply effects are the changes in an instrument's input-output relationship due to the power supply stability. For 2-wire current loop systems, AC supply variations must be considered for their effects on the loop's DC power supply. The consequential DC supply variations must then be considered for their effects on other components in the series loop, such as the transmitter.

Using the manufacturer's specifications, the power supply effect is typically calculated as follows:

PSE = (pss)(vpse)

where,
PSE = Power supply effect to assume for the uncertainty calculation
pss = Power supply stability
vpse = Vendor's power supply effect expression

Power supply stability refers to the variation in the power supply voltage under design conditions of supply voltage, ambient environment conditions, power supply accuracy, regulation, and drift. This effect can be neglected when it can be shown that the error introduced by power supply variation is less than 10% of the instrument's reference accuracy.

Harmonic distortion on the electrical system can also contribute to power supply uncertainty.
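The following is a minimal sketch of the PSE expression and the 10%-of-reference-accuracy screening criterion above; all numeric values and names are illustrative assumptions, not vendor data.

```python
# A minimal sketch (illustrative values): power supply effect as the product of the supply
# variation and the vendor's power supply effect coefficient, screened against 10% of the
# reference accuracy.

def power_supply_effect(supply_variation_volts, vendor_effect_pct_per_volt):
    return supply_variation_volts * vendor_effect_pct_per_volt   # % of span

pse = power_supply_effect(supply_variation_volts=1.0, vendor_effect_pct_per_volt=0.005)
reference_accuracy = 0.25   # % of span, illustrative
print(pse, "negligible" if pse < 0.1 * reference_accuracy else "include in the analysis")
```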


C.3.13 Indicator Reading Error (IRE)

An analog indicator can only be read to a certain accuracy. The uncertainty of an indicator reading depends on the type of scale and the number of marked graduations (see Section 4.4.7.1). An analog indicator can generally be read to a resolution of 1/2 of the smallest division on the scale. Figure C-9 shows an example of a linear analog scale. As shown, the indicator would be read to 1/2 of the smallest scale division. Anyone reading this scale is able to confirm that the indicator pointer is between 40 and 45. In this case, the estimated value would be 42.5. If an imaginary line is mentally drawn at the 1/2-of-smallest-scale-division point, an operator can also tell whether the pointer is on the high side or the low side of this line. Therefore, the uncertainty associated with this reading would be +/- 1/4 of the smallest scale division, or +/-1.25 for the example shown in Figure C-9. Notice that this approach first defines the resolution to which the indicator can be read (1/2 of the smallest scale division), with an uncertainty of +/- 1/4 of the smallest scale division about this reading resolution. In terms of an uncertainty analysis, it is not the reading resolution, but the uncertainty of the resolution, that is of interest.
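As a minimal sketch of the linear-scale reading treatment just described (the function name is illustrative), the readability is 1/2 of the smallest division and the reading uncertainty is +/-1/4 of the smallest division.

```python
# A minimal sketch: reading resolution and reading uncertainty for a linear analog scale.

def indicator_reading_error(smallest_division):
    resolution = smallest_division / 2.0    # value the reading can be estimated to
    uncertainty = smallest_division / 4.0   # +/- uncertainty about that estimate
    return resolution, uncertainty

print(indicator_reading_error(5.0))  # (2.5, 1.25), as in the Figure C-9 example
```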

Per Section 4.3.3, the AFT and ALT values are rounded to the next 1/2 minor marking, which typically eliminates the need to include the 1/4-of-minor-division uncertainty. Also, for cases where calibration procedures require reverse calibration of devices, readability of the end device does not need to be taken into account. However, readability of M&TE may need to be considered.

Figure C-9 Analog Scale (linear scale marked from 0 to 100)

Type of Scale and Discussion:

Analog Linear: An uncertainty of +/- 1/4 of the smallest division should be assigned as the indication reading error, if applicable. See above discussion.

Analog Logarithmic or Exponential: Logarithmic or exponential scales allow the presentation of a wide process range on a single scale. Radiation monitoring instruments commonly use an exponential scale. An uncertainty of +/- 1/4 of the specific largest division of interest should be assigned as the indication reading uncertainty. This requires an understanding of where on the scale the operators will be most concerned regarding the monitored process, if applicable. See above discussion.

Analog Square Root: Square root scales show the correlation of differential pressure to flow rate. An uncertainty of +/- (1/4 of the specific largest division of interest)% should be assigned as the indication reading uncertainty. This requires an understanding of where on the scale the operators will be most concerned regarding the monitored process, if applicable. See above discussion.

Digital: The reading uncertainty is the uncertainty associated with the least significant displayed digit, which is usually negligible as an indication reading uncertainty. The digital display must be evaluated to confirm that the reading uncertainty is insignificant, if applicable. See above discussion.

Analog Recorder: Analog recorders have the same reading uncertainties as analog indicators. The only potential difference is that the indicator scale is fixed in place, whereas the recorder chart paper can be readily replaced with paper having a different scale. The chart paper used for the recorder should be checked to verify that the indication reading uncertainty can be estimated, if applicable. See above discussion.


C.3.14 Seismic Effects

Two types of seismic effects should be considered: 1) normal operational vibration and minor seismic disturbances, and 2) design basis seismic events in which certain equipment performs a safety function.

The effects of normal vibration (or a minor seismic event that does not cause an unusual event) are assumed to be calibrated out on a periodic basis and are considered negligible. Abnormal vibrations (vibration levels that produce noticeable effects) and more significant seismic events (severe enough to cause an unusual event) are considered abnormal conditions that require maintenance or equipment modification.

Design basis seismic events can cause a shift in an instrument's output. For the equipment that must function during and following a design basis seismic or accident event, the environmental qualification test report should be reviewed to obtain the bounding uncertainty. The seismic effect may be specified as a separate effect or, in some cases, may be included in the overall environmental allowance. A seismic event coincident with a LOCA is a design basis event per USAR 15.6.5. However, per USAR 15.6.5.1.1, there are no realistic, identifiable events which would result in a pipe break inside the containment of the magnitude required to cause a loss-of-coolant accident coincident with a safe shutdown earthquake. Therefore, each setpoint calculation should consider the effects of a seismic event and loss-of-coolant accident independently to establish the worst-case scenario for the instrumentation being evaluated. Consideration should be given to the accident that the equipment is required to mitigate. For example, it is not necessary to impose LOCA conditions as the worst case if no credit is taken to mitigate a LOCA condition (e.g., a trip function may actuate prior to any harsh environment, so a LOCA calculation is not required; whereas indication may be required during and after a LOCA, so both the seismic and LOCA values would be calculated and the worst value used). This consideration should be documented in the calculation.

For well-designed and properly mounted equipment, the seismic effect will often contribute no more than +0.5% to the overall uncertainty. This effect can be considered random and can be included within the uncertainty expression as a random term.

Including a small allowance for seismic effects is considered a conservative, but not required, approach to the uncertainty analysis.


C.3.15 Environmental Effects - Accident

The environmental allowance is intended to account for the effects of high temperature, pressure, humidity, and radiation that might be present during an accident, such as a LOCA or HELB event. This allowance should include an evaluation of the timing of the event, including the environmental condition existing at the time the function is designed to trip (see the example in Section C.3.14 above). Some manufacturers do not distinguish the uncertainties due to each of the accident effects. In such cases, the accident uncertainty may be a single +/- value given for all accident effects.

Qualification reports for safety-related instruments normally contain tables, graphs or both, of accuracy before, during and after radiation and steam/pressure environmental and seismic testing. Many times, manufacturers summarize the results of the qualification testing in their product specification sheets. More detailed information is available in the equipment qualification report. The manufacturer's specification sheet tends to be very conservative, as the worst-case performance result is normally presented.

Because of the limited sample size typically used in qualification testing, the conservative approach to assigning uncertainty limits is to use the bounding worst-case uncertainties. It is also recommended that discussions with the instrument manufacturer be conducted to gain insight into the behavior of the uncertainty (should it be considered random or bias?). This is important because if the uncertainty is random and of approximately the same magnitude as other random uncertainties, then SRSS methods might be used to combine the accident-induced uncertainty with other uncertainties. The environmental allowance should be of approximately the same size as the other random uncertainties if it is combined with other random terms in an SRSS expression. This consideration comes from the central limit theorem, which allows the combination of uncertainties by SRSS as long as they are of approximately the same magnitude. If not, then the accident uncertainty should be treated as an arbitrarily distributed uncertainty.

Using data from the qualification report in place of performance specifications, it is often possible to justify the use of lower uncertainty values that may occur at reduced temperatures or radiation dose levels. Typically, qualification tests are conducted at the upper extremes of simulated accident environments so that the results apply to as many plants as possible, each with different requirements. Therefore, it is not always practical or necessary to use the results at the bounding environmental extremes when the actual requirements are not as limiting. Some cautions are needed, however, to preclude possible misapplication of the data:



1. The highest uncertainties of all the units tested at the reduced temperatures or dose should be used. A margin should also be applied to the tested magnitude of the environmental parameter consistent with Institute of Electrical and Electronics Engineers 323-1975.
2. The units tested should have been tested under identical or equivalent conditions and test sequences.
3. If data for a reduced temperature is used, ensure that sufficient "soak-time" existed prior to the readings at that temperature to ensure sufficient thermal equilibrium was reached within the instrument case.

The requirement in Item (1) above is a conservative method to ensure that bounding uncertainties are used in the absence of a statistically valid sample size. Item (2) above is an obvious requirement for validity of this method. Item (3) ensures that sufficient thermal lag time through the instrument case is accounted for in drawing conclusions of performance at reduced temperatures. In other words, if a transmitter case has a one-minute thermal lag time, then ensure that the transmitter was held at the reduced temperature at least one minute prior to taking readings.

Generally, the worst uncertainty is used from either the qualification report or the performance specification, unless more consideration is needed to preserve the existing AV or setpoint.

C.3.16 As-Left Tolerance Specification

The device as-left tolerance establishes the required accuracy band within which a device or group of devices must be calibrated when periodically tested. If an instrument's as-found value is within the as-left tolerance, no further recalibration is required for the instrument, and calculations should assume that an instrument might be left anywhere within this tolerance.

See Section 4.3.3 for establishing the calibration as-left tolerance for a device. For all existing CPS instruments, an as-left tolerance is already specified by the applicable surveillance calibration procedure. CPS typically calibrates non-safety-related instruments to a generic calibration procedure with tolerances per the Instrument Data Sheet (IDS). This as-left tolerance is recommended for use in the calculation unless other conditions suggest that a different tolerance is warranted. For example, a tighter tolerance is easily achievable for most electronic equipment, and a tighter tolerance might provide needed margin for a setpoint calculation. Conversely, establishing a tighter tolerance than is achievable per the manufacturer's specifications ensures that the instrument will routinely be found out of calibration.


The as-left tolerance should be specified for all instruments covered by the associated calculation, even if the as-left tolerances are unchanged from the values already specified in the applicable calibration procedures. The as-left tolerance is treated as a random term in the uncertainty analysis.

For all instrument loops, the loop as-left tolerance is calculated per Section 4.4.5.

C.3.17 As-Found Tolerance Specification

The device as-found tolerance establishes the limit of error the defined devices can have and still be considered functional. The as-found tolerance will never be less than the as-left tolerance.

The purpose of the loop as-found tolerance is to establish a level of drift within which the instrument loop is still clearly functional, but not so large that an allowable value determination is required. An instrument or loop found outside the as-left tolerance but still within the as-found tolerance requires a recalibration but no further evaluation or response.

The as-found tolerance is generally defined to include the effects of M&TE, ALT, and vendor drift. Reference Section 4.3.3 for calculating the as-found tolerance.

The as-found tolerance should be specified for all instruments covered by the associated calculation.

For all instrument loops, the loop as-found tolerance is calculated per Section 4.4.5. For Technical Specifications instruments, the loop as-found tolerance, as defined at CPS, impacts the setpoint determination.

C.4 Uncertainty Analysis Methodology

An uncertainty calculation establishes a statistical probability and confidence level that bounds the uncertainty in the measurement and signal processing of a parameter such as system pressure or flow. Knowledge of the uncertainty in the process measurement is then used to establish an instrument setpoint or provide operators with the expected limits for process measurement indication uncertainty.

The basic approach used to determine the overall uncertainty for a given channel or module is to combine all terms that are considered random using the Square Root of the Sum of the Squares (SRSS) methodology, then adding to the result any terms that are considered nonrandom.


Note that the bias terms do not all operate in the same direction.

Although it could be argued that some bias terms operate in opposite directions and therefore should be somewhat self-canceling, the standard practice is to treat the positive and negative channel uncertainty separately, if bias terms are present.

The reason for this approach is based on generally not knowing the actual magnitude of the bias terms at a particular instant; the bias terms are defined at bounding levels only. Accordingly, the maximum positive uncertainty is given by:

Maximum positive channel uncertainty = +(SRSS combination of the random uncertainty terms) + (algebraic sum of the positive bias terms)

The maximum negative channel uncertainty is determined similarly, using the negative SRSS result and the algebraic sum of the negative bias terms.

In the determination of the random portion of an uncertainty, situations may arise where two or more random terms are not totally independent of each other, but are independent of the other random terms (e.g., two instruments calibrated together as a rack). This dependent relationship can be accommodated within the SRSS methodology by algebraically summing the dependent random terms prior to calculating the SRSS. The uncertainty expression would be similar for all random terms for both devices developed per Section 4.3.1.
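The following is a minimal sketch of the combination approach described in this section; the function name and all numeric values are illustrative assumptions, not taken from a CPS calculation.

```python
# A minimal sketch (illustrative values): combine random terms by SRSS, algebraically
# summing any dependent random terms first, and add bias terms outside the radical,
# keeping the positive and negative channel uncertainties separate.
from math import sqrt

def channel_uncertainty(independent_random, dependent_groups=(), pos_bias=(), neg_bias=()):
    terms = list(independent_random) + [sum(group) for group in dependent_groups]
    random_part = sqrt(sum(t ** 2 for t in terms))
    return +random_part + sum(pos_bias), -random_part - sum(neg_bias)

# e.g., four independent random terms (% of span), one pair of rack-calibrated devices
# treated as dependent, and a single +0.5% bias term:
plus, minus = channel_uncertainty([0.5, 1.12, 0.34, 0.25], [(0.2, 0.2)], pos_bias=[0.5])
print(round(plus, 2), round(minus, 2))  # approximately +1.86 / -1.36
```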

C.5 Propagation of Uncertainty through Modules

If signal conditioning modules such as scalars, summers, square root extractors, multipliers, or other similar devices are used in the instrument channel, the module's transfer function should be accounted for in the instrument uncertainty calculation. The uncertainty of a signal conditioning module's output can be determined when 1) the uncertainty of the input signal, 2) the uncertainty associated with the module, and 3) the module's transfer function are known. Equations have been developed to determine the output signal uncertainties for several types of signal conditioning modules. Refer to Appendix K for additional information.

C.6 Calculating Total Channel Uncertainty

The calculation of an instrument channel uncertainty should be performed in a clear, straightforward process. The actual calculation can be completed with a single loop equation containing all potential uncertainty values or by a series of related term equations. Either way, a specific channel calculation should be laid out to coincide with a channel's layout from process measurement to final output module or modules, using the formulas described previously in Sections 4.4.9 and 4.4.12 (setpoints) and Section 4.4.8 (indication).


Depending on the loop, the uncertainty may be calculated for a setpoint(s), indication function, or control function. In some cases, all three functions may be calculated. Because each function will typically use different end-use devices, the channel uncertainty is calculated separately for each function.

Components for these equations are generally built as follows:

1. Per Section 4.3.1, an instrument loop may contain several discrete instruments (modules) that process the measurement signal from sensor to display, or from sensor to trip unit. An uncertainty calculation would determine the expected uncertainty for the selected instrument loop, and each discrete component could have several uncertainty terms contributing to the overall expression. The overall uncertainty calculation for the device (Ai) may contain any or all of the uncertainty terms described in Section C.3 above (or others).
2. Per Section 4.4.1, AL is determined from analysis of loop device error (Ai). All individual device error must be determined on the basis of the environmental conditions (normal, trip, post accident, etc.) applicable to the event and function time for which the loop accuracy applies. Once all the accuracy error contributions for a particular instrument are identified, they should be combined using the SRSS method to determine total device accuracy. In performing the SRSS combination, the individual level of confidence of each term (sigma level) should be accounted for to ensure the resultant device accuracy error is a 2-sigma value.
3. CL is determined from two basic components: the As-Left Tolerance (ALT) and the Measuring and Test Equipment (M&TE) uncertainty.

Per Appendix H, M&TE error consists of the error associated with each calibration tool or device used to calibrate the individual devices in the loop (including reading error) and the error associated with the Reference Standards used to calibrate the calibration tools.

Per Appendix I, all potential errors from M&TE are controlled by 100% testing and can therefore be assumed as 3 sigma values.

4. Per Section 4.4.4, DL is determined from analysis of loop device drift error. All individual device drift error must be determined on the basis of the environmental conditions (normal, trip, post accident, etc.) applicable to the event and function time for which the loop accuracy applies and adjusted to a common drift interval. Once the drift error contribution for a particular instrument is identified, it is combined with each loop device drift term using the SRSS method to determine total loop drift. In performing the SRSS combination, the individual level of confidence of each term (sigma level) should be accounted for to ensure the resultant drift error is a 2-sigma value. DL is determined in accordance with Section 4.4.4.



5. Per Sections 4.4.6, C.3.1, and C.3.2, PMA and PEA are established as uncertainties to account for measurement errors, which lie outside the normal calibration bounds of the channel.
6. Per Section 4.4.8.2, the biases for all modules should be accounted for and combined outside the square root radical.


Table C-1
Channel Uncertainty/Setpoint Calculation Checklist
(For each task, record Completed? Yes or No.)

(1) Are the purpose and objectives clearly defined?
(2) Are standard assumptions used as appropriate, and are any new assumptions clearly justified and/or identified as requiring confirmation?
(3) Are inputs/outputs/references appropriately used, identified to the latest revisions, and attached if required?
(4) Diagram the instrument channel.
(5) Identify functional requirements, including actuations and any EOP setpoint requirement.
(6) Identify operating times for functions.
(7) Identify the environment associated with functions during the defined operating times.
(8) Identify the limiting environment and function.
(9) Identify the Process Measurement Accuracy (PMA) and Primary Element Accuracy (PEA) associated with each function, and identify all drawings/walkdowns/other references needed to calculate the values.
(10) Identify biases due to linear approximations of nonlinear functions (RTDs). Determine if the biases are of concern over the region of interest for the setpoint.
(11) Identify any modules with non-unity gains.
(12) Identify the transfer function for each module with a non-unity gain.
(13) For each module, identify normal environment uncertainty effects, as applicable:
    Vendor Accuracy (VA)
    Vendor Drift (VD)
    Temperature effects (ATE)
    Radiation effects (RE)
    Power supply effects (PSE)
    Static pressure effects (SPE)
    Overpressure effects (OPE)
    Deadband (DB)
    Measuring and test equipment uncertainty (MTE)
    Turndown Ratio Effect (TD)
    Indicator Reading Error (IRE)
(14) For each module, identify harsh environment uncertainty effects, as applicable:
    Accident temperature effects (ATE)
    Accident radiation effects (RE)
    Humidity effects (HE)
    Seismic effects (SE)
    Worst case between seismic and harsh environment used to establish AV and NTSP
(15) For electrical penetrations, splices, terminal blocks, or sealing devices in a harsh environment, are current leakage effects (IRA) determined?
(16) Classify each module and process effect as random or bias. Determine if any of the random terms are dependent. Combine dependent random terms algebraically before squaring in the SRSS.
(17) Combine random effects for each module by SRSS. Add bias effects algebraically outside the SRSS.
(18) If the instrument channel has a module with non-unity gain, the total uncertainties in the input signal to the module must be determined, the module transfer function effect on this uncertainty calculated, and the result combined with the non-unity gain module and downstream module uncertainties to determine total channel uncertainty.
(19) Have the ALT and AFT been appropriately identified for each device?
(20) Has M&TE been appropriately identified and have the values been correctly calculated, using the guidance of calculation IP-C-0089 (Ref. 5.30), as a minimum?
(21) Does the drift interval meet or exceed the calibration interval for each device?
(22) Are the appropriate equations used for the type of calculation (i.e., setpoint or indication)?
(23) Have values such as AV, NTSP, ALT, AFT, etc., been converted to the units required by the calibration procedure?
(24) Have the existing AV and setpoint been preserved and, if not, have all efforts been made to minimize the terms that affect calculation of the AV and NTSP?
(25) Do the conclusions verbalize that the objectives were met, and are they graphically presented?
(26) Does Attachment 1 identify the head correction for the loop and identify all drawings/walkdowns/other references required to calculate the head correction?
(27) Does Attachment 2 present all the information required by C&I maintenance and calibration procedures? Examples are:
    M&TE model and ranges or equivalent identified
    AV, NTSP, ALT, AFT given in the appropriate units and precision required by calibration procedures
(28) Have the cover pages and table of contents been prepared correctly?

C.7 Nominal Trip Setpoint Calculation

An uncertainty calculation defines the instrument loop uncertainty through a specific arrangement of instrument modules. This calculation is then used to determine an instrument setpoint based upon the safety parameter of interest. The relationship between the setpoint, the uncertainty analysis, and normal system operation is shown in Figure C-10.

[Figure C-10, Setpoint Relationships: diagram showing, from top to bottom, the process safety limit; the analytical limit (separated from the safety limit by analysis margin, transient response, and modeling error allowances); process uncertainties (accident environmental effects, process measurement effects, primary element effects); the allowable value; the nominal trip setpoint with its as-found and as-left tolerances (ALT, M&TE, drift) and LER avoidance margin; the spurious trip avoidance margin; the operating limit; and the normal operating range, including transients, down to the normal operating value.]

The information provided in Figure C-10 prompts several observations:

  • The relationships shown can vary between applications or plants and are provided for illustrative purposes only.
  • The setpoint has a nominal value. The upper and lower limits for the setpoint shown represent the allowed AFT & ALT tolerances for the setpoint. Typically, an instrument found within the band defined by the as-left tolerance does not require an instrument reset.
  • The setpoint relationship shown assumes that the process increases to reach the setpoint. If the process decreased towards the setpoint, the relationships shown in Figure C-10 would be reversed around the setpoint.
  • The as-found tolerance is wider than the as-left tolerance and accounts for expected drift or certain other normal uncertainties during normal operation. Instruments found within the as-found tolerance, but outside the as-left tolerance require resetting with no further action. Instruments found outside the as-found tolerance require resetting and an evaluation to determine if the loop is functioning properly.

  • Safety limits are established to protect the integrity of systems or equipment that guard against the uncontrolled release of radioactivity. Process limits may also be established to protect against the failure, catastrophic or otherwise, of a system.

  • Analytical limits are established to ensure that the safety limit is not exceeded. The analytical limit includes the effects of system response times or actuation delays to ensure that the safety limit is not exceeded.
  • The allowable value is a value at or before which the trip setpoint should function when tested periodically, accounting for instrument drift or other uncertainties associated with the test, in order to protect the analytical limit. A calibrated or loop-verified setpoint found within the allowable value region, but outside the instrument's as-found tolerance, is usually considered acceptable with respect to the analytical limit and allowable value. The instrument must be reset to return it within the allowed as-left tolerance. A setpoint found outside its as-found tolerance but within the allowable value should be evaluated for functionality. A setpoint found outside the allowable value region requires an evaluation for operability. Normally, an allowable value is assigned to Technical Specifications parameters that also have an analytical limit.



  • The trip setpoint is the desired actuation point that ensures, when all known sources of measurement uncertainty are included, that an analytical limit is not exceeded. Depending on the setpoint, additional margin may exist between the trip setpoint and the analytical limit. The trip setpoint is selected to ensure the analytical limit is not exceeded while also minimizing the possibility of inadvertent actuations during normal plant operation.


APPENDIX D
EFFECT OF INSULATION RESISTANCE ON UNCERTAINTY

D.1 Background

Under the conditions of high humidity and temperature associated with either a Loss of Coolant Accident (LOCA) or high energy line break (HELB), the insulation resistance (IR) may decrease in instrument loop components such as cables, splices, connectors, containment penetrations, and terminal blocks. A decrease in IR results in an increase in instrument loop leakage current and a corresponding increase in measurement uncertainty of the process parameters, defined in Section 2.2 as IRA.

Degraded IR effects during a LOCA or HELB are a concern for instrumentation circuits due to the low signal current levels. A decrease in IR can result in substantial current leakage that should be accounted for in instrument setpoint and post accident monitoring uncertainty calculations. The NRC expressed concern with terminal block leakage currents in Information Notice 84-47. More recently, the NRC stated in Information Notice 92-12 (Ref. 5.12) that leakage currents should be considered for certain instrument setpoints and indication.

This Appendix provides an overview of IR effects on standard instrumentation circuits and provides examples of the effect of IRA on instrument uncertainty. Specifically, this Appendix addresses the following:

  • Qualitative effects of temperature and humidity on IR
  • Analytical methodology for evaluating IR effects on instrument loop performance
  • Technical information needed to perform an evaluation
  • Application of results to uncertainty calculations
  • Consideration of inherent margins in the analytical methodology

D.2 Environmental Effects on Insulation Resistance

IR is affected by changes in the environment. ASTM Standard D257-91 (Ref. 5.31) provides a discussion of the factors that affect the resistance of a material. This ASTM standard discusses material properties in general; it does not limit itself to cables or any other type of particular construction. Factors that affect the resistance or the ability to measure resistance include:

  • Temperature
  • Humidity
  • Time of electrification (electrical measurement of resistance)
  • Magnitude of voltage
  • Contour of specimen
  • Measuring circuit deficiencies
  • Residual charge

Temperature and humidity effects are of particular interest for circuits that may be exposed to an accident harsh environment. The resistance of an organic insulating material changes exponentially with temperature. Often, this variation can be represented in the form:

R = B e^(-m/T)

where,
R = Resistance of an insulating material
B = Proportionality constant
m = Activation constant
T = Absolute temperature in degrees Kelvin

One manufacturer predicts a similar exponential variation of IR with respect to temperature for their cable; the manufacturer provides the following equation for determining IR at a given temperature:

IR = (4 x 10^15) log(D/d) e^(-0.079 T)

where,
IR = Calculated cable insulation resistance, megohms per 1,000 ft
T = Temperature, degrees Kelvin
d = Diameter of conductor
D = Diameter of conductor and insulation

Example D-1

Using the above expression, a sample IR will be calculated at 300 °F (422 K). Cable heatup due to current flow will be neglected for instrument cables since they carry no substantial current. Typical values for d and D are 0.051 in. and 0.111 in., respectively, for a 16 AWG conductor.

IR = (4 x 10^15) log(0.111/0.051) e^(-0.079 x 422) = 4.5 megohms per 1,000 ft

Using the above equation, a graph of the cable IR variation with temperature is provided in Figure D-1. This figure is illustrative only and does not necessarily apply to other configurations or materials.

[Figure D-1, Typical Cable Insulation Resistance Variation with Temperature: calculated insulation resistance (megohms per 1,000 ft) plotted against temperature in degrees Fahrenheit.]
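The manufacturer's expression, as reconstructed above, can be evaluated directly. The short Python sketch below (illustrative only; it assumes the Example D-1 cable dimensions and does not apply to other cable constructions) reproduces the Example D-1 result and can generate the data behind Figure D-1:

from math import log10, exp

def cable_ir_megohm_per_1000ft(temp_kelvin, d_in=0.051, D_in=0.111):
    """Manufacturer's IR model from Section D.2 (as reconstructed above).

    IR = (4 x 10^15) * log10(D/d) * e^(-0.079 * T), in megohms per 1,000 ft,
    with T in kelvin and d, D the conductor and insulation diameters.
    """
    return 4e15 * log10(D_in / d_in) * exp(-0.079 * temp_kelvin)

# Example D-1: 300 degrees F is approximately 422 K
print(round(cable_ir_megohm_per_1000ft(422), 1))  # ~4.5 megohms per 1,000 ft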

Insulation resistance of solid dielectric materials decreases with increasing temperature and with increasing humidity. Volume resistance of the insulating material is particularly sensitive to temperature changes. Surface resistance changes widely and very rapidly with humidity changes. In both cases, the change in IR occurs exponentially.

ASTM D257, Reference 5.31, discusses temperature and humidity as a combined effect on IR. In some materials, a change from 25 °C to 100 °C may change IR by a factor of 100,000 due to the combined effects of temperature and humidity. The effect of temperature alone is usually much smaller.

IR is a function of the volume resistance as well as the surface resistance of the material. In the case of an EQ test that includes steam and elevated temperatures, the minimum IR is expected near the peak of the temperature transient in a steam environment.

Condensation of steam and chemical spray products will reduce the surface resistance substantially.


D.3 Analytical Methodology

D.3.1 Floating Instrument Loops (4 - 20 mA or 10 - 50 mA)

Instrument loops for pressure, flow or level measurement normally use a 4 to 20 mA (or 10 to 50 mA) signal. The instrument circuit typically consists, as a minimum, of a power supply, transmitter (sensor), and a precision load resistor from which a voltage signal is obtained for further signal processing. A typical current loop (without IR current leakage) is shown in Figure D-2.

Figure D-2 Typical Instrument Circuit

In a current loop, the transmitter adjusts the current flow by varying its internal resistance, RT, in response to the process. The transmitter functions as a controlled current source for a given process condition. The signal processor load resistor, RL, is a fixed precision resistor. Under ideal conditions, the voltage drop across RL is directly proportional to the loop current and normally provides the internal process rack signal.


If current leakage develops in an instrument loop due to a degraded insulation resistance, the path is represented as a shunt resistance, Rs, in parallel to the transmitter as shown in Figure D-3.


Figure D-3 Instrument Circuit with Current Leakage Path

Note that Figure D-3 applies only to floating instrument loops. In a floating instrument loop, the signal is not referenced to instrument ground. Thus, even if there is a low IR between cables or other instrument loop components to ground, the effect on instrument loop performance will be negligible as long as there is not a return path to ground for current flow. In this case, the only potential current leakage path is from conductor to conductor across the transmitter as shown in Figure D-3. See Section D.3.2 for the necessary analytical methodology if the signal negative is grounded.


Leakage current disrupts the one-to-one relationship between the transmitter current and load current, such that a measurement error is introduced at the load. For a standard 4 - 20 mA (or 10 - 50 mA) instrument loop, the error is always in the higher-than-actual direction, meaning that the load current will be higher than the transmitter output current. The magnitude of the error in percent span, Is(%), caused by leakage is defined as the ratio of leakage current to the 16 mA span of a 4 - 20 mA loop, or,

Is(%) = (Is/16 mA) x 100

where Is = shunt current. From Figure D-3, Is can be expressed in terms of voltage, current and resistance in the current loop consisting of a power supply, load resistance and IR (shunt resistance) as follows:

Vp = IL RL + Is Rs

where,
Vp = Power supply voltage
IL = Current through the load resistor
Is = Shunt current
RL = Rack load resistance
Rs = Equivalent shunt (IR) resistance

Solving for Is,

Is = (Vp - IL RL)/Rs

Converting mA to amps and normalizing for a 16 mA span yields the following result:

Is(% span) = [(Vp - IL RL)/(Rs x 0.016)] x 100

The error due to current leakage is inversely proportional to the IR, or Rs in the above equation. As Rs decreases, the loop error due to current leakage increases. Note that the equation to determine Is has been simplified to provide an error in terms of percent span.

For this case, the total instrument span is 16 mA for a 4 to 20 mA instrument loop.

Rs is an equivalent shunt resistance obtained from several parallel shunt paths. A typical circuit inside containment, showing all potential parallel current leakage paths, is shown in Figure D-4.



Figure D-4 Potential Current Leakage Paths

As depicted in Figure D-4, the current leakage paths include the following:

RSP1  Splice at sensor
RC    Field cable
RSP2  Splice between field cable and containment penetration
RP    Containment penetration

Figure D-4 is intended to provide a feel for the various current leakage paths that might be present inside containment or a steam line break area; however, it is not necessarily complete. The containment penetrations might include the use of an extension (or jumper) cable to accomplish the transition from the field cable to the electrical penetration pigtail. Additional cables and splices may also be installed in the circuit, and each additional component should be included in the model.

Example D-2

Suppose we want to determine the IR that will affect the instrument loop uncertainty by 5%. The instrument loop conditions that yield the worst-case result for this example are as follows:

Vp = 50 VDC (highest typical loop power supply voltage)

IL =4 mA (0.004 A) (lowest possible loop current)

RL =250 ohm (lowest typical total load resistance)


Using the last equation from Section D.3.1 above,

5 = [(50 - (0.004 x 250))/(Rs x 0.016)] x 100

Rs = 61,250 ohms

For a 10 - 50 mA loop, the result is as follows:

5 = [(50 - (0.010 x 100))/(Rs x 0.040)] x 100

Rs = 24,500 ohms

The interpretation of the above result is that any combination of current leakage paths with an equivalent IR of 61,250 ohms can cause an error of 5% of span in a 4 - 20 mA loop. Note that the above example is based on a worst-case configuration. Any decrease in power supply voltage, or an increase in total load resistance or current, will result in a smaller percent error for a given shunt resistance. Note that leakage current is a bias, causing the load current to always be higher than the transmitter current.
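The worst-case relationship between equivalent shunt resistance and percent-span error can be checked numerically. This Python sketch simply rearranges the percent-span equation from Section D.3.1 using the Example D-2 values (illustrative only):

def leakage_error_pct_span(v_supply, i_loop_amps, r_load, r_shunt, span_amps=0.016):
    """Percent-span error from IR leakage in a floating current loop."""
    return (v_supply - i_loop_amps * r_load) / (r_shunt * span_amps) * 100

def shunt_for_error(v_supply, i_loop_amps, r_load, error_pct, span_amps=0.016):
    """Equivalent shunt resistance (ohms) that produces a given percent-span error."""
    return (v_supply - i_loop_amps * r_load) / (error_pct / 100 * span_amps)

# Example D-2: 50 VDC supply, 4 mA loop current, 250 ohm load, 5% span error
print(round(shunt_for_error(50, 0.004, 250, 5)))                 # ~61,250 ohms (4-20 mA)
print(round(shunt_for_error(50, 0.010, 100, 5, 0.040)))          # ~24,500 ohms (10-50 mA)
print(round(leakage_error_pct_span(50, 0.004, 250, 61250), 1))   # ~5.0% span (check)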

D.3.2 Ground Referenced Instrument Loops (4 - 20 mA or 10-50 mA)


The methodology provided in Section D.3.1 can be used if the signal negative is connected to ground; however, the circuit model is different in this case since there are more current leakage paths than for a floating circuit. As discussed in Section D.3.1, a floating circuit is not ground-referenced; therefore, current leakage to ground is not likely since there is not a return path for current flow at the instrument power supply. In the case of an instrument loop with the signal current grounded at the instrument power supply, leakage paths to ground are possible since there is a return path to ground. This configuration is shown in Figure D-6.




Figure D-6 Current Leakage Paths for a Ground-Referenced Instrument Loop

As shown in Figure D-6, the current leakage paths are as follows:

RS1  Conductor-to-conductor equivalent IR per Section D.3.1
RS2  Positive conductor to ground IR equivalent resistance
RS3  Negative conductor to ground IR equivalent resistance

All of the above terms are parallel equivalent resistances that are calculated from cables, connectors, splices, etc., in accordance with the equations from Section D.3.1. Note that current leakage path RS3 can be neglected since it is effectively grounded at each end. The final configuration for analysis purposes is shown in Figure D-7.



Figure D-7 Circuit Model for a Ground-Referenced Instrument Loop

The analysis of this circuit is identical to the methodology presented in Section D.3.1. Note that since there are additional current leakage paths, a ground-referenced instrument loop may be more susceptible to instrument uncertainty when its components are exposed to high temperature and humidity.

D.3.3 Resistance Temperature Detector Circuits (RTDs)

Resistance temperature detectors (RTDs) provide input to the Reactor Protection System and the Engineered Safety Features Actuation System. RTDs are also used for several post-accident monitoring functions. Because of these applications, the effect of degraded insulation resistance must be considered for RTD circuits.

However, because of the difference in signal generation and processing, the analysis methodology is different than for 4 to 20 mA instrument loops.

An RTD circuit measures temperatures by the changing resistance of a platinum RTD, rather than a change in current. A typical 3-lead RTD circuit is shown in Figure D-8 (bridge and resistance to current [R/I] signal conditioner circuitry not shown for simplicity). Shunt resistances Rs and Rss represent possible leakage current paths for this configuration.


Figure D-8 RTD Circuit with Insulation Resistance Shown

The compensating lead wire resistance is approximately 0 ohms compared to the associated IR, Rss. Therefore, Rss is effectively shorted by the lead wire and will have no effect on the resistance signal received at the signal conditioner. This concept applies to 4-lead RTD circuits also. Shunt resistance (Rs) is in parallel with the RTD. The R/I signal conditioner will detect the equivalent resistance of the parallel resistances Rs and RRTD. For this configuration, the equivalent resistance is RE.

RE = (RRTD x Rs)/(RRTD + Rs)

The error, E, in °F introduced by the shunt resistance is defined as the difference between the temperature corresponding to the RTD resistance and the temperature corresponding to the equivalent resistance. In equation form,

E (°F) = Temp(RE) - Temp(RRTD)

Expressed in percent span,

E (%) = [(Temp(RE) - Temp(RRTD))/Span] x 100

Because the equivalent resistance seen by the signal conditioner will always be less than the RTD resistance, the resulting error will always be in the lower-than-actual temperature direction. In other words, the indicated temperature will always be lower than the actual temperature by the error amount.


Example D-3

As an example, calculate the IR in an RCS wide-range RTD instrument loop that will cause a 5% error in temperature measurement. The instrument span is 700 °F. Perform the evaluation at an RTD temperature of 700 °F.

-5% = [(Temp(RE) - 700)/700] x 100

or, Temp(RE) = 665 °F

From standard 200-ohm RTD tables, the corresponding resistance is approximately 466 ohms. This is the equivalent resistance RE. The RTD resistance for 700 °F is approximately 480 ohms. So, the IR shunt resistance can be calculated by equation D-6.

466 = (480 x Rs)/(480 + Rs)

or, Rs = 15,977 ohms
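The parallel-resistance relationship above can be checked numerically. The following Python sketch (illustrative only; it takes the RTD table values quoted in Example D-3 as given) solves for the shunt resistance that produces the 466-ohm equivalent resistance:

def equivalent_resistance(r_rtd, r_shunt):
    """Parallel combination of the RTD element and the shunt (IR) path."""
    return (r_rtd * r_shunt) / (r_rtd + r_shunt)

def shunt_for_equivalent(r_rtd, r_equiv):
    """Shunt resistance that makes the parallel combination equal r_equiv."""
    return (r_rtd * r_equiv) / (r_rtd - r_equiv)

# Example D-3: RTD reads ~480 ohms at 700 F; a 5% (35 F) low error corresponds
# to ~466 ohms from the RTD tables.
print(round(shunt_for_equivalent(480, 466)))            # ~15,977 ohms
print(round(equivalent_resistance(480, 15977)))         # ~466 ohms (check)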

D.4 Information Required to Perform Analysis

The following information is normally obtained to complete an analysis of current leakage effects:

  • Cable length and type in the area of interest
  • Number of splices in the area of interest
  • List of all potential current leakage sources, e.g., cables, containment penetrations, etc.
  • EQ test report information providing measured insulation resistance for each component
  • Instrument circuit power supply maximum rated output voltage
  • Total instrument loop loading for the circuits of interest
  • Instrument loop span (4 - 20 mA, 0 - 700 °F, etc.)
  • Power supply configuration, e.g., floating or grounded

Example D-4

Assuming the following design inputs, calculate the maximum uncertainty associated with IR current leakage effects. Note: This is an example only and does not apply to a particular configuration.

Containment electrical penetration IR: 4.4 x 10^6 ohms (obtained from EQ file)

Cable IR: 120 x 10^6 ohms/ft (obtained from EQ file)

Cable length inside containment is 250 ft (from design documents)

Note that the cable IR is modeled as parallel resistances, in this case as 250 parallel resistances, each with a resistance of 120 x 10^6 ohms. Or, cable IR = 120 x 10^6/250 = 0.48 x 10^6 ohms.

Cable splices: 2.9 x 10^6 ohms (obtained from EQ file)

Perform calculation at maximum power supply voltage (assume 48 VDC) and minimum loading (4 mA on a floating loop).

First, calculate equivalent shunt resistance due to all IR paths:

1/Rs = 1/(4.4 x 10^6) + 1/(0.48 x 10^6) + 1/(2.9 x 10^6)

or, Rs = 0.38 x 10^6 ohms

The error in percent span is calculated by:

[48 - (0.004 x 250)]/[(0.38 x 10^6) x 0.016] x 100 = 0.77% of span

This is the worst-case configuration, consisting of the minimum IR values from EQ test reports at the minimum loop loading. The uncertainty could be improved by including the actual instrument loop load. Also, the uncertainty could be calculated at the setpoint, which often will have a higher loop current than the assumed 4 mA above.
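The Example D-4 arithmetic can be reproduced with a short script. The sketch below (illustrative only; values are those assumed in Example D-4) combines the parallel leakage paths and evaluates the percent-span error:

def parallel(*resistances):
    """Equivalent resistance of parallel leakage paths."""
    return 1.0 / sum(1.0 / r for r in resistances)

def leakage_error_pct_span(v_supply, i_loop_amps, r_load, r_shunt, span_amps=0.016):
    """Percent-span error from IR leakage (floating 4-20 mA loop)."""
    return (v_supply - i_loop_amps * r_load) / (r_shunt * span_amps) * 100

penetration = 4.4e6
cable = 120e6 / 250          # 250 ft of cable modeled as 250 parallel 1-ft sections
splices = 2.9e6

r_shunt = parallel(penetration, cable, splices)
print(round(r_shunt / 1e6, 2))                                    # ~0.38 Mohm
# ~0.78% of span (Example D-4 rounds Rs to 0.38 Mohm and reports 0.77%)
print(round(leakage_error_pct_span(48, 0.004, 250, r_shunt), 2))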


D.5 Application of Results to Uncertainty Calculations

Current leakage due to IR is a bias, defined as IRA in Section 2.2 and used in the equations described in Section 4.5.4. The direction of the bias depends on the type of circuit as follows:

  • Instrument loops, e.g., 4 to 20 mA or 10 to 50 mA circuits, will indicate higher than actual. The bias term is positive.
  • RTD circuits will indicate lower than actual. The bias term is negative.

D.6 Additional Considerations

Depending on the instrument loop components, the circuit configuration, and the existing margins in a calculation, the first pass on a calculation may indicate less-than-desired setpoint margins. In this case, the input parameters to the calculation can be reviewed for any inherent margin that can be justifiably removed from the analysis. The following should be considered:

  • Worst-case IR values from the EQ test report are typically used. If the worst-case IR values are based on IR-to-ground measurements and the instrument loop of concern is floating, then only conductor-to-conductor leakage need be considered.

This effectively doubles the IR to use for the calculation since the current leakage depends on the series IR of both conductors' insulation.

  • If the EQ test attempted to envelope all plants and all postulated accidents with a high peak temperature, e.g., 450 °F, but the plant requirement is a lesser value, such as 300 °F, then margin is contained in the test report. The IR of an insulating material decreases exponentially with temperature. The EQ test report should be reviewed to determine the measured IR at lower temperatures.

  • The calculations, References 5.22, 5.23, & 5.24, may have been performed for the worst-case circuit configuration for the sake of simplicity. In this case, the calculation probably assumed the following circuit conditions:
  • Maximum power supply voltage
  • Minimum instrument loop loading
  • Minimum instrument loop current, e.g., 4 mA or 10 mA

If the actual circuit configuration and the desired current corresponding to the actual setpoint differ from the above assumptions, then the CI-01-00 calculation can calculate IRA per Appendix D for the actual loop configuration and required setpoint to eliminate unnecessary conservatism.

  • Consider the time during which the process parameter is required. If the instrument loop performs a trip function prior to the peak accident transient conditions or if the instrument loop provides a post-accident monitoring function after the peak accident transient conditions have passed, a lower value of IRA may be defendable based upon a review of the appropriate EQ test reports.
  • Consider the signal cable routing in each environmental zone. If the signal cable routes through multiple zones, each with a unique peak temperature, a lower value of IRA may be defendable based upon calculation of the effect for each zone.

D.7 Concluding Remarks

The effect of IRA on instrument uncertainty is easily accounted for in a setpoint or indication uncertainty calculation. This Appendix provides an analytical basis for current leakage calculations and discusses options to consider when the calculated results exceed the available margin. If a bounding IRA value for a given device has been established per References 5.22, 5.23, and 5.24, and the values are acceptable for use in the setpoint or indication uncertainty calculation, then no further action is required.

Current leakage due to IR is not expected during normal operation. However, the methodology presented in this Appendix could be used to determine IR effects during normal environmental conditions. Cable insulation resistance typically exceeds 1 megohm during normal operation, which results in a negligible contribution to the overall uncertainty.


APPENDIX E
FLOW MEASUREMENT UNCERTAINTY EFFECTS

E.1 Uncertainty of Differential Pressure Measurement

Differential pressure transmitters are generally used for flow measurement. The differential pressure measurement is normally obtained across a flow restriction such as a flow orifice, nozzle, or venturi. Each type of flow measurement device is briefly described below:

  • A flow orifice is a thin metal plate clamped between gaskets in a flanged piping joint. A circular hole in the center, smaller than the internal pipe diameter, causes a differential pressure across the orifice plate that is measured by the differential pressure transmitter. A flow orifice is inexpensive and easy to install, but it has the highest pressure drop of all flow restrictor types.
  • The flow nozzle is a metal cone clamped between gaskets in a flanged piping joint so that the cone tapers in the direction of fluid flow. The nozzle does not cause as large a permanent reduction in pressure as does the orifice because the entrance cone guides the flow into the constricted throat section, reducing the amount of turbulence and fluid energy loss.
  • A flow venturi is a shaped tube inserted in the piping as a short section of pipe. The venturi has entrance and exit cones that serve as convergent and divergent nozzles, respectively, guiding the flow out of, as well as into, the constricted throat area. The venturi design is the most efficient and accurate of the flow restrictors. However, it is also the most expensive and difficult to maintain.

Regardless of how the pressure drop is created, flow transmitters measure the differential pressure across the flow restrictor. The high-pressure connection is always made upstream of the flow restrictor. The low-pressure connection is made downstream of orifices and nozzles (the exact location can vary) or at the constricted throat section of a venturi.

Flow is proportional to the square root of the differential pressure. This means that flow and differential pressure have a nonlinear relationship. The uncertainty also varies as a function of the square root relationship. The following example considers flow accuracy as a function of flow rate.


Example E-1

This example is illustrative only and does not directly correlate to any particular system flow rates or designs. However, the relative change in accuracy as a function of flow is considered representative of expected performance. A flow transmitter is used to monitor system flow. The instrument loop diagram is shown in Figure E-1.

[Figure E-1, Flow Monitoring Instrument Loop Diagram: flow element (orifice), flow transmitter, isolation signal, and flow indicator.]

The flow transmitter measures the differential pressure across the flow orifice. The relationship between flow in gpm and the differential pressure in inches is given by:

Flow = k (ΔP/ρ)^(1/2)

The constant, k, is the flow constant for a specified configuration and the term, ρ, is the density of water at the design operating temperature (refer to ASME MFC-3M-1989, Reference 5.7, for a detailed explanation of the flow equation). If we assume that the fluid temperature is essentially constant, the density can be incorporated into the flow constant and the above expression simplifies to:

Flow = k (ΔP)^(1/2)

For this example and assuming constant fluid temperature, the maximum flow will be given as 1,500 gpm when the differential pressure is 100 inches. Therefore, the flow constant is:

k = Flow/(ΔP)^(1/2) = 1,500/(100)^(1/2) = 150

Assume that the various manufacturers provided the following measurement uncertainties:

Flow Orifice Accuracy (PEA): ±1.5%
Flow Transmitter Accuracy (VAT): ±0.5%
    Drift (VDT): ±1.0%
    Temperature Effects (ATET): ±0.5%
Indicator Accuracy (VAI): ±0.5%
    Drift (VDI): ±1.5%
Input Resistor Accuracy (VAR): ±0.1%

Assume that all of the above uncertainty terms are random and independent for this example. The transmitter is providing an output signal proportional to the differential pressure across the flow orifice.

For this reason, we should first determine the uncertainty in our differential pressure measurement. The flow uncertainty can be estimated by taking the square root of the sum of the squares of the individual component uncertainties. The following equation is shown for example only AND does not replace the equations presented in Section 4.5.4:

Z = (PEA² + VAT² + VDT² + ATET² + VAI² + VDI² + VAR²)^(1/2)

Z = (1.5² + 0.5² + 1.0² + 0.5² + 0.5² + 1.5² + 0.1²)^(1/2)

Z = ±2.5% = ±2.5 inches ΔP

Now, remember that our understanding of flow is based on the square root relationship between flow and differential pressure. Because the relationship is not linear, we must consider the flow uncertainty at specific points. We already determined that flow for this particular application is related to differential pressure by the following expression:

Flow = 150 (ΔP)^(1/2)

Table E-1 provides the flow-to-ΔP relationship at different flow points:

Percent of Full Scale Flow    Flow (gpm)    Differential Pressure (inches)
100%                          1,500         100.00
75%                           1,125          56.25
50%                             750          25.00
25%                             375           6.25
10%                             150           1.00

Table E-1 Flow Versus Differential Pressure for Example E-1

Now, let's estimate our uncertainty in flow for each of the above flow rates based on the ±2.5 inches of measurement uncertainty in differential pressure.

100%: Flow = 150 (100 ± 2.5)^(1/2) = 1,500 +19/-19 gpm

75%: Flow = 150 (56.25 ± 2.5)^(1/2) = 1,125 +25/-25 gpm

50%: Flow = 150 (25 ± 2.5)^(1/2) = 750 +37/-38 gpm

25%: Flow = 150 (6.25 ± 2.5)^(1/2) = 375 +69/-85 gpm

10%: Flow = 150 (1 ± 2.5)^(1/2) = 150 +131/-150 gpm (the ΔP uncertainty exceeds the measured ΔP, so the lower reading is limited by zero indicated flow)
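The flow uncertainties tabulated above follow directly from the square-root relationship. The Python sketch below recomputes the asymmetric uncertainty at each flow fraction using the illustrative k = 150 and the ±2.5 inch differential pressure uncertainty from Example E-1; flooring the low-side reading at zero flow when the uncertainty exceeds the measured differential pressure is an assumption made here for illustration:

from math import sqrt

K_FLOW = 150.0        # gpm per sqrt(inch of differential pressure)
DP_UNCERTAINTY = 2.5  # inches of water, from the SRSS combination above

def flow_uncertainty(dp_inches):
    """Return (nominal, +uncertainty, -uncertainty) flow in gpm for a measured dP."""
    nominal = K_FLOW * sqrt(dp_inches)
    high = K_FLOW * sqrt(dp_inches + DP_UNCERTAINTY)
    # When the dP uncertainty exceeds the measured dP, the low-side indication
    # is floored at zero flow.
    low = K_FLOW * sqrt(max(dp_inches - DP_UNCERTAINTY, 0.0))
    return nominal, high - nominal, low - nominal

for dp in (100.0, 56.25, 25.0, 6.25, 1.0):
    nominal, plus, minus = flow_uncertainty(dp)
    print(f"{nominal:7.0f} gpm  +{plus:5.1f} / {minus:6.1f} gpm")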

If the flow versus the uncertainty of that flow measurement is graphed, the relative uncertainty at low flow conditions is readily apparent (see Figure E-2). This example shows the problem of obtaining accurate flow measurements by differential pressure at low flow conditions. The use of more accurate instrumentation would change the magnitude of the uncertainty, but would not affect the relative difference in uncertainty at low flow versus high flow conditions.

[Figure E-2, Flow Uncertainty as a Function of Flow Rate: flow measurement uncertainty plotted against flow rate (% of full flow), showing the uncertainty growing sharply at low flow.]

E.2 Effects of Piping Configuration on Flow Accuracy

Bends, fittings and valves in piping systems cause flow turbulence.

This can cause process measurement uncertainties to be induced in flow elements. ASME has published guidance for various types of installation examples to show the minimum acceptable upstream/downstream lengths of straight pipe before and after flow elements. Following this ASME guidance helps reduce the effect of this turbulence. The piping arrangement showing locations of valves, bends, fittings, etc. can usually be obtained from piping isometric drawings. Reference 5.7, ASME MFC-3M-1989, states that, if the minimum upstream and downstream straight-pipe lengths are met, the resultant flow measurement uncertainty for the piping configuration (not including channel equipment uncertainty) should be assumed to be 0.5%. If the minimum criteria cannot be met, additional uncertainty (at least 0.5%) should be assumed for conservatism based on an evaluation of the piping configuration and field measurement data, if available.

E.3 Varying Fluid Density Effects on Flow Orifice Accuracy

In many applications, process liquid and gas flows are measured using orifice plates and differential pressure transmitters. The measurement of concern is either the volumetric flow rate or the mass flow rate. Many reference books and standards have been written using a wide variety of terminology to describe the mathematics of flow measurement, but in basic form, the governing equations are:

Q = K A (ΔP/ρ)^(1/2)  and  W = K A (ΔP ρ)^(1/2)

where,
Q = Volumetric flow rate
W = Mass flow rate
A = Cross-sectional area of the pipe
ΔP = Differential pressure measured across the orifice
ρ = Fluid density
K = Constant related to the beta ratio, units of measurement, and various correction factors

As shown above, the density of the fluid has a direct influence on the measured flow rate. Normally, a particular flow-metering installation is calibrated or sized for an assumed normal operating density condition. As long as the actual flowing conditions match the assumed density, additional related process errors should not be present. If the flow-measuring system has been calibrated for the normal low-temperature condition, significant process uncertainties can be induced under accident conditions when the higher-temperature (lower-density) water is flowing. Of course, the flow measurement could be automatically compensated for density variations, but this is not the usual practice except on systems such as steam flow measurement.

To examine the effects of changing fluid density conditions, a liquid flow process shall be discussed. For most practical purposes, K and A can be considered constant. Actually, temperature affects K and A due to thermal expansion of the orifice, but this is assumed to be constant for this discussion to quantify the effects of density alone. If the volumetric flow rate, Q, is held constant, it is seen that a decrease in density will cause a decrease in differential pressure (AP), causing a measurement uncertainty. This occurs because the differential pressure transmitter has been calibrated for a particular differential pressure corresponding to a specific flow rate. A lower AP due to a lower fluid density causes the transmitter to indicate a lower flow rate.

Assuming the actual flow remains constant between a base condition (the density at which the instrument is calibrated, ρ1) and an actual condition (ρ2), an equality may be written between the base flow rate (Q1) and the actual flow rate (Q2), as shown below:

Q1 = Q2

or K A (ΔP2/ρ2)^(1/2) = K A (ΔP1/ρ1)^(1/2)

or ΔP2/ρ2 = ΔP1/ρ1

or ΔP2/ΔP1 = ρ2/ρ1
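For a constant volumetric flow, the measured differential pressure therefore scales with the density ratio derived above. A minimal Python sketch of the correction (the density values are hypothetical and used only to show the direction of the effect):

def corrected_dp(dp_calibrated, rho_calibrated, rho_actual):
    """Differential pressure at the actual density for the same volumetric flow.

    From dP2/dP1 = rho2/rho1 (constant Q, constant K and A).
    """
    return dp_calibrated * (rho_actual / rho_calibrated)

# Hypothetical example: orifice sized at 62.0 lbm/ft3, operating at 57.3 lbm/ft3
print(round(corrected_dp(100.0, 62.0, 57.3), 1))  # ~92.4 inches instead of 100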

Density is the inverse of specific volume, SV. Accordingly, the above expression can be restated in terms of specific volume.

ΔP2 = ΔP1 (SV1/SV2)

E.4 Effects of Cavitating Flows, Beta Ratios, and Fluid Velocity on Flow Orifice Accuracy

There are three elemental considerations to analyze when evaluating errors in flow measurement. First is the uncertainty of the coefficients used to determine the differential pressure or flow rate. This can be termed flow element error or accuracy.

Second is temperature variation, which occurs during normal operation; this was discussed in Section E.3 for density effects but may also create material property effects such as pipe size variations from thermal expansion. The third is flow rate variation, which will cause the discharge coefficient to vary slightly.

The three primary components of flow element error are:

(1) uncertainty of the discharge coefficient, (2) bore diameter uncertainty, and (3) pipe diameter uncertainty. The diameter ratio is represented as the bore diameter relative to the pipe diameter, or β ratio, and is given as:

β = d/D

where
d = orifice bore diameter
D = upstream pipe diameter

As stated, the discharge coefficient can vary with flow rate and cause the flow coefficient to vary. Flow element installation assumes the design condition and therefore a constant flow coefficient (K). Flow variations decreasing from design flow will lower the flow element Reynolds number, and as the Reynolds number falls, the discharge coefficient, C, will rise above the value that existed for design flow such that the relative error is predicted by:

ΔPA/ΔPD = (CA/CD)^(-2)

Therefore, flow below design flow induces a small negative bias error.


APPENDIX F
LEVEL MEASUREMENT TEMPERATURE EFFECTS

F.1 Level Measurement Overview

Differential pressure transmitters are typically used for level measurement involving an instrument loop. One side of a d/p cell is connected to a water column of fixed height (often called a reference leg) and the other side is connected to the fluid whose level is to be measured (see Figure F-1).

[Figure F-1, Simplified Level Measurement in a Vented Tank: a vented tank with a reference leg of fixed height and a level transmitter sensing the difference between the reference level and the tank level.]

The measured level in Figure F-1 is determined by the pressure caused by the column of water in the reference leg minus the pressure caused by the water level in the tank:

ΔP = (Lref x γref) - (Ltank x γtank)

where,
Lref = Height of liquid in reference leg
γref = Specific weight of liquid in reference leg
Ltank = Height of liquid in tank
γtank = Specific weight of liquid in tank

Notice in this case that tank level and differential pressure are inversely related. Maximum differential pressure occurs at minimum tank level.


As implied by the above expression, the specific weight of the liquid in the reference leg may not equal the specific weight of liquid in the tank. The two liquids might be at different temperatures (or might even be different liquids in the case of sealed reference legs).

F.2 Uncertainty Associated with Density Changes

Density changes in the reference leg fluid or the measured fluid can add to the uncertainty of a level measurement made with a differential pressure transmitter. Differential pressure transmitters respond to the hydrostatic (head) pressure caused by a liquid column of a given height; for a given height, the response varies as the liquid density varies. The density changes as a function of temperature, which in turn changes the differential pressure measured by the transmitter. The transmitter cannot distinguish between a differential pressure change caused by a level change and one caused by a fluid density change.

Two types of level measurement system uncertainties are presented here. Section F.2.1 provides the methodology if no temperature compensation is provided for the vessel level measurement. Section F.2.2 provides the methodology for those cases in which the vessel temperature is measured to provide automatic compensation of the vessel liquid density, but the reference leg is still not compensated.


F.2.1 Uncompensated Level Measurement Systems

The methodology developed and described in this section assumes that vessels are closed and contain a saturated mixture of vapor and water. For this discussion, the reference leg is water-filled and also saturated. Note that the reference leg liquid may well be compressed (subcooled). Figure F-3 shows a closed vessel containing a saturated vapor/water mixture. The symbols used to explain the effect of density variations are provided immediately below Figure F-3.

[Figure F-3, Saturated Liquid/Vapor Level Measurement: a closed vessel containing a saturated vapor/water mixture, with the reference leg height HR, the 0% and 100% indicated level heights HO and H100, and the level transmitter connections.]

Table F-1 provides the list of symbols used in a level measurement analysis and their explanation.

HW:   Height of water
HV:   Height of vapor
HR:   Height of reference leg
HO:   Height of 0% indicated level
H100: Height of 100% indicated level
ΔP:   Differential pressure (inches H2O)
SVW:  Specific volume of water at saturation temperature
SVV:  Specific volume of vapor
SVR:  Specific volume of reference leg fluid
SGW:  Specific gravity of water at saturation temperature
SGV:  Specific gravity of vapor
SGR:  Specific gravity of reference leg fluid

Note: Any vapor higher than the entrance to the reference leg has an equal effect on both sides of the differential pressure transmitter and can be ignored.

Table F-1 Symbols Used in a Level Measurement Density Effect Analysis

All heights in Table F-1 are referenced to the centerline of the lower level sensing line. HV and HR are measured to the highest possible water column that can be obtained by condensing vapor.

Specific gravity is calculated as the specific volume of water at 68 °F divided by the specific volume of the fluid at the stated condition.

Referring to Figure F-3, the differential pressure applied to the transmitter is the difference between the high pressure and the low pressure inputs:

ΔP = Pressure (Hi) - Pressure (Lo)

The individual terms above are calculated by:

Pressure (Hi) = (HR)(SGR) + Static Pressure

Pressure (Lo) = (HW)(SGW) + (HS)(SGS) + Static Pressure

Substituting the above equations into the general expression for differential pressure yields:

ΔP = (HR)(SGR) - (HW)(SGW) - (HS)(SGS)

Referring to Figure F-3, it can be seen that the height of the vapor (HS) is equal to the height of the reference leg (HR) minus the height of the water (HW). Substituting (HR - HW) for HS yields:

ΔP = (HR)(SGR) - (HW)(SGW) - (HR - HW)(SGS)

or

ΔP = [(HR)(SGR - SGS)] + [(HW)(SGS - SGW)]     (Equation F.1)

Using Equation F.1 and substituting for HW the height of water at 0% level (HO) and at 100% level (H100), the differential pressures at 0% (ΔP0) and at 100% (ΔP100) can be determined. Note that HR, HO, and H100 are normally stated in inches above the lower sensing line tap centerline. It is normally assumed that the fluid in both sensing lines below the lower sensing line tap is at the same density if they contain the same fluid and are at equal temperature. The specific gravity or specific weight terms (SGW, SGR, and SGV) are unitless quantities, which means that ΔP, ΔP0, and ΔP100 are normally stated in "inches of water."

The transmitter is calibrated for proper performance at a given operating condition. Before the transmitter calibration requirements can be expressed, it is necessary to define the reference operating conditions in the vessel and reference leg from which SGW, SGR, and SGV may be determined by the use of thermodynamic steam tables. After the specific gravity terms are known, they can be used in Equation F.1 along with HR, HO, and H100, and the equation solved for the minimum and maximum level conditions, ΔP0 and ΔP100.

Provided that the actual vessel and reference leg conditions remain unchanged, the indicated level is a linear function of the measured differential pressure; no density error effects are present. Under this base condition, the following proportionality can be written.

(HW - HO)/(H100 - HO) = (ΔP - ΔP0)/(ΔP100 - ΔP0)

Solving for HW yields:

HW = [(H100 - HO)(ΔP - ΔP0)/(ΔP100 - ΔP0)] + HO

Now, assess the effects of varying the vessel and reference leg conditions from the assumed values. Let an erroneous differential pressure, ΔPU, and an erroneous water level, HU, be developed because of an operating condition different from that assumed for the transmitter calibration. The uncertainty in the water level is given by:

HW ± HU = [(H100 - HO)(ΔP ± ΔPU - ΔP0)/(ΔP100 - ΔP0)] + HO

Or, the uncertainty HU is given by:

HU = (H100 - HO)(ΔPU)/(ΔP100 - ΔP0)

And ΔP100 - ΔP0 can be expressed by:

ΔP100 - ΔP0 = [(HR)(SGR - SGS) + (H100)(SGS - SGW)] - [(HR)(SGR - SGS) + (HO)(SGS - SGW)]

or

ΔP100 - ΔP0 = (H100 - HO)(SGS - SGW)

Thus, the uncertainty HU is given by:

HU = ΔPU/(SGS - SGW)

The term ΔPU is just the difference between the differential pressure measured at the actual conditions, ΔPA, and the differential pressure measured at the base condition, ΔPB:

ΔPU = ΔPA - ΔPB

Assuming that HR and HW are constant (only the density is changing, not the actual levels), ΔPA and ΔPB can be expressed as:

ΔPA = (HR)(SGRA - SGSA) + (HW)(SGSA - SGWA)

ΔPB = (HR)(SGRB - SGSB) + (HW)(SGSB - SGWB)

Substituting into the expression for ΔPU yields:

ΔPU = (HR)(SGRA - SGSA - SGRB + SGSB) + (HW)(SGSA - SGWA - SGSB + SGWB)

Returning to the expression for the uncertainty in measured level, HU, the substitution of the above expression for ΔPU yields:

HU = [(HR)(SGRA - SGSA - SGRB + SGSB) + (HW)(SGSA - SGWA - SGSB + SGWB)]/(SGSB - SGWB)

The above expression for level measurement uncertainty describes the uncertainty caused by liquid density changes in the vessel, reference leg, or both.

F.2.2 Temperature-Compensated Level Measurement System

The previous section describes the analysis methodology, and how to account for varying density effects on a differential pressure measurement, for the case in which no temperature compensation is provided to the level measurement system.

This section clarifies the methodology for a system in which the vessel temperature is monitored and the level measurement system includes automatic temperature compensation to account for the vessel's liquid density changes.

If the temperature inside the vessel is monitored, then the specific gravity of the steam and the water inside the vessel can be corrected as a function of temperature. In the analysis methodology for the water level measurement uncertainty, HU, the following terms become effectively equal because of the automatic correction for temperature:

SGSA = SGSB and SGWA = SGWB

In this case, the vessel density effects are eliminated, but note that the reference leg density changes are not monitored and still require consideration. The uncertainty of the differential pressure measurement reduces to:

ΔPU = (HR)(SGRA - SGRB)

The above equation shows that the differential pressure uncertainty becomes increasingly negative as the actual temperature increases above the reference temperature. As the temperature in the reference leg increases above the reference temperature, the fluid density decreases, causing a negative ΔPU. Returning to Figure F-3, note that a lower differential pressure means that a higher level will be indicated, or a negative ΔPU will cause a positive level uncertainty HU. The magnitude of the error can be estimated by:

HU = (HR)(SGRA - SGRB)/(SGSB - SGWB)

If the transmitter connections were reversed (high pressure connection reversed with low pressure connection to reverse the ΔP), the above discussion would still apply, but the uncertainty would change direction:

ΔPU = (HR)(SGRB - SGRA)

The above equations calculate uncertainties in actual engineering units. If desired, the quantities HU and ΔPU can be converted to percent span units by dividing each term by (H100 - HO) or (ΔP100 - ΔP0), respectively, and multiplying the results by 100%. As discussed above, the sign (or direction of the uncertainty) for ΔPU depends on which way the high- and low-pressure sides of the transmitter are connected to the vessel.


F.2.3 Example Calculation for Uncompensated System

For this example, assume that a level measurement is not compensated for density changes and has the following configuration:

1. HR = 150 in.
2. HO = 50 in.
3. H100 = 150 in.
4. HW = 100 in.
5. Reference conditions:

   Vessel temperature = 532 °F (saturated water)

   Reference leg temperature = 68 °F (assume saturated, but could be compressed)

6. Actual conditions:

   Vessel temperature = 500 °F (saturated water)

   Reference leg temperature = 300 °F (assume saturated, but could be compressed)

Determine the level measurement uncertainty for this operating condition.

First, calculate the specific gravity terms for each condition by using the steam table specific volumes of water (SVW) and specific volumes of vapor (SVS). The following values are calculated:

SGWA = SVW(68 °F)/SVW(500 °F) = (0.016046 ft³/lbm)/(0.02043 ft³/lbm) = 0.78541

SGSA = SVW(68 °F)/SVS(500 °F) = (0.016046 ft³/lbm)/(0.67492 ft³/lbm) = 0.02377

SGRA = SVW(68 °F)/SVW(300 °F) = (0.016046 ft³/lbm)/(0.01745 ft³/lbm) = 0.91954

SGWB = SVW(68 °F)/SVW(532 °F) = (0.016046 ft³/lbm)/(0.02123 ft³/lbm) = 0.75582

SGSB = SVW(68 °F)/SVS(532 °F) = (0.016046 ft³/lbm)/(0.50070 ft³/lbm) = 0.03205

SGRB = SVW(68°F)/SVW(68°F) = 0.016046 ft3/lbm / 0.016046 ft3/lbm = 1.0

Next, substitute HW = 100 in. and HR = 150 in., as well as the above quantities, into the expression for HU:

HU = [(HR)(SGRA - SGSA - SGRB + SGSB) + (HW)(SGSA - SGWA - SGSB + SGWB)]/(SGSB - SGWB)

= [150(0.91954 - 0.02377 - 1.0 + 0.03205) + 100(0.02377 - 0.78541 - 0.03205 + 0.75582)]/(0.03205 - 0.75582)

= +20.2 inches

In percent of span, the uncertainty is given by:

HU% = [(HU)/(H100 - HO)](100%) = [(20.2)/(150 - 50)](100%) = +20.2% span

APPENDIX G STATIC HEAD AND LINE LOSS PRESSURE EFFECTS

The flow of liquids and gases through piping causes a pressure drop from Point A to some Point B due to fluid friction (see Figure G-1). Many factors are involved, including piping length, piping diameter, pipe fittings, fluid viscosity, fluid velocity, etc. If a setpoint is based on pressure at a point in the system that is different from the point of measurement, the pressure drop between these two points must be taken into account.

Figure G-1 Line Pressure Loss Example

Example G-1

Refer to Figure G-1 for this example. If protective action must be taken during an accident when the pressure at Point A exceeds the analysis limit (AL) = 1060 psig, the pressure switch setpoint needs to be adjusted to account for the line loss (30 psig) and channel equipment errors (10 psig) as shown below (it is assumed that the sensing line head effect for the accident condition is negligible in this case).

Setpoint = AL - Line Loss - Total Channel Equipment Uncertainty

= 1060 - 30 - 10

= 1020 psig

Note that if the line loss had been neglected and the setpoint adjusted to the analysis limit minus equipment error (1050 psig), the resultant setpoint would be non-conservative. In other words, when the trip occurred, the pressure at Point A could be equal to 1050 + 30 = 1080 psig, which non-conservatively exceeds the analysis limit.

Example G-2

If the pipe had dropped down vertically to Point B, the result would be a head effect plus line loss example. Assume the head pressure exerted by the column of water in the vertical section of piping is 5 psig and that the line loss from Point A to Point B is still equal to 30 psig. Also, assume that the pressure at Point A is not to drop below 1,500 psig without trip action. For this example, the setpoint is calculated as follows:

Setpoint = AL + Head + Total Channel Equipment Uncertainty

= 1,500 + 5 + 10 = 1,515 psig

In this case, the 30 psig line loss was neglected for conservatism.

Note that the head effect/line loss errors are bias terms, unless they can be calibrated out in the transmitter, in which case this effect can be removed from the channel uncertainty calculation.

The CPS C&I department typically calibrates out head effects during transmitter calibration testing; this must be verified for each channel during the analysis. If head effects are included in the channel uncertainty calculation, the effect must be added to or subtracted from the analytical limit, depending on the particular circumstances, to ensure that protective action occurs before exceeding the analytical limit.
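A minimal sketch of the two setpoint adjustments in Examples G-1 and G-2 is shown below; the function names are hypothetical, and the treatment of head and line loss simply mirrors the examples above:

    # Sketch of the setpoint adjustments from Examples G-1 and G-2.
    def high_trip_setpoint(analysis_limit, line_loss, channel_uncertainty):
        # Increasing-pressure trip: subtract line loss and equipment uncertainty (Example G-1).
        return analysis_limit - line_loss - channel_uncertainty

    def low_trip_setpoint(analysis_limit, head, channel_uncertainty):
        # Decreasing-pressure trip with a vertical run to the sensor; the line loss
        # is conservatively neglected per Example G-2.
        return analysis_limit + head + channel_uncertainty

    print(high_trip_setpoint(1060, 30, 10))   # 1020 psig
    print(low_trip_setpoint(1500, 5, 10))     # 1515 psig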


APPENDIX H MEASURING AND TEST EQUIPMENT UNCERTAINTY

M&TE uncertainty is the inaccuracy introduced by the calibration process due to the limitations of the test instruments. M&TE uncertainty includes three principal components: (1) vendor accuracy of the test equipment, (2) effect of temperature on the test equipment, and (3) accuracy of the test equipment calibration process. The first two components are included directly in the M&TE uncertainty, and the third is assumed to be included in the conservatism of the vendor accuracy of the test equipment.

All (100%) of the test equipment is certified to pass the calibration requirements, not just 95%, the common confidence level used for uncertainty calculations. Discussion with vendors shows that the actual accuracy of the test equipment is better than the vendor published values. Both of these provide conservatism in the accuracy of the test equipment and, therefore, conservatism in the M&TE determination. As discussed in H.1 below, the standards used to calibrate the test equipment are generally rated 4:1 better than the equipment being calibrated. For these reasons it is generally accepted that the published vendor accuracy of the test equipment includes the uncertainty of the calibration standard, since the vendor accuracy divided by 4 is negligible in relation to other uncertainties. For the purposes of setpoint and uncertainty calculations, the total M&TE uncertainty for any module should be based on test equipment that has been calibrated using 4:1 reference standards.

The module calibration also includes an As-Left tolerance (ALT) which can be related to the test equipment uncertainty. An instrument does not provide an exact measurement of the true process value; there is always some level of uncertainty or error in our measurement. The As-Left tolerance is (1) a reflection of the best accuracy that we can realistically obtain or (2) the minimum accuracy that we feel is needed to assure that the process is properly controlled.

For example, a pressure transmitter may have vendor accuracy (VA) of +/-0.1%, but its As-Left tolerance may be allowed to be +/-0.5%.

Thus, the instrument technician is allowed to leave the instrument as-is if it is found anywhere within +/-0.5% of the calibration check point. Without any other considerations, we would have to conclude that the calibrated condition of the instrument is only accurate to +/-0.5% rather than the device's VA of +/-0.1%. If greater accuracy is needed, the calibration procedure should be revised for the tighter As-Left tolerance.

Appendix H provides the details for calculation preparers to consider when evaluating the M&TE uncertainty for a module.


H.1 General Requirements

The control of measuring and test equipment (M&TE) is governed at CPS by procedure CPS 1512.01, Reference 5.14. This procedure requires the reference standards used to calibrate M&TE to be at least four times (4:1) more accurate than the M&TE being calibrated. In discussion with NSED, loop M&TE is specified as the statistical combination of all of the pieces of input and output M&TE. Instrument and loop calibration procedures, CPS 8801.01 and 8801.02, References 5.15 and 5.16, require the M&TE to be at least as accurate as the device being calibrated (1:1 ratio). CPS does have an M&TE calculation (IP-C-0089, Ref. 5.30) supporting both maintenance selection activities and engineering assumptions used in calculations.

The following discusses specific requirements of this procedure:

1. Reference standards used for calibrating M&TE shall have an uncertainty (error) requirement of not more than 1/4 of the tolerance of the M&TE equipment being calibrated. A greater uncertainty may be acceptable as limited by "State of the Art."
2. Total SRSS of M&TE accuracy used for calibrating a loop or component shall have an uncertainty (error) requirement of no more than a 1:1 ratio of the tolerance of the loop or component being calibrated.
3. No measurement and test equipment shall be used if the record date for recalibrating the test equipment has been exceeded.

CPS 1512.01 does not address the accuracy of M&TE equipment with respect to the loop or component being checked for calibration. The accuracy of M&TE equipment is addressed by CPS calculation IP-C-0089 (Reference 5.30). The SRSS of the M&TE device accuracy uncertainties will be considered in terms of the VA of the loop or component to be calibrated.

For the purposes of setpoint and uncertainty calculations, the total M&TE uncertainty should be based on the CPS Standard Assumption (Section I.11) that a 4:1 ratio exists between M&TE and reference standards, thus CSTD = 0. If the test equipment accuracy is not based on 4:1 reference standards, the required total M&TE uncertainty should be met by using better test equipment for calibration.

In general, it is desirable to minimize the contribution of M&TE to the uncertainty of the loop. Every effort should be made to use the most accurate M&TE available during calibration.


H.2 Uncertainty Calculations Based on Plant Calibration Practices

The M&TE uncertainty included in an uncertainty calculation is based on historical practices and the uncertainty assigned to the M&TE by calculation IP-C-0089, Ref. 5.30. The implicit design assumption is that M&TE used in the future will be equal to or better than the M&TE used in the past (due to improvements in "State of the Art" test equipment). In order to ensure this assumption is not invalidated by future calibrations, review the M&TE specified in the applicable C&I procedures. Verify that the uncertainty of the M&TE specified (including calibration standards) is bounded by the VA used in the calculation, as shown in the following sections for each type of instrument or configuration.

NOTE: ALT does not have to equal VA. It can be greater or smaller based on the needs of C&I maintenance.

H.2.1 Loop Component

For all components, the M&TE reference accuracy used for calibration should be no greater than the VA of that component.

The calculation of Calibration uncertainty should include both the input and output M&TE. M&TE errors are present with the input signal provided to the input of the sensor as well as with the instrumentation used to measure the output of the sensor (see Figure H-1). The input M&TE is independent from the output M&TE.

Additionally, it should include any other effects on the M&TE equipment, such as ATE and/or IRE.


Figure H-1 Measuring and Test Equipment Uncertainty

An example is given for Figure H-1 for the case of a transmitter (sensor) where VA = +/-0.5%. The 1:1 criterion for M&TE would be met by the statistical combination of the input and output M&TE reference accuracies:

VAsensor >= (MTEI^2 + MTEO^2)^(1/2)

This comparison should be made for all components in the loop regardless of whether they have M&TE on both input and output, or multiple M&TE on input, output, or both.
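The comparison can be expressed as a short check, sketched below with purely illustrative numbers (a 0.5% span transmitter with assumed 0.3% span input and output M&TE); the helper name is hypothetical:

    # Sketch: 1:1 M&TE criterion check for a single component (Section H.2.1).
    import math

    def mte_meets_1_to_1(vendor_accuracy, mte_uncertainties):
        # SRSS-combine the input/output M&TE uncertainties (all in % span)
        # and compare the result against the component vendor accuracy (VA).
        combined = math.sqrt(sum(u ** 2 for u in mte_uncertainties))
        return combined, combined <= vendor_accuracy

    combined, ok = mte_meets_1_to_1(0.5, [0.3, 0.3])
    print(f"combined M&TE = {combined:.3f}% span, meets 1:1 criterion: {ok}")  # 0.424%, True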

H.2.2 Instrument Loops

For an entire instrument loop, the Calibration Error used should be the statistical combination of the As-Left Tolerance (ALT), Calibration Device Error (Ci), and Calibration Standard Error (CSTD).

Ci should be the statistical combination of all of the pieces of input and output M&TE including all uncertainties associated with the M&TE (example: temperature effect and readability). CPS calculation IP-C-0089, "M&TE Uncertainty Calculation", provides uncertainty values for the most commonly used M&TE.


H.2.3 Example Channel Loop Error Section for a Typical Transmitter, ATM Loop

7.6 Loop Calibration Error (CL)

Loop Calibration Error is determined by the SRSS of the As-Left Tolerance (ALTi), Calibration Tool Error (Ci), and Calibration Standard Error (CiSTD) for the individual devices in the loop.

The equation below is used to calculate this effect.

From Section 7.3.3:

CL = +/-[Σ(ALTi)^2 + Σ(Ci)^2 + Σ(CiSTD)^2]^(1/2) (2σ)

7.6.1 As-Left Tolerance (ALTL)

From Section 7.5:

ALTiPT = +/-0.25% Span (2σ)

ALTiATM = +/-0.25% Span (2σ)

ALTL = +/-0.354% Span (2σ)

7.6.2 Calibration Tool Error (Ci)

7.6.2.1 Transmitter Calibration Tool Error (CPT)

The IB2INXXXA, B, C, D transmitters located in the Aux. Bldg. (Refer to Section 7.2) are calibrated with a Fluke Model 45 DC voltmeter on the slow response setting that is capable of measuring 1-5 Vdc and a 250-ohm precision resistor, accurate to

+/-0.02 ohms. The calibration also requires a test gauge with a range of 0-2000 psig.

This information is from Section 7.0 of Output [calibration procedure listed in output section]. Per Assumption [ ], all M&TE equipment uncertainty is a 3σ value.

Per Section 7.4.1:

Transmitter span is 0-1500 psig. VAPT = +/-0.25% span (2σ)

Per Reference [IP-C-0089], the VAs for the M&TE devices are:

Heise (0-2000 psig) = 0.1% FS (3σ)

Fluke 45 (1-5 Vdc, Slow) = 0.065% reading, where the maximum reading is 5 Vdc (3σ)


The accuracy of the precision resistor is calculated as follows:

CPR = +/-0.02/250 x 100 = +/-0.008% Span (3σ)

Per Ref. [CI-01.00, Appendix H, Section H.2.1]:

VAPT >= (MTEI^2 + MTEO^2)^(1/2)

0.25% span >= [(0.1% FS/SP)^2 + (0.065% R/SP)^2 + (0.008% Span)^2]^(1/2)

(0.0025)(1500) >= [((0.001)(2000))^2 + ((0.00065)(5/4)(1500))^2 + ((0.00008)(1500))^2]^(1/2)

3.75 psig >= 2.35 psig

The total M&TE error for the Heise gauge (CPG) is therefore:

Per Reference [IP-C-0089], the total errors for the M&TE devices are:

CPG = +/-1.187% FS

Converting to the 1500 psig span of the transmitter:

CPG = +/-1.187% x (2000 psig/1500 psig)

CPG = +/-1.583% Span (3σ)

The M&TE error for the voltmeter (CVM) is therefore:

CVM = +/-0.097% R/SP

= +/-0.097% x (5/4)

= +/-0.121% Span (3σ)

The M&TE error for the precision resistor (CPR) is therefore:

CPR = +/-0.008% Span (3σ)

Substituting terms:

CPT = +/-[CPG^2 + CVM^2 + CPR^2]^(1/2)

CPT = +/-[(1.583% span)^2 + (0.121% span)^2 + (0.008% span)^2]^(1/2)

CPT = +/-1.588% Span (3σ)
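For readers following the arithmetic, the Section 7.6.2.1 combination can be checked with the sketch below; the numeric inputs are the values quoted above, and the result agrees with the +/-1.588% span figure within rounding:

    # Sketch: transmitter calibration tool error (CPT) check for Section 7.6.2.1.
    import math

    SPAN_PSIG = 1500.0                        # transmitter calibrated span, 0-1500 psig

    CPG = 1.187 * (2000.0 / SPAN_PSIG)        # Heise gauge error, % FS converted to % span
    CVM = 0.097 * (5.0 / 4.0)                 # Fluke 45 error, % reading converted to % span
    CPR = 0.02 / 250.0 * 100.0                # 250-ohm precision resistor, % span

    CPT = math.sqrt(CPG**2 + CVM**2 + CPR**2) # SRSS of the three 3-sigma terms
    print(f"CPG={CPG:.3f}%  CVM={CVM:.3f}%  CPR={CPR:.3f}%  CPT={CPT:.3f}% span")
    # about CPG=1.583%, CVM=0.121%, CPR=0.008%, CPT=1.587% span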


7.6.3 ATM Calibration Tool Error (CATM)

The ATMs are calibrated using a DAC, which uses a readout assembly. This assembly does introduce some error into the calibration. Per Reference [IP-C-0089], the total M&TE device error is 0.195% FS.

CRes = +/-0.195% x (20 mA/16 mA)

CATM = +/-0.0901% Span (3σ)

7.6.4 Calibration Standard Error (CSTD):

Per Assumption[ ], Calibration Standard Error is considered negligible for the purposes of this analysis.

CSTD = 0

7.6.5 Loop Calibration Error (CL):

Per Outputs [ ], the loop calibration is performed using a pressure gauge only. Therefore, Ci for the loop will be CPG. From Section 7.6 above:

CL = +/-[Σ(ALTi)^2 + Σ(Ci)^2 + Σ(CiSTD)^2]^(1/2)

From above:

ALTL = 0.354% Span (2σ)   (Section 7.6.1)
CPG = 1.583% Span (3σ)   (Section 7.6.2.1)
CiSTD = 0   (Section 7.6.4)

Substituting terms for the pressure loop:

CL = +/-[(0.354% span)^2 + (1.583% span)^2]^(1/2)

CL = +/-1.622% Span (2σ)


H.2.4 Special Considerations

CL is used in the development of AFTL, which is used to calculate the NTSP. In order to preserve an existing setpoint, CL can be reduced as follows:

1. Reduce the M&TE temperature uncertainty by reducing the temperature-band from maximum (Bldg Temp. Band) to a lower Room Temp. Band for the location of the component. This will require calculating new M&TE uncertainty values consistent with calculation IP-C-0089.

Discussion and agreement with C&I Maintenance is required for the options below, but these may be considered as well:

2. Specify more accurate M&TE, such as a digital Heise gauge, which is temperature compensated. Some standard Heise gauges are also temperature compensated.
3. Reduce or change the range specified for the M&TE. For the example above, specify a 1500 psig Heise gauge (if it exists).

However, the upper Cardinal Point (typically 100% span) used in the calibration procedure will have to be reduced such that the range of the M&TE is not exceeded when allowing for As-Found and As-Left calibration tolerances.


APPENDIX I NEGLIGIBLE UNCERTAINTIES / CPS STANDARD ASSUMPTIONS

The uncertainties listed and discussed in Sections I.1 through I.10 below are considered negligible. The CPS Standard Assumptions are listed in Section I.11. Personnel performing an uncertainty calculation must evaluate the calculation with respect to this Appendix to verify that any special circumstances or unusual configurations do not invalidate any of these negligible uncertainties or CPS Standard Assumptions.

I.1 Normal Radiation Effects

DC-ME-09-CP, Ref. 5.36, defines the normal and harsh environments for areas within the plant. There is not a substantial increase in radiation during normal operating conditions. In these areas, radiation changes during normal operation do not exist and/or are minimal, with no impact on vendor equipment. Normal radiation-induced errors shall be incorporated when provided by the manufacturer. Otherwise, it is assumed that any cumulative effects of <10^4 RAD TID radiation are calibrated out on a periodic basis. For these reasons, the uncertainty introduced by any radiation effect during normal operation is assumed to be negligible.

I.2 Humidity Effects

Most manufacturers' literature and technical manuals do not address the effect of humidity (10% RH to 95% RH) on their equipment. The uncertainty introduced by humidity changes during normal operation is assumed to be negligible unless the manufacturer specifically discusses humidity effects in the technical manual. The effects of humidity changes are assumed to be calibrated out on a periodic basis. A condensing environment is considered an abnormal event that would require equipment maintenance. A humidity below 10% is considered to occur very infrequently.

I.3 Seismic/Vibration Effects

The effects of normal vibration (or a minor seismic event that does not cause an unusual event) on a component are assumed to be calibrated out on a periodic basis. As such, the uncertainty associated with this effect is assumed to be negligible. Abnormal vibrations, e.g., levels that produce noticeable effects on equipment, are considered abnormal events that require maintenance or equipment modification.


I.4 Normal Insulation Resistance Effects

The uncertainties associated with insulation resistance are assumed to be negligible during normal plant operating (non-accident) conditions. Typical insulation resistances are greater than 1,000 megohm. As an example, assume that the total IR is only 10 megohm and assume minimum instrument loop loading. Using the methodology provided in Appendix D, the expected uncertainty attributable to IR is given by:

(48 - (0.004)(250))/((10 x 10^6)(0.016)) = 0.03%

As can be seen, the IR can be considered negligible as long as the environment remains mild.
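The Section I.4 estimate can be evaluated directly with the sketch below; the labels on the constants (loop voltage, minimum current, load resistance, current span) are interpretive and not stated in the expression above:

    # Sketch: insulation-resistance (IR) error estimate from Section I.4.
    V_loop = 48.0      # loop supply voltage (V), per the expression above
    I_min = 0.004      # minimum loop current (A)
    R_load = 250.0     # loop load resistance (ohm)
    IR = 10e6          # assumed total insulation resistance (ohm)
    span_A = 0.016     # 4-20 mA current span (A)

    error_pct = (V_loop - I_min * R_load) / (IR * span_A) * 100.0
    print(f"IR-induced error = {error_pct:.3f}% span")   # about 0.03% span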

I.5 Lead Wire Effects

Since the resistance of a wire is equal to the resistivity times the length divided by the cross-sectional area, it is assumed that the very small differences in wire lengths between components do not contribute to any significant resistance differences between wires. The uncertainty associated with these insignificant resistance variations is assumed to be negligible.

If a system design includes lead wire effects that must be considered as a component of uncertainty, the requirement must be included in the design basis. The general design standard is to eliminate lead wire effects as a concern both in equipment design and installation. Failure to do so is a design fault that should be corrected. Unless specifically identified to the contrary, lead wire effects are to be assumed to be negligible. An exception to this is thermocouples and RTDs. These cases require individual evaluation of lead wire effects.

I.6 Calibration Temperature Effects

Calibration temperature is not recorded at CPS; however, the temperature at which an instrument is calibrated is within the normal operating range of the instrument and generally reasonably consistent between calibrations. Although the ambient temperature effects cannot be determined, they are considered small. Therefore, the uncertainty associated with the temperature variations during calibration is assumed to be included within the instrument drift errors. Note that this applies only to temperature changes for calibration. Temperature effects over the expected range of equipment operation and M&TE temperature effects must be considered.


I.7 Atmospheric Pressure Effects

Assuming that the atmospheric pressure might change as much as one inch of mercury, this equates to approximately 0.5 psi. Because this change is small, this effect will be assumed negligible for pressures of 5 psi and larger, unless the pressure transmitter is measuring a relatively small pressure.

I.8 Dust Effects Any uncertainties associated with dust are assumed to be compensated for during normal periodic calibration and are assumed to be negligible.

I.9 RTD Self-Heating Errors

To determine a typical RTD self-heating error, the following computation is provided:

RTD: Rosemount Model 104
Self-Heating Effect: 0.1°C or less
Resistance @ 400°C: 249.61 Ω
Resistance @ 380°C: 242.58 Ω

Resistance/°C around 400°C = (249.61 - 242.58)/20

= 0.35 Ω/°C

Self-Heating Error = 0.1°C x 0.35 Ω/°C = 0.035 Ω

At 400°C: 0.035/249.61 = 0.014%

The above results show that the RTD self-heating error can be assumed to be negligible.
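The Section I.9 numbers can be checked with the following sketch (the values are those quoted above; variable names are illustrative):

    # Sketch: RTD self-heating error estimate from Section I.9.
    self_heating_C = 0.1                    # deg C, vendor self-heating effect
    R_400, R_380 = 249.61, 242.58           # ohms at 400 C and 380 C
    sensitivity = (R_400 - R_380) / 20.0    # ohm per deg C near 400 C

    error_ohm = self_heating_C * sensitivity
    error_pct = error_ohm / R_400 * 100.0
    print(f"{error_ohm:.3f} ohm -> {error_pct:.3f}% at 400 C")   # about 0.035 ohm, 0.014%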

I.10 Digital Signal Processing An accuracy of 0.1% of full scale or less is often specified.

Additionally, linearity and repeatability are often specified as 1 least significant bit (LSB). When this 0.1% uncertainty is compared to the percent uncertainty for the rest of the instrument loop, it is clear that this uncertainty can be neglected.


I.11 Assumptions

As defined in Section 2.2, these assumptions are considered to be defendable and should be used in Section 2.0 of any new or revised calculation performed under this methodology. All standard assumptions shall be listed first without modification, except where an assumption points to another assumption, which may not be the same number as listed (see Assumptions 2.10 and 2.11 below). The Setpoint Program Coordinator may provide corrections and/or new standard assumptions that may not have been incorporated into the latest revision of CI-01.00. It may be necessary to modify some of the CPS Standard Assumptions listed below during the development or revision of calculations. The preparer and reviewer of a calculation must ensure the assumptions used are valid and applicable to their calculation.

2.1 Published instrument vendor specifications are considered to be 2σ values unless specific information is available to indicate otherwise.

2.2 Temperature, humidity, power supply, and ambient pressure errors have been incorporated when provided by the manufacturer. Otherwise, these errors are assumed to be included in the manufacturer's accuracy or repeatability specifications.

2.3 Changes in ambient humidity are assumed to have a negligible effect on the uncertainty of the instruments used in these loops.

2.4 Normal radiation induced errors have been incorporated when provided by the manufacturer. Otherwise, these errors are assumed to be small and capable of being adjusted out each time the instrument is calibrated. Therefore, unless specifically provided, normal radiation errors can be assumed to be included within the instrument drift errors.

2.5 If the manufacturer's instrument performance data does not specify Span, Calibrated Span, Upper Range Limit, etc., the calculation will assume URL because it will result in the most conservative estimate of instrument uncertainty. In all cases the URL is greater than or equal to the calibrated span (CS), and it is conservative to use the URL in calculating instrument uncertainties. This is because, by definition, URL is the maximum upper calibrated span limit for the device.


2.6 This analysis assumes that the instrument power supply stability (PSS) is within +/-5% (+/-1.2 Vdc) of a nominal 24 Vdc.

2.7 The effects of normal vibration (or a minor seismic event that does not cause an unusual event) on a component are assumed to be calibrated out on a periodic basis. As such, the uncertainty associated with this effect is assumed to be negligible and included within the instrument drift errors.

Abnormal vibrations, e.g., levels that produce noticeable effects on equipment, are considered abnormal events that require maintenance or equipment modification.

2.8 Evaluation of M&TE errors is based on the assumption that the test equipment listed in Analysis Section 7.0 is used. Use of test equipment less accurate than that listed will require evaluation of the effect on calculation results.

2.9 It is assumed that the M&TE listed in Section 7.0 is calibrated to the manufacturer's recommendations and within the manufacturer's required environmental conditions.

Temperature related errors are based on the difference between the Calibration Lab temperature and the worst case temperature at which the device is used.

2.10 It is assumed that the reference standards used for calibrating M&TE or Calibration tools shall have uncertainty requirements of not more than 1/4 of the tolerance of the equipment being calibrated. A greater uncertainty may be acceptable as limited by "State of the Art". It is generally accepted that the published vendor accuracy of the M&TE or Calibration tool includes the uncertainty of the calibration standard M&TE when the 4:1 accuracy standard is satisfied.

Hence, Calibration Standard uncertainty is considered negligible to the overall calibration error term and can be ignored. This assumption is based primarily upon inherent M&TE conservatism built into the calculation. Per assumption

[2.11], this calculation considers that the combined M&TE vendor or reference accuracy used for calibration satisfies a 1:1 accuracy ratio to the instrument under calibration. This ratio bounds the upper accuracy limit on the Calibration tool equal to the Vendor's Accuracy (VA) specification for the device under calibration. Use of M&TE more accurate than 1:1 is conservative with respect to this assumption and thereby acceptable without impacting the results of this calculation.


2.11 It is assumed that when M&TE is not specified uniquely in a controlling calibration procedure (e.g., Surveillance Procedure or Preventive Maintenance Procedure), the combined M&TE vendor or reference accuracy used for calibration satisfies a 1:1 accuracy ratio to the instrument under calibration. This accuracy ratio establishes the limit on selected M&TE equal to the Vendor's Accuracy (VA) requirement.

Further, M&TE uncertainty assumed per this discussion is considered a 3σ value regardless of the confidence associated with the related VA term.

2.12 The effects of EMI and RFI are considered negligible for panel mounted meters in administratively controlled EMI/RFI environments, unless a specific uncertainty term is provided by the vendor.

2.13 If the instrument vendor provides no drift information and there is no clear basis for assuming drift is zero, it may be conservatively assumed that the drift over the entire calibration period equals Vendor Accuracy (i.e., VD = VA, 2σ).

2.14 Data from comparable but different instruments may be used when vendor specification is not available or is lacking.

This comparison should evaluate like applications in like environment with the instrument analyzed consistent for form, fit, and function.


APPENDIX J DIGITAL SIGNAL PROCESSING UNCERTAINTIES

This Appendix presents a discussion on digital signal processing and the uncertainties involved with respect to determining instrument channel setpoints for a digital system. This Appendix assumes that a digital signal processing system exists that receives an analog signal and provides either a digital or analog output. In many respects, the digital processor is treated as a black box; therefore, the discussion that follows is applicable to many different types of digital processors.

The digital processor is programmed to perform a controlled algorithm. Basic functions performed are addition, subtraction, multiplication and division, as well as data storage. The digital processor is the most likely component to introduce rounding and truncation errors.

In general, an analog signal is received by the digital processor, filtered, digitized, manipulated, converted back into analog form, filtered again and sent out. The analog input signal is first processed by a filter to reduce aliasing noise introduced by the signal frequencies that are high relative to the sampling rate. The filtered signal is sampled at a fixed rate and the amplitude of the signal held long enough to permit conversion to a digital word. The digital words are manipulated by the processor based on the controlled algorithm. The manipulated digital words are converted back to analog form, and the analog output signal is smoothed by a reconstruction filter to remove high-frequency components.

Several factors affect the quality of the representation of analog signals by digitized signals. The sampling rate affects aliasing noise, the sampling pulse width affects analog reconstruction noise, the sampling stability affects jitter noise and the digitizing accuracy affects the quantization noise.

J.1 Sampling Rate Uncertainty If the sampling rate is higher than twice the analog signal bandwidth, then the sampled signal is a good representation of the analog input signal and contains all the significant information.

If the analog signal contains frequencies that are too high with respect to the sampling rate, aliasing uncertainty will be introduced. Anti-aliasing band limiting filters can be used to minimize the aliasing uncertainty or else it should be accounted for in setpoint calculations.


J.2 Signal Reconstruction Uncertainty Some information is lost when the digitized signal is sampled and held for conversion back to analog form after digital manipulation. This uncertainty is typically linear and about +/-1/2 Least Significant Bit (LSB).

J.3 Jitter Uncertainty The samples of the input signal are taken at periodic intervals. If the sampling periods are not stable, an uncertainty corresponding to the rate of change of the sampled signal will be introduced. The jitter uncertainty is insignificant if the clock is crystal controlled, which it is in the majority of cases.

J.4 Digitizing Uncertainty When the input signal is sampled, a digital word is generated that represents the amplitude of the signal at that time. The signal voltage must be divided into a finite number of levels that can be defined by a digital word n bits long. This word will describe 2^n different voltage steps. The signal levels between these steps will go undetected. The digitizing uncertainty (also known as the quantizing uncertainty) can be expressed in terms of the total mean square error voltage between the exact and the quantized samples of the signal. An inherent digitizing uncertainty of +/-1/2 the least significant bit (LSB) typically exists. The higher the number of bits in the conversion process, the smaller the digitizing uncertainty.

J.5 Miscellaneous Uncertainties Analog-to-digital converters also introduce offset uncertainty, i.e., the first transition may not occur at exactly +/-1/2 LSB. Gain uncertainty is introduced when the difference between the values at which the first transition and the last transition occur is not equal to the ideal value. Linearity uncertainty is introduced when the differences between the transition values are not all equal.

As a rule of thumb, use +/-1/2 LSB for relative uncertainty for the analog-to-digital conversion. For digital-to-analog conversion, the maximum linearity uncertainty occurs at full scale when all bits are in saturation. The linearity determines the relative accuracy of the converters. Deviation from linearity, once the converters are calibrated, is absolute uncertainty. As a rule of thumb, use +/-1/2 LSB for absolute uncertainty and +/-1/2 LSB for linearity uncertainty.


J.6 Truncation and Rounding Uncertainties The effect of truncation or rounding depends on whether fixed-point or floating-point arithmetic is used and how negative numbers are represented. For the sign-and-magnitude, one's complement, and two's complement methods, positive numbers are represented identically. The largest truncation error occurs when all bits discarded are ones.

For negative numbers, the effect of truncation depends on whether sign-and-magnitude, two's complement, or one's complement representation is used. Rounding operates on the magnitude of the number, and the rounding uncertainty is independent of the method of negative-number representation.

For positive numbers and two's complement negative numbers, the truncation uncertainty is estimated by:

-2^(-b) < ET <= 0

For sign-and-magnitude and one's complement negative numbers, the truncation uncertainty is estimated by:

0 <= ET < 2^(-b)

where b is the number of bits to the right of the binary point after truncation or rounding.

Estimation for rounding uncertainty is:

-(1/2)(2^(-b)) < ER <= (1/2)(2^(-b))

where b is the number of bits to the right of the binary point after truncation or rounding. Truncation and rounding affect the mantissa in floating-point arithmetic. The relative uncertainty is more important than the absolute uncertainty, i.e., floating-point errors are multiplicative.

For floating-point arithmetic, the relative uncertainty for rounding is estimated by:

-2 x 2^(-b) < E <= 0

For one's complement and sign-and-magnitude, the truncation uncertainty is estimated by:

-2 x 2^(-b) < E <= 0, for X < 0

0 <= E < 2 x 2^(-b), for X > 0

where X is the sign-and-magnitude value prior to truncation.
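The fixed-point bounds above can be demonstrated numerically; the sketch below quantizes a value to b fractional bits by truncation and by rounding and reports the observed errors against the 2^(-b) scale (the helper function is hypothetical and for illustration only):

    # Sketch: fixed-point truncation vs. rounding error for b fractional bits.
    def quantize(x, b, mode):
        scale = 2 ** b
        if mode == "round":
            return round(x * scale) / scale      # rounding: |error| <= (1/2) * 2**-b
        return int(x * scale) / scale            # truncation toward zero: |error| < 2**-b

    b = 8
    x = 0.123456789
    for mode in ("truncate", "round"):
        xq = quantize(x, b, mode)
        print(f"{mode}: error = {xq - x:+.3e}  (2^-{b} = {2**-b:.3e})")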


APPENDIX K PROPAGATION OF UNCERTAINTY THROUGH SIGNAL CONDITIONING MODULES

This Appendix discusses techniques for determining the uncertainty of a module's output when the uncertainty of the input signal and the uncertainty associated with the module are known. Using these techniques, equations are developed to determine the output uncertainties for several common types of functional modules.

For brevity, error propagation equations (See Table K-1) will not be derived for all types of signal-processing modules. Equations for only the most important signal-processing functions will be developed; however, the methods discussed can be applied to functions not specifically addressed here. The equations derived are applicable to all signal conditioners of that type regardless of the manufacturer.

The techniques presented here are not used to calculate the inaccuracies of individual modules; they are used to calculate uncertainty of the output of a module when the module inaccuracy, input signal uncertainty and module transfer function are known.

This section discusses only two classifications of errors or uncertainties: those which are random and independent and can be combined statistically, and those which are biases and must be combined algebraically. The methods discussed can be used for both random and biased uncertainty components.

It is important to note that the method of calibration or testing may directly affect the use of the information presented in this section. If, for example, all modules in the process electronics for a particular instrument channel are tested together, they may be considered one device. The uncertainty associated with the output of that device should be equal to or less than the uncertainty calculated by combining all individual modules.

K.1 Error Propagation Equations Using Partial Derivatives and Perturbation Techniques There are several valid approaches for the derivation of equations, which express the effect of passing an input signal with an error component through a module that performs a mathematical operation on the signal. The approaches discussed here, which are recommended for use in developing error-propagation equations, are based on the use of partial derivatives or perturbation techniques, i.e.,

changing the value of a signal by a small amount and evaluating the effect of the change on the output. Either technique is acceptable and the results, in most cases, are similar.


For simplicity, this discussion assumes that input errors consist of either all random or all biased uncertainty components. The more general case of uncertainties with both random and biased components is addressed later in this Appendix.

K.2 Propagation of Input Errors through a Summing Function The summing function is represented by the equation:

C = k1*A + k2*B     (K.1)

where:

C = Output signal
A, B = Input signals
k1 and k2 = Constants representing gain or attenuation of the input signals

The summing function is shown on Figure K-1.

Figure K-1 Summing Function

The input signals are summed as shown above to provide an output signal. If the input signals A and B have errors, a and b, the output signal including propagated error is given by:

C + c = k1(A + a) + k2(B + b)

or

C + c = k1*A + k1*a + k2*B + k2*b     (K.2)

where c is the error of the output signal C. Subtracting Equation K.1 from Equation K.2 provides the following estimate of the output signal uncertainty:

c = k1*a + k2*b     (K.3)

Equation K.3 is appropriate if the errors, a and b, are bias errors. If the input errors are random, they can be combined as the square root of the sum of the squares to predict the output error:

c = ((k1*a)^2 + (k2*b)^2)^(1/2)     (K.4)

The above expressions for uncertainty can also be derived using partial derivatives. Start by taking the partial derivative of Equation K.1 with respect to each input:

ΔC = (dC/dA)ΔA + (dC/dB)ΔB

dC/dA = k1(dA/dA) + k2(dB/dA) = k1 + 0 = k1

dC/dB = k1(dA/dB) + k2(dB/dB) = 0 + k2 = k2

The input signals are independent. The input errors, a and b, represent the change in A and B, or ΔA = a and ΔB = b. If c represents the change in C, then ΔC = c, yielding:

c^2 = (k1*a)^2 + (k2*b)^2

or

c = ((k1*a)^2 + (k2*b)^2)^(1/2)
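A brief numerical sketch of the summing-function result is given below; the gains, inputs, and errors are arbitrary illustrative values, not values from any CPS calculation:

    # Sketch: error propagation through a summing module, C = k1*A + k2*B (Section K.2).
    import math

    def summing_output_error(k1, a, k2, b, random_errors=True):
        if random_errors:
            return math.sqrt((k1 * a) ** 2 + (k2 * b) ** 2)   # independent random errors (SRSS)
        return k1 * a + k2 * b                                # bias errors combine algebraically

    print(summing_output_error(1.0, 0.5, 2.0, 0.25))                       # SRSS: ~0.707
    print(summing_output_error(1.0, 0.5, 2.0, 0.25, random_errors=False))  # bias: 1.0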

K.3 Propagation of Input Errors through a Multiplication Function

The multiplication function is represented by the equation:

C = (k1*A)(k2*B)     (K.10)

where:

C = Output signal
A, B = Input signals
k1 and k2 = Constants representing gain or attenuation of the input signals

The multiplication function is shown on Figure K-2.

Figure K-2 Multiplication Function

The input signals are multiplied as shown above to provide an output signal. If the input signals A and B have errors, a and b, the output signal including propagated error is given by:

C + c = k1(A + a) x k2(B + b)     (K.11)

where c is the error of the output signal C. Equation K.11 can be expanded as shown:

C + c = k1*A*k2*B + k1*A*k2*b + k1*a*k2*B + k1*a*k2*b     (K.12)

Subtracting Equation K.10 from Equation K.12 provides the following estimate of the output signal uncertainty:

c = k1*A*k2*b + k1*a*k2*B + k1*a*k2*b

or

c = k1*k2*(A*b + a*B + a*b)

If a and b are small with respect to A and B, the term ab is usually neglected to obtain the final result:

c = k1*k2*(A*b + a*B)

If the input errors are random, they can be combined as the square root of the sum of the squares to predict the output error:

c = k1*k2*((A*b)^2 + (a*B)^2)^(1/2)

K.4 Error Propagation Through Other Functions

Below are equations for other functions derived by the same techniques presented in the previous sections. The algebraic expressions represent the more conservative approach assuming bias errors, and the SRSS expressions apply to random errors. Refer to Table 1 in Reference 5.3, ISA-RP67.04, Part II, for more information.

Function: Division, C = (k1*A)/(k2*B)
    c = (k1/k2)[(B*a - A*b)/B^2]   (Algebraic)
    c = (k1/k2)[((B*a)^2 + (A*b)^2)^(1/2)/B^2]   (SRSS)

Function: Logarithmic, C = k1 + (k2*Log A)
    c = [k2*(Log e)/A]*a   (Algebraic)
    c = [k2*(Log e)/A]*a   (SRSS)

Function: Squaring, C = A^2
    c = (2*A*a) + a^2   (Algebraic)
    c = 2*A*a   (SRSS)

Function: Square Root Extraction, C = (A)^(1/2)
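The same results can be obtained numerically with the perturbation technique described in Section K.1. The sketch below applies it to the square root extraction function listed last above; the analytic comparison value a/(2*sqrt(A)) is the partial-derivative result and is included only as a check (it is not quoted in the table above):

    # Sketch: perturbation estimate of error propagation for C = sqrt(A) (Sections K.1, K.4).
    import math

    def perturbation_error(func, A, a):
        # Perturb the input by its error and observe the change in the output.
        return func(A + a) - func(A)

    A, a = 100.0, 0.5
    numeric = perturbation_error(math.sqrt, A, a)
    analytic = a / (2.0 * math.sqrt(A))    # partial-derivative result for the square root
    print(f"perturbation: {numeric:.5f}, analytic: {analytic:.5f}")   # ~0.02497 vs 0.02500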

APPENDIX L GRADED APPROACH TO UNCERTAINTY ANALYSIS

L.1 Introduction

The methodology presented in this engineering standard is intended to establish a minimum 95% probability with a high confidence that a setpoint will actuate when required. The methodology is based, in part, on ISA-S67.04, Reference 5.3.

When a calculation is prepared in accordance with this engineering standard, it will accomplish a rigorous review of the instrument loop layout and design. Each element of uncertainty will be evaluated in detail and the estimated loop uncertainty justified at length. The setpoint will be carefully established with respect to the process analytical limit and channel uncertainty. A calculation prepared with this engineering standard will be comprehensive and can typically take an engineer at least two weeks to prepare. This level of effort is justified for those calculations involving reactor safety and integrity.

The importance of the various types of safety-related setpoints differs, and as such it may be appropriate to apply different setpoint determination requirements. As described in Reference 5.3, for automatic setpoints that have a significant importance to safety, for example, those required by the plant safety analyses and directly related to the Reactor Protection System, Emergency Core Cooling Systems, Containment Isolation, and Containment Heat Removal, a stringent setpoint methodology should consider all sources of instrument error. However, for setpoints that do not have the same level of stringent requirements, for example, those that are not credited in the safety analyses or that do not have limiting values, the setpoint determination methodology could be less rigorous. The level of detail should be commensurate with the importance of the application.

Programmatic setpoint errors at other power stations have been attributed to the use of multiple setpoint methodologies for engineering calculations. These stations have incorporated corrective actions that implement setpoint and loop uncertainty analyses that are balanced with the importance or significance of the related plant system safety function. This approach is acceptable and is consistent with a draft recommended practice by Instrument Society of America (ISA) standards (ISA dTR 67.04.09, Graded Approaches to Setpoint Determination, Draft Technical Report, 1994, and the subsequent Draft 4, May 2000). This Appendix provides guidance regarding how to satisfy the needs for proper setpoint control while allowing for simpler approaches for less critical applications.


The CPS setpoint methodology will establish the basis of a graded setpoint program by grouping the instrument loops according to their safety significance. The graded approach to setpoint determination provides the maximum available tolerance to optimize the safety and reliability of the plant.

Graded approaches are based on the fact that all the rigor and conservatism established in RP67.04-1994, Part II may not be warranted for all setpoints in a nuclear power plant. Per RP67.04-1994, a nuclear plant licensee may establish a multilevel classification scheme by documenting the rationale used to establish the classification. Implementation of a graded approach to setpoints requires the users to identify how critically important each setpoint is. For example, setpoints for RPS and ESFAS are to be maintained with a high degree of conservatism and a high level of confidence. Setpoints for Reg. Guide 1.97 Type C variables for post-accident monitoring do not require the same level of confidence. Therefore, a graded approach, with classification for setpoints, will help proper maintenance of safety-grade nuclear instrumentation without compromising the safe and reliable operation of the plant.

L.2 GRADED CLASSIFICATIONS CPS Setpoint Control distinguishes between applications by providing the following classifications of setpoint categories in terms of safety significance. For example, Setpoint Category 1 instrument loops are deemed safety significant and calculations for this class of instruments would require full rigor and conservatism established in RP67.04-1994, Part II for safety related setpoints.

The Setpoint Category Tables are presented in order of descending safety significance and therefore, calculation rigor.


CPS Graded Approach Recommendations

SETPOINT CATEGORY 1 - FUNCTIONAL DESCRIPTION:
RPS (Reactor Protection System).
ESF (Engineered Safety Features).
ECCS (Emergency Core Cooling System).
PCIS (Primary Containment Isolation System).
SCIS (Secondary Containment Isolation System).
Emergency Reactor Shutdown, Containment Isolation, Reactor Core Cooling, Containment and Reactor Heat Removal.
Prevent/mitigate a significant release of radioactivity.

SETPOINT CATEGORY 2 - FUNCTIONAL DESCRIPTION:
Ensure compliance with Tech Specs but are not Level 1 setpoints.
Provide setpoints/limits for Reg. Guide 1.97 Type A variables.

SETPOINT CATEGORY 3 - FUNCTIONAL DESCRIPTION:
Provide setpoints/limits for Reg. Guide 1.97 Type B, C, D variables.
Provide setpoints/limits for other regulatory requirements or operational commitments.
Provide setpoints/limits that are associated with personnel safety or equipment protection.

SETPOINT CATEGORY 4 - FUNCTIONAL DESCRIPTION:
Provide setpoints/limits not identified with Levels 1, 2, and 3 above. Require documentation that engineering judgement, industry or station experience, or other methods have been used to set or identify an operating limit.
Provide setpoints/limits for station EOP requirements. The GE BWR methodology for EOPs does not require or desire treatment of uncertainties.


The following guidelines should be followed with regard to the level of rigor required for a setpoint determination.

Cat. 1 and 2: A Calculation in accordance with CC-AA-309 and this standard is required. Setpoints must be prepared in accordance with this standard and must account for all known sources of uncertainty. The expected results of these calculations are that they establish a well-documented basis for the 95% probability that the setpoint will actuate as desired.

Cat. 3: A Calculation in accordance with CC-AA-309 and this standard is required. Setpoints need not meet all the requirements of this engineering standard, including the required level of detail or depth of analysis, unless they involve nuclear safety-related setpoints protecting a safety limit or initial condition, or supporting a primary success path in any design basis accident or transient analysis function. Cat. 3 setpoints are normally associated with system control functions. Documented engineering judgement can be applied to those uncertainties that are not readily known or available.

Cat. 4: A documented basis for the setpoint or limit is required but may be captured in an ECN, an Engineering Evaluation, or a Calculation. Engineering judgement can be applied to those uncertainties that are not readily known or available. Industry or station experience or other methods can be used to set the limit. Cat. 4 setpoints need not meet the requirements for accounting for all known sources of uncertainty, including the required level of detail or depth of analysis.


L.3 Correction for Single-Sided Setpoints

The methodology presented in this engineering standard is intended to establish a 95% probability with a high confidence that a setpoint will actuate when required. Without consideration of bias effects, the probability is two-sided and symmetric about the mean as shown in Figure L-1.

Figure L-1 Typical Two-Sided Setpoint at 95% Level

Figure L-1 shows the configuration in which there may be high and low setpoints with a single process. In some cases, there will only be a single setpoint associated with a particular sensor. For example, a pressure switch may actuate a high setpoint when steam dome pressure is too high. In this case a 95% probability is desired for the high pressure setpoint only, as shown in Figure L-2.


Figure L-2 Typical One-Sided Setpoint at 95% Level

A two-sided normally distributed probability at the 95% level will have 95% of the uncertainties falling within +/-1.96σ (see Example L-1), with 2.5% below -1.96σ and 2.5% above +1.96σ. However, for one-sided normally distributed uncertainties, 95% of the population will fall below +1.645σ (see Table M-2). If the concern is that a single value of the process parameter is not exceeded and the single value is approached only from one direction, the appropriate limit to use for the 95% probability is +1.645σ (or -1.645σ, depending on the direction from which the setpoint is approached). Provided that the individual component uncertainties were approached at the 95% level, or greater, the final calculated uncertainty result can be corrected for a single side of interest by the following expression:

1.645/1.96 = 0.839     (L.1)

Example L-1

Suppose the calculated uncertainty for the High Steam Dome Pressure channel is +/-2% of span and this represents a 95% probability for the expected uncertainty. Suppose the uncertainty applies only to the high pressure trip setpoint. In this case we are only concerned with what happens on the high end of span (near the setpoint). The setpoint can be established for a single side of interest by multiplying the Equation L.1 correction by the calculated channel uncertainty, or:

(0.839) (2%) = 1.68%

Hence, rather than require that the setpoint allowance include a 2%

uncertainty value, only a 1.68% allowance needs to be considered.

This correction can provide additional margin for normal system operations.
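Example L-1 reduces to a one-line scaling, sketched below with the example's values:

    # Sketch: one-sided correction of a two-sided 95% uncertainty (Example L-1).
    two_sided_95 = 2.0            # % span, calculated channel uncertainty (95%, two-sided)
    correction = 1.645 / 1.96     # one-sided 95% limit divided by two-sided 95% limit

    one_sided_95 = correction * two_sided_95
    print(f"one-sided allowance = {one_sided_95:.2f}% span")   # about 1.68% span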


APPENDIX M USING THE RESULTS OF A STATISTICAL DRIFT ANALYSIS

Section items M.1 to M.3 are adopted from Ref. 5.27, NES-EIC-20.04, Rev. 3, "Analysis of Instrument Channel Setpoint Error and Instrument Loop Accuracy," Appendix J.

The drift analyses herein, intended for use in the setpoint and channel error calculations, are those performed for CPS' transition to a 24-month refueling cycle (Ref. 19, Assessment EA # 2003-06220) and future updates in accordance with Ref. 5.13, ER-AA-520, Rev. 3, "Instrument Performance Trending." The analyses were done in accordance with Ref. 5.27, Appendix J, which is in compliance with Ref. 5.26, NRC Generic Letter 91-04, "Changes in Technical Specification Surveillance Intervals to Accommodate a 24-Month Fuel Cycle," dated April 2, 1991, and Ref. 5.32, EPRI TR-103335, Rev. 1, "Statistical Analysis of Instrument Calibration Data: Guidelines for Instrument Calibration Extension/Reduction Programs." The CPS surveillance AF/AL data is from loop calibrations for the nominal trip setpoint.

M.1 The data reduction has generated a "drift" value, but that number includes several uncertainties in addition to the classical drift. If the determined drift value is used in uncertainty calculations, the following uncertainties can normally be eliminated. To replace these values state that they are included in the calculated drift tolerance interval value (DTIc) and set their individual values to zero.

1.1 Reference Accuracy - The reference accuracy of the instrument is included in the calibration data and can be removed from the uncertainty calculation.

1.2 M&TE - As long as the calibration process uses the same, or more accurate, test equipment then this uncertainty is included in the calibration data and can be removed from the uncertainty calculation.

1.3 Drift - The true drift is included in the determined drift and is included in the calibration data and can be removed from the uncertainty calculation.

1.4 Normal Environmental Effects - For the instruments that are included in the calibration, the effects of variations in radiation, humidity, temperature, vibration, etc. experienced during the calibration are included in the calibration data and can be removed from the uncertainty calculation. These terms cannot be removed from the uncertainty calculations if these components see different conditions or magnitudes of the parameter, such as vibration or temperature, while operating than during calibration.


1.5 Power Supply Effects - If the instruments are attached to the same power supply during calibration that is used during operation, then the effects are included in the calibration data and can be removed from the uncertainty calculation.

1.6 Setting Tolerance - If the setting tolerance is such that it is less than the determined drift then this tolerance will show up in that determined drift and can be removed from the uncertainty calculation. If the ST is much larger than the determined drift it will not normally be used in the calibration process and will not be seen in the determined drift. In this case the ST can be combined with the determined drift using SRSS.

M.2 For cases where there are time dependent drifts, the time frame used for determining the drift should be the normal surveillance interval plus twenty-five percent. Time dependent drift that is random is assumed to be normally distributed and can be combined using the Square Root Sum of the Squares method for intervals beyond the given interval.

M.3 Time independent drift can be assumed constant over the Valid Interval.

M.4 Loop As Found Tolerance - Since AFT is made up of drift, reference accuracy, and calibration errors including setting tolerances, the AFT will generally be set equal to the calculated Drift Tolerance Interval when valid drift results are available.

AFTL = DTIc

M.5 When applying DTIc to an existing Method 1 calculation (the preferred method in this standard for calculating a setpoint for a function with an analytical limit), the reference accuracy used to develop the AV may be zeroed out. CPS 24-month drift analysis experience, however, showed it was typically not zeroed (conservatively) because a TS change to the AV would be required in order to take advantage of the increased operating margin it would provide to the setpoint.

M.6 Device As Found Tolerance - Since the CPS AF/AL data is for loops, the device AFT values must still be calculated in accordance with the Section 4.5.4 equations. Note that other plants' drift analyses are typically not based on loop calibrations.


Instrument Setpoint APPENDIX M - USING THE RESULTS OF Calculation Methodology A STATISTICAL DRIFT ANALYSIS REVISION 3 REVISION 3 M.7 The use of AF/AL data with fewer valid inputs than 30 is not allowed by ref. 5.27 and NRC RAI experience for extension of surveillance interval to 24 months. Where fewer than 30 valid points were available, other means of estimating drift were used such as covered in Appendix sections A.2.6 and C.3.4. In such cases the AF/AL data may however be used to validate assumptions for drift.

M.8 Existing calculations that have already calculated an AFT per this standard were not revised to incorporate DTIc if the experience-based DTIc was less than the existing AFT.

M.9 Future generation of new or revised DTIc values will be treated similarly. If the DTIc is less than the existing AFT the existing calculation will remain as is.


APPENDIX N
STATISTICAL ANALYSIS OF SETPOINT INTERACTION

Frequently, there is more than one setpoint associated with a process control system. For example, a tank may have high and low level setpoints that are designed to prevent overfilling or completely emptying the tank. Each setpoint has a lower and upper actuation uncertainty and, in some cases, two or more setpoints can be very close to one another (or overlap) when all uncertainties are included. A calculation that involves multiple setpoints should also confirm that the setpoints are adequate with respect to one another.

Setpoints that are prepared in accordance with this engineering standard represent a 95% probability, with a high confidence (approximately 95%), that the setpoint will actuate within the defined uncertainty limit. The uncertainty variation about the setpoint is assumed to be approximately normally distributed. If two setpoints are close together, it could appear that they have an overlap region as shown in Figure N-1.

Figure N-1 Distribution of Uncertainty about Two Setpoints

As shown in Figure N-1, setpoint overlap can occur when Setpoint 1 drifts high at the same time that Setpoint 2 drifts low. The probability of this occurrence can be estimated based on the behavior of the normal distribution. For a normal distribution, 68.3% of the total probability is contained within ±1.0σ of the mean, with 15.85% in either tail. Because the setpoints have been statistically determined, it is reasonable to evaluate the possibility of setpoint overlap statistically also. It is highly unlikely for one setpoint to drift by the 1.0σ value in the high direction when the other setpoint simultaneously drifts low by the 1.0σ value. The probability, PT, of this occurring is:

PT = (PA)(PB) = (0.1585)(0.1585) = 0.0251 = 2.51%

The above probability readily shows the low likelihood of setpoint overlap even at the 1.0σ level. The probability becomes virtually insignificant at the 1.5σ level. In this case, 86.64% of the total probability is contained within the ±1.5σ level, with 6.68% in either tail. The probability of one setpoint drifting high by 1.5σ while the other setpoint simultaneously drifts low by 1.5σ is:

PT = (PA)(PB) = (0.0668)(0.0668) = 0.0045 = 0.45%

The above approach can be used to demonstrate the low likelihood of setpoint overlap. If setpoints appear to have a higher-than-desired probability of overlap, the electrical circuits should be reviewed to determine the possible consequences of the overlap.
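The tail probabilities quoted above follow directly from the standard normal distribution. The following is a minimal illustrative sketch (Python) of the overlap probability estimate; it assumes independent, normally distributed setpoint uncertainties, consistent with the discussion above.

```python
import math

def upper_tail(k):
    """P(Z > k) for a standard normal distribution."""
    return 0.5 * math.erfc(k / math.sqrt(2))

def overlap_probability(k):
    """Probability that one setpoint drifts high by k sigma while the other
    independently drifts low by k sigma (both normally distributed)."""
    return upper_tail(k) ** 2

for k in (1.0, 1.5):
    print(f"{k:.1f} sigma: tail = {upper_tail(k):.4f}, "
          f"overlap probability = {overlap_probability(k) * 100:.2f}%")
# Results are approximately 2.5% at 1.0 sigma and 0.45% at 1.5 sigma,
# consistent with the values computed above.
```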


APPENDIX O
INSTRUMENT LOOP SCALING

O.1 Introduction

CPS calibration procedures and data sheets include head corrections and scaling. CPS procedure 8801.05, Reference 5.17, controls the method of instrument corrections. For calculations developed by this methodology, the scaling will be evaluated and documented in Attachment 1 of the calculation. Scaling instrument loops and developing calibration correction values should be done in a consistent and correct manner. This vital instrument engineering function must be deliberately integrated into maintenance and engineering activities. This Appendix provides guidance on the analysis of an instrument loop and the preparation of scaling calculations.

A process instrumentation loop (circuit) typically consists of three distinct sections:

1. Sensing: The parameter to be measured is sensed directly by some mechanical device. Examples include a flow orifice for flow, a differential pressure cell for level, a bourdon tube for pressure, and a thermocouple for temperature measurement.

The sensing element may include a transmitter that converts the process signal into an electrical signal for ease of transmission.

2. Signal Processing: The electrical signal sent by the sensor/transmitter may be amplified, converted, isolated, or otherwise modified for the end-use devices.
3. Display or Actuation: The process signal is used as a display, as an actuation setpoint above or below some threshold, or as part of some final actuation device logic.

Figure O-1 shows a typical instrument application. As shown, a level transmitter monitors a tank's water level. A power supply provides a constant voltage to the transmitter and the transmitter outputs a current proportional to the tank level. The indicator displays a tank level corresponding to the electrical current. If the electrical current is above (below) a predetermined level, indicative of a high (low) tank level, the trip unit actuates. The current is provided to the controller for some control action.


Figure O-1 Simple Instrument Loop for Level Measurement

The above example of a tank level measurement illustrates the various elements of an instrument loop. Regardless of the application, an instrument loop measures some parameter -

temperature, pressure, flow, level, etc. - and generates signals to monitor or aid in the control of the process. The instrument loop may be as simple as a single indicator for monitoring a process, or can consist of several sensor outputs combined to create a control scheme.

An instrument and control engineer will usually design an instrument circuit such that the transmitter (or other instrument) output is linearly proportional to the measured process. Consider the tank level instrument loop just described. As tank level varies from 0% to 100%, we want a transmitter electrical output that can be scaled in direct proportion to the actual tank level. A typical transmitter output signal is shown in Figure O-2. The output signal varies linearly with the measured process parameter, with a low value of 4 milliamps (mA) and a high limit of 20 mA. Under ideal conditions, a zero tank level would result in a 4 mA transmitter output and a 100% level would correspond to a 20 mA output (or, for a 10 to 50 mA transmitter, 10 mA and 50 mA, respectively).


Figure O-2 Desired Relationship between Measured Process and Sensor/Transmitter Output (tank level, 0% to 100%, versus signal, 4 to 20 mA)

Example O-1

Referring to Figure O-2, what is the expected transmitter output signal if tank level is 50%? The tank level varies from 0% to 100%

for a transmitter output span of 16 mA (4 to 20 mA). The transmitter output signal should be:

Transmitter Output = 4 mA + (0.50)(16 mA) = 12 mA

As expected, the transmitter output is at the half-way point of its total span. The above equation will be developed in more detail in the following section.


Example O-2

Referring again to Figure O-2, what is the expected tank level if the transmitter signal is 18 mA?

Tank Level = [(18 mA - 4 mA)/(16 mA span)](100%) = 87.5%
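Examples O-1 and O-2 are simple linear conversions between the 0% to 100% process range and the 4 to 20 mA signal range. The following is a minimal illustrative sketch (Python) of both conversions; the function names and default arguments are assumptions made for illustration.

```python
def level_to_current(level_pct, low_ma=4.0, high_ma=20.0):
    """Convert a 0-100% process value to its ideal transmitter output in mA."""
    return low_ma + (level_pct / 100.0) * (high_ma - low_ma)

def current_to_level(signal_ma, low_ma=4.0, high_ma=20.0):
    """Convert a transmitter output in mA back to the ideal process value in %."""
    return (signal_ma - low_ma) / (high_ma - low_ma) * 100.0

print(level_to_current(50.0))   # Example O-1: 12.0 mA
print(current_to_level(18.0))   # Example O-2: 87.5 %
```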

O.2 Scaling Terminology

Instrument scaling, applied to process instrumentation, is a method of establishing a relationship between a process sensor input and the signal conditioning devices that transmit and condition the sensor's output signal. The goal is to provide an accurate representation of the measured parameter throughout the measured span. In its simplest perspective, scaling converts process measurements (temperature, pressure, differential pressure, etc.) from engineering units (°F, psig, etc.) into analog electrical units (VDC, mADC, etc.).

A typical instrument loop consists of a sensor, power supply, and end-use instruments as shown in Figure O-3. Whereas Figure O-1 showed the functionality of the circuit, Figure O-3 shows the instrument loop as an actual circuit. All components are connected in a series arrangement. The power supply provides the necessary voltage for the pressure transmitter to function. In response to the measured process, the pressure transmitter provides a 4 to 20 mA output current.


Figure O-3 Simplified Instrument Loop Schematic

Suppose the pressure transmitter shown in Figure O-3 monitors tank pressure and is designed to operate over a process range of 1700 to 2500 psig. The transmitter has an elevated zero, or pedestal, of 1700 psig. The transmitter has an analog output signal of 4 to 20 mADC.

Other components in Figure O-3 include a pressure indicator and trip unit, each sensing the same 4 to 20 mA signal from the transmitter. The loop signal for each device is developed from the transmitter output via the voltage developed across a 250-ohm input resistor; this arrangement is typical. As the current through the input resistors varies from 4 to 20 mA, the voltage developed across each resistor varies from 1 V to 5 V, maintaining a linear relationship between the measured process and the resultant output signal. The only purpose of the resistors is to convert the current signal to a voltage signal.


As configured in this example, the 1700 to 2500 psig process signal has a span of 800 psig, which corresponds to the 1 to 5 VDC (or 4 VDC span) across the input resistor. The scale factor is defined as the ratio of the analog electrical signal span to the process span, or 4 VDC/800 psig = 0.005 VDC/psig. Accounting for the 1700 psig input pedestal and the 1 VDC output pedestal, the scaling equation that relates the input to the output is given by:

Ep = (0.005 V/psig)(P - 1700 psig) + 1 V

where,
Ep = Voltage corresponding to the input pressure
P = Input pressure value between 1700 and 2500 psig

The above scaling equation provides an exact relationship between the process variable and the voltage developed across an input resistor for the stated configuration.
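The following is a minimal illustrative sketch (Python) of this scaling equation. The function name and the check values printed at the end are assumptions made for illustration of the example configuration above.

```python
def pressure_to_volts(p_psig, lower=1700.0, upper=2500.0, v_low=1.0, v_high=5.0):
    """Scaling equation for the example loop: convert tank pressure (psig) to the
    voltage developed across the 250-ohm input resistor."""
    scale_factor = (v_high - v_low) / (upper - lower)  # 4 VDC / 800 psig = 0.005 VDC/psig
    return scale_factor * (p_psig - lower) + v_low

print(pressure_to_volts(1700.0))  # 1.0 V at the lower range limit
print(pressure_to_volts(2100.0))  # 3.0 V at mid-span
print(pressure_to_volts(2500.0))  # 5.0 V at the upper range limit
```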

O.3 Module Equations

Module equations are commonly referred to as transfer functions.

They define the relationship between a module's input and output signals and are just scaling equations that describe this input/output relationship. Transfer functions are typically classified as either static or dynamic.

Static transfer functions are time-independent and can be either linear or nonlinear. Modules that typically have static transfer functions include:

  • Input resistors (I/V modules)
  • Isolators
  • Summators

The module equation of a static device will sometimes include a gain adjustment as well. For example, a simple summator may have the following module equation:

Eout = G(k1E1 + k2E2 + kBEB) + 1 V

where,
k1, k2 = Input signal gains
kB = Bias input gain
E1, E2 = Input voltages
EB = Bias voltage
G = Output gain
Eout = Output voltage

O.4 Scaling Calculation

After the process algorithm, module equations, and required ranges have been determined, the scaling calculation can be completed. The scale factor is used with the scaling equation to derive the voltage equation from the process equation. An overall system equation can be developed by combining module equations, as applicable. For example, assume the use of two modules in an instrument loop.

The first module has two inputs, E1 and E2, that are summed together with a module gain of G1. The simplified equation for this module is given by:

EA = G1(E1 + E2)

Now, assume that the output, EA, is summed with another input, E3 ,

which has a module gain of G2. The resulting module equation is:

Eout = G2(E3 + EA)

or, substituting in for EA,

Eout = G2[E3 + G1(E1 + E2)]


The expression for each voltage above can itself be complex, but the result is an overall scaling equation that defines the system operation. Once a scaling equation has been developed and the scaling calculation performed, the equation should be checked by inputting typical process values and determining whether reasonable analog values are calculated. Each module should be tested separately to ensure its accuracy before combining it with other modules. As part of the test process, include minimum and maximum process values to ensure that the limits work as expected.
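The following is a minimal illustrative sketch (Python) of this module-by-module check for the two-module example above. The gain values and test voltages are hypothetical assumptions, not values from any CPS scaling calculation.

```python
# Two-module example from Section O.4: EA = G1(E1 + E2), Eout = G2(E3 + EA).
G1 = 0.5   # hypothetical gain of the first (summing) module
G2 = 2.0   # hypothetical gain of the second module

def module_1(e1, e2):
    """First module: sums two input voltages with gain G1."""
    return G1 * (e1 + e2)

def module_2(e3, ea):
    """Second module: sums a third input with the first module's output, gain G2."""
    return G2 * (e3 + ea)

def system_equation(e1, e2, e3):
    """Overall system equation: Eout = G2[E3 + G1(E1 + E2)]."""
    return module_2(e3, module_1(e1, e2))

# Check each module separately, then the combined equation, including the
# minimum and maximum expected input voltages.
for e1, e2, e3 in [(1.0, 1.0, 1.0), (5.0, 5.0, 5.0)]:
    ea = module_1(e1, e2)
    print(f"E1={e1}, E2={e2} -> EA={ea}; E3={e3} -> Eout={system_equation(e1, e2, e3)}")
```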


APPENDIX P
RADIATION MONITORING SYSTEMS

Radiation monitoring systems have unique features that complicate an uncertainty analysis. The system design, detector calibration, and display method can all reduce the system accuracy. Whenever evaluating a radiation monitoring system, review References 5.9 and 5.33 for additional information, along with:

  • The radiation monitoring system operation and maintenance manual
  • The radiation monitoring system calibration procedures

The following should be considered as part of any uncertainty analysis:

Detector Measurement Uncertainty

A radiation monitoring system detector's response varies with the following parameters:

  • Energy level of the incident particles.
  • Count rate of the detected particles.
  • Type of particle being counted (depending on application, the particles may be gamma photons, neutrons, or beta particles).

Detector Count Rate Measurement Uncertainty

The detector's measurement uncertainty can be affected by the following:

  • On the low end of the scale, the uncertainty in count rate response is affected by signal-to-noise ratio effects.
  • On the high end of the scale, the uncertainty in count rate is affected by pulse pile-up, in which discrete pulses are missed.
  • Throughout the detection range, the alignment of the source to the detector geometry can impact the measurement uncertainty. For example, the containment high range radiation monitors need an unobstructed view of the containment dome. Blockages such as concrete walls can degrade the measurement capability of the detector.

Detector Energy Response Uncertainty

The detector energy response uncertainty can be affected by the following:



  • On the low end, the discriminator setting and the energy sensitivity of the detector.

  • On the high end, the point at which a rise in incident particle energy does not result in a change in pulse height output.

  • Throughout the detection range, by a degrading failure of the system.

For most permanently installed radiation detectors, the detector is designed to respond to incident particles over a certain range of energies. The count rate output is then correlated to a mR/hr or pCi/cc indication by the application of a conversion factor, without regard to differing incident particle energies.

When the plant is shut down, the detector's indicated count rate is generally derived from lower-energy particles. When the plant is operating, the particle energy tends to be higher. In this case, a typical detector will display a higher count rate, even if the number of incident particles per unit time remains the same. As the incident particle energy level changes, the probability of detection changes for a given count rate. During initial calibration, this difference is accounted for by exposing the detector to sample streams of different radioisotopes and measuring the detector's response.

After in-plant installation, the calibration is checked by exposing the detector to fixed external sources of different radioisotopes.

The detector coefficient represents the sensitivity of the detector, which is typically specified in Amp/(R/hr). The sensitivity is provided by the vendor for each detector and can be different if the detectors are ever replaced.
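As a minimal illustrative sketch (Python) of how the detector coefficient is used, the detector output current divided by the vendor-supplied sensitivity gives the indicated dose rate. The sensitivity and current values below are hypothetical.

```python
# Hypothetical detector sensitivity in Amps per (R/hr); actual values are
# vendor-supplied and detector-specific.
SENSITIVITY_A_PER_R_HR = 1.0e-11

def indicated_dose_rate(detector_current_amps, sensitivity=SENSITIVITY_A_PER_R_HR):
    """Convert detector output current (Amps) to indicated dose rate (R/hr)
    using the detector coefficient (sensitivity)."""
    return detector_current_amps / sensitivity

print(indicated_dose_rate(5.0e-9))  # 500.0 R/hr for the assumed sensitivity
```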

Post-accident radiation measurement and indication accuracy for containment area monitoring is specified in Regulatory Guide 1.97, Table 2, Footnote 7: "Detectors should respond to gamma radiation photons within any energy range from 60 keV to 3 MeV with an energy response accuracy of ±20% at any specific photon energy from 0.1 MeV to 3 MeV. Overall system accuracy should be within a factor of 2 over the entire range." Revision 3 of RG 1.97 revised the above footnote to omit the ±20% accuracy requirement for the detector.

Now the containment area radiation monitors "should respond to gamma radiation photons within any energy range from 60 keV to 3 MeV with a dose rate response accuracy within a factor of 2 over the entire range." Considering the prior revision, it is clear that the intent of the current "factor of 2" requirement applies to the overall system accuracy and not to the detector accuracy alone. This interpretation is consistent with the requirements placed on other radiation monitoring devices in the same table.

The uncertainty terms identified in radiation monitoring technologies are either percent of reading or in Equivalent Linear Full Scale (ELFS), which is the same as percent of span provided the span and full scale are equivalent. The method for converting percent of reading uncertainties to percent ELFS using the "error factor" concept is based on the model from an example radiation trip calculation in Reference 5.3, ISA S67.04, Part II.

Conversion of this error to an ELFS error permits combining the percent of reading error with other string errors.

Consider the following example: a containment area monitor indicates R/hr over an eight (8) decade range, and the uncertainty calculated for the detector is 12.2%.

This detector accuracy error can be expressed as error factors of:

(1.0 + 0.122)/1.0 = 1.122 and (1.0 - 0.122)/1.0 = 0.878

ELFS is calculated for both factors as:

ERROR FACTOR = 10^(DX), where D = 8, the number of decades on the meter, and X = ELFS as a decimal value.

X(+) = (log(1.122)/8)(100%) = +0.62% ELFS
X(-) = (log(0.878)/8)(100%) = -0.71% ELFS

The error will be assumed to be symmetrical and set at the larger of the two values, thus EDET(ref) = ±0.71% ELFS.
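The following is a minimal illustrative sketch (Python) of this percent-of-reading to ELFS conversion. The function name is an assumption for illustration; the inputs are the detector values from the example above.

```python
import math

def reading_error_to_elfs(pct_of_reading, decades):
    """Convert a symmetric percent-of-reading error on a multi-decade (log scale)
    indicator to a bounding Equivalent Linear Full Scale (ELFS) error in percent.
    Uses ERROR FACTOR = 10^(D*X), i.e., X = log10(error factor) / D."""
    frac = pct_of_reading / 100.0
    elfs_values = [math.log10(factor) / decades * 100.0
                   for factor in (1.0 + frac, 1.0 - frac)]
    return max(abs(x) for x in elfs_values)

# Containment area monitor example: 12.2% of reading over an 8-decade range.
print(f"EDET(ref) = +/-{reading_error_to_elfs(12.2, 8):.2f}% ELFS")  # ~0.71% ELFS
```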

Whenever evaluating the uncertainty of a radiation monitoring system, the periodic calibration methods are particularly important to consider. EPRI TR-102644, Reference 5.33, provides additional guidance. Also, the applicable system engineer should be contacted for additional expertise.


APPENDIX Q
ROSEMOUNT LETTERS

Rosemount Nuclear Instruments, Inc.
12001 Technology Drive
Eden Prairie, MN 55344 USA
Tel: (612) 828-8252
Fax: (612) 828-8280

4 April 2000

Ref: Grand Gulf Nuclear Station message on INPO plant reports, subject: Rosemount Instrument Setpoint Methodology, dated March 9, 2000

Dear Customer:

This letter is intended to eliminate any confusion that may have arisen as a result of the referenced message from Grand Gulf. The message was concerned with statistical variation associated with published performance variables and how the variation relates to the published specifications for Rosemount Nuclear Instruments, Inc. (RNII) pressure transmitter models 1152, 1153 Series B, 1153 Series D, 1154, and 1154 Series H. According to our understanding, the performance variables of primary concern are those discussed in GE Instrument Setpoint Methodology document NEDC 31336, namely:

1. Reference Accuracy

2. Ambient Temperature Effect
3. Overpressure Effect
4. Static Pressure Effects
5. Power Supply Effect

It is RNII's understanding that GE and the NRC have accepted the methodology of using transmitter testing to ensure specifications are met as a basis for confirming that specifications are ±3σ. The conclusions we draw regarding specifications being ±3σ are based on manufacturing testing and screening, final assembly acceptance testing, periodic (e.g., every 3 months) audit testing of transmitter samples, and limited statistical analysis. Please note that all performance specifications are based on zero-based ranges under reference conditions. Finally, we wish to make clear that no inferences are made with respect to confidence levels associated with any specification.
1. Reference Accuracy.

All (100%) RNII transmitters, including models 1152, 1153 Series B, 1153 Series D, 1154 and 1154 Series H, are tested to verify accuracy to ±0.25% of span at 0%, 20%, 40%, 60%, 80% and 100% of span. Therefore, the reference accuracy published in our specifications is considered ±3σ.

2. Ambient Temperature Effect

All (100%) amplifier boards are tested for compliance with their temperature effect specifications prior to final assembly. All sensor modules, with the exception of model 1154, are temperature compensated to assure compliance with their temperature effect specifications. All (100%) model 1154, model 1154 Series H, and model 1153 gage and absolute pressure transmitters are tested following final assembly to verify compliance with specification. Additionally, a review of audit test data performed on final assemblies of model 1152 and model 1153 transmitters not tested following final assembly indicates conformance to specification. Therefore, the ambient temperature effect published in our specifications is considered ±3σ.

3. Overpressure Effect

Testing of this variable is done at the module stage. All (100%) range 3 through 8 sensor modules are tested for compliance to specifications. We do not test range 9 or 10 modules for overpressure for safety reasons. However, design similarity permits us to conclude that statements made for ranges 3 through 8 would also apply to ranges 9 and 10. Therefore, the overpressure effect published in our specifications is considered ±3σ.

4. Static Pressure Effects

All (100%) differential pressure sensor modules are tested for compliance with static pressure zero errors. Additionally, Models 1153 and 1154 Ranges 3, 6, 7, and 8 are 100% tested after final assembly for added assurance of specification compliance. Audit testing performed on ranges 4 and 5 has shown compliance to the specification. Therefore, static pressure effects published in our specifications are considered ±3σ.

5. Power Supply Effect

Testing for conformance to this specification is performed on all transmitters undergoing sample (audit) testing. This variable has historically exhibited extremely small performance errors and a small standard deviation (essentially a mean error of zero with a standard deviation typically less than 10% of the specification). All transmitters tested were found in compliance with the specification. Therefore, the power supply effect published in our specifications is considered ±3σ.

Should you have any further questions, please contact Jerry Edwards at (612) 828-3951.

Sincerely,

Jerry L. Edwards
Manager, Sales, Marketing and Contracts
Rosemount Nuclear Instruments, Inc.


APPENDIX R
RECORD OF COORDINATION FOR COMPUTER POINT ACCURACY

Computer Point Accuracy (using single point data)

Hardware and software, including the compression limits involved in digital displays, affect the accuracy of computer inputs. Taking into consideration the following errors, an accuracy of ±0.25% of full range will be utilized (References 5.28 and 5.29):

Gain Error = ±0.025% Full Range
Repeatability Error = ±0.025% Full Range
Others* = ±0.2% Full Range
Total = ±0.25% Full Range

* Inaccuracy of the filter input card, reference junction compensation, and any other losses due to conversions and scan frequency.
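The total above is the linear (arithmetic) sum of the individual terms rather than an SRSS combination. The following minimal sketch (Python) simply reproduces that arithmetic for illustration.

```python
# Computer point accuracy terms, in percent of full range (from the list above).
terms = {
    "Gain Error": 0.025,
    "Repeatability Error": 0.025,
    "Others (filter input card, reference junction compensation, conversions, scan)": 0.2,
}

total = sum(terms.values())  # linear sum, as used above
print(f"Total computer point accuracy = +/-{total:.2f}% Full Range")  # 0.25%
```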
