ML20196A195 - Instrument Drift Analysis

Site: Perry
Issue date: 06/17/1999
From: Centerior Energy
Shared Package: ML20196A191
References: NUDOCS 9906220148
PERRY NUCLEAR POWER PLANT
ATTACHMENT 5
PY-CEI/NRR-2398L
INSTRUMENT DRIFT ANALYSIS

Table Of Contents

SECTION                                                                 PAGE
1. OBJECTIVE / PURPOSE ................................................... 4
2. DRIFT ANALYSIS SCOPE .................................................. 4
3. ROLES AND RESPONSIBILITIES ............................................ 4
   3.1. Design Engineering Responsibilities .............................. 4
   3.2. Setpoint Coordinator Responsibilities ............................ 5
   3.3. Department Managers Responsibilities ............................. 5
   3.4. Design Engineering Responsibilities PNED ......................... 5
   3.5. Engineering Administrative Element PNED .......................... 5
4. DISCUSSION / METHODOLOGY .............................................. 5
   4.1. Methodology Options .............................................. 5
   4.2. Software Verification And Validation ............................. 6
   4.3. Data Analysis Discussion ......................................... 6
   4.4. Assignment Of Rigor For The Analysis ............................. 8
   4.5. Calibration Data Collection ..................................... 10
   4.6. Categorizing Calibration Data ................................... 12
   4.7. Outlier Analysis ................................................ 15
   4.8. Methods For Verifying Normality ................................. 16
   4.9. Binomial Pass/Fail Analysis For Distributions Considered Not To Be Normal ... 20
   4.10. Time-Dependent Drift Analysis .................................. 21
   4.11. Calibration Point Drift ........................................ 23
   4.12. Drift Bias Determination ....................................... 23
   4.13. Time Dependent Drift Uncertainty ............................... 25
   4.14. Shelf Life Of Analysis Results ................................. 25
5. PERFORMING AN ANALYSIS ............................................... 25
   5.1. Populating The Spreadsheet ...................................... 26
   5.2. Spreadsheet Performance Of Basic Statistics ..................... 28
   5.3. Outlier Detection And Expulsion ................................. 30
   5.4. Calculate The Analyzed Drift Value .............................. 31
   5.5. Time-dependency Test ............................................ 31
   5.6. Normality Test .................................................. 32
   5.7. Plot The Spreadsheet Data ....................................... 32
   5.8. Analyzing The Data & Charts ..................................... 34
6. CALCULATIONS ......................................................... 35
   6.1. Drift Calculations .............................................. 35
   6.2. Setpoint/Uncertainty Calculations ............................... 36
7. DEFINITIONS .......................................................... 37
8. REFERENCES ........................................................... 39
   8.1. Industry Standards and Correspondence ........................... 39
   8.2. Procedures ...................................................... 39
   8.3. Programs ........................................................ 39
   8.4. Miscellaneous ................................................... 39

TABLES
Table 1 - 95%/95% Tolerance Interval Factors ........................... 10
Table 2 - Critical Values For t-Test ................................... 16
Table 3 - Values For A Normal Distribution ............................. 20
Table 4 - Maximum Values of Non-Biased Mean ............................ 24

FIGURES
Figure 1 - Sample Spreadsheet (Switches, Trip Units & Other Tripping Devices) ............ 27
Figure 2 - Sample Spreadsheet (Transmitters, Indicators, Recorders & Other Non-Tripping Devices) ... 27

DRIFT ANALYSIS

1. OBJECTIVE / PURPOSE

The objective of this Design Guide is to provide the necessary detail and guidance to perform drift analysis using past calibration history data for the purposes of:

• Quantifying component/loop drift characteristics within defined probability limits to gain an understanding of the expected behavior for the component/loop by evaluating past performance.
• Estimating component/loop drift for integration into setpoint calculations.
• Providing an analysis aid for reliability centered maintenance practices (e.g., optimizing calibration frequency).
• Establishing a technical basis for extending calibration and surveillance intervals using historical calibration data.
• Evaluating extended surveillance intervals in support of longer fuel cycles.
2. DRIFT ANALYSIS SCOPE

The scope of this design guide is limited to the calculation of the expected performance for a component, group of components, or loop utilizing past calibration data. The Drift Calculation(s) are the final product of the data analysis and will document the use of the drift data for the purposes listed in Section 1. The Setpoint/Uncertainty Calculations will incorporate the values documented in the Drift Calculations for the applications specific to a given loop or component (e.g., Tolerance Interval Factors for other than 95%/95%, single side of interest setpoints, combination of uncertainties for multiple components in a given loop, etc.). (Ref. 8.1.1 & 8.2.2)

This design guide is applicable to all devices that are surveilled or calibrated where as found and as left data is recorded. The scope of this design guide includes, but is not limited to, the following list of devices:

• Transmitters (Differential Pressure, Flow, Level, Pressure, Temperature, etc.)
• Bistables (Master & Slave Trip Units, Alarm Units, etc.)
• Indicators (Analog, Digital)
• Switches (Differential Pressure, Flow, Level, Position, Pressure, Temperature, etc.)
• Signal Conditioners / Converters (Summers, E/P Converters, Square Root Converters, etc.)
• Recorders (Temperature, Pressure, Flow, Level, etc.)
• Monitors & Modules (Radiation, Neutron, H2, O2, Pre-Amplifiers, etc.)
• Relays (Time Delay, Undervoltage, Overvoltage, etc.)
3. ROLES AND RESPONSIBILITIES

3.1. Design Engineering Responsibilities

• Ownership of the Drift Analysis Program.
• All tasks specified under the "ENGINEERING RESPONSIBILITIES".
• Creating/modifying drift calculations in accordance with NEI-0331 or NEI-0341. (Ref. 8.2.2)

3.2. Setpoint Coordinator Responsibilities

• Coordinating the data analysis efforts to eliminate duplications by reviewing existing Drift Calculations and Data Analyses in Progress.
• Archiving and maintaining the electronic copies of completed spreadsheet files.
• Maintaining calibration data files for the drift analyses that have been completed or are in progress.
• Reviewing data entered into IPASS for trends and unacceptable conditions, and recommending replacement of instruments based on performance or trends.

3.3. Department Managers Responsibilities

NOTE: Any Trained Engineering Personnel may perform a drift analysis.

• Assign the appropriate engineering support from within the Manager's department to complete the data analysis.
• Initiate the necessary requests for Design Engineering to perform the data analysis.

3.4. Design Engineering Responsibilities PNED

NOTE: Any Trained Engineering Personnel may perform any or all of the following tasks:

• Any of the tasks specified under the "DATA ENTRY RESPONSIBILITIES".
• Determining the scope of the data grouping (e.g., all Rosemount Transmitters).
• Developing the component list including tag numbers, manufacturers, model numbers, ranges, calibrated spans and surveillance tests or calibration procedures.
• Specifying the data to be collected.
• Performing the statistical analysis of the data.
• Evaluating the analyzed data.
• Reviewing Trend Data for Responsible systems.

3.5. Engineering Administrative Element PNED

• Locating and collecting the completed calibration data.
• Compiling the data for entry into the IPASS database.
• Accurately entering completed calibration data.


4. DISCUSSION / METHODOLOGY

4.1. Methodology Options

This design guide is written to provide the methodology necessary for the analysis of as found versus as left calibration data as a means of characterizing the performance of a component or group of components via the following methods:

4.1.1. The Electric Power Research Institute (EPRI) has developed a guideline to provide nuclear plants with practical methods for analyzing historic component calibration data to predict component performance via a simple spreadsheet program (e.g., Microsoft Excel, Lotus 1-2-3). This design guide is written in close adherence to Report "EPRI TR-103335, GUIDELINES FOR INSTRUMENT CALIBRATION EXTENSION/REDUCTION PROGRAMS". (Ref. 8.1.1 & 8.4.5)

4.1.2. Commercial grade software programs other than Microsoft Excel (e.g., IPASS, Lotus 1-2-3, SYSTAT, etc.) that will perform the functions necessary to evaluate drift may be utilized providing:

• the intent of this design guide is met as outlined in Reference 8.1.1. (Ref. 8.1.1)
• that software verification and validation is performed in accordance with PAP-0506. (Ref. 8.2.1 & 8.2.3)
• that software is used only as a tool to produce hard copy outputs which will be independently verified.

4.1.3. The EPRI IPASS software version 2.2 Beta Release will be used to perform the primary statistical analysis for the PNPP instruments. However, since the IPASS program is not in final release and has no history of successful usage, independent verification of the analysis will be performed. Microsoft Excel will be used to develop spreadsheets that will match the analysis performed in IPASS and will also be used to perform analysis not performed in IPASS. These additional analyses will include, but not be limited to, time-dependency analyses. Since spreadsheets will be developed, specific formulas are provided in this guide that are equal to the formulas used in IPASS. There is expected to be no difference between the IPASS and Microsoft Excel analyses with the exception of rounding differences.

4.1.4. Because IPASS does not incorporate all aspects of the analysis in this guide, and the presentation of the expected normal curve on the histogram is incorrect, some analysis will be performed using Microsoft Excel spreadsheets.

4.2. Software Verification And Validation

4.2.1. If software is selected to perform a data analysis without subsequent verification or independent review of the calculations, it must meet the requirements of PAP-0506 as a Certified Program. (Ref. 8.2.3)

4.2.2. Where a Certified Program will not be used, the first drift analysis performed using the program or spreadsheet will be 100% mathematically verified using a hand calculator or an independent software program. For subsequent drift analyses, which have utilized the verified analysis as a template, random manual verifications will be performed for each spreadsheet or analysis. Upgrades of products (e.g., Microsoft Excel Version 97 to 97 SR-2) must be verified to provide the same results as the previous version.

4.2.3. The final product of the data analysis is the hard copy Drift Calculation that is controlled as a QA document. The electronic files are an intermediate step from raw data to final product and are not controlled as a QA file. All data contained in the electronic files is recoverable from QA calibration records controlled by Records Management. (Ref. 8.2.1 & 8.2.2)

4.2.4. Microsoft Excel stores numbers with 15 digits of accuracy. All calculation outputs displayed within the calculations are rounded from the values stored by Microsoft Excel. Rounding errors induced by Microsoft Excel are assumed to be negligible within the calculations. (Ref. 8.4.5)

4.2.5. Different computers with different processors, running different versions of Windows (Windows 95 and Windows 98), with and without IPASS installed, will be used to verify mathematical functions.

4.3. Data Analysis Discussion

The following data analysis methods were evaluated for use at Perry Nuclear Power Plant: As Found Versus Setpoint, Worst Case - As Found Versus As Left, Combined Calibration Data Points Analysis, and As Found Versus As Left. The evaluation concluded that the As Found Versus As Left methodology provided results that were more representative of the data, and it has been chosen for use by this Design Guide. Statistical tests not covered by this design guide may be utilized providing the Engineer performing the analysis adequately justifies the use of the tests.

4.3.1. As Found Versus As Left Calibration Data Analysis

The as found versus as left calibration data analysis is based on calculating drift by subtracting the previous as left component setting from the current as found setting. Each calibration point is treated as an independent set of data for purposes of characterizing drift across the full calibrated span of the component/loop (a minimal computational sketch of this calculation is provided after the list of general features in step 4.3.1.1). By evaluating as found versus as left data for a component/loop or a similar group of components/loops, the following information may be obtained:

• The typical component/loop drift between calibrations (random in nature).
• Any tendency for the component/loop to drift in a particular direction (bias).
• Any tendency for the component/loop drift to increase in magnitude over time (time dependent).
• Confirmation that the selected setting or calibration tolerance is appropriate or achievable for the component/loop.

4.3.1.1. General Features of As Found Versus As Left Analysis

• The methodology evaluates historical calibration data only. The method does not monitor on-line component output; data is obtained from component calibration records.
• Present and future performance is predicted based on statistical analysis of past performance.
• Data is readily available from component calibration records. Data can be analyzed from plant startup to the present, or only the most recent data can be evaluated.

• Since only historical data is evaluated, the method is not intended as a tool to identify individual faulty components, although it can be used to demonstrate that a particular component model or application historically performs poorly.
• A similar class of components, i.e., same make, model, or application, is evaluated. For example, the method can determine the drift of all analog indicators of a certain type installed in the control room.
• The methodology is less suitable for evaluating the drift of a single component over time due to statistical analysis penalties that occur with smaller sample sizes.
• The methodology is based on actual calibration data and is thus traceable to calibration standards.
• The methodology obtains a value of drift for a particular component model that can be used in component uncertainty and setpoint calculations.
• The methodology is designed to support the analysis of longer calibration intervals due to fuel cycle extensions and is consistent with the NRC expectations described in Generic Letter 91-04, Changes in Technical Specification Surveillance Intervals to Accommodate a 24-Month Fuel Cycle. Values for instrument drift developed in accordance with this Design Guide will be applied in accordance with the GE Setpoint Methodology NEDC-31336P-A, "Instrument Setpoint Methodology". (Ref. 8.4.3)
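The following is a minimal sketch of the drift calculation described in step 4.3.1 (as found of the current calibration minus as left of the previous calibration, per calibration point). It is illustrative only: the record layout, field names, and use of Python dictionaries are assumptions for this example and are not taken from IPASS or the plant database; dates are assumed to be datetime.date objects.

```python
from typing import Dict, List

def drift_history(records: List[Dict]) -> List[Dict]:
    """records: chronologically ordered calibration records for one instrument,
    each with 'date', 'as_found' and 'as_left' lists (one entry per calibration point)."""
    drifts = []
    for prev, curr in zip(records, records[1:]):
        interval_days = (curr["date"] - prev["date"]).days
        for point, (af, al_prev) in enumerate(zip(curr["as_found"], prev["as_left"])):
            drifts.append({
                "point": point,                  # calibration point index
                "interval_days": interval_days,  # time between calibrations
                "drift": af - al_prev,           # as-found minus previous as-left
            })
    return drifts
```

Each resulting drift value can then be pooled with values from functionally equivalent instruments for the statistical analyses described in the later sections.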

4.3.1.2. Error And Uncertainty Content In As Found Versus As Left Calibration Data

The as found versus as left data includes several sources of uncertainty over and above component drift. The following is a list of uncertainties that may be included in drift data obtained through analyzing the as found versus as left data. Replacement of errors is in accordance with the GE Instrument Setpoint Methodology:

• Accuracy errors present during the first and second calibrations.

  • Measurement and test equipment error present during the first and second calibrations.
  • Personnel-induced or human-related variation or error during the first and second calibrations.
  • Normal temperature effects due to a difference in ambient temperature between the two calibrations.
  • Power Supply variations between the two calibrations.
• Environmental effects on component performance, e.g., radiation, humidity, vibration, etc., between the two calibrations that cause a shift in component output.
• Misapplication, improper installation, or other operating effects that affect component calibration during the period between calibrations.
• True drift representing a change, time-dependent or otherwise, in component/loop output over the time period between calibrations.

• Drift temperature effect when used as a component of the drift term.

4.3.1.3. Potential Impacts Of As Found Versus As Left Data Analysis

Many of the bulleted items listed in step 4.3.1.2 are not expected to have a significant effect on the measured as found and as left settings. Because there are so many independent parameters contributing to the possible variance in calibration data, they will all be considered together and termed the component's Analyzed Drift (ADR or DA) uncertainty. This approach has the following potential impacts on an analysis of the component's calibration data:

• The magnitude of the calculated variation may be conservative and thus may exceed any assumptions or manufacturer predictions regarding drift. Attempts to validate manufacturer's performance claims should consider the possible contributors listed in step 4.3.1.2 to the calculated drift.
• The magnitude of the calculated variation that includes all of the above sources of uncertainty may mask any "true" time-dependent drift. In other words, the analysis of as found versus as left data may not demonstrate any time dependency. This does not mean that time-dependent drift does not exist, only that it is so small that it is negligible in the cumulative effects of component uncertainty when all of the above sources of uncertainty are combined.

4.4. Assignment Of Rigor For The Analysis

4.4.1. What is Rigor?

The term "Rigor" is defined for the purposes of this Design Guide as the degree of strictness applied to the analysis and the results of the analysis. For example, a safety related component used to satisfy a Technical Specification value would have the highest degree of strictness applied when performing the analysis and associated calculations.

4.4.2. Rigor Levels

This Design Guide assigns four levels of rigor when performing data analyses and the associated calculations.

NOTE: The default Tolerance Interval Factor (TIF) for all Drift Calculations performed using this Design Guide, regardless of Rigor Level, will be 95%/95% (a standard statistics term meaning that the results have a 95% confidence (y) that at least 95% of the population (P) will lie within the stated interval for a sample size (n)). Any reduction in TIF will be shown in addition to the 95%/95% value, with a detailed discussion provided for the basis of reducing the TIF.

4.4.2.1. Rigor Level 1 - Components that perform functions that satisfy a specific Technical Specification value. For example, 1B21-N078A and its associated loop provides a Reactor SCRAM signal through the RPS for High Reactor Pressure and is listed in the Technical Specifications with a specific trip value. Components/loops that fall into this level of rigor shall:

• be included in the data group if the analyzed drift value is to be applied to the component/loop in a Setpoint/Uncertainty Calculation.
• use the 95%/95% TIF for determination of the ADR term. (See step 4.5.2.1 and Table 1 - 95%/95% Tolerance Interval Factors.)
• be evaluated in the Setpoint/Uncertainty Calculation for application of the Analyzed Drift term (e.g., the ADR term may include the normal temperature effects for a given device but, due to the impossibility of separating out that specific term, an additional temperature uncertainty may be included in the Setpoint/Uncertainty Calculation).

4.4.2.2. Rigor Level 2 - Components/loops that perform functions that are required by the Technical Specifications with no specific values listed, the FSAR, Reg. Guide 1.97, Appendix R and Electrical Coordination Equipment. Components/loops that fall into this level of rigor shall meet the requirements of Rigor Level 1 with the exception of the TIF, which may be reduced to 90%/95%. Design Engineering is responsible for determining if the use of a less restrictive TIF is permissible for any given evaluation. (See step 4.5.2.1.)

4.4.2.3. Rigor Level 3 - Components/loops that perform functions that are not covered by the Technical Specifications, FSAR, Reg. Guide 1.97, Appendix R or Electrical Coordination Equipment but provide a system related function (e.g., Pump Trip on High Vibration, Air Compressor Trip on High Temperature, etc.). Components/loops that fall into this level of rigor:

• do not need to be included in the data group provided they are adequately represented by the data in the analysis group. Refer to Section 4.6 for further clarification.
• may use a less restrictive TIF for determination of the ADR term, up to and including 75%/95%. Design Engineering is responsible for determining if the use of a less restrictive TIF is permissible for any given evaluation. (See step 4.5.2.1.)
• may use the ADR term in the Setpoint/Uncertainty Calculation without taking additional uncertainty penalties for component accuracy errors, M&TE errors, personnel-induced or human related errors, ambient temperature and other environmental effects, power supply effects, misapplication errors and true component drift. Design Engineering is responsible for determining the need for additional uncertainty penalties for any given evaluation.

4.4.2.4. Rigor Level 4 - Components/loops that perform functions that do not fall into the other 3 levels of rigor. Components/loops that fall into this level of rigor must meet the requirements of Rigor Level 3.

4.5. Calibration Data Collection

4.5.1. Sources Of Data

The sources of data to perform a drift analysis are Surveillance Tests, Calibration Procedures and other calibration processes (calibration files, calibration sheets for Balance of Plant devices, Preventative Maintenance, etc.). The locations of the completed Surveillance Tests, Calibration Procedures and other calibration processes are listed below.

4.5.2. How Much Data To Collect

4.5.2.1. The goal is to collect enough data for the instrument or group of instruments to make a statistically valid pool. There is no hard and fast number that must be attained for any given pool. Table 1 provides the 95%/95% TIF for various sample pool sizes. It should be noted that the smaller the pool, the larger the penalty. A tolerance interval is a statement of probability that a certain proportion of the total population is contained within a defined set of bounds. The tolerance interval description also includes an assessment of the level of confidence in the statement of probability. For example, a 95%/95% TIF indicates a 95% level of confidence that 95% of the population is contained within the stated interval.

Table 1 - 95%/95% Tolerance Interval Factors

Sample Size   95%/95%     Sample Size   95%/95%     Sample Size   95%/95%
≥ 2           37.674      ≥ 23          2.673       ≥ 120         2.203
≥ 3            9.916      ≥ 24          2.651       ≥ 130         2.194
≥ 4            6.370      ≥ 25          2.631       ≥ 140         2.184
≥ 5            5.079      ≥ 26          2.612       ≥ 150         2.175
≥ 6            4.414      ≥ 27          2.595       ≥ 160         2.167
≥ 7            4.007      ≥ 30          2.549       ≥ 170         2.160
≥ 8            3.732      ≥ 35          2.490       ≥ 180         2.154
≥ 9            3.532      ≥ 40          2.445       ≥ 190         2.148
≥ 10           3.379      ≥ 45          2.408       ≥ 200         2.143
≥ 11           3.259      ≥ 50          2.379       ≥ 250         2.121
≥ 12           3.162      ≥ 55          2.354       ≥ 300         2.106
≥ 13           3.081      ≥ 60          2.333       ≥ 400         2.084
≥ 14           3.012      ≥ 65          2.315       ≥ 500         2.070
≥ 15           2.954      ≥ 70          2.299       ≥ 600         2.060
≥ 16           2.903      ≥ 75          2.285       ≥ 700         2.052
≥ 17           2.858      ≥ 80          2.272       ≥ 800         2.046
≥ 18           2.819      ≥ 85          2.261       ≥ 900         2.040
≥ 19           2.784      ≥ 90          2.251       ≥ 1000        2.036
≥ 20           2.752      ≥ 95          2.241       ∞             1.960
≥ 21           2.723      ≥ 100         2.233
≥ 22           2.697      ≥ 110         2.218
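As a minimal sketch of how a 95%/95% tolerance interval could be computed from a drift sample using Table 1 (the abbreviated lookup, the "largest tabulated size not exceeding n" rule, and the mean-plus/minus-factor-times-standard-deviation form shown here are illustrative assumptions, not requirements of this guide):

```python
import statistics

# Abbreviated lookup from Table 1: sample size -> 95%/95% tolerance interval factor.
# (Only a few rows are reproduced here; see the full table above.)
TIF_95_95 = {2: 37.674, 5: 5.079, 10: 3.379, 20: 2.752, 30: 2.549,
             50: 2.379, 100: 2.233, 200: 2.143, 1000: 2.036}

def tif(n: int) -> float:
    """Factor for the largest tabulated sample size not exceeding n
    (conservative, since the factor decreases as the sample grows)."""
    eligible = [size for size in TIF_95_95 if size <= n]
    if not eligible:
        raise ValueError("need at least 2 samples")
    return TIF_95_95[max(eligible)]

def tolerance_interval(drift_values):
    """95%/95% tolerance interval: mean +/- TIF * sample standard deviation."""
    mean = statistics.mean(drift_values)
    s = statistics.stdev(drift_values)
    k = tif(len(drift_values))
    return mean - k * s, mean + k * s
```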

4.5.2.2. The end purpose of the analysis determines the types of components that require evaluation. Different information may be needed depending on the analysis purpose; therefore, the total population of components - all makes, models, and applications - that will be analyzed must be known (e.g., all Rosemount Trip Units).

4.5.2.3. Once the total population of components is known, the components should be grouped into functionally equivalent groups. Each grouping is treated as a separate population for analysis purposes (e.g., starting with all Rosemount Trip Units as the initial group and breaking them down into various sub-groups - 710 Masters, 710 Slaves, 510 Masters, 510 Slaves, Increasing Setpoints, Decreasing Setpoints, Monthly Calibrations, Quarterly Calibrations, etc.).

4.5.2.4. Not all components or available calibration data points need to be analyzed within each group in order to establish statistical performance limits for the group. However, devices contained in rigor levels 1 and 2 must be contained in the analysis group. Acquisition of data should be considered from different perspectives:

• For each grouping, a large enough sample of components should be randomly selected from the population so that there is assurance that the evaluated components are representative of the entire population. By randomly selecting the components and confirming that the behavior of the randomly selected components is similar, a basis for not evaluating the entire population can be established. For sensors, a random sample from the population should include representation of all desired component spans and functions. It may be difficult to justify the application of analysis results to a sensor whose span or function was not represented in the data set.
• For each selected component in the sample, enough historic calibration data should be provided to ensure that the component's performance over time is understood.
• Due to the difficulty of determining the total sample set, developing specific sampling criteria which provide a true random sample of the population, and ensuring that instruments calibrated at different frequencies and all component types are represented, it is often simpler to evaluate all drift data available. This eliminates changing sample methods should groups be combined or split based on plant conditions or performance.

4.5.3. Retrieve as much data as indicated by the estimate of the sample size. Transcribe the calibration data into a Microsoft Excel spreadsheet. The database information on each instrument will include:

• instrument manufacturer and make/model
• instrument ID (or "Tag No.")
• dates of the calibrations
• calibration data
• calibrated span (or desired setpoint)

• engineering units

The suggested formats for the Microsoft Excel spreadsheets are given in Figure 1 and Figure 2. The format should give a clear presentation of the necessary information for each instrument.

4.6. Categorizing Calibration Data

4.6.1. Grouping Calibration Data

One analysis goal should be to combine functionally equivalent components (components with similar design and performance characteristics) into a single group. In some cases, all components of a particular manufacturer make and model can be combined into a single sample. In other cases, virtually no grouping of data beyond a particular component make, model, and specific span or application may be possible. Some examples of groupings that may be possible include, but are not limited to, the following:

4.6.1.1. Small Groupings

• All devices of the same manufacturer, model and range covered by the same Surveillance Test.
• All trip units used to monitor a specific parameter (assuming that all trip units are the same manufacturer, model and range).

4.6.1.2. Larger Groupings

• All transmitters of a specific manufacturer and model that have similar spans and performance requirements.

  • All Rosemount trip units with functionally equivalent model numbers.
• All control room analog indicators of a specific manufacturer and model.

4.6.2. Rationale For Grouping Components Into A Larger Sample

• A single component analysis may result in too few data points to make statistically meaningful performance predictions.
• Smaller sample sizes associated with a single component may unduly penalize performance predictions by applying a larger uncertainty factor to account for the smaller data set. Larger sample sizes reflect a greater understanding and assurance of representative data that in turn reduces the uncertainty factor.
• Large groupings of components into a sample set for a single population ultimately allows the user to state the plant-specific performance for a particular make and model of component. For example, the user may state, "Main Steam Flow Transmitters have historically drifted by less than 1%", or "All control room indicators of a particular make and model have historically drifted by less than 1.5%".
• An analysis of smaller sample sizes is more likely to be influenced by non-representative variations of a single component (outliers).
• Grouping similar components together rather than analyzing them separately is more efficient and minimizes the number of separate calculations that must be maintained. Each new calculation at a nuclear plant involves a certain ongoing operations and maintenance expense, even if only because it is another quality document in the system.

4.6.3. Considerations When Combining Components Into A Single Group

Grouping components together into a sample set for a single population does not have to become a complicated effort. Most components can be categorized readily into the appropriate population. Consider the following guidelines when grouping functionally equivalent components together.

• If performed on a type-of-component basis, component groupings should usually be established down to the manufacturer make and model, as a minimum. For example, mixing Rosemount transmitters in the same analysis as General Electric or Barton transmitters should not be done. The principles of operation are different for the various manufacturers, and combining the data might mask some trend for one type of component. This said, it may be desirable to combine groups of components for certain studies. If dissimilar component types are combined, a separate analysis of each component type should still be completed to ensure analysis results of the mixed population are not misinterpreted or misapplied.
• Sensors of the same manufacturer make and model, but with different calibrated spans or elevated zero points, can possibly still be combined into a single group. For example, a single analysis that determines the drift for all Rosemount 1153 pressure transmitters installed onsite might simplify the application of the results. Note that some manufacturers provide a predicted accuracy and drift value for a given component model, regardless of its span. However, the validity of combining components with a variation of span ranging from tens of pounds to several thousand pounds should be confirmed. As part of the analysis, the performance of components within each span should be compared to the overall expected performance to determine if any differences are evident between components with different spans.
• Components combined into a single grouping should be exposed to similar calibration or surveillance conditions, as applicable. Note that the term operating condition was not used in this case. Although it is desirable that the grouped components perform similar functions, the method by which the data is obtained for this analysis is also significant. If half the components are calibrated in the summer at 90°F and the other half in the winter at 40°F, a difference in observed drift between the data for the two sets of components may exist. In many cases, ambient temperature variations are not expected to have a large effect since the components are located in environmentally controlled areas.
• Avoid using historical calibration data for components that have been replaced or are no longer in service. The analysis results should be based on the performance of currently installed components.

4.6.4. Verification That Data Grouping Is Appropriate

• Combining functionally equivalent components into a single group for analysis purposes may simplify the scope of work; however, some level of verification should be performed to confirm that the selected component grouping is appropriate. As an example, the manufacturer may claim the same accuracy and drift specifications for two components of the same model, but with different ranges, e.g., 0-5 PSIG and 0-3000 PSIG. However, in actual application, components of one range may perform differently than components of another range.
• Standard statistics texts provide methods that can be used to determine if data from similar types of components can be pooled into a single group. If different groups of components have essentially equal variances and means at the desired statistical level, the data for the groups can be pooled into a single group. When evaluating groupings, care must be taken not to split instrument groups only because they are calibrated on a different time frequency. Differences in variances may be indicative of a time dependent component to the device drift. The separation of these groups may later mask a time dependence for the component drift. A t-Test (two samples assuming unequal variances) should also be performed on the proposed components to be grouped. The t-Test returns the probability associated with a Student's t-Test to determine whether two samples are likely to have come from the same underlying populations. If, for example, the proposed group contains 5 sub-groups, the t-Tests should be performed on all possible combinations for the groupings. The following formula is used to determine the test statistic value t:

t' = (x̄₁ - x̄₂ - Δ) / √(s₁²/n₁ + s₂²/n₂)     (Ref. 8.4.5)

Where:
t' - Test statistic
x̄₁, x̄₂ - Means of the two samples
s₁², s₂² - Sample variances of the two samples
n₁, n₂ - Number of data points in each sample
Δ - Hypothesized mean difference
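The sketch below illustrates this grouping check on every pair of proposed sub-groups. It is an assumption-laden example, not the spreadsheet implementation: it uses SciPy's two-sample t-test with unequal variances rather than a hand-built formula, the 5% threshold is only an illustrative choice, and the sub-group names and drift values are made up.

```python
from itertools import combinations
from scipy import stats

def pooling_check(groups: dict, alpha: float = 0.05):
    """Run a two-sample t-test (unequal variances) on every pair of proposed
    sub-groups; a small p-value suggests the pair should not be pooled."""
    results = {}
    for (name_a, a), (name_b, b) in combinations(groups.items(), 2):
        t_stat, p_value = stats.ttest_ind(a, b, equal_var=False)  # Welch-type t-test
        results[(name_a, name_b)] = (t_stat, p_value, p_value >= alpha)
    return results

# Example: drift samples (% span) for two trip-unit sub-groups (illustrative values).
groups = {"710 Masters": [0.10, -0.05, 0.02, 0.08, -0.01],
          "710 Slaves":  [0.12, -0.02, 0.04, 0.06, 0.00]}
print(pooling_check(groups))
```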

4.6.5. Examples Of Proven Groupings

• All control room indicators receiving a 4-20 mADC (or 1-5 VDC) signal. Notice that a combined grouping may be possible even though the indicators have different indication spans. For example, a 12 mADC signal should move the indicator pointer to the 50% of span position on each indicator scale regardless of the span indicated on the face plate (exceptions are non-linear meter scales).

• All control room bistables of similar make or model tested monthly for Technical Specification surveillance. Note that this assumes that all bistables are tested in a similar manner and have the same input range, e.g., 1-5 VDC or 4-20 mADC spans.
• A specific type of pressure transmitter used for similar applications in the plant in which the operating and calibration environment does not vary significantly between applications or locations.
• A group of transmitters of the same make and model, but with different spans, given that a review confirms that the transmitters of different spans have similar performance characteristics.

4.6.6. Using Data From Other Nuclear Power Plants

• It is acceptable, although not recommended, to pool PNPP specific data with data obtained from other utilities, providing the requirements of step 4.6.4 are met and the data can be verified to be of high quality. In this case, the data must also be verified to have been collected in the same manner with the same type of M&TE.

4.7. Outlier Analysis

An outlier is a data point significantly different in value from the rest of the sample. The presence of an outlier or multiple outliers in the sample of component or group data may result in the calculation of a larger than expected sample standard deviation and tolerance interval. Calibration data can contain outliers for several reasons that permit correction of the data or rejection of these data points from the sample. Examples include:

• Data Transcription Errors - Calibration data can be recorded incorrectly either on the original calibration data sheet or in the spreadsheet program used to analyze the data.
• Calibration Errors - Improper setting of a device at the time of calibration would indicate larger than normal drift during the next subsequent calibration.
• Measuring & Test Equipment Errors - Improperly selected or miscalibrated test equipment could indicate drift when little or no drift was actually present.
• Scaling or Setpoint Changes - Changes in scaling or setpoints can appear in the data as a larger than actual drift point unless the change is detected during the data entry or screening process.
• Failed Instruments - Calibrations are occasionally performed to verify proper operation due to erratic indications, spurious alarms, etc. These calibrations may be indicative of component failure and not drift, which would introduce errors that are not representative of the device performance during routine conditions.
• Design or Application Deficiencies - An analysis of calibration data may indicate a particular component that always tends to drift significantly more than all other similar components installed in the plant. In this case, the component may need an evaluation for the possibility of a design, application, or installation problem. Including this particular component in the same population as the other similar components may skew the drift analysis results.
• Statistical Outliers - This category is assigned to data points which are eliminated from the associated instrument's data set based on being "unique outliers" which are not consistent with the other data collected and can be judged as erroneous points based on being "statistical outliers" at greater than three standard deviations from the mean. This conclusion is based on the fact that generally all impacted safety systems are designed to be single failure proof, and a unique failure of one of the associated systems/components will not prevent performance of the safety function. Therefore, these failures do not affect the conclusion that the impact on system availability, if any, from a change in the Channel Calibration test interval is small.

4.7.1. Detection of Outliers

There are several methods for determining the presence of outliers. This design guide utilizes the Critical Values for t-Test (Extreme Studentized Deviate). The t-Test compares a given data point against the values listed in Table 2, using an upper significance level of 5%. Note that the critical value of t increases as the sample size increases. This signifies that, as the sample size grows, it is more likely that the sample is truly representative of the population. The t-Test assumes that the data is normally distributed, which should be proven prior to performance of the t-Test.

Table 2 - Critical Values For t-Test

Sample Size   Upper 5% Significance Level   Sample Size   Upper 5% Significance Level
≤ 3           1.15                          22            2.60
4             1.46                          23            2.62
5             1.67                          24            2.64
6             1.82                          25            2.66
7             1.94                          ≤ 30          2.75
8             2.03                          ≤ 35          2.82
9             2.11                          ≤ 40          2.87
10            2.18                          ≤ 45          2.92
11            2.23                          ≤ 50          2.96
12            2.29                          ≤ 60          3.03
13            2.33                          ≤ 70          3.09
14            2.37                          ≤ 75          3.10
15            2.41                          ≤ 80          3.14
16            2.44                          ≤ 90          3.18
17            2.47                          ≤ 100         3.21
18            2.50                          ≤ 125         3.28
19            2.53                          ≤ 150         3.33
20            2.56                          > 150         4.00
21            2.58

4.7.2. t-Test Outlier Detection Equation

t = |xᵢ - x̄| / s     (Ref. 8.1.1)

Where:
xᵢ - An individual sample data point
x̄ - Mean of all sample data points
s - Standard deviation of all sample data points
t - Calculated value of the extreme studentized deviate, compared to the critical value of t for the sample size

4.7.3. Outlier Expulsion

Outliers may be excluded from the sample pool, providing justification is provided for each outlier. See the outlier examples provided in the bulleted items listed above in Section 4.7. Removal of points or calibrations based on justifications similar to the bulleted items (with the exception of the statistical outliers) is not considered outlier removal. Multiple outlier tests or passes are not permitted by this Design Guide. However, once all data quality corrections have been made, a final outlier identification can be made to determine if limited occurrences (less than 1% of sample size) exist. Removal of these statistical outliers is permissible where it can be shown that these are limited occurrences outside 3.5 sigma for the sample set.
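A minimal sketch of the extreme studentized deviate screen described in steps 4.7.1 and 4.7.2 is shown below (single pass only, consistent with step 4.7.3). The abbreviated critical-value lookup reproduces only a few rows of Table 2, and the helper names are illustrative.

```python
import statistics

# Abbreviated lookup from Table 2: largest applicable sample size -> critical t value.
CRITICAL_T = [(3, 1.15), (10, 2.18), (20, 2.56), (30, 2.75), (50, 2.96),
              (100, 3.21), (150, 3.33), (float("inf"), 4.00)]

def critical_t(n: int) -> float:
    """Upper 5% critical value for the smallest tabulated size >= n."""
    for size, value in CRITICAL_T:
        if n <= size:
            return value

def flag_outliers(drift_values):
    """Single-pass screen: flag points whose studentized deviate exceeds the
    critical value; justification for exclusion is still required (step 4.7.3)."""
    mean = statistics.mean(drift_values)
    s = statistics.stdev(drift_values)
    limit = critical_t(len(drift_values))
    return [x for x in drift_values if abs(x - mean) / s > limit]
```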

4.8. Methods For Verifying Normality

A test for normality can be important because many frequently used statistical methods are based upon an assumption that the data is normally distributed. This assumption applies to the analysis of component calibration data also. For example, the following analyses may rely on an assumption that the data is normally distributed:

• Determination of a tolerance interval that bounds a stated proportion of the population based on calculation of mean and standard deviation.
• Identification of outliers.
• Pooling of data from different samples into a single population.

The normal distribution occurs frequently and is an excellent approximation to describe many processes. Testing the assumption of normality is important to confirm that the data appears to fit the model of a normal distribution, but tests will not prove that the normal distribution is a correct model for the data. At best, it can only be found that the data is reasonably consistent with the characteristics of a normal distribution. For example, some tests for normality will only allow the rejection of the hypothesis that the data is not normally distributed. This does not mean the data is normally distributed; it only means that there is no evidence to say that it is not normally distributed. Distribution free techniques are available when the data is not normally distributed; however, these techniques are not as well known and often result in penalizing the results by calculating tolerance intervals that are substantially larger than the normal distribution equivalent. There is a good reason to demonstrate that the data is normally distributed or can be bounded by the assumption of normality.

Analytically verifying that a sample appears to be normally distributed usually invokes a form of statistics known as hypothesis testing. In general, a hypothesis test includes the following steps:

1) Statement of the hypothesis to be tested and any assumptions.
2) Statement of a level of significance to use as the basis for acceptance or rejection of the hypothesis.

3) Determination of a test statistic and a critical region.
4) Calculation of the appropriate statistics to compare against the test statistic.
5) Statement of conclusions.

The following sections discuss various ways in which the assumption of normality can be verified to be consistent with the data or can be claimed to be a conservative representation of the actual data. Analytical hypothesis testing, as well as more subjective graphical analyses, are discussed. The following are methods for assessing normality:

4.8.1. Chi-Squared (χ²) Goodness of Fit Test

This well-known test is stated as a method for assessing normality in ISA-RP67.04, Recommended Practice, Methodologies for the Determination of Setpoints for Nuclear Safety-Related Instrumentation. The χ² test compares the actual distribution of sample values to the expected distribution. The expected values are calculated by using the normal mean and standard deviation for the sample. If the distribution is normally or approximately normally distributed, the difference between the actual versus expected values should be very small. And, if the distribution is not normally distributed, the differences should be significant. (Ref. 8.1.2)

4.8.1.1. Equations To Perform The χ² Test

1) First, calculate the mean for the sample group:

   x̄ = (Σ xᵢ) / n     (Ref. 8.1.1)

   Where:
   xᵢ - An individual sample data point
   x̄ - Mean of all sample data points
   n - Total number of data points

2) Second, calculate the standard deviation for the sample group:

   s = √[ (n·Σxᵢ² - (Σxᵢ)²) / (n(n-1)) ]     (Ref. 8.1.1)

   Where:
   x - Sample data values (x₁, x₂, x₃, ...)
   s - Standard deviation of all sample data points
   n - Total number of data points

3) Third, the data must be divided into bins to aid in determination of a normal distribution. The number of bins selected is up to the individual performing the analysis. Refer to Reference 8.1.1 for further guidance.

4) Fourth, calculate the χ² value for the sample group:

   Eᵢ = N·Pᵢ

   χ² = Σ (Oᵢ - Eᵢ)² / Eᵢ     (Ref. 8.1.1)

   Where:
   Eᵢ - Expected values for the sample
   N - Total number of samples in the population
   Pᵢ - Probability that a given sample will be contained in a bin
   Oᵢ - Observed sample values (O₁, O₂, O₃, ...)
   χ² - Chi squared result
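A minimal sketch of this binning-and-comparison approach is shown below. The bin count, the use of SciPy for the normal distribution, and the comparison against a chi-squared critical value are illustrative assumptions, not prescriptions of this guide.

```python
import numpy as np
from scipy import stats

def chi_squared_normality(drift_values, n_bins: int = 10, alpha: float = 0.05):
    """Compare observed bin counts with counts expected under a normal
    distribution fitted to the sample mean and standard deviation."""
    x = np.asarray(drift_values, dtype=float)
    n = x.size
    mean, s = x.mean(), x.std(ddof=1)

    edges = np.linspace(x.min(), x.max(), n_bins + 1)
    observed, _ = np.histogram(x, bins=edges)

    cdf = stats.norm.cdf(edges, loc=mean, scale=s)
    prob = np.diff(cdf)                  # P_i: probability of each bin
    expected = n * prob                  # E_i = N * P_i

    chi2 = np.sum((observed - expected) ** 2 / expected)
    dof = n_bins - 1 - 2                 # bins minus 1, minus 2 estimated parameters
    critical = stats.chi2.ppf(1.0 - alpha, dof)
    return chi2, critical, chi2 <= critical
```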

4.8.2. W Test

ANSI N15.15-1974, Assessment of the Assumption of Normality (Employing Individual Observed Values), recommends this test for sample sizes less than 50. The W Test calculates a test statistic value for the sample population and compares the calculated value to the critical values for W which are tabulated in ANSI N15.15. The W Test is a lower-tailed test. Thus, if the calculated value of W is less than the critical value of W, the assumption of normality would be rejected at the stated significance level. If the calculated value of W is larger than the critical value of W, there is no evidence to reject the assumption of normality. For the equations required to perform a W Test, refer to the reference listed. (Ref. 8.1.4)

4.8.3. D-Prime Test

ANSI N15.15-1974, Assessment of the Assumption of Normality (Employing Individual Observed Values), recommends this test for moderate to large sample sizes. The D' Test calculates a test statistic value for the sample population and compares the calculated value to the values for the D' percentage points of the distribution which are tabulated in ANSI N15.15. The D' Test is two-sided, which effectively means that the calculated D' must be bounded by the two-sided percentage points at the stated level of significance. For the given sample size, the calculated value of D' must lie within the two values provided in the ANSI N15.15 table in order to accept the hypothesis of normality. (Ref. 8.1.4)

4.8.3.1. Equations To Perform The D' Test

1) First, calculate the linear combination of the sample group:

   T = Σᵢ₌₁ⁿ i·xᵢ     (Ref. 8.1.4)

   Where:
   xᵢ - An individual sample data point
   i - The number of the sample point
   n - Total number of data points

2) Second, calculate S² for the sample group:

   S² = Σ (xᵢ - x̄)²     (Ref. 8.1.4)

   Where:
   S² - Estimate of the sample population variance
   n - Total number of data points

3) Third, calculate the D' value for the sample group:

   D' = T / S     (Ref. 8.1.4)
4.8.4. Probability Plots

Probability plots are discussed since a graphical presentation of the data can reveal possible reasons for why the data is or is not normal. A probability plot is a graph of the sample data with the axes scaled for a normal distribution. If the data is normal, the data will tend to follow a straight line; if the data is non-normal, a nonlinear shape should be evident from the graph. The types of probability plots used by this design guide are as follows:

• Cumulative Probability Plot - an XY scatter plot of the outlier tested data plotted against the percent probability (Pᵢ) for a normal distribution. Pᵢ is calculated using the following equation:

   Pᵢ = 100 × (i - 1/2) / n     (Ref. 8.1.1)

   where:
   i - sample number, i.e., 1, 2, ...
   n - sample size

   NOTE: Refer, as necessary, to Appendix C, Section C.4 of EPRI TR-103335, GUIDELINES FOR INSTRUMENT CALIBRATION EXTENSION/REDUCTION PROGRAMS. (Ref. 8.1.1)

• Normalized Probability Plot - an XY scatter plot of the outlier tested data plotted against the probability for a normal distribution expressed in multiples of the standard deviation.
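As a minimal sketch of how the plotting coordinates for these two plots could be generated (the use of SciPy's inverse normal CDF to express the probabilities in multiples of the standard deviation is an illustrative choice):

```python
import numpy as np
from scipy import stats

def probability_plot_points(drift_values):
    """Return sorted drift values, the cumulative percent probability P_i,
    and the same probability expressed in multiples of the standard deviation."""
    x = np.sort(np.asarray(drift_values, dtype=float))
    n = x.size
    i = np.arange(1, n + 1)
    p_percent = 100.0 * (i - 0.5) / n        # P_i = 100(i - 1/2)/n
    z = stats.norm.ppf(p_percent / 100.0)    # sigma multiples for the normalized plot
    return x, p_percent, z
```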

4.8.5. Coverage Analysis

A coverage analysis is discussed for cases in which the data fails a test for normality, but the assumption of normality may still be a conservative representation of the data. Coverage is assessed with a histogram of the outlier tested data overlaid with the equivalent probability distribution curve for the normal distribution based on the data sample's mean and standard deviation. This plot provides a very useful tool in determining normal distribution of the sample data.

4.8.6. Sample Counting Within 1σ and 2σ for the Group

A good method of verifying normality is to calculate the standard deviation of the group and count the number of times the absolute value of the samples is less than or equal to one standard deviation, and repeat the process for two standard deviations. The counts would be divided by the total number of samples in the group to determine a percentage. The following table provides values for a normal distribution:

Table 3 - Values For A Normal Distribution

Percentages for a Normal Distribution
1 Standard Deviation      68.27%
2 Standard Deviations     95.45%
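A minimal sketch of the counting check in step 4.8.6 follows. It counts deviations about the sample mean (an interpretation, since drift samples are usually centered near zero) and simply reports the observed percentages; judging how close they must be to the 68.27% and 95.45% values in Table 3 is left to the analyst.

```python
import statistics

def sigma_coverage(drift_values):
    """Percentage of samples within 1 and 2 standard deviations of the mean;
    compare against 68.27% and 95.45% (Table 3)."""
    mean = statistics.mean(drift_values)
    s = statistics.stdev(drift_values)
    n = len(drift_values)
    within_1 = sum(abs(x - mean) <= s for x in drift_values)
    within_2 = sum(abs(x - mean) <= 2 * s for x in drift_values)
    return 100.0 * within_1 / n, 100.0 * within_2 / n
```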

4.9. Binomial Pass/Fail Analysis For Distributions Considered Not To Be Normal

A pass/fail criterion for component performance simply compares the as found versus as left surveillance drift data against a pre-defined acceptable value of drift. If the drift value is less than the pass/fail criterion, that data point passes; if it is larger than the pass/fail criterion, it fails. By comparing the total number of passes to the number of failures, a probability can be computed for the expected number of component passes in the population. Note that the term failure in this instance does not mean that the component actually failed, only that it exceeded the selected pass/fail criterion for the analysis. Often the pass/fail criterion will be established at a point that clearly demonstrates acceptable component performance. The equations used to determine the Failure Proportion and the Normal, Minimum and Maximum Probabilities are as follows:

Failure Proportion:

   P_F = x / n     (Ref. 8.1.1)

   where:
   x = Number of values exceeding the pass/fail criterion (failures)
   n = Total number of drift values in the sample

Normal probability that a value will pass:

   P = 1 - P_F     (Ref. 8.1.1)

Minimum probability that a value will pass:

   P_min = 1 - [ x/n + z·√( (x/n)(1 - x/n) / n ) ]     (Ref. 8.1.1)

Maximum probability that a value will pass:

   P_max = 1 - [ x/n - z·√( (x/n)(1 - x/n) / n ) ]     (Ref. 8.1.1)

   where:
   P_min = the minimum probability that a value will pass
   P_max = the maximum probability that a value will pass
   z = the standardized normal distribution value corresponding to the desired confidence level, e.g., z = 1.96 for a 95% confidence level

The Binomial Pass/Fail Analysis is a good tool for verifying that drift values calculated for calibration extensions are appropriate for the interval. Refer to EPRI TR-103335, GUIDELINES FOR INSTRUMENT CALIBRATION EXTENSION/REDUCTION PROGRAMS, for the necessary detail to perform a pass/fail analysis. (Ref. 8.1.1)
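A minimal sketch of these equations is shown below. Comparing the absolute value of each drift point against the limit is an interpretation of the pass/fail screen, and the default z value is only the 95% example quoted above.

```python
import math

def binomial_pass_fail(drift_values, pass_fail_limit: float, z: float = 1.96):
    """Pass/fail screening per the equations above: a point 'fails' if its
    absolute drift exceeds the pre-defined limit; z = 1.96 for 95% confidence."""
    n = len(drift_values)
    x = sum(abs(d) > pass_fail_limit for d in drift_values)  # failures
    p_fail = x / n
    half_width = z * math.sqrt(p_fail * (1.0 - p_fail) / n)
    p_normal = 1.0 - p_fail
    p_min = 1.0 - (p_fail + half_width)
    p_max = 1.0 - (p_fail - half_width)
    return p_normal, p_min, p_max
```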

The test consists simply of segregating the drift data into different groups (bins) corresponding to different ranges of calibration or surveillance intervals and comparing the standard deviations for the data in the various groups. The purpose of this type of calculation is to determine if the standard deviation tends to become larger as the time between calibrations increases.

4.10.2.1. The available data will be placed in interval bins. The intervals that will normally be used will coincide with Technical Specification calibration intervals plus the allowed tolerance, as follows (a brief computational sketch follows the list):

a. 0 to 37.5 days (covers most weekly and monthly calibrations)
b. 38 to 112.5 days (covers most quarterly calibrations)

c. 113 to 225 days (covers most semi-annual calibrations)

d. 226 to 456 days (covers most annual calibrations)
e. 456 to 675 days (covers most old refuel cycle calibrations)


f. 675 to 900 days (covers most extended refuel cycle calibrations)
g. > 900 days covers missed and forced outage refueling cycle calibrations.
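A minimal sketch of this binning follows, assuming the calibration intervals (in days) and the outlier-tested % drift values are already available as parallel lists; the bin edges follow items a through g above and the data shown are hypothetical.

```python
import statistics

# Upper edge of each interval bin in days, per items a through g above
BIN_EDGES = [37.5, 112.5, 225, 456, 675, 900, float("inf")]

def stdev_by_interval_bin(intervals_days, drift_pct):
    """Group drift values by calibration-interval bin and report the count,
    average interval and sample standard deviation of each populated bin."""
    bins = {}
    for days, drift in zip(intervals_days, drift_pct):
        idx = next(i for i, edge in enumerate(BIN_EDGES) if days <= edge)
        bins.setdefault(idx, []).append((days, drift))
    results = {}
    for idx, points in sorted(bins.items()):
        drifts = [d for _, d in points]
        results[idx] = {
            "count": len(points),
            "avg_interval_days": statistics.mean(days for days, _ in points),
            "stdev_drift_pct": statistics.stdev(drifts) if len(drifts) > 1 else None,
        }
    return results

# Hypothetical intervals (days) and % drift values
print(stdev_by_interval_bin([30, 35, 95, 400, 410, 430, 640, 650, 700],
                            [0.1, -0.2, 0.3, 0.5, -0.6, 0.4, 0.8, -0.9, 0.7]))
```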

4.10.2.2. Different bin splits may be used, but must be evaluated for data coverage and acceptable data groupings.

4.10.2.3. For each bin where there is data, the average, sample standard deviation and data count will be computed. In addition, the average interval for the data points will be computed.

4.10.2.4. To determine whether time-dependency does or does not exist, the data needs to be "equally" distributed across the multiple bins; however, equal distribution in all bins would not normally occur. Normally the minimum expected distribution that would allow evaluation is:
a. A bin will be considered in the final analysis if it holds more than five data points and more than ten percent of the total data count.
b. For those bins that are to be considered, the difference between bins will be less than twenty percent of the total data count.
c. At least two bins, including the bin with the most data, must be left for evaluation to occur.

The distribution percentages listed in these criteria are arbitrary, and thus engineering evaluation can modify them for a given evaluation.

4.10.3. Time-Dependency Plots

• Drift Interval - an XY scatter plot that shows the outlier tested % Drift data plotted against the time interval between tests for the data points. This plot method relies upon the human eye to discern any trend in the data that would exhibit a time-dependency.

• Drift Interval Regression - an XY scatter plot that fits a line through the outlier tested % Drift data plotted against the time interval between tests for the data points, using the "least squares" method to predict values for the given data set. The predicted line is plotted through the actual data for use in predicting drift over time. It is important to note that data outliers can have a dramatic effect upon the regression line.
• Absolute Value Drift Interval - an XY scatter plot that shows the Absolute Value of the outlier tested % Drift data plotted against the time interval between tests for the data points. This plot is intended to demonstrate any tendency for a given group to drift, in either direction, over time. This plot method relies upon the human eye to discern any trend in the data that would exhibit a time-dependency.


• Absolute Value Drift Interval Regression - an XY scatter plot that fits a line through the Absolute Value of the outlier tested % Drift data plotted against the time interval between tests for the data points, using the "least squares" method to predict values for the given data set. The predicted line is plotted through the actual data for use in predicting drift, in either direction, over time. It is important to note that data outliers can have a dramatic effect upon the regression line.

4.10.4. Additional Time-Dependency Analyses

• Instrument Resetting Evaluation - For data sets that consist of a single calibration interval, the time-dependency determination may be accomplished simply by evaluating the frequency at which instruments require resetting. This type of analysis is particularly useful when applied to extend monthly Technical Specification surveillances to quarterly. However, it is less useful for instruments such as sensors or relays that may be reset at each calibration interval, regardless of whether the instrument was already in calibration. The Instrument Resetting Evaluation may be performed only if the devices in the sample pool are shown to be stable, not requiring adjustment (i.e., less than 5% of the data shows that adjustments were made). Care also must be taken when mechanical connections or flex points may be exercised by the act of checking calibration (actuation of a bellows or switch movement), where the act of checking the actuation point may have an effect on the next reading. The methodology for calculating the drift is as follows:

Monthly As Found/As Left (As Found Current Calibration - As Left Previous Calibration):

    AF1 - AL2                                                   (Ref. 8.1.1)

Quarterly As Found/As Left using Monthly Data:

    (AF1 - AL2) + (AF2 - AL3) + (AF3 - AL4)                     (Ref. 8.1.1)

4.10.5. Age-Dependent Drift Considerations

Age-dependency is the tendency for a component's drift to increase in magnitude as the component ages. This can be assessed by plotting the as found value for each calibration minus the previous calibration's as left value of each component over the period of time for which data is available. Random fluctuations around zero may obscure any age-dependent drift trends. By plotting the absolute values of the as found versus as left calibration data, the tendency for the magnitude of drift to increase with time can be assessed.

4.11. Calibration Point Drift

For devices with multiple calibration points (e.g., transmitters, indicators, etc.), the Drift-Calibration Point Plot is a useful tool for comparing the amount of drift exhibited by the group of devices at the different calibration points. The plot consists of a line graph of tolerance interval as a function of calibration point.

4.12. Drift Bias Determination

If an instrument, or group of instruments, consistently drifts predominately in one direction, the drift is assumed to have a bias. When the absolute value of the calculated average for the sample pool exceeds the values in Table 4 for the given sample size and calculated standard deviation, the average is treated as a bias to the drift term. The application of the bias must be carefully considered so that the overall drift term is not reduced in the non-conservative direction. Refer to Example 1 below.

Table 4 - Maximum Values of Non-Biased Mean

Maximum Value of Non-Biased Mean (xcrit), in % span, for a given STDEV (s).
Normal Deviate (t) taken at 0.025 for 95% Confidence.

Sample    Normal       s ≤      s ≤      s ≤      s ≤      s ≤      s ≤      s ≤      s ≤      s ≤
Size (n)  Deviate (t)  0.10%    0.25%    0.50%    0.75%    1.00%    1.50%    2.00%    2.50%    3.00%
≤ 5       2.571        0.115    0.287    0.575    0.862    1.150    1.725    2.300    2.874    3.449
≤ 10      2.228        0.070    0.176    0.352    0.528    0.705    1.057    1.409    1.761    2.114
≤ 15      2.131        0.055    0.138    0.275    0.413    0.550    0.825    1.100    1.376    1.651
≤ 20      2.086        0.047    0.117    0.233    0.350    0.466    0.700    0.933    1.166    1.399
≤ 25      2.060        0.041    0.103    0.206    0.309    0.412    0.618    0.824    1.030    1.236
≤ 30      2.042        0.037    0.093    0.186    0.280    0.373    0.559    0.746    0.932    1.118
≤ 40      2.021        0.032    0.080    0.160    0.240    0.320    0.479    0.639    0.799    0.959
≤ 60      2.000        0.026    0.065    0.129    0.194    0.258    0.387    0.516    0.645    0.775
≤ 120     1.980        0.018    0.045    0.090    0.136    0.181    0.271    0.361    0.452    0.542
> 120     1.960        See equation below.

The maximum value of non-biased mean (xcrit) for a given standard deviation (s) and sample size (n) is calculated using the following formula:

    xcrit = t * s / sqrt(n)                                     (Ref. 8.4.7)

Where:
    xcrit = Maximum value of non-biased mean for a given s and n, expressed in %
    t     = Normal Deviate for a t-distribution at 0.025 for 95% Confidence
    s     = Standard Deviation of the sample pool
    n     = Sample pool size

Example of determining and applying bias to the ADR term:

1) Transmitter Group With a Biased Mean - A group of transmitters is calculated to have a standard deviation of 1.150% and an average of -0.355%, with a count of 47. From Table 4, the maximum value that the average could be is ± 0.258%. The ADR term for a 95%/95% tolerance interval level is DA = -0.355% ± 1.150% x 2.408 (TIF from Table 1 for 47 samples), or DA = -0.355% ± 2.769%. For conservatism, the DA term for the positive direction is not reduced by the bias value, whereas the negative direction is summed with the bias value, so DA = +2.769%, -3.124%.
2) Transmitter Group With a Non-Biased Mean - A group of transmitters is calculated to have a standard deviation of 1.150% and an average of 0.100%, with a count of 47. From Table 4, the maximum value that the average could be is ± 0.258%. The ADR term for a 95%/95% tolerance interval level is DA = ± 1.150% x 2.408 (TIF from Table 1 for 47 samples), or DA = ± 2.769%.
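The bias screen and its conservative application to the ADR term can be sketched as follows. This is an illustration only, not part of the original Design Guide: the tolerance interval factor and the t value are passed in by the user (Tables 1 and 4 remain the governing source), and the group statistics mirror Example 1 above.

```python
import math

def apply_drift_bias(mean, stdev, n, tif, t_value):
    """Treat the group average as a bias when |mean| exceeds xcrit = t*s/sqrt(n),
    then combine it conservatively with the ADR term (see Examples 1 and 2)."""
    xcrit = t_value * stdev / math.sqrt(n)   # maximum value of a non-biased mean
    adr = tif * stdev                        # ADR term, +/- about zero
    if abs(mean) <= xcrit:
        return +adr, -adr                    # non-biased: symmetric term
    # Biased: only the direction of the bias is made more conservative
    if mean < 0:
        return +adr, -(adr + abs(mean))
    return +(adr + mean), -adr

# Example 1 above: s = 1.150%, mean = -0.355%, n = 47, TIF = 2.408, t = 2.000
print(apply_drift_bias(-0.355, 1.150, 47, 2.408, 2.000))   # approximately (+2.769, -3.124)
```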


4.13. Time Dependent Drift Uncertainty

When calibration intervals are extended beyond the range for which historical data is available, the statistical confidence in the ability to predict drift is reduced. This reduced confidence is translated to a higher drift uncertainty and is not dependent upon the observation of time-dependency within the original sample. This Design Guide recommends increasing the Tolerance Interval Factor to account for the higher drift uncertainty at extended intervals.

For example: for components that perform functions which satisfy a specific Technical Specification value (rigor level 1), the normal tolerance interval factor is 95%/95%, indicating a 95% level of confidence that 95% of the population is contained within the stated interval. To account for the drift uncertainty associated with extension of the calibration interval, the confidence level would be increased to 99%, or 99%/95%. An increased confidence level shall be applied to calibration interval extensions regardless of detected time-dependency.

Other methods for calculating the drift value for the extended time interval may also be used. For moderate time-correlated drift, the formula (proposed time interval / average analyzed time interval)^0.5 x the ADR value may be used. This method assumes that the drift-to-time relationship is not linear. Where there is indication of a strong relationship between drift and time, the formula (proposed time interval / average analyzed time interval) x the ADR value may be used.

4.14. Shelf Life Of Analysis Results

Any analysis result based on performance of existing components has a shelf life. In this case, the term shelf life is used to describe a period of time extending from the present into the future, during which the analysis results are considered valid. Predictions for future component/loop performance are based upon our knowledge of past calibration performance. This approach assumes that changes in component/loop performance will occur slowly or not at all over time. For example, if evaluation of the last ten years of data shows the component/loop drift is stable with no observable trend, there is little reason to expect a dramatic change in performance during the next year.

However, it is also difficult to claim that an analysis completed today is still a valid indicator of component/loop performance ten years from now. For this reason, the analysis results should be re-verified periodically (every 3-5 years).

Depending on the type of component/loop, the analysis results are also dependent on the method of calibration, the component/loop span, and the M&TE accuracy. Any of the following program or component/loop changes should be evaluated to determine if they affect the analysis results:

• Changes to M&TE accuracy.
• Changes to the component or loop (e.g., span, environment, manufacturer, model, etc.).
• Calibration procedure changes which alter the calibration methodology.

5. PERFORMING AN ANALYSIS

Drift data for Technical Specification and ORM related instruments will be collected as a part of the PNPP evaluations for extension of plant surveillances to support a 24 month Fuel Cycle. The collected data will be entered into Microsoft Excel workbooks grouped by manufacturer and model number. All data will also be entered into the IPASS software program. Analysis will be performed using both IPASS and Microsoft Excel spreadsheets. The IPASS analyses are all embedded in the software and it is not possible to follow each specific analysis. Therefore, the analysis will be repeated in the spreadsheet, with random hand calculation verification of values, to allow tracing errors or changes. The discussion provided in this section is to assist in setting up a spreadsheet and performing the independent analysis. For IPASS analysis see the IPASS User's Guide, Instrument Performance Analysis Software System (AP-106752).

5.1. Populating The Spreadsheet

5.1.1. For A New Analysis

5.1.1.1. The Responsible Engineer shall determine the component group to be analyzed (e.g., all Rosemount Trip Units).

5.1.1.2. The Responsible Engineer shall develop a list of component numbers, manufacturers, models, component types, brief descriptions, surveillance tests, calibration procedures and calibration information (spans, setpoints, etc.).

5.1.1.3. The Responsible Engineer shall determine the data to be collected following the guidance of Sections 4.4 through 4.6 of this Design Guide.

5.1.1.4. The Data Entry Person shall identify, locate and collect data for the component group to be analyzed (e.g., all Surveillance Tests for the Rosemount Trip Units completed to present).

5.1.1.5. The Data Entry Person shall sort the data by surveillance test or calibration procedure if more than one test/procedure is involved.

5.1.1.6. The Data Entry Person shall sort the surveillance or calibration sheets sequentially, descending by date, starting with the most recent date.

5.1.1.7. The Data Entry Person shall enter the Surveillance or Calibration Procedure Number, Tag Number, Manufacturer and Model Number on the INDEX sheet for all devices to be analyzed.

5.1.1.8. The Data Entry Person shall enter the Surveillance or Calibration Procedure Number, Tag Numbers, Spans, Required Trips, Indications or Outputs on the appropriate component/group sheet using the example formats provided in Figures 1 and 2.

Figure 1 - Sample Spreadsheet (Switches, Trip Units & Other Tripping Devices)

[Figure 1 shows an example drift analysis worksheet for a Scram Discharge Instrument Volume High Water Level trip unit (span = 16 mADC). For each surveillance date the as found and as left trip values are entered; the Interval column is the number of days between successive calibrations (e.g., between 2/22/96 and 11/20/95), and the Raw Drift column is the as found value minus the previous as left value, divided by the device span. The shaded areas show where data from surveillance tests or calibration procedures are entered on the spreadsheet.]

Figure 2 - Sample Spreadsheet (Transmitters, Indicators, Recorders & Other Non-Tripping Devices)

[Figure 2 shows an example drift analysis worksheet for a Reactor Vessel High Pressure Scram functional test/calibration (component calibration span = 16 mADC) with three calibration points (13, 763 and 1513 psi / 4, 12 and 20 mADC). As found and as left values are entered for each point and date; the Interval column is the number of days between successive calibrations, and the Raw Drift columns are the as found values minus the previous as left values, divided by the device span. The shaded areas show where data is entered. Additional columns will be required for data entry and the calculations of Raw Drift data; other layouts may be used as long as the required data is entered into the spreadsheet.]

5.1.1.9. The Data Entry Person shall enter the Date, as found and as left values on the appropriate component/group sheet, starting with the most recent data, using the example formats provided in Figures 1 and 2.
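The raw drift entries illustrated in Figures 1 and 2 are simply the current as found value minus the previous as left value, expressed as a percentage of device span. A minimal sketch follows, assuming the calibration records are already sorted most recent first (step 5.1.1.6); the records shown are hypothetical.

```python
def raw_drift_percent(records, span):
    """records: list of (date, as_found, as_left) tuples sorted most recent first.
    Returns % drift for each record paired with the record that precedes it in time."""
    drifts = []
    for current, previous in zip(records, records[1:]):
        _, as_found, _ = current
        _, _, prev_as_left = previous
        drifts.append((as_found - prev_as_left) / span * 100.0)
    return drifts

# Hypothetical trip-unit data (mADC) with a 16 mADC span, most recent first
records = [("2/22/96", 16.68, 16.68), ("11/20/95", 16.68, 16.68), ("10/26/95", 16.70, 16.68)]
print(raw_drift_percent(records, span=16.0))   # [0.0, 0.0]
```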

5.1.1.10. The Responsible Engineer shall verify the data entered.

5.1.1.11. The Responsible Engineer shall review the notes on each calibration data sheet to determine possible contributors for a given data point being an outlier. The notes should be condensed and entered on the Excel spreadsheet for the applicable calibration points.

5.1.1.12. The Responsible Engineer shall calculate the time interval by subtracting the second date from the first date for the data set, as shown in the example formats provided in Figures 1 and 2.

5.1.1.13. The Responsible Engineer shall calculate the % Drift by subtracting the second date's as left value from the first date's as found value, divided by the device span for the data set. Format the spreadsheet cells to show the value in percent span (0.0000%).

5.1.1.14. The Responsible Engineer shall flag the calibration points that are suspected to be outliers due to the calibration notes reviewed in step 5.1.1.11.

5.2. Spreadsheet Performance Of Basic Statistics

Basic statistics include, at a minimum, determining the number of data points in the sample, the average, standard deviation, variance, minimum, maximum, kurtosis, skewness and the Tolerance Interval Factor contained in each data column. This section provides the specific details for using Microsoft Excel. Other spreadsheet, statistical or math programs that are similar in function are acceptable for use to perform the data analysis, providing all analysis requirements are met.

5.2.1. Determine the number of data points contained in each column for each initial group by using the "COUNT" function. Example cell format: =COUNT(C2:C133). The Count function returns the number of all populated cells within the range of cells C2 through C133.

5.2.2. Determine the average for the data points contained in each column for each initial group by using the "AVERAGE" function. Example cell format: =AVERAGE(C2:C133). The Average function returns the average of the data contained within the range of cells C2 through C133.

5.2.3. Determine the standard deviation for the data points contained in each column for each initial group by using the "STDEV" function. Example cell format: =STDEV(C2:C133). The Standard Deviation function returns the measure of how widely values are dispersed from the mean of the data contained within the range of cells C2 through C133. Formula used by Microsoft Excel to determine the standard deviation:

    s = sqrt( [ n*Σx² - (Σx)² ] / [ n(n-1) ] )                  (Ref. 8.4.5)

Where:
    x - Sample data values (x1, x2, x3, ...)
    s - Standard deviation of all sample data points
    n - Total number of data points

5.2.4. Determine the variance for the data points contained in each column for each initial group

by using the "VAR" function, or "VARP" if the entire population is contained within the spreadsheet. Example cell format: =VAR(C2:C133). The Variance function returns the measure of how widely values are dispersed from the mean of the data contained within the range of cells C2 through C133. Formulas used by Microsoft Excel to determine the variance:

VAR (variance of the sample population):                        (Ref. 8.4.5)

    s² = [ n*Σx² - (Σx)² ] / [ n(n-1) ]

VARP (variance of the population):                              (Ref. 8.4.5)

    σ² = [ n*Σx² - (Σx)² ] / n²

Where:
    x  - Sample data values (x1, x2, x3, ...)
    s² - Variance of the sample population
    σ² - Variance of the entire population
    n  - Total number of data points

5.2.5. Determine the kurtosis for the data points contained in each column for each initial group by using the "KURT" function. Example cell format: =KURT(C2:C133). The Kurtosis function returns the relative peakedness or flatness of the distribution within the range of cells C2 through C133. Formula used by Microsoft Excel to determine the kurtosis:

    KURT = [ n(n+1) / ((n-1)(n-2)(n-3)) ] * Σ[ (xi - x̄)/s ]^4  -  3(n-1)^2 / ((n-2)(n-3))      (Ref. 8.4.5)

Where:
    x - Sample data values (x1, x2, x3, ...)
    n - Total number of data points
    s - Sample Standard Deviation

5.2.6. Determine the skewness for the data points contained in each column for each initial group by using the "SKEW" function. Example cell format: =SKEW(C2:C133). The Skewness function returns the degree of symmetry around the mean of the cells contained within the range of cells C2 through C133. Formula used by Microsoft Excel to determine the skewness:

    SKEW = [ n / ((n-1)(n-2)) ] * Σ[ (xi - x̄)/s ]^3             (Ref. 8.4.5)

Where:
    x - Sample data values (x1, x2, x3, ...)
    n - Total number of data points
    s - Sample Standard Deviation

5.2.7. Determine the maximum value for the data points contained in each column for each initial

group by using the "MAX" function. Example cell format: =MAX(C2:C133). The Maximum function returns the largest value of the cells contained within the range of cells C2 through C133.

5.2.8. Determine the minimum value for the data points contained in each column for each initial group by using the "MIN" function. Example cell format: =MIN(C2:C133). The Minimum function returns the smallest value of the cells contained within the range of cells C2 through C133.

5.2.9. Determine the median value for the data points contained in each column for each initial group by using the "MEDIAN" function. Example cell format: =MEDIAN(C2:C133). The median is the number in the middle of a set of numbers; that is, half the numbers have values that are greater than the median, and half have values that are less. If there is an even number of numbers in the set, then MEDIAN calculates the average of the two numbers in the middle.

5.2.10. Evaluate other groups to determine if combination of groups for analysis is appropriate or to verify that current groupings are acceptable.

5.2.11. Review the statistics and component data of the sub-groups to determine the acceptability for combination. This would entail looking at the manufacturer, model, calibration span, setpoints, time intervals, standard deviation, average, location, environment, etc. Refer to Section 4.6.

5.2.12. Perform a t-Test in accordance with step 4.6.4 on each possible sub-group combination to test for the acceptability of combining the data. Acceptability for combining the data is indicated when the absolute value of the Test Statistic (t Stat) is greater than the absolute value of the probability for a two-tailed distribution [P(T<=t) two-tail]. Example: the t Stat for combining sub-groups A and B may be 0.703, which is larger than the P(T<=t) two-tail of 0.485.

5.2.13. Combine the sub-groups that passed the required tests in steps 5.2.11 and 5.2.12 into a larger group or groups as necessary.

5.2.14. Repeat steps 5.2.1 through 5.2.8 on the group(s) formed in step 5.2.13.

5.2.15. Where multiple groups are formed from the initial sub-groups, additional testing in accordance with steps 5.2.11 and 5.2.12 may be performed to determine the suitability of further combinations.

5.2.16. Ensure that the indication of unacceptability does not mask time-dependency.

5.3. Outlier Detection And Expulsion

Refer to Section 4.7 for a detailed explanation of outliers.

5.3.1. Following the guidance in Section 4.8, or if necessary Section 4.9, verify that the raw data is normally distributed or approximately normally distributed.

5.3.2. Obtain the Critical Values for the t-Test from Table 2, which is based on the sample size of the data contained within the specified range of cells. Use the COUNT value to determine the sample size.

5.3.3. Format the spreadsheet cells for the outlier test using the following: =IF(Absolute Value(Raw Trip Value - Group Average) / Group Standard Deviation < Critical Value for t, then show the Raw Trip Value). Example: =IF(ABS(C2-C$135) / C$136 < 3.28, C2, "")

5.3.4. Perform the outlier test for all the samples.
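A minimal sketch of the outlier screen in steps 5.3.3 and 5.3.4, assuming the % drift values and the critical t value from Table 2 are supplied by the analyst; the data and the critical value shown are hypothetical. Flagged points are only candidates for expulsion and still require the engineering justification described in step 5.3.7.

```python
import statistics

def flag_outliers(drift_pct, t_critical):
    """Return (retained, flagged): a point is flagged as a potential outlier when
    |value - mean| / stdev >= t_critical (the spreadsheet IF test in step 5.3.3)."""
    mean = statistics.mean(drift_pct)
    stdev = statistics.stdev(drift_pct)
    retained, flagged = [], []
    for value in drift_pct:
        (retained if abs(value - mean) / stdev < t_critical else flagged).append(value)
    return retained, flagged

# Hypothetical drift data (% span) screened with a hypothetical critical value of 3.28
data = [0.0] * 10 + [0.05, -0.05, 0.1, -0.1, 0.15, -0.15, 0.2, -0.2, 0.25, 5.0]
retained, flagged = flag_outliers(data, t_critical=3.28)
print(flagged)   # the 5.0 point is flagged for review, not automatically expelled
```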

5.3.5. Recalculate the Average, Median, Standard Deviation, Variance, Minimum, Maximum, Kurtosis, Skewness and Count for the outlier tested data.

5.3.6. Calculate the percentage of samples expelled as outliers from the original sample population (e.g., the outlier-tested COUNT divided by the original COUNT). Up to 5% of the original sample population may be expelled as outliers, providing justification is provided for each outlier.

5.3.7. Once the outliers are statistically determined, each sample point identified as an outlier must be justified prior to expulsion. Justification examples are provided in Section 4.7. If no justification can be found, the sample point cannot be expelled from the pool.

5.3.8. Calculate the Absolute Value for each outlier tested data point within the group in a separate column.

5.4. Calculate The Analyzed Drift Value

The ADR Value is calculated by multiplying the standard deviation of the outlier tested group by the

Tolerance Interval Factor for the sample size. The ADR Value is not comprised of drift alone. This value also contains errors from M&TE, normal temperature variations, device reference accuracy, human errors, normal humidity effects, normal radiation effects, normal vibration effects, misapplication, improper installation, or other operating effects that affect component calibration.

5.4.1. Use the COUNT value of the outlier tested data to determine the sample size and refer to Section 4.4 for the rigor level of the data.

5.4.2. Obtain the appropriate Tolerance Interval Factor (TIF) for the rigor level and size of the sample set. Table 1 lists the 95%/95% TIFs. Refer to Attachment 3 for other TIF multipliers. Note: TIFs other than 95%/95% must be evaluated by Design Engineering prior to use.

5.4.3. For a generic data analysis, multiple Tolerance Interval Factors may be used providing a clear tabulation of results is included in the analysis showing each value for the multiple levels of rigor (e.g., Rigor Level 1 - TIF = 95%/95%, Rigor Level 4 - TIF = 75%/95%, etc.).

5.4.4. Multiply the TIF by the standard deviation for the data points contained in each column of the group (e.g., 0%, 50%, 100%, etc.).

5.4.5. Determine if the sample pool contains a bias in the average following the guidance provided in Section 4.12.

5.4.6. If the ADR term calculated above is applied to the existing calibration interval, application of additional drift uncertainty is not necessary.

5.5. Time-Dependency Test

This test segregates the data into groups based on the time intervals (e.g., 0-6, 6-12, 12-18, 18-24 and 24-30 months). The standard deviation is then determined for each group and compared to the remaining groups. This test is useful to confirm the regression plots.

5.5.1. Create a new spreadsheet tab titled "Time Dependency".

5.5.2. Copy the outlier tested % drift and associated time interval to the Time-Dependency tab.

5.5.3. Sort the data ascending by the time interval.

5.5.4. Group the data in logical time intervals (e.g., 0-6, 6-12, 12-18, 18-24 and 24-30 months).

5.5.5. Determine the Standard Deviation of each group.

5.5.6. Compare the Standard Deviations of each group to determine if the Standard Deviation increases as the time interval increases.

5.6. Normality Test

This test calculates the Standard Deviation of the outlier tested data and counts how many samples fall within 1σ and 2σ.

5.6.1. Create a new spreadsheet tab titled "STDEV" or "Standard Deviation".

5.6.2. Copy the outlier tested % Drift to the new tab.

5.6.3. Sort the data ascending.

5.6.4. Calculate the standard deviation for the group using the formula given in step 5.2.3.

5.6.5. Calculate the absolute value of the % Drift column.

5.6.6. Determine if a given sample is within 1σ by using the following test: if the absolute value of the sample is ≤ one standard deviation, then show the value [IF(ABS(Sample) ≤ STDEV, value, "")].

5.6.7. Determine if a given sample is within 2σ by using the following test: if the absolute value of the sample is ≤ two standard deviations, then show the value [IF(ABS(Sample) ≤ 2 * STDEV, value, "")].
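A minimal sketch of this 1σ/2σ counting check (compare the results to Table 3 in Section 4.8.6), assuming the outlier-tested % drift values are available as a list; the data shown are hypothetical.

```python
import statistics

def sigma_coverage(drift_pct):
    """Fraction of samples whose absolute value is within one and two standard
    deviations of the group (compare to 68.27% and 95.45% for a normal distribution)."""
    stdev = statistics.stdev(drift_pct)
    n = len(drift_pct)
    within_1s = sum(1 for v in drift_pct if abs(v) <= stdev) / n
    within_2s = sum(1 for v in drift_pct if abs(v) <= 2 * stdev) / n
    return within_1s, within_2s

print(sigma_coverage([0.1, -0.2, 0.3, 0.0, -0.1, 0.2, 0.4, -0.3, 0.1, -0.1]))
```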

5.6.8. Count the number of samples that are within 1σ and 2σ.

5.6.9. Divide the counts by the total number of samples in the group to determine a percentage. Values for a normal distribution are provided in Table 3.

5.7. Plot The Spreadsheet Data

The ability to perform regression analysis, histograms and other descriptive statistics tools in Microsoft Excel requires that the "add-ins" include the Data Analysis Tool Pack. The descriptive statistics tools reside under the Tools - Data Analysis pull-down menu. Microsoft Excel may need to be reinstalled on the computer performing the data analysis to include the Data Analysis Tool Pack. (Ref. 8.4.5)

5.7.1. Drift Interval Plot

5.7.1.1. Organize the data in columns so that the outlier tested group can be plotted against the time intervals for the group.
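The drift interval scatter built in this subsection can also be produced outside Excel. A minimal sketch follows, assuming matplotlib and numpy are installed; the data are hypothetical and the least-squares line corresponds to the regression option described in step 5.7.1.3.

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical outlier-tested % drift and calibration intervals (days)
intervals = np.array([30.0, 90.0, 180.0, 360.0, 400.0, 540.0, 700.0])
drift_pct = np.array([0.05, -0.10, 0.20, -0.25, 0.30, -0.35, 0.40])

slope, intercept = np.polyfit(intervals, drift_pct, 1)   # least-squares line

plt.scatter(intervals, drift_pct, label="% drift")
plt.plot(intervals, slope * intervals + intercept, label="least-squares fit")
plt.xlabel("Interval between tests (days)")
plt.ylabel("Drift (% of span)")
plt.legend()
plt.show()
```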

5.7.1.2. Select and calculate, if necessary, the time interval representation for the group. For monthly surveillances, the time interval may be represented best in days, whereas refueling interval calibrations may be represented in months.

5.7.1.3. Use an XY scatter plot or the Regression Tool to display the % Drift of the sample group over the time intervals for the group.

5.7.2. Absolute Value Drift Interval Plot

5.7.2.1. Organize the data in columns so that the absolute value of the outlier tested group determined in step 5.3.8 can be plotted against the time intervals for the group.

5.7.2.2. Select and calculate, if necessary, the time interval representation for the group. For monthly surveillances the time interval may be represented best in days, whereas refueling interval calibrations may be represented in months.

5.7.2.3. Use an XY scatter plot or the Regression Tool to display the Absolute Value of

the % Drift of the sample group over the time intervals for the group.

5.7.3. Cumulative Probability Plot

5.7.3.1. Create a new spreadsheet tab titled "Probability".

5.7.3.2. Copy the outlier tested data to the Probability tab.

5.7.3.3. Sort the outlier tested % Drift data ascending.

5.7.3.4. Create a column titled "Sample #" and assign a counter to each % Drift data point (e.g., 1, 2, 3...).

5.7.3.5. Create a column titled "Probability" and calculate the probability term for each data point using the equation provided in step 4.8.4.

5.7.3.6. To create the Probability Plot, use the regression tool, selecting the line fit chart. Y Input Range = calculated probability; X Input Range = % Drift.

5.7.4. Normalized Probability Plot

5.7.4.1. Calculate the multiple of the standard deviation for each point based on a normal

                 " Residuals" option and select "OK".

5.7.6.3. Cumulative Probability Regression Statistics - From the tools menu in Excel, l select " Data Analysis", then " Regression". Using the Regression tool, plot the Probability data column (Y-axis) versus the Raw data column (X-axis). Select the  ;

                 " Residuals" option and select "OK".

l PY-CEl/NRR-23981. Attachment 5 Page 34 of 39 5.7.6.4. Normalized Probability Regression Statistics - From the tools menu in Microsoft Excel, select " Data Analysis", then " Regression" Using the Regression tool, plot the Multiple of STDEV data column (Y axis) versus the % Drift column (X-axis). Select the " Residuals" option and select "OK". 5.7.7. Adjust the page setup for each page of the data analysis to ensure that printing is grouped appropriately and is easily understood. Ensure that the date, title, file name and page number are displayed in the Header / Footer of each page. 5.7.8. Print the following work sheets and plots, as necessary, to confirm and add to the IPASS Statistical analysis and ; resentation. Specifically, the t-tests for data grouping, the Histogram, the standard Deviation of Time intervals and absolute value xy scatter plots should be printed from Microsoft Excel. 5.7.8.1. Raw Data 5.7.8.2. Grouped Data 5.7.8.3. t-Tests 5.7.8.4. XY Scatter Plos - Raw Data 5.7.8.5. XY Scatter Plots with Regression Line - Raw Data 5.7.8.6. Absolute Value XY Scatter Plots 5.7.8.7. Absolute Value XY Scatter Plots with Regression Line 5.7.8.8. Cumulative Probability Plots 5.7.8.9. Normalized Probability Plots 5.7.8.10. Histograms 5.7.8.11. Regression Statistics - Raw Data l I 5.7.8.12. Regression Statistics - Absolute Value Data 5.7.8.13. Regression Statistics - Cumulative Probability 5.7.8.14. Regression Statistics - Normalized Probability 5.7.8.15. Standard Deviation of Time Intervals 5.7.8.16. Pass / Fail worksheet -if applicable 5.7.8.17.Other worksheets used for data evaluation as necessary 5.8. Analyzing The Dats & Charts This Design Guide is intended to be used in conjunction with various statistical analysis texts. This l Design Guide does not provide step by step instruction in the analysis of the data, plots and graphs. The Responsible Engineer performing the data analyses shall have the necessary training, knowledge and understanding of basic statistics to interpret the data, plots and graphs. Refer to the EPRI TR-103335, GUIDELINES FOR INSTRUMENT CALIBRATION EXTENSION / REDUCTION PROGRAMS as a guide to perform the analysis of the various plots and graphs. 1 5.8.1. Probability plots are discussed in Section 4.8.4 of this Design Guide. 5.8.2. Coverage Analysis (Histograms) are discussed in Section 4.8.5 of this Design Guide. 5.8.3. The XY scatter plots and regression plots are discussed in Section 4.10.2 of this Design Guide.

PY-CEl/NRR-2398L Attachment 5 Page 35 of 39 5.8.4. Section 4.8 discusses the verification of normality. 5.8.5. Section 4.9 discusses the option of a pass / fail analysis for distributions considered not normal. 5.8.6. Section 4.10 discusses time-dependency evaluations. 5.8.7. Application of the ADR (DA) term is discussed in the Instrument Uncertainty and Setpoints Design Guide. (Ref. 8.3.1) 5.8.8. Application of bias in the ADR (DA) term is discussed in Section 4.12.

6. CALCULATIONS 6.1. Drift Calculations Perform a Drin Calculation in accordance with NEl 0341, ensuring that the following minimum requirements are met:

(Ref. 8.2.2)

6.1.1. The title includes the terms Drift Calculation and the Manufacturer/Model number of the component group analyzed.

6.1.2. The calculation objective shall:

6.1.2.1. Describe, at a minimum, that the objective of the calculation is to document the drift analysis results for the component group.

6.1.2.2. Provide a list for the group of all pertinent information in tabular form (e.g., Tag Numbers, Manufacturer, Model Numbers, ranges and calibration spans).

6.1.3. The method of solution shall describe, at a minimum, a summary of the methodology used to perform the drift analysis outlined by this Design Guide. Exceptions taken to this Design Guide shall be included in this section, including the basis and references for exceptions.

6.1.4. The actual calculation/analysis shall provide:

6.1.4.1. The Statistics Summary for the analyzed group.

6.1.4.2. The applicable Tolerance Interval Factors (provide detailed discussion and justification if other than 95%/95%).

6.1.4.3. The calculated ADR (DA) Term(s).

6.1.4.4. Bias contained in the average, if applicable.

6.1.4.5. The hard copy of the drift analysis as an attachment to the calculation.

6.1.5. The results and conclusions section shall provide detailed discussions on:

6.1.5.1. The analysis data.

6.1.5.2. Application of any bias terms.

6.1.5.3. The analysis plots and graphs.

6.1.5.4. Time dependency.

6.1.5.5. Normality.

6.1.5.6. Acceptability of the data for use in Setpoint/Uncertainty Calculations. For calibration extension programs, the drift value must be extrapolated to the time interval of interest. The NRC has only accepted the statistical methods discussed in this guide for extension of the calibration interval to 24 months plus the 25 percent allowance provided by Technical Specification. To extend the calibration interval, determine if there is any time-dependent component of drift. If there is no time-dependent component of the drift, then the calculated drift value may be used directly for the increased interval. If there is a time-dependent component, then the drift values must be extrapolated for the new calibration interval. For moderate time-correlated drift, the formula (proposed time interval / average analyzed time interval)^0.5 x the analyzed drift value may be used. This method assumes that the drift-to-time relationship is not linear. Where there is indication of a strong relationship between drift and time, the formula (proposed time interval / average analyzed time interval) x the analyzed drift value may be used.

6.2. Setpoint/Uncertainty Calculations

Application of the results of the drift analyses and drift calculations to a specific device or loop will require, in most cases, that a setpoint/uncertainty calculation be performed or revised in accordance with NEI-0341. (Ref. 8.2.2)
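A minimal sketch of the interval extrapolation described in Section 6.1.5.6, assuming the analyzed drift value, the average analyzed interval and the proposed interval are known. The 0.5 exponent reflects the square-root scaling described above for moderate time-correlated drift; the numbers shown are hypothetical.

```python
def extrapolated_drift(adr, avg_analyzed_interval, proposed_interval,
                       strong_time_dependence=False):
    """Scale the analyzed drift value to a longer calibration interval.
    Moderate time-correlated drift: square-root scaling; strong: linear scaling."""
    ratio = proposed_interval / avg_analyzed_interval
    exponent = 1.0 if strong_time_dependence else 0.5
    return adr * ratio ** exponent

# Hypothetical: ADR of 2.769% span analyzed at an average 550-day interval,
# extrapolated to a 30-month (about 912-day) interval
print(extrapolated_drift(2.769, 550.0, 912.0))                                # moderate
print(extrapolated_drift(2.769, 550.0, 912.0, strong_time_dependence=True))  # strong
```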
7. DEFINITIONS

95%/95% - Standard statistics term meaning that the results have a 95% confidence (γ) that at least 95% of the population will lie within the stated interval (P) for a sample size (n). (Ref. 8.1.1)

Analyzed Drift (DA) - A term representing the errors determined by a completed drift analysis for a group. Uncertainties which may be represented by the analyzed drift term are component accuracy errors, M&TE errors, personnel-induced or human related errors, ambient temperature and other environmental effects, power supply effects, misapplication errors and true component drift. Synonymous with ADR. (Step 4.3.13)

As Found (FT) - The condition in which a channel, or portion of a channel, is found after a period of operation and before recalibration. (Ref. 8.1.3)

As Left (CT) - The condition in which a channel, or portion of a channel, is left after calibration or final setpoint device verification. (Ref. 8.1.3)

Bias (B) - A shift in the signal zero point by some amount. (Ref. 8.1.1)

Calibrated Span (CS) - The maximum calibrated upper range value less the minimum calibrated lower range value. (Ref. 8.1.1)

Calibration Interval - The elapsed time between the initiation or successful completion of calibrations or calibration checks on the same component or loop. (Ref. 8.1.1)

Chi-Square Test - A test to determine if a sample appears to follow a given probability distribution. This test is used as one method for assessing whether a sample follows a normal distribution. (Ref. 8.1.1)

Commercial Grade Software - Software that is not unique to, or used only in, nuclear facilities, and which may be purchased on the basis of the vendor's published description, such as in a catalog. (Ref. 8.2.3)

Confidence Interval - An interval that contains the population mean to a given probability. (Ref. 8.1.1)

Coverage Analysis - An analysis to determine whether the assumption of a normal distribution effectively bounds the data. A histogram is used to graphically portray the coverage analysis. (Ref. 8.1.1)

Cumulative Distribution - An expression of the total probability contained within an interval from -∞ to some value x. (Ref. 8.1.1)

D-Prime Test - A test to verify the assumption of normality for moderate to large sample sizes. (Ref. 8.1.1)

Dependent - In statistics, dependent events are those for which the probability of all occurring at once is different than the product of the probabilities of each occurring separately. In setpoint determination, dependent uncertainties are those uncertainties for which the sign or magnitude of one uncertainty affects the sign or magnitude of another uncertainty. (Ref. 8.1.1)

Drift - An undesired change in output over a period of time, where the change is unrelated to the input, environment, or load. (Ref. 8.1.2)

Error - The algebraic difference between the indication and the ideal value of the measured signal. (Ref. 8.1.2)

Functionally Equivalent - Components with similar design and performance characteristics that can be combined to form a single population for analysis purposes. (Ref. 8.1.1)

Histogram - A graph of a frequency distribution. (Ref. 8.1.1)

Independent - In statistics, independent events are those in which the probability of all occurring at once is the same as the product of the probabilities of each occurring separately. In setpoint determination, independent uncertainties are those for which the sign or magnitude of one uncertainty does not affect the sign or magnitude of any other uncertainty. (Ref. 8.1.1)

Instrument Channel - An arrangement of components and modules as required to generate a single protective action signal when required by a plant condition. A channel loses its identity where single protective action signals are combined. (Ref. 8.1.2)

Instrument Range - The region between the limits within which a quantity is measured, received or transmitted, expressed by stating the lower and upper range values. (Ref. 8.1.2)

Kurtosis - A characterization of the relative peakedness or flatness of a distribution compared to a normal distribution. A large kurtosis indicates a relatively peaked distribution and a small kurtosis indicates a relatively flat distribution. (Ref. 8.1.1)

M&TE - Measuring and Test Equipment. (Ref. 8.1.1)

Maximum Span - The component's maximum upper range limit less the maximum lower range limit. (Ref. 8.1.1)

Mean - The average value of a random sample or population. (Ref. 8.1.1)

Median - The value of the middle number in an ordered set of numbers. Half the numbers have values that are greater than the median and half have values that are less than the median. If the data set has an even number of values, the median is the average of the two middle numbers. (Ref. 8.1.1)

Module - Any assembly of interconnected components that constitutes an identifiable device, instrument or piece of equipment. A module can be removed as a unit and replaced with a spare. It has definable performance characteristics that permit it to be tested as a unit. (Ref. 8.1.2)

Normality Test - A statistics test to determine if a sample is normally distributed. (Ref. 8.1.1)

Outlier - A data point significantly different in value from the rest of the sample. (Ref. 8.1.1)

Population - The totality of the observations with which one is concerned. A true population consists of all values, past, present and future. (Ref. 8.1.1)

Probability - That branch of mathematics which deals with the assignment of relative frequencies of occurrence (confidence) of the possible outcomes of a process or experiment according to some mathematical function. (Ref. 8.4.7)

Probability Density Function - An expression of the distribution of probability for a continuous function. (Ref. 8.1.1)

Probability Plot - A type of graph scaled for a particular distribution in which the sample data will plot as approximately a straight line if the data follows that distribution. For example, normally distributed data will plot as a straight line on a probability plot scaled for a normal distribution; the data may not appear as a straight line on a graph scaled for a different type of distribution. (Ref. 8.1.1)

Proportion - A segment of a population that is contained by an upper and lower limit. Tolerance intervals determine the bounds or limits of a proportion of the population, not just the sampled data.

The proportion (P) is the second term in the tolerance interval value (e.g., 95%/95%). (Ref. 8.4.7)

Random - Describing a variable whose value at a particular future instant cannot be predicted exactly, but can only be estimated by a probability distribution function. (Ref. 8.1.1)

Raw Data - As found minus as left calibration data used to characterize the performance of a functionally equivalent group of components. (Ref. 8.1.1)

Reference Accuracy - A number or quantity that defines a limit that errors will not exceed when a device is used under specified operating conditions. (Ref. 8.1.2)

Rigor - The degree of strictness applied to a given analysis. (Step 4.4.1)

Sample - A subset of a population. (Ref. 8.1.1)

Sensor - The portion of an instrument channel that responds to changes in a plant variable or condition and converts the measured process variable into a signal, e.g., electric or pneumatic. (Ref. 8.1.2)

Signal Conditioning - One or more modules that perform signal conversion, buffering, isolation or mathematical operations on the signal as needed. (Ref. 8.1.2)

Skewness - A measure of the degree of symmetry around the mean. (Ref. 8.1.1)

Span - The algebraic difference between the upper and lower values of a calibrated span. (Ref. 8.1.2)

Standard Deviation - A measure of how widely values are dispersed from the population mean. (Ref. 8.1.1)

Surveillance Interval - The elapsed time between the initiation or successful completion of a surveillance or surveillance check on the same component, channel, instrument loop, or other specified system or device. (Ref. 8.1.1)

Time-Dependent Drift - The tendency for the magnitude of component drift to vary with time. (Ref. 8.1.1)

Time-Dependent Drift Uncertainty - The uncertainty associated with extending calibration intervals beyond the range of available historical data for a given instrument or group of instruments. (Ref. 8.1.1)

Time-Independent Drift - The tendency for the magnitude of component drift to show no specific trend with time. (Ref. 8.1.1)

Tolerance - The allowable variation from a specified or true value. (Ref. 8.1.2)

Tolerance Interval - An interval that contains a defined proportion of the population to a given probability. (Ref. 8.1.1)

Trip Setpoint - A predetermined value for actuation of the final actuation device to initiate protective action. (Ref. 8.1.2)

t Test - For this Design Guide the t-Test is used to determine: 1) if a sample is an outlier of a sample pool; 2) if two groups of data originate from the same pool. (Ref. 8.1.1)

Uncertainty - The amount to which an instrument channel's output is in doubt (or the allowance made therefore) due to possible errors, either random or systematic, which have not been corrected for. The uncertainty is generally identified within a probability and confidence level. (Ref. 8.1.1)

Variance - A measure of how widely values are dispersed from the population mean. (Ref. 8.1.1)

W Test - A test to verify the assumption of normality for sample sizes less than 50. (Ref. 8.1.1)



8. REFERENCES

8.1. Industry Standards and Correspondence

8.1.1. EPRI TR-103335, Rev. 0, Guidelines For Instrument Calibration Extension/Reduction Programs.

8.1.2. ISA RP67.04, Rev. 0, Recommended Practice, Methodologies for the Determination of Setpoints for Nuclear Safety-Related Instrumentation.

8.1.3. ISA-67.04, Rev. 0, Standard, Methodologies for the Determination of Setpoints for Nuclear Safety-Related Instrumentation.

8.1.4. ANSI N15.15-1974, Rev. 0, Assessment of the Assumption of Normality (Employing Individual Observed Values).

8.1.5. IPASS User Guide, Instrument Performance Analysis Software System, AP-106752.

8.1.6. NRC to EPRI Letter, Status Report on the Staff Review of EPRI Technical Report TR-103335, "Guidelines for Instrument Calibration Extension/Reduction Program," dated March 1994.

8.1.7. Regulatory Guide 1.105, Rev. 2, "Instrument Setpoints," USNRC.

8.1.8. GE NEDC-31336P-A, "General Electric Instrument Setpoint Methodology," dated September 1996.

8.2. Procedures

8.2.1. NEI-0331, Rev. 4, Design Input.

8.2.2. NEI-0341, Rev. 6, Calculations.

8.2.3. PAP-0506, Rev. 4, Computer Software Administrative Control.

8.2.4. PAP-1403, Rev. 6, Control of Setpoints.

8.3. Programs

8.3.1. Engineering Design Guide 97-021/05 E, Setpoint Calculation Methodology, Rev. 6, dated November 20, 1997.

8.4. Miscellaneous

8.4.1. IPASS (Instrument Performance Analysis Software System), Revision 0, created by EDAN Engineering in conjunction with EPRI.

8.4.2. NRC Status Report on the Staff Review of EPRI Technical Report TR-103335, "Guidelines For Instrument Calibration Extension/Reduction Programs," dated March 1994.

8.4.3. NRC Generic Letter 91-04, Changes in Technical Specification Surveillance Intervals to Accommodate a 24-Month Fuel Cycle.

8.4.4. MPAC, Maintenance Planning and Control System.

8.4.5. Microsoft Excel Version 97 SR-2, Spreadsheet Program.

8.4.6. Microsoft Access Version 97 SR-2, Database Program.

8.4.7. Statistics for Nuclear Engineers and Scientists, Part 1: Basic Statistical Inference, William J. Beggs, February 1981.

PY-CEI/NRR-2398L Attachment 6 Page 1 of 4

SIGNIFICANT HAZARDS CONSIDERATION

The standards used to arrive at a determination that a request for amendment does not involve a significant hazard are included in Commission Regulation 10 CFR 50.92, which states that operation of the facility in accordance with the proposed amendment would not:

(1) involve a significant increase in the probability or consequences of an accident previously evaluated; or

(2) create the possibility of a new or different kind of accident from any accident previously evaluated; or

(3) involve a significant reduction in a margin of safety.

The proposed amendment has been reviewed with respect to these three factors, and it has been determined that the proposed change does not involve a significant hazard because:

The proposed amendment does not involve a significant increase in the probability or consequences of an accident previously evaluated.

A. Frequency Extensions The proposed Technical Specification (TS) changes involve a change in the surveillance testing intervals to facilitate a change in the Perry Nuclear Power Plant (PNPP) operating cycle from 18 months to 24 months. The proposed TS changes do not physically impact the plant, nor do they impact any design or functional


requirements of the associated systems. That is, the proposed TS changes do not degrade the performance of, or increase the challenges to, any safety systems assumed to function in the accident analysis. The proposed TS changes do not impact the TS surveillance requirements themselves, or the way in which the surveillances are performed. In addition, the proposed TS changes do not introduce any accident initiators, since no accidents previously evaluated have, as their initiators, anything related to the frequency of surveillance testing. Also, evaluation of the proposed TS changes demonstrated that the availability of equipment and systems required to prevent or mitigate the radiological consequences of an accident are not significantly affected because of other, more frequent testing that is performed, the availability of redundant systems and equipment, or the high reliability of the equipment. Since the impact on the systems is minimal, it is concluded that the overall impact on the plant accident analysis is negligible. Furthermore, a historical review of surveillance test results and associated maintenance records indicated that there was no evidence of any failures that would invalidate the above conclusions. Therefore, the proposed TS changes do not significantly increase the probability or consequences of an accident previously evaluated.

B. Allowable Value Changes

The proposed changes in Allowable Values for the instrumentation included in Table 3.3.8.1-1, Items d and e, of the Technical Specifications are the result of application of the Perry Instrument Setpoint Methodology (ISM) using plant

specific drift values. Application of this methodology results in Allowable Values which more accurately reflect total instrumentation loop accuracy as well as that of test equipment and calculated drift between surveillances. The proposed changes will not result in any hardware changes. The instrumentation is not assumed to be an initiator of any analyzed event. Existing operating margin between plant conditions and actual plant setpoints is not significantly reduced due to these changes. The role of the instrumentation is in mitigating and thereby limiting the consequences of accidents. The Allowable Values have been developed to ensure that the design and safety analysis limits will be satisfied. The methodology used for the development of the Allowable Values ensures the affected instrumentation remains capable of mitigating design basis events as described in the safety analyses and that the results and radiological consequences described in the safety analyses remain bounding. Additionally, the proposed change does not alter the plant's ability to detect and mitigate events. Therefore, this change does not involve a significant increase in the probability or consequences of an accident previously evaluated.

C. Frequency Reductions to Semiannual

The proposed Technical Specification (TS) changes involve a change in the surveillance testing intervals from 18 months to either 6 months or quarterly. The shorter frequencies are based on PNPP specific results of setpoint drift evaluations. The proposed more restrictive TS changes do not physically impact the plant, nor do they impact any design or functional requirements of the associated systems. That is, the proposed TS changes do not degrade the performance of, or increase the challenges to, any safety systems assumed to function in the accident analysis. The proposed TS changes do not impact the TS surveillance requirements themselves, or the way in which the surveillances are performed. In addition, the proposed TS changes do not introduce any accident initiators, since no accidents previously evaluated have, as their initiators, anything related to the frequency of surveillance testing. The proposed TS frequencies will demonstrate that the equipment and systems required to prevent or mitigate the radiological consequences of an accident are continuing to meet the assumptions of the setpoint evaluation, on a more frequent basis. Since the impact on the systems is minimal, and the assumptions of the safety analyses will be maintained, it is concluded that the overall impact on the plant accident analysis is negligible. Furthermore, a historical review of surveillance test results and associated maintenance records indicated that there was no evidence of any failures that would invalidate the proposed test frequencies. Therefore, the proposed TS changes do not significantly increase the probability or consequences of an accident previously evaluated.

The proposed amendment would not create the possibility of a new or different kind of accident from any accident previously evaluated.

A. Frequency Extensions

The proposed TS changes involve a change in the surveillance testing intervals to facilitate a change in the PNPP operating cycle length. The proposed TS changes do not introduce any failure mechanisms of a different type than those previously evaluated, since there are no physical changes being made to the facility. No new or different equipment is being installed. No installed equipment is being operated in a different manner. As a result, no new failure modes are being introduced. In addition, the surveillance test requirements themselves, and the way surveillance tests are performed, will remain unchanged. Furthermore, a historical review of surveillance test results and associated maintenance records indicated there was no evidence of any failures that would invalidate the above conclusions. Therefore, the proposed TS changes do not create the possibility of a new or different kind of accident from any previously evaluated.

B. Allowable Value Changes

The proposed changes are the result of application of the ISM using plant specific drift values and do not create the possibility of a new or different kind of accident from any accident previously evaluated. This is based on the fact that the method and manner of plant operation is unchanged. The use of the proposed Allowable Values does not impact safe operation of PNPP in that the safety analysis limits will be maintained. The proposed Allowable Values involve no system additions or physical modifications to systems in the station. These Allowable Values were revised to ensure the affected instrumentation remains capable of mitigating accidents and transients. Plant equipment will not be operated in a manner different from previous operation, except that setpoints may be changed. Since operational methods remain unchanged and the operating parameters have been evaluated to maintain the station within existing design basis criteria, no different type of failure or accident is created.

C. Frequency Reductions to Semiannual or Quarterly

The proposed TS changes involve a change in the surveillance testing interval due to the application of the ISM and plant specific drift analysis results. Also, the quarterly tests reflect current PNPP calibration practices, since the components are normally calibrated during the Channel Functional Test. The proposed TS changes do not introduce any failure mechanisms of a different type than those previously evaluated, since there are no physical changes being made to the facility. No new or different equipment is being installed. No installed equipment is being operated in a different manner. The proposed change does not impact core reactivity or the manipulation of fuel bundles. As a result, no new failure modes are being introduced. In addition, the surveillance test requirements themselves, and the way surveillance tests are performed, will remain unchanged.

Furthermore, a historical review of surveillance test results and associated maintenance records indicated there was no evidence of any failures that would invalidate the above conclusions. Therefore, the proposed TS changes do not create the possibility of a new or different kind of accident from any previously evaluated.

The proposed amendment will not involve a significant reduction in a margin of safety.

A. Frequency Extensions

Although the proposed TS changes will result in changes in the interval between surveillance tests, the impact, if any, on system availability is small, based on other, more frequent testing that is performed, the existence of redundant systems and equipment, or overall system reliability. Evaluations have shown there is no evidence of time-dependent failures that would impact the availability of the systems. The proposed change does not significantly impact the condition or performance of structures, systems, and components relied upon for accident mitigation. The proposed change does not significantly impact any safety analysis assumptions or results. Therefore, the proposed change does not involve a significant reduction in a margin of safety.

B. Allowable Value Changes

The proposed change does not involve a reduction in a margin of safety. The proposed changes have been developed using a methodology to ensure safety analysis limits are not exceeded. As such, this proposed change does not involve a significant reduction in a margin of safety.

C. Frequency Reductions to Semiannual or Quarterly

The proposed TS changes will result in a shorter interval between surveillance tests to ensure that the assumptions of the safety analysis are maintained. The impact, if any, on system availability is small, as a result of the more frequent testing that is performed. The proposed change does not significantly impact the condition or performance of structures, systems, and components relied upon for accident mitigation. The proposed change does not significantly impact any safety analysis assumptions or results. Therefore, the proposed change does not involve a significant reduction in a margin of safety.

}}