ML042040177

License Amendment Request to Support 24-month Fuel Cycles, Drift Analysis (Instrumentation and Controls)
Person / Time
Site: Monticello
Issue date: 06/30/2004
From:
Nuclear Management Co
To:
Document Control Desk, Office of Nuclear Reactor Regulation
References
GL-91-004, L-MT-04-036



ENCLOSURE 4 MONTICELLO NUCLEAR GENERATING PLANT LICENSE AMENDMENT REQUEST TO SUPPORT 24-MONTH FUEL CYCLES DRIFT ANALYSIS (INSTRUMENTATION AND CONTROLS)

TABLE OF CONTENTS

1.0 PURPOSE
2.0 APPLICABILITY
2.1 Graded Approach
2.2 24-Month Fuel Cycle
3.0 PERSONNEL WITH RESPONSIBILITIES DEFINED IN THIS DOCUMENT
4.0 DISCUSSION/METHODOLOGY
4.1 Methodology Options
4.2 Drift Analysis Scope
4.3 As-Found As-Left (AFAL) Calibration Analysis
4.4 Calibration Data Collection
4.5 Data Grouping
4.6 Outlier Analysis
4.7 Normality Testing
4.8 Time-Dependency Analysis
4.9 Drift Bias Determination
4.10 Time Dependent Drift Uncertainty
4.11 Shelf Life Of Analysis Results
5.0 INSTRUCTIONS
5.1 Extended Surveillance Intervals
5.2 Verification of Drift Assumptions
5.3 Populating The Spreadsheet
5.4 Spreadsheet Performance Of Basic Statistics
5.5 Outlier Detection And Expulsion
5.6 Normality Tests
5.7 Selection of Final Data Set
5.8 Time Dependency Testing
5.9 Drift Bias Determination
5.10 Calculate The Analyzed Drift Value
6.0 CALCULATIONS
6.1 Drift Studies
6.2 Use of Analyzed Drift Value in Setpoint/Uncertainty Calculations
7.0 DEFINITIONS
8.0 REFERENCES
8.1 Industry Standards Documents
8.2 NMC Documents
8.3 Miscellaneous
9.0 TABLES
Table 9.1 Tolerance Interval Factors
Table 9.2 Critical Values for T-Test
Table 9.3 Expected Probabilities for Normal Distribution
Table 9.4 Probabilities of χ²(d) > χₒ² (percent)
Table 9.5 Coefficients (aₙ₋ᵢ₊₁) Used in the W Test for Normality
Table 9.6 Percentage Points of the Distribution of the W Test Statistic for P = 0.05
Table 9.7 Percentage Points of the Distribution of the D' Test Statistic
Table 9.8 Critical Values of F-Distribution
10.0 ATTACHMENTS
10.1 Evaluation of Drift Data

1.0 PURPOSE

This drift analysis methodology provides the guidelines to perform drift analyses using past calibration history data. A drift analysis may be used to:

  • Estimate component/loop drift for integration into setpoint calculations.

  • Establish a technical basis for extending calibration and surveillance intervals using historical calibration data.

  • Evaluate extended surveillance intervals in support of longer fuel cycles.
  • Trend device performance based on extended surveillance intervals.

2.0 APPLICABILITY

2.1 Graded Approach

The amount of effort spent on details and input data validation should be proportional to the safety significance of the analyzed equipment.

Evaluations can be categorized following the Graded Approach to Setpoint Calculations sections in Engineering Standards Manual ESM-03.02 (Reference 8.2.1).

2.2 24-Month Fuel Cycle

Drift analyses performed to support extended surveillance intervals as part of the 24-Month Fuel Cycle Extension project require the highest level of detail and validation. Deviations from this instruction should be justified and documented as part of the drift analysis. Drift analyses performed per this instruction will be used as part of the justification required by NRC Generic Letter 91-04 (Reference 8.1.3).

3.0 PERSONNEL WITH RESPONSIBILITIES DEFINED IN THIS DOCUMENT

  • Analysis Preparer
  • Analysis Verifier
  • Analysis Approver

Additional responsibilities for preparers, verifiers, and approvers are contained in 4 AWI-05.01.25 (Reference 8.2.3).


4.0 DISCUSSION/METHODOLOGY

4.1 Methodology Options

This design guide provides the methodology necessary for the analysis of As-Found As-Left calibration data as a means of characterizing the performance of a component or group of components via the following methods:

4.1.1 Electric Power Research Institute (EPRI) has developed a guideline to provide nuclear plants with practical methods for analyzing historic component calibration data to predict component performance via a simple spreadsheet program (e.g., Excel, Lotus 1-2-3). This design guide is written in close adherence to this guideline, Reference 8.1.1.

Reference 8.1.1 was originally issued as TR-103335, dated March 1994. By letter dated December 1, 1997, from T.H. Essig, NRC, to R.W. James, EPRI (Reference 8.1.6), the NRC staff issued a status report documenting its concerns with TR-103335. The EPRI report was reissued as TR-103335-R1 in October 1998. The NRC has not issued a formal review of TR-103335-R1.

4.1.2 Commercial grade software programs other than Microsoft Excel (e.g., IPASS, Lotus 1-2-3) that will perform the functions necessary to evaluate drift may be utilized, provided the intent of this design guide is met and the software is used only as a tool to produce hard copy outputs that will be independently verified.

4.1.3 The EPRI IPASS software (Reference 8.3.6) may be used to perform or independently verify certain portions of the drift analysis. The IPASS software cannot perform many of the functions required by the drift analysis, such as the time dependency functions, and therefore should only be used in conjunction with other software products to produce or verify an entire drift study.

4.2 Drift Analysis Scope

4.2.1 The scope of this instruction is limited to the calculation of the expected performance for a component, group of components, or loop using past calibration data. Analysis performed per this instruction should be formatted and controlled as required by 4 AWI-05.01.25 (Reference 8.2.3).


4.2.2 A drift analysis may be performed on all regularly calibrated devices where as-found and as-left data is recorded. The scope of this instruction includes, but is not limited to, the following list of devices:

A. Transmitters (Differential Pressure, Flow, Level, Pressure, Temperature, etc.)

B. Bistables (Trip Units, Alarm Units, etc.)

C. Indicators (Analog, Digital)

D. Switches (Differential Pressure, Flow, Level, Position, Pressure, Temperature, etc.)

E. Signal Conditioners/Converters (Summers, E/P Converters, Square Root Converters, etc.)

F. Recorders (Differential Pressure, Flow, Level, Pressure, Temperature, etc.)

G. Monitors & Modules (Radiation, Neutron, H2O2, Pre-Amplifiers, etc.)

H. Relays (Time Delay, Undervoltage, Overvoltage, etc.)

4.3 As-Found As-Left (AFAL) Calibration Analysis

This Instruction is based on the as-found as-left (AFAL) analysis methodology described in EPRI Document TR-103335-R1 (Reference 8.1.1). Refer to the EPRI document for a more detailed description of the AFAL method than is provided here.

4.3.1 Information Obtained From AFAL Analysis

The following information can be obtained by evaluating the AFAL data for an instrument or group of instruments:

A. The typical drift between calibrations.

B. Any tendency to drift in a particular direction (bias).

C. Any tendency for the drift uncertainty to increase over time.

D. Confirmation that the setting tolerances are appropriate for the device.

E. Confirmation that instrument performance is consistent with design requirements.


4.3.2 General Features of AFAL Analysis

A. The methodology evaluates historical calibration data only; data is obtained from instrument calibration records.

B. Present and future performance is based on statistical analysis of past performance.

C. Data can be analyzed starting from instrument installation up to the present or only the more recent data can be evaluated.

D. Since only historical data is evaluated, the method is not intended as a tool to identify individual faulty instruments, although it can be used to demonstrate that a particular instrument, model, or application is performing well or poorly.

E. A similar class of instruments, i.e., same make, model, application, is evaluated.

F. The methodology is less suitable for evaluating the drift of a single instrument due to statistical analysis penalties that occur with smaller sample sizes.

G. The methodology is based on actual calibration data and is thus traceable to calibration standards.

H. The methodology determines plant-specific drift for a particular group of instruments that can be used in instrument uncertainty and setpoint calculations.

I. The methodology is designed to support the analysis of longer calibration intervals for fuel cycle extensions and is consistent with the NRC expectations described in Reference 8.1.3.

4.3.3 Random Behavior

A. If the AFAL calibration data indicates that the instrument randomly drifts around its setting without a tendency to drift in a particular direction, the drift is referred to as random drift.

B. In terms of AFAL analysis, the standard deviation of the drift result is usually taken as the random portion of the instrument drift.


4.3.4 Bias Behavior

A. If the instrument consistently drifts in one direction, the drift is said to have a bias.

B. In terms of AFAL analysis, the mean, or average value, of the drift result is usually taken as the bias portion of the instrument performance.

4.3.5 Error and Uncertainty Content in AFAL Data

A. The As-Found versus the As-Left data includes several sources of uncertainty over and above component drift. Each of the following sources of error can contribute to the magnitude of the AFAL value:

1. True drift representing a change, time-dependent or otherwise, in instrument/loop output over the time period between any two consecutive calibrations.
2. Accuracy errors present between any two consecutive calibrations.
3. Measurement and test equipment error between any two consecutive calibrations.
4. Personnel-induced or human-related variation or error between any two consecutive calibrations.
5. Normal temperature effects due to a difference in ambient temperature between any two consecutive calibrations.
6. Environmental effects on component performance (e.g., radiation, humidity, vibration, etc.) between any two consecutive calibrations that cause a shift in component output.

7. Misapplication, improper installation, or other operating effects that affect component calibration between any two consecutive calibrations.

4.3.6 Potential Impacts of AFAL Data Analysis

A. Many of the items listed in Step 4.3.5 are not expected to have a significant effect on the measured As-Found and As-Left settings. Because of the many independent parameters contributing to the possible variance in calibration data, they will all be considered together and termed the component's Analyzed Drift (AD) uncertainty. This approach has the following potential impacts on an analysis of the component's calibration data:

1. The magnitude of the calculated variation may exceed any assumptions or manufacturer predictions regarding drift. Attempts to validate manufacturer's performance claims should consider the possible contributors to the calculated drift.

2. The magnitude of the calculated variation that includes all of the above sources of uncertainty may mask any true time-dependent drift. In other words, the analysis of AFAL data may not demonstrate any time dependency. This does not mean that time-dependent drift does not exist, only that it could be so small that it is negligible in the cumulative effects of component uncertainty, when all of the above sources of uncertainty are combined.
3. The AFAL drift value can possibly be used in place of more than just the drift term in the channel uncertainty calculation.

4.4 Calibration Data Collection

4.4.1 Sources of Data

A. Calibration and maintenance records for all plant process instruments are maintained in the Component Master List (CML) computerized database. (On some complex instruments, procedures are used to document calibration data; in these cases the CML instrument record will indicate the procedure number where data is recorded.) All previously completed calibration and maintenance history records are accessible through the Automated Records Management System (ARMS).

4.4.2 How Much Data To Collect

A. The goal is to collect enough data for the instrument or group of instruments to make a statistically valid pool. There is no hard and fast number that must be attained for any given pool. Table 9.1 provides the Tolerance Interval Factor (TIF) for various sample pool sizes; it should be noted that the smaller the pool, the larger the penalty. A tolerance interval is a statement of confidence that a certain proportion of the total population is contained within a defined set of bounds. For example, a 95%/95% TIF indicates a 95% level of confidence that 95% of the population is contained within the stated interval.


Generally, sample sizes greater than 30 are acceptable. AFAL analysis performed with a smaller sample size must have justification provided within the analysis documentation. (A sketch illustrating the application of the tolerance interval factor follows at the end of this subsection.)

B. Different information may be needed depending on the analysis purpose. Therefore, the total population of components - all makes, models, and applications - that will be analyzed must be known.

C. Once the total population of components is known, the components should be separated into functionally equivalent groups. Each grouping is treated as a separate population for analysis purposes.

D. Not all components or available calibration data points need to be analyzed within each group in order to establish statistical performance limits for the group. Acquisition of data should be considered from different perspectives.

1. For each grouping, a large enough sample of components should be randomly selected from the population, so there is assurance that the evaluated components are representative of the entire population. By randomly selecting the components and confirming that the behavior of the randomly selected components is similar, a basis for not evaluating the entire population can be established.

For sensors, a random sample from the population should include representation of all desired component spans and functions.

2. For each selected component in the sample, enough historic calibration data should be provided to ensure that the component's performance over time is understood.
3. Due to the difficulty of determining the total sample set, developing specific sampling criteria is difficult. Because of the difficulty in developing a valid sampling program, specific justification in the drift study is required to document any sampling plan.

4.5 Data Grouping

4.5.1 Grouping Calibration Data

A. One analysis goal should be to combine functionally equivalent components (components with similar design and performance characteristics) into a single group. In some cases, all components of a particular manufacturer make and model can be combined into a single sample. In other cases, virtually no grouping of data beyond a particular component make, model, and specific span or application may be possible.

4.5.2 Rationale for Grouping Components into a Larger Sample

A. A single component analysis may result in too few data points to make statistically meaningful performance predictions.

B. Smaller sample sizes associated with a single component may unduly penalize performance predictions by applying a larger TIF to account for the smaller data set. Larger sample sizes reflect a greater understanding and assurance of representative data that in turn reduces the uncertainty factor.

C. Larger groupings of components into a sample set for a single population ultimately allow the user to state the plant-specific performance for a particular make and model of component.

D. An analysis of smaller sample sizes is more likely to be influenced by non-representative variations of a single component (outliers).

E. Grouping similar components together, rather than analyzing them separately, is more efficient and minimizes the number of separate calculations that must be maintained.

4.5.3 Considerations when Combining Components into a Single Group

A. Consider the following guidelines when grouping functionally equivalent components together:

1. If performed on a type-of-component basis, component groupings should usually be established down to the manufacturer make and model, as a minimum. The principles of operation are different for the various manufacturers, and combining the data could mask some trend for one type of component.
2. Sensors of the same manufacturer make and model, but with different calibrated spans or elevated zero points, can possibly still be combined into a single group. Note that some manufacturers provide a predicted accuracy and drift value for a given component model, regardless of its span. However, the validity of combining components with a variation of span, ranging from tens of pounds to several thousand pounds, should be confirmed. As part of the analysis, the performance of components within each span should be compared to the overall expected performance to determine if any differences are evident between components with different spans.

3. Components combined into a single group should be exposed to similar calibration or surveillance conditions, as applicable. Although it is desirable that the grouped components perform similar functions, the method by which the data is obtained for this analysis is also significant. If half the components are calibrated in the summer at 90°F and the other half in the winter at 40°F, a difference in observed drift between the data for the two sets of components might exist. In many cases, ambient temperature variations are not expected to have a large effect since the components are located in environmentally controlled areas.

4.5.4 Verification that Data Grouping is Appropriate

A. Combining functionally equivalent components into a single group for analysis purposes may simplify the scope of work; however, some level of verification should be performed to confirm that the selected component grouping is appropriate. As an example, the manufacturer may claim the same accuracy and drift specifications for two components of the same model, but with different ranges, e.g., 0-5 psig and 0-3000 psig. However, in actual application, components of one range may perform differently than components of another range.

B. Standard statistics texts provide methods that can be used to determine if data from similar types of components can be pooled into a single group. If different groups of components have essentially equal variances and means at the desired statistical level, the data for the groups can be pooled into a single group.

C. A t-Test (two samples assuming unequal variances) may also be performed on the proposed components to be grouped. The t-Test returns the probability associated with a Student's t-Test, which is used to determine whether two samples with unequal variances are likely to have come from underlying populations with the same mean.

If, for example, the proposed group contains 5 sub-groups, the t-Tests should be performed on all possible combinations for the groupings. However, if there is no plausible engineering explanation for the two sets of data being incompatible, the groups should be combined, despite the results of the t-Test.


1. The t-Test may be performed using the t-Test: Two-Sample Assuming Unequal Variances analysis tool within Microsoft Excel.
2. The following formula is used to determine the test statistic value t:

t = (x̄ − ȳ − Δ₀) / √(s₁²/m + s₂²/n)

where:

t - Test statistic
m - Number of data points in sample 1
n - Number of data points in sample 2
x̄ - Mean of sample 1
ȳ - Mean of sample 2
s₁², s₂² - Variances of the two samples
Δ₀ - Hypothesized mean difference (0 if testing for equal means)

The following formula is used to estimate the degrees of freedom for the test statistic:

df = (s₁²/m + s₂²/n)² / [ (s₁²/m)² / (m − 1) + (s₂²/n)² / (n − 1) ]
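As a cross-check on the Excel analysis tool, the two formulas above can be evaluated directly; SciPy's Welch t-test is an equivalent library route. A minimal sketch, using illustrative sample data:

    import numpy as np
    from scipy import stats

    # Illustrative drift data (% span) for two candidate sub-groups.
    sample1 = np.array([0.10, -0.05, 0.21, 0.02, -0.11, 0.08, 0.15, -0.03])
    sample2 = np.array([0.02, 0.30, -0.22, 0.12, 0.05, -0.18, 0.25, 0.01])

    m, n = len(sample1), len(sample2)
    s1, s2 = sample1.var(ddof=1), sample2.var(ddof=1)

    # Test statistic and degrees of freedom per the formulas above
    # (hypothesized mean difference of zero).
    t = (sample1.mean() - sample2.mean()) / np.sqrt(s1 / m + s2 / n)
    df = (s1 / m + s2 / n) ** 2 / ((s1 / m) ** 2 / (m - 1) + (s2 / n) ** 2 / (n - 1))

    # Equivalent library call (two samples, unequal variances assumed).
    t_check, p_value = stats.ttest_ind(sample1, sample2, equal_var=False)

    # A p-value of 0.05 or more gives no statistical grounds against pooling.
    print(f"t = {t:.3f} (check: {t_check:.3f}), df = {df:.1f}, p = {p_value:.3f}")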

D. The F-distribution test may be used to test if two variances are likely to have come from the same underlying population. Since the presence of outliers may have a significant effect on the results of the test, consideration should be given to performing the test before and after the outliers are removed. The following method uses a one-sided 5% test and is based on the discussion contained in Section 6.2 of Reference 8.3.3.

1. The F value is determined by the ratio of the largest and smallest variances for the two groups:

Fcalc = s₁² / s₂²

where:

s₁ - Largest drift standard deviation value
s₂ - Smallest drift standard deviation value

2. The critical value of the F-distribution can be found using Table 9.8 with:

V1 - Number of samples minus 1 in the bin with the largest standard deviation
V2 - Number of samples minus 1 in the bin with the smallest standard deviation

The critical value of the F-distribution can also be found using the FINV function in Microsoft Excel:

Fcrit = FINV(0.05, V1, V2)

4.5.5 Using Data from Other Nuclear Power Plants

A. It is generally not recommended to pool MNGP specific data with data obtained from other utilities. It may be acceptable to use data from other utilities in cases where limited calibration history is available at MNGP. In this case the data must also be verified to be acceptable for grouping.

Acceptability may be defined by verification of grouping, and an evaluation of calibration procedures, Measurement and Test Equipment used, and defined setting tolerances.

4.6 Outlier Analysis

4.6.1 An outlier is a data point significantly different in value from the rest of the sample. The presence of an outlier or multiple outliers in the sample of component or group data may result in the calculation of a larger than expected sample standard deviation and tolerance interval.

Calibration data can contain outliers for several reasons. Outlier analyses can be used in the initial analysis process to help identify problems with data that require correction. Examples include:

A. Data Transcription Errors - Calibration data can be recorded incorrectly either on the original calibration data sheet or in the spreadsheet program used to analyze the data.


B. Calibration Errors - Improper setting of a device at the time of calibration would indicate larger than normal drift during the subsequent calibration.

C. Measuring & Test Equipment Errors - Improperly selected or miscalibrated test equipment could indicate drift, when little or no drift was actually present.

D. Scaling or Setpoint Changes - Changes in scaling or setpoints can appear in the data as larger than actual drift points unless the change is detected during the data entry or screening process.

E. Failed Instruments - Calibrations are occasionally performed to verify proper operation due to erratic indications, spurious alarms, etc. These calibrations may be indicative of component failure (not drift), which would introduce errors that are not representative of the device performance during routine conditions.

F. Design or Application Deficiencies - An analysis of calibration data may indicate a particular component that always tends to drift significantly more than all other similar components installed in the plant. In this case, the component may need an evaluation for the possibility of a design, application, or installation problem. Including this particular component in the same population as the other similar components may skew the drift analysis results.

4.6.2 Detection of Outliers

A. ASTM Standard E178-02 (Reference 8.1.4) provides several methods for determining the presence of outliers. This instruction utilizes the Critical Values for T-Test. The T-Test compares a given data point against the critical values listed in Table 9.2 at an upper significance level of 5%. Note that the critical value of T increases as the sample size increases. This signifies that as the sample size grows, it is more likely that the sample is truly representative of the population. The T-Test assumes that the data is normally distributed.

4.6.3 T-Test Outlier Detection Equation

T = |xᵢ − x̄| / s

where:

xᵢ - An individual sample data point
x̄ - Mean of all sample data points
s - Standard deviation of all sample data points
T - Calculated value of the extreme studentized deviate, which is compared to the critical value of T for the sample size

If the calculated value of T exceeds the critical value for the sample size and desired significance level, then the evaluated data point is identified as an outlier.
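A minimal sketch of this screening test follows. The drift values are illustrative, and the critical value shown for n = 20 at the 5% upper significance level is an assumption for illustration; the governing values are those in Table 9.2.

    import numpy as np

    def studentized_deviates(data):
        """T = |x_i - mean| / s for every point in the sample."""
        data = np.asarray(data, dtype=float)
        return np.abs(data - data.mean()) / data.std(ddof=1)

    # Illustrative drift sample (% span) containing one suspicious point.
    drift = np.array([0.05, -0.02, 0.11, 0.04, -0.08, 0.01, 0.95, 0.03,
                      -0.06, 0.07, 0.02, -0.04, 0.09, -0.01, 0.06, 0.00,
                      -0.05, 0.08, -0.03, 0.02])

    T = studentized_deviates(drift)
    t_crit = 2.557            # assumed Table 9.2 value for n = 20, 5% level
    worst = int(np.argmax(T))
    if T[worst] > t_crit:
        print(f"Point {worst} (drift = {drift[worst]}) is an outlier: "
              f"T = {T[worst]:.3f} > {t_crit}")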

4.6.4 Outlier Expulsion

A. This instruction does not permit multiple outlier tests or passes. The removal of poor quality data as listed in Section 4.6.1 is not considered removal of outliers, since it merely assists in identifying data errors. However, after removal of the poor quality data, certain data points can still appear as outliers when the outlier analysis is performed. These "unique outliers" are not consistent with the other data collected and could be judged as erroneous points that tend to skew the representation of the distribution of the data. However, for the general case, since these outliers may accurately represent instrument performance, only one (1) additional unique outlier (as indicated by the T-Test) may be removed from the drift data.

B. If there are many identified outliers, the data should be reviewed in more detail to determine if a single instrument or unusual situation is influencing the results.

4.7 Normality Testing

4.7.1 A test for normality can be important because many frequently used statistical methods are based upon an assumption that the data is normally distributed. This assumption also applies to the analysis of component calibration data. For example, the following analyses may rely on an assumption that the data is normally distributed:

A. Determination of a tolerance interval that bounds a stated proportion of the population based on calculation of mean and standard deviation
B. Identification of outliers
C. Pooling of data from different samples into a single population

4.7.2 The normal distribution occurs frequently and is an excellent approximation to describe many processes. Testing the assumption of normality is important to confirm that the data appears to fit the model of a normal distribution, but tests will not prove that the normal distribution is a correct model for the data. At best, it can only be found that the data is reasonably consistent with the characteristics of a normal distribution, and that the treatment of the distribution as normal is conservative. For example, some tests for normality will only allow the rejection of the hypothesis that the data is normally distributed.

A group of data passing the test does not mean the data is normally distributed; it only means that there is no evidence to say that it is not normally distributed. However, because of the wealth of industry evidence that drift can be conservatively represented by a normal distribution, a group of data passing these tests will be considered as normally distributed without adjustments to the standard deviation of the data set.

4.7.3 Distribution-free techniques are available when the data is not normally distributed; however, these techniques are not as well known and often penalize the results by producing tolerance intervals that are substantially larger than the normal distribution equivalent. Because of this, there is good reason to demonstrate that the data is normally distributed or can be bounded by the assumption of normality.

4.7.4 Analytically verifying that a sample appears to be normally distributed usually invokes a form of statistics known as hypothesis testing. In general, a hypothesis test includes the following steps:

A. Statement of the hypothesis to be tested and any assumptions
B. Statement of a level of significance to use as the basis for acceptance or rejection of the hypothesis
C. Determination of a test statistic and a critical region
D. Calculation of the appropriate statistics to compare against the test statistic
E. Statement of conclusions

4.7.5 The following sections discuss various ways in which the assumption of normality can be verified to be consistent with the data or can be claimed to be a conservative representation of the actual data. Analytical hypothesis testing and subjective graphical analyses are discussed. If any of the analytical hypothesis tests (Chi-Squared, D Prime, or W Test) are passed, the coverage analysis and additional graphical analyses are not required. The following are methods for assessing normality:

A. Chi-Squared (χ²) Goodness of Fit Test

The χ² test compares the actual distribution of sample values to the expected distribution. The expected values are calculated by using the normal mean and standard deviation for the sample. If the distribution is normally or approximately normally distributed, the difference between the actual and expected values should be very small. If the distribution is not normally distributed, the differences should be significant.

To perform a χ² test:

1. Calculate the mean for the sample group:

x̄ = Σxᵢ / n

where:

xᵢ - An individual sample data point
x̄ - Mean of all sample data points
n - Total number of data points

2. Calculate the standard deviation for the sample group:

s = √[ (n·Σx² − (Σx)²) / (n(n − 1)) ]

where:

x - Sample data values (x₁, x₂, x₃, ...)
s - Standard deviation of all sample data points
n - Total number of data points

3. Divide the data into bins to aid in determination of a normal distribution. The number of bins selected is up to the individual performing the analysis. Refer to Reference 8.1.1 for further guidance. Table 9.3 lists the expected probabilities for normal distribution for 9 through 12 bins.

The data may be divided using the Histogram function in Microsoft EXCEL.

4. Calculate the χ² value for the sample group:

Eᵢ = N·Pᵢ

χ² = Σ (Oᵢ − Eᵢ)² / Eᵢ

where:

Eᵢ - Number of sample items expected in a bin
N - Total number of samples in the population
Pᵢ - Probability that a given sample will be contained in a bin
Oᵢ - Observed number of sample items in a bin
χ² - Chi-squared result

5. Calculate the degrees of freedom (d). The degrees of freedom term is computed as the number of bins used for the chi-squared computation minus the constraints. In all cases for these drift calculations, the count, mean, and standard deviation are computed. Therefore, the constraints term is equal to three (3).
6. Compute the chi-squared per degree of freedom term:

χₒ² = χ² / d

7. Evaluate the results. The results are evaluated in the following manner, as prescribed in Reference 8.1.1. If the chi-squared result computed in Step 4.7.5.A.4 is less than or equal to the degrees of freedom (χₒ² ≤ 1), the assumption that the distribution is normal will not be rejected.

If the value from Step 4.7.5.A.4 is greater than the degrees of freedom, then one final check will be made. The degrees of freedom and obtained chi-squared value are used to look up the probability that the observed χₒ² will exceed the expected value (see Table 9.4). If the lookup value is greater than or equal to 5%, then the assumption of normality will not be rejected. However, if the lookup value is less than 5%, the assumption of normality is rejected.
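Steps 1 through 7 can be scripted for independent verification of the spreadsheet. A minimal sketch follows; it assumes equal-probability bins (so each Eᵢ is N divided by the number of bins), whereas an actual analysis would follow the binning guidance of Reference 8.1.1 and the probabilities of Table 9.3.

    import numpy as np
    from scipy import stats

    def chi_squared_normality_ok(data, n_bins=10):
        """Chi-squared goodness-of-fit check per Steps 1-7 above.
        Returns True when the assumption of normality is not rejected."""
        data = np.asarray(data, dtype=float)
        n = data.size
        mean, s = data.mean(), data.std(ddof=1)

        # Bin edges giving equal expected probability under the fitted normal.
        edges = stats.norm.ppf(np.linspace(0.0, 1.0, n_bins + 1), loc=mean, scale=s)
        observed = np.array([np.sum((data > lo) & (data <= hi))
                             for lo, hi in zip(edges[:-1], edges[1:])])
        expected = np.full(n_bins, n / n_bins)           # E_i = N * P_i

        chi2 = np.sum((observed - expected) ** 2 / expected)
        d = n_bins - 3                # bins minus 3 constraints (count, mean, s)
        lookup = stats.chi2.sf(chi2, d)   # exceedance probability, cf. Table 9.4
        return (chi2 <= d) or (lookup >= 0.05)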

B. The W Test

Reference 8.1.5 recommends this test for sample sizes less than 50. The W Test calculates a test statistic value for the sample population and compares the calculated value to the critical values for W, which are tabulated in Table 9.6. The W Test is a lower-tailed test; thus, if the calculated value of W is less than the critical value of W, the assumption of normality would be rejected at the stated significance level. If the calculated value of W is larger than the critical value of W, there is no evidence to reject the assumption of normality. Reference 8.1.5 establishes the methods and equations required for performing a W Test.

To perform a W test:

1. Order the sample data in ascending order from smallest to largest.
2. Calculate S² for the group:

S² = (n − 1)·s²

where:

S² - Sum of the squares about the mean
s² - Unbiased estimate of the sample population variance
n - Total number of data points

3. Calculate the quantity b:

b = Σᵢ₌₁ᵏ aₙ₋ᵢ₊₁ · (xₙ₋ᵢ₊₁ − xᵢ)

where:

i - Index running from 1 to k, where k = n/2 if n is even or k = (n − 1)/2 if n is odd
n - Total number of samples
xᵢ - An individual sample data point (ordered from smallest to largest)

The coefficients aₙ₋ᵢ₊₁ are obtained from Table 9.5.

4. Compute the test statistic:

W = b² / S²

5. Compare the test statistic to the corresponding critical value at 5% level of confidence. Critical values for W are tabulated in Table 9.6. If the calculated value of W is less than the critical value of W, the assumption of normality would be rejected at the stated significance level. If the calculated value of W is larger than the critical value of W, there is no evidence to reject the assumption of normality.
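The W Test is the Shapiro-Wilk test, which SciPy implements directly; this makes a convenient independent check of the spreadsheet computation. The library reports a p-value rather than a Table 9.6 critical value; comparing the p-value to 0.05 is the equivalent lower-tailed decision. A sketch with illustrative data:

    from scipy import stats

    # Final drift data set (fewer than 50 points); illustrative values.
    drift = [0.05, -0.02, 0.11, 0.04, -0.08, 0.01, 0.03, -0.06,
             0.07, 0.02, -0.04, 0.09, -0.01, 0.06, 0.00, -0.05]

    w_stat, p_value = stats.shapiro(drift)
    # Lower-tailed test: a small W (equivalently p < 0.05) rejects normality.
    if p_value < 0.05:
        print(f"W = {w_stat:.4f}: assumption of normality rejected")
    else:
        print(f"W = {w_stat:.4f}: no evidence to reject normality")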

C. The D-Prime (D') Test

Reference 8.1.5 recommends this test for moderate to large sample sizes (greater than or equal to 50). The D' Test calculates a test statistic value for the sample population and compares the calculated value to the percentage points of the D' distribution, which are tabulated in Table 9.7. The D' Test is two-sided, which means that the two-sided percentage limits at the stated level of significance must bound the calculated D' value. For the given sample size, the calculated value of D' must lie within the two values provided in Table 9.7 in order to accept the hypothesis of normality.

To perform a D' test:

1. Order the sample data in ascending order from smallest to largest.
2. Calculate the S2 for the group:

S² = (n − 1)·s²

where:

S² - Sum of the squares about the mean
s² - Unbiased estimate of the sample population variance
n - Total number of data points

3. Calculate the linear combination (T) of the sample group:

T = Σᵢ₌₁ⁿ (i − (n + 1)/2) · xᵢ

where:

i - The number of the sample point (rank in ascending order)
n - Total number of data points
xᵢ - An individual sample data point

4. Calculate the D' value for the sample group:

D' = T / S

5. Evaluate the results. If the D' value lies within the acceptable range of results (for the given data count) per Table 9.7, for P = 0.025 and P = 0.975, then the assumption of normality is not rejected. (If the exact data count is not contained within the tables, the critical value limits for the D' value should be linearly interpolated to the correct data count.) If, however, the value lies outside that range, the assumption of normality is rejected.
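A sketch of the D' computation per Steps 1 through 5, following the formulas above; the acceptance limits must still be read (and, if needed, interpolated) from Table 9.7, so the comparison is left as a comment rather than inventing bounds.

    import numpy as np

    def d_prime(data):
        """D' statistic per Steps 1-4 above (D' = T / S)."""
        x = np.sort(np.asarray(data, dtype=float))    # Step 1: ascending order
        n = x.size
        s_sq = (n - 1) * x.var(ddof=1)                # Step 2: S^2 = (n - 1) s^2
        i = np.arange(1, n + 1)
        t_lin = np.sum((i - (n + 1) / 2.0) * x)       # Step 3: linear combination T
        return t_lin / np.sqrt(s_sq)                  # Step 4: D' = T / S

    # Illustrative data; the test is recommended for samples of 50 or more.
    rng = np.random.default_rng(seed=2)
    drift = rng.normal(0.0, 0.1, 60)

    d_value = d_prime(drift)
    print(f"D' = {d_value:.4f}")
    # Step 5: normality is not rejected if d_value lies within the Table 9.7
    # limits (P = 0.025 and P = 0.975) for this sample size.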

D. Probability Plots

A graphical presentation of the data can reveal possible reasons why the data is or is not normal. A probability plot is a graph of the sample data with the axes scaled for a normal distribution. If the data is normal, the data will tend to follow a straight line. If the data is non-normal, a nonlinear shape should be evident from the graph. Refer to Section C.4 of Reference 8.1.1 for further discussion. This method of normality determination is subjective, and is not required if the numerical methods show the data to be normal, or if a coverage analysis is used.

1. Cumulative Probability Plot - an XY scatter plot of the Final Data Set plotted against the percent probability (Pi) for a normal distribution. The following steps are required to produce a probability plot:
a. Order the sample data in ascending order from smallest to largest value.
b. Calculate the cumulative probability (Pi) for each point:

Pᵢ = 100 · (i − ½) / n

where:

i - The number of the sample point
n - Total number of sample points

c. Plot the ordered data in ascending order, xᵢ versus Pᵢ.

d. Attempt to draw a straight line through the data. The closer the data is to a straight line, the more likely that the data is normally distributed.

2. Normalized Probability Plot - an XY scatter plot of the Final Data Set plotted against the probability for a normal distribution expressed in multiples of the standard deviation. Reference 8.1.1 provides for this alternate method of plotting the sample data against multiples of the standard deviation rather than Pᵢ. The examples in Reference 8.1.1 and the results of the IPASS software are presented in this format. However, since the shape of the plot is the critical factor, this method is not further discussed in this instruction.

E. Coverage Analysis

1. A coverage analysis is used for cases in which the hypothesis tests reject the assumption of normality, but the assumption of normality may still be a conservative representation of the data. The coverage analysis involves the use of a histogram of the data set, overlaid with the equivalent probability distribution curve for the normal distribution, based on the data sample's mean and standard deviation. Visual examination of the plot is used, and the kurtosis is analyzed to determine if the distribution of the data is near normal. If the data is near normal, a normal distribution model which adequately covers the set of drift data as observed is derived. This normal distribution will be used as the model for the drift of the device.

2. Sample counting is used to determine an acceptable normal distribution. The Standard Deviation of the group is computed. The number of times the samples are within two Standard Deviations of the mean is counted, and the count is divided by the total number of samples in the group to determine a percentage.

3. If the mean is small per the criteria in Section 4.9, it will not be considered when performing the coverage analysis. In this case the number of times the samples are within two Standard Deviations of zero is computed. This provides slightly more conservative results for the coverage analysis.
4. If the percentage of data within the two standard deviation tolerance is greater than 95.45%, the existing standard deviation is acceptable for use in the encompassing normal distribution model. However, if the percentage is less than required, the standard deviation of the model will be enlarged such that the percentage within two Standard Deviations is greater than 95.45%. The multiplier required on the standard deviation to provide this coverage is termed the Normality Adjustment Factor (NAF). If no adjustment is required, the NAF is equal to one (1).
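A sketch of the sample-counting portion of the coverage analysis follows. The 95.45% criterion and the zero-centering option are as described above; finding the NAF by enlarging the standard deviation in small steps is an illustrative search choice, not a prescribed method.

    import numpy as np

    def normality_adjustment_factor(data, small_mean=False, step=0.01):
        """Smallest multiplier on s such that at least 95.45% of the sample
        lies within two (adjusted) standard deviations of the center."""
        data = np.asarray(data, dtype=float)
        center = 0.0 if small_mean else data.mean()   # zero if mean is small (4.9)
        s = data.std(ddof=1)
        naf = 1.0
        while np.mean(np.abs(data - center) <= 2.0 * naf * s) < 0.9545:
            naf += step
        return naf   # 1.0 means the existing standard deviation already covers

The standard deviation of the encompassing normal distribution model is then NAF × s.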

4.8 Time-Dependency Analysis

The component drift calculated in the previous sections represents a predicted performance limit without any consideration of whether the drift may vary with the time between calibrations or with component age. This section discusses the importance of understanding the time-related performance and the impact of any time-dependency on an analysis. A time dependency analysis is important whenever the drift analysis results are intended to support an extension of calibration intervals.

4.8.1 Limitations of Time Dependency Analyses

A. The project documented in Reference 8.1.1 performed drift analyses for numerous components at several nuclear plants. The data evaluated did not demonstrate any significant time-dependent or age-dependent trends. Time dependency may have existed in all of the cases analyzed, but was insignificant in comparison to other uncertainty contributors. Because time dependency cannot be completely ruled out, there should be an ongoing evaluation to verify that component drift continues to meet expectations whenever calibration intervals are extended.

4.8.2 Drift Interval Plot

A. A drift interval plot is an XY scatter plot that shows the Final Data Set plotted against the time interval between tests for the data points. This plot method relies upon the human eye to discern any trend in the data that would indicate a time dependency. A prediction line can be added to this plot which shows a "least squares" fit of the data over time. This can provide visual evidence of an increasing or decreasing mean over time, considering all drift data. An increasing standard deviation is indicated by a trend towards increasing "scatter" over the increased calibration intervals.

4.8.3 Standard Deviations and Means at Different Calibration Intervals (Binning Analysis)

This analysis technique is the recommended method of determining time dependent tendencies in a given sample pool. The test consists simply of segregating the drift data into different groups (bins) corresponding to different ranges of calibration or surveillance intervals and comparing the standard deviations and means for the data in the various groups. The purpose of this type of analysis is to determine if the standard deviation or mean tends to become larger as the time interval between calibrations increases.

A. The data that is available will be placed in interval bins. The intervals that will normally be used will coincide with Technical Specification calibration intervals plus the allowed tolerance as follows:

1. 0 to 1.25 months (covers most weekly and monthly calibrations)
2. >1.25 to 3.75 months (covers most quarterly calibrations)
3. >3.75 to 7.50 months (covers most semi-annual calibrations)
4. >7.50 to 15.0 months (covers most annual calibrations)
5. >15.0 to 22.5 months (covers most old refuel cycle calibrations)
6. >22.5 to 30.0 months (covers most extended refuel cycle calibrations)
7. >30.0 months (covers missed and forced outage refueling cycle calibrations)

Data will naturally fall into these time interval bins based on the calibration requirements for the subject instrument loops. Only on occasion will a device be calibrated on a much longer or shorter interval than that of the rest of the population within its calibration requirement group. Therefore, the data will naturally separate into groups for analysis.

B. Different bin splits may be used, but must be evaluated for data coverage and acceptable data groupings.

C. For each bin, where there is data, the mean (average), standard deviation, average time interval and data count will be computed.


D. To determine if time dependency does or does not exist, the data needs to be distributed across multiple bins, with a sufficient population of data in each of two or more bins to consider the statistical results for those bins to be valid. Normally the minimum expected distribution that would allow evaluation is defined below:

1. A bin will be considered valid in the final analysis if it holds more than five data points and more than ten percent of the total data count.
2. At least two bins, including the bin with the most data, must be left for evaluation to occur.

The distribution percentages listed in these criteria are somewhat arbitrary, and thus engineering evaluation can modify them for a given evaluation.

The mean and standard deviations of the valid bins are plotted versus average time interval on a diagram. This diagram can give a good visual indication of whether or not the mean or standard deviation of a data set is increasing significantly over time interval between calibrations.

If multiple valid bins do NOT exist for a given data set, there is not enough diversity in the calibration intervals analyzed to make meaningful conclusions about time dependency from the existing data. Unless overwhelming evidence to the contrary exists in the scatter plot, the single bin data set will be established as moderately time dependent for the purposes of extrapolation of the drift value.

E. For evaluation of the binning method, the critical value of the F-distribution is compared to the ratio of the smallest and largest variances for the evaluated bins. If the ratio of variances exceeds the critical value, the drift uncertainty should be considered as moderately or strongly time dependent. If the ratio of variances does not exceed the critical value, the drift uncertainty may be considered as time independent.
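A sketch of this bin comparison, using SciPy's F distribution in place of Table 9.8 (the ppf call is equivalent to the Excel FINV usage described in Section 4.5.4):

    import numpy as np
    from scipy import stats

    def bins_indicate_time_dependency(bin_a, bin_b, alpha=0.05):
        """One-sided F-test on the ratio of the largest to smallest bin variance."""
        pairs = sorted([(np.var(bin_a, ddof=1), len(bin_a)),
                        (np.var(bin_b, ddof=1), len(bin_b))], reverse=True)
        (v_max, n_max), (v_min, n_min) = pairs
        f_calc = v_max / v_min
        f_crit = stats.f.ppf(1.0 - alpha, n_max - 1, n_min - 1)  # FINV(0.05, V1, V2)
        return f_calc > f_crit   # True -> treat as moderately/strongly time dependent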

4.8.4 Regression Analyses and Plots

A. Regression Analyses can often provide very valuable data for the determination of time dependency. A standard regression analysis within an EXCEL spreadsheet will plot the drift data versus time, with a prediction line showing the trend. It will also provide Analysis of Variance (ANOVA) table printouts, which contain information required for various numerical tests to determine the level of dependency between two parameters (time and drift value). Regression analyses are only to be performed if multiple valid bins are determined from the binning analysis.

B. Regression Analyses are to be performed on the Final Data Set drift values and on the Absolute Value of the Final Data Set drift values. The Final Data Set drift values show trends for the mean of the data set, and the Absolute Values show trends for the standard deviation over time.

C. Regression Plots

1. Drift Regression - an XY scatter plot that fits a line through the final drift data plotted against the time interval between tests for the data points using the "least squares" method to predict values for the given data set. The predicted line is plotted through the actual data for use in predicting drift over time. It is important to note that statistical outliers can have a dramatic effect upon the regression line.
2. Absolute Value Drift Regression - an XY scatter plot that fits a line through the Absolute Value of the final drift data plotted against the time interval between tests for the data points using the "least squares" method to predict values for the given data set. The predicted line is plotted through the actual data for use in predicting drift, in either direction, over time. It is important to note that statistical outliers can have a dramatic effect upon the regression line.

D. Regression Time Dependency Analytical Tests - Typical spreadsheet software includes capabilities to include ANOVA tables with regression analyses. ANOVA tables give various statistical information, which can allow certain numerical tests to be employed to search for time dependency of the drift data. For each of the two regressions (drift regression and absolute value drift regression), the following ANOVA parameters will be used to determine if time dependency of the drift data is evident. All tests listed should be evaluated, and if time dependency is indicated by any of the tests, the data should be considered as time dependent.

Note that these tests only support the indication of time dependency and not the indication of time independence (i.e., an R Squared value of less than 0.09 does not indicate that the drift is time independent).


1. R Squared Test - The R Squared value, printed out in the ANOVA table, is a relatively good indicator of time dependency. If the value is greater than 0.09, then it appears that the data does closely conform to a linear function and, therefore, should be considered time dependent.

2. P Value Test - A P Value for X Variable 1 (as indicated by the ANOVA table for an EXCEL spreadsheet) less than 0.05 is indicative of time dependency.
3. Significance of F Test - An ANOVA table F value greater than the critical F-table value (for a 0.05 probability, the number of data points for the regression, and two degrees of freedom for the numerator) would indicate a time dependency. In an EXCEL spreadsheet, the FINV function can be used to return critical values from the F distribution. To return the critical value of F, use the significance level (in this case 0.05 or 5%) as the probability argument to FINV, 2 as the numerator degrees of freedom, and the data count minus two as the denominator. If the F value in the ANOVA table exceeds the critical value of F, then the drift is considered time dependent.
4. For each of these tests, if time dependency is indicated, the plots should be observed to determine the reasonableness of the result. The tests above generally assess the possibility that the function of drift is linear over time, not necessarily that the function is significantly increasing over time. Time dependency can be indicated even when the plot shows the drift to remain approximately the same or decrease over time. Generally, a decreasing drift over time is not expected for instrumentation, nor is a case where the drift function crosses zero. Under these conditions, the extrapolation of the drift term would normally be established assuming no time dependency, if extrapolation of the results is required beyond the analyzed time intervals between calibrations.
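A sketch applying the three screens above to a single regression; per item B, it would be run on both the signed drift values and their absolute values. The critical F is taken with 2 numerator degrees of freedom, as the FINV guidance in item 3 specifies.

    import numpy as np
    from scipy import stats

    def regression_indicates_time_dependency(intervals, drift, alpha=0.05):
        """Apply the R Squared, P Value, and Significance of F screens."""
        res = stats.linregress(intervals, drift)
        n = len(drift)
        r_squared = res.rvalue ** 2
        p_value = res.pvalue                              # P value for X Variable 1
        f_stat = r_squared * (n - 2) / (1.0 - r_squared)  # regression ANOVA F
        f_crit = stats.f.ppf(1.0 - alpha, 2, n - 2)       # per the FINV guidance above
        return (r_squared > 0.09) or (p_value < alpha) or (f_stat > f_crit)

    # Run on both regressions (drift and absolute value of drift), e.g.:
    # time_dep = (regression_indicates_time_dependency(t, d)
    #             or regression_indicates_time_dependency(t, np.abs(d)))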

4.8.5 Age-Dependent Drift Considerations

Age-dependency is the tendency for a component's drift to increase in magnitude as the component ages. This can be assessed by plotting the As-Found value for each calibration minus the previous calibration As-Left value of each component over the period of time for which data is available. Random fluctuations around zero may obscure any age-dependent drift trends. By plotting the absolute values of the As-Found versus As-Left calibration data, the tendency for the magnitude of drift to increase with time can be assessed. This analysis is generally not performed as a part of a standard drift study, but can be used when establishing maintenance practices.

4.9 Drift Bias Determination

An absolute value of the mean of less than 0.1% of calibrated span is adequate to state that the instrument drift does not appear to have a bias, provided that the tolerance interval centered around the mean is significantly larger. The application of the bias must be carefully considered separately, so that the overall treatment of the analyzed drift remains conservative.

4.10 Time Dependent Drift Uncertainty

When calibration intervals are extended beyond the range for which historical data is available, the statistical confidence in the ability to predict drift is reduced. The bias and the random portions of the drift will be extrapolated separately, but in the same manner.

Where the analysis shows slight to moderate time dependency or time dependency is indeterminate, the formula below will be used.

ADE = AD × √(CIE / CIO)

where:

ADE - Drift bias or random term for the extended calibration interval
AD - Drift bias or random term calculated from the observed data
CIE - Extended calibration interval (surveillance interval + 25%)
CIO - Maximum observed calibration time interval within the observed data

This equation matches the adjustment of vendor drift for surveillance intervals contained in the GE methodology (Section 4.3.2 of Reference 8.3.1). The GE methodology is based on drift being random; therefore, the effect of one time period is independent of another and can be combined using the square root of the sum of the squares method. The calculated ADE will be verified to bound the 99%/95% tolerance level recommended in the EPRI report.

Where there is indication of a strong relationship between drift and time, the following formula may be used:

ADE = AD × (CIE / CIO)

This equation assumes that the drift from one time period may be dependent on the drift that occurred in the previous time period. Therefore, this equation is used to provide a larger analyzed drift value than would result from use of the GE methodology. The calculated ADE will be verified to bound the 99%/95% tolerance level recommended in the EPRI report.

Where it can be shown that there is no relationship between surveillance interval and drift, the drift value determined may be used for other time intervals without change. However, for conservatism, due to the uncertainty involved in extrapolation to time intervals outside of the analysis period, drift values that show minimal or no particular time dependency will generally be addressed by increasing the tolerance interval to the 99%/95% level.
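A sketch of the two extrapolation formulas above; the numeric inputs are illustrative (data observed at intervals up to 22.5 months, extended to a 24-month interval plus the 25% tolerance).

    import math

    def extrapolate_drift(ad, ci_observed, ci_extended, strong_time_dependency=False):
        """Extrapolate a drift bias or random term to an extended interval."""
        ratio = ci_extended / ci_observed
        if strong_time_dependency:
            return ad * ratio             # ADE = AD x (CIE / CIO)
        return ad * math.sqrt(ratio)      # ADE = AD x sqrt(CIE / CIO), GE method

    # Illustrative: a random drift term of 0.50% span, observed intervals up to
    # 22.5 months, extended to 30.0 months (24-month cycle + 25%).
    ad_e = extrapolate_drift(0.50, ci_observed=22.5, ci_extended=30.0)
    print(f"Extrapolated drift term: {ad_e:.3f} % span")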

4.11 Shelf Life Of Analysis Results

4.11.1 Any analysis result based on performance of existing components has a shelf life. In this case, the term shelf life is used to describe a period of time extending from the present into the future during which the analysis results are considered valid. Predictions for future performance are based upon our knowledge of past calibration performance. This approach assumes that changes in performance will occur slowly or not at all over time. For example, if evaluation of the last ten years of data shows the component/loop drift is stable with no observable trend, there is little reason to expect a dramatic change in performance during the next year. However, it is also difficult to claim that an analysis completed today is still a valid indicator of performance ten years from now. For this reason, the analysis results should be re-verified periodically.

4.11.2 Depending on the type of component/loop, the analysis results are also dependent on the method of calibration, the component/loop span, and the M&TE accuracy. Any of the following program or component/loop changes should be evaluated to determine if they affect the analysis results:

A. Changes to M&TE accuracy B. Changes to the component or loop (e.g. span, environment, manufacturer, model, etc.)

C. Calibration procedure changes that alter the calibration methodology

5.0 INSTRUCTIONS

5.1 Extended Surveillance Intervals

5.1.1 Drift analyses performed to support extended surveillance intervals should be stand-alone calculations prepared and controlled in accordance with the requirements of 4 AWI-05.01.25 (Reference 8.2.3).

5.1.2 Data for the drift analysis will be entered into Microsoft Excel spreadsheets grouped by manufacturer and model number. All data may also be entered into the IPASS software program. Analysis may be performed using both IPASS and EXCEL spreadsheets. Because the IPASS analyses are embedded in the software, it is not possible to follow each specific analysis step. The discussion provided in this section is to assist in setting up an EXCEL spreadsheet and performing the independent analysis (Reference 8.3.5). For IPASS analysis, see the IPASS User's Manual (Reference 8.3.2).

5.1.3 Microsoft Excel spreadsheets generally compute values to approximately 15 digits of precision, which is well beyond any required rounding for engineering analyses. However, for printing and display purposes, most values are displayed at lesser resolution. It is possible that hand computations will produce slightly different results because of using rounded numbers in initial and intermediate steps, but the Excel computed values are considered highly accurate in comparison. Values with significant differences between the original computations and the computations of the independent verifier will be investigated to ensure that the Excel spreadsheet is properly computing the required values.

5.2 Verification of Drift Assumptions

5.2.1 Drift analysis performed to verify the drift assumptions in a setpoint calculation may be either a stand-alone calculation or an attachment to a setpoint calculation performed following the guidelines of Reference 8.2.2. Since a time dependency analysis is generally not required for this analysis, an IPASS analysis will usually be sufficient.

5.3 Populating The Spreadsheet

5.3.1 The component group to be analyzed (e.g., all Rosemount Trip Units) is determined. The Responsible Engineer should determine the possible sub-groups within the large groupings which, from an engineering perspective, might show different drift characteristics and therefore may warrant separation into smaller groups. This would entail looking at the manufacturer, model, calibration span, setpoints, time intervals, specifications, locations, environment, etc., as necessary.

5.3.2 Develop a list of component numbers, manufacturers, models, component types, brief descriptions, surveillance tests, calibration procedures and calibration information (spans, setpoints, etc.).

5.3.3 Determine the data to be collected, following the guidance of Sections 4.4 through 4.6 of this Design Guide.

5.3.4 Identify, locate and collect data for the component group to be analyzed.

5.3.5 Sort the data by surveillance test or calibration procedure if more than one test/procedure is involved.

5.3.6 Sequentially sort the surveillance or calibration sheets, descending by date, starting with the most recent date.

5.3.7 Enter the Date, As-Found, and As-Left values on the appropriate data entry sheet.

5.3.8 Review the notes on each calibration data sheet to identify possible grounds for excluding data. The notes should be condensed and entered onto the Excel spreadsheet for the applicable calibration points. Where appropriate and obvious, the Responsible Engineer should remove data that is invalid for calculating drift for the device.

The reasons for excluding or correcting invalid data should be categorized following the categories in Attachment 10.1.

5.3.9 Calculate the time interval for each drift point by taking the difference between the current calibration date and the previous calibration date. The time interval is converted to months using 30.5 days per month. (If the data is not valid for either the As-Left or As-Found calibration information, then the value will not be computed for this data point.)

5.3.10 Calculate the Drift value by taking the difference between the current calibration As-Found value and the previous calibration As-Left value for each calibration check point. (If the data is not valid for either the As-Left or As-Found calibration information, then the value will not be computed for this data point.)
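The interval and drift computations of Steps 5.3.9 and 5.3.10 can be checked outside of Excel. The following is a minimal Python sketch; the dates and values are hypothetical, for illustration only:

    from datetime import date

    # Hypothetical calibration history for one calibration check point:
    # (calibration date, As-Found value, As-Left value), in percent of span.
    history = [
        (date(1998, 3, 14), 49.98, 50.00),
        (date(1999, 9, 2), 50.05, 50.01),
        (date(2001, 3, 20), 49.94, 50.00),
    ]

    for previous, current in zip(history, history[1:]):
        prev_date, _prev_as_found, prev_as_left = previous
        curr_date, curr_as_found, _curr_as_left = current
        # Step 5.3.9: interval in months, using 30.5 days per month.
        interval_months = (curr_date - prev_date).days / 30.5
        # Step 5.3.10: drift = current As-Found minus previous As-Left.
        drift = curr_as_found - prev_as_left
        print(f"{interval_months:5.1f} months: drift = {drift:+.3f} % span")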


5.4 Spreadsheet Performance Of Basic Statistics

5.4.1 Separate data columns are created for each calibration point within the calibrated span of the device. The percent span of each calibration point should closely match from device to device within a given analysis. Basic statistics include, at a minimum, the number of data points in the sample, the average drift, the standard deviation of the drift, and the minimum and maximum drift values contained in each data column. This section provides the specific details for using Microsoft Excel. Other spreadsheet, statistical, or math programs that are similar in function are acceptable for performing the data analysis, provided all analysis requirements are met.

5.4.2 Determine the average for the data points contained in each column for each initial group by using the "AVERAGE" function. Example cell format = AVERAGE (C2:C133). The Average function returns the average of the data contained within the range of cells C2 through C133. This average is also known as the mean of the data.

5.4.3 Determine the standard deviation for the data points contained in each column for each initial group by using the "STDEV" function. Example cell format = STDEV (C2:C133). The Standard Deviation function returns the measure of how widely values are dispersed from the mean of the data contained within the range of cells C2 through C133.

Formula used by Microsoft Excel to determine the standard deviation:

A. STDEV (Standard Deviation of the sample population):

$s = \sqrt{\dfrac{n \sum x^{2} - \left( \sum x \right)^{2}}{n(n-1)}}$

where:

x  =  sample data values (x1, x2, x3, ...)
s  =  standard deviation of all sample data points
n  =  total number of data points

5.4.4 Determine the variance for the data points contained in each column for each initial group by using the "VAR" function. Example cell format = VAR (C2:C133). The Variance function returns the measure of how widely values are dispersed from the mean of the data contained within the range of cells C2 through C133. Formula used by Microsoft Excel to determine the variance:


A. VAR (Variance of the sample population):

$s^{2} = \dfrac{n \sum x^{2} - \left( \sum x \right)^{2}}{n(n-1)}$

where:

x  =  sample data values (x1, x2, x3, ...)
s^2  =  variance of the sample population
n  =  total number of data points

5.4.5 Determine the largest positive drift value for the data points contained in each column for each initial group by using the "MAX" function. Example cell format = MAX (C2:C133). The Maximum function returns the largest value of the cells contained within the range of cells C2 through C133.

5.4.6 Determine the largest negative drift value for the data points contained in each column for each initial group by using the "MIN" function. Example cell format = MIN (C2:C133). The Minimum function returns the smallest value of the cells contained within the range of cells C2 through C133.

5.4.7 Determine the number of data points contained in each column for each initial group by using the "COUNT" function. Example cell format = COUNT (C2:C133). The Count function returns the number of all populated cells within the range of cells C2 through C133.
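For independent verification, the basic statistics of Steps 5.4.2 through 5.4.7 can be reproduced with the Python standard library; statistics.stdev and statistics.variance use the same (n - 1) sample formulas as the Excel STDEV and VAR functions. The drift values below are hypothetical:

    import statistics

    # Hypothetical drift values (% span) for one calibration point column.
    drift = [0.12, -0.05, 0.08, -0.21, 0.03, 0.17, -0.09, 0.02]

    n = len(drift)                    # COUNT
    mean = statistics.mean(drift)     # AVERAGE
    s = statistics.stdev(drift)       # STDEV (sample standard deviation, n - 1)
    var = statistics.variance(drift)  # VAR (sample variance, n - 1)
    largest = max(drift)              # MAX (largest positive drift)
    smallest = min(drift)             # MIN (largest negative drift)

    print(f"n = {n}, mean = {mean:+.4f}, s = {s:.4f}, s^2 = {var:.5f}")
    print(f"max = {largest:+.2f}, min = {smallest:+.2f}")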

5.4.8 Where sub-groups that may warrant separation for engineering reasons have been combined into a single data set, analyze the statistics and component data of the sub-groups to determine whether the combination is acceptable.

A. Perform a t-Test in accordance with Step 4.5.4 on each possible sub-group combination to test for the acceptability of combining the data. Acceptability for combining the data is indicated when the absolute value of the Test Statistic (t Stat) is less than the t Critical two-tail value. Example: the t Stat for combining sub-groups A and B might be 0.703, which is larger than a t Critical two-tail value of 0.485, indicating that the combination is not acceptable. However, as part of this process, the Responsible Engineer should ensure that an indication of unacceptability does not mask time dependency. In other words, if the only difference between the groupings is the calibration interval, the differences in the data characteristics could exist because of time dependent drift. If this is the only difference, the data should be combined, even though the tests show that it may not be appropriate.


B. Perform an F-distribution test in accordance with Step 4.5.4 to test for the acceptability of combining the data. Acceptability for combining the data is indicated when the ratio of variances (Fcalc) is less than the critical value of the F-distribution (Fcrit). The same cautions on the use of the t-Test apply to the F-distribution test.
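A minimal sketch of the two combinability tests, using SciPy with hypothetical sub-group data (the group values and sizes are assumptions for illustration):

    import numpy as np
    from scipy import stats

    # Hypothetical drift samples (% span) from two candidate sub-groups.
    group_a = np.array([0.10, -0.04, 0.07, 0.02, -0.11, 0.05, 0.09, -0.02])
    group_b = np.array([0.21, 0.03, -0.15, 0.12, 0.08, -0.06, 0.18, 0.01])

    # Step 5.4.8.A: two-sample t-test on the means (equal variances assumed).
    t_stat, p_value = stats.ttest_ind(group_a, group_b)
    print(f"|t Stat| = {abs(t_stat):.3f} (p = {p_value:.3f}); "
          "combine if |t Stat| < t Critical two-tail")

    # Step 5.4.8.B: F-test on the variances, largest variance in the numerator.
    if group_a.var(ddof=1) >= group_b.var(ddof=1):
        big, small = group_a, group_b
    else:
        big, small = group_b, group_a
    f_calc = big.var(ddof=1) / small.var(ddof=1)
    f_crit = stats.f.ppf(0.95, len(big) - 1, len(small) - 1)  # Excel FINV(0.05, ...)
    print(f"Fcalc = {f_calc:.3f} vs Fcrit = {f_crit:.3f}; combine if Fcalc < Fcrit")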

5.5 Outlier Detection And Expulsion

5.5.1 A drift trend plot should be developed for each instrument in the group by plotting the drift value versus calibration date. Bounds corresponding to ±2 Sigma (2 Standard Deviations) should be included on the plot. Drift values outside the ±2 Sigma bounds should be evaluated as possible erroneous data. The reasons for excluding or correcting invalid data should be categorized following the categories in Attachment 10.1.

5.5.2 Obtain the Critical Values for the T-Test from Table 9.2, based on the sample size of the data contained within the specified range of cells. Use the COUNT value to determine the sample size.

5.5.3 Perform the outlier test for all the samples at each calibration point. For any values that show up as outliers, analyze the initial input data to determine if the data is erroneous. If so, remove or correct the data in the earlier pages of the spreadsheet, and re-run all of the analysis up to this point. Continue this process until all erroneous data has been removed. The reasons for excluding or correcting invalid data should be categorized following the categories in Attachment 10.1.
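A minimal sketch of the outlier test of Step 5.5.3, using a hypothetical sample and the Table 9.2 critical value for its sample size; the test statistic is the largest deviation from the mean divided by the sample standard deviation:

    import statistics

    # Hypothetical drift sample (% span); 0.95 is a suspected outlier.
    drift = [0.04, -0.08, 0.11, 0.02, -0.05, 0.95, 0.07, -0.03, 0.01, 0.06]
    T_CRIT = 2.18  # Table 9.2 critical value for a sample size of 10

    mean = statistics.mean(drift)
    s = statistics.stdev(drift)

    # T statistic for the point farthest from the mean.
    worst = max(drift, key=lambda x: abs(x - mean))
    t_stat = abs(worst - mean) / s

    if t_stat > T_CRIT:
        print(f"{worst:+.2f} is an outlier (T = {t_stat:.2f} > {T_CRIT})")
    else:
        print(f"No outlier detected (T = {t_stat:.2f} <= {T_CRIT})")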

5.5.4 If any outliers are still displayed, the worst case outlier may be removed as a statistical outlier per Step 4.6.4 above, if appropriate.

5.5.5 Recalculate the Average, Standard Deviation, largest positive drift, largest negative drift, and Count for each calibration point after the removal of any outliers.

5.6 Normality Tests

5.6.1 To test for normality of the data set, the first step is to perform the required hypothesis testing. For data sets with 50 or more data points, the hypothesis testing can be done with either the Chi-Square Test (Step 4.7.5.A.) or the D' Test (Step 4.7.5.C.). If the data set has fewer than 50 data points, the W Test (Step 4.7.5.B.) or Chi-Square Test may be used. The Chi-Square Test should generally be performed with 12 bins of data, starting from [-∞ to (mean - 2.5σ)], with bin increments of 0.5σ, ending at [(mean + 2.5σ) to +∞]. (Since the same bins are to be used for the histogram in the coverage analysis, the work for these two tasks may be combined.) If the data passes either of the tests, only the passed test need be shown in the spreadsheet. However, if the assumption of normality is rejected by both of the hypothesis tests, the results of both tests should be presented.
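A sketch of the 12-bin Chi-Square test described above, implemented in Python with synthetic data; the sample, the 5% acceptance criterion, and the degrees-of-freedom choice (bins minus one, minus two fitted parameters) are assumptions for illustration, and the expected bin probabilities are taken from the 12-bin column of Table 9.3:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    drift = rng.normal(0.0, 0.1, 120)  # synthetic drift sample (% span)
    mean, s = drift.mean(), drift.std(ddof=1)

    # 12 bin edges in sigma units: -inf, -2.5, -2.0, ..., +2.5, +inf.
    sigma_edges = np.concatenate(([-np.inf], np.arange(-2.5, 2.6, 0.5), [np.inf]))
    observed, _ = np.histogram(drift, bins=mean + sigma_edges * s)

    # Expected bin probabilities for a normal distribution (Table 9.3, 12 bins).
    p = np.array([0.621, 1.659, 4.400, 9.190, 14.980, 19.150,
                  19.150, 14.980, 9.190, 4.400, 1.659, 0.621]) / 100.0
    expected = p * len(drift)

    chi_square = ((observed - expected) ** 2 / expected).sum()
    dof = len(observed) - 3  # bins minus 1, minus 2 fitted parameters (mean, s)
    chi_crit = stats.chi2.ppf(0.95, dof)
    verdict = "not rejected" if chi_square <= chi_crit else "rejected"
    print(f"chi-square = {chi_square:.2f} vs critical {chi_crit:.2f}: normality {verdict}")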

5.6.2 If the assumption of normality is rejected by both tests, then a coverage analysis should be performed as described in Section 4.7.5.E. As explained above for the Chi-Square test, the coverage analysis and histogram will be established with a 12 bin approach unless inappropriate for the application.

5.6.3 If an adjustment to the standard deviation is required to provide a normal distribution that adequately covers the data set, then the required multiplier to the standard deviation (the Normality Adjustment Factor (NAF)) will be determined iteratively in the coverage analysis. This multiplier will produce a normal distribution model for the drift that shows adequate data population from the data set within the ±2σ band of the model.

5.6.4 The Chi-Square Test and coverage analysis should be shown for the original data set and for the data set with the outlier removed.

5.6.5 Probability Plots (Step 4.7.5.D.) may be used if the numerical methods show that the data is not normally distributed.
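As a cross-check of the numerical hypothesis tests in Step 5.6.1, the W Test for a small data set can be reproduced with SciPy's Shapiro-Wilk routine, which computes the same W statistic; the sample below is synthetic, and the critical W of 0.940 is the Table 9.6 value for n = 40:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)
    drift = rng.normal(0.0, 0.1, 40)  # synthetic drift sample, n < 50

    w_stat, p_value = stats.shapiro(drift)
    W_CRIT = 0.940  # Table 9.6 critical W for n = 40
    verdict = "not rejected" if w_stat >= W_CRIT else "rejected"
    print(f"W = {w_stat:.3f} (p = {p_value:.3f}): normality {verdict}")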

5.7 Selection of Final Data Set

5.7.1 For transmitters, or other devices with multiple calibration points, the general process will be to use the calibration point with the worst case drift values. Care must be taken in selecting the final data set, since a data set that has a high bias (mean) with a lower standard deviation may result in a less conservative setpoint than a data set that has a lower bias with a high standard deviation. The point(s) of interest and the direction of the setpoint (increasing or decreasing) should be considered when evaluating the data set for selecting bounding limits.

5.7.2 The following method is used to evaluate the calibration points:

A. Determine the 95%/95% tolerance interval for each calibration point (see the sketch following this list):

$TI = s \times TIF \times NAF$

where:

TI  =  Tolerance Interval
s  =  drift standard deviation calculated from the observed data
TIF  =  95%/95% Tolerance Interval Factor from Table 9.1
NAF  =  Normality Adjustment Factor from the Coverage Analysis (NAF = 1 if no coverage analysis was performed)

Page 35 of 59

B. Plot the tolerance interval as a function of calibration point. The Calibration Point Drift plot visually shows the amount of drift exhibited by the group of devices at the different calibration points.

C. If the points show a significant average (greater than 0.1% of span), plot the average as a function of calibration point.

D. A data set with bounding statistics is selected to ensure the most conservative setpoint results.

E. Provide plots for the original data set and the data set with the outlier removed.
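As referenced in item A above, a minimal sketch of the tolerance interval computation per calibration point; the per-point standard deviations and the sample size are hypothetical, and the TIF shown is the Table 9.1 95%/95% factor for n = 110:

    # Hypothetical drift standard deviations (% span) per calibration point,
    # all with a sample size of about 110 (Table 9.1: TIF 95%/95% = 2.218).
    points = {0: 0.085, 25: 0.091, 50: 0.102, 75: 0.097, 100: 0.088}
    TIF = 2.218
    NAF = 1.0  # Normality Adjustment Factor; 1 if no coverage analysis performed

    for pct_span, s in points.items():
        ti = s * TIF * NAF  # TI = s x TIF x NAF (Step 5.7.2.A)
        print(f"{pct_span:3d}% span: TI = +/-{ti:.3f} % span")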

5.8 Time Dependency Testing

5.8.1 Drift Interval Plot

A. A scatter plot is created on a new worksheet of the spreadsheet entitled "Scatter Plot" or "Drift Interval Plot". The chart function of Excel is used to chart the data, with the x-axis being the calibration interval and the y-axis being the drift value. The prediction line should be added to the chart, along with the equation of the prediction line.

B. The Tolerance Interval calculated above should be added to the Drift Interval Plot as a plus/minus band centered around zero. If a significant average was determined (greater than 0.1% of span), the average should also be plotted, with the Tolerance Interval centered around the average.

C. This plot provides a visual indication of the trend of the mean and, less distinctly, of any increases in the scatter of the data over time. Plotting the Tolerance Interval provides visual verification that an acceptable number of the data points are bounded by the Tolerance Interval.

D. Once the extended Analyzed Drift value is determined, it should be reflected on the Drift Interval Plot similar to the Tolerance Interval.

5.8.2 Binning Analysis

A. The binning analysis is performed on a separate worksheet of the Excel spreadsheet. The Final Data Set is copied onto the worksheet and then split by bins into the time intervals as defined in Section 4.8.3.A. The standard deviation, mean, average time interval, and count of the data in each time bin are calculated.


Similar equation methods are used here as described in Section 5.4 above when characterizing the drift data set. The validity of the bins is evaluated based on population per the criteria of Section 4.8.3.D. If multiple valid bins are not established, the data will be considered moderately time dependent.

B. If multiple bins are established, the standard deviations, means, and average time intervals are tabulated and a plot is generated to show the variation of the bin averages and standard deviations versus average time interval. This plot can be used to establish whether standard deviations and means are significantly increasing over time between calibrations.

C. If the plot shows an increase in standard deviation over time, compare the ratio of the largest and smallest variances for the required bins to the critical value of the F-distribution:

$F_{calc} = \dfrac{S_{1}^{2}}{S_{2}^{2}}$

where:

S1  =  largest drift standard deviation value
S2  =  smallest drift standard deviation value

The critical value of the F-distribution can be found using Table 9.8 with:

V1  =  number of samples minus 1 in the bin with the largest standard deviation
V2  =  number of samples minus 1 in the bin with the smallest standard deviation

The critical value of the F-distribution can also be found using the FINV function in Microsoft Excel:

$F_{crit} = FINV(0.05, V_{1}, V_{2})$

D. If the Fcalc value is less than the Fcrit value, the standard deviations of the drift uncertainty for the two bins are not significantly different, which is not indicative of time dependent behavior. The drift uncertainty may be treated as time independent.


E. If the Fcalc value is greater than the Fcrit value, the standard deviations of the drift uncertainty for the two bins appear to be different, which is indicative of time-dependent behavior. At a minimum, the drift uncertainty will be treated as moderately time dependent.

F. If the plots tend to indicate significant increases in either the mean or standard deviation over time, those parameters should be judged to be strongly time dependent.
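A minimal sketch of the binning F-test of items C through E, using synthetic data for two time bins (the bin sizes, standard deviations, and seed are assumptions for illustration):

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    # Synthetic drift data for two calibration-interval bins (Section 5.8.2).
    bin_short = rng.normal(0.0, 0.08, 45)  # e.g., intervals near 18 months
    bin_long = rng.normal(0.0, 0.11, 38)   # e.g., intervals near 24 months

    # Largest variance in the numerator: Fcalc = S1^2 / S2^2.
    if bin_long.std(ddof=1) >= bin_short.std(ddof=1):
        big, small = bin_long, bin_short
    else:
        big, small = bin_short, bin_long
    f_calc = big.var(ddof=1) / small.var(ddof=1)
    f_crit = stats.f.ppf(0.95, len(big) - 1, len(small) - 1)  # FINV(0.05, V1, V2)

    if f_calc < f_crit:
        print(f"Fcalc {f_calc:.2f} < Fcrit {f_crit:.2f}: may treat as time independent")
    else:
        print(f"Fcalc {f_calc:.2f} >= Fcrit {f_crit:.2f}: at least moderately time dependent")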

5.8.3 Regression Analyses

A. The regression analyses are performed in accordance with the requirements of Section 4.8.4, provided that multiple valid time bins were established in the binning analysis. The Final Data Set should be set up with the blank lines removed. For the Absolute Value Regression, a third column should be created which takes the absolute value of the drift column.

B. For each of the two Regression Analyses, use the following steps to produce the regression analysis output. Using the "Data Analysis" package under "Tools" in Microsoft Excel, choose the Regression option. The Y range will be established as the Drift (or Absolute Value of Drift) data range, and the X range should be the calibration time intervals. The output range should be established on a new worksheet for each analysis. The option for the residuals should be set to "Line Fit Plots". The regression computation should then be performed. The output of the regression routine will be a list of residuals, an ANOVA table listing, and a plot of the Drift (or Absolute Value of Drift) versus the Time Interval Between Calibrations. A prediction line will be included on the plot. Add a cell close to the ANOVA table listing which establishes the Critical Value of F, using the guidance of Section 4.8.4 for the Significance of F Test. This will utilize the FINV function of Microsoft Excel.

C. Analyze the results in the Drift Regression ANOVA table for R Square, P Value, and F Value, using the guidance of Section 4.8.4. If any of these analytical means shows time dependency in the Drift Regression, and the slope of the prediction line significantly increases over time from an initially positive value (or decreases over time from an initially negative value) without crossing zero within the time interval of the regression analysis, the mean of the data set should be established as strongly time dependent. This increase can also be validated by observing the results of the binning analysis plot for the mean of the bins, and by observing the scatter plot prediction line.


D. Analyze the results in the Absolute Value of Drift Regression ANOVA table for R Square, P Value, and F Value, using the guidance of Section 4.8.4. If any of these analytical means shows time dependency and the slope of the prediction line significantly increases over time, the standard deviation of the data set should be established as strongly time dependent. This increase can also be validated by observing the results of the binning analysis plot for the standard deviation of the bins, and by observing any discernible increases in data scatter as time increases on the scatter plot.

E. Regardless of the results of the analytical regression tests, if the plots tend to indicate significant increases in either the mean or standard deviation over time, those parameters should be judged to be strongly time dependent. Otherwise, for conservatism, the data will always be considered to be moderately time dependent if extrapolation of the data is necessary to accommodate the uncertainty involved in the extrapolation process.
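The two regression analyses can be checked independently of the Excel Data Analysis package. The sketch below uses SciPy's linregress on synthetic data (the seed, sample size, and injected slope are assumptions) and derives the F value for the Significance of F Test from R Square, as F = (R²/(1 − R²))·(n − 2) for a simple linear regression with 1 and n − 2 degrees of freedom:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2)
    interval = rng.uniform(12.0, 30.0, 80)               # months between calibrations
    drift = rng.normal(0.0, 0.1, 80) + 0.002 * interval  # synthetic drift data

    def summarize(label, x, y):
        res = stats.linregress(x, y)
        n = len(x)
        r_square = res.rvalue ** 2
        # Significance of F for a simple linear regression (1 and n-2 dof).
        f_value = (r_square / (1.0 - r_square)) * (n - 2)
        f_crit = stats.f.ppf(0.95, 1, n - 2)
        print(f"{label}: slope = {res.slope:+.5f}, R Square = {r_square:.3f}, "
              f"P = {res.pvalue:.3f}, F = {f_value:.2f}, Fcrit = {f_crit:.2f}")

    summarize("Drift regression (mean)", interval, drift)
    summarize("Absolute value regression (spread)", interval, np.abs(drift))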

5.9 Drift Bias Determination

5.9.1 If the mean of the Final Data Set is significant per the criteria in Section 4.9, the average is treated as a bias to the drift term.

5.10 Calculate The Analyzed Drift Value

5.10.1 Determine the required time interval for which the value must be computed. Technical Specifications allow time intervals between tests to be extended by up to 25% of the surveillance interval. Therefore, the analyzed drift value is determined for the required calibration interval plus 25%.

5.10.2 Bias Term

If the mean of the Final Data Set is significant per the criteria in Section 4.9, a bias term will be considered. If no extrapolation is necessary, the bias term will be set equal to the mean of the Final Data Set. Extrapolation of this term will be performed by one of two methods, as determined by the degree of time dependency established in the time dependency analysis. If the mean is determined to be strongly time dependent, the following equation will be used to extrapolate the value in a linear fashion:

$AD_{E,bias} = AD_{bias} \times \dfrac{CI_{E}}{CI_{O}}$

where:

AD(E,bias)  =  drift bias term for the extended calibration interval
AD(bias)  =  drift bias (mean) calculated from the observed data
CI(E)  =  extended calibration interval (surveillance interval + 25%)
CI(O)  =  average observed calibration time interval from the bin with the longest time interval

If the mean is determined to be moderately time dependent, the following equation will be used to extrapolate the mean. (Note that because of the uncertainty in defining a drift value beyond analysis limits, this equation will also generally be used for cases where no time dependency is evident.)

$AD_{E,bias} = AD_{bias} \times \sqrt{\dfrac{CI_{E}}{CI_{O}}}$

5.10.3 Random Term A. The random portion of the Analyzed Drift is calculated by multiplying the standard deviation of the Final Data Set by the Tolerance Interval Factor (TIF) for the sample size and by the Normality Adjustment Factor (NAF), if required from the Coverage Analysis, and extrapolating the final result in a fashion similar to the methods shown above for the bias term.

B. Obtain the appropriate Tolerance Interval Factor for the size of the sample set from Table 9.1.

$AD_{random} = s \times TIF \times NAF$

where:

s  =  drift standard deviation calculated from the observed data
TIF  =  95%/95% Tolerance Interval Factor from Table 9.1
NAF  =  Normality Adjustment Factor from the Coverage Analysis

5.10.4 30-Month Predicted Drift (Random Term)

A. If the drift uncertainty was not shown to be time-dependent, the drift uncertainty for the extended calibration interval is determined by increasing the tolerance factor to the 99%/95% level:

$AD_{E,random} = AD_{random} \times \dfrac{TIF_{99/95}}{TIF_{95/95}}$

where:

AD(E,random)  =  random drift term for the extended calibration interval
AD(random)  =  random drift term calculated from the observed data
TIF(99/95)  =  99%/95% Tolerance Interval Factor from Table 9.1
TIF(95/95)  =  95%/95% Tolerance Interval Factor from Table 9.1

B. If the drift was determined to be moderately time dependent, the following equation should be used to extrapolate the drift uncertainty:

$AD_{E,random} = AD_{random} \times \sqrt{\dfrac{CI_{E}}{CI_{O}}}$

where:

CI(E)  =  extended calibration interval (surveillance interval + 25%)
CI(O)  =  average observed calibration time interval from the bin with the longest time interval

A check should be made to ensure that the obtained drift uncertainty is greater than the uncertainty calculated with the 99%/95% tolerance factor. The larger of the two values should be used.

C. If the drift is determined to be strongly time dependent, the following equation will be used to extrapolate the value in a linear fashion:

$AD_{E,random} = AD_{random} \times \dfrac{CI_{E}}{CI_{O}}$

A check should be made to ensure that the obtained drift uncertainty is greater than the uncertainty calculated with the 99%/95% tolerance factor. The larger of the two values should be used.
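A worked sketch of Steps 5.10.1 through 5.10.4 with hypothetical statistics; the moderate-dependency extrapolation uses the square-root time ratio shown in the equations above, and the 99%/95% floor check of items B and C is applied:

    import math

    # Hypothetical Final Data Set statistics and intervals.
    mean_bias = 0.05   # significant drift mean (% span) per Section 4.9
    s = 0.10           # drift standard deviation (% span)
    NAF = 1.0
    TIF_95_95, TIF_99_95 = 2.218, 2.333  # Table 9.1 factors for n = 110
    ci_obs = 20.0      # average observed interval, longest-interval bin (months)
    ci_ext = 30.0      # 24-month calibration interval plus 25% (Step 5.10.1)

    ad_random = s * TIF_95_95 * NAF  # Step 5.10.3

    def extended_terms(dependency):
        floor = ad_random * TIF_99_95 / TIF_95_95  # Step 5.10.4.A value
        if dependency == "none":
            ad_rand_e = floor
            bias_factor = math.sqrt(ci_ext / ci_obs)  # per the note in Step 5.10.2
        elif dependency == "moderate":
            ad_rand_e = max(ad_random * math.sqrt(ci_ext / ci_obs), floor)
            bias_factor = math.sqrt(ci_ext / ci_obs)
        else:  # "strong"
            ad_rand_e = max(ad_random * (ci_ext / ci_obs), floor)
            bias_factor = ci_ext / ci_obs
        return ad_rand_e, mean_bias * bias_factor

    for dep in ("none", "moderate", "strong"):
        ad_rand_e, ad_bias_e = extended_terms(dep)
        print(f"{dep:>8}: AD random = {ad_rand_e:.3f} % span, "
              f"AD bias = {ad_bias_e:+.3f} % span")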

6.0 CALCULATIONS

6.1 Drift Studies

The Drift Studies should be performed in accordance with the methodology described above and the requirements of Reference 8.2.3. The following items are to be addressed in the calculation.


6.1.1 State, at a minimum, that the objective of the calculation is to document the drift analysis results for the component group and to extrapolate the drift value to the required calibration period (if applicable).

6.1.2 Provide a list for the group of all pertinent information in tabular form (e.g., Tag Numbers, Manufacturer, Model Numbers, ranges, and calibration spans).

6.1.3 Describe any limitations on the application of the results. For instance, if the analysis only applies to a certain range code, the objective will state this fact.

6.1.4 The method of solution will describe, at a minimum, a summary of the methodology used to perform the drift analysis outlined by this Design Guide. Exceptions taken to this instruction will be identified, including basis and references for exceptions.

6.1.5 The actual calculation/analysis will provide:

A. A listing of data which was removed, and the justification for doing so.

B. A narrative discussion of the specific activities performed for this calculation.

C. Results and conclusions, including:

1. Manufacturer and model number analyzed
2. Bias and random Analyzed Drift values, as applicable
3. The applicable Tolerance Interval Factors (provide detailed discussion and justification if other than 95%/95%)
4. Applicable drift time interval for application
5. Normality conclusion
6. Statement of time dependency observed, as applicable
7. Limitations on the use of this value in application to uncertainty calculations, as applicable
8. Limitations on the application of the results to similar instruments, as applicable

D. Attachments, including the following information:

1. Input data with notes on removal and validity
2. Computation of drift data and calibration time intervals
3. Outlier summary, including Final Data Set and basic statistical summaries
4. Chi-Square Test Results (If Applicable)
5. W Test or D Test Results (If Applicable)
6. Coverage analysis, including histogram, percentages in the required sigma bands, and Normality Adjustment Factor (if applicable)
7. Scatter Plot with prediction line and equation
8. Binning Analysis Summaries for Bins and Plots (as applicable)
9. Derivation of the Analyzed Drift values, with a summary of conclusions

6.2 Use of Analyzed Drift Value in Setpoint/Uncertainty Calculations

6.2.1 To apply the results of the drift analyses to a specific device or loop, a setpoint/uncertainty calculation will be performed, revised, or evaluated in accordance with References 8.2.3 and 8.2.2, as appropriate. Per Section 4.3.5 above, the Analyzed Drift term characterizes the Vendor Accuracy (VA), M&TE (or calibration error), and drift error terms for the analyzed device, loop, or function. To save time, these terms (or a subset of them) in an existing setpoint calculation may be compared to the Analyzed Drift term. If the terms within the existing calculation bound the Analyzed Drift term, then the existing calculation is conservative as is and does not specifically require revision. If revision to the calculation is necessary, the Analyzed Drift term may be incorporated into the calculation, setting the Vendor Accuracy, M&TE (or calibration error), and drift terms for the analyzed devices to zero.

Only the Vendor Drift and Drift Temperature Effect terms may be replaced with the analyzed drift value for the Technical Specification calculations performed per the GE setpoint methodology.


6.2.2 When comparing the results to setpoint calculations which have more than one device in the instrument loop which has been analyzed for drift, comparisons can be made between the AD terms and the original terms on a device-by-device basis, or on a total loop basis. Care should be taken to properly combine terms for comparison in accordance with Reference 8.2.2, as appropriate.

6.2.3 When applying the drift study results of bistables or switches to a setpoint calculation, the preparer should fully understand the directionality of any bias terms within AD and apply the bias terms accordingly. See the guidelines within Reference 8.2.2 for working with bias terms.

7.0 DEFINITIONS

As-Found (Ref. 8.1.1): The condition in which a channel, or portion of a channel, is found after a period of operation and prior to any calibration.

As-Left (Ref. 8.1.1): The condition in which a channel, or portion of a channel, is left after a calibration or surveillance check.

Kurtosis (Ref. 8.1.1): A characterization of the relative peakedness or flatness of a distribution compared to a normal distribution. A large kurtosis indicates a relatively peaked distribution and a small kurtosis indicates a relatively flat distribution.

8.0 REFERENCES

8.1 Industry Standards Documents

8.1.1 "Guidelines for Instrument Calibration Extension/Reduction - Revision 1: Statistical Analysis of Instrument Calibration Data," EPRI, Palo Alto, CA: 1998. TR-103335-R1.

8.1.2 GE NEDC-31336P-A, "General Electric Instrument Setpoint Methodology," September 1996.

8.1.3 NRC Generic Letter 91-04, "Changes in Technical Specification Surveillance Intervals to Accommodate a 24-Month Fuel Cycle."

8.1.4 ANSI/ASTM E178-02, "Standard Practice for Dealing With Outlying Observations."

8.1.5 ANSI N15.15-1974, "Assessment of the Assumption of Normality (Employing Individual Observed Values)."


8.1.6 Status Report on the Staff Review of EPRI Technical Report TR-103335 "Guidelines for Instrument Calibration Extension/Reduction Programs," Dated March 1994.

8.2 NMC Documents

8.2.1 ESM-03.02, "Design Requirements, Practices & Topics (Instrumentation & Controls)."

8.2.2 ESM-03.02-APP-I, "Appendix I (GE Methodology Instrumentation & Controls)."

8.2.3 4 AWI-05.01.25, "CALCULATION/ANALYSIS CONTROL."

8.3 Miscellaneous

8.3.1 GE-NE-901-021-0492, DRF-A00-01932-1, "Setpoint Calculation Guidelines for the Monticello Nuclear Generating Plant," October 1992.

8.3.2 "User's Manual: IPASS (Rev. 2), Instrument Performance Analysis Software System for As-Found-As-Left (AFAL) Data," EPRI, Palo Alto, CA: 1999. CM-106752-R2.

8.3.3 "Statistics for Nuclear Engineers and Scientists Part 1: Basic Statistical Inference," William J. Beggs; February 1981.

8.3.4 Beckwith, Buck, and Marangoni, "Mechanical Measurements, Third Edition," Addison-Wesley Publishing Company, Inc., 1981.

8.3.5 Microsoft Excel 2000 Version 9.0.4402 SR-1, Spreadsheet Program.

8.3.6 IPASS (Instrument Performance Analysis Software System), Revision 2.03, created by EDAN Engineering in conjunction with EPRI.


9.0 TABLES

Table 9.1 Tolerance Interval Factors

Sample Size  95%/95%  99%/95%     Sample Size  95%/95%  99%/95%
     2        37.674  188.491          55       2.354    2.538
     3         9.916   22.401          60       2.333    2.506
     4         6.370   11.150          65       2.315    2.478
     5         5.079    7.855          70       2.299    2.454
     6         4.414    6.345          75       2.285    2.433
     7         4.007    5.488          80       2.272    2.414
     8         3.732    4.936          85       2.261    2.397
     9         3.532    4.550          90       2.251    2.382
    10         3.379    4.265          95       2.241    2.368
    11         3.259    4.045         100       2.233    2.355
    12         3.162    3.870         110       2.218    2.333
    13         3.081    3.727         120       2.205    2.314
    14         3.012    3.608         130       2.194    2.298
    15         2.954    3.507         140       2.184    2.283
    16         2.903    3.421         150       2.175    2.270
    17         2.858    3.345         160       2.167    2.259
    18         2.819    3.279         170       2.160    2.248
    19         2.784    3.221         180       2.154    2.239
    20         2.752    3.168         190       2.148    2.230
    21         2.723    3.121         200       2.143    2.222
    22         2.697    3.078         250       2.121    2.191
    23         2.673    3.040         300       2.106    2.169
    24         2.651    3.004         400       2.084    2.138
    25         2.631    2.972         500       2.070    2.117
    26         2.612    2.941         600       2.060    2.102
    27         2.595    2.914         700       2.052    2.091
    30         2.549    2.841         800       2.046    2.082
    35         2.490    2.748         900       2.040    2.075
    40         2.445    2.677        1000       2.036    2.068
    45         2.408    2.621          ∞        1.960    1.960
    50         2.379    2.576

NOTE 1: For cases where the exact count is not contained within the table, either the higher value or linear interpolation of the values may be used to determine the Tolerance Interval Factor.

NOTE 2: Table data from Table VII(a) of Reference 8.3.3.

NOTE 3: Data matches the Tolerance Interval Factors used in the IPASS Revision 2.03 software.

NOTE 4: An AFAL analysis performed with a sample size < 30 must have justification provided within the analysis documentation.

9.0 TABLES (Cont'd)

Table 9.2 Critical Values For T-Test

Sample Size  Upper 5% Significance Level     Sample Size  Upper 5% Significance Level
     3                 1.15                       22                2.60
     4                 1.46                       23                2.62
     5                 1.67                       24                2.64
     6                 1.82                       25                2.66
     7                 1.94                       30                2.75
     8                 2.03                       35                2.81
     9                 2.11                       40                2.87
    10                 2.18                       45                2.91
    11                 2.23                       50                2.96
    12                 2.29                       60                3.03
    13                 2.33                       70                3.08
    14                 2.37                       75                3.11
    15                 2.41                       80                3.13
    16                 2.44                       90                3.17
    17                 2.47                      100                3.21
    18                 2.50                      125                3.28
    19                 2.53                      150                3.33
    20                 2.56                     >150                4.00 (Note 2)
    21                 2.58

NOTE 1: Table data from Table 1 of Reference 8.1.4.

NOTE 2: For sample sizes greater than 150, an outlier factor of 4.00 is used in accordance with the guidance in Reference 8.1.1.

NOTE 3: An AFAL analysis performed with a sample size < 30 must have justification provided within the analysis documentation.

9.0 TABLES (Cont'd)

Table 9.3 Expected Probabilities for Normal Distribution

9 Bin Analysis                          10 Bin Analysis
Bin Range (σ)     Probability (%)       Bin Range (σ)     Probability (%)
-∞ to -2.38            0.866            -∞ to -2.4             0.820
-2.38 to -1.70         3.594            -2.4 to -1.8           2.770
-1.70 to -1.02        10.930            -1.8 to -1.2           7.920
-1.02 to -0.34        21.300            -1.2 to -0.6          15.920
-0.34 to 0.34         26.620            -0.6 to 0.0           22.570
0.34 to 1.02          21.300            0.0 to 0.6            22.570
1.02 to 1.70          10.930            0.6 to 1.2            15.920
1.70 to 2.38           3.594            1.2 to 1.8             7.920
2.38 to +∞             0.866            1.8 to 2.4             2.770
                                        2.4 to +∞              0.820

11 Bin Analysis                         12 Bin Analysis
Bin Range (σ)     Probability (%)       Bin Range (σ)     Probability (%)
-∞ to -2.52            0.587            -∞ to -2.5             0.621
-2.52 to -1.96         1.913            -2.5 to -2.0           1.659
-1.96 to -1.40         5.580            -2.0 to -1.5           4.400
-1.40 to -0.84        11.970            -1.5 to -1.0           9.190
-0.84 to -0.28        18.920            -1.0 to -0.5          14.980
-0.28 to 0.28         22.060            -0.5 to 0.0           19.150
0.28 to 0.84          18.920            0.0 to 0.5            19.150
0.84 to 1.40          11.970            0.5 to 1.0            14.980
1.40 to 1.96           5.580            1.0 to 1.5             9.190
1.96 to 2.52           1.913            1.5 to 2.0             4.400
2.52 to +∞             0.587            2.0 to 2.5             1.659
                                        2.5 to +∞              0.621

NOTE: Data developed from Table III of Reference 8.3.3.

9.0 TABLES (Cont'd)

Table 9.4 Probabilities of χ² > χ₀² (percent)

                                       χ₀²
 d     0    0.5  1.0  1.5  2.0  2.5  3.0  3.5  4.0  4.5  5.0  5.5  6.0  8.0  10.0
 1    100   48   32   22   16   11   8.3  6.1  4.6  3.4  2.5  1.9  1.4  0.5  0.2
 2    100   61   37   22   14   8.2  5.0  3.0  1.8  1.1  0.7  0.4  0.2
 3    100   68   39   21   11   5.8  2.9  1.5  0.7  0.4  0.2  0.1
 4    100   74   41   20   9.2  4.0  1.7  0.7  0.4  0.2  0.1
 5    100   78   42   19   7.5  2.9  1.0  0.4  0.1

                                       χ₀²
 d     0    0.2  0.4  0.6  0.8  1.0  1.2  1.4  1.6  1.8  2.0  2.2  2.4  2.6  2.8
 1    100   65   53   44   37   32   27   25   21   18   16   14   12   11   9.4
 2    100   82   67   55   45   37   30   24   20   17   14   11   9.1  7.4  6.1
 3    100   90   75   61   49   39   31   24   19   14   11   8.6  6.6  5.0  3.8
 4    100   94   81   66   52   41   31   23   17   13   9.2  6.6  4.8  3.4  2.4
 5    100   96   85   70   55   42   31   22   16   11   7.5  5.1  3.5  2.3  1.6
 6    100   98   88   73   57   42   30   21   14   9.5  6.2  4.0  2.5  1.6  1.0
 7    100   99   90   76   59   43   30   20   13   8.2  5.1  3.1  1.9  1.1  0.7
 8    100   99   92   78   60   43   29   19   12   7.2  4.2  2.4  1.4  0.8  0.4
 9    100   99   94   80   62   44   29   18   11   6.3  3.5  1.9  1.0  0.5  0.3
10    100  100   95   82   63   44   29   17   10   5.5  2.9  1.5  0.8  0.4  0.2
11    100  100   96   83   64   44   28   16   9.1  4.8  2.4  1.2  0.6  0.3  0.1
12    100  100   96   84   65   45   28   16   8.4  4.2  2.0  0.9  0.4  0.2  0.1
13    100  100   97   86   66   45   27   15   7.7  3.7  1.7  0.7  0.3  0.1  0.1
14    100  100   98   87   67   45   27   14   7.1  3.3  1.4  0.6  0.2  0.1
15    100  100   98   88   68   45   26   14   6.5  2.9  1.2  0.5  0.2  0.1

NOTE: Data from Table 19-3 of Reference 8.1.1.

9.0 TABLES (Cont'd)

Table 9.5 Coefficients (a(n-i+1)) Used in the W Test for Normality

 i\n      3      4      5      6      7      8      9     10     11     12     13     14
  1   0.7071 0.6872 0.6646 0.6431 0.6233 0.6052 0.5888 0.5739 0.5601 0.5475 0.5359 0.5251
  2          0.1677 0.2413 0.2806 0.3031 0.3164 0.3244 0.3291 0.3315 0.3325 0.3325 0.3318
  3                        0.0875 0.1401 0.1743 0.1976 0.2141 0.2260 0.2347 0.2412 0.2460
  4                                      0.0561 0.0947 0.1224 0.1429 0.1586 0.1707 0.1802
  5                                                    0.0399 0.0695 0.0922 0.1099 0.1240
  6                                                                  0.0303 0.0539 0.0727
  7                                                                                0.0240

 i\n     15     16     17     18     19     20     21     22     23     24     25     26
  1   0.5150 0.5056 0.4968 0.4886 0.4808 0.4734 0.4643 0.4590 0.4542 0.4493 0.4450 0.4407
  2   0.3306 0.3290 0.3273 0.3253 0.3232 0.3211 0.3185 0.3156 0.3126 0.3098 0.3069 0.3043
  3   0.2495 0.2521 0.2540 0.2553 0.2561 0.2565 0.2578 0.2571 0.2563 0.2554 0.2543 0.2533
  4   0.1878 0.1939 0.1988 0.2027 0.2059 0.2085 0.2119 0.2131 0.2139 0.2145 0.2148 0.2151
  5   0.1353 0.1447 0.1524 0.1587 0.1641 0.1686 0.1736 0.1764 0.1787 0.1807 0.1822 0.1836
  6   0.0880 0.1005 0.1109 0.1197 0.1271 0.1334 0.1399 0.1443 0.1480 0.1512 0.1539 0.1563
  7   0.0433 0.0593 0.0725 0.0837 0.0932 0.1013 0.1092 0.1150 0.1201 0.1245 0.1283 0.1316
  8          0.0196 0.0359 0.0496 0.0612 0.0711 0.0804 0.0878 0.0941 0.0997 0.1046 0.1089
  9                        0.0163 0.0303 0.0422 0.0530 0.0618 0.0696 0.0764 0.0823 0.0876
 10                                      0.0140 0.0263 0.0368 0.0459 0.0539 0.0610 0.0672
 11                                                    0.0122 0.0228 0.0321 0.0403 0.0476
 12                                                                  0.0107 0.0200 0.0284
 13                                                                                0.0094

 i\n     27     28     29     30     31     32     33     34     35     36     37     38
  1   0.4366 0.4328 0.4291 0.4254 0.4220 0.4188 0.4156 0.4127 0.4096 0.4068 0.4040 0.4015
  2   0.3018 0.2992 0.2968 0.2944 0.2921 0.2898 0.2876 0.2854 0.2834 0.2813 0.2794 0.2774
  3   0.2522 0.2510 0.2499 0.2487 0.2475 0.2463 0.2451 0.2439 0.2427 0.2415 0.2403 0.2391
  4   0.2152 0.2151 0.2150 0.2148 0.2145 0.2141 0.2137 0.2132 0.2127 0.2121 0.2116 0.2110
  5   0.1848 0.1857 0.1864 0.1870 0.1874 0.1878 0.1880 0.1882 0.1883 0.1883 0.1883 0.1881
  6   0.1584 0.1601 0.1616 0.1630 0.1641 0.1651 0.1660 0.1667 0.1673 0.1678 0.1683 0.1686
  7   0.1346 0.1372 0.1395 0.1415 0.1433 0.1449 0.1463 0.1475 0.1487 0.1496 0.1505 0.1513
  8   0.1128 0.1162 0.1192 0.1219 0.1243 0.1265 0.1284 0.1301 0.1317 0.1331 0.1344 0.1356
  9   0.0923 0.0965 0.1002 0.1036 0.1066 0.1093 0.1118 0.1140 0.1160 0.1179 0.1196 0.1211
 10   0.0728 0.0778 0.0822 0.0862 0.0899 0.0931 0.0961 0.0988 0.1013 0.1036 0.1056 0.1075
 11   0.0540 0.0598 0.0650 0.0697 0.0739 0.0777 0.0812 0.0844 0.0873 0.0900 0.0924 0.0947
 12   0.0358 0.0424 0.0483 0.0537 0.0585 0.0629 0.0669 0.0706 0.0739 0.0770 0.0798 0.0824
 13   0.0178 0.0253 0.0320 0.0381 0.0435 0.0485 0.0530 0.0572 0.0610 0.0645 0.0677 0.0706
 14          0.0084 0.0159 0.0227 0.0289 0.0344 0.0395 0.0441 0.0484 0.0523 0.0559 0.0592
 15                        0.0076 0.0144 0.0206 0.0262 0.0314 0.0361 0.0404 0.0444 0.0481
 16                                      0.0068 0.0131 0.0187 0.0239 0.0287 0.0331 0.0372
 17                                                    0.0062 0.0119 0.0172 0.0220 0.0264
 18                                                                  0.0057 0.0110 0.0158
 19                                                                                0.0053

 i\n     39     40     41     42     43     44     45     46     47     48     49     50
  1   0.3989 0.3964 0.3940 0.3917 0.3894 0.3872 0.3850 0.3830 0.3808 0.3789 0.3770 0.3751
  2   0.2755 0.2737 0.2719 0.2701 0.2684 0.2667 0.2651 0.2635 0.2620 0.2604 0.2589 0.2574
  3   0.2380 0.2368 0.2357 0.2345 0.2334 0.2323 0.2313 0.2302 0.2291 0.2281 0.2271 0.2260
  4   0.2104 0.2098 0.2091 0.2085 0.2078 0.2072 0.2065 0.2058 0.2052 0.2045 0.2038 0.2032
  5   0.1880 0.1878 0.1876 0.1874 0.1871 0.1868 0.1865 0.1862 0.1859 0.1855 0.1851 0.1847
  6   0.1689 0.1691 0.1693 0.1694 0.1695 0.1695 0.1695 0.1695 0.1695 0.1693 0.1692 0.1691
  7   0.1520 0.1526 0.1531 0.1535 0.1539 0.1542 0.1545 0.1548 0.1550 0.1551 0.1553 0.1554
  8   0.1366 0.1376 0.1384 0.1392 0.1398 0.1405 0.1410 0.1415 0.1420 0.1423 0.1427 0.1430
  9   0.1225 0.1237 0.1249 0.1259 0.1269 0.1278 0.1286 0.1293 0.1300 0.1306 0.1312 0.1317
 10   0.1092 0.1108 0.1123 0.1136 0.1149 0.1160 0.1170 0.1180 0.1189 0.1197 0.1205 0.1212
 11   0.0967 0.0986 0.1004 0.1020 0.1035 0.1049 0.1062 0.1073 0.1085 0.1095 0.1105 0.1113
 12   0.0848 0.0870 0.0891 0.0909 0.0927 0.0943 0.0959 0.0972 0.0986 0.0998 0.1010 0.1020
 13   0.0733 0.0759 0.0782 0.0804 0.0824 0.0842 0.0860 0.0876 0.0892 0.0906 0.0919 0.0932
 14   0.0622 0.0651 0.0677 0.0701 0.0724 0.0745 0.0765 0.0783 0.0801 0.0817 0.0832 0.0846
 15   0.0515 0.0546 0.0575 0.0602 0.0628 0.0651 0.0673 0.0694 0.0713 0.0731 0.0748 0.0764
 16   0.0409 0.0444 0.0476 0.0506 0.0534 0.0560 0.0584 0.0607 0.0628 0.0648 0.0667 0.0685
 17   0.0305 0.0343 0.0379 0.0411 0.0442 0.0471 0.0497 0.0522 0.0546 0.0568 0.0588 0.0608
 18   0.0203 0.0244 0.0283 0.0318 0.0352 0.0383 0.0412 0.0439 0.0465 0.0489 0.0511 0.0532
 19   0.0101 0.0146 0.0188 0.0227 0.0263 0.0296 0.0328 0.0357 0.0385 0.0411 0.0436 0.0459
 20          0.0049 0.0094 0.0136 0.0175 0.0211 0.0245 0.0277 0.0307 0.0335 0.0361 0.0386
 21                        0.0045 0.0087 0.0126 0.0163 0.0197 0.0229 0.0259 0.0288 0.0314
 22                                      0.0042 0.0081 0.0118 0.0153 0.0185 0.0215 0.0244
 23                                                    0.0039 0.0076 0.0111 0.0143 0.0174
 24                                                                  0.0037 0.0071 0.0104
 25                                                                                0.0035

NOTE: Data from Table 1 of Reference 8.1.5.

9.0 TABLES (Cont'd)

Table 9.6 Percentage Points of the Distribution of the W Test Statistic for P = 0.05

 n      W          n      W
 3    0.767       27    0.923
 4    0.748       28    0.924
 5    0.762       29    0.926
 6    0.788       30    0.927
 7    0.803       31    0.929
 8    0.818       32    0.930
 9    0.829       33    0.931
10    0.842       34    0.933
11    0.850       35    0.934
12    0.859       36    0.935
13    0.866       37    0.936
14    0.874       38    0.938
15    0.881       39    0.939
16    0.887       40    0.940
17    0.892       41    0.941
18    0.897       42    0.942
19    0.901       43    0.943
20    0.905       44    0.944
21    0.908       45    0.945
22    0.911       46    0.945
23    0.914       47    0.946
24    0.916       48    0.947
25    0.918       49    0.947
26    0.920       50    0.947

NOTE: Data from Table 2 of Reference 8.1.5.

9.0 TABLES (Cont'd)

Table 9.7 Percentage Points of the Distribution of the D' Test Statistic

  n   P=0.025  P=0.975      n   P=0.025  P=0.975       n    P=0.025  P=0.975
 50     95.6    101.3      120    361.8    375.7      640    4525.0   4600.0
 52    101.5    107.4      140    456.9    473.2      660    4739.0   4817.0
 54    107.5    113.7      160    559.2    577.8      680    4975.0   5037.0
 56    113.6    120.0      180    668.2    689.2      700    5178.0   5260.0
 58    119.9    126.5      200    783.6    806.9      720    5403.0   5487.0
 60    126.3    133.1      220    904.9    930.5      740    5630.0   5717.0
 62    132.7    139.8      240   1023.0   1060.0      760    5861.0   5950.0
 64    139.3    146.6      260   1164.0   1195.0      780    6094.0   6186.0
 66    146.0    153.5      280   1302.0   1335.0      800    6331.0   6425.0
 68    152.8    160.6      300   1445.0   1480.0      850    6935.0   7035.0
 70    159.6    167.7      320   1593.0   1630.0      900    7558.0   7664.0
 72    166.6    174.9      340   1745.0   1785.0      950    8198.0   8310.0
 74    173.7    182.2      360   1902.0   1944.0     1000    8856.0   8973.0
 76    180.9    189.7      380   2064.0   2108.0     1050    9530.0   9653.0
 78    188.2    197.2      400   2230.0   2276.0     1100   10220    10350
 80    195.6    204.8      420   2400.0   2449.0     1150   10930    11060
 82    203.1    212.5      440   2574.0   2625.0     1200   11650    11790
 84    210.6    220.3      460   2752.0   2806.0     1250   12390    12530
 86    218.3    228.2      480   2934.0   2991.0     1300   13140    13290
 88    226.1    236.2      500   3120.0   3179.0     1350   13910    14060
 90    233.9    244.3      520   3310.0   3371.0     1400   14690    14850
 92    241.8    252.4      540   3504.0   3567.0     1450   15480    15650
 94    249.9    260.7      560   3701.0   3767.0     1500   16290    16470
 96    258.0    269.1      580   3902.0   3970.0
 98    266.2    277.5      600   4106.0   4176.0
100    274.4    286.0      620   4314.0   4387.0

NOTE 1: For cases where the exact count is not contained within the table, linear interpolation of the values may be used to determine the critical D' values.

NOTE 2: Data from Table 5 of Reference 8.1.5.

9.0 TABLES (Cont'd)

Table 9.8 Critical Values of F-Distribution (α = 0.05)

ν2\ν1      1      2      3      4      5      6      7      8      9     10
  1     161.4  199.5  215.7  224.6  230.2  234.0  236.8  238.9  240.5  241.9
  2     18.51  19.00  19.16  19.25  19.30  19.33  19.35  19.37  19.38  19.40
  3     10.13   9.55   9.28   9.12   9.01   8.94   8.89   8.85   8.81   8.79
  4      7.71   6.94   6.59   6.39   6.26   6.16   6.09   6.04   6.00   5.96
  5      6.61   5.79   5.41   5.19   5.05   4.95   4.88   4.82   4.77   4.74
  6      5.99   5.14   4.76   4.53   4.39   4.28   4.21   4.15   4.10   4.06
  7      5.59   4.74   4.35   4.12   3.97   3.87   3.79   3.73   3.68   3.64
  8      5.32   4.46   4.07   3.84   3.69   3.58   3.50   3.44   3.39   3.35
  9      5.12   4.26   3.86   3.63   3.48   3.37   3.29   3.23   3.18   3.14
 10      4.96   4.10   3.71   3.48   3.33   3.22   3.14   3.07   3.02   2.98
 11      4.84   3.98   3.59   3.36   3.20   3.09   3.01   2.95   2.90   2.85
 12      4.75   3.89   3.49   3.26   3.11   3.00   2.91   2.85   2.80   2.75
 13      4.67   3.81   3.41   3.18   3.03   2.92   2.83   2.77   2.71   2.67
 14      4.60   3.74   3.34   3.11   2.96   2.85   2.76   2.70   2.65   2.60
 15      4.54   3.68   3.29   3.06   2.90   2.79   2.71   2.64   2.59   2.54
 16      4.49   3.63   3.24   3.01   2.85   2.74   2.66   2.59   2.54   2.49
 17      4.45   3.59   3.20   2.96   2.81   2.70   2.61   2.55   2.49   2.45
 18      4.41   3.55   3.16   2.93   2.77   2.66   2.58   2.51   2.46   2.41
 19      4.38   3.52   3.13   2.90   2.74   2.63   2.54   2.48   2.42   2.38
 20      4.35   3.49   3.10   2.87   2.71   2.60   2.51   2.45   2.39   2.35
 21      4.32   3.47   3.07   2.84   2.68   2.57   2.49   2.42   2.37   2.32
 22      4.30   3.44   3.05   2.82   2.66   2.55   2.46   2.40   2.34   2.30
 23      4.28   3.42   3.03   2.80   2.64   2.53   2.44   2.37   2.32   2.27
 24      4.26   3.40   3.01   2.78   2.62   2.51   2.42   2.36   2.30   2.25
 25      4.24   3.39   2.99   2.76   2.60   2.49   2.40   2.34   2.28   2.24
 26      4.23   3.37   2.98   2.74   2.59   2.47   2.39   2.32   2.27   2.22
 27      4.21   3.35   2.96   2.73   2.57   2.46   2.37   2.31   2.25   2.20
 28      4.20   3.34   2.95   2.71   2.56   2.45   2.36   2.29   2.24   2.19
 29      4.18   3.33   2.93   2.70   2.55   2.43   2.35   2.28   2.22   2.18
 30      4.17   3.32   2.92   2.69   2.53   2.42   2.33   2.27   2.21   2.16
 40      4.08   3.23   2.84   2.61   2.45   2.34   2.25   2.18   2.12   2.08
 60      4.00   3.15   2.76   2.53   2.37   2.25   2.17   2.10   2.04   1.99
120      3.92   3.07   2.68   2.45   2.29   2.17   2.09   2.02   1.96   1.91
  ∞      3.84   3.00   2.60   2.37   2.21   2.10   2.01   1.94   1.88   1.83

NOTE 1: Data from Table VI of Reference 8.3.3.

NOTE 2: Values may also be calculated using the FINV function in Microsoft Excel: Fcrit = FINV(0.05, ν1, ν2).

10.0 ATTACHMENTS

10.1 Evaluation of Drift Data

To complete these evaluations, it is necessary to perform a review of the data points. In some cases, historical data points must be corrected or deleted because the identified data points represent a unique occurrence not related to instrument drift. To properly address the corrected or deleted data points, each affected data point is placed into one of seven categories. The category descriptions, and the basis for why the adjustment of each category of data point does not represent a drift problem, are provided as follows:

CATEGORY A.1 "Data Transcription Errors"

This category is assigned to data points that are identified as data transcription problems. A data transcription error indicates that the data provided for evaluation in the Excel spreadsheet was in error and did not match the data recorded in the original Surveillance Test procedure, or that the model number of the instrument needed to be corrected. Also included in this category is data impacted by unavailable historical supporting data, which by its absence would skew the evaluation. For resolvable transcription errors, the points were not eliminated but were modified to correct the obvious typographical error or changed to make the data points consistent. The change is acceptable because it ensures that proper data is evaluated. All changes to the data set were independently reviewed and verified to ensure control of the data set was maintained.

CATEGORY A.2 "Technician Data Entry Error"

This category is assigned to data points which are eliminated from the associated instrument's data set based on an obvious data entry error by the Technician recording the data. This category was assigned to data points where a value, for example, was entered as 101.27 when the acceptance range is between 1 and 10 units. Obvious data entry errors of this type may be eliminated because it was either physically impossible or highly unlikely for the instrument to have reached this value. Therefore, the elimination of the data points which fall into this category does not invalidate the instrument drift evaluation.

CATEGORY B.1 "Equipment Replacement"

This category is assigned to data points in the data set impacted by As-Found data taken from a new or replacement instrument. When a new instrument is installed, the As-Found setting is not a valid data point. Therefore, the elimination of the data points which fall into this category does not invalidate the drift study. Any chronic problems with instrumentation failures, which would only be detected by the performance of the Channel Calibration Test, would be evaluated in the surveillance test history evaluation.

CATEGORY B.2 "Chronic Equipment Failure"

This category is assigned to data points which are eliminated from the associated instrument's data set based on a review of the component's history that identified this particular component as a chronic problem instrument. The problems are normally determined to be design, application, or installation related. In this case, all data points associated with this instrument would be eliminated to prevent skewing the drift analysis results. The repetitive failures of this instrument are considered unique, and the elimination of the data points that fall into this category does not invalidate the drift study.

CATEGORY B.3 "Scaling or Setpoint Changes"

This category is assigned to data points in the data set affected by changes to the setpoint or input scaling values. Changes in instrument scaling or setpoints can appear in the data set as a larger-than-actual drift point unless the change is detected during the data entry process. Instrument data sheets may not always indicate what happened or the purpose of a change. As a result, an undetected setpoint change can appear as an outlier. When new instrument inputs or setpoints are incorporated, the As-Found setting is not a valid data point. Therefore, the elimination of the data points that fall into this category does not invalidate the drift study.

CATEGORY C.1 "M&TE Equipment Out of Calibration"

This category is assigned to data points which are eliminated from the associated instrument's data set based on the fact that the measuring and test equipment used to perform the test was out of calibration. This is identified by review of the Condition Reports generated when an item of M&TE is discovered to be out of calibration. The elimination of the data is acceptable because the use of out-of-calibration M&TE makes all data obtained suspect and invalid. Therefore, the elimination of the data points that fall into this category does not invalidate the drift study. A separate evaluation is performed for all M&TE out-of-calibration incidents to ensure an Operability issue does not exist.

CATEGORY C.2 "Poor Calibration Techniques"

This category is assigned to data points which are eliminated from the associated instrument's data set based on a determination that poor calibration techniques were employed. The primary criteria used to identify data points affected by these techniques are as follows:

1) Check for linearity problems. If one or more points of a multiple point calibration are out of tolerance, and subsequent calibrations indicate that all points were consistent, this would indicate a problem with calibration techniques if the instruments typically have good linearity characteristics over the instrument span.

2) Check for unnecessary adjustment. For example, if one cycle adjusts the instrument a certain amount in one direction and the next cycle adjusts the instrument back into calibration the same/similar amount in the opposite direction.

3) Check for inconsistent data. If several cycles demonstrate good performance and only one cycle of data indicates very poor performance, then the outlier data point can be eliminated based on poor calibration techniques.

The elimination of the data points that fall into this category does not invalidate the instrument drift evaluation because the data is not representative of true instrument performance.
