ML042040174

License Amendment Request to Support 24-month Fuel Cycles, NMC's Interpretation of the NRC Comments on the Staff Review of EPRI Technical Report 103335, Guidelines for Instrument Calibration Extension/Reduction Programs
Person / Time
Site: Monticello
Issue date: 06/30/2004
From:
Nuclear Management Co
To:
Document Control Desk, Office of Nuclear Reactor Regulation
References
EPRI TR-103335, GL-91-004, L-MT-04-036



ENCLOSURE 2

MONTICELLO NUCLEAR GENERATING PLANT

NMC'S INTERPRETATION OF THE NRC COMMENTS ON THE STAFF REVIEW OF EPRI TECHNICAL REPORT 103335, GUIDELINES FOR INSTRUMENT CALIBRATION EXTENSION/REDUCTION PROGRAMS

The following are excerpts or paraphrases from the NRC Status Report dated December 1, 1997, on the Staff review of EPRI Technical Report (TR)-103335, Guidelines for Instrument Calibration Extension/Reduction Programs. These excerpts are followed by Nuclear Management Company, LLC's (NMC's) interpretation regarding utilization of EPRI TR-103335. NMC's interpretations were used in the development of Engineering Standards Manual (ESM)-03.02-APP-III, Drift Analysis (Instrumentation and Controls). These interpretations were also used by NMC in the development of the 24-Month Fuel Cycle Extension Project for the Monticello Nuclear Generating Plant.

NRC COMMENTS ON EPRI TR Item 4.1, Section 1, Introduction, Second Paragraph:

The staff has issued guidance on the second objective (evaluating extended surveillance intervals in support of longer fuel cycles) only for 18-month to 24-month refueling cycle extensions (GL 91-04). Significant unresolved issues remain concerning the applicability of 18 month (or less) historical calibration data to extended intervals longer than 24 months (maximum 30 months), and instrument failure modes or conditions that may be present in instruments that are unattended for periods longer than 24 months.

NMC'S INTERPRETATION

Extensions for longer than 24 months (maximum 30 months) were not requested for any instrument calibrations or other surveillance requirements in this submittal.

NRC COMMENTS ON EPRI TR Item 4.2, Section 2, Principles of Calibration Data Analysis, First Paragraph:

This section describes the general relation between the as-found and as-left calibration values, and instrument drift. The term time-dependent drift is used. This should be clarified to mean time dependence of drift uncertainty, or in other words, time dependence of the standard deviation of drift of a sample or a population of instruments.


NMC'S INTERPRETATION

Both EPRI TR Revisions 0 and 1 failed to adequately determine whether a relationship exists between the magnitude of drift and the time interval between calibrations. The drift analysis performed for Monticello examined the time-to-magnitude relationship using several different statistical and non-statistical methods. First, during the evaluation of data for grouping, data was grouped for the same or similar manufacturer, model number, and application combinations even though the t statistical test may have shown that the groups were not necessarily from the same population when the groups were calibrated on significantly different frequencies. This test grouping was made to ensure the analysis did not cover up a significant time-dependent bias or random element magnitude shift.

After the standard deviation and other simple statistics were calculated, the data was evaluated for the time to magnitude relationship. If adequately time-diverse data was available, a time-binning analysis was performed on the data. Data was divided into time bins based upon the time between calibrations. Statistics were computed for those bins, such as mean and standard deviation. These values were then plotted to expose any significant increases in the magnitude of the mean or standard deviation over time.

A regression analysis was performed based upon the scatter of the raw drift values, and a second regression analysis was performed on the absolute values of the drift.

For each of these regression analyses, statistical tests were performed to determine if time dependency was evident. These statistical tests are the R², F, and p-value tests.

Finally, visual examination of the plots generated from the scatter plot, the binning analysis, the regression analysis of drift, and the regression analysis of the absolute value of drift was used to make a final judgment on whether the random or mean values of drift were time dependent. In this way, both the mean and random aspects of drift were evaluated for time dependency.
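
For illustration only, the following minimal Python sketch shows a regression screen of the kind described above, reporting the R², F, and p values for drift (and for the absolute value of drift) regressed against the calibration interval. The synthetic data and variable names are assumptions for the example, not values from the Monticello analysis.

```python
"""Illustrative regression screen for drift time dependency (not the plant calculation)."""
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
interval_days = rng.uniform(30, 550, size=80)    # time between calibrations
drift = rng.normal(0.0, 0.25, size=80)           # as-found(i) - as-left(i-1), % span

def regression_screen(x, y, label):
    n = len(x)
    fit = stats.linregress(x, y)
    r2 = fit.rvalue ** 2
    f_stat = (n - 2) * r2 / (1.0 - r2)            # F statistic for a simple linear fit
    p_value = stats.f.sf(f_stat, 1, n - 2)        # equals the slope t-test p value here
    print(f"{label}: slope={fit.slope:.2e}  R^2={r2:.3f}  F={f_stat:.2f}  p={p_value:.3f}")
    return p_value

# Screen both the signed drift and its magnitude, as the analysis above describes.
regression_screen(interval_days, drift, "raw drift")
regression_screen(interval_days, np.abs(drift), "absolute drift")
```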

NRC COMMENTS ON EPRI TR Item 4.2, Section 2, Principles of Calibration Data Analysis, Second Paragraph:

Drift is defined as as-found(i) - as-left(i-1), where i denotes the ith calibration. As mentioned in the TR, this quantity unavoidably contains uncertainty contributions from sources other than drift. These uncertainties account for variability in calibration equipment and personnel, instrument accuracy, and environmental effects. It may be difficult to separate these influences from drift uncertainty when attempting to estimate drift uncertainty, but this is not sufficient reason to group these allowances with a drift allowance. Their purpose is to provide sufficient margin to account for differences between the instrument calibration environment and its operating environment; see Section 4.7 of this report for a discussion of combining other uncertainties into a drift term.
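
As a minimal illustration of this definition, the Python sketch below computes drift(i) = as-found(i) - as-left(i-1) and the associated calibration interval for one instrument's history; the record layout and values are hypothetical.

```python
"""Minimal sketch of the drift definition quoted above for one instrument."""
import pandas as pd

records = pd.DataFrame(
    {
        "cal_date": pd.to_datetime(["2000-03-01", "2001-09-15", "2003-03-20"]),
        "as_found": [50.10, 49.85, 50.30],   # % span at the check point
        "as_left":  [50.00, 50.00, 50.05],
    }
).sort_values("cal_date")

# drift(i) = as-found of this calibration minus as-left of the previous one
records["drift"] = records["as_found"] - records["as_left"].shift(1)
records["interval_days"] = records["cal_date"].diff().dt.days
print(records[["cal_date", "drift", "interval_days"]])
```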


NMC'S INTERPRETATION

The drift determined by analysis was compared to the equivalent set of variables in the setpoint calculation. Per Section 6.2 of ESM-03.02-APP-III, the Analyzed Drift value was not composed of drift alone; this value also contains errors from M&TE and device reference accuracy. The drift value also includes other effects, but it was conservative to assume the other effects were not included, since they cannot be quantified and were not expected to fully contribute to the errors observed. Due to the methods used in the GE setpoint methodology (NEDC-31336), the Analyzed Drift term is used to replace the Vendor Drift and Drift Temperature Effects only.

The errors associated with the environment were not considered in the comparison of the Analyzed Drift values to the setpoint calculation values. The environmental effects were considered separately from the Analyzed Drift term, within the setpoint calculation.

NRC COMMENTS ON EPRI TR Item 4.2, Section 2, Principles of Calibration Data Analysis, Third Paragraph:

The guidance of Section 2 is acceptable provided that time dependency of drift for a sample or population is understood to be time dependency of the uncertainty statistic describing the sample or population; e.g., the standard deviation of drift. A combination of other uncertainties with drift uncertainty may obscure any existing time dependency of drift uncertainty, and should not be done before time-dependency analysis is done.

NMC'S INTERPRETATION

Time dependency evaluations were performed on the basic as-left/as-found data.

Obviously other error contributors were contained in this data, but it is impossible to separate the contribution due to drift from the contribution due to Measurement and Test Equipment and Reference Accuracy. All of these terms contributed to the observed errors. Using the raw values appeared to give the most reliable interpretation of the time dependency for the calibration process, which was the true value of interest.

No other uncertainties were combined with the basic as-left/as-found data for time dependency determination.


NRC COMMENTS ON EPRI TR Item 4.3, Section 3, Calibration Data Collection, Second Paragraph:

When grouping instruments, in addition to manufacturer make and model, care should be taken to group only instruments that experience similar environments and process effects. Also, changes in manufacturing method, sensor element design, or the quality assurance program under which the instrument was manufactured should be considered as reasons for separating instruments into different groups. Instrument groups may be divided into subgroups on the basis of instrument age, for the purpose of investigating whether instrument age is a factor in drift uncertainty.

NMC'S INTERPRETATION

Instruments were originally grouped based upon manufacturer make, model number, and specific range of setpoint or operation. The groups were then evaluated and combined based upon Section 4.5 of the ESM (Enclosure 4). The appropriateness of the grouping was then tested with a t-test (two samples assuming unequal variances). The t-test gives the probability, associated with a Student's t statistic, that two samples are likely to have come from the same underlying population. Instrument groups were not divided into subgroups based upon age.
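
For illustration, a minimal Python sketch of such a two-sample t-test assuming unequal variances (Welch's test) is shown below; the drift values and the 0.05 significance level are assumptions for the example.

```python
"""Illustrative Welch's t-test for deciding whether two candidate groups may be pooled."""
import numpy as np
from scipy import stats

# Drift values (% span) for two candidate groups of the same make/model (hypothetical).
group_a = np.array([0.10, -0.05, 0.20, 0.00, -0.15, 0.08, 0.12, -0.02])
group_b = np.array([0.05, 0.18, -0.10, 0.25, 0.02, -0.08, 0.15, 0.07])

# Two-sample t-test assuming unequal variances (Welch's test).
t_stat, p_value = stats.ttest_ind(group_a, group_b, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
if p_value > 0.05:   # illustrative significance level
    print("No evidence the groups differ; pooling may be appropriate.")
else:
    print("Groups may come from different populations; keep them separate.")
```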

NRC COMMENTS ON EPRI TR Item 4.3, Section 3, Calibration Data Collection, Second Paragraph (continued):

Instrument groups should also be evaluated for historical instrument anomalies or failure modes that may not be evident in a simple compilation of calibration data. This evaluation should confirm that almost all instruments in a group performed reliably and almost all required only calibration attendance.

NMC'S INTERPRETATION

A separate surveillance test failure evaluation was performed for the procedures implementing the surveillance requirements. This evaluation identified calibration-related and non-calibration-related failures for single instruments, and groups of instruments supporting a specific function. After all relevant device and multiple device failures were identified, a cross-check of failures across manufacturer make and model number was also performed to determine if common mode failures could present a problem for the cycle extension. This evaluation confirmed that almost all instruments in a group (associated with extended TS line items) performed reliably and most failures were detected by more frequent testing.


NRC COMMENTS ON EPRI TR Item 4.3, Section 3, Calibration Data Collection, Third Paragraph:

Instruments within a group should be investigated for factors that may cause correlations between calibrations. Common factors may cause data to be correlated, including common calibration equipment, same personnel performing calibrations, and calibrations occurring in the same conditions. The group, not individual instruments within the group, should be tested for trends.

NMC'S INTERPRETATION

Instruments were only investigated for correlation factors where multiple instruments appeared to have been driven out of tolerance by a single factor. Correlation may exist between the specific type of test equipment (e.g., Fluke 863 on the 0-200 mV range) and the personnel performing calibrations in the plant. This correlation would only affect the measurement if it caused the instrument performance to be outside expected boundaries, e.g., where additional errors should be considered in the setpoint analysis or where it showed a defined bias. Because Measurement and Test Equipment (M&TE) is calibrated more frequently than most process components being monitored, the effect of test equipment between calibrations is considered to be negligible and random. The setting tolerance, readability, and other factors, which are more personnel based, would only affect the performance if there were a predisposition to leave or read settings in a particular direction (e.g., always in the more conservative direction). Plant training and evaluation programs are designed to eliminate this type of predisposition. Therefore, the correlation between M&TE and instrument performance, or between personnel and instrument performance, has not been evaluated. Observed as-found values outside the allowable tolerance [Allowable Value] were evaluated to determine if a common cause existed as a part of the data entry evaluation.

NRC COMMENTS ON EPRI TR Item 4.3, Section 3, Calibration Data Collection, Fourth Paragraph:

TR-103335, Section 3.3, advises that older data may be excluded from analysis. It should be emphasized that when selecting data for drift uncertainty time dependency analysis, it is unacceptable to exclude data simply because it is old data. When selecting data for drift uncertainty time dependency analysis, the objective should be to include data for time spans at least as long as the proposed extended calibration interval, and preferably several times as long, including calibration intervals as long as the proposed interval. For limited extensions (e.g., a GL 91-04 extension), acceptable ways to obtain this longer interval data include obtaining data from other nuclear plants or from other industries for identical or close-to-identical instruments, or combining intervals between which the instrument was not reset or adjusted. If data from other sources is used, the source should be analyzed for similarity to the target plant in procedures, process, environment, methodology, test equipment, maintenance schedules, and personnel training. An appropriate conclusion of the data collection process may be that there is insufficient data of appropriate time span for a sufficient number of instruments to support statistical analysis of drift uncertainty time dependency.

NMC'S INTERPRETATION

Data was obtained for at least ten years or the life of the instrument. This data allowed for the evaluation of data with various calibration spans over several calibration intervals to provide representative information for each type of instrument. Data from outside the Monticello data set was not used to provide longer interval data. The time dependency determination was in most cases based upon calibrations performed at or near 18 months and calibrations performed at shorter intervals (monthly, quarterly, or semiannually). There did not appear to be any time-based factors that would be present from 18 to 24 months that would not have been present between 1, 3, 6, or 12 and 18 months. In some cases, it was determined that there was insufficient data to support statistical analysis of drift time dependency. A correlation between drift magnitude and time was assumed for these cases, and the calculation reflects time dependent drift values.

NRC COMMENTS ON EPRI TR Item 4.3, Section 3, Calibration Data Collection, Fifth Paragraph:

TR-103335, Section 3.3 provides guidance on the amount of data to collect. As a general rule, it is unacceptable to reject applicable data, because biases in the data selection process may introduce biases in the calculated statistics. There are only two acceptable reasons for reducing the amount of data selected: enormity, and statistical dependence. When the number of data points is so enormous that the data acquisition task would be prohibitively expensive, a randomized selection process, not dependent upon engineering judgment, should be used. This selection process should have three steps. In the first step, all data is screened for applicability, meaning that all data for the chosen instrument grouping is selected, regardless of age of the data. In the second step, a proportion of the applicable data is chosen by automated random selection, ensuring that the data records for single instruments are complete, and enough individual instruments are included to constitute a statistically diverse sample. In the third step, the first two steps are documented. Data points should be combined when there is indication that they are statistically dependent on each other, although alternate approaches may be acceptable. See Section 4.5, below, on combined point data selection and Section 4.4.1 on 0%, 25%, 50%, 75%, and 100% calibration span points.
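
For illustration, a minimal Python sketch of the three-step randomized selection process described above is given below. It assumes calibration records keyed by an instrument tag; the column names, sampling fraction, and data are hypothetical.

```python
"""Illustrative randomized selection of complete instrument records (hypothetical layout)."""
import pandas as pd

def select_records(all_records: pd.DataFrame, fraction: float, seed: int = 1) -> pd.DataFrame:
    # Step 1: screen every record for applicability (no exclusion by age here).
    applicable = all_records
    # Step 2: automated random selection of whole instruments, so each selected
    # instrument keeps its complete calibration history.
    tags = applicable["tag"].drop_duplicates()
    chosen = tags.sample(frac=fraction, random_state=seed)
    sample = applicable[applicable["tag"].isin(chosen)]
    # Step 3: document what was done.
    print(f"Selected {len(chosen)} of {len(tags)} instruments "
          f"({len(sample)} of {len(applicable)} records).")
    return sample

# Hypothetical records: instrument tag and drift (% span).
records = pd.DataFrame({
    "tag":   ["LT-1", "LT-1", "LT-2", "LT-2", "LT-3", "LT-3", "LT-4", "LT-4"],
    "drift": [0.05, -0.10, 0.12, 0.02, -0.04, 0.08, 0.00, -0.06],
})
subset = select_records(records, fraction=0.5)
```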


NMC'S INTERPRETATION

Data was obtained for at least ten years or the life of the instrument. No data points were rejected from this time interval, and no sampling techniques were used.

NRC COMMENTS ON EPRI TR Item 4.4, Section 4, Analysis of Calibration Data Sub-item 4.4.1, Sections 4.3 and 4.4, Data Setup and Spreadsheet Statistics, First Paragraph:

The use of spreadsheets, databases, or other commercial software is acceptable for data analysis provided that the software, and the operating system used on the analysis computer, is under effective configuration control. Care should be exercised in the use of Windows or similar operating systems because of the dependence on shared libraries. Installation of other application software on the analysis machine can overwrite shared libraries with older versions or versions that are inconsistent with the software being used for analysis.

NMC'S INTERPRETATION

The project used Microsoft Excel spreadsheets to perform the drift analysis. This software was not treated as QA software. Therefore, computations were verified using hand verification and alternate software on different computers, such as Lotus 1-2-3 spreadsheets, Mathcad, and Quattro Pro.

NRC COMMENTS ON EPRI TR Item 4.4, Section 4, Analysis of Calibration Data Sub-item 4.4.1, Sections 4.3 and 4.4, Data Setup and Spreadsheet Statistics, Second Paragraph:

Using either engineering units or per-unit (percent of span) quantities is acceptable.

The simple statistic calculations (mean, sample standard deviation, sample size) are acceptable. Data should be examined for correlation or dependence to eliminate over-optimistic tolerance interval estimates. For example, if the standard deviation of drift can be fitted with a regression line through the 0%, 25%, 50%, 75%, and 100% calibration span points, there is reason to believe that drift uncertainty is correlated over the five (or nine, if the data includes a repeatability sweep) calibration data points. An example is shown in TR-103335, Figure 5.4, and a related discussion is given in TR-103335 Section 5.1.3. Confidence/tolerance estimates are based on (a) an assumption of normality, (b) the number of points in the data set, and (c) the standard deviation of the sample. Increasing the number of points (utilizing each calibration span point) when data is statistically dependent decreases the tolerance factor k, which may falsely enhance the confidence in the predicted tolerance interval. To retain the information, but achieve a reasonable point count for confidence/tolerance estimates, the statistically dependent data points should be combined into a composite data point. This retains the information but cuts the point count. For drift uncertainty estimates with data similar to that in the TR example, an acceptable method requires that the number of independent data points should be one-fifth (or one-ninth) of the total number of data points in the example, and a combined data point for each set of five span points should be selected that is representative of instrument performance at or near the span point most important to the purpose of the analysis (i.e., trip or normal operation point).

NMC'S INTERPRETATION

The NMC analysis for Monticello used either engineering units or percent of calibrated span, as appropriate to the calibration process. As an example, for switches that do not have a realistic span value, engineering units were used in the analysis; normally, percent of span is used. The data was evaluated for dependence; dependence was normally found between the points (0%, 50%, and 100%) of a single calibration.

Due to the changes in M&TE and personnel performing the calibrations, independence was found between calibrations of the same component on different dates. The most conservative simple statistic values for the points closest to the point of interest were selected, or the most conservative values for any data point were selected to ensure conservatism. The multiplier was determined based upon the number of actual calibrations associated with the worst-case value selected. Selection of the actual number of calibrations is equivalent to the determination of independent points (e.g., one-fifth or one-ninth of the total data point count). Selection of the worst-case point is also more conservative than the development of a combined data point.
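
For illustration, the Python sketch below shows how the tolerance factor k shrinks as the assumed number of independent points grows, which is why dependent span points should not be counted separately. It uses the standard one-sided normal tolerance factor; the point counts are assumptions for the example, not the values used in the Monticello calculations.

```python
"""Illustrative one-sided normal tolerance factor versus assumed independent point count."""
import numpy as np
from scipy import stats

def one_sided_tolerance_factor(n, coverage=0.95, confidence=0.95):
    """k such that mean + k*s bounds `coverage` of the population with `confidence`."""
    z_p = stats.norm.ppf(coverage)
    return stats.nct.ppf(confidence, df=n - 1, nc=z_p * np.sqrt(n)) / np.sqrt(n)

# Counting every span point (e.g., 5 points x 30 calibrations) gives a smaller,
# potentially optimistic k than using only the independent calibration count.
print(one_sided_tolerance_factor(150))   # roughly 1.9, optimistic if points are dependent
print(one_sided_tolerance_factor(30))    # roughly 2.2, based on independent calibrations
```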

NRC COMMENTS ON EPRI TR Item 4.4, Section 4, Analysis of Calibration Data Sub-Item 4.4.2, Section 4.5, Outlier Analysis:

Rejection of outliers is acceptable only if a specific, direct reason can be documented for each outlier rejected. For example, a documented tester failure would be cause for rejecting a calibration point taken with the tester when it had failed. It is not acceptable to reject outliers on the basis of statistical tests alone. Multiple passes of an outlier statistical criterion are not acceptable. An outlier test should only be used to direct attention to data points, which are then investigated for cause. Five acceptable reasons for outlier rejection, provided that they can be demonstrated, are given in the TR: data transcription errors, calibration errors, calibration equipment errors, failed instruments, and design deficiencies. Scaling or setpoint changes that are not annotated in the data record indicate unreliable data, and detection of unreliable data is not cause for outlier rejection, but may be cause for rejection of the entire data set and the filing of a licensee event report. The usual engineering technique of annotating the raw data record with the reason for rejecting it, but not obliterating the value, should be followed. The rejection of outliers typically has cosmetic effects: if sufficient data exists, it makes the results look slightly better; if insufficient data exists, it may mask a real trend. Consequently, rejection of outliers should be done with extreme caution and should be viewed with considerable suspicion by a reviewer.
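
For illustration, a minimal Python sketch of a single-pass statistical screen (a Grubbs-type test) that only flags a suspect point for cause investigation, rather than rejecting it, is shown below; the drift values and significance level are assumptions for the example.

```python
"""Illustrative single-pass outlier flag (Grubbs-type); flags, does not reject."""
import numpy as np
from scipy import stats

def flag_outlier(values, alpha=0.05):
    x = np.asarray(values, dtype=float)
    n = len(x)
    g = np.max(np.abs(x - x.mean())) / x.std(ddof=1)          # Grubbs statistic
    t_crit = stats.t.ppf(1 - alpha / (2 * n), n - 2)           # two-sided critical t
    g_crit = (n - 1) / np.sqrt(n) * np.sqrt(t_crit**2 / (n - 2 + t_crit**2))
    suspect = int(np.argmax(np.abs(x - x.mean())))
    return (g > g_crit), suspect, g, g_crit

drift = [0.05, -0.10, 0.08, 0.02, -0.04, 0.95, 0.00, -0.06]    # hypothetical data
is_suspect, idx, g, g_crit = flag_outlier(drift)
if is_suspect:
    print(f"Point {idx} (value {drift[idx]}) flagged for cause investigation "
          f"(G={g:.2f} > Gcrit={g_crit:.2f}).")
```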

NMC'S INTERPRETATION

As stated previously, it is considered acceptable to remove one outlier from an analysis based upon statistical means other than the engineering judgments mentioned in the EPRI document. No more than one outlier was removed from any drift population on the basis of being an outlier.

Significant conservatisms exist in the assumptions for extrapolation of drift values as computed per Engineering Standards Manual (ESM)-03.02-APP-III, Drift Analysis (Instrumentation and Controls) (Enclosure 4), which provide additional margin for the devices to drift. Additionally, if the removal of the data reduced the computed extrapolated drift to a value that is not consistent with the capability of the device, the improved drift-monitoring program will detect the problem and implement design activity, maintenance activity, or both to correct the problem.

NRC COMMENTS ON EPRI TR Item 4.4, Section 4, Analysis of Calibration Data Sub-item 4.4.3, Section 4.6, Verifying the Assumption of Normality:

The methods described are acceptable in that they are used to demonstrate that calibration data or results are calculated as if the calibration data were a sample of a normally distributed random variable. For example, a tolerance interval which states that there is a 95% probability that 95% of a sample drawn from a population will fall within tolerance bounds is based on an assumption of normality, or that the population distribution is a normal distribution. Because the unwarranted removal of outliers can have a significant effect on the normality test, removal of significant numbers of, or sometimes any (in small populations), outliers may invalidate this test.

NMC'S INTERPRETATION

NRC-acceptable methods from the EPRI TR were used by NMC for the Monticello analysis. As previously addressed, all drift studies involved the removal of no more than one outlier. Therefore, the normality tests are still valid. Coverage analysis was used where the normality tests did not confirm the assumption of normality. This produces a conservative model of the drift data by expanding the standard deviation to provide adequate coverage.
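
For illustration, the Python sketch below shows one plausible reading of this approach: test the drift data for normality and, if normality is not confirmed, expand the standard deviation until a nominal 95% band covers at least 95% of the observed values. The data, the choice of test, and the thresholds are assumptions for the example, not the Monticello method.

```python
"""Illustrative normality check with a coverage-based expansion of the standard deviation."""
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
drift = rng.standard_t(df=3, size=120) * 0.2        # heavy-tailed synthetic drift data

stat, p_value = stats.shapiro(drift)                # normality test
sigma = drift.std(ddof=1)
if p_value < 0.05:                                  # normality not confirmed
    centered = np.abs(drift - drift.mean())
    # Expand sigma so that +/-1.96*sigma covers at least 95% of the observed data.
    sigma = max(sigma, np.quantile(centered, 0.95) / 1.96)
print(f"normality p = {p_value:.3f}, sigma used = {sigma:.3f}")
```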

NRC COMMENTS ON EPRI TR Item 4.4, Section 4, Analysis of Calibration Data Sub-item 4.4.4, Section 4.7, Time-Dependent Drift Considerations, First through Ninth Paragraphs:

This section of the TR discusses a number of methods for detecting a time dependency in drift data, and one method of evaluating drift uncertainty time dependency. None of the methods uses a formal statistical model for instrument drift uncertainty, and all but one of them focus on drift rather than drift uncertainty.

Two conclusions are inescapable: regression analysis cannot distinguish drift uncertainty time dependency, and the slope and intercept of regression lines may be artifacts of sample size, rather than being statistically significant. Using the results of a regression analysis to rule out time dependency of drift uncertainty is circular reasoning: i.e., regression analysis eliminates time dependency of uncertainty; no time dependency is found; therefore, there is no time dependency.

NMC'S INTERPRETATION

Several different methods of evaluating the data for time dependency were used for the analysis. One method, the binning analysis, was to evaluate the standard deviations at different calibration intervals. This analysis technique is the most commonly recommended method of determining time-dependent tendencies in a given sample pool. The test consists simply of segregating the drift data into different groups (bins) corresponding to different ranges of calibration or surveillance intervals, and comparing the standard deviations for the data in the various groups. The purpose of this type of analysis is to determine if the standard deviation or mean tends to become larger as the time between calibrations increases. Simple regression lines, regression of the absolute value of drift, and the R², F, and p-value tests were generated and reviewed. Visual examinations of the scatter plot, binning plot, and both regression plots were used to assess or corroborate results. Where there was not sufficient data to perform the detailed evaluation, the data was assumed to be moderately time dependent. Whenever extrapolation of the drift value was required, drift was in all cases assumed to be at least moderately time dependent for the purposes of extrapolation, even though many of the test results showed that the drift was not time dependent.
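
For illustration, a minimal Python sketch of such a binning analysis is shown below: drift values are segregated into calibration-interval bins and the count, mean, and standard deviation of each bin are compared. The bin edges and synthetic data are assumptions for the example.

```python
"""Illustrative binning analysis: drift statistics by calibration-interval bin."""
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)
df = pd.DataFrame({
    "interval_days": rng.uniform(25, 560, size=200),
    "drift": rng.normal(0.0, 0.2, size=200),
})

bins = [0, 45, 120, 240, 420, 600]                  # roughly monthly through ~18-month bins
df["bin"] = pd.cut(df["interval_days"], bins=bins)
summary = df.groupby("bin", observed=True)["drift"].agg(["count", "mean", "std"])
print(summary)   # look for the mean or standard deviation growing with interval length
```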


NRC COMMENTS ON EPRI TR Item 4.4, Section 4, Analysis of Calibration Data Sub-item 4.4.4, Section 4.7, Time-Dependent Drift Considerations, Thirteenth and Fourteenth Paragraphs:

A model can be used either to bound or project future values for the quantity in question (drift uncertainty) for the extended intervals. An acceptable method would use standard statistical methods to show that a hypothesis (that the instruments under study have drift uncertainties bounded by the drift uncertainty predicted by a chosen model) is true with high probability. Ideally, the method should use data that include instruments that were un-reset for at least as long as the intended extended interval, or similar data from other sources for instruments of like construction and environmental usage. The use of data of appropriate time span is preferable; however, if this data is unavailable, model projection may be used provided the total projected interval is no greater than 30 months and the use of the model is justified. A follow-up program of drift monitoring should confirm that model projections of uncertainty bounded the actual estimated uncertainty. If it is necessary to use generic instrument data or constructed intervals, the chosen data should be grouped with similar grouping criteria as are applied to instruments of the plant in question, and the Student's t test should be used to verify that the generic or constructed data mean appears to come from the same population. The F test should be used on the estimate of sample variance. For a target surveillance interval constructed of shorter intervals where instrument reset did not occur, the longer intervals are statistically dependent upon the shorter intervals; hence, either the constructed longer-interval data or the shorter-interval data should be used, but not both. In a constructed interval, drift = as-left(0) - as-found(last); the intermediate values are not used.

When using samples acquired from generic instrument drift analysis or constructed intervals, the variances are not simply summed, but are combined weighted by the degrees of freedom in each sample.
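
For illustration, a minimal Python sketch of combining sample variances weighted by their degrees of freedom, as the comment above calls for when generic or constructed-interval data are mixed with plant data, is shown below; the input values are hypothetical.

```python
"""Illustrative degrees-of-freedom-weighted (pooled) combination of sample variances."""
import numpy as np

def pooled_std(samples):
    dof = np.array([len(s) - 1 for s in samples])
    var = np.array([np.var(s, ddof=1) for s in samples])
    return np.sqrt(np.sum(dof * var) / np.sum(dof))    # dof-weighted pooled variance

plant = [0.10, -0.05, 0.20, 0.00, -0.15, 0.08]         # hypothetical plant drift data
generic = [0.12, 0.02, -0.20, 0.30, -0.05, 0.18, -0.11]  # hypothetical generic data
print(f"pooled sigma = {pooled_std([plant, generic]):.3f}")
```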

NMC'S INTERPRETATION

The General Electric interval extension process was used because the General Electric setpoint methodology was used for most RPS/ECCS setpoints. NMC determined that where the drift could be proven to be time independent for the analysis period, or shown to be only slightly or moderately time dependent, the calculated drift value was extended based upon the formula:

Drift30 = Drift calculated x sqrt(30 / Drift calculated interval time)

Where there was a strong indication of time-dependent drift, the following formula was used:

Drift30 = Drift calculated x (30 / Drift calculated interval time)

The extended drift value determined using either of the above equations was verified to bound the suggested method of addressing time-dependent uncertainty contained in TR-103335-R-1, Section 9.5. This method increases the tolerance interval to the 99%/95% level instead of the standard 95%/95% level:

Drift30 = drift standard deviation x TIF99/95

NRC COMMENTS ON EPRI TR Item 4.4, Section 4, Analysis of Calibration Data Sub-item 4.4.5, Section 4.8, Shelf Life of Analysis Results:

The TR gives guidance on how long analysis results remain valid. The guidance given is acceptable with the addition that once adequate analysis and documentation is presented and the calibration interval extended, a strong feedback loop must be put into place to ensure drift, tolerance and operability of affected components are not negatively impacted. An analysis should be re-performed if its predictions turn out to exceed predetermined limits set during the calibration interval extension study. A goal during the re-performance should be to discover why the analysis results were incorrect.

The establishment of a review and monitoring program, as indicated in GL 91-04, Item 7, is crucial to determining that the assumptions made during the calibration interval extension study were true. The methodology for obtaining reasonable and timely feedback must be documented.

NMC'S INTERPRETATION

NMC is committed to establishing a trending program as discussed in this submittal to provide feedback on the acceptability of the drift error extension. This program will evaluate any as-found condition outside the expected drift range and perform a detailed analysis of as-found values outside the Allowable Value. The drift analysis will be re-performed when the root cause analysis indicates drift is a probable cause for the performance problems.

NRC COMMENTS ON EPRI TR Item 4.5, Section 5, Alternative Methods of Data Collection and Analysis:


Section 5 discusses two alternatives to as-found/as-left (AFAL) analysis: combining the 0%, 25%, 50%, 75% and 100% span calibration points, and the EPRI Instrument Calibration Reduction Program (ICRP).

Two alternatives to AFAL are mentioned: as-found/setpoint (AFSP) analysis, and worst case as-found/as-left (WCAFAL). Both AFSP and WCAFAL are more conservative than the AFAL method because they produce higher estimates of drift. Therefore, they are acceptable alternatives to AFAL drift estimation.

The combined-point method is acceptable, and in some cases preferable, if the combined value of interest is taken at the point important to the purpose of the analysis.

That is, if the instrument being evaluated is used to control the plant in an operating range, the instrument should be evaluated near its operating point. If the instrument being evaluated is employed to trip the reactor, the instrument should be evaluated near the trip point. The combined-point method should be used if the statistic of interest shows a correlation between calibration span points, thus inflating the apparent number of data points and causing an overstatement of confidence in the results. The method by which the points are combined (e.g., nearest point, interpolation, averaging) should be justified and documented.

NMC'S INTERPRETATION

Neither the AFSP nor the WCAFAL method was used by NMC at Monticello. The general process was to use the calibration point with the worst-case drift value; this provides a bounding drift value that was applied to the entire range of the device for devices with multiple calibration points.

NRC COMMENTS ON EPRI TR Item 4.6, Section 6, Guidelines for Calibration and Surveillance Interval Extension Programs:

This section presents an example analysis in support of extending the surveillance interval of reactor trip bistables from monthly to quarterly. Because these bistables exhibit little or no bias, and very small drift, the analysis example does not challenge the methodology presented in TR-103335 Section 4, and thus raises no acceptability issues related to drift analysis that have not already been covered. The bistables are also rack instruments, and thus not representative of process instruments, for which drift is a greater concern. Bistables do not produce a variable output signal that can be compared to redundant device readings by operations personnel, or during trending programs, and cannot be compared during channel checks, as redundant process instruments are. For these reasons, the data presented in Section 6 have very little relationship to use in the TR methodology for calibration interval extensions for process instruments. The binomial pass/fail methodology of Section 6.3 is acceptable as a method of complying with GL 91-04, Enclosure 2, item 1 for bistables, "Confirm that acceptable limiting values of drift have not been exceeded except in rare instances." This method provides guidance for the definition of rare instances by describing how to compute expected numbers of exceedances for an assumed instrument confidence/tolerance criterion (e.g., 95/95) for a large set of bistable data. There are other methods that would be acceptable, in particular, the χ² test for significance.

This test can be used to determine if the exceedance-of-allowable-limits frequency in the sample is probably due to chance or probably not due to chance, for a given nominal frequency (e.g., 95% of drifts do not exceed allowable limits). This provides an acceptable method of complying with GL 91-04, Enclosure 2, item 1 in the general case.
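
For illustration, the Python sketch below applies a chi-squared test of this kind, comparing an observed count of as-found values beyond allowable limits with the count expected under a 95% nominal containment frequency; the counts are assumptions for the example.

```python
"""Illustrative chi-squared check of an exceedance frequency against a 95% nominal level."""
from scipy import stats

n_surveillances = 240        # as-found checks reviewed (hypothetical)
n_exceedances = 6            # observed exceedances of the allowable limit (hypothetical)
nominal = 0.95               # assumed fraction of drifts expected within limits

expected = [(1 - nominal) * n_surveillances, nominal * n_surveillances]
observed = [n_exceedances, n_surveillances - n_exceedances]
chi2, p_value = stats.chisquare(f_obs=observed, f_exp=expected)
print(f"chi2 = {chi2:.2f}, p = {p_value:.3f}")
# A large p value suggests the observed exceedance rate is consistent with the nominal level.
```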

NMC'S INTERPRETATION

This submittal contains one group of bistables where the surveillance interval is increased from 18 to 24 months. Failure analysis was performed for the procedures involved to ensure that these tests normally pass their surveillances at the current frequency. NMC evaluated performing a Binomial Pass/Fail analysis for these bistables. However, there were no failures to meet the device As-Found criteria during all reviewed performances, and zero failures cause the test for probability and confidence interval to provide unreasonable results. NMC agrees with the statement that bistables do not produce a variable output signal that can be compared to redundant device readings by operations personnel; this does not prevent the performance of drift analysis. It is possible to trend bistable trip setpoints and perform a drift analysis based on the input value for bistable units in the same manner that drift analysis is performed for switches, since there is a change in state of the bistable based on the input value and this change in state is detectable in comparison with input value variations. NMC disagrees, however, with the statement that bistables do not produce a variable output signal that can be compared to redundant device readings or during trending programs, and cannot be compared during channel checks, as redundant process instruments are.

The readings of input values required to change bistable state and trip the channels are measured during channel functional tests and compared with historical evaluations of the same channel and redundant channels. The drift was measured by as-found/as-left data analysis in the same manner as for process instrumentation (input to output relationship changes) and was considered in the applicable setpoint analysis. The approach taken with this extension request exceeds the requirements shown in the comments above since rigorous drift analysis was performed for the bistables.

NRC COMMENTS ON EPRI TR Item 4.7, Section 7, Application to Instrument Setpoint Programs:

Section 7 is a short tutorial on combining uncertainties in instrument setpoint calculations. Figure 7-1 of this section is inconsistent with ANSI/ISA-S67.04-1994, Part I, Figure 1. Rack uncertainty is not combined with sensor uncertainty in the computation of the allowable value in the standard. The purpose of the allowable value is to set a limit beyond which there is reasonable probability that the assumptions used in the setpoint calculation were in error. For channel functional test, these assumptions normally do not include an allowance for sensor uncertainty (quarterly interval, sensor normally excluded). If a few instruments exceed the allowable value, this is probably due to instrument malfunction. If it happens frequently, the assumptions in the setpoint analysis may be wrong. Since the terminology used in Figure 7-1 is inconsistent with ANSI/ISA-S67.04-1994, Part I, Figure 1, the following correspondences are suggested:

the Nominal Trip Setpoint is the ANSI/ISA trip setpoint; ANSI/ISA value A is the difference between TR Analytical Limit and Nominal Trip Setpoint [sic]; Sensor Uncertainty is generally not included in the Allowable Value Uncertainty and would require justification; the difference between Allowable Value and Nominal Trip Setpoint is ANSI/ISA value B; the Leave-As-Is-Zone is equivalent to the ANSI/ISA value E; and the difference between System Shutdown and Nominal Trip Setpoint is the ANSI/ISA value D. Equation 7-5 (page 7-7 of the TR) combines a number of uncertainties into the drift term, D. If this is done, the reasons and the method of combination should be justified and documented. The justification should include an analysis of the differences between operational and calibration environments, including accident environments in which the instrument is expected to perform.

NMC'S INTERPRETATION

Application of the drift values to plant setpoints was performed in accordance with the GE setpoint methodology for most RPS/ECCS setpoints. The Allowable Value in the GE setpoint methodology is defined as the operability limit when performing the channel calibration. No environmental terms are considered to be included in the drift term. Environmental effects and accuracy are included between the Analytical Limit and the Allowable Value. The difference between the setpoint and the Allowable Value is the drift (AFAL). The HELB environment is used for setpoints of equipment required to be operable during a HELB, but the effect is considered in the calculation of the Allowable Value.

NRC COMMENTS ON EPRI TR Item 4.8, Section 8, Guidelines for Fuel Cycle Extensions:

The TR repeats the provisions of Enclosure 2, GL 91-04, and provides direct guidance, by reference to preceding sections of the TR, on some of them.

NMC'S INTERPRETATION

A discussion of how NMC's evaluations for the Monticello Nuclear Generating Plant meet the guidance of GL 91-04 is provided in Enclosure 1.
