ML041200298


Unit 2, License Amendment Request - Conversion of Current Technical Specifications (CTS) to Improved Technical Specifications (ITS)
ML041200298
Person / Time
Site: Cook (American Electric Power)
Issue date: 04/06/2004
From: Nazar M
Indiana Michigan Power Co
To: Document Control Desk, Office of Nuclear Reactor Regulation
References
AEP:NRC:4901
Download: ML041200298 (83)


Text

Indiana Michigan Power Company
500 Circle Drive
Buchanan, MI 49107-1373

INDIANA MICHIGAN POWER

April 6, 2004

AEP:NRC:4901
10 CFR 50.90

U. S. Nuclear Regulatory Commission
ATTN: Document Control Desk
Mail Stop O-P1-17
Washington, DC 20555-0001

SUBJECT:

Donald C. Cook Nuclear Plant, Unit 1 and Unit 2 Docket Nos. 50-315 and 50-316 License Amendment Request - Conversion of Current Technical Specifications (CTS) to Improved Technical Specifications (ITS)

Dear Sir or Madam:

Pursuant to 10 CFR 50.90, Indiana Michigan Power Company (I&M), the licensee for Donald C. Cook Nuclear Plant Unit 1 and Unit 2, proposes to amend Appendix A, Technical Specifications, of Facility Operating Licenses DPR-58 and DPR-74. I&M proposes to revise the CTS to the ITS consistent with the Improved Standard Technical Specifications (ISTS) as described in NUREG-1431, "Standard Technical Specifications - Westinghouse Plants," Revision 2, and certain generic changes to the NUREG. The guidance of NEI 96-06, "Improved Technical Specifications Conversion Guidance," dated August 1996, and Nuclear Regulatory Commission (NRC) Administrative Letter 96-04, "Efficient Adoption of Improved Standard Technical Specifications," dated October 9, 1996, was used in preparing this submittal.

The detailed descriptions and justifications to support the proposed changes are provided in the volumes attached to this letter (Attachment 1). I&M has evaluated this proposed change in accordance with 10 CFR 50.91(a)(1) using the criteria of 10 CFR 50.92(c), and has determined that this change does not involve a significant hazards consideration. In addition, I&M has determined that the proposed license amendment meets the eligibility criteria for categorical exclusion set forth in 10 CFR 51.22(b), and no environmental impact statement or environmental assessment need be prepared in connection with the proposed license amendment. The determination of no significant hazards consideration and environmental assessment are also included in Attachment 1.

The proposed changes to the CTS also include requested extensions of the Surveillance Frequency of selected Surveillance Requirements. This includes extensions of Surveillance Frequencies from 18 months to 24 months, primarily to provide cost-beneficial application of the ITS with minimal effect on safety, and to support the possibility of future extended fuel cycles. In addition, changes are proposed to the CTS for extending the Surveillance Frequency of other selected Surveillance Requirements using the same methodology, including extending Surveillance Frequencies of 7 days, 31 days, and 92 days to a 184 day Surveillance Frequency.

I&M has evaluated these proposed Surveillance Frequencies in accordance with the guidance provided in NRC Generic Letter 91-04, "Changes in Technical Specification Surveillance Intervals to Accommodate a 24-Month Fuel Cycle," dated April 2, 1991. In support of these requested Surveillance Frequency extensions, I&M has evaluated selected instrumentation that includes Allowable Values in the ITS, and has proposed revised Allowable Values when necessary to support the longer Surveillance Frequencies, as described in Attachment 1.
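For context, the connection between a longer Surveillance Frequency and an Allowable Value can be pictured with a generic setpoint-margin relationship; the symbols below are illustrative only and are not taken from the CNP setpoint methodology:

$\lvert AL - AV \rvert \;\ge\; U_{\mathrm{other}} + D(t_{\mathrm{cal}})$

where AL is the analytical limit credited in the safety analysis, AV is the Allowable Value, U_other represents the applicable instrument uncertainties other than drift, and D(t_cal) is the drift expected over the calibration interval t_cal. Because the drift term generally grows as the calibration interval is extended (for example, from 18 months to 24 months), each affected Allowable Value is re-examined against the available margin, which is why revised Allowable Values accompany some of the proposed Frequency extensions.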

The following enclosures are also included to assist the NRC in the review of this submittal:

Enclosure 1, Affirmation, provides an oath and affirmation affidavit regarding the statements made and matters set forth in this submittal.

Enclosure 2, "Contents of the Donald C. Cook Nuclear Plant (CNP) Improved Technical Specifications (ITS) Submittal," describes the organization and content of the submittal, including each of the volumes in Attachment 1.

Enclosure 3, Beyond Scope Changes, provides a list and description of the changes included in the ITS submittal that are beyond the scope of the ISTS as described in NUREG-1431, Revision 2, and also beyond the scope of the CTS.

Enclosure 4, Disposition of Generic Changes to NUREG-1431, Revision 2, lists the NRC-approved changes to NUREG-1431, Revision 2, as of October 26, 2003, and summarizes the disposition of each of these changes in the CNP license amendment request. The enclosure also lists other generic changes to NUREG-1431, Revision 2, being included in this license amendment request, which are currently under review by the NRC.

Enclosure 5, "Methodology Summary and Compliance with NRC Generic Letter 91-04," provides a summary of the methodology used in compliance with NRC Generic Letter 91-04 to justify extensions of Surveillance Frequencies for specific CTS Surveillance Requirements to either 184 days or 24 months in the ITS.

Enclosure 6, "Instrument Drift Analysis Methodology Guideline," describes the methodology used to perform drift analyses of selected instrumentation, primarily using past calibration history data, to justify extensions of Surveillance Frequencies for the performance of calibration of these instruments in the ITS.
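To make the flow of such a drift analysis concrete, the sketch below works through a simplified calculation on hypothetical calibration history data: it computes as-found minus as-left drift values, characterizes them statistically, and projects a bound to a longer interval. The data values, the linear time scaling, the 2-sigma multiplier, and the 30-month projection horizon are all assumptions for illustration and are not the CNP methodology or acceptance criteria described in Enclosure 6.

import statistics

# Hypothetical calibration history for one instrument channel, in percent of span.
# Each record pairs the as-left value recorded at the previous calibration with
# the as-found value observed at the next calibration, plus the elapsed interval.
records = [
    # (months_between_calibrations, as_left_previous, as_found_current)
    (18, 50.00, 50.12),
    (18, 50.01, 49.94),
    (17, 49.98, 50.08),
    (19, 50.02, 49.90),
]

# Observed drift over each interval: as-found at this calibration minus
# as-left at the previous calibration.
drifts = [found - left for _, left, found in records]
intervals = [months for months, _, _ in records]

mean_drift = statistics.mean(drifts)
stdev_drift = statistics.stdev(drifts)
avg_interval = statistics.mean(intervals)

# Assume drift scales roughly linearly with time and project it to a 30-month
# horizon (a 24-month Frequency plus a 25% allowance is a common evaluation period).
target_months = 30
scale = target_months / avg_interval
projected_drift_bound = abs(mean_drift) * scale + 2.0 * stdev_drift * scale

print(f"Mean observed drift: {mean_drift:+.3f}% of span over ~{avg_interval:.1f} months")
print(f"Projected drift bound (mean + 2 sigma, scaled to {target_months} months): "
      f"{projected_drift_bound:.3f}% of span")

In an actual evaluation, the pooling of calibration data, the assumed time dependence of drift, and the tolerance-interval statistics would follow the guideline summarized in Enclosure 6 rather than the simple scaling shown here.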

Enclosure 7, "Disposition of Existing License Amendment Requests," provides the disposition of other, currently docketed license amendment requests, as they relate to this license amendment request.

Enclosure 8, "Deleted License Conditions," provides a list of the current License Conditions that can be deleted upon approval of this proposed license amendment, and a brief justification for the proposed deletion.

Enclosure 9, "Commitments Met by Improved Technical Specifications (ITS) Conversion," describes previous commitments made by I&M to the NRC that are either fulfilled by the submittal of this license amendment request, or will be superseded by approval of the proposed ITS requirements.

I&M requests approval of the proposed license amendment before February 28, 2005, with an implementation period of at least 120 days, to support implementation of the ITS by the end of June 2005. After approval of the proposed license amendment, I&M will notify the NRC when ITS implementation actions are completed.

This letter contains no commitments. If you have any questions or require additional information, please contact Mr. John A. Zwolinski, Director of Design Engineering and Regulatory Affairs, at (269) 697-5007.

Sincerely,

M. K. Nazar
Senior Vice President and Chief Nuclear Officer

GEW/rdw


Enclosures:

1. Affirmation
2. Content of the Donald C. Cook Nuclear Plant (CNP) Improved Technical Specification (ITS) Submittal
3. Beyond Scope Changes
4. Disposition of Generic Changes to NUREG-1431, Revision 2
5. Methodology Summary and Compliance with NRC Generic Letter 91-04
6. Instrument Drift Analysis Methodology Guideline
7. Disposition of Existing License Amendment Requests
8. Deleted License Conditions
9. Commitments Met by Improved Technical Specifications (ITS) Conversion

Attachment:

1. ITS Submittal, Volumes 1 through 18

c: J. L. Caldwell, NRC Region III
K. D. Curry, Ft. Wayne AEP, w/o enclosures/attachment
J. T. King, MPSC, w/o enclosures/attachment
MDEQ - WHMD/HWRPS, w/o enclosures/attachment
NRC Resident Inspector
J. F. Stang, Jr., NRC Washington, DC

bc: M. J. Finissi, w/o enclosures/attachment
D. W. Jenkins, w/o enclosures/attachment
J. N. Jensen, w/o enclosures/attachment
M. K. Nazar, w/o enclosures/attachment
J. E. Newmiller, w/o attachment
D. J. Poupard, w/o attachment
M. K. Scarpello, w/o attachment
T. K. Woods, w/o attachment
J. A. Zwolinski, w/o attachment

Enclosure 1 to AEP:NRC:4901

AFFIRMATION

I, Mano K. Nazar, being duly sworn, state that I am Senior Vice President and Chief Nuclear Officer of American Electric Power Service Corporation and Vice President of Indiana Michigan Power Company (I&M), that I am authorized to sign and file this request with the Nuclear Regulatory Commission on behalf of I&M, and that the statements made and the matters set forth herein pertaining to I&M are true and correct to the best of my knowledge, information, and belief.

American Electric Power Service Corporation


M. K. Nazar
Senior Vice President and Chief Nuclear Officer

SWORN TO AND SUBSCRIBED BEFORE ME THIS ____ DAY OF ____, 2004

BRIDGET TAYLOR
Notary Public, Berrien County, Michigan
My Commission Expires Jun. 10, 2007

Enclosure 2 to AEP:NRC:4901

Contents of the Donald C. Cook Nuclear Plant (CNP) Improved Technical Specifications (ITS) Submittal

The submittal for the conversion of the current Technical Specifications (CTS) to the ITS for CNP Units 1 and 2 consists of the submittal letter with nine enclosures and one attachment that includes the following eighteen volumes:

Volume 1 - Application of Selection Criteria to the Donald C. Cook Nuclear Plant Units 1 and 2 Technical Specifications
Volume 2 - Generic Determination of No Significant Hazards Considerations and Environmental Assessment
Volume 3 - ITS Chapter 1.0, Use and Application
Volume 4 - ITS Chapter 2.0, Safety Limits
Volume 5 - ITS Section 3.0, Limiting Condition for Operation (LCO) Applicability and Surveillance Requirement (SR) Applicability
Volume 6 - ITS Section 3.1, Reactivity Control Systems
Volume 7 - ITS Section 3.2, Power Distribution Limits
Volume 8 - ITS Section 3.3, Instrumentation
Volume 9 - ITS Section 3.4, Reactor Coolant System
Volume 10 - ITS Section 3.5, Emergency Core Cooling Systems (ECCS)
Volume 11 - ITS Section 3.6, Containment Systems
Volume 12 - ITS Section 3.7, Plant Systems
Volume 13 - ITS Section 3.8, Electrical Power Systems
Volume 14 - ITS Section 3.9, Refueling Operations
Volume 15 - ITS Chapter 4.0, Design Features
Volume 16 - ITS Chapter 5.0, Administrative Controls
Volume 17 - Unit 1 CTS Markup Pages in CTS Order
Volume 18 - Unit 2 CTS Markup Pages in CTS Order

Volumes 1, 17, and 18 are provided to assist the Nuclear Regulatory Commission (NRC) in the review and approval of Volumes 2 through 16. Below is a brief description of the content of each of the volumes in this submittal.

Volume 1

Volume 1 provides details concerning the application of the selection criteria to the individual CNP CTS. Each CTS Specification is evaluated, and a determination is made as to whether or not the CTS Specification meets the criteria in 10 CFR 50.36(c)(2)(ii) for retention in the proposed ITS.

Volume 2

This volume contains the majority of the evaluations required by 10 CFR 50.91(a), which support a finding of No Significant Hazards Consideration (NSHC). Based on the inherent similarities in the NSHC evaluations, generic evaluations for a finding of NSHC have been written for the following categories of CTS changes:

Administrative Changes
More Restrictive Changes
Relocated Specifications
Removed Detail Changes
Less Restrictive Changes Category 1 - Relaxation of LCO Requirements
Less Restrictive Changes Category 2 - Relaxation of Applicability
Less Restrictive Changes Category 3 - Relaxation of Completion Time
Less Restrictive Changes Category 4 - Relaxation of Required Action
Less Restrictive Changes Category 5 - Deletion of Surveillance Requirement
Less Restrictive Changes Category 6 - Relaxation of Surveillance Requirement Acceptance Criteria
Less Restrictive Changes Category 7 - Relaxation of Surveillance Frequency, Non-24 Month Type Change
Less Restrictive Changes Category 8 - Deletion of Reporting Requirements
Less Restrictive Changes Category 9 - Surveillance Frequency Change using NRC Generic Letter (GL) 91-04 Guidelines, Non-24 Month Type Change
Less Restrictive Changes Category 10 - 18 to 24 Month Surveillance Frequency Change, Non-Channel Calibration Type
Less Restrictive Changes Category 11 - 18 to 24 Month Surveillance Frequency Change, Channel Calibration Type
Less Restrictive Changes Category 12 - Deletion of Surveillance Requirement Shutdown Performance Requirements
Less Restrictive Changes Category 13 - Addition of LCO 3.0.4 Exception
Less Restrictive Changes Category 14 - Changing Instrumentation Allowable Values

For those less restrictive changes that do not fall into one of the generic Less Restrictive Changes categories, a specific NSHC evaluation has been performed and is provided in the applicable Chapter, Section, or Specification in Volumes 3 through 16.

In addition, Volume 2 contains an evaluation of environmental consideration in accordance with 10 CFR 51.21. It has been determined that the proposed license amendment meets the eligibility criteria for categorical exclusion set forth in 10 CFR 51.22(b), and no environmental impact statement or environmental assessment need be prepared in connection with the proposed license amendment.

Volumes 3 through 16

Volumes 3 through 16 provide the details and safety analyses to support the proposed changes.

Each of these volumes corresponds to a Chapter, Section, or Specification of NUREG-1431, Revision 2. Each volume contains the required information to review the conversion to ITS, and includes the following:

  • Individual ITS Specifications in ITS Chapter (Volumes 3, 4, and 15), Section (Volume 5), or Specification (Volumes 6 through 14 and 16) order;
  • Relocated/Deleted CTS Specifications (if applicable); and
  • ISTS Specifications not adopted in the CNP ITS (if applicable).

The information for each of the above three types of Specifications is organized as follows:

CTS Markup and Discussion of Changes (DOCs) (applicable only to Individual ITS Specifications and Relocated/Deleted CTS Specifications)

This section contains a markup of the CTS pages, either for CTS pages associated with an Individual ITS Specification or for Relocated/Deleted CTS Specifications, and the DOCs from the CTS. CTS license amendment requests under NRC review that have been docketed as of March 15, 2004, may or may not have been incorporated in the proposed changes as described in Enclosure 7 of this submittal.

The CTS markup pages for each ITS Specification are normally in numerical order, with all Unit 1 pages preceding all Unit 2 pages. However, more than one CTS Specification is sometimes used in the generation of an ITS Specification. In this case, the CTS pages that are the major contributor to the ITS Specification are shown first, followed by the remaining associated CTS pages in numerical order.

The left-hand margin of the CTS markup pages includes a cross-reference to the equivalent ITS requirement. The upper right-hand corner of the CTS markup pages is annotated with the ITS Specification number to which it applies. Items on the CTS markup pages that are addressed in other proposed ITS Chapters, Sections, or Specifications are annotated with a reference to the appropriate ITS Chapter, Section, or Specification.

The CTS markup pages are annotated with an alphanumeric designator to identify the differences between the CTS and the proposed ITS. The designator corresponds to a DOC, which provides the description and justification of the change. The DOCs are located directly following the associated CTS markup for each Chapter or Section (Volumes 3, 4, 5, and 15) or each Specification (Volumes 6 through 14 and 16).

Each proposed change to the CTS is classified into one of the following categories:

A - ADMINISTRATIVE CHANGES

Changes to the CTS that do not result in new requirements or change operational restrictions or flexibility. These changes are supported in aggregate by a single generic NSHC.

M - MORE RESTRICTIVE CHANGES

Changes to the CTS that result in added restrictions or reduced flexibility. These changes are supported in aggregate by a single generic NSHC.

R - RELOCATED SPECIFICATIONS

Changes to the CTS that relocate Specifications that do not meet the selection criteria of 10 CFR 50.36(c)(2)(ii). These changes are supported in aggregate by a single generic NSHC.

LA - REMOVED DETAIL CHANGES

Changes to the CTS that eliminate detail and relocate the detail to a licensee-controlled document. Typically, this involves details of system design and function, or procedural detail on methods of conducting a Surveillance Requirement. These changes are supported in aggregate by a single generic NSHC. In addition, the generic type of removed detail change is identified in italics at the beginning of the DOC.

L - LESS RESTRICTIVE CHANGES

Changes to the CTS that result in reduced restrictions or added flexibility. These changes are supported either in aggregate by a generic NSHC that addresses a particular category of less restrictive change, or by a specific NSHC if the change does not fall into one of the fourteen categories of less restrictive changes. If the less restrictive change is covered by a generic NSHC, the category of the change is identified in italics at the beginning of the DOC.

The DOCs are numbered sequentially within each letter designator for each ITS Chapter, Section, or Specification.

The CTS Bases pages are replaced in their entirety by the proposed CNP ITS Bases, and markup pages are not provided in the ITS submittal.

ISTS Markup and Justification for Deviations (JFDs) (applicable only to Individual ITS Specifications and ISTS Specifications not adopted in the CNP ITS)

This section contains a markup of the NUREG-1431, Revision 2, ISTS pages, either for ISTS pages associated with an Individual ITS Specification or ISTS Specifications not adopted in the CNP ITS, and JFDs from the ISTS. The ISTS pages are annotated with a numeric designator to identify the differences between the ISTS and the proposed ITS. The designator corresponds to a JFD, which provides the justification for the difference. The JFDs are located directly following the associated ISTS markup for each Chapter or Section (Volumes 3, 4, 5, and 15) or each Specification (Volumes 6 through 14 and 16). The ISTS markup pages are also annotated to show the incorporation of NRC-approved generic changes (Technical Specification Task Force (TSTF) change travelers) that are applicable to the CNP ITS.

The left-hand margin of the ISTS markup pages includes a cross-reference to the equivalent CTS requirement. In addition, while CNP currently has and will continue to have two separate Technical Specifications (one for each unit), a single markup of each ISTS page is provided for CNP Units 1 and 2. Differences between the two units are annotated as follows:

  • When the words apply only to a single unit, the phrase (Unit 1 only) or (Unit 2 only) follows the applicable words; and
  • When the words are different for the two units, the words are phrased in the manner xxxx (Unit 1) and yyyy (Unit 2) with the xxxx being the Unit 1 words and yyyy being the Unit 2 words.

When the final typed ITS is approved, the words associated with Unit 1 will only be in the Unit 1 version and the words associated with Unit 2 will only be in the Unit 2 version.

ISTS Bases Markup and JFDs (applicable only to Individual ITS Specifications and ISTS Specifications not adopted in the CNP ITS)

This section contains a markup of the NUREG-1431, Revision 2, ISTS Bases pages, either for ISTS Bases pages associated with an Individual ITS Specification or ISTS Specifications not adopted in the CNP ITS, and JFDs from the ISTS Bases. The ISTS Bases pages are annotated with a numeric designator to identify the differences between the ISTS Bases and the proposed ITS Bases. The designator corresponds to a JFD, which provides the justification for the difference. The JFDs are located directly following the associated ISTS Bases markup for each Chapter or Section (Volumes 3, 4, 5, and 15) or each Specification (Volumes 6 through 14 and 16). The ISTS Bases markup pages are also annotated to show the incorporation of NRC-approved generic changes (TSTF change travelers) that are applicable to the CNP ITS Bases. The volumes for ITS Chapters 1.0, 4.0, and 5.0 do not include this section, because NUREG-1431, Revision 2, does not include any Bases for these Chapters. In addition, while CNP currently has and will continue to have two separate Technical Specifications (one for each unit), a single markup of each ISTS Bases page is provided for CNP Units 1 and 2. Differences between the two units are annotated as follows:

  • When the words apply only to a single unit, the phrase (Unit 1 only) or (Unit 2 only) follows the applicable words; and
  • When the words are different for the two units, the words are phrased in the manner xxxx (Unit 1) and yyyy (Unit 2) with the xxxx being the Unit 1 words and yyyy being the Unit 2 words.

When the final typed ITS Bases is issued, the words associated with Unit 1 will only be in the Unit 1 version and the words associated with Unit 2 will only be in the Unit 2 version.

Determination of NSHC (applicable only to Individual ITS Specifications and Relocated/Deleted CTS Specifications)

This section contains the determination in accordance with 10 CFR 50.91(a)(1) using the criteria of 10 CFR 50.92(c) to support a finding of NSHC. For those changes covered by a generic NSHC, those generic NSHCs are located in Volume 2. For those less restrictive changes that do not fall into one of the generic less restrictive categories, a specific NSHC evaluation has been performed. Each evaluation is annotated to correspond to the DOC discussed in the NSHC. For those ITS Chapters, Sections, or Specifications for which the less restrictive DOCs all fall into a generic category, a statement that there are no specific NSHCs is provided.

Volumes 17 and 18

Volumes 17 and 18 contain copies (from Volumes 3 through 16) of the Unit 1 and Unit 2 CTS markup pages that have been annotated to show the differences between the CTS and the proposed ITS. These volumes are organized in CTS page order, and have been prepared to facilitate NRC review efforts. They also demonstrate that all CTS requirements have been appropriately dispositioned. In some instances, the same CTS page is used in different ITS Chapters, Sections, or Specifications. Consequently, the CTS markup pages that are included in more than one ITS Chapter, Section, or Specification will appear in the order in which the CTS markup pages appear in Volumes 3 through 16.

Enclosure 3 to AEP:NRC:4901

Beyond Scope Changes

Beyond Scope Changes are those changes included in the Improved Technical Specifications (ITS) conversion submittal that are beyond the scope of the Improved Standard Technical Specifications (ISTS) as described in NUREG-1431, Revision 2, and also beyond the scope of the Donald C. Cook Nuclear Plant (CNP) current Technical Specifications (CTS). The following is a list of the discussions of changes (DOCs) in Attachment 1 that are Beyond Scope Changes in the CNP Units 1 and 2 ITS conversion submittal.

1. The Surveillance Frequencies for the following CHANNEL CALIBRATION Surveillance Requirements (SRs) are being changed from 18 months in the CTS to either 31 days or 184 days in the ITS:

a) ITS 3.3.1, DOC M.16: CTS Table 4.3-1, Functional Unit 16 (Undervoltage - Reactor Coolant Pumps (RCPs)) requires the performance of a CHANNEL CALIBRATION every 18 months. ISTS Table 3.3.1-1 Function 12 (Undervoltage RCPs) requires the performance of CHANNEL CALIBRATION every [18] months (ISTS SR 3.3.1.10). ITS Table 3.3.1-1 Function 12 (Undervoltage RCPs) requires the performance of CHANNEL CALIBRATION every 184 days (ITS SR 3.3.1.12). This changes the CTS and ISTS by changing the Surveillance Frequencies from 18 months to 184 days.

b) ITS 3.3.2, DOC M.10: CTS Table 4.3-2, Functional Unit 6.b (Motor Driven Auxiliary Feedwater (AFW) Pumps - 4 kV Bus Loss of Voltage) and Functional Unit 7.b (Turbine Driven AFW Pump - Reactor Coolant Pump Bus Undervoltage) require the performance of a CHANNEL CALIBRATION every 18 months. ISTS Table 3.3.2-1 Function 6.e (Auxiliary Feedwater - Loss of Offsite Power) and Function 6.f (Auxiliary Feedwater - Undervoltage Reactor Coolant Pump) require the performance of a CHANNEL CALIBRATION every [18] months (ISTS SR 3.3.2.9). ITS Table 3.3.2-1 Function 6.e (Auxiliary Feedwater - Loss of Voltage) and Function 6.f (Auxiliary Feedwater - Undervoltage Reactor Coolant Pump) require the performance of a CHANNEL CALIBRATION every 184 days (ITS SR 3.3.2.7). This changes the CTS and ISTS by changing the Surveillance Frequencies from 18 months to 184 days.

c) ITS 3.3.5, DOC M.2: CTS Table 4.3-2 requires a CHANNEL CALIBRATION of the Loss of Voltage and Degraded Voltage instrumentation every 18 months. ISTS SR 3.3.5.3 requires the performance of a CHANNEL CALIBRATION for the Degraded Voltage Function and the Loss of Voltage Function every [18] months. ITS SR 3.3.5.3 requires the performance of a CHANNEL CALIBRATION for the Degraded Voltage Function every 31 days and ITS SR 3.3.5.5 requires the performance of a CHANNEL CALIBRATION for the Loss of Voltage Function every 184 days. This changes the CTS and ISTS by changing the Surveillance Frequencies from 18 months to either 31 days or 184 days.


2. The following Allowable Value changes are the result of extending the CHANNEL CALIBRATION Surveillance Frequency from 18 months to 24 months:

a) ITS 3.3.1, DOC M.17: CTS Table 2.2-1 provides the Allowable Values for Functional Unit 8 (Overpower ΔT) (Unit 2 only), Functional Unit 9 (Pressurizer Pressure - Low) (Unit 1 only), Functional Unit 12 (Loss of Flow), Functional Unit 13 (Steam Generator Water Level - Low Low) (Unit 2 only), Functional Unit 14 (Steam/Feedwater Flow Mismatch and Steam Generator Water Level - Low) (Steam Generator Water Level - Low portion only is covered by this change) (Unit 2 only), and Functional Unit 16 (Underfrequency - Reactor Coolant Pumps) (Unit 1 only). ISTS Table 3.3.1-1 provides the Allowable Values in brackets for the equivalent Reactor Trip System (RTS) Instrumentation Functions. ITS Table 3.3.1-1 provides Allowable Values that are more restrictive than the CTS Allowable Values for the equivalent RTS Instrumentation Functions 7, 8.a, 10, 13, 14, and 15.

b) ITS 3.3.1, DOC L.19: CTS Table 2.2-1 provides the Allowable Values for Functional Unit 7 (Overtemperature ΔT), Functional Unit 8 (Overpower ΔT) (Unit 1 only), Functional Unit 9 (Pressurizer Pressure - Low) (Unit 2 only), Functional Unit 10 (Pressurizer Pressure - High), Functional Unit 11 (Pressurizer Water Level - High), Functional Unit 13 (Steam Generator Water Level - Low Low) (Unit 1 only), Functional Unit 14 (Steam/Feedwater Flow Mismatch and Steam Generator Water Level - Low) (Steam Generator Water Level - Low portion only is covered by this change) (Unit 1 only), and Functional Unit 16 (Underfrequency - Reactor Coolant Pumps) (Unit 2 only). ISTS Table 3.3.1-1 provides the Allowable Values in brackets for the equivalent RTS Instrumentation Functions. ITS Table 3.3.1-1 provides Allowable Values that are less restrictive than the CTS Allowable Values for the equivalent RTS Instrumentation Functions 6, 7, 8.a, 8.b, 9, 13, 14, and 15.

c) ITS 3.3.2, DOC M.11: CTS Table 3.3-4 provides the Allowable Values for Functional Unit 1.c (Safety Injection Containment Pressure - High), Functional Unit 1.f (Safety Injection Steam Line Pressure - Low) (Unit 1 only), Functional Unit 2.c (Containment Spray - Containment Pressure - High High), Functional Unit 3.b.3 (Containment Isolation Phase "B" Containment Pressure - High High), Functional Unit 4.c (Steam Line Isolation Containment Pressure - High High), Functional Unit 4.e (Steam Line Isolation Steam Line Pressure - Low) (Unit 1 only), Functional Unit 6.a (Motor Driven Auxiliary Feedwater Pumps Steam Generator Water Level - Low Low) (Unit 2 only), Functional Unit 7.a (Turbine Driven Auxiliary Feedwater Pumps Steam Generator Water Level - Low Low) (Unit 2 only), and Functional Unit 10.c (Containment Pressure - High). ISTS Table 3.3.2-1 provides the Allowable Values in brackets for the equivalent Engineered Safety Feature Actuation System (ESFAS) Instrumentation Functions. ITS Table 3.3.2-1 provides Allowable Values that are more restrictive than the CTS Allowable Values for the equivalent ESFAS Instrumentation Functions 1.c, 1.e.(1), 2.c, 3.b.(3), 4.c, 4.d, 6.c, and 7.c.

d) ITS 3.3.2, DOC L.22: CTS Table 3.3-4 provides the Allowable Values for Functional Unit 1.d (Pressurizer Pressure - Low), Functional Unit 1.f (Steam Line Pressure - Low) (Unit 2 only), Functional Unit 4.d (Steam Line Isolation Steam Flow in Two Steam Lines - High Coincident with Tavg - Low Low) (Tavg - Low Low portion only is covered by this change), Functional Unit 4.e (Steam Line Isolation Steam Line Pressure - Low) (Unit 2 only), Functional Unit 5.a (Turbine Trip and Feedwater Isolation Steam Generator Water Level - High High) (Unit 2 only), Functional Unit 6.a (Motor Driven Auxiliary Feedwater Pumps Steam Generator Water Level - Low Low) (Unit 1 only), and Functional Unit 7.a (Turbine Driven Auxiliary Feedwater Pumps Steam Generator Water Level - Low Low) (Unit 1 only). CTS Table 3.3-3 provides the Setpoint (i.e., Allowable Value) for the P-12 Interlock (Tavg - Low Low). ISTS Table 3.3.2-1 provides the Allowable Values in brackets for the equivalent ESFAS Instrumentation Functions including the P-12 Interlock. ITS Table 3.3.2-1 provides Allowable Values that are less restrictive than the CTS Allowable Values for the equivalent ESFAS Instrumentation Functions 1.d, 1.e.(1), 4.d, 4.e, 5.b, 6.c, and 8.c.

3. The following Surveillance Frequencies are being changed from 7 days, 31 days, or 92 days to 184 days, using the guidelines of NRC Generic Letter 91-04:

a) ITS 3.3.1, DOC L.18: CTS Table 4.3-1 requires a CHANNEL FUNCTIONAL TEST of Functional Units 6 (Source Range Neutron Flux), 16 (Undervoltage - Reactor Coolant Pumps), and 17 (Underfrequency - Reactor Coolant Pumps) instrumentation every 31 days. ISTS SR 3.3.1.8 requires the performance of a CHANNEL OPERATIONAL TEST for the Source Range Neutron Flux instrumentation prior to reactor startup and four hours after reducing power below the P-6 Interlock and every 92 days thereafter, and ISTS SR 3.3.1.9 requires the performance of a TRIP ACTUATING DEVICE OPERATIONAL TEST for the Undervoltage RCPs and Underfrequency RCPs instrumentation every [92] days. ITS SR 3.3.1.10 requires the performance of a CHANNEL OPERATIONAL TEST for the Source Range Neutron Flux instrumentation every 184 days, and ITS SR 3.3.1.11 requires the performance of a TRIP ACTUATING DEVICE OPERATIONAL TEST for the Undervoltage RCPs and Underfrequency RCPs instrumentation every 184 days. This changes the CTS and ISTS Surveillance Frequencies to 184 days.

b) ITS 3.3.2, DOC L.19: CTS Table 4.3-2 requires a CHANNEL FUNCTIONAL TEST of the Motor Driven Auxiliary Feedwater Pumps - 4 kV Bus Loss of Voltage and the Turbine Driven Auxiliary Feedwater Pump - Reactor Coolant Pump Bus Undervoltage instrumentation every 31 days. ISTS SR 3.3.2.7 requires the performance of a TRIP ACTUATING DEVICE OPERATIONAL TEST for the Auxiliary Feedwater - Loss of Offsite Power and Auxiliary Feedwater - Undervoltage Reactor Coolant Pump instrumentation every [92] days. ITS SR 3.3.2.6 requires the performance of a TRIP ACTUATING DEVICE OPERATIONAL TEST for the Auxiliary Feedwater - Loss of Voltage and Auxiliary Feedwater - Undervoltage Reactor Coolant Pump instrumentation every 184 days. This changes the CTS and ISTS Surveillance Frequencies to 184 days.

c) ITS 3.3.5 DOC L.5: CTS Table 4.3-2 requires a CHANNEL FUNCTIONAL TEST of the Loss of Voltage instrumentation every 31 days. ISTS SR 3.3.5.2 requires the performance of a TRIP ACTUATING DEVICE OPERATIONAL TEST for the Loss of Voltage Function every [31] days. ITS SR 3.3.5.4 requires the performance of a TRIP ACTUATING DEVICE OPERATIONAL TEST for the Loss of Voltage Function every 184 days. This changes the CTS and ISTS Surveillance Frequencies to 184 days.

d) ITS 3.3.6, DOC L.9: CTS Table 4.3-2 requires a CHANNEL FUNCTIONAL TEST of the Containment Radioactivity - High Function instrumentation every 92 days, and CTS Table 4.3-3 requires a CHANNEL FUNCTIONAL TEST of the containment area radiation, particulate, and noble gas channels every 92 days. CTS 4.9.9 states that the Containment Purge and Exhaust Isolation System shall be demonstrated OPERABLE, in part, once per 7 days during the specified conditions. ISTS SR 3.3.6.4 requires, for the Containment Radiation Functions of the Containment Purge and Exhaust Isolation Instrumentation, the performance of a CHANNEL OPERATIONAL TEST once per 92 days. ITS SR 3.3.6.6 requires, for the Containment Radiation Functions of the Containment Purge Supply and Exhaust System isolation instrumentation, the performance of a CHANNEL OPERATIONAL TEST once per 184 days. This changes the CTS and ISTS Surveillance Frequencies to 184 days.

e) ITS 3.4.15, DOC L.8: CTS Table 4.3-3 requires a CHANNEL FUNCTIONAL TEST of the particulate and noble gas channels every 92 days. ISTS SR 3.4.15.2 requires the performance of a CHANNEL OPERATIONAL TEST of the required containment atmosphere radioactivity monitors every 92 days. ITS SR 3.4.15.2 requires the performance of a CHANNEL OPERATIONAL TEST of the required containment atmosphere radioactivity monitors every 184 days. This changes the CTS and ISTS Surveillance Frequencies to 184 days.

f) ITS 3.6.9, DOC L.3: CTS 4.6.4.3.a requires energizing the supply breakers and verifying at least 34 igniters per train are energized, and CTS 4.6.4.3.b requires verifying at least one hydrogen igniter per train is OPERABLE in each containment region. These tests are required every 92 days. ISTS SR 3.6.10.1 and SR 3.6.10.2 require the performance of similar Surveillances every 92 days. ITS SR 3.6.9.1 and SR 3.6.9.2 require the performance of similar Surveillances (as modified by ITS 3.6.9, DOC L.1), but at a Frequency of 184 days. This changes the CTS and ISTS Surveillance Frequencies to 184 days.

g) ITS 3.7.10, DOC L.3: CTS 4.7.5.1.b requires the Control Room Emergency Ventilation trains be demonstrated OPERABLE at least once per 31 days on a STAGGERED TEST BASIS by initiating flow through the HEPA filter and charcoal adsorber train and verifying that the system operates for at least 15 minutes. ISTS SR 3.7.10.1 requires the performance of a similar Surveillance at a Frequency of 31 days. ITS SR 3.7.10.1 requires the performance of a similar Surveillance, but at a Frequency of 184 days. This changes the CTS and ISTS Surveillance Frequencies to 184 days.

h) ITS 3.7.12, DOC L.3: CTS 4.7.6.1.a requires the Engineered Safety Features Ventilation System trains be demonstrated OPERABLE at least once per 31 days on a STAGGERED TEST BASIS by initiating, from the control room, flow through the HEPA filter and charcoal adsorber train and verifying the train operates for at least 15 minutes. ISTS SR 3.7.12.1 requires the performance of a similar Surveillance at a Frequency of 31 days. ITS SR 3.7.12.1 requires the performance of a similar Surveillance, but at a Frequency of 184 days. This changes the CTS and ISTS Surveillance Frequencies to 184 days.

i) ITS 3.7.13, DOC L.5: CTS 4.9.12.a states that the required Fuel Handling Area Exhaust Ventilation System shall be demonstrated OPERABLE at least once per 31 days by initiating flow through the HEPA filter and charcoal adsorber train and verifying that the train operates for at least 15 minutes. ISTS SR 3.7.13.1 requires the performance of a similar Surveillance at a Frequency of 31 days. ITS SR 3.7.13.2 requires the performance of a similar Surveillance, but at a Frequency of 184 days. This changes the CTS and ISTS Surveillance Frequencies to 184 days.

4. ITS 3.3.2, DOC L.20: CTS Table 3.3-3, Functional Unit 9.a (Safety Injection, Manual Initiation) requires a total of two channels per train to be OPERABLE. ISTS Table 3.3.2-1, Function 1.a requires two channels to be OPERABLE. ITS Table 3.3.2-1, Function 1.a requires only one channel per train to be OPERABLE. This changes the CTS and ISTS by decreasing the number of manual channels required OPERABLE to one per train.
5. ITS 3.3.6, DOC L.10: CTS Table 3.3-3, Functional Units 9.b and 9.c (Manual Containment Purge and Exhaust Isolation) require a total of 2 channels per train to be OPERABLE (1 channel per train for Functional Unit 9.b, and 1 channel per train for Functional Unit 9.c). ISTS Table 3.3.6-1, Function 1 (Manual Initiation) requires two channels to be OPERABLE. ITS Table 3.3.6-1, Function 1 (Manual Initiation) requires only one channel per train to be OPERABLE. This changes the CTS and ISTS by decreasing the number of manual channels required OPERABLE to one per train.

6. ITS 3.3.6, DOC L.11: CTS Table 4.3-3 footnote * requires performance of a SOURCE CHECK as part of the once per shift CHANNEL CHECK requirements for Containment Radiation instrumentation (Instruments 2.A.i, 2.A.ii, 2.A.iii, 2.B.i, 2.B.ii, and 2.B.iii). ISTS SR 3.3.6.1 requires performance of a CHANNEL CHECK for this instrumentation, but does not require a SOURCE CHECK. In addition, the SOURCE CHECK requirement was not included in NUREG-0452, Standard Technical Specifications for Westinghouse Pressurized Water Reactors, Revision 4, (the predecessor to the ISTS) so that the CTS requirement was beyond scope of both Technical Specification standards. ITS 3.3.6 does not include this requirement. This changes the CTS by deleting the once per shift SOURCE CHECK requirement on the Containment Radiation instrumentation.
7. ITS 3.4.6, DOC L.1: CTS Limiting Condition for Operation (LCO) 3.4.1.3.c requires at least three reactor loops to be in operation when the reactor trip breakers are in the closed position and the control rod drive system is capable of rod withdrawal. CTS 3.4.1.3 Action b specifies the compensatory actions for less than the number of required OPERABLE or operating coolant loops specified in CTS LCO 3.4.1.3.c. ISTS LCO 3.4.6 requires two loops consisting of any combination of Reactor Coolant System (RCS) loops and residual heat removal (RHR) loops to be OPERABLE, and one loop to be in operation. In addition, the CTS requirement was not included in NUREG-0452, Revision 4, so that the CTS requirement was beyond scope of both Technical Specification standards. ITS LCO 3.4.6 requires two loops consisting of any combination of RCS loops and RHR loops to be OPERABLE, and one loop to be in operation. This changes the CTS by deleting more restrictive coolant loop requirements based on the status of the Rod Control System.
8. The requirement to specifically state the required water level as referenced to a specific point inside the steam generators instead of using a specific indication from one instrument is being changed as follows:

a) ITS 3.4.6, DOC L.5: CTS 4.4.1.3.3 states that the required steam generator(s) shall be determined OPERABLE by verifying secondary side water level is greater than or equal to 76% of wide range instrument span. ISTS SR 3.4.6.2 requires verification that steam generator secondary side water levels are > [17]% for required Reactor Coolant System loops. ITS SR 3.4.6.2 requires verification that the steam generator secondary side water levels are above the top of the U-tubes for the required Reactor Coolant System loops.

b) ITS 3.4.7, DOC L.3: CTS 3.4.1.4.b states that the secondary side water level of at least two steam generators shall be greater than or equal to 76% of wide range instrument span. ISTS LCO 3.4.7.b requires the secondary side water level of at least [two] steam generators to be > [17]%. ITS LCO 3.4.7.b requires the secondary side water level of at least two steam generators to be above the top of the U-tubes.

9. The following changes are made consistent with the condition for application of leak-before-break methodology to the pressurizer surge line for Unit 1 as documented in a Letter from Indiana Michigan Power Company (M. W. Rencheck) to the NRC dated October 26, 2000 (Letter C1000-20):

a) (Unit 1 only) ITS 3.4.13, DOC M.1: CTS 3.4.6.2.b states that the Reactor Coolant System leakage shall be limited to 1 gpm UNIDENTIFIED LEAKAGE. CTS 3.4.6.2 Action b allows 4 hours to reduce leakage to within limits with any RCS leakage greater than any one of the limits, excluding PRESSURE BOUNDARY LEAKAGE. ISTS LCO 3.4.13.b states that RCS operational LEAKAGE shall be limited to 1 gpm unidentified LEAKAGE. ISTS 3.4.13 ACTION A states that if RCS LEAKAGE is not within limits for reasons other than pressure boundary LEAKAGE, to reduce LEAKAGE to within limits in 4 hours. Unit 1 ITS LCO 3.4.13.b states that the RCS unidentified LEAKAGE limit for Unit 1 is 0.8 gpm. Unit 1 ITS 3.4.13 ACTION A states that if the unidentified leakage is > 0.8 gpm, to verify the source of unidentified LEAKAGE is not the pressurizer surge line or to reduce unidentified LEAKAGE to within limit in 4 hours. Unit 1 ITS 3.4.13 ACTION B states that if unidentified LEAKAGE is > 1.0 gpm, to reduce unidentified LEAKAGE to within limit within 4 hours. This changes the Unit 1 CTS and ISTS by decreasing the unidentified LEAKAGE limit from 1 gpm to 0.8 gpm, and providing additional Actions if the unidentified LEAKAGE is not within the new 0.8 gpm limit but < 1.0 gpm.

b) (Unit 1 only) ITS 3.4.15, DOC M.2: CTS 3.4.6.1 Action requires a grab sample of the containment atmosphere to be obtained and analyzed at least once per 24 hours when the required gaseous and/or particulate radioactivity monitoring channels are inoperable. ISTS 3.4.15 Required Action B.1.1 requires an analysis of grab samples of the containment atmosphere once per 24 hours when the required containment atmosphere radioactivity monitor is inoperable. Unit 1 ITS 3.4.15 Required Action B.1.1 requires the same requirement at a 12 hour Frequency when no containment atmosphere particulate radioactivity monitoring channels are OPERABLE. This changes the Unit 1 CTS and ISTS by adding the requirement to analyze grab samples of the containment atmosphere every 12 hours instead of every 24 hours.

10. ITS 3.5.5, DOC M.1: CTS 3.4.6.2.e Applicability Footnote * states that Specification 3.4.6.2.e is applicable with average pressure within 20 psi (i.e., ± 20 psi) of the nominal full pressure value. CTS 4.4.6.2.1.c states that the seal line resistance shall be determined when the average pressurizer pressure is within 20 psi (i.e., ± 20 psi) of its nominal full pressure value. ISTS SR 3.5.5.1 Note states that the Surveillance is not required until 4 hours after the Reactor Coolant System pressure stabilizes at > [2215 psig and < 2255 psig] (i.e., ± 20 psi of nominal full pressure). The ITS SR 3.5.5.1 Note states that the Surveillance is not required to be performed until 4 hours after the pressurizer pressure stabilizes at > 2075 psig and < 2095 psig (Unit 1), and > 2225 psig and < 2245 psig (Unit 2) (i.e., ± 10 psi of nominal full pressure). This changes the CTS and ISTS by decreasing the pressure band from ± 20 psi to ± 10 psi. In addition, CTS 4.4.6.2.1.c provides a pressure constant, PSI, to be used in the calculation of seal line resistance. The values for this constant (two values for Unit 1 and one value for Unit 2), which are moved to the Bases as described in ITS 3.5.5 DOC LA.2, have been increased. This results in a decrease in the calculated seal line resistance at any given charging pump pressure. This changes the CTS by increasing the pressure constant value, resulting in a decrease in the calculated seal line resistance flow.

11. ITS 3.6.14, DOC L.2: CTS 3.6.5.8 states that "The refueling canal drains shall be OPERABLE." In this case, since there are three installed refueling canal drains, all three must be OPERABLE. ISTS LCO 3.6.18 states the refueling canal drains shall be OPERABLE, and does not provide an explicit allowance to have less than the total number of installed refueling canal drains OPERABLE. ITS LCO 3.6.14 states two refueling canal drains shall be OPERABLE. This changes the CTS and ISTS by only requiring two of the three refueling canal drains to be OPERABLE. In addition, due to this change, the word "required" has been added to the Actions and the Surveillance Requirements since not all installed refueling drains are required to be OPERABLE.
12. ITS 3.7.6, DOC M.1: CTS 3.7.1.3 requires the Condensate Storage Tank (CST) to be OPERABLE with a minimum contained volume of 175,000 gallons of water. ISTS LCO 3.7.6 requires the CST to be OPERABLE, and ISTS SR 3.7.6.1 requires the CST level to be verified to be ≥ [110,000] gallons. ITS LCO 3.7.6 requires the CST to be OPERABLE, and ITS SR 3.7.6.1 requires the CST volume to be verified to be ≥ 182,000 gallons. This changes the CTS and ISTS by increasing the CST volume requirements.
13. ITS 3.7.8, DOC M.3: CTS 3.7.4.1 Action b states that with the opposite unit in MODE 1, 2, 3, or 4 and any unit Essential Service Water (ESW) pump inoperable, at least one crosstie valve on the associated header must be closed within 1 hour or the opposite unit ESW train must be declared inoperable and the appropriate action in the opposite unit's CTS 3.7.4.1 must be taken. ISTS 3.7.8 does not address the possibility of multiple units sharing a Service Water System through crosstie valves. The ITS does not include the allowance to delay declaring inoperable the opposite unit ESW train for 1 hour. ITS 3.7.8 requires an immediate declaration of inoperability of the opposite unit ESW train and to immediately take the Actions required by ITS 3.7.8 ACTION A. This changes the CTS by deleting the 1 hour allowance to delay declaring inoperable the opposite unit ESW train, and adds requirements not included in the ISTS to address the opposite unit ESW train.
14. ITS 3.7.11, DOC M.2: CTS 4.7.5.2 states "The control room air conditioning system shall be demonstrated OPERABLE at least once per 12 hours by verifying that the control room air temperature is less than or equal to 95°F." However, the CTS does not preclude the Surveillance from being performed with both control room air conditioning (CRAC) trains in operation and does not require this verification separately for each of the CRAC trains. Therefore, the CTS Surveillance can be satisfied regardless of how many CRAC trains are in operation. ISTS SR 3.7.11.1 requires verification that each Control Room Emergency Air Temperature Control System train has the capability to remove the assumed heat load every [18] months, and does not include any requirements to verify the control room air temperature. ITS SR 3.7.11.1 requires the 12 hour Surveillance to be performed using only one of the two CRAC trains in operation, and requires the temperature to be < 85°F. ITS SR 3.7.11.2 requires verification that each CRAC train can maintain control room air temperature < 85°F every 31 days. This changes the CTS by ensuring only one CRAC train is in operation and changing the temperature limit from 95°F to 85°F during the 12 hour Surveillance, and by adding a specific requirement to verify that each CRAC train can maintain control room air temperature < 85°F every 31 days, and changes the ISTS by adding requirements to verify control room air temperature.


15. ITS 3.7.13, DOC M.1: CTS LCO 3.9.12 requires the spent fuel storage pool exhaust ventilation system to be OPERABLE. CTS 3.9.12 Action a specifies the requirements when no spent fuel storage pool exhaust ventilation system is OPERABLE. CTS 4.9.12.d.3 requires verification that the spent fuel storage pool exhaust ventilation system automatically directs its exhaust flow through the charcoal adsorber banks and automatically shuts down the storage pool ventilation system supply fans. ISTS 3.7.13 requires two Fuel Area Building Cleanup System trains to be OPERABLE, and does not require either train to be operating. ITS 3.7.13 requires one Fuel Handling Area Exhaust Ventilation (FHAEV) train to be OPERABLE and in operation. ITS 3.7.13 ACTION A specifies the compensatory actions for a required FHAEV train that is not in operation. ITS SR 3.7.13.1 requires the verification that the required FHAEV train is operating every 12 hours. ITS SR 3.7.13.4 requires verification that the required FHAEV train actuates on an actual or simulated actuation signal. This changes the CTS and ISTS by adding the requirement that the required FHAEV train must be in operation, adding an ACTION to take if the required FHAEV train is not in operation (ITS 3.7.13 ACTION A), adding a new SR to periodically verify the required FHAEV train is in operation, and deleting a SR to verify the train automatically directs its exhaust flow through the charcoal adsorber banks on an actuation signal.
16. ITS 3.8.1, DOC M.5: CTS 4.8.1.1.2.a.4, the normal Diesel Generator (DG) start test, requires a verification that each DG starts from standby conditions and achieves, in less than or equal to 10 seconds, a voltage of 4160 ± 420 V and a frequency of 60 ± 1.2 Hz. CTS 4.8.1.1.2.a.4 footnote * clarifies that the DG start (10 seconds) from standby conditions shall be performed at least once per 184 days in these surveillance tests. All other engine starts for the purpose of this Surveillance testing and compensatory action may be at reduced acceleration rates as recommended by the manufacturer so that mechanical stress and wear on the DG are minimized. CTS 4.8.1.1.2.e.2, the single largest load rejection test, requires the verification of the generator capability to reject a load greater than or equal to the specified value while maintaining voltage at 4160 ± 420 V and frequency at 60 ± 1.2 Hz. CTS 4.8.1.1.2.e.4, the simulated loss of offsite power test, and CTS 4.8.1.1.2.e.6, the simulated loss of offsite power test in conjunction with a Safety Injection signal test, also specify a steady-state voltage of 4160 ± 420 V and frequency of 60 ± 1.2 Hz. CTS 4.8.1.1.2.e.7 requires the performance of CTS 4.8.1.1.2.a.4 within 5 minutes after performing the 8 hour test (commonly called a hot restart test). ISTS SR 3.8.1.2, SR 3.8.1.7, SR 3.8.1.9, SR 3.8.1.11, SR 3.8.1.12, SR 3.8.1.15, SR 3.8.1.19, and SR 3.8.1.20 all include steady-state voltage limits of > [3740] V and < [4580] V, and steady-state frequency limits of > [58.8] Hz and < [61.2] Hz. CTS 4.8.1.1.2.a.4 is divided into three Surveillances in the ITS. ITS SR 3.8.1.2 requires the verification that each DG starts from standby conditions and achieves steady-state voltage of > 3910 V and < 4400 V and frequency of > 59.4 Hz and < 61.2 Hz. ITS SR 3.8.1.2 Note 2 specifies that the modified DG start involving gradual acceleration to synchronous speed may be used for this SR as recommended by the manufacturer. ITS SR 3.8.1.8, the 184 day quickstart test, and SR 3.8.1.16, the 24 month hot restart test, require a steady-state voltage of > 3910 V and < 4400 V and a steady-state frequency of > 59.4 Hz and < 61.2 Hz. ITS SR 3.8.1.10, the single largest load rejection test, requires the verification that within 2 seconds following load rejection voltage is > 3910 V and < 4400 V and frequency is > 59.4 Hz and < 61.2 Hz. ITS SR 3.8.1.12, the loss of offsite power test, and SR 3.8.1.19, the loss of offsite power test in conjunction with an ESF signal, also require verification of the same limitations for steady-state voltage and frequency. This changes the CTS and ISTS in that the steady-state voltage range has been reduced from 4160 ± 420 V to 4160 +240 V, -250 V, and the steady-state frequency range has been reduced from 60 ± 1.2 Hz to 60 +1.2 Hz, -0.6 Hz.

17. ITS 3.8.1, DOC L.19: CTS 4.8.1.1.2.a.3 requires that the fuel transfer pump can be started and that it transfers fuel from the storage system to the day tank. The test Frequency for this Surveillance is in accordance with the frequency specified in Table 4.8-1 (the Diesel Generator (DG) Test Schedule Table) on a STAGGERED TEST BASIS. The nominal test Frequency in CTS Table 4.8-1 is 31 days. ISTS SR 3.8.1.6 requires the verification that the fuel oil transfer system operates to [automatically] transfer fuel oil from storage tank[s] to the day tank [and engine mounted tank] every [92] days. ITS SR 3.8.1.6 requires the verification that the fuel oil transfer system operates to transfer fuel oil from the storage tank to the day tank every 92 days. This changes the CTS by deleting the requirement to perform this Surveillance in accordance with the DG Test Schedule Table, and changes the nominal test Frequency to 92 days.
18. ITS 3.8.1, DOC L.20: CTS 4.8.1.1.2.e.10 requires verifying that with the DG operating in a test mode while connected to its test load, a simulated Safety Injection signal overrides the test mode by returning the DG to standby operation and ensuring the emergency loads remain powered by offsite power. ISTS SR 3.8.1.17 requires verification that [with a DG operating in test mode and connected to its bus, an actual or simulated Engineered Safety Feature actuation signal overrides the test mode by returning the DG to ready-to-load operation and [automatically energizing the emergency load from offsite power] every [18] months]. The ITS does not include this SR. This changes the CTS and ISTS by deleting this SR.

19. ITS 3.8.1, DOC L.21: CTS 3.8.1.1 Action b specifies the compensatory actions for one inoperable DG, and CTS 3.8.1.1 Action c specifies the compensatory actions for one inoperable offsite circuit and one inoperable DG. The Actions include a requirement to demonstrate the OPERABILITY of the remaining OPERABLE DG by performing Surveillance Requirement 4.8.1.1.2.a.4 within 8 hours, unless the absence of any potential common mode failure for the remaining DG is demonstrated. ISTS 3.8.1 Required Actions B.3.1 and B.3.2 allow [24] hours to determine OPERABLE DG(s) is not inoperable due to common cause failure or to perform SR 3.8.1.2 for OPERABLE DGs. ITS 3.8.1 Required Actions B.3.1 and B.3.2 allow 12 hours to perform similar checks on the remaining OPERABLE DGs. This changes the CTS and ISTS by changing the time to perform these checks from 8 hours or 24 hours, respectively, to 12 hours.


20. ITS 3.8.2, DOC L.6: CTS 4.8.1.2 requires the AC electrical power sources to be demonstrated OPERABLE by the performance of each of the Surveillance Requirements of 4.8.1.1.1 and 4.8.1.1.2 except for requirement 4.8.1.1.2.a.5. ISTS SR 3.8.2.1 Note 1 states that performance of SR 3.8.1.3, SR 3.8.1.9 through SR 3.8.1.11, SR 3.8.1.13 through SR 3.8.1.16, [SR 3.8.1.18,] and SR 3.8.1.19 is not required. ISTS SR 3.8.2.1 also excepts SR 3.8.1.8, SR 3.8.1.17, and SR 3.8.1.20 from being applicable. ITS SR 3.8.2.1 has included this allowance in the Note to SR 3.8.1.2 (see ITS 3.8.2 DOC L.2). However, the ITS SR 3.8.2.1 excepts additional SRs from being required to be met beyond the allowances of the CTS, including ITS SR 3.8.1.9, SR 3.8.1.13, SR 3.8.1.14 (ESF actuation signal portion only) and SR 3.8.1.19, and beyond the allowances of the ISTS, including ITS SR 3.8.1.13, SR 3.8.1.14 (ESF actuation signal portion only) and SR 3.8.1.19. This changes the CTS and ISTS by not requiring certain Surveillances to be met.
21. The Surveillance Frequencies of various CTS SRs are being extended to 24 months, consistent with the guidelines provided in NRC Generic Letter 91-04. The associated changes are described in the following Discussion of Changes:

ITS 3.1.4, DOC L.9
ITS 3.3.1, DOCs L.1, L.2, L.3, and L.11
ITS 3.3.2, DOCs L.1, L.2, L.4, and L.13
ITS 3.3.3, DOC L.6
ITS 3.3.4, DOC L.1
ITS 3.3.6, DOCs L.5 and L.6
ITS 3.3.7, DOC L.2
ITS 3.3.8, DOC L.3
ITS 3.4.1, DOC L.2
ITS 3.4.9, DOC L.1
ITS 3.4.11, DOC L.3
ITS 3.4.12, DOC L.3
ITS 3.4.14, DOC L.4
ITS 3.4.15, DOC L.6
ITS 3.5.2, DOC L.3
ITS 3.6.3, DOC L.5
ITS 3.6.6, DOC L.1
ITS 3.6.7, DOC L.1
ITS 3.6.8, DOC L.3
ITS 3.6.9, DOC L.2
ITS 3.6.13, DOC L.1
ITS 3.7.5, DOC L.8
ITS 3.7.7, DOC L.2
ITS 3.7.8, DOC L.2
ITS 3.7.10, DOC L.2
ITS 3.7.12, DOC L.2
ITS 3.7.13, DOC L.4
ITS 3.8.1, DOC L.3
ITS 3.8.4, DOC L.2
ITS 5.5, DOCs L.1 and L.3

Disposition of Generic Changes to NUREG-1431, Revision 2

The following Nuclear Regulatory Commission (NRC)-approved Technical Specification Task Force (TSTF) changes are adopted in whole or in part in the Improved Technical Specifications (ITS) submittal:

TSTF 337, Rev. 1: ITS 3.5.5
TSTF 347, Rev. 1: ITS 3.3.1
TSTF 370, Rev. 0: ITS 3.5.1
TSTF 401, Rev. 0: ITS 3.6.5
TSTF 411, Rev. 1: ITS 3.3.1 and 3.3.2
TSTF 429, Rev. 3: ITS 3.6.11
TSTF 434, Rev. 0: ITS SR 3.0.1
TSTF 438, Rev. 0: ITS 3.4.5, 3.4.6, 3.4.7, 3.4.8, 3.9.4, and 3.9.5
TSTF 440, Rev. 0: ITS 3.6.3, 3.6.6, and 3.6.7

The following NRC-approved TSTF changes have not been adopted in the ITS submittal:

TSTF 359, Rev. 9: A License Amendment Request has been submitted to adopt this TSTF, and this TSTF will be adopted when it is approved (see Enclosure 7).

TSTF 371, Rev. 1: A Donald C. Cook Nuclear Plant (CNP)-specific evaluation has not been performed to adopt this less restrictive allowance.

TSTF 418, Rev. 2: WCAP-14333, Probabilistic Risk Analysis of the RPS and ESFAS Test Times and Completion Times, has not been adopted by CNP.

TSTF 419, Rev. 0: The PRESSURE AND TEMPERATURE LIMITS REPORT (PTLR) has not been adopted by CNP.

TSTF 420, Rev. 0: ISTS 3.5.6, Boron Injection Tank, has not been adopted in the CNP ITS, because the requirements for the Boron Injection Tank were deleted from the current Technical Specifications in License Amendments 158 (Unit 1) and 142 (Unit 2), dated November 20, 1991.

TSTF 421, Rev. 0: WCAP-15666, Extension of Reactor Coolant Pump Motor Flywheel Examination, has not been adopted by CNP.

TSTF 447, Rev. 1: A CNP-specific evaluation has not been performed to adopt this less restrictive allowance.

The following proposed TSTF changes have been included in the ITS submittal:

TSTF 400, Rev. 1: ITS 3.8.1
TSTF 433, Rev. 0: ITS 3.8.2
TSTF 437, Rev. 0: ITS 3.1.7

Methodology Summary and Compliance with Nuclear Regulatory Commission (NRC) Generic Letter 91-04

1.0 SUMMARY

This proposed license amendment includes a revision to the Donald C. Cook Nuclear Plant (CNP) Units 1 and 2 current Technical Specifications (CTS) to change Surveillance Frequencies from 18 months to 24 months for selected Surveillance Requirements (SRs). In addition, Surveillance Frequencies for other selected SRs have been changed from 7 days, 31 days, and 92 days to 184 days. In all cases, the proposed changes have been evaluated to confirm their acceptability. These evaluations were performed using the guidance provided in NRC Generic Letter (GL) 91-04, Changes in Technical Specification Surveillance Intervals to Accommodate a 24-Month Fuel Cycle (Reference 1). These changes are intended to provide cost-beneficial application of the Improved Technical Specifications (ITS) with minimal effect on safety and to support the possibility of future extended fuel cycles.

For changes involving instrumentation, the evaluations performed considered instrument drift using the methodology described in Enclosure 6 of this submittal. For all changes, historical surveillance test data, applicable maintenance records, and the plant corrective action database were reviewed in evaluating the effect of the proposed changes on safety. In addition, each change was reviewed to ensure the new Surveillance Frequencies will not invalidate the current plant licensing basis. Based on the results of these reviews, it is concluded that there is no adverse effect on plant safety due to increasing these Surveillance Frequencies, including the continued application of allowable grace periods described in CTS 4.0.2 and ITS SR 3.0.2.

2.0 METHODOLOGY

In Reference 1, the NRC provided generic guidance for evaluating a 24 month Surveillance Frequency for SRs, and specified the evaluation steps needed to justify a 24 month Surveillance Frequency. While this request is not associated with an actual change in fuel cycle length, the methodology defined in Reference 1 is still appropriate for the evaluation of any Surveillance Frequency extension. The following discussion defines each of the steps outlined in Reference 1, and describes the methodology used to evaluate each Surveillance Frequency extension. The methodology used for evaluating the Surveillance Frequency extensions from 18 months to 24 months is very similar to that used to justify Surveillance Frequency extensions for a 24 month fuel cycle at the Edwin I. Hatch Nuclear Power Plant (HNP). The methodology used for evaluating the Surveillance Frequency extensions from 7 days, 31 days, or 92 days to 184 days is very similar to the methodology used to justify Surveillance Frequency extensions from quarterly to semiannually for the HNP. The NRC found the HNP methodology acceptable for both types of Surveillance Frequency extensions in the NRC Safety Evaluation Reports dated July 12, 2002, and September 26, 2002, respectively (Reference 2).

A. Non-Instrumentation Changes

Reference 1 identifies three steps to evaluate non-instrumentation changes.

Step 1:

licensees should evaluate the effect on safety of the change in surveillance intervals to accommodate a 24-month fuel cycle. This evaluation should support a conclusion that the effect on safety is small.

CNP Evaluation

Each Surveillance Frequency extension was evaluated with respect to its effect on plant safety. The following information provides a description of the purpose of surveillance testing and a general description of the methodology utilized to justify the conclusion that extending the testing interval will have a minimal effect on safety.

The purpose of surveillance testing is to verify through the performance of the specified SRs that the tested Technical Specifications (TS) Function/Feature will perform as assumed in the associated safety analyses. By periodically testing the TS Function/Feature, the availability of the associated Function/Feature is confirmed. With the extension of Surveillance Frequencies, a longer period will exist between performances of a SR. If a failure resulting in the loss of a TS Function/Feature occurs during the operating cycle, and that failure would be detected only by performing the periodic SR, then the increase in Surveillance Frequency would result in a decrease in the availability of the associated TS Function/Feature. This situation would result in a potential impact on safety.

Each associated non-instrumentation SR was evaluated to demonstrate that any potential impact of the Surveillance Frequency extension on availability is small. A program plan defining the scope of the analysis to be performed (i.e., failure history analysis) and the methods for performing the analysis was developed. This process included:

  • Identifying the TS SRs with 18 month Surveillance Frequencies to be evaluated for extension to 24 months, and the other TS SRs to be evaluated for a Surveillance Frequency extension from 7 days, 31 days, or 92 days to 184 days;
  • Determining the surveillance tests that verified the operation of the TS Function/Feature (e.g., equipment) associated with each SR;
  • Collecting the surveillance test history associated with the TS Function/Feature;
  • Evaluating the surveillance test history results; and
  • Searching the Corrective Action database, and evaluating conditions that resulted in the need to perform an operability evaluation for a required TS Function/Feature.

These evaluations included determining if the TS Function/Feature is additionally tested on a more frequent basis during the operating cycle by other plant programs (e.g., quarterly pump flow rate tests), if the TS Function/Feature is designed to be single failure proof, or if the TS Function/Feature has been found to be highly reliable. As an example, justifications for extending the ENGINEERED SAFETY FEATURE (ESF) RESPONSE TIME Surveillance Frequencies are based on other, more frequent testing of the ESF Actuation System components, and the design features that contribute to the high reliability of the ESF Actuation System components.

More frequent testing may include the performance of CHANNEL CHECKS which verify that the instrument loop (i.e., transmitter, signal processing equipment, and indicator) is functional, and that the system parameters (e.g., pump flow, system pressure, etc.) are within expected values. More frequent testing also includes CHANNEL OPERATIONAL TESTS (COT) that verify the operation of circuits associated with alarms, interlocks, displays, trip functions, time delays and channel failure trips. Where a CHANNEL CHECK or COT is not required, the circuit is normally simple, and these checks will not provide any additional assurance that the components are functional. In several cases (e.g., switches), the more frequent testing may not verify the operation of the circuits directly associated with the switch, but may verify the operation of other circuits associated with the same TS Function/Feature as the switch. In most cases, the same circuit (with the exception of the open loop associated with the switch) is used for manual operation of a component and/or for automatic actuation functions. In these cases, the CHANNEL CHECKS and COT would test most of the circuit associated with the switch, with the exception of the switch itself and the wire to connect the switch to the circuit.

Additional testing, such as inservice pump or valve testing, also verifies that the power and control circuits associated with the specific components, relays, and contacts associated with the TS Function/Feature are operational. Inservice programs test components based upon performance-oriented schedules. The Maintenance Rule Program also supports testing based on the safety significance of components and their unavailability or performance. Decreased component performance requires increased testing. Some system components may not be tested more frequently based upon the impact on plant operation (e.g., ESF injection valves). However, performance of these components is tracked based on system availability, and failures will be identified and corrected as a part of the CNP maintenance program.

Additionally, industry reliability studies for boiling water reactors (BWRs), prepared by the BWR Owners Group (Reference 3), show that overall safety system reliability is not dominated by logic system reliability, but by the reliability of mechanical components (e.g., pumps and valves) that are consequently tested on a more frequent basis, usually by the Inservice Testing Program. These studies were evaluated by the NRC in the Safety Evaluation Report issued for Peach Bottom Atomic Power Station Units 2 and 3 that approved Surveillance Frequency extensions from 18 months to 24 months (Reference 4). These studies concluded that the probability of a relay or contact failure is small relative to the probability of mechanical component failure. Therefore, increasing the LOGIC SYSTEM FUNCTIONAL TEST Surveillance Frequency represented no significant change in overall safety system unavailability. While CNP is not a BWR, and equivalent industry reliability studies have not been performed for Westinghouse plants, plant probabilistic risk assessments and Maintenance Rule Program tracking history for Westinghouse plants indicate that similar probabilities of logic system component failures and mechanical component failures exist.

Step 2:

Licensees should confirm that historical maintenance and surveillance data do not invalidate this conclusion.

CNP Evaluation

The surveillance test history of the affected SRs was evaluated by obtaining completed surveillance test records, where available, to determine the frequency of equipment failures and the type of failures that may have occurred. If necessary, reports of equipment failures evaluated by the CNP Corrective Action Program and the associated maintenance records were obtained to aid in determining the exact nature of equipment failures that had occurred.

Although maintenance history was reviewed, only equipment failures identified by the affected SRs were evaluated for impact, since other plant activities, such as preventive maintenance tasks and other surveillance tests performed at shorter intervals, were assumed to continue to detect failures. This review of surveillance test history validated the conclusion that the impact, if any, on system availability will be small because of the change to a 184 day or 24 month Surveillance Frequency.

Step 3:

licensees should confirm that the performance of surveillances at the bounding surveillance interval limit provided to accommodate a 24-month fuel cycle would not invalidate any assumption in the plant licensing basis.

CNP Evaluation

As part of the evaluation of each SR, the impact of the changes on the assumptions in the CNP licensing basis was reviewed. In general, these changes have no impact on the plant licensing basis. However, in some cases, the change does require a change to licensing basis information provided in the CNP Unit 1 and Unit 2 Updated Final Safety Analysis Report (UFSAR). These UFSAR changes are required to be evaluated in accordance with 10 CFR 50.59 and 10 CFR 50.71(e).

The Maintenance Rule Program trends failures that affect the safety functions of equipment, including each TS Function/Feature. Any degradation in performance due to the Surveillance Frequency extensions or maintenance activities will be identified and evaluated under the existing Maintenance Rule Program.

B. Instrumentation (CHANNEL CALIBRATION) Changes:

Reference 1 identifies seven steps for the evaluation of instrumentation changes.

Step 1:

Confirm that instrument drift as determined by as-found and as-left calibration data from surveillance and maintenance records has not, except on rare occasions, exceeded acceptable limits for a calibration interval.

CNP Evaluation

The effect of longer calibration intervals on the TS instrumentation was evaluated by performing a review of the surveillance test history for the affected instrumentation, including, where necessary, an instrument drift study. The failure history evaluation and drift study demonstrate that, except on rare occasions, instrument drift has not exceeded the current allowable limits.

Step 2:

Confirm that the values of drift for each instrument type (make, model, and range) and application have been determined with a high probability and a high degree of confidence.

Provide a summary of the methodology and assumptions used to determine the rate of instrument drift with time based upon historical plant calibration data.

CNP Evaluation

CNP has performed drift evaluations in accordance with an Instrument Drift Analysis Methodology Guideline (Enclosure 6 to this letter), using Microsoft Excel spreadsheets based upon EPRI TR-103335, Guidelines for Instrument Calibration Extension/Reduction Programs, Revision 1 (Reference 5). The EPRI Instrument Performance Analysis Software System (IPASS) Version 2.2, Quattro-Pro, and Lotus 1-2-3 applications were also used to verify the analysis. The NRC has previously reviewed and provided comments to EPRI related to EPRI TR-103335, Revision 1. CNP responses to these NRC comments are included as part of Enclosure 6 of this submittal.

The Instrument Drift Analysis Methodology Guideline utilizes the as-found/as-left (AFAL) analysis methodology to statistically determine drift for current calibration intervals. Using recommendations from Reference 5, and considering the NRC review comments to Reference 5, the time dependence of the current drift was evaluated, where possible, and conservative assumptions were made in extrapolating current drift values to new drift values to be used for the extended Surveillance Frequencies.
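For illustration only, a minimal sketch of one way a conservative extrapolation of an observed drift value to a longer calibration interval could be performed is shown below. The sketch is written in Python; the function name, the two growth models, and the numerical values are assumptions made for this example and are not taken from the CNP drift analysis, which is governed by the methodology in Enclosure 6.

  import math

  def extrapolate_drift(analyzed_drift, observed_interval_months, target_interval_months=30.0):
      # Illustrative only: project an analyzed drift value from the interval over which
      # it was observed to a longer calibration interval. Two simple growth models are
      # evaluated and the larger (more conservative) result is retained: growth that is
      # linear in time, and growth that follows the square root of time.
      ratio = target_interval_months / observed_interval_months
      return max(analyzed_drift * ratio, analyzed_drift * math.sqrt(ratio))

  # Example: a 0.50% of span analyzed drift observed over nominal 18 month intervals,
  # projected to a 30 month bounding interval (the linear model governs here).
  print(round(extrapolate_drift(0.50, 18.0, 30.0), 2))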

The AFAL methodology utilizes historical data obtained from surveillance tests. The raw calibration data is conditioned prior to use for the drift calculation. The conditioning consists of eliminating tests or individual data points that do not reflect actual drift. The removed data is generally limited to data associated with or affected by:

  • Instrument failures;
  • Procedural problems that affect the calibration data;
  • Measuring and test equipment problems that affect the calibration data; or
  • Human performance problems that affect the calibration data.

In limited cases, statistical outliers not meeting the above criteria were removed from the sample set. In each case, the values were well outside the expected performance conditions and, in most cases, resulted in equipment replacement or repair during the next calibration.

The CNP corrective action program requires prompt identification and evaluation of any instrument performance discovered to be substantially outside of expected conditions. This condition evaluation would determine if replacement of the instrument is necessary, and would determine the impact of the as-found condition of the instrument on the assumptions and values used in the drift analysis and setpoint analysis. These actions effectively identify failures and potential failures of the instrumentation to meet the assumptions of the drift analysis and setpoint analysis.

After the calibration data was properly conditioned, spreadsheets were used to calculate the difference between the current as-found value and the previous as-left value. This difference is the drift, and is expressed in engineering units, percent of span, or percent of setting.

For each calibration point, the spreadsheet is used to determine the following:

  • Tolerance interval (95%/95% for this analysis);
  • Standard deviation; and
  • Mean.

The Microsoft Excel spreadsheets were also used to perform other statistical analysis operations to identify outliers and determine normality of the data. Additional analyses were performed to verify that appropriate groupings were used, and to determine if specific indications of a time drift magnitude correlation exist. The final calculation of the tolerance interval is based upon a Time Dependence Analysis (normally a binning technique) performed using Microsoft Excel spreadsheets.
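A minimal sketch of the drift arithmetic described above, assuming hypothetical as-found/as-left data expressed in percent of span and a tolerance interval factor taken from Table 1 of Enclosure 6, is shown below in Python. The function names and data are illustrative and are not part of the CNP drift analysis spreadsheets.

  import statistics

  def afal_drift(calibrations):
      # Drift for one calibration point: current as-found minus previous as-left.
      # `calibrations` is a chronologically ordered list of (as_found, as_left) pairs.
      return [calibrations[i][0] - calibrations[i - 1][1] for i in range(1, len(calibrations))]

  def drift_statistics(drifts, tif):
      # Mean, sample standard deviation, and a tolerance interval of the form
      # mean +/- TIF * standard deviation for a set of drift values.
      mean = statistics.mean(drifts)
      stdev = statistics.stdev(drifts)
      return mean, stdev, (mean - tif * stdev, mean + tif * stdev)

  # Hypothetical drift values in percent of span; a TIF of 2.752 corresponds to a
  # sample size of 20 in Table 1 of Enclosure 6.
  drifts = [0.12, -0.05, 0.08, 0.20, -0.10, 0.02, 0.15, -0.03, 0.07, 0.11,
            -0.08, 0.04, 0.18, -0.02, 0.09, 0.01, 0.13, -0.06, 0.05, 0.10]
  print(drift_statistics(drifts, tif=2.752))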

Step 3:

Confirm that the magnitude of instrument drift has been determined with a high probability and a high degree of confidence for a bounding calibration interval of 30 months for each instrument type (make, model number, and range) and application that performs a safety function. Provide a list of the channels by TS section that identifies these instrument applications.

CNP Evaluation

In accordance with the methodology described in the response to Step 2, the magnitude of instrument drift was determined with a high degree of confidence and a high degree of probability for a bounding calibration interval of 30 months (a 24 month Surveillance Frequency plus the 25 percent extension allowed by ITS SR 3.0.2) for each instrument make, model number, and range.

Step 4:

Confirm that a comparison of the projected instrument drift errors has been made with the values of drift used in the setpoint analysis. If this results in revised setpoints to accommodate larger drift errors, provide proposed TS changes to update trip setpoints. If the drift errors result in revised safety analysis to support existing setpoints, provide a summary of the updated analysis conclusions to confirm that safety limits and safety analysis assumptions are not exceeded.

CNP Evaluation

CNP previously used the STEPIT instrument setpoint calculation program based on WCAP-12741, Appendix E, STEPIT Menu Driven Setpoint Calculation Program, D.C. Cook Unit 1 (Reference 6). The input information, assumptions, and results from the STEPIT software setpoint calculations are published in Reference 7 and Reference 8.

Because of limitations in the STEPIT calculation code, CNP has developed a CNP-specific methodology document that is based on Reference 6. The CNP-specific methodology provides guidance on applying the Westinghouse methodology with additional considerations for error propagation and digital equipment errors. Therefore, while the specific technique for calculating setpoints has changed (from the STEPIT program to spreadsheets and hand calculations), the methodology used for these calculations is unchanged.

Using the CNP-specific methodology, setpoint calculations were revised to consider drift over a calibration period of 30 months, and to develop new Allowable Values based on the trigger values in the Westinghouse setpoint methodology. If appropriate, Allowable Values have been revised to either take advantage of margin verified by the new Allowable Value calculation or, where necessary based on the 30 month drift values, to be more restrictive. In all cases, there was sufficient margin within the existing safety analyses to accommodate the revision in the Allowable Values without revising the safety analyses.

Using this methodology, the following Allowable Values have been changed to be more restrictive, and will accommodate the drift associated with a calibration period of 30 months:

Table 3.3.1-1 Function 7, Unit 2 Overpower ΔT: current Allowable Value < 3.0% ΔT Span above NTSP; revised Allowable Value < 0.038% ΔT Span above NTSP
Table 3.3.1-1 Function 8.a, Unit 1 Pressurizer Pressure - Low (Reactor Trip): current Allowable Value 1865 psig; revised Allowable Value 1868 psig
Table 3.3.1-1 Function 10, Reactor Coolant Flow - Low: current Allowable Value 89.1% flow; revised Allowable Value 89.6% flow
Table 3.3.1-1 Function 13, Unit 1 Underfrequency Reactor Coolant Pumps: current Allowable Value 57.4 Hz; revised Allowable Value 58.22 Hz
Table 3.3.1-1 Function 14 and Table 3.3.2-1 Function 6.c, Unit 2 Steam Generator (SG) Water Level - Low Low: current Allowable Value 19.2% Narrow Range; revised Allowable Value 20.8% Narrow Range
Table 3.3.1-1 Function 15, Unit 2 SG Water Level - Low: current Allowable Value 24.0% Narrow Range; revised Allowable Value 25.0% Narrow Range
Table 3.3.2-1 Functions 1.c and 7.c, Containment Pressure - High: current Allowable Value 1.2 psig; revised Allowable Value 1.17 psig
Table 3.3.2-1 Functions 2.c, 3.b.(3), and 4.c, Containment Pressure - High High: current Allowable Value 3 psig; revised Allowable Value 2.97 psig
Table 3.3.2-1 Functions 1.e.(1) and 4.d, Unit 1 Steam Line Pressure - Low: current Allowable Value 480 psig; revised Allowable Value 481.3 psig

If an instrument was not in service long enough to establish a calculated drift value, the Surveillance Frequency was extended to 24 months based on other, more frequent testing (e.g., 92 day CHANNEL OPERATIONAL TEST, including calibration if necessary), or justification was based on information obtained from the instrument manufacturer.

In no case was it necessary to change the existing analytical limit or safety analyses to accommodate a larger instrument drift error.

Step 5:

Confirm that the projected instrument errors caused by drift are acceptable for control of plant parameters to effect a safe shutdown with the associated instrumentation.

CNP Evaluation

As discussed in the previous sections, the calculated drift values were compared to drift allowances in the setpoint calculation, other uncertainty analyses, and the Westinghouse design basis. For instrument strings that provide process variable indication, an evaluation was performed to verify the instruments could still be effectively utilized to perform a safe plant shutdown.

In no case was it necessary to change the existing safe shutdown analysis to account for failures or drift.

Step 6:

Confirm that all conditions and assumptions of the setpoint and safety analyses have been checked and are appropriately reflected in the acceptance criteria of plant surveillance procedures for channel checks, channel functional tests, and channel calibrations.

CNP Evaluation

Nominal Trip Setpoints (NTSPs) will be established based on the changes to the Allowable Values in this submittal. Where the existing field setpoint is conservative with respect to the calculated NTSP, the existing Surveillance procedures will only be changed if desired by Indiana Michigan Power Company. Where the existing field setpoint is less conservative than the NTSP, the plant setpoint calculation and the associated Surveillance procedures will be revised prior to implementation of the license amendment. The Surveillance procedures will be verified to appropriately reflect the assumptions and conditions of the setpoint calculations as required by the CNP procedure change process.

The acceptance criteria for the affected Surveillance procedures have been verified to meet the assumptions in the safety analyses and setpoint analyses prior to the evaluation of these changes in Surveillance Frequency. This review determined that the acceptance criteria do not require revision due to the change in the Surveillance Frequency for any of the associated TS Functions/Features.

Step 7:

Provide a summary description of the program for monitoring and assessing the effects of increased calibration surveillance intervals on instrument drift and its effect on safety.

CNP Evaluation

Instruments with Surveillance Frequencies for CHANNEL CALIBRATION extended to 24 months will be monitored and trended. As-found and as-left calibration data will be recorded for each calibration activity, and an evaluation of changes from previous calibrations will be performed to identify trends. This will identify occurrences of instruments found outside of their Allowable Value, or instruments whose performance is not as assumed in the drift analyses or setpoint analyses.

When as-found conditions are outside the Allowable Value, an evaluation will be performed to determine if the assumptions made to extend the Surveillance Frequency for CHANNEL CALIBRATION are still valid, to evaluate the effect on plant safety, and to evaluate instrument OPERABILITY.

In addition, the CNP trending program will address setpoints found to be outside of their As-Found Tolerance (AFT). This AFT is based upon the expected drift for the instruments.

The CNP trending program will require that any time a setpoint value is found outside the AFT, an additional evaluation be performed to ensure the performance of the instruments is still enveloped by the assumptions in the drift analyses and setpoint analyses. The trending program will also plot setpoint or transmitter AFAL values to verify that the performance of the instruments is within expected boundaries and that adverse trends (i.e., repeated directional changes in AFAL even of smaller magnitudes) are detected and evaluated.
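A minimal sketch of the kind of screening such a trending program could apply to recorded AFAL values is shown below in Python. The function name, the adverse-trend threshold, and the data are hypothetical and are not taken from the CNP trending program.

  def review_trend(afal_history, as_found_tolerance):
      # Illustrative trending check for one instrument setpoint. `afal_history` is a
      # chronological list of as-found minus as-left values. Returns the values found
      # outside the As-Found Tolerance (AFT) and a flag indicating whether the drift
      # direction has repeatedly changed sign (a possible adverse trend).
      outside_aft = [d for d in afal_history if abs(d) > as_found_tolerance]
      signs = [1 if d > 0 else -1 for d in afal_history if d != 0]
      direction_changes = sum(1 for a, b in zip(signs, signs[1:]) if a != b)
      adverse_trend = direction_changes >= 3  # hypothetical screening threshold
      return outside_aft, adverse_trend

  # Example with hypothetical AFAL values (% of span) and an AFT of 0.5% of span.
  print(review_trend([0.1, -0.2, 0.3, -0.1, 0.6], as_found_tolerance=0.5))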

The Maintenance Rule Program trends failures that affect the safety functions of equipment. Any degradation in performance due to the extension of Surveillance Frequencies, as identified by performance of Surveillance Requirements and maintenance activities, will be captured under the existing Maintenance Rule Program.

3.0 CONCLUSION

As described in the above discussion, the evaluations to justify extensions of selected Surveillance Frequencies have been completed. These evaluations conform to the guidance provided in Reference 1. The specific evaluations for each Surveillance Frequency being changed are contained in the appropriate discussion of changes in Attachment 1.

4.0 REFERENCES

1. NRC GL 91-04, Changes in Technical Specification Surveillance Intervals to Accommodate a 24-Month Fuel Cycle, dated April 2, 1991.


2. Letter from Leonard N. Olshan, NRC, to H. L. Sumner, Jr., Southern Nuclear Operating Company, Inc., NRC Safety Evaluation Report, HNP, Units 1 and 2, Extension of Surveillance Requirements from 92 days to 92 days on an Alternate Test Basis (TAC Nos. MB2966 and MB2968), dated July 12, 2002; and Letter from Joseph Colaccino, NRC, to H. L. Sumner, Jr., Southern Nuclear Operating Company, Inc., NRC Safety Evaluation Report, Edwin I. Hatch Nuclear Power Plant, Technical Specification Changes to Support Extension of Operating Cycle from 18 Months to 24 Months (TAC Nos. MB2965 and MB2967), dated September 26, 2002.
3. NEDC-30936P-A, BWR Owners Group Technical Specification Improvement Methodology (with Demonstration of BWR ECCS Actuation Instrumentation), Part 1 and Part 2, dated December 1988.
4. Letter from Joseph P. Shea, NRC, to George A. Hunger, Jr., Philadelphia Electric Company, NRC Safety Evaluation Report, Extension of Technical Specification Surveillance Intervals to 24 months, Peach Bottom Atomic Power Station, Unit Nos. 2 and 3 (TAC Nos. M83704 and M83705), dated August 2, 1993.
5. EPRI TR-103335, Statistical Analysis of Instrument Calibration Data, Guidelines for Instrument Calibration Extension/Reduction Programs, Revision 1, October 1998.
6. WCAP-12741, Appendix E, STEPIT Menu Driven Setpoint Calculation Program, D.C. Cook Unit 1, Revision 0, September 1991.

7. WCAP-13055, Westinghouse Setpoint Methodology for Protection Systems, Donald C. Cook Unit 1, Revision 1, August 1993.

8. WCAP-13801, Westinghouse Setpoint Methodology for Protection Systems, Donald C. Cook Unit 2, Revision 0, August 1993.

Enclosure 6 to AEP:NRC:4901

Instrument Drift Analysis Methodology Guideline

D.C. COOK UNITS 1 & 2 ITS PROJECT MANUAL
CHAPTER 4
24 MONTH CYCLE SURVEILLANCE UPGRADE DEVELOPMENT GUIDELINE
Instrument Drift Analysis Methodology
ITS-PM-04, Revision 0

PURPOSE

This document is part of the Donald C. Cook Program Manual for the Improved Technical Specification and 24 Month Cycle Surveillance Upgrade. This is a guideline that describes the process to identify, perform, and implement the activities required for CNP to operate with 24-Month Surveillance intervals.

BACKGROUND

CNP intends to convert its existing Standard Technical Specifications to Improved Technical Specifications and, during this change, extend the surveillance interval for all possible items in the Technical Specifications. A license amendment request will be submitted in support of the surveillance change. The request will demonstrate that the proposed changes will not adversely impact safety.

The evaluations for the submittal will be limited to Technical Specifications (TS) Surveillance Requirements (SRs) currently performed on an 18-month, monthly, or quarterly basis. Monthly and quarterly SRs for the Reactor Trip System (RTS) and Engineered Safety Feature Actuation System (ESFAS) instrumentation are not considered for change, since CNP plans to revise the SRs for these functions based on Westinghouse WCAP-15376. Additional evaluations not included in the submittal will also be performed for the TRM, ODCM, and program activities required to be performed during shutdown. Changes to these documents are controlled under the 10 CFR 50.59 process and do not require external approval.

Technical Specification changes will be evaluated in accordance with the guidance provided in NRC Generic Letter 91-04, "Changes in Technical Specification Surveillance Intervals to Accommodate a 24-Month Fuel Cycle," dated April 2, 1991.

TABLE OF CONTENTS

HISTORY OF REVISIONS
1. OBJECTIVE/PURPOSE
2. DRIFT ANALYSIS SCOPE
3. DISCUSSION/METHODOLOGY
   3.1. Methodology Options
   3.2. Data Analysis Discussion
   3.3. Confidence Interval
   3.4. Calibration Data Collection
   3.5. Categorizing Calibration Data
   3.6. Outlier Analysis
   3.7. Methods For Verifying Normality
   3.8. Binomial Pass/Fail Analysis For Distributions Considered Not To Be Normal
   3.9. Time-Dependent Drift Analysis
   3.10. Calibration Point Drift
   3.11. Drift Bias Determination
   3.12. Time Dependent Drift Uncertainty
   3.13. Shelf Life Of Analysis Results
4. PERFORMING AN ANALYSIS
   4.1. Populating The Spreadsheet
   4.2. Spreadsheet Performance Of Basic Statistics
   4.3. Outlier Detection And Expulsion
   4.4. Normality Tests
   4.5. Time Dependency Testing
   4.6. Calculate The Analyzed Drift Value
5. CALCULATIONS
   5.1. Drift Studies / Calculation
   5.2. Setpoint/Uncertainty Calculations
6. DEFINITIONS
7. REFERENCES
   7.1. Industry Standards and Correspondence
   7.2. Calculations and Programs
   7.3. Miscellaneous
Appendix A: Example Drift Study for Barksdale B2T-C12 (or M12) Series Pressure Switches (29 pages)
Appendix B: Evaluation of the NRC Status Report on the Staff Review of EPRI Technical Report-103335, Guidelines for Instrument Calibration Extension/Reduction Programs (13 pages)

TABLES

Table 1 - 95%/95% Tolerance Interval Factors
Table 2 - Critical Values For t-Test
Table 3 - Values For A Normal Distribution
Table 4 - Maximum Values of Non-Biased Mean

HISTORY OF REVISIONS

Rev. No. A, Approval Date 3/8/2002, Reason & Description of Change: Draft for Comment

DRIFT ANALYSIS DESIGN GUIDE

1. OBJECTIVE/PURPOSE

The objective of this Design Guide is to provide the necessary detail and guidance to perform drift analyses using past calibration history data for the purposes of:
  • Quantifying component/loop drift characteristics within defined probability limits to gain an understanding of the expected behavior for the component/loop by evaluating past performance
  • Estimating component/loop drift for integration into setpoint calculations
  • Aiding analyses for reliability centered maintenance practices (e.g., optimizing calibration frequency)
  • Establishing a technical basis for extending calibration and surveillance intervals using historical calibration data
  • Evaluating extended surveillance intervals in support of longer fuel cycles
  • Trending device performance based on extended surveillance intervals
2. DRIFT ANALYSIS SCOPE

The scope of this design guide is limited to the calculation of the expected performance for a component, group of components, or loop, utilizing past calibration data. The Drift Studies are the final product of the data analysis and document the use of the drift data for the purposes listed in Section 1. For the Improved Technical Specification and 24-Month Cycle Surveillance Upgrade Project, the Setpoint/Uncertainty calculations will be revised to incorporate the values documented in the Drift Studies for the applications specific to a given loop or component ONLY IF the Analyzed Drift values exceed those within the existing Setpoint/Uncertainty calculation. A separate analysis will be used to document that the existing Setpoint/Uncertainty calculations are conservative in their present treatment of drift, based on Drift Study results.
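A minimal sketch of this screening rule, with hypothetical function and variable names and example values, is shown below in Python.

  def drift_study_disposition(analyzed_drift, existing_drift_allowance):
      # Screening rule described above: the Setpoint/Uncertainty calculation is revised
      # to incorporate the Drift Study value only if the Analyzed Drift exceeds the
      # drift term already assumed in the existing calculation; otherwise a separate
      # analysis documents that the existing treatment of drift remains conservative.
      if abs(analyzed_drift) > abs(existing_drift_allowance):
          return "revise Setpoint/Uncertainty calculation to use the Analyzed Drift value"
      return "existing drift treatment bounds the Analyzed Drift; document the conservatism"

  # Example with hypothetical values in percent of span.
  print(drift_study_disposition(0.42, 0.50))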

The approaches described within this design guide can be applied to all devices that are surveilled or calibrated where As-Found and As-Left data is recorded. The scope of this design guide includes, but is not limited to, the following list of devices:

  • Transmitters (Differential Pressure, Flow, Level, Pressure, Temperature, etc.)
  • Bistables (Master & Slave Trip Units, Alarm Units, etc.)
  • Indicators (Analog, Digital)
  • Switches (Differential Pressure, Flow, Level, Position, Pressure, Temperature, etc.)
  • Signal Conditioners/Converters (Summers, E/P Converters, Square Root Converters, etc.)
  • Recorders (Temperature, Pressure, Flow, Level, etc.)
  • Monitors & Modules (Radiation, Neutron, H2O2, Pre-Amplifiers, etc.)
  • Relays (Time Delay, Undervoltage, Overvoltage, etc.)

Note that a given device or device type may be justified not to require drift analysis in accordance with this design guide, if appropriate. For the Improved Technical Specification and 24-Month Cycle Surveillance Upgrade Project, if calibration intervals are to be extended for instrumentation, and the associated drift is not analyzed per this design guide, justification should be provided as a part of the project documentation.


3. DISCUSSION/METHODOLOGY

3.1. Methodology Options

This design guide is written to provide the methodology necessary for the analysis of As-Found versus As-Left calibration data, as a means of characterizing the performance of a component or group of components via the following methods:

3.1.1. Electric Power Research Institute (EPRI) has developed a guideline to provide nuclear plants with practical methods for analyzing historic component calibration data to predict component performance via a simple spreadsheet program (e.g., Excel, Lotus 1-2-3). This design guide is written in close adherence to this guideline, Reference 7.1.1. The Nuclear Regulatory Commission reviewed Revision 0 of this document, and had a list of concerns documented in Reference 7.1.10. These concerns prompted the issuance of Revision 1 to Reference 7.1.1. In addition, Appendix B to this design guide addresses each concern individually and provides the Cook Nuclear Plant resolution.

3.1.2. Commercial Grade Software programs other than Microsoft Excel (e.g. IPASS, Lotus 1-2-3, SYSTAT, etc.), that perform the functions necessary to evaluate drift, may be utilized providing:

  • the intent of this design guide is met as outlined in Reference 7.1.1, and
  • software is used only as a tool to produce hard copy outputs which are to be independently verified.

3.1.3. The EPRI IPASS software, version 2.03, may be used to perform or independently verify certain portions of the drift analysis. The IPASS software does not have the functionality to perform many of the functions required by the drift analysis, such as time dependency functions, and therefore, should only be used in conjunction with other software products to produce or verify an entire drift study.

3.1.4. For the Improved Technical Specification and 24-Month Cycle Surveillance Upgrade Project, the final products of the data analyses are the hard copy drift studies, which will be formatted in accordance with the example drift study contained in Appendix A. The electronic files of the drift studies are an intermediate step from raw data to final product and are not controlled as QA files. The drift study is independently verified using different software than that used to create the drift study. The review of the drift study will include a summary tabulation of results from each program used in the review process to provide visual evidence of the acceptability of the results of the review.

3.2. Data Analysis Discussion

The following data analysis methods were evaluated for use at Cook Nuclear Plant: 1) As-Found Versus Setpoint, 2) Worst Case As-Found Versus As-Left, 3) Combined Calibration Data Points Analysis, and 4) As-Found Versus As-Left. The evaluation concluded that the As-Found versus As-Left methodology provided results that were more representative of the data, and this methodology has been chosen for use by this Design Guide. Statistical tests not covered by this design guide may be utilized providing the Engineer performing the analysis adequately justifies the use of the tests.

3.2.1. As-Found Versus As-Left Calibration Data Analysis

The As-Found versus As-Left calibration data analysis is based on calculating drift by subtracting the previous As-Left component setting from the current As-Found setting. Each calibration point is treated as an independent set of data for purposes of characterizing drift across the full, calibrated span of the component/loop. By evaluating As-Found versus As-Left data for a component/loop or a similar group of components/loops, the following information may be obtained:

  • The typical component/loop drift between calibrations (Random in nature)
  • Any tendency for the component/loop to drift in a particular direction (Bias)


  • Any tendency for the component/loop drift to increase in magnitude over time (Time Dependency)
  • Confirmation that the selected setting or calibration tolerance is appropriate or achievable for the component/loop

3.2.1.1. General Features of As-Found Versus As-Left Analysis
  • The methodology evaluates historical calibration data only. The method does not monitor on-line component output; data is obtained from component calibration records.
  • Present and future performance is predicted based on statistical analysis of past performance.
  • Data is readily available from component calibration records. Data can be analyzed from plant startup to the present or only the most recent data can be evaluated.
  • Since only historical data is evaluated, the method is not intended as a tool to identify individual faulty components, although it can be used to demonstrate that a particular component model or application historically performs poorly.
  • A similar class of components, i.e., same make, model, or application, is evaluated. For example, the method can determine the drift of all analog indicators of a certain type installed in the control room.
  • The methodology is less suitable for evaluating the drift of a single component over time, due to statistical analysis penalties that occur with smaller sample sizes.
  • The methodology obtains a value of drift for a particular model, loop, or function that can be used in component or loop uncertainty and setpoint calculations.
  • The methodology is designed to support the analysis of longer calibration intervals and is consistent with the NRC expectations described in Reference 7.3.3. Values for instrument drift developed in accordance with this Design Guide are to be applied in accordance with References 7.2.1 and/or 7.2.2, as appropriate.

3.2.1.2. Error and Uncertainty Content in As-Found Versus As-Left Calibration Data

The As-Found versus the As-Left data includes several sources of uncertainty over and above component drift. The difference between As-Found and previous As-Left data encompasses the following error terms as a minimum: Calibration Accuracy (CA), Measurement and Test Equipment (M&TE) errors, and instrument Drift (D). (References 7.2.1 and 7.2.2 are the setpoint calculation methodology documents for use at Cook Nuclear Plant.) The drift is not assumed to encompass the errors associated with temperature effect, since the temperature difference between the two calibrations is not quantified, and is not anticipated to be significant. Additional instruction for the use of As-Found and As-Left data may be found in Reference 7.1.2. Also, see Section 14.2 of Reference 7.2.2 for a description of the use of AFAL data at CNP, and the associated requirements of a drift-monitoring program, if AFAL data is used. The following possible contributors could be within the measured variation, but are not necessarily considered.

  • Accuracy errors present between any two consecutive calibrations
  • Measurement and test equipment error between any two consecutive calibrations
  • Personnel-induced or human-related variation or error between any two consecutive calibrations
  • Normal temperature effects due to a difference in ambient temperature between any two consecutive calibrations


  • Power Supply variations between any two consecutive calibrations
  • Environmental effects on component performance, e.g., radiation, humidity, vibration, etc., between any two consecutive calibrations that cause a shift in component output
  • Misapplication, improper installation, or other operating effects that affect component calibration between any two consecutive calibrations
  • True drift representing a change, time-dependent or otherwise, in component/loop output over the time period between any two consecutive calibrations

3.2.1.3. Potential Impacts of As-Found Versus As-Left Data Analysis

Many of the bulleted items listed in step 3.2.1.2 are not expected to have a significant effect on the measured As-Found and As-Left settings. Because there are so many independent parameters contributing to the possible variance in calibration data, they are all considered together and termed the component's Analyzed Drift (ADR or DA) uncertainty. This approach has the following potential impacts on an analysis of the component's calibration data:
  • The magnitude of the calculated variation may exceed any assumptions or manufacturer predictions regarding drift. Attempts to validate manufacturers' performance claims should consider the possible contributors to the calculated drift listed in step 3.2.1.2.
  • The magnitude of the calculated variation that includes all of the above sources of uncertainty may mask any true time-dependent drift. In other words, the analysis of As-Found versus As-Left data may not demonstrate any time dependency. This does not mean that time-dependent drift does not exist, only that it could be so small that it is negligible in the cumulative effects of component uncertainty, when all of the above sources of uncertainty are combined.

3.3. Confidence Interval

This Design Guide recommends a single confidence interval level to be used for performing data analyses and the associated calculations.

NOTE: The default Tolerance Interval Factor (TIF) for all drift studies performed using this Design Guide is chosen for a 95%/95% probability and confidence, although this is not specifically required in every situation. This term means that the results have a 95% confidence that at least 95% of the population lies within the stated interval (P) for a sample size (n). Components that perform functions that support a specific Technical Specification value or Administrative Technical Requirements (ATR) value, or are associated with the safety analysis assumptions or inputs, are always analyzed at a 95%/95% confidence interval. Components/loops that fall into this level must:

  • be included in the data group (or be justified to apply the results per the guidance of Reference 7.1.1) if the analyzed drift value is to be applied to the component/loop in a Setpoint/Uncertainty Calculation,
  • use the 95%/95% TIF for determination of the Analyzed Drift term (see step 3.5.2.1 and Table 1 - 95%/95% Tolerance Interval Factors), and
  • be evaluated in the Setpoint/Uncertainty Calculation for application of the Analyzed Drift term. (For example, the ADR term may include the normal temperature effects for a given device, but due to the impossibility of separating out that specific term, an additional temperature uncertainty may be included in the Setpoint/Uncertainty Calculation.)

3.4. Calibration Data Collection

3.4.1. Sources Of Data

The sources of data to perform a drift analysis are Surveillance Tests, Calibration Procedures, and other calibration processes (calibration files, calibration sheets for Balance of Plant devices, Preventative Maintenance, etc.).

3.4.2. How Much Data To Collect

3.4.2.1. The goal is to collect enough data for the instrument or group of instruments to make a statistically valid pool. There is no hard and fast number that must be attained for any given pool, but a minimum of 30 drift values must be attained before the drift analysis can be performed without additional justification. As a general rule, drift analyses should not be performed for sample sizes of less than 20 drift values. Table 1 provides the 95%/95% TIF for various sample pool sizes; it should be noted that the smaller the pool, the larger the penalty. A tolerance interval is a statement of confidence that a certain proportion of the total population is contained within a defined set of bounds.

For example, a 95%/95% TIF indicates a 95% level of confidence that 95% of the population is contained within the stated interval. (Note: For cases where the exact count is not contained within the table, linear interpolation of the values should be used to determine the Tolerance Interval Factor.) Table 1 was extracted from Reference 7.1.9.

Table 1 - 95%/95% Tolerance Interval Factors

Sample Size   95%/95%     Sample Size   95%/95%     Sample Size   95%/95%
2             37.674      23            2.673       120           2.205
3             9.916       24            2.651       130           2.194
4             6.370       25            2.631       140           2.184
5             5.079       26            2.612       150           2.175
6             4.414       27            2.595       160           2.167
7             4.007       30            2.549       170           2.160
8             3.732       35            2.490       180           2.154
9             3.532       40            2.445       190           2.148
10            3.379       45            2.408       200           2.143
11            3.259       50            2.379       250           2.121
12            3.162       55            2.354       300           2.106
13            3.081       60            2.333       400           2.084
14            3.012       65            2.315       500           2.070
15            2.954       70            2.299       600           2.060
16            2.903       75            2.285       700           2.052
17            2.858       80            2.272       800           2.046
18            2.819       85            2.261       900           2.040
19            2.784       90            2.251       1000          2.036
20            2.752       95            2.241       ∞             1.960
21            2.723       100           2.233
22            2.697       110           2.218
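A minimal sketch of how a 95%/95% Tolerance Interval Factor could be looked up from Table 1, using the linear interpolation permitted by the note in step 3.4.2.1, is shown below in Python. Only a subset of Table 1 is reproduced, and the function name is illustrative.

  # Subset of Table 1 (sample size -> 95%/95% tolerance interval factor).
  TIF_TABLE = {20: 2.752, 25: 2.631, 30: 2.549, 35: 2.490, 40: 2.445,
               45: 2.408, 50: 2.379, 60: 2.333, 70: 2.299, 80: 2.272,
               90: 2.251, 100: 2.233, 150: 2.175, 200: 2.143, 300: 2.106}

  def tolerance_interval_factor(sample_size):
      # Return the 95%/95% TIF, linearly interpolating between tabulated sample sizes.
      # Sample sizes below 20 are rejected, consistent with the general rule that drift
      # analyses are not performed on fewer than 20 values; sizes above the largest entry
      # in this subset conservatively use the largest tabulated size.
      sizes = sorted(TIF_TABLE)
      if sample_size < sizes[0]:
          raise ValueError("sample too small for drift analysis without additional justification")
      if sample_size in TIF_TABLE:
          return TIF_TABLE[sample_size]
      if sample_size > sizes[-1]:
          return TIF_TABLE[sizes[-1]]
      lower = max(s for s in sizes if s < sample_size)
      upper = min(s for s in sizes if s > sample_size)
      fraction = (sample_size - lower) / (upper - lower)
      return TIF_TABLE[lower] + fraction * (TIF_TABLE[upper] - TIF_TABLE[lower])

  print(round(tolerance_interval_factor(33), 3))  # interpolates between n = 30 and n = 35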

3.4.2.2. Different information may be needed depending on the analysis purpose; therefore, the total population of components (all makes, models, and applications that are to be analyzed) must be known (e.g., all Foxboro transmitters).

3.4.2.4. Not all components or available calibration data points need to be analyzed within each group in order to establish statistical performance limits for the group. Acquisition of data should be considered from different perspectives.

  • For each grouping, a large enough sample of components should be randomly selected from the population, so there is assurance that the evaluated components are representative of the entire population. By randomly selecting the components and confirming that the behavior of the randomly selected components is similar, a basis for not evaluating the entire population can be established. For sensors, a random sample from the population should include representation of all desired component spans and functions.
  • For each selected component in the sample, enough historic calibration data should be provided to ensure that the component's performance over time is understood.
  • Because the total sample set is difficult to determine, developing specific sampling criteria is also difficult. A sampling method must be used which ensures that various instruments calibrated at different frequencies are included. The sampling method must also ensure that the different component types, operating conditions, and other influences on drift are included. Because of the difficulty in developing a valid sampling program, it is often simpler to evaluate all available data for the required instrumentation within the chosen time period.

This eliminates the need to change sampling methods should groups be combined or split based on plant conditions or performance. For the purposes of this guide, specific justification in the drift study is required to document any sampling plan.

3.5. Categorizing Calibration Data

3.5.1. Grouping Calibration Data
One analysis goal should be to combine functionally equivalent components (components with similar design and performance characteristics) into a single group. In some cases, all components of a particular manufacturer make and model can be combined into a single sample.

In other cases, virtually no grouping of data beyond a particular component make, model, and specific span or application may be possible. Some examples of possible groupings include, but are not limited to, the following:

3.5.1.1. Small Groupings

  • All devices of same manufacturer, model and range, covered by the same Surveillance Test
  • All trip units used to monitor a specific parameter (assuming that all trip units are the same manufacturer, model and range)

3.5.1.2. Larger Groupings
  • All transmitters of a specific manufacturer, model that have similar spans and performance requirements
  • All Foxboro Spec 200 isolators with functionally equivalent model numbers
  • All control room analog indicators of a specific manufacturer and model

3.5.2. Rationale For Grouping Components Into A Larger Sample

  • A single component analysis may result in too few data points to make statistically meaningful performance predictions.
  • Smaller sample sizes associated with a single component may unduly penalize performance predictions by applying a larger TIF to account for the smaller data set.

Larger sample sizes reflect a greater understanding and assurance of representative data that, in turn, reduces the uncertainty factor.

  • Large groupings of components into a sample set for a single population ultimately allows the user to state the plant-specific performance for a particular make and model of component. For example, the user may state, "Main Steam Flow Transmitters have historically drifted by less than 1%," or "All control room indicators of a particular make and model have historically drifted by less than 1.5%."
  • An analysis of smaller sample sizes is more likely to be influenced by non-representative variations of a single component (outliers).
  • Grouping similar components together, rather than analyzing them separately, is more efficient and minimizes the number of separate calculations that must be maintained.

3.5.3. Considerations When Combining Components Into A Single Group
Grouping components together into a sample set for a single population does not have to become a complicated effort. Most components can be categorized readily into the appropriate population. Consider the following guidelines when grouping functionally equivalent components together.

  • If performed on a type-of-component basis, component groupings should usually be established down to the manufacturer make and model, as a minimum. For example, data from Rosemount and Foxboro transmitters should not be combined in the same drift analysis. The principles of operation are different for the various manufacturers, and combining the data could mask some trend for one type of component. This said, it might be desirable to combine groups of components for certain studies. If dissimilar component types are combined, a separate analysis of each component type should still be completed to ensure analysis results of the mixed population are not misinterpreted or misapplied.
  • Sensors of the same manufacturer make and model, but with different calibrated spans or elevated zero points, can possibly still be combined into a single group. For example, a single analysis that determines the drift for all Foxboro pressure transmitters installed onsite might simplify the application of the results. Note that some manufacturers provide a predicted accuracy and drift value for a given component model, regardless of its span. However, the validity of combining components with a variation of span, ranging from tens of pounds to several thousand pounds, should be confirmed. As part of the analysis, the performance of components within each span should be compared to the overall expected performance to determine if any differences are evident between components with different spans.
  • Components combined into a single group should be exposed to similar calibration or surveillance conditions, as applicable. Note that the term operating condition was not used in this case. Although it is desirable that the grouped components perform similar functions, the method by which the data is obtained for this analysis is also significant. If half the components are calibrated in the summer at 90°F and the other half in the winter at 40°F, a difference in observed drift between the data for the two sets of components might exist. In many cases, ambient temperature variations are not expected to have a large effect since the components are located in environmentally controlled areas.

3.5.4. Verification That Data Grouping Is Appropriate

  • Combining functionally equivalent components into a single group for analysis purposes may simplify the scope of work; however, some level of verification should be performed to confirm that the selected component grouping is appropriate. As an example, the manufacturer may claim the same accuracy and drift specifications for two components of the same model, but with different ranges, e.g., 0-5 PSIG and 0-3000 PSIG. However, in actual application, components of one range may perform differently than components of another range.
  • Standard statistics texts provide methods that can be used to determine if data from similar types of components can be pooled into a single group. If different groups of components have essentially equal variances and means at the desired statistical level, the data for the groups can be pooled into a single group.
  • When evaluating groupings, care must be taken not to split instrument groups only because they are calibrated on a different time frequency. Differences in variances may be indicative of a time dependent component to the device drift. The separation of these groups may later mask a time-dependence for the component drift.
  • A t-Test (two samples assuming unequal variances) should also be performed on the proposed components to be grouped. The t-Test returns the probability associated with a Student's t-Test and is used to determine whether two samples are likely to have come from underlying populations with the same mean, without assuming equal variances. If, for example, the proposed group contains 5 sub-groups, the t-Tests should be performed on all possible combinations for the groupings. However, if there is no plausible engineering explanation for the two sets of data being incompatible, the groups should be combined, despite the results of the t-Test. The following formula is used to determine the test statistic value t (a sketch of this test follows the formulas below).

    t = (x̄1 - x̄2 - Δ0) / sqrt( s1²/n1 + s2²/n2 )        (Ref. 7.3.5)

Where:
t - Test statistic
n1, n2 - Total number of data points in each sample
x̄1, x̄2 - Means of the samples
s1², s2² - Variances of the samples
Δ0 - Hypothesized mean difference

The following formula is used to estimate the degrees of freedom for the test statistic.

    df = ( s1²/n1 + s2²/n2 )² / [ (s1²/n1)²/(n1 - 1) + (s2²/n2)²/(n2 - 1) ]

Where values are as previously defined.
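A minimal sketch of this grouping check, using scipy's two-sample t-Test with unequal variances (Welch's t-Test), which corresponds to the statistic and degrees-of-freedom estimate above; the drift values shown are illustrative assumptions.

```python
import numpy as np
from scipy import stats

# Illustrative drift values (% span) for two proposed sub-groups.
group_a = np.array([0.12, -0.05, 0.20, 0.08, -0.11, 0.15, 0.02, -0.07])
group_b = np.array([0.30, 0.18, -0.02, 0.25, 0.40, 0.11, 0.22, 0.05])

# Two-sample t-Test assuming unequal variances (Welch's t-Test), matching the
# test statistic and degrees-of-freedom estimate above.
t_stat, p_value = stats.ttest_ind(group_a, group_b, equal_var=False)

# A p-value below the chosen significance level (e.g., 0.05) suggests the
# sub-group means differ and the groups may not belong in one population.
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")
```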

3.5.5. Examples Of Proven Groupings:

  • All control room indicators receiving a 4-20mADC (or 1-5VDC) signal. Notice that a combined grouping may be possible even though the indicators have different indication spans. For example, a 12 mADC signal should move the indicator pointer to the 50% of span position on each indicator scale, regardless of the span indicated on the face plate (exceptions are non-linear meter scales).
  • All control room bistables of similar make or model tested quarterly for Technical Specification surveillance. Note that this assumes that all bistables are tested in a similar manner and have the same input range, e.g., a 1-5VDC or 4-20mADC span.
  • A specific type of pressure transmitter used for similar applications in the plant in which the operating and calibration environment does not vary significantly between applications or location.
  • A group of transmitters of the same make and model, but with different spans, given that a review confirms that the transmitters of different spans have similar performance characteristics.

3.5.6. Using Data From Other Nuclear Power Plants:

  • It is acceptable, although not recommended, to pool Cook Nuclear Plant specific data with data obtained from other utilities, provided the requirements of step 3.6.4 are met and the data can be verified to be of high quality. In this case the data must also be verified to be acceptable for grouping. Acceptability may be defined by verification of grouping, and an evaluation of calibration procedures, Measurement and Test Equipment used, and defined setting tolerances. Where there is agreement in calibration method (starting at zero, increasing to 100 percent, and decreasing to 0, taking data every 25%), calibration equipment, and area environment (if performance is affected by the temperature), there is a good possibility that the groups may be combined. Previously collected industry information may not have sufficient information about the manner of collection to allow combining with plant specific data.

3.6. Outlier Analysis
An outlier is a data point significantly different in value from the rest of the sample. The presence of an outlier or multiple outliers in the sample of component or group data may result in the calculation of a larger than expected sample standard deviation and tolerance interval. Calibration data can contain outliers for several reasons. Outlier analyses can be used in the initial analysis process to help identify problems with data that require correction. Examples include:

  • Data Transcription Errors - Calibration data can be recorded incorrectly either on the original calibration data sheet or in the spreadsheet program used to analyze the data.
  • Calibration Errors - Improper setting of a device at the time of calibration would indicate larger than normal drift during the subsequent calibration.
  • Measuring & Test Equipment Errors - Improperly selected or mis-calibrated test equipment could indicate drift, when little or no drift was actually present.
  • Scaling or Setpoint Changes - Changes in scaling or setpoints can appear in the data as larger than actual drift points unless the change is detected during the data entry or screening process.
  • Failed Instruments - Calibrations are occasionally performed to verify proper operation due to erratic indications, spurious alarms, etc. These calibrations may be indicative of component failure (not drift), which would introduce errors that are not representative of the device performance during routine conditions.


  • Design or Application Deficiencies - An analysis of calibration data may indicate a particular component that always tends to drift significantly more than all other similar components installed in the plant. In this case, the component may need an evaluation for the possibility of a design, application, or installation problem. Including this particular component in the same population as the other similar components may skew the drift analysis results.

3.6.1. Detection of Outliers
There are several methods for determining the presence of outliers. This design guide utilizes the Critical Values for t-Test (Extreme Studentized Deviate). The t-Test utilizes the values listed in Table 2 with an upper significance level of 5% to compare a given data point against.

Note that the critical value of t increases as the sample size increases. This signifies that as the sample size grows, it is more likely that the sample is truly representative of the population. The t-Test assumes that the data is normally distributed.

Table 2 - Critical Values For t-Test

Sample Size   Upper 5% Significance Level   Sample Size   Upper 5% Significance Level
     3                 1.15                      22                 2.60
     4                 1.46                      23                 2.62
     5                 1.67                      24                 2.64
     6                 1.82                      25                 2.66
     7                 1.94                      30                 2.75
     8                 2.03                      35                 2.82
     9                 2.11                      40                 2.87
    10                 2.18                      45                 2.92
    11                 2.23                      50                 2.96
    12                 2.29                      60                 3.03
    13                 2.33                      70                 3.09
    14                 2.37                      75                 3.10
    15                 2.41                      80                 3.14
    16                 2.44                      90                 3.18
    17                 2.47                     100                 3.21
    18                 2.50                     125                 3.28
    19                 2.53                     150                 3.33
    20                 2.56                    >150                 4.00
    21                 2.58

3.6.2. t-Test Outlier Detection Equation

    t = | xi - x̄ | / s        (Ref. 7.1.1)

Where:
xi - An individual sample data point
x̄ - Mean of all sample data points
s - Standard deviation of all sample data points
t - Calculated value of the extreme studentized deviate that is compared to the critical value of t for the sample size
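A minimal sketch of the outlier screen described above; the drift values and the critical value chosen for the sample size are assumptions for illustration.

```python
import numpy as np

# Illustrative drift values (% span) for a group of ten calibrations.
drift = np.array([0.10, -0.05, 0.12, 0.08, -0.02, 0.15, 0.95, 0.03, -0.07, 0.06])
critical_t = 2.18   # Table 2, sample size = 10, upper 5% significance level

mean = drift.mean()
std = drift.std(ddof=1)              # sample standard deviation (Excel STDEV)

# Extreme studentized deviate for each point; points whose deviate exceeds the
# critical value are flagged as candidate outliers for engineering review.
t_values = np.abs(drift - mean) / std
print("candidate outliers:", drift[t_values > critical_t])
```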

3.6.3. Outlier Expulsion
This design guide does not permit multiple outlier tests or passes. The removal of poor quality data as listed in Section 3.6 is not considered removal of outliers, since it is merely assisting in identifying data errors. However, after removal of poor quality data as listed in Section 3.6, certain data points can still appear as outliers when the outlier analysis is performed. These unique outliers are not consistent with the other data collected, and could be judged as erroneous points, which tend to skew the representation of the distribution of the data.

However, for the general case, since these outliers may accurately represent instrument performance, only one (1) additional unique outlier (as indicated by the t-Test) may be removed from the drift data. However, for special, rare cases, with specific justification, up to 2.5% of the population is allowed for removal as additional outliers, per this design guide. If more than one statistical outlier is removed from the data set, specific justification for the removal must be identified within the drift study. After removal of poor quality data and the removal of the unique outlier(s) (if necessary), the remaining drift data is known as the Final Data Set.

For transmitters or other devices with multiple calibration points, the general process is to use the calibration point with the worst-case drift values. This is determined by comparing the different calibration points and using the one with the largest error, determined by adding the absolute value of the drift mean to 2 times the drift standard deviation. The data set with the largest of those terms is used throughout the rest of the analysis, after outlier removal, as the Final Data Set. (Note that it is possible to use a specific calibration point and neglect the others, only if that is the single point of concern for all devices in the data set.)

The data set basic statistics (i.e., the Mean, Median, Standard Deviation, Variance, Minimum, Maximum, Kurtosis, Skewness, Count and Average Time Interval Between Calibrations) should be computed and displayed for the data set prior to removal of the unique outlier and for the Final Data Set, if different.

3.7. Methods For Verifying Normality
A test for normality can be important because many frequently used statistical methods are based upon an assumption that the data is normally distributed. This assumption applies to the analysis of component calibration data also. For example, the following analyses may rely on an assumption that the data is normally distributed:

  • Determination of a tolerance interval that bounds a stated proportion of the population based on calculation of mean and standard deviation
  • Identification of outliers
  • Pooling of data from different samples into a single population

The normal distribution occurs frequently and is an excellent approximation to describe many processes.

Testing the assumption of normality is important to confirm that the data appears to fit the model of a normal distribution, but the tests do not prove that the normal distribution is a correct model for the data.

At best, it can only be found that the data is reasonably consistent with the characteristics of a normal distribution, and that the treatment of a distribution as normal is conservative. For example, some tests for normality only allow the rejection of the hypothesis that the data is not normally distributed. A group of data passing the test does not mean the data is normally distributed; it only means that there is no evidence to say that it is not normally distributed. However, because of the wealth of industry evidence that drift can be conservatively represented by a normal distribution, a group of data passing these tests is considered as normally distributed without adjustments to the standard deviation of the data set.

Distribution-free techniques are available when the data is not normally distributed; however, these techniques are not as well known and often result in penalizing the results by calculating tolerance intervals that are substantially larger than the normal distribution equivalent. Because of this fact, there is a good reason to demonstrate that the data is normally distributed or can be bounded by the assumption of normality.

Analytically verifying that a sample appears to be normally distributed usually invokes a form of statistics known as hypothesis testing. In general, a hypothesis test includes the following steps:

1) Statement of the hypothesis to be tested and any assumptions
2) Statement of a level of significance to use as the basis for acceptance or rejection of the hypothesis
3) Determination of a test statistic and a critical region
4) Calculation of the appropriate statistics to compare against the test statistic
5) Statement of conclusions

The following sections discuss various ways in which the assumption of normality can be verified to be consistent with the data or can be claimed to be a conservative representation of the actual data.

Analytical hypothesis testing and subjective graphical analyses are discussed. If any of the analytical hypothesis tests (Chi-Squared, D Prime, or W Test) are passed, the coverage analysis and additional graphical analyses are not required. The following are methods for assessing normality:

3.7.1. Chi-Squared (χ²) Goodness of Fit Test
This well-known test is stated as a method for assessing normality in References 7.1.1 and 7.1.2.

The χ² test compares the actual distribution of sample values to the expected distribution. The expected values are calculated by using the normal mean and standard deviation for the sample.

If the distribution is normally or approximately normally distributed, the difference between the actual and expected values should be very small. If the distribution is not normally distributed, the differences should be significant.

3.7.1.1. Equations To Perform The χ² Test

1) First, calculate the mean for the sample group:

    X̄ = ( Σ Xi ) / n        (Ref. 7.1.1)

Where:
Xi - An individual sample data point
X̄ - Mean of all sample data points
n - Total number of data points

2) Second, calculate the standard deviation for the sample group:

    s = sqrt[ ( n·Σx² - (Σx)² ) / ( n(n - 1) ) ]        (Ref. 7.1.1)

Where:
x - Sample data values (x1, x2, x3, ...)
s - Standard deviation of all sample data points
n - Total number of data points

3) Third, the data must be divided into bins to aid in determination of a normal distribution. The number of bins selected is up to the individual performing the analysis; refer to Reference 7.1.1 for further guidance. For most applications, a 12-bin analysis is performed on the drift data. See Section 4.4.


4) Fourth, calculate the χ² value for the sample group:

    χ² = Σ (Oi - Ei)² / Ei,    where Ei = N·Pi        (Ref. 7.1.1)

Where:
Ei - Expected values for the sample
N - Total number of samples in the population
Pi - Probability that a given sample is contained in a bin
Oi - Observed sample values (O1, O2, O3, ...)
χ² - Chi squared result

5) Fifth, calculate the degrees of freedom. The degrees of freedom term is computed as the number of bins used for the chi-square computation minus the constraints. In all cases for these drift calculations, since the count, mean and standard deviation are computed, the constraints term is equal to three.

6) Sixth, compute the Chi squared per degree of freedom term (χ²/df). This term is merely the Chi squared term computed in step 4 above, divided by the degrees of freedom.

7) Finally, evaluate the results. The results are evaluated in the following manner, as prescribed in Reference 7.1.1. If the Chi squared result computed in step 4 is less than or equal to the degrees of freedom, the assumption that the distribution is normal is not rejected. If the value from step 4 is greater than the degrees of freedom, then one final check is made. The degrees of freedom and χ²/df are used to look up the probability of obtaining a χ²/df term greater than the observed value, in percent (see Table C-3 of Reference 7.1.1). If the lookup value is greater than or equal to 5%, then the assumption of normality is not rejected. However, if the lookup value is less than 5%, the assumption of normality is rejected.
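A minimal Python sketch of the 12-bin χ² procedure above, under the assumption of an illustrative, randomly generated drift data set; bin handling details (such as rescaling the expected counts to the observed total) are implementation choices, not requirements of the guideline.

```python
import numpy as np
from scipy import stats

# Illustrative Final Data Set of drift values (% span); randomly generated here.
rng = np.random.default_rng(0)
drift = rng.normal(loc=0.05, scale=0.4, size=120)

n, mean, std = len(drift), drift.mean(), drift.std(ddof=1)

# 12-bin analysis: observed counts, and expected counts from a normal
# distribution with the sample mean and standard deviation.
edges = np.linspace(drift.min(), drift.max(), 13)
observed, _ = np.histogram(drift, bins=edges)
expected = n * np.diff(stats.norm.cdf(edges, loc=mean, scale=std))
expected *= observed.sum() / expected.sum()   # rescale so the totals match

chi_squared = np.sum((observed - expected) ** 2 / expected)
dof = len(observed) - 3          # constraints: count, mean, standard deviation
print(f"chi-squared = {chi_squared:.2f}, degrees of freedom = {dof}")
# Per step 7: if chi_squared <= dof, the assumption of normality is not
# rejected; otherwise chi-squared per degree of freedom is checked further.
```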

3.7.2. W Test
Reference 7.1.4 recommends this test for sample sizes less than 50. The W Test calculates a test statistic value for the sample population and compares the calculated value to the critical values for W, which are tabulated in Reference 7.1.4. The W Test is a lower-tailed test. Thus, if the calculated value of W is less than the critical value of W, the assumption of normality would be rejected at the stated significance level. If the calculated value of W is larger than the critical value of W, there is no evidence to reject the assumption of normality. Reference 7.1.4 establishes the methods and equations required for performing a W Test.
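As a sketch, scipy's shapiro() function implements the Shapiro-Wilk W Test; it reports a p-value rather than requiring a lookup of the critical W values tabulated in Reference 7.1.4, so it is an approximation of the procedure described above. The sample data is an assumption.

```python
import numpy as np
from scipy import stats

# Illustrative drift sample with fewer than 50 points, per the W Test guidance.
rng = np.random.default_rng(1)
drift = rng.normal(loc=0.0, scale=0.3, size=40)

# scipy's shapiro() implements the Shapiro-Wilk W Test; a p-value below 0.05
# would reject the assumption of normality at the 5% significance level.
w_stat, p_value = stats.shapiro(drift)
print(f"W = {w_stat:.4f}, p = {p_value:.4f}")
```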

3.7.3. D-Prime Test
Reference 7.1.4 recommends this test for moderate to large sample sizes, greater than or equal to 50. The D Test calculates a test statistic value for the sample population and compares the calculated value to the values for the D percentage points of the distribution, which are tabulated in Reference 7.1.4. The D Test is two-sided, which means that the two-sided percentage limits at the stated level of significance must envelop the calculated D value. For the given sample size, the calculated value of D must lie within the two values provided in the Reference 7.1.4 table in order to accept the hypothesis of normality.

3.7.3.1. Equations To Perform The D Test

1) First, calculate the linear combination of the sample group. (Note: Data must be placed in ascending order of magnitude, prior to the application of this formula.)

    T = Σ [ i - (n + 1)/2 ] · xi        (Ref. 7.1.4)

Where:
T - Linear combination
xi - An individual sample data point (in ascending order)
i - The number of the sample point
n - Total number of data points

2) Second, calculate S² for the sample group.

    S² = (n - 1) · s²        (Ref. 7.1.4)

Where:
S² - Sum of the squares about the mean
s² - Unbiased estimate of the sample population variance
n - Total number of data points

3) Third, calculate the D' value for the sample group.

    D' = T / S,    where S = sqrt(S²)        (Ref. 7.1.4)

4) Finally, evaluate the results. If the D' value lies within the acceptable range of results (for the given data count) per Table 5 of Reference 7.1.4, for P = 0.025 and 0.975, then the assumption of normality is not rejected. (If the exact data count is not contained within the tables, the critical value limits for the D' value should be linearly interpolated to the correct data count.) If, however, the value lies outside that range, the assumption of normality is rejected.
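A minimal sketch of the D' computation above; the helper name and the generated sample are illustrative, and the acceptance limits must still be taken from Table 5 of Reference 7.1.4.

```python
import numpy as np

def d_prime(sample):
    """D' statistic as described above: T divided by the square root of the
    sum of squares about the mean, with the data in ascending order."""
    x = np.sort(np.asarray(sample, dtype=float))
    n = len(x)
    i = np.arange(1, n + 1)
    t_lin = np.sum((i - (n + 1) / 2.0) * x)      # linear combination T
    s_sq = np.sum((x - x.mean()) ** 2)           # sum of squares about the mean
    return t_lin / np.sqrt(s_sq)

# Illustrative sample of 60 drift values (sample size >= 50 per the guidance).
rng = np.random.default_rng(2)
print(f"D' = {d_prime(rng.normal(0.0, 0.5, size=60)):.4f}")
```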

3.7.4. Probability Plots
Probability plots are discussed since a graphical presentation of the data can reveal possible reasons for why the data is or is not normal. A probability plot is a graph of the sample data with the axes scaled for a normal distribution. If the data is normal, the data tends to follow a straight line. If the data is non-normal, a nonlinear shape should be evident from the graph.

This method of normality determination is subjective, and is not required if the numerical methods show the data to be normal, or if a coverage analysis is used. The types of probability plots used by this design guide are as follows:


  • Cumulative Probability Plot - an XY scatter plot of the Final Data Set plotted against the percent probability (Pi) for a normal distribution. Pi is calculated using the following equation:

    Pi = 100 · (i - 1/2) / n        (Ref. 7.1.1)

where:
i = sample number, i.e., 1, 2, ...
n = sample size

NOTE: Refer, as necessary, to Appendix C, Section C.4 of Reference 7.1.1. (A sketch of these plotting positions follows this list.)

  • Normalized Probability Plot - an XY scatter plot of the Final Data Set plotted against the probability for a normal distribution expressed in multiples of the standard deviation.
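A minimal sketch of the plotting positions used for the cumulative probability plot; the data set is an illustrative assumption.

```python
import numpy as np

# Illustrative Final Data Set (% span), sorted into ascending order.
rng = np.random.default_rng(3)
drift = np.sort(rng.normal(0.0, 0.4, size=50))

n = len(drift)
i = np.arange(1, n + 1)
cumulative_pct = 100.0 * (i - 0.5) / n      # Pi from the equation above

# Plotting drift against cumulative_pct on normal-probability axes (or against
# the corresponding standard normal quantiles for the normalized plot) should
# give an approximately straight line when the data is normal.
for value, p in list(zip(drift, cumulative_pct))[:5]:
    print(f"{value:+.3f}   {p:5.1f}%")
```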

3.7.5. Coverage Analysis
A coverage analysis is discussed for cases in which the hypothesis tests reject the assumption of normality, but the assumption of normality may still be a conservative representation of the data.

The coverage analysis involves the use of a histogram of the Final Data Set, overlaid with the equivalent probability distribution curve for the normal distribution, based on the data sample's mean and standard deviation. Visual examination of the plot is used, and the kurtosis is analyzed to determine if the distribution of the data is near normal. If the data is near normal, then a normal distribution model is derived, which adequately covers the set of drift data, as observed. This normal distribution is used as the model for the drift of the device.

Sample counting is used to determine an acceptable normal distribution. The Standard Deviation of the group is computed. The number of times the samples are within +/- two Standard Deviations of the mean is computed. The count is divided by the total number of samples in the group to determine a percentage. The following table provides the percentage that should fall within the two Standard Deviation values for a normal distribution.

Table 3 - Value For A Normal Distribution

Interval                    Percentage for a Normal Distribution
+/- 2 Standard Deviations   95.45%

If the percentage of data within the two standard deviation tolerance is greater than the value in Table 3 for a given data set, the existing standard deviation is acceptable to be used for the encompassing normal distribution model. However, if the percentage is less than required, the standard deviation of the model is enlarged such that the required percentage falls within the +/- two standard deviation bounds of Table 3. The required multiplier for the standard deviation in order to provide this coverage is termed the Normality Adjustment Factor (NAF). If no adjustment is required, the NAF is equal to one (1).
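A minimal sketch of the coverage check and Normality Adjustment Factor described above; the step size used to enlarge the standard deviation and the sample data are assumptions for illustration.

```python
import numpy as np

def normality_adjustment_factor(drift, target=0.9545):
    """Find the multiplier (NAF) on the sample standard deviation needed so
    that at least 95.45% of the drift values fall within +/- 2 adjusted
    standard deviations of the mean; NAF = 1.0 if no adjustment is needed."""
    drift = np.asarray(drift, dtype=float)
    mean, std = drift.mean(), drift.std(ddof=1)
    naf = 1.0
    while np.mean(np.abs(drift - mean) <= 2.0 * naf * std) < target:
        naf += 0.01                 # enlarge the model in small steps
    return naf

# Illustrative heavy-tailed sample; it may or may not require NAF > 1.
rng = np.random.default_rng(4)
print(f"NAF = {normality_adjustment_factor(rng.standard_t(df=3, size=200) * 0.3):.2f}")
```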

3.8. Binomial Pass/Fail Analysis For Distributions Considered Not To Be Normal
A pass/fail criteria for component performance simply compares the As-Found versus As-Left surveillance drift data against a pre-defined acceptable value of drift. If the drift value is less than the pass/fail criteria, that data point passes; if it is larger than the pass/fail criteria, it fails. By comparing the total number of passes to the number of failures, a probability can be computed for the expected number of component passes in the population. Note that the term failure in this instance does not mean that the component actually failed, only that it exceeded the selected pass/fail criteria for the analysis. Often the pass/fail criteria is established at a point that clearly demonstrates acceptable component performance.

The equations used to determine the Failure Proportion, Normal, Minimum and Maximum Probabilities are as follows:

Failure Proportion:

    Pf = x/n        (Ref. 7.1.1)

where:
x = Number of values exceeding the pass/fail criteria (failures)
n = Total number of drift values in the sample

Normal Probability that a value will pass:

    P = 1 - Pf        (Ref. 7.1.1)

Minimum Probability that a value will pass:

    Pl = (1 - x/n) - z · sqrt[ (x/n)(1 - x/n) / n ]        (Ref. 7.1.1)

Maximum Probability that a value will pass:

    Pu = (1 - x/n) + z · sqrt[ (x/n)(1 - x/n) / n ]        (Ref. 7.1.1)

where:
Pl = the minimum probability that a value will pass
Pu = the maximum probability that a value will pass
z = the standardized normal distribution value corresponding to the desired confidence level, e.g., z = 1.96 for a 95% confidence level

The Binomial Pass/Fail Analysis is a good tool for verifying that drift values calculated for calibration extensions are appropriate for the interval. See Reference 7.1.1 for the necessary detail to perform a pass/fail analysis.
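A minimal sketch of the pass/fail probability computation above; the function name, the failure count, and the sample size are illustrative assumptions.

```python
import math

def pass_fail_bounds(failures: int, total: int, z: float = 1.96):
    """Failure proportion and the normal, minimum and maximum probabilities
    that a value will pass, per the equations above."""
    pf = failures / total                         # failure proportion Pf
    p = 1.0 - pf                                  # normal probability of passing
    half_width = z * math.sqrt(pf * (1.0 - pf) / total)
    return p, p - half_width, p + half_width      # P, Pl, Pu

# Example: 3 of 150 drift values exceeded the chosen pass/fail criteria.
p, p_lo, p_hi = pass_fail_bounds(3, 150)
print(f"P = {p:.3f}, Pl = {p_lo:.3f}, Pu = {p_hi:.3f}")
```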

3.9. Time-Dependent Drift Analysis
The component/loop drift calculated in the previous sections represented a predicted performance limit, without any consideration of whether the drift may vary with time between calibrations or component age. This section discusses the importance of understanding the time-related performance and the impact of any time-dependency on an analysis. Understanding the time dependency can be either important or unimportant, depending on the application. A time dependency analysis is important whenever the drift analysis results are intended to support an extension of calibration intervals.

3.9.1. Limitations of Time Dependency Analyses
Reference 7.1.1 performed drift analysis for numerous components at several nuclear plants as part of the project. The data evaluated did not demonstrate any significant time-dependent or age-dependent trends. Time dependency may have existed in all of the cases analyzed, but was insignificant in comparison to other uncertainty contributors. Because time dependency cannot be completely ruled out, there should be an ongoing trending evaluation to verify that component drift continues to meet expectations whenever calibration intervals are extended.

3.9.2. Scatter (Drift Interval) Plot
A drift interval plot is an XY scatter plot that shows the Final Data Set plotted against the time interval between tests for the data points. This plot method relies upon the human eye to identify any trend in the data that would indicate a time dependency. A prediction line can be added to this plot which shows a "least squares" fit of the data over time. This can provide visual evidence of an increasing or decreasing mean over time, considering all drift data. An increasing standard deviation is indicated by a trend towards increasing "scatter" over the increased calibration intervals.

3.9.3. Standard Deviations and Means at Different Calibration Intervals (Binning Analysis)

This analysis technique is the preferred method of determining time dependent tendencies in a given sample pool. The test consists simply of segregating the drift data into different groups (Bins) corresponding to different ranges of calibration or surveillance intervals and comparing the standard deviations and means for the data in the various groups. The purpose of this type of analysis is to determine if the standard deviation or mean tends to become larger as the time interval between calibrations increases.

3.9.3.1. The available data is placed in interval bins. The intervals normally used coincide with Technical Specification calibration intervals plus the allowed tolerance as follows:

a. 0 to 45 days (covers most weekly and monthly calibrations)
b. 46 to 135 days (covers most quarterly calibrations)
c. 136 to 230 days (covers most semi-annual calibrations)
d. 231 to 460 days (covers most annual calibrations)
e. 461 to 690 days (covers most 18 month refuel cycle calibrations)
f. 691 to 900 days (covers most extended refuel cycle calibrations)
g. > 900 days (covers missed and forced outage refueling cycle calibrations)

Data will naturally fall into these time interval bins based on the calibration requirements for the subject instrument loops. Only on occasion will a device be calibrated on a much longer or shorter interval than that of the rest of the population within its calibration requirement group. Therefore, the data will naturally separate into groups for analysis.

3.9.3.2. Different bin splits may be used, but must be evaluated for data coverage and acceptable data groupings.

3.9.3.3. For each bin where there is data, the mean (average), standard deviation, average time interval and data count will be computed.

3.9.3.4. To determine if time dependency does or does not exist, the data needs to be distributed across multiple bins, with a sufficient population of data in each of two or more bins, to consider the statistical results for those bins to be valid. Normally the minimum expected distribution that would allow evaluation is defined below.

a. A bin is considered valid in the final analysis if it holds more than five data points and more than ten percent of the total data count.


b. At least two bins, including the bin with the most data, must be left for evaluation to occur.

The distribution percentages listed in these criteria are somewhat arbitrary, and thus engineering evaluation can modify them for a given situation.

The mean and standard deviations of the valid bins are plotted versus average time interval on a diagram. This diagram can give a good visual indication of whether or not the mean or standard deviation of a data set is increasing significantly over time interval between calibrations.
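A minimal sketch of the binning analysis described in 3.9.3.1 through 3.9.3.3, using pandas; the records, column names, and bin labels are illustrative assumptions.

```python
import pandas as pd

# Illustrative drift records: calibration interval (days) and drift (% span).
records = pd.DataFrame({
    "interval_days": [30, 40, 90, 100, 200, 420, 430, 500, 520, 640, 700, 880],
    "drift_pct": [0.05, -0.02, 0.10, 0.04, -0.08, 0.15, 0.12, -0.20,
                  0.18, 0.25, -0.22, 0.30],
})

# Interval bins from 3.9.3.1 (a through g).
bins = [0, 45, 135, 230, 460, 690, 900, float("inf")]
labels = ["0-45", "46-135", "136-230", "231-460", "461-690", "691-900", ">900"]
records["bin"] = pd.cut(records["interval_days"], bins=bins, labels=labels)

# Per-bin mean, standard deviation, average interval and count, per 3.9.3.3.
summary = records.groupby("bin", observed=True).agg(
    mean_drift=("drift_pct", "mean"),
    std_drift=("drift_pct", "std"),
    avg_interval=("interval_days", "mean"),
    count=("drift_pct", "count"),
)
print(summary)
```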

NOTE: If multiple valid bins do NOT exist for a given data set, then the plot is not to be shown, and the regression analyses are not to be performed. The reasoning is that there is not enough diversity in the calibration intervals analyzed to make meaningful conclusions about time dependency from the existing data. Unless overwhelming evidence to the contrary exists in the scatter plot, the single bin data set is established as moderately time dependent for the purposes of extrapolation of the drift value.

3.9.4. Regression Analyses and Plots
Regression Analyses can often provide very valuable data for the determination of time dependency. A standard regression analysis within an EXCEL spreadsheet can plot the drift data versus time, with a prediction line showing the trend. It can also provide Analysis of Variance (ANOVA) table printouts, which contain information required for various numerical tests to determine the level of dependency between two parameters (time and drift value). Note that regression analyses are only to be performed if multiple valid bins are determined from the binning analysis.

Regression Analyses are to be performed on the Final Data Set drift values and on the Absolute Value of the Final Data Set drift values. The Final Data Set drift values show trends for the mean of the data set, and the Absolute Values show trends for the standard deviation over time.

Regression Plots
The following are descriptions of the two plots generated by these regressions.

  • Drift Regression - an XY scatter plot that fits a line through the final drift data plotted against the time interval between tests for the data points using the "least squares" method to predict values for the given data set. The predicted line is plotted through the actual data for use in predicting drift over time. It is important to note that statistical outliers can have a dramatic effect upon the regression line.
  • Absolute Value Drift Regression - an XY scatter plot that fits a line through the Absolute Value of the final drift data plotted against the time interval between tests for the data points using the "least squares" method to predict values for the given data set. The predicted line is plotted through the actual data for use in predicting drift, in either direction, over time. It is important to note that statistical outliers can have a dramatic effect upon the regression line.

Regression Time Dependency Analytical Tests
Typical spreadsheet software can produce ANOVA tables with regression analyses. ANOVA tables give various statistical information, which allows certain numerical tests to be employed to search for time dependency of the drift data. For each of the two regressions (drift regression and absolute value drift regression), the following ANOVA parameters are used to determine if time dependency of the drift data is evident. All tests listed should be evaluated, and if time dependency is indicated by any of the tests, the data should be considered time dependent.

  • R Squared Test - The R Squared value, printed out in the ANOVA table, is a relatively good indicator of time dependency. If the value is greater than 0.09 (thereby indicating an R value greater than 0.3), the data conforms reasonably closely to a linear function and should therefore be considered time dependent.
  • P Value Test - A P Value for X Variable 1 (as indicated by the ANOVA table for an EXCEL spreadsheet) less than 0.05 is indicative of time dependency.
  • Significance of F Test - An ANOVA table F value greater than the critical F-table value would indicate a time dependency. In an EXCEL spreadsheet, the FINV function can be used to return critical values from the F distribution. To return the critical value of F, use the significance level (in this case 0.05 or 5.0%) as the probability argument to FINV, 1 as the numerator degrees of freedom, and the data count minus two as the denominator. If the F value in the ANOVA table exceeds the critical value of F, then the drift is considered time dependent.

NOTE: For each of these tests, if time dependency is indicated, the plots should be observed to determine the reasonableness of the result. The tests above generally assess the possibility that the function of drift is linear over time, not necessarily that the function is significantly increasing over time. Time dependency can be indicated even when the plot shows the drift to remain approximately the same or decrease over time. Generally, a decreasing drift over time is not expected for instrumentation, nor is a case where the drift function crosses zero. Under these conditions, the extrapolation of the drift term would normally be established assuming no time dependency, if extrapolation of the results is required beyond the analyzed time intervals between calibrations.
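A minimal sketch of the three ANOVA-based checks above (R Squared, P Value, and Significance of F), applied to both the drift regression and the absolute value drift regression; the data and the helper name are illustrative assumptions, and scipy's f.ppf plays the role of the Excel FINV function.

```python
import numpy as np
from scipy import stats

# Illustrative Final Data Set: calibration interval (days) and drift (% span).
interval = np.array([200, 210, 420, 430, 450, 460, 640, 660, 700, 720, 860, 880])
drift = np.array([0.05, -0.02, 0.10, 0.08, 0.12, -0.04, 0.15, 0.18,
                  0.20, 0.10, 0.22, 0.25])

def time_dependency_tests(x, y, alpha=0.05):
    """Apply the R Squared, P Value, and Significance of F checks described
    above to a single least-squares regression of y on x."""
    res = stats.linregress(x, y)
    n = len(x)
    r_squared = res.rvalue ** 2
    f_value = r_squared * (n - 2) / (1.0 - r_squared)   # F for simple regression
    f_critical = stats.f.ppf(1.0 - alpha, 1, n - 2)     # Excel FINV(0.05, 1, n-2)
    return {"r_squared_test": r_squared > 0.09,
            "p_value_test": res.pvalue < alpha,
            "f_test": f_value > f_critical}

print("drift regression:         ", time_dependency_tests(interval, drift))
print("absolute value regression:", time_dependency_tests(interval, np.abs(drift)))
```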

3.9.5. Additional Time Dependency Analyses

  • Instrument Resetting Evaluation - For data sets that consist of a single calibration interval, the time dependency determination may be accomplished simply by evaluating the frequency at which instruments require resetting. This type of analysis is particularly useful when applied to extend quarterly Technical Specification surveillances to a semi-annual frequency.

However, it is less useful for instruments such as sensors or relays that may be reset at each calibration interval regardless of whether the instrument was already in calibration.

The Instrument Resetting Evaluation may be performed only if the devices in the sample pool are shown to be stable, not requiring adjustment (i.e., less than 5% of the data shows that adjustments were made). Care also must be taken when mechanical connections or flex points may be exercised by the act of checking calibration (actuation of a bellows or switch movement), where the act of checking the actuation point may have an effect on the next reading. Methodology for calculating the drift is as follows:

Quarterly As-Found/As-Left (As-Found Current Calibration - As-Left Previous Calibration) or AF1 - AL2 (Ref. 7.1.1)

Semi-Annual As-Found/As-Left using Monthly Data (AF1 - AL2) + (AF2 - AL3) (Ref. 7.1.1)

3.9.6. Age-Dependent Drift Considerations
Age-dependency is the tendency for a component's drift to increase in magnitude as the component ages. This can be assessed by plotting the As-Found value for each calibration minus the previous calibration As-Left value of each component over the period of time for which data is available. Random fluctuations around zero may obscure any age-dependent drift trends. By plotting the absolute values of the As-Found versus As-Left calibration data, the tendency for the magnitude of drift to increase with time can be assessed. This analysis is generally not performed as a part of a standard drift study, but can be used when establishing maintenance practices.

3.10. Calibration Point Drift
For devices with multiple calibration points (e.g., transmitters, indicators, etc.), the Drift-Calibration Point Plot is a useful tool for comparing the amount of drift exhibited by the group of devices at the different calibration points. The plot consists of a line graph of tolerance interval as a function of calibration point. This is useful to understand the operation of an instrument, but is not normally included as a part of a standard drift study.

3.11. Drift Bias Determination
If an instrument or group of instruments consistently drifts predominantly in one direction, the drift is assumed to have a bias. When the absolute value of the calculated average for the sample pool exceeds the values in Table 4 for the given sample size and calculated standard deviation, the average is treated as a bias to the drift term. The application of the bias must be carefully considered separately, so that the overall treatment of the analyzed drift remains conservative. Refer to Example 1 below.

Table 4 - Maximum Values of Non-Biased Mean

Sample    Normal Deviate (t)     Maximum Value of Non-Biased Mean (xcrit) For Given STDEV (s)
Size (n)  @ 0.025 for 95%
          Confidence             s=0.10%  s=0.25%  s=0.50%  s=0.75%  s=1.00%  s=1.50%  s=2.00%  s=2.50%  s=3.00%
    5          2.571              0.115    0.287    0.575    0.862    1.150    1.725    2.300    2.874    3.449
   10          2.228              0.070    0.176    0.352    0.528    0.705    1.057    1.409    1.761    2.114
   15          2.131              0.055    0.138    0.275    0.413    0.550    0.825    1.100    1.376    1.651
   20          2.086              0.047    0.117    0.233    0.350    0.466    0.700    0.933    1.166    1.399
   25          2.060              0.041    0.103    0.206    0.309    0.412    0.618    0.824    1.030    1.236
   30          2.042              0.037    0.093    0.186    0.280    0.373    0.559    0.746    0.932    1.118
   40          2.021              0.032    0.080    0.160    0.240    0.320    0.479    0.639    0.799    0.959
   60          2.000              0.026    0.065    0.129    0.194    0.258    0.387    0.516    0.645    0.775
  120          1.980              0.018    0.045    0.090    0.136    0.181    0.271    0.361    0.452    0.542
 >120          1.960              Values Computed Per Equation Below

The maximum value of non-biased mean (xcrit) for a given standard deviation (s) and sample size (n) is calculated using the following formula:

    xcrit = t · s / sqrt(n)        (Ref. 7.3.2)

Where:
xcrit = Maximum value of non-biased mean for a given s and n, expressed in %
t = Normal deviate for a t-distribution @ 0.025 for 95% confidence
s = Standard deviation of the sample pool
n = Sample pool size

Example of determining and applying bias to the analyzed drift term:

1) Transmitter Group With a Biased Mean - A group of transmitters is calculated to have a standard deviation of 1.150% and a mean of -0.355%, with a count of 47. From Table 4, the maximum value that a negligible mean could be is +/- 0.258%. Therefore, the mean value is significant and must be considered. The analyzed drift term for a 95%/95% tolerance interval level is DA = -0.355% +/- 1.150% x 2.408 (TIF from Table 1 for 47 samples), or DA = -0.355% +/- 2.769%. For conservatism, the DA term for the positive direction is not reduced by the bias value, whereas the negative direction is summed with the bias value, so DA = +2.769%, -3.124%.

2) Transmitter Group With a Non-Biased Mean - A group of transmitters is calculated to have a standard deviation of 1.150% and a mean of 0.100%, with a count of 47. From Table 4, the maximum value that a negligible mean could be is +/- 0.258%. Therefore, the mean value is insignificant and can be neglected. The analyzed drift term for a 95%/95% tolerance interval level is DA = +/- 1.150% x 2.408 (TIF from Table 1 for 47 samples), or DA = +/- 2.769%.
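A minimal sketch of the non-biased mean limit computation used in the examples above; the function name is an illustrative assumption, and the t-distribution degrees of freedom are taken as n - 1 here.

```python
from math import sqrt
from scipy import stats

def x_crit(std_dev_pct: float, n: int) -> float:
    """Maximum value of a non-biased mean: the 95% confidence (two-sided,
    0.025 per tail) t deviate times s / sqrt(n)."""
    t = stats.t.ppf(0.975, n - 1)
    return t * std_dev_pct / sqrt(n)

# If the absolute value of the drift mean exceeds x_crit, the mean is
# treated as a bias on the analyzed drift term.
print(f"x_crit = {x_crit(1.00, 30):.3f} %")   # close to the Table 4 entry of 0.373
```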

3.12. Time Dependent Drift Uncertainty
When calibration intervals are extended beyond the range for which historical data is available, the statistical confidence in the ability to predict drift is reduced. The bias and the random portions of the drift are extrapolated separately, but in the same manner. Where the analysis shows slight to moderate time dependency, or time dependency is indeterminate, the formula below is used.

    DAExtended = DA x sqrt( Rqd_Calibration_Interval / Max_FDS_Time_Interval )

Where:
DAExtended = the newly determined, extrapolated Drift Bias or Random Term
DA = the bias or random drift term from the Final Data Set
Max_FDS_Time_Interval = the maximum observed time interval within the Final Data Set
Rqd_Calibration_Interval = the worst case calibration interval, once the calibration interval requirement is changed.

This method assumes that the drift to time relationship is not linear. Where there is indication of a strong relationship between drift and time, the following formula may be used.

    DAExtended = DA x ( Rqd_Calibration_Interval / Max_FDS_Time_Interval )

Where the terms are the same as defined above.

Where it can be shown that there is no relationship between surveillance interval and drift, the drift value determined may be used for other time intervals, without change. However, for conservatism, due to the uncertainty involved in extrapolation to time intervals outside of the analysis period, drift values that show minimal or no particular time dependency are generally treated as moderately time dependent, for the purposes of the extrapolation.
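A minimal sketch of the extrapolation described in this section; the square-root form for moderate or indeterminate time dependency reflects the reconstruction of the first formula above, and the interval values are illustrative assumptions.

```python
from math import sqrt

def extrapolate_drift(da: float, max_fds_interval_days: float,
                      required_interval_days: float,
                      strong_time_dependence: bool = False) -> float:
    """Extrapolate a drift bias or random term to a longer calibration
    interval: square-root scaling for moderate or indeterminate time
    dependency, linear scaling for a strong time dependence."""
    ratio = required_interval_days / max_fds_interval_days
    return da * (ratio if strong_time_dependence else sqrt(ratio))

# Illustrative numbers: a 2.769% random drift term observed over at most
# 690 days, extrapolated to a 913-day worst-case calibration interval.
print(f"moderate: {extrapolate_drift(2.769, 690, 913):.3f} %")
print(f"strong:   {extrapolate_drift(2.769, 690, 913, True):.3f} %")
```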

3.13. Maintenance Of Analysis Results Any analysis result based on the historical performance of existing components may require updating should performance significantly change. Predictions for future component/loop performance are based upon our knowledge of past calibration performance. This approach assumes that changes in component/loop performance occur slowly or not at all over time. For example, if evaluation of the last ten years of data shows the component/loop drift is stable with no observable trend, there is little reason to expect a dramatic change in performance during the next year. However, it is also difficult to claim that an analysis completed today is still a valid indicator of component/loop performance ten years from now. For this reason, the analysis results should be confirmed through an instrument trending program.

Where the analysis is shown to no longer apply to an instrument (demonstrated performance outside of predicted limits) the analysis may be updated or other corrective action may be taken.

Depending on the type of component/loop, the analysis results are also dependent on the method of calibration, the component/loop span, and the M&TE accuracy. Any of the following program or component/loop changes should be evaluated to determine if they affect the analysis results.

  • Changes to M&TE accuracy
  • Changes to the component or loop (e.g. span, environment, manufacturer, model, etc.)
  • Calibration procedure changes that alter the calibration methodology
4. PERFORMING AN ANALYSIS
Drift data for Technical Specification and ATR related instruments is collected as a part of the Cook Nuclear Plant evaluations for extension of plant surveillance intervals. The collected data is entered into Microsoft Excel spreadsheets, grouped by manufacturer and model number. All data is also entered into the IPASS software program for independent review of certain drift analysis functions. The drift analysis is performed using EXCEL spreadsheets. The IPASS analyses are all embedded in the software, and it is not possible to follow each specific analysis. The discussion provided in this section is to assist in setting up a spreadsheet. For IPASS analysis instructions, see the IPASS Users Manual (Reference 7.1.5).

Microsoft Excel spreadsheets generally compute values to approximately 15 digits of precision, which is well beyond any required rounding for engineering analyses. However, for printing and display purposes, most values are displayed at lower resolution. It is possible that hand computations would produce slightly different results because of using rounded numbers in initial and intermediate steps, but the Excel computed values are considered highly accurate in comparison. Values with significant differences between the original computations and the computations of the independent verifier are to be investigated to ensure that the Excel spreadsheet is properly computing the required values.

4.1. Populating The Spreadsheet

4.1.1. For A New Analysis

4.1.1.1. The Responsible Engineer determines the component group to be analyzed (e.g., all Foxboro pressure transmitters). The Responsible Engineer should determine the possible sub-groups within the large groupings which, from an engineering perspective, might show different drift characteristics and therefore may warrant separation into smaller groups. This would entail looking at the manufacturer, model, calibration span, setpoints, time intervals, specifications, locations, environment, etc., as necessary.

4.1.1.2. The Responsible Engineer develops a list of component numbers, manufacturers, models, component types, brief descriptions, surveillance tests, calibration procedures and calibration information (spans, setpoints, etc.).

4.1.1.3. The Responsible Engineer determines the data to be collected, following the guidance of Sections 3.4 through 3.6 of this Design Guide.

4.1.1.4. The Data Entry Person identifies, locates and collects data for the component group to be analyzed (e.g., all Surveillance Tests for the Foxboro Transmitters completed to present).

4.1.1.5. The Data Entry Person sorts the data by surveillance test or calibration procedure if more than one test/procedure is involved.

4.1.1.6. The Data Entry Person sorts the surveillance or calibration sheets by date in descending order, starting with the most recent date.

4.1.1.7. The Data Entry Person enters the Surveillance or Calibration Procedure Number, Tag Numbers, Required Trips, Indications or Outputs, Date, As-Found values and As-Left values on the appropriate data entry sheet.

4.1.1.8. The Responsible Engineer verifies the data entered.

4.1.1.9. The Responsible Engineer reviews the notes on each calibration data sheet to determine possible contributors for excluding data. The notes should be condensed and entered onto the EXCEL spreadsheet for the applicable calibration points. Where appropriate and obvious, the Responsible Engineer should remove the data that is invalid for calculating drift for the device.

4.1.1.10. The Responsible Engineer calculates the time interval for each drift point by subtracting the date of the previous calibration from the date of the subject calibration. (If the data is not valid for either the As-Left or As-Found calibration information, then the value is not required to be computed for this data point.)

4.1.1.11. The Responsible Engineer calculates the Drift value for each calibration by subtracting the As-Left value of the previous calibration from the As-Found value of the subject calibration. (If the data is not valid for either the As-Left or As-Found calibration information, then the value is not computed for this data point.)
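A minimal sketch of steps 4.1.1.10 and 4.1.1.11 using pandas in place of a spreadsheet; the column names and calibration records are illustrative assumptions.

```python
import pandas as pd

# Illustrative calibration history for one tag (AF = As-Found, AL = As-Left,
# both in % span).
cal = pd.DataFrame({
    "date": pd.to_datetime(["2001-04-10", "2002-10-02", "2004-04-20"]),
    "as_found": [0.12, 0.18, 0.05],
    "as_left": [0.02, 0.00, 0.01],
}).sort_values("date")

# Step 4.1.1.10: interval = subject calibration date minus previous date.
cal["interval_days"] = cal["date"].diff().dt.days
# Step 4.1.1.11: drift = subject As-Found minus previous As-Left.
cal["drift"] = cal["as_found"] - cal["as_left"].shift(1)
print(cal)
```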

4.2. Spreadsheet Performance Of Basic Statistics
Separate data columns are created for each calibration point within the calibrated span of the device. The % Span of each calibration point should closely match from device to device within a given analysis.

Basic statistics include, at a minimum, determining the number of data points in the sample, the average drift, the average time interval between calibrations, standard deviation of the drift, variance of the drift, minimum drift value, maximum drift value, kurtosis, and skewness contained in each data column. This section provides the specific details for using Microsoft Excel. Other spreadsheet programs, statistical or Math programs that are similar in function, are acceptable for use to perform the data analysis, provided all analysis requirements are met.

4.2.1. Determine the number of data points contained in each column for each initial group by using the COUNT function. Example cell format = COUNT(C2:C133). The Count function returns the number of all populated cells within the range of cells C2 through C133.

4.2.2. Determine the average for the data points contained in each column for each initial group by using the AVERAGE function. Example cell format = AVERAGE(C2:C133). The Average function returns the average of the data contained within the range of cells C2 through C133.

This average is also known as the mean of the data. This same method should be used to determine the average time interval between calibrations.

4.2.3. Determine the standard deviation for the data points contained in each column for each initial group by using the STDEV function. Example cell format =STDEV(C2:C133). The Standard Deviation function returns the measure of how widely values are dispersed from the mean of the data contained within the range of cells C2 through C133. Formula used by Microsoft Excel to determine the standard deviation:

STD (Standard Deviation of the sample population) (Ref. 7.3.5):

$$s = \sqrt{\frac{n \sum x^2 - \left(\sum x\right)^2}{n(n-1)}}$$

Where:
x - Sample data values (x1, x2, x3, ...)
s - Standard deviation of all sample data points
n - Total number of data points

4.2.4. Determine the variance for the data points contained in each column for each initial group by using the VAR function. Example cell format =VAR(C2:C133). The Variance function returns the measure of how widely values are dispersed from the mean of the data contained within the range of cells C2 through C133. Formula used by Microsoft Excel to determine the variance:

VAR (Variance of the sample population) (Ref. 7.3.5):

$$s^2 = \frac{n \sum x^2 - \left(\sum x\right)^2}{n(n-1)}$$

Where:
x - Sample data values (x1, x2, x3, ...)
s² - Variance of the sample population
n - Total number of data points

4.2.5. Determine the kurtosis for the data points contained in each column for each initial group by using the KURT function. Example cell format =KURT(C2:C133). The Kurtosis function returns the relative peakedness or flatness of the distribution within the range of cells C2 through C133. Formula used by Microsoft Excel to determine the kurtosis (Ref. 7.3.5):

$$KURT = \frac{n(n+1)}{(n-1)(n-2)(n-3)} \sum \left(\frac{x_i - \bar{x}}{s}\right)^4 - \frac{3(n-1)^2}{(n-2)(n-3)}$$

Where:
x - Sample data values (x1, x2, x3, ...)
n - Total number of data points
s - Sample Standard Deviation

4.2.6. Determine the skewness for the data points contained in each column for each initial group by using the SKEW function. Example cell format =SKEW(C2:C133). The Skewness function returns the degree of symmetry around the mean of the cells contained within the range of cells C2 through C133. Formula used by Microsoft Excel to determine the skewness (Ref. 7.3.5):

$$SKEW = \frac{n}{(n-1)(n-2)} \sum \left(\frac{x_i - \bar{x}}{s}\right)^3$$

Where:
x - Sample data values (x1, x2, x3, ...)
n - Total number of data points
s - Sample Standard Deviation

4.2.7. Determine the maximum value for the data points contained in each column for each initial group by using the MAX function. Example cell format =MAX(C2:C133). The Maximum function returns the largest value of the cells contained within the range of cells C2 through C133.

4.2.8. Determine the minimum value for the data points contained in each column for each initial group by using the MIN function. Example cell format =MIN(C2:C133). The Minimum function returns the smallest value of the cells contained within the range of cells C2 through C133.

4.2.9. Determine the median value for the data points contained in each column for each initial group by using the MEDIAN function. Example cell format =MEDIAN(C2:C133). The median is the number in the middle of a set of numbers; that is, half the numbers have values that are greater than the median, and half have values that are less. If there is an even number of data points in the set, then MEDIAN calculates the average of the two numbers in the middle.
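For reference, the Section 4.2 statistics can also be reproduced outside of Excel. The Python sketch below is illustrative only; the bias-corrected options in scipy are assumed to correspond to Excel's sample-based STDEV, VAR, KURT, and SKEW conventions, and the data values are hypothetical.

```python
import numpy as np
from scipy import stats

# Hypothetical drift column for one calibration point, in percent of span.
drift = np.array([0.05, -0.12, 0.03, 0.20, -0.07, 0.01, 0.09, -0.15, 0.04, 0.02])

summary = {
    "count": drift.size,                            # Excel COUNT
    "mean": drift.mean(),                           # Excel AVERAGE
    "median": np.median(drift),                     # Excel MEDIAN
    "stdev": drift.std(ddof=1),                     # Excel STDEV (sample standard deviation)
    "variance": drift.var(ddof=1),                  # Excel VAR (sample variance)
    "min": drift.min(),                             # Excel MIN
    "max": drift.max(),                             # Excel MAX
    "kurtosis": stats.kurtosis(drift, bias=False),  # Excel KURT (bias-corrected excess kurtosis)
    "skewness": stats.skew(drift, bias=False),      # Excel SKEW (bias-corrected skewness)
}
print(summary)
```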

4.2.10. Where sub-groups that have engineering reasons for possible separation have been combined in a data set, analyze the statistics and component data of the sub-groups to determine whether combining the data is acceptable.

4.2.11. Perform a t-Test in accordance with step 3.5.4 on each possible sub-group combination to test for the acceptability of combining the data.

Acceptability for combining the data is indicated when the absolute value of the Test Statistic (t Stat) is greater than the t Critical two-tail value. Example: the t Stat for combining sub-groups A and B may be 0.703, which is larger than the t Critical two-tail value of 0.485. However, as a part of this process, the Responsible Engineer should ensure that an indication of unacceptability does not mask time dependency. In other words, if the only difference between the groupings is the calibration interval, the differences in the data characteristics could exist because of time dependent drift. If this is the only difference, the data should be combined, even though the tests show that it may not be appropriate.
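For illustration, the sub-group comparison can be reproduced with a general statistics package. The sketch below assumes the equal-variance form of the two-sample t-test (mirroring the Excel Data Analysis tool); the group names and values are hypothetical, and the resulting t Stat and t Critical two-tail values are compared per the acceptance criterion and cautions stated above.

```python
import numpy as np
from scipy import stats

# Hypothetical drift samples from two candidate sub-groups (e.g., two range codes).
group_a = np.array([0.04, -0.10, 0.02, 0.15, -0.06, 0.01])
group_b = np.array([0.07, -0.02, 0.11, 0.05, -0.04, 0.09])

# Two-sample t-test assuming equal variances.
t_stat, p_two_tail = stats.ttest_ind(group_a, group_b, equal_var=True)

# Critical value of t for a two-tailed test at alpha = 0.05.
dof = len(group_a) + len(group_b) - 2
t_crit_two_tail = stats.t.ppf(1 - 0.05 / 2, dof)

print(f"t Stat = {t_stat:.3f}, t Critical two-tail = {t_crit_two_tail:.3f}, P = {p_two_tail:.3f}")
# Compare |t Stat| to the t Critical two-tail value per the acceptance criterion stated above,
# keeping in mind the caution about masking time dependency.
```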

4.3. Outlier Detection And Expulsion Refer to Section 3.7 for a detailed explanation of Outliers.

4.3.1. Obtain the Critical Values for the t-Test from Table 2, which is based on the sample size of the data contained within the specified range of cells. Use the COUNT value to determine the sample size.

4.3.2. Perform the outlier test for all the samples. For any values that show up as outliers, analyze the initial input data to determine if the data is erroneous. If so, remove the data in the earlier pages of the spreadsheet, and re-run all of the analysis up to this point. Continue this process until all erroneous data has been removed. The reason for removal of the data will be documented in the calculation.
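A minimal sketch of this screening step is shown below for illustration; the critical value would come from Table 2 for the applicable sample size (the number shown is a placeholder), and the drift values are hypothetical.

```python
import numpy as np

# Hypothetical drift column (percent of span) and a placeholder critical value
# representing the Table 2 entry for this sample size.
drift = np.array([0.05, -0.12, 0.03, 0.95, -0.07, 0.01, 0.09, -0.15, 0.04, 0.02])
critical_value = 2.29

mean = drift.mean()
s = drift.std(ddof=1)

# Flag points whose standardized distance from the mean exceeds the critical value;
# flagged points are then reviewed against the original calibration records.
t_values = np.abs(drift - mean) / s
outliers = drift[t_values > critical_value]
print("Candidate outliers:", outliers)
```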

4.3.3. If any outliers are still displayed and removal is appropriate, remove the worst-case outlier as a statistical outlier, per step 3.6.3 above. Only for a special, rare case may up to 2.5% of the population be removed as outliers; if this is done, the justification must be provided within the drift study.

Once the outlier(s) have been removed, the remaining data set is the Final Data Set.

4.3.4. For transmitters, or other devices with multiple calibration points, the general process is to use the calibration point with the worst case drift values. This is determined by comparing the different calibration points and using the one with the largest error, determined by adding the absolute value of the mean to 2 times the standard deviation. The data set with the largest of those terms is used throughout the rest of the analysis, after outlier removal, as the Final Data Set. (Note that it is possible to use a specific calibration point and neglect the others, only if that is the single point of concern for all devices in the data set.)
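The selection rule in step 4.3.4 (the calibration point with the largest absolute mean plus two standard deviations) is straightforward to express programmatically. The following sketch is illustrative only; the point labels and values are hypothetical.

```python
import numpy as np

# Hypothetical drift columns, one per calibration point (percent of span).
points = {
    "0%":   np.array([0.02, -0.05, 0.01, 0.04, -0.03]),
    "50%":  np.array([0.06, -0.11, 0.09, 0.13, -0.08]),
    "100%": np.array([0.03, -0.06, 0.05, 0.07, -0.02]),
}

# Score each calibration point as |mean| + 2 * sample standard deviation, then
# carry the worst-case column forward as the candidate Final Data Set.
scores = {name: abs(col.mean()) + 2 * col.std(ddof=1) for name, col in points.items()}
worst_case = max(scores, key=scores.get)
print(scores, "-> worst-case calibration point:", worst_case)
```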

4.3.5. Recalculate the Average, Median, Standard Deviation, Variance, Minimum, Maximum, Kurtosis, Skewness, Count and Average Time Interval Between Calibrations for the Final Data Set.

4.4. Normality Tests

To test for normality of the Final Data Set, the first step is to perform the required hypothesis testing. For Final Data Sets with 50 or more data points, the hypothesis testing can be done with either the Chi-Square Test (3.7.1) or the D' Test (3.7.3). If the Final Data Set has less than 50 data points, the W Test (3.7.2) or Chi-Square Test may be used. The Chi-Square test should generally be performed with 12 bins of data, starting from [−∞ to (mean − 2.5σ)], with bin increments of 0.5σ, and ending at [(mean + 2.5σ) to +∞].

(Since the same bins are to be used for the histogram in the coverage analysis, the work for these two tasks may be combined.) If the data passes either of the tests, only the passed test need be shown in the spreadsheet. However, if the assumption of normality is rejected by both of the hypothesis tests, the results of both tests should be presented.

If the assumption of normality is rejected by both tests, then a coverage analysis should be performed as described in Section 3.7.5. As explained above for the Chi-Square test, the coverage analysis and histogram are established with a 12-bin approach unless inappropriate for the application.

If an adjustment is required to the standard deviation to provide a normal distribution that adequately covers the data set, then the required multiplier to the standard deviation (Normality Adjustment Factor (NAF)) is determined iteratively in the coverage analysis. This multiplier produces a normal distribution model for the drift, which shows adequate data population from the Final Data Set within the +/- 2σ bands of the model.
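The iterative character of the NAF determination can be illustrated with the simplified Python fragment below. It is only a sketch, assuming a single acceptance check that at least 95.45 percent of the Final Data Set falls within the +/- 2 sigma band of the adjusted model; the full procedure is that of Section 3.7.5, and the data values are hypothetical.

```python
import numpy as np

# Hypothetical Final Data Set drift values (percent of span).
drift = np.array([0.05, -0.30, 0.02, 0.45, -0.08, 0.01, 0.12, -0.52, 0.04, 0.03,
                  0.60, -0.15, 0.07, -0.41, 0.09])

mean = drift.mean()
sigma = drift.std(ddof=1)

# Iteratively inflate the standard deviation until the +/- 2-sigma band of the
# normal model covers at least the fraction a true normal distribution would (95.45%).
naf = 1.0
target = 0.9545
while True:
    covered = np.mean(np.abs(drift - mean) <= 2 * naf * sigma)
    if covered >= target:
        break
    naf += 0.01  # small increment; the step size is an illustrative choice

print(f"Normality Adjustment Factor (illustrative) = {naf:.2f}, coverage = {covered:.3f}")
```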

4.5. Time Dependency Testing Time dependency testing is only required for instruments for which the calibration intervals are being extended. Time dependency is evaluated through the use of a scatter (drift interval) plot, binning analysis, and regression analyses. The methods for each of these are detailed below.

4.5.1. Scatter Plot

The scatter plot is performed under a new page of the spreadsheet entitled "Scatter Plot" or "Drift Interval Plot". The Final Data Set, including drift values and the associated times between calibrations, is copied into the first two columns of the new page of the spreadsheet. The chart function of EXCEL is used to plot the data, with the x-axis being the calibration interval and the y-axis being the drift value. The prediction line should be added to the chart, along with the equation of the prediction line. This plot provides a visual indication of the trend of the mean and, less directly, of any increases in the scatter of the data over time.
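Outside of Excel, an equivalent drift interval plot with a least-squares prediction line might be produced as sketched below (illustrative only; the data values are hypothetical).

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical Final Data Set: calibration intervals (days) and drift values (percent of span).
interval = np.array([180, 360, 420, 545, 610, 700, 550, 365, 480, 640])
drift = np.array([0.02, -0.10, 0.07, 0.15, -0.12, 0.18, 0.05, -0.04, 0.09, -0.16])

# Least-squares prediction (trend) line, analogous to the line Excel adds to the chart.
slope, intercept = np.polyfit(interval, drift, 1)

plt.scatter(interval, drift, label="Drift data")
xs = np.linspace(interval.min(), interval.max(), 100)
plt.plot(xs, slope * xs + intercept, label=f"y = {slope:.5f}x + {intercept:.3f}")
plt.xlabel("Calibration interval (days)")
plt.ylabel("Drift (% span)")
plt.legend()
plt.show()
```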

4.5.2. Binning Analysis The binning analysis is performed under a separate page of the EXCEL spreadsheet. The Final Data Set is copied onto the first two columns, and then split by bins 1 through 8 into the time intervals as defined in Section 3.9.3.1. A table is set up to compute the standard deviation, mean, average time interval, and count of the data in each time bin. Similar equation methods are used here as described in Section 4.2 above, when characterizing the drift data set. Another table is used to evaluate the validity of the bins, based on population per the criteria of Section 3.9.3.4. If multiple valid bins are not established, the time dependency analysis stops at this page, and no regression analyses are performed.

The standard deviations, means and average time intervals are tabulated, and a plot is generated to show the variation of the bin averages and standard deviations versus average time interval, if multiple bins are established. This plot can be used to establish whether standard deviations and means are significantly increasing over time between calibrations.
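A simplified binning computation is sketched below for illustration. The bin edges shown are placeholders only; the governing time bins are those defined in Section 3.9.3.1, and the data values are hypothetical.

```python
import numpy as np

# Hypothetical Final Data Set (interval in days, drift in percent of span).
interval = np.array([180, 360, 420, 545, 610, 700, 550, 365, 480, 640, 200, 590])
drift = np.array([0.02, -0.10, 0.07, 0.15, -0.12, 0.18, 0.05, -0.04, 0.09, -0.16, 0.01, 0.11])

# Illustrative bin edges only; the actual time bins are those of Section 3.9.3.1.
edges = [0, 250, 400, 500, 600, 800]

for lo, hi in zip(edges, edges[1:]):
    mask = (interval > lo) & (interval <= hi)
    if mask.sum() < 2:
        continue  # skip sparsely populated bins in this simplified illustration
    print(f"Bin {lo}-{hi} days: count={mask.sum()}, "
          f"mean drift={drift[mask].mean():.3f}, "
          f"std dev={drift[mask].std(ddof=1):.3f}, "
          f"avg interval={interval[mask].mean():.0f}")
```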

4.5.3. Regression Analyses The regression analyses are performed in accordance with the requirements of Section 3.9.4, given that multiple valid time bins were established in the binning analysis. New pages should be created for the Drift Regression and the Absolute Value Drift Regression. The Final Data Set should be copied into the first columns of each of these pages, and the blank lines should be removed. On the Absolute Value Regression page, a third column should be created, which merely takes the absolute value of the drift column.

For each of the two Regression Analyses, use the following steps to produce the regression analysis output. Using the "Data Analysis" package under "Tools" in Microsoft EXCEL, the Regression option should be chosen. The Y range is established as the Drift (or Absolute Value of Drift) data range, and the X range should be the calibration time intervals. The output range should be established on the same page of the spreadsheet, directly to the right of the data already entered. The option for the residuals should be established as "Line Fit Plots". The regression computation should then be performed. The output of the regression routine is a list of residuals, an ANOVA table listing, and a plot of the Drift (or Absolute Value of Drift) versus the Time Interval Between Calibrations. A prediction line is included on the plot. Add a cell close to the ANOVA table listing which establishes the Critical Value of F, using the guidance of Section 3.9.4 for the Significance of F Test. This utilizes the FINV function of Microsoft EXCEL.
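For readers reproducing these regression checks outside of Excel, the sketch below computes the analogous R Square, F, Critical F, and P values for a single-regressor fit (where F equals the square of the slope's t statistic); the data values are hypothetical and scipy is assumed to be available.

```python
import numpy as np
from scipy import stats

# Hypothetical Final Data Set (interval in days, drift in percent of span).
interval = np.array([180, 360, 420, 545, 610, 700, 550, 365, 480, 640])
drift = np.array([0.02, -0.10, 0.07, 0.15, -0.12, 0.18, 0.05, -0.04, 0.09, -0.16])

# Simple linear regression of drift on calibration interval.
res = stats.linregress(interval, drift)
n = interval.size
r_square = res.rvalue ** 2

# Single-regressor ANOVA: F = t^2 with (1, n-2) degrees of freedom.
f_value = (res.slope / res.stderr) ** 2
f_critical = stats.f.ppf(0.95, 1, n - 2)   # counterpart of Excel's FINV(0.05, 1, n-2)

print(f"R Square = {r_square:.3f}, F = {f_value:.3f}, "
      f"F Critical = {f_critical:.3f}, P Value = {res.pvalue:.3f}")
```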

Analyze the results in the Drift Regression ANOVA table for R Square, P Value, and F Value, using the guidance of Section 3.9.4. If any of these analytical means shows time dependency in the Drift Regression, the mean of the data set should be established as strongly time dependent if the slope of the prediction line significantly increases over time from an initially positive value (or decreases over time from an initially negative value), without crossing zero within the time interval of the regression analysis. This increase can also be validated by observing the results of the binning analysis plot for the mean of the bins, and by observing the scatter plot prediction line.

Analyze the results in the Absolute Value of Drift Regression ANOVA table for R Square, P Value, and F Value, using the guidance of Section 3.9.4. If any of these analytical means shows time dependency, the standard deviation of the data set should be established as strongly time dependent if the slope of the prediction line significantly increases over time. This increase can also be validated by observing the results of the binning analysis plot for the standard deviation of the bins, and by observing any discernible increases in data scatter as time increases on the scatter plot.

Regardless of the results of the analytical regression tests, if the plots tend to indicate significant increases in either the mean or standard deviation over time, those parameters should be judged to be strongly time dependent. Otherwise, for conservatism, the data is always considered to be moderately time dependent if extrapolation of the data is necessary, to accommodate the uncertainty involved in the extrapolation process, since no data has generally been taken at time intervals as large as those proposed.

4.6. Calculate The Analyzed Drift Value The first step in determining the Analyzed Drift Value is to determine the required time interval for which the value must be computed. For the majority of the cases for instruments calibrated on a refueling basis, the required nominal calibration time interval is 24 months, or a maximum of 30 months.

For those devices that are being extended to a semi-annual surveillance interval of 184 days, the maximum is 230 days. Since the average time intervals are generally computed in days, the conservative value for a 30-Month calibration interval is established as 915 days.
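As a consistency check on these figures (a reading of the numbers above, not an additional requirement), the 30 month and 230 day maxima equal the nominal intervals plus a 25 percent allowance, and 915 days follows from a 30.5 day average month:

$$24\ \text{months} \times 1.25 = 30\ \text{months}; \qquad 30\ \text{months} \times 30.5\ \tfrac{\text{days}}{\text{month}} = 915\ \text{days}; \qquad 184\ \text{days} \times 1.25 = 230\ \text{days}$$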

The Analyzed Drift Value generally consists of two separate components - a random term and a bias term. If the mean of the Final Data Set is significant per the criteria in Section 3.11, a bias term is considered. If no extrapolation is necessary, the bias term is set equal to the mean of the Final Data Set.

Extrapolation of this term is performed in one of two methods, as determined by the degree of time dependency established in the time dependency analysis. If the mean is determined to be strongly time dependent, the following equation is used, which extrapolates the value in a linear fashion.

$$DA_{30\,Mo.\,bias} = \bar{x} \times \frac{915\ \text{Days}}{Max\_FDS\_Time\_Interval}$$

If the mean is determined to be moderately time dependent, the following equation is used to extrapolate the mean. (Note that this equation is also generally used for cases where no time dependency is evident, because of the uncertainty in defining a drift value beyond analysis limits.)

$$DA_{30\,Mo.\,bias} = \bar{x} \times \sqrt{\frac{915\ \text{Days}}{Max\_FDS\_Time\_Interval}}$$

The random portion of the Analyzed Drift is calculated by multiplying the standard deviation of the Final Data Set by the Tolerance Interval Factor for the sample size and by the Normality Adjustment Factor, if required from the Coverage Analysis, and extrapolating the final result in a fashion similar to the methods shown above for the bias term. Use the following procedure to perform the operation.

4.6.1. Use the COUNT value of the Final Data Set to determine the sample size.

4.6.2. Obtain the appropriate Tolerance Interval Factor (TIF) for the size of the sample set. Table 1 lists the 95%/95% TIFs; refer to Standard statistical texts for other TIF multipliers. Note: TIFs other than 95%/95% must be specifically justified.

4.6.3. For a generic data analysis, multiple Tolerance Interval Factors may be used, providing a clear tabulation of results is included in the analysis, showing each value for the multiple levels of TIF.

4.6.4. Multiply the Tolerance Interval Factor by the standard deviation for the data points contained in the Final Data Set and by the Normality Adjustment Factor determined in the Coverage Analysis (if applicable).

4.6.5. If the analyzed drift term calculated above is applied to the existing calibration interval, application of additional drift uncertainty is not necessary.

4.6.6. When calculating drift for calibration intervals that exceed the historical calibration intervals, use the following equations, depending on whether the data is shown to be strongly time dependent or moderately time dependent.

For a Strongly Time Dependent random term, use the following equation.

$$DA_{30\,Mo.\,random} = \sigma \times TIF \times NAF \times \frac{915\ \text{Days}}{Max\_FDS\_Time\_Interval}$$

For a Moderately Time Dependent random term, use the following equation. (Note that this equation is also generally used for cases where no time dependency is evident, because of the uncertainty in defining a drift value beyond analysis limits.)

$$DA_{30\,Mo.\,random} = \sigma \times TIF \times NAF \times \sqrt{\frac{915\ \text{Days}}{Max\_FDS\_Time\_Interval}}$$

Where:
σ = Standard Deviation of the Final Data Set

TIF = Tolerance Interval Factor from Table 1
NAF = Normality Adjustment Factor from the Coverage Analysis (If Applicable)

Max_FDS_Time_Interval = the maximum observed time interval within the Final Data Set

The Analyzed Drift Value is not comprised of drift alone; this value also contains errors from M&TE and device Calibration Accuracy. It could also include other effects, but it is conservative to assume the other effects are not included, since they cannot be quantified and since they are not expected to fully contribute to the errors observed.
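The extrapolation arithmetic for both the bias and random terms can be summarized in a short sketch. The fragment below is illustrative only and is not part of this Design Guide; the function names are hypothetical, the square-root form reflects the moderately time dependent equations above, and the numeric inputs are placeholders.

```python
import math

def extrapolation_factor(max_fds_interval_days, strongly_time_dependent, target_days=915):
    # Linear scaling for strongly time dependent terms, square-root scaling otherwise.
    ratio = target_days / max_fds_interval_days
    return ratio if strongly_time_dependent else math.sqrt(ratio)

def analyzed_drift(mean_drift, sigma, tif, naf, max_fds_interval_days,
                   mean_strongly_td=False, sigma_strongly_td=False):
    bias = mean_drift * extrapolation_factor(max_fds_interval_days, mean_strongly_td)
    random = sigma * tif * naf * extrapolation_factor(max_fds_interval_days, sigma_strongly_td)
    return bias, random

# Example with placeholder values: mean = 0.05% span, sigma = 0.20% span,
# TIF = 2.01 (a Table 1 value for the applicable sample size), NAF = 1.10,
# maximum observed interval in the Final Data Set = 700 days.
print(analyzed_drift(0.05, 0.20, 2.01, 1.10, 700))
```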

4.6.7. Since random errors are always expressed as +/- errors, specific consideration of directionality is not generally a concern. However, for bistables and switches, the directionality of any bias error must be carefully considered. Because the As-Found and As-Left setpoints are recorded during calibration, the drift values determined up to this point in the drift study are representative of a drift in the setpoint, not in the indicated value.

Per Reference 7.1.2, error is defined as the algebraic difference between the indication and the ideal value of the measured signal. In other words, error = indicated value - ideal value (actual value)

For devices with analog outputs, a positive error means that the indicated value exceeds the actual value, which would mean that if a bistable or switching mechanism used that signal to produce an actuation on an increasing trend, the actuation would take place prior to the actual variable reaching the value of the intended setpoint. As analyzed so far in the drift study, the drift of a bistable or switch causes just the opposite effect. A positive Analyzed Drift would mean that the setpoint is higher than intended; thereby causing actuation to occur after the actual variable has exceeded the intended setpoint.

A bistable or switch can be considered to be a black box, which contains a sensing element or circuit and an ideal switching mechanism. At the time of actuation, the switch or bistable can be considered an indication of the process variable. Therefore, a positive shift of the setpoint can be considered to be a negative error. In other words, if the switch setting was intended to be 500 psig but the switch actually actuated at 510 psig, then at the time of the actuation the switch "indicated" that the process value was 500 psig when the process value was actually 510 psig. Thus, error = indicated value (500 psig) - actual value (510 psig) = -10 psig. A positive shift of the setpoint on a switch or bistable is therefore equivalent to a negative error, as defined by Reference 7.1.2. Consequently, for clarity and consistency with the treatment of other bias error terms, the sign of the bias errors of a bistable or switch should be reversed, in order to comply with the convention established by Reference 7.1.2.

5. CALCULATIONS 5.1. Drift Studies / Calculation The Drift Studies / Calculations should be performed in accordance with the methodology described above, with the following documentation requirements.

5.1.1. The title includes the Manufacturer/Model number of the component group analyzed.

5.1.2. The calculation objective must:

5.1.2.1. describe, at a minimum, that the objective of the calculation is to document the drift analysis results for the component group, and extrapolate the drift value to the required calibration period (if applicable),

5.1.2.2. provide a list for the group of all pertinent information in tabular form (e.g. Tag Numbers, Manufacturer, Model Numbers, ranges and calibration spans), and

5.1.2.3. describe any limitations on the application of the results. For instance, if the analysis only applies to a certain range code, the objective should state this fact.

5.1.3. The method of solution should describe, at a minimum, a summary of the methodology used to perform the drift analysis outlined by this Design Guide. Exceptions taken to this Design Guide are to be included in this section including basis and references for exceptions.

5.1.4. The actual calculation/analysis should provide:

5.1.4.1. A listing of data which was removed, and the justification for doing so
5.1.4.2. A list of references
5.1.4.3. A narrative discussion of the specific activities performed for this calculation
5.1.4.4. Results and conclusions, including

- Manufacturer and model number analyzed

- bias and random Analyzed Drift values, as applicable

- The applicable Tolerance Interval Factors (provide detailed discussion and justification if other than 95%/95%)

- applicable drift time interval for application

- normality conclusion

- statement of time dependency observed, as applicable

- limitations on the use of this value in application to uncertainty calculations, as applicable

- limitations on the application of the results to similar instruments, as applicable

5.1.5. Attachments, including the following information:

5.1.5.1. Input data with notes on removal and validity
5.1.5.2. Computation of drift data and calibration time intervals
5.1.5.3. Outlier summary, including Final Data Set and basic statistical summaries
5.1.5.4. Chi-Square Test Results (If Applicable)
5.1.5.5. W Test or D' Test Results (If Applicable)
5.1.5.6. Coverage Analysis, Including Histogram, Percentages in the Required Sigma Bands, and Normality Adjustment Factor (If Applicable)
5.1.5.7. Scatter Plot with Prediction Line and Equation
5.1.5.8. Binning Analysis Summaries for Bins and Plots (As Applicable)
5.1.5.9. Regression Plots, ANOVA Tables, and Critical F Values
5.1.5.10. Derivation of the Analyzed Drift Values, With Summary of Conclusions

5.2. Setpoint/Uncertainty Calculations

To apply the results of the drift analyses to a specific device or loop, a setpoint/uncertainty calculation must be performed, revised or evaluated in accordance with References 7.2.1 and/or 7.2.2, as appropriate. Per Section 3.2.1.2 above, the Analyzed Drift term characterizes the Calibration Accuracy (CA), M&TE and Drift error terms for the analyzed device, loop, or function. In order to save time, a comparison between these terms (or a subset of these terms) in an existing setpoint calculation and the Analyzed Drift can be made. If the terms within the existing calculation bound the Analyzed Drift term, then the existing calculation is conservative as is, and does not specifically require revision. If revision to the calculation is necessary, the Analyzed Drift term may be incorporated into the calculation, setting the Calibration Accuracy, M&TE, and Drift terms for the analyzed devices to zero. (See Section 14.2 of Reference 7.2.2 for more information.)

When comparing the results to setpoint calculations that have more than one device in the instrument loop that has been analyzed for drift, comparisons can be made between the DA terms and the original terms on a device-by-device basis, or on a total loop basis. Care should be taken to properly combine terms for comparison in accordance with References 7.2.1 and 7.2.2, as appropriate.
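As an illustration of the bounding comparison described above, the fragment below assumes a square-root-sum-of-squares (SRSS) combination of the random terms; the governing combination rules are those of References 7.2.1 and 7.2.2, and the numbers are placeholders.

```python
import math

# Illustrative only: comparing an Analyzed Drift (DA) random term against the existing
# Calibration Accuracy, M&TE, and Drift allowances for one device, combined by SRSS.
ca, mte, dr = 0.50, 0.25, 1.00      # existing random allowances, % span (placeholders)
existing = math.sqrt(ca**2 + mte**2 + dr**2)

da_random = 1.05                    # Analyzed Drift random term, % span (placeholder)

if da_random <= existing:
    print(f"DA ({da_random}) is bounded by the existing terms ({existing:.2f}); no revision needed.")
else:
    print(f"DA ({da_random}) exceeds the existing terms ({existing:.2f}); revise the calculation.")
```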

When applying the drift study results of bistables or switches to a setpoint calculation, the preparer should fully understand the directionality of any bias terms within DA and apply the bias terms accordingly. (See Section 4.6.7 above.)


6. DEFINITIONS

95%/95% (Ref. 7.1.1) - Standard statistical term meaning that the results have a 95% confidence that at least 95% of the population (the proportion, P) will lie within the stated interval for a sample size (n).

Analyzed Drift (DA) (Section 3.2.1.3; synonymous with ADR) - A term representing the errors determined by a completed drift analysis for a group. Uncertainties that may be represented by the analyzed drift term are component calibration accuracy, M&TE errors, personnel-induced or human related errors, ambient temperature and other environmental effects, power supply effects, misapplication errors, and true component drift.

As-Found (FT) (Ref. 7.1.3) - The condition in which a channel, or portion of a channel, is found after a period of operation and before recalibration.

As-Left (CT) (Ref. 7.1.3) - The condition in which a channel, or portion of a channel, is left after calibration or final setpoint device verification.

Bias (B) (Ref. 7.1.1) - A shift in the signal zero point by some amount.

Calibrated Span (CS) (Ref. 7.1.1) - The maximum calibrated upper range value less the minimum calibrated lower range value.

Calibration Accuracy (CA) (Ref. 7.2.2) - The accuracy that can be expected during a calibration at reference conditions. May be expressed in terms of Sensor Calibration Accuracy, Rack Calibration Accuracy, etc.

Calibration Interval (Ref. 7.1.1) - The elapsed time between the initiation or successful completion of calibrations or calibration checks on the same instrument, channel, instrument loop, or other specified system or device.

Chi-Square Test (Ref. 7.1.1) - A test to determine if a sample appears to follow a given probability distribution. This test is used as one method for assessing whether a sample follows a normal distribution.

Confidence Interval (Ref. 7.1.1) - An interval that contains the population mean to a given probability.

Coverage Analysis (Ref. 7.1.1) - An analysis to determine whether the assumption of a normal distribution effectively bounds the data. A histogram is used to graphically portray the coverage analysis.

Cumulative Distribution (Ref. 7.1.1) - An expression of the total probability contained within an interval from minus infinity to some value x.

D-Prime Test (Ref. 7.1.1) - A test to verify the assumption of normality for moderate to large sample sizes.

Dependent (Ref. 7.1.1) - In statistics, dependent events are those for which the probability of all occurring at once is different than the product of the probabilities of each occurring separately. In setpoint determination, dependent uncertainties are those uncertainties for which the sign or magnitude of one uncertainty affects the sign or magnitude of another uncertainty.

Drift (Ref. 7.1.2) - An undesired change in output over a period of time where the change is unrelated to the input, environment, or load.

Error (Ref. 7.1.2) - The algebraic difference between the indication and the ideal value of the measured signal.

Final Data Set (Section 3.6.3) - The set of data that is analyzed for normality and time dependence and used to determine the drift value. This data has all outliers and erroneous data removed.

Functionally Equivalent (Ref. 7.1.1) - Components with similar design and performance characteristics that can be combined to form a single population for analysis purposes.

Histogram (Ref. 7.1.1) - A graph of a frequency distribution.

Independent (Ref. 7.1.1) - In statistics, independent events are those in which the probability of all occurring at once is the same as the product of the probabilities of each occurring separately. In setpoint determination, independent uncertainties are those for which the sign or magnitude of one uncertainty does not affect the sign or magnitude of any other uncertainty.

Instrument Channel (Ref. 7.1.2) - An arrangement of components and modules as required to generate a single protective action signal when required by a plant condition. A channel loses its identity where single protective action signals are combined.

Instrument Range (Ref. 7.1.2) - The region between the limits within which a quantity is measured, received or transmitted, expressed by stating the lower and upper range values.

Kurtosis (Ref. 7.1.1) - A characterization of the relative peakedness or flatness of a distribution compared to a normal distribution. A large kurtosis indicates a relatively peaked distribution and a small kurtosis indicates a relatively flat distribution.

M&TE (Ref. 7.1.1) - Measurement and Test Equipment.

Maximum Span (Ref. 7.1.1) - The component's maximum upper range limit less the maximum lower range limit.

Mean (Ref. 7.1.1) - The average value of a random sample or population.

Median (Ref. 7.1.1) - The value of the middle number in an ordered set of numbers. Half the numbers have values that are greater than the median and half have values that are less than the median. If the data set has an even number of values, the median is the average of the two middle values.

Module (Ref. 7.1.2) - Any assembly of interconnected components that constitutes an identifiable device, instrument or piece of equipment. A module can be removed as a unit and replaced with a spare. It has definable performance characteristics that permit it to be tested as a unit.

Normality Adjustment Factor (Section 3.7.5) - A multiplier to be applied to the standard deviation of the Final Data Set to provide a drift model that adequately covers the population of drift points in the Final Data Set.

Normality Test (Ref. 7.1.1) - A statistical test to determine if a sample is normally distributed.

Outlier (Ref. 7.1.1) - A data point significantly different in value from the rest of the sample.

Population (Ref. 7.1.1) - The totality of the observations with which we are concerned. A true population consists of all values, past, present and future.

Probability (Ref. 7.3.2) - The branch of mathematics which deals with the assignment of relative frequencies of occurrence (confidence) of the possible outcomes of a process or experiment according to some mathematical function.

Probability Density Function (Ref. 7.1.1) - An expression of the distribution of probability for a continuous function.

Probability Plot (Ref. 7.1.1) - A type of graph scaled for a particular distribution in which the sample data plots as approximately a straight line if the data follows that distribution. For example, normally distributed data plots as a straight line on a probability plot scaled for a normal distribution; the data may not appear as a straight line on a graph scaled for a different type of distribution.

Proportion (Ref. 7.3.2) - A segment of a population that is contained by an upper and lower limit. Tolerance intervals determine the bounds or limits of a proportion of the population, not just the sampled data. The proportion (P) is the second term in the tolerance interval value (e.g., 95%/99%).

Random (Ref. 7.1.1) - Describing a variable whose value at a particular future instant cannot be predicted exactly, but can only be estimated by a probability distribution function.

Raw Data (Ref. 7.1.1) - As-Found minus As-Left calibration data used to characterize the performance of a functionally equivalent group of components.

Reference Accuracy (Ref. 7.1.2) - A number or quantity that defines a limit that errors will not exceed when a device is used under specified operating conditions.

Sample (Ref. 7.1.1) - A subset of a population.

Sensor (Ref. 7.1.2) - The portion of an instrument channel that responds to changes in a plant variable or condition and converts the measured process variable into a signal (e.g., electric or pneumatic).

Signal Conditioning (Ref. 7.1.2) - One or more modules that perform signal conversion, buffering, isolation or mathematical operations on the signal as needed.

Skewness (Ref. 7.1.1) - A measure of the degree of symmetry around the mean.

Span (Ref. 7.1.2) - The algebraic difference between the upper and lower values of a calibrated span.

Standard Deviation (Ref. 7.1.1) - A measure of how widely values are dispersed from the population mean.

Surveillance Interval (Ref. 7.1.1) - The elapsed time between the initiation or successful completion of a surveillance or surveillance check on the same component, channel, instrument loop, or other specified system or device.

Time-Dependent Drift (Ref. 7.1.1) - The tendency for the magnitude of component drift to vary with time.

Time-Dependent Drift Uncertainty (Ref. 7.1.1) - The uncertainty associated with extending calibration intervals beyond the range of available historical data for a given instrument or group of instruments.

Time-Independent Drift (Ref. 7.1.1) - The tendency for the magnitude of component drift to show no specific trend with time.

Tolerance (Ref. 7.1.2) - The allowable variation from a specified or true value.

Tolerance Interval (Ref. 7.1.1) - An interval that contains a defined proportion of the population to a given probability.

Trip Setpoint (Ref. 7.1.2) - A predetermined value for actuation of the final actuation device to initiate protective action.

t-Test (Ref. 7.1.1) - For this Design Guide, the t-Test is used to determine: 1) if a sample is an outlier of a sample pool, and 2) if two groups of data originate from the same pool.

Uncertainty (Ref. 7.1.1) - The amount to which an instrument channel's output is in doubt (or the allowance made therefor) due to possible errors, either random or systematic, which have not been corrected for. The uncertainty is generally identified within a probability and confidence level.

Variance (Ref. 7.1.1) - A measure of how widely values are dispersed from the population mean.

W Test (Ref. 7.1.1) - A test to verify the assumption of normality for sample sizes less than 50.


7. REFERENCES

7.1. Industry Standards and Correspondence

7.1.1. EPRI TR-103335R1, "Statistical Analysis of Instrument Calibration Data - Guidelines for Instrument Calibration Extension/Reduction Programs," October 1998
7.1.2. ISA-RP67.04-Part II-1994, "Recommended Practice, Methodologies for the Determination of Setpoints for Nuclear Safety-Related Instrumentation"
7.1.3. ISA-S67.04-Part I-1994, "Standard, Methodologies for the Determination of Setpoints for Nuclear Safety-Related Instrumentation"
7.1.4. ANSI N15.15-1974, "Assessment of the Assumption of Normality (Employing Individual Observed Values)"
7.1.5. CM-106752-R2, "Users Manual: IPASS (Rev. 2), Instrument Performance Analysis Software System for As-Found/As-Left (AFAL) Data," July 1999
7.1.6. NRC to EPRI Letter, "Status Report on the Staff Review of EPRI Technical Report TR-103335, Guidelines for Instrument Calibration Extension/Reduction Program," dated March 1994
7.1.7. NRC Regulatory Guide 1.105, Revision 2, "Instrument Setpoints"
7.1.8. GE NEDC-31336P-A, "General Electric Instrument Setpoint Methodology"
7.1.9. DOE Research and Development Report No. WAPD-TM-1292, "Statistics for Nuclear Engineers and Scientists, Part 1: Basic Statistical Inference," February 1981
7.1.10. U.S. Nuclear Regulatory Commission Letter from Mr. Thomas H. Essig to Mr. R. W. James of Electric Power Research Institute, dated December 1, 1997, "Status Report on the Staff Review of EPRI Technical Report TR-103335, Guidelines for Instrument Calibration Extension/Reduction Programs, Dated March 1994"

7.2. Calculations and Programs

7.2.1. ECP 1-2-O1-03, "Westinghouse Setpoint Methodology for Improved Thermal Design Procedure (ITDP) and Revised Thermal Design Procedure (RTDP)," Revision 0
7.2.2. EG-IC-004, "Instrument Setpoint Uncertainty," Revision 4

7.3. Miscellaneous

7.3.1. IPASS (Instrument Performance Analysis Software System), Revision 2.03, created by EDAN Engineering in conjunction with EPRI
7.3.2. "Statistics for Nuclear Engineers and Scientists, Part 1: Basic Statistical Inference," William J. Beggs, February 1981
7.3.3. NRC Generic Letter 91-04, "Changes in Technical Specification Surveillance Intervals to Accommodate a 24-Month Fuel Cycle"
7.3.4. MPAC, Maintenance Planning and Control System
7.3.5. Microsoft Excel Version 97SR-2, Spreadsheet Program
7.3.6. Microsoft Access Version 97SR-2, Database Program

Enclosure 7 to AEP:NRC:4901
Disposition of Existing License Amendment Requests

Submittal Date: 8/27/03
Description of Change: Indiana Michigan Power Company (I&M) to Nuclear Regulatory Commission (NRC) Letter AEP:NRC:3304, Application for Technical Specification Change Regarding Mode Change Limitations, and Adoption of a Technical Specifications Bases Control Program and Standard Technical Specification Surveillance Requirement 3.0.1 and Associated Bases, Using The Consolidated Line Improvement Process, (TAC Nos. MC0600 and MC0601), consistent with Technical Specification Task Force (TSTF)-359, Revision 9, would allow entry into a MODE or other specified condition in the Applicability while relying on the associated ACTIONS, if a risk assessment is performed that justifies the action.
Affected ITS Sections/Specifications: Section 3.0, 3.3.1, 3.3.2, 3.3.3, 3.3.4, 3.3.5, 3.3.6, 3.3.7, 3.3.8, 3.4.11, 3.4.12, 3.4.16, 3.5.2, 3.5.3, 3.6.3, 3.7.1, 3.7.2, 3.7.5, 3.7.7, 3.7.8, 3.7.10, 3.7.13, 3.8.1, 5.5, 5.6; CTS 3/4.1.2.3, 3/4.3.3.1, 3/4.3.3.2, 3/4.3.3.3, 3/4.3.3.4, 3/4.3.3.5.1, 3/4.3.3.9, 3/4.4.10.1, 3/4.4.12.1, 3/4.4.12.2, 3/4.7.8
Affected CTS Pages: Unit 1: 3/4 0-1, 3/4 0-2, 3/4 0-3, 3/4 1-11, 3/4 3-3, 3/4 3-4, 3/4 3-5, 3/4 3-6, 3/4 3-16, 3/4 3-17, 3/4 3-20, 3/4 3-21, 3/4 3-21a, 3/4 3-21b, 3/4 3-22, 3/4 3-35, 3/4 3-37, 3/4 3-39, 3/4 3-40, 3/4 3-43, 3/4 3-46, 3/4 3-48a, 3/4 3-54, 3/4 3-57, 3/4 4-21, 3/4 4-31, 3/4 4-33, 3/4 4-36, 3/4 4-37, 3/4 4-39, 3/4 5-7, 3/4 6-9a, 3/4 6-14, 3/4 7-1, 3/4 7-5, 3/4 7-10, 3/4 7-15, 3/4 7-17, 3/4 7-19, 3/4 7-26, 3/4 8-2, 3/4 9-13, 3/4 11-1, 3/4 11-2, 3/4 11-3, 6-9; Unit 2: 3/4 0-1, 3/4 0-2, 3/4 0-3, 3/4 1-11, 3/4 3-2, 3/4 3-3, 3/4 3-4, 3/4 3-5, 3/4 3-15, 3/4 3-18, 3/4 3-19, 3/4 3-20, 3/4 3-20a, 3/4 3-21, 3/4 3-34, 3/4 3-36, 3/4 3-38, 3/4 3-38a, 3/4 3-39, 3/4 3-42, 3/4 3-44a, 3/4 3-45, 3/4 3-53, 3/4 4-20, 3/4 4-29, 3/4 4-31, 3/4 4-33, 3/4 4-34, 3/4 4-36, 3/4 5-3, 3/4 5-7, 3/4 6-9a, 3/4 6-13, 3/4 7-1, 3/4 7-5, 3/4 7-10, 3/4 7-12, 3/4 7-13, 3/4 7-14, 3/4 7-25, 3/4 8-2, 3/4 9-12, 3/4 11-1, 3/4 11-2, 3/4 11-3, 6-9
Disposition: Proposed changes will be incorporated in a supplement to this Improved Technical Specification (ITS) submittal when this current Technical Specification (CTS) change is approved by the NRC.

Submittal Date: 8/27/03
Description of Change: I&M to NRC Letter AEP:NRC:3403, License Amendment Request to Revise Technical Specification 4.0.3: Missed Surveillance Time Allowance, (TAC Nos. MC0606 and MC0607), consistent with TSTF-358, Revision 6, would revise LCO 4.0.3 to allow the time period to perform a missed surveillance to be 24 hours or the surveillance interval, if a risk assessment is performed that justifies the action.
Affected ITS Sections/Specifications: SR 3.0.3, 5.5
Affected CTS Pages: Unit 1: 3/4 0-2; Unit 2: 3/4 0-2
Disposition: Proposed changes are already reflected in this ITS submittal. However, the ITS submittal justifies these changes separately. Thus, when this CTS change is approved by the NRC, a supplement to the ITS submittal will be provided to reflect CTS (i.e., ITS justifications will be deleted).

Submittal Date: 2/14/04
Description of Change: I&M to NRC Letter AEP:NRC:4051, Containment Requirements During Movement Of Recently Irradiated Fuel Assemblies, consistent with portions of TSTF-51, Revision 2, would revise Applicability of Technical Specifications for containment penetrations and Containment Purge and Exhaust Isolation System, and the Actions for inoperable radiation monitors that support the Containment Purge and Exhaust Isolation System, to make the requirements applicable only during movement of recently irradiated fuel within the containment, instead of during CORE ALTERATIONS and movement of irradiated fuel within the containment.
Affected ITS Sections/Specifications: Section 3.3.6, 3.9.3
Affected CTS Pages: Unit 1: 3/4 3-37, 3/4 9-4, 3/4 9-10; Unit 2: 3/4 3-36, 3/4 9-4, 3/4 9-9
Disposition: The proposed change to eliminate CORE ALTERATIONS from the affected Applicability statements is already reflected in this ITS submittal. However, the ITS submittal justifies these changes separately. Thus, when this CTS change is approved by the NRC, a supplement to the ITS submittal will be provided to reflect CTS (i.e., ITS justifications will be deleted). The proposed change to revise the Applicability to be during movement of recently irradiated fuel will be incorporated in a supplement to the ITS submittal when this CTS change is approved by the NRC.

Submittal Date: 2/14/04
Description of Change: I&M to NRC Letter AEP:NRC:4392, Application for Amendment to Delete Surveillance Requirements for Power Range, Intermediate Range, and Source Range Neutron Flux Monitors, consistent with NUREG-1431, Revision 2, would modify the audible indication requirements of the Source Range Neutron Flux Monitors Technical Specification specified in Limiting Condition for Operation (LCO) 3.9.2; delete Surveillance Requirements (SRs) 4.9.2.a and 4.9.2.b for Channel Functional Tests, revise SR 4.9.2.c Channel Check requirements, and add Channel Calibration requirements for the Source Range Neutron Flux Monitors; and revise SR 4.10.4.2 (Unit 1) and SR 4.10.3.2 (Unit 2) Channel Functional Test requirements for the Intermediate Range and Power Range Neutron Flux Monitors.
Affected ITS Sections/Specifications: Section 3.1.8, 3.9.2
Affected CTS Pages: Unit 1: 3/4 9-2, 3/4 10-5; Unit 2: 3/4 9-2, 3/4 10-3
Disposition: Proposed changes are already reflected in this ITS submittal. However, the ITS submittal justifies these changes separately, and modifies the new Channel Calibration requirement for the Source Range Neutron Flux Monitors to be required once per 24 months, instead of once per 18 months as proposed in I&M to NRC Letter AEP:NRC:4392. Thus, when this CTS change is approved by the NRC, a supplement to the ITS submittal will be provided to reflect CTS (i.e., ITS justifications will be deleted), with the exception of the justification to require Channel Calibration for the Source Range Neutron Flux Monitors once per 24 months.

Submittal Date: 4/6/04
Description of Change: I&M to NRC Letter AEP:NRC:4565, Proposed Technical Specification Changes and Exemption Requests to Support Use of Framatome ANP, Inc. Fuel, would revise Technical Specification 5.3.1 design features for fuel assemblies, and Technical Specification 5.6.2 new fuel storage criticality limitations including Figure 5.6-4, to support use of Framatome ANP, Inc. (FANP) fuel beginning with Cycle 20 in Unit 1 and Cycle 16 in Unit 2. An additional license amendment request is planned specific to Unit 1 for late June 2004 to request other Technical Specification changes related to the use of FANP methodology, and to submit the Unit 1 Realistic Large Break Loss of Coolant Accident (LOCA) analysis and the Unit 1 Non-LOCA analysis performed using FANP methodology.
Affected ITS Sections/Specifications: Section 4.2.1, 4.3.1.2
Affected CTS Pages: Unit 1: 5-4, 5-8, 5-8a; Unit 2: 5-4, 5-9, 5-9a
Disposition: Proposed changes will be incorporated in a supplement to this ITS submittal when this CTS change is approved by the NRC.

Deleted License Conditions

Indiana Michigan Power Company requests that the following License Conditions be deleted and relocated to the Improved Technical Specifications (ITS) or other licensee-controlled documents as described below. The complete justification for the deletion and relocation can be found in the Discussion of Changes for ITS 5.5.

1. Unit 1 License Condition 2.C.(7) and Unit 2 License Condition 2.C.(3)(v), Secondary Water Chemistry Monitoring Program - The requirements of this License Condition have been included in ITS 5.5.8, Secondary Water Chemistry Program.
2. Unit 1 License Condition 2.H and Unit 2 License Condition 2.G, System Integrity - The requirements of this License Condition have been included in ITS 5.5.2, Leakage Monitoring Program.
3. Unit 1 License Condition 2.I and Unit 2 License Condition 2.H, Iodine Monitoring - This License Condition is a Specification (Specification 6.8.4.b) in NUREG-0452, Revision 4, Standard Technical Specifications for Westinghouse Pressurized Water Reactors, which was the Standard Technical Specifications for Westinghouse plants prior to NUREG-1431. This requirement is not included in NUREG-1431, Revision 2, and the requirement may be relocated to a licensee-controlled document. Therefore, the requirements of this License Condition are being relocated to the Technical Requirements Manual (TRM).

Commitments Met by Improved Technical Specifications (ITS) Conversion

The table below includes commitments previously made by Indiana Michigan Power Company (I&M) to the Nuclear Regulatory Commission (NRC) to address specific items in the license amendment request for conversion of the Donald C. Cook Nuclear Plant (CNP) current Technical Specifications (CTS) to the ITS, or commitments previously made to the NRC that will be superseded by the proposed ITS requirements when approved by the NRC. Most of these commitments are related to correcting the Technical Specifications to address issues that are being administratively controlled in accordance with the guidance provided in NRC Administrative Letter 98-10, Dispositioning of Technical Specifications that are Insufficient to Assure Plant Safety. This submittal and approval of the proposed ITS requirements will fulfill these commitments.

Date and Letter No.: 11/01/01, C1101-07
Description of Commitment: I&M has implemented administrative controls for Diesel Generator voltage and frequency limits that are more restrictive than those specified in CTS 4.8.1.1.2.e in accordance with NRC Administrative Letter 98-10. The more restrictive limits ensure that engineered safety features pumps will produce flow that is adequate to meet their assumed safety and accident mitigation functions, and will be addressed in the proposed conversion to the ITS. The proposed ITS includes these more restrictive requirements as described in ITS 3.8.1 Discussion of Change (DOC) M.5.

Date and Letter No.: 11/01/01, C1101-07
Description of Commitment: I&M has implemented administrative controls for Condensate Storage Tank (CST) levels that are more restrictive than those specified in CTS 3.7.1.3 in accordance with NRC Administrative Letter 98-10. The more restrictive limits ensure that sufficient water is available to meet the requirements described in the CTS Bases, and will be addressed in the proposed conversion to the ITS. The proposed ITS includes these more restrictive requirements as described in ITS 3.7.6 DOC M.1.

Date and Letter No.: 11/01/01, C1101-07
Description of Commitment: I&M has implemented administrative controls for Reactor Coolant System (RCS) seal line resistance that are more restrictive than those specified in CTS 4.4.6.2.1.c in accordance with NRC Administrative Letter 98-10. The more restrictive limits reflect the impact of the Unit 1 and 2 steam generator replacements, and will be addressed in the proposed conversion to the ITS. The proposed ITS includes these more restrictive requirements as described in ITS 3.5.5 DOC M.1.

Date and Letter No.: 11/01/01, C1101-07
Description of Commitment: I&M has implemented administrative controls for Unit 1 RCS leak detection equipment operability and RCS allowable leakage that are more restrictive than those specified in CTS 3.4.6.1 and CTS 3.4.6.2 in accordance with NRC Administrative Letter 98-10. The more restrictive limits are needed to support a leak-before-break analysis of the pressurizer surge line. In a letter from M. W. Rencheck, I&M, to NRC Document Control Desk, C1000-20, dated October 26, 2000, I&M committed to submit a license amendment request to replace the administrative controls no later than the next refueling outage. This commitment was changed to be no later than the Improved Standard Technical Specification submittal date by the end of the First Quarter, 2004, and will be addressed in the proposed conversion to the ITS. The proposed ITS includes these more restrictive requirements as described in ITS 3.4.13 DOC M.1 and ITS 3.4.15 DOC M.2.

Date and Letter No.: 04/29/03, AEP:NRC:3311-02
Description of Commitment: I&M will include the MASTER RELAY TEST and SLAVE RELAY TEST specified in NUREG-1431, "Standard Technical Specifications Westinghouse Plants," with the conversion to Improved Standard Technical Specifications (ISTS). The proposed ITS includes definitions for MASTER RELAY TEST and SLAVE RELAY TEST as described in ITS Chapter 1.0 DOC A.15, and includes specific requirements for performance of these tests as described in ITS 3.3.2 DOC M.2, ITS 3.3.2 DOC M.3, ITS 3.3.2 DOC M.6, ITS 3.3.2 DOC M.8, and ITS 3.3.6 DOC M.2.

Date and Letter No.: 04/11/88, Unit 2 Licensee Event Report (LER) 88-003-00
Description of Commitment: In accordance with the original commitment made in Unit 2 LER 88-003-00, I&M increased the calibration frequency of the 4 kV Bus Loss of Voltage relays and 4 kV Bus Degraded Voltage relays in both Units from every eighteen months to monthly, and committed to continue monthly calibration until the trend indicated that a different frequency was justified. Currently, these relays are subjected to a CHANNEL CALIBRATION every 31 days in accordance with this commitment. However, CTS Table 4.3-2 still requires a CHANNEL CALIBRATION of the Loss of Voltage and Degraded Voltage instrumentation every 18 months. ITS SR 3.3.5.3 requires the performance of a CHANNEL CALIBRATION for the Degraded Voltage Function every 31 days, and ITS SR 3.3.5.5 requires the performance of a CHANNEL CALIBRATION for the Loss of Voltage Function every 184 days. This changes the CTS by changing the Frequency of the SRs from 18 months to either 31 days or 184 days. These Frequencies are assumed in the current setpoint analyses for the relays. NRC approval of the proposed ITS Surveillance Frequencies will supersede this commitment.