IR 05000317/2010006

IR 05000317-10-006 and 05000318-10-006; on 02/22/2010 - 04/30/2010; Calvert Cliffs Nuclear Power Plant Special Inspection for the February 18, 2010 Dual Unit Trip, Inspection Procedure 93812, Special Inspection
ML101650723
Person / Time
Site: Calvert Cliffs
Issue date: 06/14/2010
From: David Lew
Division Reactor Projects I
To: George Gellrich
Constellation Energy Nuclear Group
References
EA-10-080 IR-10-006
Download: ML101650723 (45)


Text

UNITED STATES NUCLEAR REGULATORY COMMISSION

REGION I

475 ALLENDALE ROAD, KING OF PRUSSIA, PA 19406-1415

June 14, 2010

EA-10-080

George H. Gellrich, Vice President
Calvert Cliffs Nuclear Power Plant, LLC
Constellation Energy Nuclear Group, LLC
1650 Calvert Cliffs Parkway
Lusby, Maryland 20657-4702

SUBJECT: CALVERT CLIFFS NUCLEAR POWER PLANT - NRC SPECIAL INSPECTION REPORT 05000317/2010006 AND 05000318/2010006; PRELIMINARY WHITE FINDING

Dear Mr. Gellrich:

On April 30, 2010, the U.S. Nuclear Regulatory Commission (NRC) completed a Special Inspection of the February 18, 2010, dual unit trip at Calvert Cliffs Nuclear Power Plant (CCNPP) Units 1 and 2. The enclosed report documents the inspection results, which were discussed on April 30, 2010, with you and other members of your staff.

The special inspection was conducted in response to the dual unit trip with complications on February 18, 2010. The complications included loss of a 500 kilovolt (kV) offsite power supply to each unit, loss of power to a 4 kV safety bus on each unit, failure of the 2B emergency diesel generator (EDG) to reenergize a 4 kV safety bus, loss of power to the Unit 2 4 kV non-safety buses, loss of Unit 2 forced reactor coolant system (RCS) flow, and loss of the Unit 2 normal heat sink. The NRC's initial evaluation of this event satisfied the criteria in NRC Inspection Manual Chapter 0309, "Reactive Inspection Decision Basis for Reactors," for conducting a special inspection. The Special Inspection Team (SIT) Charter (Attachment 2 of the enclosed report) provides the basis and additional details concerning the scope of the inspection.

The special inspection team (the team) examined activities conducted under your license as they relate to safety and compliance with Commission rules and regulations and with conditions of your license. The team reviewed selected procedures and records, observed activities, conducted in-plant equipment inspections, and interviewed personnel. In particular, the team reviewed event evaluations (including technical analyses), causal investigations, relevant performance history, and extent-of-condition to assess the significance and potential consequences of issues related to the February 18 event.

The team concluded that, overall, station personnel maintained plant safety in response to the reactor trips. Nonetheless, the team identified several issues related to equipment performance and human performance which complicated the event. The enclosed chronology (Attachment 3 of the enclosed report) provides additional details on the sequence of events and event complications. This report documents one self-revealing finding that, using the reactor safety Significance Determination Process (SDP), has preliminarily been determined to be White, a finding with low to moderate safety significance. The finding is associated with the failure to perform appropriate maintenance activities to ensure 2B EDG reliability. Specifically, safety related time delay relays in the EDG low lube oil pressure trip circuit were used beyond the manufacturer recommended service life, without an associated test or monitoring program to demonstrate

their continued reliability. Consequently, when called upon to reenergize the 24 4 kV safety bus, the time delay relay failed and the 2B EDG prematurely tripped in response to a low lube oil pressure signal. The 24 4 kV safety bus was reenergized from an alternate feed source approximately 30 minutes into the event. The significance determination of the event was performed assuming that similar time-delay relays on other systems have not failed due to this performance deficiency. Subsequent corrective actions included replacing and retesting the associated time delay relays on all three EDGs susceptible to the low lube oil pressure trip.

There is no current immediate safety concern due to this finding, because all EDGs have subsequently been demonstrated operable and long term corrective actions are being implemented through the Calvert Cliffs corrective action program to address the extent-of-condition and extent-of-cause. The final resolution of this finding will be conveyed in a separate correspondence addressing the final risk significance and disposition of any violations.

As discussed in the attached inspection report, the finding is also an apparent violation (AV) of NRC requirements, involving Technical Specification 5.4.1, and is therefore being considered for escalated enforcement action in accordance with the Enforcement Policy, which can be found on the NRC's Web site at http://www.nrc.gov/reading-rm/doc-collections/enforcement/.

In accordance with NRC Inspection Manual Chapter (IMC) 0609, we will complete our evaluation using the best available information and issue our final determination of safety significance within 90 days of the date of this letter. The significance determination process encourages an open dialogue between the NRC staff and the licensee; however, the dialogue should not impact the timeliness of the staff's final determination.

Before we make a final decision on this matter, we are providing you with an opportunity (1) to attend a Regulatory Conference where you can present to the NRC your perspective on the facts and assumptions the NRC used to arrive at the finding and assess its significance, or (2)

submit your position on the finding to the NRC in writing. If you request a Regulatory Conference, it should be held within 30 days of your response to this letter, and we encourage you to submit supporting documentation at least one week prior to the conference in an effort to make the conference more efficient and effective. If a Regulatory Conference is held, it will be open for public observation. If you decide to submit only a written response, such submittal should be sent to the NRC within 30 days of your receipt of this letter. If you decline to request a Regulatory Conference or submit a written response, you relinquish your right to appeal the final SDP determination, in that by not doing either, you fail to meet the appeal requirements stated in the Prerequisite and Limitation sections of Attachment 2 of IMC 0609. We request that if you decide to attend a Regulatory Conference or provide a written response, you address the apparent violation and also address the length of time that the 2B EDG was considered inoperable.

Please contact Glenn Dentel at (610) 337-5233 in writing within 10 days from the issue date of this letter to notify the NRC of your intentions. If we have not heard from you within 10 days, we will continue with our significance determination and enforcement decision. The final resolution of this matter will be conveyed in separate correspondence. Because the NRC has not made a final determination in this matter, no Notice of Violation is being issued for these inspection findings at this time. In addition, please be advised that the number and characterization of the apparent violation described in the enclosed inspection report may change as a result of further NRC review.

In addition, the report documents two NRC-identified findings and two self-revealing findings, each of very low safety significance (Green). Three of these findings were determined to involve violations of NRC requirements. However, because of the very low safety significance and because they are entered into your corrective action program, the NRC is treating these findings as non-cited violations (NCVs) consistent with Section VI.A.1 of the NRC Enforcement Policy. If you contest any NCV, you should provide a response within 30 days of the date of this inspection report, with the basis for your denial, to the Nuclear Regulatory Commission, ATTN:

Document Control Desk, Washington, DC 20555-0001; with copies to the Regional Administrator, Region I; the Director, Office of Enforcement, United States Nuclear Regulatory Commission, Washington, DC 20555-0001; and the NRC Senior Resident Inspector at Calvert Cliffs Nuclear Power Plant. In addition, if you disagree with the characterization of any finding in this report, you should provide a response within 30 days of the date of this inspection report, with the basis for your disagreement, to the Regional Administrator, Region I, and the NRC Senior Resident Inspector at Calvert Cliffs Nuclear Power Plant. The information you provide will be considered in accordance with Inspection Manual Chapter 0305.

In accordance with 10 CFR 2.390 of the NRC's "Rules of Practice," a copy of this letter, its enclosure, and your response (if any) will be available electronically for public inspection in the NRC Public Document Room or from the Publicly Available Records (PARS) component of NRC's document system (ADAMS). ADAMS is accessible from the NRC Web site at http://www.nrc.gov/reading-rm/adams.html (the Public Electronic Reading Room).

Sincerely,

David Lew

Division of Reactor Projects

Docket Nos.: 50-317, 50-318
License Nos.: DPR-53, DPR-69

Enclosure: Inspection Report 05000317/2010006 and 05000318/2010006 w/Attachments: Supplemental Information (Attachment 1)

Special Inspection Team Charter (Attachment 2)

Detailed Sequence of Events (Attachment 3)

cc w/encl: Distribution via ListServ Enclosure: Inspection Report 05000317/2010006 and 05000318/2010006

SUMMARY OF FINDINGS

IR 05000317/2010006 and 05000318/2010006; 02/22/2010 - 04/30/2010; Constellation Generation Company, Calvert Cliffs Nuclear Power Plant; Special Inspection for the February 18, 2010, Dual Unit Trip; Inspection Procedure 93812, Special Inspection.

A six-person NRC team, comprised of resident inspectors, regional inspectors, and a regional senior reactor analyst, conducted this Special Inspection. The team was accompanied by two engineers from the State of Maryland, Department of Natural Resources and Department of the Environment. One apparent violation with potential for greater than Green safety significance and four Green findings were identified. The significance of most findings is indicated by their color (Green, White, Yellow, or Red) using Inspection Manual Chapter (IMC) 0609, "Significance Determination Process" (SDP); the crosscutting aspect was determined using IMC 0310,

"Components Within the Cross Cutting Areas;" and findings for which the SDP does not apply may be Green or be assigned a severity level after NRC management review. The NRC's program for overseeing the safe operation of commercial nuclear power reactors is described in NUREG-1649, "Reactor Oversight Process," Revision 4, dated December 2006.

NRC Identified and Self Revealing Findings

Cornerstone: Initiating Events

  • Green: A self-revealing NCV of 10 CFR Part 50, Appendix B, Criterion XVI, "Corrective Action," was identified because auxiliary building roof leakage into the Unit 1 and Unit 2 45-foot switchgear rooms was identified on several occasions from 2002 to 2009, but was not thoroughly evaluated, and corrective actions to this condition adverse to quality were untimely and ineffective. This degraded condition led to the failure of the auxiliary building to provide protection to several safety related systems from external events, a ground on a reactor coolant pump (RCP) bus, and ultimately a Unit 1 reactor trip. Immediate corrective actions included: repair of degraded areas of the roof; walkdowns of other buildings within the protected area that could be susceptible to damage to electrical equipment due to water intrusion; issuance of standing orders to include guidance regarding prioritizing work orders due to roof leakage; and identifying further actions to take during periods of snow or rain to ensure plant equipment is not affected. Constellation entered the issue into their corrective action program (Condition Report (CR) 2010-001351). Long-term corrective actions include implementation of improved plant processes for categorization, prioritization, and management of roofing issues.

The finding is more than minor because it is associated with the protection against external factors attribute of the Initiating Events Cornerstone and affected the cornerstone objective to limit the likelihood of those events that upset plant stability and challenge critical safety functions during shutdown as well as power operations. The team determined the finding had a very low safety significance because, although it caused the reactor trip, it did not contribute to the likelihood that mitigation equipment or functions would not be available. The cause of the finding is related to the crosscutting area of Problem Identification and Resolution, Corrective Action Program aspect P.1(c), because Constellation did not thoroughly evaluate the problems related to the water intrusion into the auxiliary building such that the resolutions addressed the causes and extent-of-condition. This includes properly classifying, prioritizing, and evaluating the condition adverse to quality. (Section 2.1)

  • Green: The team identified a finding for failure to translate the design calculations of phase overcurrent relays on 13 kV feeder breakers into the actual relay settings. The overcurrent relays protect the unit service transformer against faults in the primary or secondary side windings. The design specified limit of 1200 amps was determined based on the breaker rating of the feeder breakers. Constellation determined the as-found relay setting for the feeder breakers was 1440 amps, which exceeded the rating of the feeder breakers. The team determined that due to the as-found relay setting, certain phase overcurrent conditions could potentially cause the breakers to fail prior to the phase overcurrent relay sensing the degraded condition. This condition could affect the recovery of the safety buses from the electrical grid. Constellation entered this issue into the corrective action program (condition report 2010-002123).

The finding is more than minor because it affected the Initiating Events Cornerstone attribute of equipment performance for ensuring the availability and reliability of systems to limit the likelihood of those events that upset plant stability and challenge critical safety functions during shutdown as well as power operations. Also, this issue was similar to Example 3j of IMC 0612, Appendix E, "Examples of Minor Issues," because the condition resulted in reasonable doubt of the operability of the component, and additional analysis was necessary to verify operability. This finding was determined to be of very low safety significance because the design deficiency did not result in an actual loss of function, based on Constellation's determination that the maximum load current possible would not challenge the feeder breaker ratings. Enforcement action does not apply because the performance deficiency did not involve a violation of a regulatory requirement. The finding did not have a cross-cutting aspect because the most significant contributor to the performance deficiency was not reflective of current licensee performance. (Section 2.3)
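The essence of the finding above is a design-to-setting comparison. The following is a minimal illustrative sketch using only the two current values quoted above; the check itself and its wording are assumptions for illustration, not Constellation's actual design verification method.

```python
# Illustrative check that an as-found relay setting does not exceed the
# design-specified limit derived from the feeder breaker rating.
# The two current values come from the finding above; the check itself
# is a simplified sketch, not the licensee's calculation.

DESIGN_LIMIT_AMPS = 1200       # design limit based on the 13 kV feeder breaker rating
as_found_setting_amps = 1440   # as-found phase overcurrent relay setting

if as_found_setting_amps > DESIGN_LIMIT_AMPS:
    print(f"Nonconforming: as-found setting {as_found_setting_amps} A exceeds "
          f"the {DESIGN_LIMIT_AMPS} A breaker-based design limit")
else:
    print("As-found setting is within the design limit")
```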

Cornerstone: Mitigating Systems

Preliminary White: The NRC identified an apparent violation of Technical Specification 5.4.1 for the failure of Constellation to establish, implement, and maintain preventive maintenance requirements associated with safety related relays. The team identified that Constellation did not implement a performance monitoring program specified by the licensee in Engineering Service Package (ES200100067) in lieu of a previously established (in 1987) 10-year service life replacement PM requirement for the 2B EDG T3A time delay relay. As a consequence, the 2B EDG failed to run following a demand start signal on February 18, 2010. Following identification of the failed T3A relay, it was replaced and the 2B EDG was satisfactorily tested and returned to service. In addition, time delay relays used in the 1B and 2A EDG protective circuits, that also exceeded the vendor recommended 10-year service life, were replaced. Constellation entered this issue, including the evaluation of extent-of-condition, into the corrective action program.

This finding is more than minor because it is associated with the equipment performance attribute of the Mitigating Systems Cornerstone and adversely impacted the objective of ensuring the availability, reliability, and capability of the safety related 2B EDG to respond to a loss of normal electrical power to its associated safety bus. This finding was assessed using IMC 0609, Appendix A, and preliminarily determined to be White (low to moderate safety significance) based upon a Phase 3 risk analysis with an exposure time of 323 days, which resulted in a total (internal and external contributions) calculated conditional core damage frequency (CCDF) of 7.1E-6. The cause of this finding is related to the crosscutting area of Human Performance, Resources aspect H.2(a), because preventive maintenance procedures for the EDGs were not properly established and implemented to maintain long term plant safety by maintenance of design margins and minimization of long standing equipment issues. (Section 2.2)

  • Green: The team identified an NCV of 10 CFR 50, Appendix B, Criterion XVI, "Corrective Action," because Constellation did not thoroughly evaluate and correct a degraded condition of CO-8 relay disc sticking or binding issues which can adversely impact the function of the EDGs and the electrical distribution protection scheme. Specifically, following the February 18, 2010, event, Constellation did not identify and adequately evaluate the recent CO-8 relay failures due to sticking or binding of the induction discs in the safety related and non-safety related applications. Constellation entered this issue into the corrective action program (CR 2010-004673).

The finding is more than minor because it is associated with the equipment reliability attribute of the Mitigating Systems Cornerstone, and it adversely affected the associated cornerstone objective of ensuring the availability, reliability, and capability of systems that respond to initiating events to prevent undesirable consequences (i.e., core damage). This finding was determined to be of very low safety significance because these historical relay failures did not result in an actual loss of system safety function.

The cause of the finding is related to the crosscutting area of Problem Identification and Resolution, Corrective Action Program aspect P.1(c), because Constellation did not thoroughly evaluate the previous station operating experience of CO-8 relay induction disc sticking and binding issues such that resolutions addressed the causes and extent-of-condition. (Section 2.3)

  • Green: A self-revealing NCV of Technical Specification (TS) 5.4.1.a, "Procedures," was identified for failure to establish adequate procedures for restoration of Chemical and Volume Control System (CVCS) letdown flow. On February 18, 2010, an electrical ground fault caused a Unit 1 reactor trip, loss of the 500 kV Red Bus, and CVCS letdown isolation as expected on the ensuing instrument bus 1Y10 electrical transient. Deficient operating instructions prevented timely restoration of letdown flow following the initial transient. Pressurizer level remained above the range specified in Emergency Operating Procedure (EOP)-1 for an extended period because of the operators' inability to restore letdown. This ultimately led to exceeding the TS high limit for pressurizer level. CVCS Operating Instruction OI-2A was subsequently revised, providing the necessary guidance for re-opening the letdown system excess flow check valve to restore letdown flow. This event was entered into the licensee's corrective action program (CR 2010-001378).

The finding is more than minor because it is associated with the procedure quality attribute of the Mitigating Systems Cornerstone and affected the cornerstone objective to ensure the availability, reliability, and capability of systems that respond to initiating events to prevent undesirable consequences (i.e., core damage). The finding is of very low safety significance because it is not a design or qualification deficiency, did not represent a loss of a safety function of a system or a single train greater than its TS allowed outage time, and did not screen as potentially risk significant due to external events. This finding has a crosscutting aspect in the area of Human Performance, Resources aspect H.2(c), because Constellation did not ensure that procedures for restoring CVCS letdown were complete and accurate. (Section 3.1)

REPORT DETAILS

1. Background and Description of Events

In accordance with the Special Inspection Team (SIT) charter (Attachment 2), team members (the team) conducted a detailed review of the February 18, 2010, dual unit trip with complications at Calvert Cliffs Nuclear Power Plant, including equipment and operator response. The team gathered information from the plant process computer (PPC) alarm printouts, interviewed station personnel, performed physical walkdowns of plant equipment, and reviewed procedures, maintenance records, and various technical documents to develop a detailed timeline of the event (Attachment 3). The following represents an abbreviated summary of the significant automatic plant and operator responses, which began at 8:24 a.m. on February 18, 2010, and ended on February 22, 2010, with both Unit 1 and Unit 2 in cold shutdown:

On February 18, 2010, at 8:24 a.m., the Unit 1 reactor automatically tripped from 93 percent reactor power in response to a reactor coolant system (RCS) low flow condition.

Water had leaked through the auxiliary building roof into the 45' elevation switchgear room, causing an electrical ground on bus 14 which tripped the 12B reactor coolant pump (RCP), thereby initiating the reactor protection system trip on RCS low flow. Three of the four Unit 1 RCPs continued operating.

Ground overcurrent (O/C) relay 2RY251G/B-22-2 failed to actuate as designed, permitting the Unit 1 ground O/C condition to reach the Unit 2 22 13 kV RCP bus and the associated 500 kV/13 kV transformer (P-13000-2). Ground O/C protection for the P-13000-2 transformer actuated, which deenergized the 500 kV "Red Bus" offsite power supply, the 22 bus, and all four RCPs. At 8:24 a.m., the Unit 2 reactor automatically tripped from full reactor power in response to the associated reactor protection system trip on RCS low flow.

The P-13000-2 isolation also deenergized the 21 13 kV service bus, which deenergized the Unit 1 14 4 kV safety bus, the Unit 2 24 4 kV safety bus, and several Unit 2 non-safety related 4 kV buses. The 1B emergency diesel generator (EDG) started as designed and reenergized the Unit 1 14 bus. The 2B EDG started, but tripped 15 seconds later due to a low lube oil pressure signal, and the 24 bus remained deenergized.

The electrical transient deenergized 120 volt instrument buses 1Y10 and 2Y10, which isolated chemical and volume control system (CVCS) letdown for both units and complicated operators' control of pressurizer level.

Loss of power to the Unit 2 non-safety related buses resulted in loss of the normal RCS heat removal path (main feedwater pumps, circulating water pumps, and condenser).

Operators used the turbine driven auxiliary feedwater pump and atmospheric steam dump valves for decay heat removal.

At 8:48 a.m., Unit 2 operators exited emergency operating procedure (EOP)-0, "Reactor Trip," and entered EOP-2, "Loss of Flow and Loss of Offsite Power." At 8:57 a.m., operators reenergized the 24 bus via the alternate feeder breaker. At 9:00 a.m., Unit 2 operators restored RCS letdown and maintained appropriate pressurizer level control.

At 11:17 a.m., Unit 2 operators started the 23 motor driven auxiliary feedwater (AFW) pump and secured the turbine driven AFW pump. At 11:18 a.m., Unit 2 operators exited the EOPs and returned to normal operating procedures. As of 12:02 p.m., Unit 1 operators remained unsuccessful at restoring RCS letdown and exceeded the pressurizer high level limits specified by both EOPs and TS. At 1:09 p.m., Unit 1 operators restored RCS letdown and restored normal pressurizer level control. At 1:38 p.m., Unit 1 operators exited the EOPs and returned to normal operating procedures.

At 2:07 p.m., Unit 1 vital 4 kV bus 14 was aligned to its alternate offsite source and the 1B EDG was secured. At 5:13 p.m., Unit 2 operators started the 21B and 22A RCPs to restore forced RCS circulation. On February 19, 2010, at 12:05 p.m., operators verified two offsite power supplies were available, with the 21 13 kV service bus energized from an alternate offsite source. On February 20, 2010, at 10:31 p.m., repairs on the 2B EDG were completed and the diesel generator was declared operable.

Unit 1 achieved cold shutdown at 5:38 a.m. on February 21, 2010, and 500 kV Red Bus was restored at 5:50 a.m. Unit 2 achieved cold shutdown at 5:00 a.m. on February 22, 2010.

2. Equipment Performance

2.1 Untimely Corrective Actions to Unit 1 45 Foot Elevation Switchgear Room Roof Leak Caused Reactor Trip

a. Inspection Scope

Water leakage through the Unit 1 auxiliary building roof into the 45' elevation switchgear room caused an electrical ground on Bus 14 which tripped the 12B RCP, thereby initiating a reactor protection system trip on RCS low flow. The team interviewed station personnel, performed field walkdowns, and reviewed various records including maintenance backlogs, maintenance history, operating logs, condition reports, and maintenance rule program records to independently determine the cause of the event and assess associated corrective actions. Constellation determined the root cause of the event was that Calvert Cliffs lacked sensitivity to the consequences associated with degraded roof conditions, which led to a reactive rather than preventive strategy for dealing with roof leaks. The team independently reviewed Constellation's Root Cause Analysis Report (RCAR) for the Unit 1 reactor trip to determine the adequacy of the evaluation, the extent-of-condition review, and associated corrective actions.

b. Findings

Introduction:

A self-revealing non-cited violation (NCV) of very low safety significance associated with 10 CFR Part 50, Appendix B, Criterion XVI, "Corrective Action," was identified because Constellation did not promptly identify and correct degraded conditions associated with the Unit 1 auxiliary building (45-foot elevation switchgear room) roof leakage. These degraded conditions led to the failure of the auxiliary building to provide adequate protection to numerous safety related systems from external events (adverse weather conditions), resulting in a ground on a reactor coolant pump (RCP) bus and a consequential Unit 1 reactor trip on February 18, 2010.

Description:

On February 18, 2010, Unit 1 tripped due to water from a roof leak entering the Unit 1 45-foot elevation switchgear (SWGR) room and causing a phase to ground short near a current transformer (CT) for the 12B RCP bus 14P differential/ground current protection devices. The ground fault was not isolated close to the source due to a failed ground protection relay in the feeder breaker to the Unit 1 RCP bus. The consequential trip of the 12B RCP led to the Unit 1 reactor protection system (RPS) trip due to a low reactor coolant system (RCS) flow signal.

While conducting a review of the dual unit trip, the team noted that in July of 2008, condition report (CR) IRE-032-766 was written regarding rain water which had fallen onto and into the emergency shutdown panel (ESOP) 1C43, which is located in the Unit 1 45' elevation SWGR room. Immediate actions were taken to notify the control room supervisor of the condition as well as to clean up the pooled water around the panel.

Corrective actions were initiated to establish a program to maintain weather tight building integrity. In June of 2009, CR 2009-004060 documented water dripping inside the SWGR room just east of the No. 12 motor generator set. No immediate actions were taken; however, the recommended action was to repair the roof. On August 8, 2009, a third CR (CR 2009-005508) was written, again regarding water leaking into the SWGR room and onto the ESOP. Immediate actions were taken to cover the panel with herculite, to direct the leaking water into a plastic bucket, and to mop up the standing water. Despite the immediate actions taken to address the three rain water issues, no additional actions were taken to properly prioritize, identify, and correct the roof leakage. This is evident in that each CR was given the lowest priority (category 4) and none of the work orders written to address the roof leakage had been approved. Additional safety related equipment in the SWGR room included power supply breakers for the "B" train auxiliary feedwater pump, high pressure safety injection pump, low pressure safety injection pump, and EDG.

Based on the review of the RCAR, the team noted several missed opportunities from 2002 to 2009 to identify and evaluate the degraded condition prior to the dual unit trip.

During a periodic bus inspection in 2004, repairs were made to insulating material on the power cables inside the 14P01 cubicle to correct a water spot on the "B" phase of the 12B RCP bus. This cubicle is in the same SWGR enclosure as the 14P02 cubicle where the water intrusion occurred that resulted in the February 18, 2010, trip. The work was completed under the bus inspection work order; however, no CR was written documenting the indicated water intrusion. This preventive maintenance activity should have led to an investigation into the cause of the water intrusion as well as the extent of the degraded condition. An apparent cause evaluation (IRE-007-705) was also completed in 2005 in response to a CR written by quality assurance personnel noting that 33 leaks had been identified during a walkdown, but no trend CR was written. Corrective actions were proposed; however, they were not adequately implemented.

The Calvert Cliffs maintenance rule scoping document states that the function of the auxiliary building is to provide structural support and separation to safety and non-safety related equipment while accounting for the effects of certain external events. Rain storms and heavy snowfall are examples of external events against which the auxiliary building is designed to provide protection. The Calvert Cliffs structure monitoring program did not effectively use the corrective action process to ensure this function of the auxiliary building would be maintained. At the time of this special inspection, 58 work orders were open to repair roof leaks. None of these work orders were planned or scheduled. Several of these work orders were over 2 years old.

Immediate corrective actions included: repairing degraded areas of the auxiliary building roof; performing walkdowns of other protected area buildings that could be susceptible to damage to electrical equipment due to water intrusion; issuing standing orders to include guidance regarding prioritizing work orders due to roof leakage; and identifying further actions to take during periods of snow or rain to ensure plant equipment is not affected. Long-term corrective actions include implementing improved plant processes for categorization, prioritization, and management of degraded roof and water leakage issues.

The team concluded that Constellation had numerous opportunities to thoroughly evaluate, classify, and prioritize the roof leakage, such that corrective actions could have addressed the full extent of the auxiliary building roofing degraded condition and prevented the water intrusion event and subsequent plant trip on February 18, 2010.

The team also concluded that station personnel did not properly inspect and maintain the roofs of several safety related structures to ensure the internal safety related and non-safety related components were protected from the effects of the external environment (i.e., rain, snow).

Analysis:

The failure of Constellation to promptly identify and correct conditions adverse to quality, associated with the auxiliary building roof leakage, is a performance deficiency. The finding is more than minor because it is associated with the Initiating Events Cornerstone and affects the cornerstone objective to limit the likelihood of those external events that upset plant stability and challenge critical safety functions during shutdown as well as power operations. The inspectors evaluated this finding using IMC 0609, Attachment 4, "Phase 1 - Initial Screening and Characterization of Findings." The team determined the finding to have very low safety significance because, although it contributed to a reactor trip, it did not contribute to the likelihood that mitigation equipment would not be available.

The cause of this finding is related to the Problem Identification and Resolution crosscutting area, Corrective Action Program aspect, because Constellation did not thoroughly evaluate the problems related to the water intrusion into the auxiliary building such that the resolutions addressed the causes and extent-of-condition. This included properly classifying, prioritizing, and evaluating the condition adverse to quality (P.1(c)).

Enforcement:

10 CFR Part 50, Appendix B, Criterion XVI, "Corrective Action," states, in part, that conditions adverse to quality, such as failures, malfunctions, deficiencies, deviations, defective material and equipment, and non-conformances are promptly identified and corrected. Contrary to the above, from 2002 to February 18, 2010, Constellation did not thoroughly evaluate and promptly correct degraded conditions associated with auxiliary building roof leakage. This led to the failure of the auxiliary building to provide protection to several safety related systems from external events (i.e., flooding), a ground on a reactor coolant pump bus, and ultimately a Unit 1 reactor trip.

Because this violation was of very low safety significance and was entered into the licensee's corrective action program as CR 2010-001351, this violation is being treated as an NCV, consistent with the NRC Enforcement Policy. (NCV 05000317/318/2010006-01: Failure to Thoroughly Evaluate and Correct Degraded Conditions Associated with Auxiliary Building Roof Leakage)

2.2 Deficient Preventive Maintenance Program Procedures and Implementation for EDG Agastat Time Delay (TD) Relays

a. Inspection Scope

On February 18, 2010, Unit 2 experienced an automatic reactor trip, loss of the P-13000-2 Service Transformer, and loss of the 500 kV Red Switchyard Bus. The loss of the Red Bus resulted in loss of power to the No. 24 4 kV safety bus, which caused an automatic start of the 2B EDG. The 2B EDG tripped due to low lube oil (LO) pressure after running for 15.2 seconds. The team reviewed the timing sequence, design requirements, relay schematics, and surveillance and maintenance history for the 2B EDG. Failure of a T3A time delay (TD) relay, coincident with the 2B EDG low LO pressure protection logic not having reset, caused the low LO pressure protective trip of the engine. Constellation identified two root causes for the EDG failure:

(1) station personnel failed to recognize and quantify the low margin in all aspects of the low lube oil pressure trip set feature for the EDG; and, (2) station personnel did not rigorously assess all failure modes of the Agastat relays in the EDG protection circuitry prior to extending their service life beyond the vendor qualified life.

The team reviewed Constellation's evaluation of the 2B EDG's failure, the adequacy of proposed and completed corrective actions, and the appropriateness of the extent-of-condition review. Independent reviews of design documents, mock-up testing, drawings, surveillance testing, and field walkdowns were performed by the team to evaluate the cause of the 2B EDG failure. In addition, the team reviewed Constellation's preventive maintenance (PM) history and associated PM programs.

b. Findings

Introduction.

The NRC identified an apparent violation of Technical Specification 5.4.1 for the failure of Constellation to establish, implement, and maintain preventive maintenance requirements associated with safety related relays. The team identified that Constellation did not implement a performance monitoring program in lieu of a previously established 10-year service life replacement PM requirement for the 2B EDG T3A TD relay. As a consequence, the 2B EDG failed to run following a demand start signal on February 18, 2010. This apparent violation is preliminarily determined to be of low to moderate safety significance (White).

Description.

The purpose of the T3A (Agastat 7000 series) TD relay in the EDG protective circuit is to bypass the low lube oil trip on EDG start to allow the EDG lube oil pressure to initially build up to operating conditions. The relay begins timing when the EDG speed reaches 810 rpm (approximately 6 seconds after EDG start). The relay functions to bypass the low LO pressure trip (<17 pounds pressure sensed in the EDG upper crankcase) for 15 seconds (a total of 21 seconds from EDG start). This time delay allows LO pressure to build up in the EDG upper crankcase high enough to reset the trip logic (2 of 3 pressure switches reset at >20 pounds). The Unit 2 February 18, 2010, sequence of events printout revealed that the T3A relay timed out early (after 9.2 seconds), at 15.2 seconds following the EDG start and prior to the low LO pressure sensing trip logic being reset. Constellation determined that a typical fast, non-pre-lubricated EDG start results in LO pressure exceeding 20 pounds pressure approximately 13 seconds following the start of the EDG. Accordingly, the early timeout of the T3A relay was not the only degraded 2B EDG condition that presented itself on February 18, 2010. Constellation attributed the February 18 delayed reset of the pressure switches to "sticky lubrication oil" in the ¾-inch stainless steel pressure sensing line to the pressure switches, vice an actual low LO pressure condition in the diesel engine upper crankcase.
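The timing relationships in the preceding paragraph can be summarized with a brief sketch. This is a simplified illustration, not Constellation's actual protection logic; the numeric constants (6-second relay pickup, 15-second nominal bypass, 20-pound reset pressure) come from the description above, while the function and variable names, and the delayed reset time used for the February 18 case, are assumptions for illustration.

```python
# Simplified, illustrative model of the EDG low lube oil (LO) pressure trip
# bypass timing described above. Numeric constants come from the report text;
# the structure and names are assumptions for illustration only.

RELAY_PICKUP_SEC = 6.0     # T3A relay begins timing ~6 s after EDG start (810 rpm)
NOMINAL_DELAY_SEC = 15.0   # relay should hold the bypass for 15 s (21 s total)
TRIP_RESET_PSI = 20.0      # 2-of-3 pressure switches reset above ~20 psi

def trips_on_low_lo(relay_delay_sec: float, lo_reset_time_sec: float) -> bool:
    """Return True if the EDG would trip on low LO pressure.

    relay_delay_sec:   actual time the T3A relay holds the bypass after pickup.
    lo_reset_time_sec: time after EDG start at which LO pressure exceeds
                       TRIP_RESET_PSI and the trip logic resets.
    """
    bypass_ends_sec = RELAY_PICKUP_SEC + relay_delay_sec
    # The engine trips if the bypass drops out before the trip logic has reset.
    return bypass_ends_sec < lo_reset_time_sec

# Nominal non-pre-lube start: bypass holds to 21 s, pressure resets at ~13 s.
print(trips_on_low_lo(NOMINAL_DELAY_SEC, 13.0))   # False (no trip)

# February 18, 2010: relay timed out after ~9.2 s (15.2 s after start) while the
# delayed "sticky oil" pressure response had not yet reset the trip logic
# (18 s is a hypothetical value for illustration) -> premature trip.
print(trips_on_low_lo(9.2, 18.0))                  # True (trip)
```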

The team determined that the T3A relay, which timed out early, had been in service on the 2B EDG for approximately 13.5 years, 3.5 years beyond its vendor recommended 10-year service life. In 2001, Constellation engineering discontinued the vendor recommended 10-year replacement PM and substituted a performance monitoring program envisioned to ensure Agastat relays (approximately 100 safety related applications and 500 to 600 non-safety related applications in the two Calvert Cliffs units) were appropriately monitored and replaced prior to failure (reference Engineering Service Package ESP No. ES200100067, approved 03/06/2001). The team identified that a relay performance monitoring program had not been established since 2001 at Calvert Cliffs. Constellation initiated CR 2010-04493 to address this performance issue.

The Shift Manager reviewed the immediate operability and determined that the other safety-related components using Agastat relays remain operable because those relays are installed in less harsh operational environments (e.g., vibration) than the EDG Agastat relays and, therefore, are less susceptible to age-related degradation. In addition, CR 2010-01784 was written to address the extent-of-condition of Agastat relays used in other safety-related applications.

Constellation replaced the failed 2B EDG T3A relay and, via a single 'as-found' bench test, validated its February 18, 2010, in-service failure, when the relay failed again, timing out early at 11.6 seconds. Subsequent attempts by Constellation to adjust the relay to within calibration tolerance were unsuccessful. The failed relay was shipped to an independent laboratory for diagnostic testing and destructive examination. The laboratory identified that, exercised over its full range of operation, more than 40 percent of the TD actuation results were out of tolerance. Internals examination identified that three of six screws on the flexible diaphragm retaining ring were loose, suggesting that the early time-out of the relay was possibly due to excessive air bleed off (leakage past the diaphragm seal). Constellation concluded that the TD relay failure was a relatively recent event (within the last 47 days) and attributable to the three 2B EDG starts and approximately seven cumulative hours of operation that occurred in early January 2010.

The team concluded that Constellation provided no evidence to support the approximate time of failure of the TD relay. However, the team determined that the failure and probable failure mechanism may have occurred between the last successful calibration of the TD relay (May 13, 2008) and the observed failure on February 18, 2010. In addition, the team concluded that the TD relay early time-out was most likely a latent failure masked by the monthly EDG surveillance test. Accordingly, the TD relay failure was revealed by the fast, non-pre-lubrication, demand start on February 18, 2010.

The basis for the team's conclusion was as follows:

  • Constellation's troubleshooting results were not conclusive regarding the lubricating oil pressure sensing line "sticky oil" theory, based upon the following: 1) the "sticky oil" drained from the sensing line was not saved or analyzed for consistency or contaminants (Constellation did not exercise appropriate quarantine practices); 2) the ¾-inch LO pressure sensing line was not backfilled with oil and was therefore susceptible to trapped air pockets that may tend to dampen accurate pressure sensing and may result in a delayed pressure response; and, 3) Constellation's routine (two-year calibration cycle) and post-event calibration checks of the pressure switches did not record "as-found" values of the pressure switch reset values; this information may have assisted in ruling out possible pressure switch setpoint drift or malfunction.

The team acknowledged that Constellation's subsequent mock-up testing of the pressure sensing line did show that lubricating oils of heavier viscosity tend to delay the pressure sensing response. However, the 100W oil used to demonstrate the phenomenon (approximate 3 second pressure sensing delay) was considerably heavier than the lubricating oil used in the 2B EDG (40W) and may or may not have reflected the "sticky oil" viscosity observed by the technician responsible for the pressure switch troubleshooting.

  • The fast, non-pre-lube start of the 2B EDG contributed to the identification of the failed relay, whereas the monthly pre-lube EDG starts likely masked the failure of the TD relay. The team determined that for a typical fast, pre-lubricated EDG start, a small pre-lube pump is run for 3 to 5 minutes prior to the EDG starting and fills the upper crankcase with lubricating oil, but is not of sufficient capacity to pressurize the upper crankcase. When the EDG starts, the engine driven LO pump functions to complete the upper crankcase fill and pressurization (>20 pounds pressure) in approximately 8 seconds. Accordingly, any relay failure (timing out early, <12 seconds) is masked by the fast, pre-lube EDG start because the relay actuates at 6 seconds and only has to satisfactorily function (block the low lube oil trip signal) for >2 seconds. The team noted that, by the low LO pressure protective system design, the fast pre-lube EDG starts allow for a significant margin to satisfactory build-up of lube oil pressure before the TD relay times out (a margin of approximately 13 seconds).

For the fast non-pre-lube start, LO pressure typically exceeds 20 pounds pressure at 13 seconds after EDG start. This 13 second time interval similarly translates to the TD relay having to function for >7 seconds from the time it actuates at 6 seconds after EDG start. This 7-second minimum TD function also, by design, provides margin (an additional 8 seconds) for satisfactory LO pressure build-up (see the worked comparison below).
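The masking effect described in the two items above reduces to a simple margin comparison, reproduced below as an illustrative check. The timing values are those quoted in the text; the helper function and output formatting are assumptions for illustration only.

```python
# Worked margin comparison for the two EDG start types discussed above.
# All times are seconds after EDG start; values are those quoted in the text.

RELAY_PICKUP_SEC = 6.0           # T3A relay begins timing at ~6 s (810 rpm)
NOMINAL_RELAY_DELAY_SEC = 15.0   # nominal bypass duration after relay pickup

def required_relay_hold(lo_recovery_sec: float) -> float:
    """Minimum time the relay must hold the bypass for the trip logic to reset."""
    return lo_recovery_sec - RELAY_PICKUP_SEC

for start_type, lo_recovery_sec in [("fast pre-lube start", 8.0),
                                    ("fast non-pre-lube start", 13.0)]:
    needed = required_relay_hold(lo_recovery_sec)
    margin = NOMINAL_RELAY_DELAY_SEC - needed
    print(f"{start_type}: relay must hold >{needed:.0f} s; design margin ~{margin:.0f} s")

# The pre-lube start only requires ~2 s of relay function (13 s margin), so a
# relay timing out early (e.g., after 9.2 s) still passes the monthly surveillance
# test; the non-pre-lube demand start requires ~7 s (8 s margin) and revealed it.
```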

The team concluded that the last known satisfactory relay calibration (setpoint) check of the T3A relay was the two-year calibration check completed on May 13, 2008. Based upon Constellation records, the as-found setting was 17.5 seconds and the as-left setting was 16.5 seconds. All monthly surveillance tests of the 2B EDG since May 13, 2008, were fast, pre-lube starts. There were no demand starts of the 2B EDG between May 13, 2008, and February 18, 2010, that would have proved or disproved that the T3A relay was operable, or whether the LO pressure sensing line issue was coincidental with, or precipitated by, a fast, non-pre-lube start.

Following identification of the failed T3A relay, the licensee replaced the relay, satisfactorily tested the 2B EDG, and returned the 2B EDG to service. In addition, time delay relays used in the 1B and 2A EDG protective circuits, that also exceeded the vendor recommended 10-year service life, were replaced. Constellation is evaluating the continued use of Agastat relays beyond their vendor recommended 10-year service life. As previously noted, there are approximately 100 safety related applications and 500-600 non-safety related applications at the two Calvert Cliffs units.

Analysis.

The team identified that the failure of Constellation to perform preventive maintenance in accordance with vendor recommendations, without adequate performance monitoring, on Agastat 7000 series TD relays used in safety related applications is a performance deficiency and a violation of Technical Specifications (TS). This violation of TS is more than minor because it is associated with the equipment performance attribute of the Mitigating Systems Cornerstone and adversely impacted the objective of ensuring the availability, reliability, and capability of systems that respond to initiating events to prevent undesirable consequences. Specifically, the early timeout of the T3A relay caused the 2B EDG to trip prior to the low lube oil pressure trip signal clearing (resetting) after a demand fast start on February 18, 2010.

The failure of the 2B EDG to run resulted in the continued loss of alternating current to the No. 24 4 kV safeguards bus and its associated emergency core cooling systems.

In accordance with Table 4a of IMC 0609, Attachment 04, "Phase 1 Initial Screening and Characterization of Findings," this performance deficiency required a Phase 2 or 3 risk analysis because the issue resulted in an actual loss of safety function of a single train for greater than its TS allowed outage time. A Phase 3 risk assessment was performed by a Region I Senior Reactor Analyst (SRA) using the SAPHIRE software and the Calvert Cliffs Unit 2 Standardized Plant Analysis Risk (SPAR) model, Revision 3.46, dated February 2010.

To conduct the Phase 3 analysis, the SRA made the following modeling assumptions:

  • Exposure time was based upon a T/2 approximation. The team determined that the 2B EDG exposure time is best approximated by a T/2 value, per the usage rules of IMC 0308, Appendix A, "Technical Basis for At Power Significance Determination Process." Specifically, if the inception of a condition is unknown, the mean exposure time (T/2) is a statistically valid time period because it represents one-half of the time between the last successful demonstration of the component's function and the time of discovery or known failure. The last successful demonstration of the T3A relay was the calibration check performed on May 13, 2008. The total time (T) between May 13, 2008, and February 18, 2010, is 646 days. Therefore, T/2 represents an approximate exposure time of 323 days, or 7752 hours (see the worked example following this list).
  • SPAR model basic event EPS-DGN-FS-2B, representing "Diesel Generator 2B Failure to Start," was set to TRUE. The basis for the TRUE, vice a failure probability of 1.0, is that common cause failure of the remaining Fairbanks-Morse EDGs could not be conclusively ruled out. The same type Agastat 7000 series TD relays, with comparable (greater than 10 years) in-service times, were installed on the 1B and 2A EDGs.
  • SPAR model basic event AFW-XHE-XM-FC8, representing operator failure to open the turbine building to turbine driven auxiliary feedwater (TDAFW) pump room door within 12 hours of a station blackout event, was set to FALSE. The basis for this change is that recent engineering analysis of the TDAFW pump room heat-up (post Appendix R fire, LOOP/LOCA, SBO) identified no dependency on operator action to open the door to the turbine building to ensure adequate cooling of the TDAFW pumps.
  • No additional 2B EDG recovery credit was applied to the model based upon this event. The SRA noted that the 2B EDG non-recovery probability (0.772) in the SPAR model is based upon industry statistical data. The SRA notes that Constellation procedures have operators align the 0C EDG (within 45 minutes) vice attempt to troubleshoot and restart the failed EDG. Accordingly, any subsequent attempts to restart the 2B EDG, after an approximate one-hour delay (aligning the 0C EDG), would likely have the same result because all LO would have drained from the upper crankcase.
  • Even though Agastat 7000 series relays are used in multiple safety related applications (some beyond their vendor recommended service life), no broad based increase in safety related systems' or components' failure probabilities was applied for this Phase 3 risk assessment. As a consequence, the calculated risk estimate for this condition may be a non-conservative value because the Agastat relays are used in multiple other safety related applications beyond the manufacturer's recommended 10-year service life.
  • Truncation for the SPAR model analysis was set at 1E-13.
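The exposure-time assumption in the first bullet above reduces to simple arithmetic. The short check below reproduces it for illustration; the dates and the T/2 convention are those stated above, while the variable names are assumptions.

```python
from datetime import date

# T/2 exposure time per IMC 0308, Appendix A: one-half of the interval between
# the last successful demonstration of the function and the failure discovery.
last_good_calibration = date(2008, 5, 13)   # last satisfactory T3A calibration check
failure_discovered = date(2010, 2, 18)      # 2B EDG failure to run

total_days = (failure_discovered - last_good_calibration).days   # 646 days
exposure_days = total_days / 2                                    # 323 days
exposure_hours = exposure_days * 24                               # 7752 hours

print(total_days, exposure_days, exposure_hours)   # 646 323.0 7752.0
```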

Using the above stated assumptions, the increase in internal risk (core damage frequency) associated with the 2B EDG failure of February 18, 2010, was estimated at 6.0E-6. The dominant core damage sequence involves the loss of Facility B (13 kV Service Bus No. 21), loss of steam generator cooling (main feedwater and auxiliary feedwater), and the subsequent loss of once through cooling (feed and bleed, using the charging system and a power operated relief valve).

Based upon the absence of an NRC external risk quantification tool, the SRA used Constellation's calculated external risk values to approximate the external risk contribution. Constellation's estimated external risk is based upon a RISKMAN fire modeling tool and was calculated at 1.1E-6 for the T/2 exposure period. No appreciable external risk contributions were identified for flooding or seismic events. The dominant core damage external events include turbine building fires (involving the steam generator main feedwater pump area) and high wind/hurricane events. The dominant turbine building fire scenarios involve the failure of the available EDGs (2B and 1B) and a spurious initiation of the safety feature actuation system (SFAS). The dominant high wind/hurricane event core damage scenarios involve the assumed failure of the 0C EDG, the subsequent failure of the remaining safety related EDGs, and a spurious SFAS.

Based upon the SRA's calculated internal events risk estimate and Constellation's estimated external events risk contribution, the total increase in Unit 2 core damage frequency for this finding is approximately 7.1E-6. Accordingly, this finding is of low to moderate safety significance (White). This finding and the associated risk analysis were reviewed by a Significance and Enforcement Review Panel (SERP) conducted on June 1, 2010. The SERP concluded that the stated Technical Specification violation and associated risk characterization were appropriate. The violation does not represent an immediate safety concern because the licensee took prompt corrective actions to replace the Agastat relays in use beyond their service life for all three Fairbanks-Morse EDGs and ensured the LO pressure sensing lines were properly backfilled. Subsequent testing of all three EDGs verified operability, including a non-pre-lubricated fast start of the 2B EDG.
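For reference, the overall significance arithmetic above simply sums the two contributions and compares the result to the SDP color bands. The sketch below restates those numbers; the threshold values shown are the conventional delta-CDF bands assumed here for illustration, not quoted from this report.

```python
# Combine the internal- and external-events risk estimates quoted above and
# compare against assumed SDP delta-CDF color bands
# (Green < 1E-6, White 1E-6 to 1E-5, Yellow 1E-5 to 1E-4, Red >= 1E-4).

internal_delta_cdf = 6.0e-6   # Region I SRA SPAR model estimate (per year)
external_delta_cdf = 1.1e-6   # Constellation RISKMAN external events estimate

total_delta_cdf = internal_delta_cdf + external_delta_cdf   # 7.1E-6

def sdp_color(delta_cdf: float) -> str:
    if delta_cdf < 1e-6:
        return "Green"
    if delta_cdf < 1e-5:
        return "White"
    if delta_cdf < 1e-4:
        return "Yellow"
    return "Red"

print(f"{total_delta_cdf:.1e} -> {sdp_color(total_delta_cdf)}")   # 7.1e-06 -> White
```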

The Constellation PRA staff performed a risk assessment of the 2B EDG failure using their CAFTA internal events model and RISKMAN external events model. Constellation assumed the same exposure time as the Region I SRA of T/2 equal to 323 days.

Constellation's total risk estimate was 3.1E-6 CDF. Based upon discussions with the Constellation PRA staff, their risk estimate and dominant core damage sequences compare favorably with the NRC results.

The cause of this finding is related to the crosscutting area of Human Performance, Resources aspect H.2(a), because preventive maintenance procedures for the EDGs were not properly established and implemented to maintain long term plant safety by maintenance of design margins and minimization of long standing equipment issues.

Enforcement.

Technical Specification 5.4.1 states, in part, that written procedures specified in Regulatory Guide 1.33, Revision 2, Appendix A, February 1978, shall be established, implemented, and maintained. Section 9.b of Appendix A to Regulatory Guide 1.33 states, in part, that preventive maintenance schedules should be developed to specify replacement of parts that have a specific service life. In March 2001, Constellation replaced their original 10-year relay replacement preventive maintenance with a proposed performance monitoring program, via Engineering Change Package No. ES200100067, to ensure the continued reliability and operability of Agastat relays installed in safety related applications beyond the vendor recommended 10-year service life.

Contrary to the above, the team identified that Constellation did not establish a performance monitoring program, and all Agastat relays installed in safety related applications at Calvert Cliffs have been subject to a "run to failure" preventive maintenance/replacement interval. Constellation took prompt corrective action to replace Agastat relays used in service beyond their 10-year service life in the 2B, 2A, and 1B EDGs. The remaining Agastat relays used in safety related applications beyond their vendor recommended service life are under evaluation by Constellation.

Constellation has initiated several CRs (see Attachment 1 to this report) associated with this performance deficiency. Pending final significance determination, the finding is identified as Apparent Violation (AV) 05000318/2010006-02, Inadequate Preventive Maintenance Results in the Failure of the 2B Emergency Diesel Generator.

2.3 Ground Fault Relay 251G/B-22-2 Did Not Actuate on Ground Overcurrent to Trip Open Breaker 252-2202

a. Inspection Scope

The team reviewed design requirements, drawings, and maintenance history of the 251G/B-22-2 relay. Failure of this relay to actuate and trip open the 252-2202 breaker resulted in a loss of the P-13000-2 service transformer, which resulted in loss of power to the Unit 2 RCPs and a Unit 2 trip with loss of normal decay heat removal. Unit 2 remained on atmospheric dump valves and auxiliary feedwater for heat removal for approximately 68 hours. Constellation determined the most likely cause of the relay failure was premature coil aging due to the operating environment and the magnitude of the current seen, which caused insulation breakdown and shorting of the magnetizing coil. Even though Constellation could not conclusively identify the cause of the insulation breakdown and the magnitude of the signal that coincided with the breakdown, they did note that the relay in this particular application is located in a non-environmentally controlled space, which would impact aging mechanisms due to the temperature extremes.

Additionally, the 251G/B-22-2 relay was 39 years old at the time of the event, within 1 year of the lower end of the 40- to 60-year service life.

The team reviewed Constellation's root cause analysis report (RCAR) for the 251G/B-22-2 relay to determine the adequacy of the evaluation and the appropriateness of the extent-of-condition review. Independent reviews of the design documentation, drawings, maintenance history, and field walkdowns were performed to validate the cause of the relay failure. The team reviewed the design requirement and the relay setting information of the 13.8 kV fault protection relaying scheme to ensure proper equipment protection during transient and steady state conditions. The team also reviewed the history of the 251G/B-22-2 relay, along with other protective relays in the 13.8 kV system that were required during the event, to verify that the applicable test acceptance criteria and maintenance frequency requirements were met.

b. Findings

Deficient Evaluation and Untimely Corrective Action Associated with Induction Disc Binding on CO-8 Type Relays

Introduction:

The team identified a finding of very low safety significance (Green) that involved an NCV of 10 CFR 50, Appendix B, Criterion XVI, "Corrective Action," because Constellation did not thoroughly evaluate and correct a degraded condition of CO-8 relay disc sticking or binding issues, which can adversely impact the function of the EDGs and the electrical distribution protection scheme. Specifically, following the February 18, 2010, event, Constellation did not identify and adequately evaluate the recent CO-8 relay failures due to sticking or binding of the induction discs in safety related and non-safety related applications.

Description:

The team reviewed Constellation's RCAR for relay 2RY251G/B-22-2 on breaker 2BKR252-2202, which failed to trip open the breaker. The relay was a CO-8 ground fault overcurrent relay which had been in service for the life of the plant. The relay consists of an electromagnet and an induction disc which rotates to close a moving contact against a stationary contact to complete the breaker trip circuitry. The root cause analysis concluded that the magnetizing coil had shorted out the majority of the windings in a manner such that current would pass but the induction disc would not rotate.

The team reviewed Constellation's maintenance and corrective action history of the CO-8 relay failures and noted that the induction disc type relays had a failure history associated with disc binding and sticking conditions. The team also noted that CO-8 relays and other induction disc type relays had a high failure rate for out of tolerance conditions during the performance of relay calibration procedures. The team determined that failures of the relays due to binding, sticking, and out of tolerance conditions can potentially impact breaker trip operation and affect breaker coordination.

The failure history for binding, sticking, and out of tolerance conditions for the induction disc type relays was reviewed back to 2007. The team found 40 failures since 2007, including 5 failures of CO-8 type relays. Constellation has a total of 68 CO-8 type relays installed in safety related and non-safety related applications, all of which have been scheduled to be calibrated every 2 years since 2005. The team noted that from 1999 to 2005, as-found testing and calibration of the relays were performed every 4 years. The team reviewed the failure data of the CO-8 and other induction disc type relays prior to 2005 and concluded that the failure rate did not change significantly subsequent to the increase in calibration frequency. The CO-8 relay failure rate was noted to be 10 percent from 1999 to 2005.
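The comparison described above can be illustrated with a short back-of-the-envelope check. The following Python sketch is illustrative only: it assumes the reported CO-8 failure count is normalized against the installed CO-8 population and uses an arbitrary 5-percentage-point threshold for a "significant" rate change; neither assumption comes from the report.

    # Illustrative sketch of the CO-8 failure rate comparison (assumptions noted above).

    def failure_fraction(failures: int, population: int) -> float:
        """Fraction of the installed relay population that failed in the review window."""
        return failures / population

    CO8_POPULATION = 68          # CO-8 relays installed (safety and non-safety related)
    CO8_FAILURES_SINCE_2007 = 5  # CO-8 failures noted in the post-2007 review window

    recent = failure_fraction(CO8_FAILURES_SINCE_2007, CO8_POPULATION)
    historical = 0.10            # reported CO-8 failure rate for 1999-2005

    # Arbitrary 5-percentage-point threshold (assumption, not from the report)
    significant_change = abs(recent - historical) > 0.05

    print(f"Post-2007 CO-8 failure fraction: {recent:.1%}")
    print(f"1999-2005 CO-8 failure rate:     {historical:.1%}")
    print(f"Change significant?              {significant_change}")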

Constellation replaced or cleaned the relays with sticking or binding conditions; however, the licensee did not place the relays in any system or component monitoring program.

The relays were also not part of the system health tracking report. The team reviewed the historical failures of the CO-8 relays and noted that for some of the testing conditions, the induction disc needed to be mechanically agitated to free it from the binding or sticking conditions. The team reviewed the vendor and Electric Power Research Institute (EPRI) calibration and maintenance manuals and determined that Constellation's calibration and inspection procedure did not include all of the recommended practices specified in the EPRI guideline related to inspection and cleaning of the induction disc units. Constellation entered this issue into the corrective action program (CRs 2010-004672 and 2010-004673).

Analysis:

The team reviewed Constellation's root cause evaluation, which concluded the cause of the relay failure to be premature coil aging due to its operating environment and the magnitude of the current seen by the relay. The team concluded that there was no direct correlation between the coil failure and the historical binding and sticking conditions of the CO-8 relay discs. However, the team determined that Constellation's failure histories of the CO-8 type relays were significant, and the failure to evaluate the degraded conditions and implement timely and effective action to correct this condition adverse to quality was a performance deficiency. The CO-8 relays are used in multiple safety related and non-safety related applications.

The finding was more than minor, in accordance with NRC IMC 0612, Appendix B, "Issue Screening," (IMC 0612B) because, while it was not similar to any examples in IMC 0612, Appendix E, "Examples of Minor Issues" (IMC 0612E), it was associated with the equipment reliability attribute of the Mitigating Systems Cornerstone and it adversely affected the associated cornerstone objective of ensuring the availability, reliability, and capability of systems that respond to initiating events to prevent undesirable consequences (i.e., core damage). The team evaluated this finding using IMC 0612, Attachment 4, "Phase 1 - Initial Screening and Characterization of Findings." The finding is of very low safety significance (Green) because it is not a design or qualification deficiency, did not represent a loss of a safety function of a system or a single train greater than its TS allowed outage time, and did not screen as potentially risk significant due to external events. The historical relay failures did not result in an actual loss of system safety function.

The cause of the finding is related to the crosscutting area of Problem Identification and Resolution, Corrective Action Program, because Constellation did not thoroughly evaluate the previous station operating experience of CO-8 relay induction disc sticking and binding issues such that resolutions addressed the causes and extent-of-condition (P.1(c)).

Enforcement:

10 CFR 50, Appendix B, Criterion XVI, "Corrective Action," requires, in part, that measures shall be established to assure that conditions adverse to quality are promptly identified and corrected. Contrary to the above, Constellation did not adequately evaluate and correct the degraded condition of CO-8 relays, which can potentially impact the function of multiple safety related systems or components. Because the finding was of very low safety significance and has been entered into Constellation's corrective action program (CR 2010-004673), this violation is being treated as an NCV, consistent with Section VI.A of the NRC Enforcement Policy: NCV 05000317 & 318/2010006-03, Failure to Evaluate Degraded Conditions Associated With CO-8 Relays and Implement Timely and Effective Action to Correct the Condition Adverse to Quality.

Deficient Offsite Power Distribution Tripping Scheme Design Control

Introduction:

The team identified a finding of very low safety significance (Green) for the failure to translate the design calculation setpoint values listed in calculations E-90-058 and E-90-061 for the phase overcurrent relays (250) on feeder breakers 252-1101, 1102, 1103, 2101, 2102, and 2103 into the actual relay settings.

Description:

During the relay settings review, the team identified that the service transformer 251G/ST-2 and service bus 251G/SB-21 ground overcurrent relay settings specified in the relay setting sheets did not support the values listed in relay setting calculation E-90-061 for the 500/14 kV service transformer (P-13000-2). The value listed in the calculation for the 251G/ST-2 ground overcurrent relay tap setting was 2.5 amps, and the actual field setting, which is set in accordance with the relay setting sheets, was found to be 2 amps. For service bus 251G/SB-21, the calculated time delay value was 4 seconds, and the actual field setting was found to be 3 seconds. Due to these discrepancies, Constellation's engineering staff conducted an evaluation to determine if the actual field settings, as specified in the relay setting sheets for the two overcurrent relays, provided adequate coordination to ensure selective tripping. The relays are designed to detect ground faults on the 13.8 kV system which have not been cleared by the 500 kV transmission system relays and to separate station service transformer P-13000-2 from the grid. The team reviewed Constellation's evaluation and determined that there was no selective tripping coordination impact due to the relay setting discrepancies on 251G/ST-2 and 251G/SB-21. However, due to these discrepancies identified between the relay setting sheets and the design calculations, Constellation conducted an extent-of-condition review for the 13.8 kV systems to determine if other similar relay setting discrepancies exist.
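The kind of discrepancy described above amounts to a cross-check of the relay setting sheets against the design calculation values. The short Python sketch below simply restates the two quoted discrepancies; the data layout and the check itself are assumptions for illustration and do not represent the station's relay setting verification process.

    # Illustrative cross-check of as-found settings against design calculation values.

    design_values = {
        ("251G/ST-2", "ground overcurrent tap (amps)"): 2.5,  # per calculation E-90-061
        ("251G/SB-21", "time delay (seconds)"): 4.0,          # per calculation E-90-061
    }

    as_found_values = {
        ("251G/ST-2", "ground overcurrent tap (amps)"): 2.0,  # per relay setting sheet
        ("251G/SB-21", "time delay (seconds)"): 3.0,          # per relay setting sheet
    }

    for (relay, parameter), design in design_values.items():
        as_found = as_found_values[(relay, parameter)]
        if as_found != design:
            print(f"Discrepancy: {relay} {parameter}: calculation {design}, as-found {as_found}")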

As a result of the extent-of-condition review, Constellation identified that the phase overcurrent relay (250) pickup value for the six unit service transformer feeder breakers 252-1101, 1102, 1103, 2101, 2102, and 2103 was set at 1440 amps in accordance with the relay setting sheets, while the value specified in calculations E-90-058 and E-90-061 was 1200 amps.

In the normal system design when offsite power is available, the 4.16 kV system is supplied by the 13.8 kV system through the six unit service transformers. The unit service transformers have overcurrent protection to protect against transformer faults in the primary or secondary side windings. Per calculations E-90-058 and E-90-061, this overcurrent protection was limited to 1200 amps due to the rating of all of the feeder breakers. Because the as-found relay setting of 1440 amps exceeded the breaker rating of 1200 amps, Constellation conducted an operability analysis and performed a calculation which determined that the maximum load current possible during the worst case electrical distribution line-up condition would be 982 amps. The calculation demonstrated that the maximum load current possible during the worst case electrical distribution line-up would not challenge the feeder breaker ratings, and therefore would not cause the breaker to fail prior to the trip operation (tripping).
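The relationship among the as-found pickup setting, the feeder breaker rating, and the worst case load current can be summarized in a short check. The Python sketch below only restates the values quoted above; the comparisons are illustrative and are not the licensee's operability calculation.

    # Illustrative margin check for the phase overcurrent (250) relay settings.

    BREAKER_RATING_A = 1200    # feeder breaker rating per calculations E-90-058/E-90-061
    CALC_PICKUP_A = 1200       # pickup value specified in the calculations
    AS_FOUND_PICKUP_A = 1440   # as-found pickup per the relay setting sheets
    WORST_CASE_LOAD_A = 982    # maximum load current from the operability evaluation

    # The performance deficiency: the as-found pickup exceeds the breaker rating,
    # so the relay would not trip before current could exceed the rating.
    setting_exceeds_rating = AS_FOUND_PICKUP_A > BREAKER_RATING_A

    # The operability basis: worst case load current still has margin to the rating.
    load_margin_a = BREAKER_RATING_A - WORST_CASE_LOAD_A

    print(f"As-found pickup exceeds breaker rating: {setting_exceeds_rating}")
    print(f"Worst case load margin to breaker rating: {load_margin_a} amps")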

Analysis:

The team determined that the failure to translate the design calculation setpoint values listed in calculations E-90-058 and E-90-061 for the phase overcurrent relays (250) on feeder breakers 252-1101, 1102, 1103, 2101, 2102, and 2103 into the actual relay settings was a performance deficiency.

The team determined that this finding was more than minor because it affected the Initiating Events Cornerstone attribute of equipment performance for ensuring the availability and reliability of systems to limit the likelihood of those events that upset plant stability and challenge critical safety functions during shutdown as well as power operations. Also, this issue was similar to Example 3j of IMC 0612, Appendix E, "Examples of Minor Issues," because the condition resulted in reasonable doubt of the operability of the component, and additional analysis was necessary to verify operability.

The failure to translate the design calculation setpoint of the phase overcurrent relays on the feeder breakers resulted in an as-found relay setting that exceeded the rating of the feeder breakers. The team determined that, due to the as-found relay setting exceeding the breaker ratings, certain phase overcurrent conditions could have potentially caused the breaker to fail prior to the phase overcurrent relay sensing the degraded condition. The team determined that this condition could affect the recovery of the safety buses from the electrical grid. The team evaluated this finding using IMC 0612, Attachment 4, "Phase 1 - Initial Screening and Characterization of Findings." This finding was determined to be of very low safety significance (Green) because the inadequate relay settings did not result in an actual loss of system safety function, and Constellation also performed an evaluation and determined that the maximum load current possible would not challenge the feeder breaker ratings. The finding did not have a cross-cutting aspect because the most significant contributor to the performance deficiency was not reflective of current licensee performance.

Enforcement:

This finding was not a violation of regulatory requirements because the unit service transformers and the overcurrent protection relays are not a system or component covered under 10 CFR Part 50, Appendix B. The issue has been entered into the licensee's corrective action program (CR 2010-002123). Because this finding does not involve a violation and has very low safety significance, it is identified as FIN 05000317 & 318/2010006-04, Failure to Translate Design Calculation Setpoint of Phase Overcurrent Relay on Feeder Breakers.

2.4 Breaker 2BKR152-2501 (4 kV Bus 25 Normal Feed) Failed to Trip Open

a. Inspection Scope

The team reviewed design requirements, drawings, and maintenance history of the 2BKR152-2501 breaker. The breaker inspection reviewed the maintenance practices and procedures for overhauling the 4 kV breakers to determine if adequate test acceptance criteria were established and vendor recommendations were followed. The team reviewed Constellation's root cause analysis report for the 2BKR152-2501 breaker to determine the adequacy of the evaluation and the appropriateness of the extent-of-condition review.

Independent reviews of the design documentation, drawings, maintenance history, and field walkdowns were performed to validate the cause of the breaker failure.

Additionally, operations, maintenance, and engineering staff were interviewed to confirm the observations and causes cited in Constellation's evaluation of this issue. The team reviewed the adequacy of associated preventive maintenance, corrective actions, and post maintenance testing performed on the 2BKR152-2501 breaker. Bus 25 supplies power to three Unit 2 circulating water pumps.

No findings of significance were identified for this equipment issue. The team determined that this failure of 2BKR152-2501 to open had no adverse consequence during this event.

2.5 Breaker 2BKR252-2201 (13 kV Unit 2 RCP Buses Normal Feed) Failed to Trip Open

a. Inspection Scope

The team reviewed design requirements, drawings, and maintenance history of the 2BKR252-2201 breaker. The team reviewed the maintenance practice and procedure of overhauling the 13.8 kV breakers to determine if adequate test acceptance criteria were established and followed vendor recommendations. Constellation concluded the cause of the breaker failing to open was infant mortality (i.e., manufacturing defect). The team reviewed Constellation's root cause analysis report for the 2BKR252-2201 to determine the adequacy of the evaluation and the appropriateness of the extent-of-condition review. Independent reviews of the design documentation, drawings, maintenance history, and field walkdowns were performed to validate the cause of the breaker failure.

Additionally, operations, maintenance, and engineering staff were interviewed to confirm the observations and causes cited in Constellation's evaluation of this issue. The team reviewed the adequacy of associated preventive maintenance, corrective actions, and post maintenance testing performed on the 2BKR252-2201 breaker.

b. Findings

No findings of significance were identified.

3. Human Performance

3.1 Event Diagnosis and Crew Performance

a. Inspection Scope

The team interviewed the operations crew that responded to the February 18, 2010, event, including three senior reactor operators, the shift manager, the control room supervisor, the shift technical advisor, two reactor operators, and three equipment operators to determine whether the operators performed in accordance with procedures and training. The team also reviewed narrative logs, post-transient reports, condition reports, PPC trend data, and procedures implemented by the crew.

b. Findings/Observations

Deficient Procedure Guidance for CVCS Letdown Restoration

Introduction:

A self-revealing Green NCV of TS 5.4.1.a, "Procedures," was identified because Constellation did not establish adequate procedures for restoration of CVCS letdown flow. Deficient operating instructions prevented timely restoration of letdown flow following letdown isolation, which ultimately led to exceeding the TS high limit for pressurizer level.

Description:

On February 18, 2010, Unit 1 was operating at 93% reactor power in preparation for main steam safety valve testing, with the 11 and 13 charging pumps operating and increased letdown flow balanced with charging flow. At 8:24 a.m., a phase to ground overcurrent fault on 12B RCP switchgear resulted in an automatic reactor trip on Unit 1. Protective relaying isolated plant service transformer P-13000-2, which de-energized Unit 1 4 kV Bus 14. Instrument Bus 1Y10, which is normally fed from 4 kV Bus 14, de-energized, isolating CVCS letdown by closing letdown isolation valve 1-CVC-515. The 1B EDG automatically started on bus undervoltage and re-powered 4 kV Bus 14 about 8 seconds later.

Charging pump 13 stopped on loss of power when Bus 14 de-energized, and charging pump 11, powered from 4 kV Bus 11, continued running. At 8:31 a.m., operators re-started charging pump 13. Charging pumps remained running and pressurizer level increased as expected. Operators performed makeup to the CVCS Volume Control Tank (VCT) from 8:50 a.m. to 9:11 a.m. in order to maintain VCT inventory while the two running charging pumps transferred VCT contents into the pressurizer. At 8:58 a.m., 34 minutes after the reactor trip, and with pressurizer level approaching the high end of the EOP pressurizer level control band (180"), operators turned off charging pump 13.

Charging pump 11 continued to run in anticipation of restoring letdown. At 9:02 a.m., operators stopped charging pump 11 because pressurizer level was above the EOP high level limit.

At 9:12 a.m., operators made their first attempt to restore letdown in accordance with OI-2A, "Chemical and Volume Control System," Section 6.7, "Starting Charging and Letdown," by re-starting charging pump 11 and shortly thereafter opening the letdown isolation valves. They were not successful in restoring letdown. Subsequent post-event analysis of system parameter data stored on the plant computer indicated that excess flow check valve 1-CVC-343 was closed. Inadequate procedural guidance prevented operators from re-opening the check valve to establish letdown flow. The procedure for starting letdown consisted of setting the letdown downstream control valves at 20% open in manual, starting a charging pump to cool the letdown stream, then opening letdown upstream isolations 1-CVC-515 and 1-CVC-516 to establish letdown flow. OI-2A did not contain any information related to the possibility that excess flow check valve 1-CVC-343 might be closed and did not provide direction for opening the valve.

Operators were confused by indicated letdown flow remaining downscale and took about 7 minutes re-confirming the system lineup and monitoring their instrumentation before stopping charging pump 11. They did not use OI-2A, Section 6.6, "Securing Charging and Letdown," to stop charging and letdown because letdown was not yet established.

Initial conditions for using Section 6.6 were not met. Operators did not recognize a need for simultaneously stopping charging and letdown in accordance with the general methodology of Section 6.6. An additional 17 minutes elapsed from the time operators stopped charging pump 11 until they closed the upstream letdown isolation valves.


Post-event data analysis showed the downstream letdown piping temperature steadily increased into the 400°F to 500°F range during the 17 minutes between stopping the charging pump and closing the upstream letdown isolation valves because of hot reactor coolant flowing in the letdown line through the 10 gallons per minute (gpm) orifice which bypasses around the excess flow check valve. Typically, reactor coolant is cooled by charging flow through the letdown regenerative heat exchanger to about 220°F in the letdown line. It is postulated that during letdown restoration attempts, the RCS, which was at greater than 2000 psi pressure, re-pressurized the letdown line, which rapidly collapsed steam voids in the hot (400°F-500°F) letdown piping and re-closed the excess flow check valve because of water hammer. A differential pressure was then established across the check valve, maintaining it closed. The restoration method provided by procedure OI-2A did not contain actions necessary for pressure equalization across this spring-loaded check valve.

During the second letdown restoration attempt at 10:44 a.m., letdown continued to flow through the bypass orifice for 21 minutes after stopping charging pump 11. This again heated the letdown line to near reactor coolant temperature. On the third attempt at 11:39 a.m., operators closed the letdown isolation valves just 2 minutes after stopping the charging pump, which left the letdown line in a relatively cool state, such that the transient conditions on the fourth and final attempt did not re-close the excess flow check valve. Operators made a total of four attempts to restore letdown over 5 hours before letdown was finally restored at 1:17 p.m.

Pressurizer level remained above the specified limit in EOP-1 for all but a few minutes of the approximately 5 hours following the reactor trip. Throughout this period, operators attempted to control pressurizer level from the EOP high level limit of 180" to the normal full power level of 215". This range was based on the constraints of controlling pressurizer level below the TS high limit of 225" and high enough to prevent overfilling the VCT. With letdown unavailable, operators were only able to lower pressurizer level through the 6 gpm reactor coolant pump seal bleed off that returns to the VCT.

The team observed that unnecessarily conservative procedural requirements for ensuring adequate shutdown margin in NEOP-301, "Operator Surveillance Procedure," contributed to the operating crew's sense of urgency for letdown restoration. Operators recognized that the 2400 gallon RCS boration required to satisfy the requirements of NEOP-301 would cause pressurizer level to significantly exceed the TS high level limit if performed with letdown isolated.

Other options existed for controlling VCT level such that bleed off could be allowed to reduce pressurizer level to within the EOP band. These included intentionally draining the VCT to the liquid waste system and aligning bleed off flow to return to the reactor coolant drain tank instead of to the VCT. However, the station does not have an abnormal operating procedure for responding to a sustained loss of letdown, and therefore no procedural guidance existed for using other methods to control VCT level.

Around noon, shortly after the third attempt to restore letdown, operators became involved in shifting the main turbine gland sealing steam supply from main steam to auxiliary steam and failed to control RCS temperature. Loop temperature rose approximately 5°F, causing pressurizer level, already high at 215", to rise and peak at 231".

Pressurizer level remained above the TS 3.4.9 high limit of 225" for approximately 7 minutes until operator actions taken to lower RCS temperature succeeded in reducing level to below the TS limit. The excess flow check valve did not re-close on the fourth restoration attempt, and letdown was successfully re-established at 1:17 p.m., approximately 5 hours after event initiation.

Constellation has established procedure guidance relating to letdown restoration following closure of the excess flow check valve. The issue was entered into their CAP for further evaluation as CR 2010-001378.

Analysis:

The performance deficiency is that Constellation did not establish adequate procedures for restoring letdown. Multiple factors contributed to pressurizer level exceeding the TS high limit. These included time pressure from overly conservative procedure requirements related to maintaining shutdown margin, filling the pressurizer above the EOP band when RCS temperature was below its nominal no-load value, makeup to the VCT to the high end of its control band when pressurizer level was already high, the absence of proceduralized options for controlling VCT level, and inattentiveness to reactor coolant temperature control. However, inadequate procedure guidance for letdown restoration is the primary reason that led to operation outside of EOP pressurizer level limits for an extended period of time and unnecessarily challenged operators in their attempts to maintain pressurizer level control.

The team determined this finding is more than minor because it is associated with the procedure quality attribute of the Mitigating Systems Cornerstone and affected the cornerstone objective to ensure the availability, reliability, and capability of systems that respond to initiating events to prevent undesirable consequences (i.e., core damage).

The finding is of very low safety significance (Green) because it is not a design or qualification deficiency, did not represent a loss of a safety function of a system or a single train greater than its TS allowed outage time, and did not screen as potentially risk significant due to external events. This finding has a crosscutting aspect in the area of Human Performance, Resources, because Constellation did not ensure that procedures for restoring CVCS letdown were complete and accurate (H.2(c)).

Enforcement:

TS 5.4.1.a requires, in part, that written procedures be established, implemented, and maintained for activities described in Appendix A of Regulatory Guide (RG) 1.33, "Quality Assurance Program Requirements (Operation)." Specifically, Section 3 of RG 1.33, Appendix A, "Instructions for energizing, filling, venting, draining, startup, shutdown, and changing modes of operation should be prepared, as appropriate, for the following systems," includes the Letdown/Purification System.

Contrary to the above, on February 18, 2010, the operators were unable to restore charging and letdown using the existing instructions of OI-2A, "Chemical and Volume Control System," due to inadequacy of the procedure. Because this issue is of very low safety significance (Green) and Constellation entered this issue into their corrective action program as CR 2010-001378, this finding is being treated as an NCV consistent with Section VI.A.1 of the NRC Enforcement Policy. (NCV 05000317/318/2010006-05, Failed to Establish Adequate Procedures for Letdown Restoration)

3.2 Communications and Emergency Plan Applicability

a. Inspection Scope

This event involved an automatic reactor trip of both units with multiple complicating degraded equipment issues. Each unit lost one 500 kV offsite power supply (the Red Bus). In addition, Unit 2 lost forced RCS circulation when all four RCPs tripped, the 2B EDG failed to reenergize the Unit 2 24 4 kV safety bus, and the Unit 2 normal heat removal sink (main condenser) was unavailable for an extended time. Operators notified the NRC of the event at 11:47 a.m. on February 18 in accordance with 10 CFR 50.72.

Operators determined that emergency action level (EAL) entry criteria were not met and accordingly did not declare an emergency event. The team reviewed operator logs, emergency procedures, the Emergency Plan, plant operating data, and interviewed station personnel to verify operators properly assessed the EAL entry criteria and notified the NRC of the event.

b. Findings

No findings of significance were identified.

4. Organizational Response

4.1 Immediate Response and Restart Readiness Assessment

a. Inspection Scope

The team interviewed personnel, reviewed various procedures and records, observed plant operators and station meetings, and performed plant walkdowns to assess station personnel's immediate response to the event and the restart readiness assessment. The licensee's restart readiness assessment was performed in accordance with CNG-OP-1.01-1006, Post-Trip Reviews, Rev. 1.

No findings of significance were identified.

Operators promptly announced the event, implemented the appropriate emergency operating procedures, and correctly assessed EALs. However, human performance deficiencies and/or procedure deficiencies led to Unit 1 exceeding the TS pressurizer level limit (Section 3.1) and untimely verification of offsite power source availability.

Constellation augmented the on-shift staff promptly to support initial diagnosis and corrective actions to address the numerous degraded equipment problems.

The post-trip review was sufficient to ensure operator performance issues and significant equipment issues were identified and addressed. Notwithstanding, the team identified several deficiencies which posed challenges to the effectiveness of the licensee's restart readiness assessment (CR 2010-004502). The team discussed each issue with licensee management, who entered the issues into the corrective action program, as applicable.

One notable issue was that station personnel did not quarantine several failed components (breaker 152-2501, 2B EDG oil sensing line contents, relay 251G/B-22-2). This adversely limited the as-found information available to diagnose the failure mechanisms.

4.2 Post-Event Root Cause Analysis and Actions


a. Inspection Scope

The team reviewed the RCAR for the 2010 Dual Unit Trip to determine whether the causes of the event and associated human performance and equipment challenges were properly identified. Additionally, the team assessed whether interim and planned long term corrective actions were appropriate to address the cause(s).

No findings of significance were identified.

The RCAR properly evaluated the causes and identified appropriate corrective actions for several equipment challenges. For example, the evaluation and corrective actions for the Unit 1 roof leakage which initiated the ground fault event were comprehensive. In addition to the root cause, the RCAR identified several contributing causes, including deficient maintenance rule implementation and performance monitoring, over-reliance on vendors and inadequate vendor oversight, incomplete incorporation of Quality Assurance findings, and insufficient engineering involvement in roof construction. Interim corrective actions were appropriate, and long term actions were being developed through the corrective action program.

In several other areas the team determined the RCAR lacked depth and technical rigor in identifying and assessing potential causes. In each case the RCAR developed an explanation for what may have caused the event or equipment response, but did not fully develop other potential causes. Examples included:

  • RCAR did not identify the failure to implement an Agastat relay monitoring program when the 10-year replacement PM was eliminated (2B EDG failure);
  • The RCAR conclusion that loose diaphragm retaining ring screws on the Agastat relay were caused by vibration and were the result of a manufacturing defect was not well supported by the contracted failure analysis or data evaluation (2B EDG failure);
  • Information that the relay induction disc did not freely rotate back to the original position during bench troubleshooting was not incorporated into the RCAR (relay 2RY251G/B-22-2 failure);
  • RCAR did not thoroughly review previous internal OE regarding induction disc failure on CO-8 type relays. Station personnel did not recognize the sensitivity of the induction disc to sticking/binding (relay 251G/B-22-2 failure);
  • RCAR did not include or address the 2008 as-found inspection results which found the armature linkage misaligned and the trip coil loose. This was an unexpected and infrequent occurrence (breaker 152-2501 failure); and
  • RCAR concluded the 152-2501 breaker failure was due to mechanical binding in the trip linkage caused by human error during the October 2008 trip armature bolt replacement. However, corrective actions did not investigate other breaker maintenance performed by these technicians during that time period.

The team reviewed these issues and determined that they either did not involve violations of regulatory requirements or were already described as part of the previously discussed violations in this report.

4.3 Review of Operating Experience

a. Inspection Scope

The team reviewed Constellation's use of pertinent industry and station operating experience (OE), including evaluation of potential precursors to this event.

b. Findings

No findings of significance were identified.

The team identified several instances where Constellation had not effectively evaluated or initiated actions to address related station or industry operating experience issues.

Examples included:

  • Unit 1 and Unit 2 45-foot switchgear room roof leakage onto electrical switchgear had been identified numerous times since 2002, but not corrected. Fifty-eight open work orders for roof leaks, several more than 24 months old, had not been implemented (Section 2.1).
  • Industry OE has reported numerous problems with Agastat series 7000 relays, several affecting reliability of the actuation setpoint. Yet engineers extended both the service life and calibration periodicity of the EDG lube oil pressure trip time delay relays beyond the vendor-specified periods without an adequate technical basis (Section 2.2).
  • Technicians routinely did not consider relay actuation outside of the acceptance band to be a test failure. Often no condition report was initiated and no drift/performance trending was performed. Corrective action was often limited to adjusting the as-left setpoint to within the acceptance band (e.g., Agastat 7000 series time delay relays, CO-8 overcurrent protection relays) (CR 2010-004090).

The team reviewed these issues and determined that they either did not involve violations of regulatory requirements or were already described as part of the previously discussed violations in this report.

5. Risk Significance of the Event

a. Initial Assessment

The initial risk assessment for this event is documented in the enclosed SIT charter.

b. Final Assessment

Onsite follow-up and discussions with the Constellation PRA staff verified that there were no additional plant conditions or operator performance issues that would significantly alter the initial event risk assessments performed for both units. The conditional core damage probability (CCDP) for the Unit 1 reactor trip was estimated to be 2.6E-6 for the February 18, 2010, event. The Unit 2 reactor trip CCDP, accounting for the loss of forced reactor coolant circulation (all RCPs tripped), the loss of the normal heat sink (main condenser), and the failure of the 2B EDG to run, was estimated to be 1.5E-5 for the February 18, 2010, event.

40A3 Follow-up of Events

.1 (Closed) Licensee Event Report (LER) 05000317/2010-001, Reactor Trip Due to Water Intrusion into Switchgear Protective Circuitry

On February 18, at 8:24 a.m., the Unit 1 reactor automatically tripped from 93 percent reactor power in response to an RCS low flow condition. Water had leaked through the auxiliary building roof into the 45' switchgear room, causing an electrical ground which tripped the 12B RCP, thereby initiating the reactor protection system trip on RCS low flow. Three of the four Unit 1 RCPs continued operating. The electrical ground and the failure of a ground fault protection relay caused service transformer P-13000-2 to isolate, thereby deenergizing the 14 4 kV safety bus and the 1Y10 120 volt instrument bus. The 1B EDG automatically started and reenergized the 14 bus as designed. The LER accurately described operator response to the event. The team reviewed the LER and identified no findings of significance beyond those previously documented in this report (NRC Inspection Report No. 05000317/2010006). The LER stated that a supplemental LER will document a complete description of corrective actions after the event analysis and cause determination is complete. This LER is closed.

.2 (Closed) Licensee Event Report (LER) 05000318/2010-001, Reactor Trip Due to Partial Loss of Offsite Power

On February 18, at 8:24 a.m., the Unit 2 reactor automatically tripped from 99.5 percent reactor power due to a loss of power to all four RCPs and the associated reactor protection system RCS low flow trip. The event emanated from a ground fault on Unit 1 (see Section 2.1). A ground overcurrent relay failed to actuate as designed, permitting the Unit 1 ground overcurrent condition to reach Unit 2. Unit 2 electrical protection responded by deenergizing the 500 kV "Red Bus" offsite power supply and multiple onsite electrical buses, including the 24 4 kV safety bus. The 2B EDG started as designed, but tripped on low lube oil pressure (see Section 2.2). The LER accurately described operator response to the event. The team reviewed the LER and identified no findings of significance beyond those previously documented in this report (NRC Inspection Report No. 05000317/2010006). The LER stated that a supplemental LER will document a complete description of corrective actions after the event analysis and cause determination is complete. This LER is closed.

40A6 Meetings, Including Exit

Exit Meeting Summary

On April 30, 2010, the team presented their overall findings to members of Constellation management led by Mr. G. Gellrich, Site Vice President, and other members of his staff, who acknowledged the findings. The team confirmed that proprietary information reviewed during the inspection period was returned to Constellation.


SUPPLEMENTAL INFORMATION

KEY POINTS OF CONTACT

Licensee Personnel

G. Gellrich Site Vice President

K. Allor Senior Operations Instructor

P. Amos Performance Improvement

P. Darby Principal Assessor, Engineering Quality Performance Assessment
S. Dean Manager, Maintenance
M. Draxton Manager, Nuclear Training

D. Fitz Communications

M. Flynn HR Director

D. Frye Manager, Operations

M. Gahan GS, Design Engineering

G. Gellrock Supervisor

S. Henry Manager, Work Management

J. Koebel PRA

D. Lauver Director, Licensing
W. Mahaffee Supervisor, Chemistry Operation
J. McCullum Supervisor, Instrumentation and Controls

K. Mills Assistant Operations Manager

P. O'Malley Quality Performance Assessment

T. Riti GS, System Engineering
K. Roberson Manager, NSS

A. Simpson Engineering/Licensing

R. Stark Design Engineering

T. Trepanier Plant General Manager

Others

S. Gray Power Plant Research Program Manager, Department of Natural

Resources, State of Maryland

M. Griffin Nuclear Emergency Preparedness Coordinator, Department of the

Environment, State of Maryland

LIST OF ITEMS OPENED, CLOSED, AND DISCUSSED

Opened

05000317/318/2010006-01 NCV Failure to Thoroughly Evaluate and Promptly Correct Degraded Conditions Associated with Auxiliary Building Roof Leakage (Section 2.1)
05000317/318/2010006-02 AV Inadequate Preventive Maintenance Results in the Failure of the 2B Emergency Diesel Generator (Section 2.2)
05000317/318/2010006-03 NCV Failure to Evaluate Degraded Conditions Associated with CO-8 Relays and Implement Timely and Effective Action to Correct the Condition Adverse to Quality (Section 2.3)
05000317/318/2010006-04 FIN Failure to Translate Design Calculation Setpoint of Phase Overcurrent Relay on Feeder Breakers (Section 2.3)
05000317/318/2010006-05 NCV Failed to Establish Adequate Procedures for Letdown Restoration (Section 3.1)

Opened and Closed

05000317/2010-001 LER Reactor Trip Due to Water Intrusion into Switchgear Protective Circuitry (Section 40A3.1)
05000318/2010-001 LER Reactor Trip Due to Partial Loss of Offsite Power (Section 40A3.2)

LIST OF DOCUMENTS REVIEWED