IR 05000528/2004012


=Text=
{{#Wiki_filter:UNITED STATES
NUCLEAR REGULATORY COMMISSION
 
==REGION IV==
611 RYAN PLAZA DRIVE, SUITE 400
ARLINGTON, TEXAS 76011-4005

Gregg R. Overbeck, Senior Vice President, Nuclear
Arizona Public Service Company
P.O. Box 52034
Phoenix, AZ 85072-2034

SUBJECT: PALO VERDE NUCLEAR GENERATING STATION, UNITS 1, 2, AND 3 - NRC AUGMENTED INSPECTION TEAM (AIT) REPORT 05000528/2004-012; 05000529/2004-012; 05000530/2004-012 AND PRELIMINARY FINDINGS
 
==Dear Mr. Overbeck:==
On June 18, 2004, the Nuclear Regulatory Commission (NRC) completed an Augmented Inspection at your Palo Verde Nuclear Generating Station, Units 1, 2 and 3. The enclosed report documents the inspection findings, which were preliminarily discussed on June 18, 2004, with you and other members of your staff. A public exit was conducted with you and other members of your staff on July 12, 2004.
 
The Augmented Inspection concluded that each unit generally operated as designed for a loss of offsite power event by properly shutting down and stabilizing in Mode 3. Nevertheless, a number of system failures, design control issues, maintenance issues, and human performance errors were noted during the emergency that unnecessarily complicated the event. For example, a failure of the Unit 2 Train "A" Emergency Diesel Generator limited the available safety equipment for operators, and a failure of the Technical Support Center Emergency Diesel Generator required your emergency response organization to use alternate facilities. These issues and others are discussed in more detail in the enclosed report. In addition, we will conduct a followup inspection to assess your determination of root and contributing causes and your corrective actions, and to address any compliance issues identified. Please note that some aspects of this report are exempt from public disclosure and, as such, in accordance with 10 CFR 2.390 are being withheld from public distribution.
 
In accordance with 10 CFR 2.390 of the NRC's "Rules of Practice," a copy of this letter and its enclosure will be made available electronically for public inspection in the NRC Public Document Room or from the NRC's document system (ADAMS), accessible from the NRC Web site at http://www.nrc.gov/reading-rm/adams.html.
 
Sincerely,

Bruce S. Mallett
Regional Administrator, Region IV

Information in this record was deleted in accordance with the Freedom of Information Act, exemptions. FOIA-

Dockets: 50-528; 50-529; 50-530
 
Licenses: NPF-41; NPF-51; NPF-74
 
===Enclosure:===
 
==REGION IV==
Dockets: 50-528; 50-529; 50-530
Licenses: NPF-41; NPF-51; NPF-74
Report No.: 05000528/2004-011; 05000529/2004-011; 05000530/2004-011
Licensee: Arizona Public Service Company
Facility: Palo Verde Nuclear Generating Station, Units 1, 2, and 3
Location: 5951 S. Wintersburg Road, Tonopah, Arizona
Dates: June 14 through July 12, 2004
Team Leader: Anthony T. Gody, Chief, Operations Branch
Inspectors: P. Alter, Senior Resident Inspector, Projects Branch B, Division of Reactor Projects
T. Koshy, Electrical & Instrumentation and Controls Branch, Office of Nuclear Reactor Regulation
Amar Pal, Electrical & Instrumentation and Controls Branch, Office of Nuclear Reactor Regulation
T. McConnell, Resident Inspector, Projects Branch D, Division of Reactor Projects
C. Paulk, Senior Reactor Inspector, Engineering Branch, Division of Reactor Safety
Joseph I. Tapia, Senior Reactor Inspector, Engineering Branch, Division of Reactor Safety
David P. Loveless, Senior Reactor Analyst, Division of Reactor Safety
Accompanied By: G. Skinner, Electrical Engineer, Beckman and Associates
Approved By: Anthony T. Gody, Chief, Operations Branch, Division of Reactor Safety
 
SUMMARY OF FINDINGS

IR 05000528/2004-012; 05000529/2004-012; 05000530/2004-012; June 18, 2004; Palo Verde Nuclear Generating Station, Units 1, 2, and 3; Augmented Inspection

The report covered a period of inspection by six inspectors and a contractor. The significance of most findings is indicated by their color (Green, White, Yellow, Red) using Inspection Manual Chapter 0609, "Significance Determination Process." Findings for which the Significance Determination Process does not apply may be Green or be assigned a severity level after NRC management review. The NRC's program for overseeing the safe operation of commercial nuclear power reactors is described in NUREG-1649, "Reactor Oversight Process," Revision 3, dated July 2000.
 
NRC-Identified and Self-Revealing Findings

On June 14, 2004, at 7:41 a.m. MDT, a ground-fault occurred on Phase "C" of a 230 kV transmission line in northwest Phoenix, Arizona, between the "West Wing" and "Liberty" substations, located approximately 47 miles from the Palo Verde Nuclear Generating Station. A failure in the protective relaying resulted in the ground-fault not being isolated from the local grid for approximately 38 seconds. This uninterrupted fault cascaded into the protective tripping of a number of 230 kV and 525 kV transmission lines, a nearly concurrent trip of all three Palo Verde Nuclear Generating Station units, and the loss of six additional nearby generation units within approximately 30 seconds of fault initiation. This represented a total loss of nearly 5,500 MWe of local electric generation. Because of the loss-of-offsite power, the licensee declared a Notice of Unusual Event for all three units at approximately 7:50 a.m. MDT. The Unit 2 Train "A" Emergency Diesel Generator started, but failed early in the load sequence process when a diode in the exciter rectifier circuit, with less than seventy hours of run time, short-circuited. This resulted in the Train "A" Engineering Safeguards Features busses de-energizing, which limited the availability of certain safety equipment for operators. Because of this failure, the licensee elevated the emergency declaration for Unit 2 to an Alert at 7:54 a.m. MDT.
 
An NRC Augmented Inspection Team was dispatched to the site later that same day and found that the licensee's response to the event, while generally acceptable, was complicated by a number of equipment failures, procedure issues, and human performance issues with diverse apparent causes and with varying degrees of significance.
 
TABLE OF CONTENTS

1.0 Introduction ...........................................................
1.1 Event Description ..................
1.2 System Descriptions ....................................................
1.3 Preliminary Risk Significance of Event ......................................
2.0 System Performance and Design Issues .....................................
2.1 Off-site Power System Issues .............................................
2.2 Unit 1, Atmospheric Dump Valve 185 Failure .................................
2.3 Unit 1, Letdown Heat Exchanger Isolation Failure ..............................
2.4 Unit 2, Train "A" Emergency Diesel Generator Failure ...........................
2.5 Unit 3, System Interactions During Event ....................................
2.6 Unit 3, Reactor Coolant Pump 2B Lift Oil Pump Trip ............................
2.7 Unit 3, Low Pressure Safety Injection System In-Leakage .......................
2.8 Unit 1 and 3, General Electric Magna Blast Breaker Failures .....................
3.0 Human Performance and Procedural Aspects of the Event .......................
3.1 Turbine-Driven Auxiliary Feedwater Drains ...................................
3.2 Unit 2, Train "E" Positive Displacement Charging Pump Trip ......................
3.3 Entry Into Technical Specification Action Statements ...........................
3.4 Technical Support Center Emergency Diesel Generator Trip .....................
3.5 Initial Notification of Event to State and Local Officials ..........................
3.6 Emergency Response Organization Challenges ...............................
4.0 Coordination with Off-Site Electrical Organizations .............................
5.0 Risk Significance of the Event .............................................
6.0 Assessment of Event Response ...........................................
7.0 Exit Meeting Summary ..................................................
ATTACHMENT 1 - Supplemental Information

ATTACHMENT 2 - Augmented Inspection Team Charter

ATTACHMENT 3 - Sequence of Events

ATTACHMENT 4 - System Figures
Figure 1 - Palo Verde Nuclear Generating Station Transmission System

ATTACHMENT 5 - Proprietary Information
 
Report Details

1.0 Introduction

1.1 Event Description

On June 14, 2004, at 7:41 a.m. MDT, a ground-fault occurred on Phase "C" of a 230 kV transmission line in northwest Phoenix, Arizona, between the "West Wing" and "Liberty" substations, located approximately 47 miles from the Palo Verde Nuclear Generating Station (PVNGS). A failure in the protective relaying resulted in the ground-fault not being isolated from the local grid for approximately 38 seconds. This uninterrupted fault cascaded into the protective tripping of a number of 230 kV and 525 kV transmission lines, a nearly concurrent trip of all three PVNGS units, and the loss of six additional nearby generation units within approximately 30 seconds of fault initiation. This represented a total loss of nearly 5,500 MWe of local electric generation. Because of the loss-of-offsite power (LOOP), the licensee declared a Notice of Unusual Event (NOUE) for all three units at approximately 7:50 a.m. MDT.
 
The Unit 2 Train "A" Emergency Diesel Generator (EDG) started, but failed early in the load sequence process when a diode in the exciter rectifier circuit, with less than seventy hours of run time, short-circuited. This resulted in the Train "A" Engineering Safeguards Features (ESF or Safety) busses de-energizing, which limited the availability of certain safety equipment for operators. Because of this failure, the licensee elevated the emergency declaration for Unit 2 to an Alert at 7:54 a.m. MDT.
 
An NRC Augmented Inspection Team (AIT) was dispatched to the site later that same day and found that the licensee's response to the event, while generally acceptable, was complicated by a number of equipment failures, procedure issues, and human performance issues with diverse apparent causes and with varying degrees of significance. For example:
* The Technical Support Center (TSC) emergency diesel generator failed because a test switch was not returned to its proper position following maintenance six days prior to the event. As a result, the emergency response organization assembled in the alternate TSC. This resulted in some confusion and posed some unique challenges to the emergency response organization.
 
* The licensee's ability to conduct automatic dial-out for emergency responders and to develop protective action recommendations, had they been needed, appeared to have been affected by the loss of power.
 
* Other facility issues were identified which could have impeded emergency responders but did not during this event.
 
* An Atmospheric Dump Valve (ADV) on Unit 1 drifted closed due to an apparent equipment malfunction which posed a minor operational nuisance to the control room operators during the event.
 
* Operators did not anticipate that the Unit 1 letdown system would not automatically isolate because a temporary modification was not fully understood or translated into operating procedures. This resulted in high temperatures in that system. The high temperatures resulted in fumes being generated as paint heated up which precipitated a fire brigade response. This complicated the Unit 1 event.
 
* The Unit 2 Positive Displacement Charging Pump "E" was temporarily lost due to human performance errors.
 
* An unanticipated control interaction in the Unit 3 steam bypass control valve system resulted in a momentary opening of all Unit 3 steam bypass valves and an unanticipated main steam isolation signal. The main steam isolation signal only slightly complicated the Unit 3 operators' response to the loss-of-offsite power event.
 
* A check-valve leakage problem in the Unit 3 safety injection system resulted in operators having to manually depressurize the low-pressure safety injection system three times during the event. This posed an unnecessary additional distraction during the event.
 
* Two Magna-Blast circuit breakers failed to operate during recovery operations in Unit 1 and Unit 3 which delayed electrical system recovery efforts.
 
* Procedure issues and equipment limitations affected the ability to maintain the turbine-driven auxiliary feedwater system operable following a main steam isolation.
 
Despite the number of challenges to the plant operating staff and management, all three units were safely shut down and placed in a stable condition immediately following the loss-of-offsite power event, and power restoration efforts began promptly. With the exception of the local 525 kV transmission grid surrounding the Palo Verde switchyard, the Arizona, California, and Nevada electrical grid remained relatively stable, registering the fault only through some minor frequency and voltage fluctuations. This was notable considering the amount of generation lost. The total local generation lost during the event included the three Palo Verde units, three co-generation units at the Red Hawk generating station, and three co-generation units at the Arlington generating station, for a total of approximately 5,500 megawatts of electrical generation.
 
In the following sections, each pertinent aspect of the event is discussed in detail.
 
Section 2.0 contains the team's findings in the area of system performance and design.

Section 3.0 contains the team's findings in the area of human performance and procedures. Section 4.0 contains the team's findings associated with the facility's interaction with off-site entities. Section 5.0 includes a summary of the NRC analysis associated with the overall risk significance of the event. Finally, Section 6.0 contains the team's overall assessment of the licensee's response to the event.
 
1.2 System Descriptions

1.2.1 Off-site Power Transmission and Distribution Systems

a. General

The Palo Verde Nuclear Generating Station is connected by its associated transmission system to the Arizona-New Mexico-California-Southern Nevada extra high voltage (EHV) grid, which is interconnected to other EHV systems within the Western System Coordinating Council (WSCC).
 
b. Palo Verde Nuclear Generating Station Switchyard

The PVNGS switchyard consists of two 500 kV buses which are connected to the three PVNGS 525/22.8 kV main step-up transformers, and seven transmission lines, using a breaker and a half scheme. A breaker and a half scheme uses two breakers to connect the source of power to the switchyard or transmission line. Both breakers are required to open to isolate a fault in the system (see the illustrative sketch following the list below). This scheme is used to increase the reliability of power and allows flexibility for maintenance. The seven 525 kV transmission lines comprising the Palo Verde transmission system are situated in four corridors from the PVNGS switchyard as follows:
One line to the Devers substation (240 mi.)
 
Three lines to the Hassayampa substation (3 mi.)
 
One line to the Rudd substation (25 mi.)
 
Two lines to the Westwing 500 kV substation (44 mi.)
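The fault-isolation property of a breaker-and-a-half bay described above can be illustrated with a short logic sketch (Python; this is an illustrative aid only, and the breaker names and states are hypothetical rather than taken from the PVNGS switchyard drawings):

<syntaxhighlight lang="python">
# Illustrative sketch of fault isolation in a breaker-and-a-half bay.
# Each element (line or transformer) is bounded by two breakers, and BOTH
# must open to isolate a fault on that element. Names are hypothetical.

def element_isolated(breaker_states, element_breakers):
    """Return True only if every breaker bounding the element is open."""
    return all(breaker_states[name] == "open" for name in element_breakers)

# A faulted line bounded by a bus breaker and a middle (shared) breaker.
line_breakers = ("bus_breaker", "middle_breaker")

# Both breakers trip: the fault is isolated.
print(element_isolated({"bus_breaker": "open", "middle_breaker": "open"}, line_breakers))

# One breaker fails to trip: the fault persists, which is what happened
# when PCB 1022 at the Westwing substation did not open (Section 2.1).
print(element_isolated({"bus_breaker": "closed", "middle_breaker": "open"}, line_breakers))
</syntaxhighlight>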
 
c. West Wing Substation

The Westwing substation is comprised of a two-bus 230 kV section and a two-bus 500 kV section. The 500 kV section is connected to the adjacent 230 kV Westwing section through three 525/345/230 kV load tap-changing transformers. The Westwing 230 kV buses are connected to the transmission system using a breaker and a half scheme as follows:

One line to the Surprise substation

One line to the Pinnacle Peak substation

One line to the Liberty substation

One line to the Agua Fria substation

One line to the Deer Valley substation

One line to the New Waddell substation

Two 230/69 kV transformers feeding the Arizona Public Service (APS) distribution system

d. Hassayampa Switchyard

The Hassayampa substation is located three miles from the PVNGS switchyard. It consists of two 500 kV buses connected to the PVNGS switchyard and several other generating stations and substations through a breaker and a half scheme, as follows:
Three lines to the PVNGS switchyard (3 mi.)
 
Two lines to the Red Hawk switchyard (1 mi.)
 
One line to the Jojoba substation (20 mi.)
 
One line to the North Gila substation (110 mi.)
 
One line to the Mesquite switchyard (0.5 mi.)
 
One line to the Arlington Valley switchyard (1 mi.)
 
One line to the Harquahala switchyard (30 mi.)
 
The three lines to the PVNGS switchyard were equipped with negative sequence relays intended to serve as pole-mismatch, or open conductor, protection for the Hassayampa to Palo Verde transmission lines. Personnel employed by APS indicated that this relaying was set to trip on 20% negative sequence current after a finite time delay of 5 seconds.
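As a rough illustration of the quantity this relaying monitors, the sketch below computes the negative-sequence component of a set of phase currents using symmetrical components and applies the 20 percent pickup and 5-second delay quoted above (the example phase currents are hypothetical):

<syntaxhighlight lang="python">
# Minimal negative-sequence (I2) pickup check using symmetrical components.
# Only the 20% pickup and 5-second delay come from the APS description;
# the example phase currents below are hypothetical.
import cmath

a = cmath.exp(2j * cmath.pi / 3)  # 120-degree rotation operator

def positive_sequence(ia, ib, ic):
    return (ia + a * ib + a**2 * ic) / 3

def negative_sequence(ia, ib, ic):
    return (ia + a**2 * ib + a * ic) / 3

# Hypothetical unbalanced currents, e.g., one pole partially open on phase C.
ia = cmath.rect(1000, 0.0)
ib = cmath.rect(1000, -2 * cmath.pi / 3)
ic = cmath.rect(200, 2 * cmath.pi / 3)

ratio = abs(negative_sequence(ia, ib, ic)) / abs(positive_sequence(ia, ib, ic))
print(f"I2/I1 = {ratio:.1%}")

if ratio > 0.20:
    print("pickup exceeded; relay trips after the 5-second time delay")
</syntaxhighlight>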
 
1.2.2 On-site Power Distribution System

a. General

Power is supplied to the PVNGS auxiliary buses from the offsite power supply through three startup transformers. In addition, during normal plant operation, power for the onsite non-Class 1E alternating current (ac) system is supplied through the unit auxiliary transformer connected to the main generator isolated phase bus. The non-Class 1E ac buses normally are supplied through the unit auxiliary transformer, and the Class 1E buses normally are supplied through the startup transformers. Each unit's non-Class 1E power system is divided into two parts. Each of the two parts supplies a load group including approximately half of the unit auxiliaries. Three startup transformers connected to the 525 kV switchyard are shared between Units 1, 2, and 3 and are connected to the 13.8 kV buses of the units. Each startup transformer is capable of supplying 100% of the startup or normally operating loads of one unit simultaneously with the ESF loads associated with two load groups of one other unit. The 4160 V Class 1E buses are each normally supplied by an associated 13.8/4.16 kV auxiliary transformer, and receive standby power from one of the six standby diesel generators.
 
The Class 1E 4160 V system supplies power to 480 V and lower distribution voltages through 18 4160/480 V load center transformers.
 
b. Palo Verde Nuclear Generating Station Generator Protective Relaying

The main generator protection schemes include relaying designed to protect the generators against internal as well as external faults. Protection against external faults includes backup distance relaying and negative sequence time over-current relaying.
 
The backup distance relaying provides backup protection for 24 kV and 525 kV system faults close to the switchyard. The distance relay operates through an external timer. If the fault persists and the time delay step is completed, a lockout relay trips the unit auxiliary transformer 13.8 kV breakers, generator excitation, 525 kV generator unit breakers, main turbine, and the main transformer cooling pumps. The lockout relay also initiates transfer of station auxiliary loads.
 
The generator negative sequence time over-current relay provides generator protection against possible damage from unbalanced currents resulting from prolonged faults or unbalanced load conditions. The relay operates through a lockout relay to trip the unit auxiliary transformer 13.8 kV breakers, generator excitation, 525 kV generator unit breakers, main transformer cooling pumps, and the main turbine. The negative sequence relay also incorporates a sensitive alarm circuit that, in conjunction with a separately mounted ammeter, alerts operators to relatively low values of negative sequence current (just above normal system unbalance).
 
c. Emergency Diesel Generators

The Class 1E ac system distributes power at 4.16 kV, 480 V, and 120 V to all Class 1E loads. Also, the Class 1E ac system supplies power to certain selected loads that are not directly safety-related but are important to the plant. The Class 1E ac system contains standby power sources (i.e., emergency diesel generators) that automatically provide the power required for safe shutdown in the event of loss of the Class 1E bus voltage.
 
In the event that preferred power is lost, the Class 1E system functions to shed Class 1E loads and to connect the standby power source to the Class 1E busses. The load sequencer then functions to start the required Class 1E loads in programmed time increments.
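A load sequencer of the kind described can be pictured as a simple table of programmed step times (Python sketch; the step times and load groupings below are illustrative assumptions loosely based on the Unit 2 sequence discussed in Section 2.4, not the actual PVNGS design values):

<syntaxhighlight lang="python">
# Illustrative load-sequencer sketch: after the standby source energizes the
# bus, blocks of Class 1E loads are started at programmed time increments.
# Step times and load groupings are assumptions for illustration only.
sequence_steps = [
    (5.0,  ["battery chargers", "CEDM cooling units", "containment cooling units"]),
    (20.0, ["essential cooling water pump"]),
]

def loads_started(seconds_after_bus_energized):
    """Return every load whose programmed step time has elapsed."""
    started = []
    for step_time, loads in sequence_steps:
        if seconds_after_bus_energized >= step_time:
            started.extend(loads)
    return started

print(loads_started(6))   # first block only
print(loads_started(25))  # first block plus the essential cooling water pump
</syntaxhighlight>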
 
d. Station Blackout Gas Turbine Generator Sets

A non-safety-related Alternate AC (AAC) power source consisting of two redundant gas turbine generators (GTG) is available to provide power to cope with a four-hour station blackout event in any one nuclear unit. One GTG is analyzed to supply all required station blackout loads, which are located on the 'A' train.
 
Each GTG has a minimum continuous output rating of 3400 kW at 13.8 kV under worst-case anticipated site environmental conditions. This rating is sufficient to provide power to the loads identified as being important for coping with a postulated station blackout.
 
e. Technical Support Center Emergency Diesel Generator

The technical support center diesel generator provides standby alternating current to the 480 V electrical distribution panel that supplies all electrical power to the technical support center emergency planning facility. The diesel engine is cooled by a self-contained cooling water system with an air-cooled radiator. The radiator is in turn cooled by an electric motor-driven fan. The fan motor is powered by the technical support center electrical power distribution panel. Normal electrical power for the technical support center comes from the off-site electrical power supply to Unit 1.
 
During a loss of off-site power, when power is lost to the technical support center electrical power distribution panel, the technical support diesel generator automatically starts and re-energizes the technical support center electrical loads, including the diesel engine radiator cooling fan.
 
1.2.3 Chemical and Volume Control System

The chemical and volume control system controls the purity, volume, and boric acid content of the reactor coolant. Water removed from the reactor coolant system is cooled in the regenerative heat exchanger. From there, the coolant flows to the letdown heat exchanger and then through a filter and a demineralizer where corrosion and fission products are removed. It is then sprayed into the volume control tank and returned by the charging pumps to the regenerative heat exchanger, where it is heated prior to returning to the reactor coolant system.
 
When the vital 4160 VAC buses are de-energized, the charging pump breakers must be manually reset and the pumps restarted from the control room. Therefore, no charging flow is assumed for 30 minutes after the time of trip to allow for resetting the breaker and performing manual alignment of one of three gravity-fed boration pathways to the charging pump suction.
 
Following a loss of offsite power, the letdown subsystem is designed to isolate automatically due to the loss of nuclear cooling water to the letdown heat exchanger, or by operator action. When charging is restarted, the resulting mismatch between letdown and charging will cause volume control tank level to decrease. To reduce the chance of losing suction to the charging pumps, the volume control tank level is monitored by two non-safety grade instrument channels. Alarms are provided on low level and if the two channels differ significantly. The use of two channels of different types (one has a wet reference leg and the other is dry) decreases the probability of an operator error misaligning the boration systems should one channel fail.
 
1.2.4 Auxiliary Feedwater System

The Auxiliary Feedwater System (AFW) provides an independent means of supplying water to the Steam Generators during emergency operations when the Feedwater System is inoperable. AFW maintains the water inventory necessary to allow a Reactor Coolant System cooldown at a maximum rate of 75°F/hr down to a temperature of 350°F. It also provides the necessary water inventory for startup, normal shutdown, and hot standby conditions.
 
1.3 Preliminary Risk Significance of Event

The Nuclear Regulatory Commission's Management Directive 8.3, "Incident Investigation Program," documents the NRC's formal process conducted for the purpose of accident prevention. This directive documents a risk-informed approach to determining when the agency will commit additional resources for further investigation of an event. The risk metric used for this decision is the conditional core damage probability.
 
A complete loss of offsite power is a significant event at any nuclear facility. Because the Combustion Engineering plant is designed without primary system power-operated relief valves, making a reactor coolant system feed and bleed evolution impossible, the risk significance is somewhat higher for this design. To evaluate this event, the NRC analyst used the Standardized Plant Analysis Risk Model for Palo Verde (SPAR),
Revision 3, and modified appropriate basic events to include updated loss of offsite power curves published in NUREG/CR-5496, "Evaluation of Loss of Offsite Power Events at Nuclear Power Plants: 1980 - 1996." The analyst evaluated the risk associated with the Unit 2 reactor because it represented the dominant risk of the event.
 
For the preliminary analysis, the analyst established that a loss of offsite power had occurred and that the event may have been recovered at a rate equivalent to the industry average. Both Emergency Diesel Generator "A" and Charging Pump "E" were determined to have failed and were assumed to be unrecoverable. Additionally, the analyst ignored all sequences that included a failure of operators to trip reactor coolant pumps, because all pumps trip automatically on a loss of offsite power. The conditional core damage probability was estimated to be 6.5 x 10⁻⁴, indicating that the event was of substantial risk significance and warranted an augmented inspection team.
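The conditional core damage probability quoted above came from the SPAR model; the sketch below only illustrates the general shape of that event-tree arithmetic, with the observed failures set to 1.0 and all other probabilities purely hypothetical (it does not reproduce the 6.5 x 10⁻⁴ result):

<syntaxhighlight lang="python">
# Highly simplified illustration of a conditional core damage probability
# (CCDP) calculation for a loss-of-offsite-power event. The sequence
# structure and every numeric value below are hypothetical; the actual
# analysis used the Palo Verde SPAR model, Revision 3.
failed_edg_a = 1.0            # EDG "A" observed failed during the event
failed_charging_e = 1.0       # Charging Pump "E" observed failed
p_edg_b_fails = 2e-2          # hypothetical Train B EDG failure probability
p_no_power_recovery = 5e-2    # hypothetical offsite-power non-recovery probability
p_afw_fails = 1e-3            # hypothetical auxiliary feedwater failure probability

# One station-blackout-style sequence and one loss-of-heat-removal sequence.
ccdp_sbo = failed_edg_a * p_edg_b_fails * p_no_power_recovery
ccdp_heat_removal = failed_edg_a * failed_charging_e * p_afw_fails

ccdp = ccdp_sbo + ccdp_heat_removal
print(f"illustrative CCDP = {ccdp:.1e}")   # compare with the reported 6.5E-4
</syntaxhighlight>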
 
2.0 System Performance and Design Issues

2.1 Offsite Power Reliability and Independence Issues
 
a. Inspection Scope

The team reviewed design drawings associated with the Palo Verde, Hassayampa, West Wing, Devers, and Rudd switchyards and substations. In addition, the team conducted interviews with licensee personnel, APS personnel, and Salt River Project (SRP) personnel involved in the licensee's investigation. Finally, the team reviewed the sequence-of-events and alarm printouts in detail to develop a comprehensive understanding of the event progression.
 
b. Observations and Findings

One Unresolved Item (URI) was identified to review the licensee's root and contributing causes of the loss of offsite power event and corrective action implementation.
 
(URI 05000528;529;530/2004012-001)
The 500 kV system upset at the PVNGS switchyard originated with a fault across a degraded insulator on the 230 kV Liberty transmission line between the Westwing and Liberty substations, approximately 47 miles from PVNGS. Protective relaying detected the fault and isolated the line from the Liberty substation. The protective relaying scheme at the Westwing substation received a transfer trip signal from the Liberty substation, actuating the Type AR relay in the tripping scheme for circuit Breakers WW1022 and WW1126. The Type AR relay had four output contacts, all of which were actuated by a single lever arm. The tripping schematic showed that contacts 1-10 and 2-3 should have energized redundant trip coils in Breaker WW1022, while contacts 4-5 and 6-7 should have energized redundant trip coils in Breaker WW1126.
 
Breaker WW1126 tripped, demonstrating that the Type AR relay coil picked up and at least one of the AR relay contacts, 1-10 or 2-3, closed. PCB 1022 did not trip. Bench testing by APS showed that, even with normal voltage applied to the coil, neither of the tripping contacts for PCB 1022 closed. The breaker failure scheme for PCB 1022 featured a design where the tripping contacts for the respective redundant trip coils also energized redundant breaker failure relays. Since the tripping contacts for PCB 1022 apparently did not close, the breaker failure scheme for PCB 1022 also was not activated, resulting in a persistent uncleared fault on the 230 kV Liberty line.
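The single point of failure in this arrangement can be expressed as a short fault-tree-style sketch (Python; the contact-pair names are placeholders, and the logic is an illustrative reconstruction of the scheme described above rather than the actual schematic):

<syntaxhighlight lang="python">
# Illustrative sketch of the PCB 1022 tripping-scheme dependency described
# above: the same Type AR relay contact pairs that energize the redundant
# trip coils also start the breaker-failure relays, so contacts that fail
# to close defeat both the trip and its backup. Names are placeholders.
def pcb_1022_response(contact_pair_1_closed, contact_pair_2_closed):
    trip_coil_1_energized = contact_pair_1_closed
    trip_coil_2_energized = contact_pair_2_closed
    breaker_trips = trip_coil_1_energized or trip_coil_2_energized
    # Breaker-failure protection is started by the same contacts.
    breaker_failure_armed = contact_pair_1_closed or contact_pair_2_closed
    return breaker_trips, breaker_failure_armed

# As observed during the event: neither PCB 1022 tripping contact closed,
# so there was no trip and no breaker-failure backup.
print(pcb_1022_response(False, False))   # (False, False)

# With a healthy relay, either contact pair clears the fault and arms backup.
print(pcb_1022_response(True, False))    # (True, True)
</syntaxhighlight>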
 
Various transmission system event recorders show that, during approximately the first 12 seconds after fault inception, several transmission lines on the interconnected 69 kV, 230 kV, 345 kV, and 525 kV systems tripped on overcurrent, including lines connected to the Westwing and Hassayampa substations. Also during the first 12 seconds, two Red Hawk combustion turbine units and one Red Hawk steam turbine unit tripped, and the fault alternated between a single-phase-to-ground fault and a two-phase-to-ground fault, apparently as a result of a failed shield wire falling on the faulted line. After 12 seconds, the fault became a three-phase-to-ground fault, and additional 525 kV lines tripped.
 
At approximately 17 seconds after fault inception, the three transmission lines between the PVNGS switchyard and the Hassayampa substation tripped simultaneously due to action of their negative sequence relaying, thereby isolating the fault from the several co-generation plants connected to the Hassayampa substation. Approximately 24 seconds after fault inception, the last two 525 kV lines connected to the PVNGS switchyard tripped, isolating the PVNGS switchyard from the transmission system. At approximately 28 seconds after fault inception, the three PVNGS generators were isolated from the switchyard, and by approximately 38 seconds all remaining lines feeding the fault had tripped and the fault was isolated.
 
Reliability Issues

The degraded insulator was caused by external contamination and did not, by itself, represent a concern relative to the reliability of the insulators on the 230 kV transmission system. Nevertheless, the failed Type AR relay and the lack of a robust tripping scheme raised concerns relative to the maintenance, testing, and design of 230 kV system protective relaying. Interviews with APS transmission and distribution personnel indicated that the Westwing substation, where the relay failure occurred, was subject to annual maintenance and testing. Following the event, the failed Type AR relay was removed from service by APS personnel and visually inspected by the NRC team at PVNGS. The relay showed no apparent signs of contamination or deterioration.
 
Although the team considered the maintenance interval to be reasonable, the team did not determine the degree of rigor applied in testing the relaying scheme. For instance, it is doubtful that the testing included methods common in the nuclear industry, such as verifying that each contact in the tripping scheme functioned properly. As noted earlier, the tripping scheme lacked redundancy that may have prevented the failure of the protective scheme to clear the fault. Personnel employed by APS and SRP reviewed the design of the Westwing substation as well as all other substations connected to the PVNGS switchyard, and found that only the Liberty and Deer Valley transmission lines at the Westwing substation featured a tripping scheme with only one Type AR relay. All of the newer lines featured two Type AR relays. However, APS personnel found that the middle breakers in the breaker and a half scheme at the Westwing substation contained only one trip coil, as opposed to two trip coils in the bus-connected breakers. This feature was found by SRP personnel to be representative of the design at the Devers substation. In order to improve reliability, APS modified the tripping schemes for the Liberty and Deer Valley lines to feature two AR relays energizing separate trip coils. In addition, personnel from APS and SRP stated that they would evaluate the feasibility of installing two trip coils in all single trip-coil breakers. Finally, APS personnel indicated that the APS 525/230 kV transformers did not have the same overcurrent protection as the SRP transformers and that they would consider the installation of overcurrent protection.
 
The team found that APS notably improved the reliability of their Westwing substation by installing a redundant tripping scheme with two Type AR relays for the Liberty and Deer Valley transmission lines. In addition, the APS and SRP intention to include dual trip coils and overcurrent protection on unprotected transformers would also serve to increase the reliability of power to the grid. The team also noted that the PVNGS licensee actively coordinated the off-site power investigation and facilitated discussions with APS and SRP.
 
Independence of Offsite Power Supplies

Licensees are tasked with ensuring that the facility meets the General Design Criteria (GDC) contained within 10 CFR Part 50, Appendix A. Specifically, GDC 17 requires that power from the offsite transmission network be supplied by "two physically independent circuits designed and located so as to minimize to the extent practical the likelihood of their simultaneous failure under operating and postulated accident and environmental conditions." This event highlighted a previously unknown vulnerability associated with the three transmission lines between the Hassayampa and PVNGS switchyards. These three transmission lines featured negative sequence relaying intended to serve as pole mismatch protection. This design was implemented in 1999 as part of extensive modifications to the Hassayampa switchyard intended to accommodate new co-generation facilities local to the PVNGS. The negative sequence protection scheme was designed to actuate a complete isolation of all three of the subject transmission lines after a 5-second time delay to avoid spurious tripping due to faults. Although these individual lines are listed as separate sources of offsite power in the Plant Technical Specifications, this event demonstrated that the lines were subject to simultaneous failure (acting as one) because of the protective relaying scheme.
 
Personnel employed by SRP and the licensee stated that the negative sequence relaying was disabled and pole mismatch protection was being implemented by alternate relaying.
 
The team found the licensee's efforts to coordinate its investigation with APS and SRP to be appropriate. The design changes implemented on the Hassayampa switchyard to PVNGS switchyard transmission lines to remove the negative sequence protection improved the independence of those transmission lines and would prevent the three subject transmission lines from acting as one in the future for the same type of fault.
 
2.2 Unit 1, Atmospheric Dump Valve 185 Failure

a. Inspection Scope

The team interviewed operators, reviewed control room logs, and reviewed CRDR 2716011 associated with the loss of manual control of Atmospheric Dump Valve (ADV) 185 during the performance of Procedure 40EP-9EO10, "Loss of Offsite Power/Loss of Forced Circulation," Revision 10.
 
b. Observations and Findings

The team identified an unresolved item associated with the licensee's determination of root and contributing causes of the Valve ADV-185 failure and to review corrective actions, if any (URI 05000528;529;530/2004012-001).
 
Following the Unit 1 LOOP, Valve ADV-185 failed to operate properly while being remote-manually operated from the control room. Operators in the control room observed that the valve had drifted closed, despite a remote-manual controller setting demanding that the valve be open. The operators were able to adjust Valve ADV-185 from the control board by adjusting the demand higher than needed. However, the valve would not remain in the desired position.
 
The team assessed how much the Valve ADV-185 failure affected the operators' ability to control reactor coolant temperatures and concluded that the impact was minimal. The operators had been trained sufficiently to readily diagnose the problem and utilize an alternate ADV for decay heat removal. All other atmospheric dump valves on Unit 1 responded properly to remote-manual control signals and presented no further challenges to the control room operators.
 
Licensee personnel identified the apparent cause of the malfunction as internal leakage equalizing around a pilot valve, causing the valve to shut. The valve and its associated control circuit were quarantined, and maintenance personnel were troubleshooting the components to determine the root cause of the malfunction.
 
2.3 Unit 1, Letdown Heat Exchanger Isolation Failure

a. Inspection Scope

The team reviewed the circumstances surrounding the Unit 1 letdown heat exchanger's failure to isolate following the June 14, 2004, loss of offsite power event. Since the Unit 1 letdown system had been temporarily modified by the licensee, the team's review included a detailed inspection of Temporary Modification 2594804. In addition, the team reviewed CRDR 2715667, documenting the system response during the event, to understand the licensee's investigation into the failure. The team also interviewed plant personnel and reviewed control room logs and temperature plots to determine the impact of the high temperature on the letdown system.
 
b. Observations and Findings

The team identified an unresolved item associated with the licensee's determination of root and contributing causes of the letdown system failure and to review corrective actions, if any (URI 05000528;529;530/2004012-001). In addition, this issue has potential cross-cutting aspects in the area of human performance.
 
During the June 14, 2004, loss-of-offsite-power, the Unit 1 letdown system did not operate as expected when fluid temperatures exceeded the alarm setpoint. The letdown system bypassed the ion exchanger and the filter at 140°F, as expected. However, a temporary modification to bypass a flow sensor resulted in the system failing to isolate when needed. The letdown system response had apparently not been anticipated by the engineers designing the temporary modification, and operators were unaware of the system's response to a loss of offsite power. The team was concerned that inadequate design control had resulted in the overheating of a system designed for low temperature operation. The system was designed to isolate the letdown system if temperature at the outlet of the non-regenerative heat exchanger exceeded 148°F.
 
The licensee identified that the apparent cause of the system not isolating as expected was a failure of the temporary modification to fully address the functioning of the letdown control system during a loss of power to the controller. As a consequence of a loss-of-offsite-power, the nuclear cooling water flow is normally lost to the non-regenerative heat exchanger. Typically, when power is restored to the system, the valves would be in a manual mode of operation and flow through the system would not be secured by the normal control system. The temporary modification effectively bypassed the backup initiating signal for isolating the system in the event cooling water flow to the heat exchanger was lost, which occurred as a result of the loss of offsite power.
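The effect of the temporary modification on the isolation logic can be summarized in a small sketch (Python; this is an illustrative reconstruction of the behavior described in this section, not the plant's actual control logic):

<syntaxhighlight lang="python">
# Illustrative reconstruction of the Unit 1 letdown isolation behavior
# described above. Letdown should isolate on loss of nuclear cooling water
# to the heat exchanger (the backup signal) or by operator action; the
# temporary modification effectively bypassed the backup signal.
def letdown_isolates(cooling_water_flow_lost, backup_signal_bypassed, operator_isolates):
    backup_isolation = cooling_water_flow_lost and not backup_signal_bypassed
    return backup_isolation or operator_isolates

# As-found condition during the LOOP: cooling water lost, backup signal
# bypassed by the temporary modification, no early operator action.
print(letdown_isolates(True, True, False))    # False -> letdown overheats

# Without the temporary modification, the backup signal would have isolated it.
print(letdown_isolates(True, False, False))   # True
</syntaxhighlight>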
 
The impact on the plant systems and personnel were minimized when the ion exchanger bypass valves actuated to remove high temperature water from the resin. However, the introduction of high temperature water created a distraction when, as a result of paint and insulation being heated, the fire brigade was activated for a report of smoke/fumes.
 
The fire brigade responded to the report of a potential fire and operators conducted a detailed walkdown of the system.
 
The licensee conducted an engineering calculation to determine the maximum stress associated with a 350°F fluid temperature, which was considered the worst-case temperature the letdown system could have been subjected to. The worst-case thermally induced stress was calculated to be 27,475 pounds per square inch (psi). The licensee's engineers determined that a socket-weld on the drain for purification Filter F36 was the only weld of concern that could have exceeded its maximum allowable stress if it had reached 350°F. Licensee personnel performed a visual inspection of the affected weld, and removed the filter element to determine if any damage had occurred. Because the filter element was rated for 180°F for 1 hour, and there was no indication of any heat damage, the licensee personnel concluded that the weld was not subjected to temperatures that could have caused excessive stress on the weld. In addition, the licensee conducted a soft parts analysis to ascertain if any parts susceptible to high temperatures were present and found none.
 
With respect to the extent of condition, the team found that Unit 1 was the only unit that had this modification installed to bypass the low flow isolation signal. Therefore, the team had no concerns with the other units.
 
2.4 Unit 2, Train A Emergency Diesel Generator Failure

a. Inspection Scope

The team interviewed licensee representatives and reviewed the sequence of events that led up to the failure of the Unit 2 Train A emergency diesel generator to determine the apparent cause. The team also reviewed the effects the loss of the diesel generator had on the recovery of the event; the action plan for determining the root cause (Condition Report/Disposition Request (CRDR) 2715709); and the extent of condition of the apparent cause.
 
b. Observations and Findings

The team found that the apparent cause of the Unit 2 Train A emergency diesel generator failure was a failed diode in Phase B of the voltage regulator exciter circuit. The diode failure resulted in a reduced excitation current, which was unable to maintain the voltage output with the applied loads.
 
At approximately 07:41:15 a.m., the Unit 2 Train A emergency diesel generator received a start signal as a result of an undervoltage signal on the Train A 4.16 kV Class 1E bus.
 
The emergency generator started, came up to speed and voltage, and energized the bus at approximately 07:41:23 a.m., within the 10 seconds allowed by design.
 
Approximately 5 seconds later, the Train A battery chargers, control element drive mechanism cooling units, and the containment cooling units were sequenced onto the bus. The essential cooling water pump was sequenced onto the bus approximately 15 seconds after the first loads.
 
The team noted that, at approximately the same time the essential cooling water pump was energized, the output voltage from the emergency diesel generator began to fail.
 
The control room operators observed the voltage and current indications in the control room were zero and had an auxiliary operator observe the indications locally, at the emergency diesel generator control panel. The indications were also zero. The control room operators initiated a manual emergency trip of the diesel at approximately 07:56:21 am. The team found these actions to be appropriate for the circumstances.
 
The team found that the failed emergency diesel generator did not have a large impact on plant stabilization and recovery, but did result in having only one train of safety equipment available. The only apparent effect of the loss of Train "A" safety-related equipment was associated with the availability of Train "A" charging pumps which rely on emergency power from the EDGs.
 
The team noted that licensee engineers and maintenance personnel developed a comprehensive plan to troubleshoot the failure (CRDR 2715709). The plan was methodical and prioritized. The team found that the troubleshooting activities were thorough and well controlled, resulting in the identification of the failed diode in Phase B of the exciter circuit. The failure resulted in a half-wave output with significantly reduced current that led to the loss of adequate excitation to maintain the required voltage for the applied loads.
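The reduced-excitation effect of losing a rectifier leg can be illustrated with a generic full-wave versus half-wave comparison (Python; this is a textbook-style sketch, not a model of the actual Phase B exciter rectifier circuit):

<syntaxhighlight lang="python">
# Generic comparison of the average output of a full-wave versus a half-wave
# rectified sinusoid, to illustrate why losing one rectifier leg produces a
# half-wave output with significantly reduced average excitation current.
import math

def average_rectified(samples=10000, half_wave=False):
    total = 0.0
    for k in range(samples):
        v = math.sin(2 * math.pi * k / samples)
        if half_wave:
            total += max(v, 0.0)   # the failed leg passes nothing on one half cycle
        else:
            total += abs(v)        # both half cycles contribute
    return total / samples

full = average_rectified()
half = average_rectified(half_wave=True)
print(f"full-wave average ~ {full:.3f}, half-wave average ~ {half:.3f}")
print(f"half-wave output is about {half / full:.0%} of the full-wave value")
</syntaxhighlight>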
 
The team found that, while this diode was common to all the emergency diesel generators at the Palo Verde Nuclear Generating Station, there was insufficient data to indicate there was a common mode problem. A review of the industry database on component failures revealed only one other failure of this specific model diode. That failure was in 1997. As such, the team found the extent of condition review by licensee personnel to have been appropriate for the circumstances.
 
The team noted that the failed diode had been replaced during the Fall 2003 refueling and steam generator replacement outage. This diode had been subject to approximately 65 hours of operation before it failed. Licensee personnel had plans to perform additional testing to determine the root cause, if possible, of the diode failure.
 
The NRC will evaluate the corrective actions and root cause determination associated with the emergency diesel generator failure (URI 05000528;529;530/2004012-001). In addition, this item has potential cross-cutting aspects in the area of problem identification and resolution.
 
2.5 Unit 3, Plant Response to Loss of Offsite Power

a. Inspection Scope

The team reviewed CRDR 2715659, documenting the Unit 3 reactor trip, plant response, and pre-startup review. In addition, the team reviewed control room logs and associated system temperature, pressure, and flow plots; voltage and frequency plots; and nuclear instrumentation plots to assess whether the plant responded as designed. Finally, the team interviewed various personnel who were either involved in the event or in the analyses of the event.
 
b. Observations and Findings

The team identified two unresolved items. The first unresolved item is associated with the automatic main steam-line isolation in Unit 3 and will result in an evaluation of the response of the bypass control system in all three units following the loss of offsite power and a comparison of that response to the responses assumed in the plant safety analysis (URI 05000528;529;530/2004012-002). The team found that the plant responses observed during this event were different from those described in the Final Safety Analysis Report (FSAR). Accordingly, the second unresolved item is associated with reviewing the licensee's root cause for the Unit 3 reactor trip on a variable over-power signal and the licensee's evaluation of the impact of the high frequency on plant equipment, as well as the extent of condition once the cause is determined (URI 05000528;529;530/2004012-001).
 
The team noted that Unit 3 experienced an automatic main steam-line isolation.
 
Licensee engineers attributed the automatic isolation to a steam bypass control system anomaly that caused all the bypass valves to open simultaneously, suddenly decreasing main steam line pressure and causing a main steam isolation. The team found, through interviews with licensee engineers, that the apparent cause of the "anomaly" was a momentary loss of power to Panel D11 with the control system being re-energized in the automatic mode, vice manual. According to the licensee engineers, this power loss initiated a 30-second timer that disconnected the valve control signals from the control cabinet. When the 30-second timer completed, all eight valves modulated open in about 14 seconds.
 
The PVNGS FSAR, Revision 12, Section 1.8, "Conformance to NRC Regulatory Guides," documents that the licensee took exception to the separation criterion of NRC Regulatory Guide 1.75, "Physical Independence of Electric Systems," Revision 1, for the power supplies to Panel D11. As a result, Panel D11 was powered from both a non-vital power supply (normal) and a vital power supply (backup). Upon loss of normal power, the supply automatically transfers to the backup supply. After the normal supply returns, the panel must be manually transferred back to the normal supply. Upon a total loss of power to Panel D11, the steam bypass control system will be unable to automatically respond to any challenges (FSAR Section 7.2.2.4.1.2.1). The team also noted that the power supply configuration was identical on all three units. However, Units 1 and 2 did not respond the same as Unit 3.
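The sequence the licensee engineers described can be laid out as a simple timeline sketch (Python; the 30-second timer and roughly 14-second valve stroke come from the account above, while the event structure and the branch for re-energizing in manual are illustrative assumptions):

<syntaxhighlight lang="python">
# Illustrative timeline of the Unit 3 steam bypass control system (SBCS)
# behavior described above. The 30 s timer and ~14 s valve stroke follow the
# licensee engineers' account; everything else is a simplification.
def sbcs_timeline(reenergized_mode):
    events = [
        (0.0, "momentary loss of power to Panel D11; 30-second timer starts"),
        (0.0, f"SBCS re-energizes in {reenergized_mode} mode"),
        (30.0, "timer completes; valve control signals reconnected"),
    ]
    if reenergized_mode == "automatic":
        events.append((44.0, "all eight bypass valves modulate open over ~14 seconds"))
        events.append((44.0, "main steam pressure drops; main steam isolation signal"))
    else:
        events.append((30.0, "valves remain closed; operator action required"))
    return events

for t, description in sbcs_timeline("automatic"):
    print(f"t = {t:5.1f} s  {description}")
</syntaxhighlight>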
 
The team noted that, in each subsection of the FSAR listed below, the steam bypass control system is assumed to be unavailable because it is either deenergized or in manual. During the loss-of-offsite-power event, the team found that the system was reenergized and operated in automatic. The team noted that this system response may not be as described in the licensee's safety analysis.
 
6.3.3.5D. For all break sizes, the reactor trip will result in a turbine trip and the subsequent loss of offsite power will result in the loss of main feedwater flow. Since the steam bypass control system is not available due to loss of condenser vacuum on loss of offsite power ...
7.2.2.4.1.2.1 A. The [Steam Bypass Control System] SBCS and
  [Reactor Pressure Control System] RPCS will be unable to automatically respond to any challenges on a failure of distribution panel E-NNN-D1 1.
 
7.2.2.4.1.2B ... the LOFW [loss-of-feedwater] event presented in subsection 15.2.7 assumed that the [Pressurizer Pressure Control System] PPCS, SBCS, and [Reactor Regulating System] RRS are in the manual mode of operation, unable to automatically respond to challenges.
 
15.1.4.2 Case 1 Since the steam bypass control system is assumed to be in the manual mode with all bypass valves closed ...
15.1.4.2 Case 2 Since the steam bypass control system is assumed to be in the manual mode with all bypass valves closed ...
15.2.3.1 ... in this analysis both the SBCS and RPCS are assumed to be in the manual mode and credit is not taken for their functioning.
 
15.3.1.1 The only credible failure which can result in a simultaneous loss of power is a complete loss of offsite power. In addition, since a loss of offsite power is assumed to result in a turbine trip and renders the steam dump and bypass system unavailable, the plant cooldown is performed utilizing the secondary valves and atmospheric dump valves (ADVs)...
The loss of offsite power will make unavailable any systems whose failure could affect the calculated peak pressure. For example, a failure of the steam dump and bypass system to modulate or quick open and a failure of the pressurizer spray control valve to open involve systems (steam dump and bypass system and pressurizer pressure control system (PPCS)) which are assumed to be in the manual mode as a result of the loss of offsite power and, hence, unavailable for at least 30 minutes.
 
15.3.1.2C. The turbine is assumed to trip on loss of offsite power.
 
The loss of offsite power produces a loss of load on the turbine, which generates a turbine trip signal. The turbine stop valves are closed as a result of the trip. The steam bypass control system becomes unavailable due to the loss of offsite power and subsequent loss of condenser vacuum.
 
15.3.4.1 The assumed loss of AC renders the steam bypass control system inoperable as a result of the loss of circulating water pumps.
 
15.3.4.2C. The loss of offsite power causes a loss of power to the plant loads and the plant experiences a simultaneous loss of feedwater flow, condenser inoperability, and a coastdown of all reactor coolant pumps.
 
15.3.4.3.1C. The loss of offsite power also causes a loss of main feedwater and condenser inoperability. The turbine trip, with the steam bypass control system (SBCS) and the condenser unavailable, leads to a rapid buildup in secondary system pressure and temperature ...
15.4.2.2D. Following the generation of a turbine trip on reactor trip, the main feedwater control system (FWCS) enters the reactor trip override mode and reduces feedwater flow to 5% of nominal, full power flow. Since the steam bypass control system (SBCS) is assumed to be in manual mode with all bypass valves closed, the main steam safety valves (MSSVs) open to limit secondary system pressure and remove heat stored in the core and the RCS.
 
15.4.2.3B. All the control systems listed in table 15.4.2-2, except the steam bypass control system, were assumed to be in the automatic mode since these systems have no impact on the minimum [Departure from Nucleate Boiling Ratio] DNBR obtained during the transient. The steam bypass control system is assumed to be in manual mode because this minimizes DNBR during the transient.
 
15.4.8.3C. The steam bypass control system is inoperable on loss of offsite power and therefore is unavailable.
 
15.5.2.1 The loss of normal ac power results in loss of power to the reactor coolant pumps, the condensate pumps, the circulating water pumps, the pressurizer pressure and level control system, the reactor regulating system, the feedwater control system, and the steam bypass control system.
 
15.5.2.3C. Since the steam bypass control system is in the manual mode ...
The unavailability of the steam bypass valves ...
 
15.6.3.1.2D Since the SBCS is assumed to be in manual mode with all bypass valves closed ...
15.6.3.3.1A. The ADVs are used due to the unavailability of the steam bypass control system due to loss of offsite power.
 
15.6.3.3.3.1C. The loss of offsite power also causes the steam bypass system to the condenser to become unavailable.
 
During the team's review of the time-line, it was noted that the main turbine stop valves closed on each unit at approximately 07:41:21 a.m. The Units 1 and 2 reactor coolant pumps had tripped on undervoltage approximately 1 second prior to the turbine trips, and the reactors tripped on anticipatory low departure from nucleate boiling ratio within 1 second of receipt of the turbine trips. However, on Unit 3, the reactor tripped on variable over-power approximately 1 second after the other units. Next, the team noted that the Unit 3 main generator tripped approximately 1 second after the reactor trip on a volts/hertz signal, while the other units' main generators did not trip on volts/hertz signals until approximately 3.5 seconds after the reactor trips. Approximately 5 seconds after the Units 1 and 2 reactor coolant pumps tripped on undervoltage, the Unit 3 reactor coolant pumps tripped on undervoltage. All three units experienced post-event frequency increases to approximately 67 hertz.
 
During the loss-of-offsite power event, the Unit 3 reactor coolant pumps remained connected to the substation bus while the turbine was in an overspeed condition.
 
Licensee engineers concluded that the bus voltage was maintained because of an unexpected response of the Unit 3 generator's excitation circuit. As a result of the excitation circuit response, the excitation, and therefore the output voltage, remained high, delaying the load shed and tripping of the reactor coolant pumps. The licensee planned to conduct troubleshooting to evaluate the main generator excitation control system.
 
Since the Unit 3 reactor coolant pumps remained operating longer, they ran at the higher frequency, increasing flow through the critical reactor core. This increase in flow (approximately 108.2 percent of design flow) produced an indicated power of approximately 109 percent, as read on the excore nuclear instruments. This positive rate of change in reactor power generated a variable over-power trip signal to shut down the reactor.
 
The team reviewed the licensee's evaluation of the increased reactor coolant flow and noted that the estimated flow of 108.2 percent was less than the evaluated limit of 110.4 percent of design volumetric flow. According to the licensee's analyses, the most limiting component of each reactor coolant pump was the motor flywheel which was designed for 125 percent of rated speed. The team noted that this value was not approached during the event. The team agreed with the licensee's conclusion that there was no impact to the continued power operation with respect to fuel grid-to-rod fretting, vessel hydraulic uplift forces, and fuel mechanical design.
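As a rough cross-check of these numbers, simple pump affinity arithmetic (flow roughly proportional to pump speed) bounds the effect of the 67-hertz excursion; the sketch below is only that bounding estimate under the stated assumption, and it deliberately overstates the effect relative to the licensee's evaluated 108.2 percent:

<syntaxhighlight lang="python">
# Rough bounding arithmetic for the reactor coolant pump overspeed discussed
# above. Assumes flow scales linearly with pump speed (pump affinity law),
# which overstates the real effect; the licensee's evaluation gave 108.2%.
nominal_hz = 60.0
event_hz = 67.0

speed_ratio = event_hz / nominal_hz          # also the bounding flow ratio
observed_flow_percent = 108.2                # licensee's estimate during the event
evaluated_limit_percent = 110.4              # design volumetric flow limit
flywheel_speed_limit_percent = 125.0         # most limiting RCP component

print(f"bounding flow (affinity law): {100 * speed_ratio:.1f}% of design")
print(f"observed flow               : {observed_flow_percent}% (limit {evaluated_limit_percent}%)")
print(f"pump speed                  : {100 * speed_ratio:.1f}% of rated (flywheel limit {flywheel_speed_limit_percent}%)")
</syntaxhighlight>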
 
While all three turbine generators were in an over-speed condition and connected to the plant busses, all connected loads experienced a higher frequency. The reactor coolant pumps for Units 1 and 2 were not exposed to the high frequency condition because their undervoltage relays actuated before the higher frequency was attained.
 
2.6 Unit 3, Reactor Coolant Pump 2B Lift Oil Pump Breaker

a. Inspection Scope

The team reviewed the thermal overload curves for the lift oil pumps and the operator response to the loss of the pump with regard to restoring forced circulation in the primary plant. The team also interviewed plant personnel, reviewed CRDR 2715659, and reviewed control room logs regarding the activities surrounding the failure of the lift oil pump to start.
 
b. Observations and Findings The team identified an unresolved item associated with the design of the lift oil pump motor breaker thermal overloads and operation of the lift oil system (URI 05000528;529;530/2004012-002).
 
During restoration efforts following the June 14, 2004 loss of offsite power, the Unit 3 reactor coolant Pump 2B lift oil pump thermal overloads were actuated while operators were making preparations to start reactor coolant pumps.
 
The team noted that the procedure for starting reactor coolant pumps did not contain any note or precaution that warned operators of a potential thermal overload trip if the lift oil pump motor was run longer than 10 minutes. Licensee Procedure 40EP-9EO10, Appendix 1, "RCP [Reactor Coolant Pump] Restart," states, in part:
15. Ensure the appropriate lift oil pump has been running for 7 minutes or more.
 
The team noted that the thermal overload trip resulted in an unnecessary delay in the restoration of forced reactor coolant flow through the core.
 
In addition, the licensee's calculation for sizing the thermal overloads for the motor breaker set them only 0.1 amp greater than the motor running current. At this level of running current, the licensee calculated that the overloads would actuate in approximately 600 seconds. Licensee personnel identified that the apparent cause of the lift oil pump trip was operating the pump in excess of 10 minutes. The licensee initiated CRDR 2715659 to address this issue.
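The timing conflict can be illustrated with a short sketch using only the values reported above; the comparison itself is an illustration and is not drawn from the licensee's calculation.

    # Timing comparison drawn from the values above (illustrative only).
    required_run_s = 7 * 60      # procedure minimum lift oil pump run time (Step 15)
    overload_trip_s = 600        # calculated overload actuation time near motor running current
    overload_margin_amps = 0.1   # overload setting above the motor running current

    window_s = overload_trip_s - required_run_s
    print(f"window between minimum required run time and predicted overload actuation: ~{window_s} s")

With only about three minutes between the required run time and the predicted overload actuation, a modest delay in starting the reactor coolant pump could produce the trip described above.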
 
2.7 Unit 3. Low Pressure Safety Injection System In-Leakage a. Inspection Scope The team reviewed CRDR 2715659, which documented that a leaking Borg-Warner check valve had pressurized the low pressure safety injection system during the event.
 
Plant personnel were interviewed and control room logs and plots were reviewed to determine the impact of the in-leakage to the control room operators during the loss of offsite power event.
 
b. Observations and Findings
 
The team identified an unresolved item related to the Borg-Warner safety injection check valve leakage. The unresolved item is to review the licensee's root and contributing cause determination, review the effectiveness of prior corrective actions for previous check valve leakage issues, assess the licensee's use of industry operating experience and generic communications, determine the adequacy of the in-service testing program for demonstrating check valve operability, and assess any corrective actions implemented (URI 05000528;529;530/2004012-001).
 
While Unit 3 operators were implementing loss of offsite power emergency procedures, they were required to manually implement alarm response Procedure 40AL-9RL2B, "Panel B020B Alarm Response," Revision 48, on three occasions to depressurize a section of safety injection piping to maintain the low pressure safety injection system operable. The team found that, while operators maintained an adequate level of control, they were somewhat challenged by the unnecessary distraction from emergency procedures. Apparently, Valve RCEV-217, a 14-inch Borg-Warner check valve, began to leak and pressurized the safety injection header to reactor coolant Loop 2A. The licensee's apparent cause involved a thermal-hydraulic interaction that resulted in check valve leakage when system temperatures changed rapidly.
 
2.8 Units 1 and 3. General Electric Magne-Blast Breaker Failures a. Inspection Scope The team reviewed the failure of two 13.8 kV circuit breakers to close on demand during the recovery from the loss of offsite power. The team also interviewed licensee personnel associated with the investigation into the breaker failures.
 
b. Observations and Findings The team identified an unresolved item related to the reliability of Magne-Blast circuit breakers. The unresolved item is to review the licensee's root and contributing cause determination, review the licensee's assessment of the extent of condition, review the effectiveness of prior corrective actions for Magne-Blast circuit breaker issues, assess the licensee's use of industry operating experience and generic communications, determine the adequacy of preventative maintenance, and assess any corrective actions implemented (URI 05000528;529;530/2004012-001).
 
This item has potential cross-cutting aspects in the areas of human performance and problem identification and resolution.
 
The team noted that, while recovering from the loss-of-offsite-power, 13.8 kV circuit Breakers 1ENANS06K and 3ENANS05D failed to close on demand from the control room. The licensee initially determined the apparent cause of the inability to close the breakers was that they had not been cycled frequently enough. Apparently, the licensee believed that improper operation of the latching mechanisms may have occurred due to grease hardening and contamination by dirt. The licensee initiated CRDR 2716019 to evaluate the failures, determine the root cause(s), and take any corrective actions identified.
 
The team noted that the initial response only involved cycling the breakers without any detailed troubleshooting. The team found that licensee personnel considered this acceptable because of a known issue with grease hardening in Magne-Blast circuit breakers located in a relatively hot environment with little to no cycling during the 18-month operating cycle.
 
The team noted that each of the breakers had been refurbished in 2002.
 
Breaker 1ENANS06K had been cleaned, inspected, and cycled during the last refueling outage earlier this year. The team found that the licensee personnel's determination of the apparent cause for the Unit 1 breaker was not supported by the facts because of the recent cleaning and inspection.
 
Because of the large volume of industry operating experience with Magne-Blast circuit breaker reliability and the fact that both breakers had received maintenance within the past two to three years, the team was concerned that the two breakers may have had problems other than what was described in the licensee's apparent cause.
 
2.9 Auxiliary Feedwater (AFW) System Performance a. Inspection Scope The team evaluated the adequacy of the AFW system performance during and after the loss of offsite power event. The inspection was accomplished through a review of documents and interviews with operators and engineering staff.
 
b. Observations and Findings The team identified an unresolved item related to the design and operation of the AFW system. Specifically, a thermally induced vibration occurred when operators placed the non-essential AFW system into service, an occurrence that may also have involved procedural issues.
 
The unresolved item is to review the licensee's root and contributing cause determination, review the licensee's assessment of the extent of condition, and assess any corrective actions implemented (URI 05000528;529;530/2004012-001).
 
As part of the reactor trip response, operators manually started the essential motor-driven AFW pumps in all three units. Six hours after the reactor trip, Unit 1 operators placed the non-essential motor-driven AFW pump into service and secured the essential pump.
 
At this time, a plant operator reported high vibration for approximately 5 minutes in the main feedwater piping. The licensee generated CRDR 2715731 to document the high vibration. In Units 2 and 3, the non-essential pumps were placed in service, 17 and 29 hours after the reactor trips, respectively. No vibration was noted in Units 2 and 3.
 
There was no procedural requirement that compelled operators to secure the essential pump and place the non-essential pump in service. According to the Unit 1 operator, the basis for transferring from the essential pump to the non-essential pump was to allow operators to add chemicals to the feedwater, if needed. However, there was no need to add chemicals at the time that the transfer occurred in Unit 1.
 
The high vibration in the Unit 1 feedwater line occurred when the relatively cold auxiliary feedwater coming from the condensate storage tank mixed with the stagnant hot water in the insulated section of feedwater piping downstream of the injection point of the non-essential AFW pump. That section of feedwater piping became isolated as a result of a manual Main Steam Isolation Signal (MSIS) actuation required by the applicable Emergency Operating Procedure. There were no subsequent procedural cautions or guidance for preventing the introduction of the cold water into the feedwater system prior to that section of piping being allowed to cool down sufficiently. The placement of the non-essential AFW pumps into service in Units 2 and 3 did not result in high vibration because those sections of feedwater piping had apparently cooled enough to preclude a thermally induced vibration transient.
 
3.0 Human Performance and Procedural Aspects of the Event 3.1 AFW System Operation a. Inspection Scope The inspector assessed emergency procedure implementation and control room operator response as it related to the AFW system. The inspection was accomplished through a review of documents and interviews with operators and engineering staff.
 
b. Observations and Findings Emergency Operating Procedure Implementation The team identified an unresolved item associated with the apparent failure of emergency operating procedures to inform control room operators of a potential operability concern with the turbine-driven AFW pump after a main steam isolation. The unresolved item is to review the licensee's root and contributing cause determination, determine the adequacy of procedures and operator training, review the licensee's assessment of the extent of condition, and assess any corrective actions implemented (URI 05000528;529;530/2004012-001).
 
As discussed previously, Unit 2 tripped at 7:41 a.m. on June 14, 2004, as a result of the loss of offsite power. The completion of reactor post-trip actions resulted in entry into the "Loss of Offsite Power/Loss of Forced Circulation" Emergency Operating Procedure (EOP) 40EP-9EO07, Revision 10. Step 6 of this procedure requires control room operators to initiate a manual MSIS actuation. In addition to closing the main steam isolation valves, this step also causes closure of drains associated with two critical steam traps required to maintain operability of the turbine-driven AFW pump. With the steam traps unavailable, condensate can accumulate in the steam lines, which can contribute to an overspeed trip of the turbine during startup.
 
The team noted that the EOP did not caution the operators that an MSIS would potentially make the turbine-driven AFW pumps inoperable. The EOP also did not direct the operators to implement the applicable sections of Normal Operating Procedure 400P-9SG01, "Main Steam," Rev. 37, which provide the necessary instructions for manually draining those sections of piping necessary to maintain operability of the pump. This procedure requires that the piping associated with the critical steam traps be blown down every two hours until a dry steam condition is reached and then every six hours thereafter. On the day of the event, operators did not commence actions to drain the associated piping until 11 hours after the reactors tripped.
 
TDAFW Steam Drain Line Equipment The team identified an unresolved item associated with the availability of resources to drain the TDAFW steam piping, the impact of the delay in restoring critical equipment from a potentially inoperable status, and the adequacy of past corrective actions from a previous overspeed trip of a TDAFW pump. The unresolved item is to review the licensee's root and contributing cause determination, determine the adequacy of past design changes, review the licensee's assessment of the extent of condition, and assess any corrective actions implemented (URI 05000528;529;530/2004012-001).
 
As discussed above, without the steam traps available, condensate can accumulate in the steam lines and lead to a potential overspeed trip of the pump. A condensation induced overspeed trip of the Unit 1 TDAFW pump previously occurred on April 24, 1990. At that time, Engineering Evaluation Request 90-AF-011 was generated to evaluate the root cause. The necessary corrective actions identified included directions to revise the operating and surveillance procedures to address maintaining the steam traps dry and directions to implement manual methods to ensure that the steam lines were maintained drained while in Modes 1, 2 and 3 with the turbine not on line.
 
After operators realized that draining of the piping associated with the critical steam traps was necessary to ensure continued operability of the turbine driven AFW pump, the applicable portions of the main steam normal operating procedure were referenced.
 
The procedure required the installation of a vent rig tool constructed in accordance with Engineering Evaluation Request 92-SG-007 at each manual drain location.
 
Consequently, each turbine-driven AFW pump required two vent rig tools. Operators were only able to find sufficient vent rig tools for one turbine-driven AFW pump.
 
Decision-Making with Limited Resources The team identified an unresolved item associated with whether the decision-making process for directing resources to drain a TDAFW pump steam trap appropriately considered risk importance. The unresolved item is to review the licensee's root and contributing cause determination, review the licensee's assessment of the extent of condition, and assess any corrective actions implemented (URI 05000528;529;530/2004012-001).
 
The AFW system has a relatively high value of risk importance. As such, with only enough vent rig tools to drain one turbine-driven AFW pump at a time, operations management decided to begin draining the Unit 1 TDAFW pump steam traps first. The team noted that with Unit 2 having only one of two EDGs available, it may have been a more prudent decision to restore the Unit 2 TDAFW pump to service first.
 
3.2 Unit 2. Train "B" Positive Displacement Charging Pump Trip a. Inspection Scope The team reviewed the emergency operating procedures and the control room operator response to the loss of offsite power with respect to the charging pumps to determine the effect on the response to the event. The team also interviewed plant personnel and reviewed CRDRs 2716521 and 2716806 regarding the activities surrounding the charging pump operations.
 
b. Observations and Findings The team identified an unresolved item associated with procedure adherence. The unresolved item is to review the licensee's root and contributing cause determination, determine whether a violation or violations of requirements occurred, review the licensee's assessment of the extent of condition, and assess any corrective actions implemented (URI 05000528;529;530/2004012-001). This item has potential cross-cutting aspects in the areas of human performance and problem identification and resolution.
 
As the volume control tank level dropped to approximately 15 percent with Positive Displacement Charging Pump CHB-P01 operating, a control room operator recognized the need to transfer the charging pump suction from the volume control tank to the refueling water tank. Because of the loss of offsite power, control room operators were implementing Procedure 40EP-9EO07, "Loss of Offsite Power / Loss of Forced Circulation," Revision 10.
 
Step 11 of Procedure 40EP-9EO07 states:
IF VCT makeup is NOT available, THEN perform the following:
a. IF RWT level is below or approaching 73%, AND the CRS desires to keep charging in service, THEN PERFORM ONE of the following:
  * Appendix 10, Charging Pump Alternate Suction to the RWT /
Restoration
  * Appendix 11, Charging Pump Alternate Suction to the SFP /
Restoration b. IF RWT level is above 73%, THEN perform the following:
1) IF three charging pumps will be used, THEN stop the Boric Acid Makeup Pumps.
 
2) IF three charging pumps are will be (sic) used, AND a Fuel Pool Clean Pump is recirculating the RWT, THEN stop RWT recirc by stopping the appropriate Fuel Pool Cleanup Pump.
 
3) Open CHN-HV-536, RWT Gravity Feed to Charging Pump Suction.
 
4) Close CHN-UV-501, Volume Control Tank Outlet.
 
The team noted that, since the refueling water tank level was greater than 73 percent at the time, the appropriate steps in the procedure for transferring the charging pump suction were Steps 11.b.3) and 11.b.4). However, the Control Room Supervisor decided that Step 11.a was appropriate because Valves CHN-HV-536 and CHN-UV-501 did not have power, and the supervisor knew that the valves in Step 11.a could be manually operated. The supervisor failed to consider that the valves in Step 11.b could also be manually operated. As a result, the Control Room Supervisor's decision to implement Step 11.a may not have been in accordance with the requirements of the emergency operating procedure for the plant conditions at the time (i.e., the refueling water tank level was greater than 73 percent). The licensee initiated CRDR 2716521 to evaluate the human performance error.
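The branching in Step 11 can be restated compactly. The following Python sketch is a simplified paraphrase of the quoted procedure logic, not the licensee's procedure text, and the 80 percent level in the example is hypothetical, standing in only for "greater than 73 percent."

    # Simplified paraphrase of the Step 11 branching quoted above (illustrative only).
    def step_11_branch(rwt_level_pct, keep_charging_in_service):
        if rwt_level_pct <= 73.0:
            # Step 11.a: alternate suction per Appendix 10 (RWT) or Appendix 11 (SFP)
            return "Step 11.a" if keep_charging_in_service else "charging not kept in service"
        # Step 11.b: gravity feed lineup (open CHN-HV-536, close CHN-UV-501)
        return "Step 11.b"

    # Conditions reported during the event: refueling water tank level greater than 73 percent
    print(step_11_branch(rwt_level_pct=80.0, keep_charging_in_service=True))  # -> Step 11.b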
 
After deciding to implement Step 11.a., the Control Room Supervisor conducted a briefing with an auxiliary operator to discuss the manual transfer of the charging Pump CHE-P01 suction from the volume control tank to the refueling water tank using Appendix 10 to Procedure 40EP-9EO10, "Standard Appendices," Revision 32.
 
Appendix 10 states, in part:
1. Request that Radiation Protection accompany the operator performing the local operations to perform area surveys.
 
2. IF it is desired to align Charging Pump(s) suction to the RWT, THEN perform the following:
a. Place the appropriate Charging Pump(s) in "PULL-TO-LOCK."
 
b. Direct an operator to PERFORM Attachment 10-A, Aligning Charging Pump Suction to the RWT, for the appropriate Charging Pump(s).
 
c. WHEN the appropriate Charging Pump(s) has been aligned, THEN start the appropriate Charging Pump(s) as necessary.
 
Attachment 10-A states, in part:
1. Open CHB-V327, "RWT TO CHARGING PUMPS SUCTION" (70 ft. East Mechanical Piping Penetration Room)...
4. IF aligning Charging Pump E, THEN perform the following (Charging Pump E Valve Gallery)
 
a. Close CHE-V322, ""E" CHARGING PUMP CHE-P01 SUCTION ISOLATION VALVE".
 
b. Open CHE-V757, ""E" CHARGING PUMP ALTERNATE SUCTION ISOLATION VALVE".
 
5. Inform the responsible operator that the appropriate Charging Pump(s) are aligned to the RWT.
 
The team found that the auxiliary operator did not implement Appendix 10, Step 1, of emergency operating Procedure 40EP-9EO10. Instead of requesting a radiation protection person to accompany him, the operator went to the radiologically controlled area access point to perform a routine entry. However, because of the loss of offsite power, the access computers were not functioning and routine entry data was being entered manually. The auxiliary operator did not inform the radiation protection person either of the necessity of his entry or of the procedural requirement for a radiation protection person to accompany him. This resulted in some delay in implementing the EOP. The licensee initiated CRDR 2716806 to evaluate the delay at the access point.
 
Once access was gained, the auxiliary operator proceeded to perform Attachment 10-A, Steps 4 and 5, which were not performed in the correct order. After positioning the valves listed in Step 4, the auxiliary operator informed the control room operator that the charging Pump CHE-P01 suction had been transferred. The control room operator then started charging Pump CHE-P01 at approximately 08:05 a.m. and secured charging Pump CHB-P01 at approximately 08:05:52 a.m. At approximately 08:05:59, charging Pump CHE-P01 tripped on low suction pressure, resulting in a loss of all charging flow.
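The sensitivity of the Attachment 10-A lineup to step order can be illustrated with a hypothetical Python sketch. The valve identifiers are those quoted above, but the simple path model (suction available either through the normal path via CHE-V322 or through the RWT path via both CHB-V327 and CHE-V757) is an assumption made for illustration; whether this was the specific mechanism of the low suction pressure trip is part of the unresolved item.

    # Hypothetical illustration of why the Attachment 10-A lineup is order sensitive.
    # Assumed path model: Charging Pump E has suction if either
    #   (a) the normal path is open (CHE-V322), or
    #   (b) the alternate path from the RWT is open (CHB-V327 and CHE-V757).
    def suction_available(valves):
        normal_path = valves["CHE-V322"]
        alternate_path = valves["CHB-V327"] and valves["CHE-V757"]
        return normal_path or alternate_path

    # Performing only Step 4 (close CHE-V322, open CHE-V757) without Step 1 (open CHB-V327)
    valves = {"CHB-V327": False, "CHE-V322": False, "CHE-V757": True}
    print(suction_available(valves))   # False -- pump left without a suction supply

    # Performing Step 1 first preserves a suction path through the swap
    valves = {"CHB-V327": True, "CHE-V322": False, "CHE-V757": True}
    print(suction_available(valves))   # True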
 
At approximately 08:06:22, the control room operator re-started charging Pump CHB-P01. The team found that the control room operator was unaware that this pump was operating with the suction from the volume control tank. After approximately 4.5 minutes, the control room operator noticed that the volume control tank level had dropped to approximately 10 percent. At that time, the operator secured charging Pump CHB-P01 to prevent it from tripping on low suction pressure or becoming air-bound.
 
At approximately 08:11:31 am, the charging pump suction was properly transferred to the refueling water tank and charging Pump CHB-P01 was restarted. At approximately 11:32:37 am, the time line indicated that charging Pump CHA-P01 was started.
 
3.3 Technical Support Center (TSC) Emergency Diesel Generator Trip a. Inspection Scope The team interviewed members of the licensee's emergency planning organization and electrical maintenance department. Security department logs were reviewed to determine the cause of the failure of the technical support center diesel generator during the loss of off-site power. The team walked down the technical support center electrical distribution system and the technical support center diesel generator. The team reviewed the licensee's preliminary findings attached to CRDR 2715749, written to investigate and determine the root causes for the emergency planning problems arising from the loss of off-site power and plant trip on June 14, 2004.
 
b. Observations and Findings The team identified an unresolved item associated with a failure of the technical support center diesel generator. The unresolved item is to review the licensee's root and contributing cause determination, determine whether a violation or violations of requirements occurred, review the licensee's assessment of the extent of condition, and assess any corrective actions implemented (URI 05000528;529;530/2004012-001).
 
This item has potential cross-cutting aspects in the area of human performance.
 
The team found that the apparent cause for the failure of the technical support center diesel generator to restore power to the technical support center was a human performance error that had occurred during post maintenance testing of the diesel engine starting system on June 8, 2004.
 
On June 14, 2004, as a result of the loss of off-site power, electrical power was lost to the technical support center. As designed, the technical support center diesel generator started, but it did not re-energize the technical support center electrical loads. Electrical maintenance technicians were called to investigate the problem and shortly after they arrived at the technical support center diesel generator the diesel engine tripped. The engine control panel alarms indicated that the trip was due to high engine temperature.
 
Electrical power was restored to the technical support center when off-site power was restored to Unit 1 at 9:10 AM. The technical support center was without electrical power for approximately 1 hour 30 minutes.
 
During subsequent troubleshooting, electrical maintenance technicians determined that the engine operating switch was in "Idle." With the switch in "Idle," the diesel generator started on loss of electrical power to the technical support center, but did not come up to proper voltage and frequency and did not re-energize the technical support center electrical distribution panel. As a result, the engine radiator cooling fan did not start, so the engine overheated and tripped on high temperature. The electrical maintenance technicians returned the engine operating switch to its normal "Run" position and wrote CRDR 2715726.
 
The licensee determined that the engine operating switch was apparently left in the
"Idle" position after post maintenance testing of the engine starting system performed on June 8, 2004 under Work Order 2623863. During this monthly engine starting battery inspection, electricians noted that one battery terminal and connector were corroded.
 
The electricians contacted their team leader and received permission to clean up the connection using the same work order. The team leader and the lead electrician determined that the starting system needed to be tested after the battery was returned to its normal configuration. The lead electrician suggested using a portion of the preventative maintenance task, "Quarterly Restrike Test for TSC Diesel Generator."
 
Since this test is routinely performed by the electricians working on the starting battery, the team leader allowed the electricians to perform the test without a working copy of the test procedure in the field. After the diesel generator was successfully started, the engine operating switch was moved from "Run" to "Idle" to let the engine run at a slower speed and cool down before being secured. The team determined that the failure to have a working copy of the test procedure at the engine during this post maintenance testing and the failure to use the restoration guidance contained in the test procedure contributed directly to the failure to restore the technical support center diesel generator to its normal standby condition.
 
On June 16, 2004, the licensee performed the periodic one-hour loaded test run of the technical support center diesel generator using the preventative maintenance task, "Quarterly Restrike Test for TSC Diesel Generator," under Work Order 2715869. The diesel generator started as expected and automatically energized the technical support center electrical power distribution panel. The diesel generator ran loaded for one hour with no problems noted. The diesel generator was shut down using the task instructions and restoration directions.
 
The team determined that the diesel generator failure contributed to the delay in staffing the technical support center. As a result of the diesel generator failure, the responding members of the emergency response organization were moved to the satellite technical support center adjacent to the Unit 2 control room. However, normal off-site power was restored to the technical support center before the two-hour staffing requirement of PVNGS Emergency Plan, Table 1, "Minimum Staffing Requirements for PVNGS for Nuclear Power Plant Emergencies," Revision 28, had elapsed.
 
3.4 Emergency Response Organization Issues a. Inspection Scope The team interviewed members of the licensee's emergency planning organization and security department and reviewed security department logs and emergency planning records to determine the cause of the multiple emergency response organization communication problems during the loss of off-site power. The team also reviewed the licensee's preliminary findings attached to significant CRDR 2715749 initiated to investigate and determine the root causes for the emergency planning problems arising from the loss of off-site power and plant trip on June 14, 2004 and attended the significant event investigation team meetings. In addition, CRDR 2716281 associated with the availability of dose projection computers was reviewed.
 
b. Observations and Findings The team identified several examples of an unresolved issue during the inspection. The first involved communication and coordination issues associated with notifying state and local officials of emergency classifications. The second involved the apparent unavailability of the radiological dose projection computers used to develop timely protective action recommendations to state and local authorities from the control room.
 
The third involved apparent delays in notifying and staffing the emergency response organization. The unresolved item is to review the licensee's root and contributing cause determination, review the licensee's assessment of the extent of condition, determine whether a violation or violations of requirements occurred, assess the significance of any findings, and assess any corrective actions implemented (URI 05000528;529;530/2004012-001). This item has potential cross-cutting aspects in the area of human performance.
 
The team found that the apparent causes for the multiple emergency response organization communication problems were (1) the unanticipated loss of off-site power to all three units which resulted in the loss of normal emergency planning communications equipment, and (2) human performance errors in implementing EPIP-01, "Satellite Technical Support Center Actions," Revision 14.
 
When the loss of offsite power and three-unit trip occurred, two of the unit shift managers, the on-site manager, and the operations manager, who was the on-call technical support center emergency coordinator, were in the plan-of-the-day meeting in the operations support building adjacent to the Unit 2 control room. The Unit 1 shift manager returned to the Unit 1 control room and assumed the duties of emergency coordinator for all three units. When the on-site manager arrived at the Unit 1 control room to relieve the shift manager of his emergency coordinator responsibilities, Unit 2 entered an Alert emergency action level, so the on-site manager returned to Unit 2 to set up the satellite technical support center at the most affected unit. The Unit 1 shift manager had declared a Notification of Unusual Event for the loss of off-site power for greater than 15 minutes. He gave this information to the on-site manager to coordinate the emergency notification to state and local authorities.
 
The Unit 2 shift manager declared an Alert emergency action level based on the loss of off-site power concurrent with a loss of one of the Unit 2 emergency diesel generators for greater than 15 minutes. He directed the on-shift emergency communicator to notify state and local authorities. The emergency communicator immediately determined that the normal notification alert network system was not working and used the backup radio notification system to notify the state and local authorities within 8 minutes of the Alert classification.
 
When the on-site manager arrived at the Unit 2 satellite technical support center in the Unit 2 control room, he was told by the operations manager that Unit 2 had assumed all emergency communications, but he did not question him as to whether or not the Unit 1 Notification of Unusual Event was sent to the state and local authorities. Apparently, there was no formal turnover on emergency communications responsibilities from the Unit 1 shift manager to the Unit 2 shift manager or the on-site manager who was going to relieve the Unit 2 shift manager of emergency coordinator responsibilities. In addition, the on-site manager and operations manager did not effectively communicate the status of the off-site notification. These two incomplete communications were human performance errors that resulted in the Unit 1 Notification of Unusual Event not being sent to state and local authorities.
 
The Unit 3 shift manager declared a Notification of Unusual Event for the loss of off-site power for greater than 15 minutes. There was a time delay before the Unit 3 on-shift emergency communicator attempted to send out the notification using the normal notification alert network system. When he determined that it was not working, he used the backup radio notification system but did not notify the state and local authorities until 20 minutes after the Notification of Unusual Event classification. The team determined that the delay in starting the notification process and the need to use the backup radio system were human performance errors that delayed the Unit 3 Notification of Unusual Event beyond the 15-minute requirement in EPIP-01, "Satellite Technical Support Center Actions," Revision 14.
 
The loss of power to the normal notification alert network system complicated the emergency notification of state and local authorities. In addition, the licensee determined that the three satellite technical support center dose projection computers had lost power, which raised questions about the ability to make timely protective action recommendations. The apparent cause for both failures was that both systems were supplied electrical power from electrical circuits that have no backup power supplies.
 
The licensee initiated CRDR 2715749 to address the loss of power to the normal notification alert network system and CRDR 2716281 to address the dose projection computers. The licensee implemented immediate corrective actions to install backup uninterruptible power supplies for both systems.
 
During the initial loss of off-site power and the failure of the Unit 2 Train "A" EDG, the Unit 2 shift manager and on-shift emergency communicator were delayed in sending out the emergency pager notification to the on-call emergency response organization. The team determined that the delay of 16 minutes contributed to the greater than 2 hour response time of the on-call technical support electrical engineer to the technical support center. The licensee did not activate the backup dialogic auto-dialer system for emergency response organization notification as required during an Alert emergency classification. During interviews, the Unit 2 shift manager had stated that he thought that June 14, 2004, a Monday, was a normal working day and the emergency response organization would respond to the plant wide announcement of the Alert classification.
 
In fact, Monday was a normal off day for plant personnel, and the dialogic auto-dialer system should have been used to activate the emergency response organization. The team determined that this human performance error contributed to the late staffing of the technical support center and to fewer than the minimum required number of radiation protection technicians reporting to the operations support center within the required 2 hours. This failure to use EPIP-01 properly was documented in CRDR 2715749, and the licensee revised EPIP-01 to always require the activation of the dialogic auto-dialer for backup emergency response organization notification.
 
4.0 Coordination with Off-site Electrical Organizations a. Inspection Scope The team reviewed the design and maintenance practices of the off-site electrical organizations in order to assess factors that influenced the electrical power grid failure, the extent of the system failure, and the corrective actions for preventing such failures. In addition, the licensee's coordination with off-site organizations before, during, and after the June 14, 2004, loss of offsite power event was assessed.
 
b. Observations and Findings As discussed previously, the loss of the PVNGS 525 kV local grid, which disabled all seven offsite power supplies for the nuclear station, was due to the cascading effect of a wide-area electrical isolation that originated from an electrical fault on a 230 kV transmission line that failed to isolate for approximately 38 seconds. The selective tripping of the breakers to isolate problems at the Westwing 230 kV Substation, near the source of the fault, did not perform as required due to a relay failure and a design that had no defense-in-depth.
 
The switchgear maintenance at the PVNGS 525 kV switchyard is performed by SRP personnel. The breakers undergo yearly maintenance which includes a check of the SF6 tubing, pressure switches, air system alarms, air compressor operation, breaker timing, and an operational check of the mechanisms.
 
The protective relaying is also inspected yearly. Relay settings, software and firmware, operating characteristics, and communication circuits are verified for accuracy also on a yearly basis. The PVNGS switchyard is manned by maintenance personnel during normal working hours for prompt identification of any evolving problems.
 
The licensee has calculated the minimum onsite requirement for electrical voltage to be 512 kV. They have directed the APS Energy Control Center (APS-ECC), the local transmission system operator, to provide a voltage range of 525 to 535 kV for the PVNGS 500 kV switchyard. The APS-ECC continued to provide voltage within the expected voltage band following the isolation of the fault.
 
Of note was how closely the APS-ECC and PVNGS control room operators coordinated their efforts to reduce PVNGS switchyard voltage so reactor coolant pumps could be started during plant recovery efforts. In addition, the team found that the licensee actively coordinated the investigation into why a single insulator failure could result in a loss of offsite power and a three-unit trip and was closely involved in the development of corrective actions to improve both reliability and independence of transmission lines.
 
The team concluded that the remedial measures taken and planned by the offsite electrical organizations improved reliability and independence and appropriately minimized the possibility of a cascading blackout in the PVNGS 500 kV switchyard.
 
5.0 Risk Significance of the Event The initial risk assessment for Unit 2 resulted in a conditional core damage probability (CCDP) of 6.5 x 10^-4. The initial CCDP for Units 1 and 3 was estimated as 3.2 x 10^-4 per unit. Subsequently, the team, assisted by Office of Nuclear Regulatory Research personnel, completed a detailed risk assessment for the event. This analysis used the Standardized Plant Analysis Risk (SPAR) Model for Palo Verde 1, 2, & 3, Revision 3.03, to estimate the risk. The analyst assumed that 95 percent of loss of offsite power events similar to the June 14 event would be recovered within 2-1/2 hours. The resulting CCDPs were 4 x 10^-5, 7 x 10^-4, and 4 x 10^-5 for Units 1, 2, and 3, respectively.
 
The team gathered information concerning the failed emergency diesel generator and charging pump in Unit 2. Other equipment problems, including turbine-driven auxiliary feedwater pump drains, power-operated relief valve problems, and 13.8 kV breaker issues, were assessed. In addition, the team evaluated the ability of the licensee to recover offsite power, the probability that power could be provided to the vital buses from the gas turbine generators had it been needed, and the capability of vital and nonvital batteries to continue to provide control power had a station blackout occurred.
 
The team made the following assumptions critical to the analysis:
 
* The Unit 2 Emergency Diesel Generator A failed and could not have been recovered prior to postulated core damage.
 
* A Unit 2 licensed operator misaligned the suction path to Charging Pump E causing the pump to trip on low suction pressure. The pump could not have been recovered prior to postulated core damage because the pump was air bound.
 
* The required mission times, during this specific event, for the emergency diesel generators and the turbine-driven auxiliary feedwater pump were 2.5 hours.
 
* Recovery of ac power to the first vital bus, via the gas turbine generators or offsite power, was possible within one hour following a postulated station blackout. This assumption was derived from the following facts and their associated timeframes:
the east switchyard bus was energized from offsite power (32 minutes);
the gas turbine generators were started and loaded (29 minutes);
licensed operators determined the grid to be stable (49 minutes); and
power can be aligned from the east bus to a vital 4160-volt bus (~30 minutes).
 
* The probability that operators failed to restore offsite power within 1 hour was 4 x 10^-2, as determined using the SPAR-H method. The nominal action failure rate of 0.001 was modified because the available time was barely adequate to accomplish the necessary breaker alignments, the operator stress level would have been high, and the actions required were of moderate complexity (see the sketch following this list of assumptions).
 
* The probability that operators failed to restore offsite power prior to the core becoming uncovered during a reactor coolant pump seal LOCA was estimated as 4 x 10^-3. The same performance shaping factors were used as for the 1-hour recovery with the exception of the time available. The team determined that the time available was nominal, because there would be some extra time, above what is minimally required, to execute the recovery action.
 
* The failure probability for recovery of offsite power prior to battery depletion during a station blackout was estimated as 4 x 10^-3. The same performance shaping factors were used as for the seal LOCA recovery.
 
* The team concluded that the failures of 13.8 kV feeder breakers in Units 1 and 3 would have increased the complexity in recovering offsite power for these units.
 
However, the potential contribution of common cause failure probabilities would not greatly impact the nonrecovery probabilities described previously for Unit 2.
 
* The Palo Verde gas turbine generators used for station blackout could be started and loaded within one hour of blackout initiation. One gas turbine generator can provide power to switchyard components and supply one Unit 1 vital 4160 volt bus. Both generators can provide one vital bus on Units 1 and 2 or Units 1 and 3, but not Units 2 and 3.
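The SPAR-H adjustments cited in the nonrecovery bullets above can be reproduced with a short Python sketch. The multiplier values below are standard SPAR-H performance shaping factor levels; which levels the team actually selected is an assumption inferred from the stated reasoning (available time, stress, and complexity).

    # Sketch of the SPAR-H style adjustment described above (assumed PSF levels).
    NOMINAL_HEP = 0.001   # nominal action failure rate cited in the text

    PSF_MULTIPLIERS = {
        "time_barely_adequate": 10,
        "time_nominal": 1,
        "stress_high": 2,
        "complexity_moderate": 2,
    }

    def adjusted_hep(nominal, selected_psfs):
        # Multiply the nominal human error probability by the selected PSF multipliers.
        hep = nominal
        for psf in selected_psfs:
            hep *= PSF_MULTIPLIERS[psf]
        return min(hep, 1.0)

    # Restore offsite power within 1 hour: barely adequate time, high stress, moderate complexity
    print(adjusted_hep(NOMINAL_HEP, ["time_barely_adequate", "stress_high", "complexity_moderate"]))  # 0.04

    # Recovery before core uncovery (seal LOCA) or battery depletion: nominal time assumed
    print(adjusted_hep(NOMINAL_HEP, ["time_nominal", "stress_high", "complexity_moderate"]))          # 0.004

With these assumed levels, the adjusted probabilities reproduce the 4 x 10^-2 and 4 x 10^-3 values cited above.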
 
To account for the offsite power circumstances on June 14, 2004, the team modified the SPAR to replace industry average loss of offsite power nonrecovery probabilities with ones derived from actual grid conditions and estimated probabilities of human actions failing. Additionally, modeling of the Palo Verde gas turbine generators was improved to better represent their contribution in providing power to vital buses if needed. The team determined that this modified SPAR was an appropriate tool to assess the risk of this event.
 
The team set the likelihood of a loss of offsite power to 1.0, and the likelihood of all other initiating events was set to the house event FALSE, reflecting the assumption that two initiating events would be unlikely to occur at the same time. The failure-to-start and failure-to-run basic events for both Emergency Diesel Generator A and Charging Pump E were set to the house event TRUE, permitting calculation of the probability that similar components would fail from common cause. The SPAR model was quantified following the modifications, and the mean best estimate CCDPs were obtained through Monte Carlo simulation of the event.
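The overall quantification pattern can be illustrated with a toy Monte Carlo sketch in Python. This is not the SPAR model: the cutset expression and the uncertainty distributions below are placeholders chosen for illustration, and only the general pattern (initiator set to 1.0, uncertain basic-event probabilities sampled, mean CCDP taken over the samples) reflects the description above.

    # Toy Monte Carlo quantification (illustrative only; not the SPAR model).
    import math
    import random

    random.seed(1)
    N_SAMPLES = 100_000
    P_LOOP = 1.0  # loss of offsite power initiating event set to 1.0 (the event occurred)

    def sample_lognormal(median, error_factor):
        # Lognormal sample parameterized by a median and a rough 95th-percentile error factor.
        sigma = math.log(error_factor) / 1.645
        return median * math.exp(random.gauss(0.0, sigma))

    total = 0.0
    for _ in range(N_SAMPLES):
        p_emergency_power_fails = sample_lognormal(1e-2, 3.0)  # placeholder basic-event probability
        p_offsite_not_recovered = sample_lognormal(4e-2, 3.0)  # placeholder nonrecovery probability
        ccdp = min(P_LOOP * p_emergency_power_fails * p_offsite_not_recovered, 1.0)
        total += ccdp

    print(f"mean CCDP over samples: {total / N_SAMPLES:.1e}")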
 
6.0 Assessment of Event Response a. Inspection Scope The team conducted an overall assessment of how the PVNGS facility responded to the loss of offsite power event; how the licensee implemented emergency procedures, assessed the apparent causes of failures, and determined when the facility was ready for restart; and when appropriate, the team assessed the effectiveness of immediate corrective actions.
 
b. Observations and Findings Although largely outside the control of the PVNGS licensee, the team found it unacceptable for a single phase-to-ground fault on a 230 kV transmission line to cause a loss of all power to the PVNGS switchyard and a trip of all three PVNGS units.
 
Nevertheless, the event resulted in the identification of several design improvements which improved both the reliability and independence of the 525 kV grid local to the PVNGS switchyard.
 
With respect to how well the PVNGS facility responded, overall, the team found that the PVNGS facility responded in a manner consistent with its design for a loss of offsite power, with some exceptions. One of those exceptions, involving the failure of the Unit 2 EDG to run, was notable because it resulted in some increased risk to the facility. The other exceptions, while less notable individually, were numerous and represented a larger concern when considered in the aggregate. Of note was the self-critical nature of the licensee's efforts to understand and correct emergency response organization issues.
 
The team found that the licensee's efforts to identify each issue, determine the root and/or apparent cause, and develop corrective actions were generally appropriate, with few exceptions. Several observations were made by the team regarding how well the licensee integrated post-trip review efforts and communicated with the NRC. For example, with respect to effective communications, while the team knew that the licensee had planned to correct any transmission and distribution issues prior to re-starting the facility, the licensee did not effectively communicate that to NRC management during a telephone conference. Another example involved integration of findings associated with each unit's response to the loss of offsite power. The licensee did not identify, until after re-starting Unit 3, that the main generator exciter operated differently than on the other two units. As a result, troubleshooting efforts were limited by plant operations.
 
7.0 Exit Meeting Summary On June 18, June 24, and July 7, 2004, the team presented the preliminary observations from the Augmented Inspection in progress. On July 12, 2004, the Augmented Inspection Team Leader presented the results of the inspection in a public meeting held at Estrella Community College in Goodyear, Arizona. The results of the inspection, which was conducted June 14 through July 12, 2004, were presented to Mr. J. Levine and other members of his staff. Mr. Overton acknowledged the observations presented. Proprietary information reviewed by the team was returned to the facility.
 
ATTACHMENT 1 SUPPLEMENTAL INFORMATION
KEY POINTS OF CONTACT
Licensee
Jim Levine
Greg Overbeck
David Mauldin
Dennis W. Gerlach, Manager, Transmission & Generation Operations, SRP
Mike Gentry, Manager, Grid Operations-PDO, Transmission and Generation Dispatching, SRP
Giang Vuong, Protection Engineer, SRP
Edmundo Marquez, Manager, System Protection, Electronic Systems, SRP
Cary B. Deise, Director, Transmission Planning and Operations, APS
Tom Glock, Power Operations Manager, Power Ops Tech Services, APS
Steven Phegley, Section Leader, Protection Metering, & Automated Control, APS
Steven Kestler, Electrical Engineer, Palo Verde Nuclear Station
Bajranga Aggarwal, Systems Engineer, APS
John Hesser, Director of Emergency Services
Larry Leavitt, Significant CRDR Lead Investigator
David Crozier, Program Leader for Emergency Planning
Martin Rhodes, Security Team Leader
Danne Cole, Security Section Leader
NRC
ITEMS OPENED
URI 05000528/2004012-01; 05000529/2004012-01; 05000530/2004012-01
URI 05000528/2004012-02; 05000529/2004012-02; 05000530/2004012-02
URI 05000528/2004012-03; 05000529/2004012-03; 05000530/2004012-03
 
DOCUMENTS REVIEWED Drawings NUMBER  TITLE  REVISION 01 -J-SPL-003 Control Logic Diagram Essential Spray Pond Auviliary 3 Pumps, Day Tk Valve & Alarms 01-J-EWL-001 Control Logic Diagram Essential Cooling Water Pumps 2 and Surge Tank Fill Valves 01-J-EWL-002 Control Logic Diagram Essential Cooling Water Loop A 0 X-Tie Valves & System Alarms 01 -J-SPL-001 Control Logic Diagram Essential Spray Pond Pumps 3 01-M-EWP-001 P&l Diagram Essential Cooling Water System 29 01 -M-SPP-001 P&l Diagram Essential Spray Pond System Sheet 1 of 3 35 01 -M-SPP-001 P&l Diagram Essential Spray Pond System Sheet 2 of 3 35 01 -M-SPP-001 P&I Diagram Essential Spray Pond System Sheet 3 of 3 35 01 -M-SPP-002 P&l Diagram Essential Spray Pond System  12 A-774-10.110 Palo Verde Station 500KV Switchyard PL912 Closing and 0 SRP Tripping Schematic A774-1 0.1 11/1 Palo Verde Station 500KV Switchyard 500KV Breaker 0 SRP PL912 Schematic Diagram A774-10.112 Palo Verde Station 500KV Switchyard PL912 Fail/Fault 0 SRP and CT Fail/Fault Schematic Diagram A774-10.113 Palo Verde Station 500KV Switchyard PL915 Fail/Fault 0 SRP and CT Fault Schematic Diagram A-774-10.13 Palo Verde Station 500KV Switchyard 500KV Breaker 9 SRP PL932 Closing and Tripping Schematic Diagram A-774-10.14 Palo Verde Station 500KV Switchyard 500KV Switchyard 9 SRP 500KV Breaker Failure & Fault Monitor PL992 & PL995 Schematic Diagram
 
Drawings NUMBER  TITLE  REVISION A-774-10.15 Palo Verde Station 500KV Switchyard 500KV Breaker 12 SRP PL915 Closing and Tripping Schematic Diagram A-774-10.20 Palo Verde Station 500kV Switchyard 500kV Breaker PL 10 SRP 942 Closing & Tripping Schematic Diagram A-774-10.21 Palo Verde Station 500kV Switchyard 500kV Breaker PL 10 SRP 945 Closing & Tripping Schematic Diagram A-774-10.36 Palo Verde Station 500KV Switchyard 500KV Breaker 6 SRP PL915 Schematic Diagram A-774-10.42 Palo Verde Station 500KV Switchyard 500KV Breaker PL 10 SRP 945 Schematic Diagram A-774-10.49 Palo Verde Station 500KV Switchyard 500KV Breaker 7 SRP PL935 Closing and Tripping Schematic Diagram A-774-10.5 Palo Verde Station 500KV Switchyard Devers Line 5 SRP Relaying Schematic Diagram A-774-10.50 Palo Verde Station 500KV Switchyard 500KV Breaker 7 SRP PL938 Closing and Tripping Schematic Diagram A-774-10.82 Palo Verde Station 500KV Switchyard PL972 Closing and I SRP Tripping Schematic Diagram A-774-10.86 Palo Verde Station 500KV Switchyard PL975 Closing and I SRP Tripping Schematic Diagram A-774-10.90 Palo Verde 500KV Switchyard 500KV Hassayampa #1 3 SRP Line Rel 87La Schematic Diagram A-774-10.91 Palo Verde 500KV Switchyard 500KV Hassayampa #1 2 SRP Line Rel 87La Schematic Diagram A-774-20.3 Palo Verde Substation Westwing #1 500KV Line I SRP Relaying2lLa Schematic Diagram Sheet 1 A-774-20.4 Palo Verde Substation Westwing #1 500KV Line 1 SRP Relaying2lLa Schematic Diagram Sheet 2
 
Drawings NUMBER  TITLE  REVISION A-774-20.6 Palo Verde Substation Westwing #1 500KV Line 1 SRP Relaying2lLb Schematic Diagram Sheet I A-774-20.7 Palo Verde Substation Westwing #1 500KV Line 1 SRP Relaying2I Lb Schematic Diagram Sheet 2 A-774-20.9 Palo Verde Substation Westwing #1 500KV Line Relaying I SRP 87Lc Schematic Diagram Sheet 2 A-774-8.2 Palo Verde 500KV SWYD. One Line Diagram SH2 Bays 1 12 SRP & 2 IN-6W A-774-8.3 Palo Verde Station 500kV Switchyard IN-6W 500KV Bays 14 SRP 3 & 4 One Line Diagram Sh.3 K-774-9.1 Palo Verde Substation Bay I Three Line Diagram 11 SRP K-774-9.3 Palo Verde Station 500KV Switchyard Bay 3 Three Line 12 SRP Diagram K-774-9.4 Palo Verde Substation 500KV Switchyard Bay 4 Three 18 SRP Line Diagram K-774-9.6 Palo Verde Station 500KV Switchyard Bay 7 Three Line 1 SRP Diagram G-33417 Sheet 1 of 2, Westwing 230KV Switchyard USBR Liberty 12 APS & Pinn Pk Line Relaying CT/PT Schematic G-33417 Sheet 2 of 2, Westwing 230KV Switchyard WAPA 230KV 12 APS Liberty & Pinn Pk Line Relaying CT-PT Schematic G-33434 Sheet 1 of 1, Westwing 230KV Switchyard WAPA 230KV 9 APS Liberty Line Relaying DC Schematic G-33451 Westwing 230KV Switchyard WAPA 230KV Liberty Line & 14 APS West Bus Tie PCB WW1022 DC Schematic G-33453 Sheet 1 of 1,Westwing 230KV Switchyard WAPA 230KV 16 APS Liberty & Pinn Pk Line PCB WW1 126 Schematic
 
Drawings NUMBER  TITLE  REVISION G-33493 Sheet 1 of 2, Westwing 230KV Switchyard USBR Liberty 1 APS & Pinn Pk Line CCPD Jct. Box Wiring Diagram 01-E-MAB-001 Elementary Diagram Main Generation System Main 13 PVNGS Generator Three Line Metering and Relaying 01 -E-MAB-0012 Elementary Diagram Main Generator System Main 9 PVNGS Generator Three Line Metering and Relaying 01-E-MAB-004 Elementary Diagram Main Generation System Main 8 PVNGS Transformer Three Line Diff, Metering and Relaying 01-E-MAB-006 Elementary Diagram Main Generation System Generator 3 PVNGS & Transformer Primary Protection Unit Tripping 01-E-MAB-007 Elementary Diagram Main Generation System Generator 5 PVNGS & Transformer Primary Protection Unit Tripping 01-E-MAB-008 Elementary Diagram Main Generation System Generator 5 PVNGS & Transformer Primary Protection Unit Tripping 01-E-MAB-009 Elementary Diagram Main Generation System Generator 4 PVNGS & Transformer Primary Protection Unit Tripping 01 -E-MAB-010 Elementary Diagram Main Generation System Generator 8
& Transformer Back-up Protection Unit Tripping 01-E-MAB-011 Elementary Diagram Main Generation System Generator 7
& Transformer Back-up Protection Unt Tripping, 01-E-MAB-011 Elementary Diagram Main Generation System Generator 12
& Transformer Back-up Protection Unit Tripping 01-E-MAB-013 Elementary Diagram Main Generation System Generator 10
& Transformer Unit Tripping Cabling Block Diagram 01-E-NHA-001 Single Line Diagram 480V Non-Class 1E Power System 21 Motor Control Center 1E-NHN-M13 01 -E-NHA-010 Single Line Diagram 480V Non-Class 1E Power System .19 Motor Control Center 1E-NHN-M10
 
Drawings NUMBER  TITLE  REVISION 01-E-NNA-001 Single Line Diagram 120VAC Non-Class 1E Ungrounded 19 Instrument and Control Panel 1E-NNN-D1 I 01 -E-NNA-002 Single Line Diagram 120VAC Non-Class 1E Ungrounded 19 Instrument and Control Panel 1E-NNN-D12 01-E-PHA-001 Single Line Diagram 480V Class 1E Power System Motor 16 Control Center 1E-PHA-M31 01 -E-PHA-002 Single Line Diagram 480V Class 1E Power System Motor 16 Control Center 1E-PHB-M32 13-E-MAA-001 Main Single Line Diagram  21 G-32900 Sheet 1 of 2, Westwing 500KV Switchyard Bays 1 - 9 One 23 Line Diagram G-32900 Sheet 2 of 2, Westwing 500KV Switchyard Bays 10 - 18 * 12 One Line Diagram G-32901 Sheet 1 of 2, Westwing 500KV Switchyard Transformer 28 Bays 1 & 4 One Line Diagram G-32901 Sheet 2 of 2, Westwing 500KV Switchyard Bays 7,10,13 10
  & 16 One Line Diagram G-33300 Westwing 230KV Switchyard Bays 1-9 One Line Diagram 25 G-33301 Sheet 1 of 2, Westwing 230KV Switchyard Bays 10-18 31 One Line Diagram Condition Report/Disposition Reports CRDR 2715726 CRDR 2716011 CRDR 2715941 CRDR 2715667
 
CRDR 2715659 CRDR 2715768 CRDR 2715709 CRDR 2715727 CRDR 2715749 CRDR 2716281 CRDR 2715669 Miscellaneous Documents:
NUMBER  TITLE  REVISION/DATE Security Computer Alarm logs for June 14, 2004 Security Access Transaction Records for June 14, 2004 Day Shift Security Department Logs for June 14, 2004 Sally Port Vehicle Barrier Operating Instructions, as posted on June 14, 2004 Sally Port Vehicle Barrier Operating Instructions, revised on June 17, 2004 PVNGS Emergency Plan, Table 1, 'Minimum  28 Staffing Requirements for PVNGS for Nuclear Power Plant Emergencies"
 
Miscellaneous Documents:
NUMBER  TITLE  REVISION/DATE WO# 2623863 Monthly Inspection of TSC DG Battery and June 9, 2004 Battery Charger WO# 2715869 Perform the Restrike Test for the TSC Diesel June 16, 2004 Generator APS Letter Robert Smith to N. Bruce et al., Final June 5, 2002 Report for the 2002 Palo Verde /Hassayampa Operating Study 2003-04 Winter Palo Verde Unit 2 Uprating Net November 2003 Generating Capacity of 1408MW for Updated Final Safety Analysis Report (UFSAR)
Procedure No. Palo Verde Transmission System Interchange Revision 8 PVTS-01 Scheduling and Congestion Management Procedure PVNGS Technical Specifications, Through November 21, 2003, Amendment No. 150,  Corrected December 12, 2003 NRC Letter M Fields to APS, Palo Verde Nuclear Generating Station Units 1, 2 and 3 - Issuance of Amendments Re: Changes Related to Double Sequencing and Degraded Voltage Instrumentation (TAC Nos. MA4406, MA4407, and MA4408)
APS Letter 102-04310-WEIISABIRKR,  July 16, 1999 Response to NRC Request for Additional Information Regarding Proposed Amendment to Technical Specifications (TS) 3.8.1, AC Sources-Operating and 3.3.7, Diesel Generator (DG)-Loass of Voltage Start (LOVS),
10CFR 50.59 Screening and Evaluation, Revise Revision 0 the UFSAR, Technical Specifications, and Technical Specifications Bases to enhance the means of complying with the requirements of Regulatory Guide 1.93 for offsite power sources
 
Miscellaneous Documents:
NUMBER  TITLE  REVISION/DATE 10CFR 50.59 Screening and Evaluation, S-04- Revision 0 0009, Updated Transmission Grid Stability Study: Salt River Project 20031126 (LDCR 2003F040)
Visual Examination of Welds report number 04-250, component 1-CH-GCBA 1 WOOA Visual Examination of Welds report number 04-250, component 1 CHN-F36 Purification Filter Palo Verde Nuclear Generating Station Design 16 Basis Manual, EW System Palo Verde Nuclear Generating Station Design 13 Basis Manual, SP System PV Unit 2 Archived Operator Log 06/14/2004, 12:10:47 AM, through 06/15/2004, 11:10:30 PM Bulletin 74-09 Deficiency in General Electric Model 4KV August 6, 1974 Magne-Blast Breakers Information General Electric Magne-Blast Circuit Breaker April 17, 1984 Notice 84-29 Problems Information Potential Failure of General Electric Magne- June 12, 1990 Notice 90-41 Blast Circuit Breakers and AK Circuit Breakers Information Grease Solidification Causes Molded Case April 7,1993 Notice 93-26 Circuit Breaker Failure To Close Information Misadjustment Between General Electric 4.16- December 3,1993 Notice 93-91 KV Circuit Breakers and Their Associated Cubicles Information Inoperability of General Electric Magne-Blast January 7, 1994 Notice 94-02 Breaker Because of Misalignment of Close-Latch Spring Information Failures of General Electric Magne-Blast Circuit August 1,1994 Notice 94-54 Breakers To Latch Closed
 
Miscellaneous Documents:
NUMBER  TITLE  REVISION/DATE Information Hardened or Contaminated Lubricants Cause April 21, 1995 Notice 95-22 Metal-Clad Circuit Breaker Failure Information Failures of General Electric Magne-Blast Circuit August 12, 1996 Notice 96-43 Breakers Unit 3 4 Pt Trend chart,"Core Differential Pressures for Loops 1A, IB, 2A, 2B", start time 07:41:15 through 07:41:45 Unit 1 4 Pt Trend chart, "Letdown System Temperature and Flow," start time 6/14/04 07:40:00 through 6/14/04 09:40:00 PV Unit 1 and Unit 3 Archived Operator Logs 6/14/2004 1:30 a.m. through 6/15/2004 5:35 a.m.
 
Calculation 13-MC-CH-508 - CVCS Letdown Heat Exchanger to Purification Filters, Unit 1 350°F Temperature Event During Plant Trip of 6-14-04

Procedures:
NUMBER  TITLE  REVISION/DATE
40EP-9EO07 - Loss of Offsite Power/Loss of Forced Circulation, Revision 10
40EP-9EO10 - Standard Appendices, Revision 33
40OP-9CH01 - CVCS Normal Operations, Revision 35
20SP-OSK08 - Compensatory Measures for the Loss of Security Equipment Effectiveness, Revision 27
 
21SP-OSK11 - Security Contingencies, Revision 13
20DP-OSK29 - Security System Testing, Revision 27
EPIP-01 - Satellite Technical Support Center Actions, Revision 14
EPIP-01 - Satellite Technical Support Center Actions, Revision 15
EPIP-99 - EPIP Standard Appendices, Appendix C, "Forms," Revision 1
EPIP-99 - EPIP Standard Appendices, Appendix D, "Notification," Revision 1
EPIP-99 - EPIP Standard Appendices, Appendix H, "Autodialer Activation," Revision 1
20SP-OSK08 - Compensatory Measures for the Loss of Security Equipment Effectiveness, Revision 27
21SP-OSK11 - Security Contingencies, Revision 13
20DP-OSK29 - Security System Testing, Revision 27
41AL-1RK6B - Panel B06B Alarm Responses, "Mn Gen Neg Seq Pre-Trip," Revision 32
01-P-CHF-201 - Auxiliary Building Isometric, Chemical and Volume Control System Letdown Heat Exchanger, 6/2/1998
 
ATTACHMENT 2 AUGMENTED INSPECTION TEAM CHARTER
 
UNITED STATES NUCLEAR REGULATORY COMMISSION
 
==REGION IV==
611 RYAN PLAZA DRIVE, SUITE 400 ARLINGTON, TEXAS 76011-4005 June 15, 2004 MEMORANDUM TO: Anthony T. Gody, Chief, Operations Branch, Division of Reactor Safety FROM: Bruce Mallett, Regional Administrator /RA/
SUBJECT: AUGMENTED INSPECTION TEAM CHARTER; PALO VERDE NUCLEAR GENERATING STATION, UNITS 1, 2, AND 3, COMPLETE LOSS OF OFFSITE POWER AND MULTIPLE MITIGATING SYSTEM FAILURES In response to the complete loss of all offsite power sources, the trip of all three units, and the failure of the Unit 2 Emergency Diesel Generator "A" to function as required at Palo Verde Nuclear Generating Station on June 14, 2004, an Augmented Inspection Team is being chartered.
 
There was no impact to public health and safety associated with the event. You are hereby designated as the Augmented Inspection Team (AIT) leader.
 
A. Basis On June 14, 2004, at 9:45 a.m. CDT, all offsite power supplies to the Palo Verde Nuclear Generating Station were disrupted, with a concurrent trip of all three units.
 
Additionally, the Unit 2 Emergency Diesel Generator "A" failed to function as required.
 
As a result, the licensee declared a Notice of Unusual Event (NOUE) for all three units at about 9:50 a.m. CDT and elevated to an Alert for Unit 2 at 9:54 a.m. CDT. The licensee and NRC resident inspectors also reported a number of other problems, including the failure of Unit 2 Charging Pump "E," the failure of a Unit 3 steam bypass control valve, multiple breakers failing to operate during recovery operations, and emergency response facility and security interface issues that may have impeded emergency responders. This event meets the criteria of Management Directive 8.3 for a detailed followup inspection in that it involved multiple failures of systems used to mitigate an actual event. The initial risk assessment, though subject to some uncertainties, indicates that the conditional core damage probability was in the high E-4 range.
 
Because the initial risk assessment was in the range for consideration of an AIT and because of multiple failures in systems used to mitigate an actual event, it was decided that an AIT is the appropriate NRC response for this event.
 
The AIT is being dispatched to obtain a better understanding of the event and to assess the responses of plant equipment and the licensee to the event. The team is also tasked with reviewing the licensee's root-cause analyses.
 
B. Scope Specifically, the team is expected to perform data gathering and fact-finding in order to address the following:
1. Develop a complete sequence of events related to the loss-of-offsite power, the multiple unit trips, and the Unit 2 emergency diesel generator failure.
 
2. Assess the performance of plant systems in response to the event, including any design considerations that may have contributed to the event.
 
3. Assess the adequacy of plant procedures used in response to the event.
 
4. Assess the licensee's response to the event, including operator actions and emergency declarations, and any emergency response facility or security interface issues that may have adversely affected response to the event.
 
5. Assess the licensee's determination of the root and/or apparent causes of offsite power loss, emergency diesel generator failure, and other mitigating system(s)
failures.
 
6. Based upon the licensee's cause determinations, review any maintenance related actions which could have contributed to the event initiation or produced subsequent response problems.
 
7. Review the licensee's assessment of coordination activities with off-site electrical dispatch organizations prior to and during the event.
 
8. Provide input to the regional Senior Reactor Analyst for further assessment of risk significance of the event.
 
C. Guidance The Team will report to the site, conduct an entrance meeting, and begin inspection no later than June 16, 2004. A report documenting the results of the inspection should be issued within 30 days of the completion of the inspection. While the team is on site, you will provide daily status briefings to Region IV management. The team is to emphasize fact-finding in its review of the circumstances surrounding the event, and it is not the responsibility of the team to examine the regulatory process. The team should notify Region IV management of any potential generic issues identified related to this event for discussion with the Program Office. Safety concerns that are not directly related to this event should be reported to the Region IV office for appropriate action.
 
For the period of the inspection, and until the completion of documentation, you will report to the Regional Administrator. For day-to-day interface you will contact Dwight Chamberlain, Director, Division of Reactor Safety. The guidance in Inspection Procedure 93800, "Augmented Inspection Team," and Management Directive 8.3, "NRC Incident Investigation Procedures," apply to your inspection. This Charter may be modified should the team develop significant new information that warrants review. If you have any questions regarding this Charter, contact Dwight Chamberlain at (817)
860-8180.
 
Distribution:
B. Mallett T. Gwynn J. Dixon-Herrity J. Dyer R. Wessman T. Reis H. Berkow S. Dembeck M. Fields D. Chamberlain A. Howell C. Marschall T. Pruett J. Clark V. Dricks W. Maier N. Salgado G. Warnick J. Melfi
 
ATTACHMENT 3 Sequence of Events
Electrical Sequence of Events
07:40:55.747 Fault #1 inception; Fault #1 type = C-N; Fault #1 cause/location = Phase down (broken bells)
reported near 115th Ave. & Union Hills (WW-LBX Line)
At Westwing, the Liberty line relays operated properly and issued a trip signal. Incorporated in this scheme is a Westinghouse high-speed "AR" auxiliary tripping relay that is used to "multiply" that trip signal toward both trip coils of two breakers (WW1022 & WW1126). The "AR" relay failed (partially) and issued the trip signal to breaker WW1126 only. Since the trip signal was never successfully issued to WW1022, breaker failure for WW1022 was also never initiated (this would have cleared the Westwing 230kV West bus and isolated the fault). Therefore, the "remote" ends of all lines feeding into the 525kV and 230kV yards were required to trip to isolate the fault.
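The entries that follow give elapsed time both as timestamps and as 60 Hz cycles after fault inception. As an illustrative consistency check (not part of the original attachment):

```latex
\Delta t \;=\; \frac{N_{\text{cycles}}}{60\ \text{Hz}},
\qquad
\frac{4.0\ \text{cycles}}{60\ \text{Hz}} \;\approx\; 0.067\ \text{s},
```

which matches the first entry below (07:40:55.814 minus 07:40:55.747 = 0.067 s).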
 
07:40:55.814 4.0 cycles after fault #1 inception WW1126 opened (LBX / PPX 230kV crossover breaker)
07:40:55.822 4.5 cycles after fault #1 inception LBX1282 opened (Westwing 230kV Line)
07:40:56.115 22.1 cycles after fault #1 inception AFX732 & AFX735 opened (Westwing 230kV Line)
07:40:56.122 22.5 cycles after fault #1 inception YP452 & YP852 opened (Westwing 525kV Line)
07:40:56.136 23.3 cycles after fault #1 inception WW1426 & WW1522 opened (Agua Fria 230kV Line)
07:40:56.142 23.7 cycles after fault #1 inception WW856 & WW952 opened (Yavapai 525kV Line)
07:40:56.165 25.1 cycles after fault #1 inception DV322 & DV722 & DV962 opened (Westwing 230kV Line)
07:40:56.172 25.5 cycles after fault #1 inception WW1726 & WW1822 opened (Deer Valley 230kV Line)
07:40:56.196 26.9 cycles after fault #1 inception RWYX482 & RWYX582 & RWYX782 opened (Westwing 230kV Line)
  (Waddell 230kV Line)
  (230/69kV Transformer #8)
 
07:40:56.515 46.1 cycles after fault #1 inception WW1222 opened (Pinnacle Peak 230kV Line)
t = unknown Surprise Lockout "L" operated (230/69kV Transformer #4 Differential & B/U Over-Current)
07:40:56.548 48.1 cycles after fault #1 inception SC622 & SC922 & SC262 opened (Surprise 230/69kV Transformer #4)
07:40:57.549 108.1 cycles after fault #1 inception SC1322 opened (Westwing 230kV Line)
07:40:57.800 123.2 cycles after fault #1 inception RWP-CT2A opened (Redhawk Combustion Turbine 2A)
07:40:57.807 123.6 cycles after fault #1 inception RWP-ST1 opened (Redhawk Steam Turbine 1)
07:40:57.814 124.0 cycles after fault #1 inception RWP-CT1A opened (Redhawk Combustion Turbine 1A)
07:40:58.339 155.5 cycles after fault #1 inception RIV762 opened (Westwing 69kV Line)
07:40:58.372 157.5 cycles after fault #1 inception HH762 opened (Westwing 69kV Line)
t = unknown Westwing Lockout "AK" operated (230/69kV Transformer #11 Differential & B/U Over-Current)
07:40:59 (EMS) WW2026 & WW2122 opened (Westwing 230/69kV Transformer #11 - High Side)
07:40:59.272 211.5 cycles after fault #1 inception WK362 opened (Westwing 69kV Line)
07:40:59.489 224.5 cycles after fault #1 inception HMX935 & HAAX938 opened (Hassayampa - Arlington 525kV Line)
  (Time stamp provided by SRP)
07:41:00 (EMS) WW862 & WW962 & WW1362 opened (Westwing 230/69kV Transformer #11 - Low Side)
07:41:00.392 278.7 cycles after fault #1 inception WW752 opened (South 345kV Line)
07:41:01.982 Fault #1 type changed = B-C-N
 
07:41:02.144 383.8 cycles after fault #1 inception PSX832 closed auto (Perkins Cap-Bank Bypass)
(Time stamp provided by SRP)
07:41:02.154 Fault #1 type changed = C-N
07:41:02.799 Fault #1 type changed = B-C-N
07:41:03.966 493.1 cycles after fault #1 inception SC562 opened (McMicken 69kV Line)
07:41:05.373 577.6 cycles after fault #1 inception MQ562 opened (McMicken 69kV Line)
07:41:07.849 12.102 seconds after fault #1 inception HAAX922 & HAAX925 opened (Palo Verde 525kV Line #2)
(Time stamp provided by SRP)
07:41:07.851 12.104 seconds after fault #1 inception PLX972 & PLX975 opened (Hassayampa 525kV Line #2)
(Time stamp provided by SRP)
07:41:07.859 12.112 seconds after fault #1 inception HAAX932 opened (Palo Verde 525kV Line #1)
  (Time stamp provided by SRP)
07:41:07.875 12.128 seconds after fault #1 inception PLX982 & PLX985 opened (Hassayampa 525kV Line #3)
  (Time stamp provided by SRP)
07:41:07.878 12.131 seconds after fault #1 inception HAAX912 & HAAX915 opened (Palo Verde 525kV Line #3)
  (Time stamp provided by SRP)
07:41:07.880 12.133 seconds after fault #1 inception PLX942 & PLX945 opened (Hassayampa 525kV Line #1)
  (Time stamp provided by SRP)
07:41:08.104 Fault #1 type changed = A-B-C-N
07:41:10.445 14.698 seconds after fault #1 inception NV1052 & NV1156 opened (Westwing 525kV Line)
07:41:10.456 14.709 seconds after fault #1 inception WW556 & WW652 opened (Navajo 525kV Line)
07:41:12 (EMS) WW424J opened (Westwing 230kV West Bus Reactor)
 
07:41:20.005 24.258 seconds after fault #1 inception PLX992 opened (Devers 525kV Line)
(PLX995 out-of-service at this time)
(Time stamp provided by SRP)
07:41:20.113 24.366 seconds after fault #1 inception PLX932 & PLX935 opened (Rudd 525kV Line)
(Time stamp provided by SRP)
07:41:20.145 24.398 seconds after fault #1 inception RUX912 & RUX915 opened (Palo Verde 525kV Line)
(Time stamp provided by SRP)
07:41:20.864 25.117 seconds after fault #1 inception PLX912 & PLX915 opened (Westwing 525kV Line #1)
(Time stamp provided by SRP)
07:41:20.873 25.126 seconds after fault #1 inception WW1456 & WW1552 opened (Palo Verde 525kV Line #2)
07:41:20.874 25.127 seconds after fault #1 inception WW1156 & WW1252 opened (Palo Verde 525kV Line #1)
07:41:20.895 25.148 seconds after fault #1 inception PLX922 & PLX925 opened (Westwing 525kV Line #2)
(Time stamp provided by SRP)
07:41:23.848 28.101 seconds after fault #1 inception PLX988 opened (Palo Verde Unit-3)
(Time stamp provided by SRP)
07:41:24.280 System Frequency = 59.514 Hz (Measured at APS Reach Substation)
07:41:24.641 28.894 seconds after fault #1 inception PLX918 opened (Palo Verde Unit-1)
(Time stamp provided by SRP)
07:41:24.652 28.905 seconds after fault #1 inception PLX938 opened (Palo Verde Unit-2)
(Time stamp provided by SRP)
07:41:25 (DOE) ED4-122 & ED4-322 opened (DOE ED4 Substation)
Tripped on under-frequency (Note frequency low at 07:41:24.280)
07:41:25 (EMS) ML142, ML542, ML1042 & ML1442 opened (Moon Valley 12kV Feeders)
Tripped on under-frequency (Note frequency low at 07:41:24.280)
07:41:28 (DOE) MEX794 closed auto (Mead Cap Bank bypass)
 
07:41:34.615 38.868 seconds after fault #1 inception MEX1092 & MEX1692 opened (Perkins - Westwing 525kV Line)
Fault #1 cleared
07:42:22.773 System Frequency = 59.770 Hz (Measured at APS Reach Substation)
 
ATTACHMENT 3 Sequence of Events
Unit 1 Sequence of Events
0741 Startup Transformer #2 Breaker 945 Open
     Excessive Main Generator and Field Currents Noted
     Engineered Safeguards Features Bus Undervoltage
     Loss of Offsite Power Load Shed Train A and B
     Emergency Diesel Generator Train A and B Start Signal
     Low Departure from Nucleate Boiling Ratio Reactor Trip
     Master Turbine Trip
     Main Turbine Mechanical Overspeed Trip
     Emergency Diesel Generator "A" Operating (10 Second Start Time)
Emergency Diesel Generator "B" Operating (13 Second Start Time*)
0751 Manual Main Steam Isolation System Actuation
0758 Declared Notice of Unusual Event (loss of essential power for greater than 15 minutes)
0810 Both Gas Turbine Generator Sets Started,
#1 GTG is supplying power to NAN-S07
0813 Closed 525 kV Breaker 552-942. The East bus is powered from Hass #1
0838 Restored power to Startup Transformer X01
0844 Restored power to Startup Transformer X03
0855 Fire reported in 120 ft Aux building. Fire brigade confirmed that no fire existed, but paint was heated, causing fumes. Later it was confirmed that the fumes were caused by the elevated temperature of the letdown heat exchanger when it failed to isolate.
 
0900 Hi Temp Abnormal Operating Procedure entered for letdown heat exchanger outlet temperature offscale high.
 
1002 Reset Generator Protective Trips (volts/hertz; Backup under-frequency)
Palo Verde Switchyard Ring Bus restored
1159 Paralleled DG B with bus and cooled down engine, restoring the in-house buses
1207 Emergency Coordinator terminated NOUE for all three units
1248 Paralleled DG A with bus and cooled down
2209 Noted grid voltage greater than 535.5 kV; Shift Manager coordinated with ECC
6/15 0005 Restored CVCS letdown per Std Appendix 12; started Charging Pump "A"
 
0155 Established RCP seal injection and controlled bleedoff
0241 Started 2A RCP; had to secure due to low running amps (other two units had RCPs running; what were the amps at the time?); exiting of EOP delayed due to switchyard conditions
0305 Exited Loss of Letdown AOP after restoration of letdown per Standard Appendix 12 of EOPs
0345 Palo Verde Switchyard E-W voltage at approx. 530.7 kV
0818 Started RCPs 2A and 1A
0920 Started RCPs 2B and 1B
0930 Exited EOP 40EP-9EO07, Loss of Offsite Power/Loss of Forced Circulation
 
ATTACHMENT 3 Sequence of Events
Unit 2 Sequence of Events
0740 4.16kV Switchgear 3 Bus Trouble Alarm
     Generator Negative Sequence Alarm
     4.16kV Switchgear 4 Bus Trouble Alarm
0741 Main Transformer B Status Trouble Alarm
     Main Transformer A Status Trouble Alarm
     ESF Bus Undervoltage Channel A-2
     ESF Bus Undervoltage Channel B-2
     LOP/Load Shed B
     ESF Bus Undervoltage Channel B-3
     DG Start Signal B
     LOP/Load Shed A
     ESF Bus Undervoltage Channel A-4
     DG Start Signal A
     LO DNBR Channels A, B, C, & D Trip
     RPS Channels A, B, C, & D Trip
     Main Generator 525kV Breaker 935 Open
     Mechanical Overspeed Trip of Main Turbine
0751 Manually initiated Main Steam Isolation Signal
0755 Declared an Alert for Loss of All Offsite Power to Essential Busses for Greater than 15 Minutes
0901 Energized 13.8kV Busses 2E-NAN-S03 and 2E-NAN-S05
0927 Energized 4.16kV Bus 2E-PBA-S03
0951 Exited Alert
1001 Energized 13.8kV Bus 2E-NAN-S01
1024 Energized 13.8kV Bus 2E-NAN-S02
1132 Started Charging Pump A
1618 Engineering and Maintenance review concluded that Charging Pump E was available for service after fill and vent
1714 Started Charging Pump E
1716 Started RCP 1A
1722 Started RCP 2A
1806 Stopped RCPs 1A and 2A on low motor amperage. ECC contacted to adjust grid voltage as low as possible
 
2040 Started RCPs 1A and 2A
2051 Stopped RCPs 1A and 2A on low running amperage
6/15 0400 Started RCPs 1A and 2A
0610 Exited Emergency Operating Procedures
 
ATTACHMENT 3 Sequence of Events
Unit 3 Sequence of Events
07:40 Generator Undervoltage
      Negative Sequence Trip
      Master Turbine Trip
      3ENANS01 Bus Undervoltage
      Reactor Trip Circuit Breakers Open
07:41 Exciter Voltage Regulator Mode Change
      Unit 3 Gen 525 kV Bkr 985 Opens
      Gen Phase B & C Current Alarm
      Generator Field Current
      ESF Bus Undervoltage Ch A-2
      LOP Load Shed B
      EDG B Start Signal
      CEDM MG Set A & B Input Bkr Open
      LOP Load Shed A
      EDG A Start Signal
      Turbine Overspeed Mechanical Trip
      ESF Bus UV A-1; A-4 Alarm
      13.8 kV Swgr 1 & 2 Load Shed
      Main Generator Gross MW Low (402 MW)
      Power Load Unbalance Alarm
      VOPT Ch A, B, C & D
      Turbine Bypass Gp X Quick Open
07:42 Lo SG Press
      Unit 3 Gen 525 kV Bkr 988 Open
07:43 MSIS actuates automatically on Lo SG press
23:41 Started RCP 1A
23:45 Started RCP 2A
6/15 00:40 Exited EOP
16:37 Started RCP 1B
6/16
 
02:07 started RCP 2B
 
ATTACHMENT 3 Sequence of Events
Miscellaneous
0741 Loss of Off-Site Power
0750
0754 Unit 2 Alert
0758 Unit 1, 3 NOUE
0759 Unit 2 NAN sent by radio
0800
0807 Unit 1 NAN signed (not sent)*
0815
0817 TSC D/G Tripped*
     OSC Staffed
0818 Unit 2 NAN initiated*
0819 ERDS activated
0840 NRC ENS notification
0854
0900 Unit 1 Intermediate Bus (S06) re-energized from S/U Transformer
0909
0911
0927
0930 TSC Staff relocated to STSC
0936
0951 Unit 2 downgraded to NOUE
0952 EOF staffed; TSC staff moved from STSC to TSC
1001 Last TSC Key person on-site
1005 Unit 2 NOUE transmitted from EOF
1027 TSC staffed*
     EC turnover complete
1030
1038
1040
1042
1045
1207 Event Terminated
1215 NAN for event termination transmitted by EOF
1216 TSC secured
 
Exempt From Public Disclosure in Accordance with 10 CFR 2.390
ATTACHMENT 4 INFORMATION EXEMPT FROM PUBLIC DISCLOSURE
 
8.0 Proprietary Information
8.1 Electrical Grid Stability
a. Inspection Scope
The team reviewed the local electric grid stability following the June 14, 2004, loss-of-offsite power event to ensure the adequacy of the grid protection to prevent cascading trips of 500kV and 230kV switchgear. In addition, the team reviewed local switchyard, substation, generator, and transmission line protective relay schemes to ascertain if any generic grid reliability or independence weakness could be identified.
 
b. Observations and Findings
Independence
As indicated in the Inspection Report above, GDC 17 requires that power from the offsite transmission network be supplied by "two physically independent circuits (not necessarily on separate rights of way) designed and located so as to minimize to the extent practical the likelihood of their simultaneous failure under operating and postulated accident and environmental conditions."
Grid Stability
8.2 Protected Area Access Problems
a. Inspection Scope
The team interviewed members of the licensee's emergency planning organization and security department and reviewed security department logs to determine the cause of protected area access problems encountered during the loss of off-site power. The team reviewed security procedures, the licensee's initial findings, and immediate corrective actions taken on June 17, 2004. The team also reviewed the licensee's preliminary findings attached to significant CRDR 2715749, initiated to investigate and determine the root causes for the emergency planning problems arising from the loss of off-site power and plant trip on June 14, 2004.
 
b. Observations and Findings
 
ATTACHMENT 5 UNRESOLVED ITEM DETAILS
05000528/2004012-001; 05000529/2004012-001; 05000530/2004012-001  URI  Review licensee's root and/or apparent cause determination, corrective actions, and compliance associated with a number of loss-of-offsite power event related issues. (See Table 1)
05000528/2004012-002; 05000529/2004012-002; 05000530/2004012-002  URI  Review design control and compliance aspects of a number of loss-of-offsite power event related issues. (See Table 1)
05000528/2004012-003; 05000529/2004012-003; 05000530/2004012-003  URI  Review use of Plant Technical Specifications during emergencies. (See Table 1)
}}
}}


SUMMARY OF FINDINGS IR 05000528/2004-012; 05000529/2004-012; 05000530/2004-012; June 18, 2004; Palo Verde Nuclear Generating Station, Units 1, 2, and 3; Augmented Inspection The report covered a period of inspection by six inspectors and a contractor. The significance of most findings is indicated by their color (Green, White, Yellow, Red) using Inspection Manual Chapter 0609, "Significance Determination Process." Findings for which the Significance Determination Process does not apply may be Green or be assigned a severity level after NRC management review. The NRC's program for overseeing the safe operation of commercial nuclear power reactors is described in NUREG-1649, "Reactor Oversight Process," Revision 3, dated July 2000.

NRC-Identified and Self-Revealing Findings On June 14, 2004, at 7:41 a.m. MDT, a ground fault occurred on Phase "C" of a 230 kV transmission line in northwest Phoenix, Arizona, between the "Westwing" and "Liberty" substations located approximately 47 miles from the Palo Verde Nuclear Generating Station. A failure in the protective relaying resulted in the ground fault not being isolated from the local grid for approximately 38 seconds. This uninterrupted fault cascaded into the protective tripping of a number of 230kV and 525kV transmission lines, a nearly concurrent trip of all three Palo Verde Nuclear Generating Station units, and the loss of six additional generation units nearby within approximately 30 seconds of fault initiation. This represented a total loss of nearly 5,500 MWe of local electric generation. Because of the loss-of-offsite power, the licensee declared a Notice of Unusual Event for all three units at approximately 7:50 a.m. MDT. The Unit 2 Train "A" Emergency Diesel Generator started, but failed early in the load sequence process due to a short-circuited diode in the exciter rectifier circuit that had less than seventy hours of run time. This resulted in the Train "A" Engineered Safeguards Features busses de-energizing, which limited the availability of certain safety equipment for operators. Because of this failure, the licensee elevated the emergency declaration for Unit 2 to an Alert at 7:54 a.m. MDT.

An NRC Augmented Inspection Team was dispatched to the site later that same day and found that the licensee's response to the event, while generally acceptable, was complicated by a number of equipment failures, procedure issues, and human performance issues with diverse apparent causes and with varying degrees of significance.

TABLE OF CONTENTS 1.0 Introduction ...........................................................

1.1 Event Description ..................

1.2 System Descriptions ....................................................

1.3 Preliminary Risk Significance of Event ......................................

2.0 System Performance and Design Issues .....................................

2.1 Off-site Power System Issues .............................................

2.2 Unit 1, Atmospheric Dump Valve 185 Failure .................................

2.3 Unit 1, Letdown Heat Exchanger Isolation Failure ..............................

2.4 Unit 2, Train "A" Emergency Diesel Generator Failure ...........................

2.5 Unit 3, System Interactions During Event ....................................

2.6 Unit 3, Reactor Coolant Pump 2B Lift Oil Pump Trip ............................

2.7 Unit 3, Low Pressure Safety Injection System In-Leakage .......................

2.8 Unit 1 and 3, General Electric Magne-Blast Breaker Failures .....................

3.0 Human Performance and Procedural Aspects of the Event .......................

3.1 Turbine-Driven Auxiliary Feedwater Drains ...................................

3.2 Unit 2, Train "E" Positive Displacement Charging Pump Trip ......................

3.3 Entry Into Technical Specification Action Statements ...........................

3.4 Technical Support Center Emergency Diesel Generator Trip .....................

3.5 Initial Notification of Event to State and Local Officials ..........................

3.6 Emergency Response Organization Challenges ...............................

4.0 Coordination with Off-Site Electrical Organizations .............................

5.0 Risk Significance of the Event .............................................

6.0 Assessment of Event Response ...........................................

7.0 Exit Meeting Summary ..................................................

ATTACHMENT 1 - Supplemental Information ATTACHMENT 2 - Augmented Inspection Team Charter ATTACHMENT 3 - Sequence of Events ATTACHMENT 4 - System Figures Figure 1 - Palo Verde Nuclear Generating Station Transmission System ATTACHMENT 5 - Proprietary Information

Report Details 1.0 Introduction 1.1 Event Description On June 14, 2004, at 7:41 a.m. MDT, a ground fault occurred on Phase "C" of a 230 kV transmission line in northwest Phoenix, Arizona, between the "Westwing" and "Liberty" substations located approximately 47 miles from the Palo Verde Nuclear Generating Station (PVNGS). A failure in the protective relaying resulted in the ground fault not being isolated from the local grid for approximately 38 seconds. This uninterrupted fault cascaded into the protective tripping of a number of 230kV and 525kV transmission lines, a nearly concurrent trip of all three PVNGS units, and the loss of six additional generation units nearby within approximately 30 seconds of fault initiation. This represented a total loss of nearly 5,500 MWe of local electric generation. Because of the loss-of-offsite power (LOOP), the licensee declared a Notice of Unusual Event (NOUE) for all three units at approximately 7:50 a.m. MDT.

The Unit 2 Train "A" Emergency Diesel Generator (EDG) started, but failed early in the load sequence process due to a short-circuited diode in the exciter rectifier circuit that had less than seventy hours of run time. This resulted in the Train "A" Engineered Safeguards Features (ESF, or safety) busses de-energizing, which limited the availability of certain safety equipment for operators. Because of this failure, the licensee elevated the emergency declaration for Unit 2 to an Alert at 7:54 a.m. MDT.

An NRC Augmented Inspection Team (AIT) was dispatched to the site later that same day and found that the licensee's response to the event, while generally acceptable, was complicated by a number of equipment failures, procedure issues, and human performance issues with diverse apparent causes and with varying degrees of significance. For example:

  • The Technical Support Center (TSC) emergency diesel generator failed because a test switch was not returned to its proper position following maintenance six days prior to the event. As a result, the emergency response organization assembled in the alternate TSC. This resulted in some confusion and posed some unique challenges to the emergency response organization.
  • The ability of the licensee to conduct automatic dial out for emergency responders and to develop protective action recommendations, had they been needed, appeared to have been affected by the loss of power.
  • Other facility issues were identified which could have impeded emergency responders but did not during this event.
  • An Atmospheric Dump Valve (ADV) on Unit 1 drifted closed due to an apparent equipment malfunction which posed a minor operational nuisance to the control room operators during the event.
  • Operators did not anticipate that the Unit 1 letdown system would not automatically isolate because a temporary modification was not fully understood or translated into operating procedures. This resulted in high temperatures in that system. The high temperatures resulted in fumes being generated as paint heated up which precipitated a fire brigade response. This complicated the Unit 1 event.
  • The Unit 2 Positive Displacement Charging Pump "E" was temporarily lost due to human performance errors.
  • An unanticipated control interaction in the Unit 3 steam bypass control valve system resulted in a momentary opening of all Unit 3 steam bypass valves and an unanticipated main steam isolation signal. The main steam isolation signal only slightly complicated the Unit 3 operators response to the loss-of-offsite power event.
  • A check-valve leakage problem in the Unit 3 safety injection system resulted in operators having to manually depressurize the low-pressure safety injection system three times during the event. This posed an unnecessary additional distraction during the event.
  • Two Magne-Blast circuit breakers failed to operate during recovery operations in Unit 1 and Unit 3, which delayed electrical system recovery efforts.

Despite the number of challenges to the plant operating staff and management, all three units were safely shut down and placed in a stable condition immediately following the loss-of-offsite power event, and power restoration efforts began immediately. With the exception of the local 525 kV transmission grid surrounding the Palo Verde switchyard, the Arizona, California, and Nevada electrical grid remained relatively stable, registering the fault only through some minor frequency and voltage fluctuations. This was notable considering the amount of generation lost. The total local generation lost during the event included the three Palo Verde units, three co-generation units at the Red Hawk generating station, and three co-generation units at the Arlington generating station, for a total of approximately 5,500 megawatts of electrical generation.

In the following sections, each pertinent aspect of the event is discussed in detail.

Section 2.0 contains the team's findings in the area of system performance and design.

Section 3.0 contains the team's findings in the area of human performance and procedures. Section 4.0 contains the team's findings associated with the facility's interaction with off-site entities. Section 5.0 includes a summary of the NRC analysis associated with the overall risk significance of the event. Finally, Section 6.0 contains the team's overall assessment of the licensee's response to the event.

1.2 System Descriptions 1.2.1 Off-site Power Transmission and Distribution Systems a. General The Palo Verde Nuclear Generating Station is connected by its associated transmission system to the Arizona-New Mexico-California-Southern Nevada extra high voltage (EHV) grid which is interconnected to other EHV systems within the Western System Coordinating Council (WSCC).

b. Palo Verde Nuclear Generating Station Switchyard The PVNGS switchyard consists of two 500 kV buses which are connected to the three PVNGS 525/22.8 kV main step-up transformers, and seven transmission lines, using a breaker and a half scheme. A breaker and a half scheme uses two breakers to connect the source of power to the switchyard or transmission line; both breakers are required to open to isolate a fault in the system (a simplified sketch of this isolation logic follows the line list below). This scheme is used to increase the reliability of power and allows flexibility for maintenance. The seven 525 kV transmission lines comprising the Palo Verde transmission system are situated in four corridors from the PVNGS switchyard as follows:

One line to the Devers substation (240 mi.)

Three lines to the Hassayampa substation (3 mi.)

One line to the Rudd substation (25 mi.)

Two lines to the Westwing 500 kV substation (44 mi.)
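The following minimal sketch illustrates the isolation logic of a breaker and a half bay described above; the breaker and line names are hypothetical placeholders, not actual PVNGS switchyard designations.

```python
# Minimal sketch of breaker-and-a-half isolation logic (illustrative only;
# breaker and line names are hypothetical, not PVNGS designations).
# Each line terminates between two breakers; both must open to isolate it.
line_breakers = {
    "LINE-1": ("CB-1", "CB-2"),
    "LINE-2": ("CB-2", "CB-3"),  # the middle breaker CB-2 is shared by both lines
}

def line_isolated(line, open_breakers):
    """A line is isolated only when every breaker serving it is open."""
    return all(breaker in open_breakers for breaker in line_breakers[line])

# A fault on LINE-1 with only one breaker open is not isolated, which is why
# breaker-failure relaying (or the remote ends) must act when a breaker fails to trip.
print(line_isolated("LINE-1", {"CB-1"}))          # False
print(line_isolated("LINE-1", {"CB-1", "CB-2"}))  # True
```

The shared middle breaker is what gives the scheme its maintenance flexibility: either breaker in a bay can be removed from service while the line remains connected through the other.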

c. West Wing Substation The Westwing substation is comprised of a two-bus 230 kV section and a two-bus 500 kV section. The 500 kV section is connected to the adjacent 230 kV Westwing section through three 525/345/230 kV load tap-changing transformers. The Westwing 230 kV buses are connected to the transmission system using a breaker and a half scheme as follows:

One line to the Surprise substation
One line to the Pinnacle Peak substation
One line to the Liberty substation
One line to the Agua Fria substation
One line to the Deer Valley substation
One line to the New Waddell substation
Two 230/69 kV transformers feeding the Arizona Public Service (APS) distribution system
d. Hassayampa Switchyard The Hassayampa substation is located three miles from the PVNGS switchyard. It consists of two 500 kV buses connected to the PVNGS switchyard and several other generating stations and substations through a breaker and a half scheme, as follows:

Three lines to the PVNGS switchyard (3 mi.)

Two lines to the Red Hawk switchyard (1 mi.)

One line to the Jojoba substation (20 mi.)

One line to the North Gila substation (110 mi.)

One line to the Mesquite switchyard (0.5 mi.)

One line to the Arlington Valley switchyard (1 mi.)

One line to the Harquahala switchyard (30 mi.)

The three lines to the PVNGS switchyard were equipped with negative sequence relays intended to serve as pole-mismatch, or open-conductor, protection for the Hassayampa to Palo Verde transmission lines. Personnel employed by APS indicated that this relaying was set to trip on 20% negative sequence current after a finite time delay of 5 seconds.
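For reference, negative sequence current is the standard symmetrical-component quantity (textbook definition, not quoted from the licensee's relay documentation):

```latex
I_2 \;=\; \tfrac{1}{3}\left(I_a + a^{2} I_b + a I_c\right),
\qquad a = 1\angle 120^{\circ},
```

where I_a, I_b, and I_c are the three phase currents. The report does not state the reference quantity for the 20% setting (for example, rated current or positive-sequence current), so the setpoint is cited here only as reported.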

1.2.2 On-site Power Distribution System a. General Power is supplied to the PVNGS auxiliary buses from the offsite power supply through three startup transformers. In addition, during normal plant operation, power for the onsite non-Class 1E alternating current (ac) system is supplied through the unit auxiliary transformer connected to the main generator isolated phase bus. The non-Class 1E ac buses normally are supplied through the unit auxiliary transformer, and the Class 1E buses normally are supplied through the startup transformers. Each unit's non-Class 1E power system is divided into two parts. Each of the two parts supplies a load group including approximately half of the unit auxiliaries. Three startup transformers connected to the 525 kV switchyard are shared between Units 1, 2, and 3 and are connected to the 13.8 kV buses of the units. Each startup transformer is capable of supplying 100% of the startup or normally operating loads of one unit simultaneously with the ESF loads associated with two load groups of one other unit. The 4160 V Class 1E buses are each normally supplied by an associated 13.8/4.16 kV auxiliary transformer, and receive standby power from one of the six standby diesel generators.

The Class 1E 4160 V system supplies power to 480 V and lower distribution voltages through 18 4160/480 V load center transformers.

b. Palo Verde Nuclear Generating Station Generator Protective Relaying The main generator protection schemes include relaying designed to protect the generators against internal as well as external faults. Protection against external faults includes backup distance relaying and negative sequence time over-current relaying.

The backup distance relaying provides backup protection for 24 kV and 525 kV system faults close to the switchyard. The distance relay operates through an external timer. If the fault persists and the time delay step is completed, a lockout relay trips the unit auxiliary transformer 13.8 kV breakers, generator excitation, 525 kV generator unit breakers, main turbine, and the main transformer cooling pumps. The lockout relay also initiates transfer of station auxiliary loads.

The generator negative sequence time over-current relay provides generator protection against possible damage from unbalanced currents resulting from prolonged faults or unbalanced load conditions. The relay operates through a lockout relay to trip the unit auxiliary transformer 13.8 kV breakers, generator excitation, 525 kV generator unit breakers, main transformer cooling pumps, and the main turbine. The negative sequence relay also incorporates a sensitive alarm circuit that, in conjunction with a separately mounted ammeter, prompts operator action on relatively low values of negative sequence current (just above normal system unbalance).

c. Emergency Diesel Generators The Class 1E ac system distributes power at 4.16 kV, 480 V, and 120 V to all Class 1E loads. Also, the Class 1E ac system supplies power to certain selected loads that are not directly safety-related but are important to the plant. The Class 1E ac system

contains standby power sources (i.e., emergency diesel generators) that automatically provide the power required for safe-shutdown in the event of loss of the Class 1E bus voltage.

In the event that preferred power is lost, the Class 1E system functions to shed Class 1E loads and to connect the standby power source to the Class 1E busses. The load sequencer then functions to start the required Class 1E loads in programmed time increments.
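As an illustration of the sequencing concept described above, the sketch below simply prints a schedule of programmed steps; the load names and step times are hypothetical placeholders, not the actual PVNGS sequencer program.

```python
# Illustrative load-sequencer timeline (load names and step times are
# hypothetical placeholders, not the actual PVNGS sequencer program).
SEQUENCE = [
    (0.0,  "Close EDG output breaker to the Class 1E 4.16 kV bus"),
    (5.0,  "Start first block of safety loads"),
    (10.0, "Start second block of safety loads"),
    (15.0, "Start third block of safety loads"),
]

def run_sequencer():
    """Print each programmed step relative to bus re-energization."""
    # A real sequencer actuates relays on hardware timers; this only shows the schedule.
    for t_step, action in SEQUENCE:
        print(f"t = {t_step:4.1f} s: {action}")

run_sequencer()
```

Stepping the loads in programmed increments keeps the diesel generator from seeing all of its starting current at once.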

d. Station Blackout Gas Turbine Generator Sets A non-safety related Alternate AC (AAC) power source consisting of two redundant gas turbine generators (GTG) is available to provide power to cope with a four hour station blackout event in any one nuclear unit. One GTG is analyzed to supply all required station blackout loads, which are located on the 'A' train.

Each GTG has a minimum continuous output rating of 3400kW at 13.8kV under worst case anticipated site environmental conditions. This rating is sufficient to provide power to the loads identified as being important for coping with a postulated station blackout.

e. Technical Support Center Emergency Diesel Generator The technical support center diesel generator provides standby alternating current to the 480 V electrical distribution panel that supplies all electrical power to the technical support center emergency planning facility. The diesel engine is cooled by a self-contained cooling water system with an air cooled radiator. The radiator is in turn cooled by an electric motor driven fan. The fan motor is powered by the technical support center electrical power distribution panel. Normal electrical power for the technical support center comes from the off-site electrical power supply to Unit 1.

During a loss of off-site power, when power is lost to the technical support center electrical power distribution panel, the technical support diesel generator automatically starts and re-energizes the technical support center electrical loads, including the diesel engine radiator cooling fan.

1.2.3 Chemical Volume and Control System The chemical and volume control system controls the purity, volume, and boric acid content of the reactor coolant. Water removed from the reactor coolant system is cooled in the regenerative heat exchanger. From there, the coolant flows to the letdown heat exchanger and then through a filter and a demineralizer where corrosion and fission products are removed. It is then sprayed into the volume control tank and returned by the charging pumps to the regenerative heat exchanger where it is heated prior to returning to the reactor coolant system.

When the vital 4160 VAC buses are de-energized, the charging pump breakers must be manually reset and the pumps restarted from the control room. Therefore, no charging flow is assumed for 30 minutes after the time of trip to allow for resetting the breaker and performing manual alignment of one of three gravity-fed boration pathways to the charging pump suction.

Following a loss of offsite power, the letdown subsystem is designed to isolate automatically due to the loss of nuclear cooling water to the letdown heat exchanger or by operator action. When charging is restarted, the resulting mismatch between letdown and charging will cause volume control tank level to decrease. To reduce the chance of losing suction to the charging pumps, the volume control tank level is monitored by two non-safety grade instrument channels. Alarms are provided on low level and if the two channels differ significantly. The use of two channels of different types (one has a wet reference leg and the other is dry) decreases the probability of an operator error misaligning the boration systems should one channel fail.

1.2.4 Auxiliary Feedwater System The Auxiliary Feedwater System (AFW) provides an independent means of supplying water to the Steam Generators during emergency operations when the Feedwater System is inoperable. AFW maintains the water inventory necessary to allow a Reactor Coolant System cooldown at a maximum rate of 75°F/hr down to a temperature of 350°F. It also provides the necessary water inventory for startup, normal shutdown, and hot standby conditions.
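As a rough illustration of the stated cooldown limit, and assuming for this example only a no-load RCS temperature of about 565°F (the starting temperature is not stated in this report):

```latex
t \;\approx\; \frac{565^{\circ}\mathrm{F} - 350^{\circ}\mathrm{F}}{75^{\circ}\mathrm{F/hr}} \;\approx\; 2.9\ \text{hours}
```

so reaching 350°F at the maximum allowed rate would take roughly three hours under that assumption.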

1.3 Preliminary Risk Significance of Event The Nuclear Regulatory Commission's Management Directive 8.3, "Incident Investigation Program," documents the NRC's formal process conducted for the purpose of accident prevention. This directive documents a risk-informed approach to determining when the agency will commit additional resources for further investigation of an event. The risk metric used for this decision is the conditional core damage probability.

A complete loss of offsite power is a significant event at any nuclear facility. Because the Combustion Engineering plant is designed without primary system power-operated relief valves, making a reactor coolant system feed-and-bleed evolution impossible, the risk significance is somewhat higher for this design. To evaluate this event, the NRC analyst used the Standardized Plant Analysis Risk (SPAR) Model for Palo Verde, Revision 3, and modified appropriate basic events to include updated loss of offsite power curves published in NUREG/CR-5496, "Evaluation of Loss of Offsite Power Events at Nuclear Power Plants: 1980 - 1996." The analyst evaluated the risk associated with the Unit 2 reactor because it represented the dominant risk of the event.

For the preliminary analysis, the analyst established that a loss of offsite power had occurred and that the event may have been recovered at a rate equivalent to the industry average. Both Emergency Diesel Generator "A" and Charging Pump "E" were determined to have failed and assumed to be unrecoverable. Additionally, the analyst ignored all sequences that included a failure of operators to trip reactor coolant pumps, because all pumps trip automatically on a loss of offsite power. The conditional core damage probability was estimated to be 6.5E-04, indicating that the event was of substantial risk significance and warranted an augmented inspection team.
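The following is a conceptual sketch only, not the SPAR model used by the analyst; it simply illustrates the conditioning step described above, in which equipment observed to fail during the event is set to a failure probability of 1.0 before the core damage sequences are requantified. The event names and nominal values are illustrative placeholders.

```python
# Conceptual sketch only, not the SPAR model.  Basic events observed to have
# failed during the event are conditioned to probability 1.0 before the
# sequences are requantified.  Names and nominal values are placeholders.
nominal = {
    "EDG_A_fails": 0.02,
    "EDG_B_fails": 0.02,
    "charging_pump_E_fails": 0.01,
    "offsite_power_not_recovered": 0.10,
}

observed_failures = {"EDG_A_fails", "charging_pump_E_fails"}

conditional = {
    event: (1.0 if event in observed_failures else p)
    for event, p in nominal.items()
}
print(conditional)
```

The remaining mitigation probabilities (such as offsite power recovery) stay at their nominal values, which is why the result is a conditional probability for this specific event rather than an annual core damage frequency.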

2.0 System Performance and Design Issues 2.1 Offsite Power Reliability and Independence Issues


a. Inspection Scope The team reviewed design drawings associated with the Palo Verde, Hassayampa, Westwing, Devers, and Rudd switchyards and substations. In addition, the team conducted interviews with licensee personnel, APS personnel, and Salt River Project (SRP) personnel involved in the licensee's investigation. Finally, the team reviewed the sequence-of-events and alarm printouts in detail to develop a comprehensive understanding of the event progression.

b. Observations and Findings One Unresolved Item (URI) was identified to review the licensee's root and contributing causes of the loss of offsite power event and corrective action implementation.

(URI 05000528;529;530/2004012-001)

The 500 kV system upset at the PVNGS switchyard originated with a fault across a degraded insulator on the 230 kV Liberty transmission line between the Westwing and Liberty substations approximately 47 miles from PVNGS. Protective relaying detected the fault and isolated the line from the Liberty substation. The protective relaying scheme at the Westwing substation received a transfer trip signal from the Liberty substation, actuating the Type AR relay in the tripping scheme for Circuit Breakers WW1022 and WW1126. The Type AR relay had four output contacts, all of which were actuated by a single lever arm. The tripping schematic showed that contacts 1-10 and 2-3 should have energized redundant trip coils in Breaker WW1022, while contacts 4-5 and 6-7 should have energized redundant trip coils in Breaker WW1126.

Breaker WW1126 tripped, demonstrating that the Type AR relay coil picked up and at least one of the AR relay contacts, 1-10 or 2-3, closed. PCB 1022 did not trip. Bench testing by APS showed that, even with normal voltage applied to the coil, neither of the tripping contacts for PCB 1022 closed. The breaker failure scheme for PCB 1022 featured a design where the tripping contacts for the respective redundant trip coils also energized redundant breaker failure relays. Since the tripping contacts for PCB 1022 apparently did not close, the breaker failure scheme for PCB 1022 also was not activated, resulting in a persistent uncleared fault on the 230 kV Liberty line.
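The following minimal sketch (relay behavior simplified; contact-to-breaker pairing taken from the tripping schematic described above) illustrates why a single partially failed Type AR relay defeated both the tripping and the breaker-failure functions for Breaker WW1022.

```python
# Minimal sketch, not the actual relay logic.  Shows how one partially failed
# auxiliary relay can deny both the trip signal and breaker-failure initiation.
CONTACT_MAP = {
    "1-10": "WW1022",
    "2-3":  "WW1022",
    "4-5":  "WW1126",
    "6-7":  "WW1126",
}

def breakers_signalled(closed_contacts):
    """Breakers that receive a trip signal, given which AR output contacts closed."""
    return {CONTACT_MAP[c] for c in closed_contacts}

# Partial relay failure: only the contacts feeding WW1126 actually closed.
tripped = breakers_signalled({"4-5", "6-7"})
print(tripped)              # {'WW1126'}

# The breaker-failure relays are driven by the same tripping contacts, so a
# breaker that never receives a trip signal never arms its breaker-failure scheme.
print("WW1022" in tripped)  # False -> no trip and no breaker-failure initiation
```

With neither WW1022 nor its breaker-failure scheme operating, the fault could only be cleared by the remote ends of every line feeding the Westwing buses, which is what prolonged the fault.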

Various transmission system event recorders show that during approximately the first 12 seconds after fault inception, several transmission lines on the interconnected 69 kV, 230 kV, 345 kV, and 525 kV systems tripped on overcurrent, including lines connected to the Westwing and Hassayampa substations. Also during the first 12 seconds, two Red Hawk combustion turbine units and one Red Hawk steam turbine unit tripped, and the fault alternated between a single-phase-to-ground fault and a two-phase-to-ground fault, apparently as a result of a failed shield wire falling on the faulted line. After 12 seconds, the fault became a three-phase-to-ground fault, and additional 525 kV lines tripped.

At approximately 17 seconds after fault inception, the three transmission lines between the PVNGS switchyard and the Hassayampa substation tripped simultaneously due to action of their negative sequence relaying, thereby isolating the fault from the several co-generation plants connected to the Hassayampa substation. Approximately 24 seconds after fault inception, the last two 525 kV lines connected to the PVNGS switchyard tripped, isolating the PVNGS switchyard from the transmission system. At approximately 28 seconds after fault inception, the three PVNGS generators were isolated from the switchyard, and by approximately 38 seconds all remaining lines feeding the fault had tripped and the fault was isolated.

Reliability Issues The degraded insulator was caused by external contamination and did not, by itself, represent a concern relative to the reliability of the insulators on the 230 kV transmission system. Nevertheless, the failed Type AR relay and the lack of a robust tripping scheme raised concerns relative to the maintenance, testing, and design of 230 kV system protective relaying. Interviews with APS transmission and distribution personnel indicated that the Westwing substation, where the relay failure occurred, was subject to annual maintenance and testing. Following the event, the failed Type AR relay was removed from service by APS personnel and visually inspected by the NRC team at PVNGS. The relay showed no apparent signs of contamination or deterioration.

Although the team considered the maintenance interval to be reasonable, the team did not determine the degree of rigor applied in testing the relaying scheme. For instance, it is doubtful that the testing included methods common in the nuclear industry, such as verifying that each contact in the tripping scheme functioned properly. As noted earlier, the tripping scheme lacked redundancy that may have prevented the failure of the protective scheme to clear the fault. Personnel employed by APS and SRP reviewed the design of the Westwing substation as well as all other substations connected to the PVNGS switchyard, and found that only the Liberty and Deer Valley transmission lines at the Westwing substation featured a tripping scheme with only one Type AR relay. All of the newer lines featured two Type AR relays. However, APS personnel found that the middle breakers in the breaker and a half scheme at the Westwing substation only contained one trip coil, as opposed to two trip coils in the bus-connected breakers. This feature was found by SRP personnel to be representative of the design at the Devers substation. In order to improve reliability, APS modified the tripping schemes for the Liberty and Deer Valley lines to feature two AR relays energizing separate trip coils. In addition, personnel from APS and SRP also stated that they would evaluate the feasibility of installing two trip coils in all single-trip-coil breakers. Finally, APS personnel indicated that the APS 525/230 kV transformers did not have the same overcurrent protection as the SRP transformers and would consider the installation of overcurrent protection.

The team found that APS notably improved the reliability of their Westwing substation by installing a redundant tripping scheme with two Type AR relays for the Liberty and Deer Valley transmission lines. In addition, the APS and SRP intention to include dual trip coils and overcurrent protection on unprotected transformers would also serve to increase the reliability of power to the grid. The team also noted that the PVNGS licensee actively coordinated the off-site power investigation and facilitated discussions with APS and SRP.

Independence of Offsite Power Supplies Licensees are tasked with ensuring that the facility meets the General Design Criteria (GDC) contained within 10 CFR Part 50, Appendix A. Specifically, GDC 17 requires

that power from the offsite transmission network be supplied by "two physically independent circuits designed and located so as to minimize to the extent practical the likelihood of their simultaneous failure under operating and postulated accident and environmental conditions." This event highlighted a previously unknown vulnerability associated with the three transmission lines between the Hassayampa and PVNGS switchyards. These three transmission lines featured negative sequence relaying intended to serve as pole-mismatch protection. This design was implemented in 1999 as part of extensive modifications to the Hassayampa switchyard intended to accommodate new co-generation facilities local to the PVNGS. The negative sequence protection scheme was designed to actuate a complete isolation of all three of the subject transmission lines after a 5-second time delay to avoid spurious tripping due to faults. Although these individual lines are listed as separate sources of offsite power in the Plant Technical Specifications, this event demonstrated that the lines were subject to simultaneous failure (acting as one) because of the protective relaying scheme.

Personnel employed by SRP and the licensee stated that the negative sequence relaying was disabled and pole mismatch protection was being implemented by alternate relaying.

The team found the licensee's efforts to coordinate its investigation with APS and SRP to be appropriate. The design changes implemented on the Hassayampa switchyard to PVNGS switchyard transmission lines to remove the negative sequence protection improved the independence of those transmission lines and would prevent the three subject transmission lines from acting as one in the future for the same type of fault.

2.2 Unit 1, Atmospheric Dump Valve 185 Failure a. Inspection Scope The team interviewed operators, reviewed control room logs, and reviewed CRDR 2716011 associated with the loss of manual control of Atmospheric Dump Valve (ADV) 185 during the performance of Procedure 40EP-9EO10, "Loss of Offsite Power/Loss of Forced Circulation," Revision 10.

b. Observations and Findings The team identified an unresolved item associated with the licensee's determination of the root and contributing causes of the Valve ADV-185 failure and the review of corrective actions, if any (URI 05000528;529;530/2004012-001).

Following the Unit 1 LOOP, Valve ADV-185 failed to operate properly while being remote-manually operated from the control room. Operators in the control room observed that the valve had drifted closed, despite a remote-manual controller demand for the valve to be open. The operators were able to adjust Valve ADV-185 from the control board by setting the demand higher than needed. However, the valve would not remain in the desired position.

The team assessed how much the Valve ADV-185 failure affected the operators' ability to control reactor coolant temperatures and concluded that the impact was minimal. The operators had been trained sufficiently to readily diagnose the problem and utilize an alternate ADV for decay heat removal. All other atmospheric dump valves on Unit 1 responded properly to remote-manual control signals and presented no further challenges to the control room operators.

Licensee personnel identified the apparent cause of the malfunction as internal leakage equalizing around a pilot valve, causing the valve to shut. The valve and its associated control circuit were quarantined, and maintenance personnel were troubleshooting the components to determine the root cause of the malfunction.

2.3 Unit 1. Letdown Heat Exchanger Isolation Failure a. Inspection Scope The team reviewed the circumstances surrounding the Unit 1 letdown heat exchanger's failure to isolate following the June 14, 2004, loss of offsite power event. Since the Unit 1 letdown system had been temporarily modified by the licensee, the team's review included a detailed inspection of Temporary Modification 2594804. In addition, the team reviewed CRDR 2715667, which documented the system response during the event, to understand the licensee's investigation into the failure. The team also interviewed plant personnel and reviewed control room logs and temperature plots to determine the impact of the high temperature on the letdown system.

b. Observations and Findings The team identified an unresolved item associated with the licensee's determination of the root and contributing causes of the letdown system failure and the review of corrective actions, if any (URI 05000528;529;530/2004012-001). In addition, this issue has potential cross-cutting aspects in the area of human performance.

During the June 14, 2004, loss-of-offsite-power event, the Unit 1 letdown system did not operate as expected when fluid temperatures exceeded the alarm setpoint. The letdown system bypassed the ion exchanger and the filter at 140°F, as expected. However, a temporary modification to bypass a flow sensor resulted in the system failing to isolate when needed. The letdown system response had apparently not been anticipated by the engineers designing the temporary modification, and operators were unaware of the system's response to a loss of offsite power. The team was concerned that inadequate design control had resulted in the overheating of a system designed for low temperature operation. The system was designed to isolate the letdown system if the temperature at the outlet of the non-regenerative heat exchanger exceeded 148°F.

The licensee identified that the apparent cause of the system not isolating as expected was a failure of the temporary modification to fully address the functioning of the letdown control system during a loss of power to the controller. As a consequence of a loss-of-offsite-power, the nuclear cooling water flow is normally lost to the non-regenerative heat exchanger. Typically, when power is restored to the system, the valves would be in a manual mode of operation and flow through the system would not be secured by the normal control system. The temporary modification effectively bypassed the backup initiating signal for isolating the system in the event cooling water flow to the heat exchanger was lost, which occurred as a result of the loss of offsite power.

The impact on plant systems and personnel was minimized when the ion exchanger bypass valves actuated to remove high temperature water from the resin. However, the introduction of high temperature water created a distraction when, as a result of paint and insulation being heated, the fire brigade was activated for a report of smoke/fumes.

The fire brigade responded to the report of a potential fire and operators conducted a detailed walkdown of the system.

The licensee conducted an engineering calculation to determine the maximum stress associated with a 350°F fluid temperature, which was considered the worst-case temperature the letdown system could have been subjected to. The worst-case thermally induced stress was calculated to be 27,475 pounds per square inch (psi). The licensee's engineers determined that a socket weld on the drain for purification Filter F36 was the only weld of concern that could have exceeded its maximum allowable stress if it had reached 350°F. Licensee personnel performed a visual inspection of the affected weld and removed the filter element to determine if any damage had occurred. Because the filter element was rated for 180°F for 1 hour, and there was no indication of any heat damage, licensee personnel concluded that the weld was not subjected to temperatures that could have caused excessive stress on the weld. In addition, the licensee conducted a soft parts analysis to ascertain whether any parts susceptible to high temperatures were present and found none.
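
The governing relationship behind such a calculation is thermal expansion stress in restrained piping. The sketch below is only an illustration, not the licensee's analysis: it evaluates the fully restrained upper bound, sigma = E * alpha * delta-T, using assumed representative austenitic stainless steel properties, to show that a detailed flexibility analysis result such as the 27,475 psi figure would be expected to fall below this simple bound.

# Illustrative upper-bound check of thermally induced pipe stress (not the licensee's analysis).
# All property values below are assumed, representative figures for austenitic stainless steel.
E_PSI = 27.0e6        # assumed elastic modulus near 350 F, psi
ALPHA_PER_F = 9.3e-6  # assumed mean coefficient of thermal expansion, 1/degF
T_HOT_F = 350.0       # worst-case fluid temperature cited in the report, degF
T_REF_F = 140.0       # assumed reference temperature (ion exchanger bypass setpoint), degF

# Fully restrained pipe: sigma = E * alpha * dT (upper bound; piping flexibility reduces this)
delta_t = T_HOT_F - T_REF_F
sigma_bound_psi = E_PSI * ALPHA_PER_F * delta_t
print(f"Fully restrained thermal stress bound: {sigma_bound_psi:,.0f} psi")
print("A detailed flexibility analysis, such as the 27,475 psi result, falls below this bound.")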

With respect to the extent of condition, the team found that Unit 1 was the only unit that had this modification installed to bypass the low flow isolation signal. Therefore, the team had no concerns with the other units.

2.4 Unit 2, Train A Emergency Diesel Generator Failure a. Inspection Scope The team interviewed licensee representatives and reviewed the sequence of events that led up to the failure of the Unit 2 Train A emergency diesel generator to determine the apparent cause. The team also reviewed the effects the loss of the diesel generator had on the recovery of the event; the action plan for determining the root cause (Condition Report/Disposition Request (CRDR) 2715709); and the extent of condition of the apparent cause.

b. Observations and Findings The team found that the apparent cause of the failure of the Unit 2 Train A emergency diesel generator was a failed diode in Phase B of the voltage regulator exciter circuit. The diode failure resulted in a reduced excitation current that was unable to maintain the output voltage with the applied loads.

At approximately 07:41:15 am, the Unit 2 Train A emergency diesel generator received a start signal as a result of an undervoltage signal on the Train A 4.16 kV Class 1E bus.

The emergency generator started, came up to speed and voltage, and energized the bus at approximately 07:41:23 am, within the 10 seconds allowed by design.

Approximately 5 seconds later, the Train A battery chargers, control element drive mechanism cooling units, and the containment cooling units were sequenced onto the

bus. The essential cooling water pump was sequenced onto the bus approximately 15 seconds after the first loads.

The team noted that, at approximately the same time the essential cooling water pump was energized, the output voltage from the emergency diesel generator began to fail.

The control room operators observed that the voltage and current indications in the control room were zero and had an auxiliary operator observe the indications locally at the emergency diesel generator control panel. The local indications were also zero. The control room operators initiated a manual emergency trip of the diesel at approximately 07:56:21 am. The team found these actions to be appropriate for the circumstances.

The team found that the failed emergency diesel generator did not have a large impact on plant stabilization and recovery, but did result in having only one train of safety equipment available. The only apparent effect of the loss of Train "A" safety-related equipment was associated with the availability of Train "A" charging pumps which rely on emergency power from the EDGs.

The team noted that licensee engineers and maintenance personnel developed a comprehensive plan to troubleshoot the failure (CRDR 2715709). The plan was methodical and prioritized. The team found that the troubleshooting activities were thorough and well controlled, resulting in the identification of the failed diode in Phase B of the exciter circuit. The failure resulted in a half-wave output with significantly reduced current that led to the loss of adequate excitation to maintain the required voltage for the applied loads.

The team found that, while this diode was common to all the emergency diesel generators at the Palo Verde Nuclear Generating Station, there was insufficient data to indicate there was a common mode problem. A review of the industry database on component failures revealed only one other failure of this specific model diode. That failure was in 1997. As such, the team found the extent of condition review by licensee personnel to have been appropriate for the circumstances.

The team noted that the failed diode had been replaced during the Fall 2003 refueling and steam generator replacement outage. This diode had been subject to approximately 65 hours of operation before it failed. Licensee personnel had plans to perform additional testing to determine the root cause, if possible, of the diode failure.

The NRC will evaluate the corrective actions and root cause determination associated with the emergency diesel generator failure (URI 05000528;529;530/2004012-001). In addition, this item has potential cross-cutting aspects in the area of problem identification and resolution.

2.5 Unit 3, Plant Response to Loss of Offsite Power a. Inspection Scope The team reviewed CRDR 2715659, documenting the Unit 3 reactor trip, plant response, and pre-startup review. In addition, the team reviewed control room logs along with system temperature, pressure, and flow plots; voltage and frequency plots; and nuclear instrumentation plots to assess whether the plant responded as designed. Finally,

various personnel that were either involved in the event or in the analyses of the event were interviewed.

b. Observations and Findings The team identified two unresolved items. The first unresolved item was associated with the automatic main steam-line isolation in Unit 3 and will result in an evaluation of the response of the steam bypass control system in all three units following the loss of offsite power and a comparison of that response to the responses assumed in the plant safety analysis (URI 05000528;529;530/2004012-002). The team found that the plant responses observed during this event were different from those described in the Final Safety Analysis Report (FSAR). Accordingly, the second unresolved item is associated with reviewing the licensee's root cause for the Unit 3 reactor trip on a variable overpower signal and the licensee's evaluation of the impact of the high frequency on plant equipment, as well as the extent of condition once the cause is determined (URI 05000528;529;530/2004012-001).

The team noted that Unit 3 experienced an automatic main steam-line isolation.

Licensee engineers attributed the automatic isolation to a steam bypass control system anomaly that caused all the bypass valves to open simultaneously, suddenly decreasing main steam line pressure and causing a main steam isolation. The team found, through interviews with licensee engineers, that the apparent cause of the "anomaly" was a momentary loss of power to Panel D11 with the control system being re-energized in the automatic mode rather than in manual. According to the licensee engineers, this power loss initiated a 30-second timer that disconnected the valve control signals from the control cabinet. When the 30-second timer completed, all eight valves modulated open in about 14 seconds.

The PVNGS FSAR, Revision 12, Section 1.8, "Conformance to NRC Regulatory Guides," documents that the licensee took exception to the separation criterion of NRC Regulatory Guide 1.75, "Physical Independence of Electric Systems," Revision 1, for the power supplies to Panel D11. As a result, Panel D11 was powered from both a non-vital power supply (normal) and a vital power supply (backup). Upon loss of normal power, the supply automatically transfers to the backup supply. After the normal supply returns, the panel must be manually transferred back to the normal supply. Upon a total loss of power to Panel D11, the steam bypass control system will be unable to automatically respond to any challenges (FSAR Section 7.2.2.4.1.2.1). The team also noted that the power supply configuration was identical on all three units. However, Units 1 and 2 did not respond the same as Unit 3.

The team noted that, in each subsection of the FSAR listed below, the steam bypass control system is assumed to be unavailable because it is either deenergized or in manual. During the loss-of-offsite-power event, the team found that the system was reenergized and operated in automatic. The team noted that this system response may not be as described in the licensee's safety analysis.

6.3.3.5D. For all break sizes, the reactor trip will result in a turbine trip and the subsequent loss of offsite power will result in the loss of main feedwater flow. Since the steam bypass

control system is not available due to loss of condenser vacuum on loss of offsite power ...

7.2.2.4.1.2.1 A. The [Steam Bypass Control System] SBCS and

[Reactor Pressure Control System] RPCS will be unable to automatically respond to any challenges on a failure of distribution panel E-NNN-D1 1.

7.2.2.4.1.2B ... the LOFW [loss-of-feedwater] event presented in subsection 15.2.7 assumed that the [Pressurizer Pressure Control System] PPCS, SBCS, and [Reactor Regulating System] RRS are in the manual mode of operation, unable to automatically respond to challenges.

15.1.4.2 Case 1 Since the steam bypass control system is assumed to be in the manual mode with all bypass valves closed ...

15.1.4.2 Case 2 Since the steam bypass control system is assumed to be in the manual mode with all bypass valves closed ...

15.2.3.1 ... in this analysis both the SBCS and RPCS are assumed to be in the manual mode and credit is not taken for their functioning.

15.3.1.1 The only credible failure which can result in a simultaneous loss of power is a complete loss of offsite power. In addition, since a loss of offsite power is assumed to result in a turbine trip and renders the steam dump and bypass system unavailable, the plant cooldown is performed utilizing the secondary valves and atmospheric dump valves (ADVs)...

The loss of offsite power will make unavailable any systems whose failure could affect the calculated peak pressure. For example, a failure of the steam dump and bypass system to modulate or quick open and a failure of the pressurizer spray control valve to open involve systems (steam dump and bypass system and pressurizer pressure control system (PPCS)) which are assumed to be in the manual mode as a result of the loss of offsite power and, hence, unavailable for at least 30 minutes.

15.3.1.2C. The turbine is assumed to trip on loss of offsite power.

The loss of offsite power produces a loss of load on the turbine which generates a turbine trip signal. The turbine stop valves are closed as a result of the trip. The steam

bypass control system becomes unavailable due to the loss of offsite power and subsequent loss of condenser vacuum.

15.3.4.1 The assumed loss of AC renders the steam bypass control system inoperable as a result of the loss of circulating water pumps.

15.3.4.2C. The loss of offsite power causes a loss of power to the plant loads and the plant experiences a simultaneous loss of feedwater flow, condenser inoperability, and a coastdown of all reactor coolant pumps.

15.3.4.3.1C. The loss of offsite power also causes a loss of main feedwater and condenser inoperability. The turbine trip, with the steam bypass control system (SBCS) and the condenser unavailable, leads to a rapid buildup in secondary system pressure and temperature ...

15.4.2.2D. Following the generation of a turbine trip on reactor trip, the main feedwater control system (FWCS) enters the reactor trip override mode and reduces feedwater flow to 5% of nominal, full power flow. Since the steam bypass control system (SBCS) is assumed to be in manual mode with all bypass valves closed, the main steam safety valves (MSSVs) open to limit secondary system pressure and remove heat stored in the core and the RCS.

15.4.2.3B. All the control systems listed in table 15.4.2-2, except the steam bypass control system, were assumed to be in the automatic mode since these systems have no impact on the minimum [Departure from Nucleate Boiling Ratio]

DNBR obtained during the transient. The steam bypass control system is assumed to be in manual mode because this minimizes DNBR during the transient.

15.4.8.3C. The steam bypass control system is inoperable on loss of offsite power and therefore is unavailable.

15.5.2.1 The loss of normal ac power results in loss of power to the reactor coolant pumps, the condensate pumps, the circulating water pumps, the pressurizer pressure and level control system, the reactor regulating system, the feedwater control system, and the steam bypass control system.

15.5.2.3C. Since the steam bypass control system is in the manual mode ...

The unavailability of the steam bypass valves ...

15.6.3.1.2D Since the SBCS is assumed to be in manual mode with all bypass valves closed ...

15.6.3.3.1A. The ADVs are used due to the unavailability of the steam bypass control system due to loss of offsite power.

15.6.3.3.3.1C. The loss of offsite power also causes the steam bypass system to the condenser to become unavailable.

During the team's review of the timeline, it was noted that the main turbine stop valves closed on each unit at approximately 07:41:21 am. The Units 1 and 2 reactor coolant pumps had tripped on undervoltage approximately 1 second prior to the turbine trips, and the reactors tripped on anticipatory low departure from nucleate boiling ratio within 1 second of receipt of the turbine trips. However, on Unit 3, the reactor tripped on variable overpower approximately 1 second after the other units. Next, the team noted that the Unit 3 main generator tripped approximately 1 second after the reactor trip on a volts/hertz signal, while the other units' main generators did not trip on volts/hertz signals until approximately 3.5 seconds after the reactor trips. Approximately 5 seconds after the Units 1 and 2 reactor coolant pumps tripped on undervoltage, the Unit 3 reactor coolant pumps tripped on undervoltage. All three units experienced post-event frequency increases to approximately 67 hertz.

During the loss-of-offsite-power event, the Unit 3 reactor coolant pumps remained connected to the substation bus while the turbine was in an overspeed condition.

Licensee engineers concluded that the bus voltage was maintained because of an unexpected response of the Unit 3 generator's excitation circuit. As a result of the excitation circuit response, the excitation, and therefore the output voltage, remained high, delaying the load shed and tripping of the reactor coolant pumps. The licensee planned to conduct troubleshooting to evaluate the main generator excitation control system.

Since the Unit 3 reactor coolant pumps remained operating longer, they turned at the higher frequency, increasing flow through the critical reactor core. This increase in flow (approximately 108.2 percent of design flow) produced an indicated power of approximately 109 percent, as read on excore nuclear instruments. This positive rate of change in reactor power generated a variable overpower trip signal to shut down the reactor.

The team reviewed the licensee's evaluation of the increased reactor coolant flow and noted that the estimated flow of 108.2 percent was less than the evaluated limit of 110.4 percent of design volumetric flow. According to the licensee's analyses, the most limiting component of each reactor coolant pump was the motor flywheel which was designed for 125 percent of rated speed. The team noted that this value was not approached during the event. The team agreed with the licensee's conclusion that there was no impact to the continued power operation with respect to fuel grid-to-rod fretting, vessel hydraulic uplift forces, and fuel mechanical design.
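
A simple arithmetic check, using only the values cited above, illustrates the margins involved; the 60 hertz nominal grid frequency is the only assumed value in the sketch below.

# Margin check using figures cited in the report; the 60 Hz nominal frequency is assumed.
NOMINAL_HZ = 60.0
PEAK_HZ = 67.0             # approximate post-event frequency
FLYWHEEL_LIMIT_PCT = 125.0 # reactor coolant pump flywheel design limit, % of rated speed

speed_pct = 100.0 * PEAK_HZ / NOMINAL_HZ   # induction motor speed tracks supply frequency
print(f"Pump speed at 67 Hz: about {speed_pct:.1f}% of rated (flywheel limit {FLYWHEEL_LIMIT_PCT:.0f}%)")

flow_pct = 108.2        # estimated peak reactor coolant flow, % of design
flow_limit_pct = 110.4  # evaluated volumetric flow limit, % of design
print(f"Flow margin: {flow_limit_pct - flow_pct:.1f} percentage points below the evaluated limit")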

While all three turbine generators were in an over-speed condition and connected to the plant busses, all connected loads experienced a higher frequency. The reactor coolant pumps for Units 1 and 2 were not exposed to the high frequency condition because their undervoltage relays actuated before the higher frequency was attained.

2.6 Unit 3. Reactor Coolant Pump 2B Lift Oil Pump Breaker a. Inspection Scope The team reviewed the thermal overload curves for the lift oil pumps and the operator response to the loss of the pump with regard to restoring forced circulation in the primary plant. The team also interviewed plant personnel, reviewed CRDR 2715659, and reviewed control room logs regarding the activities surrounding the failure of the lift oil pump to start.

b. Observations and Findings The team identified an unresolved item associated with the design of the lift oil pump motor breaker thermal overloads and operation of the lift oil system (URI 05000528;529;530/2004012-002).

During restoration efforts following the June 14, 2004 loss of offsite power, the Unit 3 reactor coolant Pump 2B lift oil pump thermal overloads were actuated while operators were making preparations to start reactor coolant pumps.

The team noted that the procedure for starting reactor coolant pumps did not contain any note or precaution warning operators of a potential thermal overload trip if the lift oil pump motor was run longer than 10 minutes. Licensee Procedure 40EP-9EO10, Appendix 1, "RCP [Reactor Coolant Pump] Restart," states, in part:

15. Ensure the appropriate lift oil pump has been running for 7 minutes or more.

The team noted that the thermal overload trip resulted in an unnecessary delay in the restoration of forced reactor coolant flow through the core.

In addition, the licensee's calculation for sizing the thermal overloads for the motor breaker showed that the overload setting was only 0.1 amp greater than the motor running current. At this level of running current, the licensee calculated that the overloads would actuate in approximately 600 seconds. Licensee personnel identified that the apparent cause of the trip of the lift oil pump was operation of the pump in excess of 10 minutes. The licensee initiated CRDR 2715659 to address this issue.
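
The long trip time for a small overload is characteristic of inverse-time thermal elements. The sketch below is a generic illustration only, not the licensee's sizing calculation or the actual heater curve: it uses a common I-squared-t style approximation, t = k / ((I/I_pickup)^2 - 1), with an assumed device constant, to show why a pickup set only marginally above running current can take on the order of ten minutes to trip.

# Generic inverse-time overload illustration; NOT the actual heater curve or the licensee's calculation.
# The device constant k and the current ratios are assumed values chosen for illustration.
def trip_time_seconds(current_ratio: float, k: float = 60.0) -> float:
    """Approximate trip time of an inverse-time thermal element, t = k / (ratio**2 - 1)."""
    if current_ratio <= 1.0:
        return float("inf")  # at or below pickup, this simple model never trips
    return k / (current_ratio**2 - 1.0)

# Running current only slightly above pickup (assumed 5 percent over):
print(f"Trip time at 1.05 x pickup: ~{trip_time_seconds(1.05):.0f} s")  # on the order of 600 s
# A heavier overload trips far faster:
print(f"Trip time at 2.0 x pickup:  ~{trip_time_seconds(2.0):.0f} s")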

2.7 Unit 3. Low Pressure Safety Injection System In-Leakage a. Inspection Scope The team reviewed CRDR 2715659, which documented that a leaking Borg-Warner check valve had pressurized the low pressure safety injection system during the event.

Plant personnel were interviewed and control room logs and plots were reviewed to determine the impact of the in-leakage to the control room operators during the loss of offsite power event.

b. Observations and Findings

The team identified an unresolved item related to the Borg-Warner safety injection check valve leakage. The unresolved item is to review the licensee's root and contributing cause determination, review the effectiveness of prior corrective actions for previous check valve leakage issues, assess the licensee's use of industry operating experience and generic communications, determine the adequacy of the in-service testing program for demonstrating check valve operability, and assess any corrective actions implemented (URI 05000528;529;530/2004012-001).

While Unit 3 operators were implementing loss of offsite power emergency procedures, they were required to manually implement alarm response Procedure 40AL-9RL2B,

"Panel B020B Alarm Response," Revision 48 on three occasions to depressurize a section of safety injection piping to maintain the low pressure safety injection system operable. The team found that, while operators maintained an adequate level of control, they were somewhat challenged by the unnecessary distraction from emergency procedures. Apparently, Valve RCEV-217, a 14-inch Borg-Warner check valve began to leak and pressurized the safety injection header to reactor coolant Loop 2A. The licensees apparent cause involved a thermal hydraulic interaction which resulted in check valve leakage when system temperatures changed rapidly.

2.8 Units 1 and 3. General Electric Magna-Blast Breaker Failures a. Inspection Scope The team reviewed the failure of two 13.8 kV circuit breakers to close on demand during the recovery from the loss of offsite power. The team also interviewed licensee personnel associated with the investigation into the breaker failures.

b. Observations and Findings The team identified an unresolved item related to the reliability of Magna-Blast circuit breakers. The unresolved item is to review the licensee's root and contributing cause determination, review the licensee's assessment of the extent of condition, review the effectiveness of prior corrective actions for Magna-Blast circuit breaker issues, assess the licensee's use of industry operating experience and generic communications, determine the adequacy of preventative maintenance, and assess any corrective actions implemented (URI 05000528;529;530/2004012-001).

This item has potential cross-cutting aspects in the areas of human performance and problem identification and resolution.

The team noted that, while recovering from the loss-of-offsite-power, 13.8 kV circuit Breakers 1ENANS06K and 3ENANS05D failed to close on demand from the control room. The licensee initially determined that the apparent cause of the inability to close the breakers was that they had not been cycled frequently enough. Apparently, the licensee believed that improper operation of the latching mechanisms may have occurred due to grease hardening and contamination by dirt. The licensee initiated CRDR 2716019 to evaluate the failures, determine the root cause(s), and take any corrective actions identified.

The team noted that the initial response only involved a cycling of the breakers without any detailed troubleshooting. The team found that the licensee personnel considered

this acceptable because of a known issue with grease hardening in Magna-Blast circuit breakers located in a relatively hot environment with little to no cycling during the 18-month operating cycle.

The team noted that each of the breakers had been refurbished in 2002.

Breaker 1ENANS06K had been cleaned, inspected, and cycled during the last refueling outage earlier this year. The team found that the licensee's determination of the apparent cause for the Unit 1 breaker was not supported by the facts because of the recent cleaning and inspection.

Because of the large volume of industry operating experience with Magna-Blast circuit breaker reliability and the fact that both breakers had received maintenance within the past two to three years, the team was concerned that the two breakers may have had problems other than those described in the licensee's apparent cause determination.

2.9 Auxiliary Feedwater (AFW) System Performance a. Inspection Scope The team evaluated the adequacy of the AFW system performance during and after the loss of offsite power event. The inspection was accomplished through a review of documents and interviews with operators and engineering staff.

b. Observations and Findings The team identified an unresolved item related to the design and operation of the AFW system. Specifically, a thermally induced vibration occurred when operators placed the non-essential AFW system into service; the event also may have involved procedural issues.

The unresolved item is to review the licensee's root and contributing cause determination, review the licensee's assessment of the extent of condition, and assess any corrective actions implemented (URI 05000528;529;530/2004012-001).

As part of the reactor trip response, operators manually started the essential motor-driven AFW pumps in all three units. Six hours after the reactor trip, Unit 1 operators placed the non-essential motor-driven AFW pump into service and secured the essential pump.

At this time, a plant operator reported high vibration for approximately 5 minutes in the main feedwater piping. The licensee generated CRDR 2715731 to document the high vibration. In Units 2 and 3, the non-essential pumps were placed in service 17 and 29 hours after the reactor trips, respectively. No vibration was noted in Units 2 and 3.

There was no procedural requirement that compelled operators to secure the essential pump and place the non-essential pump in service. According to the Unit 1 operator, the basis for transferring from the essential pump to the non-essential pump was to allow operators to add chemicals to the feedwater, if needed. However, there was no need to add chemicals at the time that the transfer occurred in Unit 1.

The high vibration in the Unit 1 feedwater line occurred when the relatively cold auxiliary feedwater coming from the condensate storage tank mixed with the stagnant hot water in the insulated section of feedwater piping downstream of the injection point of the non-essential AFW pump. That section of feedwater piping became isolated as a result of a

manual Main Steam Isolation Signal (MSIS) actuation required by the applicable Emergency Operating Procedure. There were no subsequent procedural cautions or guidance for preventing the introduction of the cold water into the feedwater system prior to that section of piping being allowed to cool down sufficiently. The placement of the non-essential AFW pumps into service in Units 2 and 3 did not result in high vibration because those sections of feedwater piping had apparently cooled enough to preclude a thermally induced vibration transient.

3.0 Human Performance and Procedural Aspects of the Event 3.1 AFW System Operation a. Inspection Scope The inspector assessed emergency procedure implementation and control room operator response as it related to the AFW system. The inspection was accomplished through a review of documents and interviews with operators and engineering staff.

b. Observations and Findings Emergency Operating Procedure Implementation The team identified an unresolved item associated with the apparent failure of emergency operating procedures to inform control room operators of a potential operability concern with the turbine-driven AFW pump after a main steam isolation. The unresolved item is to review the licensee's root and contributing cause determination, determine the adequacy of procedures and operator training, review the licensee's assessment of the extent of condition, and assess any corrective actions implemented (URI 05000528;529;530/2004012-001).

As discussed previously, Unit 2 tripped at 7:41 a.m. on June 14, 2004, as a result of the loss of offsite power. The completion of reactor post-trip actions resulted in entry into the "Loss of Offsite Power/Loss of Forced Circulation" Emergency Operating Procedure (EOP) 40EP-9EO07, Rev. 10. Step 6 of this procedure requires control room operators to initiate a manual MSIS actuation. In addition to closing the main steam isolation valves, this step also causes closure of drains associated with two critical steam traps required to maintain operability of the turbine-driven AFW pump. With the steam traps unavailable, condensate can accumulate in the steam lines, which can contribute to an overspeed trip of the turbine during startup.

The team noted that the EOP did not caution the operators that an MSIS would potentially make the turbine-driven AFW pumps inoperable. The EOP also did not direct the operators to implement the applicable sections of Normal Operating Procedure 40OP-9SG01, "Main Steam," Rev. 37, which provide the necessary instructions for manually draining those sections of piping necessary to maintain operability of the pump. This procedure requires that the piping associated with the critical steam traps be blown down every two hours until a dry steam condition is reached and then every six hours thereafter. On the day of the event, operators did not commence actions to drain the associated piping until 11 hours after the reactors tripped.

TDAFW Steam Drain Line Equipment The team identified an unresolved item associated with the availability of resources to drain the TDAFW steam piping, the impact of the delay in restoring critical equipment from a potentially inoperable status, and the adequacy of past corrective actions from a previous overspeed trip of a TDAFW pump. The unresolved item is to review the licensee's root and contributing cause determination, determine the adequacy of past design changes, review the licensee's assessment of the extent of condition, and assess any corrective actions implemented (URI 05000528;529;530/2004012-001).

As discussed above, without the steam traps available, condensate can accumulate in the steam lines and lead to a potential overspeed trip of the pump. A condensation-induced overspeed trip of the Unit 1 TDAFW pump previously occurred on April 24, 1990. At that time, Engineering Evaluation Request 90-AF-011 was generated to evaluate the root cause. The corrective actions identified included directions to revise the operating and surveillance procedures to address maintaining the steam traps dry and directions to implement manual methods to ensure that the steam lines were kept drained while in Modes 1, 2, and 3 with the turbine not on line.

After operators realized that draining of the piping associated with the critical steam traps was necessary to ensure continued operability of the turbine driven AFW pump, the applicable portions of the main steam normal operating procedure were referenced.

The procedure required the installation of a vent rig tool constructed in accordance with Engineering Evaluation Request 92-SG-007 at each manual drain location.

Consequently, each turbine-driven AFW pump required two vent rig tools. Operators were only able to find sufficient vent rig tools for one turbine-driven AFW pump.

Decision-Making with Limited Resources The team identified an unresolved item associated with whether the decision-making process for directing resources to drain the TDAFW pump steam traps appropriately considered risk importance. The unresolved item is to review the licensee's root and contributing cause determination, review the licensee's assessment of the extent of condition, and assess any corrective actions implemented (URI 05000528;529;530/2004012-001).

The AFW system has a relatively high value of risk importance. As such, with only enough vent rig tools to drain one turbine-driven AFW pump at a time, operations management decided to begin draining the Unit 1 TDAFW pump steam traps first. The team noted that with Unit 2 having only one of two EDGs available, it may have been a more prudent decision to restore the Unit 2 TDAFW pump to service first.

3.2 Unit 2. Train "E" Positive Displacement Charging Pump Trip a. Inspection Scope The team reviewed the emergency operating procedures and the control room operator response to the loss of offsite power with respect to the charging pumps to determine the effect on the response to the event. The team also interviewed plant personnel and reviewed CRDRs 2716521 and 2716806 regarding the activities surrounding the charging pump operations.

b. Observations and Findings The team identified an unresolved item associated with procedure adherence. The unresolved item is to review the licensee's root and contributing cause determination, determine whether a violation or violations of requirements occurred, review the licensee's assessment of the extent of condition, and assess any corrective actions implemented (URI 05000528;529;530/2004012-001). This item has potential cross-cutting aspects in the areas of human performance and problem identification and resolution.

As the volume control tank level dropped to approximately 15 percent with Positive Displacement Charging Pump CHB-P01 operating, a control room operator recognized the need to transfer the charging pump suction from the volume control tank to the refueling water tank. Because of the loss of offsite power, control room operators were implementing Procedure 40EP-9EO07, "Loss of Offsite Power / Loss of Forced Circulation," Revision 10.

Step 11 of Procedure 40EP-9EO07 states:

IF VCT makeup is NOT available, THEN perform the following:

a. IF RWT level is below or approaching 73%, AND the CRS desires to keep charging in service, THEN PERFORM ONE of the following:

  • Appendix 10, Charging Pump Alternate Suction to the RWT /

Restoration

  • Appendix 11, Charging Pump Alternate Suction to the SFP /

Restoration b. IF RWT level is above 73%, THEN perform the following:

1) IF three charging pumps will be used, THEN stop the Boric Acid Makeup Pumps.

2) IF three charging pumps are will be (sic) used, AND a Fuel Pool Clean Pump is recirculating the RWT, THEN stop RWT recirc by stopping the appropriate Fuel Pool Cleanup Pump.

3) Open CHN-HV-536, RWT Gravity Feed to Charging Pump Suction.

4) Close CHV-UV-501, Volume Control Tank Outlet.

The team noted that, since refueling water tank level was greater than 73 percent at the time, the appropriate steps in the procedure for transferring the charging pump suction were Steps 11.b.3) and 4). However, the Control Room Supervisor decided that Step 11.a. was appropriate because Valves CHN-HV-536 and CHN-UV-501 did not have power, and the supervisor knew that the valves in Step 11.a. could be manually operated. The supervisor failed to consider that the valves in Step 11.b. could also be manually operated. The Control Room Supervisor's decision to implement Step 11.a. therefore may not have been in accordance with the requirements of the emergency operating procedure for the plant conditions at the time (i.e., the refueling water tank level was greater than 73 percent). The licensee initiated CRDR 2716521 to evaluate the human performance error.

After deciding to implement Step 11.a., the Control Room Supervisor conducted a briefing with an auxiliary operator to discuss the manual transfer of the charging Pump CHE-P01 suction from the volume control tank to the refueling water tank using Appendix 10 to Procedure 40EP-9EO10, "Standard Appendices," Revision 32.

Appendix 10 states, in part:

1. Request that Radiation Protection accompany the operator performing the local operations to perform area surveys.

2. IF it is desired to align Charging Pump(s) suction to the RWT, THEN perform the following:

a. Place the appropriate Charging Pump(s) in "PULL-TO-LOCK."

b. Direct an operator to PERFORM Attachment 10-A, Aligning Charging Pump Suction to the RWT, for the appropriate Charging Pump(s).

c. WHEN the appropriate Charging Pump(s) has been aligned, THEN start the appropriate Charging Pump(s) as necessary.

Attachment 10-A states, in part:

1. Open CHB-V327, "RWT TO CHARGING PUMPS SUCTION" (70 ft. East Mechanical Piping Penetration Room)...

4. IF aligning Charging Pump E, THEN perform the following (Charging Pump E Vlv Gallery)

a. Close CHE-V322, "'E' CHARGING PUMP CHE-P01 SUCTION ISOLATION VALVE".

b. Open CHE-V757, "'E' CHARGING PUMP ALTERNATE SUCTION ISOLATION VALVE".

5. Inform the responsible operator that the appropriate Charging Pump(s) are aligned to the RWT.

The team found that the auxiliary operator did not implement Appendix 10, Step 1 of emergency operating Procedure 40EP-9EO10. Instead of requesting that a radiation protection person accompany him, the operator went to the radiologically controlled area access point to perform a routine entry. However, because of the loss of offsite power, the access computers were not functioning, and routine entry data was being entered manually. The auxiliary operator did not inform the radiation protection person of the necessity of his entry or of the procedural requirement for a radiation protection person to accompany him. This resulted in some delay in implementing the EOP. The licensee initiated CRDR 2716806 to evaluate the delay at the access point.

Once access was gained, the auxiliary operator proceeded to perform Attachment 10-A, Steps 4 and 5, which was not the correct order. After positioning the valves listed in Step 4, the auxiliary operator informed the control room operator that the charging Pump CHE-P01 suction had been transferred. The control room operator then started charging Pump CHE-P01 at approximately 08:05 am and secured charging Pump CHB-P01 at approximately 08:05:52 am. At approximately 08:05:59, charging Pump CHE-P01 tripped on low suction pressure, resulting in a loss of all charging flow.

At approximately 08:06:22, the control room operator re-started charging Pump CHB-P01. The team found that the control room operator was unaware that this pump was operating with the suction from the volume control tank. After approximately 4.5 minutes, the control room operator noticed that the volume control tank level had dropped to approximately 10 percent. At that time, the operator secured charging Pump CHB-P01 to prevent it from tripping on low suction pressure or becoming air-bound.

At approximately 08:11:31 am, the charging pump suction was properly transferred to the refueling water tank and charging Pump CHB-P01 was restarted. At approximately 11:32:37 am, the time line indicated that charging Pump CHA-P01 was started.

3.3 Technical Support Center (TSC) Emergency Diesel Generator Trip a. Inspection Scope The team interviewed members of the licensee's emergency planning organization and electrical maintenance department. Security department logs were reviewed to determine the cause of the failure of the technical support center diesel generator during the loss of off-site power. The team walked down the technical support center electrical

distribution system and the technical support center diesel generator. The team reviewed the licensee's preliminary findings attached to CRDR 2715749 written to investigate and determine the root causes for the emergency planning problems arising from the loss of off-site power and plant trip on June 14, 2004.

b. Observations and Findings The team identified an unresolved item associated with a failure of the technical support center diesel generator. The unresolved item is to review the licensee's root and contributing cause determination, determine whether a violation or violations of requirements occurred, review the licensee's assessment of the extent of condition, and assess any corrective actions implemented (URI 05000528;529;530/2004012-001).

This item has potential cross-cutting aspects in the area of human performance.

The team found that the apparent cause of the failure of the technical support center diesel generator to restore power to the technical support center was a human performance error that had occurred during post-maintenance testing of the diesel engine starting system on June 8, 2004.

On June 14, 2004, as a result of the loss of off-site power, electrical power was lost to the technical support center. As designed, the technical support center diesel generator started, but it did not re-energize the technical support center electrical loads. Electrical maintenance technicians were called to investigate the problem and shortly after they arrived at the technical support center diesel generator the diesel engine tripped. The engine control panel alarms indicated that the trip was due to high engine temperature.

Electrical power was restored to the technical support center when off-site power was restored to Unit 1 at 9:10 AM. The technical support center was without electrical power for approximately 1 hour and 30 minutes.

During subsequent troubleshooting, electrical maintenance technicians determined that the engine operating switch was in "Idle." With the switch in "Idle," the diesel generator started on loss of electrical power to the technical support center, but did not come up to proper voltage and frequency and did not re-energize the technical support center electrical distribution panel. As a result, the engine radiator cooling fan did not start, so the engine overheated and tripped on high temperature. The electrical maintenance technicians returned the engine operating switch to its normal "Run" position and wrote CRDR 2715726.

The licensee determined that the engine operating switch was apparently left in the

"Idle" position after post maintenance testing of the engine starting system performed on June 8, 2004 under Work Order 2623863. During this monthly engine starting battery inspection, electricians noted that one battery terminal and connector were corroded.

The electricians contacted their team leader and received permission to clean up the connection using the same work order. The team leader and the lead electrician determined that the starting system needed to be tested after the battery was returned to its normal configuration. The lead electrician suggested using a portion of the preventative maintenance task, "Quarterly Restrike Test for TSC Diesel Generator."

Since this test is routinely performed by the electricians working on the starting battery, the team leader allowed the electricians to perform the test without a working copy of the test procedure in the field. After the diesel generator was successfully started, the

engine operating switch was moved from "Run" to "Idle" to let the engine run at a slower speed and cooldown before being secured. The team determined that the failure to have a working copy of the test procedure at the engine during this post maintenance testing and failure to use the restoration guidance contained in the test procedure contributed directly to the failure to restore the technical support center diesel generator to its normal standby condition.

On June 16, 2004, the licensee performed the periodic one-hour loaded test run of the technical support center diesel generator using the preventative maintenance task,

"Quarterly Restrike Test for TSC Diesel Generator," under work Order 2715869. The diesel generator started as expected and automatically energized the technical support center electrical power distribution panel. The diesel generator ran loaded for one hour with no problems noted. The diesel generator was shutdown using the task instructions and restoration directions.

The team determined that the diesel generator failure contributed to the delay in staffing the technical support center. As a result of the diesel generator failure, the responding members of the emergency response organization were moved to the satellite technical support center adjacent to the Unit 2 control room. However, normal off-site power was restored to the technical support center before the two-hour staffing requirement of PVNGS Emergency Plan, Table 1, "Minimum Staffing Requirements for PVNGS for Nuclear Power Plant Emergencies," Revision 28, was exceeded.

3.4 Emergency Response Organization Issues a. Inspection Scope The team interviewed members of the licensee's emergency planning organization and security department and reviewed security department logs and emergency planning records to determine the cause of the multiple emergency response organization communication problems during the loss of off-site power. The team also reviewed the licensee's preliminary findings attached to significant CRDR 2715749 initiated to investigate and determine the root causes for the emergency planning problems arising from the loss of off-site power and plant trip on June 14, 2004 and attended the significant event investigation team meetings. In addition, CRDR 2716281 associated with the availability of dose projection computers was reviewed.

b. Observations and Findings The team identified several examples of an unresolved issue during the inspection. The first involved communication and coordination issues associated with notifying state and local officials of emergency classifications. The second involved the apparent unavailability of the radiological dose projection computers used to develop timely protective action recommendations to state and local authorities from the control room.

The third involved apparent delays in notifying and staffing the emergency response organization. The unresolved item is to review the licensee's root and contributing cause determination, review the licensee's assessment of the extent of condition, determine whether a violation or violations of requirements occurred, assess the significance of any findings, and assess any corrective actions implemented

(URI 05000528;529;530/2004012-001). This item has potential cross-cutting aspects in the area of human performance.

The team found that the apparent causes for the multiple emergency response organization communication problems were (1) the unanticipated loss of off-site power to all three units which resulted in the loss of normal emergency planning communications equipment, and (2) human performance errors in implementing EPIP-01, "Satellite Technical Support Center Actions," Revision 14.

When the loss of off-site power and three-unit trip occurred, two of the unit shift managers, the on-site manager, and the operations manager, who was the on-call technical support center emergency coordinator, were in the plan-of-the-day meeting in the operations support building adjacent to the Unit 2 control room. The Unit 1 shift manager returned to the Unit 1 control room and assumed the duties of emergency coordinator for all three units. When the on-site manager arrived at the Unit 1 control room to relieve the shift manager of his emergency coordinator responsibilities, Unit 2 entered an Alert emergency action level, so the on-site manager returned to Unit 2 to set up the satellite technical support center at the most affected unit. The Unit 1 shift manager had declared a Notification of Unusual Event for the loss of off-site power for greater than 15 minutes. He gave this information to the on-site manager to coordinate the emergency notification to state and local authorities.

The Unit 2 shift manager declared an Alert emergency action level based on the loss of off-site power concurrent with a loss of one of the Unit 2 emergency diesel generators for greater than 15 minutes. He directed the on-shift emergency communicator to notify state and local authorities. The emergency communicator immediately determined that the normal notification alert network system was not working and used the backup radio notification system to notify the state and local authorities within 8 minutes of the Alert classification.

When the on-site manager arrived at the Unit 2 satellite technical support center in the Unit 2 control room, he was told by the operations manager that Unit 2 had assumed all emergency communications, but he did not question him as to whether the Unit 1 Notification of Unusual Event had been sent to the state and local authorities. Apparently, there was no formal turnover of emergency communications responsibilities from the Unit 1 shift manager to the Unit 2 shift manager or the on-site manager who was going to relieve the Unit 2 shift manager of emergency coordinator responsibilities. In addition, the on-site manager and operations manager did not effectively communicate the status of the off-site notification. These two incomplete communications were human performance errors that resulted in the Unit 1 Notification of Unusual Event not being sent to state and local authorities.

The Unit 3 shift manager declared a Notification of Unusual Event for the loss of off-site power for greater than 15 minutes. There was a time delay before the Unit 3 on-shift emergency communicator attempted to send out the notification using the normal notification alert network system. When he determined that it was not working he used the backup radio notification system but did not notify the state and local authorities until 20 minutes after the Notification of Unusual Event classification. The team determined that the delay in starting the notification process and the need to use the backup radio system were human performance errors that delayed the Unit 3 Notification of Unusual

Event beyond the 15-minute requirement in EPIP-01, "Satellite Technical Support Center Actions," Revision 14.

The loss of power to the normal notification alert network system complicated the emergency notification of state and local authorities. In addition, the licensee determined that the three satellite technical support center dose projection computers had lost power, which raised questions about the licensee's ability to develop timely protective action recommendations. The apparent cause for both failures was that both systems were supplied electrical power from circuits that had no backup power supplies.

The licensee initiated CRDR 2715749 to address the loss of power to the normal notification alert network system and CRDR 2716281 to address the dose projection computers. The licensee implemented immediate corrective actions to install backup uninterruptible power supplies for both systems.

During the initial loss of off-site power and the failure of the Unit 2 Train "A" EDG, the Unit 2 shift manager and on-shift emergency communicator were delayed in sending out the emergency pager notification to the on-call emergency response organization. The team determined that the delay of 16 minutes contributed to the greater than 2-hour response time of the on-call technical support electrical engineer to the technical support center. The licensee did not activate the backup dialogic auto-dialer system for emergency response organization notification as required during an Alert emergency classification. During interviews, the Unit 2 shift manager stated that he thought that June 14, 2004, a Monday, was a normal working day and that the emergency response organization would respond to the plant-wide announcement of the Alert classification.

In fact, Monday was a normal off day for plant personnel, and the dialogic auto-dialer system should have been used to activate the emergency response organization. The team determined that this human performance error contributed to the late staffing of the technical support center and to fewer than the minimum required number of radiation protection technicians reporting to the operations support center within the required 2 hours. This failure to use EPIP-01 properly was documented in CRDR 2715749, and the licensee revised EPIP-01 to always require activation of the dialogic auto-dialer for backup emergency response organization notification.

4.0 Coordination with Off-site Electrical Organizations a. Inspection Scope The team reviewed the design and maintenance practices of the off-site electrical organizations in order to assess factors that influenced the electrical power grid failure, the extent of the system failure, and the corrective actions for preventing such failures. In addition, the licensee's coordination with off-site organizations before, during, and after the June 14, 2004, loss of offsite power event was assessed.

b. Observations and Findings As discussed previously, the loss of the PVNGS 525 kV local grid, which disabled all seven offsite power supplies for the nuclear station, was due to the cascading effect of a wide-area electrical isolation that originated from an electrical fault on a 230 kV transmission line that failed to isolate for approximately 38 seconds. The selective tripping of the breakers to isolate problems at the Westwing 230 kV substation, near

the source of the fault, did not perform as required due to a relay failure and a design that had no defense-in-depth.

The switchgear maintenance at the PVNGS 525 kV switchyard is performed by SRP personnel. The breakers undergo yearly maintenance which includes a check of the SF6 tubing, pressure switches, air system alarms, air compressor operation, breaker timing, and an operational check of the mechanisms.

The protective relaying is also inspected yearly; relay settings, software and firmware, operating characteristics, and communication circuits are verified for accuracy on the same yearly basis. The PVNGS switchyard is manned by maintenance personnel during normal working hours to allow prompt identification of any evolving problems.

The licensee has calculated the minimum switchyard voltage required to support onsite equipment to be 512 kV. The licensee has directed the APS Energy Control Center (APS-ECC), the local transmission system operator, to provide a voltage range of 525 to 535 kV for the PVNGS 500 kV switchyard. The APS-ECC continued to provide voltage within the expected band following isolation of the fault.

Of note was how closely the APS-ECC and PVNGS control room operators coordinated their efforts to reduce PVNGS switchyard voltage so reactor coolant pumps could be started during plant recovery efforts. In addition, the team found that the licensee actively coordinated the investigation into why a single insulator failure could result in a loss of offsite power and a three-unit trip and was closely involved in the development of corrective actions to improve both reliability and independence of transmission lines.

The team concluded that the remedial measures taken and planned by the offsite electrical organizations improved reliability and independence and appropriately minimized the possibility of a cascading blackout in the PVNGS 500 kV switchyard.

5.0 Risk Significance of the Event The initial risk assessment for Unit 2 resulted in a conditional core damage probability (CCDP) of 6.5 x 10-4. The initial CCDP for Units 1 and 3 was estimated as 3.2 x 10-4 per unit. Subsequently, the team, assisted by Office of Nuclear Regulatory Research personnel, completed a detailed risk assessment for the event. This analysis used the Standardized Plant Analysis Risk (SPAR) Model for Palo Verde 1, 2, & 3, Revision 3.03, to estimate the risk. The analyst assumed that 95 percent of loss of offsite power events similar to the June 14 event would be recovered within 2-1/2 hours. The resulting CCDPs were 4 x 10-5, 7 x 10-4, and 4 x 10-5 for Units 1, 2, and 3, respectively.
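Interpreted in probabilistic terms, the CCDP is the probability of core damage given the plant conditions actually experienced on June 14, and the 95 percent recovery assumption is equivalent to a nonrecovery probability of 0.05 at 2-1/2 hours:

```latex
\mathrm{CCDP} = P(\text{core damage} \mid \text{June 14 plant conditions}), \qquad
P(\text{no offsite power recovery within } 2.5\,\mathrm{h}) = 1 - 0.95 = 0.05
```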

The team gathered information concerning the failed emergency diesel generator and charging pump in Unit 2. Other equipment problems, including turbine-driven auxiliary feedwater pump drains, power-operated relief valve problems, and 13.8 kV breaker issues, were assessed. In addition, the team evaluated the ability of the licensee to recover offsite power, the probability that power could be provided to the vital buses from the gas turbine generators had it been needed, and the capability of vital and nonvital batteries to continue to provide control power had a station blackout occurred.

The team made the following assumptions critical to the analysis:

  • A Unit 2 licensed operator misaligned the suction path to Charging Pump E causing the pump to trip on low suction pressure. The pump could not have been recovered prior to postulated core damage because the pump was air bound.
  • Recovery of ac power to the first vital bus, via the gas turbine generators or offsite power, was possible within one hour following a postulated station blackout. This assumption was derived from the following facts and their associated timeframes: the east switchyard bus was energized from offsite power (32 minutes); the gas turbine generators were started and loaded (29 minutes); licensed operators determined the grid to be stable (49 minutes); and power could be aligned from the east bus to a vital 4160 volt bus (~30 minutes).

  • The probability that operators failed to restore offsite power within 1 hour was 4 x 10-2, as determined using the SPAR-H method. The nominal action failure rate of 0.001 was modified because the available time was barely adequate to accomplish the necessary breaker alignments, the operator stress level would have been high, and the actions required were of moderate complexity (see the illustrative calculation following this list).
  • The probability that operators failed to restore offsite power prior to the core becoming uncovered during a reactor coolant pump seal LOCA was estimated as 4 x 10-3. The same performance shaping factors were used as for the 1-hour recovery with the exception of the time available. The team determined that the time available was nominal, because there would be some extra time, above what is minimally required, to execute the recovery action.
  • The failure probability for recovery of offsite power prior to battery depletion during a station blackout was estimated as 4 x 10-3. The same performance shaping factors were used as for the seal LOCA recovery.
  • The team concluded that the failures of 13.8 kV feeder breakers in Units 1 and 3 would have increased the complexity in recovering offsite power for these units.

However, the potential contribution of common cause failure probabilities would not greatly impact the nonrecovery probabilities described previously for Unit 2.

  • The Palo Verde gas turbine generators used for station blackout could be started and loaded within one hour of blackout initiation. One gas turbine generator can provide power to switchyard components and supply one Unit 1 vital 4160 volt bus. Both generators can provide one vital bus on Units 1 and 2 or Units 1 and 3, but not Units 2 and 3.
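The 4 x 10-2 human error probability cited in the first SPAR-H bullet above is consistent with the SPAR-H practice of multiplying a nominal human error probability by performance shaping factor multipliers. The multiplier values shown below (10 for barely adequate time, 2 for high stress, and 2 for moderate complexity) are the generic SPAR-H defaults for those ratings and are offered only as an illustrative reconstruction of the arithmetic, not as the team's documented calculation:

```latex
\mathrm{HEP} = \mathrm{NHEP} \times \mathrm{PSF}_{\mathrm{time}} \times \mathrm{PSF}_{\mathrm{stress}} \times \mathrm{PSF}_{\mathrm{complexity}}
             = 0.001 \times 10 \times 2 \times 2 = 4 \times 10^{-2}
```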

To account for the offsite power circumstances on June 14, 2004, the team modified the SPAR model to replace industry-average loss of offsite power nonrecovery probabilities with probabilities derived from actual grid conditions and from the estimated human action failure probabilities described above. Additionally, modeling of the Palo Verde gas turbine generators was improved to better represent their contribution to providing power to vital buses if needed. The team determined that this modified SPAR model was an appropriate tool to assess the risk of this event.

The team set the likelihood of a loss of offsite power to 1.0 and set all other initiating events to the house event FALSE, reflecting the assumption that two initiating events would be unlikely to occur at the same time. The failure-to-start and failure-to-run basic events for both Emergency Diesel Generator A and Charging Pump E were set to the house event TRUE, permitting calculation of the probability that similar components would fail from common cause. The SPAR model was quantified following these modifications, and the mean of the best-estimate CCDPs was obtained through Monte Carlo simulation of the event.
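To illustrate the quantification approach described in this section, the minimal sketch below shows how house events can fix the observed plant state and how Monte Carlo sampling over the remaining uncertain basic events yields a mean CCDP. This is not the SPAR model or its software; the basic event names, point values, and lognormal uncertainty distributions are hypothetical stand-ins chosen only to make the mechanics concrete.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000  # Monte Carlo samples

# House events fix the plant state for this event-specific analysis: the
# loss-of-offsite-power initiator is TRUE (probability 1.0), all other
# initiators are FALSE, and the observed failures (EDG A, Charging Pump E)
# are TRUE so their common-cause contribution to redundant trains is
# propagated through the model.
p_loop = 1.0

# Hypothetical uncertain basic events (names, values, and distributions are
# illustrative only; they are not taken from the SPAR model).
p_edg_b     = rng.lognormal(np.log(2e-2), 0.7, n)  # EDG B fails given EDG A failure
p_gtg       = rng.lognormal(np.log(1e-1), 0.7, n)  # gas turbine generators unavailable
p_nonrec_1h = rng.lognormal(np.log(4e-2), 0.5, n)  # offsite power not restored in 1 hour
p_nonrec_cd = rng.lognormal(np.log(4e-3), 0.5, n)  # no recovery before core uncovery

for p in (p_edg_b, p_gtg, p_nonrec_1h, p_nonrec_cd):
    np.clip(p, 0.0, 1.0, out=p)  # sampled probabilities must stay in [0, 1]

# Simplified single-cut-set combination: a station blackout (EDG B and the
# alternate ac sources all unavailable) followed by failure to recover power
# before core damage.  A real SPAR quantification sums many such cut sets.
ccdp = p_loop * p_edg_b * p_nonrec_1h * p_gtg * p_nonrec_cd

print(f"mean CCDP       = {ccdp.mean():.1e}")
print(f"95th percentile = {np.percentile(ccdp, 95):.1e}")
```

A full plant model would combine hundreds of basic events and cut sets; the point of the sketch is only the structure of the calculation: fixed house events, sampled uncertain probabilities, and a mean taken over the samples.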

6.0 Assessment of Event Response a. Inspection Scope The team conducted an overall assessment of how the PVNGS facility responded to the loss of offsite power event; how the licensee implemented emergency procedures, assessed the apparent causes of failures, and determined when the facility was ready for restart; and when appropriate, the team assessed the effectiveness of immediate corrective actions.

b. Observations and Findings Although the circumstances were largely out of the control of the PVNGS licensee, the team found it unacceptable for a single phase-to-ground fault on a 230 kV transmission line to cause a loss of all power to the PVNGS switchyard and a trip of all three PVNGS units.

Nevertheless, the event resulted in the identification of several design improvements which improved both the reliability and independence of the 525 kV grid local to the PVNGS switchyard.

With respect to how well the PVNGS facility responded, overall, the team found that the PVNGS facility responded in a manner consistent with its design for a loss of offsite power, with some exceptions. One of those exceptions, involving the failure of the Unit 2 EDG to run, was notable because it resulted in some increased risk to the facility. The other exceptions, while less notable individually, were numerous and represented a larger concern when considered in the aggregate. Of note was the self-critical nature of the licensee's efforts to understand and correct emergency response organization issues.

The team found that the licensee's efforts to identify each issue, determine the root and/or apparent cause, and develop corrective actions were generally appropriate, with few exceptions. The team made several observations regarding how well the licensee integrated post-trip review efforts and communicated with the NRC. For example, with respect to effective communications, while the team knew that the licensee had planned to correct any transmission and distribution issues prior to restarting the facility, the licensee did not effectively communicate that to NRC management during a telephone conference. Another example involved integration of findings associated with each unit's response to the loss of offsite power. The licensee did not identify, until after restarting Unit 3, that the main generator exciter operated differently than on the other two units. As a result, troubleshooting efforts were limited by plant operations.

7.0 Exit Meeting Summary On June 18, June 24, and July 7, 2004, the team presented the preliminary observations from the Augmented Inspection in progress. On July 12, 2004, the Augmented Inspection Team Leader presented the results of the inspection in a public meeting held at Estrella Community College in Goodyear, Arizona. The results of the inspection, which was conducted June 14 through July 12, 2004, were presented to Mr. J. Levine and other members of his staff. Mr. Overbeck acknowledged the observations presented. Proprietary information reviewed by the team was returned to the facility.

ATTACHMENT 1 SUPPLEMENTAL INFORMATION

KEY POINTS OF CONTACT

Licensee
Jim Levine
Greg Overbeck
David Mauldin
Dennis W. Gerlach, Manager, Transmission & Generation Operations, SRP
Mike Gentry, Manager, Grid Operations-PDO, Transmission and Generation Dispatching, SRP
Giang Vuong, Protection Engineer, SRP
Edmundo Marquez, Manager, System Protection, Electronic Systems, SRP
Cary B. Deise, Director, Transmission Planning and Operations, APS
Tom Glock, Power Operations Manager, Power Ops Tech Services, APS
Steven Phegley, Section Leader, Protection, Metering, & Automated Control, APS
Steven Kestler, Electrical Engineer, Palo Verde Nuclear Station
Bajranga Aggarwal, Systems Engineer, APS
John Hesser, Director of Emergency Services
Larry Leavitt, Significant CRDR Lead Investigator
David Crozier, Program Leader for Emergency Planning
Martin Rhodes, Security Team Leader
Danne Cole, Security Section Leader

NRC

ITEMS OPENED
05000528/2004012-01; 05000529/2004012-01; 05000530/2004012-01 - URI
05000528/2004012-02; 05000529/2004012-02; 05000530/2004012-02 - URI
05000528/2004012-03; 05000529/2004012-03; 05000530/2004012-03 - URI


DOCUMENTS REVIEWED Drawings NUMBER TITLE REVISION 01 -J-SPL-003 Control Logic Diagram Essential Spray Pond Auviliary 3 Pumps, Day Tk Valve & Alarms 01-J-EWL-001 Control Logic Diagram Essential Cooling Water Pumps 2 and Surge Tank Fill Valves 01-J-EWL-002 Control Logic Diagram Essential Cooling Water Loop A 0 X-Tie Valves & System Alarms 01 -J-SPL-001 Control Logic Diagram Essential Spray Pond Pumps 3 01-M-EWP-001 P&l Diagram Essential Cooling Water System 29 01 -M-SPP-001 P&l Diagram Essential Spray Pond System Sheet 1 of 3 35 01 -M-SPP-001 P&l Diagram Essential Spray Pond System Sheet 2 of 3 35 01 -M-SPP-001 P&I Diagram Essential Spray Pond System Sheet 3 of 3 35 01 -M-SPP-002 P&l Diagram Essential Spray Pond System 12 A-774-10.110 Palo Verde Station 500KV Switchyard PL912 Closing and 0 SRP Tripping Schematic A774-1 0.1 11/1 Palo Verde Station 500KV Switchyard 500KV Breaker 0 SRP PL912 Schematic Diagram A774-10.112 Palo Verde Station 500KV Switchyard PL912 Fail/Fault 0 SRP and CT Fail/Fault Schematic Diagram A774-10.113 Palo Verde Station 500KV Switchyard PL915 Fail/Fault 0 SRP and CT Fault Schematic Diagram A-774-10.13 Palo Verde Station 500KV Switchyard 500KV Breaker 9 SRP PL932 Closing and Tripping Schematic Diagram A-774-10.14 Palo Verde Station 500KV Switchyard 500KV Switchyard 9 SRP 500KV Breaker Failure & Fault Monitor PL992 & PL995 Schematic Diagram

Drawings NUMBER TITLE REVISION A-774-10.15 Palo Verde Station 500KV Switchyard 500KV Breaker 12 SRP PL915 Closing and Tripping Schematic Diagram A-774-10.20 Palo Verde Station 500kV Switchyard 500kV Breaker PL 10 SRP 942 Closing & Tripping Schematic Diagram A-774-10.21 Palo Verde Station 500kV Switchyard 500kV Breaker PL 10 SRP 945 Closing & Tripping Schematic Diagram A-774-10.36 Palo Verde Station 500KV Switchyard 500KV Breaker 6 SRP PL915 Schematic Diagram A-774-10.42 Palo Verde Station 500KV Switchyard 500KV Breaker PL 10 SRP 945 Schematic Diagram A-774-10.49 Palo Verde Station 500KV Switchyard 500KV Breaker 7 SRP PL935 Closing and Tripping Schematic Diagram A-774-10.5 Palo Verde Station 500KV Switchyard Devers Line 5 SRP Relaying Schematic Diagram A-774-10.50 Palo Verde Station 500KV Switchyard 500KV Breaker 7 SRP PL938 Closing and Tripping Schematic Diagram A-774-10.82 Palo Verde Station 500KV Switchyard PL972 Closing and I SRP Tripping Schematic Diagram A-774-10.86 Palo Verde Station 500KV Switchyard PL975 Closing and I SRP Tripping Schematic Diagram A-774-10.90 Palo Verde 500KV Switchyard 500KV Hassayampa #1 3 SRP Line Rel 87La Schematic Diagram A-774-10.91 Palo Verde 500KV Switchyard 500KV Hassayampa #1 2 SRP Line Rel 87La Schematic Diagram A-774-20.3 Palo Verde Substation Westwing #1 500KV Line I SRP Relaying2lLa Schematic Diagram Sheet 1 A-774-20.4 Palo Verde Substation Westwing #1 500KV Line 1 SRP Relaying2lLa Schematic Diagram Sheet 2

Drawings NUMBER TITLE REVISION A-774-20.6 Palo Verde Substation Westwing #1 500KV Line 1 SRP Relaying2lLb Schematic Diagram Sheet I A-774-20.7 Palo Verde Substation Westwing #1 500KV Line 1 SRP Relaying2I Lb Schematic Diagram Sheet 2 A-774-20.9 Palo Verde Substation Westwing #1 500KV Line Relaying I SRP 87Lc Schematic Diagram Sheet 2 A-774-8.2 Palo Verde 500KV SWYD. One Line Diagram SH2 Bays 1 12 SRP & 2 IN-6W A-774-8.3 Palo Verde Station 500kV Switchyard IN-6W 500KV Bays 14 SRP 3 & 4 One Line Diagram Sh.3 K-774-9.1 Palo Verde Substation Bay I Three Line Diagram 11 SRP K-774-9.3 Palo Verde Station 500KV Switchyard Bay 3 Three Line 12 SRP Diagram K-774-9.4 Palo Verde Substation 500KV Switchyard Bay 4 Three 18 SRP Line Diagram K-774-9.6 Palo Verde Station 500KV Switchyard Bay 7 Three Line 1 SRP Diagram G-33417 Sheet 1 of 2, Westwing 230KV Switchyard USBR Liberty 12 APS & Pinn Pk Line Relaying CT/PT Schematic G-33417 Sheet 2 of 2, Westwing 230KV Switchyard WAPA 230KV 12 APS Liberty & Pinn Pk Line Relaying CT-PT Schematic G-33434 Sheet 1 of 1, Westwing 230KV Switchyard WAPA 230KV 9 APS Liberty Line Relaying DC Schematic G-33451 Westwing 230KV Switchyard WAPA 230KV Liberty Line & 14 APS West Bus Tie PCB WW1022 DC Schematic G-33453 Sheet 1 of 1,Westwing 230KV Switchyard WAPA 230KV 16 APS Liberty & Pinn Pk Line PCB WW1 126 Schematic

Drawings NUMBER TITLE REVISION G-33493 Sheet 1 of 2, Westwing 230KV Switchyard USBR Liberty 1 APS & Pinn Pk Line CCPD Jct. Box Wiring Diagram 01-E-MAB-001 Elementary Diagram Main Generation System Main 13 PVNGS Generator Three Line Metering and Relaying 01 -E-MAB-0012 Elementary Diagram Main Generator System Main 9 PVNGS Generator Three Line Metering and Relaying 01-E-MAB-004 Elementary Diagram Main Generation System Main 8 PVNGS Transformer Three Line Diff, Metering and Relaying 01-E-MAB-006 Elementary Diagram Main Generation System Generator 3 PVNGS & Transformer Primary Protection Unit Tripping 01-E-MAB-007 Elementary Diagram Main Generation System Generator 5 PVNGS & Transformer Primary Protection Unit Tripping 01-E-MAB-008 Elementary Diagram Main Generation System Generator 5 PVNGS & Transformer Primary Protection Unit Tripping 01-E-MAB-009 Elementary Diagram Main Generation System Generator 4 PVNGS & Transformer Primary Protection Unit Tripping 01 -E-MAB-010 Elementary Diagram Main Generation System Generator 8

& Transformer Back-up Protection Unit Tripping 01-E-MAB-011 Elementary Diagram Main Generation System Generator 7

& Transformer Back-up Protection Unit Tripping, 01-E-MAB-011 Elementary Diagram Main Generation System Generator 12

& Transformer Back-up Protection Unit Tripping 01-E-MAB-013 Elementary Diagram Main Generation System Generator 10

& Transformer Unit Tripping Cabling Block Diagram 01-E-NHA-001 Single Line Diagram 480V Non-Class 1E Power System 21 Motor Control Center 1E-NHN-M13 01 -E-NHA-010 Single Line Diagram 480V Non-Class 1E Power System .19 Motor Control Center 1E-NHN-M10

Drawings NUMBER TITLE REVISION 01-E-NNA-001 Single Line Diagram 120VAC Non-Class 1E Ungrounded 19 Instrument and Control Panel 1E-NNN-D1 I 01 -E-NNA-002 Single Line Diagram 120VAC Non-Class 1E Ungrounded 19 Instrument and Control Panel 1E-NNN-D12 01-E-PHA-001 Single Line Diagram 480V Class 1E Power System Motor 16 Control Center 1E-PHA-M31 01 -E-PHA-002 Single Line Diagram 480V Class 1E Power System Motor 16 Control Center 1E-PHB-M32 13-E-MAA-001 Main Single Line Diagram 21 G-32900 Sheet 1 of 2, Westwing 500KV Switchyard Bays 1 - 9 One 23 Line Diagram G-32900 Sheet 2 of 2, Westwing 500KV Switchyard Bays 10 - 18 * 12 One Line Diagram G-32901 Sheet 1 of 2, Westwing 500KV Switchyard Transformer 28 Bays 1 & 4 One Line Diagram G-32901 Sheet 2 of 2, Westwing 500KV Switchyard Bays 7,10,13 10

& 16 One Line Diagram G-33300 Westwing 230KV Switchyard Bays 1-9 One Line Diagram 25 G-33301 Sheet 1 of 2, Westwing 230KV Switchyard Bays 10-18 31 One Line Diagram Condition Report/Disposition Reports CRDR 2715726 CRDR 2716011 CRDR 2715941 CRDR 2715667

CRDR 2715659 CRDR 2715768 CRDR 2715709 CRDR 2715727 CRDR 2715749 CRDR 2716281 CRDR 2715669 Miscellaneous Documents:

NUMBER TITLE REVISION/DATE Security Computer Alarm logs for June 14, 2004 Security Access Transaction Records for June 14, 2004 Day Shift Security Department Logs for June 14, 2004 Sally Port Vehicle Barrier Operating Instructions, as posted on June 14, 2004 Sally Port Vehicle Barrier Operating Instructions, revised on June 17, 2004 PVNGS Emergency Plan, Table 1, 'Minimum 28 Staffing Requirements for PVNGS for Nuclear Power Plant Emergencies"

Miscellaneous Documents:

NUMBER TITLE REVISION/DATE WO# 2623863 Monthly Inspection of TSC DG Battery and June 9, 2004 Battery Charger WO# 2715869 Perform the Restrike Test for the TSC Diesel June 16, 2004 Generator APS Letter Robert Smith to N. Bruce et al., Final June 5, 2002 Report for the 2002 Palo Verde /Hassayampa Operating Study 2003-04 Winter Palo Verde Unit 2 Uprating Net November 2003 Generating Capacity of 1408MW for Updated Final Safety Analysis Report (UFSAR)

Procedure No. Palo Verde Transmission System Interchange Revision 8 PVTS-01 Scheduling and Congestion Management Procedure PVNGS Technical Specifications, Through November 21, 2003, Amendment No. 150, Corrected December 12, 2003 NRC Letter M Fields to APS, Palo Verde Nuclear Generating Station Units 1, 2 and 3 - Issuance of Amendments Re: Changes Related to Double Sequencing and Degraded Voltage Instrumentation (TAC Nos. MA4406, MA4407, and MA4408)

APS Letter 102-04310-WEIISABIRKR, July 16, 1999 Response to NRC Request for Additional Information Regarding Proposed Amendment to Technical Specifications (TS) 3.8.1, AC Sources-Operating and 3.3.7, Diesel Generator (DG)-Loss of Voltage Start (LOVS),

10CFR 50.59 Screening and Evaluation, Revise Revision 0 the UFSAR, Technical Specifications, and Technical Specifications Bases to enhance the means of complying with the requirements of Regulatory Guide 1.93 for offsite power sources

Miscellaneous Documents:

NUMBER TITLE REVISION/DATE 10CFR 50.59 Screening and Evaluation, S-04- Revision 0 0009, Updated Transmission Grid Stability Study: Salt River Project 20031126 (LDCR 2003F040)

Visual Examination of Welds report number 04-250, component 1-CH-GCBA 1 WOOA Visual Examination of Welds report number 04-250, component 1 CHN-F36 Purification Filter Palo Verde Nuclear Generating Station Design 16 Basis Manual, EW System Palo Verde Nuclear Generating Station Design 13 Basis Manual, SP System PV Unit 2 Archived Operator Log 06/14/2004, 12:10:47 AM, through 06/15/2004, 11:10:30 PM Bulletin 74-09 Deficiency in General Electric Model 4KV August 6, 1974 Magne-Blast Breakers Information General Electric Magne-Blast Circuit Breaker April 17, 1984 Notice 84-29 Problems Information Potential Failure of General Electric Magne- June 12, 1990 Notice 90-41 Blast Circuit Breakers and AK Circuit Breakers Information Grease Solidification Causes Molded Case April 7,1993 Notice 93-26 Circuit Breaker Failure To Close Information Misadjustment Between General Electric 4.16- December 3,1993 Notice 93-91 KV Circuit Breakers and Their Associated Cubicles Information Inoperability of General Electric Magne-Blast January 7, 1994 Notice 94-02 Breaker Because of Misalignment of Close-Latch Spring Information Failures of General Electric Magne-Blast Circuit August 1,1994 Notice 94-54 Breakers To Latch Closed

Miscellaneous Documents:

NUMBER TITLE REVISION/DATE Information Hardened or Contaminated Lubricants Cause April 21, 1995 Notice 95-22 Metal-Clad Circuit Breaker Failure Information Failures of General Electric Magne-Blast Circuit August 12, 1996 Notice 96-43 Breakers Unit 3 4 Pt Trend chart,"Core Differential Pressures for Loops 1A, IB, 2A, 2B", start time 07:41:15 through 07:41:45 Unit 1 4 Pt Trend chart, "Letdown System Temperature and Flow," start time 6/14/04 07:40:00 through 6/14/04 09:40:00 PV Unit 1 and Unit 3 Archived Operator Logs 6/14/2004 1:30 a.m. through 6/15/2004 5:35 a.m.

Calculation 13- CVCS Letdown Heat exchanger to Purification MC-CH-508 Filters, Unit 1 350 F Temperature Event During Plant Trip of 6-14-04 Procedures:

NUMBER TITLE REVISION/DAT E

40EP-9EO07 Loss of Offsite Power/Loss of Forced Circulation 10 40EP-9EO1 0 Standard Appendices 33 400P-9CH01 CVCS Normal Operations 35 20SP-OSK08 Compensatory Measures for the Loss of Security 27 Equipment Effectiveness

21 SP-OSK1 I Security Contingencies 13 2ODP-OSK29 Security System Testing 27 EPIP-01 Satellite Technical Support Center Actions 14 EPIP-01 Satellite Technical Support Center Actions 15 EPIP-99 EPIP Standard Appendices, Appendix C, 'Forms" 1 EPIP-99 EPIP Standard Appendices, Appendix D, 1

"Notification" EPIP-99 EPIP Standard Appendices, Appendix H, "Autodialer 1 Activation" 20SP-OSKO8 Compensatory Measures for the Loss of Security 27 Equipment Effectiveness 21 SP-OSK11 Security Contingencies 13 2ODP-OSK29 Security System Testing 27 41AL-1 RK6B Panel B06B Alarm Responses, 'Mn Gen Neg Seq 32 Pre-Trip 01 -P-CHF-201 Auxiliary Building Isometric Chem, Volume Control 6/2/1998 System Letdown Heat Exchanger

ATTACHMENT 2 AUGMENTED INSPECTION TEAM CHARTER

UNITED STATES NUCLEAR REGULATORY COMMISSION

REGION IV

611 RYAN PLAZA DRIVE, SUITE 400 ARLINGTON, TEXAS 76011-4005 June 15, 2004 MEMORANDUM TO: Anthony T. Gody, Chief Operations Branch Division of Reactor Safety FROM: Bruce Mallett, Regional Administrator IRA!

SUBJECT: AUGMENTED INSPECTION TEAM CHARTER; PALO VERDE NUCLEAR GENERATING STATION, UNITS 1, 2, AND 3, COMPLETE LOSS OF OFFSITE POWER AND MULTIPLE MITIGATING SYSTEM FAILURES In response to the complete loss of all offsite power sources, the trip of all three units, and the Unit 2 Emergency Diesel Generator 'A,' failing to function as required at Palo Verde Nuclear Generating Station on June 14, 2004, an Augmented Inspection Team is being chartered.

There was no impact to public health and safety associated with the event. You are hereby designated as the Augmented Inspection Team (AIT) leader.

A. Basis On June 14, 2004, at 9:45 a.m. CDT, all offsite power supplies to the Palo Verde Nuclear Generating Station were disrupted, with a concurrent trip of all three units.

Additionally, the Unit 2 Emergency Diesel Generator "A" failed to function as required.

As a result, the licensee declared a Notice of Unusual Event (NOUE) for all three units at about 9:50 a.m. CDT and elevated to an Alert for Unit 2 at 9:54 a.m. CDT. The licensee and NRC resident inspectors also reported a number of other problems, including the failure of Unit 2 Charging Pump "E," the failure of a Unit 3 steam bypass control valve, multiple breakers failing to operate during recovery operations, and emergency response facility and security interface issues which may have impeded emergency responders. This event meets the criteria of Management Directive 8.3 for a detailed followup inspection in that it involved multiple failures of systems used to mitigate an actual event. The initial risk assessment, though subject to some uncertainties, indicates that the conditional core damage probability was in the high E-4 range.

Because the initial risk assessment was in the range for consideration of an AIT and because of multiple failures in systems used to mitigate an actual event, it was decided that an AIT is the appropriate NRC response for this event.

The AIT is being dispatched to obtain a better understanding of the event and to assess the responses of plant equipment and the licensee to the event. The team is also tasked with reviewing the licensee's root-cause analyses.

B. Scope Specifically, the team is expected to perform data gathering and fact-finding in order to address the following:

1. Develop a complete sequence of events related to the loss-of-offsite power, the multiple unit trips, and the Unit 2 emergency diesel generator failure.

2. Assess the performance of plant systems in response to the event, including any design considerations that may have contributed to the event.

3. Assess the adequacy of plant procedures used in response to the event.

4. Assess the licensee's response to the event, including operator actions and emergency declarations, and any emergency response facility or security interface issues that may have adversely affected response to the event.

5. Assess the licensee's determination of the root and/or apparent causes of offsite power loss, emergency diesel generator failure, and other mitigating system(s) failures.

6. Based upon the licensee's cause determinations, review any maintenance related actions which could have contributed to the event initiation or produced subsequent response problems.

7. Review the licensee's assessment of coordination activities with off-site electrical dispatch organizations prior to and during the event.

8. Provide input to the regional Senior Reactor Analyst for further assessment of risk significance of the event.

C. Guidance The Team will report to the site, conduct an entrance meeting, and begin inspection no later than June 16, 2004. A report documenting the results of the inspection should be issued within 30 days of the completion of the inspection. While the team is on site, you will provide daily status briefings to Region IV management. The team is to emphasize fact-finding in its review of the circumstances surrounding the event, and it is not the responsibility of the team to examine the regulatory process. The team should notify Region IV management of any potential generic issues identified related to this event for discussion with the Program Office. Safety concerns that are not directly related to this event should be reported to the Region IV office for appropriate action.

For the period of the inspection, and until the completion of documentation, you will report to the Regional Administrator. For day-to-day interface you will contact Dwight Chamberlain, Director, Division of Reactor Safety. The guidance in Inspection Procedure 93800, "Augmented Inspection Team," and Management Directive 8.3, "NRC Incident Investigation Procedures," apply to your inspection. This Charter may be modified should the team develop significant new information that warrants review. If you have any questions regarding this Charter, contact Dwight Chamberlain at (817) 860-8180.

Distribution:

B. Mallett T. Gwynn J. Dixon-Herrity J. Dyer R. Wessman T. Reis H. Berkow S. Dembeck M. Fields D. Chamberlain A. Howell C. Marschall T. Pruett J. Clark V. Dricks W. Maier N. Salgado G. Warnick J. Melfi

ATTACHMENT 3 Sequence of Events Electrical Sequence of Events 07:40:55.747 Fault #1 inception Fault #1 type = C-N Fault #1 cause/location = Phase down (broken bells)

reported near 115th Ave. & Union Hills (WW-LBX Line)

At Westwing, the Liberty line relays operated properly and issued a trip signal. Incorporated in this scheme is a Westinghouse high-speed "AR" auxiliary tripping relay that is used to "multiply" that trip signal toward both trip coils of two breakers (WW1022 & WW1126). The "AR" relay failed (partially) and issued the trip signal to breaker WW1126 only. Since the trip signal was never successfully issued to WW1022, breaker failure for WW1022 was also never initiated (this would have cleared the Westwing 230kV West bus and isolated the fault). Therefore, the "remote" ends of all lines feeding into the 525kV and 230kV yards were required to trip to isolate the fault.

07:40:55.814 4.0 cycles after fault #1 inception WW1126 opened (LBX / PPX 230kV crossover breaker)

07:40:55.822 4.5 cycles after fault #1 inception LBX1282 opened (Westwing 230kV Line)

07:40:56.115 22.1 cycles after fault #1 inception AFX732 & AFX735 opened (Westwing 230kV Line)

07:40:56.122 22.5 cycles after fault #1 inception YP452 & YP852 opened (Westwing 525kV Line)

07:40:56.136 23.3 cycles after fault #1 inception WW1426 & WW1522 opened (Agua Fria 230kV Line)

07:40:56.142 23.7 cycles after fault #1 inception WW856 & WW952 opened (Yavapai 525kV Line)

07:40:56.165 25.1 cycles after fault #1 inception DV322 & DV722 & DV962 opened (Westwing 230kV Line)

07:40:56.172 25.5 cycles after fault #1 inception WW1726 & WW1822 opened (Deer Valley 230kV Line)

07:40:56.196 26.9 cycles after fault #1 inception RWYX482 & RWYX582 & RWYX782 opened (Westwing 230kV Line)

(Waddell 230kV Line)

(230/69kV Transformer #8)

07:40:56.515 46.1 cycles after fault #1 inception WW1222 opened (Pinnacle Peak 230kV Line)

t = unknown Surprise Lockout "L" operated (230/69kV Transformer #4 Differential & B/U Over-Current)

07:40:56.548 48.1 cycles after fault #1 inception SC622 & SC922 & SC262 opened (Surprise 230/69kV Transformer #4)

07:40:57.549 108.1 cycles after fault #1 inception SC1322 opened (Westwing 230kV Line)

07:40:57.800 123.2 cycles after fault #1 inception RWP-CT2A opened (Redhawk Combustion Turbine 2A)

07:40:57.807 123.6 cycles after fault #1 inception RWP-ST1 opened (Redhawk Steam Turbine 1)

07:40:57.814 124.0 cycles after fault #1 inception RWP-CT1A opened (Redhawk Combustion Turbine 1A)

07:40:58.339 155.5 cycles after fault #1 inception RIV762 opened (Westwing 69kV Line)

07:40:58.372 157.5 cycles after fault #1 inception HH762 opened (Westwing 69kV Line)

t = unknown Westwing Lockout "AK" operated (230/69kV Transformer #11 Differential & B/U Over-Current)

07:40:59 (EMS) WW2026 & WW2122 opened (Westwing 230/69kV Transformer #11 - High Side)

07:40:59.272 211.5 cycles after fault #1 inception WK362 opened (Westwing 69kV Line)

07:40:59.489 224.5 cycles after fault #1 inception HMX935 & HAAX938 opened (Hassayampa - Arlington 525kV Line)

(Time stamp provided by SRP)

07:41:00 (EMS) WW862 & WW962 & WW1362 opened (Westwing 230/69kV Transformer #11 - Low Side)

07:41:00.392 278.7 cycles after fault #1 inception WVV752 opened (South 345kV Line)

07:41:01.982 Fault #1 type changed = B-C-N

07:41:02.144 383.8 cycles after fault #1 inception PSX832 closed auto (Perkins Cap-Bank Bypass)

(Time stamp provided by SRP)

07:41:02.154 Fault #1 type changed = C-N 07:41:02.799 Fault #1 type changed = B-C-N 07:41:03.966 493.1 cycles after fault #1 inception SC562 opened (McMicken 69kV Line)

07:41:05.373 577.6 cycles after fault #1 inception MQ562 opened (McMicken 69kV Line)

07:41:07.849 12.102 seconds after fault #1 inception HAAX922 & HAAX925 opened (Palo Verde 525kV Line #2)

(Time stamp provided by SRP)

07:41:07.851 12.104 seconds after fault #1 inception PLX972 & PLX975 opened (Hassayampa 525kV Line #2)

(Time stamp provided by SRP)

07:41:07.859 12.112 seconds after fault #1 inception HAAX932 opened (Palo Verde 525kV Line #1)

(Time stamp provided by SRP)

07:41:07.875 12.128 seconds after fault #1 inception PLX982 & PLX985 opened (Hassayampa 525kV Line #3)

(Time stamp provided by SRP)

07:41:07.878 12.131 seconds after fault #1 inception HAAX912 & HAAX915 opened (Palo Verde 525kV Line #3)

(Time stamp provided by SRP)

07:41:07.880 12.133 seconds after fault #1 inception PLX942 & PLX945 opened (Hassayampa 525kV Line #1)

(Time stamp provided by SRP)

07:41:08.104 Fault #1 type changed = A-B-C-N 07:41:10.445 14.698 seconds after fault #1 inception NV1052 & NV1156 opened (Westwing 525kV Line)

07:41:10.456 14.709 seconds after fault #1 inception WW556 & WW652 opened (Navajo 525kV Line)

07:41:12 (EMS) WW424J opened (Westwing 230kV West Bus Reactor)

07:41:20.005 24.258 seconds after fault #1 inception PLX992 opened (Devers 525kV Line)

(PLX995 out-of-service at this time)

(Time stamp provided by SRP)

07:41:20.113 24.366 seconds after fault #1 inception PLX932 & PLX935 opened (Rudd 525kV Line)

(Time stamp provided by SRP)

07:41:20.145 24.398 seconds after fault #1 inception RUX912 & RUX915 opened (Palo Verde 525kV Line)

(Time stamp provided by SRP)

07:41:20.864 25.117 seconds after fault #1 inception PLX912 & PLX915 opened (Westwing 525kV Line #1)

(Time stamp provided by SRP)

07:41:20.873 25.126 seconds after fault #1 inception WW1456 & WW1552 opened (Palo Verde 525kV Line #2)

07:41:20.874 25.127 seconds after fault #1 inception WW1156 & WW1252 opened (Palo Verde 525kV Line #1)

07:41:20.895 25.148 seconds after fault #1 inception PLX922 & PLX925 opened (Westwing 525kV Line #2)

(Time stamp provided by SRP)

07:41:23.848 28.101 seconds after fault #1 inception PLX988 opened (Palo Verde Unit-3)

(Time stamp provided by SRP)

07:41:24.280 System Frequency = 59.514 Hz (Measured at APS Reach Substation)

07:41:24.641 28.894 seconds after fault #1 inception PLX918 opened (Palo Verde Unit-1)

(Time stamp provided by SRP)

07:41:24.652 28.905 seconds after fault #1 inception PLX938 opened (Palo Verde Unit-2)

(Time stamp provided by SRP)

07:41:25 (DOE) ED4-122 & ED4-322 opened (DOE ED4 Substation)

Tripped on under-frequency (Note frequency low at 07:41:24.280)

07:41:25 (EMS) ML142, ML542, ML1042 & ML1442 opened (Moon Valley 12kV Feeders)

Tripped on under-frequency (Note frequency low at 07:41:24.280)

07:41:28 (DOE) MEX794 closed auto (Mead Cap Bank bypass)

07:41:34.615 38.868 seconds after fault #1 inception MEX1092 & MEX1692 opened (Perkins - Westwing 525kV Line)

Fault #1 cleared 07:42:22.773 System Frequency = 59.770 Hz (Measured at APS Reach Substation)

ATTACHMENT 3 Sequence of Events Unit I Sequence of Events 0741 Startup Transformer# 2 Breaker 945 Open Excessive Main Generator and Field Currents Noted Engineered Safeguards Features Bus Undervoltage Loss of Offsite Power Load Shed Train A and B Emergency Diesel Generator Train A and B Start Signal Low Departure from Nucleate Boiling Ratio Reactor Trip Master Turbine Trip Main Turbine Mechanical Over Speed Trip Emergency Diesel Generator 'A' Operating (10 Second Start Time)

Emergency Diesel Generator "B" Operating (13 Second Start Time*)

0751 Manual Main Steam Isolation System Actuation 0758 Declared Notice of Unusual Event (loss of essential power for greater than 15 minutes)

0810 Both Gas Turbine Generator Sets Started, 1 GTG is supplying power to NAN S07 0813 Closed 525 kV breaker 552-942. The East bus is powered from Hass #1 0838 Restored power to Startup Transformer X01 0844 Restored power to Startup Transformer X03 0855 Fire reported in 120 ft Aux building. Fire brigade confirmed that no fire existed but paint was heated causing fumes. Later it was confirmed that fumes were caused by the elevated temperature of the letdown heat exchanger when it failed to isolate.

0900 HI Temp Abnormal Operation Procedure entered for Letdown heat exchanger outlet temperature offscale high.

1002 Reset Generator Protective Trips (volts/hertz; Backup under-frequency)

Palo Verde Switchyard Ring Bus restored 1159 Paralleled DG B with bus and cooled down engine restoring the in house buses 1207 Emergency Coordinator terminated NUE for all three units 1248 Paralleled DG A with bus and cooled down 2209 Noted grid voltage greater than 535.5 volts Shift Manager Coordinated with ECC 6/15 0005 Restored CVCS letdown per Std Appendix 12 started Chg Pump 'A'

0155 Established RCP seal injection and controlled bleed off 0241 Started 2A RCP, had to secure due to low running amps other two units had RCP's running (what were the amps at the time) exiting of EOP delayed due to switchyard conditions 0305 Exited Loss of Letdown AOP after restoration of letdown per Standard App. 12 of EOP's 0345 Palo Verde Switchyard E-W voltage at approx. 530.7 KV 0818 Started RCP's 2A and 1A 0920 Started RCP's 2B and 1B 0930 Exited EOP 40EP- 9E007 Loss of Offsite Power/Loss of Forced Circulation

ATTACHMENT 3 Sequence of Events Unit 2 Sequence of Events 0740 4.16KV Switchgear 3 Bus Trouble Alarm Generator Negative Sequence Alarm 4.16KV Switchgear 4 Bus Trouble Alarm 0741 Main Transformer B Status Trouble Alarm Main Transformer A Status Trouble Alarm ESF Bus Undervoltage Channel A-2 ESF Bus Undervoltage Channel B-2 LOP/Load Shed B ESF Bus Undervoltage Channel B-3 DG Start Signal B LOP/Load Shed A ESF Bus Undervoltage Channel A-4 DG Start Signal A LO DNBR Channels A, B, C, & D Trip RPS Channels A, B, C, & D Trip Main Generator 525KV Breaker 935 Open Mechanical Overspeed Trip of Main Turbine 0751 Manually initiated Main Steam Isolation Signal 0755 Declared an Alert for Loss of All Offsite Power to Essential Busses for Greater than 15 minutes 0901 Energized 13.8KV Busses 2E-NAN-S03 and 2E-NAN-S05 0927 Energized 4.16KV Bus 2E-PBA-S03 0951 Exited Alert 1001 Energized 13.8KV Bus 2E-NAN-S01 1024 Energized 13.8KV Bus 2E-NAN-S02 1132 Started Charging Pump A 1618 Engineering and Maintenance review concluded that Charging Pump E was available for service after fill and vent 1714 Started Charging Pump E 1716 Started RCP 1A 1722 Started RCP 2A 1806 Stopped RCPs 1A and 2A on low motor amperage. ECC contacted to adjust grid voltage as-low-as-possible

2040 Started RCPs 1A and 2A 2051 Stopped RCPs 1A and 2A on low running amperage 6/15 0400 Started RCPs 1A and 2A 0610 Exited Emergency Operating Procedures

ATTACHMENT 3 Sequence of Events Unit 3 -Sequence of Events 07:40 Generator Under Voltage Negative Sequence Trip Master Turbine Trip 3ENANS01 Bus Under Voltage Reactor Trip Circuit Breakers Open 07:41 Exciter Voltage Regulator Mode Change Unit 3 Gen 525 KV bkr 985 opens phase Gen B &Ccurrent alarm generator field current ESF bus undervoltage ch A-2 LOP load shed B EDG B start signal CEDM MG set A & B input Bkr open LOP load shed A EDG A start signal Turbine overspeed mechanical trip ESF Bus UV A-1 ;A-4 alarm 13.8 Kv swgr 1 & 2 load shed Main Generator Gross MW low (402 MW)

Power load Unbalance alarm VOPTChA,B,C&D Turbine Bypass Gp X quick open 07:42 lo SG press Unit 3 Gen 525 Kv Bkr 988 open 07:43 MSIS actuates automatically on Lo SG press 23:41 started RCP 1A 23:45 started RCP 2A 6/15 00:40 exited EOP 16:37 Started RCP 16 6/16.

02:07 started RCP 2B

ATTACHMENT 3 Sequence of Events Miscellaneous 0741 Loss of Off-Site Power 0750 0754 Unit 2 Alert 0758 Unit 1, 3 NOUE 0759 Unit 2 NAN sent by radio 0800 0807 Unit 1 NAN signed (not sent)*

0815 0817 TSC D/G Tripped*

OSC Staffed 0818 Unit 2 NAN initiated*

0819 ERDS activated 0840 NRC ENS notification 0854 0900 Unit 1 Intermediate Bus (S06) re-energized from S/U Transformer



ATTACHMENT 3 Sequence of Events Miscellaneous 0930 TSC Staff relocated to STSC I1 0936 I 4c; 0951 Unit 2 downgraded to NOUE 0952 EOF staffed TSC staff moved from STSC to TSC


1001 Last TSC Key person on-site 1005 Unit 2 NOUE transmitted from EOF 1027 TSC staffed*

EC turnover complete




1042 1045 1207 Event Terminated 1215 NAN for event termination transmitted by EOF 1216 TSC secured

Exempt From Public Disclosure in Accordance with 10 CFR 2.390 ATTACHMENT 4 INFORMATION EXEMPT FROM PUBLIC DISCLOSURE Exempt From Public Disclosure in Accordance with 10 CFR 2.390

Exempt From Public Disclosure in Accordance with 10 CFR 2.390 ATTACHMENT 4 8.0 Proprietary Information 8.1 Electrical Grid Stability a. Inspection Scope The team reviewed the local electric grid stability following the June 14, 2004, loss-of-offsite power event to ensure the adequacy of the grid protection to prevent cascading of 500kV and 230kV switchgear. In addition, the team reviewed local switchyard, substation, generator, and transmission line protective relay schemes to ascertain if any generic grid reliability or independence weakness could be identified.

b. Observations and Findings

Independence As indicated in the Inspection Report above, GDC 17 requires that power from the offsite transmission network be supplied by "two physically independent circuits (not necessarily on separate rights of way) designed and located so as to minimize to the extent practical the likelihood of their simultaneous failure under operating and postulated accident and environmental conditions."

Grid Stability




8.2 Protected Area Access Problems a. Inspection Scope The team interviewed members of the licensee's emergency planning organization and security department and reviewed security department logs to determine the cause of protected area access problems encountered during the loss of off-site power. The team reviewed security procedures, the licensee's initial findings, and immediate corrective actions taken on June 17, 2004. The team also reviewed the licensee's preliminary findings attached to significant CRDR 2715749, initiated to investigate and determine the root causes for the emergency planning problems arising from the loss of off-site power and plant trip on June 14, 2004.

b. Observations and Findings


ATTACHMENT 5 UNRESOLVED ITEM DETAILS

05000528/2004012-001; 05000529/2004012-001; 05000530/2004012-001 - URI - Review licensee's root and/or apparent cause determination, corrective actions, and compliance associated with a number of loss-of-offsite power event related issues. (See Table 1)

05000528/2004012-002; 05000529/2004012-002; 05000530/2004012-002 - URI - Review design control and compliance aspects of a number of loss-of-offsite power event related issues. (See Table 1)

05000528/2004012-003; 05000529/2004012-003; 05000530/2004012-003 - URI - Review use of Plant Technical Specifications during emergencies. (See Table 1)