ML053130068

Person / Time
Site: Palo Verde
Issue date: 11/02/2005
From: Mallett B, Region 4 Administrator
To: Overbeck G, Arizona Public Service Co

References
FOIA/PA-2004-0307, IR-04-012
UNITED STATES NUCLEAR REGULATORY COMMISSION
REGION IV
611 RYAN PLAZA DRIVE, SUITE 400
ARLINGTON, TEXAS 76011-4005

EA-

Gregg R. Overbeck, Senior Vice President, Nuclear
Arizona Public Service Company
P.O. Box 52034
Phoenix, AZ 85072-2034
SUBJECT:
PALO VERDE NUCLEAR GENERATING STATION, UNITS 1, 2, AND 3 - NRC AUGMENTED INSPECTION TEAM (AIT) REPORT 05000528/2004-012; 05000529/2004-012; 05000530/2004-012 AND PRELIMINARY FINDINGS
Dear Mr. Overbeck:
On June 18, 2004, the Nuclear Regulatory Commission (NRC) completed an Augmented Inspection at your Palo Verde Nuclear Generating Station, Units 1, 2, and 3. The enclosed report documents the inspection findings, which were discussed on June 18, 2004, with you and other members of your staff.
In accordance with 10 CFR 2.390 of the NRC's "Rules of Practice," a copy of this letter and its enclosure will be made available electronically for public inspection in the NRC Public Document Room or from the NRC's document system (ADAMS), accessible from the NRC Web site at http://www.nrc.gov/reading-rm/adams.html.
Sincerely,

Bruce S. Mallett
Regional Administrator, Region IV

Dockets: 50-528; 50-529; 50-530
Licenses: NPF-41; NPF-51; NPF-74

Information in this record was deleted in accordance with the Freedom of Information Act, exemptions ____.

Enclosure:

cc w/enclosure:
Steve Olea, Arizona Corporation Commission, 1200 W. Washington Street, Phoenix, AZ 85007
Douglas K. Porter, Senior Counsel, Southern California Edison Company, Law Department, Generation Resources, P.O. Box 800, Rosemead, CA 91770
Chairman, Maricopa County Board of Supervisors, 301 W. Jefferson, 10th Floor, Phoenix, AZ 85003
Aubrey V. Godwin, Director, Arizona Radiation Regulatory Agency, 4814 South 40 Street, Phoenix, AZ 85040
M. Dwayne Carnes, Director, Regulatory Affairs/Nuclear Assurance, Palo Verde Nuclear Generating Station, Mail Station 7636, P.O. Box 52034, Phoenix, AZ 85072-2034
Hector R. Puente, Vice President, Power Generation, El Paso Electric Company, 310 E. Palm Lane, Suite 310, Phoenix, AZ 85004
Jeffrey T. Weikert, Assistant General Counsel, El Paso Electric Company, Mail Location 167, 123 W. Mills, El Paso, TX 79901
John W. Schumann, Los Angeles Department of Water & Power, Southern California Public Power Authority, P.O. Box 51111, Room 1255-C, Los Angeles, CA 90051-0100
John Taylor, Public Service Company of New Mexico, 2401 Aztec NE, MS Z110, Albuquerque, NM 87107-4224
Cheryl Adams, Southern California Edison Company, 5000 Pacific Coast Hwy., Bldg. DIN, San Clemente, CA 92672
Robert Henry, Salt River Project, 6504 East Thomas Road, Scottsdale, AZ 85251
Brian Almon, Public Utility Commission, William B. Travis Building, P.O. Box 13326, 1701 North Congress Avenue, Austin, TX 78701-3326
Electronic distribution by RIV:
Regional Administrator (BSMI)
DRP Director (ATH)
DRS Director (DDC)
Senior Resident Inspector (GXW2)
Branch Chief, DRP/D (TWP)
Senior Project Engineer, DRP/D (JAC)
Staff Chief, DRP/TSS (PHH)
RITS Coordinator (KEG)
Jennifer Dixon-Herrity, OEDO RIV Coordinator (JLD)
PV Site Secretary (vacant)
G. Sanborn, ACES (GFS)
M. Vasquez, ACES (GMV)
S. Lewis, OGC (SHL)
ADAMS: [ ] Yes  [ ] No    Initials:
[ ] Publicly Available  [ ] Non-Publicly Available  [ ] Sensitive  [ ] Non-Sensitive
ATGody/lmb  CJPaulk  TMcConnell  PAlter  TKoshy  GSanborn  DDChamberlain
OFFICIAL RECORD COPY    T=Telephone  E=E-mail  F=Fax
ENCLOSURE

U.S. NUCLEAR REGULATORY COMMISSION
REGION IV

Dockets: 50-528; 50-529; 50-530
Licenses: NPF-41; NPF-51; NPF-74
Report No.: 05000528/2004-012; 05000529/2004-012; 05000530/2004-012
Licensee: Arizona Public Service Company
Facility: Palo Verde Nuclear Generating Station, Units 1, 2, and 3
Location: 5951 S. Wintersburg Road, Tonopah, Arizona
Dates: June 22, 2004
Team Leader: Anthony T. Gody, Chief, Operations Branch
Inspectors: P. Alter, Senior Resident Inspector, Projects Branch B, Division of Reactor Projects
T. Koshy, Electrical & Instrumentation and Controls Branch, Office of Nuclear Reactor Regulation
Amar Pal (Sp?), Electrical & Instrumentation and Controls Branch, Office of Nuclear Reactor Regulation
T. McConnell, Resident Inspector, Projects Branch D, Division of Reactor Projects
C. Paulk, Senior Reactor Inspector, Engineering Branch, Division of Reactor Safety
Accompanied By: G. Skinner, Electrical Engineer, Beckman and Associates
Approved By: Anthony T. Gody, Chief, Operations Branch, Division of Reactor Safety
SUMMARY OF FINDINGS

IR 05000528/2004-012; 05000529/2004-012; 05000530/2004-012; June 18, 2004; Palo Verde Nuclear Generating Station, Units 1, 2, and 3; Augmented Inspection.

The report covered a period of inspection by inspectors. The significance of most findings is indicated by their color (Green, White, Yellow, Red) using Inspection Manual Chapter 0609, "Significance Determination Process." Findings for which the Significance Determination Process does not apply may be Green or be assigned a severity level after NRC management review. The NRC's program for overseeing the safe operation of commercial nuclear power reactors is described in NUREG-1649, "Reactor Oversight Process," Revision 3, dated July 2000.
NRC-Identified and Self Revealing Findings
TABLE OF CONTENTS
1.0 Introduction
1.1 Event Description (Tony)
1.2 System Descriptions (Misc.)
1.3 Preliminary Risk Significance of Event (Dave)
2.0 System Performance and Design Issues
2.1 Off-site Power System Issues (George)
2.2 Unit 1, Atmospheric Dump Valve 185 Failure (Tim)
2.3 Unit 1, Letdown Heat Exchanger Isolation Failure (Tim)
2.4 Unit 2, Train "A" Emergency Diesel Generator Failure (Chuck)
2.5 Unit 3, Bypass Valve 1003 Malfunction (Tim)
2.6 Unit 3, Reactor Coolant Pump 2B Lift Oil Pump Trip (Tim)
2.7 Unit 3, Low Pressure Safety Injection System Over Pressurization (Tim)
2.8 Unit 3, Variable Over Power Reactor Trip (Tim)
2.9 Units 1 and 3, General Electric Magne-Blast Breaker Failures (Chuck)
3.0 Human Performance and Procedural Aspects of the Event
3.1 Turbine-Driven Auxiliary Feedwater Drains (Tapia)
3.2 Unit 2, Train "E" Positive Displacement Charging Pump Trip (Chuck)
3.3 Entry Into Technical Specification Action Statements (Tapia)
3.4 Technical Support Center Emergency Diesel Generator Trip (Peter)
3.5 Initial Notification of Event to State and Local Officials (Peter)
3.6 Emergency Response Organization Challenges (Peter)
4.0 Coordination with Off-Site Electrical Organizations (Tom)
5.0 Risk Significance of the Event (Dave)
6.0 Assessment of Event Response (Tony)
7.0 Exit Meeting Summary (Tony)

ATTACHMENT 1 - Supplemental Information
ATTACHMENT 2 - Augmented Inspection Team Charter
ATTACHMENT 3 - Sequence of Events
ATTACHMENT 4 - System Figures
Figure 1 - Palo Verde Nuclear Generating Station Transmission System
Figure 2 -
Figure 3 -
Report Details
1.0 Introduction
1.1 Event Description
On June 14, 2004, at 9:41 a.m. CDT, a ground fault occurred on the 230 kV transmission line in northwest Phoenix, Arizona, between the "West Wing" and "Liberty" substations, located approximately 47 miles from the Palo Verde Nuclear Generating Station (PVNGS). A failure in the protective relaying resulted in the ground fault not being isolated from the local grid for over 30 seconds. This uninterrupted fault resulted in the protective tripping of a number of 230 kV and 525 kV transmission lines and a nearly concurrent trip of all three PVNGS units approximately 30 seconds later. The licensee declared a Notice of Unusual Event (NOUE) for all three units at about 9:50 a.m. CDT due to the loss-of-offsite power (LOOP).
The Unit 2 Emergency Diesel Generator (EDG) "A" started, but failed early in the load sequence due to a failed diode in the exciter rectifier circuit. This resulted in the Train "A" Engineered Safety Features buses de-energizing, which limited the availability of certain safety equipment for operators. Because of this failure, the licensee elevated the emergency declaration for Unit 2 to an Alert at 9:54 a.m. CDT.
The Augmented Inspection Team (AIT) found that the licensee's response to the event, while generally appropriate, was complicated by a number of other equipment failures, procedure issues, and human performance issues with diverse apparent causes and with varying degrees of significance. For example:
- The Technical Support Center emergency diesel generator failed because a test switch was not returned to its proper position following maintenance six days before the event. As a result, the emergency response organization assembled in the alternate TSC. This resulted in some confusion and posed some unique challenges to the emergency response organization.
- The emergency response organization
- Other facility issues were identified which could have impeded emergency responders but did not in this case.
- An Atmospheric Dump Valve on Unit 1 drifted closed due to an apparent equipment malfunction, which posed an operational nuisance during the event.
- The Unit 1 letdown system failed to isolate due to poor design control during a modification, which resulted in high temperatures in that system. The high temperatures generated fumes as paint heated up, which prompted a fire brigade response. This complicated the Unit 1 event.
- The Unit 2 Positive Displacement Charging Pump "E" was lost due to human performance errors.
- An unanticipated control interaction in the Unit 3 steam bypass control valve system resulted in a momentary opening of all Unit 3 steam bypass valves and an unanticipated main steam isolation signal. The main steam isolation signal complicated the Unit 3 operators' response to the loss-of-offsite power event.
- A check-valve leakage problem resulted in operators having to manually depressurize the low-pressure safety injection system, posing an additional distraction during the event.
- Two Magna-Blast circuit breakers failed to operate during recovery operations which delayed electrical system recovery efforts.
- Limited equipment affected the ability to manually drain condensate from the turbine-driven auxiliary feedwater system and that system's availability.
Despite the number of challenges to the plant operating staff and management, all three units were safely shut down and placed in a stable condition immediately following the loss-of-offsite power event. With the exception of the local 525 kV transmission grid surrounding the Palo Verde switchyard, the Arizona, California, and Nevada electrical grid remained stable, registering the fault only as some minor frequency fluctuations. This was notable considering the amount of generation lost due to the fault. The total local generation lost during the event (>5,500 MW) included the three Palo Verde units, three co-generation units at the Red Hawk generating station, and three co-generation units at the Arlington generating station.
1.2 System Descriptions
1.2.1 Off-site Power Transmission and Distribution Systems
- a. General The Palo Verde Nuclear Generating Station is connected by its associated transmission system to the Arizona-New Mexico-California-Southern Nevada extra high voltage (EHV) grid, which is interconnected to other EHV systems within the Western System Coordinating Council (WSCC).
- b. Palo Verde Nuclear Generating Station Switchyard The PVNGS switchyard consists of two 500 kV buses which are connected to the three PVNGS 525/22.8 kV main step-up transformers, and seven transmission lines, using a breaker-and-a-half scheme. The seven 525 kV transmission lines comprising the Palo Verde transmission system are situated in four corridors from the PVNGS switchyard as follows:
One line to the Devers substation (240 mi.)
Three lines to the Hassayampa substation (3 mi.)
One line to the Rudd substation (25 mi.)
Two lines to the Westwing 500 kV substation (44 mi.)
- c. West Wing Substation
The Westwing substation consists of a two-bus 230 kV section and a two-bus 500 kV section. The 500 kV section is connected to the adjacent 230 kV Westwing section through three 525/345/230 kV load tap changing transformers. The Westwing 230 kV buses are connected to the transmission system as follows:
One line to the Surprise substation
One line to the Pinnacle Peak substation
One line to the Liberty substation
One line to the Agua Fria substation
One line to the Deer Valley substation
One line to the New Waddell substation
Two 230/69 kV transformers feeding the APS distribution system
The above lines are connected to the Westwing substation through a breaker-and-a-half scheme such that at least two circuit breakers must be opened to isolate a line from the substation.
- d. Hassayampa Switchyard The Hassayampa substation is located three miles from the PVNGS switchyard. It consists of two 500 kV buses connected to the PVNGS switchyard and several other generating stations and substations through a breaker-and-a-half scheme, as follows:
Three lines to the PVNGS switchyard (3 mi.)
Two lines to the Red Hawk Switchyard (1 mi.)
One line to the Jojoba substation (20 mi.)
One line to the North Gila substation (110 mi.)
One line to the Mesquite switchyard (0.5 mi.)
One line to the Arlington Valley switchyard (1 mi.)
One line to the Harquahala Switchyard (30 mi.)
The three lines to the PVNGS switchyard were equipped with negative sequence relays intended to serve as pole mismatch protection for the Hassayampa PCBs. APS stated that this relaying was set to trip on 20% negative sequence current after a definite time delay of 5 seconds.
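For illustration, the relaying behavior described above can be sketched as follows. The 20 percent pickup and 5-second definite time delay come from the APS statement; the phase-current values, the pickup basis (expressed here as the ratio of negative- to positive-sequence current), and the 0.1-second sampling interval are assumptions made only for this sketch.

```python
# Illustrative sketch of a definite-time negative-sequence relay. Only the 20%
# pickup and 5-second delay come from the report; everything else is assumed.
import cmath

A = cmath.exp(2j * cmath.pi / 3)  # 120-degree rotation operator "a"

def negative_sequence_fraction(ia, ib, ic):
    """Return |I2|/|I1| computed from three phase-current phasors."""
    i1 = (ia + A * ib + A**2 * ic) / 3   # positive-sequence component
    i2 = (ia + A**2 * ib + A * ic) / 3   # negative-sequence component
    return abs(i2) / abs(i1)

def relay_trips(samples, pickup=0.20, delay_s=5.0, dt=0.1):
    """Trip only if the unbalance stays above the pickup for the full delay."""
    time_above = 0.0
    for ia, ib, ic in samples:
        if negative_sequence_fraction(ia, ib, ic) >= pickup:
            time_above += dt
            if time_above >= delay_s:
                return True
        else:
            time_above = 0.0             # definite-time element resets
    return False

# Example: a sustained single-phase fault raises phase A current, producing a
# large unbalance that persists longer than the 5-second delay.
balanced = (1000 + 0j, 1000 * A**2, 1000 * A)
faulted = (3000 + 0j, 1000 * A**2, 1000 * A)
print(relay_trips([balanced] * 20 + [faulted] * 80))  # True
```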
1.2.2 On-site Power Distribution System
- a. General Power is supplied to the PVNGS auxiliary buses from the offsite power supply through three startup transformers. In addition, during normal plant operation, power for the onsite non-Class 1E ac system is supplied through the unit auxiliary transformer connected to the main generator isolated phase bus. The non-Class 1E ac buses normally are supplied through the unit auxiliary transformer, and the Class 1E buses normally are supplied through the startup transformers. Each unit's non-Class 1E power system is divided into two parts. Each of the two parts supplies a load group including approximately half of the unit auxiliaries. Three startup transformers connected to the 525 kV switchyard are shared between Units 1, 2, and 3 and are connected to the 13.8 kV buses of the units. Each startup transformer is capable of supplying 100% of the startup or normally operating loads of one unit simultaneously with the engineered safety feature (ESF) loads associated with two load groups of one other unit. The 4160 V Class 1E buses are each normally supplied by an associated 13.8/4.16 kV auxiliary transformer, and receive standby power from one of the six standby diesel generators.
The Class 1E 4160 V system supplies power to 480V and lower distribution voltages through 18 4160/480V load center transformers.
- b. Palo Verde Nuclear Generating Station Generator Protective Relaying The main generator protection schemes feature relaying intended to protect the generators against internal as well as external faults. Protection against external faults includes backup distance relaying and negative sequence time over current relaying.
The backup distance relaying provides backup protection for 24 kV and 525 kV system faults close to the switchyard. The distance relay operates through an external timer. If the fault persists and the time delay step is completed, a lockout relay trips the unit aux transformer 13.8 kV breakers, generator excitation, 525 kV generator unit breakers, main turbine and the main transformer cooling pumps. The lockout relay also initiates transfer of station auxiliary loads.
The generator negative sequence time over current relay provides generator protection against possible damage from unbalanced currents resulting from prolonged faults or unbalanced load conditions. The relay operates through a lockout relay to trip the unit auxiliary transformer 13.8 kV breakers, generator excitation, 525 kV generator unit breakers, main transformer cooling pumps, and the main turbine. The negative sequence relay also incorporates a sensitive alarm circuit that, in conjunction with a separately mounted ammeter, prompts operator action at relatively low values of negative sequence current (just above normal system unbalance).
- c. Emergency Diesel Generators The Class 1E alternating current system distributes power at 4.16 kV, 480 V, and 120 V to all Class 1E loads. Also, the Class 1E alternating current system supplies power to certain selected loads that are not directly safety-related but are important to the plant.
The Class 1E alternating current system contains standby power sources (i.e., emergency diesel generators) that automatically provide the power required for safe shutdown in the event of loss of the Class 1E bus voltage.
In the event that preferred power is lost, the Class 1E system functions to shed Class 1E loads and to connect the standby power source to the Class 1E bus. The load sequencer then functions to start the required Class 1E loads in programmed time increments.
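For illustration, a minimal sketch of a load sequencer of this kind is shown below. The two step times loosely follow the Unit 2 timeline discussed in Section 2.4; the load names and data structure are illustrative assumptions, not the plant design.

```python
# Minimal sketch of a load sequencer that starts Class 1E loads in programmed
# time increments after the standby source energizes the bus (time zero).
# Step times loosely follow Section 2.4; the structure and names are assumed.
SEQUENCE = [
    (5.0,  ["battery chargers", "CEDM cooling units", "containment cooling units"]),
    (15.0, ["essential cooling water pump"]),
]

def loads_started(elapsed_s):
    """Return every load whose programmed step time has been reached."""
    started = []
    for step_time, loads in SEQUENCE:
        if elapsed_s >= step_time:
            started.extend(loads)
    return started

for t in (0.0, 6.0, 20.0):
    print(f"t = {t:>4} s: {loads_started(t)}")
```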
- d. Station Blackout Gas Turbine Generator Sets A non-safety related Alternate AC (AAC) power source consisting of two redundant gas turbine generators is available to provide power to cope with a four hour station blackout event in any one nuclear unit. One GTG is analyzed to supply all required station blackout loads, which are located on the 'A' train.
Each GTG has a minimum continuous output rating of 3400 kW at 13.8 kV under worst case anticipated site environmental conditions. This rating is sufficient to provide power to the loads identified as being important for coping with the SBO.
- e. Technical Support Center Emergency Diesel Generator The technical support center diesel generator provides standby alternating current to the 480 V electrical distribution panel that supplies all electrical power to the technical support center emergency planning facility. The diesel engine is cooled by a self-contained cooling water system with an air cooled radiator. The radiator is in turn cooled by an electric motor driven fan. The fan motor is powered by the technical support center electrical power distribution panel. Normal electrical power for the technical support center comes from the off-site electrical power supply to Unit 1.
During a loss of off-site power, when power is lost to the technical support center electrical power distribution panel, the technical support diesel generator automatically starts and re-energizes the technical support center electrical loads, including the diesel engine radiator cooling fan.
1.2.3 Chemical Volume and Control System The chemical and volume control system controls the purity, volume, and boric acid content of the reactor coolant. Water removed from the reactor coolant system is cooled in the regenerative heat exchanger. From there, the coolant flows to the letdown heat exchanger and then through a filter and a demineralizer where corrosion and fission products are removed. It is then sprayed into the volume control tank and returned by the charging pumps to the regenerative heat exchanger where it is heated prior to returning to the reactor coolant system.
When the vital 4160 VAC buses are de-energized, the charging pump breakers must be manually reset and the pumps restarted from the control room. Therefore, no charging flow is assumed for 30 minutes after the time of trip to allow for resetting the breaker and performing manual alignment of one of three gravity-fed boration pathways to the charging pump suction.
Following a loss of offsite power, letdown will isolate automatically due to the loss of nuclear cooling water to the letdown heat exchanger or by operator action. When charging is restarted, the resulting mismatch between letdown and charging will cause volume control tank level to decrease. To reduce the chance of losing suction to the charging pumps, the volume control tank level is monitored by two nonsafety grade instrument channels. Alarms are provided on low level and if the two channels differ significantly. The use of two channels of different types (one has a wet reference leg and the other is dry) decreases the probability of operator error in aligning the boration systems should one channel fail.
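For illustration, the two-channel monitoring logic described above can be sketched as follows; the level and deviation setpoints are assumed values, since the report does not state them.

```python
# Sketch of the volume control tank level monitoring described above: two
# diverse nonsafety-grade channels, an alarm on low level, and an alarm when
# the channels disagree significantly. Both setpoints are assumed.
LOW_LEVEL_PCT = 20.0    # assumed low-level alarm setpoint
DEVIATION_PCT = 10.0    # assumed channel-deviation alarm setpoint

def vct_alarms(wet_leg_pct, dry_leg_pct):
    alarms = []
    if min(wet_leg_pct, dry_leg_pct) <= LOW_LEVEL_PCT:
        alarms.append("VCT level low")
    if abs(wet_leg_pct - dry_leg_pct) >= DEVIATION_PCT:
        alarms.append("VCT level channel deviation")
    return alarms

print(vct_alarms(45.0, 44.0))   # [] -- channels agree, level normal
print(vct_alarms(45.0, 18.0))   # both alarms -- one channel reads failed-low
```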
1.2.4 Auxiliary Feedwater System The Auxiliary Feedwater System (AFW) provides an independent means of supplying water to the Steam Generators during emergency operations when the Feedwater System is inoperable. AFW maintains the water inventory necessary to allow a Reactor Coolant System cooldown at a maximum rate of 75°F/h down to a temperature of 350°F. It also provides the necessary water inventory for startup, normal shutdown, and hot standby conditions.
1.3 Preliminary Risk Significance of Event The Nuclear Regulatory Commission's Management Directive 8.3, "Incident Investigation Program," documents the NRC's formal process conducted for the purpose of accident prevention. This directive documents a risk-informed approach to determining when the agency will commit additional resources for further investigation of an event. The risk metric used for this decision is the conditional core damage probability.
Because there is a lack of complete information at the time of initial decision-making, a preliminary evaluation is performed.
A loss of offsite power is a significant event at any nuclear facility, and more so for a Combustion Engineering plant without primary system power-operated relief valves, because of the inability to perform a reactor coolant system feed and bleed evolution.
To evaluate this event, the analyst used the Standardized Plant Analysis Risk (SPAR) model for Palo Verde, Revision 3, and modified appropriate basic events to include updated loss of offsite power curves published in NUREG/CR-5496. The analyst evaluated the risk associated with the Unit 2 reactor because it represented the dominant risk of the event.
For the preliminary analysis, the analyst established that a loss of offsite power had occurred and that the event may have been recovered at a rate equivalent to the industry average. Both Emergency Diesel Generator A and Charging Pump E were determined to have failed and assumed to be unrecoverable. Additionally, the analyst ignored all sequences that included a failure of operators to trip reactor coolant pumps, because all pumps trip automatically on a loss of offsite power. The conditional core damage probability was estimated to be 6.5 x 10-, indicating that the event was of substantial risk significance and warranted an augmented inspection team.
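For illustration, the structure of such a screening calculation is sketched below with invented numbers; the values are not SPAR model results and do not reproduce the analyst's estimate quoted above.

```python
# Toy illustration of a conditional core damage probability (CCDP) screening
# for a loss-of-offsite-power event. All probabilities are invented; EDG "A"
# and Charging Pump "E" are treated as failed (probability 1.0), as the
# analyst assumed, so only the remaining mitigation appears below.
P_LOOP_NOT_RECOVERED = 0.1   # assumed nonrecovery of offsite power
P_EDG_B_FAILS = 0.05         # assumed failure of the remaining EDG
P_GTG_FAILS = 0.2            # assumed failure of the alternate AC gas turbine
P_AFW_FAILS = 1e-4           # assumed failure of auxiliary feedwater

# Sequence 1: station blackout -- offsite power not recovered, the remaining
# EDG fails, and the alternate AC source fails.
seq_sbo = P_LOOP_NOT_RECOVERED * P_EDG_B_FAILS * P_GTG_FAILS
# Sequence 2: loss of secondary heat removal -- auxiliary feedwater fails
# (feed and bleed is not available at this plant, per the text above).
seq_hr = P_AFW_FAILS

print(f"illustrative CCDP = {seq_sbo + seq_hr:.1e}")
```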
2.0 System Performance and Design Issues
2.1 Offsite Power Reliability and Independence Issues
- a. Inspection Scope The team reviewed design drawings associated with the Palo Verde, Hassayampa, West Wing, Devers, and Rudd switchyards and substations. In addition, the team conducted interviews with licensee personnel, Arizona Public Service personnel, and Salt River Project personnel involved in the investigation. Finally, the team reviewed the sequence-of-events and alarm printouts in detail to develop a comprehensive understanding of the event progression.
- b. Observations and Findings
The 500 kV system upset at the PVNGS switchyard originated with a fault across a degraded insulator on the 230 kV Liberty line from the Westwing substation. Protective relaying detected the fault and isolated the line from the Liberty substation. The protective relaying scheme at the Westwing substation received a transfer trip signal from Liberty, actuating the AR relay in the tripping scheme for circuit breakers 1022 and 1126. The AR relay had four output contacts, all of which were actuated by a single lever arm. The tripping schematic showed that contacts 1-10 and 2-3 should have energized redundant trip coils in PCB 1126, while contacts 4-5 and 6-7 should have energized redundant trip coils in PCB 1022.
PCB 1126 tripped, demonstrating that the AR relay coil picked up and at least one of the AR relay contacts, 1-10 or 2-3, closed. PCB 1022 did not trip. Bench testing by APS showed that, even with normal voltage applied to the coil, neither of the tripping contacts for PCB 1022 closed. The breaker failure scheme for PCB 1022 featured a design where the tripping contacts for the respective redundant trip coils also energized redundant breaker failure relays. Since the tripping contacts for PCB 1022 apparently did not close, the breaker failure scheme for PCB 1022 also was not activated, resulting in a persistent uncleared fault on the 230 kV Liberty line.
Various transmission system event recorders show that during approximately the first 12 seconds after fault inception, several transmission lines on the interconnected 69 kV, 230 kV, 345 kV, and 525 kV systems tripped on overcurrent, including lines connected to the Westwing and Hassayampa substations. Also during the first 12 seconds, two Red Hawk combustion turbine units and one Red Hawk steam turbine unit tripped, and the fault alternated between a single-line-to-ground fault and a two-line-to-ground fault, apparently as a result of a failed shield wire falling on the faulted line. After 12 seconds, the fault became a three-phase-to-ground fault, and additional 525 kV lines tripped.
At approximately 17 seconds after fault inception, the three tie lines between the PVNGS switchyard and the Hassayampa substation tripped simultaneously due to action of their negative sequence relaying, thereby isolating the fault from the several co-generation plants connected to the Hassayampa substation. Approximately 24 seconds after fault inception, the last two 525 kV lines connected to the PVNGS switchyard tripped, isolating the switchyard from the transmission system. At approximately 28 seconds after fault inception, the three PVNGS generators were isolated from the switchyard, and by approximately 38 seconds all remaining lines feeding the fault had tripped and the fault was isolated.
Reliability Issues
The degraded insulator was caused by external contamination and did not represent a concern relative to the reliability of the insulation of the 230 kV transmission system.
The failed AR relay and the lack of a robust tripping scheme raised concerns relative to the maintenance, testing, and design of 230 kv system protective relaying. Interviews with APS T&D personnel indicated that the Westwing substation where the relay failure occurred was subject to annual maintenance and testing. Following the event, the failed AR relay was removed from service by APS and visually inspected by the NRC team at PVNGS. The relay showed no apparent signs of contamination or deterioration.
Although the team considered the maintenance interval to be reasonable, the team did
not determine the degree of rigor applied in testing the relaying scheme. For instance, it is doubtful that the testing included methods common in the nuclear industry such as verifying that each contact in the tripping scheme functioned properly. As noted earlier, the tripping scheme lacked redundancy that may have prevented the failure of the protective scheme to clear the fault. APS reviewed the design of the Westwing substation as well as all other substations connected to the PVNGS switchyard, and found that only the Liberty and Deer Valley lines at the Westwing substation featured a tripping scheme with only one AR relay. All of the newer lines featured two AR relays.
However, APS also noted that the middle breakers in the breaker-and-a-half scheme at the Westwing substation only contained one trip coil, as opposed to two trip coils in the bus connected breakers. This feature was believed to be representative of the design at other APS substations. In order to improve reliability, APS modified the tripping schemes for the Liberty and Deer Valley lines to feature two AR relays energizing separate trip coils. APS also stated that they would evaluate the feasibility of installing two trip coils in all PCBs. The team noted that, even considering the completed and proposed modifications, all of the tripping circuits were still powered by a single 125 VDC system, so "single failure" vulnerabilities will remain. APS stated that 125 VDC system reliability was enhanced by redundant battery chargers and alarms that annunciate in the APS control center. Grid reliability studies performed by utilities typically have not considered the occurrence of an uncleared fault on the transmission system. This event, and the specific concerns identified relative to the design and testing of transmission system protective relaying, suggest vulnerabilities may exist that render the offsite power supplies less reliable than previously assumed.
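For illustration, the effect of the redundancy concerns discussed above can be shown with a simple comparison using assumed per-component failure probabilities; the report does not quantify them.

```python
# Rough comparison of the single-path tripping scheme with the modified
# two-relay, two-coil scheme. All probabilities are assumed for illustration.
p_relay = 1e-3    # assumed chance an AR relay fails to close its contacts
p_coil = 1e-3     # assumed chance a trip coil fails to operate
p_dc_bus = 1e-4   # assumed chance the shared 125 VDC supply is unavailable

# One relay-contact/trip-coil path fails if either element fails.
p_path_fail = 1 - (1 - p_relay) * (1 - p_coil)

# Original Liberty/Deer Valley design: a single path, plus the shared DC supply.
p_single = p_dc_bus + (1 - p_dc_bus) * p_path_fail

# Modified design: two independent paths, still fed by the same 125 VDC system,
# so the shared supply sets a floor on the achievable reliability.
p_redundant = p_dc_bus + (1 - p_dc_bus) * p_path_fail**2

print(f"single path failure probability   : {p_single:.2e}")
print(f"redundant path failure probability: {p_redundant:.2e}")
```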
Independence of Offsite Power Supplies
GDC 17 requires that power from the offsite transmission network be supplied by "two physically independent circuits (not necessarily on separate rights of way) designed and located so as to minimize to the extent practical the likelihood of their simultaneous failure under operating and postulated accident and environmental conditions."
The uncleared fault resulted in tripping of transmission lines both locally and at remote substations. Lines at several interconnected transmission voltage levels tripped, commencing a few cycles after fault inception and continuing for another 38 seconds. Even remote lines were tripped by inverse time overcurrent relays, which were not intended to protect against remote faults, but nevertheless succumbed to the fault because of its duration.
Another concern was raised by the simultaneous tripping of the three Hassayampa tie lines. The three Hassayampa tie lines featured negative sequence relaying intended to serve as pole mismatch protection. The scheme featured a 5-second definite time delay to avoid spurious tripping due to faults. Although these individual lines could have been considered as separate sources of offsite power, this event demonstrated that the lines were subject to simultaneous failure resulting from unintended operation of the relaying scheme. SRP has stated that the negative sequence relaying has been disabled and pole mismatch protection is being implemented by alternate relaying.
2.2 Unit 1, Atmospheric Dump Valve 185 Failure
- a. Inspection Scope The team reviewed the operators' responses and control room logs relating to the loss of manual control of the atmospheric dump Valve 185 during the performance of Procedure 40EP-9EO10 "Loss of Offsite Power/ Loss of Forced Circulation,"
Revision 10.
- b. Observations and Findings The team identified an unresolved item associated with the control of atmospheric dump Valve ADV-185 in Palo Verde Unit 1.
Following the June 14, 2004, loss-of-offsite-power event at Palo Verde Unit 1, atmospheric dump Valve ADV-185 failed to operate properly while being manually operated. Operators in the control room observed that the valve had drifted closed, contrary to the manual controller setting. The operators were able to adjust Valve ADV-185 from the controlling station; however, the valve would not remain in the desired position. Licensee personnel initiated CRDR 2716011 to determine the root cause of the failure and perform corrective actions necessary to address the failure.
The impact on the control of primary plant temperature during this event was minimal.
The operator had the skill and ability to readily diagnose and overcome this anomaly.
All other atmospheric dump valves on Unit 1 responded properly to manual control signals and presented no further challenges to the control room operators.
Licensee personnel identified the apparent cause of the malfunction as internal leakage equalizing around a pilot valve, causing the valve to shut. The valve and its associated control circuit were quarantined, and maintenance personnel were troubleshooting the components to determine the root cause of the malfunction.
This issue is identified as Unresolved Item 05000528/2004012-XXX to evaluate the root cause determination and corrective actions associated with the atmospheric dump Valve ADV-185 drifting shut when in manual control.
2.3 Unit 1, Letdown Heat Exchanger Isolation Failure
- a. Inspection Scope The team reviewed the licensee's temporary modification Package 2594804 and CRDR 2715667 documenting the system response during the event. The team also interviewed plant personnel and reviewed control room logs and temperature plots to determine the impact of the high temperature on the letdown system.
- b. Observations and Findings The team identified an unresolved item associated with the design control of the letdown system.
During the June 14, 2004, loss-of-offsite-power event, the Unit 1 letdown system did not operate as expected when fluid temperatures exceeded the alarm setpoint. The letdown system bypassed the ion exchanger and the filter at 140°F, as expected. However, the team determined that a temporary modification to bypass a flow sensor had been installed which removed the signal to isolate the system on low flow. The system was designed to isolate the letdown system if temperature at the outlet of the non-regenerative heat exchanger exceeded 148°F. The isolation did not occur as expected.
Licensee personnel identified the apparent cause of the system not isolating as expected as the failure of the temporary modification to fully address the functioning of the letdown control system during a loss of power to the controller. The team noted that, as a consequence of a loss-of-offsite-power, the nuclear cooling water flow is lost to the non-regenerative heat exchanger. The team also noted that, when power is restored to the system, the valves would be in a manual mode of operation. Therefore, flow through the system would not be secured by the control system. The team found that the temporary modification resulted in the bypass of the backup initiating signal for isolating the system in the event that flow was lost.
The impact on the plant systems and personnel was minimized when the ion exchanger bypass valves actuated to remove high temperature water from the resin. However, the introduction of high temperature water created a distraction when, as a result of paint and insulation being heated, the fire brigade was activated for a report of smoke/fumes emanating from . The report required the building to be walked down by operators.
Licensee personnel performed a visual inspection of the system and completed a stress analysis to identify locations that exceeded the maximum allowable stress.
The team noted that the maximum allowable stress associated with 350°F fluid temperature was 27,475 psi. The team determined that a weld on the drain for purification Filter F36 was the only area that may have exceeded the maximum allowable stress. Licensee personnel performed a visual inspection of the affected weld and removed the filter element to determine if any damage had occurred. Because the filter element is rated for 180°F for 1 hour, and there was no indication of damage, the licensee personnel concluded that the weld was not subjected to temperatures that could have caused excessive stress on the weld.
With respect to the extent of condition, the team found that Unit 1 was the only unit that had this modification installed to bypass the low flow isolation signal. Therefore, the team had no concerns with the other units.
The team identified the issue of design control as Unresolved Item 05000528/2004012-XXX. The issue is unresolved pending NRC review of the root cause and identification/performance of corrective actions resulting from CRDR 2715667.
2.4 Unit 2, Train A Emergency Diesel Generator Failure
- a. Scope The team interviewed licensee representatives and reviewed the sequence of events that led up to the failure of the Unit 2 Train A emergency diesel generator to determine
the apparent cause. The team also reviewed the effects the loss of the diesel generator had on the recovery of the event; the action plan for determining the root cause (Condition Report/Disposition Request (CRDR) 2715709); and the extent of condition of the apparent cause.
- b. Observations and Findings The team found that the apparent cause of the failure of the Unit 2 Train A emergency diesel generator was a failed diode in Phase B of the voltage regulator exciter circuit. The diode failure resulted in a reduced excitation current which was unable to maintain the voltage output with the applied loads.
At approximately 07:41:15 am, the Unit 2 Train A emergency diesel generator received a start signal as a result of an undervoltage signal on the Train A 4.16 kV Class 1E bus.
The emergency generator started, came up to speed and voltage, and energized the bus at approximately 07:41:23 am, within the 10 seconds allowed by design.
Approximately 5 seconds later, the Train A battery chargers, control element drive mechanism cooling units, and the containment cooling units were sequenced onto the bus. The essential cooling water pump was sequenced onto the bus approximately 15 seconds after the first loads.
The team noted that, at approximately the same time the essential cooling water pump was energized, the output voltage from the emergency diesel generator began to fail.
The control room operators observed that the voltage and current indications in the control room were at zero and had an auxiliary operator observe the indications locally, at the emergency diesel generator control panel. The indications were also zero. The control room operators initiated a manual emergency trip of the diesel at approximately 07:56:21 am. The team found these actions to be appropriate for the circumstances.
The team found that the failed emergency diesel generator did not have a large impact on the recovery, but did result in having only one train of safety equipment available.
The only apparent effect of the loss of Train A safety-related equipment was associated with the charging pumps (see Section 4.1, below).
The team noted that licensee engineers and maintenance personnel developed a comprehensive plan to troubleshoot the failure (CRDR 2715709). The plan was methodical and prioritized. The team found that the troubleshooting activities were thorough and well controlled, resulting in the identification of the failed diode in Phase B of the exciter circuit. The failure resulted in a half-wave output with significantly reduced current that led to the loss of adequate excitation to maintain the required voltage for the applied loads.
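For illustration, the effect of a single open diode on rectifier output can be sketched numerically, assuming an idealized three-phase full-bridge rectifier with a resistive load; the actual exciter rectifier topology is not described in this report.

```python
# Idealized three-phase full-bridge rectifier with one positive-bank diode
# failed open: the affected phase then conducts only through its negative-bank
# diode (half-wave for that phase), lowering the average output. This is an
# illustration only; the actual exciter circuit is not described in the report.
import math

def bridge_average(failed_positive_phase=None, steps=10000):
    """Average rectified output over one cycle (per unit of phase peak)."""
    total = 0.0
    for k in range(steps):
        wt = 2 * math.pi * k / steps
        v = [math.sin(wt), math.sin(wt - 2 * math.pi / 3), math.sin(wt + 2 * math.pi / 3)]
        positive_rail = max(v[i] for i in range(3) if i != failed_positive_phase)
        total += positive_rail - min(v)
    return total / steps

healthy = bridge_average()
degraded = bridge_average(failed_positive_phase=1)   # "phase B" positive diode open
print(f"healthy average : {healthy:.3f}")
print(f"one diode open  : {degraded:.3f} ({degraded / healthy:.0%} of normal)")
```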
The team found that, while this diode was common to all the emergency diesel generators at the Palo Verde Nuclear Generating Station, there was insufficient data to indicate there was a common mode problem. A review of the industry database on component failures revealed only one other failure of this specific model diode. That failure was in 1997. As such, the team found the extent-of-condition review by licensee personnel to have been appropriate for the circumstances.
The team noted that the failed diode had been replaced during the fall 2003 refueling and steam generator replacement outage. This diode had been subject to approximately 75 hours of operation. Licensee personnel had plans to perform additional testing to determine the root cause, if possible, of the diode failure.
Unresolved Item 05000529/2004012-XXX is opened to evaluate the corrective actions and root cause determination associated with the emergency diesel generator failure.
This item has potential problem identification and resolution aspects.
2.5 Unit 3 Event Response
- a. Inspection Scope The team reviewed the licensee's CRDR 2715659 documenting the steam system bypass response during the event. The team reviewed control room logs; temperature, voltage and frequency plots; steam pressure and flow plots; primary coolant flow; and nuclear power plots to determine the magnitude and correlation of key events. The team also interviewed various personnel that were either involved in the event or in the analyses of the event. The team conducted an extensive review of the plant computer alarm log printout to establish a time line of the event. The team conducted an analysis of the key events on Unit 3, as indicated by the alarm printout, and noted several differences in the progression of the loss-of-offsite-power event response when compared to the responses for Units 1 and 2 and as described in the Final Safety Analysis Report (FSAR).
- b. Observations and Findings The team identified two unresolved items. The first item is associated with the automatic main steam-line isolation in Unit 3 and tasks the follow-up review to evaluate the response of the bypass control system in all three units after the loss-of-offsite-power and compare it to the responses assumed in the plant safety analysis.
The second item is associated with reviewing the licensee's root cause for the Unit 3 reactor trip on a variable over-power signal and the licensee's evaluation of the impact of the high frequency on plant equipment, as well as the extent of condition once the cause is determined.
The team noted that Unit 3 experienced an automatic main steam-line isolation.
Licensee engineers attributed the automatic isolation to a steam bypass control system anomaly that caused a decrease in steam pressure. The team found, through interviews with licensee engineers, that the apparent cause of the "anomaly" was a momentary loss of power to Panel D11, with the control system being re-energized in the automatic mode, vice manual. According to the licensee engineers, this power loss initiated a 30-second timer that disconnected the valve control signals from the control cabinet. When the 30-second timer completed, all eight valves modulated open in about 14 seconds. This resulted in a rapid drop in steam-line pressure, automatically initiating a main steam-line isolation signal.
The PVNGS FSAR, Revision 12, Section 1.8, "Conformance to NRC Regulatory Guides," documents that the licensee took exception to the separation criterion of NRC Regulatory Guide 1.75, "Physical Independence of Electric Systems," Revision 1, for the power supplies to Panel D11. As a result, Panel D11 has both a non-vital power supply
(normal) and a vital power supply (backup). Upon loss of normal power, the supply automatically transfers to the backup supply. After the normal supply returns, the panel must be manually transferred back to the normal supply. Upon a total loss of power to Panel D11, the steam bypass control system will be unable to automatically respond to any challenges (FSAR, Section 7.2.2.4.1.2.1). The team also noted that the power supply configuration was identical on all three units. However, Units 1 and 2 did not respond the same as Unit 3.
The team noted that, in each subsection of the FSAR listed below, the steam bypass control system is assumed to be unavailable because it is either deenergized or in manual. During the loss-of-offsite-power event, the team found that the system was reenergized and operated in automatic. The team noted that this system response may not be as described in the licensee's safety analysis.
6.3.3.5D. For all break sizes, the reactor trip will result in a turbine trip and the subsequent loss of offsite power will result in the loss of main feedwater flow. Since the steam bypass control system is not available due to loss of condenser vacuum on loss of offsite power ...
7.2.2.4.1.2.1A. The SBCS and RPCS will be unable to automatically respond to any challenges on a failure of distribution panel E-NNN-D11.
7.2.2.4.1.2B ... the LOFW [loss-of-feedwater] event presented in subsection 15.2.7 assumed that the PPCS, SBCS, and RRS are in the manual mode of operation, unable to automatically respond to challenges.
15.1.4.2 Case 1 Since the steam bypass control system is assumed to be in the manual mode with all bypass valves closed ...
15.1.4.2 Case 2 Since the steam bypass control system is assumed to be in the manual mode with all bypass valves closed . . .
15.2.3.1 ... in this analysis both the SBCS and RPCS are assumed to be in the manual mode and credit is not taken for their functioning.
15.3.1.1 The only credible failure which can result in a simultaneous loss of power is a complete loss of offsite power. In addition, since a loss of offsite power is assumed to result in a turbine trip and renders the steam dump and bypass system unavailable, the plant cooldown is performed utilizing the secondary valves and atmospheric dump valves (ADVs)...
The loss of offsite power will make unavailable any systems whose failure could affect the calculated peak pressure. For example, a failure of the steam dump and bypass system to modulate or quick open and a failure of the pressurizer spray control valve to open involve systems (steam dump and bypass system and pressurizer pressure control system (PPCS)) which are assumed to be in the manual mode as a result of the loss of offsite power and, hence, unavailable for at least 30 minutes.
15.3.1.2C. The turbine is assumed to trip on loss of offsite power.
The loss of offsite power produces a loss of load on the turbine which generates a turbine trip signal. The turbine stop valves are closed as a result of the trip. The steam bypass control system becomes unavailable due to the loss of offsite power and subsequent loss of condenser vacuum.
15.3.4.1 The assumed loss of AC renders the steam bypass control system inoperable as a result of the loss of circulating water pumps.
15.3.4.2C. The loss of offsite power causes a loss of power to the plant loads and the plant experiences a simultaneous loss of feedwater flow, condenser inoperability, and a coastdown of all reactor coolant pumps.
15.3.4.3.1C. The loss of offsite power also causes a loss of main feedwater and condenser inoperability. The turbine trip, with the steam bypass control system (SBCS) and the condenser unavailable, leads to a rapid buildup in secondary system pressure and temperature ...
15.4.2.2D. Following the generation of a turbine trip on reactor trip, the main feedwater control system (FWCS) enters the reactor trip override mode and reduces feedwater flow to 5% of nominal, full power flow. Since the steam bypass control system (SBCS) is assumed to be in manual mode with all bypass valves closed, the main steam safety valves (MSSVs) open to limit secondary system pressure and remove heat stored in the core and the RCS.
15.4.2.3B. All the control systems listed in table 15.4.2-2, except the steam bypass control system, were assumed to be in the automatic mode since these systems have no impact on the minimum DNBR obtained during the transient. The steam bypass control system is assumed to be in manual mode because this minimizes DNBR during the transient.
15.4.8.3C. The steam bypass control system is inoperable on loss of offsite power and therefore is unavailable.
15.5.2.1 The loss of normal ac power results in loss of power to the reactor coolant pumps, the condensate pumps, the circulating water pumps, the pressurizer pressure and level control system, the reactor regulating system, the feedwater control system, and the steam bypass control system.
15.5.2.3C. Since the steam bypass control system is in the manual mode ...
The unavailability of the steam bypass valves ...
15.6.3.1.2D Since the SBCS is assumed to be in manual mode with all bypass valves closed . . .
15.6.3.3.1A. The ADVs [atmospheric dump valves] are used due to the unavailability of the steam bypass control system due to loss of offsite power.
15.6.3.3.3.1C. The loss of offsite power also causes the steam bypass system to the condenser to become unavailable.
The team identified the determination of the root cause for the main steamline isolation, the evaluation of the response of the bypass control system, and the determination of the analyzed design to be Unresolved Item 05000530/2004012-XXX.
During the team's review of the time-line, it was noted that the main turbine stop valves closed on each unit at approximately 07:41:21 am. The Units 1 and 2 reactor coolant pumps had tripped on undervoltage approximately 1 second prior to the turbine trips, and the reactors tripped on anticipatory low departure from nucleate boiling ratio within 1 second of receipt of the turbine trips. However, on Unit 3, the reactor tripped on variable over-power approximately 1 second after the other units. Next, the team noted that the Unit 3 main generator tripped approximately 1 second after the reactor trip on a volts/hertz signal, while the other units' main generators did not trip on volts/hertz signals until approximately 3.5 seconds after the reactor trips. Approximately 5 seconds after the Units 1 and 2 reactor coolant pumps tripped on undervoltage, the Unit 3 reactor coolant pumps tripped on undervoltage. The team also noted, from the review of the post-trip review data, that all three units experienced post-event frequency increases to approximately 67 hertz.
During the loss-of-offsite power event, the Unit 3 reactor coolant pumps remained connected to the substation bus while the turbine was in an overspeed condition.
Licensee engineers concluded that the bus voltage was maintained because of an unexpected response of the Unit 3 generator's excitation circuit. As a result of the excitation circuit response, the excitation, and therefore the output voltage, remained high, delaying the load shed and tripping of the reactor coolant pumps.
Because the Unit 3 reactor coolant pumps remained operating longer and turned at the higher frequency, flow through the reactor core increased. This increase in flow (to approximately 108.2 percent of design flow) produced an indicated power of approximately 109 percent on the excore nuclear instruments. This positive rate of change in reactor power generated a variable over-power trip signal to shut down the reactor.
The team reviewed the licensee's evaluation of the increased reactor coolant flow and noted that the estimated flow of 108.2 percent was less than the evaluated limit of 110.4 percent of design volumetric flow. According to the licensee's analyses, the most limiting component of each reactor coolant pump was the motor flywheel which was designed for 125 percent of rated speed. The team noted that this value was not approached during the event. The team agreed with the licensee's conclusion that there was no impact to the continued power operation with respect to fuel grid-to-rod fretting, vessel hydraulic uplift forces, and fuel mechanical design.
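For illustration, the comparison can be restated with simple arithmetic using the figures quoted above (the approximately 67 hertz post-event frequency, the 110.4 percent evaluated flow limit, and the 125 percent flywheel speed rating); the nominal 60 hertz system frequency is assumed.

```python
# Simple restatement of the comparison using figures quoted in the text.
rated_hz = 60.0        # assumed nominal system frequency
observed_hz = 67.0     # approximate post-event frequency from the report

speed_fraction = observed_hz / rated_hz
print(f"pump speed during the transient : {speed_fraction:.1%} of rated "
      f"(flywheel design limit: 125%)")
print("estimated flow 108.2% of design vs. evaluated limit 110.4%")
```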
While all three turbine generators were in an over-speed condition and connected to the plant busses, all connected loads experienced a higher frequency. The reactor coolant pumps for Units 1 and 2 were not exposed to the high frequency condition because their undervoltage relays actuated before the higher frequency was attained.
The team found that the plant responses observed during this event were apparently different from those described in the FSAR. The evaluation of the root cause for the Unit 3 reactor trip on a variable over-power signal and the evaluation of the impact of the high frequency on plant equipment, as well as the extent of condition once the cause is determined, is considered Unresolved Item XXXX.
2.6 Unit 3, Reactor Coolant Pump 2B Lift Oil Pump Breaker
- a. Inspection Scope The team reviewed the thermal overload curves for the lift oil pumps and the operators' responses to the loss of the pump with regard to restoring forced circulation in the primary plant. The team also interviewed plant personnel, and reviewed CRDR 2715659 and control room logs regarding the activities surrounding the failure of the lift oil pump to start.
- b. Observations and Findings The team identified two unresolved items associated with the design of the lift oil system and the emergency operating procedure to start reactor coolant pumps.
Following the June 14, 2004, loss-of-offsite-power event, the Unit 3 reactor coolant Pump 2B lift oil pump thermal overloads actuated during the recovery of reactor coolant pumps. The team noted that the motor running current was within 0.1 amp of the overload rating. At this level of running current, the team found that the overloads would actuate in approximately 600 seconds. Licensee personnel identified the apparent cause of the trip of the lift oil pump as operation of the pump in excess of 10 minutes.
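For illustration, the sensitivity of thermal-overload trip time to a small current margin can be sketched with a standard cold-curve model, t = tau * ln(I^2 / (I^2 - Ip^2)). The pickup current and time constant below are assumptions chosen so the example lands near the approximately 600-second figure cited above; they are not the actual overload heater data.

```python
# Idealized thermal-overload trip-time estimate. Only the "trips in roughly
# 600 seconds when running just above pickup" behavior reflects the report;
# the pickup and time constant are assumed for illustration.
import math

def trip_time_s(i_amps, pickup_amps, tau_s):
    if i_amps <= pickup_amps:
        return math.inf                       # never trips at or below pickup
    return tau_s * math.log(i_amps**2 / (i_amps**2 - pickup_amps**2))

PICKUP = 10.0   # assumed overload pickup current (A)
TAU = 150.0     # assumed heater thermal time constant (s)

for margin in (0.05, 0.1, 0.5, 2.0):
    t = trip_time_s(PICKUP + margin, PICKUP, TAU)
    print(f"I = pickup + {margin:>4} A -> trips in about {t:,.0f} s")
```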
The team found that the thermal overload sizing and motor running amperage are common among the three units. The team noted that the motors for the lift oil pumps
had been replaced and the thermal overloads resized through the design change process.
The team identified the evaluation of the adequacy of the design change associated with the lift oil pumps to be Unresolved Item 05000528/2004012-XXX; 05000529/2004012-XXX; 05000530/2004012-XXX. Licensee personnel initiated CRDR 2715659 to track this issue.
To restart reactor coolant pumps, Procedure 40EP-9EO10 Appendix 1, states, in part:
- 15. Ensure the appropriate lift oil pump has been running for 7 minutes or more.
The team found that the procedure may not have contained sufficient detail to ensure the safe and continued operation of the lift oil pumps. The team identified the evaluation of the adequacy of Procedure 40EP-9EO10 with respect to the operation of the lift oil pumps to be Unresolved Item 05000528/2004012-XXX; 05000529/2004012-XXX;
2.7 Unit 3, Low Pressure Safety Injection System In-Leakage
- a. Inspection Scope The team reviewed the CRDR 2715659 documenting the safety injection system response during the event. Plant personnel were interviewed and control room logs and plots were reviewed to determine the impact of the in-leakage to the safety injection system.
- b. Observations and Findings The team identified two unresolved items related to the safety injection check valve leakage. The first item is associated with the root cause determination, and prior corrective actions for previous leakage issues and response to industry operating experience and generic communications. The second item is associated with the adequacy of the inservice testing program for testing and demonstrating that the check valves are capable of performing their design basis functions.
During the loss-of-offsite-power event, there were several instances of in-leakage to the Unit 3 safety injection system through check Valve RCEV-217. This in-leakage occurred through 14 inch Borg-Warner check valves and pressurized the safety injection header to reactor coolant Loop 2A. The team noted that, when this system is pressurized above 1850 psig, Train B of the low pressure safety injection system is rendered inoperable. The team also noted that control room operators monitored this condition and utilized an annunciator with a setpoint of 1000 psig.
From the control room logs, the team noted that the operators depressurized the system three times during the response to the loss-of-offsite-power. The operators performed the venting evolution by implementing alarm response Procedure 40AL-9RK2B, "NEED TITLE," Revision ?. Licensee personnel evaluated each instance of pressure increase to ensure that an intersystem loss-of-coolant-accident had not occurred. The criterion for determining whether an intersystem loss-of-coolant-accident had occurred was a pressure increase of more than 1100 psig in less than 1 minute.
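For illustration, that screening criterion can be expressed as a simple check; the pressure samples below are invented, and only the 1100 psig rise in less than 1 minute criterion comes from the text.

```python
# Sketch of the screening criterion described above: flag a possible
# intersystem LOCA if header pressure rises more than 1100 psi within any
# one-minute window. The sample data are invented.
def possible_intersystem_loca(samples, rise_psi=1100.0, window_s=60.0):
    """samples: list of (time_seconds, pressure_psig) in time order."""
    for i, (t0, p0) in enumerate(samples):
        for t1, p1 in samples[i + 1:]:
            if t1 - t0 > window_s:
                break
            if p1 - p0 > rise_psi:
                return True
    return False

# Slow check-valve in-leakage (about 60 psi per minute) does not meet the
# criterion, so it is treated as leakage rather than an intersystem LOCA.
slow_leakage = [(t, 400 + 1.0 * t) for t in range(0, 600, 30)]
print(possible_intersystem_loca(slow_leakage))   # False
```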
The team noted that licensee personnel determined that the apparent cause of the leakage was that the seat and disc did not come to equilibrium temperature at the same time. The licensee personnel determined the problem was most likely to occur when the primary plant is changing from a cooled down condition to normal operating temperatures. Licensee personnel initiated CRDR 2715659 to track this item. All three units have these check valves and have experienced back leakage through them during changing plant conditions.
The team identified Unresolved Item 05000528/2004012-XXX; 05000529/2004012-XXX; 05000530/2004012-XXX for the determination of the root cause of the leaking check valve, as well as the extent of condition once the cause is determined. In addition to the root cause determination, this item is unresolved pending NRC review of the adequacy of the corrective action program and the events assessment program to address the generic concerns previously identified by the industry and the NRC, and through actual experience at Palo Verde.
The team found that the impact on the event was the distraction of operators, who had to depressurize the system three times over a 7-hour period while responding to the loss of offsite power.
All units in Palo Verde have these check valves and have experienced back leakage through them during changing plant conditions. As a result, the team identified Unresolved Item 05000528/2004012-XXX; 05000529/2004012-XXX; 05000530/2004012-XXX for the evaluation of the adequacy of the inservice test program to correctly determine the operability of check valves.
2.8 Units 1 and 3. General Electric Magne-Blast Breaker Failures
- a. Scope
The team reviewed the failure of two circuit breakers to close on demand during the recovery from the loss of offsite power. The team also interviewed licensee personnel associated with the investigation into the breaker failures.
- b. Observations and Findings
The team identified an unresolved item associated with maintenance activities and operation of Magne-Blast circuit breakers.
The team noted that, while recovering from the loss-of-offsite-power, 13.8KV circuit Breakers 1ENANS06K and 3ENANS05D failed to close on demand from the control room. Electrical engineering and maintenance personnel determined the apparent cause of the failures to be the improper operation of the latching mechanisms due to poor lubrication and contamination by dirt. Licensee personnel initiated CRDR 2716019 to evaluate the failures, determine the root cause(s), and take any corrective actions identified.
The team noted that the initial response involved only cycling the breakers, without any detailed troubleshooting. The team found that licensee personnel considered this acceptable because of a known issue with hardened grease in Magne-Blast circuit breakers. While there is a well-known issue with Magne-Blast circuit breakers failing to close as a result of hardened grease, the team found the licensee personnel's approach to be narrow. The NRC has issued a number of generic communications, dating back to the 1970s, associated with Magne-Blast circuit breaker operation problems. In addition to the hardened grease issue, other causes of this type of breaker failure include misaligned latches, misaligned auxiliary contacts, and high-resistance contacts.
The team noted that each of the breakers had been refurbished in 2002.
Breaker 1ENANS06K had been cleaned, inspected, and cycled during the last refueling outage earlier this year. The team found that the licensee personnel's determination of the apparent cause for the Unit 1 breaker was not supported by the facts because of the recent cleaning and inspection.
The team identified this as Unresolved Item 05000528/2004012-XXX; 05000530/2004012-XXX, pending NRC review of the circumstances surrounding the failure of the breakers, and the licensee's review and corrective actions associated with CRDR 2716019. This item has potential human performance and problem identification and resolution aspects.
3.0 Human Performance and Procedural Aspects of the Event

3.1 Unit 2. Train "E" Positive Displacement Charging Pump Trip
- a. Scope
The team reviewed the emergency operating procedures and the operators' responses to the loss of offsite power with respect to the charging pumps to determine the effect on the response to the event. The team also interviewed plant personnel and reviewed CRDRs 2716521 and 2716806 regarding the activities surrounding the charging pump operations.
- b. Observations and Findings
The team identified an unresolved item associated with three examples of operators' lack of adherence to the emergency operating procedures.
As the volume control tank level dropped to approximately 15 percent with Pump CHB-P01 operating, a control room operator recognized the need to transfer the charging pump suction from the volume control tank to the refueling water tank.
Because of the loss of offsite power, control room operators were implementing the emergency operating procedure. The specific procedure was Procedure 40EP-9EO07, "Loss of Offsite Power / Loss of Forced Circulation," Revision 10.
Step 11 of the procedure states:
IF VCT makeup is NOT available, THEN perform the following:
- a. IF RWT level is below or approaching 73%, AND the CRS desires to keep charging in service, THEN PERFORM ONE of the following:
- Appendix 10, Charging Pump Alternate Suction to the RWT /
Restoration
- Appendix 11, Charging Pump Alternate Suction to the SFP / Restoration
- b. IF RWT level is above 73%, THEN perform the following:
- 1) IF three charging pumps will be used, THEN stop the Boric Acid Makeup Pumps.
- 2) IF three charging pumps are will be (sic) used, AND a Fuel Pool Clean Pump is recirculating the RWT, THEN stop RWT recirc by stopping the appropriate Fuel Pool Cleanup Pump.
- 3) Open CHN-HV-536, RWT Gravity Feed to Charging Pump Suction.
- 4) Close CHV-UV-501, Volume Control Tank Outlet.
The team noted that the refueling water tank level was greater than 73 percent during this event. As such, the team found that the appropriate steps in the procedure for transferring the charging pump suction were Steps 11.b.3) and 4). However, the Control Room Supervisor decided that Step 11.a. was appropriate because Valves CHN-HV-536 and CHN-UV-501 did not have power and the supervisor knew that the valves in Step 11.a. could be manually operated. The supervisor failed to consider that the valves in Step 11.b. could also be manually operated. Because of this decision, the team considered the actions of the Control Room Supervisor not to be in accordance with the requirements of the emergency operating procedure for the plant conditions at the time (i.e., the refueling water tank level was greater than 73 percent). This is identified as the first example of Unresolved Item 05000529/2004012-XXX. Licensee personnel initiated CRDR 2716521 to evaluate the human performance error.
After deciding to implement Step 11.a., the Control Room Supervisor conducted a briefing with an auxiliary operator to discuss the manual transfer of the charging Pump CHE-P01 suction from the volume control tank to the refueling water tank using Appendix 10 to Procedure 40EP-9EO10, "Standard Appendices," Revision 32.
Appendix 10 states, in part:
- 1. Request that Radiation Protection accompany the operator performing the local operations to perform area surveys.
- 2. IF it is desired to align Charging Pump(s) suction to the RWT, THEN perform the following:
- a. Place the appropriate Charging Pump(s) in "PULL-TO-LOCK."
- b. Direct an operator to PERFORM Attachment 10-A, Aligning Charging Pump Suction to the RWT, for the appropriate Charging Pump(s).
- c. WHEN the appropriate Charging Pump(s) has been aligned, THEN start the appropriate Charging Pump(s) as necessary.
Attachment 10-A states, in part:
- 1. Open CHB-V327, "RWT TO CHARGING PUMPS SUCTION" (70 ft. East Mechanical Piping Penetration Room)...
- 4. IF aligning Charging Pump E, THEN perform the following (Charging Pump E Valve Gallery)
- a. Close CHE-V322, "'E' CHARGING PUMP CHE-P01 SUCTION ISOLATION VALVE".
- b. Open CHE-V757, "'E' CHARGING PUMP ALTERNATE SUCTION ISOLATION VALVE".
- 5. Inform the responsible operator that the appropriate Charging Pump(s) are aligned to the RWT.
The team found that the auxiliary operator did not implement Appendix 10, Step 1, of emergency operating Procedure 40EP-9EO10. Instead of requesting a radiation protection person to accompany him, the operator went to the radiologically controlled area access point to perform a routine entry. However, because of the loss of offsite power, the access computers were not functioning and routine entry data was being entered manually. The auxiliary operator did not inform the radiation protection person of the necessity of his entry or of the procedural requirement for a radiation protection person to accompany him. This is identified as the second example of Unresolved Item
05000529/2004012-XXX. Licensee personnel initiated CRDR 2716806 to evaluate the delay at the access point.
After reaching the valves, the auxiliary operator, with the procedure on the wrong page, proceeded to perform Attachment 10-A, Steps 4 and 5. After positioning the valves listed in Step 4, the auxiliary operator informed the control room operator that the charging Pump CHE-P01 suction had been transferred. The control room operator then started charging Pump CHE-P01 at approximately 08:05 am and secured charging Pump CHB-P01 at approximately 08:05:52 am. At approximately 08:05:59, charging Pump CHE-P01 tripped on low suction pressure, resulting in a loss of all charging flow.
At approximately 08:06:22, the control room operator restarted charging Pump CHB-P01. The team found that the control room operator was unaware that this pump was operating with suction from the volume control tank. After approximately 4.5 minutes, the control room operator noticed that the volume control tank level had dropped to approximately 10 percent. At that time, the operator secured charging Pump CHB-P01 to prevent it from tripping on low suction pressure or becoming air-bound.
At approximately 08:11:31 am, the charging pump suction was properly transferred to the refueling water tank and charging Pump CHB-P01 was restarted. At approximately 11:32:37 am, the time line indicated that charging Pump CHA-P01 was started.
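The timestamps above allow the degraded-charging intervals to be reconstructed by simple subtraction. The sketch below is illustrative only; the timestamps are those cited in the narrative and the variable names are ours.

    # Reconstruct key intervals from the charging pump timeline discussed above.
    from datetime import datetime

    def t(stamp):
        return datetime.strptime("2004-06-14 " + stamp, "%Y-%m-%d %H:%M:%S")

    che_trip = t("08:05:59")        # CHE-P01 trips on low suction pressure
    chb_restart = t("08:06:22")     # CHB-P01 restarted, still taking suction from the volume control tank
    suction_fixed = t("08:11:31")   # suction properly transferred to the refueling water tank

    print("Loss of all charging flow:", chb_restart - che_trip)              # about 23 seconds
    print("Time until suction properly aligned:", suction_fixed - che_trip)  # about 5 1/2 minutes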
The team found that the auxiliary operator did not properly implement emergency operating Procedure 40EP-9EO10 as required. This is identified as the third example of Unresolved Item 05000529/2004012-XXX. Licensee personnel initiated CRDR 2716521 to evaluate the human performance error.
The team found that the failure to properly implement the emergency operating procedures, as written, complicated the recovery from the loss-of-offsite-power by distracting the operators. The actual significance will be assessed during closure of Unresolved Item 05000529/2004012-XXX. This item has potential human performance and problem identification and resolution aspects.
3.2 Technical Support Center Emergency Diesel Generator Trip
- a. Inspection Scope
The team interviewed members of the licensee's emergency planning organization and electrical maintenance department and reviewed security department logs to determine the cause of the failure of the technical support center diesel generator during the loss of off-site power. The team walked down the technical support center electrical distribution system and the technical support center diesel generator. The team reviewed the licensee's preliminary findings attached to CRDR 2715749, written to investigate and determine the root causes for the emergency planning problems arising from the loss of off-site power and plant trip on June 14, 2004.
- b. Observations and Findings
The team found that the apparent cause for the failure of the technical support center diesel generator to restore power to the technical support center was a human performance error during post-maintenance testing of the diesel engine starting system on June 8, 2004.
On June 14, 2004, as a result of the loss of off-site power, electrical power was lost to the technical support center. As designed, the technical support center diesel generator started, but it did not re-energize the technical support center electrical loads. Electrical maintenance technicians were called to investigate the problem, and shortly after they arrived at the technical support center diesel generator, the diesel engine tripped. The engine control panel alarms indicated that the trip was due to high engine temperature.
Electrical power was restored to the technical support center when off-site power was restored to Unit 1 at 9:10 AM. The technical support center was without electrical power for approximately 1 hour 30 minutes.
During subsequent troubleshooting, electrical maintenance technicians determined that the engine operating switch was in "Idle." With the switch in "Idle," the diesel generator started on loss of electrical power to the technical support center, but did not come up to proper voltage and frequency and did not re-energize the technical support center electrical distribution panel. As a result, the engine radiator cooling fan did not start, so the engine overheated and tripped on high temperature. The electrical maintenance technicians returned the engine operating switch to its normal "Run" position and wrote CRDR 2715726.
The licensee determined that the engine operating switch was apparently left in the "Idle" position after post-maintenance testing of the engine starting system performed on June 8, 2004, under Work Order 2623863. During this monthly engine starting battery inspection, electricians noted that one battery terminal and connector were corroded.
The electricians contacted their team leader and received permission to clean up the connection using the same work order. The team leader and the lead electrician determined that the starting system needed to be tested after the battery was returned to its normal configuration. The lead electrician suggested using a portion of the preventative maintenance task, "Quarterly Restrike Test for TSC Diesel Generator."
Since this test is routinely performed by the electricians working on the starting battery, the team leader allowed the electricians to perform the test without a working copy of the test procedure in the field. After the diesel generator was successfully started, the engine operating switch was moved from "Run" to "Idle" to let the engine run at a slower speed and cool down before being secured. The team determined that the failure to have a working copy of the test procedure at the engine during this post-maintenance testing and the failure to use the restoration guidance contained in the test procedure contributed directly to the failure to restore the technical support center diesel generator to its normal standby condition.
On June 16, 2004, the licensee performed the periodic one-hour loaded test run of the technical support center diesel generator using the preventative maintenance task, "Quarterly Restrike Test for TSC Diesel Generator," under Work Order 2715869. The diesel generator started as expected and automatically energized the technical support center electrical power distribution panel. The diesel generator ran loaded for one hour with no problems noted. The diesel generator was shut down using the task instructions and restoration directions.
The team determined that the diesel generator failure contributed to the delay in staffing the technical support center. As a result of the diesel generator failure, the responding members of the emergency response organization were moved to the satellite technical support center adjacent to the Unit 2 control room. However, normal off-site power was restored to the technical support center before the two-hour staffing requirement of PVNGS Emergency Plan, Table 1, "Minimum Staffing Requirements for PVNGS for Nuclear Power Plant Emergencies," Revision 28, was exceeded.
Unresolved Item 05000529/2004012-XXX is opened to evaluate the corrective actions and apparent cause determination associated with the technical support center diesel generator failure. This item has potential human performance error aspects.
3.3 Emergency Response Organization Issues
- a. Inspection Scope
The team interviewed members of the licensee's emergency planning organization and security department and reviewed security department logs and emergency planning records to determine the cause of the multiple emergency response organization communication problems during the loss of off-site power. The team also reviewed the licensee's preliminary findings attached to significant CRDR 2715749, initiated to investigate and determine the root causes for the emergency planning problems arising from the loss of off-site power and plant trip on June 14, 2004, and attended the significant event investigation team meetings.
- b. Observations and Findings
The team found that the apparent causes for the multiple emergency response organization communication problems were (1) the unanticipated loss of off-site power to all three units, which resulted in the loss of normal emergency planning communications equipment, and (2) human performance errors in implementing EPIP-01, "Satellite Technical Support Center Actions," Revision 14.
When the loss of off-site power and three-unit trip occurred, two of the unit shift managers, the on-site manager, and the operations manager, who was the on-call technical support center emergency coordinator, were in the plan-of-the-day meeting in the operations support building adjacent to the Unit 2 control room. The Unit 1 shift manager returned to the Unit 1 control room and assumed the duties of emergency coordinator for all three units. When the on-site manager arrived at the Unit 1 control room to relieve the shift manager of his emergency coordinator responsibilities, Unit 2 entered an Alert emergency action level, so the on-site manager returned to Unit 2 to set up the satellite technical support center at the most affected unit. The Unit 1 shift manager had declared a Notification of Unusual Event for the loss of off-site power for greater than 15 minutes. He gave this information to the on-site manager to coordinate the emergency notification to state and local authorities.
The Unit 2 shift manager declared an Alert emergency action level based on the loss of off-site power concurrent with a loss of one of the Unit 2 emergency diesel generators for greater than 15 minutes. He directed the on-shift emergency communicator to notify state and local authorities. The emergency communicator immediately determined that
the normal notification alert network system was not working and used the backup radio notification system to notify the state and local authorities within 8 minutes of the Alert classification.
When the on-site manager arrived at the Unit 2 satellite technical support center in the Unit 2 control room, he was told by the operations manager that Unit 2 had assumed all emergency communications, but he did not question him as to whether the Unit 1 Notification of Unusual Event had been sent out to the state and local authorities.
The team determined that there was no formal turnover of emergency communications responsibilities from the Unit 1 shift manager to the Unit 2 shift manager or to the on-site manager who was going to relieve the Unit 2 shift manager of emergency coordinator responsibilities. In addition, the on-site manager and operations manager did not effectively communicate the status of off-site notification. These two incomplete communications were human performance errors that resulted in the Unit 1 Notification of Unusual Event not being sent to state and local authorities.
The Unit 3 shift manager declared a Notification of Unusual Event for the loss of off-site power for greater than 15 minutes. There was a time delay before the Unit 3 on-shift emergency communicator attempted to send out the notification using the normal notification alert network system. When he determined that it was not working, he used the backup radio notification system but did not notify the state and local authorities until 20 minutes after the Notification of Unusual Event classification. The team determined that the delay in starting the notification process and the need to use the backup radio system were human performance errors that delayed the Unit 3 Notification of Unusual Event beyond the 15-minute requirement in EPIP-01, "Satellite Technical Support Center Actions," Revision 14.
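The timeliness of the offsite notifications discussed above can be checked against the 15-minute requirement with a short calculation. The sketch below is illustrative; the elapsed times are those cited in the narrative, and the labels are ours.

    # Illustrative check of offsite notification timeliness against the
    # 15-minute requirement of EPIP-01 (elapsed times from the discussion above).
    from datetime import timedelta

    REQUIRED = timedelta(minutes=15)
    notifications = {
        "Unit 2 Alert": timedelta(minutes=8),
        "Unit 3 Notification of Unusual Event": timedelta(minutes=20),
    }

    for event, elapsed in notifications.items():
        status = "met" if elapsed <= REQUIRED else "exceeded"
        print(event + ": " + str(elapsed) + " (" + status + " the 15-minute requirement)")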
The team determined that the loss of power to the normal notification alert network system did complicate the emergency notification of state and local authorities. In addition, the licensee determined that the three satellite technical support center dose projection computers also lost power and were unavailable. The apparent cause for both failures was that both systems were supplied electrical power from electrical circuits that have no backup power supplies.
CRDR 2715749 addresses the loss of power to the normal notification alert network system, and CRDR 2716281 addresses the dose projection computers. The recommended corrective action is to provide an uninterruptible power supply for both systems.
During the initial loss of off-site power and failure of one Unit 2 emergency diesel generator, the Unit 2 shift manager and on-shift emergency communicator were delayed in sending out the emergency pager notification to the on-call emergency response organization. The team determined that the delay of 16 minutes contributed to the greater than 2-hour response time of the on-call technical support electrical engineer to the technical support center. The problems with protected area access (See Report Section X.X.) did not interfere with this failure to meet the minimum staffing requirements of PVNGS Emergency Plan Table 1. The Unit 2 shift manager mistakenly N/A'd the EPIP-01 step to activate the backup dialogic auto-dialer system for emergency response organization notification. During interviews, the Unit 2 shift manager stated that he thought that June 14, a Monday, was a normal working day and that the emergency response organization would respond to the plant-wide announcement of the Alert classification. The team determined that this human performance error contributed to the late staffing of
the technical support center and to the less than minimum required number of radiation protection technicians reporting to the operations support center within the required 2 hours. This failure to use EPIP-01 properly was documented in CRDR 2715749, and the licensee revised EPIP-01 to always activate the dialogic auto-dialer for backup emergency response organization notification.
Unresolved Item 05000529/2004012-XXX is opened to evaluate the corrective actions and root cause determination associated with the delayed Unit 3 and missed Unit 1 notifications to state and local officials of their Notifications of Unusual Event. This item has potential human performance error aspects.
Unresolved Item 05000529/2004012-XXX is opened to evaluate the corrective actions and root cause determination associated with the inoperability of the radiological dose projection computers used to provide radiologically based protective action recommendations to state and local authorities.
Unresolved Item 05000529/2004012-XXX is opened to evaluate the corrective actions and apparent cause determination associated with the delay in notifying the on-call emergency response organization. This item has potential human performance error aspects.
4.0 Coordination with Off-site Electrical Organizations
- a. Inspection Scope
The team reviewed the design and maintenance practices of the off-site electrical organizations in order to assess factors that influenced the electrical power grid failure, the extent of the system failure, and the corrective actions for preventing such failures.
- b. Observations and Findings
The loss of the Palo Verde 500kV grid, which disabled all seven offsite power supplies for the nuclear station, was due to the cascading effect of a wide-area electrical isolation that originated from an electrical fault on a 230kV transmission line that remained unisolated for a period of 39 seconds. The selective tripping of the breakers to isolate problems at the Westwing 230kV substation, near the source of the fault, did not perform as required due to a relay failure and a design deficiency.

The switchgear maintenance at the Palo Verde 500kV substation is performed by Salt River Project (SRP). The breakers undergo yearly maintenance, including a check of the SF6 tubing and pressure switches; a check of the air system for alarms and the operation of the compressor; and breaker timing and operational checks of the mechanisms.
The protective relaying is also inspected yearly. The relays' settings, software and firmware, operating characteristics, and communication circuits are verified for accuracy.
The Palo Verde substation is manned by maintenance personnel during normal working hours for prompt identification of any evolving problems.
The licensee has calculated the onsite requirement for electrical voltage to be 512kV. The licensee has directed the APS Energy Control Center (APS-ECC), the local transmission system operator, to provide a voltage range of 525 to 535kV for the Palo Verde 500kV substation. The Energy Control Center continued to provide voltage within the expected voltage band following the isolation of the fault.
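A simple screening of a reported switchyard voltage against the calculated 512kV onsite requirement and the directed 525 to 535kV band is sketched below. The threshold values come from the discussion above; the sample readings and the function name are hypothetical.

    # Illustrative screening of Palo Verde 500kV switchyard voltage readings.
    ONSITE_MINIMUM_KV = 512.0
    BAND_LOW_KV, BAND_HIGH_KV = 525.0, 535.0

    def classify_voltage(kv):
        if kv < ONSITE_MINIMUM_KV:
            return "below the calculated onsite voltage requirement"
        if BAND_LOW_KV <= kv <= BAND_HIGH_KV:
            return "within the directed band"
        return "outside the directed band (coordinate with APS-ECC)"

    for reading in (530.7, 536.0, 510.0):     # hypothetical readings, kV
        print(str(reading) + " kV: " + classify_voltage(reading))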
The team concluded that the remedial measures taken and planned by the offsite electrical organizations would be an enhancement for preventing a cascading blackout in the Palo Verde 500kV substation.
5.0 Risk Significance of the Event - Dave

6.0 Assessment of Event Response - Tony

7.0 Exit Meeting Summary - Tony
ATTACHMENT 1

SUPPLEMENTAL INFORMATION

KEY POINTS OF CONTACT
Licensee
NRC

ITEMS OPENED, CLOSED, AND DISCUSSED
05000528/2004-012; 05000529/2004-012; 05000530/2004-012

DOCUMENTS REVIEWED

Drawings
NUMBER        TITLE                                                                               REVISION
01-J-SPL-003  Control Logic Diagram Essential Spray Pond Auxiliary Pumps, Day Tk Valve & Alarms   3
01-J-EWL-001  Control Logic Diagram Essential Cooling Water Pumps and Surge Tank Fill Valves      2
01-J-EWL-002  Control Logic Diagram Essential Cooling Water Loop A X-Tie Valves & System Alarms   0
01-J-SPL-001  Control Logic Diagram Essential Spray Pond Pumps                                    3
01-M-EWP-001  P&I Diagram Essential Cooling Water System                                          29
01-M-SPP-001  P&I Diagram Essential Spray Pond System Sheet 1 of 3                                35
01-M-SPP-001  P&I Diagram Essential Spray Pond System Sheet 2 of 3                                35
01-M-SPP-001  P&I Diagram Essential Spray Pond System Sheet 3 of 3                                35
01-M-SPP-002  P&I Diagram Essential Spray Pond System                                             12

Miscellaneous Documents:
NUMBER                    TITLE                                                                                             REVISION/DATE
                          Palo Verde Nuclear Generating Station Design Basis Manual, EW System                             16
                          Palo Verde Nuclear Generating Station Design Basis Manual, SP System                             13
                          PV Unit 2 Archived Operator Log, 06/14/2004, 12:10:47 AM, through 06/15/2004, 11:10:30 PM
Bulletin 74-09            Deficiency in General Electric Model 4KV Magne-Blast Breakers                                    August 6, 1974
Information Notice 84-29  General Electric Magne-Blast Circuit Breaker Problems                                            April 17, 1984
Information Notice 90-41  Potential Failure of General Electric Magne-Blast Circuit Breakers and AK Circuit Breakers       June 12, 1990
Information Notice 93-26  Grease Solidification Causes Molded Case Circuit Breaker Failure To Close                        April 7, 1993
Information Notice 93-91  Misadjustment Between General Electric 4.16-KV Circuit Breakers and Their Associated Cubicles    December 3, 1993
Information Notice 94-02  Inoperability of General Electric Magne-Blast Breaker Because of Misalignment of Close-Latch Spring   January 7, 1994
Information Notice 94-54  Failures of General Electric Magne-Blast Circuit Breakers To Latch Closed                        August 1, 1994
Information Notice 95-22  Hardened or Contaminated Lubricants Cause Metal-Clad Circuit Breaker Failure                     April 21, 1995
Information Notice 96-43  Failures of General Electric Magne-Blast Circuit Breakers                                        August 12, 1996

Procedures:
NUMBER       TITLE                                               REVISION
40EP-9EO07   Loss of Offsite Power/Loss of Forced Circulation    10
40EP-9EO10   Standard Appendices                                 33
40OP-9CH01   CVCS Normal Operations                              35
ATTACHMENT 2

AUGMENTED INSPECTION TEAM CHARTER
UNITED STATES NUCLEAR REGULATORY COMMISSION REGION IV
611 RYAN PLAZA DRIVE, SUITE 400
ARLINGTON, TEXAS 76011-4005

June 15, 2004

MEMORANDUM TO: Anthony T. Gody, Chief, Operations Branch, Division of Reactor Safety

FROM: Bruce Mallett, Regional Administrator /RA/
SUBJECT:
AUGMENTED INSPECTION TEAM CHARTER; PALO VERDE NUCLEAR GENERATING STATION, UNITS 1, 2, AND 3, COMPLETE LOSS OF OFFSITE POWER AND MULTIPLE MITIGATING SYSTEM FAILURES

In response to the complete loss of all offsite power sources, the trip of all three units, and the Unit 2 Emergency Diesel Generator "A" failing to function as required at Palo Verde Nuclear Generating Station on June 14, 2004, an Augmented Inspection Team is being chartered.
There was no impact to public health and safety associated with the event. You are hereby designated as the Augmented Inspection Team (AIT) leader.
A. Basis

On June 14, 2004, at 9:45 a.m. CDT, all offsite power supplies to the Palo Verde Nuclear Generating Station were disrupted, with a concurrent trip of all three units.
Additionally, the Unit 2 Emergency Diesel Generator "A" failed to function as required.
As a result, the licensee declared a Notice of Unusual Event (NOUE) for all three units at about 9:50 a.m. CDT and elevated to an Alert for Unit 2 at 9:54 a.m. CDT. The licensee and NRC resident inspectors also reported a number of other problems, including the failure of Unit 2 Charging Pump "E," the failure of a Unit 3 steam bypass control valve, multiple breakers failing to operate during recovery operations, and emergency response facility and security interface issues which may have impeded emergency responders. This event meets the criteria of Management Directive 8.3 for a detailed follow-up inspection, in that it involved multiple failures of systems used to mitigate an actual event. The initial risk assessment, though subject to some uncertainties, indicates that the conditional core damage probability was in the range of high E-4.
Because the initial risk assessment was in the range for consideration of an AIT and because of multiple failures in systems used to mitigate an actual event, it was decided that an AIT is the appropriate NRC response for this event.
The AIT is being dispatched to obtain a better understanding of the event and to assess the responses of plant equipment and the licensee to the event. The team is also tasked with reviewing the licensee's root-cause analyses.
B. Scope

Specifically, the team is expected to perform data gathering and fact-finding in order to address the following:
- 1. Develop a complete sequence of events related to the loss-of-offsite power, the multiple unit trips, and the Unit 2 emergency diesel generator failure.
- 2. Assess the performance of plant systems in response to the event, including any design considerations that may have contributed to the event.
- 3. Assess the adequacy of plant procedures used in response to the event.
- 4. Assess the licensee's response to the event, including operator actions and emergency declarations, and any emergency response facility or security interface issues that may have adversely affected response to the event.
- 5. Assess the licensee's determination of the root and/or apparent causes of offsite power loss, emergency diesel generator failure, and other mitigating system(s) failures.
- 6. Based upon the licensee's cause determinations, review any maintenance related actions which could have contributed to the event initiation or produced subsequent response problems.
- 7. Review the licensee's assessment of coordination activities with off-site electrical dispatch organizations prior to and during the event.
- 8. Provide input to the regional Senior Reactor Analyst for further assessment of risk significance of the event.
C. Guidance

The team will report to the site, conduct an entrance meeting, and begin inspection no later than June 16, 2004. A report documenting the results of the inspection should be issued within 30 days of the completion of the inspection. While the team is on site, you will provide daily status briefings to Region IV management. The team is to emphasize fact-finding in its review of the circumstances surrounding the event, and it is not the responsibility of the team to examine the regulatory process. The team should notify Region IV management of any potential generic issues identified related to this event for discussion with the Program Office. Safety concerns that are not directly related to this event should be reported to the Region IV office for appropriate action.
For the period of the inspection, and until the completion of documentation, you will report to the Regional Administrator. For day-to-day interface, you will contact Dwight Chamberlain, Director, Division of Reactor Safety. The guidance in Inspection Procedure 93800, "Augmented Inspection Team," and Management Directive 8.3, "NRC Incident Investigation Procedures," apply to your inspection. This Charter may be modified should the team develop significant new information that warrants review. If you have any questions regarding this Charter, contact Dwight Chamberlain at (817) 860-8180.
Distribution:
B. Mallett T. Gwynn J. Dixon-Herrity J. Dyer R. Wessman T. Reis H. Berkow S. Dembeck M. Fields D. Chamberlain A. Howell C. Marschall T. Pruett J. Clark V. Dricks W. Maier N. Salgado G. Warnick J. Melfi
ADAMS: Yes  No   Initials:
Publicly Available  Non-Publicly Available  Sensitive  Non-Sensitive

JAClark/lmb    MSatorius    HBerkow    DDChamberlain    BMallett
/RA/           /RA/         /RA/       /RA/             /RA/
6/15/04        6/15/04      6/15/04    6/15/04          6/15/04

OFFICIAL RECORD COPY    T=Telephone  E=E-mail  F=Fax
ATTACHMENT 3

SEQUENCE OF EVENTS
Electrical Sequence of Events - June 24, 2004

07:40:55.747  Fault #1 inception
              Fault #1 type = C-N
              Fault #1 cause/location = Phase down (broken bells) reported near 115th Ave. & Union Hills (WW-LBX Line)
At Westwing, the Liberty line relays operated properly and issued a trip signal. Incorporated in this scheme is a Westinghouse high-speed "AR" auxiliary tripping relay that is used to "multiply" that trip signal toward both trip coils of two breakers (WW1022 & WW1126). The "AR" relay failed (partially) and issued the trip signal to breaker WW1126 only. Since the trip signal was never successfully issued to WW1022, breaker failure for WW1022 was also never initiated (this would have cleared the Westwing 230kV West bus and isolated the fault). Therefore, the "remote" ends of all lines feeding into the 525kV and 230kV yards were required to trip to isolate the fault.
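The single-point vulnerability described above (one auxiliary relay repeating one trip signal to two breakers, with breaker-failure backup armed only for a breaker that actually receives the signal) can be illustrated with a small model. The sketch below is purely illustrative and greatly simplified; the function and variable names are ours and do not represent the actual relay scheme logic.

    # Simplified, illustrative model of the trip-signal fan-out described above.
    def local_fault_clearing(trip_signal_issued):
        """trip_signal_issued: dict of breaker name -> whether the AR relay output reached it.
        A breaker that receives the signal either opens or is cleared by breaker-failure
        backup; a breaker that never receives the signal is neither tripped nor backed up."""
        untripped = [b for b, issued in trip_signal_issued.items() if not issued]
        if not untripped:
            return "fault cleared locally"
        return "fault NOT cleared locally; remote line ends must trip (no signal to " + ", ".join(untripped) + ")"

    # Partial failure of the "AR" auxiliary relay: WW1126 received the trip, WW1022 did not.
    print(local_fault_clearing({"WW1126": True, "WW1022": False}))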
07:40:55.814  4.0 cycles after fault #1 inception   WW1126 opened (LBX / PPX 230kV crossover breaker)
07:40:55.822  4.5 cycles after fault #1 inception   LBX1282 opened (Westwing 230kV Line)
07:40:56.115  22.1 cycles after fault #1 inception  AFX732 & AFX735 opened (Westwing 230kV Line)
07:40:56.122  22.5 cycles after fault #1 inception  YP452 & YP852 opened (Westwing 525kV Line)
07:40:56.136  23.3 cycles after fault #1 inception  WW1426 & WW1522 opened (Agua Fria 230kV Line)
07:40:56.142  23.7 cycles after fault #1 inception  WW856 & WW952 opened (Yavapai 525kV Line)
07:40:56.165  25.1 cycles after fault #1 inception  DV322 & DV722 & DV962 opened (Westwing 230kV Line)
07:40:56.172  25.5 cycles after fault #1 inception  WW1726 & WW1822 opened (Deer Valley 230kV Line)
07:40:56.196  26.9 cycles after fault #1 inception  RWYX482 & RWYX582 & RWYX782 opened (Westwing 230kV Line)
(Waddell 230kV Line)
(230/69kV Transformer #8)
07:40:56.515  46.1 cycles after fault #1 inception   WW1222 opened (Pinnacle Peak 230kV Line)
t = unknown   Surprise Lockout "L" operated (230/69kV Transformer #4 Differential & B/U Over-Current)
07:40:56.548  48.1 cycles after fault #1 inception   SC622 & SC922 & SC262 opened (Surprise 230/69kV Transformer #4)
07:40:57.549  108.1 cycles after fault #1 inception  SC1322 opened (Westwing 230kV Line)
07:40:57.800  123.2 cycles after fault #1 inception  RWP-CT2A opened (Redhawk Combustion Turbine 2A)
07:40:57.807  123.6 cycles after fault #1 inception  RWP-ST1 opened (Redhawk Steam Turbine 1)
07:40:57.814  124.0 cycles after fault #1 inception  RWP-CT1A opened (Redhawk Combustion Turbine 1A)
07:40:58.339  155.5 cycles after fault #1 inception  RIV762 opened (Westwing 69kV Line)
07:40:58.372  157.5 cycles after fault #1 inception  HH762 opened (Westwing 69kV Line)
t = unknown   Westwing Lockout "AK" operated (230/69kV Transformer #11 Differential & B/U Over-Current)
07:40:59 (EMS)  WW2026 & WW2122 opened (Westwing 230/69kV Transformer #11 - High Side)
07:40:59.272  211.5 cycles after fault #1 inception  WK362 opened (Westwing 69kV Line)
07:40:59.489  224.5 cycles after fault #1 inception  HAAX935 & HAAX938 opened (Hassayampa - Arlington 525kV Line) (Time stamp provided by SRP)
07:41:00 (EMS)  WW862 & WW962 & WW1362 opened (Westwing 230/69kV Transformer #11 - Low Side)
07:41:00.392  278.7 cycles after fault #1 inception  WW752 opened (South 345kV Line)
07:41:01.982  Fault #1 type changed = B-C-N
07:41:02.144  383.8 cycles after fault #1 inception  PSX832 closed auto (Perkins Cap-Bank Bypass) (Time stamp provided by SRP)
07:41:02.154  Fault #1 type changed = C-N
07:41:02.799  Fault #1 type changed = B-C-N
07:41:03.966  493.1 cycles after fault #1 inception   SC562 opened (McMicken 69kV Line)
07:41:05.373  577.6 cycles after fault #1 inception   MQ562 opened (McMicken 69kV Line)
07:41:07.849  12.102 seconds after fault #1 inception HAAX922 & HAAX925 opened (Palo Verde 525kV Line #2)
(Time stamp provided by SRP) 07:41:07.851 12.104 seconds after fault #1 inception PLX972 & PLX975 opened (Hassayampa 525kV Line #2)
(Time stamp provided by SRP) 07:41:07.859 12.112 seconds after fault #1 inception HAAX932 opened (Palo Verde 525kV Line #1)
(Time stamp provided by SRP) 07:41:07.875 12.128 seconds after fault #1 inception PLX982 & PLX985 opened (Hassayampa 525kV Line #3)
(Time stamp provided by SRP) 07:41:07.878 12.131 seconds after fault #1 inception HAAX912 & HAAX915 opened (Palo Verde 525kV Line #3)
(Time stamp provided by SRP) 07:41:07.880 12.133 seconds after fault #1 inception PLX942 & PLX945 opened (Hassayampa 525kV Line #1)
(Time stamp provided by SRP) 07:41:08.104 Fault #1 type changed = A-B-C-N 07:41:10.445 14.698 seconds after fault #1 inception NV1 052 & NV1 156 opened (Westwing 525kV Line) 07:41:10.456 14.709 seconds after fault #1 inception WW556 & WW652 opened (Navajo 525kV Line) 07:41:12 (EMS) WW424J opened (Westwing 230kV West Bus Reactor)
07:41:20.005  24.258 seconds after fault #1 inception  PLX992 opened (Devers 525kV Line) (PLX995 out-of-service at this time)
(Time stamp provided by SRP) 07:41:20.113 24.366 seconds after fault #1 inception PLX932 & PLX935 opened (Rudd 525kV Line)
(Time stamp provided by SRP) 07:41:20.145 24.398 seconds after fault #1 inception RUX912 & RUX915 opened (Palo Verde 525kV Line)
(Time stamp provided by SRP) 07:41:20.864 25.117 seconds after fault #1 inception PLX912 & PLX915 opened (Westwing 525kV Line #1)
(Time stamp provided by SRP) 07:41:20.873 25.126 seconds after fault #1 inception WW1 456 & WW1 552 opened (Palo Verde 525kV Line #2) 07:41:20.874 25.127 seconds after fault #1 inception WW1 156 &WW1252 opened (Palo Verde 525kV Line #1) 07:41:20.895 25.148 seconds after fault #1 inception PLX922 & PLX925 opened (Westwing 525kV Line #2)
(Time stamp provided by SRP) 07:41:23.848 28.101 seconds after fault #1 inception PLX988 opened (Palo Verde Unit-3)
(Time stamp provided by SRP) 07:41:24.280 System Frequency = 59.514 Hz (Measured at APS Reach Substation) 07:41:24.641 28.894 seconds aft&r fault #1 inception PLX918 opened (Palo Verde Unit-1)
(Time stamp provided by SRP) 07:41:24.652 28.905 seconds after fault #1 inception PLX938 opened (Palo Verde Unit-2)
(Time stamp provided by SRP) 07:41:25 (DOE) ED4-122 & ED4-322 opened (DOE ED4 Substation)
Tripped on under-frequency (Note frequency low at 07:41:24.280) 07:41:25 (EMS) ML142, ML542, ML1042 & ML1442 opened (Moon Valley 12kV Feeders)
Tripped on under-frequency (Note frequency low at 07:41:24.280)
07:41:28 (DOE) MEX794 closed auto (Mead Cap Bank bypass) 07:41:34.615 38.868 seconds after fault #1 inception MEX1092 & MEX1692 opened (Perkins - Westwing 525kV Line)
Fault #1 cleared 07:42:22.773 System Frequency = 59.770 Hz (Measured at APS Reach Substation)
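The elapsed-time entries above are reported in 60 Hz cycles (and, for later entries, seconds) after fault inception. The sketch below is a minimal cross-check of that conversion using two of the tabulated timestamps; the helper name is ours.

    # Cross-check of elapsed time in 60 Hz cycles after fault #1 inception (07:40:55.747).
    from datetime import datetime

    GRID_HZ = 60.0
    t0 = datetime.strptime("07:40:55.747", "%H:%M:%S.%f")

    def cycles_since_inception(stamp):
        """Elapsed 60 Hz cycles between fault inception and the given timestamp."""
        return (datetime.strptime(stamp, "%H:%M:%S.%f") - t0).total_seconds() * GRID_HZ

    print(round(cycles_since_inception("07:40:55.814"), 1))   # about 4.0 cycles (WW1126 opened)
    print(round(cycles_since_inception("07:41:24.641"), 1))   # about 1733.6 cycles, i.e. roughly 28.9 seconds (Palo Verde Unit-1)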
Unit 1 Sequence of Events

0741  Startup Transformer #2 Breaker 945 Open
      Excessive Main Generator and Field Currents Noted
      Engineered Safeguards Features Bus Undervoltage
      Loss of Offsite Power Load Shed Train A and B
      Emergency Diesel Generator Train A and B Start Signal
      Fast Clsg of IV commanded (then rescinded in same second)
      No ETSV pressure trip
      Low Departure from Nucleate Boiling Ratio Reactor Trip
      Master Turbine Trip
      Main Turbine Mechanical Over Speed Trip
      Emergency Diesel Generator "A" Operating (10 Second Start Time)
      Emergency Diesel Generator "B" Operating (13 Second Start Time*)
0751  Manual Main Steam Isolation System Actuation
0758  Declared Notice of Unusual Event (loss of essential power for greater than 15 minutes)
0810  Both Gas Turbine Generator Sets Started, #1 GTG is supplying power to NAN S07
0813  Closed 525 kV 552-942. The East bus is powered from Hass #1
0838  Restored power to Startup Xfrmr X01
0844  Restored power to Startup Xfrmr X03
0855  Fire reported in 120 ft Aux building. NO FIRE exists, AOP reported fumes. Suspected that the fumes were caused by the elevated temperature of the letdown HX
0900  HI Temp AO entered for Letdown HX Outlet temp offscale high (2715667) - this controller feeds a common alarm annunciator - apparently the alarm came in without actions being taken until AO did walk through - what is the annunciator sensing history - how long did this condition exist before actions taken???
1002 Reset Generator Protective Trips (volts/hertz; Backup under-frequency)
      Palo Verde Switchyard Ring Bus restored
1159  Paralleled DG B with bus and cooled down engine, restoring the in-house buses
1207  Emergency Coordinator terminated NUE for all three units
1248  Paralleled DG A with bus and cooled down
2209  Noted grid voltage greater than 535.5 kV; Shift Manager coordinated with ECC

6/15
0005  Restored CVCS letdown per Std Appendix 12, started Chg Pump 'A'
0155  Established RCP seal injection and controlled bleed off
0241  Started 2A RCP, had to secure due to low running amps; other two units had RCPs running (what were the amps at the time?); exiting of EOP delayed due to switchyard conditions
0305  Exited Loss of Letdown AOP after restoration of letdown per Standard App. 12 of EOPs
0345  Palo Verde Switchyard E-W voltage at approx. 530.7 kV
0818  Started RCPs 2A and 1A
0920  Started RCPs 2B and 1B
0930  Exited EOP 40EP-9EO07, Loss of Offsite Power/Loss of Forced Circulation
Unit 2 Sequence of Events

6/14/2004
0740  4.16KV Switchgear 3 Bus Trouble Alarm
      Generator Negative Sequence Alarm
      4.16KV Switchgear 4 Bus Trouble Alarm
0741  Main Transformer B Status Trouble Alarm
      Main Transformer A Status Trouble Alarm
      ESF Bus Undervoltage Channel A-2
      ESF Bus Undervoltage Channel B-2
      LOP/Load Shed B
      ESF Bus Undervoltage Channel B-3
      DG Start Signal B
      LOP/Load Shed A
      ESF Bus Undervoltage Channel A-4
      DG Start Signal A
      LO DNBR Channels A, B, C, & D Trip
      RPS Channels A, B, C, & D Trip
      Main Generator 525KV Breaker 935 Open
      Mechanical Overspeed Trip of Main Turbine
0751  Manually initiated MSIS
0755  Declared an Alert for Loss of All Offsite Power to Essential Busses for Greater than 15 minutes
0901  Energized 13.8KV Busses 2E-NAN-S03 and 2E-NAN-S05
0927  Energized 4.16KV Bus 2E-PBA-S03
0951  Exited Alert
1001  Energized 13.8KV Bus 2E-NAN-S01
1024  Energized 13.8KV Bus 2E-NAN-S02
1132  Started Charging Pump A
1618  Engineering and Maintenance review concluded that Charging Pump E was available for service after fill and vent
1714  Started Charging Pump E
1716  Started RCP 1A
1722  Started RCP 2A
1806  Stopped RCPs 1A and 2A on low motor amperage. ECC contacted to adjust grid voltage as low as possible
2040  Started RCPs 1A and 2A
2051  Stopped RCPs 1A and 2A on low running amperage

06/15/2004
0400  Started RCPs 1A and 2A
0610  Exited Emergency Operating Procedures