Public Meeting Summary on the Reactor Oversight Process
ML072960170
Site: Kewaunee, Duane Arnold, Cook, Fort Calhoun, FitzPatrick, LaSalle
Issue date: 10/31/2007
From: Joseph Ashcraft
NRC/NRR/ADRO/DIRS/IPAB
To: Andersen J
NRC/NRR/ADRO/DIRS/IPAB
Ashcraft, Joseph NRR/DIRS/IPAB 415-3177
References
Download: ML072960170 (108)


Text

October 31, 2007

MEMORANDUM TO: James W. Andersen, Chief
Performance Assessment Branch
Division of Inspection and Regional Support
Office of Nuclear Reactor Regulation

FROM: Joseph M. Ashcraft, Reactor Operations Engineer /RA/
Performance Assessment Branch
Division of Inspection and Regional Support
Office of Nuclear Reactor Regulation

SUBJECT: PUBLIC MEETING SUMMARY ON THE REACTOR OVERSIGHT PROCESS HELD ON OCTOBER 18, 2007

On October 18, 2007, the staff hosted the monthly Reactor Oversight Process (ROP) Working Group public meeting. The attendance list for the meeting is contained in Enclosure 1. The agenda for the meeting is contained in Enclosure 2.

Discussions by the meeting attendees included the status of current NRC initiatives regarding the significance determination process (SDP) appeal process, safety culture lessons learned and the status of the NRC cross-cutting issue task force, implementation of the Browns Ferry Unit 1 performance indicators (PIs), several Public Radiation Cornerstone SDP changes, ROP realignment, and Mitigating Systems Performance Index (MSPI) and other PI frequently asked questions (FAQs).

Other topics discussed by the staff were ROP security issues, the final decision for the Kewaunee appeal, FAQ 69.2, and ongoing efforts due to the issuance of RIS 2007-021, Generic Communication on Adherence to Licensed Power Limits. The staff will continue to discuss these topics during future ROP public meetings.

With respect to the Kewaunee FAQ appeal, the staff provided the decision of the Director of the Division of Inspection and Regional Support (DIRS) regarding FAQ 69.2. The Director ruled, after weighing the arguments presented by staff and industry, that the MSPI unavailability time for the period from June 28 to August 17, 2006, should not count toward the indicator.

CONTACT: Joseph Ashcraft, NRR/DIRS/IPAB 301-415-3177

The status of the open draft FAQs is as follows:

Temp No. | PI   | Topic                                        | Status                                                                                                       | Plant/Co.
70.0     | MSPI | Blown Fuse on Diesel                         | 06/13 Introduced; 07/18 Discussed; 08/22 Discussed; 09/19 On hold pending Kewaunee; 10/18 Tentative Approval | Ft. Calhoun
71.0     | IE01 | Chemistry Excursion                          | 07/18 Introduced and Discussed; 08/22 Discussed; 09/19 Discussed; 10/18 Appeal                               | Duane Arnold
71.1     | IE03 | Environmental Condition Downpower            | 07/18 Introduced and Discussed; 08/22 Discussed; 09/19 Tentative Approval; 10/18 Remains Tentative Approval  | FitzPatrick
72.0     | EP03 | Siren Activation                             | 08/22 Introduced and Discussed; 09/19 Discussed, continue next mtg.; 10/18 Tentative Approval                | D.C. Cook
73.0     | MSPI | Changes to CDE for Basis Document Parameters | 09/19 Introduced and Tentative Approval; 10/18 Final Approval                                                | Generic
74.0     | MSPI | PRA Model Revision                           | 10/18 Introduced and Discussed; 10/18 Rejected                                                               | LaSalle

FAQs on Appeal:

Temp No. | PI   | Topic              | Status                              | Plant/Co.
69.2     | MSPI | Fuel Oil Line Leak | Final Decision Issued and attached. | Kewaunee

FAQ 70.0 was tentatively approved based on its similarity with FAQ 69.2 for Kewaunee.

The staff and industry plan to work together to clarify the guidance in this area. FAQ 71.1 remains in tentative approval; the working group is rewording the response, and the event will not count toward the indicator.

FAQ 72.0 is also in tentative approval; the working group is rewording the response, and the event will count toward the indicator. The staff expects to make both final approvals at the December meeting. FAQ 73.0 received final approval.

The staff and industry could not reach consensus on FAQ 71.0. NEI and industry are planning to appeal the staff's response to the FAQ.

FAQ 74.0 was introduced and discussed. The NRC staff and industry rejected the FAQ because there is sufficient guidance in NEI 99-02 to handle the issue.

The date for the next meeting of the ROP Working Group is December 5, 2007.

Enclosures:

1. Attendance List
2. Agenda
3. FAQ Log, dated 10/07
4. NEI Action List
5. RIS 2007-21 Status Report on Upcoming Actions
6. Office of Nuclear Regulatory Research (RES) Response to User Need Request NRR-2007-003
7. NEI DRAFT Safety Culture Survey Results
8. NEI Reactor Oversight Process (ROP) Task Force Post-Implementation Assessment Mitigating Systems Performance Index (MSPI)
9. Proposed Changes to Public Radiation Safety SDP - ML072960690

Accession Numbers: Package - ML072960246; Memo - ML072960170; ML072960690

OFFICE: DIRS/IPAB (JAshcraft, 10/26/07); DIRS/IPAB (JAndersen, 10/31/07)

OFFICIAL RECORD COPY

DISTRIBUTION:

PUBLIC DRoberts ABoland AVegel AHowell BHolian CCasto CPederson DLew DChamberlain HChristensen ISchoenfeld JShea JClifford KKennedy THsia MGamberoni RCaniano SWest VMcCree DDube IPAB PAppignani IRIB

ATTENDANCE LIST - INDUSTRY/STAFF ROP PUBLIC MEETING

NAME (AFFILIATION)
1. John Butler (NEI)
2. Julie Keys (NEI)
3. Lenny Sueper (NMC)
4. Al Haeger (Exelon)
5. Duane Kanitz (STARS)
6. Don Olson (Dominion)
7. Robin Ritzman (FENOC)
8. Fred Mashburn (TVA)
9. Kay Nicholson (Duke)
10. Lou Larragoite (Constellation Energy)
11. Bryan Ford (Entergy)
12. George Oliver (NEI)
13. David Lochbaum (Union of Concerned Scientists)
14. Roy Linthicum (Exelon)
15. Jim Peschel (FPL)
16. Ros Murrell (FPL)
17. Rob Ritzman (FENOC)
18. Bryon Ford (Entergy)
19. Don Olson (Dominion)
20. Don Olson (Dominion)
21. Terry Reis (NRC)
22. James Andersen (NRC)
23. John Thompson (NRC)
24. Jim Isom (NRC)
25. Art Soloman (NRC)
26. Joe Ashcraft (NRC)
27. Steve Garry (NRC)
28. Paul Harris (NRC)
29. Don Hickman (NRC)
30. Steve Orth (NRC)
31. Roy Caniano (NRC)
32. Steve Alexander (NRC)
33. Don Dube (NRC)
34. Steve Stein (NRC)
35. Ron Schmitt (NRC)

ROP WORKING GROUP PUBLIC MEETING AGENDA
October 18, 2007, 9:00 a.m. - 4:00 p.m.
Nuclear Energy Institute
Conference Call Line: 800-638-8081 / 301-231-5539, Pass Code: 8217#
Meeting Leader: Joseph M. Ashcraft

9:00 - 9:05 a.m. -- Introduction and Purpose of Meeting. Process: Discuss. Leader: Andersen.

9:05 - 9:20 a.m. -- Performance Assessment Branch Topics:
1. Safety Culture
2. Other Topics
Process: Discuss, share information. Leader: 1.-2. Andersen.

9:20 - 10:30 a.m. -- Reactor Inspection Branch Topics:
1. Regulatory Issue Summary (RIS) 2007-21
2. Significance Determination Process (SDP) appeal process
3. ROP Realignment draft
4. SDP Changes in the Public Radiation Safety Cornerstone
a. Revise effluent branch to incorporate leaks and spills and a substantial failure to implement the effluents program, and environmental program changes per SECY 07-112
b. Revise the radioactive material control branch to remove aggregation of findings
c. Revise the transportation branch to remove the low-level burial ground decision block
Process: Discuss, share information. Leaders: 1.-2. Reis; 3. Isom; 4. Garry/Keegan.

10:30 - 10:45 a.m. -- Public Input

10:45 - 12:00 p.m. -- Performance Indicator Topics:
1. NEI Safety culture survey
2. Cross-Cutting Issue Task Force Update
3. MSPI Unavailability Reviews
4. Other
5. General Discussion of ROP Security Issues (discussion closed to the public)
Process: Discuss, share information. Leaders: 1. Andersen; 2. Gramm; 3. Keys; 4. Andersen; 5. Costello.

12:00 - 1:00 p.m. -- Lunch

1:00 - 2:30 p.m. -- Open and New PI Frequently Asked Questions (FAQs). The latest draft FAQs are located on the public web at http://www.nrc.gov/NRR/OVERSIGHT/ASSESS/draft_faqs.pdf. This list is subject to change the day before the meeting based on availability of new draft FAQs provided by the Nuclear Energy Institute (NEI). Process: Discuss, share information. Leader: All.

2:30 - 2:45 p.m. -- Public Input

2:45 - 3:15 p.m. -- Continuation Discussion of Open and New PI FAQs. Process: Discuss, share information. Leader: All.

3:15 - 3:35 p.m. -- Maintenance Rule alignment with ROP: 1. NEI 93-01 (update status). Process: 1. Status. Leader: 1. Alexander.

3:35 - 3:45 p.m. -- Future Agenda:
1. Future Meeting Dates: 12/05 (Nov/Dec), 01/16/08, 02/20/08
2. Action Item Review
3. Future Topics
4. Meeting Critique
Process: 1. Select; 2. Review; 3. Decide; 4. Discuss. Leaders: 1. Andersen; 2. Keys; 3. All; 4. All.

3:45 - 4:00 p.m. -- Public Input. Meeting to adjourn following public input.

FAQ LOG 10/07

Temp No. | PI   | Topic                                        | Status                                                                             | Plant/Co.
70.0     | MSPI | Blown Fuse on Diesel                         | 06/13 Introduced; 07/18 Discussed; 08/22 Discussed; 09/19 On hold pending Kewaunee | Ft. Calhoun
71.0     | IE01 | Chemistry Excursion                          | 07/18 Introduced and Discussed; 08/22 Discussed; 09/19 Discussed                   | Duane Arnold
71.1     | IE03 | Environmental Condition Downpower            | 07/18 Introduced and Discussed; 08/22 Discussed; 09/19 Tentative Approval          | FitzPatrick
73.0     | MSPI | Changes to CDE for Basis Document Parameters | 09/19 Introduced and Tentative Approval                                            | Generic
74.0     | MSPI | PRA Model Revision                           | 10/18 Introduced and discussed                                                     | LaSalle

FAQs on Appeal:

Temp No. | PI   | Topic              | Status                              | Plant/Co.
69.2     | MSPI | Fuel Oil Line Leak | Final Decision Issued and attached. | Kewaunee

FAQ 70.0

Plant: Fort Calhoun Station
Date of Event: July 21, 2004
Submittal Date: May 24, 2007
Licensee Contact: Gary R. Cavanaugh, Tel/email: 402-533-6913 / gcavanaugh@oppd.com
NRC Contact: L. M. Willoughby, Tel/email: 402-533-6613 / lmw1@nrc.gov
Performance Indicator: MSPI
Site-Specific FAQ (Appendix D)? No
FAQ requested to become effective when approved.

Question Section

NEI 99-02 guidance needing interpretation (include page and line citation):

Clarification of the guidance is requested for time of discovery. Is time of discovery when the licensee first had the opportunity to determine that the component cannot perform its monitored function, or when the licensee completes a cause determination and concludes the component would not have performed its monitored function at some earlier time, as in the situation described in the event section below?

Page F-5, Section F 1.2.1, lines 19-21:

Fault exposure hours are not included; unavailable hours are counted only for the time required to recover the train's monitored functions. In all cases, a train that is considered to be OPERABLE is also considered to be available.

Page F-22, Section F 2.2.2, lines 18-19:

Unplanned unavailability would accrue in all instances from the time of discovery or annunciation consistent with the definition in section F 1.2.1.

Page F-5, Section F 1.2.1, lines 34-40:

Unplanned unavailable hours: These hours include elapsed time between the discovery and the restoration to service of an equipment failure or human error (such as a misalignment) that makes the train unavailable. Unavailable hours to correct discovered conditions that render a monitored component incapable of performing its monitored function are counted as unplanned unavailable hours. An example of this is a condition discovered by an operator on rounds, such as an obvious oil leak, that resulted in the equipment being non-functional even though no demand or failure actually occurred.
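Read together, these passages make unplanned unavailable hours run from discovery (or annunciation) to restoration to service, while hours before discovery are fault exposure and are excluded. A minimal sketch of that accounting follows; the clock times are illustrative placeholders, not values from the FAQ:

    from datetime import datetime

    def unplanned_unavailable_hours(discovery: datetime, restored: datetime) -> float:
        """Unplanned unavailable hours per the NEI 99-02 definition quoted above:
        elapsed time from discovery of the condition to restoration to service.
        Hours before discovery are fault exposure and are not counted."""
        return (restored - discovery).total_seconds() / 3600.0

    # The dispute in this FAQ is entirely over which timestamp is "discovery."
    disc_licensee = datetime(2004, 8, 18, 9, 0)   # failed surveillance test (illustrative time)
    disc_region   = datetime(2004, 7, 21, 17, 0)  # computer alarms during shutdown (illustrative time)
    restored      = datetime(2004, 8, 18, 12, 0)  # fuse replaced, DG-2 restored (illustrative time)
    print(unplanned_unavailable_hours(disc_licensee, restored))  # about 3 hours
    print(unplanned_unavailable_hours(disc_region, restored))    # about 667 hours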

Event or circumstances requiring guidance interpretation:

On October 19, 2004, while reviewing detailed plant computer data related to the operation of Emergency Diesel Generator Number 2 (DG-2), Fort Calhoun Station (FCS) discovered that DG-2 had been inoperable for 29 days beginning on July 21, 2004. On August 18, 2004, when DG-2 was started for the next monthly surveillance test, DG-2 started but failed to achieve proper voltage and frequency. At that time, DG-2 was declared inoperable and troubleshooting commenced; three hours later, following a fuse replacement, DG-2 was declared operable.

Data obtained from the FCS control room computer subsequently confirmed that the condition occurred as the operators were performing engine unloading and shutdown during completion of the monthly surveillance test (Attachment 1) on July 21, 2004. Attachment 2 contains highlighted sections of a printout, attached to the July 21, 2004 surveillance test, for clarification. As DG-2 was being shut down following the successful surveillance test, the control room staff received numerous expected alarms. The alarms in question are plant computer alarms, not tiled annunciator alarms. Since the alarms were expected as part of unloading and shutting down DG-2, they were acknowledged and treated as a normal system response.

The earliest opportunity for the discovery of the failed fuse condition was upon receipt of the plant computer alarms for DG-2 low output frequency and low output voltage which occurred following the opening of the DG-2 output breaker.

When attempting to complete the next monthly surveillance test in August 2004, DG-2 started but failed to achieve proper voltage and frequency. At that time, DG-2 was declared inoperable, trouble shooting commenced, and three hours later DG-2 was declared operable following fuse replacement. In an effort to determine unavailability hours for reporting of the Emergency AC Power MSPI, FCS determined that the unavailability began on August 18, 2004 when DG-2 was started for the next monthly surveillance.

If licensee and NRC resident/region do not agree on the facts and circumstances explain Issue #1:

In the opening lines of the FAQ, the licensee references NEI 99-02, page F-5, lines 19-21, which states: "Fault exposure hours are not included; unavailable hours are counted only for the time required to recover the train's monitored functions. In all cases, a train that is considered to be OPERABLE is also considered to be available."

The licensee further references page F-5, lines 34-40, stating: "Unplanned unavailable hours: These hours include elapsed time between the discovery and the restoration to service of an equipment failure or human error (such as a misalignment) that makes the train unavailable. Unavailable hours to correct discovered conditions that render a monitored component incapable of performing its monitored function are counted as unplanned unavailable hours. An example of this is a condition discovered by an operator on rounds, such as an obvious oil leak, that resulted in the equipment being non-functional even though no demand or failure actually occurred."

As described in NRC Inspection Report 05000285/2005010, Emergency Diesel Generator 2 was both inoperable and unavailable from July 21, 2004, until August 19, 2004. The inspection report also explained why discovery of the condition should reasonably have occurred on July 21, 2004:

After a review of this event, the inspectors noted that the licensee had several opportunities to promptly identify the degraded voltage condition that affected the safety function of Emergency Diesel Generator 2. These opportunities included:

- The failure to recognize that the alarm for low emergency diesel generator output voltage was indicative of a degraded voltage condition.

- The failure to recognize that the watt-hour meter turning off when emergency voltage goes below the watt-hour trigger setpoint was indicative of a degraded voltage condition.

- The failure to recognize that the emergency diesel generator output voltage meter indications reading approximately half their normal value was indicative of a degraded voltage condition.

- The failure to recognize that data obtained during surveillance Operating Procedure OP-ST-DG-0002, performed on July 21, 2004, showed the emergency diesel generator output voltage decreasing to approximately 2200 volts, indicative of a degraded voltage condition. This surveillance procedure was reviewed and determined satisfactory by three operations personnel and the system engineer.

Based on the multiple opportunities to identify this condition, the Resident Inspectors/Regional staff believe the conditions mentioned above would be indicative of an obvious condition, similar to the leaking oil condition example above. Therefore, the definition of unavailable hours would be met.

Issue #2:

In the licensee's FAQ, the licensee stated on page 2, "...the control room staff received numerous expected alarms," and then went on to say, "These expected plant computer alarms were received within moments of when they normally would have occurred."

Please refer to the four bullets listed above. The control room alarms were not expected at the times that they occurred, and the significance of these conditions was neither recognized individually nor collectively by multiple licensed operators. As described in NRC Inspection Report 05000285/2005010, "...Emergency Diesel Generator 2 was operated at normal speed, unloaded, for approximately 12 minutes to cool down the turbocharger. During this time operators discussed the loss of indication on the watt-hour meter and decided to write a condition report on the discrepancy." Given that the alarms/indications were present approximately 12 minutes early, the Residents/Regional staff do not agree with the licensee's assertion that this equates to "within moments of when they normally would have occurred."

Issue #3:

In the Proposed Resolution section of the FAQ, the licensee stated: "...Although the earliest opportunity to discover the failed fuse was July 21, 2004, FCS concluded that it would have been an improbable catch for them to do so. While changes were put into place following discovery of this condition to prevent recurrence, it was determined that it would have been unreasonable to expect the control room staff to have caught this when it occurred." The licensee further stated: "...this issue was appropriately classified as discovery on August 18, 2004."

Region IV personnel believe that it was reasonable, as documented in the previous sections and in the inspection report, for the control room staff to have caught this when it occurred.

Issue #4:

In the licensee's FAQ, the licensee stated: "...the Significance Determination Process (SDP) was used to characterize the risk of the event and this process evaluated the fault exposure period to determine that risk."

Once a performance deficiency is identified, the SDP assesses the risk of a condition (i.e., how significant it is during the time that equipment was unable to perform its function), irrespective of whether the time is classified as fault exposure or unavailability hours. Region IV personnel consider that one of the salient aspects of the PI, an indicator of performance, is to identify both unavailability and fault exposure hours. The staff considers this period to be unavailability in regard to the PI.

Issue #5:

The licensee has considered the failure of DG-2 as a Failure-to-Load on August 19, 2004 in their calculations.

The Region IV staff considers that this should be counted as a Failure-to-Run (FTR) on July 21, 2004, instead of a Failure-to-Load. Per the NEI guidance, Failure-to-Load items are those that prevent the engine from starting or running for an hour. The fuse failure occurred after the engine had run successfully for greater than one hour. While the type of failure does not directly affect the subject of this FAQ (calculation of hours for the PI), erroneous failure classifications could be misleading if they are considered with any subsequent failures.

Summary:

In summary, the licensee stated that "...unavailability should accrue on August 18, 2004 when the failure occurred." The licensee believes that the duration between July 21 and August 19 should be counted as Fault Exposure Hours. However, Region IV staff does not agree with this position. The licensee had ample opportunity to identify and correct this condition, as was stated in a previously cited 10 CFR Part 50, Appendix B, Criterion XVI violation. Region IV staff believes the duration that DG-2 was non-functional should be counted as Unavailability Hours.

Potentially relevant existing FAQ numbers: None

Response Section

Proposed Resolution of FAQ:

Although the earliest opportunity to discover the failed fuse was July 21, 2004, FCS concluded that it would have been an improbable catch for them to do so. While changes were put into place following discovery of this condition to prevent recurrence, it was determined that it would have been unreasonable to expect the control room staff to have caught this when it occurred.

In a strict determination of the unavailability, one would have to conclude that since an annunciation occurred, it should have been caught by the control room staff (i.e., time of discovery). However, when presented with the facts surrounding this case, FCS concludes that this issue was appropriately classified as discovery on August 18, 2004.

FCS has reviewed NEI 99-02, Revision 4 guidance and determined that in MSPI, unavailable hours are counted only for the time required to recover the train's monitored functions.

Therefore, the time of discovery for the purposes of assigning unavailable hours starts from the time the diesel was declared inoperable on August 18, 2004. Unavailability, prior to the determination that the failure affected the ability of the diesel to perform its monitored function, is actually fault exposure, which is not included in the MSPI unavailability calculation. Since performance deficiencies were noted for this event, the Significance Determination Process (SDP) was used to characterize the risk of the event and this process evaluated the fault exposure period to determine that risk.

The information provided in lines 18-19 on page F-22 of Section F 2.2.2, "Unplanned unavailability would accrue in all instances from the time of discovery or annunciation consistent with the definition in section F 1.2.1," might be misunderstood to imply that any alarm originating in the control room would indicate that monitored equipment is obviously inoperable.

In this instance the control room annunciation was from a computer-monitored point and indicated DG-2 Low Output Frequency and Low Output Voltage, as expected.

Consistent with the definition in Section F 1.2.1 (page F-5, lines 20 and 21), "In all cases, a train that is considered to be OPERABLE is also considered to be available." Therefore, the unavailability should accrue on August 18, 2004, when the failure occurred.

If appropriate, provide proposed rewording of guidance for inclusion in next revision.

N/A

[Attachments to FAQ 70.0, Fort Calhoun Station June 2007 FAQ: relevant pages of the July 2004 EDG-2 surveillance test.]

FAQ 71.0

Plant: Duane Arnold Energy Center
Date of Event: 3/18/07
Submittal Date: 6/07/07
Licensee Contact: Robert Murrell, Tel/email: 319-851-7900 / bob_murrell@fpl.com
NRC Contact: Tel/email:
Performance Indicator: Unplanned Scrams per 7000 Critical Hours
Site-Specific FAQ (Appendix D)? No
FAQ requested to become effective: From the time of the event (3/18/07).

Question Section

NEI guidance needing interpretation (include page and line citation):

NEI 99-02, R4, pages 10 and 11; specifically page 10, lines 11-12, and page 11, line 2 together with lines 5 and 11.

Page 10, lines 11-12: "Unplanned scram means that the scram was not an intentional part of a planned evolution or test as directed by a normal operating or test procedure."

Page 11, lines 13-15 [line 2: "Examples of scrams that are not included:"]: "Plant shutdown to comply with technical specification LCOs, if conducted in accordance with normal shutdown procedures which include a manual scram to complete the shutdown."

Page 11, line 5 [line 2: "Examples of scrams that are not included:"]: "scrams that are part of a normal planned operation or evolution."
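For context, the indicator at issue here (IE01) normalizes the count of unplanned scrams to 7,000 hours of critical operation over the monitoring window. A minimal sketch of that normalization, assuming the standard NEI 99-02 rolling four-quarter window:

    def scrams_per_7000_critical_hours(unplanned_scrams: int, critical_hours: float) -> float:
        """IE01 indicator value: unplanned scrams normalized to 7,000 critical hours.
        Inputs are totals over the monitoring window (previous four quarters)."""
        if critical_hours <= 0:
            raise ValueError("critical hours must be positive")
        return unplanned_scrams * 7000.0 / critical_hours

    # Whether the March 18, 2007 scram is "unplanned" decides whether the numerator
    # increments; e.g., 1 scram over 7,884 critical hours gives a value of about 0.89.
    print(scrams_per_7000_critical_hours(1, 7884.0))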

Events or Circumstances requiring guidance interpretation:

Duane Arnold experienced a reactor water chemistry excursion (increasing conductivity readings while performing condensate demineralizer manipulations) at approximately 1630 on March 18, 2007. This excursion occurred with the plant operating at ~34% power during a post-refueling-outage startup. By 1630, the conductivity level had quickly surpassed the Technical Requirements Manual (TRM) limits of >1 and >5 µmho/cm. This resulted in actions being initiated as required by the TRM for restoring the limits immediately and analyzing a sample within 8 hours. At the time, conductivity was >10. The plant also entered the TRM requirement to be in Mode 3 within 12 hours and in Mode 4 within 36 hours as a result of the out-of-specification chemistry parameters.

In addition to the TRM requirements, the plant also entered Plant Chemistry Procedure (PCP) 1.9, Water Chemistry Guidelines. PCP 1.9 also requires an immediate shutdown. PCP 1.9, Attachment 4, requires that conductivity >1.0 be monitored for compliance with TRM 3.4 (above), and conductivity >5.0 requires a plant-specific analysis within four hours indicating whether plant shutdown or continued at-power operation is the most prudent approach with regard to IGSCC and fuel damage, or default to an orderly shutdown of the plant as the most prudent approach. Per Section 4.3, if conductivity is restored to <1.0, the shutdown may be stopped. (A value between 1.0 and 5.0 requires an orderly shutdown after 24 hours per PCP 4.2.3.) If the value remains above 5.0 for six hours, an orderly shutdown is to be initiated. The basis for the conductivity limits in PCP 1.9 is to prevent IGSCC and maintain fuel performance and radiation field buildup at optimal levels.

At 1649, the plant entered Abnormal Operating Procedure (AOP) 639, Reactor Water/Condensate High Conductivity. AOP 639, steps 13 and 15, requires compliance with PCP 1.9 when conductivity exceeds 1.0, and with TRM 3.4 and PCP 1.9 when conductivity exceeds 5.0.

During this event, the conductivity limits that would require the plant to insert a manual scram or commence a fast power reduction were never met.

As a result of the TRM and PCP LCOs, the plant commenced a shutdown in accordance with Integrated Plant Operating Instruction (IPOI) 4, Plant Shutdown, Section 6.0, Fast Power Reduction. This IPOI consolidates information for a safe and efficient shutdown from 35% power operation to cold shutdown or other shutdown conditions, and is not an AOP.

At 1940, after completion of the steps up to inserting a manual scram, a manual scram was inserted. This action was accomplished after careful review of the condition; senior plant management determined that the prudent course of action was to bring the plant to cold shutdown in a controlled and prompt manner to reduce the potential adverse effects of the chemistry excursion on the plant. The decision to shut down was driven by internal plant chemistry guidelines and the TRM. The directed plant shutdown was performed in accordance with Integrated Plant Operating Instruction (IPOI) 4, Shutdown, which includes separate sections for a plant shutdown with slow power reduction and for a plant shutdown with a fast power reduction. Plant management elected to utilize the plant shutdown with a fast power reduction to minimize the potential adverse consequences from the chemistry excursion. The IPOI 4 fast power reduction instructions include the initiation of a manual scram, as is the typical practice for plant shutdowns at Duane Arnold. Prior to initiating the manual scram, the IPOI requires that recirculation flow be set to minimum, and recommends that the non-essential 4160 VAC buses be transferred to a different power supply and the Intermediate Range Monitors be inserted into the core. These steps were completed, as were all others required by the procedure, prior to initiating the scram. Ultimately, the scram was initiated with reactor power below 30%. IPOI 4 is the standard procedure that would be utilized to conduct such a plant shutdown.

The guidance provided in NEI 99-02, Revision 4, clearly supports the March 18, 2007 scram not being considered an unplanned scram. On page 10, lines 11 and 12, the guidance defines an unplanned scram: "Unplanned scram means that the scram was not an intentional part of a planned evolution or test as directed by a normal operating or test procedure." The March 18, 2007 scram was clearly part of the normal Duane Arnold shutdown, and the scram was initiated in accordance with Integrated Plant Operating Instruction (IPOI) 4, "Shutdown." On page 11, line 5, the guidance excludes "scrams that are part of a normal planned operation or evolution." The March 18, 2007 shutdown was clearly a planned evolution that was proactively directed by plant management to minimize any potential adverse effects from the chemistry excursion.

On page 11, line 11, the guidance excludes "Scrams that occur as part of the normal sequence of a planned shutdown." As stated above, the March 18, 2007 shutdown was clearly a planned evolution that was proactively directed by plant management to minimize any potential adverse effects from the chemistry excursion. Specifically, the shutdown was driven by the plant's TRM, not by the plant AOP. However, the scram would be considered a planned scram, and the event and its effects counted instead within the Unplanned Power Changes indicator. (See NEI 99-02, R4, pages 9-11 and 18.)

A review of operator logs from 2001 to present indicates that the fast power reduction section of the IPOI has been used approximately six times, with a shutdown being taken to completion in one occurrence, on November 2, 2003. It should be noted that IPOI 4 has contained the fast power reduction instructions for over 15 years.

The NRC Resident does not agree with the Duane Arnold position, as he considers the fast power reduction section of Integrated Plant Operating Instruction (IPOI) 4, "Shutdown," to be an abnormal section of a normal procedure and therefore concludes the scram should count as unplanned.

Is it the correct interpretation that the above event should not be considered an unplanned scram with respect to the NRC indicator?

Potentially relevant existing FAQ numbers:

Archived guidance FAQ 159, dated 4/1/2000, and FAQ 5, dated 11/11/1999, also support the conclusion that the event would not be considered an unplanned scram with respect to the NRC indicator.

FAQ 159 Posting Date 4/1/2000 Question: With the Unit in Operational Condition 2 (Startup) a shutdown was ordered due to an insufficient number of operable Intermediate Range Monitors (IRM). The reactor was critical at 0% power. B and D IRM detectors failed, and a plant shutdown was ordered. The manual scram was inserted in accordance with the normal shutdown procedure. Should this count as an unplanned reactor scram?

Response: No. If part of a normal shutdown (the plant was following the normal shutdown procedure), the scram would not count.

The response to FAQ 159 directly applies to the March 18, 2007 shutdown, as the plant was following the normal shutdown procedure, IPOI 4, "Shutdown."

ID: 5, Posting Date 11/11/1999. Question: The clarifying notes for the Unplanned Scrams per 7000 Hours PI state that scrams that are included are "scrams that resulted from unplanned transients" and "a scram that is initiated to avoid exceeding a technical specification action statement time limit," and that scrams that are not included are "scrams that are part of a normal planned operation or evolution" and "scrams that occur as part of the normal sequence of a planned shutdown." If a licensee enters an LCO requiring the plant to be in Mode 2 within 7 hours, applies a standing operational procedure for assuring the LCO is met, and a manual scram is executed in accordance with that procedure, is this event counted as an unplanned scram?

Response: If the plant shutdown to comply with the Technical Specification LCO was conducted in accordance with the normal plant shutdown procedure, which includes a manual scram to complete the shutdown, the scram would not be counted as an unplanned scram. However, the power reduction would be counted as an unplanned transient (assuming the shutdown resulted in a power change greater than 20%). However, if the actions to meet the Technical Specification LCO required a manual scram outside of the normal plant shutdown procedure, then the scram would be counted as an unplanned scram.

Although Duane Arnold was not in a Technical Specification LCO, the shutdown was conducted in accordance with the normal plant shutdown procedure IPOI 4, "Shutdown" and the response to FAQ 5 directly supports the Duane Arnold position.

Response Section

Proposed resolution of FAQ:

The scram was conducted in accordance with normal shutdown procedures, as part of a planned evolution to respond to a condition and to minimize any potential adverse effects on the plant. It was not an unplanned scram and should not be counted against the Unplanned Scrams per 7000 Critical Hours performance indicator.

FAQ 71.1

Plant: James A. FitzPatrick Nuclear Power Plant
Date of Event: 04/02/07
Submittal Date:
Licensee Contact: Gene Dorman, Tel/email: (315) 349-6810 / edorman@entergy.com
Licensee Contact: Jim Costedio, Tel/email: (315) 349-6358 / jcosted@entergy.com
NRC Contact: Gordon Hunegs, Tel/email: (315) 349-6667 / gkh@nrc.gov
Performance Indicator: Unplanned Power Changes Per 7,000 Critical Hours
Site-Specific FAQ (Appendix D)? Yes
FAQ requested to become effective when approved.

Question Section:

NEI 99-02, Rev. 5, guidance needing interpretation (include page and line citation):

Unplanned Power Changes Per 7,000 Critical Hours, beginning at the bottom of page 14 at line 42 and continuing on to the top of page 15 through line 4, the guidance document states:

"Anticipated power changes greater than 20% in response to expected environmental problems (such as accumulation of marine debris, biological contaminants, or frazil icing) which are proceduralized but cannot be predicted greater than 72 hours in advance may not need to be counted unless they are reactive to the sudden discovery of off-normal conditions. However, unique environmental conditions which have not been previously experienced and could not have been anticipated and mitigated by procedure or plant modification, may not count, even if they are reactive. The licensee is expected to take reasonable steps to prevent intrusion of marine or other biological growth from causing power reductions. Intrusion events that can be anticipated as a part of a maintenance activity or as part of a predictable cyclic behavior would normally be counted unless the down power was planned 72 hours in advance. The circumstances of each situation are different and should be identified to the NRC in a FAQ so that a determination can be made concerning whether the power change should be counted."
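The exception logic quoted above can be read as a small decision procedure. The sketch below is one illustrative reading; the predicate names are hypothetical, not from NEI 99-02, and the guidance itself requires a plant-specific FAQ determination:

    # Sketch of the NEI 99-02 environmental-exception logic for the Unplanned
    # Power Changes PI. All predicate names are hypothetical.
    def downpower_counts(
        anticipated: bool,              # expected environmental problem (e.g., marine debris)?
        proceduralized: bool,           # response covered by existing procedures?
        predictable_72h: bool,          # could the need be predicted >72 hours in advance?
        reactive_to_sudden_discovery: bool,
        unique_unanticipated_condition: bool,
    ) -> bool:
        """Return True if the >20% downpower would normally count against the PI."""
        if unique_unanticipated_condition:
            return False  # unique, unforeseeable conditions may not count even if reactive
        if anticipated and proceduralized and not predictable_72h:
            # may not need to be counted unless reactive to sudden discovery
            return reactive_to_sudden_discovery
        return True  # default: the power change counts

    # FitzPatrick's position on the April 3, 2007 downpower, in these terms:
    print(downpower_counts(True, True, False, False, False))  # False (does not count)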

Event or circumstances requiring guidance interpretation:

On March 2, 2007, the Operations Department initiated a condition report (CR-JAF-2007-00841) identifying that the differential temperature across the B1 waterbox had risen approximately 9°F since the February 6, 2007 defish evolution, and that higher waterbox differential pressure on the B2 waterbox and rising backpressure in the B condenser indicated that there was some fouling of the waterboxes. It is notable that the configuration of the supply piping to the waterboxes causes the B1 waterbox to more readily collect debris than the others. On March 16, 2007, Engineering notified Planning and Scheduling that the waterboxes would have to be cleaned to restore performance. Based on the parameters (waterbox Delta T, Delta P, condenser backpressure, CWS flow, and CWS pump amps) and available trend information, it was determined that the cleaning could be performed during a scheduled May 2007 downpower, and On-Line Emergent Work Addition Approval Form EN-WM-101 was submitted to add Work Order 51102525 to the downpower schedule.

During the last week of March, increased turbulence in the lake was observed with the passing of storms and melt-off of the winter snow pack. When the condition of the lake was identified, the traveling screens were placed in continuous operation. While continuous operation of the screens is effective in removing large material, the screens are not fine enough to prevent the entry of smaller debris such as zebra mussel shells. On Saturday, March 31, 2007, at 2030, Operations noted that the B condenser Delta T had risen 13°F in a three-hour period. Review of historical trend information showed the plant has never experienced such a rapid change in condenser Delta T. A condition report (CR-JAF-2007-01273) was entered into the corrective action program. On Sunday, April 1, 2007, at approximately 0130, Engineering determined that the observed degradation was consistent with condenser fouling, likely caused by the disturbances on the lake transporting additional marine debris into the condenser waterboxes. Temperatures and pressures stabilized such that no operational limits were exceeded.

On Monday April 2, 2007, after review of the data, the decision was made to perform a downpower of approximately 25% to support defishing of the B1 and B2 condenser waterboxes, rather than wait until the scheduled May downpower. Power was reduced on April 3, 2007 at 0240.

The defishing evolution is included in the Circulating Water System Operating Procedure (OP-4).

The evolution was evaluated using the online risk model and the impact on the work week was assessed. Since the plant parameters were stable and within operational limits, the plant could have waited an additional 18 hours to meet the 72-hour criterion, but chose to make a conservative decision to reduce power and defish. The defishing evolution was conducted using the same procedures and guidance used during the February defishing evolution.

In summary, JAF believes that the downpower on April 3, 2007, was caused by an environmental problem that could not have been predicted greater than 72 hours in advance, that actions to address the problem had been previously proceduralized and did not require 72 hours to plan, and that the downpower was not performed due to a sudden discovery. The downpower on April 3, 2007, should not be counted against the performance indicator.

As noted above, NEI 99-02, Revision 5, in discussing downpowers that are initiated in response to environmental conditions, states: "The circumstances of each situation are different and should be identified to the NRC in a FAQ so that a determination can be made concerning whether the power change should be counted."

Does the transient meet the conditions for the environmental exception to reporting Unplanned Power Changes of greater than 20% RTP? Yes, the transient meets the conditions for an environmental exception and should not count against the performance indicator.

If licensee and NRC resident/region do not agree on the facts and circumstances explain:

This has been reviewed with the Senior Resident and there is no disagreement with regard to the facts as presented.

Potentially relevant existing FAQ numbers: 158, 244, 294, 304, 306, 383, 420, 421

Response Section:

Proposed Resolution of FAQ:

Yes, the downpower was caused by environmental conditions, beyond the control of the licensee, which could not be predicted greater than 72 hours in advance. The licensee had taken the available measures to minimize the impact of the environmental conditions, and the downpower should not count toward the performance indicator.

If appropriate, provide proposed rewording of guidance for inclusion in next revision:

None required

FAQ 73.0 - Proposed MSPI Guidance Change: Changes to CDE for Basis Document Parameters

Plant: Generic
Date of Event: N/A
Submittal Date: September 18, 2007
Licensee Contact: Julie Keys, Tel/email: 202.739.8128 / jyk@nei.org
NRC Contact: Joe Ashcraft, Tel: 301.415.3177
Performance Indicator: MSPI
Site-Specific FAQ (Appendix D)? No
FAQ requested to become effective when approved.

Question Section

NEI 99-02 guidance needing interpretation (include page and line citation):

This FAQ proposes a guidance change to improve consistency of the guidance and allow flexibility in the timing of CDE entries made to reflect changes in site MSPI basis documents.

The current MSPI guidance (NEI 99-02, Rev 5) states the following regarding changes to baseline information:

Page 30, lines 35-40 and Page 31, lines 1-12 (regarding changes to PRA parameters):

The MSPI calculation uses coefficients that are developed from plant-specific PRAs. The PRA used to develop these coefficients should reasonably reflect the as-built, as-operated configuration of each plant. Updates to the MSPI coefficients developed from the plant-specific PRA will be made as soon as practical following an update to the plant-specific PRA. The revised coefficients will be used in the MSPI calculation the quarter following the update. Thus, the PRA coefficients in use at the beginning of a quarter will remain in effect for the remainder of that quarter. Changes to the CDE database and MSPI basis document that are necessary to reflect changes to the plant-specific PRA of record should be incorporated as soon as practical but need not be completed prior to the start of the reporting quarter in which they become effective. The quarterly data submittal should include a comment that provides a summary of any changes to the MSPI coefficients. Any PRA model changes will take effect the following quarter (model changes include error corrections, updates, etc.).

For example, if a plant's PRA model of record is approved on September 29 (3rd quarter), MSPI coefficients based on that model of record should be used for the 4th quarter. The calculation of the new coefficients should be completed (including a revision of the MSPI basis document if required by the plant-specific processes) and input to CDE prior to reporting the 4th quarter's data (i.e., completed by January 21).
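The timing rule in this example reduces to: a model approved in quarter Q yields coefficients effective in quarter Q+1. A minimal sketch of that mapping (the function name is illustrative, not part of CDE):

    import datetime

    def effective_quarter(approval_date: datetime.date) -> tuple[int, int]:
        """(year, quarter) in which revised MSPI coefficients take effect:
        the quarter following the PRA model-of-record approval date."""
        q = (approval_date.month - 1) // 3 + 1  # calendar quarter of approval
        return (approval_date.year + 1, 1) if q == 4 else (approval_date.year, q + 1)

    # The guidance example: approval on September 29 (3rd quarter) means the new
    # coefficients are used starting in the 4th quarter.
    print(effective_quarter(datetime.date(2007, 9, 29)))  # (2007, 4)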

Page F-8, line 44 and following to Page F-9, line 3 (regarding changes to baseline planned unavailability):

The baseline planned unavailability should be revised as necessary during the quarter prior to the planned maintenance evolution and then removed after twelve quarters. A comment should be placed in the comment field of the quarterly report to identify a substantial change in planned unavailability. The baseline value of planned unavailability is changed at the discretion of the licensee. Revised values will be used in the calculation the quarter following their update.

Page F-23, lines 38-40 (regarding changes in estimates of demands):

The new estimates will be used in the calculation the quarter following the input of the updated estimates into CDE.

Event or circumstances requiring guidance interpretation:

The concern is that the guidance is unnecessarily restrictive regarding CDE entry for changes in baseline planned unavailability and estimated demands, especially when compared to the guidance for PRA model changes. If a plant makes a change to its basis document for baseline planned unavailability or estimated demands, these values should not be used until the quarter following the change. However, sites should be allowed the flexibility to enter these changes into CDE during the data submittal period at the beginning of the new quarter following basis document revision. This allows the site time to make the entry into CDE. The site basis document can be easily audited to ensure that the change was approved prior to the beginning of the new quarter.

If licensee and NRC resident/region do not agree on the facts and circumstances, explain:

This issue was discussed with the NRC at the 8/22/07 ROP TF meeting, and it was agreed that it should be moved forward as an FAQ.

Potentially relevant existing FAQ numbers: None

Response Section

Proposed Resolution of FAQ / If appropriate, provide proposed rewording of guidance for inclusion in next revision:

Plant Specific PRA (Page 30, line 35 - Page 31, line 12)

The MSPI calculation uses coefficients that are developed from plant specific PRAs. The PRA used to develop these coefficients should reasonably reflect the as-built, as-operated configuration of each plant.

Specific requirements appropriate for this PRA application are defined in Appendix G.

Any questions related to the interpretation of these requirements, the use of alternate methods to meet the requirements, or the conformance of a plant-specific PRA to these requirements will be arbitrated by an Industry/NRC expert panel. If the panel determines that a plant-specific PRA does not meet the requirements of Appendix G such that the MSPI would be adversely affected, an appropriate remedy will be determined by the licensee and approved by the panel. The decisions of this panel will be binding.

Clarifying Notes (Page 32, lines 4-8)

Documentation and Changes

Each licensee will have the system boundaries, monitored components, and monitored functions and success criteria which differ from design basis readily available for NRC inspection on site. Design basis criteria do not need to be separately documented.

Additionally, plant-specific information used in Appendix F should also be readily available for inspection. An acceptable format, listing the minimum required information, is provided in Appendix G.

Changes to the site PRA of record, the site basis document, and the CDE database should be made in accordance with the following.

Changes to PRA coefficients. Updates to the MSPI coefficients developed from the plant-specific PRA will be made as soon as practical following an update to the plant-specific PRA. The revised coefficients will be used in the MSPI calculation the quarter following the update. Thus, the PRA coefficients in use at the beginning of a quarter will remain in effect for the remainder of that quarter. Changes to the CDE database and MSPI basis document that are necessary to reflect changes to the plant-specific PRA of record should be incorporated as soon as practical but need not be completed prior to the start of the reporting quarter in which they become effective. The quarterly data submittal should include a comment that provides a summary of any changes to the MSPI coefficients. Any PRA model changes will take effect the following quarter (model changes include error corrections, updates, etc.). For example, if a plant's PRA model of record is approved on September 29 (3rd quarter), MSPI coefficients based on that model of record should be used for the 4th quarter. The calculation of the new coefficients should be completed (including a revision of the MSPI basis document if required by the plant-specific processes) and input to CDE prior to reporting the 4th quarter's data (i.e., completed by January 21).

Changes to non-PRA information. Updates to information that is not directly obtained from the PRA (e.g., unavailability baseline data, estimated demands/run hours) will become effective in the quarter following an approved revision to the site MSPI basis document. Changes to the CDE database that are necessary to reflect changes to the site basis document should be incorporated as soon as practical but need not be completed prior to the start of the reporting quarter in which they become effective. The quarterly data submittal should include a comment that provides a summary of any changes to the basis document.

SECTION F 1.2.2 (PAGE F-8, LINE 44 THROUGH PAGE F-9, LINE 3)

The initial baseline planned unavailability is based on actual plant-specific values for the period 2002 through 2004. (Plant specific values of the most recent data are used so that the indicator accurately reflects deviation from expected planned maintenance.)

These values are expected to change if the plant maintenance philosophy is substantially changed with respect to on-line maintenance or preventive maintenance. In these cases, the planned unavailability baseline value should be adjusted to reflect the current maintenance practices, including low frequency maintenance evolutions.

Some significant maintenance evolutions, such as EDG overhauls, are performed at an interval greater than the three-year monitoring period (5- or 10-year intervals). The baseline planned unavailability should be revised as necessary in the basis document during the quarter prior to the planned maintenance evolution and then removed after twelve quarters. A comment should be placed in the comment field of the quarterly report to identify a substantial change in planned unavailability. The baseline value of planned unavailability is changed at the discretion of the licensee. Revised values will be used in the calculation the quarter following the basis document revision.
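As an editorial illustration of the twelve-quarter window described above (the function and quarter indices are hypothetical, not from NEI 99-02): a baseline revised in a given quarter takes effect the following quarter and is removed after twelve quarters.

    def quarters_in_effect(revision_quarter_index: int) -> range:
        """Quarters (as sequential indices, e.g., 2005Q1 = 0) during which a revised
        baseline planned-unavailability value applies: it is used starting the
        quarter after the basis document revision and removed after twelve quarters."""
        first = revision_quarter_index + 1
        return range(first, first + 12)

    # Example: a baseline revised in quarter index 10 applies for indices 11-22,
    # covering the three-year (twelve-quarter) MSPI monitoring window.
    print(list(quarters_in_effect(10)))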

FAQ 74.0

Plant: LaSalle County Station
Date of Event: September 6, 2007
Submittal Date: October 18, 2007
Licensee Contact: Steve Shields, 815-415-2811, stephen.shields@exeloncorp.com
NRC Contact: Dan Kimble, Senior Resident Inspector, LaSalle
Performance Indicator: MSPI
Site-Specific FAQ (Appendix D)? Yes
FAQ requested to become effective: When approved

Question Section

NEI 99-02 guidance needing interpretation: NEI 99-02, Rev. 5, page 30, line 38, through page 31, line 12:

The revised [PRA] coefficients will be used in the MSPI calculation the quarter following the update. Thus, the PRA coefficients in use at the beginning of a quarter will remain in effect for the remainder of that quarter. Changes to the CDE database and MSPI basis document that are necessary to reflect changes to the plant specific PRA of record should be incorporated as soon as practical but need not be completed prior to the start of the reporting quarter in which they become effective.

The quarterly data submittal should include a comment that provides a summary of any changes to the MSPI coefficients. Any PRA model changes will take effect the following quarter (model changes include error corrections, updates, etc.).

For example, if a plant's PRA model of record is approved on September 29 (3rd quarter), MSPI coefficients based on that model of record should be used for the 4th quarter. The calculation of the new coefficients should be completed (including a revision of the MSPI basis document if required by the plant-specific processes) and input to CDE prior to reporting the 4th quarter's data (i.e., completed by January 21).

Events or circumstances requiring guidance interpretation:

On September 6, 2007, LaSalle County Station identified an error in the then-current PRA model, designated as 2006B. LaSalle had implemented the 2006B model prior to the end of the second quarter 2007. In accordance with the guidance, the 2006B model was therefore effective for the third quarter 2007 MSPI data submittal, and was due to be incorporated into the MSPI Basis Document and CDE by September 30, 2007.

Additionally, this PRA modeling error existed in the 2006A PRA model, which was effective first quarter 2007 and used for the second quarter MSPI data submittal.

Because of the error in the 2006A model, the MSPI PRA values for the second quarter 2007 were in error. It should be noted that the error resulted in approximately an order of magnitude change in Birnbaum values for the Residual Heat Removal System in the non-conservative direction, and thus resulted in under-reported MSPI values. The error did not affect any MSPI thresholds for either the second quarter or third quarter data.
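To see why a Birnbaum error propagates directly into the indicator: in the NEI 99-02 Appendix F formulation, each train's unavailability contribution to MSPI is roughly the train's Birnbaum importance times the deviation of unavailability from baseline, so an order-of-magnitude understatement of Birnbaum understates that contribution by the same factor. A simplified sketch, ignoring the parallel unreliability term and other refinements of the full method (all numeric values below are hypothetical):

    # Simplified sketch of the MSPI unavailability contribution per NEI 99-02
    # Appendix F: Birnbaum importance times the deviation of train unavailability
    # from its baseline. Ignores the unreliability index and other refinements.
    def unavailability_index(birnbaum: float, ua_actual: float, ua_baseline: float) -> float:
        """Delta-CDF contribution (per year) from a train's unavailability deviation."""
        return birnbaum * (ua_actual - ua_baseline)

    b_correct, b_erroneous = 1.0e-5, 1.0e-6  # hypothetical; error understates Birnbaum 10x
    ua, ua_bl = 0.030, 0.020                 # hypothetical unavailability vs. baseline
    print(unavailability_index(b_correct, ua, ua_bl))    # 1.0e-7
    print(unavailability_index(b_erroneous, ua, ua_bl))  # 1.0e-8, under-reported by 10x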

The prior model, 2003A, was used for initial MSPI development and was in place for MSPI purposes through the end of the first quarter of 2007. No significant errors in the 2003A PRA model affecting MSPI values are known to exist.

Given that the error in the 2006A and 2006B model was non-conservative and was discovered prior to the end of the third quarter 2007, LaSalle decided to rescind the 2006A and 2006B models prior to the end of the third quarter and re-instate the 2003A model as the model of record until corrections could be made.

For the third quarter MSPI data submittal, LaSalle chose to use the 2003A model to develop MSPI data. This was done after consideration of the non-conservative nature of the error, the desire to not knowingly provide inaccurate data to the NRC, and after discussion with the site Senior Resident Inspector. A comment was placed in the data submittal identifying that an FAQ would be generated to resolve the deviation from the NEI guidance.

Despite the guidance that states that the model of record at the beginning of the quarter should be used for MSPI data reporting, LaSalle requests that this FAQ allow use of the 2003A model for MSPI data for the third quarter 2007.

If licensee and NRC resident/region do not agree on the facts and circumstances, explain:

The LaSalle Senior Resident Inspector agreed that use of a PRA model with a known non-conservative error did not seem appropriate and supports use of the 2003A model for the third quarter data.

Potentially relevant existing FAQ numbers: None

Response Section:

Proposed Resolution:

LaSalle should use the re-instated 2003A model for MSPI data for the third quarter 2007 data and not incorporate a PRA model with a known non-conservative error. This deviation is warranted given the non-conservative nature of the error and the discovery prior to the end of the quarter. Given the magnitude of the error, data for the second quarter of 2007 will also be revised. Subsequent revision of the model will be performed in accordance with current guidance.

This FAQ is approved for this situation only and does not apply to other plants or situations.

If appropriate, provide proposed rewording of guidance for inclusion in next revision.

Not applicable.

Kewaunee Power Station FAQ

Plant: Kewaunee Power Station
Date of Event: August 17, 2006
Submittal Date: March 7, 2007
Licensee Contact: Paul Miller, Tel/email: 920-388-8350 / paul.c.miller@dom.com
NRC Contact: S. C. Burton, Tel: 920-388-3156
Performance Indicator: MSPI
Site-Specific FAQ (Appendix D)? No
FAQ requested to become effective when approved. FAQ effective for 3Q07 data submittal.

Question Section

NEI 99-02 guidance needing interpretation (include page and line citation):

Clarification of the guidance is requested as to whether time of discovery is when the licensee first becomes aware that the component cannot perform its monitored function, or when the licensee completes a cause determination and concludes the component would not have performed its monitored function at some earlier time, as in the situation described in the event section below.

Lines 19-20 on page F-5 of Section F 1.2.1, in the discussion of train unavailable hours: "Fault exposure hours are not included; unavailable hours are counted only for the time required to recover the train's monitored functions."

Lines 18-19 on page F-22 of Section F 2.2.2: "Unplanned unavailability would accrue in all instances from the time of discovery or annunciation consistent with the definition in section F 1.2.1."

Lines 34-40 on page F-5 of Section F 1.2.1: "Unplanned unavailable hours: These hours include elapsed time between the discovery and the restoration to service of an equipment failure or human error (such as a misalignment) that makes the train unavailable. Unavailable hours to correct discovered conditions that render a monitored component incapable of performing its monitored function are counted as unplanned unavailable hours. An example of this is a condition discovered by an operator on rounds, such as an obvious oil leak, that resulted in the equipment being non-functional even though no demand or failure actually occurred."

Event or circumstances requiring guidance interpretation:

On June 28, 2006, a small leak (one drop per minute) was identified in a diesel generator fuel oil system. A work request was written that day to repair the leak, but no operability determination or repair was performed. On July 20, the diesel was successfully run for 2.6 hours with the leak still present. On August 17, the diesel was run for 0.35 hours, at which time it was identified that the leak had become more significant.

The diesel was shut down 1 hour after being started, and at this time the diesel was declared inoperable. The diesel was considered operable up until the time the leak became more significant on August 17. The fuel line was repaired and the diesel was returned to service on August 18.

A diesel failure was assigned in the MSPI data for 3Q06 and unplanned unavailability hours were assigned for the August 17-18, 2006, time needed to restore the diesel to service.
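To illustrate the two readings at issue, the following minimal sketch computes unplanned unavailable hours under each interpretation of "time of discovery." The timestamps are assumptions reconstructed from the event description (the FAQ gives only dates, not times of day), so the results are illustrative only.

```python
from datetime import datetime

# Hypothetical timestamps; times of day are assumed for illustration.
leak_found    = datetime(2006, 6, 28, 8, 0)   # small leak identified
declared_inop = datetime(2006, 8, 17, 10, 0)  # leak worsened; diesel declared inoperable
restored      = datetime(2006, 8, 18, 10, 0)  # fuel line repaired, diesel returned to service

def unavailable_hours(discovery, restoration):
    """Elapsed hours from discovery to restoration of the monitored function."""
    return (restoration - discovery).total_seconds() / 3600.0

# Licensee (and appeal decision) reading: discovery is the August 17 declaration.
print(unavailable_hours(declared_inop, restored))  # 24.0 hours counted toward MSPI

# Senior Resident Inspector reading: discovery is the June 28 leak identification.
# Under the appeal decision, this earlier period is fault exposure, not unavailability.
print(unavailable_hours(leak_found, restored))     # 1226.0 hours
```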

If licensee and NRC resident/region do not agree on the facts and circumstances, explain:

The Kewaunee Senior Resident Inspector believes the time of discovery should start when the original small leak on the fuel oil line was discovered on June 28, 2006. This is based on the fact that the station did not perform an operability determination (OD) when the leak was found, and that a reasonable conclusion of a proper OD at that time would have been that the EDG would not have been able to complete its monitored safety function; therefore, the unplanned unavailable hours should start in June.

Potentially relevant existing FAQ numbers: None

Response Section

Proposed Resolution of FAQ:

Kewaunee Power Station believes that in MSPI, unavailable hours are counted only for the time required to recover the train's monitored functions, and, therefore, the time of discovery for the purposes of assigning unplanned unavailable hours starts from the time the diesel was declared inoperable on August 17, 2006, and that the guidance adequately states this. Unavailability prior to the determination that the failure affected the ability of the diesel to perform its monitored function is actually fault exposure, which is not included in the MSPI unavailability calculation. Since performance deficiencies were noted for this event, the Significance Determination Process (SDP) was used to characterize the risk of the event, and this process evaluated the fault exposure period to determine that risk.

The example given on page F-5, lines 38-40 ("An example of this is a condition discovered by an operator on rounds, such as an obvious oil leak, that resulted in the equipment being non-functional even though no demand or failure actually occurred") would imply that the discovery of the oil leak in June should be the starting point for unavailability. However, the determination that the degraded condition affected the ability of the diesel to perform its monitored function was not made until some time after the failure.

APPEAL DECISION

After weighing the arguments presented by staff and industry in this FAQ, I've concluded that the MSPI "unavailability" time does not include periods of "failed discovery," such as that which occurred at Kewaunee from June 28, 2006 through August 17, 2006. I find this to be the interpretation most consistent with the definition of "unavailability" contained on page F-5 of NEI 99-02, Revision 5, and, on balance, the most appropriate way to read the guidance of NEI 99-02 in its entirety.

I recognize that the MSPI unreliability index value may under-represent conditional core damage frequency for situations in which failed discovery extends longer than a routine surveillance period. While this is less exact for the purpose of measuring system performance, it is consistent with the recognized limitation that MSPI does not capture the effect of latent defects such as design errors that are identified through analysis rather than by surveillance testing. This limitation in the MSPI is one of the factors leading to the use of both the MSPI Performance Indicator and the inspection and assessment process when evaluating regulatory response under the ROP. The ROP significance determination process is an appropriate tool for addressing the performance issues associated with failed discovery, such as occurred at Kewaunee.

FAQ effective for 3Q07 data submittal

REACTOR OVERSIGHT PROCESS
ROP Working Group Action List - Status October 2007

06-01 Unavailability
Issue: The issue of planned vs. unplanned unavailability continues to result in confusion and continuing discussion.
Task: Industry to develop and present for NRC discussion proposed recommendations to fix the unavailability indicator.
Responsible Org/Individual: NEI ROPTF
Target Date: TBD
Status: 8/07: Hold for NRC research project completion; to be closed to MSPI ASSESSMENT.

06-12 EDG White Paper
Issue: PWR Owners Group request to revisit EDG maximum mission time to use a weighted average time.
Responsible Org/Individual: NEI ROPTF, Roy Linthicum
Target Date: Oct 2007
Status: Involve Don Dube and Gerry Sowers at the appropriate time. 05/07: Still reviewing data. 07/07: This item is linked with the survey results; will review results and determine if further action is needed. 08/07: Survey confirmed there is an issue; Roy to work. To be closed to MSPI ASSESSMENT.

07-01 MSPI Data Collection
Issue: Discuss ways to make MSPI data collection more efficient.
Responsible Org/Individual: NEI ROPTF
Target Date: Oct 2007
Status: Finalize review and present in Oct. Complete.

07-05 MR Approval
Task: Obtain NRC approval of the NEI 93-01 letter to align the Maintenance Rule with the ROP.
Responsible Org/Individual: NRC, Steve Alexander
Target Date: Sept 2007
Status: 04/07: Letter issued. 05/07: NEI to follow up for approval status. 07/07: NRC to attend the August meeting and give an update. 08/07: Update received; NRC to formally respond within 30 days with suggestions for additional changes.

07-08 EP03 Clarification
Task: Clarify EP03 acceptance criteria.
Responsible Org/Individual: NEI ROPTF
Target Date: Oct 2007
Status: Draft sent to industry. Will coordinate with NRC once EP has weighed in. Complete, no change.


RIS 2007-21 Status Report on Upcoming Actions

Several actions have been started to assess the industry response to the issuance of RIS 2007-21.

A working group meeting between the NRC and NEI has been set up for November 14 to share information and determine the path forward for ensuring appropriate inspector and industry guidance with regard to a licensee's maximum thermal power limit.

The following actions have been taken:

  • Points of contact have been established between NEI and the NRC.
  • Reactor Inspection Branch has been engaged to propose any revisions to the inspector guidance as listed in IP 61709 and/or MC 0612 guidance.
  • Regional staff have been solicited to supply input for any additional guidance and to be a member of the working group.
  • Tech Spec Branch is reviewing background information to determine the basis for the maximum thermal power limit.
  • NEI is providing any additional concerns the industry has and any guidance that has been given to the industry since RIS 2007-21 was issued.

Mitigating Systems Performance Index (MSPI) Review of Operating Experience and Data

Office of Nuclear Regulatory Research (RES)
Response to User Need Request NRR-2007-003
RES/DRASP/OERA/PRB

Objectives

- Identification of trends regarding changes to risk coefficient values, identification of anomalies and outliers, and creation of a significant issues list

- Decomposition of UAI values into planned and unplanned values to determine anomalies or trends in occurrence of re-baselining

- Evaluation of selected failure data from EPIX to assess if reporting is consistent with the development of UR baseline failure rates

- Recommendations for changes, additions, or clarifications to NEI 99-02 as a result of analyses and evaluations

Scope

- Review and Analyze MSPI Results

  • Compare actual and reported results to previous expectations based on guidance in NUREG-1816, Independent Verification of the Mitigating Systems Performance Index (MSPI) Results for the Pilot Plants
  • Determine trends or anomalies in MSPI, UAI, URI, and White results
  • Tabulate reported changes to risk coefficients and relate to their timing
  • Create a significant issues list

- Analyze UAI and suggest Improvements

  • Decompose or disaggregate each UAI value into corresponding planned and unplanned UA values (see the sketch after this list)
  • Conduct analytical studies (sensitivity, trends) on changes to UAI
  • Determine (or verify data from industry) Birnbaum values for train/segment UA and evaluate thresholds for exclusion of segments
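A minimal sketch of the UAI decomposition named in the first bullet above, assuming the general NEI 99-02 form in which each train contributes its Birnbaum importance times the deviation of actual from baseline unavailability; the function name, data layout, and numbers are illustrative assumptions, not the RES implementation.

```python
def train_uai(birnbaum, ua_actual, ua_baseline):
    """Train contribution to the unavailability index (units of CDF per year)."""
    return birnbaum * (ua_actual - ua_baseline)

# Illustrative per-train data:
# (Birnbaum importance /yr, actual planned UA, actual unplanned UA,
#  baseline planned UA, baseline unplanned UA)
trains = [
    (2.0e-4, 0.010, 0.004, 0.008, 0.003),
    (1.5e-4, 0.015, 0.002, 0.012, 0.003),
]

# Decompose UAI into its planned and unplanned components.
uai_planned = sum(train_uai(b, pa, bp) for b, pa, ua, bp, bu in trains)
uai_unplanned = sum(train_uai(b, ua, bu) for b, pa, ua, bp, bu in trains)

print(uai_planned, uai_unplanned, uai_planned + uai_unplanned)
```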

Scope (continued)

- Conduct Assessment of Reported Failures

  • For selected failures (random and directed) from EPIX, determine if their classifications as MSPI or non-MSPI are consistent with the development of baseline failure rates
  • Use the above information to determine what constitutes a component failure and whether the current definition is being interpreted and applied appropriately
  • Use the above to develop a clear definition of fault exposure time and failure occurrence time, and suggest appropriate revisions to NEI 99-02

- Support for NRC Meetings and Workshops

- Training Support

DRAFT Safety Culture Survey Results Summary

It has been one year since the implementation of changes to the ROP to address safety culture. The safety culture changes became effective July 1, 2006, and initiated an 18-month implementation period. During this period, NRC and industry implementation of the changes is being monitored by the NRC and the ROPTF to identify areas requiring adjustment and/or correction.

The original direction given in SECY-05-0187 for Safety Culture program implementation was to:

Ensure that the resulting modifications to the ROP are consistent with the regulatory principles that guided the development of the ROP.

Industry has previously expressed concerns regarding the lack of a mechanism to screen out cross-cutting aspects of low significance, the subjectivity of assigning cross-cutting aspects, and the low threshold for screening greater-than-minor findings. As a result, questions have been raised as to whether the safety culture process correctly portrays safety culture concerns or is correctly positioned to predict declining performance.

NEI and the ROP Task Force issued a safety culture survey in August with the objective of determining whether the revisions continue to meet key ROP principles.

There were 30 respondents to the survey. Respondents provided both yes and no answers to the questions as well as comments. As a result of the comments and the question responses, the following issues were identified:

The overwhelming majority of respondents acknowledged that safety culture and the identification of cross-cutting issues were difficult at first and rarely identified during exit meetings; however, they also recognized that this is changing, and the majority agree that this is no longer a problem.

The majority of respondents acknowledged that the assignment of cross-cutting issues can be arbitrary and that the IMC 0612 guidance is subjective. They also noted that working with the inspectors helps to get the appropriate cross-cutting issue assigned.

Some respondents noted that the assignment of greater than minor is not always clear and that the explanation given is also not always clear.

In addition, review of the survey comments noted that for some plants cross-cutting issues are assigned all or the majority of the time a finding is issued, while for other plants it is much less frequent.

Recommendations

Work with NRC to clarify the guidance for assignment and the examples in IMC 0612.

Continue to monitor safety culture implementation to verify consistent application.

Attachment A - Questions:

1. Do NRC inspectors clearly identify which (if any) findings or violations are assigned cross-cutting aspects at the site exit meeting? Yes = 27; No=3
2. Do NRC inspectors clearly identify and explain the performance deficiency that resulted in a finding at the site exit meeting? Yes = 30; No=0
3. Do NRC inspectors identify the cross-cutting aspects (area, component and aspect) that are being assigned? Yes = 28; No=2
4. Do NRC inspectors attempt to explain their reasoning for assigning the aspect to the finding (i.e., how the aspect significantly contributed to the cause of the performance deficiency that resulted in the finding)? Yes = 28; No=2
5. If you answered No to any of the above questions (1 through 4), did you attempt to follow up with the inspector or Branch Chief for an explanation? N/A = 20; Y=10
6. When cross-cutting aspects are identified by inspectors at the exit meeting, do the subsequent inspection reports consistently report the same aspect? Yes = 28; No=1; Y/N=1 If not, please provide details (inspection report number, aspect assigned at exit meeting, etc.)
7. When cross-cutting aspects are identified by inspectors, does the final inspection report clearly identify the cross-cutting aspect(s)? Yes = 29; No=1 If not, please provide details (inspection report number, aspect assigned at exit meeting, etc.)
8. If a cross-cutting aspect is changed between the time of the exit meeting and the inspection report, did the inspector attempt to formally re-exit to explain the change? Yes = 22; No=4; N/A=4 If not, please provide details (inspection report number, aspect assigned at exit meeting, etc.)
9. Has your site implemented the use of any tracking mechanism to continually brief management of the status of numbers of findings with cross-cutting aspects as a result of the ROP changes in July 2006?

Yes = 29; No=1

10. Did your site implement changes to its Corrective Action Program to preclude a Substantive Cross-Cutting Issue? Yes = 16; No=13; Y/N=1 If so, please briefly describe the changes.
11. As a result of the changes to the ROP and the increased importance of cross-cutting aspects, has your plant changed its practices regarding acceptance of NRC findings? Yes = 7; No=17; N/A=6 If so, please briefly describe the changes
12. Are you more likely to challenge the characterization of a green finding (instead of characterization as a minor finding) because of the ability of NRC to assign a cross-cutting aspect to a green finding? Yes = 20; No=10

13. Has your plant disagreed and attempted to push back on the assignment of a cross-cutting aspect to a finding? Yes = 23; No=7

If so, please briefly describe the disagreement. Did you think that no cross-cutting aspect was appropriate, or did you believe that a different cross-cutting aspect should have been assigned?

If you believed that a different cross-cutting aspect should have been assigned, did the inspectors substantially use the plant's causal determination for the finding or did they develop their own cause?

Did your push-back result in a better understanding of or agreement with the cross-cutting aspect assignment? Did NRC change the assignment based on your additional information? Were you satisfied with the end result?

14. Have you experienced any instance in which you believe NRC inappropriately characterized a finding with minor significance as being greater-than-minor so that a cross-cutting aspect could be assigned?

Yes = 3; No=26; Y/N=1

15. Have you experienced any instance in which you believe NRC inappropriately characterized a finding as NRC-identified or Self-Revealing when it should have been Licensee Identified so that a cross-cutting aspect could be assigned? Yes = 1; No=29
16. In assigning cross-cutting aspects to findings, is the NRC adhering to the guidance in IMC 0612 (reproduced below)? If not, please provide specific examples.

The finding is evaluated as more than minor (note: cross-cutting aspect of the finding shall not be used to determine whether the finding is greater than minor)

Cross-cutting aspect was a significant contributor to the inspection finding.

The cross-cutting aspect of the inspection finding is reflective of current licensee performance.

Cause of the finding is related to one of the three cross-cutting areas (Problem Identification and Resolution, Human Performance, or Safety-Conscious Work Environment).

Yes = 26; No=4

17. Is the NRC following the guidance regarding minor findings in IMC 0612 (reproduced below)? If not, please provide specific examples.

Review the list of sample minor findings listed in Appendix E.

If the finding is similar to the samples listed as being minor, then the finding should not be documented. If the finding is similar to the samples as being greater than minor, then describe the set of conditions that make the finding greater than minor (e.g., the associated cornerstone attribute and how the objective was affected).

If the examples in Appendix E are not applicable, then answer the minor questions in Appendix B, Section 3. If the answer to any of the minor questions is Yes, then go to section 05.04 of this chapter to determine its safety significance. Also, describe the set of conditions that make the finding greater than minor (e.g., the associated cornerstone attribute and how the objective was affected).

If the answer to all of the minor questions is No, then do not document the finding. See exception in text box noted below.

Yes = 25; No=4; Undetermined=1

18. Is the NRC appropriately applying the guidance in IMC 0305 for determining whether a substantive cross-cutting issue exists (reproduced below)? If not, provide specific examples.

A substantive cross-cutting issue in the problem identification and resolution or human performance cross-cutting areas would exist if all of the following three criteria are met:

1. There are more than 3 green or safety significant inspection findings in the PIM for the current 12-month assessment period with documented cross-cutting aspects in the areas of human performance or problem identification and resolution.

Observations or violations that are not findings should not be considered in this determination.

2. There is a cross-cutting theme. The findings should be from more than one cornerstone. However, it is recognized that, given the significant inspection effort applied to the mitigating systems cornerstone, a substantive cross-cutting issue may be observed through inspection findings associated with only this one cornerstone.

3. The Agency has a concern with the licensee's scope of efforts or progress in addressing the cross-cutting theme. In evaluating whether this criterion is met, the regional offices should consider if any of the following situations exist:

The licensee had not identified or recognized that the cross-cutting theme affected other areas and had not taken any actions to address it.

The licensee recognized that the cross-cutting theme affected other areas but failed to schedule or take appropriate corrective action.

The licensee recognized that the cross-cutting theme affected other areas but waited too long in taking corrective actions.

The licensee has implemented a range of actions to address the cross-cutting theme; however, these actions have not yet proven effective in substantially mitigating the cross-cutting theme.

Yes = 29; No=1
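As an illustration of the three-part IMC 0305 test quoted in question 18, the sketch below encodes it as a boolean check. The function and parameter names are assumptions for illustration only; this is not an NRC tool or data structure.

```python
def substantive_cross_cutting_issue(findings_with_aspects: int,
                                    cross_cutting_theme: bool,
                                    agency_concern: bool) -> bool:
    """IMC 0305 test: all three criteria must be met.

    findings_with_aspects -- findings in the PIM for the current 12-month
    assessment period with documented cross-cutting aspects (criterion 1
    requires more than 3).
    """
    return findings_with_aspects > 3 and cross_cutting_theme and agency_concern

print(substantive_cross_cutting_issue(4, True, True))   # True
print(substantive_cross_cutting_issue(4, True, False))  # False: no agency concern
```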

Reactor Oversight Process (ROP) Task Force
Post-Implementation Assessment
Mitigating Systems Performance Index (MSPI)

October 2007
ROP Task Force

Post-Implementation Assessment of MSPI

Table of Contents

1. Executive Summary
2. Assessment Method
3. Summary of Industry Survey Results
4. Summary of Data Review Results
5. Recommendations
6. Implementation Good Practices
Appendix A - Complete Set of Comments from Industry Survey
Appendix B - ROP Task Force Members


1. Executive Summary

In April 2006, the NRC and the nuclear industry implemented the Mitigating Systems Performance Index (MSPI) as a new performance indicator in the NRC's Reactor Oversight Process (ROP). MSPI monitors the performance of selected plant systems based on their ability to perform key risk-significant functions. MSPI results are determined using system performance data and plant-specific probabilistic risk assessments (PRAs); as such, MSPI is the first truly risk-informed performance indicator in the ROP.

Following implementation of MSPI, the industry ROP Task Force initiated an industry-wide assessment of the indicator, with the following objectives:

1. Determine the degree to which the anticipated benefits of MSPI have been realized.
2. Review industry MSPI source data and results to better understand the key performance drivers and identify any inconsistencies in implementation.
3. Identify improvements to the indicator design and written guidance that would improve indicator effectiveness and implementation.
4. Identify any industry best practices in collecting and evaluating data or preparing personnel to support the indicator.

The assessment consisted of an industry survey and a review of industry MSPI source data and results. 44 of 65 US nuclear plant sites responded to the survey, and the survey results were used to evaluate Objectives 1, 3, and 4. The source data review was conducted by a four-member panel highly knowledgeable of MSPI design and expected results; the results of the data review were used to evaluate Objective 2. As a result of the data review, 46 plants were asked to review portions of their MSPI data to ensure accuracy. Each of the 46 plants contacted regarding potential outliers responded to the assessment team's inquiry. Based on the responses, approximately 25% of these plants identified the need to revise MSPI data. Any errors identified were addressed as part of the plant corrective action process.

The major findings of the assessment are as follows:

Objective 1

The anticipated benefits of MSPI have been largely realized. The vast majority (74%) of the survey respondents agree that MSPI is an improved measure of safety system performance over the previous indicator (SSU). This is due to the addition of system reliability as part of the indicator, the use of plant-specific, risk-informed inputs, the removal of fault exposure concepts, and the elimination of support system unavailability.

MSPI has resulted in an increased focus on safety system reliability. 15% of plants responding report improvements in preventive maintenance to ensure safety system reliability. 12% of plants responding have implemented or are planning system modifications to reduce the risk significance of potential failures and improve system reliability. These modifications include installation of system cross-tie capability, addition of backup power, component replacement, procedure improvements, and component upgrade/redesign. These results are significant, as they indicate that actual safety improvements have occurred in response to the introduction of a risk-informed performance indicator. More generally, the results demonstrate the positive safety impact that can arise from collaborative efforts between the NRC and industry in initiating risk-informed improvements to the ROP.

As an additional benefit, MSPI has resulted in improvements to plant-specific PRA models and to the NRC's risk models for plants (SPAR models). 20% of plants responding report improvements in modeling of events, closure of important peer review items, and improvements in baseline data. Further, as part of the pre-implementation reviews for consistency of PRA results, the NRC compared plant-specific PRA parameters, such as importance measure results, to SPAR model parameters and revised the SPAR models for many plants.

A significant area for improvement is the need to simplify and reduce workload for the indicator. Due to the plant-specific, risk-informed nature of the indicator, 65% of plants responding state that implementation and maintenance of MSPI has created a significant additional workload for the industry. Additionally, many plants commented on the complexity of the indicator, which makes communication of the results more difficult than for SSU.

Objective 2

Review of industry-wide data indicates the need to consider revising the treatment of planned unavailability. MSPI was designed to allow plants to subtract a baseline planned unavailability component from total system unavailability. While this feature recognizes the importance of performing adequate planned maintenance, some survey respondents noted that it does not reflect the risk impact of all maintenance performed on safety systems. Further, industry data indicates significant variations across the industry in overall plant risk due to planned system unavailability. Finally, some survey respondents indicated that management of the baseline planned unavailability is time-consuming and should be simplified.

Objective 3

Principal recommendations resulting from the assessment are as follows. A complete list of recommendations is provided in Section 5.

Consider revising the treatment of baseline unavailability to account for the risk worth of planned unavailability and simplify management of unavailability data. This recommendation is derived from the recognition that the current design does not fully reflect the risk impact of planned unavailability, from variations identified in the actual risk contributions of planned unavailability during the data review, and from the survey comments regarding the need to simplify this process.

Simplify the indicator as much as possible to reduce workload and complexity. This recommendation is derived from the numerous comments regarding ongoing workload to maintain MSPI. Potential areas for simplification are discussed in Section 5.

Resolve current issues regarding the design or intent of the guidance, including time of discovery, applicability of PMT failures, and excessive importance of EDG run failures.

Clarify the NEI 99-02 guidance in a small number of areas identified in the survey.

Prioritize and implement improvements to CDE to reduce error traps, make data entry more efficient, and increase ability to analyze effects of future changes.

Improve training and communications. The survey and data review results indicate the need for an industry-wide workshop and improved training to strengthen industry knowledge and implementation of MSPI. A number of inconsistencies in implementation were identified and corrected as a result of the review.

The NEI ROP Task Force and the Industry/NRC ROP Working Group are evaluating these recommendations for implementation.

Objective 4

Nine implementation good practices were identified for sharing with the industry. These are discussed in Section 6.

Conclusion

As viewed approximately one year following implementation, MSPI has been successful at increasing focus on the risk-significant aspects of safety system performance. As the industry and NRC gain experience with the indicator, the safety improvements realized from the indicator will likely continue to accrue, and the concerns regarding complexity and workload will likely diminish. That said, the recommendations for improvement in the indicator need to be considered seriously for implementation by the industry ROP Task Force and the industry/NRC Working Group.


2. Assessment Method

The assessment consisted of an industry survey and a review of industry source data and results. Each of these is described further below.

The industry survey was developed by the ROP Task Force and distributed to MSPI points of contact at each nuclear plant site. 44 of 65 plant sites responded to the survey.

The survey contained the following major sections:

Effect of MSPI implementation. This section was designed to determine the degree to which the anticipated benefits of MSPI had been achieved.

Indicator design and guidance. This section was designed to gather recommendations for improving the design of the indicator and the clarity of the guidance document.

Implementation issues and practices. This section was designed to identify issues related to the implementation of the indicator, including consolidated data entry, MSPI web board, the FAQ process, training, and resources required for implementation and maintenance of the indicator. This section also included a section to share plant-specific practices that might increase efficiency or accuracy of data reporting.

The data review was conducted by a four-member panel of industry personnel who are highly knowledgeable of the MSPI design and expected results. The panel reviewed data from the INPO consolidated data entry (CDE) database to identify potential inconsistencies or outliers among plants. Comparisons were conducted within similar reactor types where applicable. The following data were reviewed; further discussion is provided in Section 4.

Planned unavailability. Potential outliers were identified as those with planned baseline unavailability that represents more than 10% of the plant baseline CDF.

Unplanned unavailability. Plants were identified that appeared not to use the values for baseline unplanned unavailability provided in the MSPI guidance.

Low unavailability probability. Plants were identified in which the unavailability probability assumed in the plant PRA (UAP) was very low compared to the unavailability probability assumed in the plant's baseline planned unavailability for MSPI (UABLP).

Importance measures (Fussell-Vesely) for individual train unavailability. Potential outliers were identified as trains with risk worth greater than 10% of the plant CDF or significantly different from similar trains at the same plant.

Component failure margins by failure type and component type. Potential outliers were identified as those with either abnormally high risk worth for a failure (Xd,r greater than 1E-05 or two failures to yellow), or abnormally low risk worth for failures (Xd,r less than 1E-10 or excessive number of failures to white).

Actual or estimated demands and run hours by component type. Potential outliers were identified as those with greater than two standard deviations from the industry mean in number of demands or run hours for the component type.

Mission time. System mission times were reviewed to identify any apparent deviations from the guidance or inconsistencies in similar systems between plants.
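As an illustration of the statistical screen described above for demands and run hours, the following minimal sketch flags values more than two standard deviations from the mean. The demand counts are fabricated for illustration; flagged plants are candidates for follow-up review, not presumed errors.

```python
import statistics

# Fabricated demand counts for ten plants (illustration only).
demand_counts = [100, 95, 110, 105, 98, 102, 97, 103, 99, 400]

mean = statistics.mean(demand_counts)
sd = statistics.stdev(demand_counts)

# Flag any count more than two standard deviations from the industry mean.
flagged = [n for n in demand_counts if abs(n - mean) > 2 * sd]
print(flagged)  # [400]
```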

The assessment team sent information to each plant that had potential outliers and requested that the plant evaluate the information for potential errors, misapplication of the guidance, or other factors that might indicate an unintended result. Many potential outliers may be legitimate differences due to plant design or operational practices.

Plants were asked to document any problems identified in the corrective action process.

Individual plant data was not shared beyond the applicable plant.

The ROP Task Force reviewed the results of the survey and data review and formulated recommendations for improvement.


3. Summary of Industry Survey Results

The results of the industry survey are presented below. 44 plants responded to the survey. The distribution of the results is provided. The sum of responses does not always add to 44, as some plants did not respond to all questions. Comments are summarized and grouped with similar comments. The number of similar comments is provided in parentheses. A complete set of comments is provided in Appendix A.

Section I - Effect of MSPI

1. MSPI is an improved measure of mitigating systems performance compared to the PI that it replaced (Safety System Unavailability - SSU).

Strongly Disagree = 0; Disagree = 1; Neutral = 11; Agree = 25; Strongly Agree = 5

Comments:

Risk-informing the indicator is an improvement. (7 comments)

Inclusion of unreliability is an improvement. (6 comments)

Amount of effort to maintain the indicator is significant. (2 comments) Could simplify counting rules (such as demands, OOS time) to reduce this complexity.

Inconsistencies between units due to PRA differences. (1 comment) (Note: based on similar comments to later questions, the commenter seems to be indicating that there should be some way to compare across plants that does not depend on the plant-specific nature of PRA results.)

Indicator failed to consolidate the methods for reporting unavailability (e.g., between WANO, maintenance rule), adding to the workload. (1 comment)

Allowing for baseline planned unavailability subtraction is not conservative, but other programs take care of this. (1 comment)

Cooling water system unavailability is taken, even when cooling water systems are operable per technical specifications. (1 comment)

2. Implementation of MSPI has resulted in increased focus on the overall reliability and availability of mitigating systems.

Strongly Disagree = 0; Disagree = 6; Neutral = 12; Agree = 21; Strongly Agree = 3

Comments:

Has improved site focus on reliability (on at least some systems). (11 comments)

Little or no change in focus - emphasis was already high. (8 comments)

Station management still has a habit of focusing on unavailability or does not realize the impact of unreliability. (3 comments)

Focus on reliability may increase with more time and experience with MSPI. (1 comment)

Only increased focus is related to the confusion of trying to understand the indicator when a system approaches a threshold. (1 comment)

3. The elimination of cascading support system unavailability is an improvement compared to SSU.

Strongly Disagree = 0; Disagree = 1; Neutral = 6; Agree = 18; Strongly Agree = 17

Comments:

Simplifies data collection and/or is more reflective of system performance. (13 comments)

Concern that we still must cascade for other PIs. (6 comments)

Unavailability for any reason should be counted. Otherwise attention to support systems may be reduced. (1 comment)

4. The elimination of fault exposure hours is an improvement compared to SSU.

Strongly Disagree = 0; Disagree = 1; Neutral = 6; Agree = 15; Strongly Agree = 20

Comments:

Elimination of fault exposure is an improvement. (4 comments)

Concern regarding recent interpretations that appear to return to fault exposure concept. (4 comments)

Still must figure fault exposure for WANO indicators, so no savings. (2 comments)

Fault exposure still part of SDPs, so no benefit. (1 comment)

Plant experiences very little fault exposure time. (1 comment)

Ignoring fault exposure results in non-conservative bias. (1 comment)

5. The incorporation of risk-informed concepts (risk significant functions, importance measures, mission times, and success criteria) into MSPI is an improvement compared to SSU.

Strongly Disagree = 0; Disagree = 3; Neutral = 8; Agree = 22; Strongly Agree = 9

Comments:

Agree that risk-informing is an improvement. (10 comments).

MSPI is too complex and this has reduced the potential benefit of the indicator. (3 comments)

Not cost-effective, given the increase in resources to maintain (2 comments)

Makes it more difficult to compare plants due to PRA differences. (2 comments)

Has helped expand risk awareness of plant personnel. (1 comment)

Indicator has resulted in more interpretation issues (e.g., failure, mission time, counting demands). (1 comment)

6. Actual MSPI results appear to properly reflect the significance of unavailability and failures.

Strongly Disagree = 3; Disagree = 6; Neutral = 13; Agree = 18; Strongly Agree = 3

Comments:

Not true for unavailability (i.e., since baseline unavailability is not counted). (9 comments)

Complexity of MSPI makes it difficult to explain the impact. (5 comments)

Some failures seem to have a larger impact than expected (e.g., EDG run failures). (3 comments)

Results are only as good as baseline data. Assigning risk-significance to a component type (e.g., MOVs) penalizes some systems unnecessarily. (1 comment)

Post-Implementation Assessment of MSPI Page 9 of 64 Care must be taken to use the correct MSPI menu selections when entering failure data. EPIX failure mode selections dont always produce the expected MSPI results.

(1 comment)

7. The treatment of planned baseline unavailability allows the plant to focus on performing adequate preventive maintenance to improve safety system reliability.

Strongly Disagree = 0; Disagree = 4; Neutral = 14; Agree = 19; Strongly Agree = 5

Comments:

Agree (10 comments)

Other indicators and programs still drive reduction of unavailability. (9 comments)

Should remove unavailability tracking from MSPI. (2 comments)

MSPI provides no incentive to reduce planned unavailability. (1 comment)

Management of planned baseline unavailability has increased burden. (1 comment)

The baseline snapshot was too narrow. (1 comment)

No change in plant practices. (1 comment)

8. Regulatory interactions regarding NRC inspector verification of MSPI data submittals have been manageable, given the newness and complexity of the indicator.

Strongly Disagree = 0; Disagree = 4; Neutral = 1; Agree = 35; Strongly Agree = 2

Comments:

Minimal increased impact due to NRC Inspector verification (5 comments)

NRC is unfamiliar with the indicator. (3 comments)

Good open discussion with NRC. (2 comments)

PI verification is more difficult due to complexity of indicator. (1 comment)

Regulators are as frustrated as the plant personnel with the whole process. (1 comment)

The level of effort required for MSPI data submittal and follow-up was quite onerous. (1 comment)

The FAQ process appears to favor inputs from certain plants / individuals even where NEI 99-02 guidance is open to interpretation. (1 comment)

9. The level of effort required to maintain MSPI is not significantly greater than that required to maintain SSU.

Strongly Disagree = 11; Disagree = 16; Neutral = 10; Agree = 4; Strongly Agree = 1

Comments:

Increased workload is significant. (30 comments) Examples include initial impact for basis document, initial and continuing impact on PRA personnel, tracking UA hours on trains and segments with no impact on indicator, need to track different rules for multiple indicators, incorporation of additional systems, addition of failure tracking, basis document maintenance, tracking actual demands, responding to failures late in the quarter, actions to improve margin, communicating impact of failures and unavailability to management, using margins report, putting data in CDE, validation.

MSPI has reduced level of effort required (following implementation). (1 comment)

10. Please describe any significant plant modifications, physical or procedural changes, or maintenance philosophy changes that have been implemented or are planned at your plant that are specifically directed to improve system performance or to reduce risk of failures or unavailability as a result of MSPI performance. Do not include PRA model revisions that have been made to increase margin, if they are not connected to modifications or physical/procedural changes.

Comments:

No specific improvements driven by MSPI. (11 comments)

Performing more maintenance to ensure reliability or moving up planned maintenance. (6 comments)

Planning or investigating modifications or upgrades to reduce risk significance or improve reliability of a system. These include AFW system modifications, changeout of all safety-related 4-kV breakers, a change to SW pump design, modifying MOV control circuitry, replacing DG governors, removing loads from EDGs, adding an emergency power feed to a component, and adding a backup emergency power source. Also, some procedure changes to improve reliability or reduce unavailability. (5 comments)

Plant now assesses unavailability impact on MSPI when scheduling work. (1 comment)


11. Please describe any significant improvements to your plant's PRA model as a result of implementing MSPI.

Comments:

No improvements driven by MSPI. (11 comments)

Improvements to model, including crediting additional capabilities, separating basic events, improving baseline data, improved modeling of systems. (9 comments)

Addressed open F&Os and/or upgrading to meet PRA quality standards. (4 comments)

12. Additional comments on the effects of MSPI implementation.

Indicator is more complex and requires significant additional resources to maintain. Need to find ways to simplify. (9 comments)

Indicator has shifted focus towards most risk-significant systems and/or has resulted in greater focus on reliability. (2 comments)

The MSPI values cannot be as readily translated into "% Margin Remaining to White" graphs as the SSU indicators could due to their logarithmic nature. Also, future predictions are difficult, since there are so many variables. (3 comments)

Industry is under a significant amount of stress to ensure accuracy of the indicator under the rules of 10 CFR 50.9. The threat of penalty has been strongly stated for this extremely complex process. The level of scrutiny to which we are reviewing our unavailability time does not seem to be consistent with the impact of minor discrepancies. (1 comment)

From a systems and planning perspective, WHITE is viewed the same as RED. Was this the intention? (1 comment)

MSPI implementation created a confusing environment due to its criteria gaps when compared to INPO SSPIs and design / licensing basis criteria. (1 comment)

Section II - Indicator Design and Guidance

1. Please list up to three high priority improvements that should be made to the MSPI indicator guidance. These can be improvements in the clarity of the guidance or proposed improvements in the indicator rules that might make the indicator more effective or make data gathering more efficient. Provide a short explanation of why the improvement is needed.

Comments:

Guidance changes

Simplify counting of unavailability or do not count it at all. Various suggestions, including stop counting, align with WANO, and align with the maintenance rule. (21 comments)

Simplify reporting of demands and run hours. This includes consolidating types (ESF non-test, non-test, and test), making it less cumbersome to validate the 25% requirement to revise estimates, and discouraging counting of actuals. (5 comments)

The "7 decade below baseline" quantification requirement should be relaxed. There is little value to this deep of a quantification. (3 comments)

The baseline should be whatever is normal for a system over the current 3-year period. The fact that 2002 to 2004 was the baseline skewed the data. Some equipment had 2 maintenance windows performed during the period while sister or shared equipment had only 1 maintenance work window. This discrepancy causes cyclic performance in the indicator. Also, a three-year baseline period may not account for larger, infrequent preventive maintenance work windows. This results in baseline adjustments to the basis document too frequently. It may be better to annualize baseline unavailability over a longer period. (1 comment)

Model control circuitry separately from actuated component to show actual risk worth of auto start, manual start, etc, and count failure of control circuit in proportion to actual worth of failure. Failure of one start circuit has a risk worth of typically a tenth of the worth of failure of the actuated component, but counts with the actuated component. Causes diversion of station resources out of proportion to actual risk worth. (1 comment)

Improve the definition of "time of discovery" as it relates to failures. There is disagreement between the industry and the NRC as to what the station should have known versus what it did know at the time a degraded condition is identified. (1 comment)

Evaluate the inclusion of Performance Limit Exceeded. In some cases, it seems that one or two failures on some systems turn a system white, regardless of a stellar unavailability history. If PLE is maintained, suggest derating the URI impact by inserting a 0.75 or 0.8 multiplier against it. (1 comment)

The impact of a diesel run failure should be revisited. (1 comment)

The guidance should provide a default cooling water unplanned UA value for those plants with no unplanned UA in the baseline UA data. The guidance provides for each site to develop site-specific unplanned UA values, but zero is not acceptable. (1 comment)

Clarify guidance for maintenance induced damage/issues found during PMT prior to returning the component to functional status. (This appears to refer to the outcome of FAQ 428, which involved a component failure during a PMT run.) (1 comment)

Training, Communication

Provide initial and ongoing training on MSPI use. (2 comments)

Guidance clarifications

The instructions aren't entirely clear for the case where an SBDG is running in PMT mode and you have a grid fluctuation that results in an actual ESF start demand and load demand. It's unclear whether to count the start demand, since you can't say whether the diesel would have successfully started or not, given that it's already running. (2 comments)

Clarify the impact on the MSPI value of the Failure Mode selection during data entry for component failures, and the fact that selecting any of the EPIX-based failure modes from the table via the data entry "drop-down" menu will cause a "Demand" failure to be introduced by default into the MSPI calculation. The information on Page 168 of the Data Element Manual provides no discussion of the consequences of this action, especially since a "Demand" failure may have more of an adverse effect on the index value than a "Run Time" failure. If the failure mode is selected from an EPIX perspective, an incorrect MSPI value can (and has) been generated through introduction of a "Demand" failure when a "Run Time" failure was intended.

Alternatively, provide an MSPI-specific "drop-down" menu for "Demand" failures (Start/Load/Open/Close) and "Run Time" failures (failures to meet mission run time).

The current degree of industry understanding of these terms should make past misapplication of risk-based performance factors avoidable. (2 comments)

NEI 99-02, Page F-27: Plant has encountered a problem with the terminology "established success criteria," "success criteria of record," and "pre-defined success criteria." An Emergency Diesel Generator voltage regulator's readings were out of tolerance by the guidance contained in a surveillance procedure. The procedural range of acceptable readings was not contained within an approved plant calculation.

The NEI guidance would have the plant count this condition as an MSPI failure even though the ensuing Engineering evaluation determined the as-found condition to be acceptable. (1 comment)

Clarify the unavailability section - particularly as it applies to taking credit for operator actions and "already written in a procedure". As it is now, its meaning can be widely interpreted. A clarification was informally provided to the industry at the Summer 2006 MRUG meeting by myself, after sitting down with Steve Alexander of NEI.

Codifying this clarification in both the MSPI and NEI documents could avoid further problems. (1 comment)

Provide clearer instructions about when PRA data is required to be updated. We have a different (a)(4) model than is used for MSPI which may impact the importance measures, but based on PRA does not meet the MSPI requirements for updating the importance numbers. (1 comment)

De-standardize the number of steps that are virtually certain to be accomplished in restorative actions. Current interpretation is excessively conservative. (1 comment)

NEI 99-02, Page F-25: There appears to be a potential for a MSPI failure to exist without meeting the definition of a Maintenance Rule Functional Failure. (1 comment)

Clarify unavailability with respect to Technical Specification operability. A component should not be considered unavailable if it is operable. (1 comment)

Guidance needs to address reporting of operational run hours which are a continuation of a post-maintenance test run. In some cases, mainly our service water pumps and possibly our RHR pumps, we leave the pump running following the post-maintenance test. (1 comment)

CDE and results reporting

Need simplification of the margin quantification, including PLE - consider providing a small table showing projected MSPI values for 1 or 2 of each type/location of failure. (1 comment)

The process for running multiple-system Derivation Reports is too cumbersome. There should be a way to run the report for multiple systems at once, and for both Unreliability and Unavailability at the same time, similar to the Margin Report. (1 comment)

Provide the ability to allow revisions/updates to EPIX reports without needing to unlock as long as the MSPI Yes / No determination is not changed. (1 comment)

Due to the logarithmic nature of the indicator, it can be difficult to provide senior management with a clear-cut picture of the remaining "% Margin Remaining to White" for a given system, which is what they have become accustomed to seeing in graph form for the SSU. Numbers of hours and failures remaining in Green are useful, but they don't have the strong visual presence of a graph, and the "PI View Report" graph in CDE isn't a good alternative. We have created a linear "% Margin" bar graph for this purpose, showing the change over a two-month window, but it somewhat misrepresents the facts since it is linear. Some guidance from INPO on how to best make this translation would be helpful. (1 comment)

Revise EPIX reporting so that it is clear when choosing responses whether this will cause an event to be a demand or run failure. This can be confusing and may lead to the wrong failure type. (1 comment)

Other

Eliminate the indicator. (2 comments)

Find some standardized way so that an individual plant can be compared to the industry and other plants. (2 comments)

Incorporate approved FAQs into the guidance much more often than is being done. (1 comment)

Realize that it will take at least half of a full-time equivalent position to maintain the program, especially during early implementation. (1 comment)

Establish that the MSPI basis document was required only for initial implementation and was not intended to be maintained up-to-date. Site procedures must be in place to maintain the MSPI constants. (1 comment)

Use the industry input from this survey to make improvements. (1 comment)

Section III - Implementation Issues and Practices

1. Please provide any comments or suggested improvements to the following implementation-related items.
a. Consolidated data entry

Comments:

Improve layout of data entry screens. This includes formatting tables in CDE to match the basis document format, making data element entry, review, and approvals a single screen for all units, removing duplicate entries (start, run, and load data) for components, and having one text file for all entries. (5 comments)

Allow changes to CDE based on a basis document revision without waiting until after the quarter is ended. (3 comments)

Improve speed and reliability of CDE (lockups, data losses, data saving speed). (2 comments)

A CDE report separate from the Derivations Report that would summarize the potential/actual failures for all the MSPI systems would be helpful. (1 comment)

The equipment search results screen needs to display plant-specific component name instead of industry standard name. (1 comment)

When filling out the CDE section on MSPI each system asks for any "Potentially Related Failures this Month". Not sure what the expectation is for this section. (1 comment)

Data review screens should show approvers the value submitted, rather than just the MSPI number. (1 comment)

The potential failures that are identified in the unavailability section should really be identified in the reliability section. (1 comment)

b. Consolidated data entry what-if feature

Comments:

Cumbersome to use. (16 comments) Suggestions include: auto-populate some what-if fields (UA and critical hours), allow saving of scenarios, provide a simple way to enter a what-if for a failure, and enable a 12-quarter projection feature.

Hopefully, the switch to estimated ESF demands will also be incorporated here. (1 comment)

Should have a coach report like CDE. (1 comment)

c. MSPI web board

Comments:

Too much misleading information is provided. This should be a place where questions are posted and definitive answers provided by someone in authority to interpret the guidance. (4 comments)

Never used (5 comments)

The web board is useful and should be continued. (2 comments)

Communicate access and user requirements (1 comment)

Incorporate the Official FAQ logs and discussions on the web board. (1 comment)

Require a response time. Also allow for personal sorting and organizing of information. (1 comment)

A weekly or monthly summary (digest version) would be nice. (1 comment)

d. Process for addressing frequently-asked questions

Comments:

A better means of communicating and making FAQ results available is needed. Suggestions include more frequent incorporation into NEI 99-02, publishing on the web board, and sending to all stakeholders. (4 comments)

Need to streamline the process so as to get answers more quickly. (3 comments)

Process needs to provide for resolution of questions that are not necessarily disagreements, but which would benefit from more official answers. This could include more issues screened by ROP Task Force (prior to becoming an FAQ), use of the web board, or perhaps just more FAQs. (3 comments)

Provide guidance on WANO FAQ that was deleted when NEI 99-01 revision 4 was developed. (1 comment)

The points of contact should be provided the potential FAQs as this has an impact on how issues are counted at their plants. (1 comment)


e. Need for additional training on MSPI

Comments:

Need periodic training for new users. (13 comments)

People need more training on the what-if mode and how to use it. (1 comment)

A "MSPI User's Group" might be useful to allow for interchange of ideas and methods on an ongoing basis. (1 comment)

Improvement lessons learned issued in areas like NRC responses, failure reporting and unavailability reporting would be helpful. (1 comment)

Management needs training on personnel responsibilities. (1 comment)

More detailed information on the derivation of the MSPI Margin Report. (1 comment)

Since component failures have the most impact on the indicators, additional training on the guidance related to component failures (e.g., component boundaries, design vs. PRA vs. licensing success criteria, etc.) may be beneficial. (1 comment)

Based on the quantity of web board questions and difference of opinion on answers, it appears that a post implementation lessons learned is in order. (1 comment)

2. Implementation resources

Please estimate the total person-hours expended by plant personnel on a monthly basis to maintain MSPI, including collecting, entering, and verifying data; reviewing results; reviewing margin and planning improvements to increase margin; maintaining basis documents; and any other MSPI-related activities.

Risk management personnel: average of 10 hours/month, with a range of 0 to 60.

Plant engineering personnel: average of 62 hours/month, with a range of 8 to 160.

Regulatory affairs personnel: average of 8 hours/month, with a range of 0 to 48.

Other personnel (specify):

Work Planning: 100 person-hours per month
MSPI coordinator (Engineering): 10, 30, 30 person-hours per month
Operations: 10, 16 person-hours per month
Maintenance: 30 person-hours per month
Management: 3, 1, 2, 4, 8 person-hours per month
PI Coordinator: 2, 4, 5, 6, 5, 10-15 person-hours per month
Corp. Reg. Affairs: 5 person-hours per month
Data entry and collection: 1, 3, 100 person-hours per month
EPIX: 2 person-hours per month


3. Please describe any implementation practices that may be unique which have contributed to personnel efficiency or effectiveness in performing the tasks required to maintain MSPI data. Please list a contact person who would be willing to share any practices with the remainder of the industry.

(See Section 6 for results)


4. Summary of Data Review Results

The data review results initially identified 46 plants with potential outliers in the following categories. Plants with potential outliers were contacted and asked to respond to NEI with a resolution of the potential issue. Plants were requested to address any actual issues within the corrective action program. The table below provides the results of the review.

Parameter | Criteria for identifying potential outliers | Number of plants with one or more potential outliers
Planned unavailability | Trains with planned baseline unavailability that represents more than 10% of the plant baseline CDF | 5
Unplanned unavailability | Trains that appeared not to use the values for baseline unplanned unavailability provided in the MSPI guidance | 6
Importance measures (Fussell-Vesely) for individual train unavailability | Trains with risk worth greater than 10% of the plant CDF or significantly different from similar trains at the same plant | 7
Unavailability probability | Unavailability probability assumed in the plant PRA (UAP) is very low compared to the unavailability probability assumed in the plant's baseline planned unavailability for MSPI (UABLP) | 2
Component failure margins by failure type and component type | Components with either abnormally high risk worth for a failure (Xd,r greater than 1E-05, or two failures to yellow) or abnormally low risk worth for failures (Xd,r less than 1E-10, or an excessive number of failures to white) | 12
Actual or estimated demands and run hours by component type | Components with greater than two standard deviations from the industry mean in number of demands or run hours for the component type | 30
Mission time | Mission times with apparent deviations from the guidance or inconsistencies in similar systems between plants | 8
Other | Miscellaneous issues | 2
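As an illustration of the demands/run-hours screen in the table above, the following is a minimal sketch; the data layout, function name, and figures are hypothetical and not part of the assessment methodology.

```python
from statistics import mean, stdev

def flag_outliers(counts_by_plant, n_sigma=2.0):
    """Flag plants whose reported demand (or run-hour) count for one
    component type falls more than n_sigma sample standard deviations
    from the industry mean."""
    values = list(counts_by_plant.values())
    mu, sigma = mean(values), stdev(values)
    return [plant for plant, count in counts_by_plant.items()
            if abs(count - mu) > n_sigma * sigma]

# Hypothetical EDG start demands reported by ten plants; only the last
# deviates by more than two standard deviations from the mean.
edg_starts = {f"Plant {i}": n for i, n in
              enumerate([24, 25, 26, 24, 25, 26, 25, 24, 26, 60])}
print(flag_outliers(edg_starts))  # ['Plant 9']
```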

Each of the 46 plants that were contacted regarding potential outliers responded to the assessment team's inquiry. Based on the responses, approximately 25% of the plants identified the need to revise MSPI data. These issues were addressed in the plant corrective action processes.

Additionally, the review identified a large number of plants that appear to have an unrealistically large contribution from a run failure (e.g., E-5 to E-6) of an emergency diesel generator (EDG): 70% of units would invoke the risk cap for a failure to run, and 34 plants would have fewer than two run failures to white. The unusually large contribution of a run failure is due to the difference between how failures are classified in PRA and in MSPI.

Note that a large failure-to-run contribution is non-conservative for the effect of other types of failures. Because this appears to be a widespread issue not related to individual plant application of the guidance, plants in this category were not contacted.
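For context, a simplified sketch of the index arithmetic helps explain why run failures dominate. Per the NEI 99-02 formulation (the symbols here are schematic, not a substitute for the guidance), the index is the sum of an unavailability term and an unreliability term, each weighted by the train or component importance derived from the plant PRA:

\[
\mathrm{MSPI} = \mathrm{UAI} + \mathrm{URI}, \qquad
\mathrm{UAI} = \sum_{t}\mathrm{CDF}_{p}\,\frac{FV_{UA,t}}{UA_{P,t}}\,\bigl(UA_{t}-UA_{BL,t}\bigr),
\]
\[
\mathrm{URI} = \sum_{c}\mathrm{CDF}_{p}\,\frac{FV_{UR,c}}{UR_{P,c}}\,\bigl(UR_{B,c}-UR_{BL,c}\bigr),
\]

where \(UR_{B,c}\) is the Bayesian-updated unreliability of component \(c\). For a failure-to-run mode, \(UR \approx \lambda_{R}\,t_{m}\), with \(t_{m}\) the PRA mission time (typically 24 hours), so a single run failure shifts the posterior \(\lambda_{R}\) enough that a component with a large Birnbaum weight can move the index most of the way to the 1E-6 green/white threshold, which is the effect described above.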


5. Recommendations

Changes to indicator design

1. Consider revising the treatment of baseline unavailability to account for the risk worth of planned unavailability and simplify management of unavailability data. This recommendation is derived from the recognition that the current design does not fully reflect the risk impact of planned unavailability, from variations identified in the actual risk contributions of planned unavailability during the data review, and from the survey comments regarding the need to simplify this process.
2. Simplify the indicator as much as possible to reduce workload and complexity. This recommendation is derived from the numerous comments regarding ongoing workload to maintain MSPI. Potential areas for simplification include the following.

Place a high priority on aligning the identified differences between the maintenance rule guidance and MSPI guidance.

Continue to pursue the effort in progress to automatically translate MSPI unavailability data into WANO safety system unavailability (SSU) data.

Simplify reporting of demands and run hours. This includes consolidating demand types (ESF, non-test, and test), making it less cumbersome to validate the 25% requirement to revise estimates, discouraging counting of actual demands, and considering allowing PMT demands to count for simplification of data collection.

Consider relaxing the PRA model quantification (truncation) requirements.

Consider endorsing alternative methods for maintaining and revising information in the plant basis document. This would reduce workload associated with maintaining a separate document.

3. Resolve the current issue with regard to "time of discovery" as it relates to failures.

There are currently two frequently asked questions being addressed by the Industry/NRC ROP Working Group regarding this issue. This recommendation is also derived from comments in the survey.

4. Revise the guidance to resolve the excessive impact of EDG run failures. This recommendation is derived from the results of the data review.
5. Revise the guidance for maintenance-induced damage/issues found during PMT prior to returning the component to functional status. This recommendation is derived from comments in the survey.

Guidance clarifications

6. Evaluate the need to clarify NEI 99-02 guidance for the following issues. This recommendation is derived from comments in the survey.

Address how to count demands in the case where an EDG is running in PMT mode and then receives an engineered safeguards features (ESF) start demand and load demand.

Address reporting of operational run hours that are a continuation of a post-maintenance test run.

Investigate improvements to the section describing credit for operator recovery actions to restore the monitored function. A few survey respondents indicated that the guidance was too restrictive.

CDE and results reporting

7. Develop an industry template and supporting software for results reporting that clearly indicates MSPI margin to non-green performance. This recommendation is derived from comments in the survey. (A simple illustration of such a margin calculation follows this list.)
8. Develop a prioritized schedule for the following CDE improvements. Seek additional funding from the industry if necessary to accomplish these improvements in a timely manner. This recommendation is derived from comments in the survey.

Improved data entry process. Suggestions include allowing entry via a delimited text file, changing the basis document data table layout to match CDE entry screen layout, or revising CDE tables so that they are on one page and are human factored to help minimize entry errors.

Improved process for running multiple-system derivation reports.

Provide the ability to allow revisions/updates to EPIX reports without needing to unlock as long as the MSPI Yes / No determination is not changed.

Revise EPIX reporting so that it is clear when choosing responses whether this will cause an event to be a demand or run failure. This can be confusing and may lead to the wrong failure type.

Improve the capability and flexibility of the CDE what-if feature.

Allow changes to parameters during the quarter.

Training and Communication

9. Communicate the results of this assessment to senior plant managers. This communication should include explanation of the key performance drivers in MSPI.

This recommendation is derived from comments in the survey, indicating that some senior plant managers may not appreciate that reliability is a key performance driver for MSPI.

10. The ROP Task Force should provide periodic initial and ongoing training on MSPI. This recommendation is derived from comments in the survey.
11. ROP Task Force should conduct a post-implementation workshop. Workshop topics should include: assessment results, training on areas of inconsistency identified from data review, initial MSPI training for those that need it, discussion/workshop on proposed guidance revisions (e.g., unavailability revision, EDG run failure), CDE training for those that need it. This recommendation is derived from comments in the survey, and the results of the data review.
12. Improve the usefulness of the MSPI web board by incorporating the following. This recommendation is derived from comments in the survey.

Publicize the access instructions widely.

Consider methods to ensure timeliness and accuracy of responses.

Post-Implementation Assessment of MSPI Page 23 of 64

13. Maintain an up-to-date list of station MSPI contacts and publish the results of ROP Task Force meetings, including FAQ status. This recommendation is derived from comments in the survey.
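As a sketch of the margin reporting suggested in recommendation 7 (the function name and figures are hypothetical, not an endorsed template): given the delta-MSPI contributed by one additional unavailable hour on the limiting train, the remaining unavailability margin to the white threshold can be expressed directly in hours.

```python
WHITE_THRESHOLD = 1.0e-6  # MSPI green/white boundary (delta-CDF per year)

def margin_to_white_hours(current_mspi, delta_per_ua_hour):
    """Remaining unavailable hours before the index crosses white,
    assuming all further delta-MSPI comes from unavailability on the
    limiting train (delta_per_ua_hour = index increase per hour)."""
    return max(0.0, (WHITE_THRESHOLD - current_mspi) / delta_per_ua_hour)

# Hypothetical example: an index of 2.0e-7 with 5.0e-10 of index growth
# per unavailable hour leaves about 1600 hours of margin to white.
print(round(margin_to_white_hours(2.0e-7, 5.0e-10)))  # 1600
```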


6. Implementation Good Practices

Desktop instruction for gathering data and reporting - Tracy Rushing (309) 227-2166, Sylvain Schwartz (609) 971-4558, Mark Kimmich (217) 937-3527, Bob Masoero (717) 948-8884

We utilize Control Room Operations personnel to log MSPI unavailable time in the Plant Logging system that is used to monitor Technical Specification Limiting Condition of Operation events. This allows us to link directly to the monthly data with Microsoft Access and quickly collate the unavailable times to each MSPI system.
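A minimal sketch of the log-collation idea described above, using Python's sqlite3 in place of Microsoft Access; the table and column names are hypothetical.

```python
import sqlite3

# Hypothetical schema: one row per control-room log window during which
# a monitored MSPI train was unavailable.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE unavail_log (
    system   TEXT,  -- MSPI system, e.g. 'EAC' or 'HPCI'
    start_hr REAL,  -- window start (hours into the month)
    end_hr   REAL   -- window end
);
INSERT INTO unavail_log VALUES
    ('EAC',  100.0, 106.5),
    ('EAC',  400.0, 402.0),
    ('HPCI', 250.0, 262.25);
""")

# Collate total unavailable hours per MSPI system for the month.
for system, hours in con.execute(
        "SELECT system, SUM(end_hr - start_hr) "
        "FROM unavail_log GROUP BY system ORDER BY system"):
    print(f"{system}: {hours:.2f} unavailable hours")
# EAC: 8.50 unavailable hours
# HPCI: 12.25 unavailable hours
```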

Our MRule Program Manager daily reviews all electronic Condition Reports and Plant Logging events for potential MRule failures. He enters (cut/paste) all potential events into the MRule database. Those that occur in MSPI systems are noted by setting a flag. This allows a simple report to be generated each month of all the potential MSPI events for review. A complete package for review and validation can be generated within an hour or so following completion of the month. Mitch Morris 509-377-2100.

The WBN operations staff recently issued a shift order establishing criteria and expectations for logging unavailability. Additional training is being developed to help operations employees better understand the unavailability requirements. These efforts are intended to improve both the management of unavailability and the accuracy of unavailable hours being recorded each month. Nick Horning, nchorning@tva.gov, 423 365-1861

Performance indicator reports that provide data to senior management on margin to white, in hours and demand failures, are unique and very useful. Terry Printz, terry.printz@exeloncorp.com, 630 657-3809

We require weekly collection of data and monthly publishing to assure all data is completed by the quarterly due date. We use double-blind verification to assure accuracy. Mike Caselli, Michael_caselli@FPL.com, 305 246-6459

Use of running spreadsheets to maintain the planned unavailability baseline. Dennis Curtley, 423-843-6707

There is no centralized data entry for CDE; it is performed by the individual Data Stewards and validated by Regulatory Affairs using the manual quarterly data-gathering worksheets and the CDE Derivation Report, as necessary, prior to NRC submittal. This creates a greater sense of ownership on the part of the Data Stewards. Data entry is performed on a monthly rather than an end-of-quarter basis, so any anomalies surface early in the quarter. Regulatory Affairs owns the CDE Local Administrator function and controls CDE access approval, as well as training any new departmental Data Stewards. Wayne Limberger, 802-258-4204.

All data are collected manually by system engineers and receive three levels of review with supporting information. Data are entered into CDE by a different individual and require additional verification. We are reviewing the possibility of having the system engineer directly input data into CDE. Curt Fischer, curtis.fischer@twcny.rr.com, 315 349-2806

Millstone has an Access database that is linked to the plant logs. The database has established queries that are used to reduce the logs to just the information associated with certain systems. This reduces the time it takes to determine the unavailability. Chris Janus, 860-447-1791 x6806.

Appendix A - Complete Set of Comments from Industry Survey

The comments below are unedited, except for obvious spelling errors and removal of plant identifying information.

Section I - Effect of MSPI

Respond to the following statements by circling the appropriate response. Provide comments as necessary.

1. MSPI is an improved measure of mitigating systems performance compared to the PI that it replaced (Safety System Unavailability - SSU).

Comments:

Should improve emphasis on keeping the most important equipment safe, but has increased the administrative burden of the PI.

Improvement, as failures are inputs into the MSPI equations.

While risk-informing the indicator may enable a better weighting of failures and unavailability time, it still does not provide an objective measure due to the differences in PRA between plants and even between units.

The PI eliminated monitoring of details that had no impact on risk significance, such as fault exposure, and is a more accurate indicator of performance.

Counting actual failures instead of SSU time is a better measure of reliability.

MSPI utilizes PRA risk analysis to provide "worth" to an SSC within a defined boundary and monitors the associated reduction in reliability if the SSC fails. MSPI has both a Reliability component and an Unavailability component. The previous PI measured Safety System Unavailability only and at best was an approximation of the safety system performance.

Including the impact of reliability is an improvement in principle.

I was not involved with the Performance Indicators prior to MSPI.

Margin to thresholds is much harder to comprehend / explain.

MSPI is an improvement over the SSPI, applying plant-specific baseline UA and a plant-specific PRA to the results.

It is clearly more risk-based than the previous system, so attention is more focused on the more important items.

As an industry, when MSPI was implemented, we failed to produce one standard method for measuring unavailability, whether it be Maintenance Rule, MSPI, WANO, or INPO. As a result, MSPI created a new indicator with different rules, requiring additional workload for plant staff with minimal benefit.

Too early to make this determination.

Having the PRA risk basis built into the calculation eliminates any second-guessing or additional engineering effort on assessing the relative nuclear risk contribution of a component failure or out-of-service period.

Risk is a better evaluation of the ability of safety systems to perform.

The PI does have its benefits, but the complicated nature of the indicator detracts from its usefulness. When it takes a thorough explanation of how the indicator is calculated and what effect any change will have, the indicator becomes just a number rather than a measure of performance or an evaluation of impact to risk.

Since failures are still evaluated under the SDP process the impact of risk due to individual failures is already evaluated separately. This was one of the selling points of going to this indicator in the first place.

We do have some increased awareness on the impact of equipment out-of-service time.

Some of the margins based strictly on the calculations are far from conservative.

That's OK because other programs and general good practices will kick in. But not encouraging a site to drive down UA is a good thing.

The first actually risk-based indicator, which is what the NRC should be concerned with, however imperfect it is.

The measure of the performance of a system should be based on the reliability of the system as well as the availability of the system. SSU only measured the availability of the system. However, the complexity of the rules for data qualification needs to be simplified. For example, counting minuscule moments of out-of-service time while the min-flow valve is open, an unanalyzed position, during pump starts is counterproductive; it requires an inordinate amount of time for record keeping and documentation with little or no benefit to the indicator. The same could be said of the weighting factors applied to various components; it leaves both engineers and management confused regarding the impact of a specific event and associated time out of service. This confusion proves counterproductive from both the planning and management aspects.

The old SSU had too much rolled up into one indicator and therefore was not a good indicator of reliability. Additionally, failures are what typically drive the Maintenance Rule, and MSPI does a better job of assessing the impact of failures. Also, due to design differences, all stations did not have some of the same support systems, but everyone had the same threshold.

The amount of effort required to collect the data vs. the benefit is questionable.

Agree that the PI is improved by elimination of fault exposure and cascading of unavailability. Disagree, as unavailability has marginal impact on the indicator; the indicator is more affected by failures. Also disagree, as Cooling Water Pump unavailability must be taken even when not required by Tech Specs.


2. Implementation of MSPI has resulted in increased focus on the overall reliability and availability of mitigating systems.

Comments:

MSPI at least eliminated the penalty for planned maintenance, so it should allow maintenance to keep reliability high. We have seen little change in focus.

Agree, as reliability was not considered previously, but there is still a station habit to focus on unavailability.

The implementation of the new PI has provided an opportunity to edify principal senior management members as to the importance of performing the right maintenance at the right time. Availability has always been a focus for these key safety significant systems.

The station is very focused on MSPI, but more from the aspect of making a mistake in data reporting. The focus on equipment reliability and unavailability was high even before MSPI.

MSPI has not changed the focus at this plant. Mitigating system performance has always been monitored by plant management.

Prior to MSPI, there was already a high focus on SSPI equipment reliability (INPO/WANO & MRULE) and availability (SSPI). MSPI does not achieve any more attention than the already established Mrule (a)(1) system monitoring from the site level, although it uses a more defined risk-based approach to establish monitoring thresholds. The use and weight given failures has the ability to increase the focus on reliability; however, the MSPI process is still quite young, and the process should, when mature, tend to focus more on reliability aspects.

MSPIs are discussed as part of the monthly Performance Indicator Meetings. Site management raises MSPI concerns regarding the proper restoration of MSPI equipment to preclude failures and prompt return to service to reduce unavailability.

Focus from plant management has definitely increased, especially since we went White on EAC power systems at the initiation of MSPI (Green prior).

However, the indicators are more strongly influenced by the reliability factor than by unavailability. The amount of unavailable hours necessary to exceed the limit is, in most cases, far in excess of what is needed to appropriately maintain equipment. As a result, the focus is on minimizing failures.

At xxxx we are very focused on equipment reliability. We utilize more useful, user-friendly tools and programs to focus on reliability and availability of the mitigating systems that are more preventive in nature. With the implementation of MSPI there has been an increased focus on MSPI. MSPI is a lagging indicator and reporting tool, so it is not useful in preventing problems, only reporting what has already happened. The other site programs, like the maintenance rule and the corrective action program, will identify and resolve issues long before MSPI will pick them up.

xxxx already had strong System Health and Maintenance Rule Programs providing this function as MSPI was in development, as well as instituting a PMO process.

Prior to MSPI, there was already a high focus on equipment reliability and unavailability. MSPI does not achieve any more attention than the already established (a)(1) system monitoring from the site level.

The fact that a new process was being rolled out forced a higher amount of upper level focus.

Because the majority of MSPI systems have been running significantly better than the upper end of the Green performance band, there has been little reason to focus management attention on them from a reliability or availability perspective.

Reliability has received more attention.

With regard to the DG this may be true due to the limited margin associated with the DG for failures; however, no change has been observed for HPCI, RCIC, SW, or RHR.

MSPI changed focus from availability to reliability. This index is neutral to planned unavailability and creates a confusing picture when compared with INPO/WANO SSPIs relative to managing plant maintenance work.

Plant engineering personnel involved with risk sensitive systems are generally aware of the limitations for failures or unavailability with respect to those components/systems.

New focus on preventing failures of our 30 components. Otherwise, communication is continually needed to convince other organizations that we don't need to drive down UA for NRC PIs. This is a change of thinking.

The only increased focus comes from the confusion of trying to understand the indicator when a system approaches a threshold. The time spent trying to resolve the confusion over the accounting takes valuable time away from addressing the real issues.

Most management still does not understand that reliability is the driving force for the values.

In most cases availability has a minor impact on the indicator, therefore the focus has shifted to reliability. It is the station's expectation that you will take the availability you need to maintain a high reliability.

The focus has been on justifying that component failures are not MSPI failures and why systems were available when in question. The large disparity between the effects of an MSPI failure on the MSPI value as compared to unavailability effects on the MSPI decreases emphasis on minimizing unavailability.

Increased focus on reliability and decreased focus on availability.

Plant Management still focuses on INPO/WANO numbers.

Agree for overall reliability, disagree for availability. MSPI is more sensitive to failures than to unavailability.

3. The elimination of cascading support system unavailability is an improvement compared to SSU.

Comments:

Helps for MSPI, but still counts in other PIs.

A positive aspect is that MSPI systems are not penalized for other systems' performance issues, but it does make determining unavailability more complicated.

Elimination of cascading unavailability has simplified some monitoring required for PI data element submission. However, without a similar change to the maintenance rule, accounting for system unavailability is still needlessly complex.

The elimination of cascading gives a truer representation of the monitored system UA.

Elimination of cascading provides a truer measure of mitigating system performance.

Because we still must cascade for WANO, and in some cases for MRule, this has increased the complexity of the record keeping.

The elimination of cascading is an improvement, and represents a reduction in MSPI engineer/staff workload.

There is no reduction in time or effort due to this change since the WANO SSU PI requires cascading, and the same personnel are responsible for both sets of data.

This has simplified the data collection process.

The old system made reporting more difficult, and was not reflective of system health.

The elimination of cascading is an improvement, however our site did not have much problem with cascading unavailability.

Helps to focus resources on the actual cause of the unavailability.

While there are still some lingering questions about monitored component boundaries, as is visible from a few early FAQ questions, this change has simplified application of the indicators.

Much clearer picture of actual system performance.

Illustrates pure system performance.

If the system doesn't work for whatever reason then it should be considered unavailable. This tends to lessen attention for issues in the support system that may need to be addressed.

At xxxx the cascading did not have much of an impact. At xxxx the elimination of the ventilation unavailability made a big improvement.

Cascading support system unavailability had inappropriately penalized monitored systems, which masked true system performance. The breakout of cooling water systems improves this situation.

However, cascading unavailability is still required to be reported and therefore results in additional work and potential Human Performance Errors.

4. The elimination of fault exposure hours is an improvement compared to SSU.

Comments:

Should have been an improvement, but it looks like FAQ resolutions may take this away; reference the Diesel Generator latent failure FAQ.

No comments.

A failure is a failure. Historical analyses of a potential impact in the absence of any actual event provided no value or measure of how well key safety systems performed.

Because we still must figure fault exposure for WANO, this has increased the complexity of the record keeping.

The elimination of cascading is an improvement, and represents a reduction in MSPI engineer/staff workload.

There is no reduction in time or effort due to this change since the WANO SSU PI requires reporting of fault exposure, and the same personnel are responsible for both sets of data.

Fault exposure hours are still a part of SDP. There is no benefit from elimination in MSPI.

We have not encountered the issue ourselves, but it is clearly better to track failures directly rather than try to track them via fault exposure.

The elimination of fault exposure is an improvement, however our site did not have much problem with fault exposure hours.

The elimination of fault exposure hours is an improvement, although, based on some recent events, it appears that we are still counting fault exposure.

This always seemed like a difficult factor to pin down with any real degree of certainty, and its elimination is a definite improvement.

We experience very little fault exposure time.

Ignoring fault exposure results in a non-conservative bias.

Fault exposure used an arbitrary method to estimate out-of-service time for an event with an unknown start time. In theory it is a good way to estimate out-of-service time; however, in practice it caused excessive out-of-service time and provided little benefit in improving system performance.

The fault exposure hours often resulted in changes to the indicator for one issue.

This was not a true identification of the unavailability or the impact of the one failure.

It should be noted that there is currently a disagreement in the form of an FAQ about what is considered fault exposure. The NRC's position would result in counting fault exposure again. This would negatively impact MSPI.

This should be a "Strongly Agree," but there continue to be situations where the NRC resident inspector and the licensee disagree on the time of discovery for failures. The NRC appears to be using 20-20 hindsight to determine when the licensee should have known a component was in a failed condition versus when the licensee actually determined the component was inoperable.

5. The incorporation of risk-informed concepts (risk significant functions, importance measures, mission times, and success criteria) into MSPI is an improvement compared to SSU.

Comments:

This is an improvement only insofar as it reduces the importance of issues that involve degradation, but not failure, of the component's ability to do what it really needs to be able to do.

No comments.

As stated previously, it helped to align the station's focus on risk-informed maintenance for the key safety significant systems. However, it remains a poor objective measure due to the individual nature of unit and station PRAs.

Certain failures are more significant than others and that is accounted for.

The focus on the ability to achieve component mission times provided needed clarity to provide risk-informed success criteria.

Conceptually, it is an improvement. However, the use of risk informed concepts makes it more difficult to compare to other plants and the industry as a whole.

Risk informed performance indicators are considered to be an improvement over SSPI and have an added benefit of expanding the knowledge base of industry professionals (plant staff) that can now relate system performance and risk importance/impact.

This allows the measures to be uniquely appropriate to the plant design and impact on nuclear safety.

It helps define the data requirements for "UNAVAILABILITY" and "RELIABILITY" reporting, which benefits the reporter.

The previous SSU was not risk based.

The use of MSPI has not been cost effective. It has drastically increased the amount of PRA and engineering resources devoted to risk-inconsequential items/issues/overhead.

It makes reporting clearer and focuses attention on the higher risk items. Development of the basis document also provided some clarity on success criteria, etc.

Conceptually, risk-informed performance indicators are considered to be positive; however, our site does not use MSPI to drive safety system maintenance. Balancing unavailability and reliability was already a priority before MSPI.

The degree of sophistication afforded by incorporation of these concepts means that there is less imprecision in the index and correspondingly less latitude to "manage around" the indicators.

Too little benefit for the huge expense associated with the creation and implementation of MSPIs. Implementation of risk-informed concepts is difficult to follow, and the concepts are presently disconnected from the Technical Specifications regulatory criteria, which control operation of the subject systems.

This has not been fully grasped by the industry. Significant focus is still placed on unavailability. This is driven somewhat by the WANO indicator and the impression by plant management that lower unavailability hours are always better (i.e., Top Quartile).

Similar to Maintenance Rule, helps in balancing resources.

This aspect has so complicated the indicator that few understand it. If one is not able to properly understand the indicator then the benefit of the indicator is lessened.

The success criteria are not being used as intended in most cases. Most plants do not have this information readily available and therefore use the design basis. The 24-hour mission time can add confusion because it is different than Operability space.

Although the intention is good, it means more different rules to track, which may result in more errors. The importance measures are good because they allow determining the impact of a failure. There should be some control on the PRA and a better definition of when MSPI values require updating. In some cases the MSPI numbers differ from the (a)(4) model. We are sending the wrong message if we are not being consistent.

The original concept sounded good; however, the MSPI that has been implemented is too complex for the average nuclear employee to understand. It is too difficult to forecast future performance.

Good in the respect that the indicator leads us to focus on the risk significance of components in the plant; however, it has resulted in more general interpretation issues (e.g., failures, mission times, ESF vs. operational demands/run hours, etc.).

6. Actual MSPI results appear to properly reflect the significance of unavailability and failures.

Comments:

Results are weighted by PRA Birnbaum importance, so they should reflect PRA significance. However, the actual CDE calculation method, including the Bayesian-updated failure rate, makes it difficult to understand results (e.g., why do run time failures of diesels get us to White sooner than start failures?). Calculations appear to depend very heavily on the PRA, which is not standardized across the industry. Consequently, the level playing field intended by MSPI does not truly exist.

No comments.

ROP mitigating systems cornerstone did not significantly change as a result of the implementation. If we were a good performer under SSU, we should remain a good performer under MSPI.

True for failures, but not for unavailability.

In many cases the unavailability margin is so large that it seems meaningless to even keep track of it.

MSPI is more heavily weighted on reliability. Some safety systems can have enormous amounts of unavailability without adversely impacting the MSPI calculation. For example, the currently allowed MSPI unavailability is >2700 hours for RCIC (112 days) and as much as 7300 hours for EAC (304 days). There has also been some confusion since Mrule and INPO/WANO allowed unavailability is a fraction of what is allowed by MSPI. Thus, Mrule and INPO/WANO unavailability remain the significant unavailability measures for safety systems.

The MSPI results are difficult to explain when using the Margin Report values. It is confusing to persons not familiar with the MSPI.

On the whole it may properly reflect, but results are only as good as your developed baseline data and can be significantly affected by such input. Also, the ability to only provide risk significance to a component type, such as MOVs as a group, penalizes systems unnecessarily.

XXXX MSPI values have a disproportionate impact from failures over unavailability; a focus shift follows that reduces concern for unavailability.

Strongly agree with this statement for failures which appears to show the relative importance of the components. Disagree with the statement in regards to unavailability which has a relatively minor impact on the overall indicator value.

The MSPI calculation overemphasizes the risk importance of the EDG failures to run.

MSPI appears to be heavily weighted on reliability only. Some of our systems can have enormous amounts of unavailability without impacting the MSPI calculation.

It is hard to relate Core Damage Frequency Index to any actual significance.

Since CDE was intended to consolidate EPIX, WANO and NRC ROP data entry, care must be taken to use the correct "MSPI-informed" menu selections when entering failure data. The EPIX failure mode selections that are accurate for the physical nature of the failure don't always produce the expected MSPI results. See Suggested Improvement #1.

MSPI results by themselves are not intuitive. Understanding requires a deeper knowledge of the relative contribution of different failure modes.

The results are somewhat complicated, and projections of where you would be if an additional failure occurred or you received X hours of unplanned unavailability are not as easily determined.

Not sure. If MSPIs reflected planned unavailability correctly, then there appears to be a gap between MSPIs and INPO/WANO SSPIs in this area.

Rules on counting control circuits unfairly provide excess penalty. Failure of a control circuit when redundant means exist to actuate the component results in gross overcounting of the importance of specific control system failures.

A specific event may have more impact than another; but if an indicator is so complicated that it takes an indicator specialist to decipher its meaning, then its true value is diminished and counterproductive due to the time spent performing the accounting. It becomes similar to today's tax law, requiring a special niche of personnel specifically trained for the purpose of maintaining this one method of accounting.

Some failures seem to have a larger impact than expected.

They seem to reflect the significance of both, although what it is telling you is that unavailability is not that significant. We may want to put a bigger penalty on unplanned unavailability.

Only the PRA professionals that designed MSPI can accurately answer this question.

It reflects the significance of failures. The unavailability has a low impact on the indicator as evident by the large margin threshold of unavailable hours to White.

The significance of the failures and unavailability is based upon the risk numbers generated by the PRA. Since I was not involved in the development of the PRA model, I cannot state conclusively that the results are proper. However, they do seem to make sense from my simulator experience.


7. The treatment of planned baseline unavailability allows the plant to focus on performing adequate preventive maintenance to improve safety system reliability.

Comments:

UA has little impact on the MSPI calculation, so it does not really matter.

Continued focus on removing UA tracking from MSPI should be a priority.

The process for revision of planned baseline hours is very cumbersome. It is a good concept; however, it is a complex, time-consuming implementation method for a simple revision.

This is key in establishing reliable safety systems. Perform the right preventive maintenance in an appropriate window, as originally intended by Technical Specification Limiting Conditions Of Operation, and safety systems will perform within established acceptable ranges.

A 3-year period is not reflective of our maintenance cycle. It is too difficult a process to change baseline unavailability when a large-scope PM comes up. If a plant changes its maintenance windows or its outage vs. online schedule philosophy, there is no process for creating an accurate baseline number.

If it were only MSPI driving the managing of unavailability this would be true; however, MRule and WANO still drive us to minimize unavailability, perhaps at the sacrifice of adequate, timely preventive maintenance.

INPO SSU PI still has some influence on management decisions regarding on-line elective maintenance.

INPO/WANO and Mrule Unavailability rules constrain our use of this process globally. Therefore it is recommended that we expedite the process of combining all safety system indicators into a single methodology even if that means being different than WANO.

There has been some discussion of changing planned baseline unavailability from being "free". While it is generally agreed that some penalty may be appropriate, particularly for large deviations from industry averages, it is important that overhaul time continue to be free. Overhauls or replacements to improve reliability should continue to be free.

The MSPI does not provide any incentive to reduce planned unavailability while still maintaining equipment reliability.

System engineers would say it has little impact.

Indicators allow excessive amounts of unavailability before the indicator is significantly impacted. We focus on achieving the more restrictive Maint Rule limits for unavailability, which ensures the MSPI limits for unavailability are never approached.

With the SSPI indicator, planned UA was not factored into the results, thereby masking good PM strategies to improve performance. With MSPI, planned UA does not adversely impact the results.

Allowing plants to schedule and do PM at the appropriate times without penalizing them an equal amount as for unplanned maintenance is a very appropriate thing to do.

The issue with planned baseline unavailability hours and their utilization in calculating UAI is that in any given month the actual unavailability will typically vary by more than 25% over/under the baseline. The 25% mark is not a good mark and represents less than 50% of one standard deviation (67%). A more reasonable mark should equal or exceed one standard deviation; at least there would be a mathematical basis for the requirement. Also, as stated earlier, MSPI appears to be heavily weighted on reliability only. Some of our systems can have enormous amounts of unavailability without impacting the MSPI calculation.

Maint Rule already did this. MSPI has not had an effect beyond Maint Rule.

Somewhat cumbersome process that presents an error trap if utilized, but it does accomplish greater flexibility.

Tracking planned unavailability is not contributing to maintaining overall plant safety.

The fact that baseline planned unavailability can be increased when significant maintenance or modifications are planned for the next quarter makes this a meaningless indicator because the impact of this maintenance is neutral. All tracking planned unavailability does is increase the burden on the licensees to manage the baseline by adding appropriate amounts when necessary and ensuring it is removed again after 36 months. Planned unavailability and its risk impact should be managed by the Maintenance Rule by means of established performance criteria. MSPI should focus on the impact of failures and the subsequent unavailability that results from it.

Because nearly all of the MSPI systems have been running significantly above the upper end of the Green performance band, there has been little awareness of this provision at the maintenance planning level.

Other indicators such as WANO and Maintenance Rule are still based solely on unavailability. Goals are based on performance related to these indicators (namely WANO).

Using planned unavailability efficiently helps ensure increased reliability.

That might be true if INPO SSPIs were to be ignored. Current INPO SSPI criteria drive nuclear plants to minimize both planned and unplanned unavailability.

The reporting of the planned baseline hours for the next quarter has created an additional accounting burden and resulted in challenges when schedule flexibility is required to address equipment issues not associated with MSPI.

XXXX is definitely using the baseline to increase our maintenance. We routinely review the work and will change our baseline if our maintenance practice changes.

This has changed the focus of our management team and eliminated some of the nickel-and-diming of unavailability time.

The treatment of planned baseline unavailability for MSPI does allow the plant to perform adequate preventive maintenance to improve safety system reliability.

However, because Maintenance Rule and WANO do not treat planned unavailability the same way MSPI does, the station cannot take advantage of this aspect of MSPI.

The baseline snapshot was too narrow.

Plant Management still focuses on INPO/WANO numbers, not MSPI.

The planned unavailability has a minimal impact on MSPI and has not changed how we perform preventive maintenance at the site.

If the planned unavailable hours are less than the baseline planned unavailable hours, the planned unavailable hours will be set equal to the baseline value. This feature minimizes restriction on planned maintenance activities.

8. Regulatory interactions regarding NRC inspector verification of MSPI data submittals have been manageable, given the newness and complexity of the indicator.

Comments:

There has been good open discussion of the MSPI implementation.

No issues with NRC inspector verification; however, the corporate response to identified issues was very rigorous and resulted in several issues of minor significance being identified.

Our NRC inspector was as unfamiliar as we were with the process.

I think the regulators are as frustrated as the plant personnel with the whole process.

XXXX has a good working relationship with the NRC with respect to MSPI input.

Our site has experienced minimal increased impact due to NRC Inspector verification and inspection results yielded only editorial comments.

Our experience has been that the NRC Resident Inspectors have been struggling to master full understanding of MSPI concepts.

NRC interaction has been minimal. Aside from TI 2515/169, "Mitigating System Performance Index Verification," there have been very few interactions with the NRC.

There are the occasional questions on how certain component failures were addressed, but other than that NRC comments have been minimal.

The level of effort required for MSPI data submittal and follow-up was quite onerous.

The FAQ process appears to favor inputs from certain plants / individuals even where NEI 99-02 guidance is open to interpretation.

Our site has experienced minimal increased impact due to NRC Inspector verification.

The NRC has performed one implementation inspection and had very few issues or questions. The Resident Inspector has not spent any additional time in this area.

This should be addressed by the residents. There was no burden on the plant during the initial TI.

From my interactions, I don't believe the NRC inspectors were given adequate training on MSPI. Neither was the industry.

The complexity of the MSPI calculations makes verification of the results difficult.

9. The level of effort required to maintain MSPI is not significantly greater than that required to maintain SSU.

Comments:

Certainly much more work for PRA personnel. MSPI has had a large impact on site resources, particularly the MSPI coordinator, which could be a full-time job in itself.

The necessity to keep MSPI reporting in 50.9 space makes reporting, verification, and approval more time consuming because of the adverse effects of a minor error.

The necessity to keep accuracy of reported UA hours to the 3rd or 4th significant digit is overburdensome due to the negligible impact on the overall MSPI value. The indicator involves more personnel with the expansion of systems than SSU, so the level of effort has had a negative impact on site resources for data submittal and verification.

Changes to practices are very complex and time consuming to implement. Any change in the bases document (site PRA, planned unavailability, etc.) gets you into a very cumbersome basis document revision.

The involvement of the PRA program manager, the necessary periodic updates of the PRA, the MSPI Bases Document, the INPO CDE PRA values necessary for the calculator to work, and the monthly data collection and input is significantly greater than required under SSU. The resource requirements have at least doubled.

The quantity of time taken to setup the initial MSPI data was detrimental to my system because it took away the time I needed to be spending on preparing for upcoming work on my system.

The ongoing performance of MSPI data collection and reporting also represents a significant increase in Engineering resources.

In general, any benefit that may have been gained by having a slightly more accurate indicator than the previous PI has been overshadowed by the consumption of Engineering resources necessary to work through the whole MSPI process.

Once it was initially established, MSPI was easier to maintain, reducing the level of effort involved.

XXXX is on a 24-month refueling interval. Since the MSPI PIs are three-year indicators, there will be a need to revise our Basis Document to adjust estimated demands and run time hours per 12 quarters and revise CDE.

SSU only addressed unavailability. MSPI added unreliability. This is an additional work effort to maintain.

The level of effort to maintain MSPI is significantly greater. Not only is the data input more involved, the tasks associated with managing/supporting the MSPI basis document is a considerable effort.

We now must maintain a Basis Document and all that entails as well as enter more data in CDE (which now requires multiple people to enter/validate). Because of the complexity of MSPI compared to SSU we must spend more effort in explaining results to management.

MSPI is more complicated than SSU and requires interpretation of the rules for every reliability event impacting the indicator. More time is required for data collection than the old SSU, but more significantly, much time and effort is required for interpreting the rules for calculating MSPI.

The PRA group has experienced increased effort because of the need to revise the PRA three times over the last year.

There is more of a burden on the reporters and verifiers to ensure that all failures are reported in a timely manner. Previously, EPIX reporting of the same type of failures would follow our Maintenance Rule review process. With MSPI, failures occurring in the last month of a quarter require prompt submittal, and we do not have the same time considerations as in the other months.

Additionally, the level of effort for verification of data has increased with MSPI implementation. The overall amount of documentation being maintained (notes, data sources, worksheets, etc.) has increased due to MSPI reporting requirements.

Data must be gathered on all attempts to start and run equipment. There is much more equipment monitored in the MSPI process. The data cannot all be retrieved automatically. Operations logs must be manually scanned for starts, runs, and unavailability. The paperwork documenting why a failure is or is not an MSPI failure is significant. Ask any MSPI data steward and he will tell you MSPI is a time-consuming process.

The amount of data collection is vastly greater than what was required for SSU previously.

I was not involved prior to MSPI, but have been told it is significantly greater than previous system.

MSPI is somewhat easier from a data input perspective but more difficult from a plant internal communications perspective. Explanation of the impact of failures or UA hours on margin is very difficult.

Ongoing level of effort to maintain MSPI is more than what was expended for SSU.

However, the ongoing effort when no limit is approached is not significantly greater. If the indicator does move toward the threshold limit, then a greater effort is expended than with SSU. This involves use of PRA resources to evaluate the PRA model to identify possible conservatisms that could be changed to improve margin within the indicator.

MSPI maintenance requires a significant increase in level of effort and resources to maintain compared to SSU.

MSPI is extremely complicated and requires interpretation of the rules for every event impacting the indicator. More time is required for data collection than the old SSU, but more significantly, much time and effort is required for interpreting the rules for calculating MSPI.

Data gathering is more involved and has far wider scope (e.g., unavailability on individual pumps and segments). And determining when a failure is an MSPI failure is complex and subject to too many narrow-scope interpretations, as evidenced by the numerous discussions on the web board. CWS and EDG must maintain running planned maintenance spreadsheets to adjust and keep up with the baselines due to nonroutine planned outages.

For the Residual Heat Removal, Safety Injection and Auxiliary Feedwater systems, the elimination of the cascaded unavailability has decreased the effort to maintain MSPI. Service Water and Component Cooling Water were added, which is an increase in level of effort. Because our diesels are shared equipment, the need to track when the units are critical is an additional level of effort. From a programmatic standpoint, the effort required to manage planned unavailability, and in particular, the planned unavailability baseline, is way out of proportion with its contribution to overall risk importance. See answer to (7), above.

The data-gathering component is largely identical, and the PRA model has not required update since MSPI inception, so maintenance level-of-effort has been low.

It does take an ability to use and understand the CDE Margin Report and Derivation Report, which require resident Subject Matter Experts to explain the intricacies of the back-up data to interested managers.

Tracking run hours and valve demands and distinguishing between test, operational, and ESF demands/run hours is burdensome. At a minimum, the reported test and operational values should be combined into one data set. MSPI allows the use of estimates; however, when a deviation exceeds 25% of baseline, MSPI basis document changes are required. This requires us to track all values regardless of whether estimates or actuals are being reported.

Reliability-related reviews are time consuming and add a significant effort to the availability reviews required for SSU.

XXXX has had PRA model updates at both units since the implementation of MSPI.

Changes to the basis documents and inputting into CDE requires significant additional effort for PRA and Regulatory Compliance personnel.

While the reliability data was previously collected, because it now drives the ROP indicator, significantly more validation and review effort is being applied.

There is a slight monthly workload increase, but not "significant".

Large effort required to make failure calls accurately with much heavier management involvement than the previous indicator.

An indicator of this type should provide more benefit than the time it takes to maintain it. I agree that both reliability and availability indicators and thresholds are necessary to maintain equipment effectively. However, a cumbersome morass of rules, nuances, and details provides little benefit in this effort. I believe the KISS principle should be applied; the simpler the rules are, the more benefit can be derived at the lowest cost. After all, each of us judges our automobile's performance on whether it starts when I want it to, whether it gets good gas mileage, and whether it is in the shop more than I am willing to accept. Combining each of these into a single super-indicator does nothing for John Q. Public, but it sure does keep a statistician busy.

Demand tracking can be burdensome.

The demand data can be somewhat of a burden. First, the data does not need to be divided into the three categories; there is no gain from that. INPO can get its ESF demand data by looking at LERs. Second, estimating can be difficult due to changes in operational demands. In some cases the number of components does not make the accuracy of the number of demands very important. When there are fewer components in the component grouping, the accuracy becomes very important and may make the difference in a color change.

MSPI requires significantly more effort to maintain than SSU.

Collection of RHR data takes 3 times longer than with SSU.

Since INPO/WANO still requires data with cascading, the work load has actually increased without any decrease as originally stated by eliminating cascading.

Additional hours are being spent on collection/review of reliability data. There is currently a duplication of effort between tracking of unavailability for MSPI, MR, and WANO. Additional PRA information is now required and must be recreated for each model update, which was not required for SSU.

The level of effort has increased by approximately a factor of 10.

From a PRA standpoint, revision of the basis document is required for all major PRA model changes and for significant variations from the baseline UA and UR values.

From a system engineering standpoint, the level of effort is significantly greater to maintain MSPI, than SSU.

10. Please describe any significant plant modifications, physical or procedural changes, or maintenance philosophy changes that have been implemented or are planned at your plant that are specifically directed to improve system performance or to reduce risk of failures or unavailability as a result of MSPI performance. Do not include PRA model revisions that have been made to increase margin, if they are not connected to modifications or physical/procedural changes.

None. With INPO still handing out top-performance plant grades to those performing at top quartile or better, the focus has been, and will always remain, to minimize UA and continue striving for top decile or, better yet, zero. The only maintenance philosophy is to minimize UA. We have taken no such advantage of the MSPI program to perform more on-line maintenance.

None. XXXX has been a champion of on-line system maintenance since the early 1990s.

We have consistently performed planned system outage windows that have contributed to a highly reliable suite of mitigating systems. Several human performance-related failures, combined with a senior management focus on "top quartile" performance, have driven re-evaluation of the on-line maintenance strategy.

With the advent of MSPI our senior management has again embraced performing PLANNED maintenance in the interest of reliability improvements.

Planning a replacement/upgrade of the HPCI Signal Converter Control System - though not because of MSPI. It needs to be done regardless of the indicator.

None identified; the implementation of MSPI is too recent to adequately evaluate this.

To date, reliability improvement has been based on our overall operations, maintenance and engineering philosophy and has not been driven by MSPI.

MSPI is "failure" dominant. This combined with the use of planned baseline unavailability has resulted in some cases of not trying to manage unavailability hours for maintenance in order to minimize the number.

We review the next quarter's planned maintenance quarterly to determine if a change to the Basis Document is required. Annually, we re-assess our estimates to ensure they still adequately reflect plant practices.

None to date.

No plant modifications have been implemented specifically to address MSPI. An administrative procedure regarding the collection, manipulation, and submittal of data was generated to address MSPI.

We have begun looking at the work windows much earlier. Maintenance is much more focused on completing the work on these systems. Due to low margin on the AF system, a proposed modification to provide a crosstie between the Train A systems on each Unit is being investigated to increase the amount of redundancy.

We have moved up planned maintenance of our Diesel Generators, such as replacement of relays, due to recent failures.

No specific changes have been made to date.

Procedure changes to better capture demands and run hours. Engineering instruction to assure proper verification to maintain accuracy.

None, but as previously stated, we already had a good System Health and Maintenance Rule Process.

None to date.

None. The Maintenance Rule has already driven such activities.

None up to this point in time.

None that can be attributed to MSPI performance.

Nothing significant, except for some PMs, which are now performed on line instead of during refueling outages.

As a result of the XXXX License Renewal Project the site has recognized that the Auxiliary Feedwater System would benefit from an additional supply of auxiliary feedwater. Because the Unit 2 Auxiliary Feedwater System is a "low margin system" the impact on improving the risk significance of the system for MSPI is also a factor.

This is "very" preliminary and is not in the planning stage.

Major capital modifications - changeout of the entire safety related 4kv breaker population, in work, high priority. Expect modification to MOV control circuitry across the monitored population to change torque switch bypass settings.

We have increased review of planned outages on MSPI systems. Some 7300 card replacements have been made.

We are changing our maintenance philosophy with respect to fixing degraded conditions, especially with the diesel. When a degraded condition comes up, we assess what the impact on MSPI would be if the component were to fail. No modifications or procedural changes have been implemented.

Changes to SW pump design to eliminate bearing water vulnerability and reduce unavailability. Revising AFW procedures to use dedicated operator to reduce unavailability. Redesigning AFW flow control to reduce risk. Removing loads from diesel generators to remove potential for overloading during periods of high ambient temperatures.

(1). MSPI results do not influence management decisions; (2). Diesel Generators have had governors replaced; (3). Changed RHRSW Operating Procedure and Surveillance procedures to allow pump start via spray return valves. This reduces unavailability charged due to maintenance on bypass return valves.

Maintenance planning/scheduling now assesses the effect of unavailability on the MSPI when coordinating work. The station has become much more aware of the consequences of breaker failures on key station equipment (i.e., RHR, EDGs). The station is evaluating a modification to allow for rapid powering of a charging pump from the Technical Support Center Diesel Generator.

11. Please describe any significant improvements to your plant's PRA model as a result of implementing MSPI.

Improvements - none. Our site was forced into changing our PRA model as a result of MSPI implementation (NRC outlier). This change single-handedly took EDG, HPI and CWS to very low margin conditions, placing the site at risk of non-green performance with a single failure.

An interim PRA update is being planned to credit the SBO DG building for 90% of extreme weather events in the PRA model, which would decrease the XXXX CDF from 7.4E-6/yr to 5.3E-6/yr. An update to the PRA model will be performed prior to revising the MSPI Basis Document and calculator, as the diesel Birnbaum number would change.
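
For context, the sensitivity of the Birnbaum number to a CDF change follows from the standard rare-event relationship among the Birnbaum importance B, the Fussell-Vesely importance FV, the plant CDF, and the basic event probability q (a general PRA identity, not a statement about this respondent's model):

    B \approx \frac{FV \times CDF}{q}

If FV and q are roughly unchanged by the update, B scales directly with CDF, so a reduction from 7.4E-6/yr to 5.3E-6/yr would lower the diesel Birnbaum number by roughly 28 percent.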

XXXX had explicitly modeled all Mitigating systems previous to the implementation of MSPI. No significant improvements have been identified as a result of MSPI.

The station is revising the station blackout model for the emergency diesel generators as a result of the MSPI implementation.

MSPI caused re-evaluation and revision of the XXXX Core Damage Frequency to take into account the newly installed backup Emergency Diesel Generator.

None.

The modeling of several accident sequences has been revised to rest on a much more robust technical basis as a result of addressing the PRA quality requirements before MSPI implementation.

Separated super-component basic events into discrete basic events; Updated Loss of offsite power initiating event frequencies; Updated electrical support modeling for 480V, DC.

Reviews of the PRA model were conducted to support implementation and were not of a greater magnitude than what would have been done to support routine updates to the model.

No major changes to the PRA model to address MSPI. Small changes to reduce identified PRA modeling asymmetries were implemented.

None, but some are in progress on SX.

Baseline data for the cooling water system was improved to accommodate changes in maintenance practices.

1) Closure of A and B F&Os from the Peer Review. 2) Better Loss of Offsite Power modeling. 3) Updated T&H success criteria calculations.

No significant changes made up to this point.

Dispositioning of A&B F&Os impacting MSPI; Generic Data update and Dual Unit model enhancement.

Generic Update; CCW/ICW model changes to address F&Os.

Reviews of the XXXX plant risk models were performed in support of MSPI implementation and in support of other requirements and desired model improvements. Model changes were implemented based on these reviews. The overall set of changes represented a significant improvement in the plant PRA models.

No improvements per se. Changes were made to accommodate MSPI.

The PRA is currently being updated to change the modeling of the Component cooling water system based on a review prompted by MSPI.

No changes have been made since the initial development of the MSPI model.

Not aware of any.

Updates and minor changes that were fallout from a complete model run.

Service Water flow model updated - core can be cooled with a single SW pump (vs. 2); reduced the risk worth of SW pump failure in the PRA by an order of magnitude.

No significant improvements have been made even though the PRA model has been revised twice since implementation of MSPI.

Added Hot Leg Injection, added RCP Seal LOCA Initiating Event, added Primary Safety Valve LOCA Initiating Event, fixed Containment Heat Removal Logic, added HPSI Mini-flow Recirc Valves, fixed Feedwater Line Break EFW Logic, fixed EFW Control and Isolation Valve Logic, fixed ACC Pump B Fail to Start Probability, fixed CCW Unavailability Modeling.

A model update was required to meet the PRA quality standards as stated in NEI 99-02. This included data failure updates, initiating event updates and some system modeling updates.
12. Additional comments on the effects of MSPI implementation.

It is clear that MSPI missed the mark during implementation when we look at the bill of sale that was handed to us at the very beginning. We have created a very complex, overburdensome, manpower-intensive program. This program was sold on the idea that it would be easier to manage than the old SSPU. Oh, to long for those SSPU days again. At least it is job security for those involved with MSPI; everyone else keeps an arm's length away.

The MSPI program in general is more complicated than it had to be for some plants in processing changes and in counting unavailability.

I feel the industry is under a significant amount of stress to ensure accuracy of the indicator under the rules of 10 CFR 50.9. The threat of penalty has been strongly stated for this extremely complex process. This seems like overkill when, in reality, a minor discrepancy in unavailability time has an insignificant impact on the overall calculation. The level of scrutiny with which we are reviewing our unavailability time does not seem to be consistent with the impact of minor discrepancies.
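
For scale, a rough illustration using the commonly described form of the unavailability index contribution, in which the index moves with the Birnbaum importance B times the deviation of unavailability from its baseline (all numbers below are illustrative assumptions, not any plant's data):

    \Delta UAI \approx B \cdot \Delta UA, \qquad \Delta UA = \frac{0.1\ \text{hr}}{2000\ \text{hr}} = 5 \times 10^{-5}

    \Delta UAI \approx (1 \times 10^{-4}/\text{yr}) \cdot (5 \times 10^{-5}) = 5 \times 10^{-9}/\text{yr}

Under an assumed train Birnbaum importance of 1E-4/yr, a 0.1-hour recording discrepancy in a 2,000-critical-hour quarter moves the index by roughly 5E-9/yr, about three orders of magnitude below the 1.0E-6/yr Green/White threshold.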

MSPI has shifted the station focus toward the station's most risk-significant systems (auxiliary feedwater followed by emergency diesel generators).

Training and understanding of the nuances of this method is very difficult. More work should have been done to simplify this process.

Much more than a "Level of Effort" project. Next time realize the effect on the organization. Also, implementation should not be started until the guidance is finalized.

The approach to the MSPI that was used was "overly quantitative". What began as a method to better capture data for safety-system performance morphed into a process where factors such as common cause and truncation limits made implementation very cumbersome.

MSPI implementation and maintenance resource requirements were not accurately communicated to the plant. The time required to support this performance indicator has increased significantly when compared to SSU and continues to be a burden to support.

Requires many Engineering man-hours to collect data; plant computers cannot collect the required data. Most plant procedures do not require Operations to document the required data.

From a Systems and planning perspective, WHITE is viewed the same as RED. Was this the intention?

The MSPI values cannot be as readily translated into "% Margin Remaining to White" graphs as the SSU indicators could be, due to their logarithmic nature. See Suggested Improvement #3.

MSPI implementation created a confusing environment due to its criteria gaps when compared to INPO SSPIs and design / licensing basis criteria.

The meaning and significance of exponential numbers (particularly negative ones) isn't apparent to personnel not familiar with the program. Without access to the margin reports, the index number is not very useful. Explanation of what the exponential numbers and margins mean is required on a regular basis.

XXXX has been successful in MSPI. Actual reliability data is used and has proven to be less of a burden.

Resulted in a greater focus on reliability.

Due to the complexity of MSPI, few people at the station understand it well.

Projecting future performance is extremely difficult because there are too many variables.

(1). Took a lot of time up front to implement; (2). Cumbersome to exclude PMT valve strokes from overall count. Easier just to count total strokes for valves with computer points.

Raised workload because cascading unavailability has not been eliminated, although its elimination was part of the original selling point for MSPI.

MSPI is difficult to explain to people not familiar with PRA.

Section II - Indicator Design and Guidance

1. Please list up to three high priority improvements that should be made to the MSPI indicator guidance. These can be improvements in the clarity of the guidance or proposed improvements in the indicator rules that might make the indicator more effective or make data gathering more efficient. Provide a short explanation of why the improvement is needed.

First suggested improvement:

Either fully standardize it, including PRA inputs or eliminate it. This will level the playing field.

Simplify counting of unavailability (or do not count it at all) based upon significance to the plant (i.e., we are held accountable for counting to the nearest 1/10 of an hour when we have over 3,000 unavailability hours remaining before going white).

Outside of eliminating the indicator, I have no suggestions for improvement.

The way that unavailability is considered for MSPI and for the INPO/WANO indicator and for Maintenance Rule is significantly different for some systems. At a minimum, MSPI and INPO/WANO need to come together and agree to count unavailability the same way. This is one of the reasons that the amount of time to do indicators on a monthly basis has dramatically increased.

Initial training for personnel new to MSPI and on-going (annual) training for others. A suggested format might be an annual workshop with 1-2 days of training for new personnel followed by 1-2 day conference to share insights, upcoming changes, etc.

This is necessary to ensure users are providing consistent results and that new personnel have a thorough understanding, especially with respect to the nuances of bases documents.

NEI 99-02, Page F-27: XXXX has encountered a problem with the terminology "established success criteria," "success criteria of record," and "pre-defined success criteria." An Emergency Diesel Generator voltage regulator's readings were out of tolerance by the guidance contained in a surveillance procedure. The procedural range of acceptable readings was not contained within an approved plant calculation.

The NEI guidance would have XXXX count this condition as an MSPI failure even though the ensuing Engineering evaluation determined the as-found condition to be acceptable.

Find some standardized way so that an individual plant can be compared to the industry and other plants.

Consolidate INPO SSU to match MSPI data input and definitions.

MSPI unavailability monitoring is not a significant input to MSPI. Therefore, it is suggested that the unavailability aspects of MSPI be evaluated for elimination/modification. Maintenance Rule unavailability limits for safety systems are a good measure and are the predominant indicator for RBS. More meaningful benefit could be gained by having only one method to monitor availability of systems (INPO/WANO/MSPI).

Implement conforming changes in NUMARC 93-01; most importantly, change the required hours for maintenance rule unavailability from "when required" to "when Rx is critical".

The baseline should be whatever is normal for a system over the current 3-year period. The fact that 2002 to 2004 was the baseline skewed the data. Some equipment had 2 maintenance windows performed during the period while sister or shared equipment had only 1 maintenance work window. This discrepancy causes cyclic performance in the indicator. Also, a three-year baseline period may not account for larger, infrequent preventive maintenance work windows. This results in baseline adjustments to the basis document too frequently. It may be better to annualize baseline unavailability over a longer period.

Incorporate approved FAQs into the guidance much more often than is being done.

Need simplification of the margin quantification, including PLE - consider providing a small table showing projected MSPI values for 1 or 2 of each type / location of failure.

Requiring each site to break demands into 3 categories (ESF, non-test, and test) makes the task more difficult and unnecessarily complex. I am not sure there is a real benefit from this effort. I would suggest not requiring the demands to be broken down into categories.

Revise the definition of UA to match the definition in the Maintenance Rule. It is a tremendous burden for the system engineers tracking the different UA rules for MSPI, WANO and Maintenance Rule.

Eliminate the separate consideration of planned unavailability.

Clarify the unavailability section - particularly as it applies to taking credit for operator actions and "already written in a procedure". As it is now, its meaning can be widely interpreted. A clarification was informally provided to the industry at the Summer 2006 MRUG meeting by me, after sitting down with XXXX. Codifying this clarification in both the MSPI and NEI documents could avoid further problems.

Evaluate the effectiveness of monitoring unavailability as a part of MSPI when systems have enormous margins.

Eliminate adding non-routine planned maintenance into the MSPI and adjusting the baseline for it. This is like adding the same number to both sides of an equation. Why not just require that all non-routine planned maintenance be reported as a line item only, so it may be reviewed but not included in MSPI? We have to maintain running spreadsheets to adjust the baselines to ensure the baseline is readjusted 3 years later for each event.
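
The "same number on both sides" point can be made explicit with the commonly described unavailability index form, writing B for the Birnbaum importance and p for planned hours credited to both terms (a general algebraic observation, not plant-specific data):

    UAI = B\,(UA - UA_{BL}) \quad\Longrightarrow\quad B\,[(UA + p) - (UA_{BL} + p)] = B\,(UA - UA_{BL})

Planned maintenance that is added to both the actual unavailability and the baseline cancels exactly, leaving the index unchanged.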

De-standardize the number of steps that are virtually certain to be accomplished in restorative actions. Current interpretation is excessively conservative.

Clarify the impact on the MSPI value of the Failure Mode selection during data entry for component failures, and the fact that selecting any of the EPIX-based failure modes from the table via the data entry "drop-down" menu will cause a "Demand" failure to be introduced by default into the MSPI calculation. The information on Page 168 of the Data Element Manual provides no discussion of the consequences of this action, especially since a "Demand" failure may have more of an adverse effect on the index value than a "Run Time" failure. If the failure mode is selected from an EPIX perspective, an incorrect MSPI value can be (and has been) generated through introduction of a "Demand" failure when a "Run Time" failure was intended.

Alternatively, provide an MSPI-specific "drop-down" menu for "Demand" failures (Start/Load/Open/Close) and "Run Time" failures (failures to meet mission run time).

The current degree of industry understanding of these terms should make past misapplication of risk-based performance factors avoidable.

Since planned unavailability essentially does not factor into the determination of the results because of the baseline unavailability, the only hours reported should be those accrued for activities that have not been scheduled for X days. This will better reflect the overall condition of the system.

Align MSPI, INPO SSPI, and Maintenance Rule criteria. They are all trying to achieve the same goals, and yet they are all different and require a significant labor expense for no obvious benefit to the industry. Moreover, the existing gaps between them result in conflicting work management strategies.

Model control circuitry separately from the actuated component to show the actual risk worth of auto start, manual start, etc., and count failure of the control circuit in proportion to the actual worth of the failure. Failure of one start circuit typically has a risk worth of about a tenth of that of the actuated component's failure, but it counts with the actuated component. This causes diversion of station resources out of proportion to actual risk worth.

Effective July 1st, post-maintenance run times will be excluded. In some cases, mainly our service water pumps and possibly our RHR pumps, we leave the pump running following the post-maintenance test. Guidance needs to address reporting of operational run hours which are a continuation of a post-maintenance test run.

Stress the use of estimated data. Collecting actual start and runtime data is very time consuming.

Provide clearer instructions about when PRA data is required to be updated. We have a different (a)(4) model than is used for MSPI, which may impact the importance measures but, based on the PRA, does not meet the MSPI requirements for updating the importance numbers.

Improve the definition of "time of discovery" as it relates to failures. There is disagreement between the industry and the NRC as to what the station should have known versus what it did know at the time a degraded condition is identified.

Should there be any guidance for the number of "significant digits" for the unavailability numbers?

The segment approach to determining service water trains needs clarification. Currently, the way our Cooling Water system is configured, one of the 3 cooling water pumps is normally in a non-safeguards mode. MSPI requires that unavailability and unreliability data be reported for this pump even when it is not required by Tech Specs. According to the guidance, we cannot designate this pump as an installed spare since it receives an auto-start signal while in the non-safeguards mode.

Align MSPI indicators to maintenance rule. It takes a long time to look at the unavailability for maintenance rule, then MSPI. Basically we have two sets of numbers and it makes it difficult to track.

Second suggested improvement:

Eliminate UA tracking and baseline UA reporting. The UA portion carries less weight than the failures. If we can't eliminate it, then reporting to the nearest 10 hours should be adequate.

Consistency between NEI, INPO and NRC for definitions and practices related to unavailability and reliability.

The industry needs to develop some type of program for training new data stewards, either coordinated and taught through INPO or provided as a generic set of web-based training material. This will be important as the original data stewards move on to other jobs; training a new individual on how to do MSPI would be extremely difficult to accomplish inside a typical 2-week turnover period when a new engineer takes over the responsibilities of a departing engineer.

Realize that it will take at least 1/2 of a full-time equivalent position to maintain the program, especially during early implementation.

NEI 99-02, Page F-25: There appears to be a potential for an MSPI failure to exist without meeting the definition of a Maintenance Rule Functional Failure.

Somehow, some way, decrease the amount of information that needs to be input into CDE. One example is to eliminate the requirement to input actual ESF demands.

Establish that the MSPI basis document was required only for initial implementation and was not intended to be maintained up-to-date. Site procedures must be in place to maintain the MSPI constants.

Planned baseline unavailability; PRA importance values; Demand and Run Hour estimates.

Make NRC, INPO, and WANO data gathering and reporting the same, or at least better than is required now.

The "7 decade below baseline" quantification requirement should be relaxed. There is little value to this deep of a quantification.

Remove the need to validate that the estimated data is within 25% of actuals. This data should not be revised unless there is a change in maintenance philosophy. If the data must be validated, then actuals need to be collected anyway. Clarify the requirement for actuals to be within 25% of the estimated values: how often is it necessary to perform the validation, and what is required for documentation? Does the validation information need to be in the basis document?
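
As an illustration of how lightweight such a check could be, a minimal sketch in Python follows; the function name and sample data are hypothetical, not part of NEI 99-02 or the CDE software:

    def needs_basis_update(estimated: float, actual: float, tol: float = 0.25) -> bool:
        """Flag when actual data deviates from the basis-document estimate by more than tol."""
        if estimated == 0:
            # A zero estimate with any actual demands always warrants review.
            return actual > 0
        return abs(actual - estimated) / estimated > tol

    # (estimate, actual) demand counts for a quarter -- illustrative values only
    demands = {"EDG starts": (24, 31), "AFW pump runs": (12, 13)}
    for name, (est, act) in demands.items():
        if needs_basis_update(est, act):
            print(f"{name}: actual {act} deviates more than 25% from estimate {est}")

Here "EDG starts" would be flagged (a 29% deviation), while "AFW pump runs" (about 8%) would not.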

Remove requirement for solving to a truncation of 7 orders of magnitude below the baseline CDF, and return to the original requirement of 5 to 6 orders of magnitude below the baseline CDF.

I see there is still a requirement to report Actual ESF demands each month, which is annoying to all involved. Assuming this is changed, it will be important to make it clear what to do if you are using a zero estimate and then actually have an ESF demand. Also, we just noticed the instructions aren't entirely clear for the case where an SBDG is running in PMT mode and you have a grid fluctuation that results in an actual ESF start demand and load demand. It's unclear whether to count the start demand, since you can't say whether the diesel would have successfully started or not, given that it's already running.

Evaluate the inclusion of Performance Limit Exceeded. In some cases, it seems that one or two failures on some systems turn a system white, regardless of a stellar unavailability history. If PLE is maintained, we suggest derating the URI impact by inserting a 0.75 or 0.8 multiplier against it.

Until the above is implemented, allow adjusting the planned unavailability baseline during the quarter.

The process for running multiple-system Derivation Reports is too cumbersome. There should be a way to run the report for multiple systems at once, and for both Unreliability and Unavailability at the same time, similar to the Margin Report. (This has been identified informally by CDE management as a long-term objective.)

The distinction between test, operational, and ESF hours should be eliminated. The overall result of any of these is either the performance or failure of a particular component, regardless of why it was operating.

The impact of a diesel run failure should be revisited.

Consider a way to report only unplanned unavailability, since, if properly managed, there is no impact.

Eliminate the requirement to segregate the demands. This has no benefit and can take a significant amount of time to determine especially during outages.

Third suggested improvement:

The guidance should provide a default Cooling Water unplanned UA value for those plants with no unplanned UA in the baseline UA data. The guidance provides for each site to develop site-specific unplanned UA values, but zero is not acceptable.

Ensure MSPI unavailability requirements take into account licensing requirements.

Get rid of unavailability tracking. Any failure of the equipment has a much more dramatic effect on the indicator.

NEI 99-02, Pages F-25-26: XXXX supports Draft FAQ 67.3.

Use the industry input from this survey to make improvements!

Provide the ability to allow revisions/updates to EPIX reports without needing to unlock them, as long as the MSPI Yes/No determination is not changed.

Clarify guidance for maintenance induced damage/issues found during PMT prior to returning the component to functional status.

Remove requirement for solving to a truncation of 7 orders of magnitude below the baseline CDF, and return to the original requirement of 5 to 6 orders of magnitude below the baseline CDF.

Eliminate the input of planned unavailability and only provide forced unavailability.

Planned unavailability does have a PSA impact but this is covered by 10CFR50.65 (a)(4) monitoring. Forced unavailability should be the only thing monitored under MSPI. Maintenance rule (a)(4) covers this situation adequately.

The monthly reporting of the MSPI by INPO is invalid, as the planned unavailability baseline is only adjusted quarterly; therefore, the monthly MSPI can be very wrong when there are large adjustments made to the baseline. Either allow monthly adjustments of the baseline or eliminate the monthly MSPI.

Due to the logarithmic nature of the indicator, it can be difficult to provide senior management with a clear-cut picture of the remaining "% Margin Remaining to White" for a given system, which is what they have become accustomed to seeing in graph form for the SSU. Numbers of hours and failures remaining in Green are useful, but they don't have the strong visual presence of a graph, and the "PI View Report" graph in CDE isn't a good alternative. We have created a linear "% Margin" bar graph for this purpose, showing the change over a two-month window, but it somewhat misrepresents the facts since it is linear. Some guidance from INPO on how best to make this translation would be helpful.
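
For illustration, a minimal sketch of the linear translation such a bar graph implies, assuming the 1.0E-6/yr Green/White threshold commonly cited for MSPI; the function name and sample values are hypothetical:

    WHITE_THRESHOLD = 1.0e-6  # Green/White MSPI threshold, delta-CDF per year

    def percent_margin_to_white(mspi: float) -> float:
        # 100% margin at an MSPI of zero, 0% at the threshold, negative past it.
        return (WHITE_THRESHOLD - mspi) / WHITE_THRESHOLD * 100.0

    for value in (2.0e-7, 8.5e-7, 1.2e-6):
        print(f"MSPI {value:.1e}: {percent_margin_to_white(value):.0f}% margin to White")

As the comment notes, this linear view compresses the logarithmic behavior of the index, which is exactly the distortion the respondent observed.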

Clarify the guidance with regard to when estimates for demand, run hours and load hours need to be updated (i.e., only when a planned, permanent change in usage occurs, not for short-term increases or decreases due to operational/seasonal variations).

Fourth:

The indicator itself should be viewed as the numerical risk value AND the failure margin, i.e., a measure that indicates how many and what kind of failures can be tolerated before risk crosses a regulatory threshold.

Fifth:

The need to gather both MSPI data and INPO WANO data should be consolidated to minimize resource requirements.

Reliability estimates may vary by 25% before they are required to be changed. A 25% variation can have a significant effect on the MSPI value. The data submitted quarterly to the NRC must be complete and accurate; this should include a confirmation that the estimates are within 25%. An error made when reporting actual demand data will have a very insignificant effect on the MSPI value, and the guideline should address this by adding an allowance similar to the 25% allowance when using estimates.

Revise EPIX reporting so that it is clear when choosing responses whether this will cause an event to be a demand or run failure. This can be confusing and may lead to the wrong failure type.

Eliminate the difference in counting between WANO, MSPI, and Maintenance Rule.

Additionally, the performance criteria should be similar between the Maintenance Rule and MSPI; this way both programs drive the same thing.

Section III - Implementation Issues and Practices

1. Please provide any comments or suggested improvements to the following implementation-related items.
a. Consolidated data entry

The NEI task force and INPO should get on the same page with respect to PRA data entry. The guidance provides for a table format that is directly opposite to how the tables are laid out in CDE. This is a human error trap. Also, entering PRA data three separate times for start, run and load data for components is also a human error trap. Most sites have only one PRA value for all three, and chances are there will be a mistake.

CDE should have a feature to make one entry and make the computer do the math.

No comments.

Make data element entry, review, and approvals a single screen for all units.

The program is good, no needed changes found.

Allow changes based on the baseline revision rather than waiting until after the quarter has ended.

CDE is very slow, regularly locks up, and dumps my data entry (what is a "SOAP" error anyway?). Because of this, I end up doing multiple entries of the same information every month. Very frustrating.

A CDE report separate from the Derivations Report that would summarize the potential/actual failures for all the MSPI systems would be helpful.

We have a monthly report for Management/Plant Staff use that summarizes MSPI Margins and Actual Reliability/Unavailability. We now must run a Derivations Report for each MSPI system and glean the potential failures from each report.

INPO still has room to further consolidate the data they collect. See above.

Runs a little slow but it's okay.

Saving data is extremely frustrating. It takes way too long to save, and many times comes back as unable to save. You know how many people have to use it, and everybody is basically entering data in the same time frame, so provide a system that can handle the needed capacity. This needs to be addressed.

The Equipment search results screen needs to display PLANT specific component name instead of industry std name.

Have one page where all data can be entered. The INPO software should be able to take the data fields and use them as necessary.

Current data entry is cumbersome and leads to false indication unless significant manual checking and verification of data entry is performed. It is highly recommended that the data input process be revised such that the licensee would enter one text file containing all required key parameters, and the CDE program would then automatically process the content of the data file for each system.

Generally easy to use and navigate.

When filling out the CDE section on MSPI each system asks for any "Potentially Related Failures this Month". Not sure what the expectation is for this section. We evaluate potential failures under our corrective action program and only enter an event once it's determined to be a maintenance rule and/or MSPI failure. Unless we have an actual failure this would always be answered "no".

Data review screens that show approvers the value submitted rather than just the MSPI number.

1. There seems to be some inconsistency in the effective dates for changes. Changes to components (removal/addition) are effective immediately, whereas PRA changes are not effective until the next quarter.

It also would seem that changes should be allowed up until the required quarterly submittal date (the 21st day of the appropriate month) for them to be effective the next quarter.

2. The potential failures are identified in the unavailability section; they should really be identified in the reliability section.

A table format would make initial data entry easier than navigating through layers of pages.

Once we learned the location of all the data fields required for MSPI, it has not been too difficult to update the data.

b. Consolidated data entry what-if feature

Can the what-if feature be improved so that the effect of changes can be more readily evaluated? Data entry should default to zero UA for future months; critical hours should also have a default.

Cumbersome to use.

Make it work. Enable a 12-quarter projection feature.

Too cumbersome.

Requires improvement. Users cannot enter failure data directly into the "what if" mode to evaluate impacts of proposed activities.

Have not used - none.

I used to use this regularly, but this is now so cumbersome that I don't use it. Trying to input hypothetical failures is way too time consuming.

It takes some practice to use the "what-if" feature efficiently.

We have not used this feature much, but it is definitely needed to predict future MSPI outcome.

Allow it to work by changing one parameter. Having to manually re-enter the data is non-productive. Also, provide a means to save separate what-if scenarios so they don't have to be re-created every time a question comes up.

Rarely used.

Allow simple ability to enter a postulated failure, selectable by component and failure type in order to view impact on MSPI values.

1. When performing WHAT-IF scenarios allow saving of different cases.
2. When printing a WHAT-IF derivation report and margin report clearly indicate that it is a WHAT-IF.
3. For every scenario, when printing the derivation and margin reports list the deviations/changes from the production copy.

Anything to ease the process of updating data in the CDE would be welcome. I should be able to view and edit all of the relevant MSPI CDE data in a few pages. The repetitive drill-down process of the CDE is onerous.

The what-if feature should be improved such that the licensee would be able to use the recommended process listed above (Section III, 1.a) and, additionally, if satisfied with the data entry and results, click a button to transfer the data changes made in the what-if module to (replace) the production data module. The current process requires cumbersome re-entry of data in separate modules (production module and what-if module).

Hopefully, the switch to estimated ESF demands will also be incorporated here.

Should have a coach report like CDE.

Haven't had all of the "live" data loaded into it, so it hasn't been used.

Just recently learned that it requires pre-loading.

The what-if feature should auto-populate the unavailability data and reliability data to eliminate the need to input data for each month. Unavailability could be populated as zero, plant generation as 100%, and actual ESF actuations as zero.

Provide auto-update from real data. Allow running a what-if calculation by just entering a number of failures (or changing unavailable hours) of a given type and component, vs. requiring full creation of a pseudo failure record on the "failure" screen.

A simple failure addition method. Using an EPIX entry for what if is too detailed. A simple thing that failed, failure date and type of failure is all that's needed to "what if".

Provide a simplified front end that allows easy manipulation of input data.

Potentially provide a table or listing of the existing inputs with an empty cell next to each existing input for entry of what you want to change the data to.

(1) Too cumbersome. No control of data manipulation when more than one individual has "What-If" access rights. We should have a feature to save different "What-If" scenarios. Currently, when we want to run a new "What-If" scenario, we need to spend additional time to upload all new data, just to ensure the data is current. We are not sure if the existing data in the "What-If" calculator has been modified by a previous user.

(2) Adding new failures is not a simple task. To clone an existing failure makes it simple, however, if a system does not have failures to clone, inserting a new failure is time consuming.

(3) Unmanageable effort to do long term "What-If" projections, due to the amount of data entry required.

The what-if feature is very helpful in projecting changes to the MSPI numbers.

c. MSPI web board

Too much misleading information is provided. This should be a place where questions are posted and definitive answers are provided by someone with the authority to interpret the guidance.

No comments.

Never used. Communicate access and user requirements.

Questions asked on the web board do not receive a definitive answer. People respond with differing opinions, and one is then left to one's own interpretation. It leaves one to wonder what each reader takes from the responses.

Have not used - none.

Don't use it.

Incorporate the Official FAQ logs and discussions on the web board.

We have not used the web board much since MSPI initial implementation. However, we have made extensive use of the questions asked by others via the email notification feature.

Good that it is available, though we don't always see a lot of response.

Require a response time. I submitted a question on the web board and it has been almost two weeks and I haven't gotten a response. Also allow for personal sorting and organizing of information.

Have one consolidated response to questions to avoid confusion.

It's not clear how "official" the answers on the board are, unless a direct reference is given to an NRC-approved document.

Haven't used up to this point.

A weekly or monthly summary (digest version) would be nice.

The MSPI web board is something that should continue. It's an ideal forum for the industry to receive informal clarification of issues or guidance interpretation without going through the formal FAQ process and/or task force.

The web board is a good forum for working out problems, which should reduce the number of FAQs on MSPI.

d. Process for addressing frequently-asked questions

The process needs to be flowcharted. Also, a better means of researching FAQs is needed.

FAQs are not treated with the same rigor as the MSPI program set-up (i.e., incorporate approved FAQs into NEI 99-02).

Provide guidance on WANO FAQ that was deleted when NEI 99-01 revision 4 was developed.

Web board should have links or pointers to FAQs.

It's adequate.

Easy to access.

Ok, but not very timely.

Needs to provide an initial forum of unbiased, objective personnel to screen draft FAQ submittals. This forum can validate the need for guidance clarification and move the FAQ along to final review / approval.

Streamline the process.

FAQs should be sent to stakeholders; it is not clear that MSPI-related communication reaches those affected.

Seems to take a long time to achieve resolution, and the output documents aren't as useful as they might be. Case-specific implementation guidance for those wanting to take advantage of the results would be useful.

Many issues need clarification without a disagreement with the Resident having to occur first.

The FAQ process needs to include submitting an FAQ in order to get an official interpretation or ruling on a particular issue. In some cases, the resident has recommended that I submit an FAQ. It is not that the resident disagreed with the interpretation but that they could not provide an official interpretation and recommended an FAQ be submitted to get that official interpretation.

A way of having an industry review for situations that are not clearly addressed would be nice. These might not be FAQs but are areas somewhat open to interpretation.

The points of contact should be provided the potential FAQs, as this has an impact on how issues are counted at their plants.

e. Need for additional training on MSPI

People need more training on the what-if mode and how to use it.

Industry training should be set up for new/potential replacement engineers, given the aging work force, and for operations management.

Webinars during roll-out were helpful, but infrequent and poorly scheduled for plant personnel involved with many other demands.

Initial training for personnel new to MSPI and on-going (annual) training for others. A suggested format might be an annual workshop with 1-2 days of training for new personnel followed by 1-2 day conference to share insights, upcoming changes, etc. This is necessary to ensure users are providing consistent results and that new personnel have a thorough understanding, especially with respect to the nuances of bases documents.

Needed: more NEI-developed training modules for NRC, Planners, OPS, etc.

Turnover is inevitable and regular CDE training courses should be offered.

A "MSPI User's Group" might be useful to allow for interchange of ideas and methods on an ongoing basis.

We may need training assistance when the current MSPI personnel retire.

Improvement lessons learned issued in areas like NRC responses, failure reporting and unavailability reporting would be helpful.

Management needs training on personnel responsibilities.

Training ends up being more hands-on, as you work through issues.

MSPI training should be available every year since people are moving around companies in different positions.

More detailed information on the derivation of the MSPI Margin Report.

Regular training is necessary for system engineers, due to the transient nature of system assignments.

No additional training is required from the industry. CNS Training has already performed a needs analysis for SED training requirements related to CNS. A lesson will be developed and implemented based on changing of assignments.

Since component failures have the most impact on the indicators, additional training on the guidance related to component failures (e.g., component boundaries, design vs. PRA vs. licensing success criteria, etc.) may be beneficial.

A periodic net meeting could be nice.

No.

Based on the quantity of web board questions and the differences of opinion on answers, it appears that a post-implementation lessons learned review is in order.

Yes. Our fleet has a lot of new coordinators taking over the program. In addition, it would be good to go over lessons learned after the first year of implementation.

More direct training should be provided to those who have MSPI responsibilities.

2. Implementation resources

Please estimate the total person-hours expended by plant personnel on a monthly basis to maintain MSPI, including collecting, entering and verifying data; reviewing results; reviewing margin and planning improvements to increase margin; maintaining basis documents; and any other MSPI-related activities.

Risk management personnel:

To date, there has been no identified need to update the PRA information in the MSPI Basis Documents (other than planned updates for other reasons). Thus, to date, no effort has been expended updating these documents. However, as in all living risk applications, such a need will eventually occur.

20 (based on 2 persons @ 100 hours each per year for margin-related PRA updates and 4 hours per month for Q&A); 60 (based on 2 3-month efforts to improve PSA to obtain more margin for HPSI)

To date, there has been no identified need to update the PRA information in either of the MSPI Basis Documents. Thus, to date, no effort has been expended updating these documents. However, as in all living risk applications, such a need will eventually occur. Since we have no experience as to how much effort will be required, it is not possible to estimate an average risk resource expenditure necessary to maintain MSPI documents.

0 per month. It takes approximately 80 hours to update the basis document once the PRA parameters have been changed. It is noted that XXXX has not updated the PRA numbers since the implementation of MSPI.

24, 5, 8, 8, 10, 24, 6.5, 4, 5, 16, 2, 8, 5, 5, 55, 2, 20, 1, 20, 0, 4-6, 1, 6, 8, 1, 0, 8, 4, 8, 2, 8, 20, 15 person-hours per month

Plant engineering personnel:

92 hours per month is the normal amount of total time for both units. Since the tasks are performed at the same time, this includes the time associated with the Maintenance Rule and WANO indicator as well. During an outage (2 months per 18 months per unit) the tasks will take an additional 40 hours due to the increase in the number of operational demands.

10, 100, 144, 32 hours per system, 160, 60, 36, 14, 60, 40, 50, 8, 62, 48, 60, 56, 60, 80, 60, 50 (10 per system), 40, 60, 40, 70, 72, 30, 12-15, 30, 32, 10, 80, 30, 48, 110, 120, 40, 40, 90, 2, 72, 60, 125 person-hours per month

Regulatory affairs personnel:

5, 4, 16, 16, 10, 10, 8, 2, 5, 4, 1, 2, 16, 8, 1, 8, 3, 16, 0.5, 20, 20, 4, 1, 2, 2, 20, 5, 0, 48, 0, 25, 2, 8, 1, 2, 4, 15 person-hours per month

Other personnel (specify):

Work Planning: 100 person-hours per month
MSPI coordinator (Engineering): 10 person-hours per month
Operations: 10 person-hours per month
Maintenance: 30 person-hours per month
Management: 3 person-hours per month
MSPI Coordinator: 30 person-hours per month
Management: 1 person-hour per month
PI Coordinator: 4 person-hours per month
Performance Indicator Analyst: 6 person-hours per month
Plant Management: 1-2 person-hours per month
Corp. Reg. Affairs: 5 person-hours per month
Data entry and collection: 100 person-hours per month
MSPI coordinator: 30 person-hours per month
Data Entry: 1 person-hour per month
PI Analyst: 10-15 person-hours per month
Shift Technical Advisor: 16 person-hours per month
EPIX: 2 person-hours per month
ES Manager: 4 person-hours per month
Performance: 5 person-hours per month
CDE Coordinator: 2 person-hours per month
Data entry, review, approval: 3 person-hours per month
Monthly MSPI Review Board Meeting: 8 person-hours per month

NEI Industry Reactor Oversight Process Task Force Members

John Butler - NEI
Brian Ford - Entergy
Al Haeger - Exelon Nuclear*
Ken Heffner - Progress Energy
Duane Kanitz - Palo Verde/STARS
Julie Keys - NEI*
Lou Larragoite - Constellation
Roy Linthicum - Exelon Nuclear*
Fred Mashburn - TVA
Glen Masters - INPO*
Dave Midlik - Southern Company
Kay Nicholson - Duke
Don Olson - Dominion
Jim Peschel - FPL
Robin Ritzman - First Energy
Gerry Sowers - Palo Verde*

* - directly involved in preparation of MSPI assessment

Proposed Changes to Public Radiation Safety SDP - ML072960690