ML20129F515

AEOD/S804C, Maint Indicator Demonstration Project
ML20129F515
Person / Time
Issue date: 05/31/1990
From:
NRC OFFICE FOR ANALYSIS & EVALUATION OF OPERATIONAL DATA (AEOD)
To:
Shared Package
ML20129F513 List:
References
TASK-*****, TASK-AE AEOD-*****, AEOD-S804C, NUDOCS 9610020006
Download: ML20129F515 (69)


Text

DRAFT (May 1, 1990)                                        AEOD/S804C


MAINTENANCE INDICATOR DEMONSTRATION PROJECT


May 1990

Report Prepared By:

U.S. Nuclear Regulatory Commission
Office for Analysis and Evaluation of Operational Data

EXECUTIVE SUMMARY

In response to the Commission's direction in the Staff Requirements Memorandum on SECY-89-143/COMLZ-89-21, "Amendment to 10 CFR 50 Related to Maintenance of Nuclear Power Plants," June 26, 1989, the staff initiated a "demonstration project" for the development of maintenance performance indicators. This report documents the lessons and results of the Demonstration Project.

Candidate utilities for the Demonstration Project were identified by the staff based upon characteristics such as nuclear steam supply system (NSSS) design, plant age and power rating, utility organizational size or number of plants operated, and location. Through the coordination of the Nuclear Management and Resources Council (NUMARC), six utilities volunteered to participate and provide a member to the Project group. The six utilities were: Commonwealth Edison Company, Duke Power Company, Northeast Nuclear Energy Company, Rochester Gas and Electric, Southern California Edison Company, and System Energy Resources, Incorporated. The utility participants agreed to a project limited in scope to the review of the NRC staff's proposed maintenance effectiveness indicator.

The Demonstration Project was conducted through a series of centralized meetings of the Project group and individual site visits between September 1989 and March 1990. The elements of the Demonstration Project were: presentation of the NRC initiative to the utilities, data review and analysis of the proposed indicator by the participating utilities and INPO, plant-specific discussions of maintenance management and monitoring techniques, and further development activities by the staff.

Individual meetings with each of the participating utilities typically took one and one-half days and involved their maintenance managers, along with members of their reliability or performance assessment groups. The Project group was well rounded with representation from utility maintenance, operations, licensing, and performance or reliability assessment organizations.

The industry participants did not agree that the NRC staff's proposed maintenance indicator was a measure of maintenance effectiveness. This disagreement arises from the industry's limited definition of maintenance versus the NRC's broad definition of maintenance, as described in the policy statement. The staff believes that consensus was reached on specific improvements to the proposed maintenance indicator. These improvements involve the indicator construction, calculation, and use.

Based upon AEOD's perspective on the issues that arose during the Demonstration Project, the following changes are suggested:

1. Revise the algorithm used in calculating the indicator to eliminate "ghost ticks" and capture "shadow ticks." Two alternative methods developed and being tested by the staff were introduced to the Demonstration Project during the March 1990 joint meeting.

2. Modify the overall indicator to include both system-based and component-based indications. The calculations and displays are being modified to also show component-based indications.

3. The staff should obtain critical components lists from participating utilities and determine if some of the components monitored by the present indicator could be deleted.
4. Modifications to the list of components to be monitored should be explored by the staff to include more safety system equipment and refine the previous scope.


5. Continue to encourage improvement in NPRDS participation and support the initiatives in the Industry Action Plan regarding improving NPRDS data quality.
6. Encourage NUMARC to include a specific element in the Industry Action Plan that addresses improving the quality of maintenance documentation regarding the nature of the equipment failures (cause and function impact), and assuring that adequate resources for NPRDS reporting are provided during outage periods.


MAINTENANCE INDICATOR DEMONSTRATION PROJECT

EXECUTIVE SUMMARY ............................................... ii

INTRODUCTION ..................................................... 1

INDICATOR TECHNICAL ISSUES ....................................... 2
    Indicator Construction ....................................... 2
    Equipment Selection .......................................... 3
    Nexus to Maintenance ......................................... 4
    NPRDS Data Quality ........................................... 6
    Recommended Approach ......................................... 8

APPENDIX A ..................................................... A.1
    SUMMARIES OF MEETINGS ...................................... A.1

APPENDIX B ..................................................... B.1
    MAINTENANCE INDICATOR DEMONSTRATION PROJECT DETAILS ........ B.1

APPENDIX C ..................................................... C.1
    INDICATOR TECHNICAL ISSUES ................................. C.1


MAINTENANCE INDICATOR DEMONSTRATION PROJECT

INTRODUCTION

In response to the Commission's direction in June 1989,¹ the Nuclear Regulatory Commission (NRC) staff initiated a "demonstration project" for the development of maintenance performance indicators.

The staff identified candidate utilities based upon characteristics such as NSSS design, plant age and power rating, utility organizational size or number of plants operated, and location. Through the coordination of the Nuclear Management and Resources Council (NUMARC), the Institute of Nuclear Power Operations (INPO) and six utilities agreed to participate. The six utilities in the Project were: Commonwealth Edison Company, Duke Power Company, Northeast Utilities, Rochester Gas and Electric, Southern California Edison Company, and System Energy Resources, Incorporated. These industry participants agreed to a project limited in scope to the review of the NRC staff's proposed maintenance effectiveness indicator.²

The Demonstration Project included the following elements: presentation of the NRC indicator to the utilities; NRC staff preparation of computer files and associated information, including presentations of failure records for which the proposed indicator was used to develop a representation of maintenance effectiveness; data review and analysis by the participating utilities; analysis by INPO; substantive discussions of maintenance management and monitoring techniques with plant staffs; further development activities by the staff; and working discussions of the Project group members to formulate results. Meetings with individual utilities generally involved their maintenance managers and the reliability or performance assessment groups (Appendix A). These meetings, which were typically one and one-half days long, followed the same basic agenda (Appendix B). The Project group was well rounded with representation from utility maintenance, operations, licensing, and performance or reliability assessment organizations.

The Demonstration Project participants worked toward consensus on issues associated with a performance indicator to monitor the effectiveness of maintenance at domestic nuclear power plants. The staff believes that some technical consensus was reached on various methods to improve the NRC staff's proposed indicator. These included changes to the indicator construction and calculation, the identification of potential improvements to the Nuclear Plant Reliability Data System (NPRDS), and, to some extent, use of the indicator. However, fundamental differences in perspectives on what constitutes maintenance functions resulted in disagreement over whether the indicator was a measure of maintenance effectiveness.


¹ Staff Requirements Memorandum on SECY-89-143/COMLZ-89-21, "Amendment to 10 CFR 50 Related to Maintenance of Nuclear Power Plants," June 26, 1989.

² The development and previous validation efforts regarding the NRC staff's maintenance indicator have been documented in two reports: AEOD/S804B, "Application of the NPRDS for Effectiveness Monitoring," issued in January 1990, and the EG&G Idaho, Inc. report, "Maintenance Effectiveness Indicator," issued in October 1989.


Typically, the maintenance manager does not control all the elements of the broadly defined maintenance process, but feels accountable for anything labeled "maintenance." The proposed indicator is programmatic by design, and is premised on the broad view of maintenance as outlined in the NRC Policy Statement on maintenance and industry guidelines. However, this view of maintenance and the current configuration of the indicator do not match up in a practical way with traditional maintenance line organizations at the sites. Thus, it may be difficult for plant staffs to benefit in a direct way from the proposed indicator in its current form. Variations of the indicator to make it more useful for plant staffs were discussed, including component scopes consistent with reliability-centered maintenance (RCM) programs, and calculating the changes in failures by component type as well as by system. Potential changes to the indicator to address these concerns were also discussed, including both the indicator construction and the calculation technique.

INDICATOR TECHNICAL ISSUES

The technical review by the participating utilities focused on the construction of the indicator, the equipment covered by the indicator, the alignment of the indicator with the utility maintenance program (nexus to maintenance), and the quality of the data that supported the indicator. The staff believes that consensus was achieved on selected improvements to the indicator.

Indicator Construction

The indicator construction issues refer to the calculational technique employed to generate the indications from a plant-specific database. Two issues emerged that are being addressed as a result of the Demonstration Project. These are: (1) ghost and shadow ticks and (2) failure grouping.

Ghost and Shadow Ticks - In special cases, an indication would occur in a month in which no component failures were discovered, or the number of failures declined from the previous month. This phenomenon was referred to as a "ghost tick." The original algorithm created an indication based upon a change in the average of a month-to-month count of component failures. As a result, a high number of failures discovered in one month would carry over its impact into the average calculation for the subsequent month even though no new failures or fewer failures were discovered in that month. For the case of no new failures, a utility charged with troubleshooting the cause for its indications would find no basis for the indication in the month assigned. Although this feature served as a measure of the magnitude of the first month's component failures, it was misleading.

Conversely, the original calculation algorithm also led to the masking of some significant failure changes. These indications which were not generated were known as "shadow ticks." In this case, indications were not generated for significant failure changes because preceding significant increases in failures overshadowed the two-month average associated with the later failure increase. Two alternative calculational techniques were developed and are being tested by the staff. They are described in Appendix C.
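The ghost and shadow behavior of a rolling-average rule can be reproduced in a few lines. This is only an illustrative sketch: the report does not give the original algorithm in detail, so the two-month averaging window, the change-based comparison, and the threshold value used here are all assumptions.

```python
def two_month_average_ticks(failures, threshold=2.0):
    """Flag month m whenever the two-month average of failure counts
    rises by more than `threshold` over the preceding two-month average.
    A sketch of the behavior described in the text; the threshold and
    exact comparison rule are assumptions."""
    ticks = []
    for m in range(2, len(failures)):
        prev_avg = (failures[m - 2] + failures[m - 1]) / 2.0
        cur_avg = (failures[m - 1] + failures[m]) / 2.0
        if cur_avg - prev_avg > threshold:
            ticks.append(m)
    return ticks

# "Ghost tick": month 3 is flagged even though failures declined from
# 12 to 5, because month 2's spike carries over into the average.
print(two_month_average_ticks([0, 0, 12, 5]))   # [2, 3]

# "Shadow tick": the jump from 0 to 8 failures in month 3 is never
# flagged, because the preceding spike overshadows the averages...
print(two_month_average_ticks([0, 30, 0, 8]))   # []

# ...whereas the same jump on a quiet history would be flagged.
print(two_month_average_ticks([0, 0, 0, 8]))    # [3]
```

The sketch shows why averaging was attractive (it measures the magnitude of a failure burst) and why it misleads: the tick lands in a month with no basis for it, and a real increase can be hidden behind an earlier one.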

Failure Grouping - The issue of failure grouping arose from the consideration that the indicator (cluster of failures that caused an indication) should be amenable to root cause analysis by the utility. The original indicator was calculated based on the change in the failures of a selected set of components (Outage Dominating Equipment [ODE]) within a system. This method produced indications that were, in many cases, due to the failure of different types of components (e.g., circuit breakers, pumps). This arrangement was consistent with the maintenance approach of some utilities in the Demonstration Project since systems were often made available for maintenance during a certain chronological interval, and the discovery of failures would tend to cluster (indicate) by system. In addition, the approach provided a usable tool that enhanced accountability for plants with systems engineers. However, other members of the Project group noted that this method would result in complicating the cause analysis. They requested a second version of the indicator that grouped indications by type of component. This method enhanced the root cause evaluation.
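The two groupings discussed above amount to counting the same failure records under different keys. A minimal sketch, with record fields and values invented for illustration (not taken from NPRDS):

```python
from collections import Counter

# Hypothetical failure records for one plant-month.
failures = [
    {"system": "feedwater", "component": "circuit breaker"},
    {"system": "feedwater", "component": "pump"},
    {"system": "service water", "component": "pump"},
    {"system": "feedwater", "component": "circuit breaker"},
]

# Original grouping: cluster failures by system. One indication may
# mix several component types, which complicates root cause analysis.
by_system = Counter(f["system"] for f in failures)

# Second version requested by the Project group: group by component
# type, so each cluster points at one kind of equipment.
by_component = Counter(f["component"] for f in failures)

print(by_system)
print(by_component)
```

The system view matches a maintenance organization that releases whole systems for work in a window; the component view hands the analyst a homogeneous cluster to diagnose.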

Equipment Selection

The equipment selection for the proposed indicator was the subject of review during the Demonstration Project. Two aspects are worthy of note. First, the staff's original equipment selection assumptions were relatively good. Second, the utilities preferred that the equipment selection reflect the plant's maintenance approach.

The original equipment selection for the indicator was restricted to achieve maximum reporting consistency across plants. NPRDS reporting is a function of the initiation and processing of corrective maintenance work orders. To accommodate the range of the aggressiveness of the operating crew and organization in work identification, a set of equipment was selected for the indicator that would likely be the subject of timely repair and work order processing. This set of equipment was that needed to support plant operation. In general, the staff's assumption was affirmed regarding the assurance that this set of equipment would be the subject of closer scrutiny, and hence, less reporting variability. Some licensees in the Demonstration Project had created a similar list of components based upon their operating experience. These components were then the subject of special oversight. This visibility helped ensure that the work order flow and information contained on the NPRDS record were better than average. Participating licensees also noted that the same maintenance program and practices are employed on balance-of-plant (BOP) equipment and safety equipment. Therefore, the maintenance quality on this set of equipment would likely be representative of the maintenance on all plant equipment.

The equipment selection for the indicator may not fully reflect the plant's approach to maintenance. The maintenance programs for some components may utilize a "run to failure" philosophy. Under such a philosophy, no systematic preventive maintenance (PM) is applied to the component. Only corrective maintenance is utilized. As a result, some component failures captured by the indicator would be allowed by the plant's maintenance program. Two of the Demonstration Project participants were conducting major RCM programs and viewed this as a major potential difference between the goal of the indicator and the results. RCM methodology, using cost/benefit analysis, may dictate such a philosophy based upon engineering analysis of the impact of the failure (i.e., the local, system, or plant level impact). Redundant components and technical specifications (TS) are among the considerations of that analysis. For example, equipment failures with only a local impact and no TS consequences may permit omission of any PM on the component.

Two of the participating utilities agreed to furnish the staff with a list of their critical components (i.e., components contained in their PM program as a result of RCM activities), and the staff agreed to determine if some components should be eliminated or added to the list of components monitored by the indicator. A general comment that was received from the participants was that the impact of each of the individual component failures being counted was, in most cases, not significant. Normal priorities of maintenance may allow some of the failures. The industry would prefer that the staff reduce the scope of monitoring to a smaller and more significant set of equipment, preferably only safety-related equipment. The proposed indicator monitored approximately 600 to 2000 components per plant to reflect a broad scope of the maintenance activities.

Nexus to Maintenance

The nexus of the indicator to maintenance was a major topic during the Demonstration Project.

The staff reviewed selected component failures with each of the participating utilities. The utilities' analyses of these failures and the indicator consumed up to 6000 person-hours and used all of the information available, including the memory of individuals. Therefore, their cause analysis went far beyond that afforded the staff through review of the narrative descriptions of the causes contained in the NPRDS failure record. Based upon a review of the NPRDS records, the staff had found that 84% of the component failures that comprised the indicator were related to maintenance, under the NRC's perspective of maintenance. During the Demonstration Project, the utilities found that about 14% of such failures were due to maintenance (primarily errors of commission), under their perspective. The major differences between these failure categorizations were associated with failures that the staff assigned to maintenance but which the industry assigned to wearout, design/manufacturing/application, random, or unknown causes.

Many problems that arise from design or application deficiencies are responded to through preventive maintenance measures. The industry asserted that the indicator did not measure maintenance effectiveness since the root cause for the failures comprising the indication was not maintenance-related, under their definition. For example, a utility would argue that charging pump failures or feedwater pump seal failures were due to the original design of the equipment. However, during the discussions, the staff noted that the solution to the problem was through the plant's maintenance program; that the occurrence or disappearance of the failures was a direct result of the preventive maintenance program; and that the indicator was measuring changes in failure frequencies resulting from that program.

Another major cause for failures that, from the viewpoint of the industry, clouds the relationship of the indicator to maintenance is wearout. Specifically, the industry categorized many failures not as maintenance, but as "first of a kind wearout." This category contained many failures that the staff contended could have been detected and repaired at the incipient failure stage through a predictive maintenance program or an aggressive preventive maintenance program. If detected and repaired as incipient failures (prior to the component function being degraded below specification), they would have escaped the indicator. Again, as in the previous example, first failures were frequently addressed by preventive maintenance. If a component (e.g., transmitter) continually failed after three years in a claimed design life of five years, the utility would initiate a preventive replacement at the earlier point, thereby eliminating the failures. The indicator would detect the change as a result of improved maintenance, and the dispute regarding the initial cause of the failure is moot relative to the integrity of the indicator.


This fundamental difference in perspective between the staff and the participating utilities on the nexus of component failures to maintenance came to be referred to in the Demonstration Project as "Big M versus Little m." These differences were graphically illustrated during the Demonstration Project.

During the proof of concept phase of the development of the proposed indicator, the staff categorized the causes for approximately 4000 NPRDS failures based on the failure narratives. These narratives were sampled from the periods of ODE component failure history which generated indications. This categorization represented the "Big M" perspective. The results of this categorization can be seen in Figure 1.

[Figure 1. Nexus to Maintenance - NRC Perspective]

The categories shown in Figure 1 should be understood as follows: the failures assigned to a category could be reduced by improvements in that programmatic area. For example, failures associated with the "PM" category were judged to be reducible either by improved implementation of an existing PM program, such as extending the program to cover additional equipment, or by instituting a more extensive PM program, such as using vibration analysis or periodic oil sampling. Two cases drawn from the documentation of recent NRC maintenance inspection results serve to further illustrate this PM assignment. In the first case, four forced outages involved degradation of reactor recirculation pump seals. To address this problem, the licensee initiated a program to collect and analyze reactor recirculation pump shaft vibration data, and has modified the seal replacement frequency.³ In the second case, a turbine-driven auxiliary feedwater pump oversped and tripped on start. The cause was traced to lack of a PM program for periodic flushing of the governor as recommended in the vendor technical manual.⁴ In both cases, the staff would assign the failure to the "PM" category.

Figure 2 shows the "Little m" perspective. It was developed during the Demonstration Project by the utility participants with the assistance of INPO and captured distinctions

[Figure 2. Nexus to Maintenance - Industry Perspective]

³ Letter from A. B. Davis, NRC, to C. Reed, CECo, transmitting initial SALP Report for the Quad Cities Nuclear Plant, February 2, 1990.

⁴ Letter from L. Reyes, NRC, to J. Goldberg, FPL, transmitting Notice of Violation for the St. Lucie facility, March 14, 1990.


they felt important in determining the nexus to maintenance. These distinctions were also brought out in the review of ODE failures the staff considered related to maintenance during site visits.

"Design" constitutes a large percentage in Figure 2, about eight times larger than the amount assigned by the staff. Such a difference is understandable in that the regulatory perspective of maintenance includes the feedback of experience gained through engineering and design modifications used to eliminate component performance problems, and over time, this process does achieve improvements. Examples of this that were discussed in some detail in the Demonstration Project included the improvement in charging pump performance at Ginna, the improvement in service water pump performance at Grand Gulf, the planned upgrading of charging pumps (replacement of blocks with new design/material) at San Onofre 2&3, and increased surveillance frequency on recirculation pump pressure switches to compensate for drift before their function was impaired. In all these cases, the number of failures experienced decreased or should decrease, thereby reflecting improvement. In other cases, however, failures would continue to occur at about the same rate because, as discussed for Millstone 3, utility studies showed that it was cost-beneficial and safe to simply periodically repair a marginally designed main feedwater pump seal rather than to pursue a design improvement.

NPRDS Data Quality

The utility participants stated that NPRDS data provides information on maintenance, but quality limitations impact its usefulness for a maintenance indicator used by the regulator. There is consensus between the staff and the industry that continued improvement and strengthening of NPRDS is needed.⁵ In particular, the utility participants felt that a source of inconsistency in NPRDS reporting was the determination of the existence of a degraded component state needed to trigger NPRDS reportability. Overreporting of minor or incipient conditions revealed by an aggressive and proactive PM program as degraded failures would result in relatively more failures being used in the indicator, and potentially a greater magnitude in the indicator over time. However, regardless of any overall NPRDS data quality improvements, the data can still be used to determine trends at individual plants.

The proposed indicator itself is designed to be benign to proactive preventive or predictive maintenance. This was accomplished by only including NPRDS failures designated by the utility as "immediate" or "degraded." These terms are defined in NPRDS as follows:

Immediate - A failure that is sudden and complete.

Degraded - A failure that is gradual, partial, or both. The component degrades to a level that, in effect, is a termination of the ability to perform its required function. This code should be chosen when a system or component does not satisfy the minimum acceptable performance criteria for a specific function or when a component must be removed from service or isolated to perform corrective maintenance.

⁵ The revised Policy Statement on Maintenance states in part: "The Commission encourages the use of the industry-wide NPRDS data ... including improved industry use of and participation in the NPRDS to gauge the effectiveness of maintenance."


Proactive maintenance would identify incipient conditions, defined as an imperfection in the state or condition of a component that could result in a degraded or immediate failure if corrective action is not taken. These incipient conditions would be reversed by PM prior to the component entering a degraded condition where function was impaired, or such impairment was imminent. An incipient designation or code indicates an optional record, since failure has not occurred. This code is also used by INPO to classify records judged not to be failures during the INPO failure audit process. NPRDS documentation provides extensive guidance for making this determination.
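The severity screening described above reduces, in effect, to a filter on the NPRDS severity code: only "immediate" and "degraded" failures feed the indicator, while optional "incipient" records are excluded so proactive PM is not penalized. A sketch with hypothetical records (component names invented for illustration):

```python
# Severity codes that feed the indicator, per the definitions above.
COUNTED_CODES = {"immediate", "degraded"}

# Hypothetical NPRDS-style records.
records = [
    {"component": "charging pump", "code": "degraded"},
    {"component": "transmitter", "code": "incipient"},   # found by PM; excluded
    {"component": "breaker", "code": "immediate"},
]

indicator_failures = [r for r in records if r["code"] in COUNTED_CODES]
print([r["component"] for r in indicator_failures])
# ['charging pump', 'breaker']
```

The screen only works as intended if incipient conditions are actually coded as incipient; misreporting them as degraded (the overreporting concern above) would inflate the indicator for the plants doing the most proactive maintenance.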

The proposed indicator should not penalize proactive preventive and predictive maintenance. It is important that incipient conditions discovered by these programs not be interpreted as degraded failures in the use of the indicator. Improvement of the quality of maintenance work orders appears essential to achieving improvement in NPRDS quality. The utility participants felt that work orders often do not contain enough detail to make a proper determination. The industry also feels that the lack of detail in work orders clouds root cause documentation and impacts the diagnostic value of the indicator. Also, timely maintenance work request close-out and associated NPRDS reporting is needed to capture important details. These difficulties can be addressed through strengthening the quality of the maintenance work order documentation process, for example as done at Grand Gulf through a dedicated closeout Engineering Review Group (ERG), and through greater rigor in the quality assurance review conducted by INPO.

Grand Gulf has established the ERG to improve the quality and timeliness of maintenance documentation and closeout. The charter of the ERG, as created within the Performance and System Engineering Department, is to perform a final, independent review of maintenance work orders prior to closeout. Grand Gulf has tasked this group with ensuring work orders reflect adequate details of the identified problem, including the overall work scope, root cause, corrective actions taken, and component failures. The ERG represents a plant improvement with the potential for a direct impact on maintenance indicator development, addressing the concerns expressed about the quality of NPRDS reporting and its effect on the indicator. A group such as the ERG provides additional assurance that the failure information documented in the maintenance work orders (MWOs) is accurate and complete. This, in turn, helps assure that the subset being reported to the NPRDS is accurate and complete.

! I I The specific duties of the ERG consist of: j l'

I

1. Reviewing completed work orders for consistency, l 4
2. Obtaining predictive maintenance data for trending, i 3. Providing reports to system engineers for analysis, l 4. Maintaining control of the surveillance tracking program, Entering all MWOs into SIMS for component failure trending-

.' 5.

l The ERG consists of a supervisor, three full time engineers, two clerical personnel, and two engineering i.chnicians.

1 r i 7

p i

4

~

i od4rrtusy f. resol (s AEOD/S804C I

The Industry Action Plan for improving maintenance identified the need for focused effort on the NPRDS to improve the industry's effectiveness in monitoring and maintaining the reliability of important plant equipment. Specifically, INPO and the utilities will upgrade NPRDS effectiveness by improving data quality and expanding the scope to include additional selected balance-of-plant equipment.

Recommended Approach

1. Revise the algorithm used in calculating the indicator to eliminate "ghost ticks" and capture "shadow ticks." Two alternative methods developed and being tested by the staff were introduced to the Demonstration Project during the March 1990 joint meeting.

2. Modify the overall indicator to include both system-based and component-based indications. The calculations and displays are being modified to also show component-based indications.
i

3. The staff should obtain critical components lists from participating utilities and determine if l i some of the components monitored by the present indicator could be deleted. l l
4. Modifications to the list of components to be monitored should be explored by the staff to ]

include more safety system equipment and refine the previous scope.  ;

5. Continue to encourage improvement in NPRDS participation and support the initiatives in the Industry Action Plan regarding improving NPRDS data quality.

6. Encourage NUMARC to include a specific element in the Industry Action Plan that addresses improving the quality of maintenance documentation regarding the nature of the equipment failures (cause and function impact), and assuring that adequate resources for NPRDS reporting are provided during outage periods.

i '

I l

i

)

l i

8 4

, i ,


APPENDIX A

SUMMARIES OF MEETINGS WITH DEMONSTRATION PROJECT UTILITY PARTICIPANTS

This Appendix contains copies of the NRC minutes for the individual meetings that were held with the six utility participants in the Demonstration Project. The minutes are arranged in the following chronological order:

Utility                                    Meeting Dates      Page
Commonwealth Edison Company                11/29-30/1989      A.2
Southern California Edison Company         12/12-13/1989      A.4
Duke Power Company - Oconee                01/09-10/1990      A.8
Rochester Gas and Electric Corporation     01/17-18/1990      A.18
Systems Energy Resources, Incorporated     02/20-21/1990      A.25
Northeast Utilities                        02/28-3/1/1990     A.33

A.1

MEMORANDUM FOR: Thomas M. Novak, Director
                Division of Safety Programs
                Office for Analysis and Evaluation
                of Operational Data

FROM:           Mark H. Williams, Chief
                Trends and Patterns Analysis Branch
                Office for Analysis and Evaluation
                of Operational Data

SUBJECT:        MINUTES OF COMMONWEALTH EDISON/NUMARC/NRC
                NOVEMBER 29-30, 1989 MEETING

On November 29-30, 1989, representatives from the NRC staff met with the Commonwealth Edison staff and a representative from the Nuclear Management and Resources Council (NUMARC) at the Chicago office of Commonwealth Edison. The meeting was scheduled as part of the Maintenance Indicator Demonstration Project to discuss the staff's proposed Maintenance Effectiveness Indicator (MEI). This meeting was the first in a series of meetings to be held with individual utilities as part of the MEI Demonstration Project. A list of attendees is attached.

The NRC staff presented the detail and logic followed by the staff during the development process for the proposed maintenance indicator. The intent of this presentation was to familiarize utility personnel with all the details necessary for understanding the proposed maintenance indicator.

During the course of the meeting it was determined that Commonwealth Edison is moving toward monitoring equipment (component) performance. Monitoring of component reliability by Commonwealth Edison is in general consistent with the logic being followed by the NRC staff during the development of the proposed maintenance indicator. In addition, it was determined that: 1) utilization of component failures to measure the quality of maintenance is appropriate and useful, 2) utilization of failure rate increase methods is a reasonable way to approach the detection of changes in maintenance effects, and 3) the ODE equipment list/selection for the indicator is generally consistent with Commonwealth Edison's priority listing for equipment availability.
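The "failure rate increase" approach mentioned above can be illustrated with a small sketch. The report does not specify the staff's actual algorithm; the example below simply flags a quarter whose failure count is statistically high relative to a plant's own historical baseline, using an exact Poisson test. The function names, baseline periods, and significance threshold are hypothetical, not the NRC's method.

```python
import math

def poisson_sf(k, mu):
    """P(X >= k) for X ~ Poisson(mu), computed from the exact pmf."""
    cdf = sum(math.exp(-mu) * mu**i / math.factorial(i) for i in range(k))
    return 1.0 - cdf

def flag_rate_increase(baseline_failures, baseline_quarters,
                       recent_failures, alpha=0.05):
    """Flag the latest quarter if its failure count is improbably high
    given the plant's own historical quarterly rate (illustrative only)."""
    mu = baseline_failures / baseline_quarters  # expected failures per quarter
    return poisson_sf(recent_failures, mu) < alpha

# Example: 24 failures over 8 baseline quarters (3 per quarter on average);
# 9 failures in the latest quarter is flagged, 4 is not.
assert flag_rate_increase(24, 8, 9) is True
assert flag_rate_increase(24, 8, 4) is False
```

A real implementation would also have to address the "ghost tick" and "shadow tick" problems noted in the recommendations, which this sketch does not attempt.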

Feedback from Commonwealth Edison was in general positive and constructive. The following recommendations were made: (1) the current methods used to calculate the MEI may have to be revisited to make the indicator more useful to plant staff, e.g., consider grouping failures by component type and by system; (2) the indicator should be sensitive enough to reflect ongoing programs to address specific fixes for a given component, i.e., check valves, MOVs, pumps, etc.; (3) additional sources of data beyond NPRDS (GADS, Greybook) may be useful to better describe ODE equipment performance.

Mark H. Williams, Chief
Trends and Patterns Analysis Branch
Office for Analysis and Evaluation
of Operational Data

cc: E. Jordan, AEOD
    W. Smith, NUMARC
    P. Kuhel, CECo
    PDR

A.2

ATTENDANCE LIST
November 29-30, 1989 Meeting
MAINTENANCE INDICATOR DEMONSTRATION PROJECT

Name                 Organization
Paul Kuhel           Commonwealth Edison
Martin G. Kief       Commonwealth Edison
Don Eggett           Commonwealth Edison
Robert Lazon         Commonwealth Edison
Thomas Kovach        Commonwealth Edison
Lee A. Sues          Commonwealth Edison
Walt Smith           NUMARC
Larry Bell           NRC/AEOD
Pat O'Reilly         NRC/AEOD
Mark Williams        NRC/AEOD
Thomas Novak         NRC/AEOD


MEMORANDUM FOR: Thomas M. Novak, Director
                Division of Safety Programs
                Office for Analysis and Evaluation
                of Operational Data

FROM:           Mark H. Williams, Chief
                Trends and Patterns Analysis Branch
                Division of Safety Programs
                Office for Analysis and Evaluation
                of Operational Data

SUBJECT:        SUMMARY OF DECEMBER 12-13, 1989 MEETING WITH
                SOUTHERN CALIFORNIA EDISON COMPANY REGARDING
                MAINTENANCE INDICATOR DEVELOPMENT

On December 12-13, 1989, members of the Nuclear Regulatory Commission (NRC) staff met with representatives of Southern California Edison Company (SCE) and the Nuclear Utilities Management and Resources Council (NUMARC) at the San Onofre Nuclear Generating Station (SONGS) site to discuss maintenance indicator development. This meeting was a followup to the October 13, 1989 meeting of the NRC/industry Maintenance Indicator Demonstration Project. A list of meeting attendees is enclosed.

The NRC staff presented the detail and logic which the staff followed during the development of the staff's proposed Maintenance Indicator. The purpose of this presentation was to familiarize SCE personnel with all of the detail necessary for understanding the proposed indicator.

SCE explained to the staff that, although they do not have an integrated program for measuring the effectiveness of their maintenance program, they do monitor a number of specific maintenance-related areas (e.g., non-outage productivity, thermal performance, vibration monitoring, rework monitoring, and oil sampling).

The primary issue which was discussed during the meeting was the link between the NRC's proposed indicator and maintenance. This was accomplished by:

(1) Listing representative cases of component failures comprising the NRC's indicator which the staff had designated as maintenance-related and SCE had not.

(2) Analyzing the failure narratives for the component failures identified in (1) above. The staff's analysis was based solely on the information contained in the narrative; SCE's analysis was based on all available information (including individual memory) at the site regarding the specific failure.

(3) Discussing the difference in views of "maintenance-related" failures which, in turn, resulted in the following issues:

•  SCE expressed the view that the first failure of a component, or the failure of a component after it has been in service for a long time, should not necessarily be considered as related to maintenance. On the other hand, it is not clear that such failures should be excluded, since lack of maintenance attention or oversight regarding inclusion in a PM program could be the cause of the failure. The number of such failures captured by the indicator and their effect on the indicator has not been determined, but this area should be explored.

•  SCE believed that "wearout" was an acceptable characterization of a failure cause, and that "wearout" failures should generally not be considered as related to maintenance. In general, the staff feels that prevention of wearout to the point of loss of function (failure) is the objective of a maintenance program, and thus failures assigned a "wearout" cause should be considered when assessing the performance of a maintenance program. Further, "wearout" may be used too frequently in lieu of more rigorous cause analysis.

•  SCE indicated that a reliability centered maintenance (RCM) program could lead to a planned "run-to-failure" strategy for some equipment, and thus failures of that equipment should not be used as part of a maintenance indicator. In particular, condition-directed RCM will allow selected components to reach a degraded failure state and thus generate an NPRDS failure report. SCE plans to review the list of equipment used in the candidate indicator and recommend modifications to address this issue. Related to this concern, SCE felt that there is some acceptable level of component failure associated with an effective maintenance program, but the indicator counts all failures in establishing trends, which implies that any failure is a result of maintenance ineffectiveness. The indicator uses failures across a broad spectrum of equipment over time to establish a trend, and in that framework no single failure is used to reach a conclusion about the effectiveness of the program. This concern could also be handled by putting an "error band" around the indicator.

•  The concern about reporting incipient conditions as degraded failures to the NPRDS was also discussed. SCE indicated that some utility maintenance tracking systems might allow corrective action to be taken under the umbrella of preventive maintenance, and thus no failure report would be submitted to the NPRDS. This issue relates to the completeness and consistency of NPRDS reporting.

Finally, a number of suggestions were made for improving the current indicator which led to the following items for future action:

(1) SCE will review the specific list of equipment monitored by the indicator for San Onofre Units 1, 2, and 3 and designate those components that should be allowed to run to failure, including condition-directed cases. Upon staff agreement with such a list, this may have the effect of reducing the number of first failures of a component contributing to the indications. The staff will provide SCE the pertinent engineering records for these three units to facilitate the review.

(2) The staff will develop a template or peer grouping for use in interpreting the calculated indicator. This will be cycle-based. No comparison across plants would be attempted except within the context of this template. Hence, the template would have the so-called "acceptance bands" mentioned previously.

(3) For future analyses, the staff will produce the indicator calculated by component type as well as by system. Selection of specific component types will be influenced by the component types considered in a CFAR run. SCE will provide a list of the CFAR component types for this purpose.

(4) The staff will modify the indicator algorithm to eliminate the problem of "ghost ticks."

(5) The staff will determine the extent of the problem of different "levels" of degraded failures - those being discovered during operations versus those discovered during refueling outages (particularly under "open and inspect" conditions).

Mark H. Williams, Chief
Trends and Patterns Analysis Branch
Division of Safety Programs
Office for Analysis and Evaluation
of Operational Data

Enclosure: As stated

cc: E. Jordan, AEOD
    W. Smith, NUMARC
    M. Rodin, SCE
    PDR

A.6

ENCLOSURE

List of Attendees
SONGS NRC MAINTENANCE INDICATOR MEETING
December 12-13, 1989

Name                 Organization
Brian Katz*          Mgr., NSSSD
Don Evans*           SSSD
Ralph Sanders        SSSD
Robin Baker          Licensing
L.D. Brevig          Licensing
Fred Briggs          Sta. Tech.
A.D. Toth            NRC Region V
R.L. Dennig*         AEOD/NRC
Walt Smith*          NUMARC
Jack Rainsberry      Licensing
Mark Williams*       AEOD/NRC
Loyd Wright
R.H. Bridenbecker    VP, Site Mgr.
Harold Ray           VP, NES&L
M.E. Rodin*          SSSD/Reliability
Pat O'Reilly*        AEOD/NRC
Barbara Aden         SSSD
Bob Levline*         SSSD/ERIN

Note: * Full-time attendees


MEMORANDUM FOR: Thomas M. Novak, Director
                Division of Safety Programs
                Office for Analysis and Evaluation
                of Operational Data

FROM:           Mark H. Williams, Chief
                Trends and Patterns Analysis Branch
                Office for Analysis and Evaluation
                of Operational Data

SUBJECT:        SUMMARY FROM JANUARY 9-10, 1990 MEETING AT OCONEE

On January 9-10, 1990, we met with representatives of Duke Power and NUMARC to pursue the formulation of maintenance performance indicators. The list of attendees is attached.

Discussions followed the agenda provided as Attachment 2, and were limited to the Oconee station since Duke staff indicated that no review had been performed for McGuire or Catawba. The Duke staff did invest a significant amount of time in analyzing the indicator for the Oconee case.

Item 2 of the agenda, the discussion of interim indicator results, raised issues on how the indicator would actually be used, and how much of a resource impact any additional indicator, technical merits aside, would have on Duke general office and plant staffs. The concern expressed by Duke staff was that any new indicator would require resources to respond. At a minimum, they would have to periodically review it and understand its implications. This would detract from other in-plant reliability analyses already in process, e.g., CFAR, FATS. Their concern about such an impact is proportional to the degree that this indicator would be used based on its face value, e.g., absolute magnitude, without additional analysis and interpretation by knowledgeable individuals.

The NRC staff indicated that the proposed indicator was not intended for use without additional information on maintenance, for example, as found in inspection reports, and that use of any indicator alone as a basis for a regulatory decision or perspective on performance was contrary to NRC policy.

Item 3, root cause analysis of individual component failures, was accomplished by reviewing a selection of failures for ODE equipment. Based on the information in the failure narrative, the staff had classified these examples as maintenance related, while the utility had not. Of a total of 15 cases reviewed, the Duke staff believed that six could be related to maintenance (as they define it) in whole or in part. With the additional information provided by Duke, the staff concluded that 3 of the 15 cases were not related to maintenance (as the staff defines it based on the Commission policy statement). The participants disagreed on the remaining 6 cases, due to the differing definitions of what maintenance encompasses, differing understandings of what the term "maintenance related" means, and the suitability of the NPRDS guidance on what constitutes a degraded failure (which is used in the indicator) and an incipient condition (which is not used in the indicator).

The interpretation of "wearout" is a particular concern. Duke staff contends that wearout is a legitimate cause designation which relates to normal equipment service, and does not necessarily indicate deficient maintenance. On the contrary, Duke staff felt that wearout actually may indicate proactive and desirable maintenance for either the incipient or degraded degrees of failure. The staff does not take issue with this contention per se, but argues only that degraded and immediate failures (where by definition a component cannot adequately perform one or more of its functions) attributed to wearout are relevant to evaluating the effectiveness of maintenance. In general, the staff indicated that its definitions were consistent with the Commission policy statement on maintenance, with INPO industry guidelines, and current NPRDS reporting guidance on the degree of failure. Duke staff disagreed with the boundaries drawn by the staff in its interpretation of the scope of maintenance. The Duke staff further suggested that the NPRDS guidance on degree of failure, in the context of a proposed maintenance indicator, may be too conservative and result in capturing incipient conditions as degraded failures.

Under Item 5, Duke Power approaches to component failure trending, the Oconee staff provided information on a number of different efforts, either underway or in the formative stages, as described in Attachment 3. One database used for component failure trending purposes is the Failure And Trending module (FAT). This database contains information for every maintenance work order that indicated a problem. It includes all failures that would be reported to NPRDS, but covers a much greater scope of equipment, and covers problems of a lower severity than those reportable as degraded or immediate failures for NPRDS. When comparing trends under Item 6, Duke staff used the flagging algorithm proposed by the staff in combination with FAT data and generally obtained more flags. No alternative algorithms or thresholds were tried. The Duke staff at Oconee is also making use of CFAR, which is based on NPRDS data and compares a plant against the industry for numerous component groupings and application-coded components using failures per component-hour. However, CFAR does not currently provide a trendable indicator.
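As a rough illustration of the failures-per-component-hour statistic mentioned above, the sketch below computes a plant-level rate for one component grouping and screens it against an industry value. The numbers, the 2x screening ratio, and the function names are all hypothetical; CFAR's actual comparison criteria are not described in this report.

```python
def failures_per_component_hour(failures, component_count, hours_in_period):
    """Plant failure rate for one component grouping over a period."""
    return failures / (component_count * hours_in_period)

def cfar_style_hit(plant_rate, industry_rate, ratio_threshold=2.0):
    """Screen a grouping as a 'hit' when the plant rate exceeds the industry
    rate by an illustrative factor (not CFAR's real criterion)."""
    return plant_rate > ratio_threshold * industry_rate

# Example: 6 failures among 40 motor-operated valves over a 2,160-hour quarter
plant = failures_per_component_hour(6, 40, 2160)   # ~6.9e-5 per component-hour
industry = 2.5e-5                                  # assumed industry value
assert cfar_style_hit(plant, industry)
```

Trending a statistic like this across quarters, rather than comparing a single snapshot, is essentially what Duke proposed when suggesting that CFAR results be turned into a trending tool.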

The Duke staff stated that the proposed indicator provided a measure of component failures, but that as currently calculated it did not line up with the Oconee maintenance organization, and thus would not provide useful feedback to the plant staff. The mechanical maintenance at Oconee is organized by type of component, while the instrumentation and electrical is organized by system. Thus, the system-based calculation underlying the cumulative indicator display, with its mix of different types of components, does not align with the responsibilities of their plant staff. In response, the staff explained that the proposed indicator was programmatic, and not constructed as a detailed feedback tool for taking corrective action. Adverse indicator trends would necessitate a broad review of the maintenance program and its implementation. Nonetheless, the indicator could be made more useful to plant staffs, for example by cutting the data by component type, as suggested earlier by Commonwealth Edison staff. Steps to make the indicator more useful are being pursued by the staff, in addition to eliminating mechanistic problems such as "ghost ticks."

The Oconee staff is also becoming used to interpreting the component failure rates provided by CFAR and prefers that similar statistics, i.e., failures normalized by component population, be used to avoid confusion. Given the preference for the CFAR-type approach, Duke staff indicated that they would try to develop a way to turn CFAR results into a trending tool. The Duke staff offered a number of alternatives for staff use in measuring maintenance effectiveness as presented in Attachment 4.

In summary, a number of issues concerning the indicator raised in previous discussions with the AHAC participants were again raised by the Duke staff:

•  Ascribing the first failure and wearout failures to maintenance,

•  The potential for counting failures that are discovered by PM and not severe enough to impact the component's primary function (due to some NPRDS failures being coded as "degraded" in accordance with guidance although they are felt to be incipient),

•  Not highlighting repeat failures or rework,

•  The presence of "ghost ticks,"

•  The degree of usefulness to the plant staff,

•  The need for multiple indicators to capture all the nuances of maintenance performance.

The Duke staff views the proposed indicator as an equipment trend indicator, and believes that a component failure oriented indicator is needed as part of a set to monitor maintenance. Duke staff maintained that more than one overall indicator was needed to monitor the maintenance process. The NRC staff agreed and noted that monitoring equipment failures, the focus of the NRC staff activities, was one useful and important measure of maintenance effectiveness that should be used with other utility indicators to assess and improve the maintenance process. The scope of equipment covered by the indicator (ODE) contained as a subset the equipment Duke would be concerned with given the same basis for selection. More than in previous discussions the Duke staff expressed concern about resources needed to deal with the indicator for response and diagnosis. In particular they felt that since they were already committed to periodic use of CFAR, the need for an indicator might be met by some modification of CFAR, thus saving engineering resources.

Mark H. Williams, Chief
Trends and Patterns Analysis Branch
Office for Analysis and Evaluation
of Operational Data

Attachments: As stated

cc: E. Jordan, AEOD
    W. Smith, NUMARC
    S. Lindsey, Duke Power
    L. Wiens, NRR
    PDR

A.10

Attachment 1

Attendance
January 9-10, 1990 Meeting with Duke Power Company
Regarding Maintenance Indicators

Name               Organization        Telephone Number
Bob Dennig         NRC/AEOD            301-492-4490
Tom Novak          NRC/AEOD            301-492-4484
Wayne Hallman      DPC/GO              704-373-2345
Walt Smith         NUMARC              202-872-1280
Bill Foster        DPC/ONS/MAINT.      803-885-3152
Mark Williams      NRC/AEOD            301-492-4480
Ronnie Henderson   DPC/ONS/MMSU        803-885-3152
Sam Hamrick        DPC/ONS/MMSU        803-885-3519
Stuart Lindsey     DPC/NUC. MAINT.     704-373-8788
Pierce Skinner     NRC/SRI-Oconee      803-882-8927
Dendy Clardy       DPC/ONS/MAINT.      803-885-3180

A.11

Attachment 2

AGENDA
JANUARY 9-10, 1990 MEETING WITH DUKE POWER COMPANY
REGARDING MAINTENANCE INDICATORS

(1) NRC Presentation - Performance Indicator Development, Analysis Assumptions and Purpose of Meeting.

(2) Discussion of Interim Indicator Results.

(3) NPRDS Reporting of Component Failures Involving Outage-Dominating Equipment.

(4) Root Cause Analysis of Individual Component Failures of Outage-Dominating Equipment.

(5) Discussion of Duke Power's Programs/Approaches for Trending Equipment Failures and Failure Causes as They Relate to Maintenance ("FATS").

(6) Comparison of Maintenance Trend Information
    (a) Trends Calculated with the NRC's Indicator
    (b) Trends Calculated with Duke Power's Indicator(s)

A.12

Attachment 3

Outline of Trending Approaches at Oconee Nuclear Station

1) Communications from Work Execution Technicians and Planners to Maintenance Engineering of component failure trends and repeat actions recognized while planning and/or performing maintenance. This is an ongoing process and serves as an active feedback mechanism in the maintenance-triangle concept. Planners now have the capability to retrieve printed information sheets that show a component's corrective maintenance history while planning each work request. This enables the Planner to look for trends during the planning process. The history sheets are attached with the work request so that Work Execution can review as well.

2) Maintenance Engineers are accountable for defining and driving Technical Support Programs for components and systems. The TSPs include component trending activities. The Maintenance Engineers are expected to define and perform programmed maintenance in an "ownership" manner, and they monitor the performance and failures of their components on a regular basis. The Maintenance Engineers supply regular feedback and conduct meetings to inform appropriate Planning, Work Execution, Radiation Protection, Operations and Maintenance Management of actions that are needed for problem components discovered through trending or components that will be monitored closely for trends while operating. Maintenance Engineers maintain trend data in a variety of places ranging from personal files to computer data sets.

3) Some examples of Technical Support Programs where trending is ongoing are:

a. The Predictive Maintenance and Monitoring Program (PM2). This program includes the acquisition and trending of vibration and oil analysis data for rotating equipment. The responsible Maintenance Engineer monitors the data for adverse trends and prescribes corrective and preventive maintenance when trends indicate actions are necessary.

b. The Pipe Erosion/Corrosion Control Program. This program includes the acquisition of pipe and fitting wall thicknesses that are maintained in a computer file. The responsible Maintenance Engineer monitors the data for trends that show wall thicknesses that are decreasing at an adverse rate. When trends are discovered, the Maintenance Engineer prescribes the appropriate actions.

c. Instrument procedures provide data sheets for I&E technicians to identify components where malfunctions or exceeded calibration tolerances are discovered. These data sheets are named Component Malfunction/Maximum Tolerance Limit Exceeded sheets. The data sheets are forwarded to the responsible Maintenance Engineers for evaluation, and the sheets are kept in I&E Maintenance Engineering files for trending data. The I&E Maintenance Engineers review the filed data for trends.

d. The I&E Maintenance Engineer responsible for the RPS system monitors the Reactor Coolant flow for deviations greater than one-half percent and trends that show increasing deviations. As increasing deviations are discovered, the Maintenance Engineer prescribes the necessary actions to prevent excessive deviations.

A.13

e. The I&E Maintenance Engineer responsible for the Control Rod Drive breakers monitors the trip times obtained during monthly Preventive Maintenance testing and trends the data for trip times that show increases towards a limit established by the engineer. The engineer prescribes necessary actions when adverse increases are apparent.

f. The Performance Group trends leak rate data, valve stroke times, pump performance, etc., and notifies Maintenance when adverse trends are discovered.

g. Limitorque valve operator MOVATS and lubrication analysis data are trended by the responsible Maintenance Engineers for predictive maintenance purposes.

4) The following failure reports are provided to Maintenance Engineers for their use in trending components.

a. The "Valve Report Card" is supplied to the Maintenance Valve Engineer after each Refueling Outage. This failure report identifies any corrective maintenance work requests written within the thirty-day window following the Refueling Outage. The engineer analyzes the identified valve failures for failure trends as well as work execution effectiveness.

b. The "Multiple Work Request Report" is supplied to both Mechanical and I&E Maintenance Engineering groups. This report identifies components that encounter multiple failures (not necessarily related) in a selected time period.

c. The "Average Failure Frequency Report" is supplied to both Mechanical and I&E Maintenance Engineering groups. This report develops failure rates or frequencies considering component populations, the number of corrective maintenance work requests written within a selected time period for respective components, and the amount of work hours expended.

d. The "Component Failure Analysis Report" (CFAR) is now being supplied to the Maintenance Engineering groups quarterly. CFAR identifies Oconee's NPRDS components that are experiencing higher failure rates than similar component applications throughout the industry. NPRDS reports that are submitted are now being supplied to the corresponding Maintenance Engineers on a monthly basis with a summary sheet being sent to the Maintenance Engineering Manager.

e. Special failure reports are supplied to Maintenance Engineers as they request them and as the MMSU group discovers failures that indicate a need for further investigation. These reports are built from maintenance history data and failure data contained in the Equipment Database (EQDB), Nuclear Maintenance Database (NMDB) and the Failure and Trending module (FAT).

f. Future capabilities being considered are reports that identify rework, repeat failures, and corrective maintenance required following PMs.

5) Examples of other maintenance indicators trended at Oconee:

a. Oconee's Management Information System Report (MIS Report) is a monthly report that supplies a detailed accounting of work hours expended by types of work, the ratio of Preventive Maintenance to Corrective Maintenance work hours, the number of high priority work requests written and closed out during the month, the work request backlog greater than 90 days, the status of each open work request and the responsible Planner. Each monthly issue is reviewed and trended by Maintenance Management.

b. Weekly audits of work requests by Planning Coordinators and the Planning Manager for completeness and accuracy. One purpose of the audits is to trend the quality of information documented on work requests.

c. Housekeeping reports are used to trend Material Condition.

d. The Operations group identifies Control Room Annunciators and instruments out of service monthly to the Planning Group for corrective action. Planning and Operations trend the monthly reports.

e. Others.

6) Indicators such as Availability Factor, Safety System Actuations, Forced Outage Rate, Corrective Maintenance Backlog, High Priority Work Requests, Ratio of PM to Total Maintenance, PMs Overdue, Thermal Performance, Capacity Factor and Number of Continuous Days of Operation have shown favorable trends during the past years and indicate that Oconee's Maintenance Programs are effective in managing component failures.

A.15

Attachment 4

PROPOSED OPTIONS
FOR MAINTENANCE EFFECTIVENESS INDICATOR

OPTION 1: Utilize LER/Forced Outage Rate Data

Reasons: Both LER and Forced Outage Rate data are now reported to NRC under familiar reporting guidelines. This data is relatively pure and accessible for NRC use. Maintenance-related LER data would provide indications of those maintenance challenges to safety systems or design operating bases for a plant. Forced Outage Rate data captures the challenges to the major outage-causing equipment. Together they provide a good basic picture of a plant's maintenance without constructing another indicator.

Cost/Benefit: Cost to the NRC and industry would be minimal. This data is well understood and will not require the redundant analysis/review which would be necessitated by a new indicator.

Needs: Better maintenance cause codes need to be defined for LER reporting. In addition, the current LER data would need to be reviewed and reclassified for a prior baseline period (e.g., 3 years back would probably give a good track record for trending). Based on a review of Oconee LER data for 3 years, this took about 45 minutes for all 3 units.

Option 2: Utilize some of the important "Maintenance Indicators"

Reasons: A defined core of these maintenance indicators, when reviewed collectively, does provide a more accurate picture of Maintenance Program Effectiveness than any one indicator could. These are what most utilities use to measure their program effectiveness; therefore, the data is again well defined.

Cost/Benefit: Cost to the NRC and industry would be minimal. This data is well understood and will not require the redundant analysis/defense which would be necessitated by the new indicator.

Needs: Both the industry and the NRC need to come to a more definitive agreement as to what "Maintenance" means. This will require definition of a core set of indicators that when looked at cumulatively provide indication of Maintenance Program health. Possibly a reliability/availability indicator needs to be added to the "set" of accepted indicators.

A.16

Option 3: Utilize CFAR-type report(s) or reconstructed indicator(s) based on failure data for grouped component types

Reasons:

  • Failure data grouped by system appears to be ineffective in correlating with
other program indicators; therefore, any reliability-type indicator(s) need to be
grouped based on component groups similar to CFAR. This would provide some
feedback on repeat failures. In effect, trending CFAR "hits" or failure rates
would provide the same type of information and alleviate the extra cost to the
industry of trending a duplicate indicator.

Cost/Benefit:

  • The cost for the NRC to use CFAR would not be as high, but CFAR in its
present form is still judged inadequate to represent Maintenance Program
effectiveness. Thus, even if CFAR is provided to the Commission, it will require
some additional cost to reconstruct CFAR for the type of analysis desired.
However, CFAR data is well understood and will not require the redundant
analysis/review which would be necessitated by a new indicator.

Needs:

  • If a new reliability indicator is generated, then several major changes need
to be incorporated to make it useful:


1. Grouping should be made by major critical component groups mutually agreed
upon by the industry and the Commission.

2. Wearout should be allowed as a legitimate separate cause code, not strictly
maintenance related. Additional definition of legitimate wearout will be needed
to satisfy both industry and the Commission.

3. Failure trending should account for the population size of the group (i.e.,
% failures of a given population would provide some benefit for efficiency of
maintenance).

4. Failure trending should be strictly plotted as total # of failures, or failure
rate, or % failures for a given population per quarter. If trigger levels are
desired, then Alert and Alarm levels should be established based on statistical
confidence limits of population functional ability (i.e., something like a 90%
confidence of 90% of the population being functionally operable during a given
time period). An algorithm which averages failures should not be used.

5. A reliability indicator should not be used unilaterally to measure maintenance
program effectiveness, but should be only one of several indicators evaluated.
Also, the PM program should be accounted for in any maintenance indicator.

6. Impact of the failure needs to be evaluated and incorporated (e.g., was the
failure significant to system operability and safety).
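The confidence-limit idea in item 4 above can be made concrete. The sketch below is illustrative only and is not an algorithm from this report: it sets an Alert level for a component group's quarterly failure count from a binomial tail probability, assuming a hypothetical group size and treating a per-quarter failure probability of p0 = 0.10 as the boundary of "90% of the population functionally operable."

```python
# Hedged sketch: deriving an Alert trigger level for quarterly failure counts
# from a binomial tail probability. All numbers here are illustrative
# assumptions, not values taken from the report.

from math import comb

def binom_tail(k: int, n: int, p: float) -> float:
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def alert_level(n: int, p0: float = 0.10, confidence: float = 0.90) -> int:
    """Smallest quarterly failure count k that is statistically surprising:
    if at least (1 - p0) of the population is functionally operable, seeing
    k or more failures has probability below (1 - confidence)."""
    alpha = 1.0 - confidence
    for k in range(n + 1):
        if binom_tail(k, n, p0) <= alpha:
            return k
    return n + 1  # degenerate case: no level triggers

if __name__ == "__main__":
    n = 40  # hypothetical size of a critical component group
    print("Alert level for the group:", alert_level(n))
```

A threshold derived this way scales with the size of the component group, which is exactly the population adjustment item 3 asks for, and it avoids the failure-averaging algorithm item 4 warns against.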


MEMORANDUM FOR: Thomas M. Novak, Director
                Division of Safety Programs
                Office for Analysis and Evaluation
                  of Operational Data

FROM:           Mark H. Williams, Chief
                Trends and Patterns Analysis Branch
                Office for Analysis and Evaluation
                  of Operational Data

SUBJECT:        SUMMARY OF JANUARY 17-18, 1990 MEETING WITH
                ROCHESTER GAS & ELECTRIC CORPORATION REGARDING
                MAINTENANCE INDICATOR DEVELOPMENT

On January 17-18, 1990, members of the NRC staff met with representatives of Rochester Gas
and Electric Corporation (RG&E), their consultant, ATESI, and the Nuclear Management and
Resources Council (NUMARC) at the Ginna site to discuss maintenance indicator development.
A list of meeting attendees is contained in Enclosure 1. Enclosure 2 contains the overall
meeting agenda. Enclosure 3 is the agenda for RG&E presentations that discussed specific
portions of the agenda items from Enclosure 2.

This meeting was a followup to the October 13, 1989 meeting of the NRC/Industry Maintenance
Indicator Demonstration Project. The composition of the demonstration project represents a
broad spectrum of utility organizations and sizes, as well as plant sizes and nuclear steam
supply system designs and ages. RG&E was included in the demonstration project to gain
insights regarding the monitoring of maintenance from the perspective of a relatively small
utility operating a single, older plant - RG&E's Ginna plant. Ginna began commercial operation
in 1970 with a two-loop Westinghouse-designed PWR having an electrical output of 470 MWe,
and represents roughly one-half of the utility's electric generating capacity.

The NRC staff presented the detail and logic which were followed during the development of
the staff's proposed Maintenance Indicator (MI). The purpose of this presentation was to
familiarize RG&E personnel with all of the detail necessary for understanding the proposed
indicator.

RG&E presented results of their assessment of the NRC's proposed indicator, which involved an
RG&E staff effort of approximately 1000 manhours. This assessment, which included
mathematical verification of the indicator algorithm and results of their analysis of individual
NPRDS component failure narratives, focused on an example system (chemical and volume
control system) that, according to the indicator, had equipment problems, and a discussion of
the reliability-centered maintenance (RCM) program being implemented at the Ginna plant.
RG&E presented the background behind their RCM project, its system selection criteria, the
RCM analysis and task methodology, and the RCM Living Program. The results of the RCM
analysis determine which components will receive PM tasks designed to maintain component
function.

The following major issues were discussed during the meeting:

(1) RG&E expressed concern that the staff's proposed indicator did not distinguish critical
failures from failures which were not significant. They were concerned that use of this
indicator could result in a plant's maintenance program focusing on relatively unimportant
individual failures. RG&E stated that significant events which occurred at Ginna over the
time span of interest were not tracked by the indicator. The staff explained that, as a
programmatic indicator, the proposed indicator was not intended to track significant events.
Rather, it was intended to track component failures across a broad spectrum of equipment
over time to establish a trend, on the premise that no single failure would be used to
reach a conclusion about the effectiveness of the maintenance program.

(2) Definition of Maintenance - comparison of the results of independent reviews of example
NPRDS failure narratives performed by RG&E and the NRC staff led to the issue of whether
failures which involved wearout or were first of a kind were maintenance-related.
RG&E reevaluated all of the NPRDS failures using a jury expertise approach, and, in their
view, a low percentage (11%) could be attributed directly to "maintenance," as their
organizational structure defines maintenance.

According to RG&E, intrinsic design reliability results in random failures for some
components [e.g., components that rely on materials that degrade over time (capacitors,
relays, seals)] which are expected and are not a result of ineffective maintenance.

A case in point was a group of failures involving the charging pumps. In these failures,
the pump packing was found to be leaking, the packing was replaced, and the events
were reported to the NPRDS as degraded failures. After several pump packing failures of
this type, RG&E determined that the leaking packing was a wearout problem. The
corrective action taken was to prepare a PM procedure to replace the pump packing
periodically. Under current NPRDS reporting guidance, RG&E considers the packing
replacement a wearout condition, and not a maintenance-related failure. The NRC staff
commented that for this case, regardless of the cause of the first failure of the pump
packing (wearout or maintenance-related), since the indicator would be tracking the
failure history, it would show a valid improvement in the RG&E maintenance program when
the new PM procedure for the pump was implemented. Therefore, the indicator in this case
would measure a maintenance program improvement, and the question of whether the
initial failures were due to wearout or lack of maintenance was moot.

RG&E pointed out that, independent of incipient or degraded reporting, the economic
decisions exercised during the selection of preventive maintenance activities, or
decisions not to maintain but to replace when appropriate, are treated negatively by the
staff's proposed indicator. The indicator does not consider economic and ALARA
considerations. This is related to the concern expressed in other meetings with project
participants that there is some acceptable level of component failure rate associated with
an effective maintenance program. However, the proposed indicator counts all failures in
establishing trends, which implies that any failure is a result of maintenance
ineffectiveness. To this concern, the staff has responded that the indicator uses failures
across a broad spectrum of equipment over time to establish a trend, and in that
framework, no single failure is used to reach a conclusion about the effectiveness of the
program. The staff believes that these concerns could be resolved by putting a band
around the indicator which would identify the region of acceptability.

(3) Reliability-Centered Maintenance - Since the analysis is done on a component basis, this
methodology may allow components to run to failure, or to a condition where corrective
maintenance is required due to a loss of function, if a redundant component (i.e., another
train or path) is available. The analysis used to identify this equipment considers the
local impact, system impact, and plant impact of the component failure. There will be no
system impact if all of its constituent trains are not taken down by the failure of the
component.

RG&E stated that the RCM systems selected are predominately standby systems, whereas
the systems monitored by the indicator are outage-dominating systems. The staff's
proposed indicator does not currently cover most standby safety systems.

The staff pointed out that the proposed indicator can serve as a check on the adequacy
of the RCM program and its implementation. To ensure that the indicator maintains
consistency across plants to the extent possible, the equipment scope of the RCM
program should be included in the selection of equipment to be monitored by the
indicator. In this vein, the list of equipment monitored by the indicator may be modified,
contingent on recommendations received from the industry during the demonstration
project.

From their review of the set of NPRDS failures, RG&E concluded that no PM Program activity at
Ginna should be modified as a result of the failures aggregated under the indicator algorithm
methodology. Other equipment failures have caused PM Program changes at Ginna.

Since the indicator for the Ginna plant remained below the average for PWRs of its type and
size, with no adverse trends, over the entire period of interest, the staff would not have
expected any PM Program changes to be made based on the indicator.

RG&E indicated there is significant risk in reliance on a single indicator to measure maintenance
effectiveness; the staff's proposed indicator could penalize a good performer by lessening the
priority for budgets being applied to maintenance if the indicator showed good performance.

RG&E utilizes both process indicators (backlog) and industry performance indicators (i.e.,
availability) as measures of maintenance effectiveness. RG&E did identify the following two
sets of indicators, one qualitative, the other quantitative, which they would propose using to
monitor maintenance effectiveness:

Qualitative - plant material condition, repetitive component failures.

Quantitative - forced outage frequency, turbine runback frequency, safety system availability.

RG&E identified the following issues which they consider to be most significant in resolving
their concerns about the staff's proposed indicator:

(1) System and component selection.

(2) Effects of failure (local versus system versus plant).

(3) "Ghost" ticks - remove superfluous "ghost" ticks.


(4) Multifaceted approach (other indicators, maintenance team inspections, other inspections).

(5) Individual NPRDS plant reporter expertise and report completeness - these can
significantly affect the quality of the NPRDS data.

The following items were identified for future action:

(1) RG&E will prepare a list of equipment, based on their RCM experience, that should be
monitored with the staff's indicator.

(2) RG&E will provide the staff access to component data for the systems analyzed to date
within the Ginna RCM Program.

RG&E agrees with this summary.

Mark H. Williams, Chief
Trends and Patterns Analysis Branch
Office for Analysis and Evaluation
  of Operational Data

Enclosures:
As stated.

ENCLOSURE 1

ATTENDANCE LIST
JANUARY 17-18, 1990 MEETING
WITH ROCHESTER GAS & ELECTRIC CORPORATION

NAME                  AFFILIATION

John Fischer          RG&E
Mark Flaherty         RG&E
James Huff            RG&E
Tom Marlow            RG&E
Bob Smith             RG&E
Herb Van Houte        RG&E
Gerald Wahl           RG&E
Joe Widay             RG&E
Bill Zornow           RG&E
Walt Smith            NUMARC
Jim Huzdovich         ATESI
John Wilson           ATESI
Victor Benaroya       NRC/AEOD
Bob Dennig            NRC/AEOD
Pat O'Reilly          NRC/AEOD
Mark Williams         NRC/AEOD

ENCLOSURE 2

AGENDA
JANUARY 17-18, 1990 MEETING WITH ROCHESTER GAS & ELECTRIC CORPORATION
REGARDING MAINTENANCE INDICATORS

(1) NRC Presentation - Performance Indicator Development, Analysis Assumptions, and
Purpose of Meeting.

(2) Discussion of Interim Indicator Results.

(3) NPRDS Reporting of Component Failures Involving Outage-Dominating Equipment.

(4) Root Cause Analysis of Individual Component Failures of Outage-Dominating Equipment.

(5) Discussion of RG&E's Programs/Approaches for Trending Equipment Failures and Failure
Causes as They Relate to Maintenance.

(6) Comparison of Maintenance Trend Information:
    (a) Trends Calculated with the NRC's Indicator.
    (b) Trends Calculated with RG&E's Indicator(s).

ENCLOSURE 3

RG&E AGENDA FOR MAINTENANCE
INDICATOR DEVELOPMENT MEETING

Introduction (Marlow)

NRC Presentation of Agenda Items 1-4

1) RG&E Assessment of NRC Data
   a) RG&E mathematical verification. (Zornow)
   b) 57 reports. (Zornow)

2) Analysis of Validity of MEI (Marlow)
   a) Concerns with MEI data. (Marlow)
   b) Example of a specific Ginna system which had ticks - CVCS. (Wahl)
   c) Matrix. (Marlow)
   d) Present graphs, charts. (Marlow)

(5) Discussion of RG&E's Programs/Approaches for Trending Equipment Failures
and Failure Causes as They Relate to Maintenance
   a) RCM system selection vs. MEI system selection. (Wilson)
   b) RCM analysis and RCM task evaluation. (Wilson)
   c) RCM Living Program - tells if we did not have the right system, critical
      component, dominant failure modes, or frequency. (Wilson)

(6) Comparison of Maintenance Trend Information
   (a) Trends calculated with the NRC's proposed indicator.
   (b) Trends calculated with RG&E's indicator.

RG&E's Recommendation for an MEI (Marlow)
   a) Qualitative.
   b) Quantitative.

Conclusions (Marlow)

MEMORANDUM FOR: Thomas M. Novak, Director
                Division of Safety Programs
                Office for Analysis and Evaluation
                  of Operational Data

FROM:           Mark H. Williams, Chief
                Trends and Patterns Analysis Branch
                Division of Safety Programs
                Office for Analysis and Evaluation
                  of Operational Data

SUBJECT:        SUMMARY OF FEBRUARY 20-21, 1990 MEETING WITH SYSTEMS
                ENERGY RESOURCES, INCORPORATED REGARDING
                MAINTENANCE INDICATOR DEVELOPMENT
J  !

} On February 20-21,1990, members of the NRC staff met with representatives of Systems

Energy Resources, Incorporated (SERI), the licensee for the Grand Gull plant, and the Nuclear j Management and Resources Council (NUMARC) at the Grand Gulf site to discuss maintenance
~ indicator development. A list of meeting attendees is contained in Enclosure 1. Enclosure 2
  • contains the meeting a0enda.
This meeting was a followup to the October 13,1989 meeting of the NRC/ industry Maintenance Indcator Demonstration Project.

The NRC staff presented their proposed Maintenance Indicator (MI). The purpose of this
presentation was to familiarize utility personnel with all of the detail necessary for
understanding the proposed indicator.

Two unique programs at Grand Gulf are particularly relevant to the work on the Demonstration
Project. They are the Engineering Review Group (ERG) and the NPRDS Trend Report, both of
which are discussed in detail below.

An Engineering Review Group has been formed within the Grand Gulf Performance and System
Engineering Department to perform a final, independent review of work orders prior to closeout.
Grand Gulf management has tasked this group with ensuring work orders reflect adequate
details of the identified problem, including the overall work scope, cause, corrective actions
taken, and component failures. The ERG represents a plant improvement with the potential for
a direct impact on maintenance indicator development, since one of the concerns expressed
about a component failure-based indicator has been the quality of NPRDS reporting. A group
such as the ERG provides additional assurance that the failure information documented in the
MWOs (and in the subset being reported to the NPRDS) is accurate and complete.

The specific duties of the ERG consist of:

(1) Reviewing completed work orders for consistency,
(2) Obtaining predictive maintenance data for trending,
(3) Providing reports to system engineers for analysis,
(4) Maintaining control of the surveillance tracking program,
(5) Entering all MWOs into SIMS for component failure trending.

The NPRDS Trend Report, which has been prepared and issued periodically since 1987,
contains a listing and evaluation of the component failures at Grand Gulf which were entered
into the NPRDS database over the previous period. This report: (1) flags repetitive failures,
(2) tracks corrective actions, (3) plots the failure rate for components which have experienced
major repetitive failures (e.g., radial well pumps, diesel-generator starting air compressor),
(4) trends reporting times, and (5) tabulates data for easy reference.
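The repetitive-failure flagging such a trend report performs can be sketched in a few lines. The function and data below are hypothetical illustrations (the actual Grand Gulf report logic is not described at this level of detail): failure records are counted per component identifier, and components exceeding a threshold are flagged.

```python
# Illustrative sketch of repetitive-failure flagging; component names and
# dates are invented for the example, not taken from plant records.

from collections import Counter
from datetime import date

def repetitive_failures(records, threshold=2):
    """records: iterable of (component_id, failure_date) pairs.
    Returns the component ids with more than `threshold` recorded
    failures - the candidates a trend report would flag for
    corrective-action tracking."""
    counts = Counter(component for component, _ in records)
    return sorted(c for c, n in counts.items() if n > threshold)

# Hypothetical failure records.
records = [
    ("radial-well-pump-A", date(1989, 3, 1)),
    ("radial-well-pump-A", date(1989, 7, 9)),
    ("radial-well-pump-A", date(1989, 11, 2)),
    ("air-compressor-1", date(1989, 5, 5)),
]

print(repetitive_failures(records))
```

Keeping the failure dates in each record is a deliberate choice: the same structure supports the report's other functions, such as plotting failure rates per period for the flagged components.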

Grand Gulf staff described their maintenance organization and explained their maintenance
philosophy. Basically, the responsibility for equipment at the Grand Gulf station is structured
around the systems engineering concept. For this reason, they preferred the systems
perspective of the NRC's proposed indicator, as opposed to the component type perspective.
As far as quality of maintenance is concerned, no distinction is made between safety systems
and balance-of-plant systems. The only difference in the maintenance of the two types of
systems is that maintenance on safety systems receives a higher priority. Their maintenance
program is predicated on the premise that its primary objective is to ensure that the plant
operators have available the equipment necessary to operate the plant in a safe manner in
accordance with the Technical Specifications. Grand Gulf tries to perform as many of the
maintenance tasks as possible during normal plant operation, as opposed to accumulating work
for outages. For that work which is performed during a refueling outage, timely closeout of
maintenance work orders (MWOs) and timely reporting to the NPRDS are stressed.

Grand Gulf staff described how the Grand Gulf outage planning and scheduling group interfaces
with the regular maintenance organization. Outage planning at Grand Gulf starts as a "seed"
that pulls in line management to actually manage the outage. During an outage, the Grand Gulf
plant is run by this specially constituted outage organization, and the normal plant
organizational lines do not exist during this time. The transition to this outage organization
begins about two months before the start of a refueling outage. Following the refueling outage,
a formal report is prepared which documents any lessons learned during the outage that can be
considered in the planning and scheduling for the next refueling outage. The plant staff stated
that they determine whether a refueling outage has been successful from the amount of work
completed during the outage and how the plant operates after the outage is completed.

In keeping with the systems perspective, Grand Gulf looks one quarter ahead and tries to
consolidate all preventive maintenance (PM) and surveillances for a particular system into, for
example, a one-week period, and get all of the work (corrective maintenance, as well as PM)
done within this time frame - called a "system outage." The purpose of this approach is to
minimize the total time that the system is out of service.

Grand Gulf has actively continued a Maintenance Improvement Program since June 1987. A
key element of this program is the installation and implementation of the Station Information
Management System (SIMS). This system allows Grand Gulf management the opportunity to
closely monitor planned work activities at Grand Gulf. In addition, SIMS provides more space
for documenting detailed descriptions of problems and the corrective actions taken. SIMS has
the capability for electronically providing the input for NPRDS failure reports. Although this
capability is currently not being used, Grand Gulf has future plans to use this system for
NPRDS report preparation.

The Grand Gulf staff stated that verbatim compliance with written procedures is stressed at all
times with maintenance and operations staff, and personal accountability is emphasized. They
instill a feeling of "ownership" in their operations, maintenance, and engineering support
personnel.

Another part of the maintenance philosophy at Grand Gulf is the stated policy that contractors
are not employed to perform routine maintenance tasks.

Another key element of the Maintenance Improvement Program at Grand Gulf is its Predictive
Maintenance Program. Grand Gulf staff presented a discussion of this program. Basically, it
consists of the following:

(1) Vbration monitoring of rotational equipment.

(2) Lube ou analysis program.

(3) Motor-operated valve testing.

(4) Pump and valve testing program.

(S) Local leak rate testing.

(6) Check valve performance monitoring.

(7) Leakage reduction program.

(8) Relief valve testing program.

(9) Scram frequency reduction program. l (10) Human performance evaluation system (HPES).

(11) Plant performance monitoring. l (12) NPRDS.

(13) Erosion / corrosion program.

Consistent with a stated management goal to make Grand Gulf a top performer, SERI has pursued cross-fertilization between Grand Gulf and those U.S. plants, as well as plants outside the U.S., which are considered among the best performing units in the country. This exchange of technical expertise has taken place at all levels of plant management.

Discussion of the results of root cause analyses of a selected set of Grand Gulf NPRDS failure narratives and the indicator trend led to the identification of a number of issues regarding the NRC staff's proposed maintenance indcator.

(1) Grand Gulf staff expressed concem that the indicator can be skewed by just a few problem components and thereby show maintenance problems. The NRC staff pointed out that high maintenance equipment can result in indications, but that the indicator looks across a broad spectrum of equipment and a few problems wHl not make a plant stand out.

(2) Grand Gull staff expressed concem about the usage of the staff's proposed indicator.  :

How it will be used and by whom are major a>ncems which have been voiced in previous projed meetings. The NRC staff explained that it would be used by the NRC staff to l monitor the industry's progress in maintenance and to provide input to senior management regarding plant performance through the following process. The indicator for a given plant would be compared against the average of its peers, and the indicator trends would also be examined. If a plant's indicator is consistently higher than the peer group average and displays an adverse trend, the plant operational data for the period (s) where the indicator exhbits the unfavorable charaderistics would be examined in detaR to determine the driving forces behind the component failures experienced during the period. Also, the staff would mock into the plant's NPRDS reporting history to determine whether this had A.27

. e

, oAvr(usy r, reso; AEOD/S804C

$% an influence on the indicator. The indicator would be used as a screening tool to trigger

, a more detailed review of plant data and experience obtainable from many sources (e.g.,

,N regional office inspections, maintenance team inspections, diagnostic evaluations, SALPs).

~

(3) Grand Gulf staff expressed concem about the characterization of the indicator. In this respect, they were concerned that each individual indicating flag, or even each individual

, component failure, could be construed as a sign of maintenance ineffectiveness. The NRC staff explained that the indicator was designed as a programmatic indicator, and as such, was not intended to track individual events.

(4) Discussion of the failure history for the radial well pumps at Grand Gulf led to identification of a case very similar to that of the charging pumps at San Onofre 2 and 3 (i.e., a case where original design engineering support, and traditional maintenance have played roles over time in the performance of equipment). In the case of the radial wen pumps, Grand Gull staff explained that the pumps have had a history of seal failures, in part caused by suspended mud intake from the river water. As river level varied, so did mud intakes. Over a period of time, systems engineering and maintenance staff have formulated an improved maintenance approach, employing PM to "get ahead" of the failures as much as possble, and they expect the pump failure rate to decrease, at which point the proposed indicator would reflect improv6d performance resulting from a maintenance program improvement. They also plan to erect a building over the pumps to protect them from the elements and facilitate detection of seal failures at the incipient stago. Extensive maintenance had not coped with detecting early failures in the past. l However, they pointed out that some random pump failure rate will persist due to " bursts" of sediment in the wells. Complicating the situation is the fact that, at certain times of the year, work cannot be perfermed on the pumps because of the danger to personnel from the high level of the Mississippi River. Therefore, the Grand Gull staff was concemed i that individual failures of this nature would be considered as caused by ineffective j maintenance, and that some failure rate would alwayn be present, since cost-benefit wouid  !

j not support a zero-failure approach to this problem.

j The NRC staff explained that for these pumps, the way to demonstrate improvement in

. the maintenance process was to track the failures before and after those improvements.

! In this sense, the failures are related to maintenance, especially within the broad context i of the Commission's policy statement. Individual failures are also fikered through the

indicator algotthm, which tends to screen random failures. However, the staff is exploring j additional ways to address the existence of a residual inherent failure rate, such as the use of a tolerance band around the indicator trend.

l I (5). Discussion of the failure narratives associated with the Grand Gull LPRM system led to identification of another similar case. In this situation, the LPRM detectors (which are the

, . first of a kind and unique to the BWR/6 design) were falling with an NPRDS failure description of "out of caibration," and a cause category of " dirty connections." The Grand l Gull staff explained that this condition was not caused by dirty connections as indicated,

, but actually was a design peculiarity unique to these specific detectors. The detectors

! were not field repairable, since the intemals were not accessble. After much interaction

! between the NSSS vendor and SERI, it was found that the root cause of the detector

! going out of calibration was a buildup on the intomal connections in the instrument. The I corrective action recommended for the problem was a capacitive discharge test which would bum off the buildup on the connections. Since there was no way to anticipate this -

l l A.38 l

i. i i

l

._ J

)

J 1

. . 044rr(u y r, sosol . AEOD/S8040 l- type of failure, the Grand Gulf staff eventually implemented a PM task that performs the ,

test before the performance of the instrument progresses to the degraded stage. Grand

. Gull staff maintained that failures of this type should not be tracked by the indicator since there was no way that the first failure of the detectors could have been prevented, and then the uniqueness of the design and inaccessibility of the detector internals made it impossible to perform any sort of preventive maintenance until a failure history of the

. Instruments could be compiled over a long enough span of time upon which to base appropriate PM.

(6) A number of cases were discussed which consisted of the reporting of incipient conditions as degraded failures. The Grand Gulf staff explained that past NPRDS reporting practices may have been somewhat conservative, and commented that incipients would today be recogn! zed and categorized more readily.

(7) " Ghost ticks'should be eliminated.

l The Grand Gulf staff uses the following activities and documents at the frequency indicated to l assess maintenance at the Grand Gull plant.

1 DAi!1 (1) Plant Status Report.

(2) Plant Tourc to monitor maintenance activities and housekeeping / plant j material conditions.

l Weekly (1) Work Order Status Report.

(2) Plant Contamination Report.

(3) Maintenance Task Tracking.

(4) Quality Deficiency Status Report.

(5) Material Nonconformance Report.

M919!r i

(1) Maintenance Performance Report.

, (2) Performance Monitoring Report.

(3) Thermal Performance Report.

(4) Operational Analysis Report.

, (5) Health Physics Summary Report.

Quartetty (1) . Quality Programs Status and Trend Analysis Report.

, (2) NPRDS Trend Report.

2 Of particular interest is the Maintenance Performance Report, which is issued on a monthly basis, and is made available to all maintenance personnel for their review. This report tracks

the following mairtenance-related information
(1) maintenance goals versus actual achievements, (2) major work Items during the month, (3) safety report, (4) occupational injury and illness, (5)

A.29 t

e ->

i .

-a

. iswr(my r, resol

( AEOD/S8040

\ ,

, , *. LERs, (6) violations, (7) radiological deficiency reports, (8) personnel contarnination report with  ;

, details, (g) personnel exposure, (10) quality deficiency reports, (11) security response to insecure  !

.,'. doors, (12) maintenance outages, (13) rnalntenance work status, (14) task tracking, (15) department overtime, and (16) budget.

Mark H. Williams, Chief
Trends and Patterns Analysis Branch
Office for Analysis and Evaluation of Operational Data

Enclosures:
As stated


ENCLOSURE 1

ATTENDANCE LIST

FEBRUARY 20-21, 1990 MEETING WITH SYSTEMS ENERGY RESOURCES, INCORPORATED

NAME                     AFFILIATION
Bill Angle               SERI
W. T. Cottle             SERI
Joel P. Dimmette, Jr.    SERI
Chuck Dugger             SERI
Norman G. Ford           SERI
Randy Hutchinson         SERI
M. A. Krupa              SERI
Ron Moomaw               SERI
Jerry C. Roberts         SERI
Steve Saunders           SERI
Warren J. Hall           SERI
H. O. Christensen        NRC/RII-SRI
Bob Dennig               NRC/AEOD
J. L. Mathis             NRC/RII
T. M. Novak              NRC/AEOD
Patrick O'Reilly         NRC/AEOD
Mark Williams            NRC/AEOD


ENCLOSURE 2

AGENDA

FEBRUARY 20-21, 1990 MEETING WITH SYSTEM ENERGY RESOURCES, INCORPORATED REGARDING MAINTENANCE INDICATORS

(1) NRC Presentation - Performance Indicator Development, Analysis Assumptions and Purpose of Meeting.

(2) Discussion of Interim Indicator Results.

(3) NPRDS Reporting of Component Failures Involving Outage-Dominating Equipment.

(4) Root Cause Analysis of Individual Component Failures of Outage-Dominating Equipment.

(5) Discussion of SERI's Programs/Approaches for Trending Equipment Failures and Failure Causes as They Relate to Maintenance.

(6) Comparison of Maintenance Trend Information.

(a) Trends Calculated with the NRC's Indicator.

(b) Trends Calculated with SERI's Indicator(s).


MEMORANDUM FOR: Thomas M. Novak, Director
                Division of Safety Programs
                Office for Analysis and Evaluation of Operational Data

FROM: Mark H. Williams, Chief
      Trends and Patterns Analysis Branch
      Division of Safety Programs
      Office for Analysis and Evaluation of Operational Data

SUBJECT: SUMMARY OF FEBRUARY 28 - MARCH 1, 1990 MEETING WITH NORTHEAST UTILITIES REGARDING MAINTENANCE INDICATOR DEVELOPMENT

On February 28 - March 1, 1990, staff from AEOD, Northeast Utilities (NU), NU's operating companies, and NUMARC met at the Northeast Utilities offices in Berlin, Connecticut to exchange information on maintenance indicators. This meeting was part of the NRC/industry Maintenance Indicator Demonstration Project. A list of meeting attendees is contained in Enclosure 1. Enclosure 2 provides the meeting agenda. On March 1, 1990, the staff also toured the Haddam Neck nuclear plant.

The NRC staff presented the detail and logic which were followed during the development of the staff's proposed maintenance indicator (MI). The purpose of this presentation was to familiarize utility personnel with all of the detail necessary for understanding the proposed indicator. In their opening remarks, NU discussed their management approach for the Millstone and Haddam Neck sites. Each unit at each site is operated as an independent entity under the direction of the unit superintendent. Within this framework of independence, each unit has its own maintenance staff and facilities, and tracks cents per kilowatt-hour at the bus bar. However, certain major aspects of the maintenance policy are established at the corporate level. For example, it is NU's policy that their nuclear plants are not allowed to enter a limiting condition for operation (LCO) solely for the purpose of performing planned maintenance. NU also has established a system-wide Production Maintenance Management System, or PMMS.

PMMS, which was first placed into operation almost ten years ago on a phased implementation basis, is now almost completely implemented, and is used to track maintenance at all of their electrical generating stations, fossil as well as nuclear. It is a computerized maintenance tracking system with fairly extensive capabilities. NU has used PMMS to: (1) identify plant equipment by means of a system-wide common nomenclature, (2) establish a dedicated planning function at each of their generating facilities, (3) establish a common maintenance work order mechanism across facilities, (4) provide a uniform work priority system, (5) provide resource forecasting and tracking on a consistent system-wide basis, and (6) provide a database of production-related information in support of management decisions.

There is an important difference between PMMS and the staff's proposed indicator. PMMS tracks work orders and associated information. The staff's proposed indicator tracks equipment failures. In order to extract failure data from PMMS, engineering analysis supported by standardized guidance, such as found in NPRDS, is required.


NU employs PMMS to generate the PMMS Performance Report on a quarterly basis. This report trends a number of indicators which NU uses to monitor maintenance performance at their plants. The contents of the PMMS Performance Report are as follows: (1) Preventive Maintenance Percentage, (2) Corrective Maintenance Backlog, (3) CM Backlog Indicator, (4) Preventive Maintenance Performance, (5) Twenty Most Worked-on Components, (6) Ten Most Worked-on Systems, and (7) Rework Percentage. Performance indicators have been used in the NU organization as management tools for about five years.

NU considers items (1), (3), and (7) above their primary maintenance indicators. The Preventive Maintenance Percentage displays a trend of the preventive work accomplished by a task department as a percentage of the total maintenance work. The CM Backlog Indicator is an indicator which was developed internally by NU. This indicator displays a curve of CM work that indicates the condition of the work backlog and the clearing rate time constant. It consists of the number of priority 3 non-outage CM work orders that are open at a point in time. This process indicator is not used to provide diagnostic feedback to the organization at the working level. The Rework Percentage displays a trend of CM and other work orders that failed a retest by operations, by quarter.
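The three primary indicators described above are simple counts and ratios over work-order records, and can be sketched as follows. This is a minimal illustration only; the record fields (kind, priority, outage, status, failed_retest) are assumptions for the sketch, not the actual PMMS schema.

```python
# Sketch of NU's three primary PMMS maintenance indicators, computed from
# generic work-order records. Field names are illustrative assumptions.

def pm_percentage(work_orders):
    """Preventive work as a percentage of all maintenance work."""
    total = len(work_orders)
    pm = sum(1 for wo in work_orders if wo["kind"] == "PM")
    return 100.0 * pm / total if total else 0.0

def cm_backlog(work_orders):
    """Open priority-3, non-outage corrective-maintenance work orders."""
    return sum(1 for wo in work_orders
               if wo["kind"] == "CM"
               and wo["priority"] == 3
               and not wo["outage"]
               and wo["status"] == "open")

def rework_percentage(work_orders):
    """Work orders that failed a post-maintenance retest, as a percentage."""
    total = len(work_orders)
    failed = sum(1 for wo in work_orders if wo.get("failed_retest"))
    return 100.0 * failed / total if total else 0.0

orders = [
    {"kind": "PM", "priority": 2, "outage": False, "status": "closed"},
    {"kind": "CM", "priority": 3, "outage": False, "status": "open"},
    {"kind": "CM", "priority": 3, "outage": False, "status": "open",
     "failed_retest": True},
    {"kind": "CM", "priority": 1, "outage": True, "status": "closed"},
]
print(pm_percentage(orders))      # 25.0
print(cm_backlog(orders))         # 2
print(rework_percentage(orders))  # 25.0
```

In practice the quarterly trend would be obtained by evaluating these functions over each quarter's work-order population.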

NU also produces a quarterly Utility Performance Report for NU management which contains: (1) capacity factor, (2) forced outage rate, (3) thermal performance (unit heat rate), (4) LERs, (5) unplanned automatic reactor trips, (6) plant design change evaluation status, (7) plant design change request status, (8) solid radioactive waste generated, (9) collective man-rem exposure, (10) total skin and clothing contaminations, (11) PMMS indicators #1 and #3, (12) NRC inspections, violations and severity level, (13) outstanding INPO recommendations, (15) NE80 contractors, and (16) enforcement conferences.

During a discussion about maintenance during outages, NU stated that each of its four units (Haddam Neck; Millstone Units 1, 2, and 3) prepares an outage report 30-60 days after the completion of a refueling outage which documents lessons learned during the outage. Within the NU organization, outage planning is done on a unit level, as opposed to the corporate level.

Usage of the NPRDS database by the NU organization was also discussed. Currently, there is a task force within the organization evaluating how NPRDS could best be used to enhance plant operations. In the past, the NU organization has not used NPRDS data very much, and since it is prepared at the corporate level, unit maintenance managers are generally not familiar with the NPRDS data for their units.

Prior to the meeting, NU was provided with examples of NPRDS-reported failures, used in constructing the proposed indicator, that the staff categorized as maintenance-related. The discussion of the history behind these failures indicated that plant staff were aware of component performance problems and had often made various adjustments to the maintenance programs in response. However, the utility determination that the performance problem originated in a marginal application of a component design resulted in their concluding that the failures were not related to maintenance. Several examples are discussed below. Since the frequency of such failures is being controlled by the maintenance program, the staff believes an increase or decrease in such failures is a measure of maintenance effectiveness.

There were a number of failures of a reactor recirculation pump pressure switch at Millstone 1 that the utility had attributed to wearout in the NPRDS failure records. On other occasions, the same switch had drifted out of specification due to unknown causes. The NU staff explained that this particular switch was a design problem that had existed since the plant was built. It was essentially a misapplication of design which utility management had made a decision to live with, and had charged the maintenance department to keep the equipment operating, given this deficiency. NU stated that a temporary solution to the problem had been implemented. This consisted of an increased surveillance frequency, which was established to catch the instrument drift while it was still in the incipient stage, before the instrument's function became degraded.

Another example consisted of three failures of main feedwater pump seals at Millstone 3. In this case, according to the utility, the original pump seal design was marginal, especially at low flow conditions, when flashing led to overheating of the seal and subsequent failure. As explained, this was a misapplication of design, for which utility management had decided that continuing to fix seal failures was more cost-effective than making a major design modification. The maintenance organization was then faced with the responsibility of keeping the pumps in operating condition in spite of the seal problem. These failures were either categorized as due to unknown causes or attributed to design problems.

During the meeting, NU staff expressed a number of concerns about the usefulness of the proposed indicator. The need for resources to respond to another indicator (fielding questions from the NRC and various PUCs), with the likely outcome that these resources would be diverted from existing staff now devoted to utility performance trending, was a major concern. In the NRC staff's view, the intended use of the proposed indicator should help allay this concern.

The utility staff also felt that the proposed indicator was difficult to interpret, and offered little diagnostic information for corrective action. As a programmatic indicator, diagnostic capability was not a prime concern originally, but comments from other Demonstration Project participants have resulted in modifications, such as cutting the indicator by component type, to enhance its usefulness to plant staff.

The utility staff also felt that the quality of NPRDS reporting may not be high enough for this important use. The tendency for NPRDS data to show concentrations of failures discovered in outages, and the potential for penalizing proactive maintenance if incipient conditions were reported as degraded failures, were raised as issues. NRC staff actions to adjust indicator interpretation based on various segments of the fuel cycle, and examination of reporting patterns in interpreting the indicator, were cited by the staff as potential remedies for these concerns.

Lastly, NU staff were concerned about use of a single indicator to track maintenance. The staff explained that no indicator is used in the absence of other information, including other indicators and information from various types of inspections. Further, the proposed indicator was developed as an example of the type of indicator needed, and was not intended to be the only indicator based on component failure data.

Mark H. Williams, Chief
Trends and Patterns Analysis Branch
Office for Analysis and Evaluation of Operational Data

Enclosure:
As stated


ENCLOSURE 1

ATTENDANCE LIST

FEBRUARY 28 - MARCH 1, 1990 MEETING WITH NORTHEAST UTILITIES

NAME                  AFFILIATION
Bob Dennig            NRC/AEOD
T. M. Novak           NRC/AEOD
Patrick O'Reilly      NRC/AEOD
Mark Williams         NRC/AEOD
Thomas Laats          EG&G-Idaho
Howard Stromberg      EG&G-Idaho
Peter M. Austin       Northeast Utilities
Mike Ciccone          Northeast Utilities
Tom Dente             Northeast Utilities
Neil Herzig           Northeast Utilities
William J. Nadeau     Northeast Utilities
Wayne D. Romberg      Northeast Utilities
Jere LaPlatney        Connecticut Yankee Atomic Power Company
Neil Bergh            Northeast Nuclear Energy Company
Peter J. Przekop      Northeast Nuclear Energy Company
Ron Rothgeb           Northeast Nuclear Energy Company
Walt Smith            NUMARC
Tom Tipton            NUMARC


ENCLOSURE 2

AGENDA

FEBRUARY 28 - MARCH 1, 1990 MEETING WITH NORTHEAST UTILITIES REGARDING MAINTENANCE INDICATORS

(1) NRC Presentation - Performance Indicator Development, Analysis Assumptions and Purpose of Meeting.

(2) Discussion of Interim Indicator Results.

(3) NPRDS Reporting of Component Failures Involving Outage-Dominating Equipment.

(4) Root Cause Analysis of Individual Component Failures of Outage-Dominating Equipment.

(5) Discussion of Northeast Utilities' Programs/Approaches for Trending Equipment Failures and Failure Causes as They Relate to Maintenance.

(6) Comparison of Maintenance Trend Information.

(a) Trends Calculated with the NRC's Indicator.

(b) Trends Calculated with Northeast Utilities' Indicator(s).

APPENDIX B

MAINTENANCE INDICATOR DEMONSTRATION PROJECT DETAILS

This Appendix discusses the details of a typical meeting with one of the utility participants in the Demonstration Project. It also contains a roster of all the utility staff and consultants that participated in these meetings, along with copies of the NRC standard presentation slides used during each of these individual meetings.


The second meeting of the NRC/Industry Maintenance Indicator Demonstration Project took place on October 13, 1989. At this meeting, each of the six project utility-participants presented their preliminary comments regarding the NRC staff's proposed maintenance indicator and summarized the results of their reviews of the plant-specific set of NPRDS component failures which the NRC staff had provided to each utility-participant at the first meeting of the project on September 12, 1989. In order to obtain more details concerning each utility's review, the NRC staff held a series of two-day meetings with each of the six project utility-participants over the five-month period November 1989 - March 1990. These meetings were held either at the utility's headquarters office or at one of the utility's plant sites. Table B-1 shows the date and location for each of the six meetings. Prior to each of the six meetings, the NRC staff sent a letter to the senior management of the respective utility-participant acknowledging the meeting date and transmitting a proposed agenda for the meeting. To ensure consistency in the information discussed during the meetings, a standard agenda was used for the series of six meetings. Table B-2 contains the standard meeting agenda. Table B-3 identifies the utility staff and consultants who participated in these six meetings.

Typically, each meeting began with introductory remarks by the utility's Senior Vice President - Nuclear or his designated representative. The NRC staff then gave a detailed presentation on the development and validation of the staff's proposed maintenance indicator. Using a standard set of slides (Table B-4), the NRC staff described the proposed indicator concept, explained how the indicator was constructed, discussed the indicator validation process, and, for illustrative purposes, presented the indicator for a typical plant. The staff's presentation was designed to familiarize utility personnel with all of the details necessary for understanding the indicator.

Next, the NRC staff presented the indicator for the utility's plants. In the discussion that ensued, the NRC staff related to the utility staff their interpretation of the specific plant's indicator: whether the indicator for the plant was higher than, below, or about the average for that plant's peer group, and whether any adverse trends in the indicator were noted. In turn, the utility staff provided their comments on the proposed indicator based on their review of the failure data which were monitored by the indicator. This discussion of the indicator was usually followed by a discussion of the utility's NPRDS reporting philosophy (tendency to over-report vs. under-report), how the reporting is handled (on a unit basis or at the corporate level), and who determines what information from the work orders is reported to NPRDS.
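The peer-group comparison and adverse-trend check that the staff performed for each plant can be sketched as follows. This is an illustration only: the least-squares slope test and the thresholds are assumptions made for the sketch, not the NRC staff's actual interpretation method.

```python
# Illustrative sketch: compare a plant's monthly maintenance-related
# failure counts to the peer-group average and flag a rising (adverse)
# trend. The slope test and thresholds are assumptions for illustration.

def mean(xs):
    return sum(xs) / len(xs)

def slope(counts):
    """Least-squares slope of counts versus month index."""
    n = len(counts)
    xs = list(range(n))
    x_bar, y_bar = mean(xs), mean(counts)
    num = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, counts))
    den = sum((x - x_bar) ** 2 for x in xs)
    return num / den

def interpret(plant_counts, peer_avg_per_month, rise_threshold=0.1):
    total = sum(plant_counts)
    peer_total = peer_avg_per_month * len(plant_counts)
    level = ("above peer average" if total > peer_total
             else "below peer average" if total < peer_total
             else "at peer average")
    trend = ("adverse trend" if slope(plant_counts) > rise_threshold
             else "no adverse trend")
    return level, trend

counts = [2, 3, 2, 4, 5, 6, 5, 7]   # monthly maintenance-related failures
print(interpret(counts, peer_avg_per_month=3.0))
# → ('above peer average', 'adverse trend')
```

A real review would, of course, weigh this arithmetic together with the qualitative information exchanged at the meetings.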

The NRC staff and the utility staff then embarked on a detailed discussion of the root cause of a specific group of NPRDS failure records that contributed to the indicating flags generated by the NRC staff's proposed indicator. A sample set of failures used in this discussion is shown in Table B-5. The records discussed consisted of failures which the utility had categorized as attributable to causes other than maintenance (e.g., engineering/design, wearout, unknown, random failure), but which the NRC staff, applying the scope of the Commission's definition of maintenance as specified in its Revised Maintenance Policy Statement issued December 4, 1989, had classified as maintenance-related. Generally, the staff's review of the NPRDS failure narratives for the records in question had resulted in about 70-80% of the failures reviewed being ascribed to maintenance. In contrast, the utility's review of the same set of failure records, using all of the detailed information about the individual failures at the utility's disposal and applying the industry's much narrower view of the definition of maintenance, usually resulted in a much smaller percentage (5-15%) being characterized as maintenance-related. Typically, the majority of the failures were attributed to wearout or to unknown causes.

Out of these discussions arose issues such as whether first failures of components that had been in service for relatively long periods of time should be classified as maintenance-related.

Another issue that was identified during these discussions was whether the failure of a problem component, for which a management decision had been made to continue to maintain the component in operable condition as opposed to implementing a major (and, therefore, expensive) design modification (e.g., the charging pumps at San Onofre 2 and 3), should be captured as a maintenance-related failure.

A related issue which originated from these discussions was the discovery that, in the interest of conservatism, most of the utilities in some cases had reported what were apparently incipient conditions as degraded failures. Such over-reporting would have a direct adverse effect on the NRC staff's proposed indicator, since the indicator was originally designed to consider only degraded and immediate failures, not incipient conditions.

These discussions enabled each of the two parties to better understand the other's perspective of maintenance. Sometimes the utility staff changed their position on a given failure, and agreed with the NRC staff that the failure was maintenance-related. In other cases, the NRC staff agreed with the utility's position. The end result of these discussions was generally that the percentage of the total number of failures attributed to maintenance-related causes might change by as much as 10%. However, as far as the NRC staff was concerned, the majority of the component failures that comprised the indicator were still maintenance-related, and their original conclusion on this issue was still valid.

The utility staff then discussed their programs for monitoring trends in maintenance. For the most part, these consisted of plant-level performance indicators which track the maintenance process (termed process indicators in AEOD/S804A and S804B). Included in this category are the three INPO performance indicators that are related to maintenance. These are Corrective Maintenance Backlog, Ratio of Preventive Maintenance to Total Maintenance, and Percentage of Preventive Maintenance Missed. Some utilities track these indicators in a separate formal report which the plant staff prepares for senior management on a regular basis. Other utilities include the maintenance-related indicators in the overall plant performance indicator report that is issued periodically to management. One utility has developed its own maintenance indicator which it tracks in a special maintenance performance report that is issued on a periodic basis. Another utility did not have any formal report which tracked maintenance indicators.

Finally, the last item on the meeting agenda was a comparison of maintenance trend information calculated with the NRC staff's proposed indicator and the maintenance trend information calculated with the utility's indicator(s). In this case, the only available trend information was that provided by the NRC staff's proposed indicator. None of the utilities visited had a programmatic indicator that is used to routinely monitor equipment performance and feed back that information to the organization at the working level. Consequently, the discussions which took place with each utility regarding this agenda item were primarily qualitative.

Following all of the meetings except one, the NRC staff was given a tour of the plant site conducted by the utility staff.
Table B-1 NRC Staff Meetings with Individual Project Utility-Participants

Meeting Dates     Project Utility-Participant               Meeting Location
11/29-11/30/89    Commonwealth Edison Company               Commonwealth Edison Company Office - Chicago, IL
12/12-12/13/89    Southern California Edison Company        San Onofre Plant Site
01/09-01/10/90    Duke Power Company                        Oconee Plant Site
01/18-01/19/90    Rochester Gas and Electric Corporation    Ginna Plant Site
02/20-02/21/90    Systems Energy Resources, Inc.            Grand Gulf Plant Site
02/28-03/01/90    Northeast Utilities                       Northeast Utilities Office - Berlin, CT
i l

i 4

8 B.4

owrtuer t. rsool AEOD/S804C

. 3 .

. Table B-2 s

. Agenda Used in IAsetings with Six Project Utility Participants

. (1) NRC Presentation - Performarre Indicator Development, Analysis Assumptions and Purpose of Meeting.

(2) Discussion of Interim Indicator Results.

(3) NPRDS Reporting of Component Failures involving Outa0e-Dominating Equipment.

(4) Root Cause Analysis of Individu.al Component Failures of Outage-Dominating Equipment. ,

1 I

(5) Discussion of Project Utility-Participant's Pro 0 rams / Approaches for Trending Equipment Failures and Failure Causes as They Relate to Maintenance.

(6) Comparison of Maintenance Trend Information.

(a) Trends Calculated with the NRC's Indicator.

(b) Trendt, Calculated with the Utuity-Participant's Indicator (s).

4 i

4 d

4

.)

t l

}

i

'l

! s.$

, ,r -- ----.-w-,m-, , w,v , en - - - , --- - , , - - .

. + , ,

' . ow7tusy r reso) AEOD/S8040 -

    • . Table B-3 Utility Staff and Consultants Participating in Demonstration Project Meetings

. M Affiliation Jere LaPlatney Connecticut Yankee Atomic Power Company

- Ron Rothgeb Northeast Nuclear Energy Company Neil Bergh Northeast Nuclear Energy Company Peter J. Przekop Northeast Nuclear Energy Company Tom Dente Northeast Utilities Mike Chiccone Northeast Utilities Neil Herzig Northeast Utilities Peter Austin Northeast Utilities William Nadeau Northeast Utilities .

Wayne Romberg Northeast Utilities l Paul Kuhel Commonwealth Edison Company i Commonwealth Edison Company l Martin G. Kief Don Eggett Commonwealth Edison Company Robert Lazon Commonwealth Edison Company Thomas Kovach Commonwealth Edison Company Lee A. Suas Commonwealth Edison Company -l Brian Katz Southem California Edison Company Don Evans Southem Califomia Edson Company Ralph Sanders Southem Caillomia Edison Company .

Robin Baker Southem California Edison Company L D. Brevig Southem Cailfornia Edison Company i Fred Briggs Southem California Edison Company Jad Rainsberry Southern Califomia Edison Company Loyd Wright Southem California Edison Company R. H. Bridenbeder Southern Califomia Edison Company l

- Harold Ray Southem California Edison Company M. E. Rodin Southem California Edison Company l

, Barbara Aden Southem California Edison Company i Bob Levine Southem California Edison Company Wayne Hallman Duke Power Company

[ Bill Foster Duke Power Company Ronnie Henderson Duke Power Company Sam Hamtid Duke Power Company

Stuart Lindsey Duke Power Company Dendy Clardy Duke Power Company Bill Angle Systems Energy Resources, incorporated W. T. Cottle Systems Energy Resources, incorporated j Joel P. Dimmette, Jr. Systems Energy Resources, Incorporated Chud Dugger Systems Energy Resources, incorporated

^

Norman G. Ford Systems Energy Resources, incorporated Randy Hutchinson Systems Energy Resources, incorporated M. A. Krupa Systems Energy Resources, Ir,wweted Ron Moomew Systems Energy Resources, incorporated

! Jerry Roberts - Systems Energy Resources, incorporated l Steve Sanders Systems Energy Resources, incorporated i

s.s i

w -- - - - . - - , - - -- - - -

)

ofwY(unt r reso)

O f AEODIS804C l

Table B 3 (Continued)

Utility Staff and Consultants Participating in Demonstration Project Meetings

. Ng,n)g Nfiliation John Fischer Rochester Gas & Electric Corporation Mark Flaherty Rochester Gas & Electric Corporation James Huff Rochester Gas & Electric Corporation Bob Smith Rochester Gas & Electric Corporation Tom Marlow Rochester Gas & Electric Corporation Herb Van Houte Rochester Gas & Electric Corporation Gerald Wahl Rochester Gas & Electric Corporation Joe Widay Rochester Gas & Electric Corporation Bill Zornow Rochester Gas & Electric Corporation Jim Huzdovich ATESI John Wilson ATESI l

4 m

f 1

I B7

l . .

i . .

) l i

. OMrrtuey1.Im) AEODIS804C l .

. s .

! , . Table B.4 l .

-NRC Standard Presentation Sildes i Slide No. Sublect

+

1 Current Indicators - Simple List i -

2 P.1. Report page - Finger Charts 3 P. l Report page Trend Charts l 4 P. l. Report page - Part 11 event descriptions 5 Commission Direction on Maintenance Pls - Background j

6 LER Causes & Corrective Actions - Ind. Avg. w/ maintenance j 7 MEl Summary Description - Failure rate increase with causes 8 MEl Trend totals of prior slide portrayed over time for a plant 1

9 ODE Equipment Selection Basis 10 MEl ODE Systems Selected I

11 Key Aspects of the indicator 12 Indicator Display candidate (with cumulative curve) l 13 Validation Activities 14 MEl vs. Cause Code Correlation ,

15 MEl BWR & PWR Populations (2 yr. totals) i 16 MEl Trend for PWRs (2 yr. regression line) 17 Demonstration Project Background i

18 Demonstration Project Utility membership s.s

PMFTtuer t.1900)

O AEODIS804C l

{ . Silde 1 r

4 CURRENT INDICATORS

, i

~

! e Automatlo Scrame While Critloal e Safety Systems Actuatione I

~

  • Signifloent Evente l
  • Safety System Falluree j I

e Forood Outage Rates l

e Equipment Forced Outsees/1000 Crit. Hre.  !

1

e Collective Radiation Exposure. ,

I i

e Caute Codes i

' .emmes ne see .

I

r 4

Slide 2 Slide 3 )

l

+

l inets E E*.=.',==

, ' e nia s e.53

= = =G.

==** , .. ,_

f :l s -

.-l

e

_..m a e- = . , em

,, n. ,N

=

  • s 'a-')tr' t'It 1*

s eime

  • i

.. .44*e. .

Qw ] i

  • ~.,*~ I
W[.,W ~~.or m a* .se *
  • i 7_";" *

,tw . . *.'l" .t*.*"*

. .e, is! = =

~:::7.!,t= * ..

) =W # #

e = t .e #4 ll

.ll J'E "'EJ8 4... # h * **** [* #'

l l

8.9 l

q owrder r. r'eso; AEOD/S804C I' I. ,

a .

4 E Slide 4 l .

t 4'

ess==mme

,-m=>

.n .. ..

. . E.

i -

!?.L..*a.'* ".::r.E .

"..".. 2.".

.. =* . .- -

  • gg .,, ,,.,,, ,,y.,.,.,., .;,,p. .

i . , ,

.u.,.,.

4 1

1 . . .. -... ..

,4 .

.uH et tvuute ese se*4

.. =. .. . -

es . . .

'" '""." "."= l'll".llL"".lll l "."#.".&TLll." ""'"" "
  • 1, we . . . .

a '"* * . .. . .

,i 'lll"."" ". .l.l"-llllll". - :l. '.. .. . . - -. . .

a .

we . ..

" . s . .

E**1." """." .. ". .. I'"*.ET -.

l l 4

n s.esa = n-s i

.e.n.. .

es, ., , .

.. s a

E5"..*.**."".""."*.#.*."."..

es,.,

s

, .co. . .. I *

      • "" E..E." """.' " "."....

. . ".*. *." EE'." "'." l'AI* *. *.*i3E E i

4 Slide 5 Slide 6 1

o l MAINTENANCE MONITORING comecru enous

! EVENTS WITH kWNTENANCE CAUSES s N****"" OIRECMON a MAftTENANCE SeeQATOR$

e A300 8804A Peusses hidinalare (10/ gal e A500 8804D4ftselfsenses ladiostare (3/000 j

  • PesouM e dessenemost i

e MAptTENANCE N N 0su888 Ctyrocfhe Acthyts ebeenMat and Punoglen - =. .e

  • Reguietary and Peter Steensent

!

  • 15 600 NTH INTEINhL m
  • teabitenenee ladienter Use .

e eenff Evolussten 4

. .== ~

e #fTSRACT.I.O.N.-WITH er m #4000TIW i

t i

i e ,

i S.10 i

t

AEootseo40

(

~

, curr tuer t. Ineos

~

{

Slide 7 / Slide 8

Slide 7, "Maintenance Effectiveness Monitoring," summarizes the MEI (failure rate increase with causes); its graphic is not reproducible in this copy. Slide 8, "Maintenance Effectiveness Indicator (Trend)," plots the number of indications by period as monthly counts together with a cumulative curve; the bar chart is not reproducible in this copy.
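The MEI trend display of Slide 8 charts monthly counts of maintenance-related failure indications alongside their running cumulative total. The underlying tabulation can be sketched as follows; the input format ("YYYY-MM" strings, one per indication) is an assumption made for the sketch, not the staff's actual data format.

```python
# Sketch of the data behind an MEI trend display such as Slide 8:
# monthly counts of maintenance-related failure indications and their
# running cumulative total. Input format is an illustrative assumption.
from collections import Counter

def mei_trend(failure_months):
    """failure_months: list of 'YYYY-MM' strings, one per indication.
    Returns (sorted months, monthly counts, cumulative counts)."""
    counts = Counter(failure_months)
    months = sorted(counts)
    monthly = [counts[m] for m in months]
    cumulative = []
    total = 0
    for c in monthly:
        total += c
        cumulative.append(total)
    return months, monthly, cumulative

months, monthly, cum = mei_trend(
    ["1989-01", "1989-01", "1989-02", "1989-04", "1989-04", "1989-04"])
print(months)   # ['1989-01', '1989-02', '1989-04']
print(monthly)  # [2, 1, 3]
print(cum)      # [2, 3, 6]
```

Plotting `monthly` as bars and `cum` as a line against `months` reproduces the general shape of the Slide 8 display.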

Slide 9 / Slide 10 / Slide 11

Slide 9, "ODE System and Component Selection," gives the equipment selection basis (systems and components selected per plant per vendor, and a percentage of reportable components per vendor, with the selection list compared against reference reports); Slide 10 lists the MEI systems monitored (outage-dominating equipment). Slide 11, "Maintenance Effectiveness Indicator (Key Aspects)," summarizes key aspects of the indicator. The remaining slide text is not legible in this copy.

Slide 12 / Slide 13

Slide 12, "Example MEI Display," shows a candidate indicator display with a cumulative curve; the graphic is not reproducible in this copy. Slide 13 reads:

VALIDATION ACTIVITIES

• Root-Cause Analysis
• LER Correlation Analysis
• Inspection Report Analysis
• Other Correlation Analyses - Time Lag
• Correlations with Other [word illegible] - FRV - MFP

Slide 14: PWR MEI SCATTERPLOT (MEI vs. Cause Codes)

[Scatterplot residue; data points illegible in the source.]
Slide 15: DISTRIBUTION OF CALCULATED MEI (number of plants)

Slide 16: MAINT. EFFECTIVENESS INDICATOR TREND (all mature PWRs; number of indications by month/year)

[Histogram and trend-plot residue; data illegible in the source.]


Slide 17: NRC/UTILITY DEMONSTRATION PROJECT

• INDUSTRY MEETINGS
  • Initial meeting - July 19, 1989 - NUMARC agreement on coordination
  • Task Group meeting - September 12, 1989 - AEOD presents results and provides plant-specific data
  • October 13, 1989 - Industry feedback; issues - definitions and reporting
• Utility visits - Trend comparisons and evaluation; issues in controversy
• Schedule - Development completion - 3/90

Slide 18: DEMONSTRATION PROJECT

• INDUSTRY MEMBERSHIP
  Commonwealth Edison
  Duke Power
  Mississippi Power & Light
  Northeast Utilities
  Rochester Gas & Electric
  Southern California Edison
  INPO
  NUMARC


Table B.5 Examples: Failure Narratives Categorized as Maintenance

1. Event: CVCS boric acid blender control/isolation valve failure
Discovery Date: [illegible]/87
Cause Cat.: Unknown
Cause Desc.: Normal/Abnormal Wear; Corrosion
Narrative: The boric acid blender control/isolation valve was found not fully closed after conducting a procedure while the unit was being cooled down. The valve internals were heavily corroded and eroded. WR 131870. The [illegible], gasket, and bonnet were replaced; the valve was reassembled and tested for proper operation.
Comment: The valve had a previous failure in 3/87 when the valve plug was discovered broken off and the valve seat was badly eroded; the plug assembly, seat ring, and cage were then replaced.

2. Event: CVCS seal water injection filter inlet isolation valve failure
Discovery Date: 11/14/87
Cause Cat.: Unknown
Cause Desc.: Out of Mechanical Adjustment; Previous Repair/Installation Status
Narrative: Seal water injection filter 1A inlet isolation valve was leaking; it was found by an operator while the unit was coming up in power after a refueling outage. The valve diaphragm was crushed and the valve disc had wrong measurements; the root cause was unknown. The disc and valve diaphragm were replaced, the valve was reassembled, and proper operation and no leakage were verified. (WR [illegible])
Comment: The cause description associating the failure with previous repair suggests the failure may have been maintenance related.

3. Event: CVCS charging flow control valve operator failure
Discovery Date: [illegible]/87
Cause Cat.: Unknown
Cause Desc.: Out of Calibration
Narrative: The Chemical and Volume Control charging flow control valve was swinging when one charging pump was placed into service; the valve stabilized when the other pump was placed in service. The unit was at full power. The cause was improper adjustment of the gain on the control valve positioning circuit card. The circuit card gain was readjusted to the proper value for control valve operation, and the operator performed as required. WR 130726.
Comment: The improper adjustment of the gain indicates this may have been a maintenance-related failure.

4. Event: CVCS volume control tank valve operator failure
Discovery Date: [illegible]/87
Cause Cat.: Unknown
Cause Desc.: Burned/Burned Out; Mechanical Damage
Narrative: The volume control tank outlet control valve would not open after closing for safety injection during startup. The valve operator had a broken drive nut, apparently broken when pulling the disc off its seat; the operator had excessive close thrust on the valve. The burned-up motor and broken drive nut were replaced, and the actuator was reset to ensure proper torque output. WR 96133, WR 64908, WR 131717.
Comment: The problems described in the narrative of the valve failure suggest that the failure may have been maintenance related.

5. Event: CVCS cation bed demineralizer supply valve failure; stem broke
Discovery Date: 3/22/[illegible]
Cause Cat.: Unknown
Cause Desc.: Normal/Abnormal Wear; Aging/Cyclic Fatigue
Narrative: The cation bed demineralizer supply valve was found inoperable by the operator. The unit was at power. The stem broke on the valve and pulled out of the assembly; the valve had worn internals. WR 134301. The bonnet assembly, gasket, diaphragm, and O-rings were replaced; the valve was torqued and functionally verified as in proper operation.
Comment: This valve failed previously on 11/4/87 when a chain link to the valve operator fell off a sprocket; the valve was also leaking due to normal wear of the internals. Two master links in the chain were replaced and a new diaphragm stem and bonnet were installed.

6. Event: CVCS valve operator on the charging flow control valve failed
Discovery Date: 7/7/87
Cause Cat.: Unknown
Cause Desc.: Out of Calibration
Narrative: The charging flow control valve was leaking by; it was found by an operator while the unit was being refueled. The valve operator was not closing fully due to a low air supply pressure, and the valve was not seating properly. WR 006801. The air supply pressure was adjusted, as was the valve travel, and the valve was checked for proper operation.
Comment: The improper air supply pressure indicates this failure may have been maintenance related.

7. Event: Reactor Protection and Logic system SG flow transmitter (MCFFT5000) out of calibration
Discovery Date: 3/13/[illegible]
Cause Cat.: Unknown
Cause Desc.: Out of Calibration
Narrative: Steam generator feedwater flow transmitter for channel #1 failed low. The plant was operating normally. The transmitter was found out of calibration for an unknown reason and was recalibrated. W/R 122301.
Comment: This transmitter had the same problem twice previously; in each of those cases the transmitter was recalibrated and the cause category was unknown.

8. Event: Reactor Protection and Logic system main steam flow transmitter (MSMFT5000) out of calibration and water in junction box
Discovery Date: [illegible]/87
Cause Cat.: Unknown
Cause Desc.: Foreign/Incorrect Material; Out of Calibration
Narrative: Main steam flow transmitter channel No. 3 from steam generator "C" was found out of calibration during a refueling surveillance test. The transmitter had water in the junction box below it and was out of tolerance by 4.7% high throughout the range. The junction box was drained, all mechanical connections were tightened, and the transmitter was calibrated for proper operation. WR [illegible].
Comment: The presence of water in the junction box and the need to tighten all connections indicate that the failure may have been maintenance related.

APPENDIX C

INDICATOR TECHNICAL ISSUES

Appendix C contains details of the algorithm methods being explored to address concerns expressed during the Demonstration Project over how the proposed indicator introduced "ghost" indications and suppressed "shadow" indications.


APPENDIX C

INDICATOR TECHNICAL ISSUES

As initially presented in AEOD/S804B, the maintenance indicator used a simple computational algorithm that compared failure counts over a sliding five-month time interval. Only when a selected threshold value was exceeded did it flag the comparative change as significant. The indicator was based on selected components in selected systems, and it trended the summation of the cumulative indicator flags for each system considered, based on all component failure indications within these selected systems. All failures of the equipment as reported to NPRDS were included if they were of an immediate or degraded nature; reported incipient failures were excluded. As a result of this initial construction and bases, several compromises were introduced into the indicator's precision and usefulness to utility staffs. These included being a system-based rather than a component-based indicator, tracking only some of the systems reportable to NPRDS with exclusion of most safety systems, introduction of "ghost" indications, and suppression of "shadow" indications. During the Demonstration Project, several methods and modifications have been explored to address these problems.

Algorithm Refinements

The algorithm used in constructing the indicator was very simple. It processed the selected NPRDS failures by first counting the failures by calendar month using the NPRDS failure discovery date. It then looked for a relative increase in the failure frequency within a moving five-month window, comparing the average number of failures in the last two months to the average number of failures in the first three months. When this difference exceeded a fixed threshold value, a marker was assigned to the latest month of the five-month period. If the failure count for the fourth month is high enough, however, the overall average for the fourth and fifth months can be great enough to produce an indication in the fifth month even when there were zero failures in the fifth month. This "ghost tick" phenomenon was identified early in the development of the proposed indicator, but the formula was not modified since it was felt that sensitivity to the magnitude of a failure jump was desirable and the precise placement of indications was not critical.
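The original computation can be sketched as follows. This is an illustrative reconstruction from the description above, not code from the report; the function and variable names are ours, and the 1.01 threshold is taken from the three-month averaging discussion later in this appendix.

```python
# Illustrative sketch of the original MEI flagging algorithm as described
# above; function and variable names are ours, not the report's.
def original_indications(failures, threshold=1.01):
    """failures: monthly failure counts, oldest first.
    Returns 0-based indices of months receiving an indication."""
    flagged = []
    for i in range(4, len(failures)):
        window = failures[i - 4:i + 1]           # sliding five-month window
        first3_avg = sum(window[:3]) / 3.0       # average of first three months
        last2_avg = sum(window[3:]) / 2.0        # average of last two months
        if last2_avg - first3_avg >= threshold:  # fixed threshold exceeded
            flagged.append(i)                    # marker on the latest month
    return flagged

# "Ghost tick": a high fourth-month count flags the fifth month even
# though the fifth month itself had zero failures.
print(original_indications([0, 0, 0, 4, 0]))     # [4]
```

The last line reproduces the ghost-tick behavior: the fifth month is flagged despite having no failures, because the fourth month's count dominates the two-month average.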

Conversely, the original calculation averaging also led to some significant failure increases not generating indications. This "shadowing" of indications occurred when significant increases in failures in a preceding month, when included in the three-month average used in the algorithm, overshadowed the two-month average associated with the later failure increase. This phenomenon was also recognized during the indicator development, but the lack of indication was not considered a problem given the anticipated way that the indicator was meant to be used.

Two revised calculational methods are being explored to eliminate the "ghost ticks" while capturing "shadow ticks," thereby yielding a more precise set of indications. Both of these exploratory calculational methods still employ the same sliding five-month time window used in the original algorithm. They differ from the original algorithm in the methods used to treat the failure information within the five-month window.

Three-month Averaging: In the three-month averaging method, the algorithm is applied to the failure data as originally proposed. If the average number of failures for the last two months of the five-month window exceeds the average number of failures for the first three months of the time window by the threshold limit of 1.01, the algorithm calculation is satisfied such that an indication would be generated for the fifth month. At this point, a check of this indication is made to verify that it is not a ghost tick. This check is performed by averaging the values of the failure counts in the first three months of the time window being considered. If the actual failure count value in the fifth month exceeds the first three-month average for this window, the indication is permitted to remain. If the value of the fifth month does not exceed the average value of the first three months, then the indication is eliminated.

If the average of the last two months of the five-month window does not exceed the average of the first three months by the threshold limit, checks are made to determine if an indication should be generated but is being "shadowed" by previous recent failure histories. This check is performed by averaging the values of the failure counts in the first three months of the time window and substituting this average value for the highest value in the three-month period. The algorithm is then completed using the actual failure values for the last two months of the five-month period. If the threshold value of the algorithm is now exceeded, an indication is generated and retained for this fifth month.
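The three-month averaging method, combining the ghost check and the shadow check described above, can be sketched as follows (again an illustrative reconstruction; names and the exact comparison form are our assumptions):

```python
# Illustrative sketch of the three-month averaging refinement described
# above: an indication must survive the ghost check, and a shadow check
# can add indications the plain comparison misses.
def three_month_indications(failures, threshold=1.01):
    flagged = []
    for i in range(4, len(failures)):
        window = failures[i - 4:i + 1]
        first3 = window[:3]
        first3_avg = sum(first3) / 3.0
        last2_avg = sum(window[3:]) / 2.0
        if last2_avg - first3_avg >= threshold:
            # Ghost check: keep the indication only if the actual
            # fifth-month count exceeds the first three months' average.
            if window[4] > first3_avg:
                flagged.append(i)
        else:
            # Shadow check: substitute the three-month average for the
            # highest of the first three months, then recompare.
            damped = sorted(first3)[:2] + [first3_avg]
            if last2_avg - sum(damped) / 3.0 >= threshold:
                flagged.append(i)
    return flagged

print(three_month_indications([0, 0, 0, 4, 0]))  # ghost eliminated: []
print(three_month_indications([0, 5, 0, 2, 2]))  # shadow captured: [4]
```

In the second call, the isolated spike of five failures inflates the three-month average enough to shadow the later increase under the original comparison; replacing the spike with the average uncovers the indication.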

Five-month Averaging: The five-month averaging method uses the same algorithm and threshold as used in the original indicator. The difference occurs once a failure indication is generated. In the five-month averaging method, once an indication is generated, the data values to be used in subsequent calculations are revised. This is accomplished by substituting the average value of the failures for the actual values of the failures in the five-month time window that resulted in an indication being generated for the fifth month. The time frame is then shifted one month and the original algorithm is applied, but now the first four months of the five-month window are average failure values, not actual values. If no failure indication is generated for this new time window, the window is shifted another month, with the first three months of the window retaining the old average value and the last two months containing actual monthly failure counts. If no indications are generated, the window is shifted again and the process is repeated. This continues until a new indication is generated. Once a new indication is found, the actual failure values for the five-month window involved in the new indication are retrieved, if necessary, and a new five-month average is determined. These average values are then substituted for the actual values and the process outlined above is repeated.
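The five-month averaging method can be sketched similarly (an illustrative reconstruction under the same assumptions as before); the smoothing of each flagged window is what suppresses ghost ticks and keeps one spike from shadowing later increases:

```python
# Illustrative sketch of the five-month averaging refinement described
# above: after each indication, the five actual monthly counts in the
# flagged window are replaced by their average for subsequent windows.
def five_month_indications(failures, threshold=1.01):
    flagged = []
    working = [float(f) for f in failures]   # values used in calculations
    for i in range(4, len(failures)):
        window = working[i - 4:i + 1]
        if sum(window[3:]) / 2.0 - sum(window[:3]) / 3.0 >= threshold:
            flagged.append(i)
            # Retrieve the actual counts for this window and substitute
            # their five-month average into the working series.
            avg = sum(failures[i - 4:i + 1]) / 5.0
            working[i - 4:i + 1] = [avg] * 5
    return flagged

# The jump is flagged once; the ghost that the original algorithm would
# raise in the following zero-failure month is suppressed.
print(five_month_indications([0, 0, 0, 0, 4, 0]))  # [4]
```

Here the spike month is flagged, the window is smoothed to its average, and the next window's comparison no longer fires on the zero-failure month.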

These processes are continued for the entire time period under consideration for both the system-based and the component-based sets of NPRDS equipment failures for each plant. Comparative graphs of the cumulative results of these efforts are plotted. The following examples illustrate how the revised algorithms compare with each other and with the original approach. The examples are based on actual NPRDS failure data for plants which were represented in the Demonstration Project.

In these examples, an "F" denotes an indication found by all three methods, a "G" represents ghost indications that are eliminated by the revised calculation methods, and an "S" notes a shadow indication which is added by a revised calculation method.

In the first example, a plant experienced 57 failures in systems and components used in constructing the original indicator. Of these, 41 failures were experienced in just two systems. These 41 failures resulted in the generation of a total of eight indications, four in each of the two systems, when the original algorithm was applied. The remaining 16 failures were distributed among six other systems, and these failures resulted in no additional indications. The distribution of the 41 failures between the two systems is shown in Figure C.1. Included in this figure are the comparative indications generated when the original and the two revised algorithms are applied to this data.

[Figure C.1 table: monthly failure counts for Systems A and B over the period, with indications marked for the Original, 3-Month, and 5-Month algorithms ("F" for indications common to all three methods, "G" for ghost indications eliminated by the revised methods); individual values are partially illegible in the source.]

Figure C.1 Example Application of Various Algorithms

In this example, both of the revised algorithms eliminated three "ghost" indications. The failure distribution was such that neither revised algorithm determined that additional "shadow" indications were present.

In the following example, a different plant experienced 229 failures, with 35 of these failures occurring in one particular system. Applying the original algorithm to these failures resulted in the generation of three indications. In this case, the application of the revised algorithms both eliminated one "ghost" indication and found one "shadow" indication. Figure C.2 illustrates these indications.

[Figure C.2 table: monthly failure counts for System C (35 failures), with indications marked for the Original, 3-Month, and 5-Month algorithms ("F" for common indications, "G" for the eliminated ghost indication, "S" for the added shadow indication); individual values are partially illegible in the source.]

Figure C.2 Additional Example Application of Various Algorithms

Thus, for this example, the total number of indications remains the same. However, the revised algorithms yield a different distribution of the indications over the time period being considered.

Comparisons of additional examples reveal that the two revised methods are equally sensitive to capturing "shadow" indications, but the five-month averaging method is more sensitive and eliminates additional "ghost" indications.