ML20205F816

Informs Commission of Preliminary Results of Staff Program to Develop Maint Performance Indicators
Person / Time
Issue date: 10/07/1988
From: Stello V
NRC OFFICE OF THE EXECUTIVE DIRECTOR FOR OPERATIONS (EDO)
To:
References
SECY-88-289, NUDOCS 8810280142
Download: ML20205F816 (135)


Text


POLICY ISSUE
(Information)

October 7, 1988
SECY-88-289

For: The Commissioners

From: Victor Stello, Jr.
Executive Director for Operations

Subject: PRELIMINARY RESULTS OF THE TRIAL PROGRAM ON MAINTENANCE PERFORMANCE INDICATORS

Purpose:

To inform the Commission of the preliminary results of the staff's program to develop maintenance performance indicators.

Background:

This work is in support of the Commission's proposed rulemaking, "Requirements to Ensure the Effectiveness of Maintenance Programs for Nuclear Power Plants."

In Staff Requirements Memoranda dated June 17, 1988 and June 24, 1988, the Commission directed the staff to develop indicators of maintenance performance.

Specifically, the staff was requested to conduct a trial program to define and validate maintenance indicators, and provide preliminary results with the proposed maintenance rule package.

The enclosure to this paper forwards the results of these efforts.

Discussion:

The development of maintenance indicators was recognized as a critical element of the NRC's Performance Indicator Program.

From the outset, program elements were dedicated to the task.

However, in June 1988,1/ the Commission directed the staff to expedite a trial program of maintenance indicators in support of the accelerated schedule for the maintenance rulemaking.

In response to this direction, AEOD's Division of Safety Programs initiated an intense activity to complete a trial program on that schedule.

The enclosed AEOD special report (Enclosure 1) provides the interim results.

These results are based upon the recent work of AEOD with contributions from RES, the Idaho National Engineering Laboratory, Brookhaven National Laboratory, Science Applications Inc., and thirteen power reactor licensees.

Contact:

Mark Williams, AEOD
x24480

1/ SRM in responses to SECY-88-142 dated June 17, 1988.

This accelerated program built upon the results of earlier NRC staff efforts from the Performance Indicator Task Group development, those of NUREG-1212, and the perspectives of the nuclear industry obtained in the NRC-sponsored maintenance workshop of July 11-13, 1988.

The issues of the workshop were well discussed in a report by XYZYX Corp.2/ The trial program utilized actual operational data from commercial power reactor plants and assessed the ability of selected indicators to determine maintenance effectiveness.

It involved data for thirteen candidate indicators from twenty-three reactors at thirteen sites.

In addition, a survey was done of the plant practices related to maintenance performance monitoring.

This survey of industry practice provided an understanding of the current state of these programs as implemented by licensees.

INPO indicator information from 17 sites was also reviewed.

Preliminary Results:

The enclosed report of the preliminary results focuses on three areas.

The first is a discussion of the current industry practice in the utilization of maintenance performance indicators.

It is based upon the operational programs for the plants in the trial program.

Secondly, it provides an analysis of, and results from, the validation program for the candidate indicators.

Example trend charts and validation results are provided.

Lastly, it provides a discussion of the capability of an existing industry component reporting system, Nuclear Plant Reliability Data System (NPRDS), to provide a data source for maintenance indicators.

Based upon the plant visits and data analysis, the NPRDS attributes make it the prime data source for indicator development and implementation.

Interim findings are also presented in the report, and are discussed below.

Plant Practices Regarding Maintenance Indicators

Based upon the practices noted at the thirteen sites visited, which are considered representative of the industry, the following findings are made involving maintenance indicators.

o Most plants are using some form of maintenance process indicators as management tools to monitor work flow.

The range of programs involving the use of these indicators, for example, corrective maintenance backlog, varies widely.

2/ Observations and Recommendations on the Proposed Rulemaking for the Maintenance of Nuclear Power Plants, July 27, 1988, Kay Inaba, XYZYX Information Corp.

Some utilities establish and actively pursue goals for selected process parameters (indicators) while others simply monitor the information.

o Although some direct measures of maintenance quality or effectiveness may be informally used by maintenance or engineering personnel, no plant specific programs were found that monitored direct indicators of maintenance effectiveness in a formal and systematic manner similar to that employed for the process indicators.

Indicators in this category involve, for example, detailed equipment history tracking, such as equipment out-of-service trends, failure rate trends, or trends in component rework.

o Utility managers rely on the overall plant indicators, such as forced outage rate and automatic scrams, as developed for industry by INPO, to identify adverse trends in activities such as operations or maintenance.

The overall indicators intuitively reflect maintenance effectiveness as an important element, but not as a "leading" indicator.

Data Acquisition, Analysis, and Validation:

Much of the data needed to support indicator development is not normally entered into the computerized system by plant personnel.

The limited data available without reviewing the individual work authorization packages or operator logs influenced the staff's and the licensees' ability to construct and test candidate indicators.

Specific findings in this regard include:

o Many plant specific data systems are designed for the effective management of maintenance and provide good information for process monitoring.

However, the constitution and scope of these systems is not oriented to provide measures of maintenance effectiveness based on component failure information or equipment histories.

The current systems are oriented to capture the equipment work authorizations.

This finding is consistent with that of NUREG-1212, wherein it was noted that most plants are not yet well equipped to do extensive formal equipment trending and analysis.

o Plant specific records quality varies widely among the plants.

In some cases, the requisite information to support a candidate maintenance effectiveness indicator is not readily available, e.g., number of failed post-maintenance-test, rework, or equipment functional out-of-service records.

In some cases, the information cannot be obtained without using engineering judgment during the records review to obtain best estimate data.

o Using plant specific process data, no consistent validation result was found across plants for any single maintenance process indicator.

In a few cases, some indicators for different plants, such as the corrective maintenance backlog and preventive maintenance overdue, appear to have merit.

o The limited on-site data available for the effectiveness indicators yielded very few results and no consistent correlations.
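The within-plant validation analysis referenced in these findings pairs a candidate indicator's quarterly series with an overall plant benchmark (such as forced outage rate or scrams) and checks for correlation, in some cases with the benchmark lagged by one or more quarters. A minimal sketch of that calculation, using made-up quarterly data and hypothetical function names (the report does not specify an implementation):

```python
# Sketch of validation by correlation: compare a quarterly candidate
# indicator series against a plant performance benchmark, optionally
# lagging the benchmark by a number of quarters. All data and names
# below are illustrative, not taken from the trial program itself.

def pearson_r(xs, ys):
    """Plain Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def validate(indicator, benchmark, lag_quarters=0):
    """Correlate indicator[t] against benchmark[t + lag_quarters]."""
    if lag_quarters:
        indicator = indicator[:-lag_quarters]
        benchmark = benchmark[lag_quarters:]
    return pearson_r(indicator, benchmark)

# Hypothetical quarterly series: CM backlog vs. forced outage rate (FOR).
cm_backlog = [120, 135, 150, 160, 140, 155, 170, 180]
forced_outage_rate = [4.0, 4.5, 5.1, 5.6, 4.8, 5.2, 5.9, 6.3]

print(round(validate(cm_backlog, forced_outage_rate), 3))  # strong positive correlation
```

A single strong correlation like this would not by itself validate an indicator; as the findings note, the trial program looked for consistent results across plants.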

Application of NPRDS for Maintenance Effectiveness Indicators

The NPRDS was the most reliable and consistent data source found by the staff and licensees for much of the information supporting the maintenance effectiveness indicators.

Using NPRDS as the data source, the staff constructed seven trial maintenance effectiveness indicators, including four which paralleled the component-level, equipment based candidate maintenance performance indicators.

The seven indicators constructed using NPRDS were:

maintenance rework; ratio of failures discovered during surveillance to total failures discovered; average time for BOP components out of service during a calendar quarter; mean time to return components to service; failures of components in outage dominating systems; average time outage dominating components out of service; and failures reported per 1000 components.

For the four indicators in common, using NPRDS it was possible to develop about 1200 quarters of data, while plant visits yielded only 440 quarters of data.
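Several of these seven indicators reduce to simple ratios over a quarter's failure reports. As a concrete illustration, a sketch of two of them, the surveillance ratio and failures per 1000 components, using a hypothetical record layout (the actual NPRDS reporting format is not reproduced here):

```python
# Illustrative computation of two of the seven NPRDS-based trial
# indicators. The record layout and field names are hypothetical.

def surveillance_ratio(failures):
    """Ratio of failures discovered during surveillance to total failures."""
    in_surveillance = sum(1 for f in failures if f["discovered_by"] == "surveillance")
    return in_surveillance / len(failures)

def failures_per_1000(failures, component_count):
    """Failures reported per 1000 components in scope for the quarter."""
    return 1000 * len(failures) / component_count

# One quarter of hypothetical failure reports for a single plant.
quarter = [
    {"component": "AFW-PUMP-1A", "discovered_by": "surveillance"},
    {"component": "EDG-1B",      "discovered_by": "operation"},
    {"component": "SW-VALVE-12", "discovered_by": "surveillance"},
    {"component": "RHR-PUMP-2",  "discovered_by": "maintenance"},
]

print(surveillance_ratio(quarter))      # 2 of 4 failures found during surveillance
print(failures_per_1000(quarter, 800))  # 4 failures across 800 in-scope components
```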

Findings regarding NPRDS are:

o The information reported to the NPRDS appears capable of supporting maintenance effectiveness indicators.

Some of the indicators obtained from NPRDS appear to be promising.

o Although most of the candidate maintenance effectiveness indicators are obtainable from the NPRDS structure, certain aspects of the current NPRDS may limit its usefulness for this purpose.

These include the timeliness and completeness of reporting and restrictions in scope.

Conclusions:

The following conclusions were drawn from the preliminary results of the trial program.

1. Process indicators have merit for plant specific monitoring and control. Plant management should continue to improve their plant specific process indicators to support the establishment of meaningful quantitative goals toward which management can strive.

However, they do not appear to provide the desired level of consistency or correlation to warrant industry-wide monitoring by the NRC.

2. The use of NPRDS to provide a data base for constructing and validating maintenance effectiveness indicators provided reasonable and encouraging results.

While no specific indicators were fully validated across a number of plants, the extent of the correlations shows merit for indicator use.

3. Indicators that are based upon actual component reliability and failure history provide the best measure of maintenance effectiveness. Such indicators need a well structured and component oriented system to capture and track equipment history data.

Reliance on an established industry-wide system, e.g., NPRDS, appears to be the only feasible near term solution to obtain needed component data for such indicators.

4. The NPRDS can be used for maintenance performance monitoring; however, the need for improved timeliness and completeness in reporting, and improvements in scope, should be assessed.3/

Recommendations:

The proposed rule on maintenance currently before the Commission (10 CFR 50.65) emphasizes that an integral part of a good maintenance program is the monitoring and feedback of results.

In this regard, the maintenance programs should utilize quantitative indicators that are based upon actual component reliability and failure history to provide the best measure of maintenance effectiveness.

In addition, process indicators should be utilized as management tools to adjust and monitor maintenance activities. Therefore, with the goal of providing the NRC staff and licensees with a practical near-term method to track maintenance effectiveness, the following recommendations are made:

1. Licensees should be encouraged to continue to improve their use of process indicators as plant-specific maintenance management tools.

2. Licensees should be strongly encouraged to utilize an industry-wide component failure reporting system, e.g., NPRDS, as a basic element of the maintenance effectiveness monitoring activity that is to be required by the rule. Plant-specific systems would not support the sharing of generic operating experience or facilitate industry-wide monitoring of maintenance effectiveness.

3/ It is important to note that the NPRDS was designed to be the major source for reliability data to support PRA in the U.S. nuclear industry. Other NRC programs, such as the Individual Plant Evaluations (IPE) and Plant Life Extension, also have a need for component data reported to NPRDS.

3. The staff should continue efforts to develop and validate maintenance effectiveness indicators using the NPRDS.

Victor Stello, Jr.
Executive Director for Operations

DISTRIBUTION:

Commissioners OGC OI OIA GPA REGIONAL OFFICES EDO ACNW ACRS ASLBP ASLAP SECY


DRAFT AEOD/S804

PRELIMINARY RESULTS OF THE TRIAL PROGRAM FOR MAINTENANCE PERFORMANCE INDICATORS

October, 1988

Prepared by
Division of Safety Programs
Office for Analysis and Evaluation of Operational Data
U.S. Nuclear Regulatory Commission

TABLE OF CONTENTS

I. INTRODUCTION

II. PLANT PRACTICES
   A. Site Visits and Additional Licensee Contacts
   B. Maintenance Programs
   C. Maintenance Monitoring Practices
   D. Findings Based on Plant Visits
   E. Conclusions Regarding Plant Visits

III. DATA ACQUISITION, ANALYSIS, AND VALIDATION
   A. Introduction
   B. Database Construction
   C. Data Acquisition
      Data Acquisition Findings
      Data Acquisition Conclusions
   D. Analysis and Validation
      Across Plant Indicator Review
      Validation - Analysis Within Plants
      Validation Approach
      Benchmark Selection
      Validation Results
      INPO Validation
      Validation - Findings

IV. USE OF NUCLEAR PLANT RELIABILITY DATA SYSTEM (NPRDS)
   A. Introduction
   B. Indicators Based on NPRDS
   C. NPRDS - Validation Results
      General Observations
      Rework
      Outage Dominating Equipment
      Failures Per 1000 Components
      Surveillance Ratio
      Component Return Time
      Average BOP Outage Time
      Findings
      Conclusions

V. OVERALL FINDINGS, CONCLUSIONS AND RECOMMENDATIONS
   A. Summary
   B. Findings
   C. Conclusions
   D. Recommendations

APPENDICES:

APPENDIX 1 - Candidate MPIs and Definitions
APPENDIX 2 - Sites Visited
APPENDIX 3 - Executive Summary of SAIC Report on Maintenance Performance Indicators
APPENDIX 4 - Brookhaven National Laboratory Interim Report
APPENDIX 5 - Discussion of Validation Results for Three Selected Plants
APPENDIX 6 - NPRDS Scatter Plots


List of Tables

Table 1 - Summary of Across Plant Indicator Data Review
Table 2 - Validation Benchmarks
Table 3 - Validation Results (Before Engineering Review)
Table 4 - Summary of Four Cases
Table 5 - Preliminary NPRDS Correlation Results
Table 6 - Percentage of Equipment Forced Outages Outside NPRDS Scope


List of Figures

Figure 1 - Preventive to Total Maintenance Hours
Figure 2 - Corrective Maintenance Backlog
Figure 3 - SSPI - Diesel Generators Cumulative Unavailability
Figure 4 - SSPI - Diesel Generators Cumulative Unavailability
Figure 5 - SSPI - Diesel Generators Cumulative Unavailability
Figure 6 - Plant 1, Total Scrams--CM Backlog
Figure 7 - Plant 1, EFO--CM Backlog
Figure 8 - Plant 1, FOR--BOP:MTRS
Figure 9 - Plant 2, FOR--BOP:MTRS
Figure 10 - Plant 3, EFO--BOP:MTRS
Figure 11a - SALP Rating vs Selected MPIs
Figure 11b - SALP Rating vs Selected MPIs
Figure 11c - SALP Rating vs Selected MPIs
Figure 12 - Plant 1, FOR--Rework
Figure 13 - Plant 1, FOR--ODE Failures (0 Qtr. Lag)
Figure 14 - BWR Plants, EFO in NPRDS Scope vs ODE Failures
Figure 15 - PWR Plants, EFO in NPRDS Scope vs ODE Failures
Figure 16 - Plant 10, Availability--Failures Per 1000 Components
Figure 17 - Plant 1, Availability--Failures Per 1000 Components
Figure 18 - Plant 1, Availability--Deficiencies Found During Surveillance
Figure 19 - Cumulative Surveillance Ratio (Percentage)
Figure 20 - Availability--Component Return Time (0 Qtr. Lag)

Contributors to the MPI Trial Program Efforts

Division of Safety Programs (DSP):

Bell, Larry; Benaroya, Victor; Black, Kathleen; Boyle, Eugenia; Brady, Bennett; Burton, William; Crooks, Jack; Cross-Prather, Peggy; Dennig, Robert; Jones, William; Kauffman, John; Lam, Peter; Novak, Thomas; O'Reilly, Pat; Padovan, Mark; Pettijohn, Samuel; Salah, Sal; Singh, Rabi; Stern, Steve; Trager, Gene; Tripathi, Raji; Williams, Mark; Wolf, Tom

The AEOD/DSP secretarial staff provided invaluable assistance in preparation of this report.

In particular, Alna Johnson, Gail Parris and Wanda Wood provided significant help.

In addition to the AEOD/DSP secretarial staff, significant typing was provided by Ruby Lee Williams, ARM/ISB.

Brookhaven National Laboratory and Science Applications International Corporation contributed to this effort.

AEOD believes it is important to acknowledge the cooperation and interest of those utilities that contributed to the effort - Northeast Nuclear Energy Co., Southern California Edison Co., Duke Power Co., Connecticut Yankee Atomic Power, Pacific Gas and Electric Co., Consolidated Edison Co., New York Power Authority, Commonwealth Edison Co., Carolina Power and Light Co., Consumers Power Co., Louisiana Power and Light Co., Florida Power Corp., Toledo Edison, and Iowa Electric Power and Light. The openness exhibited by these licensees is evidence of the industry's willingness to move forward in the generic improvement of maintenance programs.


PRELIMINARY RESULTS OF THE TRIAL PROGRAM FOR MAINTENANCE PERFORMANCE INDICATORS

I. INTRODUCTION

In approving the Performance Indicator Program presented in SECY-86-317 of October 28, 1986, the Commission directed the staff to continue to explore and develop new indicators beyond those included in the current program. The Commission was particularly interested in the areas of maintenance and training. The NRC staff moved forward in the area of maintenance and on May 23, 1988, submitted to the Commission a proposed "Staff Plan and Schedule for Proposed Rulemaking For Maintenance of Nuclear Power Plants" (SECY-88-142).

The Commission responded, requesting an accelerated schedule for the maintenance rulemaking.

In addition, to support that schedule, the Commission directed the staff to focus its efforts to develop new indicators of maintenance performance for commercial power reactors.

In Staff Requirements Memoranda dated June 17, 1988, and June 24, 1988, the Commission directed the staff to focus its efforts to develop new indicators of maintenance performance; specifically, to conduct a trial program to define and validate maintenance indicators, and provide preliminary results in the proposed rule package.

The Staff Requirements Memorandum dated June 17, 1988, specified:

"The October Comission paper forwarding the proposed maintenance rule should discuss the preliminary results of the trial program on maintenance perfomance indicators and the effectiveness of the proposed indicators (subject to

~

modification as the staff continues its validation.in preparation for the final rule)."

The trial program included an analysis of candidate maintenance indicators that utilized actual operational data from a set of commercial power reactors. It comprised collecting data for thirteen candidate indicators from twenty-three plants at thirteen sites.

INPO indicator data from 17 sites was also reviewed.

The criteria regarding indicator selection and plant selection are discussed in the sections that follow (see Appendix 1 for the list of candidate indicators).

The program included an element to ensure that the final maintenance indicators that were selected adequately reflected the effectiveness of the plant maintenance. That element was termed "validation".

In addition, a brief review was done of the plant practices related to the usage of maintenance indicator information. This brief survey of industry provided an understanding of the state of these trending programs as implemented by power reactor licensees.

The NRC staff noted in NUREG-1212 1/ that most plants are not well equipped to do extensive formal equipment trending and analysis.

The trial program site visit information presents an up-to-date discussion of the status of several plant specific maintenance indicator programs.

This preliminary report contains three major sections. The first is a discussion of the industry practice in the utilization of maintenance performance indicators. As mentioned, this section is based upon the programs of the plants in the trial program and represents the most recent information acquired by the NRC staff. The second section provides the analysis of, and results from, the validation program for the candidate indicators.

Example trend charts and validation results are provided with all other charts and data contained in the proprietary Appendix to this report.

The third section provides a discussion of the capability of an existing industry component reporting system, the Nuclear Plant Reliability Data System (NPRDS), to provide maintenance indicators.

Before concluding this introduction, AEOD considers it important to note the cooperation and interest of those utilities that contributed to the effort - Northeast Nuclear Energy Co., Southern California Edison Co., Duke Power Co., Connecticut Yankee Atomic Power, Pacific Gas and Electric Co., Consolidated Edison Co., New York Power Authority, Commonwealth Edison Co., Carolina Power and Light Co., Consumers Power Co., Louisiana Power and Light Co., Florida Power Corp., Toledo Edison, and Iowa Electric Power and Light. The openness exhibited by these licensees is evidence of the industry's willingness to move forward in the generic improvement of maintenance programs.

1/ NUREG-1212, "Status of Maintenance in the U.S. Nuclear Industry 1985," June 1986.

For two reasons, data in this study are not identified by plant or site.

The first reason was an attempt to eliminate possible bias by plant recognition.

Second, a few licensees were concerned that data could be misinterpreted and therefore asked for anonymity.

II. PLANT PRACTICES

A. Site Visits and Additional Licensee Contacts

To collect data for the trial program of candidate maintenance performance indicators, 13 sites were visited and data collected for 23 plants at 13 sites.

Site data collection was a highly compressed activity taking place during the period from July 5, 1988 through August 12, 1988. A list of sites visited is in Appendix 2 and a summary of each visit will be included in the final report.

The criteria used to select sites were: plants of different age, reactor type, and NSSS vendor; representation of each NRC region; perceived levels of maintenance effectiveness; certain plants selected for NRC maintenance team inspections; and operational events or trends attributable to maintenance practices.

The final selection was based on the criteria and discussions with the interoffice task group on performance indicators.

The sites selected for visits covered a wide spectrum of maintenance programs - from those judged as well structured and administered to programs considered less so. An AEOD-led team was organized for data collection from each site.

Contractor personnel from BNL and SAIC supplemented the staff in several of these efforts.

Each team was provided with a data collection package consisting of the definitions of the candidate maintenance PIs and possible data sources.

The same package was sent to most utilities prior to the visit, to inform the licensee of the data needs.

It was recognized early that it may not be possible to obtain the historical data for all the candidate maintenance PIs from every plant within a reasonable time and without considerable effort. The strategy was to obtain as much data as practical to support the evaluation and validation activities.

Subsequent to the site visits, additional contacts were made with the licensees to obtain supplemental information on plant and/or site specific maintenance monitoring activities.

Contacts were made with the licensee staff generally involved with maintenance and performance (e.g., maintenance managers, performance monitoring managers, plant managers and licensing managers).

Discussions were focused on topics such as:

(1) Whether programs were formal or informal
(2) Examples of plant specific MPIs
(3) Status of implementation
(4) Management involvement, particularly upper levels (e.g., levels of review and interactions)
(5) Who sets the goals and objectives for maintenance
(6) Are the MPIs validated

In addition, samples of periodic reports containing performance monitoring data were obtained for at least half the sites.

B. Maintenance Programs

While an assessment of the effectiveness of maintenance programs was not the objective of the 13 site visits, some general observations were made similar to those in earlier staff and industry reviews of maintenance practices and programs. These are that the programs are diverse from plant to plant and utility to utility due to differences in such factors as organizational structure, resources, and operating and maintenance philosophies.

The majority generally maintain equipment using combinations of the vendor technical recommendations, their maintenance philosophy, and/or their particular operating experience; perform surveillance and testing as required in the plant Technical Specifications and as dictated by operational and engineering needs; procure spare parts and tools for required equipment repairs; hire personnel that are judged qualified to perform or manage maintenance activities; monitor and control by a variety of methods the work in progress; and perform quality checks on completed tasks that are safety-related or otherwise warrant such attention for production purposes. The quality with which such activities are conducted varies.

During the visits, it was apparent that the licensees' staffs, from the lower staff levels to top management, are cognizant of the increased emphasis being placed by the NRC and others on the need to properly maintain plant equipment, particularly the balance-of-plant equipment that is known to initiate transients, and the safety equipment that mitigates the consequences of such events.

Many of the maintenance programs are continuing to evolve over time. Obvious reasons are the dynamics of the operations at the plant, such as changing budgets, plant needs, and external motivation. One utility noted that a change in operational philosophy from placing emphasis on extended on-line periods of operation to the use of planned mid-cycle maintenance outages is producing better overall plant performance. A different utility is implementing a change from mid-cycle maintenance outages to extended on-line periods with preventive maintenance based on reliability centered equipment performance monitoring. Many plants are increasing their preventive maintenance activities; others are considering reductions where preventive maintenance activities appeared excessive.

C. Maintenance Monitoring Practices

While NUREG-1212 provided a broad overview of the status of maintenance monitoring programs in 1985 and the general direction of industry maintenance monitoring practices, the 13 recent site visits provide an updated overview of the current industry practices from a sampling of plant specific programs.

Regarding maintenance performance indicators in use industry-wide, all plants visited are monitoring maintenance work process activities (e.g., work control documentation and resource expenditures) along with the overall plant operational performance and other related process activities. Maintenance effectiveness is generally being assessed only at the overall plant performance level (e.g., equivalent availability, capacity factor, and forced outage rate) with limited ad hoc assessments at the system, sub-system and component level. Intuitively, it is believed that overall plant performance indicators usually lag the occurrence of maintenance program deficiencies, while assessments at the system and component level can be leading indicators (i.e., forecast performance).

Examples of maintenance work process indicators that receive industry-wide use are: the ratio of man-hours spent on completion of non-outage preventive maintenance (PM) to the total man-hours for completed non-outage maintenance (i.e., PM, corrective maintenance (CM) and surveillance); the percentage of total outstanding CM work requests, not requiring an outage, that are greater than three months old; and the percentage of PM items not completed by the scheduled date plus a grace period equal to 25 percent of the scheduled interval.

While industry guidance and standard definitions exist for these indicators, they and some others oriented to maintenance activities are more broadly used now than several years ago. However, they are still evolving. Varying interpretations of the definitions for PM and CM, varying work request prioritization systems, and varying administrative and automated capabilities for sorting the data currently available in plant specific maintenance documentation (e.g., work requests, equipment outage data, and personnel time records) into the appropriate categories were identified.
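The three industry-wide process indicators defined above reduce to simple ratios over a plant's work-request records. A sketch under assumed record fields (actual plant work-control systems differ widely, as the survey found):

```python
# Sketch of the three industry-wide maintenance work process indicators
# described above. The record layouts and field names are assumptions
# for illustration; plant-specific systems vary.
from datetime import date, timedelta

def pm_to_total_ratio(completed):
    """Non-outage PM man-hours over total completed non-outage man-hours."""
    pm = sum(w["man_hours"] for w in completed if w["type"] == "PM")
    total = sum(w["man_hours"] for w in completed)
    return pm / total

def cm_backlog_over_3_months_pct(open_cm, today):
    """Percentage of outstanding non-outage CM requests over three months old."""
    old = sum(1 for w in open_cm if (today - w["opened"]).days > 91)
    return 100 * old / len(open_cm)

def pm_overdue_pct(pm_items, today):
    """Percentage of PM items past the scheduled date plus a grace period
    equal to 25 percent of the scheduled interval."""
    overdue = sum(
        1 for p in pm_items
        if today > p["scheduled"] + timedelta(days=int(0.25 * p["interval_days"]))
    )
    return 100 * overdue / len(pm_items)

today = date(1988, 10, 1)
open_cm = [{"opened": date(1988, 5, 1)}, {"opened": date(1988, 9, 1)}]
print(cm_backlog_over_3_months_pct(open_cm, today))  # one of two requests is over three months old
```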

For example, one plant commented that it is improving its ability to segregate corrective maintenance work requests for the repair and restoration of failed or malfunctioning equipment from other work requests (e.g., those for general support and plant modifications).

Although most plants segregate maintenance work to be performed during outages from non-outage work, considerable judgment is involved even with the available industry guidance.

In addition to the use of some of the maintenance performance indicators developed for industry-wide use, the plants have implemented formal and informal monitoring of maintenance activities that are specifically tailored to their needs. Many of these indicators are tailored variations of industry guidance. However, such monitoring is generally work process oriented. The following examples appear in various monthly goals and objectives documentation obtained from the utilities - maintenance overtime worked, ratio of PM to CM, maintenance training activities, operating and maintenance expenditures, number of CM work orders, auxiliary equipment PM (items and man-hours), protective relay maintenance (items and man-hours) and auxiliary equipment lubrication (items and man-hours). Although monitoring work process variables gives an indication of the overall direction of the plant maintenance program, systematic maintenance effectiveness data on items such as train level unavailabilities, mean time to repair, amount of rework, and post maintenance test failures, to be able to diagnose more specific areas of the maintenance program that may need additional management attention, either do not exist or are difficult to retrieve from existing computerized databases.

During data collection, NPRDS was a primary source for the available data. Although all the data needed to calculate the parameters or to match definitions for all the candidate maintenance performance indicators is not in the database, NPRDS offers a fair degree of industry-wide standardization and a base for further development. It also already contains a significant amount of data, including engineering records, at the system, sub-system and component level.

A few human-performance-related maintenance performance indicators are available (e.g., maintenance-caused scrams and the number of maintenance-caused licensee event reports) at some plants. Maintenance staff turnover data are specifically collected at some plants, but usually with insufficient detail to be able to extract required information according to the candidate maintenance performance indicator definition. One of the plants that was visited was forming a four-person group to monitor human performance errors associated with maintenance.

All the sites visited have automated capabilities for collecting, storing, sorting, and retrieving data related to maintenance work. However, the capabilities of the systems vary from elementary programs to broad-based programs with extensive capabilities.

The information input into the computers varies.

Some detailed information pertinent to maintenance effectiveness indication (e.g., out-of-service times and mean times to repair) is sometimes available in hard copy (i.e., on paper in a log book or on a data collection sheet), but not in the computer. There is also a large amount of other usable data that are not often computerized. These data are usually in hard copy form in specific log books. Examples of these are the "LCO Log," "Out of Service Log," and other similar log books. Thus, the types and quantity of detailed maintenance information available from the automated systems varied from site to site.

As mentioned previously, the maintenance performance indicators are in an evolutionary stage. The implementation of industry standards and plant-specific programs is taking considerable time (years) to develop into sophisticated, fully operational systems. As these programs are developed, indicators are trended forward in time. Efforts to go backward in time are rarely made due to the unavailability of the desired historical data and the cost of the effort. Based on the industry-wide and plant-specific experience gained in using the performance indicators, some of the indicators are being modified, some dropped, and new ones added.

Some plants indicated that their maintenance performance indicator programs are still in the formative stages and will not be fully operational for several years.

The maintenance performance indicator data are being used as management tools in conjunction with other operational and engineering performance data to assess the quality and quantity of activities taking place at the sites. The data appear to be used from the lower levels (e.g., workers and first-line supervisors) through upper management (Vice President and above) as the basis for a variety of decisions on resource applications, problem resolutions, and goal setting. However, based on discussions, there did not appear to be any significant dedication of resources to validation of the data as leading indicators of potential safety problems or significant operational events at the sites visited; such activities were ad hoc, if anything.

A number of plants have established goals for certain of the indicators, such as the number of corrective and/or preventive work orders to be completed within a period, a limit on the number of open control room deficiencies, and a specified minimum ratio of PM to CM. The plants track performance against the established goals.

The data on maintenance performance indicators are being published and disseminated broadly within the utility organization. Quarterly, some of the data are sent for industry-wide use (e.g., to INPO). Eleven of the sites visited prepare the performance indicator reports monthly; the other two issue them quarterly. These reports usually have a company-wide distribution, including site upper management (Vice President and above). Usually, a condensed version is sent to corporate senior management. Some also made external distributions, such as to contractors and NRC regional offices, project managers and the NRC resident inspector. Some of the performance indicators are also displayed on bulletin boards to keep the plant personnel informed. In addition, annual compilations are used for meetings such as the corporate executive officer meeting at INPO.

These can include explanations for variance from industry norms and planned actions to adjust seemingly adverse trends.

D. Findings Based on Plant Visits

1. Increased attention by licensees to the maintenance area over the past several years was evident, due in part to the increased emphasis by NRC and others (e.g., INPO) on the need to properly maintain plant equipment for safety and production purposes.

2. Based on observations made during the visits to 13 sites, a diverse range of maintenance programs and practices exists.

This was expected and similar to the results of a variety of other studies of the maintenance practices within the nuclear industry (e.g., Status of Maintenance in the U.S. Nuclear Power Industry 1985, NUREG-1212).

The programs for monitoring maintenance activities also varied considerably from site to site in scope and sophistication of implementation, action tracking, data processing and use.

3. All of the sites visited monitored some of the elements of the maintenance process (e.g., man-hours expended; numbers of work items, such as PM, CM, and surveillances completed, backlogged, and overdue; and material orders) and overall effectiveness measures of plant performance such as capacity factor, availability, forced outage rate, thermal performance, and collective radiation exposure. The latter are considered to be lagging types of maintenance effectiveness indicators rather than leading indicators (i.e., those that would forecast trends). The maintenance indicators are multi-tiered -- some are based on formal industry-wide guidance, some are formal to meet plant specific needs, and some are informal in use at the lowest levels of staff. Such indicators are tools that in conjunction with other factors drive management actions from the lowest to the upper levels to resolve maintenance-related problems.

4. None of the plants visited are routinely or extensively monitoring leading types of maintenance effectiveness indicators at the system level or below, such as mean time to repair, amount of rework, post maintenance test failures, and safety system and train unavailabilities. Activities that exist appear ad hoc in nature.

5. NPRDS was a primary source used for equipment history data since it has some of the basic data elements for several of the maintenance effectiveness performance indicators. It has other desirable features since it is standardized, industry-wide, and failure oriented. It already contains a significant amount of data, including engineering records, at the system, sub-system and component level.

6. There did not appear to be any routine activities at the plants oriented toward the validation of the maintenance performance indicators as potential leading indicators of safety or operational problems based on historical experience.

E. Conclusions Regarding Plant Practices

1. Maintenance performance indicators are management tools that are evolving slowly, although they are receiving increased attention. They are necessary, being used generally, and judged to be a good management practice when fully supported with resources and involvement at the upper management levels.

2. The current status of industry-wide maintenance performance indicators is such that they are of very limited value for some activities such as assessing maintenance effectiveness at the system/component level and across-industry monitoring.

3. Maintenance effectiveness indicators at the system, sub-system and component level are not generally recognized as needed by industry. If maintenance effectiveness indicators are implemented, NPRDS is the best available data collection and processing system, but further development will be required.

III. DATA ACQUISITION, ANALYSIS, AND VALIDATION

The trial program started with the thirteen candidate maintenance indicators listed in Appendix 1. This trial program has produced key findings and lessons regarding (1) the feasibility of developing maintenance indicators in the near term from existing data sources, (2) what the candidate indicators would reveal about maintenance performance, and (3) how that perceived maintenance performance could actually predict overall plant operational and safety performance. The sections that follow present this information - data acquisition, analysis, and validation.

A. Introduction

The quality of historical records varies from plant to plant. In attempting to capture some candidate indicator data, such as rework or the number of failed post maintenance tests, it became apparent that such information was not generally available either because the licensees had no requirements or plans to record it or because of difficulties in obtaining meaningful, consistent quality data.

Even in major plant specific efforts to support PRA activities, plants have had varying degrees of success in constructing complete component history (availability) data sets. Other AEOD programs, such as the Integrated Reliability and Data Acquisition Program, have also shown that the generation of equipment availability data from historical records is a difficult task that requires as much engineering judgment as data gathering. Data on when equipment is removed from service and returned to service are often not collected and continuously compiled or added to automated databases where they can be easily retrieved. In some cases, the data are not retrievable from simple log book reviews. Information that required inordinate efforts to retrieve is referred to hereafter as not readily available.

Such limitations make retrogressive analysis of some candidate indicators impractical. Other findings of the trial program relate to information that is more readily available (i.e., information that is collected and compiled by the licensee or directly retrievable from automated databases), such as information relating to Maintenance Work Requests (MWRs), work control systems, and NPRDS. The trial program also identified the best data sources for some of the candidate indicators. These data acquisition findings are presented in detail in the Data Acquisition section.

The data analysis and validation section provides a discussion of the behavior of some of the candidate indicators and the degree to which some of the candidate indicators were indicative and leading of plant performance. Two types of analyses were conducted. First, a trend analysis was performed for the indicators across plants. This provided pictures of the normal behavior of the indicators, including the range of values across a number of plants and how the indicator measures against quantitative goals, e.g., percentage of preventive maintenance. This across-plant analysis cannot be used to compare plants, since the magnitudes of the values are uniquely related to the plant-specific process data utilized and definitions are not truly standardized. However, the across-plant analysis does display the indicators' value. The second part of the analysis was within-plant trends. Trend charts were reviewed and insights drawn concerning the effects of plant-specific events, outages, and other influences upon the candidate indicator. This understanding was necessary to provide a basis for the evaluation of the statistical validation results. The goal of the validation activity was to determine the degree of correlation to overall performance and whether the indicator leads actual plant performance. This necessitated an analysis approach that characterized overall plant performance, established that as a benchmark, and then tested for leading characteristics of the indicators over time. The candidate maintenance process and effectiveness indicators were each tested against the overall plant performance measures for up to an 18 month leading characteristic.
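The leading-characteristic test can be pictured as a lagged correlation: shift a quarterly candidate indicator forward by a lag and correlate it with an overall plant performance series. The sketch below is only an illustration of that idea, not the staff's actual statistical method, and the quarterly figures are made up.

```python
# Hypothetical sketch of a leading-indicator test: correlate a quarterly
# candidate indicator, shifted forward by a lag, against an overall plant
# performance series. Illustrative only; data values are made up.
from statistics import mean

def lagged_correlation(indicator, performance, lag):
    """Pearson correlation between indicator[t] and performance[t + lag]."""
    x = indicator[:len(indicator) - lag] if lag else list(indicator)
    y = performance[lag:]
    n = min(len(x), len(y))
    x, y = x[:n], y[:n]
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var_x = sum((a - mx) ** 2 for a in x)
    var_y = sum((b - my) ** 2 for b in y)
    return cov / (var_x * var_y) ** 0.5

# 14 quarters of made-up data (the study period covered 14 quarters).
cm_backlog = [40, 42, 45, 50, 48, 52, 55, 60, 58, 62, 65, 70, 68, 72]
forced_outage_rate = [3.0, 3.1, 4.0, 4.2, 5.0, 5.1, 5.5, 6.0, 6.2, 7.0, 7.1, 8.0, 8.2, 9.0]

# Test leading characteristics for lags of 0 to 6 quarters
# (up to 18 months, as in the trial program).
for lag in range(7):
    r = lagged_correlation(cm_backlog, forced_outage_rate, lag)
    print(f"lag {lag} quarters: r = {r:+.2f}")
```

A high correlation at a positive lag would suggest the indicator leads the performance measure; a high correlation only at lag zero would mark it as coincident or lagging.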

Before proceeding with a discussion of the details of data acquisition, analysis and validation, a brief discussion is provided here concerning candidate indicator selection. The trial program was a compressed activity. As a result, the candidate indicators that were the subject of data gathering were based upon the established set. However, candidate maintenance indicator selection was not finalized with the initiation of the trial program. Although the data were collected for the thirteen indicators listed in Appendix 1, other candidates were developed by parallel staff activities during the data collection period (July and August 1988). The initial set of candidate indicators, and those 10 contained in the "strawman proposed rule," were selected based upon recommendations of the NRC Performance Indicator Interoffice Task Group, and were based primarily upon the field observation and experience of the NRC staff. At the time, it was recognized that other, more systematic approaches to maintenance performance indicators were also needed. Therefore, two parallel efforts were initiated. The Office of Nuclear Regulatory Research (RES) requested Science Applications International Corporation (SAIC) to review the maintenance activities associated with an operating power reactor and provide a framework from which to draw additional candidate indicators. Brookhaven National Laboratory (BNL) was requested by AEOD to propose methods to combine candidate indicators into overall system and plant level maintenance effectiveness indicators. In addition, the report of XYZYX Information Corporation 2/ provided concepts articulated in the NRC-industry workshop regarding the strawman rule as well as original recommendations. The status reports by SAIC and BNL are enclosed as Appendices 3 and 4. These efforts are continuing. Based upon their merits individually or grouped with other indicators, the staff may elect to pursue the validation of additional candidate MPIs.

2/ Observations and Recommendations on the Proposed Rulemaking for the Maintenance of Nuclear Power Plants, 27 July 1988, Kay Inaba, XYZYX Information Corporation.

Regardless of the strategy for the selection of candidate indicators, the data to support the final indicators should be obtainable from the operating reactors. However, during the trial program it was determined that there is a small and fairly common set of data sources that provide various pieces of the requisite information. The majority of such indicator support data are obtainable from automated plant maintenance data management systems, actual maintenance work requests, NPRDS, and operations logs. However, some data are not readily available and some are not collected. The level of effort required to obtain the necessary data increased dramatically as the degree of detail in the plant automated tracking systems decreased.

B. Database Construction

Data obtained at plants were of two types: copies of plant records (used for such indicators as the ratio of preventive maintenance hours to total maintenance hours) and floppy discs or computer printouts (used for indicators such as mean time to return to service). For three plants, data were collected for at least 11 of the 13 candidate indicators for most of the 14 quarters covered in this study. For other plants, data are usually available for about six of the indicators. Data for each plant were entered into electronic spreadsheets from which the actual indicators were calculated and entered into the database. Software was written to produce indicator spreadsheets which contained the available data for each indicator. There are approximately 7700 total records in the database. The data and the calculations were checked by NRC staff and a consultant to ensure that data had been entered correctly and data entry was consistent between plants. This engineering review involved data comparison and evaluation.
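The database construction described above can be pictured as a small store of records keyed by plant, indicator, and quarter. The layout and values below are hypothetical; the actual spreadsheets and software are not described in detail in this report.

```python
# Hypothetical layout for the trial-program indicator database: one record
# per (plant, indicator, quarter). All values below are made up.
from collections import defaultdict

database = defaultdict(dict)   # database[(plant, indicator)][quarter] = value

def enter(plant, indicator, quarter, value):
    """Record one quarterly indicator value for one plant."""
    database[(plant, indicator)][quarter] = value

enter("A", "PM/TM ratio (%)", "87-1", 50.0)
enter("A", "PM/TM ratio (%)", "87-2", 52.0)
enter("B", "CM backlog > 3 months", "87-1", 120)

# An "indicator spreadsheet" is then just the per-plant series for one indicator.
pm_series = database[("A", "PM/TM ratio (%)")]
print(sorted(pm_series.items()))
# -> [('87-1', 50.0), ('87-2', 52.0)]
```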

C. Data Acquisition

A part of the trial program was to determine data availability and consistency for the candidate maintenance indicators. There were two aspects to the review for each indicator. One was the availability and consistency of the data over the period of the study at a given site, and the other dealt with site-to-site considerations. For a given site, the collection of data for performance indicators has been an evolutionary process, as mentioned previously in plant practices. In particular, over the past few years, the importance of performance monitoring has received increased emphasis industry-wide. Thus, as certain areas received increased attention, data gathering activities were initiated or increased. Consequently, the availability of data for each indicator at a site varied over the period dependent on when focus was placed on that area.

In addition, in some cases, the base data used for an indicator changed over the period at specific plants. However, in general, the available data for a given site for each indicator was consistent for the study period.

The results of the data acquisition across plants are shown in Table 1.

As indicated, with two exceptions (the ratio of preventive to total maintenance and corrective maintenance backlog), data for each indicator were not available at all sites. In addition, while some consistencies do exist, the ease of data retrieval and data consistency varied from site to site for each indicator. The least amount of maintenance performance indicator data was obtained for the performance indicators for safety systems other than the diesel generators, backlog of engineering change notices, and post maintenance testing. The general lack of consistency of data across plants exists in spite of best efforts to obtain data in a consistent fashion (i.e., use of standard definitions and guidance). The findings and conclusions are listed below.

Data Acquisition Findings

1. Regarding the indicators for the ratio of PM to TM, CM backlog greater than three months old, and PMs overdue, data are easily retrievable from almost all the sites for the majority of the period, because the indicators are in use industry-wide.

The ranges of the resultant quarterly ratios for each indicator are very wide (50 percent or greater).

The interpretations of definitions and implementation vary considerably between plants, such that they are only useful for plant specific

Table 1. Summary of Across Plant Indicator Data Review
(For each indicator: the number of plants with data available and the approximate average quarters of data per plant.)

Ratio of Preventive to Total Maintenance (23 plants; 9 qtrs avg)
    Data consistency: Definitions of corrective, preventive, and maintenance requiring outage are inconsistently applied across plants and in some cases within plants.
    Comments: Licensees are trending for management information.

Corrective Maintenance Backlog (23 plants; 9 qtrs avg)
    Data consistency: Definitions of corrective maintenance and maintenance requiring outage are inconsistently applied across plants and in some cases within plants.
    Comments: Licensees are trending for management information.

Preventive Maintenance Items Overdue (20 plants; 10 qtrs avg)
    Data consistency: Definition of preventive maintenance inconsistently applied across plants and in some cases within plants.
    Comments: Licensees are trending for management information.

Maintenance Staff Turnover Rate (15 plants; 12 qtrs avg)
    Data consistency: Some difficulties existed in assessing the reason for resignations.
    Comments: Data values are low (61% of the data are zero; the highest value was less than 5 percent).

Maintenance Rework (7 plants; 5 qtrs avg)
    Data consistency: No universal definition of rework exists.
    Comments: Some licensees that do not track rework expressed concern about the cost of obtaining data.

Ratio of Deficiencies Discovered by Surveillance to All Means (9 plants; 12 qtrs avg)
    Data consistency: Different equipment populations and different surveillance practices used at different sites.
    Comments: Equipment populations included equipment for which there was no surveillance requirement for most sites.

Balance of Plant Equipment Out of Service (14 plants; 10 qtrs avg)
    Data consistency: Indicator was constructed from maintenance management records; inconsistent equipment populations and start and end times used.
    Comments: Information usually not trended; specific indicator data not available without significant effort.

Safety System Performance Indicator (Diesel Generator) (15 plants; 10 qtrs avg)
    Data consistency: Data inconsistent due to unknown (estimated) time unavailable.
    Comments: Analysis required to interpret data.

Safety System Performance Indicator (Other 3 Systems) (2 plants; 11 qtrs avg)
    Data consistency: Data consistency being evaluated.

Mean Time to Return to Service (15 plants; 11 qtrs avg)
    Data consistency: Indicator data were constructed from maintenance management records; inconsistencies arose due to varying equipment populations and start and end times used.
    Comments: Information not trended; data not available without significant effort.

Backlog of Engineering Change Notices Related to Equipment Performance (3 plants)
    Data consistency: Data matching the definition were unavailable.
    Comments: Engineering change notice activities are tracked stepwise from start to finish, but often in several systems.

Safety System Function Trends (8 plants; 7 qtrs avg)
    Data consistency: Data inconsistent due to differences in methods used to obtain data and in data sources.
    Comments: Effort to evaluate continuing.

Unplanned Scrams Due to Maintenance (23 plants; 14 qtrs avg)
    Data consistency: Data from plants were inconsistent.
    Comments: NRC staff generated data from LER information using its criteria.

Post Maintenance Testing (5 plants; 3 qtrs avg)
    Data consistency: Data inconsistent due to varying equipment populations tested.
    Comments: Limited data available; some licensees that do not track expressed concern about the cost of obtaining data.

analysis. Some plant specific indicators are also inconsistent over the study period due to evolving definitions.

2. Some data on maintenance staff turnover are available at seven of the sites visited. However, the indicator definition, by eliminating retirement, death, promotion, and termination for cause, forced the indicator values to be very low. Indicator values were generally zero or less than 5%.

3. Although all the sites visited track activities related to implementing engineering changes, data are not available for the candidate MPI (backlog of ECNs related to equipment performance). The definition of this indicator is inconsistent with plant practice. The assumption that ECNs have a fixed completion date is not correct. ECNs are scheduled based on planned available resources. The dates are changed when resource allocations and priorities change. Also, ECNs associated with equipment performance are not currently tracked as such.

4. Regarding the two indicators addressing equipment out of service (BOP number and duration of equipment OOS, and mean time to return to service):

(a) The maintenance management information systems (MMIS) are designed to track process activities and as such are not currently configured to provide reliable equipment failure or out-of-service data.

(b) Data for these indicators were not available at most of the sites visited. These indicators were constructed by using individual source records from site maintenance management systems. The indicator calculations require extensive analysis of the records.

(c) NPRDS has the basic elements to produce BOP OOS and mean time to return to service data, but is currently limited in the equipment it covers.

(d) For the trial program, significant efforts were required to obtain reliable equipment history data for MPI use even if NPRDS, which has the basic elements to produce such data, is used.
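The mean time to return to service construction described above can be sketched from paired out-of-service and return-to-service timestamps. The record layout below is hypothetical; the actual site maintenance management records varied widely and required extensive analysis.

```python
# Hypothetical sketch: mean time to return to service computed from
# equipment out-of-service (OOS) and return-to-service (RTS) timestamps.
# Record layout and values are illustrative, not actual plant data.
from datetime import datetime

def mean_time_to_return(records):
    """Average hours between removal from service and return to service."""
    fmt = "%Y-%m-%d %H:%M"
    durations = [
        (datetime.strptime(rts, fmt) - datetime.strptime(oos, fmt)).total_seconds() / 3600.0
        for oos, rts in records
    ]
    return sum(durations) / len(durations)

# (removed from service, returned to service) -- made-up entries
records = [
    ("1987-03-01 08:00", "1987-03-02 08:00"),   # 24 h
    ("1987-03-10 06:00", "1987-03-10 18:00"),   # 12 h
    ("1987-04-02 00:00", "1987-04-03 12:00"),   # 36 h
]
print(f"mean time to return to service: {mean_time_to_return(records):.1f} h")
# -> mean time to return to service: 24.0 h
```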


5. Data on the indicator addressing deficiencies discovered during surveillance versus those found by other means are not readily available.

Additionally, data are inconsistent among plants due to differing data sources, equipment populations, and surveillance practices.

6. Specific data on scrams associated with maintenance are not readily available due to varying interpretations of maintenance-related scrams. NRC staff-generated data based on LER review are available for analysis of this indicator.


7. Regarding the safety system performance indicator (SSPI) and the safety system functions trend indicator (SSFT), data are not generally readily available. For the SSPI for the diesel generator, data are available at 12 of the plants visited. However, data for the remaining SSPI systems are available for only 2 plants for more than 2 quarters. Evaluation of the data for the diesel generator SSPI shows that it is inconsistent. Data for the remaining SSPI system indicators are still under review.

Data for the SSFT indicator are readily available at 5 plants. Obtaining data for another three units from plant records required extensive NRC staff time, utilizing a high degree of engineering judgment and knowledge of plant operations. These data will require extensive effort to resolve inconsistencies.

8. Data for the indicators addressing the amount of maintenance rework and post maintenance test failure rate are generally not available. Where the data are available, inconsistency existed among plants because of varying interpretations of definitions. The difficulty of obtaining maintenance rework data and PMT failure data from existing plant data systems varies among sites.

Data Acquisition Conclusions

1. A variety of plant-specific maintenance indicators based on industry-wide guidance and definitions exists. However, guidance and definitions for the indicators have been inconsistently interpreted and implemented. Consequently, there is no consistent maintenance performance (process or effectiveness at the equipment level) indicator in use.

2. Equipment level data are not collected and placed into suitable site database systems at plants such that maintenance effectiveness indicators based on equipment availability histories can be easily constructed.

3. A well-structured and defined component-oriented system to capture and track equipment history data industry-wide is desirable. NPRDS currently meets a number of requirements and appears to be a feasible system to improve and to support necessary modifications.

D. Analysis and Validation

Across Plant Indicator Review

This section describes preliminary results of the across plant review of indicator data.

In this review, indicators were compared across plants in a systematic way to correlate relationships of the indicator to plant operations, determine the indicator trends and patterns, and compare any important differences in indicator behavior between plants. Based on available data (13 candidate indicators, each with available data from an average of 14 plants), 188 cumulative indicator charts were examined. A detailed review of the charts was performed relying on plant specific information. This section summarizes this process and provides an overview of the type of information obtained. Three example indicators are used to describe this process. These three indicators are (1) the ratio (expressed as a percentage) of preventive to total maintenance; (2) the ratio (expressed as a percentage) of corrective maintenance backlog greater than three months old; and (3) safety system performance for the emergency ac system. All indicators for which sufficient data are available were reviewed. However, this description presents only general observations; apparent inconsistencies observed in this review are still being evaluated.

Charts of the accumulated indicator were used to perform this across-plant analysis. The charts show the accumulated value for the indicator over time.

Thus, an increasing slope indicates that the indicator is increasing. A decrease in slope indicates that the value of the indicator itself is decreasing.
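The slope reading described above can be sketched numerically: the accumulated chart is a running sum of the quarterly values, so the quarter-to-quarter difference recovers the quarterly indicator itself. The values below are illustrative, not actual plant data.

```python
# Sketch of the accumulated-indicator construction: the chart value at
# quarter t is the running sum of the quarterly indicator values, so the
# slope between adjacent quarters equals the quarterly value itself.
# Illustrative data only.
from itertools import accumulate

quarterly_pm_ratio = [55, 60, 66, 66, 50, 50, 50]   # percent per quarter
cumulative = list(accumulate(quarterly_pm_ratio))

print(cumulative)
# -> [55, 115, 181, 247, 297, 347, 397]

# The slope recovers the quarterly indicator value:
slopes = [b - a for a, b in zip(cumulative, cumulative[1:])]
assert slopes == quarterly_pm_ratio[1:]
```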

The first illustrative example of this process, ratio of preventive to total maintenance, is shown in Figure 1.

For reference, two lines indicating slopes for an indicator value equal to 50% and 66% in each quarter have been placed on the figure. The accumulated indicator curves show that two of the three plants (plants B and C) are very similar and that they vary significantly from the curve for plant A.

This trending is valuable in setting and maintaining quantitative goals for resource commitments and obtaining feedback.

Some plant management consider an optimum split between preventive and corrective maintenance to be a two-thirds to one-third resource ratio. Others consider a 50-50 split to be a reasonable target. In Figure 1, plant A has adjusted its program to the 50% level while plants B and C are trending parallel to a 66%

level. The plant practice section discussed some of these kinds of changes.

The graph also implies that further changes were made during 1987 to adjust to the current levels of the ratio of preventive to total maintenance. However, based upon this information alone, one should not conclude that the preventive maintenance activities are greater at one plant vs. another, or that the corrective maintenance activities are less at one plant vs. another, since the definition, or the detailed constitution, of the indicator may have been and may continue to be different at the three plants at any one point in time. Nevertheless, the trends are useful management feedback mechanisms when control of the indicator is faithfully implemented and consistent within the plant over time.

The indicator generally decreases during outages due to reduced non-outage PM and an increase in the completion of corrective maintenance activities.
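The two-thirds and 50-50 resource-split targets discussed above amount to a simple ratio check each quarter. The computation below is a hypothetical illustration; plant definitions of PM and CM man-hours varied, and the figures are made up.

```python
# Hypothetical quarterly goal check for the preventive-to-total maintenance
# ratio. Man-hour figures are made up; plant definitions of PM and CM varied.
def pm_ratio_percent(pm_hours, cm_hours):
    """Preventive maintenance as a percentage of total maintenance."""
    return 100.0 * pm_hours / (pm_hours + cm_hours)

goal = 66.0   # a two-thirds PM / one-third CM resource split

quarters = {"87-1": (6600, 3400), "87-2": (5200, 4800), "87-3": (7000, 3000)}
for qtr, (pm, cm) in quarters.items():
    ratio = pm_ratio_percent(pm, cm)
    status = "meets" if ratio >= goal else "below"
    print(f"{qtr}: PM/TM = {ratio:.0f}% ({status} {goal:.0f}% goal)")
```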

The second example, corrective maintenance backlog, is shown in Figure 2.

The slopes of the accumulated indicators for each of the three plants are very similar. Thus, these three plants have similar values for the indicator. Generally, this indicator decreases following outages and then increases

[Figure 1: Preventive to Total Maintenance -- cumulative percent for plants A, B, and C by year-quarter (85-1 through 88-1), with reference lines for quarterly indicator values of 50% and 66%.]

[Figure 2: Corrective Maintenance Backlog -- cumulative backlog for plants A, B, and C by year-quarter (85-1 through 88-1).]

slightly until the next outage. This effect is due to non-outage corrective maintenance work being completed during an outage.

For plant C, however, no outage trend was observed.

The third example of this indicator review is shown in Figures 3, 4, and 5.

This example illustrates how the accumulated indicators are used to identify inconsistency of data.

Figure 3 shows accumulated diesel generator unavail-ability for three plants. The slopes of the accumulated unavailability curves indicate that these three plants have similar values of unavailability over the time span for which data are available.

The fine structure of the accumulated curves is due to the normal way in which plants accumulate unavailability at irregular intervals.

Figure 4 shows the accumulated diesel generator unavailability for a fourth plant, plant D.

The accumulated indicator shows that this plant accumulated diesel generator unavailability at a rate greater than three times that of the plants in Figure 3.

Based on this observation, review of indicator data for these four units shows that the "estimated hours out of service" for plant D were not consistent with the other three plants.

Reduction of this component of the indicator so that it was consistent with the other three units resulted in Figure 5.

Thus, the accumulated indicator data for plant D and plant B are very similar.

This type of review was performed for all indicators for which sufficient data were available.
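The accumulated-indicator consistency check described above can be sketched in a few lines. The following is an illustrative stand-in, not the staff's actual procedure: plant labels and quarterly hour values are invented, and the 3x threshold simply mirrors the observation about plant D.

```python
# Sketch of the cumulative-indicator consistency review: accumulate a
# quarterly series per plant, compare the slopes of the cumulative curves,
# and flag plants whose accumulation rate is far out of line.
# All numbers below are invented for illustration.

def accumulate(values):
    """Running total of a quarterly series (e.g., DG unavailability hours)."""
    total, out = 0.0, []
    for v in values:
        total += v
        out.append(total)
    return out

# Hypothetical quarterly unavailability hours for four plants.
plants = {
    "A": [2, 3, 2, 4, 3, 2],
    "B": [3, 2, 3, 3, 2, 3],
    "C": [2, 4, 2, 3, 3, 2],
    "D": [9, 8, 10, 9, 11, 8],  # reports inconsistently high hours
}

cum = {p: accumulate(v) for p, v in plants.items()}
# Slope of the cumulative curve, approximated as final total / quarters.
rates = {p: cum[p][-1] / len(cum[p]) for p in plants}
baseline = min(rates.values())
# Flag plants accumulating at more than 3x the lowest rate, as plant D was
# flagged in the review above.
flagged = [p for p, r in rates.items() if r > 3 * baseline]
print(flagged)  # → ['D']
```

A flagged plant then warrants a look at how its "estimated hours out of service" were reported, rather than an immediate performance conclusion.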

Validation - Analysis Within Plants

Using the data obtained from the plant visits, validation was performed on a number of the candidate MPIs to ensure that a candidate indicator actually related to plant safety and operational performance.

However, based on the data acquisition findings, the following candidate MPIs were not given validation trials due to the limitations of the data:

maintenance staff turnover rate; safety systems performance [auxiliary feedwater (PWR), high pressure safety injection (PWR), residual heat removal (BWR), and high pressure injection (BWR)]; ECN backlog; safety system function trends; unplanned automatic scrams while critical associated with maintenance; and post maintenance testing failure rate.

Further, except for one multiplant site, the data for PM to TM ratio, CM backlog, and PM items overdue were

[Figure 3: S.S.P.I. Diesel Generators - Cumulative Unavailability, three plants, by year-quarter. Plot not legibly reproduced in this copy.]

[Figure 4: S.S.P.I. Diesel Generators - Cumulative Unavailability, Plant D, by year-quarter. Plot not legibly reproduced in this copy.]

[Figure 5: S.S.P.I. Diesel Generators - Cumulative Unavailability, by year-quarter (85-1 through 88-1). Plot not legibly reproduced in this copy.]

provided as combined site data rather than per plant. This impacted the validation for these indicators.

To accomplish this validation, first plant performance benchmarks were selected. Then plots for each selected indicator were compared with the performance benchmark, and correlation analyses were performed using the Statistical Analysis System (SAS) computer program Correlation Procedure (PROC CORR).

In the following paragraphs, the steps of this validation approach are described, beginning with the selection of plant performance indicators that were used as benchmarks for validating the candidate maintenance performance indicators. This is followed by a section entitled "Benchmark Selection" to provide additional discussion of the data and the methods used.

Validation Approach

Good plant performance is usually characterized as good operating availability without safety significant events. To reflect this consensus, the plant level measures in Table 2 were selected as benchmarks for validation. Several of these benchmarks are currently performance indicators in the NRC PI program. The SALP maintenance rating was added to the list as it is an existing measure of overall plant maintenance effectiveness. Other studies have shown a modest correlation between plant availability and the SALP maintenance rating.


Table 2  Validation Benchmarks

Current PI                                  Additional Benchmarks
Total Scrams While Critical                 Availability
  per 1000 Critical Hours                   Critical Hours
Scrams above 15 percent of full power       SALP Maintenance Rating
Scrams below 15 percent of full power
Forced Outage Rate
Equipment Forced Outages per 1000
  Critical Hours

AEOD engineers reviewed time trend plots of each candidate indicator and details of the plant operational profile to identify and explain any trend or pattern with respect to the validation benchmarks. This was done for each of the 23 plants in the study.

Examples of the candidate maintenance indicator time trend plots are provided in Appendix 5.


In parallel with the engineering review, computer runs were made to calculate the statistical correlation between the benchmarks and the candidate MPIs, similar to using linear regression analysis. The correlations between the plant level benchmarks and the quarterly candidate MPI statistics were calculated using the Statistical Analysis System (SAS) Correlation Procedure (PROC CORR), both for base cases where indicators were matched by calendar quarter, and for cases where the candidate MPI statistics were shifted in time so that they would in effect be leading the plant benchmarks. These shifts


were done a quarter at a time, up to a maximum shift of 6 quarters. Earlier work in the performance indicator program had shown that lead times of less than a year were typical. The SAS correlation runs were reviewed to identify cases where the correlation between two indicators, e.g., between availability and corrective maintenance backlog at Plant A, was at least 0.75 3/ in absolute value and statistically significant at the 0.1 level. Both bar charts and scatter plots were examined to further verify the strength of the selected correlation.
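A plain-Python stand-in for the lagged-correlation screen may clarify the mechanics. PROC CORR itself is a SAS procedure; the series below are invented for illustration, the 0.75 threshold follows the text, and the significance test at the 0.1 level is omitted for brevity.

```python
# Sketch of the screen described above: correlate a candidate MPI with a
# plant benchmark, shifting the MPI earlier one quarter at a time (up to
# six quarters) so that it would, in effect, lead the benchmark.

import math

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def lagged_screen(mpi, benchmark, max_lag=6, threshold=0.75):
    """Return (lag, r) pairs where |r| meets the screening threshold."""
    flagged = []
    for lag in range(max_lag + 1):
        # Drop the last `lag` MPI quarters and the first `lag` benchmark
        # quarters, so the MPI leads the benchmark by `lag` quarters.
        x = mpi[: len(mpi) - lag] if lag else mpi
        y = benchmark[lag:]
        if len(x) >= 3:
            r = pearson(x, y)
            if abs(r) >= threshold:
                flagged.append((lag, round(r, 2)))
    return flagged

# Hypothetical series: CM backlog leading forced outage rate by two quarters.
cm_backlog = [10, 20, 35, 50, 40, 30, 25, 45, 60, 55, 35, 30, 28]
for_rate   = [ 0,  0, 12, 22, 38, 52, 41, 29, 26, 44, 62, 57, 36]
print(lagged_screen(cm_backlog, for_rate))  # the lag-2 correlation is strong
```

As in the report, any flagged pair would still require an engineering review of the underlying plots before being taken seriously.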

The results of the engineering review were compared and combined with the results of the computer analysis to provide the final determination of the validity of a candidate MPI against benchmarks at each plant. This two-pronged approach provided assurance of the validity of the results. The engineering review was needed to provide a check on the correlation analysis, since the approach of estimating pairwise correlation coefficients was adopted as a general tool to handle a wide variety of situations.

It is obviously most useful when comparing two variables, i.e., indicators, that have other than zero data in a large number of quarters. For cases where data are sparse for one or both of the variables being compared, for example, total scrams in a quarter versus another variable, the correlation coefficient may be high but might only be the result of the alignment of just one or two values.

These kinds of situations were reviewed and discarded if they lacked real meaning.

On the other hand, the statistical analysis suggested potential relationships that were missed in the review of the time series plots.

3/ A correlation coefficient of zero indicates there is no relationship between the variables; when there is perfect correlation and the variables vary in the same direction, the coefficient is 1.0. When there is perfect correlation but the variables vary in the opposite direction, the coefficient is -1.0 (negative correlation). The coefficient can vary between the extremes of 1 and -1 to indicate an intermediate degree of correlation.


Benchmark Selection

This section discusses the use of SALP in this validation effort, the use of availability by other analysts, and the relationships among the benchmarks. The problem with using the SALP maintenance rating in the analysis is that the rating covers an extended period of time, and at multi-plant sites, the rating applies to the site, i.e., it is not plant-specific.

To adapt the SALP maintenance rating to our analysis, all of the SALP time periods in the SALP data base for the plants of interest were specified.

Then, on a per plant basis, each SALP period was individually checked to see if it spanned any of the quarters of interest.

If none of the quarters of interest fell in a particular SALP period, no SALP rating assignments were made to the quarters. However, if a particular SALP period spanned any or all of the quarters, a SALP number was assigned to the quarter or quarters spanned.

This assignment was made based on the length of time in the quarter that fell in a given SALP period.

If the entire quarter fell in a SALP period, then the number associated with the SALP period was assigned to the quarter.

If less than two complete months fell in the SALP time period, then the next period was checked.

If no other time periods spanned the quarter, then an assignment of NA was given to the quarter.

If other SALP periods were found that spanned the quarter, then the same assessment was made and the appropriate rating number was assigned to the quarter.
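The quarter-assignment rule above can be sketched as follows. This is an approximation, not the staff's actual procedure: whole months are used as the time unit, the two-month-coverage test stands in for "the length of time in the quarter," and the periods and ratings are invented.

```python
# Sketch of the SALP-to-quarter assignment rule: a quarter receives a SALP
# rating when at least two complete months of that quarter fall within a
# SALP period; otherwise the next period is checked, and "NA" is assigned
# if no period qualifies. Dates and ratings below are illustrative.

def assign_salp(quarters, salp_periods):
    """quarters: list of (start_month, end_month) tuples, inclusive.
    salp_periods: list of (start_month, end_month, rating) tuples."""
    ratings = []
    for q_start, q_end in quarters:
        assigned = "NA"
        for p_start, p_end, rating in salp_periods:
            # Whole months of the quarter covered by this SALP period.
            overlap = min(q_end, p_end) - max(q_start, p_start) + 1
            if overlap >= 2:          # at least two complete months
                assigned = rating
                break                 # first qualifying period wins
        ratings.append(assigned)
    return ratings

# Months numbered from 1; quarters are 3 months each.
quarters = [(1, 3), (4, 6), (7, 9), (10, 12)]
salp_periods = [(1, 8, 2), (9, 14, 1)]  # a rating of 2, then a rating of 1
print(assign_salp(quarters, salp_periods))  # → [2, 2, 2, 1]
```

The third quarter (months 7-9) illustrates the tie-breaking: two of its months fall in the first period, so that period's rating is assigned even though the quarter also touches the second period.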

While the above treatment allows the analysis to proceed, the nature of the SALP variable lessens its usefulness in the validation within plants. For example, the three plants chosen for discussion in the results section covered the full range of SALP maintenance ratings. During the period of interest, plant 2 received a rating of 1 in each of two SALPs; plant 3 received a 2 in each of two SALPs; and plant 1 received a 3 in each of two SALPs. While this provides interesting background against which to view the other indicators across these three plants, within the plant the SALP variable is considered constant for all quarters, and thus no useful correlations, either coincident or leading, were expected or observed using quarterly data as is normally used in the PI program.


Another consideration is the selection of validation benchmarks. Plant availability has frequently been cited as a plant level measure of the ultimate effectiveness of maintenance. Inaba, in the paper previously cited, stated that availability should be used to classify the reactor population as a prelude to further thinking on maintenance effectiveness indicators.

In a recent study for the Department of Energy 4/, Westinghouse used availability as the measure of plant performance. 5/ In a graphical analysis of the performance of selected "process" indicators as measures of maintenance (discussed further in the section entitled Validation - Results), INPO used equivalent availability as one of the "yardsticks" to validate against.

A number of the plant level benchmarks are intercorrelated by their definitions, and there is no intent to represent them as independent measures. In addition to the relationships through definitions, some association between SALP and availability has been noted in other work. The previously cited study by Westinghouse indicated that an average of SALP ratings for maintenance and surveillance was moderately useful in explaining variation in availability. 6/ The INPO graphs also suggested a relationship between EAF and SALP maintenance ratings. However, in both cases that availability measure was averaged over at least an annual period, which more closely matches the SALP period than our quarterly approach.

4/ WCAP-11716, "The Causes of Nuclear Plant Unavailability," January 1988.

5/ The Westinghouse analyst rejected the use of capacity factor because it depends on the plant's maximum rating, which is often adjusted upward or downward. An equivalent availability factor (EAF) that adjusts for load-following was preferred. The conventional, unadjusted Gray Book availability was used since the data were readily available. For the same reason, the Gray Book data were used in this study.

6/ WCAP-11716 tested two measures of "maintenance quality" (their term) as part of a multivariate regression: SALP maintenance rating averaged with SALP surveillance rating, and the rating assigned by resident inspectors in the survey for NUREG-1212. They found both the SALP-based rating and the resident-based rating were equally significant in explaining unavailability.


Validation - Results

In this section, the results of the staff's validation effort are highlighted, and other validation results that were provided by INPO to NRC are briefly discussed.

The initial validation trials produced well over one hundred potential correlations among the remaining seven candidate indicators against the benchmark parameters. These results are shown in Table 3.

Correlations were obtained with various leading times (quarters of lead) for indicators such as those related to preventive and corrective maintenance. Example correlations include the corrective maintenance backlog for plants 1, 14, 15, and 16 vs. scrams. Such correlations were further pursued in detail for the plants that provided the most correlations and possessed the most data. These reviews are presented in Appendix 5.

The initial observations were not supported by further review. However, some lessons were learned, such as the impact of refueling outages on the candidate indicators or the behavior of new plants (most indicators trend upward upon start-up and the initiation of programs, hence the existence of correlations). Although the evaluation continues, based on what has been completed there are no clear, consistent correlations of an indicator across a number of plants, and none are expected to emerge.

The following discussion for three plants is presented to explain some of the considerations that were used to review the validation results. The details of this review are in Appendix 5. These three plants represent an upper bound, i.e., the best case on our ability to identify meaningful relationships between benchmarks and MPIs. For these plants, 21 SAS runs were reviewed, each of which contained roughly 15 to 20 correlations that could reasonably be expected to provide meaningful results. Eleven cases were of potential interest. However, while some of these showed evidence of potential usefulness, such as leading or trending behavior, further examination indicated complications or alternative explanations for the observed behaviors that diminished the significance. The four cases with strongest initial indications are discussed

[Table 3: Initial validation correlations of candidate MPIs against the benchmark parameters. Table not legibly reproduced in this copy.]

briefly in this section, and also in Appendix 5 along with the plant discussions. Table 4 summarizes the four cases. Only one of these, the first in the table, involved a process MPI. The other three cases were of interest because of possible systematic trends of equipment based candidate indicators with respect to plant outages that deserved further review.

Table 4  Summary of Four Cases

Plant      Benchmark                        MPI
Plant 1    Total Scrams/EFO                 CM Backlog
Plant 1    Forced Outage Rate (FOR)         DBOPMT/MTSERV*
Plant 2    Forced Outage Rate (FOR)         DBOPMT*
Plant 3    Equipment Forced Outage (EFO)    DBOPMT/MTSERV*

Case 1 - CM Backlog vs. Total Scrams/EFO

The engineering review of trend plots and the statistical analysis flagged the relationships between CM backlog and total scrams, and CM backlog and EFO, as shown in Figures 6 and 7. In both of these situations, the CM backlog shows a fairly regular and pronounced decline from a peak roughly three to four quarters prior to a higher incidence of scrams or a peak in EFO rate. Thus it exhibits some of the desired characteristics of leading behavior.

However, this is counter to the expectation that the CM backlog should build at some rate, i.e., show an upturn sometime before a peak that coincides with some significant event, such as an EFO, or the span of a quarter with a high forced outage rate. Also, in the EFO case, the peak in the EFO per 1000 critical hours of 3.06 in 86-1 is the result of one EFO occurring in just 327 critical hours, an anomaly caused by the construction of this PI statistic. Thus, the relationship between CM backlog and EFO is not as strong as the figure suggests.

* DBOPMT: Mean time to return BOP equipment to service
  MTSERV: Mean time to return components to service

[Figure 6: Automatic Scrams, Plant 1 - total scrams and CM backlog by year-quarter, 85-1 to 88-1. Plot not legibly reproduced in this copy.]

[Figure 7: EFO per 1000 critical hours and CM backlog, Plant 1, by year-quarter. Plot not legibly reproduced in this copy.]

Case 2 - Mean Time to Return BOP/All Equipment to Service vs. Forced Outage Rate (DBOPMT/MTSERV vs. FOR)

Figure 8 shows the relationship between FOR and DBOPMT. This plot is also representative of the relationship between MTSERV and FOR, since at this plant DBOPMT and MTSERV were highly correlated themselves; that is, they share the same pattern over time.

In Figure 8 both DBOPMT and FOR show a rough "sawtooth" pattern, with the first cycle rising from 85-1 and peaking in the late-86 through early-87 time frame. The beginnings of a second cycle are evident. The pattern for DBOPMT appears to be closely related to the state of the plant rather than leading it, which is intuitively reasonable: while the plant is running, the mean time to return BOP equipment to service is shorter than when the plant is shut down, and the mean time is influenced by the length of the outage.

Case 3 - Mean Time to Return BOP Equipment to Service vs. Forced Outage Rate (DBOPMT/MTSERV vs. FOR)

At one plant, the statistical analysis indicated that DBOPMT led FOR by three quarters. Figure 9 illustrates this relationship, where a relative peak in DBOPMT in 85-2 and 85-3 led the FOR peak in 86-1 and 86-2, the peak in DBOPMT in 86-1 led the FOR peak in 86-4, and the DBOPMT peak in 86-4 led the FOR peak in 87-3.

However, as in Case 2 and as will be discussed in Case 4, the plant operating profile, including refueling outages, is driving the pattern in DBOPMT. The peaks in DBOPMT in 85-2 and 85-3 are related to the 85-2 refueling outage; the peaks in 87-4 and 88-1 are related to the 87-4 refueling outage. The values of DBOPMT in quarters between these peaks do not vary a great deal, but are generally a little higher in quarters with a positive FOR. Thus, DBOPMT seems related to FOR in a coincident way. However, the magnitude and trend of such an indicator, or one that measures the amount of equipment out of service, may be useful as a monitor of equipment condition.

[Figure 8: DBOPMT (hours) and forced outage rate by year-quarter. Plot not legibly reproduced in this copy.]

[Figure 9: DBOPMT (hours) and forced outage rate by year-quarter. Plot not legibly reproduced in this copy.]
I Case 4 - Mean Time'to Return B0P/All Equipme'nt to Service vs. Equipnient Forced Outage (DB0PMT/MTSERV vs. EFO)

As in Case 2, OB0PMT and MTSERV were very closely related at this plant, so it suffices to look at just the relat%nship of DB0PMT to EFO.

This is shown in Figure 10.

The statistical analysis flagged this case based on the relationship of the peak in DBOPMT in 85-3 to the EF0 peak in 86-4, implying that a strong peak in OB0PMT led a peak in EF0 by a number of quarters.

However, this interpretation is misleading, as' the peaks in DB0PMT in 85-3 and again in 87-3 came at the end i

of a refueling outage, and for these peaks the same behavior as in case 2 with

[

a strong relationship.to plant outages can be observed.

The engineering review I

of trend plots ^ identified this relationship correctly. What is E teresting in this particular case' is the regularity with which CB0PMT trends up between refuelings, from roughly 85-4 through 87 5..

Thus, from the work performed to date it appears that there is little likeli-hood of finding a systematic, predictive relationship between a process type indicator and a p1' ant performance benchmark. The size of the peaks and regular behavior of some of the equipment based candidate indicators may be of some value as a measure of equipment condition.

1 As one additional check for useful properties of process indicators, a trend line between refueling outages.will be constructed for each plant and each process indicator.

The slopes of these lines will be compared to the slopes

.for the validation benchmarks.

INP0 Vali4 tion 1

As part of the overall effort to identify and validate maintenance performance indicators, AE00 requested that INP0 supply data for several industry-monitored 4

maintenance indicators for selected plants with above and below average SALP maintenance ratings. As part of the response, INP0 provided a graphical analysis.

43 i

l SanOH i

E e

i 3

j

,E s i

o i

6g

.p,7 m

i e

E j '*e o

i

$}t E_

l 3

.=wc m;4g

?

I e

e i

w 3

I i

i i

)

m o,

gg-

~

(,e

-m.q i

~

o e

s

..c}

t gi m-

~m Ol

~

c1 9

0 C

1 t

o t

C Riaagi:Ammi i

.0 J

C K N._E -

5 u

I 2

in i

-?, O e

~.,

i e

e o

c oU

[p 4

~y s.

gj ?

p E

i C

i t.l -'-

- g' L

i E).,

-+ 1 i

s o

g-

--m; s

s.~;

f n v e

i s

aw 1

i e

I L

i

_fu c.

n N

C SEnOH lYDl.UED 000' O.33 e

44 4

L b

4 Based on data averaged over the period from 1/85 to 3/88, the graphs shown in Figures lla,11b and 11c illustrate the range of the indi::ator across sites, and the characteristics of that range as a function of SALP Maintenance rating.

If SALP is taken as the measure of overall maintenance performance, then the figures illustrate, at least for average values, the extent that other indicators individually discriminate between above and below averge performances.

The indicators matched against SALP were forced outage rate (FOR), equivalent availability factor (EAF), safety system performance - emergency ac (SSP),

preventive to total maintenance (PMTOT), preventive maintenance itoms overdue (PMOVER), and corrective maintenance backlog (CMB).

The mean values for the above and below everage SALP maintenance rating population vary in the expected direction as a function of SALP for all the indicators except PMOVER.

In that case, on the average, the better plants have more PM items overdue. But even if the indicators behave as expected across the industry, the test of its _value as another indicator of good maintenance performance depends on how it behaves at individual plants.

Here the overlapping ranges of values for each indicator suggest that in most cases, given the values of the indicator.for a particular plant, one could not very well predict the SALP maintenance rating.

Two possible exceptions to this observation are EAF and the closely related FOR. The elimination of one very extreme outlier from the above SALP average plants vrould reduce the range overlap significantly in these cases, suggesting the SALP maintenance rating is related in some systematic way to EAF and FOR at least for averages over long periods of time.

Validation - Findings l

Using plant specific process data, no consistent validation result was found across plants for any single process indici or.

The INP0 graphical analysis 6

did not show any useful properties for ;,rocess type indicators.

In rare instances, some indicators such ae cM backlog give some indication of value at a specific plant. Several' of the equipment-based candidate indicators, e.g.,

mean time to return to service and B0P mean time to return to' service, show behavior worth further analysi';.

45

Figure lla - SALP Rating-Versus Selected MPIs C m eletion of 1 ALP Welatenance Retings eith Corrective Weintenance Beatleg > 3 Wesths,014 (1/05 - 3/88) 70-11 5

u 60-l4....

(... O

  • a
o..)

m

^ 00 ~

M**~;,'2lyfl n

n 40-a

.er b

ii e

} 30-u 1

20 Reds se enese 1%de se Wee owege W wooewee aerop w momeneus retags toimp Corre t ellen of S ALP Welat enance itatings wit h

,Preeestive Weistenenes iteme Oversee (1/sl - 3/88) 15-14-13-12-M 11-10-9-

j 8-7 i....

t.-

. o a

.. (* 4 5-

.... r.. o 4-3-

2-1-

[

0 i

i mots va see e ReNe se base retege meets M w ome enrege W morteme rateg e

e e

46 4

[Figure 11b: Correlation of SALP maintenance ratings with forced outage rate, and with equivalent availability factor (1/85 - 3/88). Plots not legibly reproduced in this copy.]

[Figure 11c: Correlation of SALP maintenance ratings with safety system performance (emergency AC power), and with ratio of preventive to total maintenance (1/85 - 3/88). Plots not legibly reproduced in this copy.]

As discussed in the previous section looking across indicators, some process indicators such as PM to total maintenance do show trends toward targets. Thus, they are valuable as part of feedback to management on the implementation of their program. However, the validation effort has indicated that they do not behave in a sufficiently stable way to provide a leading indicator or to correlate with measures of plant performance. Additional analyses of average trends of process indicators between refuelings will be performed.

IV. USE OF NUCLEAR PLANT RELIABILITY DATA SYSTEM (NPRDS)

A. Introduction

In mid-August the AEOD staff, with support from several contractors, began an intensive effort to explore and validate the NPRDS as a source of data for maintenance performance indicators. This effort was based on the premise that the effectiveness of maintenance is most quickly and directly reflected in the rate and duration of component failures, and the NPRDS is specifically designed to capture such information in a consistent way across all plants.

Further, the NRC staff has on-line access to the NPRDS. Finally, the potential value of NPRDS was reinforced during the site visits, when in a number of cases plant personnel either used the NPRDS themselves or referred the team to the NPRDS as the best source for some of the MPI data.

Since INPO assumed responsibility for the NPRDS in 1981, the system has shown vast improvements as a source of industry component failure data. This progress and the need for further improvement are discussed in SECY 88-1. With further improvement and expansion of scope to cover more plant systems, the NPRDS could become a viable source of maintenance effectiveness data in the near term.

The indicators constructed from NPRDS data have been subjected to the same validation approach as the candidate MPIs, i.e., engineering review of trends and statistical correlation runs. This effort was made more meaningful because comparably defined data are used for all indicators for all 23 plants. The remainder of this section first describes the indicators developed from NPRDS data and how they relate to the candidate MPIs. Then highlights of our validation results are discussed, including a discussion of anticipated problems with NPRDS scope and the need for continued improvement in failure reporting completeness.

B. Indicators Based on NPRDS

A total of seven indicators were constructed using NPRDS data:

    Maintenance Rework

    Ratio of failures discovered during surveillance to total failures discovered (Surveillance Ratio)

    Average time for BOP components out of service during a calendar quarter (Average BOP Outage Time)

    Mean time to return components to service (Component Return Time)

    Failures of components in outage dominating systems (ODE Failures)

    Average time ODE components out of service (ODE Outage Time)

    Failures reported per 1000 components (Failures per 1000)

In all cases, variation from plant to plant due to different reporting philosophies was lessened by using only those component failures that the NPRDS Reporting Procedures Manual requires to be reported. Reports of incipient failures, reported at the option of each plant, were not used. The following paragraphs describe the relationship of the NPRDS indicators to the candidate MPIs.

Rework was based on one of the alternative definitions given for the candidate MPI entitled Rework, i.e., the number of repeat corrective maintenances performed on the same component. A restriction was added that the repetition must fall within 10 days of the initial corrective maintenance. This restriction gave higher confidence that the repetitive failures would indeed be related to the same uncorrected root problem, and 80-90% of the sample cases reviewed using the NPRDS failure narrative indicated repetitive maintenance was indeed being performed.
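The rework count with its 10-day repetition window might be sketched as below. This is an illustration, not the actual NPRDS query: the component names and event days are invented, and each record stands in for a corrective maintenance report.

```python
# Sketch of the rework indicator: count repeat corrective maintenance (CM)
# events on the same component that fall within 10 days of the preceding
# CM on that component. Records below are invented for illustration.

def count_rework(cm_records, window_days=10):
    """cm_records: list of (component_id, day) corrective maintenance events.
    Returns the number of repeat CMs within the window on the same component."""
    by_component = {}
    for comp, day in sorted(cm_records, key=lambda r: (r[0], r[1])):
        by_component.setdefault(comp, []).append(day)
    rework = 0
    for days in by_component.values():
        # Compare each CM with the one immediately before it on that component.
        for prev, cur in zip(days, days[1:]):
            if cur - prev <= window_days:
                rework += 1
    return rework

records = [
    ("pump-1A", 5), ("pump-1A", 12),   # repeat within 7 days -> rework
    ("valve-3", 20), ("valve-3", 45),  # 25 days apart -> not rework
    ("fan-2", 8),                      # single CM -> not rework
]
print(count_rework(records))  # → 1
```

The window is the knob that trades false matches (unrelated failures counted as rework) against missed matches, which is why the report's 80-90% narrative-review figure matters.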

Surveillance Ratio was defined analogously to the candidate MPI. The same NPRDS fields were used to construct the statistic as were used by the plants that obtained this ratio from NPRDS.

Average BOP Outage Time is the ratio of the total out-of-service hours for components belonging to BOP systems in a quarter divided by the number of BOP components out of service in that quarter. Both the numerator and the denominator could be tested as indicators, but that has not yet been done. Average BOP Outage Time differs from the candidate BOP-related MPI, which is a mean time out of service calculated using the components returned to service in a given quarter.

Component Return Time is analogous to the candidate MPI mean time to return to service.

The remaining NPRDS-based indicators, ODE Failures, ODE Outage Time, and Failures per 1000 components, are new candidate MPIs.

Failures per 1000 components uses the entire NPRDS failure data base for each plant and divides the number of failures discovered in a quarter by the number of components at the plant in the NPRDS scope, i.e., the number that could potentially be reported as failed.
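As a minimal numerical illustration of how this per-quarter ratio is formed, the sketch below computes a Failures per 1000 Components value; the failure and component counts are invented for illustration, not NPRDS data.

```python
def failures_per_1000(failures_in_quarter, components_in_scope):
    """Failures discovered in a quarter, normalized per 1000
    reportable components in the plant's NPRDS scope."""
    return 1000.0 * failures_in_quarter / components_in_scope

# A hypothetical plant with 2500 NPRDS-scope components and
# 45 failures discovered in one quarter:
print(failures_per_1000(45, 2500))  # -> 18.0
```

Normalizing by the reportable-component count is what makes the statistic comparable across plants of different size and scope.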

The remaining indicators, ODE Failures and ODE Outage Time, use failures reported against a specially selected set of components.

One data acquisition consideration for the ODE indicators was that a consistent set of data be obtained from all plants. Based on the staff's interaction with industry NPRDS coordinators at various plants, it is believed that, although reporting varies dramatically in quantity among the plants, the coordinators generally report important failures. Therefore, failures that are "announced" in nature and result in a plant outage were considered a safe set to rely on for industry-wide records that would represent a consistent set among the plants. To further explain "announced failures": these have been interpreted as failures of normally running equipment that is necessary for continued plant operation. Such equipment may not immediately cause a shutdown, but would impact operation eventually. With this basis, the data acquisition proceeded using NPRDS.

The ODE candidate indicators should, given good NPRDS reporting, correlate well with plant availability, FOR, and EFO, or possibly scrams. Other factors, such as the fact that the reporting scope of NPRDS does not include all of the equipment that causes forced outages at plants, were considered in the analysis. It is believed that, given a good correlation between this set of equipment and availability, the trending of this same set of equipment for failures that are of a degraded or immediate nature (possibly some incipient reported failures at some plants) would prove to be a leading indicator of plant unplanned outages, including scrams, which present unplanned challenges to plant safety systems.

The ODE components, i.e., the plant equipment that have historically contributed the most to unplanned outages, were selected starting with a compilation performed by S.M. Stoller Corporation for EPRI using the OPEC-2 data base.* From this compilation, equipment was eliminated that is not normally running during plant operation, along with structural items such as steam generator tubes and BWR recirculation piping, and equipment outside the current as-implemented scope of NPRDS. BOP systems and components such as the circulating water, non-nuclear service water, non-nuclear closed cooling water, instrument air, service air, turbine-generator and turbine-generator support systems, and the condenser are currently outside the reportability scope of NPRDS. The restricted scope of NPRDS regarding BOP systems limited the equipment considered to the feedwater, condensate, and plant ac power systems.

* Results of this compilation have been published in a series of EPRI reports: EPRI-NP-5544, EPRI-NP-4368, and EPRI-NP-3480.

Finally, failures for all components in these systems were counted by quarter for ODE Failures, and the associated out-of-service times were used for ODE Outage Time.

C. NPRDS Validation Results

This section discusses the preliminary results of the trial program based upon NPRDS data. NPRDS unit-specific data are proprietary; therefore, these data are presented without any association to a particular plant to make the presentation non-proprietary. Data for the seven NPRDS-based indicators have been placed in a data base, plotted, analyzed for trends and patterns, and correlated with plant performance benchmarks. Cumulative plots, which show trends for plants grouped by NSSS vendor type, and bar charts for each NPRDS indicator for each plant will be provided in the final report. In the remainder of this section, highlights of the validation results are provided, beginning with some generic results and progressing to some specific examples that are encouraging.

General Observations

Average correlations between an indicator and a benchmark over long periods of time can establish general relationships, but they do not provide insight into leading behavior or the trends at a plant, and were not relied on heavily in this analysis. Nonetheless, data for each indicator were averaged over 1985 through 1987 for each site in the study and correlated with the corresponding average SALP maintenance rating and average availability. In general, these correlations showed the proper relationships, e.g., the greater the average time out of service, the lower the availability, but the scatter of the data was fairly large. Two examples are presented in scatter plots in Appendix 6.

The greater amount of comparable indicator data results in a higher number of preliminary indications of correlation, as shown in Table 5. In the earlier section, the seven candidate indicators available from plant-specific records produced about 120 potential correlations. The use of NPRDS data for seven candidate indicators has produced about 300 potential correlations.

Table 5 - Preliminary NPRDS Correlation Results

[The body of Table 5, a plant-by-plant matrix of preliminary correlation results (with lags) for the seven NPRDS-based indicators (Rework, Surveillance Ratio, Average BOP Outage Time, Component Return Time, ODE Failures, ODE Outage Time, and Failures per 1000 Components) against the benchmarks, is not legible in this copy, nor is a table footnote concerning whether data were available for all MPIs in all cases.]
In Table 5, the large number of correlations with lags of 0,1, 5, or 6 are due to a strong underlying relationship between failures reported.to the NPRDS and plant availability. As will be brought out in the following paragraphs this is most strongly reflected in the behavior of the failures per 1000 components indicator which uses all failures reported to NPRDS by each plant.

In addition, plant availability is associated with surveillance ratio and i

component return time, as both of these indicators look across all failures.

Rework, average 80P outage time, and the indicators based on ODE are less l

closely related to availability.

Rework Studies performed for perfonnance indicator development have historically and most recently consistently recomended that the amount of rework be an indicator of maintenance quality.8_/ As discussed in the previous sections, data for rework was unavailable at the plants.

Using NPRDS, indicators were developed

]

based on the frequency of corrective maintenance in a comparable way for all 23 plants in the study. Assuming good NPRDS reporting by licensees, it appears feasible to develop a reasonable measure of rework for all licensed plants

{

from NPRDS, wherea's such a neasure is not now tracked or available at most plants.

Figure 12 is actual plant NPRDS data and shows an example of how rework appears to be trending up piior to an extended forced outage period that was due to equipment problems.

Outage Dominating Equipment Figure 13 shows an example where these failures show some leading behavior, prior to a period of extended forced outage.

However, the developtrent and validation of a good leading indicator for forced outage rate or equipment forced outages using the failures has met with only moderate success due to b "Interin Report on the Development of Maintenance Performance Indicators by a Systematic Process", SAIC, September 1988.

55 l

9

i FIGURE 12 E forced Outoge Rote Pl AHI.

1 Legend: o

~'e~a ae-k 85 -I to 88-1 I

FOR REWORK w___

7 e

g..

- 6 tO p

x

~

~

,o Q:

O ta l '-

4c 1

I l

- 3 6

2 2

20 q

T,

~

&A

1-

,o

,o

,o o

o e

o MS-1 8%2 SS-3 M"r - 4 6--t 86-2 86-3 06-4 87-t 87-2 87-3 1 -4 88-1 6

YCOr -Quar te i

9 e

---_.-._..,m,,..

y.

-,-,---------e.-w.,--

we,,---

s-

=

m--

r-

=

.----.-----------w

i 53807VJ 300.

a e,

2

~

o I

l 1

-le 7

s cm E

2 A

en i

-d.

a a

S I

yw i

I 5E 5

n l[

g'_;

i l

..m E

d 5

i c

w

?

D

+~

1 2 m

~

l

=!

E

~

~

b W

2o L

e

~I.

i d

"8

[

u

\\

a o

O

+

? 8 l

5

~8 l

1 l

1 f;

~k...t',

F!T l

j l

ss\\,, s.

i &NN 4

s' "2

i l

l Qi e

l n

l L

l i

-g I

1-i ?

TG I,

i y

i 4

- :q l

JJ

.i c i

l l

l l

i 1

(( ",

Q C

Q O,

I C

C n

~

t n

i j

l

~

-)

57 1

t

limitations in the current as-implemented NPRDS scope and NPRDS reporting completeness concerns. While the underlying philosophy in determining the scope of the NPRDS was to include all systems that initiate plant challenges, major BOP equipment such as the main turbine generator, the condenser, and the circulating water system were not included. Thus, in reviewing the equipment forced outages for the 23 plants a sizeable proportion were found to result from equipment not in the NPRDS scope, as shown in Table 6.

The PWR types show a consistent percentage which is less than that for BWRs, a finding that is consistent with the relatively higher importance of the turbine generator for BWRs. The need to broaden the NPRDS scope to capture more B0P equipment in order to provide a good leading indicator for forced outage will be pursued further.

Table 6 Percentage of Equipment Forced Outages Outside NPRDS Scope NSSS Total EFOs Not Percentage EFOs Vendor EFOs in Scopa Missed GE 81 46 58%

Westinghouse 114 49 43%

c B&W 87 35 40%

CE 94 37 39%

Figures 14 and 15 show the results for BWRs and PWRs if the number of EFOs is adjusted downward to account for the more limited NPRDS scope and plotted against ODE failures.

These figures indicate that there appears to be a relationship between these two variables, but it is probably going to be a function of design, possibly down to th'e NSSS product line/ BOP design level.

Additionally, in a number of cases the particular failure that caused an EF0 could not be found in the NPRDS during preliminary checks. Thus, relative completeness of reporting across plants may be impacting the results.

l 58 4

7 l

l i

I

}

EO. FORCED CUTAGES IN NPROS SCOPE 5

5

$M 2

~

O M

^

m cc O

M s

i i

i i

O d

m i

a O

~

O T

~

o R

mO s

L._

9>

^D &

P D

O ~

'{ W C

M n

rn C

t M

g J

('

a 9

\\

a u

I O

~

O i

l e

e u

- a a

~

OO 4

e 0

59

)

EO. 70RCED OUTAGES IN NPROS SCOPE e

O M

O O

M a

o O

M A

O O

M

's o

I t

t I

i O

M _

O O

O O

Q 0-O O

O O

A _

m~

T 0 0

O m

a

g C

I C

/

F 0

r; a rn 5

i=

m M

M a

C fn

(

G O

OO a

g

~

z 0

60

S'

'l Failures Per 1000 Components This indicator exhibits the strong 9st relationship to plant availability, as

{

illustrated for two plants in Figures 16 and 17, whereas availability increases, the failures per 1000 decreases, and vice versa. Said another way,

~

the failure per 1000 indicator correlates with unavailability. This can be r

explained by the argument that when a given plant is down for some reason, t

either for a refueling or a forced outage, then access to more of the plant is possible, more surveillances are performed, and more PM is performed. All of l

these activities associated with an outage condition lead to the discovery of more failures, with the number discovered proportional to the length of the outage. However, the magnitude of this indicator in an outage period may provide a gauge on the plant material condition going into that outage.

l Also, while peaks coincide with long outages, the indicator is fairly well-behaved and does show some trends with the plant operating, thus it could be tracked from quarter to quarter, with any undesirable trends or higher than usual values deserving attention. However, for this use the fiPRDS reporting would have to be timely. At the beginning of 1988, the staff evaluation of NPROS indicated the median time to submit a failure report was about one f

calendar quarter,'with wide variation from plant to plant.

Reporting lags of less than one quarter would be needed for indicators with short lead times.

Surveillance Ratio The behavior of this indicator is closely tied to plant outages as a result of higher number of surveillances, both conditional and fuel cycle related, that are performed during outages.. Figure 18 shows an example of this association.

The cumulative curvo shown in Figure 19 sh'ows the short term impacts of outages in the changes of slope in the curves.

The figure also illustrates that the ratio provides some differentiation among plants in that all the curves have a similar slope except two.

g Eb e

9 61 O

O

i AVALABUTY (.)

m-

~

wp o

a a

8 8

9 i>

o -

3_

l a

"G c

i

.,.,s-g l

l 3.

r=

s

~q

  • 3 I

cc l

i M

F= 1 t

1.-

-l l

5

.g s,;

I L,M V

- [a i

I I

1 m

arm s s

~

a a

5 5

l o!

"'4 i

,=

5 i

E I

c A

k-b, i

'A e

1 l

[

D k

'a r

l n

i r

-)'

l 6

-- S j

6o i

l 1

?,

1.

o

t. o?

o 3I

, p.

O

,1 r

. ;l 1,

.=-

3 l

3 I

a e

1 l

i Y

A I

~

__b.

~

-l r,

8

)

j ml 1

I 4

i 1

j I

0 d

3 3

'J

s l

l FALURES' FER 1000' COMPONENTS I

i l

~

~

A i

i

~

l

'l j

.3 62 l

__-7-__

i 1

t 4

r i

I l

i AVALABLl1Y (.)

cc p i

v. a i

8 a

8 3

3

.L '>

o

-- c i

o m 3,

-fe-i 4

)

3 i

y n

3.

5

-w%.

r,--

CD 3.

F l

ar w

- 1; I

l M

5 ff' j

l I

g~

s....

l

f. L ;

~:w Of l

a t

m o u?-

s< ~,

1; y

q t

I 5

-p C

A 4~y m

=

p; m,,

o i

i o

s u,.

,m w

2 o

l t

?

t".

?_

W i

i i

i-i

{

l v

i i

l 1

l 1,

4..

de 5

i I

x l

v l

O E

I e

i ft l

O 37 i

b l

r l

d l

I.,

2 i

a n:-

3 j

i 9

I l

i i

o t /) -

5 0

8 J

s "l

i FALURES FER 'C00 COV;CNENTE i

i

~

%,5 i

f g

l' i

~

i i

i 63 I

I s

)

i I

t t

AVALABUTY (.)

m "U cn r-o s

a 8

8 9

i>

l o +

f t

l

? --

N p-L-

i q.M,.n 2

3

~1r n e

i cc 3.

~ t; F_

e 1

l 3..

8

L, y-1 l

l l

I l

7 3

l i

i i u 1

I 6

s..m.

O i

d' m

I a,

]

I

.?

o i

1 1,

o a

'4' 5

R, 2

j c

1 4

1,?

R i

i r-a m

e f

I M

lQ i

i a,

m e*

n

,x i

=

I.m i

=:

l 35 a

N

~

1 I

C a

. r,s C

??

~..

z

+}

I

=.

{t_

o l'

e..

_J!

r i

l W

.f 1

11 1

~

m i

r=

0

~.

3 Months x

p 5.10.7 5 X

X x

x x

s of Mandenarce Personnel Corsananations x

p 1.0.4.5 x

x x

x Ag AllA 8 of Missed Survedfarces on Equipment x

p 5.5.3.4 x

X r

X s of MWHs.Wr'dlen by Maintenance Personnel x

p 3.3 X

x s of Hepeat Maintenarco liems x

P 6.4 x

X x

X x

s of Temporary Moddcations over 3 Monsh Delay (%)

x p

S.6.5.4 x

X x

x x

  1. Healignment Enors during Mawdenance x

OP 5.1510 x

X x

X X

s Tenporary Moddcations x

p 510. M X

X x

x 3 Wrong UruvWrong Train Everes X

0,P 1.0 x

X X

X x

% Correctrve MWils Older than 3 Monels x

p 56 x

x

%iEHsdueip airsenarce X

OP 1.0 x

x X

x x

x x

% Preventsve MWits Cortpleted on Salesy Equipment a

p 5.1 x

X Accumutaled IAxation of I.CO Condesons x

p 1.0 tx X

Baci log of ECf1s Relaled to Equipment Per1onnance x

P 56 x

x x

X Ilackkx; of Mairdenance Procedure Revisions o

p 3 6.5.7 X

x X

x X

Componers iniCO Condston x

p 1.0 X

X Corrective Mandenance Backlog > 3 Marsis x

p 56 x

x x

ESF Actualions due to Mairdenance & Tessing X

0 1.0 x

X x

X X

X FractionIabor llours on Survedance x

p 3.4.5.5 x

x I raction MWils Heviewed by OC x

p 4.4 x

x e

e I

Tab 6e 1 Evaluation of Maintenance Pts Mantenance Human Factors P1 Dana feeg Dwaean Ommy n W Ames comanun Teanng Aden Oen l

I Hate of Mandenance flequested Trasnog Programs o

p 82 x

x X

X l

flate of Maissenarce Staff on Verxfor Courses o

p 82 x

x X

X

[

Hale of Mairdenarce Staff Heirainin3 x

p 82 x

x X

X Hale of Marixxsrs in Maintenarce x

p 33 X

X X

x Itateof Mrsaiwynnerds x

  • O.P 61,7.5 X

x X

X X

X Italeof MWHs x

p 55 X

X x

'flate of MWHs Conpleted x

p 55 X

x X

x i

flate of Out Of Service Tags x

p 55.75 X

x X

X Ilate of Perwing f30thlicasion Requests x

p 55 x

x x

X X

i flate of floot Cause Evaluations due to Mamsenance x

P 35.64 x

X x

X X

Hale of Spare Parts Unavadable x

P 33.72 x

x x

X Plannirm3 Halio OC/Maisdenarce Staff x

p 33.44 x

flatio, a f lours to Ikpair Degraded Cornponents/ Total Mainse-x p

52 nancta iloirs flatie. s Hepairs wtmle Degradets Hepairs Failed + Degraded a

p 52 x

Hasio Deiciercies Descovered in Surwemance/ Total Dncoveretl x

p 3.4 Halio. Fadures ing Post Mantenance Tess/s 64 F14 Tests x

p 64 X

x x

x l

Itatio. Iligt sty MWils/ Total MWHs x

p 54.52 x

x x

I flacio, Mean I air Tiene/ Time to Failure or Degrade a

p 1.0 x

X X

llatio. MWIls ReceiveddMWIls Compieled x

p 56 X

X

,ilatio. Preventeve Maineenance/ Total Massenance x

p 56,34 x

X X

flaho, t Addy/Covaractor Stall a

p 62.81 x

flepair Duration weli respect so AOT by Tech. Specs x

p 1.0 X

x x

j Safety Syste. i runction Trend X

P 1.0 x

x i

4 l

d L.

i i

l l

I I

C I

U l

's sW

'!i 1

hb bl b

1

____.j__..

I k

e g

m n

u 1

p xxx

=

xx x

==.

=

1 MMM M x M -

=

wM x

x M

.mMM E j,I xx x

1 1

5 ij!::!? !j;;d:!}@!ajj:

m

}j 3... g....

g..e3

..e.,

e

..g W

^

=

+

w1 E

h h

wf z

e jf i

j

.l t 5

3 1

=

4'w}

Uwk

}

k3~ be j3 a2 33a i

8 316

=

=

{ ; 1, j.

3 i

I.e s =

. k.,

c

=g irg <jsf.

E =g e 8 ats 8*

kb a

j !. J.

JET C

I E.5.H W=5 'l e

ss e

tE gg!5 15 % li

=Masj II 5

g.gymd 4 ; ;.

o i!

B -ai 82a E

. l 2. } =EI I I " :]".

g H.m s

a IIUS

$d"g63S S' a

sgi=

3*a E

se

---g-2 n z z.

22222

$$$bb bbbbb bbbib L

o, XX X

s ro t

c m

a F

X x

na

,y m

u 4

1 w

x x x

X xX e

c s

n t

a P

n e

x e

m e

c a

n M

a n

x x x

e t

n ie 4

2 e

0 0 A07 t

1 1 1 1 3

0 f

8 3

)

o m

1 n

P.

i o

pppp s

O y

le s

e u

g t

n a

xX x oX i

v E.

o t

)

r e

o l

1 e

p b

e a

r t

d c

t e

e a

r l

T i

l u

o q

c e

r h e

(

ge ur e

oT l

h l

b t n a

l o

l a u a

(

g v

r c

on a

ee s h l

p as i

e y

b s I

c e P

dT l

l a n b

n l

n&

i a

e i

Iee l

p a e o

v c c c i

ni r s a

a n aae

~

a o

a n v

t a

mn c i

r e n

t e

~

n os ags f

n a

a t t

s.

d n r i c nA ea ai a

t i s PM d e NPv d ia o a st Y

d n

t eu n oe J E 1

=

f t

r s cp l

e ear s 1

w

=

ot t

s uRit X

ru nr r n a G

yd

= pO Ss eiP f

a 0

1 y n v!

l 1

1 = =

o! q i

0 l

r r t N

8 x

o y p0 1,

ea r

r o

UST c war E

WW t

R

~

,1 l

ll!

l1il1lll lll

Table 2.

Short list of Potential _ Indicators Number of repeat maintenance items; Number of realignment errors; Number of wreas unit / wrong train events; Number of inadvertent engineered safety features (EST) actuations; Backlog of engineering change notices (ECNs) related to performance; Mean time between repairs; Scrams due to test and maintenance; Vrong parts events; Heat loss rate.

5.

Data collection and Validation 5.1 Data Collection The principal data sources for analyzing the potential indicators proposed by this study (Table 2) are the NRC maintenance performance indicator data collected during the summer 1988; Licensee Event Reports; and Licensed Operating Reactors Status Summary Report (NUREG-0020).

This project participated in the previously mentioned NRC data collec' tion effort and data from eight sites will be used for the purposes of this analysis. Nevertheless, one of the significant limitations in this project is the limited availability of plant data, particularly for those plants that do not store data in an easily retrievable form. It is furthec recognized that follow up visits may be required in order to better explain anomalies possibly recognized during the analysis.

5.2 Validation Analysis The purpose of the validation analysis is to investigate the ability of the indicators to anticipate changes in safety performance of the plants.

The validation process involves estimating the time relationships and degrees of association between the indicators and a measure of safety.

This will be performed using methods from systems modeling and statistics.

The search will all6w for extremes in indicator values, and will account for the f act that performance may be reflected by optimum ranges rather than extreme values.

Several alternative measures Eere considered as a basis for the objective reference measure. These include the previously used SALP assessments, "significant event" occurrences, risk-oriented measures of scrams and system unavailabilities, and measures of unit and systems availabiltty.

The preferred attribute's' of such a ' measure were that it should be based on a variety of -historical ' events (in the spirit of "the Commission's recommendation to use retrospectiva analyses for vftidation), which should AIS

. t be quantifiable, and strongly associated with risk as an expression of safety significance. None of the measures uniquely met these preferences.

As a result, it was decided to use two approaches: (a) to develop (on a trial basis) a measure of safety performance that reflects the historic risk perspective, and (b) to use a series of indicatets representing intermediate elements of safety, such as the number of secams, system unavailabilities.

This work is presently under way. Results are expected to be available in October 1988.

6.

Prelimina ry conclusions The results of the analysis are still being developed. The indicators selected in this work are unlikely to be uniquely "correct" in a formal sense. Other analysts could make a different selection based on their own preference schemes. The use of a systematic way allows analystu to expresa preferences with a common basis for discussion.

The systematic frameworks developed provided a basis for discussing attributes of maintenance and how it relates to safety. More importantly, they provide a structured and explicit way to select indicators for validation.

The selection of the indicators for a final analysis was based on their ability to anticipate changes in plant sa'fety. The results of the validation analysis will provide additional insights for the usefulness of these indicators to the NRC's performance indicator program.

7.

Biblioeraphy of Literature Sources i

NRC Documents

  • L An Investigation of the Contributors to Vrong Unit or Wrong Train Events, NUREG-1192, US Nuclear Regulatory Commission, Washington, DC. April 1986.

i Boccio, J.L., and M. A. Azarm Performance Indicator Program-- Data L

Collection Work Plan (Draft), Brookhaven National Laboratory, Upton, NY,

,May 1988.

t I

Bogel, A.J., et al., Analysis of Japanese-US Nuclear Power Plant Maintenance, NUREG/CR-3883, Battelle Haman Af f airs Research Centers, t

Seattle, WA, June 1985.

i Loss of Integrated Control System Power and Overcooling Transient at Rancho i

Seco en December 26,1985, NUREG-1195, US Nuclear Regulatory Commission, n

Washington, DC, February, 1986, i

{

Loss of Power and Water Hammer Event at San Onofre, Unit 1, on November 21, i

1985, NUREG-1190, US. Nuclear Regulatory Commission, Washtngton, DC, January 1986.

Murphy, G. A., and J.W. dietcher,II, Operating Experience Review of Power Operated Relief Valves and' Block Valves in Nuclear power Plants, l

NUREG/CR-4692, Oak Ridge Nat'ional Laboratory, Oak Ridge, TN, October 1987.

I i

f A16

57C Augmented Inspection Team (AIT) Reports. December 1985-January 1988, l

US Nuclear Regulatory Connission, '/ ash l.ngton, DC (21 AIT reports reviewed.)

NRC Information Nctices, January 1984 to June 1988, US Nuclear Regulatory Commission, Vashington, DC (Approximately 150 notices reviewed.)

NRC Temporary Instruction #2505-15 US Nuclear Regulatory Comission, Vashington, DC, 1988.

Olson, J., et al., Development of Programmatic Performance Indicators (NUREG/CR-5241), Battelle Human Affairs Research Center, Seattle, VA, in publication.

l Performance ladicator Program Plan for Nuclear Power Plants. US NkC Interoffice Task Group on Performance Indicators, US Nuclear Regslatory Coi.ssission, Washington, DC, September 1986.

Report to Congress on Abnormal Occurrences, January - March 1987, NUREG 0090 Vol. 10 No. 1. US Nuclear Regulatory Commission, Washington, l

l DC, October, 1987.

Safety Evaluation Report related te the Restart of Davis-Besse Nuclear Power Station, Unit 1, following the event of June 9, 1985, (Docket No.

50-346) US Nuclear Regulatory Commission, Washington, DC, June 1986, i

t Seigel, A.I., et al., Front-End Analysis for the Nuclear Power Plant Maintenance Personnel Reliability Model NUREG/CR-2669, Oak Ridge National i

laboratory, Oak Ridge, TN, August 1983.

i Status of MIintenance in the US. Nuclear Power Industry, 1985, NUREG-1212, US Nuclear Regulatory conssission Washington, DC, June 1986.

Stello, V., Guidance on the Use of Performance Indicators, EDO Announcement

{

i

  1. 30, US Nuclear Regulatory Commission, Washirston, DC, February 5, 1988.

' Stello. Y., Memorandurs to Chairman Zech and Conantasioners Roberts, Carr and

}

Rogers Proposst R.alemaking for the Maintenance of Nuclear Power Plants, US Nuclear Regulatory Convaission, Washington, DC,, June 27, 1988.

l 1

Systematic Assessment of Licensee Parformance (SALP)

Inspection Report No.

i 86001, Palisades Nuclear Generating Station, Consur.aars Power Con.pany i

(Docket No. 50-255), US Nuclear Regulatory Cocnission, Washington, DC, 1

February 1986.

l Transcript of Proceedings, Public Wortshop for NRC Ralemaking on Maintenance cf Nuclear Power Plants, US Nuclear Regulatory Commission, i

Washington, DC, July 1963.

a Vesely, W.E., et al., deesures of the Risk Impacts of Testing and l

Maintenance Activitics SUREG/CR-3%1, Battelle eelu.nbus Laboratories.

I Columbus, OH, November 1983.

[

A17

4 1g.

4 Other Sources Bauman, M.B., et al., Survey and Analysis of Work Structure in Nuclear Power Plants, EPRI NP-3141. Electric Power Research Institute Palo Alto, CA, June 1983.

Bignell, V., et al., Catantrophic Failures, The Open University, Milton Keynes (UK), 1977.

Engel, R.J., et al., The Causes of Nuclear Power Plant Unavailability (WCAP ll716), Westinghouse Electric Corporation, Pittsburgh, PA, January 1988.

Geivitz, J., et al., wandbook for Evaluating the Proficiency of Maintenance j

Persornel, EPRI NP-57.0, Electric Power Research Institute, Palo Alto, CA, March 1988.

l Human Factors Review of Power Plant Maintainability, EPRI NP-1567, Electric Poser Research Institute, Palo Alto, CA,1981

Inaba, X., Observations and Recommendations on the Proposed Rulemaking for the Maintenance of Nuclear Power Plants, XYZXY Information Corporation, July 1988.

INPO maintenance indicator documents, Institute os' Nuclear Power Operations, Atlanta, GA.

Kleta, T.A., What Went Wrong?, Gulf Publishing Company, Houston, TX, 1985.

Nuclear News, honthly "Power" and "Internationai" Sections, American Nuclear Socieby, L4 Grange, it June 1984-May.988 (Approximately 31 relevant artteler reviewed.)

Potster, T.H., Pettermance Monitoring, Lextngton Books, Lexington, MA, 1982.

' Utility-specific indicator report.s (as available).

9 i

e 4

W e

Em Gb a

Al8 w----

~m

8.

Comments on Maintenance Indicators Suggested by the XYZYX and Westinghouse Corporations SAIC, tasked by RES to develap maintenance indica ws, tased on a systematic approach, was requested by the NRC staff to comment (n two recent reports related to performance indicators. A summary of the SAIC comments is provided here.

8.1 The XYZYX Report l'ollowing the NRC's Maintenance Rulemaking Workshop in June 1988 Dr. Kay Inaba developed a report, "Observations and Recommendations on the Proposed Rulemaking for the Maintenance of Nuclear Power Plants," (XYZYX Corporation, July 1988), that discussed the NRC's approach to the maintenance rulemaking and related performance indicators.

In that report, Dr. Inaba criticized h7C for not taking a systematic approach to the development of indicators and presented a top-down hierareny for identtfying indicators.

The report suggests a set of indicators associated primarily with plant availability. The report puticularly identifies 4

operating logs and maintenance records as sources of data for these indicators.

It is agreed that some of Dr. Inaba's points had merit at the time the report was written. This project, which has systematically identified indicators, was not available to XYZYX at the time because the analyses were under way. A top-down approach is a logical path to follow if the indicators being sought are only reliability parameters or their surrogates, as Dr. Inaba recommends. The limitation on reliability parameters is that they lag maintenance performance and are often slow in responding. More critically, plant availability is only one aspect of safety, and is considered a limited one at that.

In practice, data collection is a determining factor in the selection of indicators. Airline maintenance programs typically use extensive data collection programs, and detailed records are maintained. (See SAIC's Interim Report, Attachment B, for details of a typical commercial aviation maintenance program.) Such records do not exist in the nuclear industry and would require a very extensive commitment of resources to become available.

8.2 The Westinghouse Report

The study by Westinghouse (Engel, R.J., et al., "The Causes of Nuclear Power Plant Unavailability" (WCAP-11716), January 1988) is focused specifically on identifying factors that appear to have a significant influence on, or correlation with, commercial plant availabilities. The analysis was performed using multiple regression analyses to estimate the significance of a series of factors associated with the physical plant and with plant and utility organizational factors. External factors, such as the Nuclear Regulatory Commission, were discussed but not quantified. The factors described as most significant were:

(i) physical plant:
- Rating
- Age (service time)
- Use of salt water cooling
- Plant complexity (number of NSSS valves)

(ii) organizational:
- Staff size
- Turnover rate
- Maintenance quality
- Communications
- Overtime use

Data were presented in support of only a few of these analyses. In order to evaluate the results that were presented, a brief statistical study was performed of the relationships between complexity and availability, and between plant age and availability.

In the case of complexity, statistical tests (Kendall's "tau" test and Spearman's "rho" test) showed that the calculated correlation could be spurious; the tau and rho tests both showed that the likelihood of the "null" hypothesis (i.e., that there is no relationship between complexity and availability) being valid was approximately 50%. That is, the calculated correlation is as likely to be a result of random data points as it is to be a result of an underlying relationship. In the case of age, similar analyses were performed; in that case, the "null" hypothesis could be rejected with a probability of approximately 90%. However, in performing a regression analysis using the Westinghouse data, only four points out of a total of 74 were influential in determining the correlation coefficient used in that study; these were at the extremes of the data.
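The rank-correlation check described here can be reproduced in outline with standard tools. The sketch below uses synthetic, randomly generated stand-in numbers (the actual WCAP-11716 data are not reproduced in this report), so it illustrates only the mechanics of the tau and rho tests, not the 50% figure:

```python
# Hedged sketch: synthetic stand-in data, not the actual Westinghouse values.
import numpy as np
from scipy.stats import kendalltau, spearmanr

rng = np.random.default_rng(0)
complexity = rng.uniform(1000, 5000, size=74)   # stand-in for NSSS valve counts
availability = rng.uniform(0.4, 0.9, size=74)   # independent, so no real relationship

tau, p_tau = kendalltau(complexity, availability)
rho, p_rho = spearmanr(complexity, availability)

# A large p-value means the null hypothesis (no relationship) cannot be
# rejected -- the correlation could easily be a product of random scatter.
print(f"Kendall tau = {tau:+.3f}, p = {p_tau:.2f}")
print(f"Spearman rho = {rho:+.3f}, p = {p_rho:.2f}")
```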

The analyses themselves were aimed at factors influencing availability.

This is only tenuously linked to safety.

Based on these limitations, it has proved difficult to use the results of that study as a basis to select indicators.


APPENDIX 4

BROOKHAVEN NATIONAL LABORATORY INTERIM REPORT: PROGRESS ON CONSTRUCTING PLANT LEVEL AND SYSTEM LEVEL INDICATORS OF MAINTENANCE EFFECTIVENESS

1.0 OVERVIEW

A procedure has been developed for constructing system level and plant level maintenance effectiveness indicators from basic, equipment level data.

Because they synthesize individual equipment information, the system level and plant level indicators have enhanced powers for identifying trends and changes in maintenance performance.

To aid in interpretation of the outputs, powerful statistical tests are applied to the indicator outputs to signal when significant trends and changes in maintenance effectiveness are occurring.

The system level and plant level maintenance indicators can also be decomposed to identify the contributors to the maintenance ineffectiveness, including, for example, uncontrolled aging of equipment. The system and plant level maintenance effectiveness indicators have been applied to actual maintenance data collected from Plant 1.

The results from the application were extremely promising.

The outputs of the maintenance indicators were not only consistent with engineering information, but predicted problems and shutdowns before they actually occurred.

The indicators furthermore displayed the improvements in maintenance effectiveness which resulted from actual plant and NRC instituted changes.

The results from the applications are so promising that further application and development of the indicators is highly recommended.

2.0 BASIC DATA USED FOR THE SYSTEM LEVEL AND PLANT LEVEL INDICATORS

The system level and plant level indicators are constructed, or synthesized, from basic equipment maintenance ineffectiveness data. The basic equipment maintenance ineffectiveness data which are collected for a given piece of equipment are:

- The number of failures per quarter, and
- The total downtime per quarter

The failures involve loss of function of the piece of equipment.

The total downtime is the total out-of-service time for the equipment for failures, for corrective maintenances, and for preventive maintenances. The undetected downtime is included in the downtime for a failure. Failures and downtimes occurring during plant operation are distinguished from those occurring during plant shutdowns. The number of equipment failures and total downtime occurring during operation represent the ineffectiveness of maintenance in keeping the equipment in an operable state. There is general agreement from past studies and reviews that the above data represent the most direct measures of equipment maintenance ineffectiveness. This is true whether the equipment is safety system equipment or balance of plant equipment, and whether it is standby equipment or operational equipment. These basic data, furthermore, are utilized in the INPO safety system performance indicators, though not in the same way as they are used here.

The above failure and downtime data can be collected at the component, train, or system level. Data collected on a component level allow more detailed analyses to be performed on contributors to system and plant maintenance ineffectiveness. The above data, at a component level, are obtainable from plant maintenance logs. NPRDS is also capable of providing these data at the component level. The system level and plant level maintenance indicators which are constructed from the equipment data do not require the data to be collected on all the equipment, but only on a representative sample, which is an efficient feature of the approach.

3.0 CALCULATING THE BASIC EQUIPMENT INDICATORS

To construct the system and plant level indicators, the basic equipment maintenance ineffectiveness indicators must first be constructed. This is done as follows. For each piece of equipment considered, the number of failures per quarter and the total downtime per quarter are collected over the number of quarters to be evaluated.

The failures in the different quarters are then scaled to give relative values.

This is also done for the total downtime per quarter.

The relative failures per quarter and the relative downtimes per quarter are the basic maintenance ineffectiveness indicators which are then utilized for the equipment.

To obtain the relative failure indicator for a given piece of equipment, the failures in the different quarters are first ranked according to their size. The smallest number of failures has rank 1 and the largest number of failures has rank N, where N is the number of failure values, which is also the number of quarters evaluated. The intermediate failure values are given intermediate rank values from 1 to N according to their size. (Tied values are assigned the average of their ranks.) The relative failure indicator for a quarter is then defined to be the rank for that quarter divided by the number of failure values (number of quarters). The relative downtime indicator for the piece of equipment per quarter is obtained in a similar manner. The relative indicator values are thus the relative number of failures and downtime per quarter.

This method of obtaining relative values is a standard statistical technique (ranking technique) which allows powerful statistical tests to be performed to identify trends and changes in performance. This will be important in utilizations of the indicators. The calculation of the relative downtime indicator is illustrated below for a record of five quarters; the calculation of the relative failure indicator is similar.

QUARTER                1     2     3     4     5

DOWNTIME (HOURS)      12    36   106     8    10

RANK                   3     4     5     1     2

RELATIVE DOWNTIME    3/5   4/5   5/5   1/5   2/5
INDICATOR
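The ranking procedure above can be sketched in code. This is an illustrative implementation only (the report specifies the procedure, not any software); it reproduces the five-quarter worked example:

```python
def relative_indicator(values):
    """Rank quarterly values (tied values get the average of their ranks)
    and divide by N, giving the relative failure/downtime indicator."""
    n = len(values)
    sorted_vals = sorted(values)
    ranks = []
    for v in values:
        first = sorted_vals.index(v) + 1      # 1-based position of first occurrence
        count = sorted_vals.count(v)          # number of tied values
        ranks.append(first + (count - 1) / 2) # average rank across the tie
    return [r / n for r in ranks]

# Worked example from the text: downtimes for five quarters
downtime = [12, 36, 106, 8, 10]
print(relative_indicator(downtime))  # -> [0.6, 0.8, 1.0, 0.2, 0.4]
```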

4.0 INTERPRETING THE EQUIPMENT INDICATORS AS MAINTENANCE INEFFECTIVENESS INDICATORS

As was indicated, the relative failure indicator and the relative downtime indicator which were described in the previous section can be interpreted as relative maintenance ineffectiveness indicators for the equipment. The relative indicators range in value from 0 to 1. As the relative failures or relative downtimes in a quarter increase, the relative maintenance ineffectiveness on the equipment increases because relatively more failures or downtime are allowed. The relative maintenance ineffectiveness is greatest when the indicator value is 1, since this value represents the greatest number of failures per quarter occurring or the greatest downtime per quarter occurring.

As the basic equipment maintenance ineffectiveness indicator, we shall use the average of the relative failures per quarter and the relative downtime per quarter. We shall thus use the average value of the relative failure indicator value and the relative downtime indicator value per quarter. Other functions of these indicators could be used; however, the average allows standard, powerful statistical tests to be utilized to identify trends or changes in performance. The statistical tests can thus be used to "calibrate" the indicators to aid in their interpretation.

5.0 CONSTRUCTING THE SYSTEM LEVEL AND PLANT LEVEL INDICATORS

To construct the system level indicator, the equipment level indicators are constructed as in the previous section for the different equipment in the system. Each piece of equipment has its own relative maintenance indicator per quarter. The individual equipment indicator values for a quarter are now averaged over the different equipment in the system to obtain the system level maintenance indicator for the quarter. The system level indicator represents the relative number of equipment failures and downtime occurring within the system per quarter. Hence the system level indicator represents the relative maintenance ineffectiveness at the system level.

The system level indicator again ranges from 0 to 1 in value, with higher values representing relatively more ineffectiveness. As for the equipment indicators, formal statistical tests can be applied to calibrate the system indicators. Warning levels can be defined for which there is a 95% confidence that the relative failures and downtimes for the system in the quarter are excessively large as compared to other quarters. Trend tests and level tests can also be applied to identify significant trends or significant changes in the system maintenance ineffectiveness over a given time period. The system indicators will have enhanced statistical powers because of the synthesizing of information.

The system level maintenance indicators can be constructed for different systems in the plant. Synthesized indicators can also be constructed for given equipment classes across different systems; these equipment class indicators measure the relative number of failures and downtimes, and hence the relative maintenance ineffectiveness, for the equipment class per quarter. These equipment class indicators are examples of plant level indicators. An overall plant level indicator can be obtained by combining (averaging) the system level indicators or the equipment class indicators.

A hierarchy of maintenance ineffectiveness indicators can in fact be constructed, from an overall plant level maintenance indicator, to system and equipment class maintenance indicators, to specific equipment maintenance indicators. Maintenance ineffectiveness indicators for specific areas, such as the mechanical, electrical, and instrumentation areas, can also be constructed. For any of these synthesized indicators, the appropriate individual equipment indicators for a quarter are averaged to obtain the synthesized maintenance indicator for that quarter. The averaging of two equipment indicators is illustrated below, which is representative of the basic calculations which are performed.

QUARTER                  1      2      3      4      5

EQUIPMENT 1            3/5    4/5    1/5    1/5    2/5
INDICATOR

EQUIPMENT 2            2/5    1/5    4/5    3/5    1/5
INDICATOR

EQUIPMENT 1 AND 2      5/10   5/10   5/10   4/10   3/10
INDICATOR


6.0 EXAMPLES OF APPLICATIONS TO PLANT 1 DATA

The following figure illustrates applications of the maintenance ineffectiveness indicators described in the previous sections. Other applications of the indicators are described in the report to be issued. The indicators in the figure were constructed from Plant 1 data on equipment failures and downtimes occurring from the first quarter of 1983 to the second quarter of 1988. The figure plots the equipment class maintenance ineffectiveness indicators that were constructed for pumps and valves.

The equipment class maintenance indicators are averages of the individual pump maintenance indicators and the individual valve maintenance indicators over nine systems. Periods of major plant outages are identified at the bottom of the figure. The 95% warning limits in the figures are derived from statistical properties of the indicators and represent upper 95% tolerance values for the indicators. When the indicator crosses the 95% warning limit, there is 95% confidence that the maintenance ineffectiveness has significantly deteriorated from the average performance in terms of the increased number of relative failures and downtimes occurring.

The indicator curves in Figure 1 can be interpreted as follows:

1) Maintenance ineffectiveness on both pumps and valves surpassed the 95% warning limit in the second quarter of 1985 (i.e., 85-2). This warning from the indicators occurred approximately one year before the major shutdown which actually occurred in 1986 and 1987 and which was due to equipment problems.

2) The maintenance ineffectiveness indicators on pumps and valves generally remained above the 95% warning limit from the second quarter of 1985 (85-2) through the beginning of 1987. Thus, there was a sustained warning of maintenance ineffectiveness problems, which were actually identified by NRC and by the plant at the end of 1986.

3) There was a significant increasing time trend identified in the valve maintenance ineffectiveness indicator from the first quarter of 1983 to the second quarter of 1985. This time trend was significant at the 95% confidence level. Valves were later identified as a major problem area by both the NRC and plant personnel. Thus, a significant time trend was signaled at least two years before a major plant shutdown occurred and the actual problems were identified.

4) After the plant started up in the second quarter of 1987 (87-2), both the pump and valve indicators dropped below the 95% warning limit and continued to significantly decrease, indicating that maintenance ineffectiveness significantly decreased. The indicators thus identify that the engineering changes that were instituted by the NRC and the plant during the major shutdown in 1986 and 1987 resulted in significant, measurable maintenance ineffectiveness reductions.


[Two-panel figure. Upper panel: "PLANT 1 PUMPS - NINE SYSTEMS"; lower panel: "PLANT 1 VALVES - NINE SYSTEMS". Each panel plots the quarterly indicator (Avg. F & DT) from 1983 through the second quarter of 1988 against the 95% warning limit; the valve panel is annotated with the significant time trend, and plant shutdown periods are marked along the time axis.]

FIGURE 3. EQUIPMENT CLASS MAINTENANCE INEFFECTIVENESS INDICATORS

APPENDIX 5

DISCUSSION OF VALIDATION RESULTS FOR THREE SELECTED PLANTS

Introduction

This appendix provides detailed validation results for three selected plants. The objective in providing this material is to better inform the reader of the validation process and of the considerations for making decisions on the significance of a statistical result. While only four cases were worth highlighting in the main body of the paper, an additional six are discussed herein and illustrated with plots in order to provide a more complete picture of the analysis.

Within the appendix the following abbreviations are used:

SCRHI - Scrams above 15% of full power per 1000 critical hours
SCRTL - Total unplanned automatic scrams
EFO - Equipment forced outages per 1000 critical hours
CMB - Corrective maintenance backlog
PMTOTM - Ratio of PM to total maintenance
MTSERV - Mean time to return components to service
PMOVER - Percentage of PM items overdue
DBOPMT - Mean time to return BOP components to service

Plant 1

Plant 1 has spent a large amount of time in either scheduled or forced outages in the 3-year period from January 1985 through March 1988. Also, for many of the indicators, data were only recently available for this plant in a form suitable for this analysis for quarters near the end of the period studied. As a result, it was difficult to draw many firm conclusions between the prospective maintenance indicators and other measures of plant performance.

Since 1985, the unit has had two extended shutdowns: from November 1985 into March 1986 for refueling, and from May 1986 to April 1987 due to problems with turbine EHC controls. Also, there was an outage (277 hours) in mid-July 1987 due to a loss of offsite power event (LOOP) originating from a fault on a startup transformer.

The plant-level performance data were reasonably continuous (i.e., non-zero data values for more than a few quarters in the most limiting situation), holding at least some promise of revealing meaningful results for indicator correlations.

Process Indicators

Of the process indicators, only CMB, corrective maintenance backlog, showed any useful correlation with a plant-level indicator. There is a decreasing trend in CMB from 81.7% in the 1st quarter of 1985 to 35.9% in the 1st quarter of 1988. However, within this overall trend there are two periods of fairly local, regular decrease leading up to concentrations of unplanned events. CMB correlated with SCRHI, SCRTL and EFO when leading by 4 quarters.

Figures 1a/1b, 2a/2b, and 3a/3b illustrate the relationship between CMB and SCRHI, SCRTL and EFO, respectively. (In each case the "a" figure shows the two variables when they are coincident in time; the "b" figure shows CMB shifted to the right by 4 quarters while the plant-level variable is held in place.)
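A lead/lag screen of this kind can be sketched as follows: shift the candidate indicator forward by k quarters and correlate it with the plant-level series. The code and the quarterly series below are illustrative stand-ins, not the actual plant data or the study's statistical package:

```python
def lagged_correlation(indicator, plant_level, lead):
    """Pearson correlation between indicator[t] and plant_level[t + lead],
    i.e., the indicator 'leading' the plant-level series by `lead` quarters."""
    x = indicator[:len(indicator) - lead] if lead else indicator
    y = plant_level[lead:]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

# Hypothetical quarterly series (percent backlog and scram counts):
cmb = [80, 70, 55, 40, 60, 75, 50, 35, 30, 45, 65, 55]
scrtl = [1, 0, 0, 2, 3, 2, 1, 0, 1, 2, 1, 0]
for lead in range(0, 6):
    print(lead, round(lagged_correlation(cmb, scrtl, lead), 2))
```

The lead producing the strongest correlation would then be examined against engineering knowledge of the plant, as is done in the text for the 4-quarter case.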

The result for CMB-SCRHI is influenced by an anomalous SCRHI value of 29.3 in 86-1 that resulted from one scram occurring in just 327 critical hours. The SCRTL-CMB pair does not have this problem (the normalization by critical hours is not used). Figure 3a shows that the CMB begins ramping down from a peak four quarters prior to a quarter containing a scram. The EFO-CMB relationship is affected by the same anomaly as SCRHI, since it is also normalized by critical hours and the single scram in 86-1 was also counted as the single equipment forced outage. However, there are more data points for EFO than for SCRHI, and some statistical relationship probably would exist even with an adjustment. Nevertheless, the trend of CMB prior to scrams or equipment forced outages is counter to the expectation that the CM backlog should build at some rate, i.e., show an upturn, some time before a peak that coincides with some significant event or a quarter with a high forced outage rate.

A30

The ratio of PM to total maintenance (PMTOTM) has been increasing since the 1985/1986 refueling outage (from 7.6% in the 1st quarter, 1986 to 43.9% in the 2nd quarter, 1986) and the LOOP (from 54.3% in the 3rd quarter, 1987 to 71.7% in the 4th quarter, 1987). However, no correlation with plant-level data was found.

No correlations with plant-level data were identified using PMOVER, PM items overdue. After dropping from a peak value of 50% in 85-3 to 18.1% in 85-4 (the quarter the refueling outage was entered), the unit showed a worsening trend through the 3rd quarter, 1988. According to the licensee, major emphasis has been placed on timely PMs since the beginning of 1987.

Other Indicators

At Plant 1, an instance of "rework" is determined when the maintenance process is completed, the component or system is turned over to operations, and the required test is failed. By this definition, rework has been increasing throughout 1987/1988, the only period for which these data were available, but appears to be reaching an equilibrium. However, there is no apparent correlation between this and any other measure of performance.

i The ratio of deficiencies discovered during surveillance to those discovered t

by other means, DEFIC, was obtained by the licensee from NPRDS.

The ratio has l

t been relatively stable with two exceptions.

In the 4th quarter, 1986, the ratio increases si.gnificantly which is due to the fact that more surveillances were perfomed. Then, in the 3rd and 4th quarters, 1987, few deficiencies were I

discovered, none through surveillance testing. No correlations were identified.

l l

Finally, the correlation analysis indicated that DBOPMT and MTSERV (highly correlated themselves) led FOR and availability (also critical hours) by 6 quarters. That is, these component-level effectiveness indicators had high values 6 quarters before the availability peaked. The relationships are plotted in Figures 4a/b and 5a/b.

The general patterns of data for MTSERV and DBOPMT are the same, as reflected by the high correlation between them. This is not unexpected, since the equipment covered by DBOPMT is a subset of that covered by MTSERV. DBOPMT and FOR show a rough "sawtooth" pattern, with the first cycle rising from 85-1 and peaking in the late-86 through early-87 time frame. The beginnings of a second cycle are evident. The pattern for DBOPMT appears to be closely related to the state of the plant rather than leading it, which is intuitively reasonable: while the plant is running, the mean time to return BOP equipment to service is shorter than when the plant is shut down, and the mean time is influenced by the length of the outage.

Plant 2

The plant availability and critical hours on a quarterly basis changed dramatically during the time period of the study, primarily because of equipment problems. Following a refueling outage in the second quarter of 1985, the plant was returned to normal operations by the end of 1985. On January 1, 1986, the "A" reactor coolant pump (RCP) shaft failed while the plant was operating at 92% (the plant tripped automatically), and it was necessary to enter a lengthy outage to replace the pump shaft and the shafts of the three other RCPs. After completion of the outage the plant was returned to normal operations by the third quarter of 1986. In the fourth quarter of 1986, outages were necessary because of problems with a leak of primary coolant into the nuclear services cooling system (via a failure in the "A" letdown cooler) and problems with the feedwater system. In the third quarter the plant was taken out of service to replace the 1A RCP shaft seal and to perform maintenance on the other three RCP shaft seals. The plant was returned to full operation during the second quarter of 1987, but RCP shaft seal problems were again experienced, causing delays to operations in the third quarter of 1987. Following a refueling in the fourth quarter of 1987, the unit was returned to normal power operations in the first quarter of 1988 without further RCP shaft seal problems. RCP problems were unquestionably the primary cause of operating problems during the time period of this study.

Process Indicators

One variable, PMOVER or PM items overdue, correlated with SCRTL when lagged one quarter, and with FOR when lagged three quarters. These relationships are illustrated in Figures 6a/b and 7a/b. However, preliminary examination does not indicate any sufficiently regular behavior that would allow use of PMOVER as a leading indicator.

The data for the process indicator Corrective Maintenance Backlog Greater than Three Months Old are available for the third quarter 1985 through the first quarter 1988. A plot of the ratio indicates there has been much variation and perhaps a small decrease in the indicator over time. It was unclear how the data were related to performance of maintenance. This indicator did not appear to be a leading indicator.

No useful trends or correlations involving the ratio of PM to total maintenance were identified. However, it is worth discussing, as it illustrates the difficulty of interpreting indicators that are ratios, especially on a quarterly basis. The PM to total maintenance data show a sharp increase in the indicator in the first quarter of 1986, followed by a sharp decrease in the following quarter. In subsequent quarters, there was little change until another sharp increase occurred in the third quarter of 1987, and the indicator remained at that higher level in the two subsequent quarters. When the data were examined more closely, the reason for the variation in the indicator became more clear. Data for the third and fourth quarters of 1985 and the first quarter of 1986 were about the same, except that the hours of surveillance jumped from zero to 99,810 hours. This jump occurs at the time of the outage to replace the RCP shafts. The indicator returned to a lower value as less surveillance and more corrective maintenance was performed in subsequent quarters. The indicator rapidly increases again in the last two quarters of 1987 and the first quarter of 1988, because of increasing numbers of hours of surveillance and preventive maintenance and a decrease at the same time in the numbers of hours of corrective maintenance. Total hours of maintenance do not show a similar change. In fact, total hours of maintenance drop from about 51,400 hours in the second quarter 1987 (there were normal power operations during this entire period) to about 31,300 hours in the third quarter 1987 (during this period problems were again being experienced with the RCP shaft seals and a refueling was begun).

Other Indicators

Similar to Plant 1, the Plant 2 data showed a correlation between FOR and DBOPMT. The data are shown in Figures 8a/b. However, while for Palisades the indicated lead time was 6 quarters, for Plant 2 it was three quarters. However, the plant operating profile, including refueling outages, is driving the pattern in DBOPMT. The peaks in DBOPMT in 85-2 and 85-3 are related to the 85-2 refueling outage; the peaks in 87-4 and 88-1 are related to the 87-4 refueling outage. The values of DBOPMT in quarters between these peaks do not vary a great deal, but are generally a little higher in quarters with a positive FOR. Thus, DBOPMT is related to FOR in a coincident way, and its pattern does not provide any leading indicator in this case.

CR3 had data on deficiencies found during surveillance for the 14 quarters from the first quarter 1985 through the second quarter 1988. The ratio varied from zero in the first quarter 1986 to a value of 0.60 in the second quarter 1988. The ratio did not appear to be related to performance, nor was it a leading indicator.

Plant 3

The plant entered an extended refueling and maintenance outage in February 1985 (85-1) and returned to power in July 1985 (85-3). Minor problems requiring power reduction were encountered following the outage, i.e., repairs of turbine backup overspeed trip circuitry and leak repair of a feedwater minimum flow drain line. Routine operation continued until a 23-day scheduled maintenance outage in March 1986 (86-1). Minor problems requiring power reduction were encountered in April 1986, i.e., loss of control power to the steam jet air ejector condensate return pumps and a recirculation pump MG set trip.

The plant was removed from service for three days in June 1986 (86-2) for installation of a new auxiliary transformer. During the subsequent startup, the reactor was manually scrammed from low power due to feedwater flow control problems. This forced outage was extended two days due to turbine emergency bearing oil pump problems. Routine operation continued until September 1986 (86-3), when the plant was shut down to make repairs to the recirculation pump lube oil system. During the subsequent startup, steam leaks were detected during inspection of the RHR head spray line; this extended the outage four days (86-4). Other problems encountered in October 1986 included a Technical Specification required shutdown due to inadvertent deluge and wetting of charcoal filter beds in both trains of the SBGT system, and a condenser vacuum problem. In December 1986 (86-4), an 11-day outage was entered due to NRC concerns with equipment qualification of primary containment electrical penetrations. In March 1987 (87-1), a 75-day refueling outage was started. In July 1987 (87-3), an 8-day outage was required to repair the "A" outboard MSIV. During the quarter 87-4, numerous short power reductions were required for repairs to various systems and components, including EHC piping, condensate demineralizers, RCIC valves, SBGT and the MSIV Leakage Control System. In April 1988 (88-2), a short power reduction to repair feedwater pump support equipment occurred.

Plant 3 exhibited only one scram over the period of interest, so any correlation with the scram-related plant-level values holds little interest. EFO is also sparse, with values greater than zero for only two quarters.

Process Indicators

One process variable, CMB, showed a correlation when it led availability and critical hours by five quarters. However, this was only based on five cases, which does not represent a strong case. Corrective maintenance backlog varied from 12.8% to 63.4% but was generally flat around 50%. The number decreases following major outages and tends to increase until the next major outage.


m The PM to total maintenance ratio varied from 14.9% to 99% over the time periods studied. There appears to be some lowering of the. ratio for quarters that include outages, but this effect is confounded by a fairly regular decrease from a peak of 994 in 86-2 to 34.8% in 88-1 that may be the result of changes in definitions for the types of maintenance feeding the indicator.

PMs overdue were zero throughout the period at Plant 3.

Thus, this indicator is not a useful trending indicator at this plant.
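For concreteness, ratio-type process indicators of the kind discussed above (the PM fraction of total maintenance, and the count of overdue PMs) could be tallied from a quarter's work-order records along the following lines. The record format and field names here are assumptions for illustration, not the trial program's actual data definitions.

```python
# Hypothetical tally of two process indicators from one quarter's work orders.
# The 'kind' and 'overdue' field names are illustrative assumptions.

def process_indicators(work_orders):
    """Return (PM-to-total-maintenance ratio in percent, count of overdue PMs)."""
    total = len(work_orders)
    pm_orders = [wo for wo in work_orders if wo["kind"] == "PM"]
    pm_ratio = 100.0 * len(pm_orders) / total if total else 0.0
    pm_overdue = sum(1 for wo in pm_orders if wo["overdue"])
    return pm_ratio, pm_overdue

quarter = [
    {"kind": "PM", "overdue": False},
    {"kind": "PM", "overdue": True},
    {"kind": "CM", "overdue": False},
    {"kind": "CM", "overdue": False},
]
ratio, overdue = process_indicators(quarter)  # ratio = 50.0, overdue = 1
```

Note that an indicator defined this way is only as stable as the PM/CM classification feeding it, which is consistent with the confounding by definition changes observed in the PM ratio above.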

Other Indicators

We did identify a situation in which the two candidate MPIs that measure the mean time to return equipment to service, DBOPMT and MTSERV (highly correlated with one another), peaked five quarters before the quarters for which the EFO value was positive. The data are plotted in Figures 9a/b and 10a/b.

The statistical analysis flagged this case based on the relationship of the peak in DBOPMT in 85-3 to the EFO peak in 86-4, implying that a strong peak in DBOPMT led a peak in EFO by a number of quarters. However, this interpretation is misleading, as the peaks in DBOPMT in 85-3 and again in 87-3 came at the end of a refueling outage, and at least for these peaks we are seeing the same behavior as before - a strong relationship to plant outages. The engineering review of trend plots identified this relationship correctly. What is interesting in this particular case is the regularity with which DBOPMT trends up between refuelings, from roughly 85-4 through 87-1.

Maintenance rework varied from 0% to 5.7% and generally hovered around 3.3%. This indicator trends slightly upward from the end of one refueling outage to the next.

The ratio of deficiencies discovered during surveillance to total deficiencies varied from 44% to 71%.

This indicator was relatively flat around 55%.

Generally this ratio achieves a small peak following major outages. This is evidently due to more conditional surveillances being required, and provides no useful leading information.

[Pages A36 through A55 contain the trend-plot figures for this appendix (quarterly values, 85-1 to 88-1, plotted against Year-Quarter). Only the following captions and legends are legible in the scan:

Figure 1b: Scrams per 1000 Critical Hrs, Plant 1. Legend: Scrams; CM Backlog (4 Qtr. Lag).
Figure 2a: Automatic Scrams, Plant 1. Legend: Total Scrams; CM Backlog.
Figure 3b: Equipment Forced Outages, Plant 1. Legend: EFO; CM Backlog (4 Qtr. Lag).
Figure 6b: Automatic Scrams, Plant 2. Legend: Total Scrams; PM Items Overdue (1 Qtr. Lag).
Figure 7b: Forced Outage Rate, Plant 2. Legend: FOR; PM Items Overdue (3 Qtr. Lag).
Figure 8b: Forced Outage Rate. Legend: FOR; BOP MHRS (3 Qtr. Lag).
Figure 10b: Equipment Forced Outages, Plant 3. Legend: EFO; MTRS (5 Qtr. Lag).

The remaining figure panels are illegible in the scan.]
11 1

i APPENDIX 6 NPROS SCATTER PLOTS J

I e

i l

l i

1 4

l 6

l l

e A57

ODE OUTAGE TIME -- AVAILABlUTY 1985-1987 00 80

+ +

+

44 4

~

+

+

60 p

t--

J ca a

+

+

4 40

~

+

g 20 0

500 1000 -

. ODE MT-(HOURS) e h

e 8

A58 W

ODE OUTAGE TIME -- AVERAGE SALP SCORE 1985-1987 4

i 3

+

b Ci w

O *o

+

+

+

g W

++

4 1

+

+

.y 0

500 000 ODE'MT (HOURS) e g

h O

A59 9

COMPONENT RETURN TIME AVERAGE SALP SCORE 1985 -

1987 4

3

+

Q.

++

J w02

+

++

+

+

.w

+

+

+

1

+

++

I 0

500 1000 1500 NfTRS (HOURS)

~

G A60 A

COMPONENT RETURN TIME AVAILABILITY 1985 -

1987 10 0 80

+

t

+

+

60 p

b I

co

+

h

+

+

40

+

20 f

1 1

j i

I t

t i

i i

500 1000 1500 MTRS (HOURS)

A61

.