ML20198E555

Provides Comments on Dec 1997 Guidance for Reporting Reliability Data to Equipment Performance & Information Exchange (EPIX) Sys & Apps Attached to Earlier Guidance
Person / Time
Issue date: 12/31/1997
From: Baranowsky P
NRC OFFICE FOR ANALYSIS & EVALUATION OF OPERATIONAL DATA (AEOD)
To: McHenry T
INSTITUTE OF NUCLEAR POWER OPERATIONS
References
NUDOCS 9801090147
Download: ML20198E555 (10)


Text

December 31, 1997

Mr. Thomas J. McHenry, Manager
Equipment Performance Department
Institute of Nuclear Power Operations
700 Galleria Parkway
Atlanta, GA 30339-5957

Dear Mr. McHenry:

We have reviewed the December 1997 guidance for reporting reliability data to the Equipment Performance and Information Exchange (EPIX) System that you provided us. Preliminary comments on an earlier draft were provided to you in a telephone conference on November 14.

This letter provides our comments on the December guidance and the appendices attached to the earlier guidance.

In general, the proposed scope and guidance are consistent with the commitment that NEI/INPO/industry made to provide reliability information in response to the proposed reliability data rule. However, we have some suggestions that we think will enhance the quality and usefulness of the data to meet the proposed purposes of supporting Maintenance Rule reviews of industry operating experience and providing reliability data for selected risk-significant components. These suggestions, many of which were discussed during the evaluation of the voluntary approach, are provided in the attachment.

If you would like to discuss these comments further or arrange a meeting on this matter, please call me directly at (301) 415-7493 or Bennett Brady of my staff at (301) 415-6363.

Sincerely,

original signed by

Patrick W. Baranowsky, Chief
Reliability and Risk Analysis Branch
Safety Programs Division
Office for Analysis and Evaluation of Operational Data

Enclosure:

As stated

Distribution:

RRAB RF  DHickman  HVandermolen  AThadani  LSpessard  SPD RF  File Center  JRosenthal  MDrouin  TTMartin  SLong  ERodrick  FTalbot  MCunningham  GHolahan  PDR

DOCUMENT NAME: H:\BMB\LET EPIX.FOUR

* See previous concurrence
To receive a copy of this document, indicate in the box: "C" = copy without attachment/enclosure, "E" = copy with attachment/enclosure, "N" = no copy.

OFFICE: RRAB E | RRAB E | NRR E | NRR E | RRAB | SPD
NAME:   BBrady* | SMays* | SBlack* | JHack* | [illegible] | [illegible]
DATE:   12/17/97 | 12/17/97 | 12/19/97 | 12/17/97 | [illegible] | [illegible]


NRC COMMENTS ON CHAPTERS 1 AND 4 AND APPENDICES OF
"GUIDANCE FOR REPORTING DATA TO EPIX"

1. Test and non-test demands should be categorized so that one can distinguish between those demands that are similar to the demands for components to perform their safety function and those that are not. Combining all demands into two categories (test and non-test) will not provide the delineation of demand data necessary for computing the correct component failure probability for PSA/PRA and risk-informed applications.

NRC has recommended that demands be categorized as shown in pages 2 through 4 of Enclosure A that was provided to industry during our discussions of the voluntary approach. This categorization will allow the estimation of component reliability using the most appropriate set of failures and demands for risk-informed regulatory applications.
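The effect of pooling dissimilar demands can be seen with a simple count. The sketch below is illustrative only; the demand categories and counts are assumptions made for the example, not EPIX fields or actual plant data.

```python
# Illustrative sketch only; the demand categories and counts are assumptions,
# not EPIX fields or actual plant data.
demand_data = {
    "safety_actuation":  {"demands": 12,  "failures": 1},   # unplanned/ESF actuations
    "surveillance_test": {"demands": 48,  "failures": 1},   # tests approximating accident conditions
    "routine_operation": {"demands": 200, "failures": 1},   # mild-duty demands (e.g., filling a tank)
}

# Pooling every demand dilutes the estimate with mild-duty demands:
pooled = (sum(d["failures"] for d in demand_data.values()) /
          sum(d["demands"] for d in demand_data.values()))

# Using only demands similar to the safety function yields the estimate a PRA/PSA needs:
similar = ("safety_actuation", "surveillance_test")
p_fail = (sum(demand_data[k]["failures"] for k in similar) /
          sum(demand_data[k]["demands"] for k in similar))

print(f"pooled: {pooled:.3f}   safety-like demands only: {p_fail:.3f}")
```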

2. Similarly, the "General" guidance on page 14, in the second bullet, states that system operational mode is not considered in counting demands and run hours. Therefore, the data will include hours of routine operations, such as filling a tank, along with hours run in performing a component's risk-significant safety function.

The stresses on a component may not be the same for routine operations as those associated with operation during an accident, off-normal, or simulated accident test conditions. Combining the data for these modes could bias the calculation of reliability of risk-important component failure modes. Operating hour data should be categorized by the nature of the operations (i.e., test, routine operations, or safety demand).
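The same concern applies to run hours; a brief sketch follows, with operating modes, hours, and failure counts assumed only for illustration.

```python
# Illustrative sketch only; the operating modes, hours, and failure counts are assumed.
run_hours = {
    "routine_operation": {"hours": 4000.0, "run_failures": 1},  # e.g., filling a tank
    "accident_duty":     {"hours": 40.0,   "run_failures": 1},  # accident, off-normal, or simulated accident test
}

# Pooling all hours buries the high-stress failure rate under benign routine hours:
pooled_rate = (sum(m["run_failures"] for m in run_hours.values()) /
               sum(m["hours"] for m in run_hours.values()))

# Categorized hours preserve the rate relevant to the risk-significant safety function:
duty = run_hours["accident_duty"]
duty_rate = duty["run_failures"] / duty["hours"]

print(f"pooled: {pooled_rate:.1e} per hour   accident-duty: {duty_rate:.1e} per hour")
```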

3. The guidance for reporting test demands should define what is considered valid maintenance and surveillance test demands. For example, preventive maintenance immediately preceding testing may involve manipulations of the component that would constitute "preconditioning" the component rather than testing in the "as found" condition. We suggest INPO ensure that its guidance for defining valid test demands to be counted in EPIX is consistent with NRC guidance on preconditioning.
4. The " General" guidance on page 141,. the fourth bullet on return to-service tests states "each failure should have only one associated retum to-service test demand." '

This guidance on counting only the one successful post-maintenance test will bias the demand failure probability in the non-conservative direction.

In the example given, a motor winding caused the pump to fail, and after it was repaired the pump again fails due to a pump start breaker when an attempt is made to restart the pump. One key component failure is charged against the pump with one demand and two supporting component failures. This would only be correct if it is presumed that the breaker failure was caused by the first (winding) failure or the maintenance/repair process. Since the second of these two "supporting component" failures would have prevented the pump from starting or running had it been put back in service, it should be counted as a second demand and failure.
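A hypothetical count illustrates the direction of the bias; the demand totals below are assumed for the example and are not taken from the guidance.

```python
# Hypothetical illustration of the non-conservative bias; the totals are assumed.
# Suppose a pump accumulates 20 demands in a reporting period, during which the
# sequence above (winding failure, repair, breaker failure on the
# return-to-service test) occurs once.

failures_guidance = 1     # only the key component failure is charged
demands_guidance = 20     # the return-to-service test is the one associated demand

failures_counted = 2      # the breaker failure would also have failed the pump in service
demands_counted = 21      # the return-to-service test counted as a second demand

p_guidance = failures_guidance / demands_guidance   # 0.050
p_counted = failures_counted / demands_counted      # ~0.095

print(f"per the guidance: {p_guidance:.3f}   counting the second failure: {p_counted:.3f}")
```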


Further, this method is not appropriate for counting demands and failures for Maintenance Rule purposes. If the utility declares the equipment available but not operable per Technical Specifications and if the equipment fails the return-to-service test, the post-maintenance functional failure should be counted. NRC has recommended that return-to-service test demands and failures be reported separately as proposed in Enclosure A to meet the needs for PRA/PSA applications and for Maintenance Rule implementation.

5. Table 1 in the introduction states that the intranet MRRI "supports required reviews of industry operating experience, as outlined in NUMARC 93-01" and "provides reliability data for selected risk-significant key components." However, the document states on page 15 that "there is little value in being able to compare the planned unavailability of a component from one plant with that of another plant."

To meet the requirements of the Maintenance Rule, licensees must track both planned and unplanned unavailability due to corrective, preventive, and predictive maintenance.

Planned unavailability can also lead to unplanned unavailability (i.e., predictive maintenance may discover degraded components that need corrective maintenance).

These data are also needed for estimating equipment reliability (the probability the equipment is available and capable of performing its function) for PRA applications and risk-informed regulation. Planned unavailability can be a significant contributor to total unavailability and can vary significantly from plant to plant, from system to system within a plant, and over time for the same system at the same plant.

As the guidance notes, EPIX allows for optional reporting of planned unavailability.

However, optional reporting of planned unavailability will not provide industry with a database for the evaluations of industry-wide operating experience and for identifying generic preventive or predictive maintenance problems that may lead to large periods of unavailability. NRC recommends that planned unavailability information be reported at an appropriate frequency to the Intranet MRRI module.


6. The component-specific guidance on page 15 identifies check valves as "passive" devices and hence states that neither demands nor run hours are reported. Appendix 1, Component Type Specific Guidance for the Collection of Reliability Data, Table 11: Type of data required for key components, identifies the type of reporting data for check valves as "none." Pump discharge check valves have a demand to open on pump start and a demand to close on pump stop.

Using this guidance, EPIX will not capture reliability information on potentially risk-significant components required for risk-informed management of the performance of these components or for use in PSA/PRA applications. The guidance should address all key components, including check valves in the risk-significant systems for which demand and test data may be available.


7. The guidance in the second bullet on page 15 states that "demands for valves that do not provide a controlling function are based on a full cycle." In the third sub-bullet, one failure to open and two failures to close in 50 strokes are combined into 3 failures to stroke in 50 demands for a reliability of 0.06 failures per demand.


This is not consistent with standard practices for computing reliability. For reliability estimates and for Maintenance Rule purposes, reliability should be estimated using the number of demands and failures of the valve to perform its risk-significant safety function.
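A worked version of the numbers in the guidance example follows; the per-failure-mode breakdown reflects one reading of this comment and is not text from the guidance.

```python
# Numbers quoted from the guidance example; the per-failure-mode breakdown is
# one reading of the NRC comment, shown only for illustration.
full_cycles = 50          # each full-cycle stroke includes one open and one close demand
failures_to_open = 1
failures_to_close = 2

# Combined, as in the guidance example:
combined = (failures_to_open + failures_to_close) / full_cycles   # 0.06 per cycle

# Estimated separately against the demands for each safety function:
p_fail_to_open = failures_to_open / full_cycles                   # 0.02 per open demand
p_fail_to_close = failures_to_close / full_cycles                 # 0.04 per close demand

print(combined, p_fail_to_open, p_fail_to_close)
```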

8. The failure record screen in Figure 21 in the EPIX user's guide allows licensees to enter three times/dates: "Discovery Date," "Date Equipment Unavailable," and "Date Equipment Returned to Availability." The first two dates will capture the previously unknown unavailability between the times of occurrence and discovery, the fault duration time. However, "Equipment Unavailable Date/Time" in Appendix A is defined as "when equipment is removed (used for tracking unavailable hours)." The definition of "Date Equipment Unavailable" should include guidance similar to the guidance for SSPI data for reporting known or estimated fault duration time. The definition of "Date Equipment Returned to Availability" should be clear that this does not include time to process paperwork and other administrative functions. It is the time when the equipment is operable and functionally capable of performing its safety function.
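A sketch of the bookkeeping the three dates support appears below; the timestamps, format, and field handling are assumptions made for illustration, not EPIX conventions.

```python
# Illustrative sketch only; timestamps and format are assumed, not EPIX conventions.
from datetime import datetime

fmt = "%m/%d/%Y %H:%M"
date_equipment_unavailable = datetime.strptime("12/01/1997 03:00", fmt)  # fault occurs / equipment removed
discovery_date             = datetime.strptime("12/03/1997 15:00", fmt)  # failure discovered
returned_to_availability   = datetime.strptime("12/05/1997 09:00", fmt)  # operable and functionally capable,
                                                                         # not when paperwork is closed out

# Previously unknown unavailability between occurrence and discovery (fault duration time):
fault_duration_hours = (discovery_date - date_equipment_unavailable).total_seconds() / 3600

# Total unavailable hours for the event:
unavailable_hours = (returned_to_availability - date_equipment_unavailable).total_seconds() / 3600

print(fault_duration_hours, unavailable_hours)   # 60.0 and 102.0
```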

9. We understand that the EPIX software will ensure that all required fields of reports are completed and will not allow totally invalid input. However, NRC recommends that the accuracy of the data entered in EPIX be monitored along with the completeness of reporting on some periodic basis.


ENCLOSURE A

Additional Information Needed for "Failure Discovery Screen"

It is essential to categorize a reported failure according to the type of demand upon which the failure occurred. If this is not done, the data cannot support estimates of demand reliability in those cases where failure rates are likely to vary substantially with the severity of the demand. (See the attachment for further information.)

Thus, for failure reports, the following information is necessary:

General failures

It is given that the failure discovery screen would already capture the information listed below in plain text. (This information is currently shown on the failure discovery screen.) In general, the additional information shown in italics would be necessary as well.


CHECK ALL THAT APPLY                 KEY COMP.   SUPP. COMP.

- Preventive maintenance               _           _
- Inservice inspection                 _           _
- Inservice surveillance test          _           _
- Non-demand inspection                _           _
- Actual demand                        _           _
- Test demand                          _           _
- Return-to-service test (4)           _           _
- Running at steady state              _           _
- None                                 _           _

(4) Based on discussions with INPO personnel, failures on return-to-service tests will be reported in EPIX the same as in SSPI, that is:

(a) If there is a functional failure (which would be reported) and upon return-to-service test there is another functional failure, the second failure would not be counted and reported as an additional failure.

(b) If there is not a functional failure and the equipment is taken out of service to perform maintenance and upon return-to-service test there is a functional failure, that failure would be counted and reported.



MOV failures

The following additional information would be necessary for MOV failures on test demands:

- Valve stroke test
- Valve flow test (1)

EDG failures

For EDG failures the failure discovery screen would capture the information listed below in plain text.(1) The additional information shown in italics would be necessary as well.

- Start demands (up to rated speed and voltage)
  - Manual start
  - Auto start
- Load run demands (up to 1 hour)
  - Manual load
  - Auto load
- Running hours case (in excess of 1 hour)

"Describe Key Components Screen":

For demands, the additional information listed below in italics is needed to obtain best-estimate (non-conservative) demand reliability estimates. (See the attachment for further information.)

Pump demands

Estimated test demands
- Return-to-service tests
- Other tests, that is, start pump & open valve to pass flow (or close valve to stop flow)

(1) This information is not currently shown on the screen but is discussed in the text.


" i ENCLOSURE A

. i k

i

Counted non-test demands
- Actual & spurious demands
- Other non-test demands (2)

MOV demands

Estimated test demands
- Valve stroke tests
  - Return-to-service tests
  - Other tests
- Valve flow tests
  - Return-to-service tests
  - Other tests

Counted non-test demands
- Actual & spurious demands
- Other non-test demands

EDG demands

Estimated test demands

- Start demands (up to rated speed and voltage)
  - Manual starts
    - Return-to-service tests
    - Other tests
  - Auto starts
    - Return-to-service tests
    - Other tests
- Load run demands (up to 1 hour)
  - Manual load
    - Return-to-service tests
    - Other tests
  - Auto load
    - Return-to-service tests
    - Other tests
- Running hours case (in excess of 1 hour)
  - Return-to-service tests


(2) For example, accumulator filling, water transfer, tank mixing.



Counted non-test demands (3)

- Start demands (up to rated speed and voltage)
  - Manual starts
  - Auto starts
- Load run demands (up to 1 hour)
  - Manual load
  - Auto load

- Running hours case (in excess of 1 hour) (5)

(3) These could be estimated from LERs if all plants considered non-test demands on EDGs to be ESF actuations. However, some plants do not.

(5) It is not intended to imply that 26 boxes are needed for EDGs. For example, on a basis similar to "check all that apply," 7 to 9 would do the job.


Attachment

A significant body of work, supported by the NRC, has been done on several major component types, including Emergency Diesel Generators (EDGs) and valves, especially MOVs. This analysis strongly indicates that there are differences in the estimated reliability of these components depending on the types of demands used to determine their reliability or operability and on their usage. The following summarizes a portion of this analysis with respect to EDGs and MOVs.

EDG Reliability Estimates

There is a difference in estimated reliability, for both failure to start (the period in which the EDG is brought up to rated speed) and failure to load (the action to load the EDG onto the bus and initiate run), depending on whether the EDG is manually or automatically loaded. This difference in reliability would be important in estimating PRA parameters.

MOV Reliability Estimates

There is a difference in estimated MOV reliability between the reliability estimated from stroke tests and the reliability estimated from actual demands and tests where the MOV is operated under flow. This difference in reliability would be important in estimating PRA parameters.

Significance

Combining demands or failures for EDGs or MOVs without preserving the above distinctions can introduce a substantial error in the estimated equipment reliability. For a reliability database to support risk-informed regulation, demand and failure counts should, at a minimum, support estimation of EDG and MOV reliabilities.

EDG and MOV reliabilities estimated for accident sequences that involve EDGs that must load/run, or MOVs that must operate under flow, that do not recognize the above distinctions will not be adequate. Both the NRC and licensees will need the same data for evaluating changes in current regulatory requirements. This data can either be collected on a timely, systematic basis through EPIX, or it can be collected ad hoc for each relevant submittal to the NRC. Either way, it will have to be collected sometime.

The above reliability differences can be significant in terms of their impacts on the relative importance of various accident sequences. The differences usually affect both individual component reliabilities and common cause failure estimates. Individual accident sequences can be substantially affected by these differences. Thus, requests for changes involving potentially affected sequences will have to account for them. The studies and evaluations that show these reliability distinctions have, of course, been forwarded to NRR, and NRC personnel in NRR are well aware of them. Thus, specific justification accounting for the differences may be requested for evaluating specific changes to regulatory requirements.
