ML20134F360

Forwards Response to RAI Re Util Graded Quality Assurance Program Questions
Person / Time
Site: South Texas
Issue date: 10/30/1996
From: Rosen S
HOUSTON LIGHTING & POWER CO.
To:
NRC OFFICE OF INFORMATION RESOURCES MANAGEMENT (IRM)
References
NUDOCS 9611050183
Download: ML20134F360 (31 pages)


Text


The Light Company
South Texas Project Electric Generating Station
P. O. Box 289, Wadsworth, Texas 77483


October 30, 1996
ST-HL-AE-5434
File No.: G02.05
10CFR50.54(a)

U. S. Nuclear Regulatory Commission
Attention: Document Control Desk
Washington, DC 20555-0001

South Texas Project Units 1 and 2
Docket Nos. STN 50-498, STN 50-499
Response to Request for Additional Information Regarding the South Texas Project's Graded Quality Assurance Program Questions

Reference: 1) Letter from W. T. Cottle to U. S. Nuclear Regulatory Commission dated March 28, 1996, "Submittal of Revised Quality Assurance Plan" (ST-HL-AE-5321)

2) Letter from Thomas W. Alexion to William T. Cottle dated August 16, 1996, "Review of Revised Operations Quality Assurance Plan, South Texas Project, Units 1 and 2 (STP) (TAC Nos. M92450 and M92451)"

The Operations Quality Assurance Plan (OQAP), which incorporated the methodology for implementation of the Graded Quality Assurance Program, was submitted on March 28, 1996 (Reference 1). Based on the Nuclear Regulatory Commission's review of the Graded Quality Assurance Program submittal, several questions were raised on the South Texas Project's Probabilistic Safety Assessment and the Operations Quality Assurance Plan as they relate to the Graded Quality Assurance Process. These questions were formally provided to the South Texas Project on August 16, 1996 (Reference 2). Attached is the South Texas Project's response to the Probabilistic Safety Assessment questions. The attached response is provided in the same arrangement and format for each of the questions (shown in italics).

Based on discussions with the Nuclear Regulatory Commission during an August site visit, a decision was made to perform a major reformatting of the Operations Quality Assurance Plan. This reformatting will require the South Texas Project to evaluate the original questions to see which, if any, still apply and answer them appropriately. This was discussed with Mr. Gramm (NRC) on September 17, 1996.

Project Manager on Behalf of the Participants in the South Texas Project


We considered your staff's visit to the site during the week of August 19, 1996, to be very beneficial. We appreciated the opportunity to answer questions regarding the methodology of the South Texas Project's Graded Quality Assurance process. We hope that your staff's observation of the Graded Quality Assurance Program Expert Panel and Working Group meetings provided your staff with a better picture of the South Texas Project's Graded Quality Assurance Program. We hope that continued dialogue between the South Texas Project and the Nuclear Regulatory Commission allows for additional opportunities for information exchange on the Graded Quality Assurance process.

If there are any questions regarding the Graded Quality Assurance Program Probabilistic Safety Assessment responses, please call Mr. C. R. Grantom at (512) 972-7372. If there are any questions regarding the Graded Quality Assurance Program Operations Quality Assurance Plan reformatting, please contact Mr. R. J. Rehkugler at (512) 972-7922.

L. Rosen
Manager, Risk Management & Industry Relations

JMP/

Attachment:

1) Response to NRC Questions on Graded Quality Assurance Program /PSA
2) Figure 1, GQA Screening Process Flowchart


cc:

Leonard J. Callan, Regional Administrator, Region IV
U. S. Nuclear Regulatory Commission
611 Ryan Plaza Drive, Suite 400
Arlington, TX 76011-8064

Rufus S. Scott, Associate General Counsel
Houston Lighting & Power Company
P. O. Box 61067
Houston, TX 77208

Thomas W. Alexion, Project Manager, Mail Code 13H3
U. S. Nuclear Regulatory Commission
Washington, DC 20555-0001

Institute of Nuclear Power Operations - Records Center
700 Galleria Parkway
Atlanta, GA 30339-5957

David P. Loveless, Sr. Resident Inspector
c/o U. S. Nuclear Regulatory Commission
P. O. Box 910
Bay City, TX 77404-0910

Dr. Bertram Wolfe
15453 Via Vaquero
Monte Sereno, CA 95030

Richard A. Ratliff, Bureau of Radiation Control
Texas Department of Health
1100 West 49th Street
Austin, TX 78756-3189

J. R. Newman, Esquire
Morgan, Lewis & Bockius
1800 M Street, N.W.
Washington, DC 20036-5869

J. R. Egan, Esquire
Egan & Associates, P.C.
2300 N Street, N.W.
Washington, D.C. 20037

M. T. Hardt/W. C. Gunst
City Public Service
P. O. Box 1771
San Antonio, TX 78296

J. C. Lanier/M. B. Lee
City of Austin, Electric Utility Department
721 Barton Springs Road
Austin, TX 78704

U. S. Nuclear Regulatory Commission
Attention: Document Control Desk
Washington, D.C. 20555-0001

Central Power and Light Company
ATTN: G. E. Vaughn/C. A. Johnson
P. O. Box 289, Mail Code: N5012
Wadsworth, TX 77483

J. W. Beck
Little Harbor Consultants, Inc.
44 Nichols Road
Cohassett, MA 02025-1166


Response to NRC Questions on Graded Quality Assurance Program/PSA

GENERAL

Question G-1):

In 1991, the NRC staff completed an in-depth review of the STP probabilistic risk assessment (PRA), and found the level of detail of the models "quite high and consistent with current state-of-art." A subsequent update of the PRA included a variety of core damage frequency (CDF) estimates for various assumptions regarding the rolling maintenance schedule and combinations of modified Technical Specification Allowed Outage Times and Surveillance Test Intervals. The Revised Quality Assurance (QA) Plan submittal includes the elements of the PRA model that will be controlled under the STP QA program and discusses the process of updating and changing the model, but does not appear to address any issues which might be specific to the use of the PRA to support classification of SSCs for Graded QA. By what process is the model considered to have been validated for purposes of application to GQA?

Response

The process by which the STP PSA is validated for purposes of Graded QA, or other PSA applications, is the PSA configuration control process as described in the PSA program procedure. The process to ensure that the PSA is valid for any particular application is similar for Graded QA as well as other PSA applications. The PSA model must be technically acceptable, and the proposed application must have identifiable impacts to the PSA which can be measured by accepted figures-of-merit. The validation process of a PSA application has two parts:

- First, it is an evaluation that the risk model accurately establishes the frequency of events leading to undesirable consequences (i.e., figures-of-merit). In this regard, validation can be thought of as the process which ensures that the PSA risk models properly reflect and characterize accident/transient progression, incorporating human and equipment performance factors relative to the undesirable outcomes (i.e., core damage). Once the PSA is determined to provide acceptable risk estimates, then the important contributors to risk can be identified.

- Second, the impacts of the PSA application must be capable of being measured by the PSA. In other words, changes in parameters associated with a PSA application can be used as an input to the PSA and result in changes to key figures-of-merit (e.g., core damage frequency) commensurate with their importance.

For the first point, the activities associated with ensuring PSA technical accuracy occur at the time of the initial PSA quantification through technical reviews, and then subsequently recur as a part of the configuration control process for the living PSA program. In that regard, validation is an inherent part of a living PSA program and ensures that PSA risk models are current (i.e., date and time stamped). This is a feature of both the initial PSA quantifications and ongoing PSA updates.

For the second point, the PSA does contain parameters that are associated with the elements of Graded QA which can be measured (e.g., equipment failure rates, equipment failure modes, common cause failures) and trended. The PSA can reflect these changes at a component level, a system level, or a plant level. This is due to both the integrated nature of PSA methodology and the structure of PSA relative to incorporating important plant/system dependencies and human actions. Therefore, the PSA can identify important feedback and corrective actions for adjusting levels of quality based on performance (at the component, system, or plant levels) in the event adverse trends occur and is, thus, valid for the application of Graded QA.

STP's PSA has undergone continuous improvement to refine the analysis both in terms of modeling scope and equipment performance. Key milestones in the STP PSA program history highlight significant efforts that have contributed to continuous improvement and further demonstrate the validity of the STP PSA as a technical basis for analysis of the STP units for Graded QA and other applications. Some of these milestones which illustrate the history of continuous improvement in STP's PSA are as follows:

- Both in-house and outside peer reviews were performed, with all comments resolved, during the initial quantification and documented to determine if any important contributors or factors relative to severe accidents were overlooked by the PSA;

- NRC SERs were documented for both internal and external events PSAs for STP, approving it for use as a basis for licensing submittals;

- Detailed system and plant level analyses were performed and have been re-performed as part of STP's PSA configuration control process to identify important equipment failure modes and any changes in plant design or performance that could impact identified equipment failure modes;

- Important enhancements to PSA modeling technology have been incorporated into the STP PSA to facilitate the evaluations of configuration risk;

- Plant specific performance data for use in updating equipment failure rates were obtained. This data was used to determine the impact of plant specific operating and maintenance practices;

- Periodic collection of plant specific performance data is also a key element of STP's PSA configuration control process;

- Recently, program procedures were developed to implement Appendix B features to establish configuration control of the PSA models.

The PSA has undergone a continuing process of improvement which has further substantiated it as the most appropriate and technically proper tool to evaluate the integrated effect of programmatic changes (e.g., Graded QA) on station performance. By its scope, the PSA is both a plant specific design model and a performance model. In that regard, the STP PSA is the most valid mechanism for evaluating changes in plant design, programs, and performance relative to risk.


Question G-2):

Attachments 2-5 of the submittal are described as STP's "process to identify the appropriate safety significance of structures, systems, and components" (SSCs). Much of the procedures appear PRA generic in nature, and Addenda addressing the MOV program and LLRT are planned, yet references specific to Graded QA are scattered throughout the text.

a) Is this process to be developed into a set of procedures governing use of PRA in general, with specific Addenda for each risk informed application (e.g., IST, ISI, etc.)?

Response

Yes, specific Addenda will be developed for applications of the Comprehensive Risk Management program. These addenda may reference other documents or processes necessary to ensure proper treatment of affected SSCs.

Question G-2)(Continued):

b) Are there plans to re-rank SSCs for different applications (e.g., is an equivalent to GQA Screening Figure 1 planned for each application) or is a master SSC safety significant list planned?

Response

The process for treatment of the rankings or groupings for safety significance begins from the base case or "master" PSA model. Each application will require a risk ranking treatment which is specific to the application. PSA applications can vary widely with respect to affected equipment and plant processes. Therefore, it is important to structure risk rankings or groupings to account for specific attributes which will be impacted. In other words, the risk ranking treatment is application specific and will generally be performed as follows:

- The proposed changes that will result from a given application will be evaluated to determine how the base case PSA will be impacted.

- Once the impacted PSA parameters are identified, appropriate analyses will be performed to generate an "application-specific safety significance" ranking. The impact against the base case will be quantified based on standard importance measures (e.g., FV, RAW).

- An "application-specific safety significance" ranking will be the initial PSA input to a Working Group for any given application, provided the figures-of-merit (e.g., core damage frequency) are the same.

- The initial "application-specific" risk ranking input will be used by the Working Group(s), which will then apply "application-specific" deterministic parameters/insights to yield a set of recommendations.

- Those recommendations that result in quantifiable changes will be factored back into the base case PSA and requantified as a sensitivity study.

- The results of the base case requantification (i.e., sensitivity study) and any changes to equipment safety significance will be reassessed for the application under consideration. Also, any other changes resulting from those applications that have been previously implemented will be incorporated to ensure adequate consideration of cumulative effects.

- A final set of recommendations will be developed by the Working Group for submittal to the Expert Panel. A determination will be made by the Expert Panel relative to implementation of recommended changes.

- Implemented changes will be incorporated into the PSA as appropriate, and a revised base case and "application-specific" risk ranking would be generated.

Question G-2)(Continued):

c) Is there a linkage of the categorization for GQA to the maintenance rule ranking and implementation?

Response

Yes, there is a linkage for the categorization of GQA and Maintenance Rule rankings. In general, the risk rankings for both applications should be similar because the PSA is used for both, and the equipment functions performed for the Maintenance Rule were also used for the Graded QA application. Also, some of the deterministic Graded QA screening questions were structured based on the Maintenance Rule.

It should be noted that although there are several points of tangency between Graded QA and the Maintenance Rule (e.g., SSC functions, equipment failure modes), there are also some differences. For example, the Maintenance Rule and Graded QA are very similar in that both are performance based programs. They are different in that Graded QA incorporates elements of organizational performance that are not a required element of the Maintenance Rule (i.e., plant organizational effectiveness versus maintenance effectiveness).

Question G-3):

The staff understands that the working group will provide the expert panel with a preliminary categorization of all SSCs to be considered. This list will include SSCs modeled (PSA SSCs) and not modeled or not adequately represented in the PSA (non-PSA SSCs). The categorization of an SSC will determine its safety significance and the level of QA controls that will be applied to the SSC. We believe that a well described, systematic, and traceable process whereby SSCs are evaluated and assigned safety significance is a cornerstone in the graded QA process.

The staff finds the current description of the grading of non-PSA SSCs to be unclear. Recognizing that many SSCs whose safety significance will be determined during the process are not in the PRA models,

a) Please provide a discussion on how non-PSA SSCs whose QA may be changed under the graded QA process will be identified, evaluated, and ranked by the working group.

Response

The typical process of evaluating non-PSA SSCs for GQA is described as follows:

At the beginning of a plant system evaluation, the SSCs associated with that system are identified regardless of whether they are PSA-modeled or not. The identification process consists of querying our Master Equipment Database for all components which are associated with the particular plant system. These components are then evaluated and risk ranked in accordance with the process described in our Comprehensive Risk Management Procedure, specifically the flow chart shown as Addendum 3 to the procedure. All of these components are evaluated deterministically and the results of these evaluations are: 1) used to determine the risk ranking for non-PSA-modeled components or 2) blended with the PSA data to determine an overall ranking for PSA-modeled components.
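To make the query-and-blend step concrete, here is a minimal sketch in Python. All names (query_master_equipment_db, deterministic_rank, blend) and the rule that the more limiting rank governs are illustrative assumptions, not STP's actual Master Equipment Database tools or blending criteria.

```python
# Hypothetical sketch of the screening step described above; names and the
# blending rule are illustrative assumptions, not STP's actual process.

def query_master_equipment_db(system_id, medb):
    """Return all components associated with a given plant system."""
    return [c for c in medb if c["system"] == system_id]

def deterministic_rank(component):
    """Stand-in for the Addendum 3 deterministic screening questions."""
    return "HIGH" if component.get("critical_function") else "LOW"

def blend(psa_rank, det_rank):
    """Blend PSA and deterministic ranks; assume the higher rank governs."""
    order = {"LOW": 0, "MEDIUM": 1, "HIGH": 2}
    return max(psa_rank, det_rank, key=order.__getitem__)

def rank_system(system_id, medb):
    rankings = {}
    for comp in query_master_equipment_db(system_id, medb):
        det = deterministic_rank(comp)
        psa = comp.get("psa_rank")  # None for non-PSA-modeled SSCs
        rankings[comp["id"]] = det if psa is None else blend(psa, det)
    return rankings

medb = [
    {"id": "MOV-0001", "system": "ECW", "critical_function": True, "psa_rank": "MEDIUM"},
    {"id": "HV-0002", "system": "ECW", "critical_function": False},  # not in PSA
]
print(rank_system("ECW", medb))  # {'MOV-0001': 'HIGH', 'HV-0002': 'LOW'}
```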

Question G-3)(Continued):

Regarding both PSA and non-PSA SSCs, page 3/17 in Attachment 2 indicates that the expert panel will take input from the Graded QA working group and review and approve the categorization of SSCs and the assignment of QA measures to the SSCs.

b) Please describe the information the working group will submit to the expert panel.


Response

The Working Group will generally submit the following information to the Expert Panel, depending on the application:

1) List of components in the scope of the evaluation

2) Data sources used to extract information relevant to the subject system, including design basis documents, operating experience review, licensing basis, etc., and the evaluation of this information for input to the risk ranking of the components

3) System functions, including a determination as to whether each function is considered critical to the system's mission

4) For each component, the identification of system functions that component supports

5) For each component, a recommended risk ranking and Graded QA level; the following information is also provided as the basis for this recommendation:

   a) PSA data, where modeled
   b) Answers to the critical questions shown in the Comprehensive Risk Management flow chart
   c) Additional deterministic input
   d) Critical attributes of "Targeted" components

6) Recommended methods to implement the Graded QA levels on the subject system

7) PSA assumptions and limitations

8) Any dissenting opinions

Question G-3)(Continued):

c) Please describe the expert panel's guidance on how to evaluate the deterministic and probabilistic ranking provided by the working group.

Response

As described in paragraph 6.8 of the Comprehensive Risk Management procedure, the Expert Panel will utilize the same criteria identified in the addenda to the procedure in order to evaluate the Working Group recommendations. The Expert Panel shall also inject their own deterministic insight, as appropriate.

Also, the Expert Panel's role in approving Working Group recommendations for implementation includes other insights related to organizational functions relative to Graded QA.

Question G-3)(Continued):

d) Without results of the analyses, the traceability and reasonableness of the safety significant categorization process cannot be fully evaluated. Please provide detailed results to illustrate the process.

Response

The entire evaluation process for the subject system shall be documented in a Graded QA Basis Document, which shall be a controlled "living" document. The Basis Document shall include the Working Group recommendations, including supporting bases as shown in (b) above, and the Expert Panel evaluation process, including deterministic insight, resolution of any dissenting opinions, evaluation of Working Group recommendations, and final decisions. This document is intended to be typical of the process to be used for all systems evaluated under the STP Graded QA process. It also represents the basis for the recommendations relative to Graded QA and provides the traceability and justifications for the safety significance categorizations.

Question G-4):

How is the safety significance of passive SSCs such as pipe segments, supports, bolts, and other items whose failures could lead to a pressure boundary failure determined? Pressure boundary failures are not systematically represented in the PRA models and tend to require explicit evaluation to determine potential impacts.

Response

The safety significance of pipe segments, supports, bolts, or other pressure boundary devices is not included in the PSA models. Some considerations are given to passive devices in the development of some PSA assumptions for determining system boundaries to facilitate PSA analysis. For passive SSCs not evaluated deterministically through the Graded QA process, the safety significance will conservatively remain in the current classification.

ATTACHMENT 2
COMPREHENSIVE RISK MANAGEMENT PROCEDURE

Question 2-1):

Page 3/17 section 4.1 states that the expert panel will approve the criteria for categorization of, and for assignment of QA measures to, SSCs. Are these criteria to be approved by the expert panel in the current draft submittal and, if so, which criteria are they? If not, when will the criteria be available?

Response

As stated in paragraphs 6.4 and 6.8, the criteria to be used for categorization of, and for assignment of QA measures to, SSCs are contained in the addenda to the procedure.

Question 2-2):

In the program levels and descriptions (pages 12 to 14), "items and activities" are categorized. Activities are normally categorized according to the category of the item (SSC) to which the activity is directed. Can activities themselves be assigned a safety significance category irrespective of the item they cover? Please clarify the categorization of activities.


Response


Yes, activities can be assigned a safety significance category irrespective of the items they cover. However, for the South Texas Project's GQA program, activities are normally categorized according to the category of the item to which the activity is directed. It is possible, for example, that the function of the activity itself can be categorized, particularly if an activity does not address the failure mode(s) of a large population of SSCs or if the activity is associated with an SSC failure mode which is an insignificant contributor to the overall failure of the SSC. It is known from recent analyses that required activities do exist which provide little, if any, additional assurance that components will perform their intended functions, or even that the failure modes which the activity is intended to detect are valid. Examples of this can be seen in elements of Appendix J requirements, where criteria for acceptable leakage rates do not account for the component's role or contribution relative to risk (i.e., CDF, LERF). The functions of activities should be evaluated as a part of the Graded QA process to determine if their function and intent provide a tangible safety benefit for both the Owner and the public.

Question 2-3):

The lower left corner of Figure 1 GQA Screening on page 1 of 1 in Addendum 3 is unclear. Are the bottom set of questions (starting with "Could directly cause or has caused an initiating event") to be applied to "Active System Components" which are not modeled in the PSA, to components which are modeled in the PSA but which have no ranking (e.g., due to truncation), or both?

Response

The questions at the bottom of the GQA Screening flow chart are to be answered for all components, whether they are modeled or not. This has been clarified in Revision 1 (Draft) to the procedure.

Question 2-4):

Figure 1 also indicates that a "high" ranked non-safety-related component will only be placed in the Targeted category, while a safety-related component ranked "high" will be placed in the Full category. This is inconsistent with page 8/17, which states that "Full program controls are applied to items and activities determined to have high risk significance." There may well be merit in applying controls functionally equivalent to full program attributes to high ranked non-safety-related SSCs. Please explain the inconsistency in the text.

Response

The GQA Screening flow chart is correct in showing that a high-risk non-safety-related component would be placed in the Targeted category. The degree of additional controls applied to such a component would be commensurate with its risk significance and would be structured to address the component's key critical characteristics relative to their importance in preventing risk significant failure modes. These additional controls would be greater than those typically applied to a non-safety-related component.

Safety-related components will be grouped into either Full or Basic QA programs. High safety significant components will be placed in the Full QA Program, and medium/low safety significant components will initially be placed in the Basic QA Program. All components placed in the Basic program are subject to review by the Working Group and Expert Panel. Additional augmentation, as recommended by the Working Group and approved by the Expert Panel, will be applied for components which require specific key attributes to be optimized. In this regard, Basic retains a similar flexibility for safety-related components as that achieved in the Targeted programs for non-safety-related components.

A revised flow chart, attached as Figure 1, has been provided to illustrate the GQA screening process. It is not STP's intention to change non-safety-related items or activities to safety-related. However, the Graded QA process will 'target' the critical functions of items and activities associated with each.
p ST-HL-AE-5434 l

Page 10 of 27 ATTACIIMENT 3 PROBABILISTIC SAFETY ASSESSMENT RISK RANKING PROCEDURE Question 3-1):

i The definitions on page 2/6 of Fussell-Vesely (FV) and Risk Reduction Worth (RRW) are inconsistent with standard definitions. The FV measure is simply the fraction of CDF in which failure of the component (with its nominalfailure probability) contributes. A measure based on the difference between the CDF with the componentfailed and the CDF with the component successful (similar to STP's 2.3 definition) is related to a Birnbaum measure. Also, the standard RRW is the inverse of that defined in section 2.5. Please reconsider the definitions or the names ofthe measuresyou intend to use.

Response

The South Texas Project PSA uses RISKMAN@ software in quantifying the PSA figures-of-merit (i.e.,

CDF, LERF). Since RISKMAN@ is controlled by the vendor's (PLG,Inc.) Appendix B program, it has undergone a process of continuous improvement through the efforts of the RISKMAN@ Technology Group. The improvements which have incorporated into RISKMAN@ has extended the traditionally referred to "small fault tree - large event tree" PSA approach into a methodology with much fewer limitations relative to risk model sizes. Thus, the "small fault tree - large event tree" approach has 1

become the "large fault tree - large event tree" approach. RISKMAN@ quantifies all event tree paths above the user-defined truncation cutoff frequency, some of which result in plant damage states (i.e.,

1 core damage) and many others that result in success (i.e., no core damage occurs). As a result of the way the event tree quantification, as described above, is generated by RISKMAN@, it is necessary to adjust the numerical methods used to calculate the standard risk importance measures, as discussea below.

The standard output from a RISKMAN@ quantification is a database of sequences. These sequences contain all pertinent information (i.e., all success and failure data) for all linked event tree paths up to the user defined truncation limit. For those event tree paths where the sequence quantification is less than j

the truncation limit the pertinent information is not saved; however, the individual sequence frequency for each truncated sequence is saved and subsequently summed and stored as the " unaccounted for l

value"(this is unique to RISKMAN@). When one seeks to ask what the contribution of a given element has to a selected figure-of-merit, RISKMAi T@ searches the sequence database for all occurrences where the element is, for example, failed (failure can be both independent or dependent). For a proper mathematical treatment, this would include ccre damage sequences and those sequences where the element is not applicable. It is this type treatment that results in different mathematical derivations for quantifying the standard risk importance measures.

A white paper is being prepared by the vendor to show the equivalency of the RISKMAN@ approach and the standard methods used by cut set codes. The results of the white paper show that:

I For the Risk Reduction Worth (RRW), the RISKMAN@ software does calculate the inverse from the o

]

standard definition of the RRW as defined in the EPRI PSA Applications Guide.

The Fussell-Vesely Importance as calculated by RISKMAN@ is equivalent to the defm' ition as e

stated in the EPRI PSA Applications Guide, i.e.,1-1/RRW.

I

  • O a

ST-HL-AE-5434 Page11of27 Question 3-2):

Section 5.7 states that the risk ranking ofSSCs shall be (re-) generated and re-evaluated on a periodic basis.

a)

Will the expert panel be authorized to change the safety sigmficance category and thus the QA requirements on any SSCs based on this new ranking?

Response

i j

Yes, the Working Group and/or Expert Panel will be able to change the significance and thus the QA requirements based on the periodic update of the risk ranking. This is an important aspect to the feedback loop of the grading process. This is documented in Figure 1 of Addendum I in the Comprehensive Risk Management procedure, OPOP02-ZA-0003.

f Question 3-2)(Continued):

b)

Will a new ranking which indicates changes might be calledfor include all the sensitivity and verification studies accompanying the initial ranking?

\\

Response

Yes, this is outlined in the PSA Risk Ranking Procedure, OPGP01-ZA-0304.

Question 3-2)(Continued):

c)

How will it be decided which changes require prior approval by the NRC?

I

Response

STP will use existing licensing processes, such as the 50.59 process, to determine when changes will require prior NRC approval. Also, licensing representation on the Graded QA Working Group and the Expert Panel is maintained to ensure proper considerations relative to the station's license are conserved.

1 l

l

~o ST-HL-AE-5434 Page 12 of 27 Question 3-3):

Does " Quantify all risk models " in the Risk Rankingprocess in page 4/6 mean to re-quantify thefull set oflevel 2 logic models or to re-quantify the cut sets? Please provide a description of the models and processes behind the quantification ofall risk models.

1

Response

l i

There is no difference between quantification of the level 1 and level 2 models, except that for level 2 l

the containment event tree with release category binning rules is linked to the level 1 trees. In general, the quantification process for all the STP PSA models is as follows Database variables describing equipment failures, human factors and unavailability periods are created or modified.

l System fault trees are created and minimal cut sets are determined. Maintenance alignments are l

designated as applicable. The system models are quantified, determining the values of top event split fractions.

l Using the top event split fractions, the event tree model is quantified by evaluating the complete i

j list ofinitiators through the associated event trees.

The extent of requantification depends on the sensitivity study to be performed and where the information to be studied is introduced into the calculation. In all cases the event tree model has to be requantified. For instance:

To vary failure rates of equipment, the whole process outlined above would need to be followed. New database variables would be constructed and all those systems containing that equipment would need to be requantified as well as the event tree model.

The removal of common cause or maintenance alignments can be done starting at the system level. This requires regeneration of cut sets and quantification of split fractions and the event tree model.

Some sensitivities can be studied by setting particular split fractions to guaranteed failure or success, which may not involve any requantification at the system level, but only at the event tree level, i

_~

~

ST-HL-AE-5434 Page 13 of 27 Question 3-4):

l It appears that STP is performing calculations that directly rank systems (e.g., top events and split i

fractions, page 4/6) as entitles. That is, importance-type calculations are performed at a level higher than basic events in the model. Please clarify thefollowingpoints.

l a)

What measures are definedfor such entities, and what are their definitions? In practice, i

how are they actually calculated? Is it necessary to requantify the model a large number oftimes?

Response

Addendum 2 of the Probabilistic Safety Assessment Risk Ranking procedure, OPGP01-ZA-0304, defines the thresholds and ranking criteria for other levels of analysis (i.e., system, top event, split fractions, operator actions, etc.). The RISKMAN* software utilized by STP does not require requantification of the model for the different levels of analysis. Except, of course for the l

sensitivity studies, each of which require a whole quantification.

l l

The following are the standard definitions for the risk indices used in the RISKMAN* software:

Fussell-Vesely (FV) measures the fraction of the overall risk involving sequences in which e

l the component is postulated to fail. The equation for the FV is:

L R(1)- R(0) yy =

, gy, = ; _ ppy R(F M) where:

R(F-O-M) represents the total of all sequences for the figure-of-merit, i.e., core damage frequency; R(1) represents sequences with the element ofinterest guaranteed failed (i.e., value equals one);

R(0) represents sequences with the element of interest set to success (i.e., value equals zero);

SFI represents the split fraction value for the element ofinterest; RRW represents the Risk Reduction Worth or R(0)/R(SFI).

s l

e

\\'D ST-HL-AE-5434 Page 14 of 27 Risk Achievement Worth (RAW) is the increase in risk if the component is assumed to be failed at all times. The equation for RAW is:

R(1) y g, _

R(F M) where:

R(F-0-M) represents the total of all sequences for the figure-of-merit, i.e., core damage frequency; R(1) represents sequences with the element ofinterest guaranteed failed (i.e., value equals one).

Question 3-4)(Continued):

b)

Ifrisk achievement t)pe measures are calculatedfor such entities: Does such a measure for afront-line system reflect consideration of the importance ofthe underlying support systems, or is that captured by the measures calculatedfor those systems?

Response

l The STP PSA is an integrated plant specific model that incorporates all the necessary support systems in the overall analysis. So, the answer is yes in regards to front-line system risk indices reflecting the importance of the underlying support systems.

Question 3-4)(Continued):

l i

It appears that once a system is deemed unimportant, no element ofit is a candidate to be considered important. This makes intuitive sense unless a component in a system performing an unimportantfunction can have an adverse efect on anotherfunction. An example of such a i

situation might be an unimportant train ofonefluid system adversely afecting the suction source that it shares with an important fimetion. From the point of view oflogic modeling, this situation would not create aproblemfor the proposed approach, provided that thefailure mode is explicitly modeled under the logic ofthe importantfunction in such a way that, when system importance measures are calculated, the failure mode can show up. Reasoning in this way wouldplace a heasy burden on the details of theformulation of the model. What guidance can be given to an expertpanel to assist it in identifying situations such as this?

l

- =

ST-HL-AE-5434 l

Page 15 of 27 i

Response

First, the risk ranking procedure, OPGP01-ZA-0304, has been rewritten and does not include the l

process of deeming a system ' unimportant' and thereby deeming all components in the system as

' unimportant.'

Second, the guidance provided to the Expert Panel is outlined in the Comprehensive Risk Management Procedure, OPGP02-ZA-0003.

This guidance includes requiring a minimum quorum containing the Supervising Engineer of the Risk and Reliability Analysis section. This section maintains the STP PSA. Also, a list of PSA inputs are considered by the Working Group and the Expert Panel. These inputs include, PSA model assumptions, l

common cause/ mode failure rates, treatment of support systems, level of definition of cut sets l

and cut set truncation, model assumptions relative to repair and restoration of failed equipment, l

human error rates, and definition /use ofimportance measures.

l Question 3-5):

Page 4/6 includes a variety ofsensitivity ranking calculations. Page 6/6 provides a set ofcriteriafor ranking SSCs based on threshold values.

a)

Will the same threshold values be applied to all the sensitivity results?

Response

j Yes, the same threshold values will be used for all the sensitivity studies.

Question 3-5)(Continued):

i

)

b)

Please explain how the diferent sensitivity results are combined, for example does a "high" importancefor any of the sensitivity studies at either the CDF and LERF levels l

yields afinal "high " working group ranking?

1

Response

It is STP's intention to use tre baseline ranking as the overall PSA ranking. The intention of the sensitivity studies is to support the baseline ranking by providing the impact of various studies.

I There is no formula for calcult. ting a new ranking inferred from the sensitivity studies, it is up to the opinion of the Working Group or Expert Panel to change the ranking based on the. sensitivity studies, if appropriate.

t d

1 e

l o

ST-HL-AE-5434 Page 16 of 27 l

Question 3-6):

There are a number ofissues requiring attention during the categorization ofthe safety sigmficance of SSCs based on numerical importance measures. These issues are particularly sigmficant when a comprehensive and quantitative uncertainty analysis is not part of the evaluation process. The major concerns provided below emphasize the evaluation of the CDF risk metric. Further issues may arise when the calculations are expanded to other risk metrics such as early andlate releasefrequencies and others as appropriate.

a)

Truncation limits: Inappropriate truncation limits can lead to incorrectly low or even missing RAW values, on of the principal measure of safety sigmficance. Results of various studies show that, for a CDF of approximately 1E-5 per year, a truncation limit in the range of JE-11 to 1E-13 is requiredfor stability in the ranking results. Please identify the truncation limits used during the sensitivity study quantification.

Response

STP understands the significance of truncation limits set at inappropriately high levels and ensures that the truncation limits for the sensitivity studies are the same as that used for the overall plant quantification. For the STP PSA, truncation limits are set at both the fault tree (i.e.,

system level) and event tree (i.e., plant level) levels. At the fault tree level, the user defined threshold is referred to as the "cutset truncation." At the plant level, the user defined threshold is referred to as the " sequence truncation." Both cutset truncation and sequence truncation are user defined software parameters. User defined truncation thresholds are used for complex systems to facilitate the analysis relating to computer software limitations and run times.

The "cutset truncation" can be thought of as the means of capturing enough cutsets from the fault tree to adequately describe the system for analysis purposes. The cutset truncation level is dependent upon the complexity of the system being analyzed. For simple fault tree analysis, the cutset truncation does not require a truncation level to be established, i.e., all cutsets for the fault tree are quantified and saved in the system analysis database. For large fault tree analysis with a cutset truncation limit set at zero, a portion of the captured cutset information will not significantly contribute to the overall failure probability of the system (i.e., large numbers of cutsets each with extremely low contributions). Therefore, a cutset truncation is desired for computer limitations like hard drive space and run time. The approach in determining a l

truncation limit is to set the limit at least 8 orders of magnitude under the 'all support available' l

value for the system level fault tree. The 'all support available' case represents the system failure probability given all support required by the system is available. In all cases, this results in a cutset truncation limit which is less than or equal to IE-II. Note, all system level truncation levels are less than IE-11 and only one systems analysis is equal to 1 E-11.

l l

l

= <, *

  • i l

ST-HL-AE-5434 l

l Page 17 of 27

~i STP " sequence truncation" limit will be set at IE-10. The sequence truncation limit represents i

the frequency at which individual accident sequences at the plant level are saved to the sequence database. The sequence database is used for computing the risk indices (e.g., FV, RAW). It i

should be noted that the sequence truncation limits for STP's On-Line Maintenance Program is set at 10. This truncation level is adequate for establishing the risk significance of plant I

configurations while still allowing for a manageable quantification time to appropriately facilitate the program.

Question 3-6)(Continued):

b)

Detailed (component level) models for initiating events caused by the loss of support i

systems can be important when ranking at the component level. Please provide a l

discussion on how SSCfailures which might contribute to initiating events but which are

)

l not included in the PM are identified and categorized.

l

Response

L There are several different categories of initiating events modeled in the STP PSA. These include loss of systems modeled as support systems, Balance of the Plant (BOP) systems, internal events (i.e., LOCAs, SGTR), and external events (i.e., seismic, floods, etc.). For components in systems that are modeled in the PSA, the component importance will include the contribution from an initiating event frequency. For example, the Essential Cooling Water system is modeled both as a support system and as an initiating event (i.e., loss of all three trains i

l of Essential Cooling Water will cause an initiating event to the plant.). The calculation of each component's risk indices will include both the support system indices values and the initiating l

l event indices. For BOP systems, the initiating event frequencies that were previously based on i

industry data have been updated with plant specific initiating event data. When the Graded QA l

Working Group _ meets to discuss the safety significance of the system and its respective components, a discussion will be provided on how the system as a whole impacts the plant with

)

respect to modeling in the PSA.

STP is currently working on a BOP availability model at the component level. This information will be presented to the Working Groups once the BOP model has been addressed by the configuration control process.

Question 3-6)(Continued):

c)

Our current position is that there is a certain level ofsafety sigmficance best measured by the MW such that the SSC should be maintained "high" regardless ofits assumed reliability. A MW>10 has been suggested. In practice this would correspond to a third i

branch under MWin thefigure on 6/6. This branch would,for example, bypass the FV l

measure and go directly to 'high". Please provide STP's position on establishing and using a maximum failure consequence level for individual SSCs, above which no l

adjustment ofcurrent regulatory requirements should be made.

i

ST-HL-AE-5434 Page 18 of 27

Response

STP's position is consistent with the EPRI PSA Application's Guide (TR-105396) which states that care should be used in applying the Risk Achievement Worth (i.e., failure consequence

~ level) in the ranking process because for highly reliable components it may be unrealistic to l

assume that the component always fails. This concern is addressed by using the Fussell-Vesely Importance measure in conjunction with the Risk Achievement Worth.

Question 3-6)(Continued):

i d)

Dynamic versus static plant configurations:

The efects of the diferent plant configurations on component ranking should be evaluated. This might be important duringperiods where there are scheduled maintenance or rolling maintenance windows when pre-specified amount of time. STP includes some calculations where maintenance I

unavailabilities are removed. This calculation is described as part of the defense-in-depth evaluation (see question 3-7). It does not appear to directly address the plant configuration issue since it does not include the impact ofremovingfrom operability the SSCs under maintenance. Please explain how the efects ofdiferentplant configurations I

are evaluated.

Response

i

. A new sensitivity study has been added to the Risk Ranking Procedure to evaluate the dynamic effects of the generic 12-week rolling maintenance cycle.

Question 3-6)(Continued):

e)

Common causefailures: STP includes calculations where CCF events are removed, and thus systematically addresses issues regarding the potential shadowing ofimportant risk I

contributors by highly uncertain CCF values. STP also varies the failure rates of common equipment ranked as low (see question 3-8). Are these last evaluations to insure that previously un-modeled CCFfailure would not become major contributors due to decreased QA controls? Ifnot, how does STPplan to controlpossible unacceptable CCF increases in " low" category components?

Response

No, this sensitivity study does not attempt to address non modeled common cause failures. The control of possible unacceptable common cause failure increase in " low" ranking components will be evaluated and monitored through the Maintenance Rule Program. The Maintenance Rule Program tracks the number of Repetitive Maintenance Preventable Functional Failures. This j

information will be taken into account as feedback for the graded QA process.

,.n

ST-HL-AE-5434 Page 19 of 27 4

l l

Question 3-6)(Continued):

I f)

The use ofthe risk reduction worth (RRW) as a screening criteria makes more sense ofthe base-line calculations are done usingfailure probabilitiesfor the SSCs assuming that all l

SSCs are subject to the lowest level of QA control. Since the currentfailure probability

\\

includes the benefit ofthe umformly high QA controls, simply calculating RRWfrom the nominal baseline does not appear to be a robust approach; an SSC could have a low i

RRW using this measure, but the importance could increase considerably if QA requirements on the SSC were relaxed. Although STP defines and calculates RRW, it l

does not appear to be used in the categorization process. Does STPplan to use RRW measures and ofso how?

Response

t Though the Risk Reduction Worth does provide important information, it is not one of the two risk indices used in the PSA Risk Ranking Procedure. The two indices used by the risk ranking j

l procedure will be the Risk Achievement Worth (RAW) and Fussell-Vesely Importance (FV) l measures. The RAW provides an indication of the CDF or LERF given failure or unavailability of the component (e.g., out-of-service for maintenance). The FV importance measure as calculated by the RISKMAN* is the ratio of the difference in the CDF with the component failed and with the component successful over the average CDF. It is more of a reliability indicator, whereas, RAW is an indicator of configuration risk. The intent is to optimize reliability and i

availability to the greatest extent practical. It is important to note that the extent to which l

. improvements in reliability or availability can be effected varies significantly from component to component and is dependent upon many factors. By maximizing the reliability and availability i

of components, a direct benefit in safety levels can be realized. Therefore, these two indices, used in conjunction, provide a reasonable and dependable risk ranking criteria.. The following i

figure provides a description on how these indices are used in the risk ranking process:

h Quadrant B Quadrant C I

R2 A

Quadrant A Quadrant D W

LOW 0.005 HIGH FV J

e Quadrant A: Components with low FV and low RAW are components that have very little impact on the plant when they fail (i.e., when they have poor reliability) or whenever they are j.

unavailable; thus these represent " low" ranked components and will be recommended to the Working Group / Expert Panel as candidates for " basic" graded QA requirements. Focusing l

efforts to improve the reliability and/or availability of these components has little pay back in i

terms of plant safety levels. Programs such as the Basic QA program and the Maintenance Rule f

provide adequate monitoring and corrective actions.

I 1

I

[...,

l ST-HL-AE-5434 Page 20 of 27 e Quadrant B: Compoauas in this quadrant are characterized by high reliability levels and higher configuration risk concerns. Although the contribution to risk of the component is very low due to their low failure rate (i.e., reliability), when the component is voluntarily removed from service there is a substantial risk impact. Thus, these components are placed in the " medium" gioup and will be recommended to the Working Graup/ Expert Panel as candidates for the Basic graded QA program with consideration for add: tonal augmenMon for components with elevated RAW values. The Basic program (safety-related) with augmentation or the Targeted (non safety-related) is appropriate for this group because elements of the full program (and in some cases almost all elements of the full QA program) can be applied to improve availability, whereas, due to their high levels of reliability, less emphasis on improving reliability factors is required or possible to provide safety benefits.

l e Quadrant C: This grouping is for those components with high impact on configuration risk (i.e.,

availability) and substantial additional risk impact due to reliability considerations or in the event their failure rates increase.

They are considered "high" ranked components and will be recommended to the Working Group / Expert Panel as candidates for " full" graded QA program.

l

  • Quadrant D: Although these components have a relatively high impact on overall plant risk due l

to reliability considerations (i.e., high failure rates), the impact of the unavailability of the l

component has a very little additional impact on plant risk; thus, these components represent a

" medium" risk and will be recommended to the Working Group / Expert Panel as candidates for Basic graded QA program. The Basic program (safety-related) with augmentation or the Targeted program (non safety-related) is appropriate for this group because elements of the full program (and in some cases almost all elements of the full QA program) can be applied to improve reliability, whereas, due to their low level ofimpact relative to configuration risk, less emphasis on improvmg availability factors is required or possible to provide safety benefits.

Question 3-7):

The issues discussed under Question 3-6 identify our current understanding of some major sources of uncertainties in level 1 CDF calculations. Analyses to reduce the sensitivity of the PRA based categorization to these uncertainties are requested for all risk informed applications. Identifying SSC importance to the containment function is a key element of the ranking process. Therefore, the staff intends to include large early release frequencies (possibly separated into containment failure and bypass) in its evaluation of the safety significance of SSCs. STP states that large early release frequency will be used during the SSC PRA categorization process.

Some issues under discussion which may be the source of major level 2 uncertainties for large dry containments are direct containment heating assumptions; possible shadowing of SSC importance by conservative SGTR and non-isolable LOCA calculations; hydrogen ignition models and timing; credit taken for in-vessel core damage arrest by a flooded cavity; and sensitivity to changes to the criteria defining a small vs. large early containment isolation failure. We request STP provide a discussion on what level 2 issue analyses you suggest such that, similar to the above level 1 issues, the sensitivity of the SSC categorization to the major level 2 uncertainties would be minimal or compensated for.

Response

1. Direct Containment Heating and Hydrogen Ignition - NUREG/CR-6338, "Resolution of the Direct Containment Heating (DCH) Issue for all Westinghouse Plants with Large Dry Containments or Subatmospheric Containments," indicates that the probability of containment failure from DCH at STP is very small, as it is for all similar plants in the USA. In the STP PSA model, DCH is a small fraction (~3%) of the Large, Early Release Frequency, and therefore a sensitivity study would have a very small effect on risk ranking.

2. Ex-vessel cooling - Since STP does not have a sunken cavity under the reactor vessel, it would take many Refueling Water Storage Tank volumes to submerge the lower head, so no credit is taken for in-vessel core damage arrest by flooding the cavity.

3. Non-isolable LOCA - The V sequence LOCA frequency is currently computed outside the event tree model and is simply added in with no detailed representation in the event trees. This will be changed so that SSCs associated with this sequence are represented, so that importance can be computed.

4. SGTR assumptions - Induced steam generator tube rupture contributes a large part of LERF in the STP PSA. A sensitivity study will be performed by reducing the assumed probability of an ISGTR by one half to determine the effect on risk ranking.

5. Definition of small vs. large early containment isolation failure - The cutoff for the definition of small vs. large containment isolation systems is a line 3 inches in diameter. The only large lines leading directly to atmosphere are the containment supplementary purge and the SI injection lines, which are explicitly modeled and are binned to large release. The other active penetrations are smaller than 3 inches. No sensitivity study is needed.

Question 3-8):

Calculations based on the level 2 LERF risk metric should include a linkage between the level 1 sequences and the level 2 containment protection systems to properly account for system and support system interdependencies. Are the support systems modeled in STP's level 2 containment protection systems? Are the models linked before reduction and quantification so that the dependencies are systematically included? If not, please explain how the LERF calculations are performed.

Response

The level 1 and level 2 models are linked before quantification. The containment event tree is the last tree in each sequence, and containment response is linked to the availability of plant systems through split fraction rules.
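As a minimal sketch of that linkage (assumed structures; RISKMAN's actual split fraction rule syntax differs), a containment top event's split fraction can be selected by rules keyed to the system availability carried forward from the level 1 sequence:

```python
# Minimal sketch of level 1 / level 2 linking via split fraction rules.
# Structures, rule syntax, and all numbers are assumptions for illustration.

def containment_split_fraction(sequence_state, rules):
    """Pick a containment top event split fraction from the first rule whose
    condition matches the plant system availability in the level 1 sequence."""
    for condition, sf in rules:
        if condition(sequence_state):
            return sf
    raise ValueError("no split fraction rule matched")

# Example: containment failure probability depends on spray availability
# established earlier in the linked sequence (values are made up).
rules = [
    (lambda s: not s["containment_spray"], 0.1),  # spray failed upstream
    (lambda s: True, 0.01),                       # default
]
print(containment_split_fraction({"containment_spray": False}, rules))  # 0.1
```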

Question 3-9)

The use of PRA in regulatory matters must support the NRC's defense-in-depth philosophy. In graded QA, a trial staff proposal interprets defense-in-depth as ensuring that the "high" category will be assigned to at least two SSC's in any cut-set and that at least one success path for every initiating event is comprised of "high" category SSC's. Page 4/6 indicates that re-quantifying the models after removing all maintenance unavailability contributions quantifies the optimum level of defense-in-depth.

Please explain the rationale behind this process and identify any other plans STP might have to ensure defense-in-depth.

Response

Removing all the maintenance unavailability contributions (i.e., the "no maintenance case" quantification) configures the PSA to model the configuration of the plant with all systems available. Full defense-in-depth is reflected in this plant configuration by the fact that no equipment is removed from service and all safety functions as modeled in the PSA (which reflects the plant design) are available. This accounts for both barrier defense-in-depth and redundancy defense-in-depth. For assessing configuration risk, this is the optimal reference point from which to assess risk increases.
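The effect of the "no maintenance case" can be illustrated on a toy two-train system, where each train is unavailable through either random failure or maintenance; zeroing the maintenance terms quantifies the model with full redundancy intact. The structure and probabilities below are illustrative assumptions, not STP data.

```python
# Toy "no maintenance case" quantification: the system fails only
# if both redundant trains fail, and each train is unavailable due
# to random failure OR maintenance. Numbers are illustrative.

Q_FAIL = 1.0e-3    # random failure probability per train
Q_MAINT = 5.0e-3   # maintenance unavailability per train

def train_unavailability(q_fail, q_maint):
    # union of independent contributors: 1 - (1 - a)(1 - b)
    return 1.0 - (1.0 - q_fail) * (1.0 - q_maint)

def system_failure_prob(q_maint):
    return train_unavailability(Q_FAIL, q_maint) ** 2  # 2-of-2 failure

print(f"with maintenance: {system_failure_prob(Q_MAINT):.2e}")
print(f"no maintenance:   {system_failure_prob(0.0):.2e}")  # reference
```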

The trial staff proposal to artificially assign an SSC to a high category out of a group of low ranking components or other grouping is not an acceptable method for assuring defense-in-depth. It implies that the PSA is inadequate or not believable. A proper PSA is the best tool for evaluating defense-in-depth and for assessing risk. Arbitrarily selecting components to be high subverts the process of ranking and dilutes the focus from risk-important functions which are explicitly modeled in the PSA.


Question 3-10)

Page 4/6 of the submittal discusses searching for non-linear effects by varying the failure rates of low ranked common [nominally identical?] equipment. However, an evaluation to estimate or bound the maximum practical aggregate change in plant safety accompanying the reduced controls on the low and medium safety significant SSCs does not appear to be addressed in the submittal. How does STP plan on ensuring that non-linear effects due to the aggregate change from reducing QA controls on all low ranked SSCs do not result in an unanticipated risk increase?

Response

An important aspect of the "living" Graded QA process is the feedback loop. This feedback loop will monitor any unanticipated increase in risk due to the non-linear effects of an aggregate change in reducing QA controls. Two key attributes of the feedback loop are the Maintenance Rule program and the PSA risk ranking analysis.

Maintenance Rule Program: As part of the maintenance rule program, performance criteria have been established for all SSCs scoped within the rule. This includes tracking of Maintenance Rule Functional Failures, Maintenance Preventable Functional Failures, Repetitive Maintenance Preventable Functional Failures, and System Unavailability. This information will be provided as feedback to the Graded QA Working Group and Expert Panel, and will also be incorporated in the PSA for re-ranking of SSCs. (A sketch of such a feedback record follows below.)

PSA Re-Ranking of SSCs: Plant-specific data updates for the PSA will show the aggregate effect on the risk matrix (i.e., CDF, LERF, etc.).

The frequency of these updates is outlined in the Probabilistic Safety Assessment Program procedure, OPGP04-ZA-0604.
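A hypothetical sketch of the Maintenance Rule feedback record mentioned above is shown below; the field names, criteria, and the escalation test are assumptions for illustration, not the actual program data structure.

```python
# Hypothetical shape of a Maintenance Rule feedback record passed
# to the Graded QA Working Group / Expert Panel and folded into
# PSA data updates. Fields and values are illustrative only.

from dataclasses import dataclass

@dataclass
class MaintenanceRuleRecord:
    ssc_id: str
    functional_failures: int           # MRFFs in the period
    maint_preventable_failures: int    # MPFFs in the period
    repetitive_mpffs: int              # repetitive MPFFs
    unavailability_hours: float
    criterion_hours: float             # performance criterion

    def exceeds_criteria(self) -> bool:
        # trigger for Expert Panel review and PSA re-ranking
        return (self.repetitive_mpffs > 0
                or self.unavailability_hours > self.criterion_hours)

rec = MaintenanceRuleRecord("AFW_PUMP_1A", 1, 0, 0, 36.0, 80.0)
print(rec.exceeds_criteria())   # -> False
```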

It is felt that a sensitivity study trying to address the aggregate effect by increasing the failure rates for all "low" and "medium" significant SSCs would add little value to the overall risk ranking process. For "medium" SSCs, an increase in the failure rates is not anticipated. All "medium" ranked safety-related SSCs will fall under the Basic category of the Graded QA process (see Figure 1). Since this category provides focused emphasis on key functions of the SSC, no increase in failure rates is anticipated. Also, all these components are within the scope of the Maintenance Rule and, as such, have oversight, feedback, and corrective action mechanisms to prevent adverse trends and increasing risk levels.

Sensitivity studies performed in 1995 showed there were no "low" significant SSCs shifted from the low category to the high category. These studies increased the failure rates by factors of 2, 5, and 10. Since no evidence exists of "low" SSCs escalating to "high" SSCs, it is felt that the feedback loop would be more than adequate for addressing aggregate effects on plant risk.
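A sketch of how such a study can be mechanized follows: scale the unavailabilities of the "low" ranked components by each factor, recompute Fussell-Vesely importance under the rare-event approximation, and flag any category escalation. The cutsets, unavailabilities, and the "high" threshold below are hypothetical, not the 1995 study inputs.

```python
from math import prod

# Sketch of the factor-of-2/5/10 sensitivity study on a toy cutset
# model. Cutsets, unavailabilities, and threshold are hypothetical.
cutsets = [{"PUMP_A"}, {"PUMP_B"}, {"VALVE_X", "BREAKER_Y"}]
q = {"PUMP_A": 1e-3, "PUMP_B": 1e-3, "VALVE_X": 1e-4, "BREAKER_Y": 5e-5}
low_ranked = {"VALVE_X", "BREAKER_Y"}
FV_HIGH = 0.005    # hypothetical Fussell-Vesely "high" threshold

def cdf(unavail):
    # rare-event approximation: CDF ~ sum over minimal cutsets
    return sum(prod(unavail[c] for c in cs) for cs in cutsets)

def fussell_vesely(comp, unavail):
    share = sum(prod(unavail[c] for c in cs)
                for cs in cutsets if comp in cs)
    return share / cdf(unavail)

for factor in (2, 5, 10):
    scaled = {c: (p * factor if c in low_ranked else p)
              for c, p in q.items()}
    for comp in sorted(low_ranked):
        fv = fussell_vesely(comp, scaled)
        verdict = "escalates" if fv > FV_HIGH else "stays low"
        print(f"x{factor:<2} {comp}: FV = {fv:.1e} ({verdict})")
```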


Question 3-11):

In the Risk Significance Thresholds discussion on 6/6, top event importance is used as a high level screening to route systems/components directly to "low" significance. Figures 4-1 and 4-2 in the PSA Applications Guide are referenced as providing the prescribed threshold values. These two figures, however, provide EPRI suggested thresholds for the level of evaluation needed to implement a change, e.g., a before implementation versus an after implementation CDF and LERF.

a) Please explain which values are being used from the EPRI report and what these values are compared to from the STP results. Are before and after graded QA implementation values being calculated?

Response

As stated previously, the PSA Risk Ranking Procedure has been updated and no longer reflects the use of Figures 4-1 and 4-2 of the EPRI PSA Applications Guide (TR-105396). These figures were used to determine the initial screen of components for "low" risk significance. After further review it was determined that this step added to the complexity of the procedure and did not add any value to the risk ranking process. The current procedure requires the calculation of risk indices (Fussell-Vesely and Risk Achievement Worth) for all the components modeled in the PSA. After implementation, recalculation of risk indices is part of the feedback loop in the Graded QA process.
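For reference, the two indices reduce to simple ratios over the minimal cutsets, as sketched below on a hypothetical model: Fussell-Vesely is the fraction of CDF coming from cutsets that contain the component, and Risk Achievement Worth is the CDF with the component assumed failed divided by the baseline CDF. The cutsets and unavailabilities are invented for illustration.

```python
from math import prod

# Hypothetical minimal cutsets and component unavailabilities.
cutsets = [{"PUMP_A"}, {"PUMP_B", "VALVE_X"}, {"VALVE_X", "DG_1"}]
q = {"PUMP_A": 1e-4, "PUMP_B": 1e-3, "VALVE_X": 2e-3, "DG_1": 5e-2}

def cdf(unavail):
    # rare-event approximation: CDF ~ sum over minimal cutsets
    return sum(prod(unavail[c] for c in cs) for cs in cutsets)

def fussell_vesely(comp):
    # fraction of baseline CDF from cutsets containing the component
    share = sum(prod(q[c] for c in cs) for cs in cutsets if comp in cs)
    return share / cdf(q)

def risk_achievement_worth(comp):
    # CDF with the component assumed failed (q = 1) over baseline CDF
    return cdf({**q, comp: 1.0}) / cdf(q)

for comp in q:
    print(f"{comp}: FV = {fussell_vesely(comp):.3f}, "
          f"RAW = {risk_achievement_worth(comp):.1f}")
```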

Question 3-11)(Continued):

b) Also, the staff has not approved the PSA Applications Guide. Please provide your justification of the appropriateness of the threshold values you select for determining the safety significance of SSC's for graded QA.

Response

The thresholds in the PSA Applications Guide are appropriate for use at STP. This is due to the rate at which risk is accrued for temporary conditions (i.e., planned maintenance). The threshold allows for instances where risk accrues at a higher rate but also allows corrective actions to be taken to reduce risk in future weeks, if necessary. Maintaining the level of weekly risk as low as practical and below the 1E-6 threshold ensures the yearly cumulative risk will be below the target. It is also two orders of magnitude below the safety goal; however, monitoring station risk at STP up to the threshold ensures that both the safety goal and the target goal for risk at STP are met.
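The weekly threshold logic described above can be pictured as a simple running check, sketched below; the weekly incremental risk values and the annual target are illustrative assumptions, not STP figures.

```python
# Sketch of weekly configuration risk tracking against the 1E-6
# weekly threshold. Weekly values and annual target are illustrative.

WEEKLY_THRESHOLD = 1.0e-6
ANNUAL_TARGET = 52 * WEEKLY_THRESHOLD   # assumed yearly risk budget

weekly_risk = [4.0e-7, 9.5e-7, 1.2e-6, 3.0e-7]   # illustrative data

cumulative = 0.0
for week, risk in enumerate(weekly_risk, start=1):
    cumulative += risk
    if risk > WEEKLY_THRESHOLD:
        # exceedance signals corrective action in future weeks to
        # bring the running total back under the annual target
        print(f"week {week}: {risk:.1e} exceeds weekly threshold")
print(f"cumulative so far: {cumulative:.2e} (target {ANNUAL_TARGET:.2e})")
```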


ATTACHMENT 4
PROBABILISTIC SAFETY ASSESSMENT PROGRAM PROCEDURE

Question 4-1):

Figure 2 on page 10/12 indicates that internal events, internal fires and floods, seismic events, and other external events are all part of the STP PSA model.

a) When the submittal refers to the quantification or review of the "risk model", are all these parts included? In particular, do all the risk measures and all the sensitivity study results include the full quantitative contributions from fires, floods, and other external events?

Response to first part:

Yes, important contributions from these types of events are included in the PSA model quantification. These topics will be reviewed, but not as often as the systems analysis part of the PSA.

Response to second part:

Yes, the risk measures and sensitivity study results include the contributions from floods and other external events.

Question 4-1)(Continued):

b) If the ranking process does not include the external event PRA models, how are these events factored into the ranking and eventual categorization process?

Response

The ranking does include the external event models as discussed above under part a.


ATTACHMENT 5
CONFIGURATION CONTROL OF THE PROBABILISTIC SAFETY ASSESSMENT PROCEDURE

Question 5-1):

On the "InitialScreening Criteria" form (page 9/16), there are two statements at the end oftheform. If answers to any of the questions on the form is "yes", it says " proceed to PSA CHANGE EVALUATION". Ifanswers to any ofthe questions is "no", it says tofile the change in the applicable System or Event Notebook". In some cases, both situations could occur. If this happens, should one bothproceed to PSA CHANGE EVALUATIONandfile in the applicable notebook? Please explain.

Response

This form has been deleted from the procedure. The requirement as currently written is more general: "The reference PSA Models are updated every Unit 1 refueling cycle incorporating applicable plant modifications, procedure changes and data collected since the previous update." The procedure does not specify exactly what forms are required and how they are to be filed.

Question 5-2):

Addendum 1 contains PSA input data and Addendum 2 contains event and fault tree notebooks. Addendum 3 contains three layers of decisions with 1) initial screening, 2) change evaluation, and 3) detailed change flow chart. Section 5.4 states that Addendum 2 will be updated at least every 18 months. Page 9/17 in attachment 2 states that the OEG provides a biannual report where SSC performance is qualitatively graded.

a) What starts the Addendum 3 process?

Response

Addendum 3 has been deleted. A formal model update is performed every Unit 1 refueling as stated in the previous answer.

Question 5-2)(Continued):

b) Question 3 on 10/16 asks if the change requires an immediate update. What determines if the change requires an immediate update?


Response

This form has been deleted. In general, a change that significantly affects the PSA results should be incorporated as soon as possible.

Question 5-2)(Continued):

c) Is it correct to assume that the refueling outage update (or at least every 18 months) will incorporate all reliability data and logical model changes from the period?

Response

Yes, the update will incorporate reliability data and logical model changes from the period.

Question 5-3):

Step 23 states to "submit the system package for review to the PSA project team." Please explain who the PSA project team is.

Response

Step 23 has been deleted. Responsibilities are assigned to the Risk and Reliability Analysis (RRA) Supervisor or to the member of the RRA section specifically assigned responsibility for a particular model.


Figure 1 - GQA Screening Process Flowchart

[Flowchart image not reproduced in this text rendering. Recoverable content: each system or component is first screened against the PSA ranking (PSA High, PSA Med, PSA Low, or Not Modeled). Components are then screened against deterministic questions: mode change or shutdown significant? mitigates an accident or transient? could fail a safety or risk significant system? used in EOPs? Risk significance is assessed based on PSA rankings and/or deterministic evaluations and binned High, Med, Low, or Not Risk Significant, with final safety-related and quality-related questions determining the Graded QA treatment.]