ML18324A536

Summary of Category 2 Public Meeting to Discuss a Graded Approach for Probabilistic Fracture Mechanics Codes and Analyses for Regulatory Applications
Issue date: 11/23/2018
From: Patrick Raynaud, NRC/RES/DE/CIB
To: Raynaud P



U.S. Nuclear Regulatory Commission
Category 2 Public Meeting Summary
10/23/2018

Title: Discussion of a Graded Approach for Probabilistic Fracture Mechanics Codes and Analyses for Regulatory Applications

Date of Meeting: 10/23/2018
Location: OWFN-09B04
Type of Meeting: Category 2

Purpose of the Meeting: The purpose of this meeting was to have round-table discussions with stakeholders on potential ways to implement a graded approach to probabilistic fracture mechanics (PFM) best practices for regulatory applications. The NRC was looking for specific proposals and ideas on how to implement a tiered approach to quality assurance, as well as verification and validation, for PFM codes and analyses.

General Details: This meeting took place from 8:00 AM to 11:45 AM on 10/23/2018. Around 10 NRC staff were in attendance, along with 9 external in-person participants and about a dozen additional participants on the webinar.

The NRC briefly presented some background for the meeting and its objectives. Following this brief presentation, EPRI presented their thoughts on minimum requirements for a PFM submittal. An open discussion between the participants then took place for the remainder of the meeting.

The list of known participants is below:

Name                  Organization
Michael Benson        NRC/RES
Matthew Homiack       NRC/RES
Mark Kirk             NRC/RES
Shah Malik            NRC/RES
Patrick Raynaud       NRC/RES
David Alley           NRC/NRR
Jay Collins           NRC/NRR
Stephen Cumblidge     NRC/NRR
David Rudland         NRC/NRR
Steve Ruffin          NRC/NRR
Jana Bergman          Curtis Wright
Markus Burkardt       Dominion Engineering Inc.
David Gross           Dominion Engineering Inc.
Glenn White           Dominion Engineering Inc.
Tony Cinson           EPRI
Robert Grizzi         EPRI
Tim Hardin            EPRI
Craig Harrington      EPRI
Nathan Palm           EPRI
Gary Stevens          EPRI
Ron Janowiak          Exelon Nuclear
Ryan Hosler           Framatome
Marjorie Erickson     PEAI
Mike Starr            Sandia National Lab
Dilip Dedhia          Structural Integrity Associates
Do Jun Shim           Structural Integrity Associates
Brian Golchert        Westinghouse
Brian Mount           Unknown
Eric Johnson          Unknown

Meeting Summary:

The first presentation was by Mr. Raynaud of NRC/RES (ML18298A166), and covered the following themes:

How are tiers defined for PFM submittals? Tier Drivers.

o Potential tier drivers include analysis complexity, application safety impact, temporary vs. permanent change, and generic vs. plant-specific application.

What is subject to tiers in a PFM application? Tier Topics.

o Potential tier topics include QA, V&V, scope of analyses.

Are more tiers needed? How much is too much?

o PFM usability vs. NRC comfort in reviewing.

Discussion points raised during the first presentation were as follows:

Alley: less QA and V&V may be adequate, depending upon the safety significance of the application.

Rudland: trust in the vendors' QA programs may have an impact on what gets submitted to the NRC.

Alley: regarding risk-informed vs. risk-based:

o Risk-informed implies that something beyond PFM is part of the argument; that is, we do not rely solely on a risk determination to make a decision.

Stevens: What's the difference between V&V and QA? What does a relaxed review mean?

o Software QA is the act of maintaining configuration control over a code, and documenting the associated activities. Licensees should have QA programs that can be audited.

o V&V is the act of verifying that the software has been implemented correctly (verification), and that its predictions meet the stated objectives and requirements for the software (validation); it may include benchmarking, comparisons to data, or both (see the sketch below).
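To make the distinction concrete, here is a minimal sketch in Python; the failure model, distributions, and numbers are hypothetical illustrations, not from the meeting materials. Verification checks the code against a known analytic answer; validation would instead compare predictions to data or operating experience.

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(seed=42)

def failure_probability(capacity_mean, capacity_sd, demand, n=1_000_000):
    """Monte Carlo estimate of P(capacity < demand) for a normal capacity."""
    capacity = rng.normal(capacity_mean, capacity_sd, n)
    return float(np.mean(capacity < demand))

# Verification: for this simple case, the estimate should reproduce the
# closed-form normal CDF value within sampling error.
p_mc = failure_probability(100.0, 10.0, demand=80.0)
p_exact = norm.cdf(80.0, loc=100.0, scale=10.0)
assert abs(p_mc - p_exact) < 1e-3, "verification check failed"

# Validation would instead compare predictions to measurements or plant
# operating experience, which this toy example does not include.
print(f"Monte Carlo: {p_mc:.5f}   analytic: {p_exact:.5f}")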

Rudland: vendor QA programs do a good job at documenting codes; the NRC may not need to see everything; V&V, though, is different from just paperwork tracking.

Collins: there is a difference between an outage situation and a more generic submittal; what needs to be submitted to NRC is different in those two situations.

Rudland: we should develop what's minimally acceptable; we should avoid overly complex guidance.

Discussion on the need for guidance or lack thereof:

Palm: Regulatory Guides (RGs) sometimes become de facto regulations. Anything not done per an RG can become problematic and questionable. The more detail goes into an RG, the more complex things get when one does not follow it exactly.

o Kirk: Be more specific about guidance being treated as regulation.

o Palm: RG 1.99 Rev. 2, GALL (Rudland), RG 1.190 on fluence.

o Alley: almost everyone comes in as RG 1.99-compliant; we do not treat RGs as regulations.

Palm: an RG sets a strong precedent; the NRC is used to RG compliance.

Kirk: doing things the same way leads to inertia; the RG vs. regulations discussion may need another forum.

Palm: a particular application may not fit the RG; industry may get caught trying to disposition an RG rather than showing a PFM is appropriate for its purpose.

Harrington: the example RGs are very prescriptive; the PFM RG need not be so prescriptive. On the other hand, if this is a very general RG, it could be hard to meet.

Alley: with an RG, there may be occasional difficulties in review, but without an RG, there will always be difficulties. An RG could actually improve the situation.

Rudland: there is recent operating experience (OE) where a vendor lost configuration control of a PFM code; situations like this suggest that guidance is needed.

The next presentation was by Nathan Palm of EPRI (ML18298A167). The presentation and associated discussion were as follows:

Palm: The PFM Technical Letter Report (TLR) presents various important aspects of PFM but does not say what's minimally acceptable.

o Could be interpreted as everything that must be submitted.

o Simple PFMs shouldn't require excessive review.

o EPRI agrees that some tool could be helpful.

Presentation: A List of Topics to be Addressed in a Submittal was presented, which was very much in line with the important aspects of PFM outlined by the NRC in the PFM TLR.

Presentation: About the principles of Risk-Informed Decision Making in RG 1.174:

o Cumblidge: PFM not necessarily linked to risk; e.g., probability of leakage.

o Stevens: RG 1.174 is not a requirement.

o Raynaud: the PFM TLR doesn't link with RG 1.174.

o White: a document may be needed to cover the relationship of RG 1.174 to PFM.

o Cumblidge: if you tie to risk, then NRC PRA reviewers have to get involved.

o Palm: PFMs do link back to RG 1.174 to define acceptance criteria.

o Rudland: it is harder to do this when the component doesn't affect the PRA.

o Alley: PFM might fall outside (below) RG 1.174: in PFM, we do not want to assume complete component failure at the outset, but in PRA, complete failure of a component is the starting point.

o Collins: a licensee wanted to introduce a crack initiation model in a deterministic argument; the NRC rejected that idea; this guidance may make the expectations clearer.

Presentation: Minimum expectations: Supporting Documents

o PFM software does not always have a user manual and adequate documentation. EPRI proposed alternatives to documentation, including audits, which were welcomed by the NRC.

o Rudland: an audit is a good idea, including one where the licensee runs the code in front of the NRC to answer questions.

Collins and Cumblidge strongly supported the idea of audits where the licensee would run the code for the NRC.

o Homiack: run times can be an issue.

o Palm: pared-down versions of the code can illustrate concepts and provide confidence.

o Rudland: staff are mostly concerned about module interactions, without needing to calculate converged probabilities.

Presentation: Minimum expectations: Models

Presentation: Minimum expectations: Inputs

o Rudland: it is unclear how to justify the separation of aleatory and epistemic uncertainties; that discussion has gone on for a long time.

o White: a small margin between the result and the acceptance criterion may be a reason to separate aleatory and epistemic uncertainties.

o Rudland: separation of uncertainties feeds into the acceptance criteria (the double-loop scheme is sketched below).

Presentation: Minimum expectations: Convergence and Sensitivity Studies

Presentation: Minimum expectations: Uncertainties

o Conservatisms must be discussed.
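To illustrate the separation of uncertainties discussed above, below is a minimal Python sketch of the double-loop (nested) Monte Carlo scheme commonly used for this purpose: epistemic parameters are sampled in an outer loop, aleatory variability in an inner loop, producing a distribution of conditional failure probabilities. All distributions and values are hypothetical and are not taken from any analysis discussed at the meeting.

import numpy as np

rng = np.random.default_rng(0)
N_EPISTEMIC, N_ALEATORY = 200, 10_000

p_fail = np.empty(N_EPISTEMIC)
for i in range(N_EPISTEMIC):
    # Outer loop: sample epistemic (state-of-knowledge) parameters.
    toughness_mean = rng.normal(200.0, 15.0)
    # Inner loop: sample aleatory (irreducibly random) variability
    # conditional on the sampled epistemic parameters.
    toughness = rng.normal(toughness_mean, 20.0, N_ALEATORY)
    applied_k = rng.lognormal(mean=np.log(120.0), sigma=0.2, size=N_ALEATORY)
    p_fail[i] = np.mean(applied_k > toughness)

# The spread of p_fail across the outer loop expresses epistemic uncertainty
# about the aleatory failure probability; percentiles (e.g., the 95th) are a
# common way to report results when the two are separated.
print(f"mean p_f = {p_fail.mean():.2e}, 95th pct = {np.percentile(p_fail, 95):.2e}")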

Presentation: Minimum expectations: V&V and Acceptance

Presentation: Above the minimum

o Raynaud: you do not always know your margin to the acceptance criterion ahead of time, so it can be hard to know a priori what to do and what to submit.

o Harrington: this is about what's submitted, not necessarily all the work that is actually done. There will likely be some work that is not documented in the submittal.

o Rudland: chasing the basis can be difficult if not many documents are referenced.

o White: prescriptive guidance may be difficult on some of these issues; it establishes a precedent.

o Kirk: the information presented by EPRI could be turned into an RG. How is the EPRI proposal different from an RG?

o Palm: EPRI does not want an RG to put the industry in a bind.

EPRI announced that they were working on a white paper on PFM minimum requirements. The white paper will be provided to the staff in late 2018.

o Stevens: public meeting information could help the industry develop better submittals; an RG is not necessary to achieve the goal. This was done for PT curves.

o Cumblidge: RGs go two ways: they can encourage uniformity in staff reviews and promote consistency and predictability of regulatory outcomes; this meeting is a way to find out what the industry would or wouldn't like to see in the RG (i.e., to get feedback).

o Collins: there is and will always be a certain amount of variability from submittal to submittal.

o Rudland: Stevens has a point; we shouldn't discount other methods to achieve the goal.

o Stevens: there are 4-5 upcoming submittals using PFM in the next year. This will be an opportunity to practice both for the industry and the NRC. Submittals will come in prior to the RG being developed; these submittals are intended to establish precedent.

o Palm: a lessons-learned document from the submittals may be valuable.

o Shim: what about the pilot study associated with RES's PFM project?

o Raynaud: the pilot study is still under development.

o Stevens: what about the differences between accepted codes and new codes? New codes can be benchmarked against accepted codes.

o Raynaud: an example is xLPR and FAVOR: these are or will be accepted by the NRC (presumably), but the more you deviate from them (modify them), the more documentation would be required.

o Raynaud: question on EPRI's last slide about being in a case that requires more than the minimum documentation: do we leave it up to the industry to decide when to provide above-the-minimum information?

o Palm: EPRI previously developed a table with high-low rankings; however, it could not neatly categorize an example.

o Raynaud: each minimum expectation topic may need to have tiers.

o Harrington: minimum information to be submitted really depends upon the submittal.

o Shim: the minimum expectations go beyond what has been approved in the past: does this mean what we did before is no good, or needs to be revisited? How can you require more than you required in the past?

o Raynaud: we used to make wheels out of wood: as we learn more, we should do better, rather than stick to what was acceptable in the past but has been superseded today.

o Raynaud: how does the staff comment on the white paper?

o Stevens: pre-submittal meetings could be used for this.

o Rudland: we should have a follow-on public meeting or public teleconference.

Open Discussion continued following a break:

Rudland: EPRI slide 12 and NRC slide 5 are similar, but some things on these slides may be greater than minimum requirements.

Cumblidge: an example of comparing intergranular stress corrosion cracking (IGSCC) failure rates to probability of detection shows how often you need to inspect; is a PRA review necessary?

Rudland: what does "extent of plant experience" mean?

o Palm: extent of plant experience refers to Operational Experience (OpE) and benchmarking the code to OpE. The more OpE (data) you have, perhaps the less documentation is needed, because you can benchmark your code to the OpE.

o Rudland: How does benchmarking feed into what gets submitted? Example: Weibull distributions for the control rod drive mechanism issue; what should be submitted?

o White: more sensitivity studies may be needed for complex cases or when OpE exists.

Golchert: there is a difference between first-of-a-kind and more routine applications; Westinghouse procedures reflect this difference.

o Alley: we are starting with the minimum set and adding requirements for more complex cases; maybe we can try the opposite approach: define the first-of-a-kind case first, then move backwards.

o Harrington: is it easier to describe the basic case or the totality?

Stevens: upcoming submittals will be useful; may establish a common ground.

White: the minimum can vary from submittal to submittal.

Harrington: we're talking about the difference between submitting a sentence, a paragraph, or a report on each of the minimum topics to make our case to the NRC.

Palm: no system can completely eliminate all RAIs.

Alley: we can limit the number of RAIs; the goal is to reduce their number and avoid circular RAI loops, but there will likely always be some RAIs. The goal is to maintain effectiveness but improve efficiency.

Rudland: what triggers us to require more than the minimum as described by EPRI? A code used before may just need the minimum.

Kirk: the same code or application may be subject to interpretation. FAVOR went through reviews and was applied to the PTS rule; then it was applied to a different problem; shallow surface-breaking flaws became significant; other areas of the code were examined; interpolation schemes were found to be wrong.

o Benson: Mark's example shows the same code being applied to two different problems; the inputs changed.

o Kirk: a change in the transient led to a change in the risk-dominant flaw.

Cumblidge: approving a RISC-4 item does not imply approval for Class 1.

White: EPRI will discuss what circumstances require greater than the minimum in the white paper.

Stevens: for an RPV with a 10^-7 frequency, a sensitivity study might be required, because you are close to a threshold or regulatory criterion.

Collins: in a case where the result is 7 orders of magnitude below the criterion, I may question the number.
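The exchange above rests on sampling arithmetic. As a rough, illustrative rule of thumb (not from the meeting materials), the relative standard error of a direct Monte Carlo estimate of a small probability p from n samples is about sqrt((1 - p) / (n * p)), which is one reason results near a 10^-7 criterion warrant convergence and sensitivity studies:

import math

def relative_std_error(p, n):
    """Approximate relative standard error of a direct MC estimate of p."""
    return math.sqrt((1.0 - p) / (n * p))

for p in (1e-3, 1e-7):
    for n in (10**6, 10**9):
        print(f"p={p:.0e}, n={n:.0e}: relative SE ~ {relative_std_error(p, n):.1%}")

# For p = 1e-7, even a billion direct samples leave roughly 10% relative
# error, so variance reduction (e.g., importance sampling) and explicit
# convergence checks are typically needed near a regulatory threshold.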

Shim: it must be backed up by the items in the Minimum Expectations; it's hard to define everything up front; you need trial runs.

Stevens: OpE should be folded in.

Raynaud: you need to ensure the applicability of the code and show how you're capturing all relevant aspects of the problem being modeled.

o Harrington: up front in an application, one should mention what problem they are solving and how they are solving it.

Cumblidge: the NRC staff is concerned about reviewing a PFM for a less safety-significant application, because sometimes then the industry wants to apply the PFM to a more safety-significant application.

o Collins: for a generic application (vs. plant-specific), greater than the minimum may be required.

o Stevens: past applications of PFM have been more specific; they aren't typically generic.

o Cumblidge: safety-related and safety-significant are not the same; an approval is only for one case; one can't move to Class 1 based upon approval for a lower safety class.

o Stevens: an analysis applied to service water piping might require 10 pages, but applied to RCS piping, the same might require 10 reports.

Collins: we could think about an SE for the code itself vs. an SE for each application of the code.

o Raynaud: this is done in fuel performance.

o Rudland: the NRC has approved the SRRA code for risk-informed ISI.

o Raynaud: there are two elements to consider: the code and the application of the code.

o Kirk: same code but different application; it should have been OK but it wasn't; how can we develop guidance to address this?

o Raynaud: if using the same code for a different application, verification may not need to be redone, but validation may need to be redone. A code approval may be limited to a specific class of problems: if one wants to use the same code outside of the domain of applicability previously approved, they will need to show the new applicability domain through validation.

o Palm: the industry did not run a sensitivity case to look at small flaws.

o Stevens: this can be an application-driven problem.

o Rudland: if you change the inputs, you should run sensitivity studies.

Shim: existing software can have 30-year-old models; developing new code can be better; we shouldn't be shackled to what has been approved in the past; for software, documentation and audits should be sufficient.

Collins: common understanding between the staff and industry on audits is important.

Stevens: applications are coming to the NRC in 3 months; a test is coming; the applications and the white paper may help the process.

Summary and actions for the meeting were provided:

EPRI will provide a white paper on PFM minimum expectations to the NRC in 2018.

Raynaud summary:

o The goal of the meeting was to discuss a tiered approach.

o EPRI proposed minimum requirements for submittals.

o Attendees debated what exactly triggers greater-than-minimum requirements.

o A distinction was made between the code itself and the application of the code.

o Tiers are needed for both the code itself and its application.

o The NRC staff needs to think more about triggers.

Harrington: the minimum expectation topics are defined, but the question is how long the submitted reports will have to be. Topics: what you submit vs. how you tackle the problem.

EPRI and the NRC may need a public conference call to discuss the EPRI white paper once it has been read by the NRC.

Action Items/Next Steps:

EPRI to send NRC a white paper outlining their thoughts on a graded approach for PFM submittals.

NRC to set up a subsequent public meeting in the spring of 2019 to continue the discussion on PFM with stakeholders.

Attachments:

o Meeting agenda: ML17338B136

o Presentations: ML17348A018, ML17348A039, ML17348A034, ML17348A043

o PFM Technical Letter Report: ML17335A048