ML22143A840

22 - 2022 - Nvib - Mat Counterparts - ISI Thoughts
ML22143A840
Person / Time
Issue date: 05/25/2022
From: David Rudland, Dan Widrevitz
NRC/NRR/DNRL, NRC/NRR/DNRL/NVIB
To:
Kalikian R, 301-415-5590
Shared Package
ML22143A408
Download: ML22143A840 (23)


Text

Thoughts on Inspections and Sampling

DAN WIDREVITZ & DAVE RUDLAND

MAY 25, 2022

Topics

  • Motivation
  • What are inspections for?
  • Inspections and Uncertainties
  • Bathtub curve
  • Inspection modeling
  • Risk-informed decision making
  • Examples

Motivation

NRC has noted an increase in ISI-related submittals that are explicitly or implicitly risk-informed, principally focused on ISI frequencies.

Many of these submittals contain novel applications of probabilistic modeling or other risk-based arguments justifying adjustment of ISI.

The mixture of qualitative arguments regarding the state of knowledge and (in some cases) probabilistic analysis requires staff to contextualize each application within the risk-informed decision-making (RIDM) framework.

Motivation: Inspections in RIDM

Inspections are a key aspect of RIDM and necessary to reach regulatory conclusions.

Staff are seeking to bridge qualitative arguments, historical precedent, and historical rules of thumb to match inspection proposals with more quantitative insights.

Staff seek to answer: how do we judge a proposed number of inspections against current practice?

What are inspections for?

Inspections answer important questions through direct observation:

  • Was a component fabricated to its design?
  • Was a component installed properly?
  • Etc.

What do we often use inspections for?

  • Direct evidence of the state of SSCs
  • Data to feed/confirm models
  • Diverse and timely assurance of SSC integrity/function/etc.

What are inspections for? Utilizing results

Inspection results can support modeling. This requires:

  • Sufficient data to model mean
  • Additional data to model (or at least bound) variance

Inspection results can support follow-on actions:

  • Confirm or negate presence of degradation
  • Monitor integrity
  • Detect potentially novel degradation

Bathtub curve: Framing inspection programs

How many inspections? Where? For how long?

What are the plausible degradation modes that may threaten operation? How can we detect them? When may they occur? How long may we have between detection and failure?

[Figure: bathtub curve with Burn-in, Maturity, and Wear-out phases]

Bathtub curve: Shifting priorities

Burn-in - inspections are necessary to rapidly identify novel degradation and describe it (mean, variance, time-dependence)

Maturity - inspections are necessary to validate/confirm modeling and detect initiation of novel degradation (mean, trigger inspection program expansion)

Wear-out - inspections identify entry into the wear-out period

[Figure: bathtub curve with Burn-in, Maturity, and Wear-out phases]

Inspection Modeling

We will be presenting plots comparing inspection scenarios to illuminate program sensitivity.

Can approximate using binomial distribution for quick results.

Sensitivity here is the mean probability of at least one detection across many simulations.

Monte Carlo can be used to evaluate more complex scenarios (POD, time-effects, etc.).
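A minimal sketch of both approaches, assuming independent exams for the binomial case and sampling without replacement for the Monte Carlo case (function names and inputs are illustrative, not from the slides):

```python
import random

def sensitivity_binomial(incidence: float, n_inspections: int) -> float:
    """P(at least one detection) = 1 - (1 - p)^n, exams independent."""
    return 1.0 - (1.0 - incidence) ** n_inspections

def sensitivity_monte_carlo(population: int, detectables: int,
                            n_inspections: int, trials: int = 100_000) -> float:
    """Monte Carlo estimate, sampling without replacement from a finite population."""
    items = [True] * detectables + [False] * (population - detectables)
    hits = sum(any(random.sample(items, n_inspections)) for _ in range(trials))
    return hits / trials

print(sensitivity_binomial(0.05, 10))        # ~0.40 for 5% incidence, 10 exams
print(sensitivity_monte_carlo(600, 30, 10))  # finite-population analogue, similar
```

The Monte Carlo version is the natural place to layer in POD and time-effects as mentioned above.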

Inspection Modeling: Inspections vs. Sensitivity

The ability to (on average) detect a level of detectable degradation in a population is a strong function of detectable incidence (% of population with detectables) and the number of examinations. Sensitivity improves with increasing incidence or examinations.

[Figure: Sensitivity of exams with varying population incidence; population of 600 with 1/600, 6/600, 30/600, and 60/600 detectables; sensitivity (0-100%) vs. 0-300 inspections]

Inspection Modeling: Mean and Variance

Mean sensitivity or probability of detection is population independent, but variance is not! Confidence in the mean value goes up with sampling (examination by percentage can help!).

[Figure: Mean and variance for n = {10, 20, 50}; mean curves with 1-standard-deviation bands from a sampled population; sensitivity (0-100%) vs. estimated population incidence (10-100%)]
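A small sketch of the variance point under a simple binomial model (my assumption, not stated on the slide): the estimated incidence has the same mean at any sample size, but its spread shrinks with more exams.

```python
import math

def incidence_estimate_sd(p: float, n: int) -> float:
    """Std. dev. of the observed detectable fraction over n independent exams."""
    return math.sqrt(p * (1.0 - p) / n)

for n in (10, 20, 50):
    print(n, round(incidence_estimate_sd(0.30, n), 3))
# 10 -> 0.145, 20 -> 0.102, 50 -> 0.065: same mean, shrinking spread
```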

Inspection Modeling: Incidence vs. Sensitivity

Finding rare occurrences requires many inspections. This approaches 100% inspection if rare occurrences are sufficiently important to detect as early as possible.

[Figure: Inspections required (0-300) vs. percentage of population with detectables (0-20%), for 95%, 50%, and 5% probability of success]
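Under the binomial approximation used earlier, the inspection count needed to hit a target detection probability can be sketched as follows (a rough illustration; inputs are mine):

```python
import math

def inspections_needed(incidence: float, target: float) -> int:
    """Smallest n with 1 - (1 - incidence)^n >= target."""
    return math.ceil(math.log(1.0 - target) / math.log(1.0 - incidence))

for p in (0.01, 0.05, 0.10):
    print(f"{p:.0%}: {inspections_needed(p, 0.95)} inspections")
# 1%: 299 (~300), 5%: 59, 10%: 29
```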

Inspection Modeling: Population vs. Sensitivity

Inspection schemes have different sensitivities. This relates to the total population as well as the number of inspections (Monte Carlo result). Fixing sampling (not as a percentage) has limits.

[Figure: Sensitivity (0-100%) vs. number of welds in population (0-500), with 5 detectables in the population, for 2, 10, and 50 inspections]
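The finite-population effect can also be computed exactly with the hypergeometric distribution instead of Monte Carlo; a brief sketch with illustrative numbers:

```python
from math import comb

def sensitivity_hypergeom(population: int, detectables: int, n: int) -> float:
    """P(sample of n contains at least one detectable), without replacement."""
    return 1.0 - comb(population - detectables, n) / comb(population, n)

for pop in (100, 200, 500):
    print(pop, round(sensitivity_hypergeom(pop, 5, 10), 2))
# 100 -> 0.42, 200 -> 0.23, 500 -> 0.10: a fixed sample size loses
# sensitivity as the population grows.
```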

Other inspection factors

Other factors to be considered:

  • Timeliness of detection
  • Consequence of detection
  • Consequence of later detection versus earlier detection
  • Capabilities of monitoring technology

[Figure: bathtub curve with Burn-in, Maturity, and Wear-out phases]

Principles of Risk-Informed Decision Making 1

The five principles of risk-informed decision making form a holistic decision basis.

They are not separable - you cannot wholly replace Principle 5 with improvements to Principle 4.

Principles of Risk-Informed Decision Making 2

In the materials engineering context, we often have safety margins defined through use of ASME BPV Code design requirements.

Risk analysis often takes the form of modeling, such as PFM.

Performance monitoring includes inspections, leak detection, etc.

[Diagram: Design Margins (etc.); Degradation Modeling (PFM, etc.); Monitoring and Inspections (ISI, etc.)]

Degradation Modeling: What is it for?

Degradation modeling allows the prediction of future degradation based on modeling assumptions (epistemic knowledge).

  • Design optimization
  • Inspection optimization
  • Future planning (repair, replacement, etc.)

Reliability approaches rely on modeling and (often) Bayesian approaches to build maintenance and inspection programs. The pure use of this approach is considered Risk-Based by the NRC (Principle 4).
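As one hedged illustration of the Bayesian flavor (my sketch, not a method from the slides), a conjugate Beta-Binomial update of the incidence rate from inspection outcomes:

```python
def update_incidence(prior_a: float, prior_b: float,
                     detections: int, inspections: int) -> tuple[float, float]:
    """Beta(prior_a, prior_b) prior -> Beta posterior after inspection results."""
    return prior_a + detections, prior_b + (inspections - detections)

# Hypothetical prior centered near 5% incidence, then 50 clean exams:
a, b = update_incidence(1.0, 19.0, detections=0, inspections=50)
print(a / (a + b))  # posterior mean incidence ~0.014
```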

Performance Monitoring: What is it for? 1

Performance Monitoring, in the Principle 5 sense, provides:

  • Direct evidence of presence and/or extent of degradation
  • Validation/confirmation of continued adequacy of analyses
  • Timely method to detect novel/unexpected degradation

Can inform regarding uncertainties:

  • Epistemic uncertainties - model, parameter, and completeness uncertainties
  • Aleatory uncertainties - stochastic randomness

Performance Monitoring: What is it for? 2

Performance monitoring works together with other approaches such as degradation modeling to provide assurance of integrity with a high degree of confidence.

Running to failure is not an adequate program.

Significant systems must be maintained with a high degree of confidence in the assurance of their function - uncertainties must be handled both by modeling investigation (sensitivity studies, etc.) and by ongoing inspection (model and completeness uncertainties).

Examples: Thought Experiment

Licensee proposal: Component inspection requires a high degree of sensitivity, with a 95% mean detection probability. The proposal is for 50 inspections; will rare degradation be detected?

Assumptions: Binomial statistics are appropriate.

How rare a condition is 95% likely to be detected? ~5.8% population incidence rate.

How many inspections are needed for a 1% population incidence rate? ~300.

Staff thoughts: A ~1/17 incidence rate is not especially rare. Binomial estimates are driven by the number of inspections. Having a very high chance of finding rare occurrences requires very high numbers of inspections.
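A quick check of these figures under the stated binomial assumption (sketch, my code):

```python
import math

n = 50
# Incidence that 50 inspections detect with 95% probability:
p = 1.0 - (1.0 - 0.95) ** (1.0 / n)
print(p)  # ~0.058 -> ~5.8%, i.e. roughly 1 in 17

# Inspections needed for 95% detection at 1% incidence:
print(math.ceil(math.log(0.05) / math.log(0.99)))  # 299, i.e. ~300
```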

Examples: Thought Experiment 2

Licensee proposal: Component has 10 inspections per ASME Code for each unit every 10 years; proposing 2 inspections every 20 years.

Assumptions: Population incidence of detectable indications of 5%.

Per Unit Monte Carlo sensitivity: ~20%

Fleet Monte Carlo sensitivity (assuming 40 units): ~99%
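A simplified Monte Carlo sketch of the per-unit vs. fleet comparison (the slide's exact modeling assumptions are not given; this version treats each inspection as an independent 5% draw):

```python
import random

def monte_carlo_sensitivity(p: float, n_inspections: int,
                            trials: int = 100_000) -> float:
    """Fraction of trials with at least one detection among n independent exams."""
    hits = sum(any(random.random() < p for _ in range(n_inspections))
               for _ in range(trials))
    return hits / trials

print(monte_carlo_sensitivity(0.05, 2))       # per unit, 2 exams: ~0.10
print(monte_carlo_sensitivity(0.05, 2 * 40))  # 40-unit fleet: ~0.98
```

This simple model lands somewhat below the quoted ~20% and ~99%, suggesting the staff calculation included detail (e.g., multiple intervals or POD) not reproduced here.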

Staff thoughts: Proposed inspection scheme would have very low ability to provide assurance regarding component and unit specific degradation, but very good sensitivity to fleetwide generic degradation. Expansion would be warranted if fleet detection occurred.

Examples: PWR Weld Exams

WCAP-16168 TR Case Analysis: Primarily a PFM analysis addressing the risk delta of conditional RV failure frequency due to stress and different inspection scenarios.

Performance monitoring plan: Extension of ISI interval to a maximum of 20 years. Fleet inspections are coordinated to ensure regular data on population level (monitoring and trending).

A one-time inspection for subsequent extensions validates that the generic flaw distribution used in the report bounds the plant-specific distribution per 10 CFR 50.61a(e) (model validation).