ML22143A840

22 - 2022 - Nvib - Mat Counterparts - ISI Thoughts
Person / Time
Issue date: 05/25/2022
From: David Rudland, Dan Widrevitz
NRC/NRR/DNRL, NRC/NRR/DNRL/NVIB
To: Kalikian R, 301-415-5590
Shared Package: ML22143A408
Download: ML22143A840 (23)


Text

Thoughts on Inspections and Sampling

Dan Widrevitz & Dave Rudland
May 25, 2022

Topics

  • Motivation
  • What are inspections for?
  • Inspections and Uncertainties
  • Bathtub curve
  • Inspection modeling
  • Risk-informed decision making
  • Examples

Motivation

The NRC has noted an increase in ISI-related submittals that are explicitly or implicitly risk-informed, principally focused on ISI frequencies.

Many of these submittals contain novel applications of probabilistic modeling or other risk-based arguments justifying adjustment of ISI.

The mixture of qualitative arguments regarding the state of knowledge and (in some cases) probabilistic analysis requires staff to contextualize each application within the risk-informed decision-making (RIDM) framework.

Motivation: Inspections in RIDM

Inspections are a key aspect of RIDM and are necessary to reach regulatory conclusions.

Staff are seeking to bridge qualitative arguments, historical precedent, and historical rules of thumb to match inspection proposals with more quantitative insights.

Staff seek to answer: how do we judge a proposed number of inspections against current practice?

What are inspections for?

Inspections answer important questions through direct observation:

  • Was a component fabricated to its design?
  • Was a component installed properly?
  • Etc.

What we often use inspections for:

  • Direct evidence of the state of an SSC
  • Data to feed/confirm models
  • Diverse and timely assurance of SSC integrity/function/etc.

What are inspections for?

Utilizing results

Inspection results can support modeling. This requires:

  • Sufficient data to model the mean
  • Additional data to model (or at least bound) the variance

Inspection results can support follow-on actions:

  • Confirm or negate presence of degradation
  • Monitor integrity
  • Detect potentially novel degradation

Bathtub curve: Framing inspection programs

How many inspections? Where? For how long?

What are the plausible degradation modes that may threaten operation? How can we detect them? When may they occur? How long may we have between detection and failure?

[Figure: bathtub curve of failure chance over life: burn-in, maturity, wear-out]

Bathtub curve: Shifting priorities

Burn-in - inspections are necessary to rapidly identify novel degradation and describe it (mean, variance, time-dependence)

Maturity - inspections are necessary to validate/confirm modeling and detect initiation of novel degradation (mean, trigger inspection program expansion)

Wear-out - inspections provide identification of entering the wear-out period

[Figure: bathtub curve of failure chance over life: burn-in, maturity, wear-out]

Inspection Modeling

We will present plots comparing inspection scenarios to illuminate program sensitivity.

A binomial distribution can approximate results quickly.

Sensitivity here is the mean probability of at least one detection over many simulations.

Monte Carlo can be used to evaluate more complex scenarios (POD, time effects, etc.)

Under this approximation, the probability of at least one detection is

    P(at least one detection) = 1 - (1 - p)^n

where p is the fraction of the population with detectable degradation and n is the number of inspections.
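A minimal sketch of this approximation (Python; the helper name is illustrative, and the incidence values mirror the 600-weld figure on the next slide):

    def p_any_detection(p, n):
        # Binomial approximation: probability that at least one of n
        # inspections hits a detectable item at population incidence p
        return 1.0 - (1.0 - p) ** n

    for k in (1, 6, 30, 60):
        print(k, p_any_detection(k / 600, 150))  # sensitivity at 150 exams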

Inspection Modeling: Inspections vs. Sensitivity

The ability to detect (on average) a given level of detectable degradation in a population is a strong function of the detectable incidence (the percentage of the population with detectables) and the number of examinations.

Sensitivity improves with increasing incidence or examinations.

[Figure: Likelihood of any detection vs. number of inspections (0-300), for a 600-item population with 1/600, 6/600, 30/600, and 60/600 detectables]

Inspection Modeling: Mean and Variance

Mean sensitivity (probability of detection) is independent of the sampled population size, but variance is not!

Confidence in the mean value goes up with sample size (examining by percentage can help!)

[Figure: Mean and variance of estimated population incidence for n = {10, 20, 50} examinations, ±1 standard deviation plotted]
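A sketch of the variance behavior, assuming the same simple binomial sampling model (the figure's ±1 SD bands take this form; the 30% incidence value below is an illustrative pick):

    import math

    def incidence_sd(p, n):
        # Standard deviation of the estimated incidence from n exams
        return math.sqrt(p * (1.0 - p) / n)

    for n in (10, 20, 50):
        print(n, incidence_sd(0.30, n))  # SD shrinks as n grows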

Inspection Modeling: Incidence vs. Sensitivity

Finding rare occurrences requires many inspections, approaching 100% inspection if rare occurrences are sufficiently important to detect as early as possible.

[Figure: Percentage of population with detectables (0-20%) vs. number of inspections (0-300), with curves at 95%, 50%, and 5% probability of success]
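Inverting the binomial form gives the number of inspections needed for a target sensitivity; a minimal sketch (Python; the helper name is illustrative):

    import math

    def inspections_needed(p, target):
        # Solve 1 - (1 - p)^n >= target for n (binomial assumption)
        return math.ceil(math.log(1.0 - target) / math.log(1.0 - p))

    print(inspections_needed(0.01, 0.95))  # 299: ~300 exams for 1% incidence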

Inspection Modeling: Population vs. Sensitivity

Inspection schemes have different sensitivities.

Sensitivity relates to the total population as well as to the number of inspections (Monte Carlo result).

Fixing the number of samples (rather than sampling by percentage) has limits.

[Figure: Probability of any detection vs. number of welds in population (0-500), for 2, 10, and 50 inspections, with 5 detectables in the population]
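The slide's curves come from Monte Carlo; the same quantity has an exact hypergeometric form when n distinct items are sampled without replacement from N items containing K detectables. A sketch (Python; the helper name is illustrative):

    from math import comb

    def p_any_detection_finite(N, K, n):
        # 1 - P(a sample of n misses all K detectables), without replacement
        return 1.0 - comb(N - K, n) / comb(N, n)

    for N in (100, 200, 500):
        print(N, p_any_detection_finite(N, 5, 50))  # 50 exams, 5 detectables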

Other inspection factors

Other factors to be considered:

  • Timeliness of detection
  • Consequence of detection
  • Consequence of later detection versus earlier detection
  • Capabilities of monitoring technology

[Figure: bathtub curve of failure chance over life: burn-in, maturity, wear-out]

Principles of Risk-Informed Decision Making (1)

The five principles of risk-informed decision making form a holistic decision basis.

They are not separable: you cannot wholly replace Principle 5 with improvements to Principle 4.

Principles of Risk-Informed Decision Making (2)

In the materials engineering context, we often have safety margins defined through the use of ASME BPV Code design requirements.

Risk analysis often takes the form of modeling, such as PFM.

Performance monitoring includes inspections, leak detection, etc.

[Diagram: Degradation Modeling (PFM, etc.); Monitoring and Inspections (ISI, etc.); Design Margins (etc.)]

Degradation Modeling: What is it for?

Degradation modeling allows the prediction of future degradation based on modeling assumptions (epistemic knowledge). Uses include:

  • Design optimization
  • Inspection optimization
  • Future planning (repair, replacement, etc.)

Reliability approaches rely on modeling and (often) Bayesian methods to build maintenance and inspection programs. The pure use of this approach is considered risk-based by the NRC (Principle 4).

Performance Monitoring: What is it for? (1)

Performance monitoring, in the Principle 5 sense, provides:

  • Direct evidence of the presence and/or extent of degradation
  • Validation/confirmation of the continued adequacy of analyses
  • A timely method to detect novel/unexpected degradation

It can inform regarding uncertainties:

  • Epistemic uncertainties - model, parameter, and completeness uncertainties
  • Aleatory uncertainties - stochastic randomness

Performance Monitoring: What is it for? (2)

Performance monitoring works together with other approaches, such as degradation modeling, to provide assurance of integrity with a high degree of confidence.

Running to failure is not an adequate program.

Significant systems must be maintained with a high degree of confidence in the assurance of their function. Uncertainties must be handled both by modeling investigation (sensitivity studies, etc.) and by ongoing inspection (model and completeness uncertainties).

Examples: Thought Experiment

Licensee proposal: Component inspection requires a high degree of sensitivity, with a 95% mean detection probability. The proposal is for 50 inspections; will rare degradation be detected?

Assumptions: Binomial statistics are appropriate.

How rare is 95% likely to be detected? ~5.8% population incidence rate.

How many inspections for a 1% population incidence rate? ~300.

Staff thoughts: A ~1/17 incidence rate is not especially rare. Binomial estimates are driven by the number of inspections. Having a very high chance of finding rare occurrences requires very high numbers of inspections.
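A quick cross-check of these numbers under the stated binomial assumption (Python; the helper name is illustrative):

    import math

    def p_any(p, n):
        # Probability of at least one detection in n inspections
        return 1.0 - (1.0 - p) ** n

    print(p_any(0.058, 50))                                  # ~0.95
    print(math.ceil(math.log(0.05) / math.log(1.0 - 0.01)))  # 299 -> ~300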

Examples: Thought Experiment 2

Licensee proposal: Component has 10 inspections per ASME Code for each unit every 10 years; the licensee proposes 2 inspections every 20 years.

Assumptions: Population incidence of detectable indications is 5%.

Per Unit Monte Carlo sensitivity: ~20%

Fleet Monte Carlo sensitivity (assuming 40 units): ~99%

Staff thoughts: The proposed inspection scheme would have a very low ability to provide assurance regarding component- and unit-specific degradation, but very good sensitivity to fleetwide generic degradation. Expansion would be warranted if a fleet detection occurred.
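A rough binomial cross-check of the fleet-level figure (the per-unit ~20% value depends on Monte Carlo details not stated on the slide):

    def p_any(p, n):
        # Probability of at least one detection in n inspections
        return 1.0 - (1.0 - p) ** n

    # 40 units x 2 inspections each, at 5% incidence:
    print(p_any(0.05, 2 * 40))  # ~0.98, consistent with the slide's ~99%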

Examples: PWR Weld Exams, WCAP-16168 TR Case

Analysis: Primarily PFM analysis addressing the risk delta of conditional RV failure frequency due to stress and different inspection scenarios.

Performance monitoring plan: Extension of the ISI interval to a maximum of 20 years. Fleet inspections are coordinated to ensure regular data at the population level (monitoring and trending).

One-time inspection for subsequent extensions to validate that the generic flaw distribution used in the report bounds the plant-specific distribution per 10 CFR 50.61a(e) (model validation).

QUESTIONS