ML21133A485
Issue date: 05/14/2021
From: Matthew Homiack, NRC/RES/DE
Shared package: ML21133A483
Technical Letter Report TLR-RES/DE/CIB-2021-11
Sensitivity Studies and Analyses Involving the Extremely Low Probability of Rupture Code
Date: May 14, 2021
Prepared in response to Task 2 in User Need Request NRR-2014-004, by:
C. J. Sallaberry and R. Kurth Engineering Mechanics Corporation of Columbus with contributions from Dominion Engineering, Inc. and Structural Integrity Associates under contract to the Electric Power Research Institute.
NRC Project Managers:
Patrick Raynaud, Sr. Materials Engineer, Component Integrity Branch
Shah Malik, Sr. Materials Engineer, Component Integrity Branch
Matthew Homiack, Materials Engineer, Component Integrity Branch
Division of Engineering
Office of Nuclear Regulatory Research
U.S. Nuclear Regulatory Commission
Washington, DC 20555-0001
DISCLAIMER THIS PUBLICATION WAS PREPARED AS AN ACCOUNT OF WORK JOINTLY SPONSORED BY THE ELECTRIC POWER RESEARCH INSTITUTE (EPRI) AND AN AGENCY OF THE U.S.
GOVERNMENT. NEITHER EPRI NOR THE U.S. GOVERNMENT NOR ANY AGENCY THEREOF, NOR ANY EMPLOYEE OF ANY OF THE FOREGOING, MAKES ANY WARRANTY, EXPRESSED OR IMPLIED, OR ASSUMES ANY LEGAL LIABILITY OR RESPONSIBILITY FOR ANY THIRD PARTY'S USE, OR THE RESULTS OF SUCH USE, OF ANY INFORMATION, APPARATUS, PRODUCT, OR PROCESS DISCLOSED IN THIS PUBLICATION, OR REPRESENTS THAT ITS USE BY SUCH THIRD PARTY COMPLIES WITH APPLICABLE LAW.
EPRI RETAINS COPYRIGHT IN EPRI-GENERATED MATERIALS CONTAINED IN THIS PUBLICATION.
THIS PUBLICATION DOES NOT CONTAIN OR IMPLY LEGALLY BINDING REQUIREMENTS.
NOR DOES THIS PUBLICATION ESTABLISH OR MODIFY ANY REGULATORY GUIDANCE OR POSITIONS OF THE U.S. NUCLEAR REGULATORY COMMISSION AND IS NOT BINDING ON THE COMMISSION.
EXECUTIVE SUMMARY
Sensitivity studies and sensitivity analyses are two typical elements of a comprehensive probabilistic analysis that can provide important insights into structural analyses performed with probabilistic fracture mechanics techniques. Sensitivity studies can be used to assess the impacts of uncertain parameters and analysis assumptions on the results. The conclusions from such studies can then be used to justify or prompt refinement of analysis choices.
Sensitivity analysis is a useful tool for identifying important uncertain model inputs that explain a large degree of the uncertainty in a quantity of interest. Following development of the Extremely Low Probability of Rupture (xLPR) Version 2 (V2) probabilistic fracture mechanics code, the U.S. Nuclear Regulatory Commission's Office of Nuclear Regulatory Research (RES) staff, with the assistance of its technical support contractors and in collaboration with the Electric Power Research Institute, performed sensitivity studies and analyses with the objective of identifying which of the xLPR V2 models, model parameter inputs, and input variables contribute most significantly to uncertainty in the results. This objective was accomplished in three phases, which are detailed in Volumes 1, 2, and 3, respectively. The three phases involved developing a template approach for conducting problem-specific sensitivity studies and analyses, applying the approach, and then benchmarking the analysis techniques to confirm the conclusions.
Aspects of the template approach may be generally applicable to other types of probabilistic analyses.
The template approach was developed in the first phase to outline a standard process for conducting problem-specific sensitivity studies and analyses for xLPR V2 applications.
Potential methods and tools were investigated, described, and assessed as part of developing this template. The template analysis process includes five primary steps:
- 1. Generate Reference Simulations - In this step, initial deterministic and probabilistic simulations are run to characterize the problem and verify that it has been appropriately defined with the corresponding input values and probability distributions. This step also includes defining the quantities of interest that will be used for decisionmaking purposes or to otherwise evaluate the problem. The results are used as a reference in the remaining steps.
- 2. Conduct Uncertainty and Sensitivity Analyses - In this step, statistical analyses are conducted to characterize the probabilistic simulation results. Uncertainty analysis characterizes uncertainty in the outputs with statistical metrics such as the mean and confidence intervals. These statistical metrics enable the results to be compared against applicable decisionmaking thresholds. Sensitivity analysis estimates and ranks the contribution of each input's uncertainty to the output uncertainty. This type of analysis can be used both to check the validity of the simulation and to ascertain those inputs and models that are the most important drivers of uncertainty.
- 3. Perform Sensitivity Studies - Sensitivity studies, which can be either deterministic or probabilistic, are variations on the reference simulations used to answer specific questions posed by an analyst or decisionmaker. For example, the analyst might test new crack
growth model parameter inputs or estimate the impact of a more severe welding residual stress (WRS) profile to determine whether they have a significant effect on the quantities of interest. Sensitivity studies may also consider alternative scenarios (e.g., extreme conditions) to increase confidence in the results.
- 4. Revisit Assumptions - This step is an optional iteration that may be performed when the previous steps have raised questions concerning the input parameter values, distributions, or other elements of the analysis. This step consists of revisiting the chosen inputs and then rerunning the original simulation or sensitivity study to determine whether the input changes affect the conclusions.
- 5. Optimize Simulations - This is an optional step that, as presented, is specific to leak-before-break problems that may be analyzed using the xLPR V2 code. It describes how to use the features of the code to confidently estimate extremely low probabilities (e.g.,
events occurring in the range of 10⁻⁶). Depending upon the specifics of the problem being analyzed, this step may or may not be applicable.
As part of the first phase, the template approach was illustrated by applying it to a problem representing the Virgil C. Summer Nuclear Generating Station, Unit 1 hot leg pipe-to-reactor-pressure-vessel nozzle weld. A leak in this weld led to identification of a service-induced axial through-wall crack and axial and circumferential surface cracks. Different analysis strategies were used in consideration of the variety of studies that could be performed using xLPR V2.
The analysts used the COMPMODSA suite of tools for the R language to conduct the sensitivity analyses. In addition, the xLPR V2 results were benchmarked against results from a comparable probabilistic fracture mechanics code and checked by experts to confirm that they were consistent with the expected behavior. Through this effort, the analysts demonstrated that xLPR V2 can perform the wide range of intended analyses, and that most of the methods and tools required to implement the template approach are either directly implemented in the code or can be implemented using simple spreadsheets with a moderate amount of effort. The most important uncertain inputs were determined to be the WRS profiles and the proportionality constants used in the primary water stress-corrosion cracking (PWSCC) initiation model, as well as the variability factors used in the PWSCC growth model. Higher operating temperatures were also found to have an influence on the results.
In the second phase, the template approach developed in phase one was confirmed by applying it to another problem. The problem selected for study in this phase represented the Tsuruga Nuclear Power Plant, Unit 2 pressurizer relief valve nozzle-to-safe end weld. A leak in this weld led to identification of a service-induced axial through-wall crack and surface cracks. This problem was selected because it was deemed to be different enough from the Virgil C. Summer problem to demonstrate the adequacy and flexibility of the template approach. The WRS profile was determined to be one of the most important uncertain inputs in phase one, and the differences in the mean axial WRS profiles derived analytically for the two problems were large enough to result in significantly different expected behaviors. In the Tsuruga problem simulation, a significantly higher axial WRS value at the inside surface of the weld leads to a higher probability of occurrence of circumferential cracks. The Tsuruga mean axial WRS profile then becomes highly compressive in the region between approximately 25 and 65 percent
through-wall depth. This highly compressive, mid-wall stress causes most of the circumferential cracks in the simulation to arrest in this region, and any circumferential cracks that grow beyond this region are large enough to lead to rupture as soon as they break through-wall. In contrast, the Virgil C. Summer problem axial WRS profile is generally tensile in this mid-wall region and results in more through-wall cracks that can be detected by leakage before rupture. The results from the Tsuruga analysis were also consistent with expectations based on the inputs considered. The most important uncertain inputs were determined to be the same as in the Virgil C. Summer problem.
Finally, in the third phase, the sensitivity study and analysis techniques developed by RES contractors were benchmarked against techniques used in the industry. The goal of the benchmark comparison was to determine whether the conclusions drawn were affected by the analysis techniques. The benchmark problem represented a typical, unmitigated safe end-to-steam generator inlet nozzle dissimilar metal weld in a Westinghouse four-loop pressurized-water reactor. A 15 percent weld depth repair WRS profile was assumed because it favored PWSCC initiation and thus provided more datapoints for comparison.
Four quantities of interest were selected for the benchmark comparison to represent different types of outputs that may be considered in future xLPR V2 analyses. The first quantity of interest was the probability of circumferential crack initiation over 60 years. This quantity was selected because circumferential crack initiation occurs relatively rarely and is represented by a discrete, binary (e.g., yes-or-no type) output. The second quantity of interest was the probability of leakage over 60 years. Leakage is also represented by a discrete, binary output, but the probability is higher as it takes both axial and circumferential cracks into account. The third quantity of interest selected was the number of axial cracks over 60 years. Compared to the first and second quantities, the number of axial cracks is also discrete, but not binary. The fourth quantity of interest was the leak rate over 60 years. The leak rate was selected because it is a semi-continuous output (i.e., either equal to 0 or a continuous value in the statistical sense).
The analysts used a range of techniques for the benchmark comparison. Some of the techniques were qualitative and others were quantitative. One industry partner in the benchmarking study used a local sensitivity method in conjunction with first and second order reliability methods. Another industry partner used learning models including qualitative classifiers such as gradient boosting decision trees, random forest decision trees, and linear support vector machines with the addition of regression models such as gradient boosting regression. The RES contractors used linear regression as well as nonmonotonic regressions, such as recursive partitioning and multivariate adaptive regression splines (MARS), coupled with Sobol decomposition techniques.
All the analysis techniques were equally capable of ascertaining the most important inputs affecting the different types of outputs considered. The RES contractors' regression techniques are similar to the industry partners' learning model techniques. The differences are mostly due to averaging of the xLPR V2 spatially varying inputs in the learning model techniques and use of MARS in the regression techniques. The industry partners' local sensitivity method technique is
a somewhat different approach but was still capable when a monotonic assumption is preserved and when the output can be expressed as a reliability measure. Use of this technique was found to be superior to MARS when analyzing discrete, binary outputs, because MARS uses splines as the regression technique, which are not suited to represent discrete values.
Even with a very small number of events, the linear or rank regression and recursive partitioning techniques all identified the most important input parameters as reported by the coefficient of determination and consistent with understanding of the model (most notably the importance of crack initiation). These parameters are candidates for importance sampling in xLPR V2. When the analysis is performed using a PWSCC initiation model, the parameters associated with this model are the main driver of failure probabilities. Consistent with analyst expectations, rank regression performed better than linear regression in most, but not all, cases. Additionally, the regression techniques are similarly effective when the input set includes many noninfluential variables (e.g., more than 130 in the case of one of the xLPR V2 simulations considered).
While reducing the number of uncertain inputs in a simulation to those that have been determined through sensitivity study or analysis to have an impact leads to a cleaner analysis (especially when there are only a few events), care should be exercised to not remove uncertain input variables that have noticeable contributions to the output uncertainty.
TABLE OF CONTENTS
Volume 1: Template Analysis for a Selected Scenario using the xLPR Code
Volume 2: Analysis of Tsuruga Pressurizer Nozzle using the xLPR Code
Volume 3: Sensitivity Studies Comparison Analysis
VOLUME 1 FINAL REPORT
Template Analysis for a Selected Scenario using the xLPR Code
for NRC-HQ-60-14-E-0001, NRC-HQ-60-14-T-0014
Extremely Low Probability of Rupture (xLPR) Leak-Before-Break (LBB) Regulatory Guide Support
NRC CONTRACT NUMBER - NRC-HQ-60-14-E-0001-T-0001, Task Order No. 1
Small Business Task Order: FRACTURE MECHANICS STRUCTURAL INTEGRITY EVALUATION, ANALYSIS & SUPPORT on Extremely Low Probability of Rupture (xLPR) Leak-Before-Break (LBB) Regulatory Guide Support
ENGINEERING MECHANICS CORPORATION OF COLUMBUS
3518 RIVERSIDE DRIVE - SUITE 202
COLUMBUS, OHIO 43221-1735
Volume 1 TABLE OF CONTENTS
1.0 Overview ... 4
2.0 Recommended Approach ... 6
3.0 Generation of Reference Probabilistic and Deterministic Cases ... 8
3.1 Selection of Inputs and Outputs ... 8
3.2 Implementation in GoldSim ... 13
4.0 Folder Structure ... 18
5.0 Deterministic Reference ... 19
6.0 Probabilistic Reference: SA, UA and Stability Analysis ... 21
6.1 V.C. Summer Sensitivity Analysis (DM1 Initiation) ... 22
6.1.1 Regression Analysis on Circumferential Crack Initiation Time ... 23
6.1.2 Regression Analysis on Circumferential Crack Initiation Occurrence ... 24
6.1.3 Regression Analysis on Axial Crack Initiation Time and Occurrence ... 26
6.2 V.C. Summer Sensitivity Analysis (Initial Flaw) ... 29
6.2.1 Interpretation of Conditional Results ... 29
6.2.2 Regression Analysis on Time to Circumferential Through-Wall Crack ... 31
6.2.3 Regression Analysis on Time to Axial Through-Wall Crack ... 32
6.2.4 Regression Analysis on Time to Rupture ... 33
6.2.5 Regression Analysis on Time between Circumferential Leakage and Rupture ... 35
6.2.6 Regression Analysis on Ratio of Length to Depth for Largest Circumferential Surface Crack ... 36
6.3 Uncertainty Analysis ... 38
6.3.1 General Summary ... 38
6.3.2 Axial Crack Depth ... 38
6.3.3 Circumferential Crack Depth ... 40
6.3.4 Number of Axial Cracks Occurring ... 40
6.4 Stability Analysis ... 42
6.4.1 Probability of First Crack Occurring ... 43
6.4.2 Probability of First Leak Occurring ... 45
6.4.3 Probability of Pipe Rupture ... 48
7.0 Deterministic Sensitivity Studies ... 48
7.1 MSIP Analysis ... 48
7.2 Overlay Analysis ... 52
7.3 One-at-a-time Sensitivity Studies ... 55
7.4 Influence of Crack Morphology ... 58
8.0 Probabilistic Sensitivity Studies ... 60
8.1 Application of Uncertainty on Temperature ... 61
8.1.1 Influence of Temperature Uncertainty on Crack Initiation ... 61
8.1.2 Influence of Temperature Uncertainty on Crack Growth ... 63
8.2 Probabilistic Sensitivity Study on Inlay Mitigation ... 66
8.3 Probabilistic Sensitivity Study on Overlay Mitigation ... 71
9.0 Revisiting Uncertainty Parameter ... 73
10.0 More Accurate Analyses ... 74
10.1 Comparison of Stability Analyses ... 74
10.1.1 Increasing Sample Size: Circumferential Crack Initiation ... 74
10.1.2 Increasing Sample Size: Probability of Leakage due to Circumferential Crack ... 75
10.1.3 Importance Sampling: Probability of Leakage ... 77
11.0 No Event Occurring ... 78
12.0 Summary ... 79
Appendix A: Mitigation Rules Applied to WRS Profiles ... A-1
A.1. Axial WRS Profile Update for MSIP Mitigation ... A-1
A.2. Hoop WRS Profile Update for MSIP Mitigation ... A-7
A.3. Axial WRS Profile Update for Overlay Mitigation ... A-8
A.4. Hoop WRS Profile Update for Overlay Mitigation ... A-10
A.5. Axial WRS Profile Update for Inlay Mitigation ... A-11
A.6. Hoop WRS Profile Update for Inlay Mitigation ... A-11
Appendix B: Estimation of Confidence Intervals ... B-1
B.1. Bootstrap Approach ... B-2
B.2. Inverse Binomial Distribution ... B-2
Appendix C: Methodology Used for Regression Analysis ... C-1
C.1. Linear and Rank Regression Analysis ... C-1
C.2. Information Available in the Linear Regression Column ... C-2
C.3. Nonmonotonic Regressions ... C-3
C.4. Information Presented for Non-Monotonic Regressions ... C-4
LIST OF ACRONYMS
CCDF    Complementary Cumulative Distribution Function
CDF     Cumulative Distribution Function
CI      Confidence Interval
FOI     Factor of Improvement
ID      Inner Diameter
LBB     Leak Before Break
LCB     Lower Confidence Bound
LEAPOR  Leak Analysis of Piping - Oak Ridge
MARS    Multi-Adaptive Regression Splines
MSIP    Mechanical Stress Improvement Process
PFM     Probabilistic Fracture Mechanics
PMF     Probability Mass Function
PWSCC   Primary Water Stress Corrosion Cracking
RLZ     Realization(s)
SA      Sensitivity Analysis
SC      Surface Crack
TWC     Through Wall Crack
UA      Uncertainty Analysis
UCB     Upper Confidence Bound
WOL     Weld Overlay
WRS     Weld Residual Stress
xLPR    Extremely Low Probability of Rupture
1.0 Overview
With Version 2 of the xLPR code (extremely Low Probability of Rupture) complete, an important task is to exercise the code in support of Leak Before Break (LBB) and similar Probabilistic Fracture Mechanics (PFM) analyses. The purpose of this study was to develop and present a recommended approach for the analysis of a given scenario (e.g., dissimilar metal weld in the V. C. Summer nuclear power plant) to support the development of potential future regulatory guidance.
Risk-informed analysis of nuclear power plant primary system piping is complex and involves many aspects. While a large effort was invested to simplify the xLPR code to permit risk-informed analyses to be performed in a practical manner, such analyses remain complex and the xLPR code includes many features and options. Furthermore, any complex analysis requires a careful approach and will always involve several simulations to demonstrate confidence in the results and a good understanding of the system, both in the physical and risk senses.
A scenario is a specific problem of interest to the decisionmaker or analyst. In the current study, it represents either a specific or a generic weld in a nuclear power plant piping system. The specific welds considered so far in xLPR code development are from the V.C. Summer, North Anna, Ringhals, and Tsuruga plants. The generic welds considered so far are as defined by the xLPR Inputs Group for Babcock and Wilcox reactor coolant pump inlet nozzles, Westinghouse Reactor Pressure Vessel (RPV) outlet nozzles, and Westinghouse steam generator (SG) inlet nozzles.
The recommended analysis steps, which are further detailed in this report, are as follows:
Define purpose of the analysis: This step will help determine which outputs need to be analyzed and which runs need to be performed.
Generate probabilistic and deterministic reference cases: These analyses use the recommended values from the xLPR Inputs Group and serve as reference cases.
Perform sensitivity analyses: Sensitivity analysis estimates the importance of and ranks the uncertain inputs in terms of their contribution to uncertainty in the results. It is used to increase confidence in the model and to estimate the portion of the input space that needs further analysis.
Perform uncertainty and stability analyses: Uncertainty analysis summarizes the results for outputs of interest in terms of risk and helps the decisionmaker. Stability analysis assesses the analyst's confidence in the results for the reference case and the potential need for a larger sample size or importance sampling.
Perform deterministic sensitivity studies: These sets of deterministic runs are compared to the deterministic reference case to understand the impacts of some selected inputs or alternative scenarios on the results. They focus on the physics of the system (i.e., the consequences).
Perform probabilistic sensitivity studies: These sets of probabilistic analyses are compared to the probabilistic reference case to understand the impact of some selected inputs or alternative scenarios on the results. They focus on the risk (i.e., the probability and consequences).
Revisit uncertain parameters: Once all the above analyses have been performed, a short list of inputs is identified as being of key importance to the analysis. Revisiting their distributions or associating a distribution in the case of constant parameters allows for increased confidence and brings more insights into the analysis. It may even reduce the uncertainty if improved distributions can be generated.
Run enhanced simulations: It is unlikely that the initial reference runs would suffice to draw conclusions for decisionmaking purposes, especially considering the rarity of the events under consideration. In addition to the steps described above, larger sample sizes, importance sampling and/or adaptive sampling can be used to generate more precise and stable responses for the outputs of interest.
xLPR Beta_v2.1_R13, which would eventually become the official xLPR Version 2.0, was used to perform most of the simulations in this study. To enhance these simulations, the Emc2-developed PFM code, PROMETHEUS, which was used to support xLPR validation testing, was also used. The regression analyses were performed using the COMPMODSA suite of tools for the R language. Some of the steps may require other analyses depending on the scenario under consideration. However, in the template approach presented here, the analyses and outputs considered are intended to be as exhaustive as possible. Other scenarios may require analyses not presented in this document, but this should not influence the global approach highlighted herein.
As outlined in Section 11.0, the probability of pipe rupture when leak rate detection is active is lower than 1 x 10⁻⁶ over 60 years of plant operation despite some conservative assumptions.
Confidence in this result was reached considering all the analyses that were performed. The results generated as well as the conclusions drawn were in concordance with the expectations of fracture mechanics and risk analysts.
2.0 Recommended Approach
The following sections provide more detail for the different steps proposed for the analysis of a selected scenario. These steps are summarized below.
- 1. Generation of reference runs: using the recommended values from the Inputs Group, the code is run both probabilistically and deterministically to serve as a reference for our current state of knowledge. Due to the specificities and importance of crack initiation at least two probabilistic runs are recommended: one using Direct Model 1 as the initiation mechanism and another one using an initial flaw (with one axial crack and one circumferential crack). The deterministic case is used to study crack evolution and it is thus recommended to use an initial flaw (with one axial crack and one circumferential crack). The input file uses median values as recommended for all uncertain parameters that are set to constant for the deterministic run.
- 2. Preparation of the output: Selection of the output of interest. Depending on the purpose of the analysis (for instance utility relief request, assess risk of LBB, assess risk of rupture),
the output of interest may change. The nature of the output (indicator function vs. physical value, time-dependent vs. scalar) also has some influence. Nevertheless, even if only one metric is used and presented at the end, it is recommended to broaden the perspective to at least a handful of metrics to increase confidence in the results.
- 3. Sensitivity Analysis (SA), Uncertainty Analysis (UA), and Stability Analysis: Once the probabilistic run is performed, sensitivity analysis is used to verify that the results make sense and to prioritize the uncertain inputs with respect to their influence on the output uncertainty. Sensitivity Analysis is performed using three types of regression analyses (rank regression, recursive partitioning and MARS) as well as scatterplots for the presentation. Uncertainty Analysis is then used for each output of interest to summarize the current state of knowledge - we recommend the use of horsetail plots (see footnote 1), summary statistics and complementary cumulative distribution functions (CCDFs). The data presented depends on the type of analysis and the nature of the output considered.
Stability Analysis is performed to assess the appropriateness of the sample size in order to reach stable results. The definition of stable results needs to be established without ambiguity as it may vary depending on the type of analysis performed. Confidence intervals around the quantity of interest are estimated using a bootstrap and/or binomial approach as presented in APPENDIX B (a short illustrative sketch of both confidence-interval approaches follows this list).
- 4. Deterministic Sensitivity Studies: One at a time variation analyses which serve multiple purposes:
- a. Gain a better understanding of the forces driving crack evolution and all other aspects of the problem being considered
- b. Test the influence of inputs that were considered constant and thus ignored by the sensitivity analysis
- c. Development of alternative scenarios (mitigation, extreme conditions, severe accident) to study the performance of the weld in different configurations and environments
Deterministic sensitivity studies increase the confidence in the physics used by the model and allow focus on a particular mechanism or parameter.
- 5. Probabilistic Sensitivity Studies: An extension of the deterministic sensitivity studies.
Instead of comparing a deviation from a reference case which is only one point in the input space, the comparison is made globally. It includes:
- a. A better understanding of the risk associated with crack evolution
- b. Testing the influence of inputs that were considered constant (or with low uncertainty) or changes in some of the distributions
- c. Development of alternative scenarios and study of their effect on the risk
Probabilistic studies are used to extend the deterministic sensitivity studies to the risk realm and to study specific aspects not considered in the deterministic model, such as crack initiation.
- 6. Revisiting Uncertain Parameters: The previous analyses are performed to increase knowledge of a scenario of interest and provide confidence in the results. They also help the analysts rank the variables in terms of importance (including inputs considered constant). This section revisits some of the uncertain parameters (or constant parameters considered uncertain) to ensure that the distribution proposed is defensible and that it is the best possible.
Footnote 1: Horsetail plots are either CCDFs or time-dependent estimates that are plotted for each realization of a probabilistic run and tend to look like a series of horsetails when many are plotted.
- 7. Enhanced simulations: In this section a larger sample size and importance sampling are used to increase the stability of the outputs of interest when needed. Separation of aleatory and epistemic uncertainty is also considered to better separate the risk issues from the confidence over the estimated risk. When the most influential inputs are identified, these inputs should be considered for importance sampling. In this way the amount of data saved in order to run larger sample sizes is minimized.
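As a point of reference for the stability analysis in step 3, the following is a minimal sketch, in R (the language used for the COMPMODSA regressions), of the two confidence-interval approaches mentioned above. The rupture indicator vector is synthetic and purely illustrative; it is not output from an actual xLPR run.

    set.seed(1)
    n_rlz <- 2500
    # synthetic 0/1 indicator of rupture before 60 years, one value per realization
    rupture <- rbinom(n_rlz, size = 1, prob = 0.002)

    # bootstrap confidence interval on the estimated probability of rupture
    boot_means <- replicate(2000, mean(sample(rupture, replace = TRUE)))
    quantile(boot_means, c(0.05, 0.95))

    # exact (Clopper-Pearson) binomial confidence interval on the same probability
    binom.test(sum(rupture), n_rlz)$conf.int

The bootstrap resamples the available realizations, while the binomial interval only uses the number of events and the sample size; both become wide when the number of events is very small, which is the signal that a larger sample size or importance sampling is needed.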
3.0 Generation of Reference Probabilistic and Deterministic Cases
The first step of any scenario analysis is to set up reference runs. These analyses use the most recently updated set of inputs recommended by the xLPR Inputs Group.
The probabilistic runs use a sample size of 2,500. In the reference case, no distinction is made between aleatory and epistemic uncertainty: only one type is used, and the input set Excel file is updated accordingly. While the choice between the outer (epistemic) and inner (aleatory) loop does not matter in a numerical sense, the specificities of GoldSim for the sub-model elements make it more logical to use the inner loop for cases when separation of aleatory and epistemic uncertainties is not required. All the uncertain variables are then set to aleatory. However, running 2,500 realizations in the inner model leads to memory issues. As a result, the sample size is kept at 100 for epistemic and 25 for aleatory. A resampling (over the aleatory values) for each epistemic realization ensures that each input is sampled 2,500 different times.
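This bookkeeping can be illustrated with a minimal sketch in R; the 100-by-25 sample sizes come from the text above, while the input distribution shown is purely illustrative.

    n_epistemic <- 100
    n_aleatory  <- 25

    samples <- matrix(NA_real_, nrow = n_epistemic, ncol = n_aleatory)
    for (e in seq_len(n_epistemic)) {
      # the aleatory values are resampled for every epistemic realization, so
      # 100 x 25 = 2,500 distinct values of the input are drawn overall
      samples[e, ] <- rlnorm(n_aleatory, meanlog = 0, sdlog = 0.5)
    }
    length(unique(as.vector(samples)))  # effectively 2,500 distinct sampled values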
Previous analyses showed the importance of crack initiation in the mechanisms considered in Leak Before Break (LBB). Its influence is so important that it often hides the influence of other mechanisms. As a result, probabilistic runs are performed to serve as references:
One with Direct Model 1 used for PWSCC initiation
One with initial flaw with exactly one circumferential and one axial crack
For the deterministic reference run, all uncertain variables are set to their nominal (median) value as recommended by the Inputs Group. Since it would not be of interest to analyze a case with no occurring crack or a crack occurring late in the simulation, the deterministic case uses an initial flaw for crack initiation with one single circumferential and one single axial crack.
3.1 Selection of Inputs and Outputs
xLPR v2.0 gives the user control of more than 500 variables, each of which is potentially uncertain, with the uncertainty represented via a probability distribution. Furthermore, some of the variables are considered spatially varying and these values are sampled separately within each sub-segment. A typical analysis considers 19 sub-segments around the nozzle circumference, adding 18 new input values per spatially varying parameter. About 80 variables are treated spatially (some are considered in both the axial and circumferential directions), so roughly 80 x 18 = 1,440 additional values are added on top of the ~500 base variables, leading to a total of about 2,000 potentially varying inputs.
Therefore, it would not be practical to save all the potentially uncertain input values for two reasons:
Each input would need to be set up in a GoldSim element individually and each entry would be prone to mistakes.
Saving every input value for each realization would take computer memory, slowing down the code and reducing the sample size that can be used for such analysis.
As a result, it was decided to focus only on the variables considered uncertain by the xLPR Inputs Group.
The inputs not associated with a specific material are listed in the property tab. The xLPR Inputs Group has considered 24 of them to be uncertain, as listed in Table 1. In the current set of analyses, mitigation is not considered. As a result, variables 4351 (hoop WRS post mitigation) and 4353 (axial WRS post mitigation) are not included (highlighted in red in Table 1).
Variables 1201 to 1217 are used to sample crack initial sizes (in the depth and length direction) from three potential mechanisms (fatigue, PWSCC and initial flaw from fabrication). In the code, these variables are sampled for each sub-segment but used only when appropriate (if the crack is considered as an initial flaw or caused by PWSCC and/or fatigue). Instead of saving all these inputs (knowing some will not affect the calculations since they are not used), GoldSim elements reported the initial crack depth and length for axial and circumferential cracks, and the dimensions for the first 5 cracks that may occur over time in each direction were saved (for a total of 5 cracks x 2 directions x 2 metrics = 20 inputs). These values were normalized as a fraction of thickness (for depth), circumference (for length of circumferential crack) or weld length (for length of axial crack).
The xLPR code is designed so that all the inputs are sampled whether they are used or not, meaning that for any spatially varying parameter, 30 values (the maximum number of potential crack locations currently implemented in the model) are sampled (sometimes in both directions).
Table 1: Uncertain parameters in the properties tab

ID    Variable description                     Unit
1001  Effective Full Power Years (EFPY)        year
1201  Fatigue Initial Flaw Full-Length (*)     mm
1203  Fatigue Initial Flaw Depth (*)           mm
1205  PWSCC Initial Flaw Full-Length (*)       m
1207  PWSCC Initial Flaw Depth (*)             m
1210  Initial Flaw Full-Length (Circ) (*)      mm
1212  Initial Flaw Depth (Circ) (*)            mm
1215  Initial Flaw Full-Length (Axial) (*)     mm
1217  Initial Flaw Depth (Axial) (*)           mm
4350  Hoop WRS Pre-mitigation                  MPa
4351  Hoop WRS Post-mitigation                 MPa
4352  Axial WRS Pre-mitigation                 MPa
4353  Axial WRS Post-mitigation                MPa
5101  Log reg intercept param, beta_0 (circ)
5102  Log reg slope param, beta_1 (circ)
5103  Log reg intercept param, beta_0 (axial)
5104  Log reg slope param, beta_1 (axial)
5105  Depth-sizing bias term, a (circ)
5106  Depth-sizing slope term, b (circ)
5107  Depth-sizing bias term, a (axial)
5108  Depth-sizing slope term, b (axial)
9001  Fatigue Growth CKTH
9002  Surface Crack Dist Rule Modifier         mm
9003  TW Crack Distance Rule Modifier          mm

Table 2 lists all the uncertain inputs in the left pipe and right pipe tabs. There are respectively 10 inputs for the left pipe and 7 for the right pipe. The number is small because cracks initiating in the pipe segments are not considered (therefore no crack initiation parameters) and no PWSCC growth. The uncertain parameters are either properties affecting crack stability (such as Yield Strength and Ultimate Strength) or fatigue crack growth parameters. Note that the Init J-Resist parameters for left and right pipe (highlighted in red in Table 2) are not used by the xLPR code and are not considered (2106, 2107, 2108, 2306, 2307, 2308).
Table 2: Uncertain parameters in the left pipe (yellow) and right pipe (pink) tabs

ID    Variable description                     Unit
2101  Yield Strength, Sigy                     MPa
2102  Ultimate Strength, Sigu                  MPa
2105  Elastic Modulus, E                       MPa
2106  Material Init J-Resistance, Jic          N/mm
2107  Material Init J-Resist Coef, C           N/mm
2108  Material Init J-Resist Exponent, m
2164  Multiplier C1 (weld to weld)
2166  Multiplier C2 (weld to weld)
2168  Multiplier C3 (weld to weld)
2169  EAC Threshold Scaling Factor, Ckb
2301  Yield Strength, Sigy                     MPa
2302  Ultimate Strength, Sigu                  MPa
2305  Elastic Modulus, E                       MPa
2306  Material Init J-Resistance, Jic          N/mm
2307  Material Init J-Resist Coef, C           N/mm
2308  Material Init J-Resist Exponent, m
2361  Multiplier CSS (weld to weld)
Twenty-five inputs are listed as uncertain for the weld material and reported in Table 3. From that list, Yield Strength (2501) and Ultimate Strength (2502) are not considered since mixes of those parameters for left and right pipes are used instead (yield strength is also used for Direct Model 2 initiation but is not considered in this analysis).
Furthermore, only Direct Model 1 is considered for PWSCC crack initiation. The parameters associated with Direct Model 2 (2546 and 2547) and the Weibull (2551 and 2552) are not considered.
A set of uncertain parameters is spatially varying (2525: strain threshold; 2528: C0 (see footnote 2); 2542: proportionality constant; 2593: within-component variability factor). For those parameters, the sub-segment sampled values for the first five occurring cracks in both the axial and circumferential directions (i.e., 10 values) are saved.

Table 3: Uncertain parameters in the weld tab

ID    Variable description                     Unit
2501  Yield Strength, Sigy                     MPa
2502  Ultimate Strength, Sigu                  MPa
2505  Elastic Modulus, E                       MPa
2506  Material Init J-Resistance, Jic          N/mm
2507  Material Init J-Resist Coef, C           N/mm
2508  Material Init J-Resist Exponent, m
2521  Surface Finish Factor, FSURF
2522  Load Sequence Factor, FLOAD
2525  Strain Threshold, STH
2526  Multiplier STH
2528  C0
2529  Multiplier C0
2531  Zn Factor of Improvement - 1, FOIZn-1
2542  Proportionality Const, A (DM 1)          year MPa-1
2543  Multiplier proport. Const. A (DM1)
2546  Proportionality Const, B (DM 2)          year
2547  Multiplier proport. Const B (DM2)
2551  Weibull Vertical Intrcpt Error, EpsC
2552  General Weibull Slope, Beta
2571  Multiplier Cni (weld to weld)
2591  Activation Energy, Qg                    kJ/mol
2592  Comp-to-Comp Variab Factor, fcomp
2593  Within-Comp Variab Factor, fflaw
2594  Peak-to-Valley ECP Ratio - 1, P-1
2595  Charact Width of Peak vs ECP, c          mV

Footnote 2: Low-cycle fatigue variable in fatigue crack initiation model
Finally, the xLPR Inputs Group recommended two transients representing the heat-up and cool-down of the plant. Each transient is associated with an uncertainty multiplier which is included.
This leads to a total of 102 uncertain input values.
Most of the outputs considered in xLPR are indicator functions, set to zero (0) if the specific event does not happen and one (1) if it happens. They are presented as a function of time and integrated over aleatory uncertainty (i.e., the average of all aleatory runs for the same epistemic run is calculated) to estimate a probability of such event over time. Sensitivity Analysis can be performed for a specific time-step. Usually either the end of the simulation (60 years) or a specific time for a given power plant (for instance 17 years for V.C. Summer, since that was the age of the plant when boric acid traces were found, indicating a leak) is considered. While analyzing such output is perfectly valid, it informs only on the situation at a given time-step. Furthermore, when probability (average over aleatory uncertainty) is considered only the impact of epistemic uncertainty is assessed.
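The aleatory averaging described above can be sketched in a few lines of R; the indicator array below is synthetic (random 0/1 values standing in for the code's indicator outputs), and the dimensions simply reuse the 100 epistemic by 25 aleatory sample structure of the reference case.

    n_epistemic <- 100; n_aleatory <- 25; n_time <- 721   # monthly time steps over 60 years
    set.seed(1)
    leak_indicator <- array(rbinom(n_epistemic * n_aleatory * n_time, size = 1, prob = 0.01),
                            dim = c(n_epistemic, n_aleatory, n_time))

    # averaging over the aleatory dimension gives, for each epistemic realization,
    # an estimated probability of the event as a function of time
    prob_event <- apply(leak_indicator, c(1, 3), mean)   # 100 x 721 matrix
    dim(prob_event)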
In this first set of analyses, it was decided to consider the uncertainty as a whole and check the effect of aleatory uncertainty. The main outputs of interest are usually probability of occurrences which are expressed as indicator functions (0 if it does not happen and 1 if it happens). Since they are not very informative when considered individually, they were replaced in some of the studies with the following outputs:
Time of first circumferential crack occurrence
Time of first axial crack occurrence
Time of first leakage from circumferential crack
Time of first leakage from axial crack
Time of rupture
Maximum ratio between half-length and depth for the largest circumferential surface crack (largest being interpreted as with the largest surface)
3.2 Implementation in GoldSim
GoldSim's approach for sampling is to generate the values for uncertain inputs only when the realization is called. This has two advantages, which are (1) a faster way to run a particular realization (see footnote 3), and (2) a simplification when multiple processors are used. One of the drawbacks is that inputs cannot be saved as a table at the end of the simulation. Although there is an option to save the value within each element, and display it, the display is limited to the first 1,000 values.
It is thus simpler to save the sampled values when each realization is called, via an Excel element.
The first 1000 values display limitation is why the outputs described in the previous section are also copied into Excel.
Data is saved using an Excel element in GoldSim. The Excel element can be used for both reading data from an Excel file (used to read the input workbook values and options selected by the user) and for copying data into Excel. Each value of interest is saved for every realization, providing the advantage that in case of a failed simulation, everything is recorded up to the point of failure.
Footnote 3: When running a particular realization, GoldSim can start directly from the random seed associated with this realization and does not require rerunning the whole sample.
All the inputs are calculated at the beginning of each realization and will not change during the time calculation. As a result, a conditional container was created to be activated only at the first time step (when time = 0 months, which corresponds to the first time step of any GoldSim run) (see Figure 1).
Figure 1: Screenshot of the sensitivity_analysis container and its conditional property The following elements can be found inside this container (Figure 2):
Realization_number estimates the value of the current realization via the formula:
(epistemic_RLZ-1)*Aleatory_sample+aleatory_RLZ. The calculation is set to include both aleatory and epistemic loop. It can be changed to epistemic_RLZ to only take the epistemic loop into account.
INIT_DPT_CC, INIT_LEN_CC, INIT_DPT_AC, INIT_LEN_AC estimate the value of initial depth and length for circumferential and axial cracks respectively, depending on the options selected by the user and the type of cracking mechanism (fatigue or PWSCC) for each specific crack.
Sorted_STH_cc, Sorted_STH_ac, Sorted_C0_cc, Sorted_C0_ac, Sorted_A_DM1_cc, Sorted_A_DM1_ac reorder the spatially varying values. All the potential inputs are sampled upfront before the start of the analysis. Spatially varying inputs are initially sorted with respect to the sub-segments. They are reordered to reflect the order in which the cracks appear (as estimated by the crack initiation model before the time loop starts).
Sensitivity_Analysis_Data is the Excel element which saves all the values as described in section 3.1.
Figure 2: Elements in the sensitivity_analysis container
The Sensitivity_Analysis_Data element lists all the inputs saved in the Excel file SA_DATA.xlsx in the spreadsheet tab inputs. Each input is copied in a different column, starting on row 4 with an offset equal to the realization number. As a result, all inputs for realization 1 (aleatory 1 and epistemic 1) are copied in row 5, for realization 2 in row 6, and so on.
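A minimal sketch of this bookkeeping in R; the realization formula and the row offset are taken from the text, while the helper function names are purely illustrative.

    # realization index combining the epistemic and aleatory loops, as in the
    # Realization_number element: (epistemic_RLZ - 1) * Aleatory_sample + aleatory_RLZ
    realization_number <- function(epistemic_rlz, aleatory_rlz, aleatory_sample = 25) {
      (epistemic_rlz - 1) * aleatory_sample + aleatory_rlz
    }
    excel_row <- function(rlz) 4 + rlz   # row 4 plus the realization number

    realization_number(1, 1)                 # 1
    realization_number(2, 1)                 # 26
    excel_row(realization_number(1, 1))      # 5, i.e. realization 1 is written to row 5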
Figure 3: Details of the Sensitivity_Analysis_Data element
Time-dependent outputs are updated at each time step in a separate GoldSim element; because some values are no longer calculated once a rupture occurs, the code may return no values for them.
The elements used to save the output values (displayed in Figure 4) are:
Record_time is a vector element of size five. It records the time of occurrence of the following output events (first circumferential crack initiation, first axial crack initiation, first leakage due to a circumferential crack, first leakage due to an axial crack, rupture)
Current_Record keeps a record of when each event occurs, using the values from record_time and previous_record
Previous_record is a copy of current_record at the previous time step
SC_only_surface estimates the surface area of all circumferential surface cracks
Maxlocation reports the location of the circumferential surface crack with the largest surface area
Max_ratio records the maximum ratio c/a for the largest circumferential surface crack using maxlocation and previous_ratio
Previous_ratio is a copy of max_ratio at the previous time step
Save_output is the Excel element saving all the outputs
Figure 4: Elements used to save output values
The structure of Save_output is the same as the Excel element sensitivity_analysis_data (see Figure 5). All the values are saved in the same Excel file as previously identified (SA_DATA.xlsx) but in a different spreadsheet tab (outputs). The realization number offsets the row.
Figure 5: Details of the Save_output element
4.0 Folder Structure
Each scenario analysis will require a certain number of deterministic and probabilistic runs. It is difficult to assess exactly the number of simulations required upfront, especially when considering that some of the necessary runs will be added after the results have been analyzed. However, each scenario will at least have one reference deterministic run and one probabilistic reference run.
Since each run is associated with a unique Excel file and the name of the Excel file used for inputs is fixed (xLPR-2.0 Input Set.xlsx), as is the case for the text files created by the pre-processors (TIFFANY and LEAPOR), it is recommended to create a folder for each run in a master folder that includes the DLLs folder (required for running the code).
It is proposed that the modeler use the following convention to name each specific run folder.
VC Summer IO - D01 - REFERENCE
(scenario - Ref. # - description)
The naming is composed of three parts:
- 1. Scenario lists the scenario name (usually the power plant under consideration).
- 2. Ref.# is a unique reference number for this scenario. The key-letter D is used for a deterministic run and P for a probabilistic run. A two-digit number follows to indicate the run # in the corresponding category.
- 3. Description is a description of the run, with some keyword left to the analysts choice.
The analyses presented here used the keyword REFERENCE for the reference run. It is recommended that all reference runs use this keyword as well.
An example of a structure for V.C. Summer is presented in Figure 6. Note that two scenarios are considered for V.C. Summer: inside-to-outside welding (IO) and outside-to-inside welding (OI).
They are tracked separately. V.C. Summer, which is the first US PWR plant that experienced a leak due to PWSCC, had a unique weld. The weld was started, found to have a defect, and machined in place with a bridge weld holding the nozzle and safe end together near mid-thickness. The repair welds were then made to complete the weld. However, it was unknown whether the repair was made first from the outside followed by the inside (OI) or from the inside first followed by the outside (IO).
Therefore, both WRS scenarios were considered. Note that the order of welding (IO versus OI) plays an important role in WRS development.
Figure 6: Proposed folder structure for V.C. Summer
When possible, runs are performed using the full version of GoldSim (using the .gsm file), rather than the player version. It is indeed possible to transform a completed GoldSim run into a player version equivalent, but not the other way around. Each file can be saved as a player version in the future if required. However, for those who do not have the full version of GoldSim, using the player version will also work.
5.0 Deterministic Reference
In order to run a deterministic case, the epistemic sample size is set to 1 both in the Excel file (#0101: User Options - cell E20) and the GoldSim file (Simulation Settings > Monte Carlo > # realizations). The aleatory sample size is set to 2 (#0107: User Options - cell E26) because, even if the run is deterministic, GoldSim does not accept a probabilistic submodel with a sample size of 1. The cost in terms of computer time is reasonable, considering that it only adds about 4 seconds to the simulation.
The deterministic analysis requires having at least one crack and there is no reason to consider a crack occurring at a time later than the beginning of the simulation. As a result, the crack initiation option (#0501: User Options - cell E78) is set to initial flaw (0). Only one axial crack and one circumferential crack are considered. As a result, the number of circumferential flaws (#1209: Properties - cell H41) and axial flaws (#1214: Properties - cell H46) are both set to 1.
The deterministic reference case also uses the values recommended by the xLPR Inputs Group; however, all the input distributions are set to constant. By default, the median value for each distribution is provided in the deterministic value column (column H). This value is kept for the deterministic reference (aka nominal) case.
Two of the input parameters (fflaw and fcomp) influencing crack growth rate are treated differently for display purposes in cases when mechanical mitigation is to be modeled. The effect of mechanical mitigation depends on the crack size when mitigation is applied. Having a surface crack becoming through wall or leading to rupture in the middle of the simulation (around 30 years) and neither too rapidly nor too slowly allows a better display in the figures and easier interpretation of the effect of mitigation. The equation for crack growth rate shows that it varies linearly according to the variability factor, which is split into two parameters: the within-component variability factor fflaw
(#2593: Weld - cell H135) and the component-to-component variability factor fcomp (#2592: Weld
- cell H134).
The probability distributions for fcomp and fflaw are lognormal distributions, both with mean and median = 1.
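A minimal sketch in R of how these two factors can enter the growth calculation, assuming the combined variability factor is the product of fcomp and fflaw; the lognormal spreads and the nominal growth rate are purely illustrative, since the text only specifies that the medians equal 1.

    set.seed(1)
    fcomp <- rlnorm(1, meanlog = 0, sdlog = 0.3)   # component-to-component factor, median 1 (spread illustrative)
    fflaw <- rlnorm(1, meanlog = 0, sdlog = 0.3)   # within-component factor, median 1 (spread illustrative)

    nominal_growth_rate <- 1.0e-11                 # m/s, illustrative PWSCC growth rate
    growth_rate <- nominal_growth_rate * fcomp * fflaw   # growth rate scales linearly with the factors
    growth_rate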
For the V.C. Summer IO weld repair scenario, this set of values leads to a circumferential crack becoming a through-wall crack (TWC) around 38 years and leading to pipe rupture around 42 years (see Figure 7). This is considered the most likely repair sequence for V.C. Summer, but it is not known for certain.
Figure 7: Circumferential crack evolution through time for the V.C. Summer IO reference deterministic case: depth (left frame) and inner and outer half-length (right frame)
For the V.C. Summer OI weld repair scenario, the use of the median value leads to a very fast TWC in 7 years (see Figure 8). With such a steep slope, the selected mitigation times to cover different crack depths will all be close to each other and the effect of mitigation will be difficult to assess (a change in mitigation time of a few months will have a tremendous impact).
Figure 8: Circumferential crack evolution through time for V.C. Summer OI scenario case when fflaw and fcomp are both set to 1: depth (left frame) and inner and outer half-length (right frame)
By slowing down crack growth (i.e. using smaller values for fcomp and fflaw for the deterministic analysis), it is possible to spread the crack growth over a longer time and use a more representative time of mitigation (10 years, 15 years, 20 years) potentially better suited for presentation purposes. Figure 9 shows the circumferential crack evolution through time when fcomp is set to 0.8 and fflaw to 0.4. These two values were selected for the deterministic reference used to study the effect of mitigation4.
Figure 9: Circumferential crack evolution through time for the V.C. Summer OI reference deterministic case: depth (left frame) and inner and outer half-length (right frame)
Footnote 4: Keep in mind that the material parameters provided by the xLPR Inputs Group attempt to apply for the conglomerate of plants rather than a specific plant such as V.C. Summer.
6.0 Probabilistic Reference: SA, UA and Stability Analysis
It is recommended that Sensitivity Analysis results be investigated before performing Uncertainty Analysis, and finally Stability Analysis. One should confirm that Sensitivity Analysis results are in line with what is expected from the code. The initial probabilistic simulation should be performed using 2,500 realizations within the same uncertainty loop. For xLPR, the recommended choice is to use the aleatory loop (as opposed to the epistemic loop), because it is easier to apply with xLPR. Due to the importance of crack initiation mechanisms, it is useful (and recommended) to run one simulation with crack initiation governed by Direct Model 1 and another simulation with one existing initial flaw in each crack orientation (axial and circumferential). The resulting Sensitivity Analysis allows the user to estimate the most important parameters in terms of uncertainty. Uncertainty Analysis provides information on the stability of the results of interest and the potential need for a larger sample size and/or importance sampling.
6.1 V.C. Summer Sensitivity Analysis (DM1 Initiation)
In this first analysis, the distinction between uncertainty types (epistemic and aleatory) for the inputs is preserved, and Direct Model 1 is used for crack initiation. Some statistics on the results are summarized in Table 4. Out of 2,500 realizations, about 14% had at least one axial crack and 2.5% had at least one circumferential crack (only one realization led to two circumferential cracks).
These two probabilities are not independent. The likelihood of having both axial and circumferential cracks is about 1% (27 cases out of 2,500). This is three times higher than the theoretical value of 0.35% (≈ 0.14 × 0.025) that would apply if the events were independent of each other. This result is expected considering that the A multiplier parameter affects crack initiation and is the same for both axial and circumferential cracks.
Table 4: Summary of event probabilities (out of 2,500 realizations)
Initiation of circ. crack before 60 years: 63 RLZ5 (2.5%)
Initiation of axial crack before 60 years: 345 RLZ (13.8%)
Leakage of circ. crack before 60 years: 46 RLZ (1.8%)
Leakage of axial crack before 60 years: 329 RLZ (13.2%)
Rupture before 60 years: 42 RLZ (1.7%)
Not only are axial cracks more likely to occur than circumferential cracks, they also grow more quickly into through-wall cracks. Figure 10 displays the resulting cumulative distribution function (CDF) for the time from initiation to TWC (the time difference between initiation and leakage) for both an axial crack (solid red line) and a circumferential crack (dashed blue line). The red curve being above the blue dashed curve indicates that it takes less time for an axial crack to grow through wall than for a circumferential crack. For instance, one can see that about 80% of the axial cracks have grown through wall in 100 months or less, while only 30% of the circumferential cracks have done so.
5 RLZ = realization
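The CDFs in Figure 10 are simple empirical distribution functions of the realization-level times from initiation to leakage. A minimal sketch of how such a comparison can be built from two arrays of times is shown below; the times used here are made-up values for illustration only, not xLPR results.

```python
import numpy as np
import matplotlib.pyplot as plt

def ecdf(samples):
    """Return x (sorted values) and y (cumulative fraction) for an empirical CDF."""
    x = np.sort(np.asarray(samples, dtype=float))
    y = np.arange(1, x.size + 1) / x.size
    return x, y

# Made-up times from initiation to through-wall crack, in months
rng = np.random.default_rng(0)
t_axial = rng.gamma(shape=2.0, scale=40.0, size=345)   # axial cracks grow through wall faster
t_circ = rng.gamma(shape=2.0, scale=90.0, size=63)     # circumferential cracks are slower

for label, t in [("axial", t_axial), ("circumferential", t_circ)]:
    x, y = ecdf(t)
    plt.step(x, y, where="post", label=label)
    print(label, "fraction through wall within 100 months:", round(np.mean(t <= 100), 2))

plt.xlabel("Time from initiation to TWC (months)")
plt.ylabel("Cumulative probability")
plt.legend()
plt.show()
```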
Figure 10: CDF on time to through-wall crack for circumferential crack and axial crack
6.1.1 Regression Analysis on Circumferential Crack Initiation Time
The sensitivity analysis on the time of first circumferential crack occurrence does not provide adequate results (see Table 5). One of the issues is that 721 months (60 years) is used as a placeholder when no crack occurred, and very few realizations have a circumferential crack occurring before 721 months (only 2.5%). As a result, the Multivariate Adaptive Regression Splines (MARS) technique has difficulty fitting a spline, leading to a regression of medium quality (R2 = 0.57), and many unrealistic conjoint influences are found (FFLAWC2, ADM1A1, COA4, which do not impact circumferential crack initiation). Observation of the scatterplots indicates that the two components of the proportionality constant A (ADM1C1 and AMULTDM1) have an effect (see Figure 11), as evidenced by the fact that circumferential crack initiation at times less than 721 months occurs only for high values of each of the A components. Third comes the axial WRS, with a small conjoint influence (slightly above 10%), which is consistent with the crack initiation model used. As seen in the last column of Table 5, the conjoint contribution is the major contributor in this case, mostly among the three most important input parameters.
Table 5: Regression Analysis on time of first circumferential occurrence (DM1 initiation)
Final R2: Rank Regression = 0.15, Recursive Partitioning = 0.88, MARS = 0.57
Columns: Input | Rank Regression (R2 inc., R2 cont., SRRC) | Recursive Partitioning (Si, Ti) | MARS (Si, Ti) | Main Contribution | Conjoint Contribution
ADM1C1 0.07 0.07 -0.07 0.24 0.95 0.00 0.96 0.09 0.58
AMULTDM1 0.11 0.04 -0.05 0.02 0.61 0.02 0.00 0.02 0.26
AXIALWRS 0.13 0.02 -0.04 0.00 0.12 0.01 0.24 0.01 0.12
STHA4 0.14 0.01 -0.02 0.00 0.00 0.00 0.09 0.00 0.03
CKTH --- --- --- 0.00 0.13 --- --- 0.00 0.05
FFLAWC2 0.14 0.00 -0.02 --- --- 0.00 0.69 0.00 0.20
C3MULT 0.14 0.00 0.02 --- --- 0.00 0.21 0.00 0.06
BETA0C --- --- --- 0.00 0.03 --- --- 0.00 0.01
CKB 0.14 0.00 -0.01 --- --- 0.00 0.00 0.00 0.00
ADM1A1 0.15 0.00 -0.01 --- --- 0.00 0.40 0.00 0.11
RPYS 0.14 0.00 -0.01 --- --- 0.00 0.00 0.00 0.00
TWCDIST 0.15 0.00 0.01 --- --- --- --- 0.00 0.00
STHC5 0.15 0.00 -0.01 0.00 0.03 0.00 0.00 0.00 0.01
LPE --- --- --- 0.00 0.04 --- --- 0.00 0.02
HOOPWRS --- --- --- 0.00 0.13 --- --- 0.00 0.06
SZSLOPC --- --- --- 0.00 0.04 --- --- 0.00 0.02
COA4 --- --- --- --- --- 0.00 0.55 0.00 0.16
FOIZN --- --- --- --- --- 0.00 0.19 0.00 0.05
- highlighted in yellow if conjoint contribution larger than 0.1
Figure 11: Scatterplots of time before first circumferential crack occurrence as function of proportionality constant (left) and multiplier on the proportionality constant (right)
6.1.2 Regression Analysis on Circumferential Crack Initiation Occurrence
A new sensitivity analysis is performed here, this time on an indicator function set to 0 when no circumferential crack occurs and 1 when it occurs. The result of the regression is presented in Table 6.
Rank Regression indicated that the three most important parameters should be the two components of the proportionality constant A (the weld-to-weld (Amult) and within-weld (Ai) components) as well as the axial WRS at the inner diameter (ID). The regression nonetheless underestimates the importance of these three parameters when taken independently, because it is mostly the conjoint influence of these parameters that is important, and because Rank Regression is only additive. MARS uses splines to fit the model and is not appropriate in the present case since the output is a discrete step function (either 0 or 1), as reflected by a mediocre R2 value (R2 = 0.53) and some spurious conjoint influence (STHA4, C3MULT, FFLAWC2, which do not affect circumferential crack initiation). Once again, the three parameters are identified as important, with the inclusion of some other more spurious correlations.
Recursive partitioning is the most appropriate technique when dealing with discrete distributions; as a result, the coefficient of determination is good for the occurrence of a circumferential crack after 60 years (R2 = 0.88). The top three parameters are once again the two components of the proportionality constant and the axial WRS at the ID. The total sensitivity indices and first-order sensitivity indices indicate that while the within-weld proportionality constant associated with the first occurring circumferential crack (A1) explains about 22% of the variance by itself (which is confirmed by the estimate from MARS), the rest is explained via conjoint influence (meaning both terms need to be high for a circumferential crack to initiate). This is confirmed by the scatterplots displayed in Figure 12 (all the high probabilities are associated with high values of each parameter).
Table 6: Regression Analysis on occurrence of circumferential crack after 60 years (DM1 initiation)
Final R2: Rank Regression = 0.15, Recursive Partitioning = 0.88, MARS = 0.53
Columns: Input | Rank Regression (R2 inc., R2 cont., SRRC) | Recursive Partitioning (Si, Ti) | MARS (Si, Ti) | Main Contribution | Conjoint Contribution
ADM1C1 0.07 0.07 0.07 0.22 0.96 0.22 0.79 0.12 0.48
AMULTDM1 0.11 0.04 0.05 0.03 0.58 0.08 0.10 0.04 0.24
AXIALWRS 0.13 0.02 0.04 0.01 0.24 0.03 0.15 0.01 0.14
FFLAWC2 0.14 0.00 0.02 --- --- 0.01 0.11 0.00 0.03
CKB 0.14 0.00 0.01 0.00 0.03 0.00 0.00 0.00 0.01
STHA4 0.14 0.01 0.02 0.00 0.19 0.00 0.14 0.00 0.12
C3MULT 0.14 0.00 -0.02 --- --- 0.01 0.20 0.00 0.05
STHC5 0.15 0.00 0.01 --- --- 0.01 0.00 0.00 0.00
RPYS 0.14 0.00 0.01 0.00 0.02 0.00 0.00 0.00 0.01
TWCDIST 0.15 0.00 -0.01 --- --- --- --- 0.00 0.00
EFPY --- --- --- --- --- 0.00 0.00 0.00 0.00
ADM1A1 0.15 0.00 0.01 --- --- 0.00 0.02 0.00 0.00
COA4 --- --- --- --- --- 0.00 0.03 0.00 0.01
FOIZN --- --- --- --- --- -0.01 0.00 0.00 0.00
AMD1A2 --- --- --- --- --- 0.00 0.00 0.00 0.00
COC5 --- --- --- --- --- 0.00 0.00 0.00 0.00
- highlighted in yellow if conjoint contribution larger than 0.1
Figure 12: Scatterplots of occurrence of circumferential crack in 60 years as function of the most important uncertain parameters: within weld proportionality constant (left), proportionality constant multiplier (center), axial WRS at ID (right)
This result is consistent with our understanding of Direct Model 1 and the importance of uncertain variables.
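For readers who wish to reproduce the flavor of the importance measures reported in Tables 5 and 6, the sketch below applies an additive rank regression and a regression tree (used here as a simple stand-in for recursive partitioning) to a toy 0/1 initiation indicator that, like the xLPR output, only switches on when several inputs are simultaneously high. The toy model, thresholds, and surrogate inputs are assumptions for illustration only, not the xLPR models.

```python
import numpy as np
from scipy.stats import rankdata
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(1)

# Toy stand-in: initiation occurs only when the three surrogate inputs
# (playing the roles of ADM1C1, AMULTDM1 and AXIALWRS) are jointly high.
n = 2500
X = rng.uniform(0.0, 1.0, size=(n, 3))
y = (X.prod(axis=1) > 0.35).astype(float)   # 0/1 indicator of initiation

# Additive rank regression: rank-transform, fit a linear model, report R2 and SRRCs
Xr = np.column_stack([rankdata(col) for col in X.T])
yr = rankdata(y)
lin = LinearRegression().fit(Xr, yr)
srrc = lin.coef_ * Xr.std(axis=0) / yr.std()
print("rank regression R2:", round(lin.score(Xr, yr), 2), "SRRC:", np.round(srrc, 2))

# Recursive partitioning: a regression tree captures the conjoint (interaction) effect
tree = DecisionTreeRegressor(max_depth=4, random_state=0).fit(X, y)
print("recursive partitioning R2:", round(tree.score(X, y), 2))
```

The additive rank regression typically returns a lower R2 than the tree in this setting, which mirrors the behavior discussed above for an output dominated by conjoint influences.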
6.1.3 Regression Analysis on Axial Crack Initiation Time and Occurrence
The results of the regression analysis for axial crack initiation are similar to those for circumferential crack initiation, both for the time of first occurrence (Table 7) and for the occurrence indicator at 60 years (Table 8). The influence of the within-weld proportionality constant A by itself is more pronounced than for circumferential cracks because more realizations have at least one crack (13% vs. 2.5%). The conjoint influence of the three most important parameters (WRS and the two components of the proportionality constant) is still high, underlining that all three parameters must concurrently have relatively high values for an axial crack to initiate within 60 years (Figure 13).
Table 7: Regression Analysis on time of first axial occurrence (DM1 initiation)
Final R2: Rank Regression = 0.48, Recursive Partitioning = 0.95, MARS = 0.78
Columns: Input | Rank Regression (R2 inc., R2 cont., SRRC) | Recursive Partitioning (Si, Ti) | MARS (Si, Ti) | Main Contribution | Conjoint Contribution
ADM1A1 0.33 0.33 -0.27 0.52 0.94 0.61 0.69 0.43 0.23
AMULTDM1 0.42 0.09 -0.19 0.04 0.34 0.17 0.26 0.09 0.18
HOOPWRS 0.46 0.04 -0.12 0.01 0.17 0.05 0.15 0.03 0.12
AMD1A2 0.47 0.00 -0.04 0.00 0.02 0.00 0.02 0.00 0.01
ADM1C4 --- --- --- 0.00 0.01 --- --- 0.00 0.00
ECPP 0.47 0.00 0.03 0.00 0.02 0.00 0.00 0.00 0.01
BETA0A --- --- --- 0.00 0.00 --- --- 0.00 0.00
BETA1C --- --- --- 0.00 0.00 --- --- 0.00 0.00
TWCDIST 0.47 0.00 -0.02 0.00 0.02 --- --- 0.00 0.01
CKB 0.48 0.00 -0.02 --- --- 0.00 0.01 0.00 0.00
FOIZN --- --- --- 0.00 0.01 0.00 0.02 0.00 0.01
ADM1A4 --- --- --- --- --- 0.00 0.00 0.00 0.00
C1MULT 0.48 0.00 0.01 --- --- 0.00 0.00 0.00 0.00
QG 0.47 0.00 0.02 --- --- 0.00 0.01 0.00 0.00
ADM1A3 0.47 0.00 -0.03 --- --- 0.00 0.00 0.00 0.00
STHC5 0.48 0.00 -0.01 --- --- 0.00 0.00 0.00 0.00
STHA2 0.48 0.00 0.02 0.00 0.02 --- --- 0.00 0.01
STHMULT 0.48 0.00 0.01 --- --- --- --- 0.00 0.00
- highlighted in yellow if conjoint contribution larger than 0.1
Table 8: Regression Analysis on occurrence of axial crack after 60 years (DM1 initiation)
Final R2: Rank Regression = 0.47, Recursive Partitioning = 0.95, MARS = 0.75
Columns: Input | Rank Regression (R2 inc., R2 cont., SRRC) | Recursive Partitioning (Si, Ti) | MARS (Si, Ti) | Main Contribution | Conjoint Contribution
ADM1A1 0.33 0.33 0.27 0.44 0.92 0.63 0.78 0.41 0.29
AMULTDM1 0.42 0.09 0.19 0.05 0.46 0.13 0.30 0.08 0.26
HOOPWRS 0.46 0.04 0.12 0.01 0.23 0.05 0.07 0.03 0.11
ADM1C4 --- --- --- 0.00 0.01 --- --- 0.00 0.00
ADM1C1 --- --- --- 0.00 0.01 --- --- 0.00 0.00
STHC5 0.47 0.00 0.01 --- --- 0.00 0.00 0.00 0.00
AXIALWRS --- --- --- 0.00 0.01 0.01 0.01 0.00 0.01
COC5 --- --- --- --- --- 0.00 0.04 0.00 0.02
STHA2 0.47 0.00 -0.02 0.00 0.00 --- --- 0.00 0.00
C1MULT 0.47 0.00 -0.02 --- --- 0.00 0.01 0.00 0.00
QG 0.47 0.00 -0.02 --- --- 0.00 0.01 0.00 0.00
FFLAWC1 0.47 0.00 0.02 --- --- --- --- 0.00 0.00
BETA0C 0.47 0.00 0.01 --- --- 0.00 0.02 0.00 0.01
AMD1A2 0.46 0.00 0.03 0.00 0.00 0.00 0.00 0.00 0.00
ADM1A3 0.47 0.00 0.03 --- --- 0.00 0.02 0.00 0.01
ECPP 0.47 0.00 -0.02 0.00 0.01 0.00 0.01 0.00 0.01
TWCDIST 0.47 0.00 0.02 --- --- --- --- 0.00 0.00
CKB 0.47 0.00 0.01 --- --- 0.00 0.00 0.00 0.00
- highlighted in yellow if conjoint contribution larger than 0.1
Figure 13: Scatterplots of time of first axial crack occurrence as function of the most important uncertain parameters: within weld proportionality constant (left), weld to weld proportionality constant (center), hoop WRS at ID (right)
The other outputs were not analyzed using regression analysis because only 2.5% of the realizations resulted in at least one circumferential crack and only 13.8% in at least one axial crack. The likelihood of crack initiation is low enough that it remains the driving factor in the estimates of the probabilities of TWC or rupture. Note that 87% to 97.5% of the realizations have no leakage or rupture solely because no crack is initiated. Consequently, the same important parameters (the two components of the proportionality constant for DM1 as well as the WRS at the ID) drive these outputs as well.
6.2 V.C. Summer Sensitivity Analysis (Initial Flaw)
The previous section has shown that the crack initiation uncertainty parameters have such an important effect that they dominate all the regressions. A conditional analysis could be performed but would be limited considering that the number of realizations with at least one circumferential crack is relatively small (63 out of 2,500, about 2.5% of the runs) and the number of ruptures even smaller (42 out of 2,500, about 1.7% of the runs). Note that this is consistent with the cracking phenomena observed in the field at the V.C. Summer plant.
Therefore, a new simulation was run for the V.C. Summer case with the following changes:
- All uncertain parameters have been set to aleatory
- Crack initiation type choice (input 0501) was set to 0 (initial flaw)
- Initial depth (1212, 1217) and length (1210, 1215) for both the axial and circumferential direction were set to the distributions used for depth and length for PWSCC, respectively
The purpose of this additional analysis is to focus on the evolution of a crack, conditional on having an existing initial crack. Note that this analysis does not consider multiple cracks in a single direction at this stage, but only one circumferential crack and one axial crack simultaneously.
6.2.1 Interpretation of Conditional Results
The probability of crack initiation is 1 by default since one crack is forced to be present in each direction (two cracks). The time of occurrence is always 0 since the cracks are imposed at the beginning of the simulation. The time to through-wall crack does not need any correction since the cracks occur at time zero.
Figure 14 displays the CDF for the conditional V.C. Summer case. As observed previously, the time to leakage is shorter for the postulated axial crack (solid red line) than for the postulated circumferential crack (dashed blue line).
Figure 14: CDF on time to through wall crack for circumferential crack and axial crack for conditional V.C. Summer run
The probabilities are lower than the ones estimated for the conditional equivalent from the reference V.C. Summer case. This means that the time to through-wall crack is longer when using an initial flaw than when using Direct Model 1. Due to the importance of WRS for both crack initiation and crack growth, the Direct Model 1 realizations that have crack initiation tend to be those with larger sampled values of WRS at the ID. These larger values lead to faster crack growth through the thickness. With an existing crack forced to be present from the beginning (Figure 14), all realizations result in crack growth, including the ones with low WRS values, leading to slower crack growth in those cases.
As can be seen in Figure 15, the bias is consistent and evenly distributed in the CDF, confirming that it is not a variation due to the inaccuracy of the first set of CDFs presented in Figure 10.
Figure 15: Change in CDF of time from initiation to leakage for both axial crack and circumferential crack when using DM1 and initial flaw
This example illustrates the importance of careful study of conditional runs. While the use of conditional runs for many of these analyses is recommended, the interpretation of these results must consider the possible synergism between the mechanisms involved. Another approach in this situation is to use importance sampling on WRS at the ID to increase the density of realizations with occurrence of cracks while keeping the associated high WRS values. In cases where leakage is not predicted to occur, the likelihood of a crack occurring at a time greater than zero introduces a bias in the time from initiation to leakage. In fact, the simulation end-time will be used to calculate time to leakage in cases where no leakage is predicted, thus resulting in a truncation of the calculated time from initiation to leakage.
6.2.2 Regression Analysis on Time to Circumferential Through-Wall Crack
The regression analysis on time to circumferential through-wall crack confirms the observation made on the CDFs in the previous section. Axial WRS at the ID is the most important parameter for circumferential crack growth and explains more than 50% of the variance for all three regressions (Table 9). Not surprisingly, the next two most important uncertain parameters are the weld-to-weld and within-weld variability factors, which linearly influence the crack growth rate and have significantly large uncertainty. The close values between the first-order sensitivity indices (Si) and total-order indices (Ti) indicate that conjoint influence is limited (which explains why rank regression, which is an additive regression, is performing well with a total R2 of 0.78).
Scatterplots (Figure 16) confirm the importance and monotonic nature of these three parameters, with each (globally) leading to a shorter time to leakage as its value increases.
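The first-order (Si) and total-order (Ti) indices quoted in these tables can be estimated with a pick-and-freeze scheme; the sketch below uses the Saltelli (2010) first-order and Jansen total-order estimators on a toy multiplicative growth model that merely mimics the roles of the WRS and the two variability factors. The model and input ranges are assumptions for illustration, not the xLPR models.

```python
import numpy as np

rng = np.random.default_rng(0)

def model(x):
    # Toy time-to-leakage surrogate: inversely proportional to the product of a
    # WRS-like driver and two multiplicative variability factors (fcomp, fflaw).
    return 100.0 / (x[:, 0] * x[:, 1] * x[:, 2])

n, d = 20000, 3
A = rng.uniform(0.5, 1.5, size=(n, d))
B = rng.uniform(0.5, 1.5, size=(n, d))
yA, yB = model(A), model(B)
var_y = np.var(np.concatenate([yA, yB]))

for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                              # re-sample only input i
    yABi = model(ABi)
    Si = np.mean(yB * (yABi - yA)) / var_y           # first-order (Saltelli 2010)
    Ti = 0.5 * np.mean((yA - yABi) ** 2) / var_y     # total-order (Jansen 1999)
    print(f"input {i}: Si ~ {Si:.2f}, Ti ~ {Ti:.2f}")
```

When Si and Ti are close for every input, as in this toy case, the conjoint (interaction) contribution is limited, which is the reading made above for Table 9.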
Table 9: Regression analysis for time to circumferential through wall crack
Final R2: Rank Regression = 0.78, Recursive Partitioning = 0.85, MARS = 0.78
Columns: Input | Rank Regression (R2 inc., R2 cont., SRRC) | Recursive Partitioning (Si, Ti) | MARS (Si, Ti) | Main Contribution | Conjoint Contribution
AXIALWRS 0.50 0.50 -0.70 0.62 0.79 0.71 0.73 0.53 0.08
FCOMP 0.63 0.13 -0.36 0.09 0.22 0.12 0.14 0.10 0.06
FFLAWC1 0.74 0.11 -0.33 0.08 0.19 0.12 0.12 0.09 0.05
ECPC 0.76 0.01 -0.23 0.01 0.05 0.02 0.03 0.01 0.02
ECPP 0.77 0.02 0.18 0.01 0.03 0.00 0.00 0.01 0.01
INILENC1 0.77 0.00 -0.05 0.00 0.01 0.01 0.01 0.00 0.00
JRM 0.78 0.00 0.01 0.00 0.01 0.00 0.00 0.00 0.00
WELDE --- --- --- 0.00 0.00 0.00 0.00 0.00 0.00
CKB --- --- --- 0.00 0.02 0.00 0.01 0.00 0.01
INIDPTC1 0.77 0.00 -0.02 0.00 0.01 0.00 0.00 0.00 0.00
JRJIC --- --- --- 0.00 0.00 0.00 0.00 0.00 0.01
INIDPTA1 --- --- --- 0.00 0.00 -0.01 0.00 0.00 0.00
RPUTS --- --- --- 0.00 0.00 0.00 0.00 0.00 0.00
LPE --- --- --- --- --- 0.00 0.00 0.00 0.00
C1MULT --- --- --- --- --- -0.01 0.00 0.00 0.00
Figure 16: Scatterplots of time to circumferential crack leak and the first three most important uncertain parameters: axial WRS at ID (left), weld to weld crack growth multiplier (center) and within weld crack growth multiplier (right)
6.2.3 Regression Analysis on Time to Axial Through-Wall Crack
The analysis of the time to first axial through-wall crack is similar to that of the previous section, with the difference being that hoop WRS takes the role of the main driver. Table 10 and Figure 17 lead to the same conclusion, with a strong first-order monotonic effect from these three parameters.
Conjoint influence seems to be slightly more important for axial cracking than for circumferential cracking. A reason could be that the whole Axial WRS range at ID is so low that circumferential cracks can grow fast enough only if a high value of WRS at ID is sampled, increasing the sole influence of axial WRS. This issue is not found when considering Hoop WRS and axial crack growth.
Table 10: Regression analysis for time to axial through wall crack
Final R2: Rank Regression = 0.77, Recursive Partitioning = 0.78, MARS = 0.53
Columns: Input | Rank Regression (R2 inc., R2 cont., SRRC) | Recursive Partitioning (Si, Ti) | MARS (Si, Ti) | Main Contribution | Conjoint Contribution
HOOPWRS 0.27 0.27 -0.50 0.33 0.69 0.50 0.56 0.26 0.16
FFLAWA1 0.70 0.21 -0.47 0.20 0.55 0.30 0.37 0.18 0.15
FCOMP 0.49 0.22 -0.48 0.03 0.18 0.09 0.10 0.10 0.06
ECPC 0.72 0.02 -0.30 0.00 0.04 0.01 0.02 0.01 0.02
INILENA1 0.76 0.02 -0.14 0.00 0.10 0.02 0.01 0.01 0.04
ECPP 0.75 0.03 0.23 0.00 0.03 0.00 0.00 0.01 0.01
EFPY 0.77 0.00 -0.06 0.01 0.06 0.01 0.00 0.00 0.02
INIDPTA1 0.77 0.00 -0.07 0.00 0.10 0.01 0.01 0.00 0.04
LPE --- --- --- 0.00 0.03 0.00 0.00 0.00 0.01
- highlighted in yellow if conjoint contribution larger than 0.1
Figure 17: Scatterplots of time to axial crack leak and the first three most important uncertain parameters: hoop WRS at ID (left), within weld crack growth multiplier (center) and weld to weld crack growth multiplier (right)
6.2.4 Regression Analysis on Time to Rupture
Previous analyses have shown that without inspection and leak rate detection, the time to rupture is strongly correlated with the time to first circumferential leakage. This is confirmed when plotting the time to rupture as a function of time to first through-wall crack (Figure 18).
Figure 18: Scatterplot of time to rupture and time to first circumferential TWC
As a result, the regression analysis is very similar, with the three most important parameters being axial WRS at the ID and the within weld and weld to weld variability factors on crack growth (Table 11 and Figure 19).
Table 11: Regression analysis for time to rupture
Final R2: Rank Regression = 0.76, Recursive Partitioning = 0.85, MARS = 0.77
Columns: Input | Rank Regression (R2 inc., R2 cont., SRRC) | Recursive Partitioning (Si, Ti) | MARS (Si, Ti) | Main Contribution | Conjoint Contribution
AXIALWRS 0.46 0.46 -0.68 0.57 0.75 0.66 0.68 0.48 0.09
FCOMP 0.62 0.16 -0.39 0.13 0.27 0.16 0.18 0.13 0.06
FFLAWC1 0.72 0.10 -0.32 0.08 0.20 0.12 0.12 0.09 0.05
ECPC 0.74 0.01 -0.25 0.01 0.05 0.03 0.04 0.02 0.02
ECPP 0.76 0.02 0.20 0.00 0.04 0.01 0.00 0.01 0.01
INILENC1 0.76 0.00 -0.05 0.00 0.00 0.00 0.00 0.00 0.00
EFPY 0.76 0.00 -0.05 0.00 0.01 0.00 0.00 0.00 0.00
INIDPTC1 0.76 0.00 -0.02 0.01 0.00 0.00 0.00 0.00 0.00
JRM --- --- --- 0.00 0.00 0.00 0.01 0.00 0.00
CKB --- --- --- 0.00 0.01 0.00 0.00 0.00 0.01
INIDPTA1 --- --- --- 0.00 0.01 0.00 0.00 0.00 0.01
LPE --- --- --- 0.00 0.01 0.00 0.00 0.00 0.00
WELDE --- --- --- 0.00 0.00 0.00 0.00 0.00 0.00
C1MULT --- --- --- 0.00 0.01 --- --- 0.00 0.00
JRJIC --- --- --- 0.00 0.00 0.00 0.00 0.00 0.00
RPUTS --- --- --- --- --- 0.00 0.00 0.00 0.00
Figure 19: Scatterplots of rupture time and the first three most important uncertain parameters: axial WRS at ID (left), weld to weld crack growth multiplier (center) and within weld crack growth multiplier (right)
6.2.5 Regression Analysis on Time between Circumferential Leakage and Rupture
In order to determine if some parameters were more influential on time to rupture once the circumferential crack is through wall (leaking), regression analysis is applied to the time difference between leakage and rupture. Table 12 shows that axial WRS at the ID is the most important parameter, followed by the weld to weld variability factor. The third most important parameter is the within weld variability factor, but mostly as conjoint influence. This is expected since, in the absence of leak rate detection and crack detection, crack growth at the ID will drive through-wall crack growth. The stability parameters (yield and ultimate strengths of both pipes, and J-R curve parameters of the weld) seem to have little to no influence at this stage. This may not be the case when running a regression on only TWC after detection is applied.
Table 12: Regression analysis on time from circumferential leakage to rupture
Final R2: Rank Regression = 0.50, Recursive Partitioning = 0.85, MARS = 0.61
Columns: Input | Rank Regression (R2 inc., R2 cont., SRRC) | Recursive Partitioning (Si, Ti) | MARS (Si, Ti) | Main Contribution | Conjoint Contribution
AXIALWRS 0.41 0.13 -0.36 0.62 0.91 0.83 0.87 0.39 0.13
FCOMP 0.29 0.29 -0.54 0.05 0.22 0.07 0.09 0.12 0.08
FFLAWC1 0.49 0.01 -0.12 0.02 0.18 0.03 0.07 0.02 0.08
ECPP 0.47 0.04 0.26 0.00 0.03 0.00 0.00 0.01 0.01
ECPC 0.44 0.03 -0.34 0.00 0.03 0.01 0.01 0.01 0.01
C1MULT 0.50 0.00 -0.03 0.00 0.02 0.00 0.00 0.00 0.01
INIDPTA1 --- --- --- 0.00 0.05 0.00 0.00 0.00 0.02
EFPY 0.49 0.00 -0.07 0.00 0.05 0.00 0.00 0.00 0.02
LPUTS --- --- --- 0.00 0.03 0.00 0.00 0.00 0.01
LPYS 0.49 0.00 0.05 0.00 0.01 0.00 0.00 0.00 0.01
RPE 0.50 0.00 0.03 0.00 0.02 0.00 0.00 0.00 0.01
RPUTS --- --- --- 0.00 0.05 0.00 0.00 0.00 0.02
RPYS --- --- --- 0.00 0.04 0.00 0.00 0.00 0.02
- highlighted in yellow if conjoint contribution larger than 0.1
Figure 20: Scatterplots of time from first circumferential leak to rupture and the first three most important uncertain parameters: axial WRS at ID (left), weld to weld crack growth multiplier (center) and peak to valley ECP ratio (right)
6.2.6 Regression Analysis on Ratio of Length to Depth for Largest Circumferential Surface Crack
The last regression analysis was performed by generating more cracks that may come up as undetected. Analyses during the pilot study indicated that long thin cracks were more likely to cause undetected rupture, as they may either lead to surface crack rupture (large area cracks with depth lower than thickness) or rupture once a surface crack becomes a through wall crack (large area crack during the transition). As a result, the maximum ratio between length and depth for circumferential cracks was considered an output of interest for further analysis. The result of the regression is reported in Table 13. The most important parameter is the initial crack length. Initial depth is also important, but mostly as a conjoint influence (highlighted in yellow in Table 13).
Another interesting feature is that the crack growth variability factors also seem to have a small influence. The scatterplot in Figure 21 (right) shows a positive trend (confirmed by the positive SRRC value in Table 13). This would indicate that faster crack growth leads to a larger ratio between length and depth.
Table 13: Regression analysis for ratio of length to depth for largest circumferential surface crack
Final R2: Rank Regression = 0.45, Recursive Partitioning = 0.78, MARS = 0.77
Columns: Input | Rank Regression (R2 inc., R2 cont., SRRC) | Recursive Partitioning (Si, Ti) | MARS (Si, Ti) | Main Contribution | Conjoint Contribution
INILENC1 0.39 0.03 0.16 0.35 0.61 0.46 0.68 0.22 0.18
FCOMP 0.33 0.09 0.31 0.08 0.32 0.09 0.17 0.07 0.12
LPYS 0.12 0.12 0.26 0.01 0.06 0.02 0.03 0.05 0.02
RPUTS 0.24 0.11 0.22 0.01 0.06 0.03 0.04 0.05 0.02
AXIALWRS 0.36 0.03 0.18 0.04 0.23 0.04 0.17 0.03 0.12
FFLAWC1 0.42 0.01 0.11 0.01 0.10 0.03 0.07 0.01 0.05
INIDPTC1 0.45 0.00 -0.07 0.01 0.23 0.02 0.19 0.01 0.15
RPYS 0.41 0.02 0.18 0.00 0.02 0.00 0.00 0.01 0.01
ECPC 0.44 0.01 0.10 0.00 0.01 0.01 0.01 0.00 0.01
LPUTS 0.43 0.01 0.13 0.00 0.01 0.00 0.00 0.00 0.01
LPE --- --- --- 0.01 0.03 0.00 0.02 0.00 0.02
EFPY 0.45 0.00 0.05 --- --- --- --- 0.00 0.00
CKB 0.45 0.00 0.02 0.00 0.00 0.00 0.00 0.00 0.00
CSSMULT --- --- --- 0.00 0.01 0.00 0.00 0.00 0.01
JRM --- --- --- 0.00 0.00 0.00 0.00 0.00 0.00
C1MULT --- --- --- --- --- 0.00 0.00 0.00 0.00
- highlighted in yellow if conjoint contribution larger than 0.1
Figure 21: Scatterplots of ratio between length and depth for the largest circumferential surface crack and the two most important uncertain parameters: initial crack length (left) and weld to weld crack growth multiplier (right)
6.3 Uncertainty Analysis
Uncertainty analysis refers to the analysis of the uncertainty in the outputs of interest that derives from the uncertainty in the input parameters. This encompasses most of the statistical techniques used to study and summarize distributions.
A set of tools is available to analyze the output and display the results, as discussed below.
The distributions themselves are presented as a cumulative distribution function (CDF) or complementary cumulative distribution function (CCDF) and sometimes as a density function.
They can be summarized with statistics including mean, median, standard deviation and quantiles. Only one type of uncertainty (one single loop) is considered. As a result, the events considered (axial and circumferential crack initiation, occurrences of first leak, first axial TWC and first circumferential TWC, occurrences of rupture) will be presented as mean values over time.
We do not use the representation of quantiles over time since their value would be either 0 (not enough realizations have the event occurring) or 1 (enough realizations have the event occurring).
Quantiles and CDFs for other outputs, such as the times associated with these occurrences or crack properties will be presented.
6.3.1 General Summary
A simple summary of the main output of interest at a selected time is often a good and quantitative way to present part of the results. Table 14 displays a general statistical summary for this scenario. The table summarizes the probabilities of first crack, first leak, and rupture for each combination of crack orientation. The last row presents the same probabilities if the two orientations were completely independent. Compared to the third row, rupture is independent of orientation (meaning that the result from one orientation does not provide information on the results for the other orientation). This was expected considering that axial cracks never lead to rupture.
For leakage and first crack occurrence, the independent results are slightly higher. This is expected as the important uncertain inputs have been identified and some (fcomp, A_mult) are shared between the two orientation mechanisms.
Table 14: Summary of V.C. Summer IO results (probabilities at 60 years)
sample size   crack orientation   rupture   leak      crack
2500          circumferential     1.08%     1.20%     2.08%
2500          axial               0.00%     9.48%     11.04%
2500          both                1.08%     10.28%    12.32%
2500          if independent      1.08%     10.57%    12.89%
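The "if independent" row of Table 14 can be reproduced directly from the per-orientation probabilities by combining them as independent events; the short check below recovers the tabulated values.

```python
# Probabilities at 60 years from Table 14 (circumferential, axial)
p_circ = {"rupture": 0.0108, "leak": 0.0120, "crack": 0.0208}
p_axial = {"rupture": 0.0000, "leak": 0.0948, "crack": 0.1104}

for event in ("rupture", "leak", "crack"):
    # Probability that the event occurs for at least one orientation,
    # assuming the two orientations are independent.
    p_indep = 1.0 - (1.0 - p_circ[event]) * (1.0 - p_axial[event])
    print(f"{event}: {p_indep:.2%}")   # 1.08%, 10.57%, 12.89% as in the table
```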
6.3.2 Axial Crack Depth
Crack depth is of interest in this study, since one of the flaws detected in V.C. Summer in 2002 was leaking due to axial cracking. It is also a good indicator of the analysis runs themselves, as depth will directly influence the time of TWC and the time of rupture.
The complementary cumulative distribution functions (CCDF) of the deepest axial crack at 17 years for both WRS scenarios (IO and OI) are displayed in Figure 22. Seventeen years was chosen because the leak was observed after 17 years of service for the weld under consideration.
For such output, the presentation of quantiles or a mean would not bring meaningful information.
The mean would be strongly influenced by the fact that 70% of the V.C. Summer OI realizations and 94% of the V.C. Summer IO realizations do not see any crack and therefore have a crack depth of zero.
Figure 22: CCDF of the maximum axial crack depth at 17 years for V.C. Summer IO (left) and OI (right)
The distribution is bounded at 0 when no axial crack has yet occurred and 1 when the crack becomes through wall.
The shape of the CCDFs is directly related to the corresponding hoop WRS (see Figure 23). In the case of V.C. Summer IO, the average hoop WRS is low (less than 150 MPa) up to 20% of the depth. The hoop WRS then goes up to large positive values (more than 450 MPa) before going down again when close to the outer part of the weld. As a result, most of the occurring cracks are either in the shallow regions of the weld (less than 20% through thickness) or through-wall cracks (all through thickness). The CCDF is almost a straight horizontal line between 40% and 100% through the thickness, indicating that very few realizations will have a crack within this region at any time step.
In the case of V.C. Summer OI, the hoop WRS starts high at the inner diameter (ID) (more than 350 MPa) and, while it trends down after 40% through the thickness, this is compensated by the fact that the crack is already large and will continue to grow very fast. As a result, crack growth is very fast, and a snapshot in time (CCDF at a specific time) will show mostly either no crack occurring or through-wall cracks. The transition is very close to a straight line, highlighting the rarity of surface cracks at any point in time.
Figure 23: Profiles for the mean hoop WRS for V.C. Summer IO (left) and OI (right)
6.3.3 Circumferential Crack Depth
Figure 24 displays the CCDF for the first circumferential crack depth at 17 years and 60 years (only one realization out of 2,500 generates two circumferential cracks). The parallelism between the two lines indicates that crack growth does not change significantly between 17 years and 60 years. This is expected as the sensitivity analysis underlined that the proportionality constant is the most important parameter in terms of uncertainty (WRS is second), and that the proportionality constant does not affect crack growth.
Figure 24: CCDF of the first circumferential crack depth at 17 years and 60 years for V.C. Summer IO
6.3.4 Number of Axial Cracks Occurring
The complementary cumulative distribution function (CCDF) and probability mass function (PMF) of the number of axial cracks occurring after 17 years for the V.C. Summer IO and OI scenarios are displayed in Figure 25 and Figure 26, respectively (with the vertical axis on a log scale).
As also seen in these figures, roughly 94% of cases have no axial cracks for the V.C. Summer IO scenario. Of the remaining 6%, about 4.5% of the simulations see only one axial crack and the remaining 1.5% more than one. As for the V.C. Summer OI scenario, about 70% of the realizations do not have an axial crack. For the remaining 30%, about half experience one crack while the other half experiences multiple axial cracks.
Figure 25: Distribution of number of axial cracks for 2,500 simulations for V.C.
Summer IO (left) and OI (right)
Figure 26: Probability Mass Function of number of axial cracks after 17 years for V.C.
Summer IO (left) and OI (right)
Among the V.C. Summer IO cases that had at least one crack, 75% of the cases experienced only one axial crack, 20% saw two and the remaining 5% saw more than two cracks (Figure 27).
The maximum number of axial cracks observed was eight (for 2,500 realizations). For the V.C.
Summer OI scenario, of the realizations that had at least one crack, 53% experienced one axial crack, 21% two cracks and the remaining 26% more than two cracks. The maximum number of cracks observed (for 2,500 realizations) was 13 (in the PMF the realizations beyond 11 cracks are accounted for in the 11 category).
Figure 27: Distribution of number of axial cracks given at least one axial crack occurs during the first 17 years for V.C. Summer IO (left) and OI (right)
A 25% chance of having multiple cracks when at least one crack occurs is high enough to be consistent with the observation of multiple crack occurrences for the V.C. Summer scenario and is within the range of acceptable comparisons.
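The conditional distributions of Figure 27 are obtained simply by discarding the realizations with zero cracks and renormalizing; a minimal sketch is given below, using made-up counts that only roughly follow the V.C. Summer IO percentages quoted above.

```python
import numpy as np

# Made-up counts of axial cracks per realization (index = number of cracks),
# chosen only to resemble the V.C. Summer IO proportions quoted in the text.
counts = np.array([2350, 112, 30, 8])   # realizations with 0, 1, 2, 3 axial cracks

pmf_all = counts / counts.sum()                      # unconditional PMF
pmf_given_crack = counts[1:] / counts[1:].sum()      # condition on at least one crack

print("P(no crack) =", round(pmf_all[0], 3))
for k, p in enumerate(pmf_given_crack, start=1):
    print(f"P({k} cracks | at least one crack) = {p:.2f}")
```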
6.4 Stability Analysis
Monte Carlo methods are numerical analysis techniques allowing for the estimation of statistics over a multidimensional space. As with any numerical technique, their accuracy is dependent on the density of coverage of the domain.
The density of coverage is itself dependent on (1) the size of the sample used to cover the domain (the larger the size, the better the coverage) and (2) the number of uncertain inputs considered, as each new input adds a new dimension to the hyperspace (the larger the number of inputs, the worse the coverage).
Stability analysis is a statistical way to estimate the quality of the estimates (or statistics) generated and gives confidence that the conclusions drawn are not affected by the potential variation in the response. Indeed, analysis stability depends on both the potential variability of the output of interest and the threshold value or range that is used to make the decision. In fact, estimates far from the threshold value can be more variable without having a large impact on decision-making.
Stability of the outputs of interest will be assessed graphically using a 95% centered confidence interval, defined by the lower quantile 0.025 (2.5th percentile) and the upper quantile 0.975 (97.5th percentile) on the distribution of mean values. As a reminder, the interval generated does not represent aleatory or epistemic uncertainty. Rather, it shows the accuracy of the Monte Carlo technique for this output and with the considered sample size. By increasing the sample size (or using importance sampling), this interval is expected to be reduced.
The confidence interval is estimated using a binomial distribution for any output defined as an indicator function (such as probability of first crack occurring, probability of first leak or probability of rupture). The methodology used for such indicator function outputs is described in Appendix B.2. For outputs that are not indicator functions, a classical percentile-bootstrap (or p-bootstrap) approach is used, as presented in Appendix B.1.
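The details of the interval construction are in Appendices B.1 and B.2; as an illustration of the general idea, the sketch below computes an exact (Clopper-Pearson) binomial interval for an indicator-type output and a percentile-bootstrap interval for a continuous output. The Clopper-Pearson form is one common choice and is used here only as a stand-in for the appendix methodology; the continuous data are made up.

```python
import numpy as np
from scipy.stats import beta

rng = np.random.default_rng(7)

# Indicator output: k occurrences out of n realizations (e.g., first crack by 60 years)
n, k = 2500, 63
p_hat = k / n
lower = beta.ppf(0.025, k, n - k + 1) if k > 0 else 0.0       # Clopper-Pearson bounds
upper = beta.ppf(0.975, k + 1, n - k) if k < n else 1.0
print(f"p = {p_hat:.4f}, 95% CI = [{lower:.4f}, {upper:.4f}]")

# Continuous output: percentile bootstrap (p-bootstrap) on the mean
times = rng.gamma(shape=2.0, scale=90.0, size=500)            # made-up times to leakage
boot_means = [rng.choice(times, size=times.size, replace=True).mean() for _ in range(2000)]
print("95% p-bootstrap CI on the mean:", np.percentile(boot_means, [2.5, 97.5]).round(1))
```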
6.4.1 Probability of First Crack Occurring
Figure 28 displays the probability of first crack occurring (grouping both circumferential and axial cracks together) for the V.C. Summer IO scenario, as well as the 95% confidence interval around the mean, defined by the 0.025 and 0.975 quantiles.
On a log scale, the bandwidth is of medium size, showing about a factor of 2 to 3 variation. As indicated at the beginning of this section, different displays may make this band look larger or smaller. It is thus important not to base conclusions on one graphical representation only, but rather either on a comparison between representations or on quantitative measures (a ratio comparison to a threshold, for instance).
Figure 28: 95% confidence interval around probability of first circumferential crack occurring for V.C. Summer IO scenario
Stability can also be assessed by looking at the resulting distribution of the means, constructed using either the bootstrap method or the binomial method. Two aspects are considered in the resulting distribution:
- Since this is the distribution of mean values, the central limit theorem [1] indicates that ultimately this distribution should be normal. Any deviation from the normal shape indicates potentially less stable results, with a good chance of underestimating the mean when the events are rare.
- If the distribution is choppy, it is also an indicator of a lack of stability, as it indicates that the same resulting values (mean) are generated again and again.
Figure 29 and Figure 30 display respectively the CDF and PMF of the resulting distributions for the probability (mean) of occurrence of circumferential crack at 5 years and 60 years. As can be seen, the mean is less stable at early time, showing a choppier and more skewed distribution.
A lower stability does not necessarily mean the results are insufficient for the purpose of the analysis. A probability around 10⁻³ may be adequate at 5 years.
Figure 29: CDF for the probability of circumferential crack occurrence at 5 years (left) and 60 years (right) for the V.C. Summer IO scenario (2500 realizations)
Figure 30: PMF for the probability of circumferential crack occurrence at 5 years (left) and 60 years (right) for the V.C. Summer IO scenario (2500 realizations)
Figure 31 displays the same results for the V.C. Summer OI scenario. As can be seen, the curve is smoother over time and the confidence interval is improved.
Figure 31: 95% confidence interval around probability of first circumferential crack occurring for V.C. Summer OI scenario
6.4.2 Probability of First Leak Occurring
In this section, results from two separate outputs are presented to show differences in stability.
First, the probability of first leakage is presented below with the confidence intervals showing reasonably stable results for both the V.C. Summer IO scenario (Figure 32) and the V.C. Summer OI scenario (Figure 33). As axial cracks are included, smoother and more stable results are expected.
Figure 32: 95% confidence interval around probability of first leakage (both circumferential and axial crack) for V.C. Summer IO scenario
Figure 33: 95% confidence interval around probability of first leakage (both circumferential and axial crack) for V.C. Summer OI scenario
Secondly, the probability of leakage due to circumferential crack only for Direct Model 1 is plotted.
This result has larger (relative) uncertainty as shown in Figure 34, compared to the previous results presented in Figure 32 and Figure 33.
Figure 34: 95% confidence interval around probability of first leakage due to circumferential crack only for V.C. Summer IO scenario
The distribution of the standard error over the probability of leakage (mean over the occurrences of leakages) at 15 years is represented below (Figure 35). This is a good example of a non-converged solution, with a strong skewness in the distribution and a CDF represented as a step function. In such a case, even if the value is far from the threshold of concern, it may be considered worth using a larger sample size or a more sophisticated sampling strategy (importance sampling) to smooth the results.
Figure 35: CDF (left) and PMF (right) for the probability (mean) of leakage from circumferential crack for the V.C. Summer IO scenario
6.4.3 Probability of Pipe Rupture
Figure 36 below shows the probability of rupture for the V.C. Summer IO scenario when Direct Model 1 is considered. This figure is similar to Figure 34 and would lead to similar conclusions regarding stability.
Figure 36: 95% confidence interval around probability of rupture for V.C. Summer IO scenario
7.0 Deterministic Sensitivity Studies
The analyses presented in this section apply variations to the reference deterministic case presented in Section 5.0. Experts change the influential input parameters one at a time (based on the previous sensitivity analysis or their own expert judgment). Such analyses are also performed to assess the impact of carefully chosen alternative scenarios (such as an extreme condition and/or mitigation). Deterministic sensitivity studies usually focus on the physics of the system and build confidence in the scientific aspects of the analysis.
7.1 MSIP Analysis
A deterministic case was run using all nominal values recommended by the Inputs Group (usually the median value when the parameter has an associated distribution). As fewer than 50% of realizations led to a crack occurring when Direct Model 1 was used, an initial flaw was used to generate a single circumferential crack at time zero. While both axial and circumferential crack cases were run, only the circumferential crack case is presented since it is the case that can lead to rupture.
For V.C. Summer IO, the deterministic code leads to a circumferential through wall crack at 37.75 years (37 years and 9 months) and rupture at 43.25 years (43 years and 3 months).
Four deterministic cases were run with the inclusion of mechanical stress improvement process (MSIP) mitigation at 20 years (crack ~20% through thickness, ~2% around the circumference), 35 years (crack ~50% through thickness, ~5% around the circumference), 37 years (crack ~75% through thickness, ~5% around the circumference) and 40 years (through-wall crack, ~9% around the circumference). Note that this assessment assumes that MSIP is practical for a large size nozzle such as V.C. Summer. Code Case N-770-1, prescribing MSIP rules, states that MSIP can only be applied to mitigate PWSCC in nozzles that can be inspected and where the crack length is less than 10% of the circumference and the depth is less than 30% of the thickness. The study presented below does not consider these rules. However, the results are consistent with a recent PNNL study [2] showing little improvement, or even a worsening of the situation, if MSIP is applied when the crack depth is already 50% or more of the thickness.
Table 15 summarizes the times of circumferential TWC and rupture as a function of the MSIP time. An early application of MSIP stops the shallow cracks, with no leakage or rupture occurring. Up to 50% through the thickness, the axial WRS profile with MSIP is lower than the initial WRS profile; thus, if the crack is less than 50% deep, application of MSIP reduces the crack growth rate. But even if the crack depth is more than 50%, since crack length is affected by the WRS at the ID, there is still a benefit in implementing MSIP, as evidenced by a delay of about two and a half to three years in time to leakage and time to rupture. Once 75% of the depth is reached, it is possible to have MSIP leading to a shorter leak time (by one month),
but the rupture is still delayed by one month. Once the crack is through-wall, the results are not affected (although this last case is not realistic as MSIP would not be applied to a weld for which TWC has been observed).
Table 15: Summary of MSIP effect depending on time of MSIP application for V.C. Summer IO deterministic case for circumferential cracks
MSIP time (years):      none    20     35     37     40
Leak time (years):      37.75   none   40.33  37.67  37.75
Rupture time (years):   43.25   none   46.33  43.33  43.25
Figure 37 and Figure 38 display the change in crack growth in the depth and half-length respectively for different mitigation times (the 40 year solution is not represented as the curves were indistinguishable from the case without mitigation). As observed in Table 15, a shallow crack would see a reduction in the stress applied such that it goes dormant and does not grow in the depth direction, and such that the growth in the length direction is slowed down significantly.
When the crack is 50% deep, there is still a benefit of about 3 years. Beyond this point, and for deeper cracks, MSIP does not bring any benefit and can even make things worse. Indeed, the rules in Code Case N-770 appear to make sense for V.C. Summer; given the change in WRS fields caused by MSIP through the thickness, this result is not surprising. However, it must be noted that mitigation is preceded (and followed) by inspections that are likely to find the deepest cracks. The xLPR code reflects what is expected from an MSIP mitigation applied to a weld such as the one considered for V.C. Summer IO.
Figure 37: Influence of MSIP on crack depth at different depth and length stages for V.C. Summer IO
Figure 38: Influence of MSIP on crack half-length at different depth and length stages for V.C. Summer IO
A similar study was performed using the V.C. Summer OI WRS profile. The outside-inside (OI) repair situation produces higher tensile WRS fields near the ID. This time, the circumferential crack became through wall in 21.42 years (21 years and 5 months) and led to rupture in 33.08 years (33 years and 1 month).
Four deterministic cases were run with inclusion of MSIP mitigation at 10 years (crack ~30%
through thickness), 15 years (crack ~58% through thickness), 20 years (crack ~85% through thickness) and 25 years (through wall crack).
As seen in Table 16, the conclusions are similar to those for V.C. Summer IO. A shallow crack greatly benefits from MSIP. A deeper crack will experience slower crack growth (especially if the crack length is still small). Of course, as mentioned above, MSIP would not be permitted to be applied in a plant for cracks deeper than 30% of the thickness. The example shows that while the time to TWC is not extended by much for a 58% deep crack (2 years and 4 months), the rupture time is greatly extended (by almost 9 years). Once the crack is deep and large enough, the effect of MSIP is negligible, which is expected given the effect of MSIP on WRS fields well into the thickness.
Table 16: Summary of MSIP effect depending on time of MSIP occurrence for V.C. Summer OI deterministic case
MSIP time (years):       none    10     15     20     25
Leakage time (years):    21.42   none   23.75  20.92  21.42
Rupture time (years):    33.08   none   42.42  34.25  33.08
Figure 39 and Figure 40 also support this interpretation, with shallow cracks taking full benefit of MSIP with the benefit being reduced if the crack is deeper when MSIP is applied. As can be seen for the MSIP application at 20 years, it is possible to hasten the time to through-wall cracking if applied in a specific (and narrow) region of depth, but this does not affect rupture time.
Figure 39: Influence of MSIP on crack depth at different depth and length stages for V.C. Summer OI
Figure 40: Influence of MSIP on crack half-length at different depth and length stages for V.C. Summer OI
7.2 Overlay Analysis
As a reminder, the reference deterministic run used to estimate the effect of overlay for the V.C. Summer IO scenario has an existing circumferential crack at the beginning of the simulation, becoming a through-wall crack at 37.75 years (37 years and 9 months) and leading to rupture at 43.25 years (43 years and 3 months). The ASME Code (Section XI, IWB-3640) does not permit overlay to be applied if a flaw greater than 75% of the thickness is present. Note that this crack depth limit is greater than for MSIP since additional weld metal is deposited with the weld overlay. Beyond the change in WRS discussed in the previous section, an additional weld overlay (WOL) thickness, which is added to the original thickness, needs to be set. Following recommendations from [3], a thickness equal to 1/3 of the original weld thickness has been used here. Furthermore, the recommended set of PWSCC growth parameters for Alloy 52/152 defined by the Inputs Group was used for the mitigation material.6
The first set of results for V.C. Summer IO uses the first fit without shift (see section A.3), in which WRS does not deviate much from the initial WRS (but there is still benefit from a thicker weld).
Table 17 displays the summary of leakage and rupture time as a function of the time of application of weld overlay mitigation. Compared to MSIP (Table 15), weld overlay has the benefit of adding thickness and leading to more time before leakage or rupture. As mentioned for MSIP, some of these results are only for illustrative purposes since a through wall crack would be detected during the inspection conducted prior to the mitigation and would be repaired.
6 The J-resistance curve recommendation for Alloy 52/152 is currently assumed to be identical to that for Alloy 82/182. This may change, as the evidence now suggests that Alloy 52/152 toughness is less than that for Alloy 82/182.
Table 17: Summary of WOL effect depending on time of WOL occurrence for V.C. Summer IO deterministic case (first WRS fit)
WOL time (years):       none    20     35     37     40
Leak time (years):      37.75   none   55.5   41.08  37.75
Rupture time (years):   43.25   none   none   50.16  47.67
Figure 41 displays the influence of weld overlay on crack depth at different times of WOL (i.e., for different crack depths and lengths). The apparent reduction in depth when weld overlay is applied is only because the normalized depth over thickness is reported, and the thickness changes when the overlay thickness is added. The growth rate is also reduced, as can be seen from the slope of the curves. It can also be observed that there is no drastic change in the curve slope when the crack grows through the weld/WOL interface (after WOL is in effect), indicating that, with the current parameters, the choice of such material does not have a tremendous impact on crack growth.
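The apparent drop in normalized depth can be checked with simple arithmetic: with an overlay thickness of one third of the original weld thickness (the recommendation adopted from [3]), a crack at a given physical depth suddenly represents a smaller fraction of the new, thicker wall. The thickness and depth values below are arbitrary illustrations; only the 1/3 factor follows this study.

```python
# Illustrative numbers: thickness and depth are arbitrary; only the 1/3 overlay
# thickness follows the recommendation adopted in this study.
t_weld = 60.0              # original weld thickness (mm, arbitrary)
a = 30.0                   # crack depth at the time of WOL (mm, arbitrary)

t_total = t_weld * (1.0 + 1.0 / 3.0)          # thickness after weld overlay
print(a / t_weld)    # 0.50  -> normalized depth before WOL
print(a / t_total)   # 0.375 -> same physical depth, smaller fraction of the new wall
```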
Figure 41: Influence of WOL on crack depth at different depth and length stages for V.C. Summer IO
Figure 42 displays similar results for half-length over time and is consistent with what is expected, which is a slowing down and delay of growth in the circumferential direction.
Figure 42: Influence of WOL on crack half-length at different depth and length stages for V.C. Summer IO
The second WRS fit includes a shift (see Section A.3) and leads to a larger change in stress at the ID (about 50 MPa) as a result of FSWOL application. The summary in Table 18, compared to Table 17, shows that such variation in WRS fitting does not change the gain in time for leakage or rupture. With careful selection, it is possible to have a situation in which the second WRS fit provides additional benefit when compared with the first WRS fit. This can be observed in cases when the crack is small enough to benefit from a lower WRS (weld overlay at 35 years), but the improvement is not significant. An interesting feature, already observed for MSIP, is that even for a deep crack, having a higher WRS value does not affect leakage time, and rupture time can still be delayed since half-length is affected by the WRS value at the ID.
Table 18: Summary of weld overlay effect depending on time of weld overlay for V.C. Summer IO deterministic case (second WRS fit)
WOL time (years):       none    20     35     37     40
Leak time (years):      37.75   none   none   41.08  37.75
Rupture time (years):   43.25   none   none   50.42  47.67
Figure 43 and Figure 44 confirm the results reported in Table 18: the small change in WRS profile has little impact, except when it happens for a crack about 50% deep (where the slight change for the second fit compared to the first fit is enough to give a WRS low enough that it does not induce crack growth).
Figure 43: Comparison of effect in first fit and second fit on crack depth for selected times of WOL occurrence
Figure 44: Comparison of effect in first fit and second fit on crack half-length for selected times of WOL occurrence
7.3 One-at-a-Time Sensitivity Studies
One-at-a-time sensitivity studies consider one input of interest and change it in the deterministic reference (presented in Section 5.0) to estimate how much it impacts a specific output. In the following sensitivity studies, circumferential and axial crack depth are observed for the V.C. Summer IO scenario. For the probabilistic reference, the pressure and temperature are held constant. Consequently, their influence is not considered in the sensitivity analyses (presented in Sections 6.1 and 6.2), since only parameters with uncertainty can influence the output distribution.
The first studies considered pressure and temperature changes emulating a High-Dry-Low severe accident condition [4], [5]. During this type of severe accident, both the temperature and the pressure increase, with a maximum pressure of about 16.2 MPa (the temperature change is ignored here). Here we consider the case where the pressure is changed from 15.5 MPa to 16.2 MPa. Figure 45 illustrates the crack depth over wall thickness as a function of time for the IO repair case. This results in only small changes since pressure in PWR plants is reasonably controlled. Severe accidents will be considered in the xLPR defense-in-depth studies. However, such severe accident considerations are bounding situations, which may not be physically realistic. The case is considered here only to examine the xLPR code response. While severe accidents can lead to rapid failures, the xLPR code is not explicitly developed to handle this case.
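Conceptually, a one-at-a-time study is just a loop over the parameters of interest in which each is perturbed while the rest stay at the deterministic reference. The sketch below outlines that bookkeeping with a hypothetical run_xlpr_case() stand-in (not an actual xLPR API, since the code is actually driven through its Excel input set); the surrogate output inside it is arbitrary and for illustration only.

```python
# Hypothetical driver: run_xlpr_case() is a stand-in for setting up the xLPR Excel
# input set and running the deterministic code; it is not an actual xLPR API.
def run_xlpr_case(inputs):
    # Placeholder surrogate so the sketch runs: the output decreases as pressure
    # and temperature increase (arbitrary functional form, illustration only).
    return 1.0e4 / (inputs["pressure_MPa"] * inputs["temperature_C"])

reference = {"pressure_MPa": 15.5, "temperature_C": 325.7}
perturbations = {"pressure_MPa": 16.2, "temperature_C": 330.0}

baseline = run_xlpr_case(reference)
for name, value in perturbations.items():
    case = dict(reference)       # all other inputs stay at the reference values
    case[name] = value
    result = run_xlpr_case(case)
    print(f"{name}: {value} -> change in output = {result - baseline:+.3f}")
```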
Figure 45: Effect of changing Pressure from 15.5 MPa to 16.2 MPa for the V.C.
Summer IO scenario on circumferential crack depth (left) and axial crack depth (right)
A similar analysis for changing the temperature from 325.7 °C to 330 °C is shown in Figure 46. This small temperature change has an important effect on time to leakage for both circumferential and axial cracks. We note that for the High-Dry-Low accident scenario, the temperature increases slightly over the first 9,000 seconds and subsequently increases to much higher values. The temperature change considered here is not taken from that scenario but is simply a defined value.
Figure 46: Effect of changing Temperature from 325.7 °C to 330 °C for the V.C.
Summer IO scenario on circumferential crack depth (left) and axial crack depth (right)
We next change the PWSCC crack growth parameters. The component-to-component (#2592: Weld - Cell H134) and within-component (#2593: Weld - Cell H135) variability factors for PWSCC crack growth are, along with the WRS, the most important parameters in the sensitivity analysis performed on time to through-wall crack (see Section 6.2). These are the fflaw and fcomp parameters of the PWSCC crack growth law. The values considered are the upper and lower ranges provided by the Inputs Group, while the standard WRS IO results (Figure 47, Figure 48) represent the mean values defined by the xLPR Inputs Group. As seen, their variation is expected to linearly affect crack growth (as they are multipliers). It is clear that these two parameters (fflaw and fcomp) have a tremendous effect on the results.
Figure 47: Effect of changing PWSCC growth variability factors for the V.C. Summer IO scenario on circumferential crack depth: slower growth with fcomp=.335 and fflaw=.313 (left) and faster growth with fcomp=2.04 and fflaw=2.64 (right)
Figure 48: Effect of changing PWSCC growth variability factors for the V.C. Summer IO scenario on axial crack depth: slower growth with fcomp=.335 and fflaw=.313 (left) and faster growth with fcomp=2.04 and fflaw=2.64 (right)
7.4 Influence of Crack Morphology
In xLPR, crack morphology parameters are read from the Excel input set by the Excel add-on used to run LEAPOR as a pre-processor. LEAPOR is only run for one crack morphology. Thus, the variability in these parameters (parameters 0851 to 0860) is not directly considered in xLPR.
In this section the influence of variability in these parameters is studied via a series of deterministic analyses.
Previous analyses [6] have found three of these parameters to have a potential impact on the estimate of leak rate: global roughness, local roughness, and the number of 90° turns. As fatigue is not considered in the reference scenario (only PWSCC), only parameters 0851, 0852 and 0853 are affected. A first set of eight runs was performed to check the influence of each of these parameters. Distributions were generated to create a factorial design on global roughness, local roughness and number of 90° turns. The 2.5th and 97.5th percentile values were selected as low (L) and high (H) values for each input, as presented in Table 19.
Table 19: Estimate of low and high values for morphology parameters

Variable              Mean      Standard dev.   Low value   High value
Global roughness      113.90    80.21           27          323
Local roughness       16.86     15.67           2.63        57.98
Number of 90° turns   8.04      2.04            4.78        12.73

The combination of all high and low values leads to eight different cases. For each of the eight deterministic simulations, parameters 0851 to 0853 were changed accordingly, and the LEAPOR preprocessor was rerun. Then xLPR was run to generate a new history. The resulting leak rates (in gpm) are reported in Figure 49 (log scale) and Figure 50 (linear scale). The indicator for each curve reflects the values used for global roughness, local roughness, and number of 90° turns, in that order. As can be seen in the figures, where the xLz and xHz curves are essentially superimposed, local roughness does not significantly affect leak rate.
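To make the construction of the eight cases concrete, the sketch below enumerates the 2x2x2 factorial design from the low/high values in Table 19. It is an illustrative Python sketch only, not part of the xLPR or LEAPOR tool chain, and the variable names are chosen here for readability.

```python
# Sketch: enumerate the 2^3 factorial design on the three morphology parameters
# (illustrative only; values taken from Table 19, L/H labels as in the text).
from itertools import product

levels = {
    "global_roughness": {"L": 27.0, "H": 323.0},
    "local_roughness": {"L": 2.63, "H": 57.98},
    "n_90_turns": {"L": 4.78, "H": 12.73},
}

cases = []
for combo in product("LH", repeat=3):   # e.g., ('L', 'H', 'L')
    indicator = "".join(combo)          # matches the curve labels, e.g., "LHL"
    values = {name: levels[name][lvl] for name, lvl in zip(levels, combo)}
    cases.append((indicator, values))

for indicator, values in cases:
    print(indicator, values)
```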
Figure 49: Effect of change in morphology parameters on leak rate for the V.C.
Summer IO deterministic reference case (semi log representation)
Figure 50: Effect of change in morphology parameters on leak rate for the V.C.
Summer IO deterministic reference case (linear representation)
The source for the distributions in [7] could not be traced, so a rerun was performed with only global roughness and number of 90° turns based on Table 3 of [6], which gives a larger uncertainty (standard deviation) for the number of 90° turns. The resulting profiles are plotted (with the previous profiles in dashed lines) in Figure 51. If the number of 90° turns was H, the leak rate with the new distributions decreased slightly, but if the number of 90° turns was L, the leak rate increased slightly. This is consistent with theory and the role of 90° turns, which represent angles in the crack morphology that slow down the fluid.
Figure 51: Effect of change in morphology parameters on leak rate for the V.C.
Summer IO deterministic reference case with new distributions (semi log representation)
With both the old and new uncertainties, the conclusion remains the same. While crack morphology does contribute to the uncertainty, it should not influence leak rate detection, as the variation from the reference is not significant. This analysis was performed on only one deterministic reference case; it is therefore recommended that it be expanded to a probabilistic case (considering only the high-high and low-low cases) once an appropriate output has been identified (rupture conditional on leak rate detection would probably not work, as none of the 2,500 realizations leads to rupture when leak rate detection is considered).
8.0 Probabilistic Sensitivity Studies
These analyses are the probabilistic equivalent of the deterministic sensitivity studies. The change in input can be a simple shift (as for the deterministic case), but also a change in uncertainty (spread) or point of interest (skewness). These studies include the probabilistic aspect, which focuses more on the risk associated with each change.
8.1 Application of Uncertainty on Temperature
8.1.1 Influence of Temperature Uncertainty on Crack Initiation
A probabilistic run was performed using the same configuration as the reference runs, described in Section 2. The only change in the input set was to replace the constant value for the operating temperature (325.7°C) with an uncertain (aleatory) parameter following a normal distribution with a mean of 320°C and a standard deviation of 5°C, truncated at 306.1°C and 326.7°C. The distribution is consistent with the Westinghouse hot-leg inputs reported by the xLPR Inputs Group.
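As an illustration of how such an aleatory temperature input can be drawn, the sketch below samples from a normal distribution with mean 320°C and standard deviation 5°C truncated at 306.1°C and 326.7°C using scipy. This is a generic sketch under those assumptions, not the sampling scheme implemented in the xLPR framework.

```python
# Sketch: sampling an operating temperature from a truncated normal distribution
# (mean 320 C, standard deviation 5 C, truncated at 306.1 C and 326.7 C).
import numpy as np
from scipy.stats import truncnorm

mean, std = 320.0, 5.0
lower, upper = 306.1, 326.7

# truncnorm expects the truncation bounds in standard-deviation units
a, b = (lower - mean) / std, (upper - mean) / std
rng = np.random.default_rng(seed=12345)
temperatures = truncnorm.rvs(a, b, loc=mean, scale=std, size=2500, random_state=rng)

print(temperatures.min(), temperatures.mean(), temperatures.max())
```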
A sensitivity analysis was performed and showed no detectable influence of the temperature parameter. The reason is that only 2% (for circumferential cracks) or 13% (for axial cracks) of realizations lead to crack initiation, driven mostly by the proportionality constant and the WRS value at the ID. The change introduced by varying temperature is not large enough to affect the sensitivity analysis. While the effect is small, it is nonetheless present. The results can first be compared globally, as shown in Table 20 below: the number of realizations with crack initiation decreases when going from a constant temperature of 325.7°C to a temperature distribution whose mean is 320°C. This confirms that a lower temperature leads to a lower likelihood of crack initiation.
Table 20: Comparison of selected statistics between fixed and varying temperature (2,500 runs)

                                  First_Init_cc   First_Init_ac   First_Leak_cc   First_Leak_ac   Rupture
Fixed temp (325°C)       # RLZ    63              345             46              329             42
                         %        2.5%            13.8%           1.8%            13.2%           1.7%
Varying temp N(320,5)    # RLZ    47              289             29              250             28
                         %        1.88%           11.56%          1.16%           10.00%          1.12%
Since the same sampling is used in both cases, a realization to realization comparison is appropriate. Indeed, for each realization, the same values are sampled for all uncertain inputs.
The only change is therefore between the fixed constant temperature and the sampled one.
In the next set of results, only the realizations that initially had a crack occurring before 60 years (345 for axial and 63 for circumferential) were considered when calculating the change in time to crack initiation due to the change in temperature. The results are presented in Figure 52.
Figure 52: Variation in initiation time when temperature is changed from constant to uncertain for axial cracks (top) and circumferential cracks (bottom)
As observable in Figure 52 and Figure 53, reducing the temperature can potentially increase the time for crack initiation. While there is a clear trend indicating that the maximum increase in time to cracking is inversely proportional to the change in temperature, a reduction in temperature may in some cases have little or no effect on the time for crack initiation.
The large spread is partially because some of the crack initiation times were close to 60 years.
Since the time of initiation is truncated to 721 months (the 60-year simulation time plus a one-month time step) when no initiation occurs, it can artificially reduce the change in time when temperature is reduced. For instance, with a fixed temperature of 325°C, the initiation time was 58 years (696 months). The maximum reported difference when only temperature is varied is thus about 2 years (25 months), since any non-occurring initiation over the time period considered will be associated with a time of 721 months (721 - 696 = 25).
Figure 53 shows the change in crack initiation time for circumferential cracks as a function of the sampled temperature. The blue dots show the crack initiation time when the fixed temperature (of 325.7°C) is used, while the orange dots show the crack initiation time when the temperature has been sampled (from the normal distribution with mean 320°C and standard deviation 5°C, truncated at 306.1°C and 326.7°C), with the sampled value reported on the x-axis. As outlined by four selected pairs (vertical lines), the difference in time tends to be larger for low temperatures but can also sometimes be small. This observation is an indicator of conjoint influence, meaning temperature affects the time of occurrence, but it needs to be coupled with other parameters to have a stronger effect.
Figure 53: Change in initiation time for circ. cracks when temperature changes from constant to sampled as a function of the sampled temperature
This result is somewhat expected when looking at the equations used for PWSCC crack initiation with Direct Model 1 presented in [8]: many of the (uncertain) parameters are multiplied with each other to estimate the time of crack occurrence. In conclusion, temperature affects crack occurrences, but a standard deviation of 5°C is not enough to make it significant when compared to the uncertainty in the proportionality constant and WRS. If the uncertainty in those parameters were reduced, then temperature could play a more important role.
8.1.2 Influence of Temperature Uncertainty on Crack Growth
In the second set of simulations, the crack initiation model is replaced with one initial circumferential crack and one initial axial crack (using initial flaw sizes recommended by the Inputs Group). Crack initiation therefore does not play a role in this set of analyses. Table 21 and Table 22 present the results from regression analyses on the time of occurrence of first through wall crack. The most important parameter remains the WRS, followed by the (component-to-component and within-component) variability factors. Temperature is the fourth most important contributor, with both a main effect and conjoint influence. Note that its influence (sign of SRRC) is negative, meaning that the higher the value of temperature, the lower the outputs under consideration. In this case, it means that high temperatures lead to a shorter time before leakage, which is as expected. This trend can be seen on the scatterplots presented in Figure 54.
Table 21: Regression results for time of axial through wall crack
(Final R²: Rank Regression 0.75; Recursive Partitioning 0.81; MARS 0.68)

              Rank Regression              Recursive Part.    MARS             Main     Conjoint
Input         R² inc.  R² cont.  SRRC      Si      Ti         Si      Ti       cont.    cont.
HOOPWRS       0.23     0.23      -0.46     0.27    0.58       0.36    0.38     0.23     0.13
FFLAWA1       0.40     0.18      -0.43     0.19    0.48       0.26    0.31     0.17     0.13
FCOMP         0.57     0.16      -0.41     0.08    0.25       0.13    0.14     0.10     0.08
TEMP          0.67     0.10      -0.33     0.06    0.21       0.11    0.12     0.07     0.07
ECPC          0.69     0.02      -0.32     0.01    0.06       0.04    0.05     0.02     0.02
ECPP          0.72     0.03       0.26     0.00    0.06       0.00    0.00     0.01     0.02
INILENA1      0.74     0.01      -0.12     0.00    0.05       0.02    0.01     0.01     0.02
AXIALWRS      0.74     0.00       0.06     0.00    0.05       0.02    0.04     0.01     0.03
INIDPTA1      0.74     0.00      -0.06     0.00    0.02       0.01    0.00     0.00     0.01
EFPY          0.75     0.00      -0.06     0.00    0.01       0.01    0.01     0.00     0.01
QG            0.75     0.00       0.04     0.00    0.00       0.00    0.00     0.00     0.00
Note: inputs highlighted in yellow (in the original report) if conjoint contribution larger than 0.1

Table 22: Regression results for time of circumferential through wall crack
(Final R²: Rank Regression 0.74; Recursive Partitioning 0.84; MARS 0.77)

              Rank Regression              Recursive Part.    MARS             Main     Conjoint
Input         R² inc.  R² cont.  SRRC      Si      Ti         Si      Ti       cont.    cont.
AXIALWRS      0.40     0.40      -0.63     0.45    0.69       0.52    0.58     0.39     0.12
FCOMP         0.52     0.11      -0.33     0.10    0.29       0.14    0.16     0.10     0.09
FFLAWC1       0.62     0.10      -0.30     0.08    0.23       0.12    0.13     0.08     0.07
TEMP          0.69     0.08      -0.29     0.05    0.19       0.10    0.12     0.07     0.06
ECPC          0.71     0.01      -0.26     0.01    0.05       0.05    0.05     0.02     0.02
ECPP          0.73     0.02       0.22     0.01    0.06       0.00    0.00     0.01     0.02
INILENC1      0.73     0.00      -0.05     ---     ---        0.00    0.01     0.00     0.00
EFPY          0.73     0.00      -0.05     ---     ---        0.00    0.00     0.00     0.00
INIDPTA1      ---      ---       ---       0.00    0.00       0.00    0.00     0.00     0.00
LPE           ---      ---       ---       0.00    0.00       ---     ---      0.00     0.00
C3MULT        ---      ---       ---       0.00    0.01       ---     ---      0.00     0.00
HOOPWRS       ---      ---       ---       ---     ---        0.00    0.00     0.00     0.00
SURFDIST      0.74     0.00      -0.02     ---     ---        0.00    0.00     0.00     0.00
WELDE         ---      ---       ---       ---     ---        0.00    0.00     0.00     0.00
LPUTS         0.74     0.00       0.02     ---     ---        ---     ---      0.00     0.00
JRC           0.74     0.00      -0.02     -0.01   0.00       ---     ---      0.00     0.00
FFLAWA1       ---      ---       ---       0.00    0.01       ---     ---      0.00     0.00
RPE           ---      ---       ---       0.00    0.00       ---     ---      0.00     0.00
Note: inputs highlighted in yellow (in the original report) if conjoint contribution larger than 0.1
Figure 54: Scatterplot of time to leakage for axial crack (left) and circumferential crack (right) as a function of temperature
A realization-to-realization comparison is possible in this case since the two simulations have the same sample size and use the same sampled inputs except for temperature. As was presented for crack initiation in Figure 52, the change in time to axial and circumferential through wall crack was estimated when going from a constant temperature to a distributed temperature. The result is presented in Figure 55, sorted as a function of time. As previously stated in Section 8.1.1, the decrease in crack growth rate is likely to be larger when the temperature is lower, but the temperature decrease alone is not sufficient and the crack growth rate is also affected by other parameters, pointing toward a conjoint influence.
Figure 55: Variation in time to through wall crack when temperature is changed from constant to a distribution for axial cracks (top) and circumferential cracks (bottom)
The conclusion reached agrees with the one from the previous section. When considered uncertain, temperature can influence crack growth rate to varying degrees, with lower temperature leading to potentially slower growth rate. Furthermore, the influence of temperature on crack growth rate is more important than on crack initiation (it is the fourth parameter after WRS and the two variability factors). The influence of temperature is partially conjoint with these other parameters.
8.2 Probabilistic Sensitivity Study on Inlay Mitigation
The inlay depth was set to 3 mm as per Code Case N-766. The process consists of first machining the region under the dissimilar metal weld to a depth of 3 mm. Next, the Alloy 52/152 weld metal is deposited to a thickness of about 4.5 mm. Finally, the deposited inlay is machined so that the inlay depth is 3 mm total. Initial material properties developed by the Inputs Group were used.
The inlay material is Alloy 52/152, which has slower crack growth and longer crack initiation times compared to Alloy 82/182. In the first analysis, the properties of Alloy 82/182 were preserved for the inlay and only the WRS was updated. The resulting probabilistic analysis showed an increase in crack initiation, leakage, and rupture following inlay (Figure 56).
Figure 56: Effect of inlay applied at 30 years on probabilistic results on initiation, leakage and rupture
Following the finding in MRP-375, the crack initiation proportionality constant multiplier was updated to account for the material improvement in PWSCC resistance when going from Alloy 82/182 to Alloy 52/152 (parameter #2743 had the geometric mean defined in cell K81 changed from 1 to 1/37). The crack growth factor of improvement was also updated to account for the material change (parameter #2795 was changed from 1 to 150 in cell H137). These recommendations follow the discussion presented in Section 3.4 of MRP-375.
The results are displayed in Figure 57. It is important to note that it is not the probability that is reduced by a factor of 37, but the time to reach a given probability. To illustrate the concept, we took the probability of first crack from Figure 56 and divided the time beyond 30 years (when the inlay is applied) by 37 (a horizontal compression), then added the resulting curve as yellow dots to Figure 57 so the match for crack initiation can be seen.
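The horizontal compression described above can be written compactly: each time t on the unmitigated curve beyond the mitigation time is mapped to 30 + (t - 30)/37 while the probability values are kept unchanged. The sketch below illustrates this transformation under those assumptions; the example curve and array names are placeholders, not xLPR output.

```python
# Sketch: horizontal compression of a probability-vs-time curve by a factor of 37
# after the mitigation time (30 years), keeping the probability values unchanged.
import numpy as np

def compress_after_mitigation(times, probs, t_mit=30.0, factor=37.0):
    """Map every time beyond t_mit to t_mit + (t - t_mit)/factor."""
    times = np.asarray(times, dtype=float)
    new_times = np.where(times > t_mit, t_mit + (times - t_mit) / factor, times)
    return new_times, np.asarray(probs, dtype=float)

# Placeholder curve for illustration only.
t = np.linspace(0.0, 60.0, 721)          # monthly grid over 60 years
p = 1.0e-3 * np.maximum(t - 5.0, 0.0)    # arbitrary increasing probability
t_new, p_new = compress_after_mitigation(t, p)
print(t_new[-1], p_new[-1])               # the last time point maps to about 30.8 years
```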
Figure 57: Effect of inlay applied at 30 years on probabilistic results on initiation, leakage and rupture when mitigation material properties are updated
Observing the crack depth for the first 1000 realizations (Figure 58) confirms that the 3 mm of inlay slows down crack growth considerably: once the inlay is set, the growth usually stops within the inlay (3 mm).
Figure 58: Depth of first crack that occurs for the first 1000 realizations
In conclusion, while the first analyses showed that inlay mitigation increased the ID WRS field considerably, the change in material properties to reflect the much improved PWSCC resistance of the new material leads to results more in line with what is expected.
Following this analysis, the xLPR Inputs Group decided to revisit the material properties for the two parameters under consideration. The recommendation was a factor of improvement (FOI) of 5 on PWSCC crack initiation and of 10 on crack growth. These FOIs are the lower bounds recommended in MRP-375 (although all the experiments indicate a larger FOI). The probabilistic analysis was performed with these new recommended values. Figure 59 displays the mean results for the probability of first crack, first leak, and rupture. The first two metrics increase significantly and lead to faster occurrences and leakage. The probability of rupture increases at about the same rate as was observed before mitigation (in Figure 59, the increase in the green dash-dot line over the last 30 years is of the same order as over the first 30 years).
Figure 59: Effect of inlay applied at 30 years on probabilistic results on initiation, leakage and rupture with factor of improvement of 5 on initiation and 10 on growth
The first 1000 crack depth histories (Figure 60) show that the inlay slightly slows crack growth (explaining why, in Figure 59, the probability of rupture is not as close to the probability of 1st crack after mitigation as before mitigation - see vertical arrows), but not enough to overcome the effect of a large WRS at the ID.
Figure 60: Effect of inlay applied at 30 years on normalized crack depth for the first 1000 realizations with factor of improvement of 5 on initiation and 10 on growth
Figure 61 displays the first 1000 inner and outer length profiles. The inner length for new cracks is lower than before mitigation (most of the new cracks are less than 10% of the circumference in length and none is more than 20%). This is expected since the inner length for new cracks is calculated in the mitigated section of the weld, consisting of Alloy 52/152, where PWSCC growth is much slower. The outer length is in the original section of the weld (Alloy 82/182, with faster PWSCC) and therefore tends to be longer. Inlay creates non-idealized TWCs with an outer length that is larger than the inner length. Another consequence is that the time between TWC and rupture is longer, increasing the likelihood that the crack will be detected and repaired.
Figure 61: Effect of inlay applied at 30 years on normalized inner length (left) and outer length (right) for the first 1000 realizations with factor of improvement of 5 on initiation and 10 on growth
Based on advanced finite element analysis (AFEA) [9], PWSCC crack growth through an inlay into the original weld leads to mushroom-shaped cracks (smaller in the Alloy 52/152 inlay and ballooning in the original weld material (Alloy 82/182), where PWSCC growth rates are much larger). xLPR results may lead to an acceptable representation of the mushroom effect expected for a crack going through the inlay and starting to grow through the weld (see Figure 62), with a reasonable estimate of the ID and OD lengths. However, the leak rates will be constrained by the small crack in the inlay. This effect may have to be examined in more detail later and may be a gap in the current version of xLPR.
Figure 62: Conceptual representation of expected crack shape and resulting crack shape in xLPR v2.0 when inlay is considered
8.3 Probabilistic Sensitivity Study on Overlay Mitigation
The deterministic sensitivity study on an overlay (presented in Section 7.2) demonstrates its efficiency in terms of reducing crack growth. The study was extended probabilistically to a sample of size 2,500, with the overlay applied at 30 years. Figure 63 shows the change in probability of 1st crack, 1st leak, and rupture over time. After mitigation, the probabilities of rupture and leakage grow much more slowly (after mitigation, the probability of leakage increases only by 4 × 10-4 over 30 years, i.e., only one realization out of 2,500 leads to a TWC after mitigation).
Furthermore, the change in WRS at the ID leads to a reduced probability of 1st crack occurring after mitigation as the change of slope in the blue line indicates.
Figure 63: Effect of overlay applied at 30 years on probabilistic results on initiation, leakage and rupture for the V.C. Summer IO scenario
A look at the first 1,000 depth profiles for circumferential cracks over time (Figure 64) confirms the gains in both crack initiation and growth. The number of cracks initiating in the thirty years before mitigation (about 14 on average) is larger than for the same duration after mitigation (only two on average). The same observation can be made when both crack orientations (circumferential and axial) are considered. In Figure 63, the probability of 1st crack is about 0.01 in the first 30 years; it increases by only about 1.6 × 10-3 afterward. The decrease in thickness at 30 years is not a reduction in crack depth but occurs because the reported depth is normalized by thickness, and the thickness abruptly changes due to the addition of the overlay (the original thickness is equal to 75% of the mitigated thickness). The growth rate itself is significantly reduced after 30 years (the depth does not change much over time after the overlay is placed).
Figure 64: Effect of overlay applied at 30 years on normalized depth of circumferential crack for the first 1000 realizations for the V.C. Summer IO scenario
This study confirms the efficiency of overlay as a mitigation approach. Overlay usually decreases the weld residual stress field near the ID. Moreover, the overlay material is Alloy 52/152, which exhibits slower PWSCC. As such, these results are appropriate and expected.
9.0 Revisiting Uncertainty Parameters
The sensitivity studies, coupled with the sensitivity analysis, identify the inputs driving the issues of interest. Revisiting these inputs, either to increase the knowledge or to consolidate the current state of knowledge, is recommended to further increase confidence in the results presented, since these will most likely be the inputs that are discussed and questioned.
For the V.C. Summer scenario, the uncertain parameters suggested for revisiting are as follows:
- The proportionality constant (parameter #2742) used in the DM1 initiation model to represent variability within the weld in terms of crack initiation
- The multiplier to the proportionality constant (parameter #2743) used in the DM1 initiation model to represent weld-to-weld variability in terms of crack initiation
- The component-to-component variability factor (parameter #2792) used in the crack growth equation to represent weld-to-weld variability in terms of crack growth
- The within-component variability factor (parameter #2793) used in the crack growth equation to represent variability within the weld in terms of crack growth
- The hoop Weld Residual Stress (parameter #4350), which influences both initiation and growth of axial cracks
- The axial Weld Residual Stress (parameter #4352), which influences both initiation and growth of circumferential cracks
Beyond the initially uncertain inputs, the deterministic and probabilistic sensitivity studies have demonstrated that temperature could influence the results, especially at higher temperatures.
The study of the associated uncertainties and the potential improvement of their representation are beyond the purpose of this template. Furthermore, such a task should be considered by all the xLPR stakeholders.
10.0 More Accurate Analyses
If the outputs of interest are not sufficiently stable in step 1, additional simulations can be performed. The analyst can increase the sample size (if possible), apply importance sampling (on the inputs identified via the sensitivity analysis), and/or separate aleatory uncertainty from epistemic uncertainty (if the distinction between risk and uncertainty over risk is necessary).
10.1 Comparison of Stability Analyses
This section will focus on probabilities of occurrence due to circumferential cracks for V.C. Summer IO. In Section 6.4, confidence intervals and distributions of the (mean) probabilities were presented with a sample of size 2,500. This section compares those results to the same results obtained with a sample of size 10,000.
Table 23 compares the estimates for the probability of 1st circumferential crack, the probability of 1st leak due to a circumferential crack, and the probability of rupture. For circumferential cracks these probabilities are in the range of 10-2. Often, but not always, the probabilities of interest will tend to be underestimated when a sample size is too small. This is observed in Table 23. Stability can thus be checked by incrementally increasing the sample size until the variations in the estimate are reduced to the analyst's satisfaction (an example criterion could be that the estimate changes by less than 10% when the sample size is doubled).
Table 23: Summary of V.C. Summer IO results

                                        Probabilities at 60 years
Sample size   Crack orientation         Rupture    Leak      Crack
2,500         circumferential           1.08%      1.20%     2.08%
10,000        circumferential           1.30%      1.44%     2.46%
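A minimal sketch of the doubling criterion mentioned above is given below: the probability estimate is recomputed as the sample size is doubled, and the procedure stops once the relative change falls under 10%. The sampler used here is a placeholder Bernoulli draw standing in for an actual batch of xLPR realizations.

```python
# Sketch: checking stability of a probability estimate by doubling the sample size
# until the estimate changes by less than 10% (placeholder sampler, not xLPR).
import numpy as np

def estimate_probability(n, p_true=0.02, rng=None):
    """Placeholder for running n realizations and counting events."""
    rng = rng or np.random.default_rng()
    return rng.binomial(n, p_true) / n

rng = np.random.default_rng(seed=1)
n = 2500
previous = estimate_probability(n, rng=rng)
while True:
    n *= 2
    current = estimate_probability(n, rng=rng)
    rel_change = abs(current - previous) / max(previous, 1e-12)
    print(f"n={n:>7d}  p_hat={current:.4%}  relative change={rel_change:.1%}")
    if rel_change < 0.10:
        break
    previous = current
```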
10.1.1 Increasing Sample Size: Circumferential Crack Initiation
This section compares the probability of circumferential crack initiation. Figure 65 compares the probability and associated confidence interval with 2,500 realizations (left frame) and 10,000 realizations (right frame). The mean over time does not change significantly (the mean at 60 years is around 2.1 × 10-2 for a sample of size 2,500 compared to 2.5 × 10-2 for a sample of size 10,000) but becomes smoother, with a tighter confidence interval.
Figure 65: Comparison between 95% confidence intervals on probability of circumferential crack initiation for the V.C. Summer IO scenario with DM1 with 2,500 realizations (left) and 10,000 realizations (right)
Figure 66 displays the distributions for the mean values (probability of circumferential crack occurrences) at five years (left frame) and 60 years (right frame) for both 2,500 and 10,000 realizations. At five years the resulting distribution with more samples is smoother and closer to a normal distribution. At 60 years, the shape is already close to normal with 2,500 realizations, but the resulting distribution with 10,000 samples is smoother yet. As is often the case for low probability events, smaller sample sizes tend to under-predict the mean. Consequently, the mean increases towards the true value as the number of realizations is increased.
Figure 66: PMF for probability (mean) of circumferential crack occurrences at five years (left) and 60 years (right) for the V.C. Summer IO scenario
10.1.2 Increasing Sample Size: Probability of Leakage due to Circumferential Crack
Leakage due to a circumferential crack is considered in this section. Figure 67 shows how the use of a larger sample size (right frame) leads to tighter confidence bounds. Another important aspect that was not visible in the crack initiation comparison is the time at which some realizations lead to leakage. With 2,500 realizations, the first occurrences of leakage are observed around 13.5 years, while they are observed around four years when 10,000 realizations are considered.
Figure 67: Probability of leakage due to circumferential crack for the V.C. Summer IO scenario using 2,500 realizations (left) and 10,000 realizations (right)
Figure 68 shows that the estimates after 20 years are relatively close and that the most significant differences between the estimates are in the first 15 years. As expected, estimates of low probability events (such as early TWC) are better captured with a larger sample size. Notably, the earliest leakage starts around 4 years with the 10,000 sample size, while it starts around 13 years with the 2,500 sample size. The earliest leakage is not expected to happen at time zero, as it takes time for a crack to go from initiation to through wall. The shortest time found using importance sampling on the WRS at the ID to generate more realizations with faster growth was approximately 1.5 years (Section 10.1.3).
Figure 68: Probability of leakage from circumferential crack for the V.C. Summer IO scenario using two different sample sizes (2,500 and 10,000)
Figure 69: CDF (left) and PMF (right) for probability (mean) of circumferential crack leakage at 15 years for the V.C. Summer IO scenario
10.1.3 Importance Sampling: Probability of Leakage
Importance sampling was applied to the axial WRS (emphasizing high values at the ID) and compared to the reference probabilistic run with an initial flaw. The resulting probability of leakage and confidence intervals are presented in Figure 70. Once the number of expected realizations with the event occurring totals between 20 and 40 (for a sample of size 2,500, this means when the probability of the event is in the interval [5 × 10-3; 10-2]), there is no real benefit in using importance sampling. Importance sampling is beneficial when no realizations, or only a small number, lead to an event (the time interval between 1 year and 3 years in Figure 70). When done appropriately, importance sampling leads to a smoother time-dependent estimate with shorter confidence intervals.
Figure 70: Probability of leakage and 95% confidence interval with (red) and without (black) importance sampling
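For reference, when importance sampling is used the probability is estimated from weighted indicator values rather than a simple average, with the weight being the ratio of the nominal density to the biased sampling density. The sketch below shows this standard weighted estimator with placeholder densities and a placeholder event definition; it is a generic illustration, not the xLPR implementation.

```python
# Sketch: importance-sampled probability estimate for a rare event.
# The biased density oversamples high WRS values; each realization carries the
# weight f_nominal(x)/f_biased(x). Densities and the event rule are placeholders.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(seed=7)
n = 2500

nominal = norm(loc=200.0, scale=100.0)   # placeholder nominal WRS distribution (MPa)
biased = norm(loc=400.0, scale=100.0)    # placeholder biased distribution (higher WRS)

wrs = biased.rvs(size=n, random_state=rng)
weights = nominal.pdf(wrs) / biased.pdf(wrs)

# Placeholder event indicator: "leakage occurs" when the sampled WRS exceeds a threshold.
event = wrs > 500.0

p_hat = np.sum(weights * event) / n
print(f"importance-sampled probability estimate: {p_hat:.2e}")
```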
11.0 No Event Occurring
For some outputs of interest, it is possible that the likelihood is so low that no event related to the scenario occurs (e.g., probability of rupture given leak rate detection and inspection). In such instances the analyst can: (1) use a surrogate output to determine the region of input space most likely to lead to an event (e.g., the ratio of length over depth), (2) use expert judgement, and/or (3) use adaptive sampling from PROMETHEUS.
The procedure is illustrated for the V.C. Summer IO scenario for the probability of rupture given a leak rate detection threshold set to 1 gallon per minute (gpm). None of the 2,500 realizations led to rupture when leak rate detection is considered. The 10,000 realizations did not find any cases of rupture either. Even when a circumferential crack occurrence is forced at time zero, none of the 2,500 realizations led to rupture with leak rate detection. These runs already indicate that the probability of rupture with leak detection is in the range of 10-4 or lower.
Observations of the data show that circumferential cracks that transition to through wall cracks have an outer length and inner length large enough to induce a leak rate larger than 1 gpm. The most likely scenario resulting in rupture without leak rate detection is either one where the ratio between the half-length and depth of the crack is so large that the surface crack (SC) itself would lead to rupture (large area covered), or one where rupture would happen as soon as the crack becomes a TWC. Since rupture with leak detection in place is rare (it did not happen in 10,000 realizations), several actions are necessary to increase the likelihood of such an event happening:
- 1. Crack initiation should be modeled with an initial circumferential crack occurring at time zero.
- 2. Axial WRS should be considered for importance sampling, as it is the most important parameter for crack growth. However, importance sampling on a specific area of the axial WRS is not recommended since it is not certain that the extreme value at the ID would lead to the most conservative case. It is possible that a high value at the ID would be associated with such a low value in the profile that it would stop crack growth. Therefore, a possible strategy for identifying the WRS regions of most interest is to use the adaptive sampling in PROMETHEUS to identify the range of the axial WRS ID distribution that could be importance sampled.
Initial crack half-length has been identified as the most important parameter controlling the ratio between half-length and depth (see the sensitivity analysis in Section 6.2.6). The sample size was set to 100,000 with the number of adaptive samples set to 1,000. The trigger used to identify areas of interest was a half-length over depth ratio greater than 10.
PROMETHEUS allows the user to input a distribution on the leak rate detection threshold. With the default constant setting of 1 gpm for the leak rate detection threshold, no rupture occurred with leak rate detection. Another run was performed adding a larger uncertainty on the detection capability (via a normal distribution whose mean was set to 1 gpm and standard deviation to 2 gpm). The resulting probability of rupture given leak rate detection is displayed in Figure 71. Additionally, a Monte Carlo simulation of size 100,000 was run to check other probabilities and confirm that the estimate using the weighted results was performed adequately.
As can be seen, the probability after 60 years is around 10-5 conditional on having a circumferential crack occurring at time zero. All cases leading to undetected rupture are associated with a high detection threshold, larger than 5 gpm.
Figure 71: Probability of rupture with leak rate detection for the V.C. Summer IO scenario
The estimate presented is conditional on a circumferential crack occurring at time zero, meaning that considering the probability of having a crack would reduce the probability of rupture. As discussed in Section 6.2.1, the probability of having a circumferential crack initiate cannot simply be applied as a multiplicative correction factor. Some non-conservatism would be introduced, since realizations with crack initiation tend to have higher WRS values at the ID, which in turn result in faster growth. On the conservative side, however, all cracks initiate at time zero, which gives 60 years to lead to rupture. As a result, there is still good confidence that the event under consideration (rupture with leak rate detection) would have a probability lower than 10-6 over a 60-year period.
12.0 Summary
This document uses the V.C. Summer power plant scenario to present the approach developed for scenario analysis and to illustrate a recommended approach for a future LBB regulatory guide.
This sensitivity study template has been developed with the following goals in mind:
- Better understanding of the mechanisms involved in the LBB scenario considered
- Identification and ranking of the important variables driving the output uncertainty
- Comparison with observations
- Exploration of alternative assumptions or scenarios in support of defense in depth
- Assessment of the stability of each estimated output distribution and representative statistic
The aim is to increase confidence in the results and consequently in the decision that needs to be made.
The results presented for the V.C. Summer scenario are consistent with what was discovered at the plant during the October 2000 scheduled containment inspection for the weld between the reactor vessel nozzle and one of the hot leg pipes, as demonstrated in the acceptance Software Test Results Report analysis. The results are consistent in the sense that they are not overly optimistic (supposing, for instance, that the observation could only happen with an extremely low probability) or pessimistic (considering it happens all the time with a probability of 1), but rather within the expected range (considering how many times it has been observed amongst the multiple plants in service in the US and the number of welds under consideration). Similarly, the results of the deterministic and probabilistic sensitivity studies reflect what was expected by the experts. These analyses increase the state of knowledge and confidence, not only in the code itself, but more importantly in the choice of the inputs selected to represent V.C. Summer. The analysis concludes that when inspection and leak rate detection are considered, the risk of a pipe rupture is extremely low (within the 10-6 range or lower), as expected.
BIBLIOGRAPHY
[1] Wikipedia contributors, "Central Limit Theorem," Wikipedia, The Free Encyclopedia, 26 September 2017. [Online]. Available: https://en.wikipedia.org/w/index.php?title=Central_limit_theorem. [Accessed 28 September 2017].
[2] E. J. Sullivan and M. T. Anderson, "Assessment of the MSIP Process for Mitigating PWSCC in Nickel Alloy Butt Welds in Piping Systems Approved for LBB," PNNL-22070, (DOE report), 2013.
[3] E. J. Sullivan and M. T. Anderson, "Assessment of Weld Overlays for Mitigating Primary Water Stress Corrosion Cracking at Nickel Alloy Butt Welds in Piping Systems Approved for Leak-Before-Break," Pacific Northwest National Laboratory : PNNL-21660, August 2012.
[4] H. Rathbun, M. Benson, R. Iyengar and B. Brust, "Analysis of PWR Hot Leg in Severe Accident Conditions: Creep Rupture and Tensile Instability Initiation Modeling," in Proceedings of Integrity of High Temperature Welds, London, UK, Sept. 2012.
[5] B. Brust, R. Iyengar, M. Benson and H. Rathbun, "Severe Accident Condition Modeling in PWR Environment: Creep Rupture Modeling," in Proceedings of the 2013 ASME Pressure Vessels and Piping Conference, July 14-18 2013 Paris, France, PVP2013-98059.
[6] G. Wilkowski, R. Wolterman and D. Rudland, "Impact of PWSCC and Current Leak Detection on Leak-Before-Break Acceptance," in 2005 ASME Pressure Vessels and Piping Division Conference, Denver, Co, PVP2005-71200.
[7] Engineering Mechanics Corporation of Columbus, "Technical Progress Report on Extremely Low Probability of Rupture (xLPR) Leak-Before-Break (LBB) Regulatory Guide Support," NRC Contract Number NRC-HQ-60-14-E-0001-T-0001, 2015.
[8] xLPR Crack Initiation Subgroup, "xLPR Software Requirement Description for Crack Initiation - PWSCC (Version 3.4)," Dec. 05 2015.
[9] F. W. Brust, D.-J. Shim, E. Punch, S. Kalyanam and D. Rudland, "Natural PWSCC Crack Growth in Dissimilar Metal Welds with Inlay - Paper PVP2010-26108," in Proceedings of ASME 2010 Pressure Vessel & Piping Division, Bellevue, Washington, 2010.
[10] xLPR WRS subgroup, "xLPR Version 2.0 Technical Basis Document : Welding Residual Stress Modelling Development," 2016.
[11] B. Efron and R. J. Tibshirani, An Introduction to the Bootstrap, ISBN: 0-412-04231-2: CRC Press LLC, 1998.
MITIGATION RULES APPLIED TO WRS PROFILES
A.1. Axial WRS Profile Update for MSIP Mitigation
The following rules from the WRS report [10] were used to estimate the resulting WRS profiles when MSIP is considered:
- 1. Axial WRS is always reduced near the ID and increased near the OD. If the WRS is tensile prior to application of MSIP, the reduction near the ID could be between 300 and 400 MPa up to a depth of x/t ~0.2. Near the OD the increase is about 150 MPa maximum (Figure 27 of [10]) for the hot leg. In general, the higher the tensile WRS at the ID, the more reduction in WRS from MSIP, with little improvement at the ID if the WRS is negative prior to MSIP.
- 2. If the axial WRS is negative (compressive) near the ID prior to MSIP application, there is little reduction at the ID, about 100 to 150 MPa out to x/t ~0.2. The increase toward the OD beyond x/t ~0.2 is only 50 to 100 MPa. This result was for pressurizer safety and surge line examples, but it is assumed to apply for hot leg sizes. If the hoop stress is negative prior to application of MSIP, little change near the ID and a slight increase near the OD would be expected.
- 3. Solutions for cold legs and pressurizer surge lines were given greater weight since the R/t is similar and thickness is also larger than in pressurizer spray/relief lines. There is not a lot of MSIP residual stress analysis work in the open literature.
Using these rules, a mathematical approach to estimate the resulting mean axial WRS profile when MSIP is applied is developed. The methodology is described below.
The first and second items indicate that the stress at the ID is always reduced when MSIP is applied. The reduction can be as high as 400 MPa if the initial ID WRS is tensile, and as low as 100 MPa if the WRS at the ID is compressive prior to MSIP. A quadratic function, bounded by the values of 100 MPa and 400 MPa was used to estimate the change in WRS at the ID.
The method to estimate the MSIP impact on axial WRS was implemented in an Excel sheet in which the user can choose the boundary values. Since a third constraint is needed to estimate the parameters of the second-order equation, the derivative is set to 0 at the lower bound.
Figure 72 displays the resulting function estimating the reduction of stress at the ID when the lower bound and upper bound are respectively set to -300 MPa and 200 MPa.
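A sketch of this quadratic interpolation is given below: the coefficients of the second-order polynomial are determined from the three conditions (a reduction of 100 MPa at the lower bound, 400 MPa at the upper bound, and a zero derivative at the lower bound), with the bound values of -300 MPa and 200 MPa matching the example in Figure 72. This is an illustrative reimplementation of the calculation described above, not the original Excel sheet.

```python
# Sketch: quadratic function giving the ID stress reduction as a function of the
# initial ID WRS, with r(low)=100 MPa, r(high)=400 MPa, and r'(low)=0.
import numpy as np

def reduction_coefficients(low=-300.0, high=200.0, r_low=100.0, r_high=400.0):
    """Solve for a, b, c in r(s) = a*s**2 + b*s + c from the three conditions."""
    A = np.array([
        [low**2,  low,  1.0],   # r(low)  = r_low
        [high**2, high, 1.0],   # r(high) = r_high
        [2*low,   1.0,  0.0],   # r'(low) = 0
    ])
    rhs = np.array([r_low, r_high, 0.0])
    return np.linalg.solve(A, rhs)

a, b, c = reduction_coefficients()
for s in (-300.0, -100.0, 0.0, 200.0):
    print(s, a * s**2 + b * s + c)   # reduction in MPa at initial ID WRS s
```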
Figure 72: Reduction of stress at the ID (y-axis) as a function of initial WRS (x-axis) (MPa)
The reduction of stress is effective up to a depth of about 20% through the thickness. A slight linear attenuation (set to 20% of the original value at 20% through the thickness) was applied.
The linear attenuation makes the corresponding integral a trapezoid (Figure 73).
Figure 73: Conceptual reduction of stress at the ID with linear attenuation
Equilibrium is estimated on a weld (a ring) rather than on an infinite plane surface. So, the equation also needs to be integrated along the perimeter to cover the full volume of the ring (Figure 74).
Figure 74: Integration applied along the ring
The volume integration is therefore equal to the product of the change in stress and the surface area of the ring. The change in stress decreases linearly and is therefore taken as the average between the maximum change and the minimum change. Let Δ be the maximum stress change at the ID and suppose the attenuation is about 20% (meaning the minimum stress change is equal to 0.8Δ). The average stress change is therefore equal to:
D = (Δ + 0.8Δ)/2 = 0.9Δ

The surface area of the ring is equal to the difference between the area of a circle using the outer radius of the ring and the area of a circle using the inner radius of the ring. The inner radius is equal to R_i (the inner radius of the weld), while the outer radius is equal to R_i + 0.2t (the rule is applied up to 20% through the thickness, t being the wall thickness). The surface area is therefore equal to:
S = π(R_i + 0.2t)² − πR_i² = π(0.4 R_i t + 0.04 t²) = 0.4πt(R_i + 0.1t)

The volume is therefore equal to:
V_ID = 0.9Δ × 0.4πt(R_i + 0.1t) = 0.36πΔt(R_i + 0.1t)
The rules indicate that the axial stress will be increased at the OD in a proportional way (to compensate for the changes at the ID). The gain in WRS (noted G) is between 100 and 150 MPa. It is expected that the gain is proportional to the reduction in stress at the ID. A linear interpolation between 100 and 150 MPa as a function of the reduction at the ID is shown below in Figure 75.
Figure 75: Estimation of increase of WRS at the OD as a function of the decrease of WRS at the ID
To estimate the WRS variation from the OD into the weld, we apply the same linear attenuation of the effect at the OD as for the ID (by default, 20% attenuation). This creates a volume similar to the integrated volume at the ID.
The corresponding equation is:
V_OD = 0.9G × π(R_o² − R_x²)

where R_o represents the outer radius of the weld and R_x the inner radius of the ring used for the integral.
The only unknown in the equation is R_x. Since we want to respect the equilibrium condition, we select the value of R_x so that the integral at the OD cancels the integral at the ID, i.e., V_OD − V_ID = 0. A conceptual representation (in 2D) is shown in Figure 76, noting that the calculation is slightly more complex since we integrate over a larger ring for the OD correction.
Figure 76: Conceptual representation of selection of position on the outer part of the weld so that the two integrals cancel each other
Setting V_OD = V_ID, R_x is estimated using the following formula:

R_x = sqrt(R_o² − 0.4 (Δ/G) t (R_i + 0.1t))
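Under the notation reconstructed above, the sketch below checks numerically that the two ring integrals cancel for this expression of R_x; the geometry and stress values are placeholders for illustration, not V.C. Summer inputs.

```python
# Sketch: numerical check that the ID and OD ring integrals cancel for the
# reconstructed expression of R_x (placeholder geometry and stress values).
import math

R_i = 0.37      # inner radius of the weld (m), placeholder
t = 0.06        # wall thickness (m), placeholder
R_o = R_i + t   # outer radius of the weld
delta = 350.0   # maximum stress change at the ID (MPa), placeholder
gain = 135.0    # stress gain at the OD (MPa), placeholder

V_id = 0.36 * math.pi * delta * t * (R_i + 0.1 * t)

# Closed form for the inner radius of the OD ring, from V_OD = V_ID
R_x = math.sqrt(R_o**2 - 0.4 * (delta / gain) * t * (R_i + 0.1 * t))

V_od = 0.9 * gain * math.pi * (R_o**2 - R_x**2)
print(R_x, V_id, V_od)   # V_id and V_od should match
```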
The last step is simply to link the two corrections with a function having the following properties, as illustrated in Figure 77:
- It is equal to 0.8 times the ID stress change at 20% of the thickness from the ID
- It is equal to 0.8 times the OD stress gain at the inner edge of the OD ring (R_x)
- It integrates to zero between these two positions
These three conditions can be met using a quadratic function. In theory, the third condition is slightly more complex, as it should be integrated over the ring defined between the 20% depth and R_x. The correction would be minimal and would require a more complex implementation. As a result, a simple integral of the second-order polynomial is used instead.
Figure 77: Illustration of the link between the two corrections
The approach leads to the following two axial WRS profile updates for V.C. Summer IO (Figure 78) and V.C. Summer OI (Figure 79).
Figure 78: Initial WRS profile (plain blue line) and mitigated WRS profile (dashed orange line) when MSIP is considered for V.C Summer IO Figure 79: Initial WRS profile (plain blue line) and mitigated WRS profile (dashed orange line) when MSIP is considered for V.C Summer OI Page A-6
A.2. Hoop WRS Profile Update for MSIP Mitigation
The change in hoop WRS for MSIP is applied through the entire thickness. All changes are applied as a reduction at the ID and a linear attenuation of this reduction through the thickness.
These two correction terms varied for each scenario considered in the WRS report and were not directly proportional to the value at the ID. The initial ID value, maximum value, and average value through the wall thickness were thus used to estimate these two correction factors (using an Excel solver). The resulting fit can be seen in Figure 80.
Figure 80: Fitting of change and slope using estimates from WRS report
The resulting change in the hoop WRS profile for V.C. Summer IO is displayed in Figure 81.
Figure 81: Hoop WRS profile and updated MSIP version for V.C. Summer IO
A.3. Axial WRS Profile Update for Overlay Mitigation
The WRS report [10] lists a set of conditions to estimate the change in WRS when an overlay is applied:
- 1. If the axial WRS prior to WOL is compressive, there is little change or a slight increase in WRS after WOL to about x/t~0.2 to 0.3.
- 2. If the axial WRS is tensile prior to WOL, then WOL causes a shift toward compressive stresses (or a reduction from tension to lower tension). This is observed for repaired welds.
- 3. High tensile WRS caused by repairs is lowered by WOL.
- 4. Hoop stresses decrease at least 200 MPa near the ID and level out to the non-WOL results near OD (at WOL OD).
- 5. Solutions for cold legs and surge lines should be given priority since R/t is similar and thickness is larger than spray/relief lines.
The rules for axial WRS are like the ones used for MSIP. The exception is that the WRS at the ID is not always reduced and can remain unchanged or even increase when the initial ID is under compression.
The change in WRS at the ID is based both on the rules and the estimates reported in the WRS Report [10]. Figure 82 displays the change in WRS at the ID based on the maximum value observed in the first 20% through the thickness. The green dots represent the values estimated by the WRS subgroup for generic plants. The bounds have been based on expert elicitations.
The linear fit (in red) is obtained using the least square approach.
Since Overlay is expected to have an impact on WRS, another fit is proposed (in dashed blue lines), where the change is at least 50 MPa in absolute value.
Figure 82: Estimated change in WRS at the ID with overlay based on the maximum value observed for WRS in the first 20% through thickness
The resulting WRS profile with both fits can be seen in Figure 83.
Figure 83: Resulting mean WRS profile for Weld Overlay using both fits
A.4. Hoop WRS Profile Update for Overlay Mitigation
The implementation of the overlay correction on hoop WRS is the simplest approach. The WRS value at the ID is decreased by 200 MPa, and the correction decreases linearly to 0 over the entire thickness. An example of such a correction is shown in Figure 84 for the V.C. Summer IO scenario.
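The hoop WRS correction for overlay can be summarized in a few lines: subtract 200 MPa at the ID and let the subtraction decay linearly to zero at the OD. The sketch below applies this rule to a placeholder profile; it is an illustration of the rule stated above, not the xLPR implementation.

```python
# Sketch: hoop WRS correction for weld overlay: -200 MPa at the ID, decaying
# linearly to 0 at the OD (placeholder initial profile, illustration only).
import numpy as np

def apply_overlay_hoop_correction(x_over_t, wrs, delta_id=200.0):
    """x_over_t: depth fraction from ID (0) to OD (1); wrs: hoop WRS in MPa."""
    correction = -delta_id * (1.0 - np.asarray(x_over_t))
    return np.asarray(wrs) + correction

x = np.linspace(0.0, 1.0, 11)
wrs_initial = 300.0 * np.cos(np.pi * x)   # placeholder hoop WRS profile (MPa)
wrs_mitigated = apply_overlay_hoop_correction(x, wrs_initial)
for xi, w0, w1 in zip(x, wrs_initial, wrs_mitigated):
    print(f"x/t={xi:.1f}  initial={w0:7.1f} MPa  mitigated={w1:7.1f} MPa")
```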
Figure 84: Hoop WRS profile and updated Overlay version for V.C. Summer IO
A.5. Axial WRS Profile Update for Inlay Mitigation
Initially, axial WRS was updated following the rules recommended and used by the WRS subgroup for inlay. The resulting profile can be seen in Figure 85.
Figure 85: Initial and updated axial WRS profile when inlay is considered on V.C. Summer IO
A.6. Hoop WRS Profile Update for Inlay Mitigation
Hoop WRS profiles are changed due to inlay by increasing the value at the ID to vary between the yield strength of the material and 1.75 times the yield strength. The value used was obtained by fitting a curve to the values used in the scenarios considered in the WRS document. The resulting fit (purple curve) can be seen in Figure 86.
Figure 86: Fitting distribution (purple) to estimate the value of Hoop WRS at the ID when Inlay is applied, based on the scenarios used in the WRS document
The increase attenuates linearly up to 20% through the thickness. From this point to the OD, the WRS profile remains unchanged.
Figure 87: Hoop WRS profile and updated Inlay version for V.C. Summer IO
ESTIMATION OF CONFIDENCE INTERVALS
Statistical stability is one important aspect of acceptance analysis for a probabilistic code. The validation of a probabilistic framework differs from its deterministic counterpart as it also requires taking the sampling structure and sample size into consideration.
Just as a numerical model's accuracy depends on the discretization of space and time, the accuracy of a Monte Carlo result will depend on the sample size under consideration. The required sample size will vary not only as a function of the output of interest but also as a function of the purpose of the calculation and the result itself. For instance, if the probability of concern is in the 10-4 range, a result around 10-7 with an accuracy of one or two orders of magnitude may be acceptable. In the same way, a result around 10-1 will be high enough to conclude that there is an issue and would not require more runs. However, a result in the [10-5; 10-4] range would need enhanced accuracy.
Several methods and statistics can be used to assess the accuracy of a probabilistic output and are presented below.
The standard error is one such measure. It assesses the statistical accuracy of an estimate (a mean, a specific quantile, etc.). As an example, let m be the mean of an output of interest. If the analysis were performed several times with the same sample size but different random seeds, an estimate for the mean could be calculated each time. The collection of all the mean estimates would provide an estimate of the distribution for this mean. Repeating the operation for many random seeds would make this distribution converge to the true distribution. The standard deviation of this distribution would then be a good indicator of how much the mean can vary given the sample size.
Due to the law of large numbers, the standard error for the mean can be directly estimated from the sample standard deviation and sample size with the formula SE = s/√n, where s represents the sample standard deviation and n the sample size.
One of the limitations of this approach is that it can only be used for the mean. For other statistics (quantiles), one has to use either replicated samples or the bootstrap, described in more detail in the following section.
While the standard error is a good quantitative indicator of stability, it does not provide the graphical appeal of other quantities. From the standard error (or another method), a confidence interval can be estimated. The confidence interval is associated with a quantile value α (from the open interval (0, 1)). A confidence interval represents a range in which the true value of the statistic will lie with probability α. Usually, α is taken as a high quantile. For this analysis, we set α = 0.95 to estimate a 95% confidence interval, which gives 95% confidence that the statistic is within these bounds.
As for any statistics generated, there is always a possibility that the true value is on the edges of the distribution generated (in other words, that the estimate is over-predicting or under-predicting).
Another indicator of stability is the distribution of the statistic itself. Due to the central limit theorem, it is expected that the mean should be normally distributed. For very low probabilities, the mean may have a skewed distribution, a potential indicator of an insufficiently large sample.
The following two sections present methods considered in this document.
B.1. Bootstrap Approach
The bootstrap method has become increasingly popular over the past 15 years due to the increase in computing capabilities. Correctly used, it can estimate the distribution of a selected statistic/measure, from which standard errors and confidence intervals can be extracted.
The bootstrap method consists of taking the list of sampled values for a selected output and sampling from it with replacement (thus considering it as a population from which a new sample of similar size can be drawn). This sample is taken with replacement, so it is not unique and will not match the initial sample. For instance, as a result of the replacement, any value can be sampled more than once in one instance, and not at all in another instance. The operation is repeated many times (usually 10,000 or more) to generate as many sets of results. From each, the statistic or measure of interest can be extracted: mean, median, other moments, or quantile values. Since the operation is repeated many times, a sample estimate for each of these measures is generated and can be ordered to create a distribution from which the standard error can be extracted.
There is a limit to the accuracy that can be achieved with the bootstrapping method, especially for extreme quantiles. The simple bootstrap, for instance, cannot be used to estimate the minimum or maximum: it will never generate extreme values beyond the initial values sampled, even though it is likely that more extreme values could be reached with the same sample size or a larger one. For the same reason, extreme (high or low) quantiles will require an appropriate initial sample size to be accurate. The sample sizes will thus have to be larger than for estimating the accuracy of the sample mean. A bootstrap assessment of the estimation error is presented and discussed in [11].
Different bootstrap techniques have been developed and can be used to estimate confidence intervals, such as the one using the standard error estimated via bootstrap, the percentile bootstrap (or p-bootstrap), and the bias-corrected bootstrap.
In the present document, we use the percentile bootstrap as some of the statistics we are interested in are very low in probability and the use of standard error could lead to non-physical results (negative probabilities).
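A minimal percentile-bootstrap sketch is shown below: the sampled output values are resampled with replacement many times, the statistic of interest is computed for each resample, and the confidence interval is read from the percentiles of the resulting distribution. This is a generic illustration of the method described above with placeholder data, not code from the xLPR framework.

```python
# Sketch: percentile bootstrap for a 95% confidence interval on the mean
# (generic illustration; the data vector is a placeholder).
import numpy as np

def percentile_bootstrap_ci(values, statistic=np.mean, n_boot=10000, alpha=0.95, seed=0):
    rng = np.random.default_rng(seed)
    values = np.asarray(values)
    stats = np.empty(n_boot)
    for b in range(n_boot):
        resample = rng.choice(values, size=values.size, replace=True)
        stats[b] = statistic(resample)
    lo, hi = np.percentile(stats, [100 * (1 - alpha) / 2, 100 * (1 + alpha) / 2])
    return lo, hi

# Placeholder binary outcomes: 1 = event (e.g., leakage), 0 = no event.
outcomes = np.zeros(2500)
outcomes[:46] = 1.0
print(percentile_bootstrap_ci(outcomes))
```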
B.2. Inverse Binomial Distribution
We plan to look at the probabilities of 1st crack, 1st leak, and rupture. Since we consider a single loop, we estimate our probabilities from a set of instances which are equal to 0 (no event) or 1 (event).
If we apply the bootstrap to this set of data, every time a value is randomly selected, we have a probability p of drawing a 1 and (1 − p) of drawing a 0, where p is simply the number of 1s observed divided by the total number of realizations. In other words, we are essentially performing n Bernoulli trials each time we create a bootstrap resample, which means we can use a binomial distribution.
With bootstrap, we consider our sample as the new population, so our p value is fixed. Instead of running a large number B of bootstrap resamples to estimate confidence intervals, we can simply use the inverse CDF of the binomial distribution, with a probability p and a sample size n.
Excel performs this calculation with the BINOM.INV function, allowing for easy variation over time without having to save unnecessary data. Provided the probability and initial sample size are known, more stable and accurate estimates of confidence intervals can be generated than when using the classical bootstrap methodology. Furthermore, we can discretize a set of quantiles to represent the distribution of the mean value quite easily and infer what could be a valid sample size to reach stability.
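The Excel BINOM.INV calculation has a direct equivalent in the binomial percent-point function of scipy, as sketched below for the example of two observed events out of 2,500 realizations discussed next. This is a generic illustration, not the spreadsheet used in the study.

```python
# Sketch: confidence interval on a binomial mean using the inverse binomial CDF,
# the scipy equivalent of Excel's BINOM.INV (two events out of 2,500 realizations).
from scipy.stats import binom

n = 2500
k_observed = 2
p_hat = k_observed / n

# 95% interval on the estimated probability, expressed as counts/n
lower = binom.ppf(0.025, n, p_hat) / n
upper = binom.ppf(0.975, n, p_hat) / n
print(f"p_hat = {p_hat:.2e}, 95% interval = [{lower:.2e}, {upper:.2e}]")

# Quantiles describing the distribution of the mean, as in Figures 88-90
for q in (0.05, 0.25, 0.50, 0.75, 0.95):
    print(q, binom.ppf(q, n, p_hat) / n)
```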
Below is an example of only two cases of leakage out of 2500 realizations at 60 years. Figure 88 displays the result of the CDF, a step function with a definite positive skew, indicating that the sample size needs to be larger.
Figure 88: Distribution of the mean value for a sample of size 2,500 (using binomial distribution approximation)
Suppose the same probability were estimated with larger sample sizes (Figure 89 and Figure 90). As these figures show, the distribution approaches a normal shape without bias, which is an indication of a more stable mean.
Figure 89: Distribution of the mean value for a sample of size 10,000 (using binomial distribution approximation)
Figure 90: Distribution of the mean value for a sample of size 25,000 (using binomial distribution approximation)
Of course, these estimates are purely theoretical, as the value of p would change and the shape of the distribution would change accordingly. The advantage of this analytical method is increased accuracy and improved speed. However, this method is applicable only to confidence intervals on mean values, and solely when the variable is discrete and can take the value of 0 or 1. It is therefore limited to the confidence intervals for the probability of crack initiation, leakage, or rupture, or any other binomial outcome (LOCAs, for instance). While it is possible to develop such a technique when importance sampling is used, the implementation would be more complex, and a bootstrap approach would be simpler to implement and explain.
METHODOLOGY USED FOR REGRESSION ANALYSIS
C.1. Linear and Rank Regression Analysis
When performing a regression analysis, one tries to construct a model that will explain the variance in the response (y) via the variance of the inputs (x) (bold font is used for vectors).
Given a distribution of values of y, i.e., y_1, y_2, ..., y_n, one can calculate the mean ȳ and the variance V̂(y). The variance summarizes how much each value deviates from the mean on average. The variance of y can be estimated with the equation:

$$\hat{V}(y) = \frac{1}{n-1}\sum_{i=1}^{n}\left(y_i - \bar{y}\right)^2$$

where n represents the sample size and y_i each realization.

Usually, since the factor 1/(n-1) does not really affect the results, it is taken out of the equation to express only the sum of squares, i.e.:

$$SS_{tot} = \sum_{i=1}^{n}\left(y_i - \bar{y}\right)^2$$

where SS_tot stands for the total sum of squares.
This variance is associated with the uncertainty in as the model is deterministic. If one uses the same set of , one ends up with the same value of .
The goal of the regression technique is to construct a function f such that ŷ = f(x) estimates y from the values of x. That is to say, one estimates ŷ such that the difference between y and ŷ is minimized (there are some instances where this minimization is not necessarily the most desired feature, especially for extrapolation, but in this situation it is close enough).
It is now useful to decompose the variance into the part that is explained by the model and the residual; SS_tot is thus rewritten as follows:

$$SS_{tot} = \sum_{i=1}^{n}\left(\hat{y}_i - \bar{y}\right)^2 + 2\sum_{i=1}^{n}\left(\hat{y}_i - \bar{y}\right)\left(y_i - \hat{y}_i\right) + \sum_{i=1}^{n}\left(y_i - \hat{y}_i\right)^2$$

When a least-squares approach is used on a linear regression, the double product is equal to zero (the least-squares approach minimizes the squared difference, which is equivalent to finding the zero of the derivative, which includes this double product), and thus the total sum of squares simplifies to:

$$SS_{tot} = SS_{reg} + SS_{res} = \sum_{i=1}^{n}\left(\hat{y}_i - \bar{y}\right)^2 + \sum_{i=1}^{n}\left(y_i - \hat{y}_i\right)^2$$

where SS_reg represents the portion of variance explained by the regression while SS_res represents the residual.
The coefficient of determination, or R², is defined as the ratio between the regression sum of squares and the total sum of squares:

$$R^2 = \frac{SS_{reg}}{SS_{tot}}$$
Of course, this definition changes slightly depending on the regression used, but it will always come back to the same idea of assessing how much variance can be attributed to the regression model.
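As a simple numerical illustration of these definitions (generic Python, not tied to any particular xLPR output):

```python
import numpy as np

def r_squared(y, y_hat):
    """Coefficient of determination from the sums of squares."""
    y, y_hat = np.asarray(y, float), np.asarray(y_hat, float)
    ss_tot = np.sum((y - y.mean()) ** 2)       # total sum of squares
    ss_res = np.sum((y - y_hat) ** 2)          # residual sum of squares
    ss_reg = np.sum((y_hat - y.mean()) ** 2)   # regression sum of squares
    return ss_reg / ss_tot, 1.0 - ss_res / ss_tot

# For a least-squares linear fit the two expressions coincide
x = np.linspace(0, 1, 50)
y = 2.0 * x + np.random.default_rng(1).normal(0, 0.1, 50)
y_hat = np.polyval(np.polyfit(x, y, 1), x)
print(r_squared(y, y_hat))
```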
With enough variables of course, one will always find a combination that gives the perfect match.
After all, this is purely linear regression, and if one has as many unknowns (input variables) as equations (sample size) there is a unique solution that reproduces the exact values. This process is akin to using a polynomial of order n to go through n+1 points and would result in overfitting.
This is the reason why stepwise regression was developed. The regression is constructed one parameter at a time. First one takes the best regression with only one parameter (highest R2),
then the best regression with 2 parameters including the first parameter selected, and so on. In and out criteria are used at each step to check if (1) there is benefit in adding another input (in criteria) and (2) the model would be more stable by taking out a previously selected input (out criteria).
As a result, the stepwise regression gives a progressive (incremental) R², which shows how much variance is gained at each step when adding a new input into the regression model. The incremental R² at the final step corresponds to the final R². This means that, in theory, the incremental R² should never decrease (it either increases or stays stable). In practice, and in the study presented in this report, R² does indeed increase, except that in the final tables the inputs are re-ordered according to all three regressions (and not only the stepwise regression), which scrambles the increasing order of the incremental R². As a result, the final R² for the stepwise regression is the maximum over the incremental R² values, due to the reordering of the inputs.
Stepwise regression is used to represent linear (and additive) impact. It is not that common to have linear impact of the inputs on the output, so the method is extended by working on the rank values. The advantage of rank values is that it expands the linear approach to monotonic relations, which are far more common. Furthermore, it reduces the impact of outliers.
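For illustration, a simplified sketch of forward stepwise selection on ranks is given below (Python; the in/out criteria are reduced to a single minimum R² gain for brevity, which is a simplification of the procedure actually used):

```python
import numpy as np
from scipy.stats import rankdata

def stepwise_rank_regression(X, y, r2_gain_min=0.01):
    """Forward stepwise regression on ranks, returning the selected columns
    and the incremental R2 after each addition."""
    Xr = np.column_stack([rankdata(X[:, j]) for j in range(X.shape[1])])
    yr = rankdata(y)
    selected, r2_steps, best_r2 = [], [], 0.0
    while True:
        best_j, best_new = None, best_r2
        for j in range(X.shape[1]):
            if j in selected:
                continue
            A = np.column_stack([np.ones(len(yr)), Xr[:, selected + [j]]])
            beta, *_ = np.linalg.lstsq(A, yr, rcond=None)
            resid = yr - A @ beta
            r2 = 1.0 - resid @ resid / ((yr - yr.mean()) @ (yr - yr.mean()))
            if r2 > best_new:
                best_j, best_new = j, r2
        if best_j is None or best_new - best_r2 < r2_gain_min:
            break  # "in" criterion not met: stop adding inputs
        selected.append(best_j)
        best_r2 = best_new
        r2_steps.append(best_r2)
    return selected, r2_steps
```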
C.2. Information Available in the Linear Regression
Final R² column: a number between 0 and 1 that reports how much of the output variance is explained by the regression technique. The closer R² is to 1, the more variance is explained; the closer it is to 0, the less variance is explained.
For (stepwise) rank regression:
R2 inc: Incremental increase of R2 for stepwise rank regression. Note that while this is an increasing number in the initial stepwise regression, it may not be when all inputs are ordered by importance considering all regressions.
R² cont.: contribution of this specific input to the final R². This is a good indicator of how much variance in the output is explained by this input (even if it is not directly calculated).
SRRC (Standardized Rank Regression Coefficient): Coefficient associated with this parameter in the regression model. The higher it is in absolute value, the higher is the importance of this parameter. This coefficient is used here not for ranking the variables (R2cont is a better estimator) but to identify the monotonic effect of the input towards the output. Negative sign means negative influence (high input values associated with low output values), while positive sign means positive influence (high input values associated with high output values).
C.3. Nonmonotonic Regressions
Stepwise regression is a good tool that gives good results 60 to 80 percent of the time, as the relations tend to be monotonic in the problems considered. However, the method lacks the ability to capture non-monotonic and non-additive relations (conjoint influence). The addition of mitigation, thresholds, and human interactions tends to create non-monotonic responses (the most obvious failures will be found); furthermore, it is common to have complex physics in the problem solved, creating conjoint influence of the input parameters.
The last two regression methods, recursive partitioning and multivariate adaptive regression splines (MARS), work differently from stepwise linear and rank regression: they first fit the regression to create a meta-model (or response surface). This analytical, and therefore extremely fast, model is then used to estimate the influence of each input as if the regression were correct.
For recursive partitioning and MARS, the first-order and total-order sensitivity indices are estimated using the Sobol decomposition technique, a variance decomposition method. This technique requires a very large number of model evaluations, and therefore the regression meta-model is used to estimate the indices.
In the Sobol approach, the first-order sensitivity index estimates the influence of one input X_i by itself. It is computed using two samples of the same size. In the first sample, all input values vary. In the second, the values of X_i are kept identical to the first sample while all the other values are changed. A comparison between the two samples is used to estimate how much X_i by itself influences the response. The process is repeated for each X_i and gives all the first-order sensitivity indices S_i.
In theory, one can repeat the same approach to estimate the second-order sensitivity index for any couple (X_i, X_j), by running two more samples with both X_i and X_j fixed in the second sample, and so on. This is usually not done because the first-order indices already require many model evaluations, and the number of interactions would quickly become prohibitive. Saltelli and Homma devised another way to tackle the problem: if one does the opposite (fixing all values except X_i in the second sample), one estimates the influence of all the variables and their interactions except those involving X_i. By taking the difference with the total variance, one obtains the influence of X_i and all its interactions. This approach requires only two samples per input and is thus used to estimate the total-order indices T_i.
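The construction of the two samples described above can be sketched as follows (a generic pick-freeze Monte Carlo estimator written in Python and applied to a fast meta-model; the exact estimator formulas used in the study may differ):

```python
import numpy as np

def sobol_indices(model, sampler, n=10000, k=3, seed=0):
    """First-order (Si) and total-order (Ti) Sobol indices by the
    pick-freeze / Saltelli approach, using a fast meta-model 'model'.

    model   : callable taking an (n, k) array and returning n outputs
    sampler : callable (rng, n, k) -> (n, k) array of independent inputs
    """
    rng = np.random.default_rng(seed)
    A, B = sampler(rng, n, k), sampler(rng, n, k)
    fA, fB = model(A), model(B)
    var = np.var(np.concatenate([fA, fB]))
    Si, Ti = np.zeros(k), np.zeros(k)
    for i in range(k):
        ABi = A.copy()
        ABi[:, i] = B[:, i]                      # only X_i is changed
        fABi = model(ABi)
        Si[i] = np.mean(fB * (fABi - fA)) / var          # first-order index
        Ti[i] = 0.5 * np.mean((fA - fABi) ** 2) / var    # total-order index
    return Si, Ti

# Example on a simple additive-plus-interaction test function
f = lambda x: x[:, 0] + 2 * x[:, 1] + x[:, 0] * x[:, 2]
unif = lambda rng, n, k: rng.uniform(0, 1, size=(n, k))
print(sobol_indices(f, unif, n=20000, k=3))
```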
So, for each input X_i it is possible to calculate estimates of its sole influence, called S_i, and of its influence including all of its interactions (X_i·X_j, X_i·X_j·X_k, and so on), called T_i. The difference (T_i - S_i) thus represents the influence on the output y of all the interactions that include X_i, excluding the sole influence of X_i. This is what is used for the conjoint contribution.
C.4. Information Presented for Non-Monotonic Regressions
Si (first-order sensitivity index): indicates how much of the output variance is explained by this input alone (it can be compared with R² cont. from stepwise regression). This represents the influence of the parameter by itself.
Ti (total-order sensitivity index): indicates how much of the output variance is explained by this input and all of its interactions (including with itself). There is no equivalent for stepwise regression, which is an additive regression (no conjoint influence). The quantity (Ti - Si) represents the influence of all the interactions for which input Xi is responsible, excluding its sole influence.
Summary Indicators
As mentioned above, the non-monotonic regressions estimate input parameter importance as if the regression were perfect, whereas stepwise regression estimates the contributions and the final R² together. For this reason, the influences estimated with recursive partitioning and MARS are corrected via the final R² of the corresponding regression. Such a correction is not necessary for stepwise regression.
Main contribution: The main contribution is the indicator used to rank all inputs according to their importance in terms of uncertainty. It is the average of the normalized influences from the three regressions (R² cont. for the stepwise regression and Si for the other two). If an input is not considered by one regression, its value is set to 0 for that regression. R² cont. is used directly without any normalization. Because the Si reflect the contribution as if the regression model were perfect, they are normalized by multiplying by the final R² of the regression considered.

Example for HOOPWRS:
- R² cont. = 0.27 (WRS explains 27% of the variance according to stepwise regression)
- Si = 0.33 for recursive partitioning (33% of the recursive partitioning model is explained by WRS), but the final R² is 0.78, meaning the Sobol decomposition was performed on a model explaining only 78% of the variance
- Si = 0.50 for MARS (50% of the MARS model is explained by WRS), but the final R² is 0.53, meaning the Sobol decomposition was performed on a model explaining only 53% of the variance

No credit is taken for the unexplained variance and no preference is given to any regression technique over another. The final formula is:

$$\text{Main contribution} = \frac{1}{3}\left[0.27 + (0.33 \times 0.78) + (0.50 \times 0.53)\right] \approx 0.26$$
Conjoint contribution: the average normalized influence from the two non-additive regressions, based on (Ti - Si). Because (Ti - Si) reflects the contribution as if the regression model were perfect, it is normalized by multiplying by the final R² of the regression considered. If the conjoint contribution is greater than 0.1, it is highlighted in yellow in the results tables; if not, it is probably better not to consider it.

Example for HOOPWRS:
- Stepwise regression: nothing, because this is an additive regression and does not estimate conjoint influence; it is therefore not included in the average.
- Recursive partitioning: Ti - Si = 0.69 - 0.33 = 0.36 (36% of the recursive partitioning model is explained by interactions including WRS), but the final R² is 0.78, meaning the Sobol decomposition was performed on a model explaining only 78% of the variance.
- MARS: Ti - Si = 0.56 - 0.50 = 0.06 (6% of the MARS model is explained by interactions including WRS), but the final R² is 0.53, meaning the Sobol decomposition was performed on a model explaining only 53% of the variance.

No credit is taken for the unexplained variance and no preference is given to either regression technique. The final formula is:

$$\text{Conjoint contribution} = \frac{1}{2}\left[(0.36 \times 0.78) + (0.06 \times 0.53)\right] \approx 0.156$$
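For reference, the combination of the regressions into the two summary indicators can be reproduced in a few lines (Python; the numbers are the rounded HOOPWRS values from the example above):

```python
def main_contribution(r2cont_sr, si_rp, r2_rp, si_mars, r2_mars):
    """Average of the normalized influences over the three regressions."""
    return (r2cont_sr + si_rp * r2_rp + si_mars * r2_mars) / 3.0

def conjoint_contribution(ti_si_rp, r2_rp, ti_si_mars, r2_mars):
    """Average of the normalized (Ti - Si) over the two non-additive regressions."""
    return (ti_si_rp * r2_rp + ti_si_mars * r2_mars) / 2.0

# HOOPWRS example with the rounded values shown above
print(main_contribution(0.27, 0.33, 0.78, 0.50, 0.53))
print(conjoint_contribution(0.69 - 0.33, 0.78, 0.56 - 0.50, 0.53))
```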
VOLUME 2
FINAL REPORT
Analysis of Tsuruga Pressurizer Nozzle using the xLPR Code
for NRC-HQ-60-14-E-0001, NRC-HQ-60-14-T-0014
Extremely Low Probability of Rupture (xLPR) Leak-Before-Break (LBB) Regulatory Guide Support
NRC CONTRACT NUMBER: NRC-HQ-60-14-E-0001-T-0001, Task Order No. 1
Small Business Task Order
FRACTURE MECHANICS STRUCTURAL INTEGRITY EVALUATION, ANALYSIS & SUPPORT on Extremely Low Probability of Rupture (xLPR) Leak-Before-Break (LBB) Regulatory Guide Support
ENGINEERING MECHANICS CORPORATION OF COLUMBUS
3518 RIVERSIDE DRIVE - SUITE 202
COLUMBUS, OHIO 43221-1735
TABLE OF CONTENTS
1.0 Overview
2.0 Recommended Approach
3.0 Generation of Reference Probabilistic and Deterministic Cases
3.1 Selection of Inputs and Outputs
4.0 Folder Structure
5.0 Deterministic Reference
6.0 Probabilistic Reference: SA, UA, and Stability Analysis
6.1 Tsuruga Sensitivity Analysis (DM1 Initiation)
6.1.1 Regression Analysis on Circumferential Crack Initiation Time
6.1.2 Regression Analysis on Axial Crack Initiation Time and Occurrence
6.1.3 Regression Analysis on Time to Axial Through-wall Crack
6.2 Tsuruga Sensitivity Analysis (Initial Flaw)
6.2.1 Interpretation of Conditional Results
6.2.2 Regression Analysis on Time to Axial Through-wall Crack
6.2.3 Regression Analysis on Ratio of Length to Depth for Largest Circumferential Surface Crack
6.3 Uncertainty Analysis
6.3.1 General Summary
6.3.2 Axial Crack Depth
6.3.3 Circumferential Crack Depth
6.4 Stability Analysis
6.4.1 Probability of First Crack Occurring
6.4.2 Probability of First Leak Occurring Conditional on One Initial Flaw
6.4.3 Probability of Pipe Rupture
7.0 Deterministic Sensitivity Studies
7.1 MSIP Analysis
7.2 Overlay Analysis
7.3 One-at-a-time Sensitivity Studies
8.0 Probabilistic Sensitivity Studies
8.1 Probabilistic Sensitivity Study on Inlay Mitigation
9.0 Revisiting Uncertainty Parameter
10.0 More Accurate Analyses
LIST OF ACRONYMS
CCDF    Complementary Cumulative Distribution Function
CDF     Cumulative Distribution Function
CI      Confidence Interval
FOI     Factor of Improvement
ID      Inner Diameter
LBB     Leak Before Break
LCB     Lower Confidence Bound
LEAPOR  Leak Analysis of Piping - Oak Ridge
MARS    Multivariate Adaptive Regression Splines
MSIP    Mechanical Stress Improvement Process
PFM     Probabilistic Fracture Mechanics
PMF     Probability Mass Function
PWSCC   Primary Water Stress Corrosion Cracking
RLZ     Realization
SA      Sensitivity Analysis
SC      Surface Crack
TWC     Through-Wall Crack
UA      Uncertainty Analysis
UCB     Upper Confidence Bound
WOL     Weld Overlay
WRS     Weld Residual Stress
xLPR    Extremely Low Probability of Rupture
1.0 Overview
The purpose of this study is to illustrate a recommended analysis approach for leak-before-break (LBB) studies. This study analyzes a dissimilar metal weld in the Tsuruga plant to complement the study performed using a similar approach for the V.C. Summer plant, which is documented in Volume 1. Application of the approach to Tsuruga is of interest because of the different weld residual stress (WRS) profiles observed at Tsuruga and V.C. Summer. The analysis follows the same overall plan and steps as outlined in Volume 1. As a recap, the recommended analysis steps are as follows:
Define purpose of the analysis: This step will help determine which outputs need to be analyzed and which runs need to be performed.
Generate probabilistic and deterministic reference cases: These analyses use the recommended values from the xLPR Inputs Group and serve as reference cases.
Perform sensitivity analyses: Sensitivity analysis estimates the importance of and ranks the uncertain inputs in terms of their contribution to uncertainty in the results. It is used to increase confidence in the model and to estimate the portion of the input space that needs further analysis.
Perform uncertainty and stability analyses: Uncertainty analysis summarizes the results for outputs of interest in terms of risk and helps the decisionmaker. Stability analysis assesses the analysts confidence in the results for the reference case and the potential need for a larger sample size or importance sampling.
Perform deterministic sensitivity studies: These sets of deterministic runs are compared to the deterministic reference case to understand the impacts of some selected inputs or alternative scenarios on the results. They focus on the physics of the system (i.e., the consequences).
Perform probabilistic sensitivity studies: These sets of probabilistic analyses are compared to the probabilistic reference case to understand the impact of some selected inputs or alternative scenarios on the results. They focus on the risk (i.e., the probability and consequences).
Revisit uncertain parameters: Once all the above analyses have been performed, a short list of inputs is identified as being of key importance to the analysis. Revisiting their distributions or associating a distribution in the case of constant parameters allows for increased confidence and brings more insights into the analysis. It may even reduce the uncertainty if improved distributions can be generated.
Run enhanced simulations: It is unlikely that the initial reference runs would suffice to draw conclusions for decisionmaking purposes, especially considering the rarity of the events under consideration. In addition to the steps described above, larger sample sizes, importance sampling and/or adaptive sampling can be used to generate more precise and stable responses for the outputs of interest.
The Tsuruga and V.C. Summer results differ mostly due to the WRS profiles associated with a double-V groove weld. High hoop (225 MPa) and axial (325 MPa) WRS values at the inner diameter (ID) generate higher probabilities of occurrence for axial and circumferential cracks, respectively. The mean axial WRS profile becomes negative between 1/4 and 2/3 through the thickness, with low values (less than -300 MPa) between 36% and 48%, so most of the circumferential cracks stop growing in this area. The probability for a circumferential crack to grow beyond this part of the weld is therefore very low (~10^-5). When a circumferential crack grows beyond this part, it is large enough to lead to rupture as soon as it becomes a through-wall crack.
2.0 Recommended Approach
Details on the various steps in the recommended approach are presented in Volume 1.
3.0 Generation of Reference Probabilistic and Deterministic Cases
The first step of any scenario analysis is to set up reference runs. These analyses primarily use the inputs recommended by the xLPR Inputs Group.
The probabilistic runs use a sample size of 2,500. In the reference case, no distinction is made between aleatory and epistemic uncertainty: only one type is used, and the input set Excel file is updated accordingly. While the choice between the outer (epistemic) and inner (aleatory) loop does not matter in a numerical sense, the specificities of the GoldSim sub-model elements make it more logical to use the inner loop when separation of aleatory and epistemic uncertainties is not required. All the uncertain variables are therefore set to aleatory. However, running 2,500 realizations in the inner model leads to memory issues. As a result, the sample size is kept at 100 for epistemic and 25 for aleatory. Resampling the aleatory values for each epistemic realization ensures that each input is sampled 2,500 different times.
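The nested sampling structure can be illustrated with a short sketch (plain Python for illustration only, not the GoldSim implementation; the input names and distribution parameters are placeholders):

```python
import numpy as np

rng = np.random.default_rng(42)
n_epistemic, n_aleatory = 100, 25   # 100 x 25 = 2,500 total realizations

results = []
for e in range(n_epistemic):
    # Outer (epistemic) loop: one draw of the epistemic quantities
    epistemic_inputs = {"wrs_id": rng.normal(325.0, 50.0)}      # placeholder
    for a in range(n_aleatory):
        # Inner (aleatory) loop: re-sample the aleatory quantities every time,
        # so every one of the 2,500 realizations uses a distinct input set
        aleatory_inputs = {"flaw_depth": rng.lognormal(-6.5, 0.35)}  # placeholder
        results.append((e, a, epistemic_inputs, aleatory_inputs))

print(len(results))  # 2500
```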
Previous analyses showed the importance of crack initiation in the mechanisms considered in Leak Before Break (LBB). Its influence is so important that it often hides the influence of other mechanisms. As a result, probabilistic runs are performed to serve as references:
- One with Direct Model 1 used for PWSCC initiation
- One with an initial flaw consisting of exactly one circumferential and one axial crack

For the deterministic reference run, all uncertain variables are set to their nominal (median) values as recommended by the Inputs Group. Since it would not be of interest to analyze a case with no crack, or with a crack occurring late in the simulation, the deterministic case uses an initial flaw for crack initiation, with one single circumferential and one single axial crack.
3.1 Selection of Inputs and Outputs
The inputs selected as uncertain by the xLPR Inputs Group are the same as for V.C. Summer and are listed below. In these lists, N stands for a normal distribution; its parameters are the mean and standard deviation, sometimes followed by truncation values (min and max). LN stands for a lognormal distribution; its parameters are either the mean and standard deviation or the mean of the log and standard deviation of the log, sometimes followed by truncation values (min and max). T stands for a triangular distribution; its parameters are the minimum, mode, and maximum. Lastly, U stands for a uniform distribution; its parameters are the minimum and maximum.
In Properties, the uncertain inputs are as follows:
- Effective full power years (year): N(mean=52.03; stdev=3.277; min=42; max=60)
- Fatigue Initial Flaw full length (mm): LN(mean=8.608; stdev=4.849)
- Fatigue Initial Flaw depth (mm): LN(mean=3; stdev=0.05)
- PWSCC Initial Flaw full length (m): LN(5.34; 0.8)
- PWSCC Initial Flaw depth (m): LN(6.5; 0.35; min=5×10^…; max=3×10^…)
- Temperature (°C): N(mean=344.9; stdev=0.0882)
- Fatigue Growth: LN(0; 0.139; min=0)
- Surface Crack Distance Rule Modifier: T(min=0; mode=0.5; max=0.75)
- TW Crack Distance Rule Modifier (mm): U(min=0; max=508)

For the left pipe (nozzle - carbon steel), the uncertain inputs are as follows:
- Yield Strength (MPa): LN(mean=399; stdev=68.05; min=265; max=583)
- Ultimate Strength (MPa): LN(mean=629; stdev=38.16; min=545; max=723)
- A rank correlation of +0.607 is imposed between Yield and Ultimate Strength
- Elastic Modulus (MPa): N(mean=174960; stdev=26244; min=148716; max=201204)
- Multiplier for TCF Scaling factor C1: LN(22; 0.4668)
- Multiplier for Med-Sulfur Scaling factor C2: LN(20.566; 0.4668)
- Multiplier for High-Sulfur Scaling factor C3: LN(15.767; 0.4668)
- EAC Threshold Scaling Factor: LN(0; 0.2103)
For the right pipe (safe end - stainless steel), the uncertain inputs are as follows:
- Yield Strength (MPa): LN(mean=197.1; stdev=53.86; min=102; max=355)
- Ultimate Strength (MPa): LN(mean=440.4; stdev=66.5; min=273; max=617)
- A rank correlation of +0.628 is imposed between Yield and Ultimate Strength
- Elastic Modulus (MPa): N(mean=176600; stdev=26490; min=150110; max=203090)
- Multiplier for CSS: LN(22; 0.42)
For the weld, the uncertain inputs are as follows:
- Elastic Modulus (MPa): N(mean=196800; stdev=29520; min=167280; max=226320)
- Material Init J-Resistance (N/mm): N(mean=524.3; stdev=181.9; min=225.1; max=947.4)
- Material Init J-Resistance Coefficient (N/mm): N(mean=586.3; stdev=76.2; min=460.9; max=763.6)
- Material Init J-Resistance Exponent: N(mean=0.661; stdev=0.074; min=0.2; max=1)
- Surface Finish Factor: LN(0.973; 0.170)
- Load Sequence Factor: LN(0.438; 0.155)
- Strain Threshold: N(mean=0.112; stdev=0.017)
- Strain Threshold Multiplier: N(mean=1; stdev=0.17)
- C0: N(mean=6.157; stdev=0.368)
- C0 Multiplier: N(mean=1; stdev=0.0668)
- Zinc Factor of Improvement -1: LN(0.29; 0.93)
- Direct Model 1 Proportionality Constant (year^-1 MPa^-1): LN(4.4; 3.66)
- Direct Model 1 Proportionality Constant Multiplier: LN(0; 2.89)
- Cni Multiplier: LN(0; 0.41)
- Activation energy for crack growth (kJ/mol): N(mean=104; stdev=20)
- Comp-to-comp variability factor: LN(0; 0.5; min=0.44; max=2.24)
- Within-comp. variability factor: LN(0; 0.37; min=0.335; max=2.04)
- Peak-to-valley ECP ratio -1: LN(4.52; 2.75)
- Characteristic Width of Peak vs ECP (mV): N(mean=18.2; stdev=5.5; min=18.2; max=51.2)

4.0 Folder Structure
Each scenario analysis will require a certain number of deterministic and probabilistic runs. It is difficult to assess exactly the number of simulations required upfront, especially considering that some of the necessary runs will be added after the results have been analyzed. However, each scenario will have at least one reference deterministic run and one reference probabilistic run. Since each run is associated with a unique Excel input file whose name is fixed (xLPR-2.0 Input Set.xlsx), as is the case for the text files created by the pre-processors (TIFFANY and LEAPOR), it is recommended to create a folder for each run inside a master folder that also includes the DLLs folder (required for running the code).
It is proposed that the modeler use the following convention to name each specific run folder.
Tsuruga - D01 - REFERENCE
(scenario - Ref.# - description)

The naming is composed of three parts:
- 1. Scenario: Identifies the scenario name (usually the power plant under consideration).
- 2. Ref.#: A unique reference number for this scenario. The key letter D is used for a deterministic run and P for a probabilistic run. A two-digit number follows to indicate the run number in the corresponding category.
- 3. Description: A description of the run, with a keyword left to the analyst's choice. The analysis presented here uses the keyword REFERENCE for the reference run. It is recommended that all reference runs use this keyword as well.
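A small helper of the following form (hypothetical; only the naming convention itself comes from this report) can be used to compose folder names consistently:

```python
def run_folder_name(scenario, kind, number, description="REFERENCE"):
    """Compose a run folder name such as 'Tsuruga - D01 - REFERENCE'.

    kind   : 'D' for a deterministic run or 'P' for a probabilistic run
    number : run number within that category (formatted as two digits)
    """
    return f"{scenario} - {kind}{number:02d} - {description}"

print(run_folder_name("Tsuruga", "D", 1))   # Tsuruga - D01 - REFERENCE
```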
An example folder structure for Tsuruga is presented in Figure 1.
Figure 1: Folder structure for Tsuruga

When possible, runs are performed using the full version of GoldSim (i.e., the .gsm file) rather than the player version. It is possible to transform a completed GoldSim run into a player-version equivalent, but not the other way around. Each file can be saved as a player version later if required. However, for those who do not have the full version of GoldSim, using the player version will also work.
5.0 Deterministic Reference
In order to run a deterministic case, the epistemic sample size is set to 1 both in the Excel file (#0101: User Options - cell E20) and in the GoldSim file (Simulation Settings: Monte Carlo: # realizations). The aleatory sample size is set to 2 (#0107: User Options - cell E26) because, even if the run is deterministic, GoldSim does not accept a probabilistic submodel with a sample size of 1. Using an aleatory sample size of 2 only adds about 4 seconds to the simulation, so it is not considered a limiting factor.
The deterministic analysis requires having at least one crack and there is no reason to consider a crack occurring at a time later than the beginning of the simulation. As a result, the crack initiation option (#0501: User Options - cell E78) is set to initial flaw (0). Only one axial crack and one circumferential crack are considered. As a result, the number of circumferential flaws
(#1209: Properties - cell H41) and axial flaws (#1214: Properties - cell H46) are both set to 1.
The deterministic reference case also uses the values recommended by the Inputs Group; however, all the input distributions are set to constant. By default, the median value for each distribution is provided in the deterministic value (column H). This value is kept for the deterministic reference (i.e., nominal) case.
Crack growth varies linearly according to the variability factor, which is split into two parameters:
the within-component variability factor fflaw (#2593: Weld - cell H135) and the component-to-component variability factor fcomp (#2592: Weld - cell H134). For a circumferential crack, the values for fflaw and fcomp are set to the median values of the recommended distributions, i.e., fflaw = fcomp = 1.
Circumferential crack evolution is displayed in Figure 2.
Figure 2: Circumferential crack evolution through time for the Tsuruga reference deterministic case: depth (left frame) and inner length (right frame)
For an axial crack, very fast growth is predicted (as seen in Figure 3).
Figure 3: Axial crack evolution through time for the Tsuruga reference deterministic case using median variability factors values: depth (left frame) and inner and outer length (right frame)
For the deterministic sensitivity studies, the component-to-component variability factor was changed from 1 to 0.1 to slow down crack growth by one order of magnitude. The resulting crack evolution is displayed in Figure 4.
Figure 4: Axial crack evolution through time for the Tsuruga reference deterministic case using fflaw=1 and fcomp=0.1: depth (left frame) and inner and outer length (right frame)

6.0 Probabilistic Reference: SA, UA, and Stability Analysis
The reference simulations are run using only one uncertainty loop of size 2,500 (aleatory, which is simpler due to its implementation). Due to the importance of the crack initiation mechanisms, one simulation uses crack initiation represented with Direct Model 1, and another simulation uses one existing initial flaw in each direction (axial and circumferential), resulting in two flaws for this second simulation. The resulting sensitivity analysis is presented below.
6.1 Tsuruga Sensitivity Analysis (DM1 Initiation)
In this first analysis, Direct Model 1 is used for crack initiation. Some statistics on the results are summarized in Table 1.
Table 1: Summary of event probabilities

Event                                                 RLZ    %
Initiation of circumferential crack before 60 years   1074   42.96%
Initiation of axial crack before 60 years             758    30.32%
Leakage of circumferential crack before 60 years      0      0%
Leakage of axial crack before 60 years                745    29.8%
Rupture                                               0      0%
Out of 2,500 realizations, about 43% have at least one circumferential crack occurring and 30% have at least one axial crack occurring. These two events are not independent. The likelihood of having both axial and circumferential cracks is about 21%, which deviates noticeably from the theoretical 13% (0.43 × 0.30 ≈ 0.13) that would apply if the events were independent. This result is expected considering that the A multiplier parameter influences crack initiation and is the same for both axial and circumferential cracks.
The higher likelihood of circumferential crack initiation is expected considering the distribution of WRS values at the ID. Figure 5 displays the probability density functions for the axial WRS (red dashed line), used for circumferential crack initiation, and for the hoop WRS (solid blue line), used for axial crack initiation. The sampled values of axial WRS at the ID are likely to be higher than those of hoop WRS.
Figure 5: Probability density functions (PDFs) for hoop WRS and axial WRS at the ID

Both WRS values are sampled independently from normal distributions. The difference between two values sampled from independent normal distributions is also normally distributed. The PDF of the difference is displayed in Figure 6, showing that the sampled axial WRS will be higher than the sampled hoop WRS about 90% of the time, and will be 100 MPa higher or more about half of the time. This difference is large enough to explain the higher occurrence of circumferential cracks relative to axial cracks.
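This follows from the standard result for the difference of two independent normal random variables (written here with generic symbols; the actual parameter values are those of the sampled WRS distributions):

$$X \sim N(\mu_X, \sigma_X^2),\; Y \sim N(\mu_Y, \sigma_Y^2) \;\Rightarrow\; X - Y \sim N\left(\mu_X - \mu_Y,\; \sigma_X^2 + \sigma_Y^2\right)$$

$$P(X - Y > d) = \Phi\left(\frac{\mu_X - \mu_Y - d}{\sqrt{\sigma_X^2 + \sigma_Y^2}}\right)$$

where X is the axial WRS at the ID, Y is the hoop WRS at the ID, and Φ is the standard normal CDF.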
Figure 6: PDF of the difference between the sampled axial WRS value and the sampled hoop WRS value at the ID

While axial cracks are less likely to occur than circumferential cracks, almost all of them grow to through-wall cracks. Figure 7 displays the resulting cumulative distribution function (CDF) for the time from initiation to TWC (the difference between time of leakage and time of initiation) for axial cracks. About 80% of the axial cracks grow through wall in 3 years or less, and almost all of them become TWCs in less than 8 years.
Figure 7: CDF on time to through-wall crack for axial crack Page 12
6.1.1 Regression Analysis on Circumferential Crack Initiation Time
The sensitivity analysis on the time of first circumferential crack occurrence leads to stable estimates with a large R² (see Table 2). Most of the contribution to the variance (more than 90%) comes from the uncertainty in the two components of the proportionality constant. This contribution is estimated by adding the main contributions of these two parameters (57%) to the maximum of their two conjoint contributions (40%). An examination of the scatterplots indicates that these two parameters explain most of the variance (see Figure 8). When the proportionality constant is recomposed by multiplying the two terms, their global influence on initiation time is even more visible (see Figure 9).
Table 2: Regression analysis on time of first circumferential occurrence (DM1 initiation)
Recursive Rank Regression Partitioning MARS Conjoint Main Contribution Final R2 0.83 0.96 0.87 Contribution R2 R2 Input inc. cont. SRRC Si Ti Si Ti ADM1C1 0.66 0.66 -0.52 0.45 0.67 0.18 0.75 0.42 0.35 AMULTDM1 0.80 0.15 -0.34 0.28 0.53 0.04 0.69 0.15 0.40 ADM1C2 0.82 0.01 -0.10 0.01 0.06 0.02 0.17 0.01 0.09 AXIALWRS 0.82 0.01 -0.07 0.00 0.03 0.01 0.24 0.01 0.11 FFLAWA5 0.83 0.00 0.01 0.00 0.00 0.00 0.36 0.00 0.16 ADM1C3 0.83 0.01 -0.06 --- --- 0.00 0.00 0.00 0.00 ADM1A1 --- --- --- --- --- 0.00 0.34 0.00 0.15 COA5 --- --- --- --- --- 0.00 0.02 0.00 0.01 ADM1A3 0.83 0.00 -0.02 --- --- 0.00 0.37 0.00 0.16 FFLAWC4 --- --- --- --- --- 0.00 0.38 0.00 0.16 HOOPWRS 0.83 0.00 0.01 0.00 0.00 --- --- 0.00 0.00 ADM1C4 0.83 0.00 -0.03 --- --- 0.00 0.37 0.00 0.16 AMD1A2 0.83 0.00 -0.03 --- --- 0.00 0.00 0.00 0.00 EFPY 0.83 0.00 -0.01 --- --- --- --- 0.00 0.00 ADM1C5 0.83 0.00 -0.01 --- --- 0.00 0.27 0.00 0.12 STHMULT --- --- --- -0.01 0.00 0.00 0.00 0.00 0.01 STHA5 --- --- --- 0.00 0.00 -0.01 0.00 0.00 0.00
- highlighted in yellow if conjoint contribution larger than 0.1 Page 13
Volume 2 Figure 8: Scatterplots of time before first circumferential crack occurrence (months) as function of proportionality constant (left) and axial WRS at ID (right)
Figure 9: Scatterplots of time to first circumferential crack occurrence as a function of the recomposed proportionality constant

6.1.2 Regression Analysis on Axial Crack Initiation Time and Occurrence
Results for axial crack initiation (Table 3) are similar to the results for circumferential cracking, with the two components of the proportionality constant being the main parameters explaining the variation in first axial crack initiation time (see Figure 10 and Figure 11).
Volume 2 Table 3: Regression analysis on time of first axial occurrence (DM1 initiation)
Recursive Rank Regression Partitioning MARS Conjoint Main Contribution Final R2 0.74 0.96 0.84 Contribution R2 R2 Input inc. cont. SRRC Si Ti Si Ti ADM1A1 0.58 0.58 -0.45 0.42 0.68 0.61 0.61 0.50 0.13 AMULTDM1 0.69 0.11 -0.27 0.25 0.51 0.31 0.32 0.20 0.13 HOOPWRS 0.72 0.03 -0.13 0.02 0.15 0.04 0.05 0.03 0.07 AMD1A2 0.74 0.02 -0.10 0.00 0.05 0.02 0.04 0.01 0.03 ADM1A3 0.74 0.00 -0.03 --- --- 0.00 0.00 0.00 0.00 ADM1C1 --- --- --- --- --- 0.00 0.00 0.00 0.00 ADM1C5 0.74 0.00 -0.02 --- --- 0.00 0.01 0.00 0.00 ADM1A5 --- --- --- 0.00 0.01 0.00 0.00 0.00 0.00 STHC2 0.74 0.00 0.01 --- --- --- --- 0.00 0.00 ADM1C3 0.74 0.00 -0.03 --- --- 0.00 0.01 0.00 0.00 ADM1C2 0.74 0.00 -0.02 --- --- 0.00 0.00 0.00 0.00 ADM1A4 0.74 0.00 -0.02 0.00 0.01 0.00 0.01 0.00 0.01 ADM1C4 0.74 0.00 -0.02 -0.01 0.00 0.00 0.00 0.00 0.00 RPYS --- --- --- 0.00 0.01 --- --- 0.00 0.00 BETA1C --- --- --- 0.00 0.01 --- --- 0.00 0.00 WELDE --- --- --- --- --- 0.00 0.00 0.00 0.00 STHA5 --- --- --- --- --- 0.00 0.00 0.00 0.00 COA5 --- --- --- --- --- 0.00 0.00 0.00 0.00
- highlighted in yellow if conjoint contribution larger than 0.1 Figure 10: Scatterplots of occurrence of axial crack in 60 years as function of the most important uncertain parameters: within weld proportionality constant (left), weld to weld proportionality constant (center), hoop WRS at ID (right)
Figure 11: Scatterplots of time to first axial crack occurrence as a function of the recomposed proportionality constant

6.1.3 Regression Analysis on Time to Axial Through-wall Crack
The analysis of the time to first axial through-wall crack is similar to the previous section. Table 4 and Figure 12 lead to the same conclusion, with a strong first-order monotonic effect (high first-order indices) from the proportionality constant, explaining about two-thirds of the variance.
Volume 2 Table 4: Regression analysis for time to axial through wall crack Recursive Rank Regression Partitioning MARS Conjoint Main Contribution Final R2 0.73 0.96 0.86 Contribution R2 R2 Input inc. cont. SRRC Si Ti Si Ti ADM1A1 0.57 0.57 -0.44 0.37 0.67 0.54 0.56 0.46 0.15 AMULTDM1 0.68 0.11 -0.27 0.26 0.55 0.30 0.38 0.20 0.17 HOOPWRS 0.71 0.03 -0.14 0.02 0.15 0.05 0.11 0.03 0.09 AMD1A2 0.73 0.02 -0.10 0.00 0.07 0.02 0.03 0.01 0.04 ADM1A5 --- --- --- 0.01 0.01 0.00 0.00 0.00 0.00 ADM1C3 0.73 0.00 -0.03 --- --- 0.00 0.00 0.00 0.00 COA5 --- --- --- --- --- 0.00 0.00 0.00 0.00 ADM1C5 0.73 0.00 -0.02 --- --- 0.00 0.02 0.00 0.01 FFLAWA2 --- --- --- 0.00 0.00 --- --- 0.00 0.00 STHC2 0.73 0.00 0.01 --- --- --- --- 0.00 0.00 ADM1A3 0.73 0.00 -0.03 0.00 0.01 0.00 0.00 0.00 0.01 ADM1C2 0.73 0.00 -0.02 --- --- 0.00 0.01 0.00 0.01 ADM1A4 0.73 0.00 -0.02 0.00 0.00 0.00 0.00 0.00 0.00 ADM1C4 0.73 0.00 -0.02 --- --- 0.00 0.00 0.00 0.00 FFLAWA5 --- --- --- 0.00 0.01 --- --- 0.00 0.00 SURFDIST --- --- --- 0.00 0.01 --- --- 0.00 0.00 JRM --- --- --- 0.00 0.00 --- --- 0.00 0.00 ADM1C1 --- --- --- --- --- 0.00 0.00 0.00 0.00
- highlighted in yellow if conjoint contribution larger than 0.1 Figure 12: Scatterplots of time to axial crack leak and the first three most important uncertain parameters: within weld proportionality constant (left), weld to weld proportionality constant (center) and hoop WRS at ID (right)
6.2 Tsuruga Sensitivity Analysis (Initial Flaw)
The previous section has shown that crack initiation uncertainty parameters have such an important effect that they dominate all the regression. The second reference simulation was run for the Tsuruga case with the following changes:
Crack initiation type choice (input 0501) was set to 0 (initial flaw)
Initial depth (1212, 1217) and length (1210, 1215) for both the axial and circumferential crack orientations were set to the distributions used for depth and length for PWSCC, respectively (Section 0)
The purpose of this additional analysis was to focus on the evolution of a crack, conditional on having an existing initial crack. Note that this analysis does not consider multiple cracks in a single direction at this stage, but only one circumferential crack and one axial crack simultaneously.
6.2.1 Interpretation of Conditional Results
The probability of crack initiation is 1 by default, since one crack is forced to be present in each direction (two cracks in total). The time of occurrence is always 0, since the cracks are imposed at the beginning of the simulation. The time to through-wall crack does not need any correction, since the crack occurs at time zero. Figure 13 displays the CDF of the time to TWC for the conditional Tsuruga scenario. Only the axial crack results are presented, since no circumferential crack goes through-wall for the sample size considered.
Figure 13: CDF of time to through-wall crack (in years) for axial cracks for the conditional Tsuruga run
Volume 2 The calculated probabilities are lower than the ones estimated for the first probabilistic reference Tsuruga case. This means that the time to TWC is longer when using an initial flaw than when using Direct Model 1. As can be seen in Figure 14, this is a real bias and not variation due to the inaccuracy of the first sets of CDF presented in Figure 7.
The difference between using the initiation model and assuming an initial flaw is small due to the reduced importance of WRS in this analysis (compared to the V.C. Summer scenario), but still noticeable and comes from the same source. WRS uncertainty influences both crack initiation and crack growth, with high values leading to earlier cracks and faster growth. Direct Model 1 realizations that have crack initiation within the 60-year simulation time are therefore associated with larger values of WRS at the ID. Such high values lead to faster crack growth through thickness. It is true that because of the equilibrium condition requirement for circumferential cracks, higher values of WRS around the ID are associated with lower values around the OD, but a change of WRS at the OD is less consequential due to the size of the crack. With an existing crack forced (Figure 13), all realizations will have crack growth, even realizations with low WRS values. These realizations will have slower crack growth, which will have an impact on the CDF of time between initiation and leakage (conditional on having initiation).
Figure 14: Change in CDF of time from initiation to leakage for axial cracks when using DM1 and an initial flaw

6.2.2 Regression Analysis on Time to Axial Through-wall Crack
The time to axial TWC is influenced most by the two components of the PWSCC variability factor (fcomp and fflawa1), which explain 30% to 40% of the variance (see Table 5). Hoop WRS plays a bigger role for crack growth than for crack initiation and accounts for an additional 15% to 20% of the variance. The linear impact is negative, as shown by the SRRC signs in Table 5 and the negative slopes in Figure 15. A negative impact means that high values of the variability factors and the WRS lead to a shorter time to through-wall crack growth, which is expected. Table 4 and Figure 12 lead to the same conclusion, with a strong first-order monotonic effect from these three parameters. The conjoint influence appears to be slightly more important for this case.
Table 5: Regression analysis for time to axial through-wall crack Recursive Rank Regression Partitioning MARS Conjoint Main Contribution Final R2 0.87 0.84 0 Contribution R2 R2 Input inc. cont. SRRC Si Ti Si Ti FCOMP 0.33 0.33 -0.58 0.23 0.43 --- --- 0.17 0.08 HOOPWRS 0.81 0.21 -0.46 0.30 0.53 --- --- 0.15 0.10 FFLAWA1 0.60 0.28 -0.51 0.21 0.32 --- --- 0.15 0.05 QG 0.84 0.03 -0.18 0.00 0.06 --- --- 0.01 0.02 INILENA1 0.86 0.02 -0.13 0.00 0.03 --- --- 0.00 0.01 ECPC 0.87 0.01 -0.09 --- --- --- --- 0.00 0.00 EFPY 0.87 0.01 -0.08 0.00 0.01 --- --- 0.00 0.01
- highlighted in yellow if conjoint contribution larger than 0.1

Figure 15: Scatterplots of time to axial crack leak versus the three most important uncertain parameters: weld-to-weld crack growth multiplier (left), hoop WRS at ID (center), and within-weld crack growth multiplier (right)

6.2.3 Regression Analysis on Ratio of Length to Depth for Largest Circumferential Surface Crack
Regression analysis was applied to the ratio of length to depth for the largest circumferential crack after 60 years. The results of the regression are reported in Table 6 and scatterplots are displayed in Figure 16. The regression R² values are not high, with only 20% to 50% (for recursive partitioning) of the variance explained. Nonetheless, all regression analyses lead to the conclusion that the dominant parameter of importance is the axial WRS at the ID.
Volume 2 Table 6: Regression analysis for ratio of length to depth for largest circumferential surface crack Recursive Conjoint Rank Regression Partitioning MARS Main Contribution Final R2 0.22 0.5 0.21 Contribution Input R2 inc. R2 cont. SRRC Si Ti Si Ti AXIALWRS 0.22 0.22 0.47 0.35 0.90 0.92 0.96 0.19 0.14 C2MULT 0.22 0.00 -0.04 0.05 0.39 0.00 0.00 0.01 0.09 EFPY 0.22 0.00 0.05 0.01 0.05 0.02 0.02 0.00 0.01 FFLAWC1 0.22 0.00 -0.04 0.00 0.04 0.01 0.01 0.00 0.01 JRM --- --- --- 0.00 0.08 0.01 0.05 0.00 0.02 SURFDIST 0.22 0.00 -0.03 0.00 0.05 0.01 0.00 0.00 0.01
- highlighted in yellow if conjoint contribution larger than 0.1 Figure 16: Scatterplots of ratio between length and depth for the largest circumferential surface crack and the two most important uncertain parameters: initial crack length (left) and weld to weld crack growth multiplier (right)
The influence of the axial WRS is hard to distinguish in the Figure 16 scatterplots and is better represented with another scatterplot. Figure 17 shows a negative correlation between the axial WRS at the ID and the maximum circumferential crack depth for the case when an initial flaw is assumed (i.e., without use of the crack initiation model). A distinction is made with the color of the plot markers based on what the corresponding realization does when the DM1 initiation model is used instead of assuming an initial flaw: if the realization leads to a circumferential crack occurring with the DM1 model, a blue cross is used; if there was no crack initiation with the DM1 model, an orange cross is used. The figure shows that the realizations leading to the largest circumferential crack depths (larger than 46% through the thickness) are all associated with the lowest WRS at the ID. Furthermore, the largest crack depths observed are all associated with realizations that did not show circumferential crack occurrence when run with DM1, indicating that, when initiation is modeled, very low WRS at the ID is also a limiting factor for generating large circumferential crack depths.
Figure 17: Maximum crack depth as function of sampled axial WRS at the ID for the case of an initial flaw (no crack initiation)
This behavior is expected due to the axisymmetric condition which requires force equilibrium for the axial WRS integral through the thickness of the nozzle. The axial WRS profile used for the Tsuruga pressurizer nozzle is displayed below with its mean profile (plain black line) and 95th percentile confidence interval (dashed lines). Due to equilibrium, a higher value at the ID forces lower values around 30% to 60% through the thickness. Lower WRS in this area will stop the crack earlier, giving a shallower depth. With a shallower depth, the ratio between length and depth is increased. As a result, a higher value of axial WRS at the ID will lead to a higher value of the length/depth ratio. This does not occur because the crack length is higher, but because the crack depth is smaller.
Figure 18: Axial WRS profile and 95th percentile confidence interval for the Tsuruga scenario

6.3 Uncertainty Analysis
Uncertainty analysis refers to the analysis of the uncertainty in the outputs of interest deriving from the uncertainty in the inputs. It encompasses most of the statistical techniques used to study and summarize distributions.
6.3.1 General Summary
A simple summary of the main outputs of interest at a selected time is often a good, quantitative way to present part of the results.
Table 7 displays such a summary. The first three columns summarize the probabilities of first crack, first leak, and rupture for each crack orientation and globally. The last row presents the same probabilities if the two orientations were completely independent, using the following equation:

$$P(\text{axial} \cup \text{circ}) = P(\text{axial}) + P(\text{circ}) - P(\text{axial} \cap \text{circ})$$

where, under independence, the joint probability is taken as P(axial ∩ circ) = P(axial) × P(circ).

The probability of having at least one crack occurring (whether axial or circumferential) is estimated to be around 52%, which is lower than the probability if both events were independent (0.3032 + 0.4296 - 0.3032 × 0.4296, or about 60.25%). This result is expected considering that the multiplier on the proportionality constant used by the crack initiation model is common to the axial and circumferential directions. Since the two crack initiation events are linked, the probability of both occurring at the same time is higher, as shown in Section 6.1 (if a crack in one direction occurs, it is more likely that a crack in the other direction also occurs, since both are more likely to be linked to a high value of the multiplier). The equation above shows that when the joint probability is higher than it would be under independence, the probability of at least one crack occurring is consequently lower.
Table 7: Summary of Tsuruga results (probabilities at 60 years)

Sample size   Crack orientation   Rupture   Leak     Crack
2500          circumferential     0.00%     0.00%    42.96%
2500          axial               0.00%     29.8%    30.32%
2500          both                0.00%     29.8%    51.8%
2500          if independent      0.00%     29.8%    60.25%
Figure 19 displays the probabilities of interest as functions of time. No circumferential crack leads to TWC or rupture for a sample of size 2,500. As a result, the probabilities of such events are estimated to be zero (beyond the capability of prediction of the probabilistic model with a sample of size 2,500) and are not displayed in the figure. The global probability of leakage is, therefore, the same as the probability of leakage for axial cracks only. Due to a higher axial WRS values at the ID (by an average of 100 MPa), the probability of circumferential crack initiation (green dash line) is higher than the probability of axial crack initiation (dash-dot line). The probability of leakage from an axial crack is close to the probability of initiating an axial crack. This confirms that the time between initiation and leakage for axial crack is short (2 to 5 years approximately) relative to plant life (60 years postulated).
Figure 19: Probability of initiation and leakage as functions of time

6.3.2 Axial Crack Depth
Crack depth is of interest in this study, since one of the flaws detected at Tsuruga was found to be leaking (due to an axial crack) at 16 years, based on boric acid deposits found during an inspection. It is also a good indicator of the runs themselves, as depth directly influences the time of TWC and the time of rupture. The complementary cumulative distribution functions (CCDF) of the deepest axial crack at 16 years and 60 years (simulation end time) are plotted in Figure 20. The choice of 16 years is associated with the fact that a leak was observed after 16 years of service for the weld under consideration. The almost flat separation between no crack (0 on the x-axis) and TWC (1 on the x-axis) confirms that axial cracks that initiate grow quickly to TWCs. The probability of having leakage (i.e., an axial crack whose normalized depth is equal to 1) is about 20% at 16 years, meaning that there is a 20% chance of generating a scenario consistent with what has been observed at Tsuruga. When looking at the entire simulation time of 60 years, the probability of having leakage due to an axial crack (without any leak detection and inspection) increases to 30%. The distribution bounds are tied to the physical bounds for normalized crack depth: 0 when no axial crack has yet occurred and 1 when the crack becomes through-wall.
Figure 20: CCDF of the maximum axial crack depth for the Tsuruga scenario

6.3.3 Circumferential Crack Depth
Figure 21 displays the CDF for the maximum circumferential crack depth at 60 years when an initial flaw is used. As can be seen, the very low axial WRS between 35% and 50% through the thickness (see Figure 22) stops the growth of almost all circumferential cracks in the depth direction. None of the 2,500 realizations led to a crack deeper than 50% through the thickness.
Page 25
Volume 2 Figure 21: CDF of the maximum circumferential crack depth at 60 years for Tsuruga Figure 22: Profiles for the mean axial WRS and q=0.95 confidence interval for Tsuruga As discussed in section 6.2.3, there is a negative correlation between the sampled axial WRS value at ID and the maximum circumferential crack depth reached (see Figure 17): the higher Page 26
Volume 2 the WRS value at the ID, the lower the maximum crack depth reached. Two lognormal distributions (one up to 0.97 and the second for the tail of the distribution from 0.97 ) have been used to fit the data. The corresponding parameters were used for the fitting.
Fitting interval Offset (mean of log) (stdev of log) 0; 0.97 0.35 -2.87 -3.102 0.97; 1 0.35 0.213 0.33 The resulting distribution is shown in red dashed line on top of the original distribution in Figure
- 21. A zoom on the tail of the distribution can be seen in Figure 23.
Figure 23: Zoom of Figure 21 showing the tail fitting for the depth of circumferential cracks at 60 years for the Tsuruga scenario

The resulting bi-lognormal distribution can then be used to theoretically estimate the likelihood of having a crack growing beyond the compressive axial WRS region near mid-thickness and leading to TWC and rupture. Of course, this estimation is purely theoretical, as we do not know how far a crack has to grow to pass this gap. However, when considering the axial WRS profile and the fact that other service load stresses are added for the K calculations (around 100 MPa), the gap should be between 54% and 66% through the thickness. Focusing on the lower bound, we consider a threshold value between 50% and 55% in order to capture other uncertainties. Let d denote the normalized depth of interest. We then use the fitted distribution to estimate the probability of reaching selected values of d within this range. Table 8 summarizes the resulting probabilities.
Volume 2 Table 8: Theoretical probabilities using the fitting distribution 0.5 1.28 10 0.51 5.88 10 0.52 2.73 10 0.53 1.29 10 0.54 6.14 10 0.55 2.97 10 While this estimate remains theoretical, it indicates that the probability of circumferential TWC and potentially rupture should be between 10 and 10 . This estimate will help in selecting a sample size as well as checking importance sampling results.
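The values in Table 8 correspond to evaluating the exceedance (complementary CDF) of the fitted shifted lognormal at the selected depths. A generic sketch of that evaluation is shown below; the parameter values passed in the example are placeholders for illustration, not authoritative fit values:

```python
import numpy as np
from scipy.stats import lognorm

def exceedance_probability(depth, offset, mu_log, sigma_log):
    """P(normalized depth > depth) for a shifted lognormal fit.

    scipy parameterization: s is the standard deviation of the log,
    scale is exp(mean of the log), loc is the offset.
    """
    dist = lognorm(s=sigma_log, scale=np.exp(mu_log), loc=offset)
    return dist.sf(depth)

# Placeholder parameters shown only to illustrate the call; the values used
# in the analysis are those of the tail fit reported above.
for d in (0.50, 0.52, 0.55):
    print(d, exceedance_probability(d, offset=0.35, mu_log=-2.9, sigma_log=0.3))
```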
6.4 Stability Analysis
Monte Carlo methods are numerical analysis techniques allowing for the estimation of statistics over a multidimensional space. As with any numerical technique, their accuracy depends on the density of coverage of the domain. The density of coverage is itself dependent on (1) the size of the sample used to cover the domain (the larger the sample, the better the coverage) and (2) the number of uncertain inputs considered, as each new input adds a new dimension to the hyperspace (the larger the number of inputs, the worse the coverage).
Stability analysis is a statistical way to estimate the quality of the estimates (or statistics) generated, and it provides confidence that the conclusions drawn are not affected by the potential variation in the response. Stability depends on both the potential variability of the output of interest and the threshold value or range that is used to make the decision. In fact, estimates far from the threshold value can be more variable without having a large impact on decisionmaking.
Stability of the outputs of interest will be assessed graphically using a 95% centered confidence interval, defined by the lower quantile 0.025 (2.5th percentile) and the upper quantile 0.975 (97.5th percentile) on the distribution of mean values. As a reminder, the interval generated does not represent aleatory or epistemic uncertainty. Rather, it shows the accuracy of the Monte Carlo technique for this particular output and with the considered sample size. By increasing the sample size (or using importance sampling), this interval is expected to be reduced.
The confidence interval is estimated using a binomial distribution for any output defined as an indicator function (such as probability of first crack occurring, probability of first leak or probability of rupture). The methodology used for such indicator function outputs is the same as described in Volume 1. For outputs that are not indicator functions, a classical percentile-bootstrap (or p-bootstrap) approach is used, also as presented in Volume 1.
6.4.1 Probability of First Crack Occurring Figure 24 displays the probability of first circumferential crack occurring for the Tsuruga scenario, as well as the 95th percentile confidence interval around the mean, defined by 0.025 and 0.975 quantiles. On a log scale, the bandwidth is relatively tight with a relative variation of 3%. As is often the case, when the probability is higher (more than 10%) there is no need for a larger sample size to reach adequate stability.
Figure 24: 95% confidence interval around the probability of first circumferential crack occurring for the Tsuruga scenario

Stability can also be assessed by looking at the resulting distribution of the means, constructed using either the bootstrap or the binomial method. The following aspects are considered in the resulting distribution:
Because this is a distribution of mean values, the central limit theorem indicates that it should ultimately be normal. Any deviation from the normal shape indicates potentially less stable results, with a good chance of underestimating the mean when the events are rare.
If the distribution is choppy, this is also an indicator of a lack of stability, as it means that the same mean values are generated repeatedly.
Figure 25 and Figure 26 display, respectively, the CDF and PMF of the resulting distributions for the probability (mean) of circumferential crack occurrence at 5 years and 60 years. Even at 5 years, the distribution is stable, and the PMFs are smooth enough to resemble the PDF of a normal distribution.
Figure 25: CDF for the probability of circumferential crack occurrence at 5 years (left) and 60 years (right) for the Tsuruga scenario (2,500 realizations)
Figure 26: PMF for the probability of circumferential crack occurrence at 5 years (left) and 60 years (right) for the Tsuruga scenario (2,500 realizations)
The probability of first initiation is slightly smaller for axial cracks (Figure 27) than for circumferential cracks. The difference, however, does not affect stability, leading to the same conclusion that, at least for crack initiation, the initial sample size of 2,500 is sufficient.
Figure 27: 95% confidence interval around the probability of first axial crack occurring for the Tsuruga scenario

6.4.2 Probability of First Leak Occurring Conditional on One Initial Flaw

None of the 2,500 realizations led to leakage from a circumferential crack (the maximum depth distribution is presented in Section 6.3.3). Most of the axial cracks, however, lead to leakage rapidly after initiation. At the end of the simulation time (60 years), there is a 30.3% probability of having at least one axial crack and a 29.8% probability of leakage due to an axial crack.
The short time between initiation and leakage can be observed in the CDF displayed in Figure 28: it is less than or equal to 2 years 60% of the time and less than or equal to 4 years 90% of the time.
Figure 28: CDF of time between first axial crack initiation and first axial leakage

The probability of first axial leakage is presented below with confidence intervals showing relatively stable results (Figure 29), with confidence bounds similar to those estimated for axial crack initiation.
Figure 29: 95% confidence interval around the probability of first leakage (circumferential and axial cracks) conditional on an initial flaw for the Tsuruga scenario

6.4.3 Probability of Pipe Rupture

Only axial PWSCC cracks lead to leakage, and their growth stops at the interface between the weld and the pipe because the pipe material is not susceptible to PWSCC growth. Because the crack length is therefore limited by the weld length and the only axial loading is from pressure, no pipe rupture occurs.
7.0 Deterministic Sensitivity Studies

The deterministic sensitivity studies apply variations to the reference deterministic case presented in Section 5.0. Experts change the parameters that may influence the output (based on the previous sensitivity analysis or their own judgment) one at a time. Such analyses are also performed to assess the impact of alternative scenarios (such as extreme conditions and/or mitigation). Deterministic sensitivity studies usually focus on the physics of the system and build confidence in the scientific aspects of the analysis. In these studies, the focus is on a circumferential crack.
7.1 MSIP Analysis

The reference deterministic case, using an initial flaw and median input values, generated a circumferential crack that ran through the whole circumference and whose depth stopped at around 40% through the thickness. The mechanical stress improvement process (MSIP) is one of the three mechanical mitigation methods implemented in xLPR. MSIP is represented by a change in the axial and hoop WRS profiles (all other inputs are the same as in the reference case). The new profiles, estimated using the rules described in the template analysis document, are displayed in Figure 30.

Figure 30: Axial WRS (left) and hoop WRS (right) mean profiles with and without MSIP consideration for the Tsuruga scenario

Four deterministic cases were run with MSIP mitigation applied at 5, 10, 15, and 20 years, respectively. Figure 31 displays the change in crack growth in the depth direction for the different mitigation times. The change in depth is relatively small because the crack depth increased quickly to its maximum in the unmitigated case. Although barely visible, the crack stops growing in the depth direction when mitigation occurs.
Figure 31: Influence of MSIP on circumferential crack depth at different depth and length stages for Tsuruga

This behavior is more visible in Figure 32, which shows the change in crack growth in the circumferential direction. As soon as mitigation is applied, the crack stops growing.
Figure 32: Influence of MSIP on circumferential crack half-length at different depth and length stages for Tsuruga

The use of median values for the deterministic analyses of an axial crack leads to a fast-growing crack that reaches the weld length in 1.6 years and the full depth in 2.6 years (leakage). With such a short time, it is difficult to assess the impact of MSIP. The PWSCC growth law is quite complex, with many parameters and constants, and some of these constants vary considerably from plant to plant. For these deterministic analyses, median values were used for the constants. Tsuruga is a Japanese plant that uses alloy 32/132 weld metal as opposed to the alloy 82/182 used in U.S. plants. Two of the important constants in the PWSCC growth rate law are fcomp and fflaw. These constants play an important role in PWSCC growth and have very large uncertainty ranges, from 0.34 to 2.04 for fcomp and from 0.31 to 2.64 for fflaw. Since the PWSCC growth law is linearly dependent on these constants, it is useful to consider low-end values for these parameters to examine the influence of MSIP on growth more clearly.
Therefore, an additional run was performed with a change in the median values: the component-to-component variability factor in the weld (fcomp, input #2592) was set to 0.34 (instead of 1), and the fflaw value was changed to 0.31 (both at the low end of the possible input values).
The resulting axial crack becomes through-wall in 24.91 years (24 years and 11 months) if no mitigation is applied. A case where MSIP has more influence was therefore examined using these lower-bound values for the two parameters. Figure 33 and Figure 34 display, respectively, the changes in crack depth and half-length when mitigation is applied at different times (5, 10, 15, and 20 years). The application of MSIP also stops the axial crack from growing.
For the Tsuruga scenario, the impact of MSIP (or at least the way it is modeled in xLPR) is strong as it completely stops crack growth.
Figure 33: Influence of MSIP on axial crack depth at different depth and length stages for Tsuruga

Figure 34: Influence of MSIP on axial crack half-length at different depth and length stages for Tsuruga
7.2 Overlay Analysis

Weld overlay (WOL) is another mechanical mitigation method often used to treat PWSCC problems in PWRs and is available in the xLPR code. As with MSIP, the WRS profile is affected by WOL. WOL also adds a layer of more resistant Alloy 52/152 material on the OD of the nozzle where it is applied. Here we consider the application of WOL to the Tsuruga scenario. The resulting impact on the mean WRS profiles can be seen in Figure 35.
Figure 35: Axial WRS (left) and hoop WRS (right) mean profiles with and without WOL consideration for the Tsuruga scenario

Beyond the change in WRS, an additional WOL thickness, which is added to the original thickness, needs to be set. Following the recommendations in [2], a WOL thickness equal to one third of the original weld thickness, made of the more PWSCC-resistant Alloy 52/152 material, is added at the repair location.
The Tsuruga scenario considers a pressurizer nozzle, whose diameter is smaller than the lines considered in the other scenarios (such as the reactor pressure vessel nozzles). The outer diameter is set to 0.19 m with a thickness of 0.03 m, giving a ratio of inner radius to thickness of Ri/t = 2.167. The K-solutions (i.e., the stress intensity factor interpolations) used in xLPR are defined for 2 <= Ri/t <= 20, which is satisfied for the unmitigated scenario. However, when the overlay adds one third to the initial thickness, the new thickness is 0.04 m, leading to a ratio of 1.625. This is beyond the domain of validity of the K-solutions as implemented within xLPR; it triggers an error message and causes the calculation to be aborted. For the purpose of comparison, the pipe initial outer diameter (input #1101) was therefore increased from 0.19 m to 0.222 m, which is enough to keep the ratio slightly above 2 (Ri/t ~ 2.025 even after mitigation by WOL). A new set of results, close to the initial ones, was generated and used as the reference for assessing the impact of WOL. A comparison between the initial reference and the WOL reference is displayed below for the circumferential crack (Figure 36) and the axial crack (Figure 37).
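The geometric check described above can be reproduced with a few lines, using the dimensions quoted in the text.

    # K-solution validity check: the xLPR interpolations require 2 <= Ri/t <= 20.
    t_weld = 0.03                                       # original weld thickness (m)
    t_wol = t_weld * (1.0 + 1.0 / 3.0)                  # thickness with the 1/3 weld overlay
    for outer_diameter in (0.19, 0.222):                # original and adjusted outer diameters (m)
        inner_radius = outer_diameter / 2.0 - t_weld    # unchanged by the overlay (added on the OD)
        for thickness in (t_weld, t_wol):
            ratio = inner_radius / thickness
            print(f"OD = {outer_diameter} m, t = {thickness:.3f} m: "
                  f"Ri/t = {ratio:.3f}, within K-solution range = {2.0 <= ratio <= 20.0}")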
Figure 36: Comparison of original circumferential crack growth in the depth direction (left) and inner length direction (right) with the growth using the increased diameter

Figure 37: Comparison of original axial crack growth in the depth direction (left) and inner length direction (right) with the growth using the increased diameter

Furthermore, the recommended set of parameters for Alloy 52/152 defined by the Inputs Group was used for the mitigation material. The factors of improvement were included by changing the multiplier to the crack initiation proportionality constant (#2743) from 1 to 0.2 (although in the case of WOL this does not affect crack initiation, since cracks initiate in the original material) and the crack growth factor of improvement (#2796) from 1 to 10.
Figure 38 displays the influence of WOL on circumferential crack depth for different times of application (i.e., different crack depths and lengths at the time of mitigation). The apparent reduction in depth when the weld overlay is applied occurs only because the crack depth is normalized by the thickness, and the thickness changes when the overlay is added. The growth rate was already so low that it is not affected, leading to almost flat curves.
Figure 38: Influence of WOL on circumferential crack depth at different depth and length stages for Tsuruga

Figure 39 displays similar results for the half-length of the circumferential crack over time. The crack continues to grow in the length direction, but at a slower pace, when the overlay is applied.
Figure 39: Influence of WOL on circumferential crack half-length at different depth and length stages for Tsuruga

Figure 40 and Figure 41 present similar results for an axial crack. Crack depth growth is strongly reduced or stopped by the overlay. The half-length is either unaffected (if the crack has already reached the edges of the weld and cannot grow further by PWSCC) or slowed by the overlay.
Figure 40: Influence of WOL on axial crack depth at different depth and length stages for Tsuruga

Figure 41: Influence of WOL on axial crack half-length at different depth and length stages for Tsuruga
An interesting conclusion of the last two analyses is that, while the weld overlay performs as expected as a mitigation measure, it does not perform as well as MSIP for this scenario. For the previously analyzed V.C. Summer scenario, the conclusion was the exact opposite, with the overlay performing better.
7.3 One-at-a-Time Sensitivity Studies

One-at-a-time sensitivity studies take one input of interest and change it in the deterministic reference case (presented in Section 5.0) to estimate how much it impacts a specific output. In the following studies, the circumferential and axial crack depths are examined over time for the Tsuruga scenario. In the probabilistic reference case, the pressure and temperature are held constant; consequently, their influence is not considered in the sensitivity analyses (presented in Sections 6.1 and 6.2), because only parameters with uncertainty can influence the output distribution.
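A minimal sketch of such a one-at-a-time loop is given below. The function run_xlpr_deterministic is a hypothetical wrapper (not part of the xLPR distribution), and the variation values are illustrative.

    # One-at-a-time study: perturb one input of the deterministic reference at a time.
    def run_xlpr_deterministic(inputs):
        """Hypothetical placeholder for a deterministic xLPR run returning, e.g., the a(t)/t history."""
        return []

    reference = {"pressure_MPa": 15.5, "temperature_C": 325.7, "fcomp": 1.0, "fflaw": 1.0}
    variations = {
        "pressure_MPa": [16.2, 17.0, 20.0],
        "temperature_C": [350.0, 375.0, 400.0],   # illustrative values up to 400 degrees C
    }
    results = {("reference", None): run_xlpr_deterministic(reference)}
    for name, values in variations.items():
        for value in values:
            case = dict(reference, **{name: value})
            results[(name, value)] = run_xlpr_deterministic(case)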
The first study considered pressure and temperature changes emulating a high-dry-low severe accident condition [3], [4]. During this type of severe accident scenario, both the temperature and the pressure increase, with the pressure reaching a maximum of about 16.2 MPa (the temperature change is ignored here). We first consider the case where the pressure changes from 15.5 MPa to 16.2 MPa; the pressure was then gradually increased up to 20 MPa. It should be noted that the pressure increases considered in these sensitivity studies are unrealistic, as temperature and pressure are tightly controlled in the system. However, these hypothetical scenarios are useful for understanding the trends in the xLPR code predictions and for verifying that the code behaves as expected.
Figure 42 illustrates the crack depth over wall thickness as a function of time for both circumferential and axial cracks. Even with the highest pressure (20 MPa), the variation is relatively small.
Figure 42: Effect of changing pressure from 15.5 MPa to 16.2 MPa, 17 MPa, and 20 MPa for the Tsuruga scenario on circumferential crack depth (left) and axial crack depth (right)
A similar analysis for changing the temperature from 325.7 °C up to 400 °C is shown in Figure 43. The impact is stronger than that of pressure over the range of variation covered. Considering that pressure is more tightly controlled than temperature, the range of variation seems adequate. Of course, in some severe accident scenarios the temperature can become quite high.
Figure 43: Effect of changing temperature from 345 °C to 350 °C, 360°C, and 400°C for the V.C. Summer IO scenario on circumferential crack depth (left) and axial crack depth (right)
Next, variation of the PWSCC growth parameters was considered. The component-to-component (#2592: Weld - Cell H134) and within-component (#2593: Weld - Cell H135) variability factors for PWSCC growth are, along with the WRS, the most important parameters in the sensitivity analysis performed on time to through-wall crack (see Section 6.2). These are the fcomp and fflaw parameters (also discussed above) of the PWSCC growth law. The distribution associated with fcomp is lognormal, truncated at 0.34 and 2.24; the distribution for fflaw is also lognormal, truncated at 0.335 and 2.24. Figure 44 and Figure 45 present the influence of varying these parameters on circumferential crack depth and axial crack depth, respectively. The values used were taken slightly beyond the initial truncation bounds, based on expert elicitation. The figures show, however, the much greater impact of these parameters compared to temperature and pressure.
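For reference, a truncated lognormal such as the one described for fcomp can be sampled by inverse transform on the restricted CDF range, as sketched below. The median and log-space standard deviation used here are assumed values; only the truncation bounds come from the text.

    import numpy as np
    from scipy import stats

    def truncated_lognormal(median, sigma_log, lower, upper, size, seed=0):
        # Inverse-transform sampling restricted to the CDF range [F(lower), F(upper)].
        rng = np.random.default_rng(seed)
        dist = stats.lognorm(s=sigma_log, scale=median)
        u = rng.uniform(dist.cdf(lower), dist.cdf(upper), size=size)
        return dist.ppf(u)

    fcomp = truncated_lognormal(median=1.0, sigma_log=0.5, lower=0.34, upper=2.24, size=10_000)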
Figure 44: Effect of changing PWSCC growth variability factors for the Tsuruga scenario on circumferential crack depth: slower growth with fcomp = 0.335 and fflaw = 0.313 (left) and faster growth with fcomp = 2.04 and fflaw = 2.64 (right)

Figure 45: Effect of changing PWSCC growth variability factors for the Tsuruga scenario on axial crack depth: slower growth with fcomp = 0.335 and fflaw = 0.313 (left) and faster growth with fcomp = 2.04 and fflaw = 2.64 (right)
8.0 Probabilistic Sensitivity Studies

These analyses are the probabilistic equivalent of the deterministic sensitivity studies. The change in an input can be a simple shift (as in the deterministic case), but it can also be a change in uncertainty (spread) or in the point of interest (skewness). These studies include the probabilistic aspect, which focuses more on the risk associated with each change.
8.1 Probabilistic Sensitivity Study on Inlay Mitigation

Because the current code cannot run a meaningful sensitivity study on inlay mitigation deterministically, the study was performed probabilistically. The axial and hoop WRS profiles after mitigation are estimated using the approach described in the template model; their mean values are displayed in Figure 46. The inlay increases the WRS field because it involves depositing new weld metal (Alloy 52/152) along the ID of the nozzle, which results in more resistance to PWSCC but generally higher tensile WRS fields. However, because the stress at the ID is already high for Tsuruga, the axial WRS is not strongly affected by the inlay; the hoop WRS shows a larger increase.
Figure 46: Axial WRS (left) and hoop WRS (right) mean profiles with and without weld inlay consideration for the Tsuruga scenario

For this analysis, the inlay depth was set to 3 mm per ASME Code Case N-766, and the initial material properties developed by the Inputs Group were used. The inlay material is Alloy 52/152, which has much slower PWSCC crack growth and longer crack initiation times than Alloy 82/182. Factors of improvement due to the use of the PWSCC-resistant material are implemented by changing the median value of the multiplier to the proportionality constant on initiation (#2743) from 1 to 0.2 (corresponding to a factor of improvement of 5) and the factor of improvement on growth rate (#2796) from 1 to 10. (Note that xLPR does not explicitly include PWSCC constants for Alloy 52/152 at present; because PWSCC growth in Alloy 52/152 is much slower, these constants were chosen as estimates based on currently considered improvements for this material relative to Alloy 82/182 growth rates.) The inlay mitigation time is set to the mid-point of the simulation (30 years) to help compare results before and after mitigation.

The probabilistic analysis mean results are shown in Figure 47 for the circumferential crack. Since the change in the WRS at the ID is small and a factor of improvement of 5 (change in input #2743) is applied to initiation, it comes as no surprise that the probability of first crack initiation is only slightly lower after mitigation. A 95% confidence interval has been added to the probability of initiation to reflect the stability of this particular result.
Figure 47: Effect of inlay applied at 30 years on the probabilistic results for the probabilities (mean) of initiation, leakage, and rupture

As expected, the small change in WRS does not affect crack growth enough to change the probabilities of leak and rupture. The lack of leakage and rupture makes it difficult to judge the impact of the inlay on crack growth. The depth profiles are therefore displayed as probability layers in Figure 48. The figure shows the effect of stopping or slowing crack growth in the mitigated area (the lower portion of the weld after 30 years, inside the black dashed box), resulting in overall lower asymptotic values of the circumferential crack depth.
Figure 48: Plot of first occurring circumferential crack depth for the first 1,000 realizations

A stronger impact can be observed on the circumferential crack inner half-length over time (Figure 49). While the unmitigated case generates (shallow) cracks over the whole circumference, the low susceptibility of the inlay material strongly slows crack growth in this direction. Since none of the cracks grow beyond 40% through the thickness, the situation in which the outer length grows faster than the inner length, as observed in the template analysis, is not seen here.
Figure 49: Effect of inlay applied at 30 years on the normalized inner length of the circumferential crack for the first 1,000 realizations, with factors of improvement of 5 on initiation and 10 on growth

9.0 Revisiting Uncertain Parameters

The sensitivity studies, coupled with the sensitivity analyses, identify the inputs that drive the issues of interest. Revisiting these inputs, either to increase knowledge or to consolidate the current state of knowledge, is recommended to further increase confidence in the results presented, since these will most likely be the inputs that are discussed and questioned.
For the Tsuruga scenario, the uncertain parameters that could be revisited are listed below:
The proportionality constants used in the DM1 initiation model and the variability factors for the PWSCC growth model. These four parameters represent model uncertainty and account for both the variation between welds and the variation within a single weld.
Axial WRS for circumferential cracks and hoop WRS for axial cracks, which represent the most important uncertain physical inputs.
Beyond the initially uncertain inputs, deterministic and probabilistic sensitivity studies have demonstrated that temperature could influence the results, especially at higher temperatures.
The study of the associated uncertainties and potential improvement for the representation of these uncertainties are beyond the purpose and scope of this study. Furthermore, such tasks should be considered by all stakeholders.
10.0 More Accurate Analyses

In this analysis, the double-V groove weld creates a bottleneck with very low values of axial WRS, which stops crack growth. This type of axial WRS field, which goes from tension to compression near mid-thickness and then back to tension, occurs in certain welds depending on the geometry and the closeness of the stainless steel closure weld to the dissimilar metal weld. In Section 6.3.3, a fitting approach was used to statistically estimate the probability of growing through the WRS bottleneck. Because we do not know how deep the crack needs to grow before passing through the bottleneck, this estimate is imprecise and spans the range of values in Table 8, with the most reasonable single estimate lying near the middle of that range. It is noted that this type of WRS field can possibly lead to large crack growth around the circumference prior to leakage, which can be an issue if the material toughness is low.
One way to improve this estimate would have been to simply run many more realizations with initial crack depths set in the region of interest (50% to 60% through the thickness) to more accurately estimate the critical depth. It was decided instead to perform importance sampling focused on the region of interest and to estimate the probability of growing through the bottleneck directly via Monte Carlo. The first change was to increase the sample size from 2,500 to 10,000. The second change was to increase the number of realizations with circumferential crack occurrences, possibly early during the simulation time. For that, the implementation of the proportionality constant for crack initiation was changed.
In xLPR, the proportionality constant is composed of a multiplier common to all circumferential and axial cracks and a spatially varying component that is specific to each crack and sampled in each sub-segment. To better understand the impact of the proportionality constant, the value was recomposed by multiplying the common multiplier by the spatially varying component associated with the first occurring circumferential crack.
Figure 50 displays the relation between circumferential crack initiation time and this recomposed value. As can be seen, the impact of the proportionality constant is such that no value beyond 0.1 leads to crack initiation. The blue rectangle highlights the area of interest for which the distribution leads to crack initiation. It represents about 40% of the sampled points.
Figure 50: Scatterplot of the time of first circumferential crack occurrence as a function of the recomposed proportionality constant

The generated values of A were used to construct a distribution that was fitted with a bi-lognormal distribution (Figure 51).
Figure 51: Bi-lognormal fitting of the A distribution

The first part of the bi-lognormal fit covers the range of values that never leads to initiation, so it is ignored; only the second lognormal distribution fit (Figure 52) is used. The fit is not as good over one portion of the range; however, this is not an issue considering that this region seldom leads to crack initiation and, when it does, the initiation times tend to be longer.

Figure 52: Zoom on the second lognormal fitting

Because only one crack is considered, this new distribution was applied only to the first circumferential crack (i.e., the one in the sub-segment at top dead center), which is subject to the largest axial stresses; applying it to all the cracks would result in many circumferential cracks in each realization. As a reminder, this fit represents only 45% of the original distribution displayed in Figure 51. The remaining part of the distribution for A did not generate any cracks within the simulation time (60 years), so it is not sampled. Since only 45% of the distribution is represented, all the resulting probabilities are scaled by a factor of 0.45.
This approach allows us to generate more realizations with crack initiation without having to correct for the WRS bias that would be introduced if an initial flaw were used. It is not a change in paradigm (i.e., supposing that a crack necessarily starts at time zero) but rather an importance sampling scheme (whereby 55% of the input space is ignored because it does not generate any cracks). The link between initiation and growth is preserved, and the impact of WRS on the initiation model is maintained. In other words, this change reduces the required sample size by more than a factor of two while producing similar results.
The last step in enhancing the sample is importance sampling on the WRS. As highlighted by the black dashed lines in Figure 53 (which is the same plot as Figure 17), the importance sampling was applied to the low values of axial WRS at the ID. Indeed, due to equilibrium, the lower values from the distribution of WRS at the ID are associated with high values in the bottleneck region (i.e., the region between 40% and 60% through the thickness).
Higher WRS values would be necessary to grow through the bottleneck.
Figure 53: Maximum crack depth as a function of sampled axial WRS at the ID

Importance sampling focusing on the 1st percentile (i.e., the 0.01 quantile) was implemented for the axial WRS. The simulation results for maximum crack depth at 60 years for the enhanced simulation are presented in Figure 54 as a CCDF. Due to the high value of axial WRS at the ID, the crack grows through the circumference long before it grows through the thickness; as a result, any TWC also leads directly to rupture. The analysis estimates a 10⁻⁵ probability of such an event occurring, which is consistent with the initial estimate performed with the fit on the generated maximum depths in Section 6.3.3 (see Table 8).
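The arithmetic that converts the restricted-sample counts back into a probability can be sketched as follows. The 0.45 factor for the initiation constant is stated above; treating the WRS tail sample as exactly 1% of its distribution is an assumption made here for illustration, so the result is indicative only.

    # Importance-sampling correction: raw frequency times the probability mass of the
    # restricted sampling regions.
    n_realizations = 10_000
    n_ruptures = 12                 # ruptures observed in the enhanced simulation (reported below)
    f_initiation = 0.45             # fraction of the proportionality-constant distribution retained
    f_wrs_tail = 0.01               # assumed fraction of the axial WRS distribution retained
    p_rupture = (n_ruptures / n_realizations) * f_initiation * f_wrs_tail
    print(f"Estimated rupture probability ~ {p_rupture:.1e}")   # roughly 5e-6 with these assumptions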
Figure 54: CDF of circumferential crack depth for the 10,000-realization enhanced Tsuruga simulation

The use of importance sampling helped greatly in stabilizing the results. Twelve realizations out of 10,000 experienced pipe rupture within the simulation time, which results in a rather stable distribution of the mean value, as seen in Figure 55. Considering that the sample size was 10^4, having stable results in the range of 10⁻⁵ is quite good.
Figure 55: CDF and PMF of the distribution of the mean probability of rupture from a circumferential crack using the binomial approach
BIBLIOGRAPHY
[1] E. J. Sullivan and M. T. Anderson, "Assessment of Weld Overlays for Mitigating Primary Water Stress Corrosion Cracking at Nickel Alloy Butt Welds in Piping Systems Approved for Leak-Before-Break," Pacific Northwest National Laboratory, PNNL-21660, August 2012.
[2] H. Rathbun, M. Benson, R. Iyengar, and B. Brust, "Analysis of PWR Hot Leg in Severe Accident Conditions: Creep Rupture and Tensile Instability Initiation Modeling," in Proceedings of Integrity of High Temperature Welds, London, UK, September 2012.
[3] B. Brust, R. Iyengar, M. Benson, and H. Rathbun, "Severe Accident Condition Modeling in PWR Environment: Creep Rupture Modeling," in Proceedings of the 2013 ASME Pressure Vessels and Piping Conference, Paris, France, July 14-18, 2013, PVP2013-98059.
[4] B. Efron and R. J. Tibshirani, An Introduction to the Bootstrap, CRC Press, 1998, ISBN 0-412-04231-2.
VOLUME 3
FINAL REPORT
Sensitivity Studies Comparison Analysis
ENGINEERING MECHANICS CORPORATION OF COLUMBUS
DOMINION ENGINEERING, INC.
STRUCTURAL INTEGRITY ASSOCIATES
TABLE OF CONTENTS

1.0 Overview ..... 5
2.0 Case Set Up ..... 5
3.0 Methodologies ..... 9
  3.1 SIA Methods ..... 9
    3.1.1 Introduction ..... 9
    3.1.2 Example Problem ..... 9
    3.1.3 MPFP Direction Cosines Method ..... 10
    3.1.4 Degree of Separation Method ..... 12
    3.1.5 Results and Conclusion ..... 14
  3.2 DEI Methods ..... 14
    3.2.1 Feature Importance ..... 17
    3.2.2 Model Comparison ..... 17
  3.3 Emc2 Methods ..... 19
    3.3.1 Rank Regression ..... 20
    3.3.2 Recursive Partitioning ..... 21
    3.3.3 Multivariate Adaptive Regression Splines (MARS) ..... 21
    3.3.4 Metrics for Recursive Partitioning and MARS ..... 21
    3.3.5 Summary Indicators ..... 22
4.0 Comparison of Rankings for Test Case 1 based on 10,000 Realizations ..... 24
  4.1 Comparison on Probability of Circumferential Crack Initiation (15 Instances out of 10,000 Realizations) ..... 24
  4.2 Comparison on Number of Axial Cracks based on 10,000 Realizations ..... 27
  4.3 Comparison on Probability of Leakage based on 10,000 Realizations ..... 28
  4.4 Comparison on Leak Rate based on 10,000 Realizations ..... 29
5.0 Regression Results for Test Case 2 (Replicate No. 1) ..... 31
  5.1 First Circumferential Crack Occurrence over 60 Years ..... 31
  5.2 Number of Axial Cracks Occurring over 60 Years ..... 32
  5.3 Probability of Leakage at 60 Years ..... 34
  5.4 Leak Rate over 60 Years ..... 35
6.0 Regression Results for Test Case 3 (Replicate No. 6) ..... 36
  6.1 First Circumferential Crack Occurrence over 60 Years ..... 37
  6.2 Number of Axial Cracks Occurring over 60 Years ..... 38
  6.3 Probability of Leakage at 60 Years ..... 39
  6.4 Leak Rate over 60 Years ..... 40
7.0 Conclusions ..... 40
LIST OF ACRONYMS

COV    Coefficient of Variation
CV     Cross Validation
DEI    Dominion Engineering, Inc.
DM1    Direct Model 1
Emc2   Engineering Mechanics Corporation of Columbus
FORM   First Order Reliability Method
GBC    Gradient Boosting Decision Trees
MARS   Multivariate Adaptive Regression Splines
MPFP   Most Probable Failure Point
PFM    Probabilistic Fracture Mechanics
PWSCC  Primary Water Stress-Corrosion Cracking
RFC    Random Forest Decision Trees
SIA    Structural Integrity Associates
SORM   Second Order Reliability Method
SRRC   Standardized Rank Regression Coefficients
SSE    Sum of Square Error
SVC    Linear Support Vector Machines
WRS    Weld Residual Stress
xLPR   Extremely Low Probability of Rupture
1.0 Overview

An important part of applying the xLPR code to real-life problems is identifying the major sources of uncertainty in the outputs of interest, which is the purpose of sensitivity analysis. Different techniques may be used to perform such analyses. The purpose of this report is to summarize and compare the results obtained using different approaches when performing sensitivity analyses for xLPR applications.
Several techniques can be used to perform sensitivity analysis. Some of the techniques are qualitative, whereas others are quantitative. The NRC Office of Nuclear Regulatory Research and the Electric Power Research Institute identified this as an important topic and initiated this study to compare potential methods. A test case was selected based on Case 3 (safe end to steam generator inlet nozzle weld in a Westinghouse 4-loop pressurized-water reactor),
Scenario 3 (which examines the probability of rupture due to PWSCC, without mitigation, with both circumferential and axial cracks), as presented in the xLPR Inputs Group report [1]. Because the probability of circumferential crack occurrence is extremely low (less than 10⁻⁸ over a 60-year period, based on a separate estimate using Excel) when using the weld residual stress (WRS) profile for a steam generator with no weld repair, the representative 15% repair-depth WRS profile for the steam generator was used instead. Results obtained by Dominion Engineering, Inc. (DEI), Engineering Mechanics Corporation of Columbus (Emc2), and Structural Integrity Associates (SIA) using machine learning algorithms, regression analyses, and reliability methods, respectively, are presented and compared herein.
2.0 Case Set Up

The selected test case associates only a fraction of the 500 or more variables that can be set as uncertain with a probability distribution. For this analysis, only one type of uncertainty is considered: the inner loop (associated with aleatory uncertainty) was used to propagate the uncertainty. Due to GoldSim memory constraints for the current xLPR model, the sample size was set to 1,000 aleatory realizations and 1 epistemic realization, and random samples were obtained using Latin hypercube sampling. Several replicates were run with different random seeds, as listed in Table 1.
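The sampling set-up can be sketched with SciPy's quasi-Monte Carlo module; the dimension and the example distribution mapping are illustrative assumptions.

    from scipy import stats
    from scipy.stats import qmc

    sampler = qmc.LatinHypercube(d=16, seed=542)     # 16 scalar inputs; aleatory seed from Table 1
    unit_sample = sampler.random(n=1000)             # 1,000 realizations on [0, 1)
    # Each column is then mapped through the inverse CDF of its input distribution, e.g. (assumed):
    example_input = stats.norm(loc=0.0, scale=1.0).ppf(unit_sample[:, 0])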
Table 1: Random seeds used for replicate runs of the selected scenario

  Rep. No.   Epistemic Random Seed   Aleatory Random Seed   Run By
  1          292                     542                    DEI
  2          292                     42                     Emc2
  3          292                     1984                   Emc2
  4          292                     1789                   Emc2
  5          292                     153428                 Emc2
  6          292                     713705                 Emc2
  7          134                     604                    DEI
  8          401                     412                    DEI
  9          605                     971                    DEI
  10         867                     954                    DEI
The following outputs were selected for the comparison analysis; each represents a type of output that may be considered in future analyses:
Probability of circumferential crack initiation over 60 years, which is expected to be on the order of 10⁻³ over 60 years. This was considered a good test example for an output that occurs rarely given the sample size.
Probability of leakage over 60 years, which is more likely due to axial cracks (expected to be between 20% and 30%) and is a good example of an indicator output.
Number of axial cracks over 60 years, which is a discrete function that is not limited to two values.
Leak rate over 60 years, which is a semicontinuous output (either equal to 0 or a distributed range of values).
All the inputs selected to be randomly sampled are listed below and are color coded as follows:
Black: scalar value considered for this analysis
Green: spatially varying parameter considered for this analysis and averaged either at the input level or after the influence level by DEI
Red: input not considered for this analysis by SIA and Emc2 (rationale added); input was considered by DEI

The number listed with each input corresponds to the numbering used by the xLPR v2.0 code.
The units used (when applicable), as well as the name used in the regression, are also included for each uncertain input.
Properties spreadsheet
1001 Effective Full Power Year (yr) [EFPY] [1]
1102 Pipe wall thickness (m) [THICK] [2]
1104 Weld wall thickness (m) (#1102 is used by the code for thickness - #1104 is not used)
1205 PWSCC initial flaw length (m) [INILA##] [INILC##] [1]
1207 PWSCC initial flaw depth (m) [INIDA##] [INIDC##] [2]
3102 Operating temperature (°C) [TEMP] [3]
4350 Hoop WRS pre-mitigation (MPa) [WRSAX_##] [3]
4352 Axial WRS pre-mitigation (MPa) [WRSHP_##] [4]
5101-5108 inspection parameters (inspection impact not considered for the selected outputs)
9002 Surface-crack dist. Rule modifier [SURFRUL] [4]
9003 TW crack dist. Rule modifier (mm) [THWLRUL] [5]
Left pipe spreadsheet
2101 yield strength left-pipe (affects COD) (MPa) [YS_LP] [6]
2102 ultimate strength left-pipe (affects COD) (MPa) [UTS_LP] [7]
2105 elasticity modulus left-pipe (affects COD) (MPa) [E_LP] [8]
2106-2108 J-resistance parameters (stability not considered for the selected outputs)
2121-2129 fatigue initiation in pipe (not used by xLPR - used as placeholders to complete the material database)
2163-2169 fatigue growth in pipe (no fatigue growth considered)
2180 Threshold SIF scaling factor for left pipe (not used by xLPR - used as placeholder to complete the material database)
Right pipe spreadsheet
2301 yield strength right-pipe (affects COD) (MPa) [YS_RP] [9]
2302 ultimate strength right-pipe (affects COD) (MPa) [UTS_RP] [10]
2305 elasticity modulus right-pipe (affects COD) (MPa) [E_RP] [11]
2321-2329 fatigue initiation in pipe (not used by xLPR - used as placeholders to complete the material database)
2306-2308 J-resistance parameters (stability not considered for the selected outputs)
2360-2361 fatigue growth in pipe (no fatigue growth considered)
2380 Threshold SIF scaling factor for right pipe (not used by xLPR - used as placeholder to complete the material database)
Weld spreadsheet
2501-2508 general weld properties (no DM2 initiation considered)
2521-2529 fatigue initiation properties (no fatigue initiation considered)
2531 Zn factor of improvement (Zn constant value lower than Zn constant threshold - not used)
2542 Proportionality constant A (DM1) (y⁻¹ MPa⁻¹) [A_AC_##] [A_CC_##] [5]
2543 Multiplier to the proportionality constant A (DM1) [A_MULT] [12]
2546 Proportionality constant B (DM2) (DM2 initiation model not used)
2547 Multiplier to the proportionality constant B (DM2) (DM2 initiation model not used)
2551 Weibull vertical intercept error (Weibull initiation model not used)
2552 General Weibull Slope (Weibull initiation model not used)
2570-2575 Fatigue growth weld properties (no fatigue growth considered)
2580 Threshold SIF scaling factor (not used by the code)
2591 Activation energy for crack growth (kJ/mol) [QG] [13]
2592 Comp to Comp variability factor [FCOMP] [14]
2593 within comp variability factor [FLAWA_##] [FLAWC_##] [6]
2594 peak to valley ratio [P2V] [15]
2595 characteristic width (mV) [CHARWD] [16]
2604 CYS (DM2 initiation model not used)
2605 CUTS (DM2 initiation model not used)
Mitigation spreadsheet: mitigation was not considered in this analysis
Transient definitions spreadsheet: fatigue was not considered in this analysis
TIFFANY Inputs spreadsheet: fatigue was not considered in this analysis

In total, 16 scalar inputs (numbered in brackets in the above list) were considered, as well as 4 spatially varying parameters around the circumference (which are saved independently for the axial and circumferential crack orientations and for each subunit) for Emc2 and SIA. DEI increased the pool to consider a total of 83 inputs (scalar and spatially varying).
The results for the 10 runs show that the number of axial cracks can be high and close to the number of sub-segments (set to 19). (The hoop WRS at the ID is relatively high, 71 MPa +/- 20 MPa, and is raised to the power of 5 in the initiation model, which makes the likelihood of having several axial cracks non-negligible.) Consequently, for all spatially varying inputs for axial cracks, all 19 values are considered. Note that the code saves up to 30 values, but only 19 sub-segments are considered in this analysis. Before being used, these spatially varying inputs are sorted according to the crack initiation times (independently for each crack orientation) so that they can be matched to their specific crack occurrence. For instance, the spatially varying component of the Direct Model 1 proportionality constant (A) sampled for a given subunit can be associated with the first crack occurring when analyzing the probability of crack initiation. Finally, the WRS is also spatially varying, not circumferentially but through the thickness; for both WRS profiles (axial and hoop), the values at all 26 locations through the thickness are considered. This represents a total of 152 input values.
The comparison considered the following cases:
Test Case 1: a case composed of 10 separate runs of size 1,000 for a total sample size of 10,000.
Test Case 2: One of the 1,000 sample size runs for which 2 realizations led to circumferential crack occurrence. The selected run used an epistemic random seed of 292 and an aleatory random seed of 542.
Test Case 3: Another of the 1,000 sample size runs for which 3 realizations led to circumferential crack occurrence. The selected run used an epistemic random seed of 292 and an aleatory random seed of 713705.
3.0 Methodologies

This section briefly describes the methods selected by SIA, DEI, and Emc2 for ranking the uncertain inputs according to their contributions to the uncertainty in the selected outputs.
3.1 SIA Methods

3.1.1 Introduction

SIA used the following two methods for the sensitivity analysis:
- 1. Most Probable Failure Point (MPFP) Direction Cosines Method [1]: In this method, the location of the MPFP is estimated from the results of a Monte Carlo simulation.
Information on the values of the input random variables for each of the realizations that result in a failure is used to estimate the location of the so-called MPFP. This method is widely known [1], and has been used to estimate the probability of failure by locating the MPFP by a method other than Monte Carlo (e.g., Rackwitz-Fiessler). The direction cosines of the MPFP in reduced variate space provide a measure of the sensitivity of the failure probability to the corresponding random variable. The use of equivalent normal distributions is required, which complicates the procedure when the input variables are not normal (or lognormal). Correlated random variables are not considered in this method.
- 2. Comparison of Failure Distributions with Underlying Distributions [Degree of Separation]:
A comparison of the distribution of the values of the random variable that resulted in a failure with the underlying distribution of that random variable provides a measure of the sensitivity of the failure probability to that variable [2]. As in the direction cosines method described above, the values of the sampled random variable for each realization that resulted in a failure are stored and analyzed, but here the mean of the random variable is obtained from all of the failures and there is no consideration of equivalent normal distributions. The advantage of this method is its simplicity, and its suitability can be judged from the results of example problems. Correlated random variables are also not considered in this method.
3.1.2 Example Problem

A simplified example problem that can be efficiently run and easily implemented is considered.
This is a simple fatigue crack growth problem with six random variables: initial crack size (a), fracture toughness (KIC), fatigue crack growth rate coefficient (C), and cyclic stress (S). Two additional random variables, Random 1 (R1) and Random 2 (R2), are also included which have no effect on the results. Crack size and cyclic stress values are sampled first, from which the cyclic stress intensity factor is calculated. The fatigue crack growth rate coefficient C and KIC are also sampled. The crack is grown using a Paris equation, and failure is defined as Kmax exceeding KIC. The stress intensity factor is calculated assuming a center crack in an infinite plate. The cycles to failure are recorded for each realization. The flow of the calculation is shown in Figure 1, and the distributions of the inputs are shown in Table 2.
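A minimal sketch of this Monte Carlo flow, using the distributions in Table 2 below, is given here. The Paris-law exponent and the unit system are not specified in the text and are assumed, so absolute cycle counts will differ from those reported; the sketch only illustrates the calculation flow of Figure 1.

    import numpy as np

    rng = np.random.default_rng(42)
    n = 100_000
    m = 3.0                                            # assumed Paris-law exponent
    a0 = rng.exponential(scale=0.02, size=n)           # initial crack size, a
    S = rng.normal(16.0, 2.0, size=n)                  # cyclic stress
    C = np.exp(rng.normal(np.log(1e-9), 0.5, size=n))  # growth coefficient, lognormal with C50 = 1e-9
    KIC = 32.07 * rng.weibull(7.0, size=n)             # fracture toughness, Weibull(b = 32.07, C = 7)

    # Center crack in an infinite plate: K = S * sqrt(pi * a); failure when K reaches KIC.
    a_fail = (KIC / S) ** 2 / np.pi
    # Closed-form integration of the Paris law da/dN = C * (S * sqrt(pi * a))**m for m != 2.
    e = 1.0 - m / 2.0
    cycles_to_failure = (a_fail**e - a0**e) / (C * (S * np.sqrt(np.pi))**m * e)
    cycles_to_failure = np.where(a_fail > a0, cycles_to_failure, 0.0)   # already critical at cycle 0
    failures = cycles_to_failure < 2000.0
    print(failures.sum(), "failures out of", n)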
Figure 1: Cycles to Failure Calculation using Fatigue Crack Growth

Table 2: Distributions of the input variables

  Variable                               Distribution   Parameters
  Crack size (a)                         Exponential    0.02
  Fracture toughness (KIC)               Weibull        b = 32.07, C = 7
  Stress (S)                             Normal         Mean = 16, SD = 2
  Fatigue crack growth rate coeff. (C)   Lognormal      C50 = 1E-9, 0.5
  Random var. 1 (R1)                     Normal         Mean = 5, SD = 1
  Random var. 2 (R2)                     Normal         Mean = 10, SD = 2

3.1.3 MPFP Direction Cosines Method

The First and Second Order Reliability Methods (FORM/SORM) are often used to estimate the failure probabilities of a system. The estimates of the failure probabilities are obtained by determining the MPFP. The MPFP is the most likely combination of random input variables that results in failure. The direction cosines corresponding to these variables measure the importance of each variable to the failure [1]. Unlike in the case of FORM/SORM, the MPFP here is obtained by Monte Carlo simulation. The values of the direction cosines are obtained from the coordinates of the MPFP in the reduced variate space.
The steps involved in estimating the sensitivity indices, or importance factors, are described here with an example. A Monte Carlo run with 100,000 realizations was performed using the example problem described earlier. Failure was defined as the number of cycles to failure, N, being less than 2,000 cycles. There were 10 realizations in which failure occurred within 2,000 cycles, and the values of the random variables for these realizations are listed in Table 3.
Table 3: Values of random variables corresponding to failure under 2,000 cycles

  S       C          KIC     a         R1      R2     N
  18.04   2.91E-09   7.73    0.05518   10.07   5.85   333
  20.38   1.51E-09   28.15   0.17390   11.28   4.51   1596
  19.44   1.03E-09   17.34   0.15840   16.97   7.26   1632
  20.51   2.07E-09   21.09   0.11275   9.12    6.99   1633
  20.22   5.13E-09   35.11   0.06472   5.30    5.71   1702
  20.87   5.34E-09   31.93   0.05129   6.21    5.72   1813
  20.96   3.14E-09   32.88   0.07998   12.44   4.58   1874
  19.17   2.38E-09   30.86   0.13922   11.28   4.91   1878
  18.29   3.31E-09   24.91   0.11491   11.41   5.75   1917
  19.63   3.41E-09   28.67   0.09003   12.54   5.07   1928

The values of the direction cosines, αi, are calculated as follows, assuming that the random variables are normally distributed:

  αi = zi / β                          (3-1)

where:

  αi   = direction cosine for variable i
  μf,i = mean of the input values of random variable i corresponding to the failures
  μi   = mean of all the input values of random variable i
  σi   = standard deviation of all the input values of random variable i

  zi = (μf,i − μi) / σi                (3-2)

  β = ( Σj zj² )^(1/2)                 (3-3)
For other distribution types, the corresponding value is obtained by use of an equivalent normal distribution, as described in [3]; this was applied to the random variables a (exponential) and KIC (Weibull).
The importance factor for variable i is then defined as αi² [1]. The calculations are detailed in Table 4. μf,i is the average of the failure values listed in Table 3. Instead of using the MPFP corresponding to the smallest value of β, the average of all the failures is used as the MPFP. From experience, the average value over all the failures is considered more representative of the MPFP for the purpose of sensitivity analysis. The parameters of the input distributions listed in Table 2 were used to estimate μi and σi. The importance factor, αi², is calculated next. The most important random variable for this example is crack depth (43%), followed by C (26%). The fracture toughness is not expected to have a major influence on the cycles to failure and, as such, the variable only has 5% importance. The random variables R1 and R2 do not factor into the calculations and show negligible importance.
Table 4: Importance factor calculations using the MPFP direction cosines method

            S       C          KIC     a       R1       R2
  μf,i      19.75   2.71E-09   25.87   0.104   10.661   5.634
  zi        1.88    2.00       -0.84   2.55    0.33     0.63
  αi² (%)   23      26         5       43      1        3

3.1.4 Degree of Separation Method

This method involves comparing the values of the random variable that resulted in a failure with the underlying distribution of that random variable [2]. The values of a variable that resulted in failure under 2,000 cycles are placed in one distribution, whereas all the sampled values of that variable are placed in another distribution. The characteristics of these two distributions can be compared to determine the sensitivity of the output to the variable: by some measure, the difference between these distributions is a measure of the sensitivity of the output to that variable. If the distributions are identical, the output is not sensitive to that variable. The concept is illustrated in Figure 2.
Figure 2: Illustration of the degree of separation method

The distribution labeled All Data is the histogram of all the sampled values of a specific variable. The distributions labeled A, B, C, and D are possible outcomes of that variable for the realizations that caused failures. Outcome A signifies a higher sensitivity to that variable than outcome B. Outcome C is similar to All Data in terms of its mean or median value and hence indicates low or negligible sensitivity. Outcome D is similar to outcome B. For example, outcome B could correspond to a variable like stress, where a higher value is more likely to cause a failure than a lower value; outcome D could correspond to a variable like fracture toughness, where a lower value is more likely to cause a failure than a higher value.
In the current analysis, the following measure is used to quantify the difference between the distributions:

  vi = (μf,i − μall,i) / σall,i          (3-4)

where:

  vi      = sensitivity measure for variable i
  μf,i    = mean of the input values corresponding to the failures
  μall,i  = mean of all the input values
  σall,i  = standard deviation of all the input values

The sensitivity measure vi can be normalized to obtain a sensitivity index, vs,i, which gives the relative sensitivity of all the input variables:

  vs,i = vi² / Σj vj²                    (3-5)
The following strategy is used for calculating μf,i, μall,i, and σall,i:
- 1. If the distribution of the input variable is normal, the mean and the standard deviation are of the actual values.
- 2. If the distribution of the input variable is log-normal, the mean and the standard deviation are of the log of the values.
- 3. For other distributions, if the range of sampled values is large, step 2 is used; otherwise, step 1 is used. The range is quantified as the ratio of the maximum and the minimum of the sampled values. If the ratio is greater than 10, the range is considered large.
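A sketch of the degree-of-separation calculation, following Equations (3-4) and (3-5) and the strategy above, is shown below; the choice of which inputs are treated on the logarithmic scale is passed in explicitly.

    import numpy as np

    def degree_of_separation(samples, failed, log_scale=()):
        # samples: dict of input name -> array of all sampled values; failed: boolean mask of failures.
        v = {}
        for name, x in samples.items():
            x = np.log(x) if name in log_scale else np.asarray(x, dtype=float)
            v[name] = (x[failed].mean() - x.mean()) / x.std()      # Eq. (3-4)
        total = sum(val**2 for val in v.values())
        return {name: val**2 / total for name, val in v.items()}   # Eq. (3-5), as fractions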
One advantage of this method compared to the MPFP Direction Cosines Method is its simplicity. Also, no prior knowledge of the parameters of the input distributions is needed, since the distributions are characterized by the actual values sampled during the Monte Carlo simulation. Truncated distributions are also easily handled.
Using the results of Table 3, Table 5 outlines the steps for estimating the sensitivity index, or importance factor, vs,i. μf,i is the average of the failure values listed in Table 3. μall,i and σall,i are obtained from the 100,000 sampled values of each variable; these values are essentially the means and standard deviations of the input distributions, although they may differ from the input distribution parameters if the distributions are truncated. Finally, the sensitivity index is calculated using Equation (3-5). As with the MPFP Direction Cosines method, the overall ranking of the importance factors remains similar. The fracture toughness is not expected to have a major influence on the cycles to failure and, as such, the variable only has 6% importance. The random variables R1 and R2 do not factor into the calculations and show negligible importance.
Table 5: Importance factor calculations using the degree of separation method

             S       C        KIC     a        R1      R2
  μf,i       19.75   -8.57*   25.87   -1.02*   10.66   5.63
  μall,i     16.00   -9.00*   30.01   -1.95*   9.99    5.00
  σall,i     2.00    0.22*    5.03    0.55*    2.00    1.00
  vi         1.87    2.00     -0.82   1.68     0.33    0.63
  vs,i (%)   31      35       6       24       1       3

  * calculated on the logarithmic scale
3.1.5 Results and Conclusion

Two methods were used to perform the sensitivity analysis of an example probabilistic fracture mechanics (PFM) output. Both methods provided similar rankings of the importance factors of the input variables, as summarized in Table 6. The Degree of Separation Method is computationally simpler than the MPFP Direction Cosines Method. Another advantage of the Degree of Separation Method is that it does not require the parameters of the input distributions; it can provide importance factors based solely on the PFM output. The small differences in rankings between the two methods are attributed to the calculation of μf,i. If all the input variables were normally or lognormally distributed without any truncation, both methods would provide identical results.
Two methods for the sensitivity analysis of PFM results are considered in this study: the MPFP Direction Cosines Method and the Degree of Separation Method. These two methods were tested with a simple example problem and then applied to the xLPR test cases. Both methods provided similar rankings for the importance factors of the random input variables. The Degree of Separation Method is simpler to apply and is recommended for performing the sensitivity analysis of PFM results.
Table 6: Comparison of the importance factors obtained by the two methods

                                Importance Factor (%)
  Variable                    MPFP Direction Cosines   Degree of Separation
  Crack Depth, a              43                       24
  Fatigue Coeff., C           26                       35
  Cyclic Stress               23                       31
  Fracture Toughness, KIC     5                        6
  Unused Random, R2           3                        3
  Unused Random, R1           1                        1

3.2 DEI Methods

DEI used machine learning algorithms, implemented using the Python Scikit-Learn package, to create meta-models that predict outputs from a set of sampled inputs generated over a given number of xLPR realizations. Two appealing features of meta-models are that (1) they are less computationally intensive than running the xLPR software, and (2) many machine learning algorithms include metrics for ranking the relative importance of the inputs from which the meta-models are created. DEI implemented classification and regression learning models to predict either discrete or continuous outputs, respectively. The following learning models were considered:
Classification:
  o Gradient Boosting Decision Trees (GBC)
  o Random Forest Decision Trees (RFC)
  o Linear Support Vector Machines (SVC)
Regression:
  o Gradient Boosting Decision Trees (GBR)
The primary learning method used for both classification and regression was gradient boosting. It is an ensemble method that works on reducing the residual between the value of interest and its representation, as measured by a loss function. The method uses weak learners, where each weak learner is a decision tree (see Figure 3), in a stepped approach, selecting the steepest descent (largest reduction in residual) at each step. Each additional tree is constructed to minimize the error from the previous trees. The default loss function is based on least-squares regression (see http://scikit-learn.org/stable/modules/ensemble.html#classification), which means that the method should give results fairly close to a linear stepwise regression. The main difference is that gradient boosting builds the regression or classification model on the residual, while stepwise regression rebuilds the regression at each step.
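A minimal sketch of this meta-model approach with scikit-learn is shown below; the synthetic data and hyperparameter values are illustrative, not those used by DEI.

    import numpy as np
    from sklearn.ensemble import GradientBoostingClassifier

    # Synthetic stand-in: rows are realizations, columns are sampled inputs,
    # and y is an indicator output (e.g., leakage by 60 years).
    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 16))
    y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=1000) > 1.0).astype(int)

    model = GradientBoostingClassifier(n_estimators=200, learning_rate=0.1, max_depth=3)
    model.fit(X, y)
    ranking = np.argsort(model.feature_importances_)[::-1]   # inputs ranked by importance
    print(ranking[:5], model.feature_importances_[ranking[:5]])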
Figure 3: Example decision tree classification weak learner

RFC is an alternate decision tree ensemble method in which each tree is trained separately to predict the value of interest. Each individual tree is trained using a random subsample of the training set drawn with replacement (bagging), and each tree only considers a random subset of the input parameters. Each tree is built by evaluating the decrease in Gini impurity at each sequential decision point (split).
SVC creates hyperplanes in multi-dimensional space to differentiate training data for classification. The hyperplanes are trained using the hinge loss function to maximize the distance between the two classes. This method of classification is conceptually shown in Figure 4 for a two-dimensional dataset.
Figure 4: Two-dimensional representation of SVC

Spatial xLPR Inputs

Machine learning models were fit to a given set of sampled inputs of interest and xLPR-calculated outputs from several xLPR realizations. Several of the sampled inputs are also sampled on a spatial basis, with the circumference of the pipe split into several circumferential subunits. As a result, the input distributions for each subunit are sampled 38 times (19 subunits and two crack orientations) in the case of these studies for a given xLPR realization.
Since the 38 samples are not 38 independent inputs, an aggregation methodology was developed to mitigate dilution of the relative importance of the distributions for spatially distributed inputs.
The aggregation method depended on whether the output was available on a subunit basis (e.g., probability of initiation within a subunit) or if the output was aggregated over all subunits (e.g., total leak rate due to all cracks), as follows:

- Pipe subunit outputs were analyzed by fitting meta-models to the results from each subunit. The ranked importance of each input parameter (feature importance) for each fitted model was then averaged to develop an aggregated feature importance rank.

- Aggregated outputs were analyzed using a single meta-model from which a single ranked feature importance was determined. Spatially distributed inputs were averaged across all subunits to form a single aggregated input.
The relative agreement of these two methodologies was evaluated by comparing the feature importance ranks for outputs that were available on both an aggregated and subunit basis (e.g.,
the occurrence of crack output). Alternate aggregation methods were considered for combining the spatially distributed inputs for aggregate output analyses including summation and generalized averaging, but these alternate methods were not found to produce substantially different results.
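A simplified sketch of the two aggregation strategies is given below; the array shapes, input names, threshold, and model settings are hypothetical and only illustrate the per-subunit fitting and the subunit-averaging described above, not the DEI implementation.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(1)
n_real, n_subunits = 1000, 19

# Hypothetical spatially sampled input: one value per subunit per realization
wrs = rng.normal(size=(n_real, n_subunits))
# Hypothetical scalar (non-spatial) input
a_mult = rng.lognormal(size=n_real)

# Hypothetical per-subunit output (e.g., crack initiation in each subunit)
y_subunit = (wrs + 0.1 * a_mult[:, None] > 1.5).astype(int)
# Aggregated output over all subunits (e.g., a crack occurring anywhere)
y_total = y_subunit.max(axis=1)

# Strategy 1: fit one meta-model per subunit, then average the feature importances
importances = []
for j in range(n_subunits):
    X_j = np.column_stack([wrs[:, j], a_mult])
    m = GradientBoostingClassifier().fit(X_j, y_subunit[:, j])
    importances.append(m.feature_importances_)
print("per-subunit average importance:", np.mean(importances, axis=0))

# Strategy 2: average the spatial input over subunits and fit a single meta-model
X_agg = np.column_stack([wrs.mean(axis=1), a_mult])
m_agg = GradientBoostingClassifier().fit(X_agg, y_total)
print("aggregated-input importance:", m_agg.feature_importances_)
```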
The meta-model fitting process through which the machine learning algorithm builds the meta-model is not entirely automated. The analyst must specify several input parameters that constrain or otherwise control how the algorithm builds and fits the model. For GBC, a subset of these input parameters includes:
- number of weak learners (i.e., number of decision trees)
- learning rate (i.e., the relative contribution from each weak learner on the final prediction)
- maximum number of decision levels within each decision tree
- minimum number of samples at each leaf (end decision point)
- maximum number of inputs from which decision points in each tree are created

The input parameter values can have a significant impact on the accuracy, computational efficiency, and the degree to which the model may over-fit the data. The predictive capability of a model using a given set of input parameters was evaluated using cross-validation. K-fold cross-validation segregates the data into k equally sized subsamples (or folds). Each subsample is held out in turn as a testing set against which the predictive accuracy of the model is evaluated using a scoring metric, while the model is fit to the training set formed by the remaining (k-1) subsamples. A cross-validation score is aggregated from the scores of the k models that were each fit to a training set and scored against the corresponding held-out testing set.
To determine the optimal set of input parameters, a grid search algorithm was implemented to evaluate a range of possible input parameters based on their cross-validation scores. The cross-validation was performed using three folds, and the scoring metric depended on the model type.
Classification models were scored by calculating the area under the receiver operating characteristic curve, and regression models were scored using explained variance. The input parameter set that produced a model with the highest cross-validation score (optimal score of 1.0) was then fit to the entire dataset. In addition to the cross-validation score, the model accuracy with respect to the entire dataset was determined.
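This grid search and cross-validation procedure maps onto standard scikit-learn utilities; a hedged sketch is shown below, where the synthetic data, the candidate parameter grid, the three folds, and the ROC-AUC scoring are illustrative choices consistent with the description above rather than the DEI settings.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import GridSearchCV

# Illustrative stand-in for the sampled-input / xLPR-output training data
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

# Candidate hyperparameter values evaluated by the grid search
param_grid = {
    "n_estimators": [100, 300],
    "learning_rate": [0.05, 0.1],
    "max_depth": [2, 3, 4],
    "min_samples_leaf": [1, 5],
}

# 3-fold cross-validation scored by the area under the ROC curve
# (for a regression meta-model, scoring="explained_variance" would be used instead)
search = GridSearchCV(
    GradientBoostingClassifier(random_state=0),
    param_grid,
    cv=3,
    scoring="roc_auc",
)
search.fit(X, y)
print("best parameters:", search.best_params_)
print("best cross-validation score:", search.best_score_)
```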
3.2.1 Feature Importance

Feature importance is the metric with which the input parameters were qualitatively ranked relative to their contribution to the model's prediction. For GBC, importance is a measure of the number of times a variable is selected for a tree decision point (split), weighted by the improvement the split provided to the model prediction. Tree feature importance values are then averaged over all trees in the gradient boosted model. It makes intuitive sense that an input parameter that is regularly used in tree decision points would be more important than input parameters that are infrequently used, but this is not a quantitative measure of the model sensitivity to a given parameter. A similar feature importance metric is also available for random forest decision tree models.
Feature importance can also be derived from linear SVC models. The normal vector that defines the hyperplane which separates the two classes is described by coefficients for each input parameter. The magnitude of the coefficients provides a measure by which the relative importance of each parameter can be determined. A high coefficient in either the positive or negative direction indicates high importance towards either of the two classes. Thus, the ranking is taken as the absolute value of the coefficients.
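The sketch below shows how these two importance metrics can be pulled from fitted scikit-learn models; the data and input names are illustrative, and this is only one possible way to implement the ranking described above, not the DEI code.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.svm import LinearSVC

X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
names = [f"x{i}" for i in range(X.shape[1])]  # hypothetical input names

# Tree-based importance: split usage weighted by the improvement each split provides
gbc = GradientBoostingClassifier(random_state=0).fit(X, y)
gbc_rank = np.argsort(gbc.feature_importances_)[::-1]

# Linear SVC importance: magnitude of the separating-hyperplane coefficients
svc = LinearSVC(dual=False, max_iter=10000).fit(X, y)
svc_rank = np.argsort(np.abs(svc.coef_[0]))[::-1]

print("GBC ranking:", [names[i] for i in gbc_rank])
print("SVC ranking:", [names[i] for i in svc_rank])
```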
Feature importance metrics can also be used to eliminate input parameters which are determined to be unimportant from consideration when training the model to the dataset. Feature elimination is a technique that has been found, in some cases, to improve the predictive accuracy of fitted models, but this approach was not taken in the analyses performed by DEI.
3.2.2 Model Comparison

The four classification and regression meta-models were fit to sampled inputs and xLPR-calculated outputs for a given number of xLPR realizations. A typical feature importance result is shown in Figure 5 for a GBC fit to the occurrence of leak xLPR output. Figure 5 includes the accuracy and cross-validation score to illustrate the goodness of the model fit, which in all cases is above 0.92. The feature importance ranking suggests there are 4 primary inputs the model is using to predict the occurrence of leak, and the remaining 17 of the top 20 inputs are used in incrementally fewer tree decision splits.
The relative agreement, and disagreement, between the four meta-models is illustrated in Figure 6. The three plots visually highlight the differences in the ranking between any two meta-models, where the same input parameters in the two respective rankings are connected with a line, and the parameters at the top of the figure and with the largest sized red dot are ranked with the highest importance. The primary observation from Figure 6 is that model agreement in feature importance is isolated to the top four input parameters. Below that point, significant differences are observed that indicate low confidence in the result. This is not surprising given the incremental differences observed in the feature importance values for these parameter rankings in Figure 5.
It is concluded from these results that the model prediction is primarily driven by these top parameters.
Figure 5: Typical gradient boosted decision tree classification feature importance rankings
Figure 6: Comparisons of meta-model feature importance rankings for occurrence of leak

3.3 Emc2 Methods

Emc2 used the following methods for the sensitivity analysis:

- (linear or rank) stepwise regression
- recursive partitioning
- multivariate adaptive regression splines (MARS)
As shown in the following subsections, while MARS methods are usually more efficient than stepwise regression when the output of interest is continuous, the use of splines makes the method ill-suited for on-off outputs or discrete outputs with a small number of potential values. Stepwise regression is close to the gradient boosting method used by DEI and gives similar results. Similarly, random forest is part of the recursive partitioning family of methods. Consequently, the DEI and Emc2 methods should give similar results and should lead to the same conclusions so long as the input sets are equivalent.
3.3.1 Rank Regression

The rank regression technique uses a rank transformation over the input and output variables under consideration. The smallest value of a variable is given a rank of one, the next a rank of two, and so on up to the largest value having a rank of n (i.e., the sample size). A stepwise linear regression is then applied to the rank-transformed data. The model is linear and additive, and it is shown in the following form:

\operatorname{rank}(y) = b_0 + \sum_{i} b_i \, \operatorname{rank}(x_i) + \varepsilon \qquad (3-6)

where \varepsilon represents (for this regression and the subsequent ones) the amount of uncertainty not explained by the model.
The stepwise approach starts with finding the best fit with only one parameter, testing all possible input parameters. It then builds upon this initial fit by selecting the best fit with two parameters, conditional upon keeping the first parameter, and so on. An alpha value, representing the probability for each input effect to be spurious, is selected as a stopping criterion. The default value is set to approximately 15%, which means that, if there is a 15% chance or more for the variable to be spurious, then it is not included. Rank regression is effective in capturing monotonic relationships between inputs and outputs. The non-parametric aspect makes it less sensitive to outliers. This technique is limited to additive models where no conjoint influences are considered and may perform poorly on non-monotonic relationships.
Three metrics are included for each input variable used to display rank regression results. Two of the metrics are based on the coefficient of determination, noted conventionally as R2, which represents the amount of variance explained by the regression model. The coefficient of determination is a normalized value that varies between 0 (no variance explained) and 1 (all the variance explained).
- R2inc gives the cumulative coefficient of determination of the rank regression model when the i-th variable has been added (it includes all variables up to the i-th variable in the model).

- R2cont gives the gain in R2 when the i-th variable has been added compared to the model with i-1 variables. It is a good indicator of the contribution of this specific variable in explaining the variance of the output under consideration.

- Standardized rank regression coefficients (SRRC) display the rank regression coefficients after they have been standardized to remove the influence of units. The rank regression coefficient is an indication of the strength of the influence: an absolute value close to zero means that the parameter has no influence, while an absolute value of one represents a very strong influence. The rank regression coefficient also indicates the positive or negative direction of the influence of this input variable on the considered output. A negative sign represents negative influence, in which high values of the input lead to low values of the output and low values of the input lead to high values of the output. A positive sign represents positive influence, where high values of the input lead to high values of the output and low values of the input lead to low values of the output.

An illustrative sketch of such a stepwise rank regression and these three metrics is given below.
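The sketch is a minimal forward stepwise regression on rank-transformed, standardized data, assuming an illustrative 15% alpha-to-enter criterion, synthetic data, and the statsmodels OLS routine for p-values; it is not the Emc2 implementation, and the variable names are hypothetical.

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import rankdata

rng = np.random.default_rng(0)

# Illustrative data: three influential inputs, two spurious ones, monotonic response
X = rng.normal(size=(500, 5))
y = 2 * X[:, 0] + np.exp(X[:, 1]) - 0.5 * X[:, 2] ** 3 + rng.normal(scale=0.5, size=500)
names = [f"x{i}" for i in range(X.shape[1])]

# Rank-transform and standardize, so the fitted coefficients are SRRCs
Xr = np.apply_along_axis(rankdata, 0, X)
yr = rankdata(y)
Xs = (Xr - Xr.mean(axis=0)) / Xr.std(axis=0)
ys = (yr - yr.mean()) / yr.std()

alpha = 0.15          # stopping criterion: chance that an input effect is spurious
selected, r2_prev = [], 0.0
while len(selected) < X.shape[1]:
    best = None
    for j in set(range(X.shape[1])) - set(selected):
        fit = sm.OLS(ys, sm.add_constant(Xs[:, selected + [j]])).fit()
        if best is None or fit.rsquared > best[1].rsquared:
            best = (j, fit)
    j, fit = best
    # p-value of the newly added variable (last regressor in the design matrix)
    if fit.pvalues[-1] > alpha:
        break
    selected.append(j)
    print(f"{names[j]:>3s}: R2inc={fit.rsquared:.3f}  "
          f"R2cont={fit.rsquared - r2_prev:.3f}  SRRC={fit.params[-1]:+.3f}")
    r2_prev = fit.rsquared
```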
3.3.2 Recursive Partitioning

Recursive partitioning regression, also known as a regression tree, is a regression method that captures conjoint influences. A regression tree splits the data into subgroups in which the values are relatively homogeneous. The regression function is constructed using the sample mean of each subgroup. This approach results in a piecewise constant function over the input space under consideration. The predictive model is shown as follows:

\hat{y}(x) = \sum_{m=1}^{M} \bar{y}_m \, I(x \in R_m) \qquad (3-7)

where R_m denotes the m-th subgroup of the input space, \bar{y}_m is the sample mean of the output within that subgroup, and I(\cdot) is the indicator function.
Recursive partitioning is well-adapted to the present study as it strives to capture the effect of thresholds (e.g., a low value for one parameter and a high value for another parameter, or when a certain parameter reaches a threshold value). One of the drawbacks of this regression is that it considers so many potential relations that it tends to over-fit by capturing spurious correlations.
Consequently, checking relations found only by this regression against scatterplots is recommended and was performed.
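A minimal regression tree sketch is shown below, using scikit-learn's DecisionTreeRegressor as one possible implementation of recursive partitioning; the data, the threshold-type behavior, and the depth limit are purely illustrative.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)

# Illustrative threshold-type behavior: the output jumps only when
# x0 is low AND x1 is high, which an additive model cannot represent.
X = rng.uniform(size=(2000, 3))
y = np.where((X[:, 0] < 0.3) & (X[:, 1] > 0.7), 10.0, 1.0) + rng.normal(scale=0.1, size=2000)

# The tree recursively splits the input space into subgroups and predicts
# the sample mean of each subgroup (a piecewise constant function).
tree = DecisionTreeRegressor(max_depth=4, min_samples_leaf=20).fit(X, y)
print("R2 on training data:", tree.score(X, y))
print("prediction in the threshold region:", tree.predict([[0.1, 0.9, 0.5]])[0])
```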
3.3.3 Multivariate Adaptive Regression Splines (MARS)
MARS is a combination of (linear) spline regression, stepwise model fitting, and recursive partitioning. For a regression with a single input, the procedure starts with a mean-only model and adds basis functions in a stepwise manner, with the overall linear trend (fit via least-squares linear regression) added first. Additional spline basis functions are then added one at a time, each chosen to reduce the sum of squared error (SSE) between the observations and the predictions, and this process is repeated until M (set by default at 200) basis functions have been added. At this point, the MARS procedure tries to simplify the model using stepwise deletion of basis functions while keeping the y-intercept and linear trend; the candidate whose deletion leads to the smallest increase in SSE is removed at each step. This deletion is applied until the model is regressed back to the original linear model.

Stepwise addition and deletion lead to two different sequences of candidate models. The best model is chosen using a generalized cross-validation score, which corresponds to an SSE normalized by the number of basis functions considered. With multiple inputs, the basis functions consider main effects and multiple-way interactions. The options used for this analysis consider only two-way interactions to avoid the exponential computational cost of considering more interactions. MARS usually leads to results similar to linear regression but with greater accuracy and with the inclusion of non-monotonic effects and conjoint influences; however, it performs poorly with discrete inputs due to the use of splines.
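A hedged sketch of a MARS fit is shown below; it assumes the third-party py-earth package (scikit-learn-contrib) is available, which is not confirmed to be the tool Emc2 used, and the synthetic response and parameter choices (max_terms, max_degree=2 for two-way interactions) are illustrative only.

```python
import numpy as np
from pyearth import Earth  # third-party py-earth package (assumed available)

rng = np.random.default_rng(0)

# Illustrative continuous response with a non-monotonic effect and a two-way interaction
X = rng.uniform(-2, 2, size=(1000, 4))
y = np.abs(X[:, 0]) + X[:, 1] * X[:, 2] + rng.normal(scale=0.2, size=1000)

# Forward pass adds hinge basis functions (up to max_terms); the backward pass prunes
# them using the generalized cross-validation score. max_degree=2 limits the model
# to main effects and two-way interactions, as in the Emc2 analyses.
mars = Earth(max_terms=200, max_degree=2)
mars.fit(X, y)
print(mars.summary())
```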
3.3.4 Metrics for Recursive Partitioning and MARS

Recursive partitioning and MARS are non-monotonic regression techniques and, as such, do not directly provide indicators of the importance of each variable like the rank-regression or linear regression methods do. Only the estimated coefficient of determination informs the quality of the regression overall. In consequence, a second step is added using the constructed regression model. The model is used to run a set of realizations large enough, and with the appropriate structure, so that a complete Sobol decomposition can be applied.
The Sobol decomposition technique is a variance decomposition that is used to decompose the output variance and estimate the contribution of each uncertain input with sensitivity indices. In the Sobol decomposition, the first order sensitivity indices are used to estimate only the influence of one of the inputs Xi. It is performed via two samples of the same size. In one sample, all values are varying. In the second, the values for Xi are kept identical to the first sample and all the other values are changed. A comparison between both samples is used to estimate how much Xi by itself influences the regression. This is repeated for each Xi and gives all the Si values (first order sensitivity indices).

In theory, one can repeat the same approach to estimate second order sensitivity indices for any couple (Xi, Xj) by running two samples again, but this time with both Xi and Xj fixed in the second sample, and so on. This is usually not done because the first order indices already require many estimates, and the number of interactions would quickly become prohibitive. Saltelli and Homma [5] devised another way to tackle the problem: if one were doing the opposite (fixing all values except Xi in the second sample), one would estimate the influence of all the variables and their interactions, except for Xi. By taking the difference with the total variance, one would have the influence of Xi and all its interactions. This approach requires only two samples and thus is used to estimate the total indices. So, for each input Xi it is possible to calculate estimates of:

- its sole influence, called Si;
- its influence with all interactions (Si plus all higher-order terms involving Xi), called Ti.

The difference (Ti - Si) represents the influence of all interactions that include Xi on the output, except for the sole influence of Xi. This is what is used for the conjoint contribution.

- Si (first order sensitivity index): Indicates how much variance of the output is explained by this input solely (it can be compared with R2cont from stepwise regression). This represents the influence of the parameter by itself.

- Ti (total order sensitivity index): Indicates how much variance of the output is explained by this input and all its interactions (including with itself). There is no equivalent for stepwise regression, which is an additive regression (no conjoint influence). The quantity (Ti - Si) represents the influence of all interactions for which input Xi is responsible, except for its sole influence.

A short illustrative sketch of how such estimates can be computed is given below.
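The sketch below uses the SALib package as one possible way to obtain Si and Ti estimates (an assumption; the Emc2 implementation is not specified here); the three-input problem definition and the stand-in meta-model are illustrative only.

```python
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

# Problem definition for three illustrative inputs
problem = {
    "num_vars": 3,
    "names": ["x1", "x2", "x3"],
    "bounds": [[0.0, 1.0]] * 3,
}

# Stand-in for the fitted regression model (cheap to evaluate many times)
def metamodel(X):
    return X[:, 0] + 2.0 * X[:, 1] * X[:, 2]

# Saltelli sampling scheme used to estimate first-order and total-order indices
X = saltelli.sample(problem, 1024)
Y = metamodel(X)

Si = sobol.analyze(problem, Y)
print("first-order indices Si:", Si["S1"])
print("total-order indices Ti:", Si["ST"])
print("conjoint contribution (Ti - Si):", Si["ST"] - Si["S1"])
```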
3.3.5 Summary Indicators

As mentioned above, the non-monotonic regressions estimate the importance of inputs via Sobol decomposition, supposing that the regression is perfect. Stepwise regression directly estimates the influence of those parameters. As a result, the influences estimated with recursive partitioning and MARS are weighted by the final R2 of each regression. Such a correction is not necessary for stepwise regression.

Main contribution: The main contribution is the indicator used to rank all inputs according to their importance in terms of uncertainty; it is the average of the normalized influences from the three regressions (a value between 0 and 1). If an input is not considered by one regression, its value is set to 0 for that regression. R2cont from the stepwise regression is used directly without any normalization. As Si reflects the contribution assuming the regression model is perfect, it is normalized by multiplying by the final R2 for the regression considered.
Consider the proportionality constant value for the first occurring circumferential crack (A_CC_01) in Table 8 as an example:
- R2cont = 0.08 for the stepwise regression (this parameter explains 8% of the variance according to the stepwise regression).

- Si,RP = 0.08 for the recursive partitioning regression (8% of the recursive partitioning model is explained by this parameter by itself), but R2RP = 0.96, meaning the Sobol decomposition was performed on a model explaining only 96% of the variance.

- Si,MARS = 0.46 for the MARS regression (46% of the MARS model is explained by this parameter), but R2MARS = 0.60, meaning the Sobol decomposition was performed on a model explaining only 60% of the variance.
No credit is taken for the unexplained variance and no preference is given to any regression technique over another.
The final formula is as follows:

\mathrm{Main}_i = \frac{1}{3}\left[ R^2_{\mathrm{cont},i} + S_{i,\mathrm{RP}} \, R^2_{\mathrm{RP}} + S_{i,\mathrm{MARS}} \, R^2_{\mathrm{MARS}} \right] \qquad (3-8)

which, with the example numbers, gives:

\mathrm{Main}_i = \frac{1}{3}\left[ 0.08 + 0.08 \times 0.96 + 0.46 \times 0.60 \right] \approx 0.14
The conclusion is that the proportionality constant associated with the first circumferential crack explains at least 14% of the total variance.
Conjoint contribution: The conjoint contribution is the average normalized influence (Ti - Si) from the two non-additive regressions. As (Ti - Si) reflects the contribution assuming the regression model is perfect, the values are normalized by multiplying by the final R2 for the regression considered. Note that, if the conjoint contribution is greater than 0.1, then it is highlighted in yellow in the tables that follow; if not, it is probably better not to consider it.
Using the same example from the proportionality constant value for the first occurring circumferential crack (A_CC_01) in Table 8, we have:
- Nothing for the stepwise regression: it is an additive regression and does not estimate conjoint influence, so it is not included in the average.

- (Ti - Si)RP = 1.00 - 0.08 = 0.92 for the recursive partitioning regression (92% of the recursive partitioning model is explained by interactions including A), but R2RP = 0.96, meaning the Sobol decomposition was performed on a model explaining 96% of the variance.

- (Ti - Si)MARS = 0.94 - 0.46 = 0.48 for the MARS regression (48% of the MARS model is explained by interactions including A), but R2MARS = 0.60, meaning the Sobol decomposition was performed on a model explaining only 60% of the variance.
No credit is taken for the unexplained variance and no preference is given to any regression technique over another.
The final formula is:

\mathrm{Conjoint}_i = \frac{1}{2}\left[ (T_i - S_i)_{\mathrm{RP}} \, R^2_{\mathrm{RP}} + (T_i - S_i)_{\mathrm{MARS}} \, R^2_{\mathrm{MARS}} \right] \qquad (3-9)

which, with the example numbers, gives:

\mathrm{Conjoint}_i = \frac{1}{2}\left[ 0.92 \times 0.96 + 0.48 \times 0.60 \right] \approx 0.586
The conclusion is that, in addition to 14% of the variance explained solely, the proportionality constant of the first occurring circumferential crack explains an additional 59% of the output variance from its interaction with other (uncertain) input parameters.
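The two summary indicators reduce to simple arithmetic once the regression outputs are available; the short sketch below reproduces the A_CC_01 example numbers from Table 8 (the function names are illustrative, not part of any xLPR code).

```python
def main_contribution(r2_cont_stepwise, si_rp, r2_rp, si_mars, r2_mars):
    """Equation (3-8): average of the R2-weighted sole influences."""
    return (r2_cont_stepwise + si_rp * r2_rp + si_mars * r2_mars) / 3.0

def conjoint_contribution(ti_minus_si_rp, r2_rp, ti_minus_si_mars, r2_mars):
    """Equation (3-9): average of the R2-weighted interaction influences."""
    return (ti_minus_si_rp * r2_rp + ti_minus_si_mars * r2_mars) / 2.0

# A_CC_01 example values from Table 8
print(main_contribution(0.08, 0.08, 0.96, 0.46, 0.60))               # ~0.14
print(conjoint_contribution(1.00 - 0.08, 0.96, 0.94 - 0.46, 0.60))   # ~0.586
```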
4.0 Comparison of Rankings for Test Case 1 based on 10,000 Realizations

The comparison of rankings shows the results obtained by SIA first, then DEI, and finally Emc2. It is important to note that:

1. The order of the inputs reflects only this presentation strategy and not the overall importance. However, the top three parameters are the same for all methods. The SIA ranking may look better as a result.

2. Some of the regression values obtained by DEI and Emc2 are not reported to keep a reasonable table size; however, the reported values were around a few percent and usually less than 1%. As a result, the SIA accuracy may look worse, but that is not actually the case.
As indicated in the conclusions, all methods are adequate and lead to similar conclusions when taking the differences in the approaches into account.
4.1 Comparison on Probability of Circumferential Crack Initiation (15 Instances out of 10,000 Realizations)
For the probability of circumferential crack, the rank regression was replaced with a linear regression. The value of the product of the two sampled components of the A parameter (proportionality constant for the PWSCC initiation model) needs to be high to lead to crack initiation. The rank transformation erases this feature and, as a result, does not capture the relation as well as a simple linear regression. The issue can be highlighted via a linear regression applied to the scatterplots of the actual values and their rank equivalent (see Figure 7). Note that, for the left frame, the linear fit is curved due to the semi-log scale used. Also, the DEI results were normalized so that they sum to 1. Table 7 summarizes the rankings obtained by each method.
Figure 7: Comparison of scatterplots on actual values for A and occurrence of circumferential crack (left) and their rank equivalent (right)
Table 7: Ranking results for probability of circumferential crack initiation based on 10,000 realizations

Probability of Circumferential Crack Initiation (P_CC)
Variable | SIA Importance Factor (%) | DEI Normalized Importance Factor | Emc2 Si | Emc2 (Ti-Si)
A_CC_01 55% 15% 31% 37%
A_Mult 26% 23% 1% 16%
WRS_AX_01 18% 12% 0% 33%
THICK 0% 2% 0% 0%
P2V 0% 1% 0% 0%
CHARWD 0% 0% 0% 0%
QG 0% 0% 0% 0%
TEMP 0% 0% 0% 0%
YS_LP 0% 1% 0% 0%
E_RP 0% 1% 0% 0%
YS_RP 0% 0% 0% 0%
E_RP 0% 1% 0% 0%
UTS_RP 0% 0% 0% 0%
UTS_LP 0% 2% 0% 0%
The same top three parameters are identified by all three methods; however, the rankings are slightly different. The top three parameters are the ones expected from the PWSCC crack initiation equation used by the model.
The difference between the DEI results and the other two for the spatially varying component is expected and is due to the averaging over spatial variability, which somewhat reduces the importance of the parameter. Although averaging over spatially sampled inputs reduces the variance of the sampled value across all subunits, resulting in a somewhat different ranking of variables, this approach does not impact the overall conclusion that the A parameter (proportionality constant for the PWSCC initiation model) shows the highest importance.
The Emc2 method finds part of the conjoint influence but is limited by the fact that only two-parameter interactions are considered. Furthermore, MARS is not adapted for on-off outputs, which reduces the quality of the results. Table 8 shows the detailed Emc2 regression results obtained using only the inputs selected by SIA for its analysis. Linear regression finds the top three parameters in that order. Recursive partitioning finds the two components of the A parameter, but MARS is affected by many spurious correlations.
Table 8: Emc2 detailed ranking results for regression on probability of circumferential crack initiation using 22 uncertain inputs

Final R2: linear regression = 0.11; recursive partitioning = 0.96; MARS = 0.60

Input | Linear regression (R2 inc., R2 cont., SRRC) | Recursive partitioning (Si, Ti) | MARS (Si, Ti) | Main contribution | Conjoint contribution
A_CC_01  | 0.08, 0.08, 0.28  | 0.08, 1.00 | 0.46, 0.94 | 0.14 | 0.59
UTS_LP   | ---               | ---        | 0.09, 0.02 | 0.03 | 0.00
A_MULT   | 0.10, 0.02, 0.14  | 0.00, 0.90 | 0.01, 0.62 | 0.01 | 0.62
WRSAX_01 | 0.11, 0.00, 0.06  | ---        | 0.00, 0.12 | 0.00 | 0.03
THICK    | 0.11, 0.00, 0.02  | ---        | 0.00, 0.00 | 0.00 | 0.00
YS_RP    | 0.11, 0.00, 0.02  | ---        | 0.00, 0.00 | 0.00 | 0.00
UTS_RP   | 0.11, 0.00, -0.02 | ---        | 0.00, 0.00 | 0.00 | 0.00
EFPY     | ---               | ---        | 0.00, 0.47 | 0.00 | 0.14
TEMP     | ---               | ---        | 0.00, 0.09 | 0.00 | 0.03
YS_LP    | ---               | ---        | 0.00, 0.53 | 0.00 | 0.16
E_LP     | ---               | ---        | 0.00, 0.35 | 0.00 | 0.10
E_RP     | ---               | ---        | 0.00, 0.00 | 0.00 | 0.00
QG       | ---               | ---        | 0.00, 0.84 | 0.00 | 0.25
P2V      | ---               | ---        | 0.00, 0.43 | 0.00 | 0.13
CHARWD   | ---               | ---        | 0.00, 0.00 | 0.00 | 0.00

This first comparison shows one of the limitations of the MARS method, which thus may be reconsidered as one of the techniques for comparison, especially when there are many on-off switches in the outputs of interest. Furthermore, it is believed that the different values for spatially varying parameters should be considered separately.
4.2 Comparison on Number of Axial Cracks based on 10,000 Realizations

Table 9 compares the different rankings obtained for the number of axial cracks having occurred after 60 years.
Table 9: Ranking results for number of axial cracks based on 10,000 realizations

Variable | SIA Importance Factor (%) | DEI Normalized Importance Factor | Emc2 Si | Emc2 (Ti-Si)
A_AC_01 68% 9% 22% 17%
A_Mult 30% 53% 40% 19%
WRSHP_01 2% 11% 1% 13%
TEMP 0% 5% 0% 4%
A_CC_01 0% 0% 0% 0%
INILC_01 0% 0% 0% 0%
FCOMP 0% 0% 0% 0%
FLAWA_01 0% 0% 0% 0%
E_LP 0% 2% 0% 0%
P2V 0% 0% 0% 0%
E_RP 0% 0% 0% 0%
CHARWD 0% 0% 0% 0%
WRS_AX_01 0% 0% 0% 0%
QG 0% 0% 0% 0%
INIDC_01 0% 0% 0% 0%
INILA_01 0% 0% 0% 0%
YS_LP 0% 1% 0% 0%
UTS_RP 0% 0% 0% 0%
UTS_LP 0% 1% 0% 0%
YS_RP 0% 0% 0% 0%
INIDA_01 0% 0% 0% 0%
THICK 0% 0% 0% 0%
As mentioned previously in the method description, the method used by SIA looks at the probability of an axial crack and not the number of cracks. As a result, the weight of the spatial component A_AC is emphasized. The degree of separation method using different threshold values was used to resolve the issue (the same applies for leak rate, which leads to similar results considering that the leak rate is proportional to the number of axial cracks occurring). As discussed in Section 4.1, the method used by DEI takes an average over all 19 potential values, so the importance of this component is diminished but remains in the top three. Emc2's method splits the importance amongst the different sampled values for the A parameter, as can be seen in Table 10. This table also shows that, with a more diverse output (the number of axial cracks can range from 0 to 19), MARS performs much better.
Table 10: Regression analysis results for number of axial cracks (all replicates)

Final R2: rank regression = 0.71; recursive partitioning = 0.95; MARS = 0.74

Input | Rank regression (R2 inc., R2 cont., SRRC) | Recursive partitioning (Si, Ti) | MARS (Si, Ti) | Main contribution | Conjoint contribution
A_MULT   | 0.67, 0.13, 0.29  | 0.58, 0.83 | 0.70, 0.90 | 0.40 | 0.19
A_AC_01  | 0.53, 0.53, 0.42  | 0.13, 0.26 | 0.00, 0.31 | 0.22 | 0.17
A_AC_02  | 0.69, 0.02, 0.12  | 0.01, 0.08 | 0.07, 0.57 | 0.03 | 0.22
WRSHP_01 | 0.70, 0.01, 0.07  | 0.00, 0.02 | 0.02, 0.33 | 0.01 | 0.13
A_AC_14  | ---               | 0.00, 0.02 | 0.00, 0.00 | 0.00 | 0.01
A_AC_03  | 0.70, 0.00, 0.05  | 0.00, 0.08 | 0.00, 0.10 | 0.00 | 0.07
TEMP     | 0.71, 0.00, 0.03  | ---        | 0.00, 0.12 | 0.00 | 0.04
A_AC_07  | ---               | 0.00, 0.01 | ---        | 0.00 | 0.00
A_AC_19  | ---               | ---        | 0.00, 0.38 | 0.00 | 0.14
A_AC_13  | 0.71, 0.00, -0.01 | ---        | 0.00, 0.12 | 0.00 | 0.04
A_AC_15  | 0.71, 0.00, -0.01 | ---        | 0.00, 0.32 | 0.00 | 0.12
A_AC_04  | 0.70, 0.00, 0.03  | 0.00, 0.11 | 0.00, 0.13 | 0.00 | 0.10
A_AC_12  | 0.71, 0.00, -0.02 | 0.00, 0.01 | 0.00, 0.32 | 0.00 | 0.13
A_AC_11  | 0.71, 0.00, -0.01 | ---        | 0.00, 0.32 | 0.00 | 0.12
A_AC_18  | 0.71, 0.00, -0.01 | ---        | 0.00, 0.30 | 0.00 | 0.11
A_AC_10  | 0.71, 0.00, -0.01 | ---        | 0.00, 0.00 | 0.00 | 0.00
INILA_07 | 0.71, 0.00, 0.01  | ---        | ---        | 0.00 | 0.00
A_AC_17  | 0.71, 0.00, -0.01 | ---        | ---        | 0.00 | 0.00
- highlighted in yellow if conjoint contribution is larger than 0.1

This second comparison shows one of the limitations of the reliability technique when applied to an output different from an on-off switch. It requires the addition of a weighting function, which may introduce some subjectivity.
4.3 Comparison on Probability of Leakage based on 10,000 Realizations

The results for the probability of leakage are consistent with the previous analysis. The rankings from the SIA and Emc2 methods are the same, while the DEI method switches the ranking of the A multiplier and the spatially varying A. As indicated in Section 4.1, it is believed that separate treatment of spatially varying inputs would eliminate this difference and would not change the conclusions from any of the approaches. For an indicator function with enough occurrences, all the approaches converge to the same result. It is worth noting that MARS is not as efficient and tends to find more spurious correlations, but they are minimized with the global approach used by Emc2.
Table 11: Ranking results for probability of leak based on 10,000 realizations

Variable | SIA Importance Factor (%) | DEI Normalized Importance Factor | Emc2 Si | Emc2 (Ti-Si)
A_AC_01 82% 18% 43% 22%
A_Mult 17% 34% 17% 23%
WRSHP_01 1% 9% 1% 5%
TEMP 0% 4% 0% 9%
FCOMP 0% 1% 0% 0%
INILA_01 0% 1% 0% 0%
A_CC_01 0% 0% 0% 0%
INILC_01 0% 0% 0% 0%
FLAWA_01 0% 1% 0% 0%
E_LP 0% 0% 0% 0%
QG 0% 0% 0% 0%
THICK 0% 0% 0% 0%
INIDC_01 0% 0% 0% 0%
INIDA_01 0% 0% 0% 0%
YS_LP 0% 1% 0% 0%
CHARWD 0% 1% 0% 0%
UTS_RP 0% 0% 0% 0%
WRS_AX_01 0% 1% 0% 0%
UTS_LP 0% 1% 0% 0%
SIGY_RP 0% 0% 0% 0%
E_RP 0% 1% 0% 0%
P2V 0% 1% 0% 0%
4.4 Comparison on Leak Rate based on 10,000 Realizations

While the three methods identify the same important parameters, the rankings are different. As expected, SIA's method tends to overemphasize the role of the spatially varying component of A when applied to non-switch-type outputs, whereas DEI's method tends to underemphasize its importance due to the averaging technique. Emc2's method misses the role of WRS, which is mostly found by the DEI method and the rank regression method. However, there is agreement on the two most important variables.
Table 12: Ranking results for leak rate based on 10,000 realizations

Variable | SIA Importance Factor (%) | DEI Normalized Importance Factor | Emc2 Si | Emc2 (Ti-Si)
A_AC_01 68% 9% 19% 31%
A_Mult 29% 45% 19% 54%
WRSHP_01 2% 9% 0% 0%
TEMP 0% 4% 0% 1%
FCOMP 0% 1% 0% 0%
A_CC_01 0% 0% 0% 0%
E_LP 0% 2% 0% 0%
INILC_01 0% 0% 0% 0%
E_RP 0% 0% 0% 0%
FLAWA_01 0% 0% 0% 0%
P2V 0% 1% 0% 0%
YS_LP 0% 5% 0% 0%
CHARWD 0% 0% 0% 0%
INIDA_01 0% 0% 0% 0%
UTS_RP 0% 0% 0% 0%
UTS_LP 0% 0% 0% 0%
QG 0% 1% 0% 0%
INILA_01 0% 0% 0% 0%
THICK 0% 0% 0% 0%
WRS_AX_01 0% 0% 0% 0%
SIGY_RP 0% 0% 0% 0%
INIDC_01 0% 0% 0% 0%
Table 13 lists the estimated probabilities for the two selected indicators (occurrence of circumferential cracks and of leakage over 60 years). The results are as expected, with a larger coefficient of variation for the rarer event.
Table 13: Expected values of selected indicators for each replicate run

Rep. No. | Estimated Probability of 1st Circumferential Crack Occurrence | Estimated Probability of 1st Leakage
1   | 2/1000 | 248/1000
2   | 0/1000 | 257/1000
3   | 1/1000 | 255/1000
4   | 1/1000 | 258/1000
5   | 1/1000 | 223/1000
6   | 3/1000 | 251/1000
7   | 2/1000 | 243/1000
8   | 1/1000 | 248/1000
9   | 3/1000 | 249/1000
10  | 1/1000 | 256/1000
AVG | 1.5 x 10^-3 | 2.488 x 10^-1
COV | 0.615 | 0.039

The regression analysis requires at least a few occurrences; an analysis performed on an output with a single occurrence out of 1,000 would be neither trusted nor recommended, especially when using about 150 inputs, as the likelihood of spurious correlations would be too high. As a result, three analyses were performed: one using replicate number 1, a second using replicate number 6, and a third using all the replicates together (equivalent to a run of size 10,000).

5.0 Regression Results for Test Case 2 (Replicate No. 1)
Replicate number 1 used the random seed 292 for the epistemic loop and 542 for the aleatory loop. For this analysis, the probability of circumferential crack initiation at 60 years is compared with all techniques.
5.1 First Circumferential Crack Occurrence over 60 Years

As observed for Test Case 1, most of the influence is associated with A_Mult in the DEI analysis because the influence of the spatially varying inputs is averaged over 19 values. The results from Emc2 are not as good for this case due to erratic behavior of the MARS method. Simple linear regression and recursive partitioning were, however, consistent with the results from the other techniques.
Table 14: Ranking results for probability of circumferential crack (rep. E292_A542)

Probability of Circumferential Crack Initiation (P_CC)
Variable | SIA Importance Factor (%) | DEI Normalized Importance Factor | Emc2 Si | Emc2 (Ti-Si)
A_CC_01 47% 0% 14% 50%
A_Mult 22% 53% 0% 0%
WRS_AX_01 20% 24% 0% 42%
TEMP 5% 0% 0% 0%
E_LP 2% 0% 0% 0%
CHARWD 1% 0% 0% 0%
THICK 1% 0% 0% 0%
P2V 1% 0% 0% 0%
YS_LP 0% 0% 0% 41%
QG 0% 0% 0% 0%
UTS_LP 0% 0% 0% 0%
E_RP 0% 0% 0% 0%
SIGY_RP 0% 0% 0% 0%
UTS_RP 0% 0% 0% 0%
5.2 Number of Axial Cracks Occurring over 60 Years

The results for the number of axial cracks presented in Table 15 are once again consistent with those from Test Case 1 (Section 4.2). The main difference between the DEI and Emc2 results comes from treating the spatially varying A values individually versus comparing them as a group. In Table 16 one can see the importance of each sampled value on the number of cracks, which can also be observed from the scatterplots (Figure 8). The value of A1 influences the chance of having at least one crack, the value of A2 the chance of having at least two cracks, and so on.
Table 15: Regression analysis results for number of axial cracks (rep. E292_A542)

Number of Axial Cracks
Variable | SIA Importance Factor (%) | DEI Normalized Importance Factor | Emc2 Si | Emc2 (Ti-Si)
A_MULT - 28% 17% 60%
A_AC_xx - 7% 25% 47%
WRSHP_xx - 5% 0% 0%
SURFRUL - 3% 0% 0%
UTS_W - 3% 0% 0%
E_LP - 2% 0% 0%
TEMP - 2% 0% 32%
Table 16: Detailed results of Emc2 analysis for number of axial cracks (rep. E292_A542)

Final R2: rank regression = 0.71; recursive partitioning = 0.94; MARS = 0.89

Input | Rank regression (R2 inc., R2 cont., SRRC) | Recursive partitioning (Si, Ti) | MARS (Si, Ti) | Main contribution | Conjoint contribution
A_AC_01 | 0.52, 0.52, 0.41  | 0.17, 0.27 | 0.00, 0.75 | 0.23 | 0.38
A_MULT  | 0.66, 0.14, 0.29  | 0.36, 0.73 | 0.05, 1.00 | 0.17 | 0.60
A_AC_02 | 0.69, 0.03, 0.12  | 0.03, 0.21 | 0.00, 0.61 | 0.02 | 0.35
A_AC_03 | 0.69, 0.01, 0.06  | 0.00, 0.05 | 0.00, 1.00 | 0.00 | 0.47
A_AC_04 | 0.70, 0.00, 0.03  | 0.01, 0.23 | 0.00, 0.00 | 0.00 | 0.10
TEMP    | 0.70, 0.00, 0.04  | ---        | 0.00, 0.72 | 0.00 | 0.32
A_AC_07 | 0.70, 0.00, -0.02 | 0.00, 0.02 | 0.00, 0.61 | 0.00 | 0.28
Figure 8: Scatterplots of number of axial cracks occurring over 60 years as a function of sampled values for the spatially varying component of A

5.3 Probability of Leakage at 60 Years

The results are consistent with what was observed for Test Case 1 (Section 4.3). This was expected because the number of realizations with leakage is large enough with a 1,000-sample run to lead to stable estimates and regressions.
Table 17: Regression analysis results for probability of leakage (rep. E292_A542)

Probability of Leak
Variable | SIA Importance Factor (%) | DEI Normalized Importance Factor | Emc2 Si | Emc2 (Ti-Si)
A_MULT - 18% 14% 49%
A_AC_xx - 12% 32% 47%
WRSHP_xx - 4% 0% 1%
TEMP - 3% 0% 1%
QG - 3% 0% 0%
beta0_C - 3% 0% 0%
J1C_W - 3% 0% 0%
INIDA_xx - 3% 0% 0%
SURFRUL - 2% 0% 0%
5.4 Leak Rate over 60 Years

The leak rate regression results are close to those for the number of axial cracks. The scatterplot of leak rate versus the number of axial cracks shown in Figure 9 explains why the results of the two regressions are close.
Figure 9: Scatterplot showing leak rate over 60 years (y-axis) as a function of the number of axial cracks at 60 years (x-axis)
Table 18: Regression analysis results for leak rate (rep. E292_A542)

Leak Rate
Variable | SIA Importance Factor (%) | DEI Normalized Importance Factor | Emc2 Si | Emc2 (Ti-Si)
A_MULT - 28% 41% 19%
A_AC_xx - 7% 25% 10%
WRSHP_xx - 5% 1% 3%
SURFRUL - 3% 0% 0%
beta0_A - 3% 0% 0%
JR_C_LP - 3% 0% 0%
TEMP - 2% 0% 1%
E_LP - 2% 0% 0%
YS_W - 2% 0% 0%
6.0 Regression Results for Test Case 3 (Replicate No. 6)
This regression is applied to replicate number 6, which used 292 as the epistemic random seed and 713705 as the aleatory random seed. These results compare the SIA-, DEI-, and Emc2-chosen regression methods. The observations are similar to those reported for Test Case 1 (Section 4.0) and Test Case 2 (Section 5.0).
6.1 First Circumferential Crack Occurrence over 60 Years

Table 19: Regression analysis results for probability of circumferential crack (rep. E292_A713705)

Probability of Circumferential Crack Initiation (P_CC)
Variable | SIA Importance Factor (%) | DEI Normalized Importance Factor | Emc2 Si | Emc2 (Ti-Si)
A_CC_01 55% 12% 18% 50%
WRS_AX_01 17% 12% 0% 37%
A_Mult 16% 11% 0% 1%
QG 4% 0% 0% 0%
CHARWD 3% 0% 0% 0%
P2V 2% 0% 0% 0%
E_LP 1% 0% 0% 0%
TEMP 1% 0% 0% 0%
THICK 1% 0% 0% 0%
E_RP 1% 0% 0% 0%
YS_LP 0% 0% 0% 0%
SIGY_RP 0% 0% 0% 0%
UTS_LP 0% 0% 0% 0%
UTS_RP 0% 0% 0% 0%
6.2 Number of Axial Cracks Occurring over 60 Years

Table 20: Regression analysis results for number of axial cracks (rep. E292_A713705)

Number of Axial Cracks
Variable | SIA Importance Factor (%) | DEI Normalized Importance Factor | Emc2 Si | Emc2 (Ti-Si)
A_AC_01 67% 7% 21% 37%
A_Mult 30% 27% 24% 55%
WRSHP_01 3% 6% 1% 9%
INILA_01 0% 0% 0% 0%
WRS_AX_01 0% 0% 0% 0%
INIDC_01 0% 0% 0% 0%
FLAWA_01 0% 0% 0% 0%
E_LP 0% 1% 0% 0%
TEMP 0% 3% 0% 0%
INILC_01 0% 0% 0% 0%
CHARWD 0% 0% 0% 0%
P2V 0% 0% 0% 0%
E_RP 0% 0% 0% 0%
A_CC_01 0% 0% 0% 0%
INIDA_01 0% 1% 0% 0%
QG 0% 0% 0% 0%
YS_LP 0% 1% 0% 0%
THICK 0% 1% 0% 0%
FCOMP 0% 0% 0% 0%
UTS_RP 0% 1% 0% 0%
UTS_LP 0% 0% 0% 0%
YS_RP 0% 1% 0% 0%
6.3 Probability of Leakage at 60 Years

Table 21: Regression analysis results for probability of leakage (rep. E292_A713705)

Probability of Leak
Variable | SIA Importance Factor (%) | DEI Normalized Importance Factor | Emc2 Si | Emc2 (Ti-Si)
A_AC_01 81% 8% 30% 50%
A_Mult 17% 19% 14% 44%
WRSHP_01 1% 4% 0% 1%
INILA_01 0% 0% 0% 24%
TEMP 0% 4% 0% 17%
INIDC_01 0% 0% 0% 0%
WRS_AX_01 0% 0% 0% 0%
QG 0% 1% 0% 0%
FLAWA_01 0% 1% 0% 0%
THICK 0% 0% 0% 0%
INILC_01 0% 0% 0% 25%
A_CC_01 0% 0% 0% 0%
FCOMP 0% 0% 0% 0%
INIDA_01 0% 2% 0% 0%
E_LP 0% 1% 0% 0%
CHARWD 0% 0% 0% 0%
E_RP 0% 1% 0% 0%
YS_LP 0% 1% 0% 0%
P2V 0% 0% 0% 0%
UTS_LP 0% 1% 0% 0%
UTS_RP 0% 0% 0% 0%
SIGY_RP 0% 0% 0% 0%
6.4 Leak Rate over 60 Years

Table 22: Regression analysis results for leak rate (rep. E292_A713705)

Leak Rate
Variable | SIA Importance Factor (%) | DEI Normalized Importance Factor | Emc2 Si | Emc2 (Ti-Si)
A_AC_01 67% 5% 19% 40%
A_Mult 29% 32% 27% 47%
WRSHP_01 3% 6% 0% 11%
INILA_01 0% 1% 0% 10%
WRS_AX_01 0% 2% 0% 0%
INIDC_01 0% 0% 0% 0%
TEMP 0% 3% 0% 18%
CHARWD 0% 1% 0% 0%
FLAWA_01 0% 0% 0% 0%
FCOMP 0% 1% 0% 0%
P2V 0% 0% 0% 0%
A_CC_01 0% 0% 0% 0%
QG 0% 1% 0% 0%
E_RP 0% 0% 0% 0%
E_LP 0% 1% 0% 0%
INILC_01 0% 0% 0% 0%
INIDA_01 0% 1% 0% 0%
THICK 0% 0% 0% 0%
YS_LP 0% 2% 0% 0%
UTS_RP 0% 1% 0% 0%
UTS_LP 0% 0% 0% 0%
YS_RP 0% 1% 0% 0%
7.0 Conclusions

From this study, Emc2 made the following observations:
- Even with a very small number of occurrences, linear or rank regression and recursive partitioning can identify the most important parameters, which can then be considered for importance sampling.

- MARS works better when the output is more continuous. Its results are then more consistent with those from recursive partitioning.

- Sometimes, linear regression may work better than rank regression.

- The regression methods reach satisfactory levels of stability when the number of events of interest is more than 10.
- Replicate analysis helps to confirm that the most important inputs have been correctly identified.

- When many input variables are strongly correlated, such as the WRS inputs, considering only one representative value leads to a better analysis.

- While it is recommended to reduce the number of inputs to those which really have an impact, to avoid spurious correlations (especially when few events occur), the regression analyses still identify the most important variables when the input set includes a reasonably large number of non-influential variables (e.g., more than 130 in the case of the probability of circumferential crack and the number of axial cracks).

- As observed in the past, when the analysis is performed using an initiation model with a probability of crack initiation lower than 50%, the main mechanism is crack initiation, and the most important uncertain parameters are the ones associated with this model for all the outputs considered.
With respect to the comparisons of the various methods:
- All approaches are adequate for finding the most important factors for different types of outputs. Each method has its associated strengths and weaknesses.

- DEI and Emc2 used similar methods, and the differences are mostly due to: (a) the averaging of spatially varying inputs in DEI's methods, and (b) the use of MARS in Emc2's method. While the results differ somewhat, the conclusions remain the same from both approaches.

- MARS reduced the quality of the general regression for Emc2 when dealing with switch-type outputs.

- SIA's approach differed but still found the most important parameters (as long as the monotonic assumption is preserved and the output can be expressed as a reliability measure).

To improve the Emc2 method, it is recommended to implement the SIA method for switch-type outputs in lieu of the MARS technique, but to still use MARS for other outputs.
BIBLIOGRAPHY
[1] M. Homiack and M. Burkardt, "xLPR version 2.0 Technical Basis Document - Input Group Report," US NRC & EPRI, 2017.
[2] A. E. Hami and B. Radi, Uncertainty and Optimization in Structural Mechanics, John Wiley and Sons, 2013.
[3] D. M. Hamby, "A Review of Techniques for Parameter Sensitivity Analysis of Environmental Models," Environmental Monitoring and Assessment, vol. 32, pp. 135-154, 1994.
[4] A. H.-S. Ang and W. H. Tang, Probability Concepts in Engineering Planning and Design -
Volume II: Decision, Risk and Reliability, John Wiley and Sons, 1984.
[5] A. Saltelli and T. Homma, "Importance measures in global sensitivity analysis of nonlinear models," Reliability Engineering and System Safety, vol. 52, pp. 1-17, 1996.
[6] I. Sobol, "Sensitivity estimates for nonlinear mathematical models," Mathematical Modeling &
Computational Experiment, vol. 1, pp. 407-414, 1993.