ML18121A245

Draft SE, Benchmarks for Qualifying Fuel Reactivity Depletion Uncertainty, Topical Reports 3002010613 and 3002010614
Person / Time
Site: Nuclear Energy Institute, 99902028
Issue date: 05/31/2018
From: Brian Benney
NRC/NRR/DLP/PLPB
To: McCullum R
Nuclear Energy Institute
Benney B
Shared Package
ML18121A243 List:
References
3002010613, 3002010614



DRAFT SAFETY EVALUATION BY THE OFFICE OF NUCLEAR REACTOR REGULATION
TOPICAL REPORT 3002010613, "BENCHMARKS FOR QUALIFYING FUEL REACTIVITY DEPLETION UNCERTAINTY," REVISION 1, AND TOPICAL REPORT 3002010614, "UTILIZATION OF THE EPRI DEPLETION BENCHMARKS FOR BURNUP CREDIT VALIDATION," REVISION 1
PROJECT NO. 689/DOCKET NO. 99902028


1.0 INTRODUCTION

In a letter dated January 3, 2013 (McCullum, 2013), the director of Used Fuel Programs at the Nuclear Energy Institute (NEI) requested an exemption from U.S. Nuclear Regulatory Commission (NRC) fees to review NEI 12-16, "Guidance for Performing Criticality Analyses of Fuel Storage at Light-Water Reactor Power Plants," Revision 0 (NEI, 2013a). The letter states:

NEI 12-16 provides guidance for performing criticality analyses at light water reactor power plants in accordance with 10 CFR [Title 10 of the Code of Federal Regulations] 50.68 and 10 CFR Part 50, Appendix A, GDC 62. As a means to achieve regulatory efficiency and effectiveness, we recommend that the NRC review NEI 12-16 for potential endorsement through a Regulatory Guide. This proposal would fulfill the need for more durable guidance identified in the NRC/NRR Action Plan, "On Site Spent Fuel Criticality Analyses," as updated May 19, 2012.

It continues, stating:

The purpose of submitting NEI 12-16 is to assist the NRC in completing the process of updating and stabilizing the regulatory framework governing spent fuel pool criticality analyses through publication of the planned regulatory guide, which meets the requirements for an exemption of fees in 10 CFR 170.11(a)(1)(ii). NEI 12-16 is a guidance document, it is not a topical report, and we believe that the NRC would be the primary beneficiary of its review. This request also meets the requirements of 10 CFR 170.11(a)(1)(iii),

which permits an exemption from fees for exchanging information between industry and the NRC for the specific purpose of supporting NRC's ongoing generic regulatory improvements and development of a Regulatory Guide. In order to facilitate a full review of NEI 12-16, we request that the exemption cover the review of the pre-submittal draft as well as the review of guidance submitted in March 2013 leading to NRC endorsement through a regulatory guide.

The NRC reviewed the request for fee exemption and found that the appropriate requirements for exemption were met (Dyer, 2013).

NEI 12-16, Revision 2 (NEI, 2013b), contains references supporting its various subsections. Subsection 4.2.3, "PWR Depletion Bias and Uncertainty," references two reports created by the Electric Power Research Institute (EPRI) detailing methods for validating

pressurized water reactor (PWR) criticality calculations that credit depleted fuel in spent fuel pool (SFP) storage configurations. One report, "Benchmarks for Quantifying Fuel Reactivity Depletion Uncertainty" (referred to as the EPRI benchmark report for the remainder of this document), details the use of flux map data to infer the uncertainty associated with depletion reactivity calculations using Studsvik Scandpower's CASMO-5 and SIMULATE-3 reactor analysis tools (Smith et al., 2011; Smith et al., 2017). The other report, "Utilization of the EPRI Depletion Benchmarks for Burnup Credit Validation" (referred to as the EPRI utilization report for the remainder of this document), relates to the benchmark report by providing eleven calculational PWR depletion benchmarks allowing for determination of an application-specific depletion reactivity bias adjustment (Lancaster, 2012; Akkurt and Cummings, 2017).


2.0 REGULATORY EVALUATION

As stated in the introduction, criticality safety analyses (CSAs) pertaining to light-water reactor (LWR) power plant SFP storage must meet the applicable regulatory requirements in 10 CFR 50.68 and 10 CFR Part 50, Appendix A, General Design Criterion 62. The regulation at 10 CFR 50.68(b)(4) states:

If no credit for soluble boron is taken, the k-effective of the spent fuel storage racks loaded with fuel of the maximum fuel assembly reactivity must not exceed 0.95, at a 95 percent probability, 95 percent confidence level, if flooded with unborated water. If credit is taken for soluble boron, the k-effective of the spent fuel storage racks loaded with fuel of the maximum fuel assembly reactivity must not exceed 0.95, at a 95 percent probability, 95 percent confidence level, if flooded with borated water, and the k-effective must remain below 1.0 (subcritical), at a 95 percent probability, 95 percent confidence level, if flooded with unborated water.

In order for NRC licensees to fulfill the 10 CFR 50.68(b)(4) requirement, uncertainty evaluations must be performed so that the NRC staff can come to a reasonable assurance determination with respect to satisfying the k-effective acceptance criteria at a 95 percent probability, 95 percent confidence level. One component of a CSA uncertainty evaluation is the depletion uncertainty, which pertains to the ability of a set of calculational tools (i.e., depletion and criticality computer codes) to accurately characterize the reactivity associated with depleted fuel in an SFP storage environment. The following section documents the NRC staff's technical evaluation of EPRI's methodology for determining application-specific depletion code calculational bias and uncertainty as contained in the EPRI benchmark and utilization reports, which are referenced by the NEI 12-16 LWR SFP criticality safety guidance document.
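In practice, this kind of demonstration is often structured as a maximum k-effective calculation in which biases are added directly and independent uncertainty terms are combined by root-sum-square. A minimal sketch of that arithmetic, assuming each uncertainty term is independent and already expressed at the 95/95 level (the function name and numerical values are illustrative, not taken from the EPRI reports):

```python
import math

def max_k_effective(k_calc, biases, uncertainties_95_95):
    """Add biases directly; combine independent 95/95 uncertainty
    terms (e.g., depletion, validation, tolerances) by root-sum-square."""
    rss = math.sqrt(sum(u ** 2 for u in uncertainties_95_95))
    return k_calc + sum(biases) + rss

# Illustrative numbers only: depletion uncertainty is one term among several
k_max = max_k_effective(k_calc=0.9350, biases=[0.0020],
                        uncertainties_95_95=[0.0060, 0.0045, 0.0030])
assert k_max < 0.95  # borated-water limit of 10 CFR 50.68(b)(4)
```

The root-sum-square combination is only valid when the terms are statistically independent; correlated terms are typically summed directly instead.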


3.0 TECHNICAL EVALUATION

Summary of Technical Information Provided by EPRI

Defining the Scope of Review

Initial evaluation of the two EPRI reports resulted in request for additional information (RAI) questions that were issued on September 22, 2014 (Holonich, 2014). Responses to the RAI questions, which supplement the original EPRI reports, were received on March 2, 2015 (Cummings, 2015a). Minor corrections to these responses were made and submitted to the NRC on May 19, 2015 (Cummings, 2015b). Follow-up RAI questions were also issued, and responses were submitted to the NRC on April 13, 2016 (Cummings, 2016a). On August 25, 2016, the NEI submitted a white paper as a counterpoint to the NRC staff's concern regarding a non-conservative regression fit uncertainty treatment discussed in detail in the Regression Fit

and Associated Uncertainty section below (Cummings, 2016b). Finally, on January 9, 2017, NEI submitted a supplementary response to address the open item related to the April 13, 2016, follow-up RAI 1 response (Cummings, 2017). Revised EPRI reports were also submitted incorporating all methodological changes resulting from RAI questions and public meeting discussions (Smith et al., 2017; Akkurt and Cummings, 2017).

Definition of Depletion Uncertainty

Previous NRC staff guidance regarding the treatment of depletion uncertainty, discussed in DSS-ISG-2010-01, the current guidance document (NRC, 2010), states:

Depletion Analysis: NCS analysis for [spent nuclear fuel] for both boiling-water reactors (BWRs) and pressurized-water reactors (PWRs) typically includes a portion that simulates the use of fuel in a reactor. These depletion simulations are used to create the isotopic number densities used in the criticality analysis.

a. Depletion Uncertainty: The Kopp memorandum (Reference 2) states the following:

A reactivity uncertainty due to uncertainty in the fuel depletion calculations should be developed and combined with other calculational uncertainties. In the absence of any other determination of the depletion uncertainty, an uncertainty equal to 5 percent of the reactivity decrement to the burnup of interest is an acceptable assumption.

The staff should use the Kopp memorandum as follows:

i. Depletion uncertainty as cited in the Kopp memorandum should only be construed as covering the uncertainty in the isotopic number densities generated during the depletion simulations.

ii. The reactivity decrement should be the decrement associated with the k-effective of a fresh unburned fuel assembly that has no integral burnable neutron absorbers, to the k-effective of the fuel assembly with the burnup of interest either with or without residual integral burnable neutron absorbers, whichever results in the larger reactivity decrement.
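The Kopp default quoted above amounts to a one-line calculation; a minimal sketch, assuming the two k-effective values are already available (the function name is illustrative):

```python
def kopp_depletion_uncertainty(k_fresh_no_ba, k_burned):
    """Kopp memorandum default: the depletion uncertainty equals
    5 percent of the reactivity decrement, taken from a fresh, unburned
    assembly with no integral burnable absorbers to the assembly at the
    burnup of interest."""
    decrement = k_fresh_no_ba - k_burned
    return 0.05 * decrement

# Example: a 0.15 delta-k decrement yields a 0.0075 delta-k uncertainty
```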

In DSS-ISG-2010-01, it states that the NRC staff should interpret depletion uncertainty as the uncertainty in the isotopic number densities generated during the depletion simulations. The uncertainty in the isotopic number densities can arise from uncertainty associated with the depletion code (i.e., based on chosen models and methods) and the underlying nuclear data used by the depletion code - this also includes how the nuclear data is implemented by the depletion code. Both of these uncertainty components can have a significant impact on the isotopic number densities output by the depletion code. The EPRI depletion benchmarks

attempt to quantify this depletion uncertainty in terms of uncertainty in the reactivity worth of depleted fuel.[1]

The EPRI approach directly determines the uncertainty in terms of reactivity rather than uncertainty in the number densities of individual isotopes. However, this is a challenging approach because justification for extrapolating from the hot reactor environment to a cold SFP environment must be given. The EPRI approach, rather than trying to quantify the reactivity effect associated with number density uncertainties of individual isotopes, uses isotopic number densities output by a benchmarked depletion code[2] directly in a series of criticality calculations. These criticality calculations serve as calculational benchmarks that cover a range of depletion conditions consistent with the benchmarked depletion code. A set of reference reactivity decrements are determined from these benchmarks and form a basis for subsequent comparison with the results of other criticality codes that are to be used for SFP CSA applications incorporating isotopic number densities from any acceptable depletion code.

Part of the NRC staff's assessment determines whether: (1) the reference reactivity decrements are derived from calculational benchmarks that are sufficient in number and scope to generally validate the simulation of depleted PWR fuel in the SFP environment using an acceptable criticality code incorporating isotopic number densities from an acceptable depletion code, and (2) the process described in the EPRI benchmark and utilization reports is sufficient for NRC licensees to perform application-specific depletion uncertainty analyses.

To ensure that appropriate biases and uncertainties associated with reference reactivity decrements are accounted for, EPRI-chosen depletion codes (i.e., those used to define the calculational benchmarks) were benchmarked to measured in-core flux map data. However, this requires the following conditions to be met, which are also assessed by the NRC staff in this report:

1. The reactivity decrement uncertainty inferencing process is appropriate.

2. All significant contributors to the reactivity decrement uncertainty are identified and appropriately accounted for.

3. The depletion uncertainty as determined in the reactor environment appropriately translates to the SFP environment.[3]

[1] The reactivity of depleted fuel is also referred to as the reactivity decrement, which, by the most general definition, is the difference in k-effective between a depleted fuel state of interest and the fresh fuel state.

[2] In the context of this discussion, a benchmarked depletion code means that the code has a bias and uncertainty associated with it in terms of reactivity over a range of fuel burnup. The benchmarking by inference process is discussed in the following section.

[3] This is important since depletion uncertainty is being quantified in terms of reactivity rather than isotopic number density variation. While isotopic number density doesn't change between environments, this is not necessarily true of reactivity.

While the EPRI approach characterizes depletion uncertainty in a new way, the NRC staff believes it is still consistent with the existing definitions from the Kopp memorandum and DSS-ISG-2010-01.

Benchmarks for Quantifying Fuel Reactivity Depletion Uncertainty

The introduction to Section 1, "Executive Summary," of the EPRI benchmark report explains that:

This report provides experimental quantification of PWR fuel reactivity burnup decrement biases and uncertainties obtained through extensive analysis of in-core flux map data from operating power reactors. Analytical methods, described in this report, are used to systematically determine experimental fuel sub-batch reactivities that best match measured reaction rate distributions and to evaluate biases and uncertainties of computed lattice physics fuel reactivities.

Regarding the reactivity decrement error inferencing methodology, Section 1.1, "Analytical Methods," of the EPRI benchmark report explains that:

Forty-four cycles of flux map data from Duke Energy's Catawba (Units 1 and 2) and McGuire (Units 1 and 2) plants have been analyzed with Studsvik Scandpower's CASMO [(Rhodes, Smith, and Lee, 2006)] and SIMULATE-3 [(DiGiovine and Rhodes III, 2005)] reactor analysis codes. By systematically searching for fuel sub-batch reactivities that best match measured reaction rate distributions, biases and uncertainties of computed CASMO reactivity decrements are experimentally determined. These analyses employ more than 8 million SIMULATE-3 nodal core calculations to extract approximately 3000 measured sub-batch reactivities from flux map data. The individual estimates of the reactivity decrement bias (measured minus calculated reactivity decrement) form a large data set...as a function of sub-batch burnup....

The NRC staff's review of the EPRI benchmark report covered:

1. The process by which errors are derived,

2. How the benchmark work is generically applicable to PWR operation,

3. How the errors derived in the reactor environment translate to the SFP environment,

4. How the sensitivity/uncertainty analysis was performed.[4]

[4] The NRC staff is concerned with both the reactor and SFP environment analyses; however, reactor environment analyses (e.g., the effects of reactor operational characteristics, fuel design, sub-batch grouping, data filtering, statistical process, optimization algorithm, etc.) are the subject of the EPRI benchmark report, and are therefore of interest in this section.

Reactivity Decrement Error Determination

Deduction Process

Section 6, "Details of Analysis Implementation," describes the algorithm used to deduce reactivity decrement errors as a function of average sub-batch[5] burnup. The basic premise is to globally search for the minimum deviation[6] in sub-batch reactivity between the calculated reactivity and that inferred from the in-core flux map measurements.[7] The basic steps are as follows:

Perform the best-estimate simulation.

1. Determine sub-batch k-effective and flux shapes.

2. Change burnup to get simulation flux shapes to match simulated flux map measurements as closely as possible.

3. When the best flux shape matches are found for all sub-batches simultaneously, calculate perturbed sub-batch k-effective.

4. Calculate change in sub-batch k-effective at the best-estimate calculated average sub-batch burnup.[8]

A search occurs for each unique sub-batch corresponding to a unique point in time and includes the effects of all other sub-batch searches so that a local minimum deviation between the inferred measurement of sub-batch reactivity and the calculated sub-batch reactivity can be found across all sub-batches simultaneously. That is, more accurate search results are obtained for an individual sub-batch by accounting for the influence of all other sub-batch burnup changes. Within a search, each sub-batch nodal burnup is iteratively multiplied by a

[5] A sub-batch is a group of fuel assemblies that share similar characteristics. As defined by EPRI, this is assembly type, enrichment, burnable absorber (BA) configuration, and burnup batch.

[6] The minimum deviation is defined by EPRI as the root-mean-square (RMS) deviation between measured and computed detector signals for each fuel sub-batch in the reactor core. EPRI explains in the benchmark report RAI 9 response that the RMS cannot be driven to zero because of: (1) flux map measurement uncertainty on the order of 0.5 percent and (2) computer code biases and uncertainties aside from those which are caused by fuel depletion. EPRI further explains that errors in core reactivity that are independent of depletion will be addressed as a separate item (normally in the comparisons to cold critical benchmarks) by applicants on a case-specific basis. In other words, SIMULATE-3 is used as a measurement tool through a relative differencing process to infer only reactivity decrement errors resulting from CASMO; therefore, quantification of SIMULATE-3 biases and uncertainties is not necessary.

[7] The difference in k-effective between the unadjusted calculation and the burnup-adjusted calculation is the reactivity decrement error described by EPRI.

[8] This k-effective change represents a means to estimate all sources of depletion code uncertainty - for example, uncertainty introduced by nuclear data, manufacturing tolerances, thermal hydraulic conditions, etc. - as long as measurement uncertainties are properly accounted for or shown to be insignificant.

range of burnup multipliers. The multipliers are appropriately spaced to achieve a resolution fine enough to accurately capture the minimum deviation, as seen in Figure 3-2, "Change in r.m.s. Fission Rate Error vs. Sub-batch Multiplier," of the EPRI benchmark report.
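The multiplier scan for a single sub-batch can be sketched as follows. Here `compute_signals` and `k_effective` are stand-in callables for the nodal-code evaluations (hypothetical interfaces for illustration, not EPRI or Studsvik tooling):

```python
import numpy as np

def inferred_decrement_error(multipliers, measured, compute_signals, k_effective):
    """Scan sub-batch burnup multipliers, keep the one that minimizes the
    RMS deviation between measured and computed detector signals, and
    report the resulting change in sub-batch k-effective versus the
    unadjusted (multiplier = 1.0) case, i.e., the inferred reactivity
    decrement error."""
    rms = [float(np.sqrt(np.mean((measured - compute_signals(m)) ** 2)))
           for m in multipliers]
    best = multipliers[int(np.argmin(rms))]
    return k_effective(best) - k_effective(1.0)
```

In the actual method this minimization is carried out for all sub-batches simultaneously, so each sub-batch's result reflects the burnup changes applied to every other sub-batch; the sketch above shows only the single-sub-batch kernel of that search.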

The global search is performed both on a two-dimensional (2D) axially-integrated-radial basis and a three-dimensional (3D) nodal basis, and it was found that the inferred reactivity decrement is largely insensitive to the global search type. Since the 2D method puts less emphasis on the axial ends versus the 3D method and both are in relative agreement, this indicates that the global search process does not significantly depend on the observed higher calculated-to-measured (C/M) flux differences at the nodes corresponding to the fuel assembly axial ends. The observed higher C/M flux differences at the nodes corresponding to the fuel assembly axial ends are indicative of inaccuracies in SIMULATE-3 models rather than CASMO-5 models since SIMULATE-3 model accuracy begins to degrade closer to the model boundaries where there is increased reliance on simplified reflector models (see the response to RAI 3.a.ii. for additional discussion). If CASMO-5 models were inaccurate, the high C/M values observed at the SIMULATE-3 model axial ends would still persist closer to the axial center of SIMULATE-3 models. However, this is not the case. Therefore, it is concluded that reactivity decrement error data sufficiently represents the accuracy of CASMO-5 models as realized through SIMULATE-3 simulations.[9]

As further explained in EPRI benchmark report Section 3.4, "Flux Map Perturbation Calculations":

The reason for choosing a sub-batch burnup multiplier is that if there are errors in reactivity predictions of the lattice depletion code, the errors would be seen by all assemblies in the sub-batch. For example, if fission rates predicted in all assemblies of a sub-batch were either consistently low or consistently high, this would be a strong indication of lattice code depletion errors (e.g., nuclide concentration errors, cross-section data errors, resonance modeling approximations, approximations in solving neutron transport equations, approximations in solving the nuclide depletion equations, approximations in modeling of boron history, etc.) The data often shows, however, that fission rate differences vary in both sign and magnitude within a sub-batch. This indicates that most of the differences in fission rates are due to factors not directly related to errors in [lattice depletion code] reactivity predictions with burnup.

The relationship shown in Figure 7-1, "CASMO-5 Bias in Reactivity," of the EPRI benchmark report quantifies the difference in sub-batch k-effectives between the initial unperturbed states and final minimized-local-error states by minimizing the RMS deviation between measured and computed detector signals corresponding to approximately 3000 flux maps.[10] In this figure, each data point represents a unique sub-batch corresponding to a single flux map - this

[9] CASMO-5 is the code being validated as measured via SIMULATE-3. It should be noted that CASMO-5 is the reference depletion code used to define a subsequent set of 11 calculational benchmarks which are to be used to determine application-specific depletion code biases - this is discussed further in subsequent sections of this report.

[10] Forty-four fuel cycles from 4 PWRs with 12-18 flux maps per cycle and 5-12 sub-batches per cycle.

reactivity decrement error relationship to average sub-batch burnup forms the basis for the final reported reactivity decrement error bias and bias uncertainty.
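For illustration only, a burnup-dependent bias trend and its scatter can be extracted from such a point cloud with an ordinary least-squares fit; the report's actual regression and its uncertainty treatment, discussed later in this evaluation, are more involved than this sketch:

```python
import numpy as np

def decrement_bias_trend(burnups_gwd, errors_pcm):
    """Fit a linear trend through measured-minus-calculated reactivity
    decrement errors (pcm) versus average sub-batch burnup (GWd/MTU);
    return slope, intercept, and the residual standard deviation
    (ddof=2 for the two fitted parameters) as a simple scatter measure."""
    burnups = np.asarray(burnups_gwd, dtype=float)
    errors = np.asarray(errors_pcm, dtype=float)
    slope, intercept = np.polyfit(burnups, errors, 1)
    residuals = errors - (slope * burnups + intercept)
    return float(slope), float(intercept), float(np.std(residuals, ddof=2))
```

A residual standard deviation alone does not give a 95/95 statement; converting scatter about a fit into a tolerance bound is exactly the regression fit uncertainty question raised elsewhere in this evaluation.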

Additional work utilizing the Benchmark for Evaluation And Validation of Reactor Simulations (BEAVRS) flux map data - documented in a report released by EPRI titled, "PWR Fuel Reactivity Depletion Verification Using Flux Map Data" - has since been performed evaluating reactivity decrement errors as a function of sub-batch average burnup using CASMO-5 directly instead of through SIMULATE-3 (Smith and Gunow, 2014). As discussed in the report, this work allows for an assessment of the impact of the following SIMULATE-3 modeling approximations on the calculation of reactivity decrement error bias as a function of burnup:


  • Differences in batch spectra vs. CASMO-5 lattice assumption (zero leakage)
  • Differences in intra-assembly spatial flux distributions vs. lattice assumption
  • Errors in SIMULATE-3 nodal and detector physics models
  • Errors in SIMULATE-3 cross-section data fitting models

The report utilizing the BEAVRS flux map data explains what was briefly touched on above:

The EPRI/Studsvik study demonstrated that accurate fuel reactivity errors could be determined by minimizing either the 2D (axially-integrated) or [3D] (nodal) r.m.s. differences between measured and calculated 3D SIMULATE-3 fission rate distributions. Hence, it is clear that if 2D calculations can predict the radial fission rate distributions with similar accuracy to 3D calculations, then direct 2D calculations can be used to infer errors in fuel reactivity burnup decrements using the analytical procedure developed in the original [EPRI benchmark report].

The report utilizing the BEAVRS flux map data concludes the following:

It has been successfully demonstrated that the reactivity decrement errors inferred using flux map data and 2D full-core multi-group transport calculations are smaller than those inferred by using 3D nodal diffusion calculations to compute reactor fission rate distributions.

Fuel reactivity errors inferred from 3D SIMULATE-3 and 2D CASMO-5 full-core transport calculations are within 250 [percent millirho (pcm)] of one another for all flux map/batches examined here.[11]

The most important outcome of this study is that nodal method approximations have now been demonstrated to contribute insignificantly to individual batch reactivity errors. Consequently, nodal methods do not contribute significantly to inferred reactivity decrement biases and uncertainties postulated in the original [EPRI benchmark report].

[11] The maximum difference was within 250 pcm, and the average was much smaller at 33 pcm +/- 78 pcm at a 1-sigma standard deviation.

Although the BEAVRS work has been done with a different set of flux map data spanning only the first two cycles[12] of a 4-loop Westinghouse reactor containing one type of 17x17 fuel assemblies with a maximum enrichment of 3.4 wt% U-235, the insights are nonetheless valuable and provide additional assurance that the SIMULATE-3-derived reactivity decrement errors are consistent with those derived solely from the higher fidelity physics models of a lattice depletion code such as CASMO-5. The results of the report utilizing the BEAVRS flux map data also provide assurance that - as characterized by EPRI - measured biases and uncertainties are not unrealistically low "because of some fortuitous cancellation of errors in the numerous approximations made in the 3D nodal diffusion core models that were employed [in the SIMULATE-3 calculations described in the EPRI benchmark report]."

Overview of Deduction Process Uncertainties

For the given dataset, potentially offsetting reactivity decrement error effects are postulated to be caused by:

1. Use of SIMULATE-3 and its models with CASMO-5 generated cross-sections instead of CASMO-5 models directly,

2. Imprecise knowledge of fuel temperatures when modeling the reactor environment in SIMULATE-3,

3. Uncertainties in the measured fluxes from flux maps input into SIMULATE-3,

4. Uncertainties in the actual geometrical dimensions of components modeled in SIMULATE-3,

5. The algorithm used to arrive at individual sub-batch reactivity decrement errors.

Item (1) was explicitly addressed as explained in the preceding section, and Item (2) has been explicitly addressed by EPRI as discussed further in the Uncertainty Analysis section below, while it is reasonably argued that Items (3) and (4) have negligible impact on reactivity decrement error bias uncertainty.[13] Regarding Item (5), due to the algorithm chosen to derive individual sub-batch reactivity decrement errors, there is substantial uncertainty introduced into the burnup-dependent reactivity decrement data of Figure 7-1 of the benchmark report that cannot be separated from the CASMO-5 reactivity decrement error bias uncertainty characterization.

EPRI refers to this uncertainty as arising from sub-batches with "low sensitivity," meaning that

[12] The maximum batch burnup is 23 gigawatt-days per initial metric ton of uranium (GWd/MTU).

[13] See the RAI 2, 8, and 9 responses as part of the responses to RAIs for the EPRI benchmark report for additional discussion. As part of this discussion, EPRI also states that modeling simplifications that create deviations from reality - e.g., deviations in the actual geometrical dimensions of components - increase the magnitudes of uncertainty attributed to the reactivity decrement errors, which is conservative.

they have low sensitivity to the algorithm being used to deduce reactivity decrement error. This means that the reactivity decrement errors of these low-sensitivity sub-batches do not change as drastically as those of higher-sensitivity sub-batches as burnup changes. EPRI argues that these low-sensitivity sub-batches are not relevant to the burnup-dependent reactivity decrement error database as they do not complement the implemented deduction technique. In the initial revision of the EPRI benchmark report, EPRI attempted to filter these low-sensitivity sub-batches out of the dataset. However, this is problematic because what the filtering criteria should be is not clear. For example, how would one know which filtering criteria are reasonable and which are unreasonable? Therefore, these low-sensitivity sub-batches cannot be justifiably filtered out and can be viewed as a penalty that must be taken with the chosen reactivity decrement error deduction algorithm. Even the modest filtering originally proposed by EPRI resulted in large burnup-dependent reactivity decrement errors. This algorithm-associated uncertainty is an integral part of the regression fit uncertainty discussed in the Uncertainty Analysis section below.

Regarding the connection between the burnup-dependent reactivity decrement data and the CASMO-5 lattice physics solver, after accounting for the above, uncertainty associated with use of CASMO-5 to quantify reactivity decrement error for cold in-rack conditions using data from hot in-core conditions must still be accounted for. Of the CASMO-5 inputs, uncertainties associated with temperature-dependent nuclear cross-section data create additional uncertainty when defining cold versus hot reactivity decrement uncertainties. Table 8-3, "[Hot Full Power (HFP)] to Cold Reactivity Uncertainty (2-sigma) as Function of Burnup," of the benchmark report shows uncertainties on the order of approximately 500 pcm from TSUNAMI-3D analyses - this is discussed further in the Uncertainty Analysis section below.

In summary, the major uncertainties that are likely driving the reactivity decrement error data are: (1) temperature-dependent cross-section uncertainties, (2) fuel temperature uncertainties during reactor operation, and (3) the algorithm used to arrive at individual sub-batch reactivity decrement errors.

Method Applicability

In-Core Flux Map Benchmarking

In the context of the regression analysis performed by EPRI, there is one main issue with EPRI's argument of generic applicability. That is, the population of fuel types in the benchmarking effort is limited to a single Westinghouse 17x17 fuel design with some burnable absorber (BA) variation and a single AREVA 17x17 fuel design with some BA variation.

Similarly, the population is restricted to a subset of fuel enrichments and soluble boron histories.

This makes extrapolation to all PWRs difficult since there is no physical evidence that the variance, using traditional estimation techniques, would not significantly increase due to the variation in the various cycle-specific parameters that has occurred in past PWR operation, does occur in current PWR operation, and will occur in future PWR operation. Given that the benchmark report is intended to be applicable to all PWR fuel (as discussed in the response to RAI 7a), additional study may be warranted for other fuel designs over a range of BA types and loadings.

In EPRI benchmark report RAI 7b, the NRC staff asked EPRI to assess the impact of operational characteristics that might be considered atypical or unexpected (i.e., any operational characteristics that would be considered to be a significant deviation from those forming the basis of the EPRI benchmark work) on the reactivity decrement error uncertainty. EPRI states that:


The 44 Duke reactor cycles used in this study (approximately 65 reactor-years) are a small percentage (<1%) of the many thousands of PWR reactor-years of operation that have occurred. However, many PWRs have been operated with fuel and operational strategies that are very similar to those of the Duke reactors, and as such, the fuel used in this report is representative of the majority of the discharged fuel in spent fuel pools in the US.

EPRI provides further qualitative discussion regarding why the 44 Duke reactor cycles are appropriate to cover all PWR operations, stating that:

While fuel in the Duke reactors was 17x17 fuel, there also exist 14x14, 15x15, and 16x16 fuel in US spent fuel pools. All of these other fuel types are depleted in reactors with very similar fuel-to-coolant ratios, operational power densities, fuel temperatures, and soluble boron concentrations. Thus, we expect that reactivity decrement biases and uncertainties would be very similar for these other fuels.

The NRC staff agrees that reactivity biases and uncertainties are expected to be similar for other fuel types depleted in reactors with similar fuel-to-coolant ratios, operational power densities, fuel temperatures, and soluble boron concentrations; however, the NRC staff would add BAs to this list, as CASMO-5 cross-section uncertainties and depletion simulation capability may change as a function of BA.

In EPRI benchmark report RAI 7d, the NRC staff asked for clarification of a statement made regarding a 200 pcm sensitivity of the reactivity decrement error to soluble boron, fuel enrichment, and BAs. In the response to RAI 7d, EPRI implies that there would be only minor sensitivity of the reactivity decrement error to various cycle-specific parameters. This is inferred from the qualitative observation that the reactivity decrements varied by only 200 pcm (i.e., a small amount) when developing the Kopp 5 percent decrement curves. In other words, a small percentage of a small amount will be an even smaller amount; therefore, EPRI believes it is justified in saying that there would not be any significant sensitivity to soluble boron, fuel enrichment, and BAs.

Based on the discussion above, the NRC staff finds that the 44 cycles of PWR data modeled using CASMO-5 and SIMULATE-3 are sufficient to allow for quantification of the reactivity decrement error and associated uncertainty due to PWR fuel depletion. However, the NRC staff also agrees with the statement in Section 10.3, Range of Fuel Applications, of the initial revision of the EPRI benchmark report:

The results presented in [the EPRI benchmark] report are, strictly speaking, applicable only to those fuel types included in the analysis, namely: 1) 3.5 - 5.0% enrichment, 2) Westinghouse RFA fuel with [Integral Fuel Burnable Absorber (IFBA)] and [Wet Annular Burnable Absorbers (WABAs)], and 3) AREVA Mark-BW fuel with [lumped burnable poison (LBP)] pins. For other fuel types, additional analysis may be needed to demonstrate that results of this study are [applicable] to those fuel types.

Spent Fuel Storage Applicability Studies Via ck Analysis

EPRI proposes to apply the bias and uncertainty calculated at hot in-core conditions to cold in-rack SFP conditions. Consequently, the NRC staff asked for a more detailed justification for this extrapolation of bias and uncertainty from hot in-core conditions to cold in-rack SFP conditions. In the EPRI benchmark report RAI 5 response, EPRI explains that possible differences in neutron energy spectrum between SFP geometries and power reactor core geometries are understood by comparison of SCALE Version 6.0 (ORNL, 2009) TSUNAMI-IP correlation coefficients (referred to as ck values), which are given in Table 8-4, Correlation Coefficients, ck, Between Reactor Conditions by Lattice and Burnup, of the EPRI benchmark report.

As explained by EPRI, similarity coefficients were generated using a series of SCALE Version 6.0 TSUNAMI sequences. The TSUNAMI-3D code was used to generate application-specific sensitivity data used as input to the TSUNAMI-IP sequence, which generates the ck values that serve as a measure of similarity between two systems. The similarity is quantified in terms of system k-effective sensitivity to nuclear cross-section data uncertainty. The closer a ck value is to one, the closer two systems are to sharing identical sources of nuclear data uncertainty (Broadhead et al., 2004). In other words, similar reactivity changes can be expected of two systems that correspond to a high ck value when the cross-section data are changed in a systematic way; this strongly implies physical similarity and reactivity sensitivity similarity between the two systems. However, strictly speaking, the only thing that can be said is that the two systems will share similar nuclear data uncertainties (Mennerdahl, 2014).
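The ck construction described above can be illustrated with a small numerical sketch. This is not TSUNAMI-IP itself: the sensitivity vectors and covariance matrix below are invented toy values, and the code simply evaluates the standard correlation between two systems' nuclear-data-induced k-effective variances.

```python
def mat_vec(C, v):
    """Matrix-vector product for a covariance matrix given as nested lists."""
    return [sum(cij * vj for cij, vj in zip(row, v)) for row in C]

def bilinear(u, C, v):
    """u^T C v, the (co)variance contribution shared by sensitivity vectors u and v."""
    return sum(ui * wi for ui, wi in zip(u, mat_vec(C, v)))

def ck(s1, s2, cov):
    """Correlation coefficient between two systems' nuclear-data-induced
    k-effective uncertainties; values near 1 indicate that the systems
    share nearly identical sources of nuclear data uncertainty."""
    return bilinear(s1, cov, s2) / (bilinear(s1, cov, s1) * bilinear(s2, cov, s2)) ** 0.5

# Toy 3-group relative covariance matrix and sensitivities (hypothetical
# illustration values, not TSUNAMI output):
cov = [[4e-4, 1e-4, 0.0],
       [1e-4, 9e-4, 2e-4],
       [0.0,  2e-4, 1e-3]]
s_core = [0.10, 0.25, 0.40]   # hot in-core lattice sensitivities
s_rack = [0.12, 0.22, 0.38]   # cold in-rack lattice sensitivities
print(round(ck(s_core, s_rack, cov), 4))
```

Two systems with identical sensitivity vectors yield ck = 1 by construction; dissimilar systems drive the shared term in the numerator down relative to the individual variances.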

Many things contribute to the similarity of two systems, including neutron energy spectra, spent fuel isotopic concentrations, the presence of other non-fuel materials, and the geometric configuration of the two systems. EPRI states that, for all lattices of the EPRI study, the correlation coefficients between the in-core conditions and the SFP rack geometry are greater than 0.95, demonstrating that the physical characteristics between the two environments are very similar and justifying the application of nuclear data uncertainties from hot in-core conditions to cold in-rack SFP conditions.

The RAI 5 response focuses on differences in neutron energy spectrum between the two environments, referring to the high correlation coefficient similarity. EPRI explains that:

...at hot conditions the neutron spectrum is hardened by the low water density, core soluble boron, and Xenon/Samarium absorbers - just as the SFP neutron spectrum is hardened [by] the presence of absorber panels. Thus, even though the SFP has a much higher water density than the PWR core, the spectral softening of the water is offset by other hardening phenomenon to make the energy spectrum and reactivity sensitivities very similar to those in the reactor core.

In EPRI benchmark report RAI 7a, the NRC staff asked EPRI to formally define the area of applicability as it relates to the benchmarking effort performed. The response provides the burnup range, the U-235 enrichment range, neutron energy spectrum range, and specific power range applicable to a SFP CSA application that would reference the EPRI benchmark work.

However, the area of applicability was also evaluated using TSUNAMI-IP and similarity tests by analyzing ck values, which allow for applicability to be defined more generally. In the General Response to the EPRI utilization report RAIs, a high degree of similarity is indicated between a range of 2-D lattice models (with varying fuel enrichment, BA type, soluble boron, temperature, and power) and a range of in-rack SFP models (with varying fuel type, storage rack type, areal density, fuel enrichment, and burnup). EPRI notes that:

Of the 56 spent fuel pool configurations investigated, all but 3 had a ck greater than 0.9 for at least one of the benchmark cases. The three that had a maximum ck less than 0.9 were associated with flux trap (Region 1) designs with low burnup (20 GWd/T) and CE 16x16 fuel (see the bottom of Table GR-1, page 7 of 7). Flux traps add more water to the system than contained in the benchmarks. All of the non-flux trap designs (Region 2) had very good agreement, with the minimum of the [maximum] cks being 0.9821 for W 17x17 fuel and 0.9710 for CE 16x16 fuel. Flux trap racks are principally designed to accommodate fresh fuel and therefore do not usually require burnup credit. In fact, two of the three cases which had ck values less than 0.9 were for rack designs which did not need burnup credit. The only spent fuel configuration[s] that [are] likely to need burnup credit and [have] a ck less than 0.90 are flux trap designs that do not credit absorber panels. However, these cks are marginally below 0.90.

Consequently, the presented evidence strongly supports the claim in the RAI 7a response that the area of applicability is all current PWR fuel assembly designs for applications with most rack designs, and the NRC staff considers the EPRI utilization report calculational benchmarks to be comprehensive. However, a limitation similar to the one mentioned in the previous section regarding in-core flux map benchmarking of PWR fuel types applies to the utilization report calculational benchmarks. That is, the calculational benchmarks presented in the EPRI utilization report are, strictly speaking, applicable only to the storage of fuel types considered in the EPRI benchmark report stored under SFP storage conditions that are similar to those considered in the similarity analyses supporting the EPRI utilization report.14 For other fuel types, BAs, or other SFP storage conditions, additional analysis may be needed to demonstrate that results of the EPRI benchmark report are applicable to a given application.

Uncertainty Analysis

Uncertainty Based on Application of HFP Reactor Benchmarks to Cold SFP Conditions

The in-core depletion benchmarks were performed at HFP conditions, which creates uncertainty in estimates of reactivity decrement error at cold conditions. This is due to not having precise knowledge of certain fuel properties at HFP conditions. Also, because the flux map measurements were performed at HFP conditions rather than cold conditions representative of the SFP, cross-section uncertainties at cold conditions must be considered.

One property that can have an effect on system reactivity is fuel temperature, which cannot be directly measured during HFP operations. The uncertainty treatment is explained in Section 8.2, Fuel Temperature Uncertainties, of the EPRI benchmark report. The driving positive reactivity effect is increasing plutonium content in the fuel as a function of burnup, with more plutonium present when operating at higher fuel temperatures. This positive reactivity effect is masked at HFP in-core conditions by an increasingly negative fuel temperature reactivity feedback effect with increasing burnup, which is also due to higher fuel temperatures. However, at cold SFP conditions, the fuel temperature feedback effect is not present to offset the positive reactivity effects of increased plutonium production at higher temperatures.

14 Refer to the General Response given in the EPRI utilization report RAI responses for a description of the similarity analyses performed.

Using INTERPIN-4 (Grandi and Hagrman, 2007), which provides data for average fuel pin temperatures as a function of burnup and linear heat generation rate for the Studsvik CMS codes, EPRI determines an average fuel pin temperature as a function of burnup to find the minimum and maximum fuel temperatures expected for a given fuel pin. The average fuel pin temperature as a function of burnup used is shown in Figure 8-1, Typical INTERPIN-4 Fuel Temperature Change With Burnup, of the benchmark report. Using these minimum and maximum temperatures, EPRI calculates two separate reactivity effects to bound any potential increase in the reactivity decrement error data due to temperature effects. The first reactivity effect is an instantaneous effect at HFP conditions, and the second is a history effect associated with the plutonium build-in as a function of burnup which manifests at cold conditions. Initially, EPRI selected both the maximum instantaneous and history effects over the analyzed burnups, -150 and 206 pcm respectively, and combined them by the root of the sum of the squares (RSS). However, to obtain a more realistic estimate of the uncertainty, EPRI opted to combine the two fuel temperature uncertainty components by the RSS as a function of burnup in the final revision of the EPRI benchmark report. The NRC staff finds this approach to be appropriate for the intended application of conservatively deriving HFP temperature uncertainties, since an appropriately validated fuel performance code was used in a bounding manner to account for all relevant reactivity effects caused by uncertainty in fuel temperatures.
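The two RSS treatments described above (a single bounding combination of the burnup-maximum effects versus a burnup-dependent combination) can be sketched as follows. Apart from the -150 and 206 pcm bounding values quoted above, the per-burnup values are hypothetical placeholders, not report data.

```python
import math

def rss(*components_pcm):
    """Root-sum-square combination of independent uncertainty
    components; signs are dropped, only magnitudes matter."""
    return math.sqrt(sum(c * c for c in components_pcm))

# Bounding approach from the initial report revision: combine the
# burnup-maximum instantaneous and history effects (pcm).
print(round(rss(-150, 206)))

# Burnup-dependent approach from the final revision: combine the two
# components at each burnup (hypothetical per-burnup values, pcm).
instantaneous = {10: -60, 30: -110, 50: -150}
history       = {10:  40, 30:  130, 50:  206}
combined = {bu: rss(instantaneous[bu], history[bu]) for bu in instantaneous}
```

Because each burnup point combines only its own two components rather than the overall maxima, the burnup-dependent result is never larger than the single bounding value.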

CASMO-5 was used to quantify the reactivity effects due to the difference between minimum and maximum fuel temperatures by performing sensitivity analyses at various fuel burnups; calculations were performed at both hot and cold in-core conditions.15 The maximum reactivity effects are highlighted in Table 8-1, Fuel Temperature Effect on Hot and Cold Lattice Reactivity, of the benchmark report.

Another source of uncertainty caused by performing the depletion benchmarks at HFP conditions comes from the temperature-dependent nuclear data. The goal of the depletion benchmarking effort is to ultimately provide an estimate of reactivity decrement error bias and uncertainty at cold SFP conditions rather than HFP conditions. Therefore, quantification of the nuclear data uncertainty arising from creation of the HFP depletion benchmarks at elevated temperatures instead of at cold SFP conditions is necessary.

TSUNAMI-3D was used to determine the temperature-based nuclear data uncertainty. Upon completion of a TSUNAMI-3D sequence run, an estimate of problem-dependent nuclear data uncertainty is produced in terms of reactivity using problem-specific reactivity sensitivity coefficients and a temperature-specific nuclear data covariance library. EPRI performed separate TSUNAMI-3D calculations using the appropriate temperature-dependent nuclear cross-section data corresponding to both HFP in-core conditions and cold SFP conditions for a range of configurations, as detailed in Section 8.5, TSUNAMI Analysis Results. The results of these calculations were then adjusted for correlation between hot in-core and cold in-rack states; the results are provided in Table 8-7, HFP to Cold Uncertainty Matrix (2-sigma) at Cold Conditions. Also, the uncertainties associated with fresh fuel nuclear data uncertainties were statistically subtracted by RSS, and the resulting uncertainties are provided in Table 8-9, HFP to Cold Additional Uncertainty Matrix (2-sigma) at Cold Conditions. The NRC staff finds this acceptable because fresh fuel nuclear data uncertainties are treated separately in SFP CSA applications by benchmarking the criticality code with cold fresh fuel critical experiments.

15 Cold in-core reactivity effects were determined by performing CASMO-5 branch-to-cold calculations over the range of respective hot condition calculations.
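The statistical subtraction of the fresh-fuel nuclear data component by RSS can be sketched as below; the numerical values are hypothetical illustrations, not values from Tables 8-7 or 8-9.

```python
import math

def rss_subtract(total_2sigma_pcm, fresh_fuel_2sigma_pcm):
    """Remove the fresh-fuel nuclear data component from a total
    nuclear data uncertainty, treating the components as independent
    (so variances, not standard deviations, subtract)."""
    return math.sqrt(total_2sigma_pcm**2 - fresh_fuel_2sigma_pcm**2)

# Hypothetical 2-sigma values in pcm:
print(round(rss_subtract(600.0, 450.0)))
```

Subtracting in quadrature leaves only the additional uncertainty attributable to depletion at elevated temperature, consistent with treating the fresh-fuel component separately in the criticality code benchmarking.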

However, in Section 8.5 of the revised EPRI benchmark report, EPRI modified the methodology based on a re-interpretation of the Table 8-9 data, explaining that the additional uncertainty curve does not approach 0.0 for low burnups as one expects from the definition of reactivity decrement, and consequently proposes to use the 0.5 [GWd/MTU] step as the reference for zero burnup, as displayed in the bottom red curve of Figure 8-2. This data adjustment, reflected in Table 8-10, HFP to Cold Additional Uncertainty Matrix (2-sigma) at Cold Conditions, effectively shifts all of the uncertainty values non-conservatively, claiming that the uncertainty does not approach 0 pcm fast enough moving from 0.5 GWd/MTU burnup to 0 GWd/MTU burnup. Without a physical basis for the adjustment, the NRC staff found this to be insufficient to support use of the adjustment.16

Regression Fit and Associated Uncertainty

Section 7, Measured HFP Reactivity Bias and Uncertainty, of the EPRI benchmark report discusses the analysis and interpretation of the reactivity decrement error versus average sub-batch burnup data. This includes an estimate of the regression fit of the reactivity decrement error data and the regression fit uncertainty as a function of assembly-average burnup.

In the initial EPRI benchmark report revision, Section 7.5, Burnup Reactivity Decrement Biases and Confidence Intervals, describes why formal statistics cannot strictly be applied to the data in order to determine the regression fit uncertainty based on a 95 percent probability, 95 percent confidence interval consistent with 10 CFR 50.68(b)(4) k-effective determination requirements.

Consequently, Section 7.6, Burnup Reactivity Decrement Biases and Uncertainties, describes a direct method by which the regression fit uncertainty is estimated.

Based on the Section 7.6 demonstration, the NRC staff had concerns with the relatively small estimate of the regression fit uncertainty in light of the relatively large variation in reactivity decrement error versus average sub-batch burnup, especially given that formal statistics were not applied in the determination of the estimate.

EPRI benchmark report RAI 3 covers many NRC staff concerns regarding the approach used to determine the regression fit uncertainty and explains why the approach was unacceptable to the NRC staff. The NRC staff issued EPRI benchmark report RAI 7c to gain an understanding of the implications of not meeting certain statistical conditions when performing the regression analysis, such as, for example, understanding whether variance estimates might be overestimated or underestimated. EPRI did not provide any practical evaluation of not meeting all strict statistical conditions and the impact on the associated regression fit and its variance; therefore, the quantitative implications (e.g., magnitude of overestimation or underestimation) of not meeting the strict statistical conditions when estimating the regression fit and its variance were not clear to the NRC staff. To address the concerns described in RAI 3 and RAI 7c, EPRI developed a statistically-based approach to more rigorously quantify the regression fit uncertainty.

16 Note that the values in Table 8-9 are used instead of those in Table 8-10 in the development of Table 1, Bias and Uncertainty (% Reactivity Decrement) Versus Burnup (GWd/MTU) for the EPRI Depletion Reactivity Benchmarks, of this safety evaluation, which contains the results of the NRC staff's confirmatory data analysis.

The procedure in Attachment 2 of NEI's April 13, 2016, letter (Cummings, 2016a) modifies the regression confidence interval, at select burnups, by multiplying the Student's t-factor by the k-factor divided by the t-factor in Step 8 of the Summary of Analysis Procedure subsection. Ultimately, this procedure does not allow for the correct statistical inference, as it still focuses on the average reactivity decrement as a function of sub-batch average burnup rather than individual reactivity decrements, as discussed in a follow-up RAI, RAI 1 (Cummings, 2016a).

During a June 8, 2016, public meeting, the staff again requested that NEI/EPRI re-align the procedure already developed in Attachment 2 to reflect a 95/95 confidence limit17 based on the correct population parameter - the individual reactivity decrement values rather than the mean of these values - for the characterization of the 95/95 uncertainty consistent with the explicit requirements of 10 CFR 50.68(b)(4) (Hsueh, 2016a).

EPRI responded with a white paper titled, A Conservative Approach to Depletion Analysis for Spent Fuel Pool Criticality Analysis (Cummings, 2016b). This qualitative approach argues that use of NEI 12-16 guidance will introduce conservatisms that would offset any potential non-conservatisms attributed to deficient depletion code qualification approaches. NEI first claims that Section 4.2.1, Depletion Analysis, of NEI 12-16 will produce a bounding reactivity compared to actual depleted fuel assemblies in the spent fuel pool. NEI then claims additional conservatism is present via guidance in Section 5.1.4, Reactor Burnup Record Uncertainty, of NEI 12-16, which specifies that use of a 5 percent burnup uncertainty is bounding. NEI further notes that the inclusion of both the depletion uncertainty and the burnup [record] uncertainty is essentially a double-counting of the uncertainty of the depletion code to accurately calculate the change in reactivity with burnup.

The NEI 12-16, Section 4.2.1 guidance may lead to conservative results, but the amount of conservatism will vary depending on the methodology used to perform the depletion analysis. Therefore, to bolster this argument, some discussion quantifying the range of conservatism to be expected would be needed.

Regarding NEI's statement on the burnup record uncertainty, NEI implies that one could generally take credit for the conservatism of the burnup record uncertainty without justification for why this would be appropriate on an application-specific basis. For some applications this may be conservative, but for others it may not be. Therefore, generically crediting inherent conservatism in this term does not create a success path to address non-conservatism in the EPRI depletion code validation methodology. Furthermore, without additional explanation from NEI, the NRC staff does not agree with NEI's position that the inclusion of both the depletion uncertainty and the burnup [record] uncertainty is essentially a double-counting of the uncertainty of the depletion code to accurately calculate the change in reactivity with burnup.

17 The NRC staff further clarified that this is also referred to as a one-sided regression tolerance interval in this context.

On October 14, 2016, a public meeting was held to discuss the proposed closure of the open item related to EPRI's regression fit and associated uncertainty (Hsueh, 2016b). At the meeting, EPRI provided another update of the analysis to address remaining NRC staff concerns, and NEI agreed to supplement the previous follow-up RAI 1 response to document this modification to the statistical analysis. Modification of the methodology to support relative bias and uncertainty characterization was also discussed during the meeting, and NEI agreed to include the technical basis for this modification as part of the supplemental RAI response and in a subsequent benchmark report revision.

The NRC staff received the supplemental follow-up RAI 1 response submitted on January 9, 2017 (Cummings, 2017). Upon review of the supplement, it was still not clear why certain assumptions made were appropriate with respect to developing 95/95 regression tolerance intervals (e.g., data collapsing to treat data dependence rather than attempting to model the data dependence without collapsing).

The supplement also described the conversion of absolute values of the bias and uncertainty into relative values in terms of percent reactivity decrement due to fuel depletion. This is desirable because it allows for a scaling of the depletion bias and uncertainty relative to the density of depleted fuel loaded in a given storage configuration. For example, use of an absolute uncertainty would apply the same uncertainty magnitude to a 4-out-of-4 storage array of depleted fuel as would be applied to the same storage array with only 2 of the fuel assemblies present in the array (i.e., 2 empty storage cells checkerboarded with 2 depleted fuel assemblies), resulting in unwarranted additional conservatism. However, this re-formulation requires establishment of an appropriately representative reactivity decrement. In principle, the smaller the reactivity decrement assumed, the higher the relative uncertainty. Therefore, in the supplemental RAI response, EPRI uses the minimum cold out-of-rack reactivity decrement, dependent on burnup, determined from the calculational benchmarks defined in the EPRI utilization report; only the 7 nominal lattices that are depleted, branched to cold conditions, and decayed for 100 hours were considered.18 EPRI also notes that cold depletion reactivities and uncertainties...are smaller in-rack [(i.e., in a SFP storage rack)] than out-of-rack, as reported in the original EPRI [benchmark] report Tables 8-7 and 8-8. Therefore, using out-of-rack reactivities as the basis for the conversion to relative bias and uncertainty is most appropriate (i.e., cold out-of-rack reactivity bias or uncertainty is divided by cold out-of-rack reactivity). EPRI selects the smallest cold out-of-rack reactivity decrement for each increment of burnup across all 7 nominal lattices. Since this has the effect of maximizing the relative bias and uncertainty, this is conservative, and therefore appropriate.
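The conversion from absolute to relative uncertainty described above can be sketched as follows; the decrement and uncertainty values below are hypothetical placeholders, not data from the benchmark or utilization reports.

```python
# Convert an absolute depletion uncertainty (pcm) into a relative one
# (% of depletion reactivity decrement) by dividing by the smallest
# cold, out-of-rack reactivity decrement among the nominal lattices
# at each burnup. All numbers are hypothetical illustrations.

min_cold_decrement_pcm = {10: 4000, 30: 11000, 50: 16000}  # per burnup (GWd/MTU)
abs_uncertainty_pcm    = {10:  160, 30:   330, 50:   480}

rel_uncertainty_pct = {
    bu: 100.0 * abs_uncertainty_pcm[bu] / min_cold_decrement_pcm[bu]
    for bu in abs_uncertainty_pcm
}
# Dividing by the *minimum* decrement maximizes the relative
# uncertainty, which is the conservative direction.
print({bu: round(v, 2) for bu, v in rel_uncertainty_pct.items()})
```

Because the relative value later gets multiplied by the application's own (smaller or larger) decrement, a checkerboarded array with fewer depleted assemblies automatically receives a proportionally smaller absolute penalty.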

To resolve the open item regarding the regression analysis, as discussed in the revised EPRI depletion benchmark report (Smith et al., 2017), corresponding RAI responses, and meeting summaries, the NRC's supporting consultants from Pacific Northwest National Laboratory performed a confirmatory analysis to verify the acceptability of the EPRI-determined regression fit and associated uncertainty. As a result of this analysis, the NRC staff determined a regression fit and associated uncertainty based on the results of the confirmatory analysis.19

18 See EPRI benchmark report, Table 10-1, Measured Cold Reactivity Decrements (in pcm) for Nominal Benchmark Lattices, for the reactivity decrements considered.

19 See Appendices A and B of this report for confirmatory analysis details.

The main difference arises from determination of the bias from a more appropriate linear regression fit instead of the quadratic fit used by EPRI, because the quadratic fit appears to overfit the data at higher burnups; this is discussed in more detail in Appendix B. The confirmatory analysis also bases the 95/95 regression fit uncertainty on a first-order autoregressive model to account for the correlation structure in the sub-batches of the EPRI-generated reactivity decrement error dataset, without questionable collapsing of the sub-batch data within each cycle to remove data correlation. The burnup-dependent bias and total uncertainty20 are shown in Table 1 below.

Table 1: Bias and Uncertainty (% Reactivity Decrement) Versus Burnup (GWd/MTU) for the EPRI Depletion Reactivity Benchmarks.

Burnup   Bias   Uncertainty
10       0.00   3.89
20       0.28   3.28
30       0.43   3.04
40       0.52   3.02
50       0.58   3.06
60       0.60   3.14

Since the results in Table 1 of this report are not bounded by those in Table 10-2, Measured CASMO-5 Cold Reactivity Decrement Biases and Tolerance Limits Expressed as Percentage of Depletion Reactivity Decrement, in the revised EPRI benchmark report, additional NRC staff confirmatory analyses were performed, as documented in Appendix C, to assess whether the differences between the NRC staff confirmatory analysis results, as provided in Table 1 of this report, and those in Table 10-2 of the revised EPRI benchmark report were significant enough that modification of the EPRI-determined bias and uncertainty would be warranted.
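As an illustration only (the governing usage instructions are those in Appendix C of the EPRI benchmark report), burnup-dependent relative values such as those in Table 1 might be applied to an application-specific reactivity decrement along the lines below. The 12000 pcm decrement and the linear interpolation between tabulated burnups are assumptions of this sketch.

```python
# Apply a relative bias and uncertainty (% of reactivity decrement),
# tabulated versus burnup as in Table 1 above, to a hypothetical
# application-specific depletion reactivity decrement.

table = {  # burnup (GWd/MTU): (bias %, uncertainty %) -- Table 1 values
    10: (0.00, 3.89), 20: (0.28, 3.28), 30: (0.43, 3.04),
    40: (0.52, 3.02), 50: (0.58, 3.06), 60: (0.60, 3.14),
}

def interp(burnup, column):
    """Linear interpolation of the requested column (0 = bias,
    1 = uncertainty) between tabulated burnup points."""
    pts = sorted(table)
    for lo, hi in zip(pts, pts[1:]):
        if lo <= burnup <= hi:
            frac = (burnup - lo) / (hi - lo)
            a, b = table[lo][column], table[hi][column]
            return a + frac * (b - a)
    raise ValueError("burnup outside table range")

decrement_pcm = 12000.0     # hypothetical application decrement
bu = 25.0                   # hypothetical assembly-average burnup
penalty_pcm = decrement_pcm * (interp(bu, 0) + interp(bu, 1)) / 100.0
```

Expressing the penalty as a fraction of the decrement is what makes the result scale with the amount of depleted fuel actually credited in the storage configuration.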

Based on the analysis documented in Appendix C of this report, the NRC staff found that the values of reactivity decrement uncertainty in EPRI benchmark report Table 10-2, while not consistently bounding relative to the NRC staff's confirmatory analysis, are nonetheless approximately the same. That is, the estimated final reactivity impact on a theoretical end-user's SFP CSA was found to differ by less than approximately 100 pcm independent of burnup - a relatively small amount.

Finding

The [Spent Nuclear Fuel] Rack cases from Table 8-4 of the EPRI benchmark report show that similar reactivity sensitivities to changes in nuclear cross-section data are expected for both the HFP reactor and cold SFP environments. This provides a strong indication that the reactivity decrement error bias and uncertainty calculated in the HFP reactor environment is also applicable to the cold SFP environment after appropriately accounting for fuel temperature and temperature-dependent nuclear data uncertainties. This indication is also supported by the observations in the NRC staff confirmatory analysis documented in Appendix C of this report. Consequently, the NRC staff finds that there is sufficient evidence showing that the benchmarks conducted in the reactor environment are applicable to the SFP environment.

20 The total uncertainty shown was determined by combining the regression fit uncertainty with all other uncertainty components as determined by EPRI.

Regarding the reactivity decrement error bias and uncertainty analysis, the NRC staff finds that:

1. Reactivity decrement error bias and uncertainty, as quantified in Table 10-2 of the EPRI benchmark report, is acceptable,

2. Various reactivity decrement uncertainty components were appropriately derived, and

3. Various reactivity decrement uncertainty components were appropriately combined.

The NRC staff finds that there is sufficient evidence showing that all significant k-effective uncertainties necessary to allow for benchmarking of depletion codes in support of PWR burnup credit in SFP CSA applications have been appropriately accounted for and applied.

Utilization of the EPRI Depletion Benchmarks for Burnup Credit Validation

The corresponding depletion code bias and uncertainty to be applied in an NRC licensee CSA application is first taken to be the bias and uncertainty associated with the depletion code used to infer reactivity decrement errors based on in-core flux map data, as discussed in the EPRI benchmark report; the bias and uncertainty data are tabulated in Table 10-2 of the EPRI benchmark report, with recommended uncertainty usage given in Appendix C of the EPRI benchmark report. As indicated in Section 9.3 of the revised EPRI benchmark report, the bias data in Table 10-2 has already been applied to the reactivity decrement tables in Appendix C; therefore, there is no need for an end user to account for this bias data separately in their CSA application.

EPRI has defined 11 reference calculational benchmarks,21 which have been modeled using CASMO, the same reactor analysis code used to infer reactivity decrement errors from in-core flux map data as discussed in the EPRI benchmark report and the corresponding section above. These calculational benchmarks are designed to represent a broad range of depletion conditions typical of fuel stored in PWR SFPs and analyzed in PWR SFP CSA applications. To provide an example of how an NRC licensee might use these 11 calculational benchmarks, EPRI models the benchmarks using the SCALE, Version 6.1.2, TRITON T5-DEPL sequence (ORNL, 2013) for depletion calculations and the SCALE, Version 6.1.2, CSAS5 sequence (using the KENO-V.a criticality code) for criticality calculations, as explained in Section 3, Comparison of Measured Versus Predicted Reactivity Decrements Using SCALE, of the revised EPRI utilization report (Akkurt and Cummings, 2018).22

21 There are six burnups and three cooling times per benchmark.

22 In this demonstration, both Version V of the Evaluated Nuclear Data File/Brookhaven (ENDF/B-V) and ENDF/B-VII libraries are used.

Next, a process is described by which additional depletion code bias is determined by comparison of reference calculational benchmark reactivity decrement values to the reactivity decrement values derived from the example computer codes; the reference values are also tabulated in Appendix C of the EPRI benchmark report. In effect, inter-code comparisons are made between the previously validated depletion code and an NRC licensee's or applicant's depletion code, and any observed difference is accounted for as additional depletion code bias in the bias and uncertainty analysis as part of the CSA application.
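The inter-code comparison idea can be sketched as below. The decrement values, the sign convention, and the bounding selection are simplifying assumptions of this illustration, not the procedure prescribed by the utilization report.

```python
# Inter-code comparison sketch: differences between a licensee code's
# computed reactivity decrements and the reference (CASMO-based)
# benchmark decrements become an additional depletion code bias.
# All values are hypothetical (pcm).

reference_pcm = [4100, 7900, 11600, 15000]   # reference benchmark decrements
user_code_pcm = [4180, 7850, 11450, 14900]   # licensee-code decrements

differences = [u - r for u, r in zip(user_code_pcm, reference_pcm)]

# In this sketch, a code that predicts a *larger* decrement than the
# reference over-credits burnup worth (the non-conservative direction),
# so the largest positive difference is retained as the added bias:
additional_bias_pcm = max(0, max(differences))
```

Retaining the single worst case over all benchmark cases and burnups mirrors the bounding "worst bias of the 66 benchmark cases" selection discussed in the RAI 2a response below.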

The NRC staff's review of the EPRI utilization report covered:

1. How the 11 proposed benchmarks are generally representative of the PWR SFP environment,

2. How the sensitivity/uncertainty analysis was performed,23 and

3. How biases and uncertainties will be calculated and applied by NRC licensees and applicants in their SFP CSA applications.

Determination of Application-Specific Bias

In EPRI utilization report RAI 2a, the NRC staff asked how the 11 calculational benchmarks are sufficient to produce an application-specific depletion reactivity decrement bias - defined as the bias associated with a user's chosen depletion and criticality codes24 - that is consistent with the 10 CFR 50.68(b)(4) requirements. In the RAI 2a response, EPRI explains that the worst bias of the 66 benchmark cases (11 benchmarks at 6 different burnups) is used for the bias, which is a bounding approach that is conservative with respect to the development of a statistically based confidence interval, provided that the 11 benchmarks are sufficiently applicable to the population of all possible SFP storage configurations. The details of the General Response provided in the EPRI utilization report (discussed above in the Benchmarks for Quantifying Fuel Reactivity Depletion Uncertainty subsection titled Applicability Studies Via Ck Analysis) demonstrate that the area of applicability is all current PWR fuel assembly designs for applications with most rack designs, since all cases from the General Response exhibit similar reactivity sensitivities to nuclear data uncertainties.

33 34 As EPRI states in the EPRI utilization report RAI 7 response:

The General Response to [the EPRI utilization report] RAIs provides a similarity analysis to a range of rack and fuel designs and shows excellent agreement with non-flux trap rack designs and good agreement with flux trap designs with low burnup fuel. The criticality safety analyst can rely on the similarity analysis given in the general response and only needs to do further analysis if the rack or fuel is

23 The NRC staff is concerned with both the reactor and SFP environment analyses; however, SFP environment analyses (e.g., sensitivity of the 11 calculation benchmark biases and uncertainties described in the utilization report to the entire population of actual SFP configurations) are the subject of the EPRI utilization report, and are therefore of interest in this section.

24 This bias term is to be determined through the criticality code to be validated as part of the NRC licensee/applicant SFP CSA application.

1 significantly different than current racks and fuel. If there is a new rack or fuel 2 design significantly different [than] the current generation racks or fuels then the 3 analyst should confirm similarity or use alternate methods to establish a bias and 4 uncertainty for burned fuel in the spent fuel rack.

Finding

8 The NRC staff agrees that if a new rack or fuel design is significantly different from those 9 analyzed in the EPRI utilization report, then the analyst should confirm similarity or use alternate 10 methods to establish a bias and uncertainty for burned fuel storage in the SFP.

11 12 Given the above discussion, the NRC staff finds that:

13 14 1. The 11 calculational benchmarks are sufficient in number and diversity to be representative 15 of the population of SFP storage configurations, and 16 17 2. The use of a bounding approach for determination of the application-specific reactivity 18 decrement bias is consistent with the intent of 10 CFR 50.68(b)(4).

Expected NRC Licensee Application of the EPRI Utilization Report Process

Section 9.4, End-Users' Application of Experimental Reactivity Decrements, of the EPRI benchmark report summarizes the general process, based on the EPRI benchmarking activities discussed in the EPRI benchmark and utilization reports, that EPRI believes end-users should use to validate their application-specific depletion code for the purposes of crediting the reduced reactivity of depleted fuel in a SFP environment. The NRC staff generally finds this process, which is outlined in more detail in the EPRI utilization report, to be acceptable. End-users should also take note of the reactivity decrement determination process described in Appendix C of the EPRI benchmark report.

30 31 Additional implementation guidance is given in the EPRI utilization report RAI 8 response 32 regarding conservative treatment of reactivity decrement bias:

33 34 The most limiting bias is not merely the largest of the calculated biases but could 35 include perturbations off of case 3 when a number of these perturbations 36 simultaneously exist in the application. For example, the application could be 37 5 wt% enriched fuel run at 150% power. For this example, the application bias 38 would be the case 3 bias (4.25% U-235, 100% power) plus the difference 39 between case 3 and case 2 (5 wt% U-235, 100% power) plus the difference 40 between case 3 and case 11 (4.25 wt% U-235, 150% power). If any of the 41 differences were negative (i.e., non-conservative), then that difference would be 42 set to zero. This example was chosen to provide a clear explanation; however, 43 for the actual implementation, it is recommended to start with case 3 and then 44 conservatively add all the biases from all the deltas off of case 3 to determine a 45 single bounding bias for the range of benchmarks.

46 47 Some applications may have enrichment less than 3.25 wt% U-235 from first 48 core fuel, so extrapolation of the bias may be necessary. No general method for 49 extrapolation is provided. It is expected that the applicant will use a conservative 50 extrapolation consistent with their available margin. The extrapolation will be 51 reviewed by the NRC and can be used to judge the acceptability of the

1 extrapolation in the totality of the margin to criticality. In the Utilization report, 2 analysis of the trend with [increasing] enrichment produced higher biases with 3 [increasing] enrichment. Therefore, the bias from the highest enrichment would 4 be sufficient to cover enrichments below 3.25 wt% U-235.

5 6 Since application margin is part of the decision on the extrapolation method and 7 the amount of conservatism to add, no generic approach is proposed.

8 9 When applying reactivity differences between cases, it would be conservative to consider any 10 positive bias calculated, regardless of the Monte Carlo uncertainty. Not doing this would require 11 some evidence that reactivity differences are not statistically significant, which would require 12 consideration of the Monte Carlo uncertainty in support of this argument.
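The bias-combination rule quoted from the RAI 8 response reduces to a short calculation: start from the base-case bias and add each case delta, zeroing any negative (non-conservative) delta. The sketch below illustrates this rule with hypothetical bias values; the numbers are placeholders, not values from the EPRI reports.

```python
# Sketch of the bounding-bias combination described in the RAI 8 response.
# All numeric values below are hypothetical placeholders for illustration.

def bounding_bias(base_bias, delta_biases):
    """Add every non-negative delta to the base-case bias; negative
    (non-conservative) deltas are set to zero before summing."""
    return base_bias + sum(max(d, 0.0) for d in delta_biases)

# Example: case 3 is the base case; the deltas are (case 2 - case 3)
# and (case 11 - case 3), mirroring the 5 wt%, 150% power example above.
case3_bias = 0.40     # hypothetical base-case bias
delta_case2 = 0.10    # hypothetical (5 wt% U-235, 100% power) minus case 3
delta_case11 = -0.05  # hypothetical (4.25 wt% U-235, 150% power) minus case 3

bias = bounding_bias(case3_bias, [delta_case2, delta_case11])
# The negative delta is zeroed, so the bounding bias is 0.40 + 0.10.
```

In an actual implementation, all deltas off of case 3 would be accumulated this way to produce a single bounding bias for the range of benchmarks.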

13 14 4.0 LIMITATIONS AND CONDITIONS 15 16 1. Based on the RAI 7b response, EPRI indicated that it would be more appropriate to 17 address the issue of general method applicability directly in the NEI 12-16 guidance 18 document that references the EPRI benchmark report. However, the NRC staff 19 emphasizes the point made in the following statement in Section 10.3, Range of Fuel 20 Applications, of the initial revision of the EPRI benchmark report:

21 22 The results presented in [the EPRI benchmark] report are, strictly 23 speaking, applicable only to those fuel types included in the analysis, 24 namely: 1) 3.5 - 5.0% enrichment, 2) Westinghouse RFA fuel with IFBA 25 and WABAs, and 3) AREVA MarkBW fuel with LBP pins. For other fuel 26 types, additional analysis may be needed to demonstrate that results of 27 this study are [applicable] to those fuel types.

28 29 With the exception of reactors using operational strategies and fuel/core designs that 30 significantly differ from those of the EPRI benchmark report, method applicability 31 generally extends to all PWR fuel. For example, the applicability to fuel designs that use 32 gadolinia (Gd) as a burnable absorber would be questionable as the EPRI benchmark 33 report did not consider any fuel designs with Gd. Specifically, CASMO-5 Gd depletion 34 capability was not qualified. Additionally, there are no calculational benchmarks in the 35 utilization report that can qualify Gd depletion for the users application.

36 37 2. The NRC staff considers the EPRI utilization report calculational depletion benchmarks to 38 be sufficiently comprehensive. However, a similar limitation to Limitation 1 regarding 39 in-core flux map benchmarking of PWR fuel types is applied to the utilization report 40 calculational depletion benchmarks. That is, the calculational depletion benchmarks 41 presented in the EPRI utilization report are, strictly speaking, applicable only to the storage 42 of fuel types considered in the EPRI benchmark report stored under SFP storage 43 conditions that are similar to those considered in the similarity analyses supporting the 44 EPRI utilization report.25 For other fuel types or other SFP storage conditions, additional 25 Refer to the General Response given in the EPRI utilization report RAI responses for a description of the similarity analyses performed.

1 analysis may be needed to demonstrate that results of the EPRI benchmark report are 2 applicable to those fuel types.

3 4

5.0 CONCLUSION

5 6 The NRC staff has reviewed the EPRI benchmark and utilization reports, including supplemental 7 information, and the NRC staff finds that the reports provide a sufficient technical basis for the 8 determination of depletion code bias and uncertainty as part of a SFP criticality safety 9 uncertainty analysis application.

10 11

6.0 REFERENCES

12 Akkurt, H., and Cummings, K., 2018, EPRI Report 3002010614, Utilization of the EPRI 13 Depletion Benchmarks for Burnup Credit ValidationRevision 1, Electric Power Research 14 Institute, ADAMS Accession No. ML18088B395.

15 Broadhead, B. L., Rearden, B. T., Hopper, C. M., Wagschal, J. J., and Parks, C. V., 2004, 16 Sensitivity-and Uncertainty-Based Criticality Safety Validation Techniques, Nuclear Science 17 and Engineering 146, American Nuclear Society: 340-66.

18 Cummings, K., 2015a, Responses to Requests for Additional Information for EPRI 19 Reports 1022909 and 1022503 Referenced in NEI 12-16, ADAMS Accession 20 No. ML15061A351.

21 Holonich, J., 2015b, Updated Responses to Requests for Additional Information for EPRI 22 Report 1022503 Referenced in NEI 12-16, ADAMS Accession No. ML15139A074.

23 NEI, 2016a, Response to Request for Additional Information (RAI) Questions Regarding EPRI 24 Report 1025203, Utilization of the EPRI Depletion Benchmarks for Burnup Credit Validation, 25 and EPRI Report 1022909, Benchmarks for Quantifying Fuel Reactivity Depletion Uncertainty, 26 ADAMS Accession No. ML16104A332.

27 NEI, 2016b, White Paper on a Conservative Approach to Depletion Analysis for Spent Fuel 28 Pool Criticality, Nuclear Energy Institute, ADAMS Accession No. ML16272A233.

29 NEI, 2017, Supplementary Response to Request for Additional Information Regarding EPRI 30 Report 1022909, Benchmarks for Quantifying Fuel Reactivity Depletion Uncertainty, ADAMS 31 Accession No. ML18018A852.

32 DiGiovine, A.S., and Rhodes III, J.D., 2005, SIMULATE-3, Advanced Three-Dimensional 33 Two-Group Reactor Analysis Code, Studsvik Scandpower SSP-95/15 Rev. 3.

34 35 Dyer, J. E., 2013, Letter to Nuclear Energy Institute Regarding Fee Waiver Under Part 170, 36 ADAMS Accession No. ML13261A080.

37 Grandi, G. M., and Hagrman, D., 2007, Improvements to the INTERPIN Code for High Burnup 38 and MOX Fuel, Transactions-American Nuclear Society 97, American Nuclear Society: 614-15.

39 Holonich, J., 2014, Request for Additional Information Related to Benchmarks for Quantifying 40 Fuel Reactivity Depletion Uncertainty and Utilization of the EPRI Benchmarks for Burnup Credit 41 Validation, ADAMS Accession No. ML14238A517.

1 Hsueh, K., 2016a, Summary of June 8, 2016, Meeting with the Nuclear Energy Institute to 2 Discuss the Electric Power Research Institute Depletion Code Validation Approach, ADAMS 3 Accession No. ML16175A323.

4 Hsueh, K., 2016b, Summary of October 14, 2016, Meeting with the Nuclear Energy Institute to 5 Discuss EPRI Depletion Code Validation Approach, ADAMS Accession No. ML16335A107.

Lancaster, D., 2012, EPRI Report 1022503, Utilization of the EPRI Depletion Benchmarks for Burnup Credit Validation, Revision 0, Electric Power Research Institute, ADAMS Accession No. ML12165A456.

9 McCullum, R., 2013, Request for Exemption from NRC Fees to Review NEI 12-16, Guidance 10 for Performing Criticality Analyses of Fuel Storage at Light-Water Reactor Power Plants, and 11 Solicitation of Feedback on Pre-Submittal Draft, Dated January 2013, ADAMS Accession 12 No. ML13004A392.

13 Mennerdahl, D., 2014, Correlations of Error Sources and Associated Reactivity Influences, 14 Transactions-American Nuclear Society 110, American Nuclear Society: 292-94.

15 NEI, 2013a, Guidance for Performing Criticality Analyses of Fuel Storage at Light-Water 16 Reactor Power Plants, Revision 0, Nuclear Energy Institute, ADAMS Accession 17 No. ML130840163.

18 NEI, 2013b, Guidance for Performing Criticality Analyses of Fuel Storage at Light-Water 19 Reactor Power Plants, Revision 2, Nuclear Energy Institute, ADAMS Accession 20 No. ML130840163.

21 NRC, 2010, Staff Guidance Regarding the Nuclear Criticality Safety Analysis for Spent Fuel 22 Pools, ADAMS Accession No. ML110620086.

23 ORNL, 2009, SCALE: A Comprehensive Modeling and Simulation Suite for Nuclear Safety 24 Analysis and Design, Version 6.0.

25 ORNL, 2013, SCALE: A Comprehensive Modeling and Simulation Suite for Nuclear Safety 26 Analysis and Design, Version 6.1.2. Computer Program.

27 Rhodes, J., Smith, K., and Lee, D., 2006, CASMO-5 Development and Applications, Proc.

28 ANS Topical Meeting on Reactor Physics (PHYSOR-2006), American Nuclear Society.

Smith, K., and Gunow, G., 2014, PWR Fuel Reactivity Depletion Verification Using Flux Map Data, Electric Power Research Institute, http://www.epri.com/abstracts/Pages/ProductAbstract.aspx?ProductId=000000003002001948.

32 Smith, K., Tarves, S., Bahadir, T., and Ferrer, R., 2011, EPRI Report 1022909, Benchmarks 33 for Quantifying Fuel Reactivity Depletion Uncertainty, Revision 0, Electric Power Research 34 Institute, ADAMS Accession No. ML12165A457.

35 Smith, K., Tarves, S., Bahadir, T., and Ferrer, R., 2017, EPRI Report 3002010613, 36 Benchmarks for Quantifying Fuel Reactivity Depletion UncertaintyRevision 1, Electric Power 37 Research Institute, ADAMS Accession No. ML18088B397.

38

1 APPENDIX A: HETEROSCEDASTIC REGRESSION POINT-WISE TOLERANCE INTERVALS 2

Introduction

This appendix provides the details behind the derivation of tolerance interval K-factors for use with linear regressions when the error structure is heteroscedastic. Heteroscedastic errors require that weighted least squares be used for the parameter estimation. The derivation of the one-sided point-wise tolerance intervals involves the cumulative distribution function of the non-central t distribution. The derivations of the one-sided and two-sided K-factors are more complicated than for the standard tolerance intervals applied to a simple random sample. The tolerance intervals considered here are point-wise for given values of the independent variable and do not provide joint confidence. The derivation given below uses a quadratic regression model to illustrate the analysis. Extensions to other polynomial and nonlinear regression models are also possible. A similar problem and approach is considered in Myhre et al., 2009.26

Weighted Least Squares

Suppose the data $(x_i, y_i)$ may be modeled with a quadratic regression:

    $y_i = a + b x_i + c x_i^2 + e_i$    (1)

for $i = 1, 2, \ldots, n$, where $a$, $b$, and $c$ are unknown parameters and the Normally distributed random errors $e_i$ are independent with zero mean and variances equal to $\sigma^2 g(x_i)$ for a known function $g(\cdot)$ and an unknown parameter $\sigma^2$. Some common examples are $g(x_i) = x_i$ and $g(x_i) = x_i^2$.

Let the vector $Y$ contain the values $y_i$. Let the matrix $X$ have three columns: the first a column of ones, the second containing the values $x_i$, and the third containing the values $x_i^2$. Let $W$ be a diagonal matrix containing $1/g(x_i)$ as the diagonal elements. The weighted least squares estimates are:

    $\hat{\beta} = (X' W X)^{-1} X' W Y$

where the primes indicate the matrix transpose. The estimated covariance matrix associated with the weighted least squares estimates is:

    $\hat{\Sigma} = \hat{\sigma}^2 (X' W X)^{-1}$

where,

    $\hat{y}_i = \hat{a} + \hat{b} x_i + \hat{c} x_i^2$

and,

    $\hat{\sigma}^2 = \frac{1}{n-3} \sum_{i=1}^{n} \frac{(y_i - \hat{y}_i)^2}{g(x_i)}$.

Aspects of the Predictive Distribution

Consider the distribution of $y$ values for a future fixed value of $x$ assuming the model in Equation 1. The distribution is assumed to be Normal with mean $\mu = a + b x + c x^2$ and variance $\sigma^2 g(x)$. The mean is estimated as $\hat{\mu} = \hat{a} + \hat{b} x + \hat{c} x^2$ and the variance is estimated as $\hat{\sigma}^2 g(x)$. The variance of $\hat{\mu}$ is estimated as the quadratic form $s^2 = v' \hat{\Sigma} v$, where the vector $v = (1, x, x^2)$. The degrees of freedom associated with $\hat{\sigma}^2$ is $n - 3$.

One-Sided Tolerance Intervals

A general upper one-sided tolerance interval on the distribution of the previous section has confidence $\gamma \times 100\%$ of containing the lower $P \times 100\%$ of the distribution. The upper one-sided tolerance limit takes the form:

    $\hat{\mu} + K(x)\, \hat{\sigma} \sqrt{g(x)}$

The notation indicates that the one-sided K-factor depends on the value of $x$ (as well as on $\gamma$, $P$, and $n$). Because the mean estimator is Normally distributed and the variance estimator is distributed as $\sigma^2/f$ times a Chi-squared random variable with $f = n - 3$ degrees of freedom, it may be demonstrated that the K-factor $K(x)$ is:

    $K(x) = \frac{1}{\sqrt{\tilde{n}(x)}}\, t^{-1}\!\left(\gamma;\, n-3,\, z_P \sqrt{\tilde{n}(x)}\right)$

where $t(\cdot;\, f, \delta)$ is the cumulative distribution function of the non-central t distribution with $f$ degrees of freedom and non-centrality parameter $\delta$, and $z_P$ is the $P \times 100\%$ quantile of the standard Normal distribution. The value of the factor $\tilde{n}(x)$ is $\hat{\sigma}^2 g(x)/s^2$, which reduces to:

    $\tilde{n}(x) = \frac{g(x)}{v' (X' W X)^{-1} v}$

a function of the $x$ data and $x$, but not the $y$ data.

The same K-factor is used for a lower one-sided tolerance interval. Also, the K-factor $K(x)$ is not constant, nor is it linear in $x$.

Two-Sided Tolerance Intervals

The limits of the two-sided tolerance interval take the form:

    $\hat{\mu} \pm K(x)\, \hat{\sigma} \sqrt{g(x)}$

The computation of the exact two-sided K-factor is complicated. However, a simpler approximation usually suffices:

    $K(x) \approx z_{P'} \sqrt{\frac{f \left(1 + 1/\tilde{n}(x)\right)}{\chi^2_{\gamma, f}}}$

where $P' = (1 + P)/2$, $\chi^2_{\gamma, f}$ is the critical value of the Chi-squared distribution with degrees of freedom $f = n - 3$ that is exceeded with probability $\gamma$, and $\tilde{n}(x)$ is the effective $n$ for $x$:

    $\tilde{n}(x) = \frac{\hat{\sigma}^2 g(x)}{s^2}$

The interpretation of $\tilde{n}(x)$ is that we can treat $\hat{\mu}$ as if it were the average from a simple random sample of size $\tilde{n}(x)$.

If the variance is reduced by an estimate of the ground truth variance, that is, $s_*^2 = \hat{\sigma}^2 g(x) - \hat{\sigma}^2_{GT}$, then it is necessary to modify the factor:

    $\tilde{n}(x) = \frac{s_*^2}{s^2}$

and the degrees of freedom (using Satterthwaite's formula) accordingly.

26 Myhre, Janet, Daniel R. Jeske, Michael Rennie, and Yingtao Bi, (2009), Tolerance intervals in a heteroscedastic linear regression context with applications to aerospace equipment surveillance, International Journal of Quality, Statistics, and Reliability, 2009:1-8. doi:10.1155/2009/126283.
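The one-sided and two-sided K-factor formulas above can be exercised numerically. The following sketch is an illustration under stated assumptions, not the analysis code used for this evaluation: the simulated data, the random seed, and the choice $g(x) = x^2$ are all assumptions made for the example.

```python
# Sketch of the point-wise K-factors for a heteroscedastic quadratic
# regression with g(x) = x**2. Simulated data only; illustrative values.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Simulated data: y = a + b*x + c*x^2 + e, with sd(e) = sigma*x (g(x) = x^2).
n = 60
x = np.linspace(5.0, 60.0, n)
sigma = 14.0
y = 10.0 - 2.0 * x + 0.01 * x**2 + rng.normal(0.0, sigma * x)

X = np.column_stack([np.ones(n), x, x**2])
W = np.diag(1.0 / x**2)                      # weights = 1/g(x_i)
XtWX_inv = np.linalg.inv(X.T @ W @ X)
beta = XtWX_inv @ X.T @ W @ y                # weighted least squares estimates
resid = y - X @ beta
f = n - 3                                    # degrees of freedom
sigma2_hat = (resid**2 / x**2).sum() / f     # sigma-hat^2

def k_one_sided(x0, gamma=0.95, p=0.95):
    """One-sided K-factor K(x0) via the non-central t distribution."""
    v = np.array([1.0, x0, x0**2])
    n_eff = x0**2 / (v @ XtWX_inv @ v)       # effective n for x0
    delta = stats.norm.ppf(p) * np.sqrt(n_eff)
    return stats.nct.ppf(gamma, f, delta) / np.sqrt(n_eff)

def k_two_sided(x0, gamma=0.95, p=0.95):
    """Approximate two-sided K-factor (simple chi-squared approximation)."""
    v = np.array([1.0, x0, x0**2])
    n_eff = x0**2 / (v @ XtWX_inv @ v)
    chi2_crit = stats.chi2.ppf(1.0 - gamma, f)   # exceeded with probability gamma
    return stats.norm.ppf((1.0 + p) / 2.0) * np.sqrt(f * (1.0 + 1.0 / n_eff) / chi2_crit)

x0 = 30.0
mu_hat = beta @ np.array([1.0, x0, x0**2])
upper = mu_hat + k_one_sided(x0) * np.sqrt(sigma2_hat) * x0  # sqrt(g(x0)) = x0
```

Both K-factors exceed the corresponding standard Normal quantiles, reflecting the finite-sample penalty carried by the effective sample size.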

1 APPENDIX B: TOLERANCE INTERVAL CONFIRMATORY ANALYSIS RESULTS 2

The NRC's consultant, Pacific Northwest National Laboratory, analyzed the burnup/decrement bias data provided by EPRI using a heteroscedastic/generalized least squares approach to independently confirm the validity of EPRI's data analysis as described in Section 7 of the revised EPRI benchmark report. The focus was on producing reasonable one-sided 95/95 tolerance intervals on the decrement bias as a function of burnup.

8 The data within sub-batches was assumed to be correlated. These correlations required a 9 modification of the tolerance interval calculations described in detail in Appendix A.

10 The first attempt at modeling the correlation structure in the sub-batches involved a simple 11 random effects model (following Nichols and Schaffer, 200727). Examination of the data 12 suggested that this model was inappropriate. The second attempt at modeling the correlation 13 structure in the sub-batches involved a first-order autoregressive model, written as AR(1), which 14 was found to provide a better fit to the data.

The results of the analysis assuming a linear function for the average decrement bias as a function of burnup are shown in Figure 1. The correlated heteroscedastic error structure within each sub-batch was assumed to be an AR(1) process (with a scaling parameter of 0.83 and a single stationary variance for all sub-batches) multiplied by the burnups.

19 20 Figure 1. Results of the Linear Model Fitting.

27 Nichols, Austin and Schaffer, Mark, (2007), Clustered standard errors in Stata, United Kingdom Stata Users Group Meetings 2007, Stata Users Group, http://EconPapers.repec.org/RePEc:boc:usug07:07.
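The assumed within-sub-batch error structure can be illustrated with a short simulation. In the sketch below, only the 0.83 correlation parameter, the single stationary variance, and the multiply-by-burnup scaling come from the analysis described here; the burnup grid, sub-batch size, and random seed are illustrative assumptions.

```python
# Sketch of the assumed error structure: an AR(1) series with correlation
# parameter 0.83 and unit stationary variance, scaled by the stationary
# standard deviation and multiplied by the burnups (heteroscedastic).
import numpy as np

rng = np.random.default_rng(7)
phi = 0.83        # AR(1) correlation parameter from the fit
sd_stat = 14.15   # stationary standard deviation estimate (linear model)

def simulate_subbatch_errors(burnups):
    """Generate correlated decrement-bias errors for one sub-batch."""
    z = np.empty(len(burnups))
    z[0] = rng.normal(0.0, 1.0)
    for i in range(1, len(burnups)):
        # Innovation variance 1 - phi**2 keeps the series stationary
        # with unit variance at every burnup point.
        z[i] = phi * z[i - 1] + rng.normal(0.0, np.sqrt(1.0 - phi**2))
    return sd_stat * z * burnups   # scale by burnup: sd = 14.15 * burnup

burnups = np.array([10.0, 20.0, 30.0, 40.0, 50.0])  # illustrative grid
errors = simulate_subbatch_errors(burnups)
```

The scaling by burnup is what makes the error structure heteroscedastic: the standard deviation of the simulated error at burnup B is 14.15 times B.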

The estimate of the stationary standard deviation was 14.15, so the estimate of the standard deviation of the decrement biases for a burnup of B is 14.15 times B. The resulting tolerance intervals are nearly linear. The one-sided upper 95/95 tolerance interval fails to include 155 of the 2856 data points (5.427 percent). The one-sided lower 95/95 tolerance interval fails to include 89 of the 2856 data points (3.116 percent). These are reasonable values, as the intervals are constructed to contain 95 percent of the decrement biases with 95 percent confidence.

The results of the analysis assuming a quadratic function for the average decrement bias as a function of burnup are shown in Figure 2. The estimate of the stationary standard deviation was 14.14, so the estimate of the standard deviation of the decrement biases for a burnup of B is 14.14 times B. The one-sided upper 95/95 tolerance interval fails to include 146 of the 2856 data points (5.112 percent). The one-sided lower 95/95 tolerance interval fails to include 85 of the 2856 data points (2.976 percent). The estimate of the quadratic term is not very statistically significant (the t-value is only -1.66), so the linear model analysis is preferred.

14 15 Figure 2. Results of the Quadratic Model Fitting.

Figure 3 supports the assumed heteroscedastic error structure, because dividing the decrement biases by the burnups produces a substantially more homoscedastic-looking data plot. Some obvious structure in the data is not accounted for in the modeling. The lines of data are not explained by the sub-batching of the data. This type of structure is likely caused by the use of discrete burnup multipliers in the error deduction algorithm described in Section 6.4, Iteration Implementation, of the EPRI benchmark report.

22 The tolerance intervals may be viewed as being conservative as no attempt was made to 23 remove measurement error from the decrement bias errors.

1 2 Figure 3. Plot of the Decrement Bias divided by Burnup.

1 APPENDIX C: CONFIRMATION OF MEASURED COLD REACTIVITY BIAS AND 2 UNCERTAINTY 3

4 Introduction 5 The nature of the in-core measurement benchmarks (i.e., at hot in-core conditions instead of 6 cold in-rack conditions) led EPRI to develop a methodology to translate the depletion reactivity 7 worth uncertainty at hot in-core conditions to an effective worth at cold in-rack conditions 8 representative of a SFP environment. EPRI benchmark report28 Section 7, Measured HFP 9 Reactivity Bias and Uncertainty, is devoted to use of statistical analysis requiring many 10 assumptions to deduce the depletion worth bias and uncertainty as a function of burnup at hot 11 in-core conditions. Additionally, EPRI benchmark report Section 8, Measured Cold Reactivity 12 Bias and Uncertainty, is devoted to quantifying the effect of fuel temperature uncertainty during 13 fuel depletion and how this might propagate to cold in-rack conditions while also translating 14 cross-section uncertainties at cold in-rack conditions to those at hot in-core conditions again 15 requiring several assumptions to be made.

16 As a simple confirmatory check against the numerous assumptions in Sections 7 and 8 of the 17 EPRI benchmark report to translate the hot in-core depletion worth uncertainty to cold in-rack 18 depletion worth uncertainty applicable to SFP CSAs, two hypothetical SFP configurations were 19 created to compare CASMO-5 quantified hot in-core depletion worth uncertainty to an effective 20 cold in-rack depletion worth uncertainty directly without use of statistical analysis or fuel 21 temperature and cross-section uncertainty analysis.

22 Confirmatory Method 23 The idea is to transfer EPRI-generated nuclide concentrations from one axial node of a 24 SIMULATE-3 calculation to a potential SFP storage configuration. However, the EPRI-25 generated nuclide concentrations for all axial nodes from all SIMULATE calculations are 26 unavailable. Instead, EPRI has provided the burnup multipliers that serve to correct the 27 calculated sub-batch average burnups to a measured sub-batch average burnup. Therefore, 28 this analysis uses the effective measured sub-batch average burnup from SIMULATE to derive 29 a corresponding lattice depletion code model to determine representative measured sub-batch 30 average nuclide concentrations. Since CASMO-5 was unavailable for use to determine 31 measured sub-batch average nuclide concentrations, the NRC confirmatory SCALE/TRITON29 32 depletion code was used instead. Note that EPRI has shown in Table 3-1, Difference Between 33 Calculated and Measured Reactivity Decrements for EPRI Benchmarks with 100-Hour Cooling 34 Using ENDF/B-VII Cross Section Library, from the EPRI utilization report30 that using 28 ADAMS Accession No. ML18088B397.

29 SCALE Version 6.2 (April 2016) was used for all confirmatory calculations. The specific TRITON depletion sequence used was the T-DEPL sequence with the 252 group ENDF/B-VII nuclear data library collapsed to 56 groups. The specific KENO-VI criticality sequence used was the CSAS6 sequence with the 252 group ENDF/B-VII nuclear data library.

30 ADAMS Accession No. ML18088B395.

1 SCALE/TRITON with ENDF/B-VII cross-section data produces excellent agreement with 2 CASMO-5 over the range of relevant fuel burnups. Consequently, no correction has been 3 applied to account for code-to-code differences for the confirmatory analyses performed; 4 however, it is expected that this be done when applying EPRIs method in SFP CSA licensing 5 applications.

6 The SFP storage configuration chosen was arbitrary, but it is noted that a fuel design different 7 from those that form the basis of the data in the EPRI benchmark report is used.

8 The basic steps used to carry out the analyses are as follows:

9 1. Choose a data point from those presented in EPRI benchmark report Figure 7-1, 10 CASMO-5 Bias in Reactivity.

11 12 2. Specify the enrichment and SIMULATE-calculated average sub-batch burnup 13 corresponding to Step 1 in a SCALE/TRITON depletion input file.

14 3. Using the input file in Step 2, add the following specifications:

15 16 a. Borated (natural) water at 1500 ppm; the value is an arbitrarily chosen simplification 17 instead of using a letdown curve but it is valid.

18 19 b. Fuel temperature at 922 K; the value was chosen to approximate the burnup 20 averaged fuel temperature of the data given in Figure 8-1, Typical INTERPIN-4 21 Fuel Temperature Change With Burnup.

22 23 c. Moderator density at 0.654 grams per cubic centimeter and 598 K; the values were 24 arbitrarily chosen but are valid.

25 26 d. Westinghouse 14x14 fuel assembly design with 119 IFBA; this is an arbitrarily 27 chosen fuel design but it is valid.

28 29 4. Run the hot in-core depletion case; at completion, run a decay case for five days to 30 minimize the short-lived fission products and maximize SFP reactivity.

31 5. Transfer the fuel nuclide concentrations to a SCALE/KENO-VI SFP criticality input file, 32 which models:

33 34 a. A semi-infinite representative 2x2 stainless steel storage cell array without fixed 35 neutron absorber panels (i.e., modeled with periodic boundary conditions on the 36 x- and y- coordinate faces and vacuum boundary conditions on the z-faces).

37 38 b. Three out of four storage cells filled with depleted fuel using the transferred fuel 39 nuclide concentrations.

40 41 6. Run the cold in-rack criticality case modeled in Step 5.

1 7. Repeat Steps 2-6, but specify the average sub-batch burnup in Step 2 to be equal to the 2 calculated average sub-batch burnup multiplied by the burnup multiplier31 determined by 3 applying the methodology in Section 6.4, Iteration Implementation, of the EPRI 4 benchmark report. This is effectively the inferred measured average sub-batch burnup.

5 8. Using the same criticality model from Step 5, replace depleted fuel with fresh fuel with IFBA 6 removed and run the case.

7 9. Calculate the reactivity difference between Step 6 and Step 7. This is an estimate of the 8 depletion worth uncertainty.

9 10. Calculate the reactivity difference between Step 6 and Step 8. This is an estimate of the 10 depletion worth.

11 11. Divide the result of Step 9 by the result of Step 10. This is an estimate of the depletion 12 worth uncertainty as a percentage of the depletion worth.
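Steps 9 through 11 amount to simple reactivity arithmetic. The sketch below applies them to the Case 1 and Case 2 k-effective values reported in the Results section below; it is an illustration of the bookkeeping, not the SCALE calculations themselves.

```python
# Steps 9-11 of the confirmatory method: reactivity differences in delta-k,
# and the depletion worth uncertainty as a percentage of the depletion worth.

def decrement_columns(k6, k7, k8):
    """k6: depleted fuel (Step 6); k7: burnup-multiplier case (Step 7);
    k8: fresh fuel (Step 8)."""
    step9 = k6 - k7                    # depletion worth uncertainty (delta-k)
    step10 = k6 - k8                   # depletion worth (delta-k)
    step11 = 100.0 * step9 / step10    # uncertainty, percent of depletion worth
    return step9, step10, step11

# k-effective values from Table 1 in the Results section.
case1 = decrement_columns(1.02132, 1.02596, 1.13006)
case2 = decrement_columns(0.86958, 0.87857, 1.13006)
print(round(case1[2], 3))  # 4.267
print(round(case2[2], 3))  # 3.451
```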

If the assumptions of Sections 7 and 8 of the EPRI benchmark report are valid, the estimate of the uncertainty calculated in Step 11 is expected to be similar to (ideally, bounded by) that specified by EPRI in Table 10-2, Measured CASMO-5 Cold Reactivity Decrement Biases and Tolerance Limits Expressed as Percentage of Absolute Value of Depletion Reactivity Decrement. This is verified as discussed in the Observations section below.

18 Additionally, it is expected that the hot in-core depletion worth uncertainty without temperature 19 and cross-section uncertainty correction for the two sub-batches analyzed will be approximately 20 the same as the nodal cold in-rack depletion worth uncertainty in order to substantiate EPRIs 21 claim of similarity between hot in-core conditions and cold in-rack conditions. This is also 22 verified and discussed in the Observations section below.

23 Cases 24 The confirmatory method described above was applied to two cases taken from the EPRI 25 depletion benchmark data - one at a relatively low burnup and one at a relatively high burnup:

1. Calculated average sub-batch burnup equal to 15.913 GWd/MTU with an initial enrichment of 3.86 wt% U-235 and a burnup multiplier equal to 0.95. For this case, the EPRI-determined reactivity decrement bias is equal to 474 pcm. That is, in EPRI benchmark report Figure 7-1, this represents a single data point at a burnup of 15.913 GWd/MTU and a reactivity decrement bias of 474 pcm.

2. Calculated average sub-batch burnup equal to 42.625 GWd/MTU with an initial enrichment of 3.81 wt% U-235 and a burnup multiplier equal to 0.96. For this case, the EPRI-determined reactivity decrement bias is equal to 977 pcm.

31 Burnup multipliers and corresponding reactivity decrement biases for the various sub-batches were provided in Attachment 2 to a letter dated January 9, 2017 (ADAMS Accession No. ML18018A852).

1 Results 2 The table below contains the results of applying the confirmatory method described above to the 3 two cases described above. The values in columns Step6 through Step8 are the calculated 4 k-effective values from the SCALE/KENO-VI runs, those in columns Step9 and Step10 are in 5 units of k, and Step11 is the depletion code uncertainty in units of percent reactivity 6 decrement.

Table 1: Confirmatory Analysis Results

Case   Step6     Step7     Step8     Step9      Step10     Step11
1      1.02132   1.02596   1.13006   -0.00464   -0.10874   4.267059
2      0.86958   0.87857   1.13006   -0.00899   -0.26048   3.451321

In the EPRI utilization report, EPRI defines burnup-dependent uncertainty data in terms of percent reactivity decrement in Table 4-1, Measured Reactivity Decrement Biases and Tolerance Limits Expressed as Percentage of Depletion Reactivity Decrement, which is based on Table 10-2 in the EPRI benchmark report. The bias and uncertainty data are reproduced in Table 2 below. EPRI notes that the bias term in Table 10-2 of the EPRI benchmark report has been added to the measured reactivity decrement values tabulated in Tables C-3 to C-5 of the EPRI benchmark report, which are used to determine application-specific bias; therefore, an end-user of the EPRI utilization report does not need to consider this bias term separately.

However, it is included in Table 2 below to allow for appropriate comparisons in this analysis.

Table 2: EPRI-Defined Bias and Uncertainty (percent of depletion reactivity decrement)

Burnup (GWd/MTU)   Bias   Uncertainty
      10           0.58      3.05
      20           0.50      2.66
      30           0.38      2.33
      40           0.23      2.12
      50           0.05      1.95
      60           0.13      1.81

Finally, the results of applying four methods to each case are presented:

1. The EPRI-defined percent reactivity decrement values (labeled EPRI),

2. The NRC-adjusted EPRI-defined percent reactivity decrement values based on the linear-fit confirmatory analysis described in Appendix B (labeled NRC-adj),

3. The NRC confirmatory analysis result produced using the confirmatory method outlined in this appendix (labeled NRC-conf), and

4. The historical Kopp 5 percent method.
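As a concrete example of how the Table 2 data feed the comparison, the EPRI bias and uncertainty at the two case burnups (15.913 and 42.625 GWd/MTU) can be obtained by linear interpolation between tabulated points. The sketch below assumes linear interpolation; the helper name is illustrative:

```python
# Table 2 data: (burnup in GWd/MTU, bias %, uncertainty %)
TABLE_2 = [
    (10, 0.58, 3.05),
    (20, 0.50, 2.66),
    (30, 0.38, 2.33),
    (40, 0.23, 2.12),
    (50, 0.05, 1.95),
    (60, 0.13, 1.81),
]

def interp_bias_uncertainty(burnup):
    """Linearly interpolate Table 2 at the given burnup (assumed method)."""
    for (b0, bias0, unc0), (b1, bias1, unc1) in zip(TABLE_2, TABLE_2[1:]):
        if b0 <= burnup <= b1:
            frac = (burnup - b0) / (b1 - b0)
            return (bias0 + frac * (bias1 - bias0),
                    unc0 + frac * (unc1 - unc0))
    raise ValueError("burnup outside tabulated range")

# Sub-batch burnups for Case 1 and Case 2:
for bu in (15.913, 42.625):
    bias, unc = interp_bias_uncertainty(bu)
    print(f"{bu} GWd/MTU: bias {bias:.2f}%, uncertainty {unc:.2f}%")
```

To two decimal places this reproduces the EPRI bias and uncertainty values (0.53/2.82 for Case 1 and 0.18/2.08 for Case 2) appearing in Table 3.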

In order to compare methods (1) and (2) to (3) and (4), the percent reactivity decrements must be put in terms of a net reactivity effect applied to a typical SFP CSA. All methods have an uncertainty component that would be added, typically by the root-sum-square (RSS) method, to establish a net uncertainty term. Therefore, to simulate addition by RSS, the uncertainty values are divided by a factor to estimate the bottom-line reactivity impact in the SFP CSA. Based on experience in past

licensing applications, the depletion code uncertainty contributes approximately a tenth of the net uncertainty at low burnup and about a fourth at higher burnup. Therefore, Case 1 uncertainties are reduced by a factor of 10 and Case 2 uncertainties are reduced by a factor of 4; the respective terms are then added directly to the corresponding bias components, since bias components have a direct effect on the bottom-line reactivity impact (this is the value in the RelImpact column of Table 3). The AbsImpact column is the RelImpact column multiplied by the depletion worth from Table 1.
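The approximation described above can be sketched as follows; the divisors of 10 and 4 simulate the RSS combination, and the depletion worths are the Step10 magnitudes from Table 1 converted to pcm. The names and structure are illustrative, not taken from the reports:

```python
# Per case: depletion worth in pcm (from |Step10| in Table 1) and the factor
# by which the uncertainty is reduced to simulate RSS combination.
CASES = {1: (10874, 10), 2: (26048, 4)}

def net_impact(case, bias_pct, unc_pct):
    """Return RelImpact (percent of depletion worth) and AbsImpact (pcm)."""
    worth_pcm, rss_factor = CASES[case]
    rel = bias_pct + unc_pct / rss_factor   # bias adds directly to the reduced uncertainty
    return rel, rel / 100.0 * worth_pcm

# Example: Case 2 with the NRC-adjusted bias and uncertainty.
rel, abs_pcm = net_impact(2, 0.54, 3.03)
print(f"RelImpact {rel:.2f}%, AbsImpact {abs_pcm:.0f} pcm")
```

This example reproduces the Case 2 NRC-adj row of Table 3 (1.30 percent and 338 pcm).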

Table 3: Net SFP CSA Impact in Terms of Percent of Depletion Worth

Case   Method     Bias   Uncertainty   RelImpact   AbsImpact (pcm)
 1     EPRI       0.53      2.82          0.81           88
 1     NRC-adj    0.17      3.53          0.52           57
 1     NRC-conf   0.00      4.27          0.43           46
 1     Kopp 5%    0.00      5.00          0.50           54
 2     EPRI       0.18      2.08          0.70          182
 2     NRC-adj    0.54      3.03          1.30          338
 2     NRC-conf   0.00      3.45          0.86          225
 2     Kopp 5%    0.00      5.00          1.25          326

Observations

The main observations to highlight, which support verification of the assumptions in Sections 7 and 8 of the EPRI benchmark report, are:

1. The hot in-core depletion worth uncertainty without temperature and cross-section uncertainty correction for the two sub-batches analyzed was seen to be approximately the same as the nodal cold in-rack depletion worth uncertainty.32

2. The net SFP CSA bias plus uncertainty determined from the confirmatory method described in this appendix is either bounded by EPRI's method or is comparable to it.

3. All four methods give approximately the same net SFP CSA bias plus uncertainty; refer to the AbsImpact column of Table 3, which is the absolute magnitude of the impact on the SFP CSA bias plus total uncertainty.

These observations support EPRI's recommended burnup-dependent bias and uncertainty values as provided in Table 4-1 of the EPRI utilization report.

32 The hot in-core depletion worth uncertainty for Cases 1 and 2, calculated by EPRI using SIMULATE-3, is 474 pcm and 977 pcm, respectively. The cold in-rack depletion worth uncertainty for Cases 1 and 2, taken from column Step9 of Table 1, is 464 pcm and 899 pcm, respectively.