ML14238A517

RAIs for EPRI Reports 1022909 and 1025203 Referenced in NEI 12-16 Rev 2
ML14238A517
Person / Time
Site: PROJ0669
Issue date: 09/22/2014
From: Joseph Holonich
Licensing Processes Branch (DPR)
To: McCullum R
Nuclear Energy Institute
Holonich J, DPR/PLPB, 415-7297
References
1022909, 1025203, NEI 12-16, Rev 2


Text

September 22, 2014 Rod McCullum, Director Used Fuel Programs Nuclear Energy Institute 1201 F Street, NW, Suite 1100 Washington, DC 20004

SUBJECT:

REQUEST FOR ADDITIONAL INFORMATION RELATED TO "BENCHMARKS FOR QUANTIFYING FUEL REACTIVITY DEPLETION UNCERTAINTY" AND "UTILIZATION OF THE EPRI DEPLETION BENCHMARKS FOR BURNUP CREDIT VALIDATION"

Dear Mr. McCullum:

By letter dated June 6, 2012 (Agencywide Documents Access and Management System (ADAMS) Accession No. ML12165A455), the Electric Power Research Institute submitted two reports, "Benchmarks for Quantifying Fuel Reactivity Depletion Uncertainty" and "Utilization of the EPRI Depletion Benchmarks for Burnup Credit Validation." Both of these reports support the ongoing revision of NEI 12-16, "Guidance for Performing Criticality Analyses of Fuel Storage at Light-Water Reactor Power Plants." Upon review of the information provided, the NRC staff has determined that additional information is needed to complete the review.

In an email dated September 10, 2014, Mr. Kristopher Cummings, representing the Nuclear Energy Institute, and I agreed that the NRC staff will receive your response to the enclosed Request for Additional Information (RAI) questions within 75 days of the date of this letter.

If you have any questions regarding the enclosed RAI, please contact me at (301) 415-7297.

Sincerely,

/RA/

Joseph J. Holonich, Sr. Project Manager Licensing Processes Branch Division of Policy and Rulemaking Office of Nuclear Reactor Regulation Project No. 689

Enclosures:

RAI questions

ML14238A517 *concurred via email NRC-088

OFFICE | DPR/PLPB | DPR/PLPB* | DSS/SRXB | DPR/PLPB | DPR/PLPB
NAME | JHolonich | DHarrison | UShoop | AMendiola | JHolonich
DATE | 09/12/2014 | 09/04/2014 | 09/17/2014 | 09/18/2014 | 09/22/2014

Request for Additional Information Specific to the Electric Power Research Institute Report 1022909, "Benchmarks for Quantifying Fuel Reactivity Depletion Uncertainty"

1. Provide Studsvik Report, SSP-11/409-C Rev. 0 for review.
2. A more rigorous statistical analysis of variance may provide estimates for the various sources of variability (i.e., reaction rate measurement uncertainties, modeling approximations, and uncertainties in assembly reactivities as listed at the bottom of page 1-2) in the differences between predicted and measured assembly reactivities.

Why wasn't a more rigorous statistical analysis of variance used to further explore the listed sources of variability?
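For illustration only, the kind of decomposition contemplated here can be sketched as a one-way analysis of variance that partitions the scatter in predicted-minus-measured assembly reactivities into between-batch and within-batch components. The sub-batch labels and reactivity values below are hypothetical and are not taken from EPRI Report 1022909:

```python
# Hypothetical one-way ANOVA sketch: partition scatter in predicted-minus-
# measured assembly reactivities (pcm) into between-batch and within-batch
# components. Groupings and values are illustrative only.

groups = {  # sub-batch label -> reactivity decrement errors (pcm)
    "batch_A": [120.0, 150.0, 135.0],
    "batch_B": [210.0, 190.0, 205.0],
    "batch_C": [90.0, 110.0, 100.0],
}

all_vals = [v for vals in groups.values() for v in vals]
grand_mean = sum(all_vals) / len(all_vals)

# Between-group sum of squares: variability attributable to batch effects.
ss_between = sum(
    len(vals) * ((sum(vals) / len(vals)) - grand_mean) ** 2
    for vals in groups.values()
)
# Within-group sum of squares: residual (measurement/modeling) variability.
ss_within = sum(
    (v - sum(vals) / len(vals)) ** 2
    for vals in groups.values()
    for v in vals
)

df_between = len(groups) - 1
df_within = len(all_vals) - len(groups)
f_stat = (ss_between / df_between) / (ss_within / df_within)
print(round(f_stat, 1))  # 55.4 for these illustrative data
```

A large F statistic, as in this fabricated example, would indicate that batch-to-batch effects dominate the residual scatter; the actual sources listed at the bottom of page 1-2 would each require their own factor in a fuller analysis.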

3. In addition to the operating conditions, two differences between core and spent fuel pool (SFP) reactivity analyses are: 1) SFPs are commonly filled with regions of highly burned fuel assemblies that were in low power regions of the core during their last cycle (and thus in low importance areas of the core), and 2) the reactivity of the SFP is often dominated by the reactivity at the axial ends of the assembly (low burnup rates and lower importance to the overall core reactivity than the core center). Provide additional information to assure that using all of the sub-batch reactivity decrements does not dilute or dominate the bias and/or bias uncertainty of the in-core assembly locations that are important to SFP criticality safety analyses. The following examples highlight portions of the document where this question is relevant.
a. Assembly ends may have power/burnup that is approximately 20 percent of the core average (approximately 10 gigawatt-days per metric ton of uranium (GWd/MTU)). These areas are of lower interest for core licensing analyses, therefore, a higher level of uncertainty or error may be accepted by operations.
i. Core measurements are used to verify that the reactor is operating within its limits, thus the measurements can have a conservative bias with respect to reactor operations but not with respect to SFP criticality safety analyses. Provide additional information for not applying regional weighting of reactivity decrement error based on regional importance in SFP criticality analyses as a function of burnup.

ii. Since Table 5-7 only gives mean nodal root mean squared (r.m.s.) differences, provide a histogram of calculated minus experimental values over the entire spatial domain sampled. What locations correspond to the histogram tails? Explain the source(s) causing disagreement at the histogram tails.

Enclosure

b. In Section 7.2 it is stated that the sensitivity filter does not largely change the data shape. A more quantitative assessment is requested as to why the use of the sensitivity filter does not affect reactivity decrement error bias and bias uncertainty results. Furthermore, in Section 7.2, it is stated that reduction of data scatter is the primary motivation for applying sensitivity screening. Reduction of scatter is not a valid argument for discarding data. If sensitivity screening is to be used, provide appropriate justification for discarding data.
c. Section 7.2 states that reaction rate sensitivities to sub-batch burnups must be sufficiently large to overcome measurement uncertainties, or the signals used to deduce sub-batch reactivity errors will not be meaningful. Given that assemblies stored in the SFP will likely be discharged from these low sensitivity locations, provide additional discussion as to why the burnup characteristics and bias and bias uncertainty in these low power regions are expected to be bounded by the bias and bias uncertainty based on average sub-batch burnups.
d. Section 6.3 states that cases with low sub-batch sensitivity are removed from the analysis. Later in the document (pg. 8-14) it is concluded that "the post-minimization screening has not impacted the regression results - within the uncertainty estimates." Provide additional discussion on this conclusion explaining why the excluded data are not applicable to the benchmark analyses and provide the impact on the bias and bias uncertainty.
e. Explain the cause(s) of the reactivity decrement errors near the red 5% decrement curves in Figure 8-4 and Figure 8-5.

4. Nodal methods with homogenized assembly approximations generally have difficulties with streaming effects at the assembly ends. To what distance from the assembly ends do the non-homogeneous streaming effects propagate into the active core region? Is there a resultant restriction on the applicability of the benchmark near the assembly ends or is there a different bias and bias uncertainty that would be applicable for calculations at the assembly ends?
5. The basic approach in this report entails extrapolation of a core's hot full power (HFP) calculations/measurements to SFP conditions. In addition to temperature (addressed in Section 8), summarize the parameters considered in this extrapolation (e.g., absence of short-lived fission products and neutron spectral changes) and their individual impact on the bias and uncertainty applicable to SFP conditions.
6. The best fit for the reactivity decrement error versus sub-batch burnup will have uncertainty in fit due to internal sub-batch variation between fuel assemblies (especially when super-batches are formed).

Section 3.6 concludes that residual r.m.s. differences are caused by factors other than sub-batch reactivity. How has the internal variability within each of the sub-batches and super-batches been included in the uncertainty of the reactivity decrement error data?

7. The bias and bias uncertainty have been developed for an agglomerated dataset over the area of applicability. The benchmarks generated isolate some area of applicability parameters (e.g., pin size, burnable absorber, and power density). Consider the following regarding data applicability.
a. Section 2.3 states, "one way of viewing reactor data is that they provide a great many instances of the depleted fuel criticals that we desire."

Clearly define the area of applicability for these depleted fuel criticals. As stated in NUREG/CR-6698, Guide for Validation of Nuclear Criticality Safety Calculational Methodology, the purpose of defining the area of applicability is to verify that the neutron physics will not be unduly affected by parameters not accounted for in experiments. The guidance provided in Section 2.5 of NUREG/CR-6698 identifies important considerations for formally defining the area of applicability.

b. Section 2.3 states that the benchmark study is based on 44 cycles of measured reactor data from four Duke Energy PWRs [pressurized water reactors]. What percentage of all PWR reactor-years does this data cover and how representative is this data of all PWR fuel operation considering different fuel design types and operational characteristics? Discuss any operational outliers (i.e., operational characteristics that might be considered atypical or unexpected) and provide the corresponding reactivity decrement errors for these outliers and the relative magnitude of the errors as a function of burnup relative to the overall data.
c. Section 7.5 lists formal statistical conditions on the data that are not satisfied.

Provide an assessment of the significance of not meeting these conditions and why alternate statistical methods that account for this type of data were not used.

Such analyses could be used to develop the importance, trends, and bias and bias uncertainty for the parameters in the area of applicability. Specifically address the potential impacts and/or limitations on bias and bias uncertainty for the benchmarks generated. Alternatively, provide additional justification for the general applicability of the bias and bias uncertainty summarized in Table 7-1 by performing parameter-specific trending analysis for each parameter in the area of applicability.

d. In Section 7.5, the CASMO-SIMULATE reactivity decrement calculations were identified as not being sensitive (less than 200 pcm) to soluble boron, fuel enrichment, and burnable absorbers.
i. Given that the resulting uncertainty in the decrement error is on the order of 300-500 pcm, explain why calculation sensitivities of this magnitude were dismissed.

ii. What is the basis for using quadratic data fits versus linear data fits?

iii. Discuss the statistical process used to show that reactivity decrement sensitivity to the studied parameters is less than 200 pcm (percent millirho).

iv. In Section 7.7, what is the basis for the assumed conversion constant of 9 pcm/ppm used to convert boron error, in parts per million (ppm), to pcm?

8. Section 3.2 discusses flux map measurements. How are flux measurement errors accounted for in this work?
9. Section 3.3 explains how reactivity decrement errors are captured by attempting to separate the spatially-dependent reactivity decrement errors (characterized in this work as the depletion uncertainty) from the spatially-independent reactivity errors. This benchmark work seems to focus only on capturing the spatially-dependent component of the reactivity error while neglecting the spatially-independent component, further evidenced by the fact that the r.m.s. errors are minimized, but non-zero. In addition to not accounting for flux map measurement error, not accounting for these residual biases seems to result in only a partial benchmarking of the CASMO/SIMULATE tools.

Explain how these residual biases have been accounted for or explain why it is appropriate to ignore them?

10. Regarding Figure 3-4 and all other figures like it, do the curves correspond to the left y-axis and the points correspond to the right y-axis? It is confusing to have the curves on the plots if they aren't numerically tied to one of the y-axes. Why don't the curve minima correspond to the points?
11. The iteration implementation discussed in Section 6.4, Step 6.b. describes setting the burnup multiplier to a value of 1.0 if the number of assemblies in the sub-batch is less than 12. At this point, have the super-batches discussed in Section 6.1 already been defined?

It would seem that this check is unnecessary if the super-batches were already created since there wouldn't be any sub-batches that exist with less than 12 assemblies. Is this implying that not all sub-batches with less than 12 assemblies are used to create a super-batch? If data is being excluded, it should be explicitly stated and characterized.

12. In Section 7.1, the following statements are made:

no attempt will be made to quantify the reactivity decrement biases or uncertainties for burnups less than 10 GWd/MTU. One should note that since reactivity decrement biases and uncertainties at zero burnup are by definition zero, it should be easy to estimate reactivity decrement biases and uncertainties in this range.

If there is no attempt to quantify reactivity decrement biases or uncertainties for burnups less than 10 GWd/MTU, this implies the validation study is limited to burnup credit of greater than 10 GWd/MTU. Is this the intent of the statement quoted above? If not, provide clarification for the applicable burnup range and justification for the entire range based on the data that was used in the validation study.

13. In Section 7.5, the reactivity decrement error data shows an increase in error with batch exposure. What type of regression algorithm was used to incorporate the assumed linear change of variance of the reactivity decrement errors with respect to batch burnup and what is the basis for assuming that the variance grows linearly with burnup?
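As an illustration of one standard way such an assumption could be encoded (this is not asserted to be the report's actual algorithm), a weighted least-squares fit with weights inversely proportional to burnup reflects a variance that grows linearly with burnup. The burnup and error values below are synthetic:

```python
# Weighted least-squares sketch: fit decrement error vs. burnup assuming
# Var(error) grows linearly with burnup, i.e. weights w_i = 1 / B_i.
# Data are synthetic and illustrate only the mechanics of such a fit.

burnup = [10.0, 20.0, 30.0, 40.0, 50.0]      # GWd/MTU
error = [110.0, 180.0, 330.0, 390.0, 520.0]  # pcm

w = [1.0 / b for b in burnup]  # heavier weight on low-burnup points

# Weighted slope for the model error = m * burnup (forced through zero,
# mirroring the constraint questioned in item 14 below).
num = sum(wi * b * e for wi, b, e in zip(w, burnup, error))
den = sum(wi * b * b for wi, b in zip(w, burnup))
m = num / den
print(round(m, 2))  # 10.2 pcm per GWd/MTU for these synthetic data
```

Whether the report's regression used this weighting, an explicit variance model, or something else is precisely what the question asks to be documented.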
14. In Section 7.5, it does not seem appropriate to force the regression fit to go through zero pcm at zero burnup. Theoretically, the uncertainty due to depletion should be zero at zero burnup, but if the data doesn't support this, it might be indicative of an inherent bias in the code that should be accounted for. Part of code validation is determining the ability of the code (and associated data libraries) to predict reality, which in this case, would include any inherent code bias (and bias uncertainty) at zero burnup (especially considering this work does not benchmark the ability of CASMO/SIMULATE to explicitly calculate isotopic number densities but instead attempts to infer only the reactivity associated with isotopic number density uncertainties). Provide appropriate justification for forcing the regression fit to pass through zero pcm at zero burnup.
15. Regarding Section 7.5, it is not acceptable to claim that the true uncertainty of the regression fit lies between the confidence and prediction intervals, since those intervals seem to be based on the assumptions in the bulleted list on page 7-15, which have not been met. Since the necessary conditions have not been met, conclusions that depend on those conditions being met cannot be formed.

Furthermore, other statistical methods may indicate that the true uncertainty of the regression fit is greater than the prediction intervals presented in the regression analysis figures shown in Section 7.5. Provide appropriate justification for the last sentence in Section 7.5 or revise the sentence accordingly.

16. Calculation of the error of the regression fit, as described in Section 7.6, is essentially based on introducing a significant known deficiency and then comparing the regression fit of the deficient reactivity decrement data to a known deficiency as a function of burnup. It is then inferred that the reactivity decrement error uncertainty sensitivity to any large error in the code will always result in small regression fit uncertainty changes that are far less than the prediction interval. However, this conclusion is based on a single known deficiency for a single fuel type depleted with soluble boron at a specific concentration. There are theoretically many deficiencies that are unknown, so how does analyzing a single case give confidence that the regression fit uncertainty will always be bounded by a value of 250 pcm, which is independent of burnup? Since determining the regression fit uncertainty based on a single scenario is questionable and does not provide reasonable assurance that the regression fit uncertainty will be bounded with 95 percent probability, at a 95 percent confidence level consistent with the requirements of Title 10 of the Code of Federal Regulations (10 CFR) 50.68, revise the discussion on regression fit uncertainty accordingly. As mentioned in RAI-3.e., the NRC staff is concerned with accounting for the reactivity decrement data furthest away from the regression fit.
17. Are the Table 8-7 and Table 8-8 uncertainty values taken directly from TSUNAMI-IP output files?
18. In Section 8.6, it is assumed that the HFP-to-cold uncertainty changes are equally applicable to CASMO-4, CASMO-5, and SCALE 6 data. Why is this an appropriate assumption?
19. Regarding Appendix B reactivity benchmark specification descriptions (pp. B-3 to B-14):
a. Revise the descriptions to include units for all specifications.
b. Some descriptions contain missing sections (e.g. Structural Material Description and Coolant Description for Section B.2); revise the descriptions accordingly so that each description stands alone.
c. The title for Case 9 in Section B.10 is ambiguous. For clarity, include a more complete description explaining the purpose of each benchmark. Also, explain why the eleven benchmarks proposed provide sufficient coverage to validate all PWR depletion analyses for any SFP storage configuration.

Request for Additional Information Specific to the Electric Power Research Institute Report 1025203, "Utilization of the EPRI Depletion Benchmarks for Burnup Credit Validation"

1. In Section 2, it is stated that, "In particular, it is important to assure that the neutron energy spectrum for the critical system is bounded by the neutron energy spectrum in benchmarks used for the determination of kd." With the large variation in possible spent fuel pool (SFP) configurations, how is it ensured that the neutron energy spectrum in the benchmarks used for the determination of kd is similar to (or bounds) the neutron energy spectrum for all possible critical systems?
a. In order to justify the bias and bias uncertainty analysis based on the 3-D reactor environment, there should be sufficient similarity to a sufficient range of 3-D SFP environments (rather than 2-D). If sufficient similarity does not exist, then applying depletion bias and bias uncertainty from the reactor benchmarks to the SFP environment becomes questionable.

A limited similarity assessment has been performed in Electric Power Research Institute (EPRI) Report 1022909, "Benchmarks for Quantifying Fuel Reactivity Depletion Uncertainty," looking at correlation coefficients (or ck values) to justify the treatment of cross-section uncertainties when going from hot full power (HFP) in-core conditions to cold in-rack SFP conditions. How do ck values compare between the 3-D reactor environment and the 3-D SFP environment and how sensitive are these ck values to differences in the 3-D reactor environment, differences in the 3-D SFP environment, or both?

b. In Table 8-4 of EPRI Report 1022909, there is only a single spent nuclear fuel rack where ck relative to HFP in-core conditions is calculated. This rack is described as "a simplified uniform rack...with a 0.0625 cm thick borated aluminum poison sheet having a width of 19 cm, and a B-10 areal density of 0.006 g/cm2." What is the ck sensitivity to different areal density, poison width, poison thickness, soluble boron, temperatures, low power density (rather than high), etc.? At what ck would the bias and bias uncertainty estimates begin to break down in terms of applicability?
2. The suggested bias uncertainty values are not reported as being based on a 95 percent confidence interval that bounds 95 percent of the population. Provide additional information/discussion on the confidence interval associated with the reported bias values.
a. The 11 supplied independent benchmarks in EPRI Report 1025203 are not sufficient in number or diversity to establish a statistically based confidence interval that bounds 95 percent of all reactivity decrement error bias with 95 percent confidence. Discuss the limitations or acceptance criteria on the accuracy and precision of the calculated results relative to the benchmark results that would ensure that a licensee or applicant who would use this methodology will satisfy the regulatory requirements of Title 10 of the Code of Federal Regulations Part 50, Section 50.68 (10 CFR 50.68).
b. A dependency in the bias is reported with respect to specific power - 38.1 Watts/gram (W/g) and 57.2 W/g. The assembly ends may have a specific power well below 38.1 W/g; however, the provided benchmarks do not cover the power range necessary to assess a calculation's performance at lower specific powers.

Provide additional benchmarks at fuel conditions important to SFP criticality safety analyses to demonstrate applicability within the specific power range of concern.
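For reference, under a purely distribution-free framing (one reasonable reading of the 95/95 criterion; the report may intend a parametric treatment instead), the smallest sample whose maximum serves as a one-sided tolerance limit bounding 95 percent of the population with 95 percent confidence satisfies 0.95^n <= 0.05:

```python
# Distribution-free sample-size check: smallest n such that the sample
# maximum is a one-sided 95/95 tolerance limit, i.e. 0.95**n <= 0.05.
# This framing is an illustrative assumption, not the report's method.
n = 1
while 0.95 ** n > 0.05:
    n += 1
print(n)  # 59
```

Fifty-nine independent samples would be needed under this framing, well beyond the 11 supplied benchmarks, which is the concern raised in item 2.a above.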

3. EPRI Report 1025203 bases the 11 benchmark criticality case bias and uncertainty estimates, intended to cover reactivity calculations in SFP conditions, on bias and uncertainty estimates inferred from HFP operating reactor measurements. Application of bias and uncertainty in this manner requires bias and uncertainty extrapolation in time (decay), fuel temperature, moderator temperature, etc. Provide additional discussion on the basis for the bias and uncertainty, and justification for the limitations and assumptions associated with the following conditions:
a. an extension of operating reactivity measurements to 100 hours (Were restart critical configurations considered?),
b. an increased bias/uncertainty (of 0.0025) for cooling times beyond 100 hours,
c. consideration of fuel temperature sensitivities,
d. consideration of moderator temperature and density sensitivities, and
e. the use of storage rack absorber materials.
4. Provide a list of all of the significant isotopes that were included in the benchmark analyses. Additionally, since volatile fission products are not typically credited in SFP criticality safety analyses, an assessment of the change in the bias and uncertainty when volatile and soluble nuclides are excluded is necessary.
5. In Section 4, the following is stated: "SCALE 6.1 TRITON has no branching capability, unlike most fuel management tools." NUREG/CR-7041, "SCALE/TRITON Primer: A Primer for Light Water Reactor Lattice Physics Calculations," discusses the type of branch calculations that SCALE 6.1 TRITON can perform. Revise the statement in Section 4 accordingly.
6. In Section 4, it is stated that "ENDF/B-VII has only one group wise library and it uses 238 groups." ENDF/B-VII data are not restricted to 238 groups. Revise the statement accordingly.
7. Section 6 states, "Care must be taken to cover all the depletion and rack conditions."

How will the criticality safety analyst know if all rack conditions are covered? What if the safety analysis rack conditions are not similar to one of the 11 benchmark conditions?

8. Section 6 states the following:

However, a quick review of the biases shown in Section 5 reveals a slight trend to more negative biases as one goes down in enrichment.

Therefore, a bias from the high enrichments would be conservative for the low enrichments. Note that other cross-section libraries could have a trend in the opposite direction, and in that case, the trend would have to be projected and conservatism added.

It is not clear, from the limited discussion above, how bias trends are to be handled.

Provide specific guidance for handling of bias trends amongst benchmarks (e.g. When is extrapolation appropriate? How much extrapolation is appropriate? How much conservatism should be added when extrapolating?).

9. Section 6 states the following:

The maximum difference in the biases in Table 5-2 between Case 11 and Case 3 is 0.0004. Since the bias is the difference between two Monte Carlo cases each with a 0.0002 uncertainty, the difference in the bias due to a large change in power is insignificant.

Explain why a difference of 0.0004 is insignificant and provide guidance for determining significance. When taking the difference between two numbers, each with its own uncertainty, why wouldn't propagation of error apply?
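For two independent Monte Carlo results, each with a standard uncertainty of 0.0002 as in the quoted passage, standard propagation of error for a difference adds the variances in quadrature. A quick check of the arithmetic:

```python
import math

# Propagation of error for the difference of two independent Monte Carlo
# results, each with sigma = 0.0002 (values taken from the quoted passage).
sigma = 0.0002
sigma_diff = math.sqrt(sigma ** 2 + sigma ** 2)
print(round(sigma_diff, 5))  # ~0.00028
print(0.0004 / sigma_diff)   # the 0.0004 difference is ~1.4 sigma
```

The quoted 0.0004 difference is therefore about 1.4 standard deviations of the difference, which is why a quantitative significance criterion, rather than an assertion, is requested.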

10. Section 6 states the following:

The wet annular burnable absorbers (WABAs) are typically not credited in the criticality analysis, so the criticality analysis is actually done with only the change in reactivity of the fuel being taken into account. The difference in bias is still small with the maximum difference being 0.0007.

Adjusted to cover 24 WABA pins instead of the 20 in the analysis would only be about 0.0001.

How is the adjustment referred to above being made and what is the basis for the adjustment? Provide specific guidance for the types of bias adjustments that are appropriate, when they are to be applied, and how they are to be applied.

11. Section 6 states the following:

It should be noted that the geometric parameters of the rack need to be covered in the selection of the fresh fuel critical experiments. The rack condition changes are to explore if the delta k of depletion is impacted by rack conditions. Since this is a fuel effect, the most important concerns are changes in the fuel, so the rack condition changes are a change in the water temperature and density and a change in the boron ppm.

It is not clear that the bias and uncertainty in the delta k of depletion does not change as a function of the rack material and rack geometry. Provide quantitative evidence that the rack material and conditions are not important enough to be considered as rack condition changes.

12. Page 6-3 in Section 6 suggests that the bias be increased by 0.001 to 0.0025 for cooling time credit. What is the basis for increasing the bias by 0.001? What bias would be applicable beyond 15 years of cooling?
13. Section 6 references Tables 5-2, 5-3 and 5-4 relative to depletion spectrum and wetness. Extrapolation to fuel that wasn't considered in the EPRI depletion benchmark work is not appropriate. A trend found on wetness is not a defensible basis for extrapolating to other fuel types, as it is not known that wetness is fundamentally causing the trend or that the trend actually holds in the region of extrapolation.
a. Define the difference between wetness and the more commonly used H/X parameter.
b. Why is wetness an appropriate figure of merit to show depletion spectrum coverage?
c. What is the basis for the 0.001 additional bias to be applied for W14x14 Standard and W16x16 fuel (also mentioned on page 9-1)?
d. Provide appropriate tables with spectra along with a quantitative illustration of a spectrum comparison.
14. Regarding Appendix B reactivity benchmark specification descriptions (pp. B-2 to B-13):
a. Revise the descriptions to include units for all specifications.
b. Some descriptions contain missing sections (e.g. Structural Material Description and Coolant Description for Figure B-2); revise the descriptions accordingly so that each description stands alone.
c. The title for Case 9 in Figure B-10 is ambiguous. For clarity, include a more complete description explaining the purpose of each benchmark. Also, explain why the eleven benchmarks proposed provide sufficient coverage to validate all PWR depletion analyses for any SFP storage configuration.