NUREG/CR-7308, Sensitivity/Uncertainty Methods for Nuclear Criticality Safety Validation

ML25099A002
Person / Time
Issue date: 04/30/2025
From: Celik C, Dupont M, Thomas Greene, Lucas Kyriazidis, Marshall W
Office of Nuclear Regulatory Research, Oak Ridge
References
ORNL/TM-2024/3277 NUREG/CR-7308



Sensitivity/Uncertainty Methods for Nuclear Criticality Safety Validation Office of Nuclear Regulatory Research NUREG/CR-7308 ORNL/TM-2024/3277

AVAILABILITY OF REFERENCE MATERIALS IN NRC PUBLICATIONS

NRC Reference Material

As of November 1999, you may electronically access NUREG-series publications and other NRC records at the NRC's Library at www.nrc.gov/reading-rm.html. Publicly released records include, to name a few, NUREG-series publications; Federal Register notices; applicant, licensee, and vendor documents and correspondence; NRC correspondence and internal memoranda; bulletins and information notices; inspection and investigative reports; licensee event reports; and Commission papers and their attachments.

NRC publications in the NUREG series, NRC regulations, and Title 10, Energy, in the Code of Federal Regulations may also be purchased from one of these two sources.



1. The Superintendent of Documents U.S. Government Publishing Office Washington, DC 20402-0001 Internet: https://bookstore.gpo.gov/

Telephone: (202) 512-1800 Fax: (202) 512-2104

2. The National Technical Information Service 5301 Shawnee Road Alexandria, VA 22312-0002 Internet: https://www.ntis.gov/

Telephone: 1-800-553-6847 or, locally, (703) 605-6000

A single copy of each NRC draft report for comment is available free, to the extent of supply, upon written request as follows:

Address: U.S. Nuclear Regulatory Commission

Office of Administration
Program Management and Design Service Branch
Washington, DC 20555-0001
E-mail: Reproduction.Resource@nrc.gov
Facsimile: (301) 415-2289

Some publications in the NUREG series that are posted at the NRC's Web site address www.nrc.gov/reading-rm/doc-collections/nuregs are updated periodically and may differ from the last printed version. Although references to material found on a Web site bear the date the material was accessed, the material available on the date cited may subsequently be removed from the site.

Non-NRC Reference Material Documents available from public and special technical libraries include all open literature items, such as books, journal articles, transactions, Federal Register notices, Federal and State legislation, and congressional reports.

Such documents as theses, dissertations, foreign reports and translations, and non-NRC conference proceedings may be purchased from their sponsoring organization.

Copies of industry codes and standards used in a substantive manner in the NRC regulatory process are maintained at

The NRC Technical Library
Two White Flint North
11545 Rockville Pike
Rockville, MD 20852-2738

These standards are available in the library for reference use by the public. Codes and standards are usually copyrighted and may be purchased from the originating organization or, if they are American National Standards, from

American National Standards Institute
11 West 42nd Street
New York, NY 10036-8002
Internet: https://www.ansi.org/
(212) 642-4900

Legally binding regulatory requirements are stated only in laws; NRC regulations; licenses, including technical specifications; or orders, not in NUREG-series publications. The views expressed in contractor-prepared publications in this series are not necessarily those of the NRC.

The NUREG series comprises (1) technical and administrative reports and books prepared by the staff (NUREG-XXXX) or agency contractors (NUREG/CR-XXXX), (2) proceedings of conferences (NUREG/CP-XXXX), (3) reports resulting from international agreements (NUREG/IA-XXXX), (4) brochures (NUREG/BR-XXXX), (5) compilations of legal decisions and orders of the Commission and Atomic Safety and Licensing Boards and of Directors' decisions under Section 2.206 of the NRC's regulations (NUREG-0750), and (6) Knowledge Management reports prepared by NRC staff or agency contractors (NUREG/KM-XXXX).

DISCLAIMER: Where the papers in these proceedings have been authored by contractors of the U.S. Government, neither the U.S. Government nor any agency thereof, nor any of their employees, makes any warranty, expressed or implied, or assumes any legal liability or responsibility for any third party's use, or the results of such use, of any information, apparatus, product, or process disclosed in these proceedings, or represents that its use by such third party would not infringe privately owned rights. The views expressed in these proceedings are not necessarily those of the U.S. Nuclear Regulatory Commission.

NUREG/CR-7308
ORNL/TM-2024/3277

Office of Nuclear Regulatory Research

Sensitivity/Uncertainty Methods for Nuclear Criticality Safety Validation

Manuscript Completed: December 2024
Date Published: April 2025

Prepared by:
William J. Marshall
Travis M. Greene
Alex M. Shaw
Cihangir Celik
Mathieu N. Dupont

Oak Ridge National Laboratory
Oak Ridge, TN 37831

Lucas Kyriazidis, NRC Project Manager

ABSTRACT

The computational methods used in nuclear criticality safety analyses must be validated to ensure compliance with the consensus standard for operations with fissionable material outside of reactors. This validation requires the comparison of computational results with measurements of physical systems which are neutronically similar to those used in the safety analysis being performed. To this end, this document examines sensitivity/uncertainty (S/U) analysis methods and their applications primarily to nuclear criticality safety validation activities. This document reviews relevant prior written guidance issued between 1999 and 2015. A brief theoretical background is provided on sensitivity coefficients, methods of calculating keff sensitivity coefficients, nuclear covariance data, uncertainty analysis, and similarity assessment. Specific recommendations for using S/U methods to calculate sensitivity coefficients, confirm their accuracy, perform uncertainty analysis of validation gaps, and assess benchmark similarity are also provided. There is also a brief review of publicly available sensitivity data which can be used to perform similarity assessments. Three case studies are described demonstrating the use of S/U methods for the generation of sensitivity coefficients, similarity assessment, and validation gap margin estimation. Finally, advanced S/U capabilities are summarized, including a discussion of challenges associated with deployment of these techniques.

TABLE OF CONTENTS

ABSTRACT ... iii
LIST OF FIGURES ... vii
LIST OF TABLES ... xi
EXECUTIVE SUMMARY ... xiii
ACKNOWLEDGMENTS ... xv
ABBREVIATIONS AND ACRONYMS ... xvii
1 INTRODUCTION ... 1-1
  1.1 Purpose ... 1-1
  1.2 Background ... 1-1
2 VALIDATION OVERVIEW ... 2-1
  2.1 Safety Analysis Model ... 2-1
  2.2 Benchmark Critical Experiments ... 2-2
  2.3 Sources of Bias and Bias Uncertainty ... 2-3
    2.3.1 Sources of Bias ... 2-3
    2.3.2 Sources of Bias Uncertainty ... 2-4
3 PREVIOUS GUIDANCE ON THE USE OF S/U METHODS ... 3-1
  3.1 NUREG/CR-6655 ... 3-1
  3.2 Rearden et al., Nuclear Technology Article ... 3-3
  3.3 TSUNAMI Primer ... 3-5
  3.4 Jones Thesis ... 3-6
  3.5 Summary ... 3-6
4 THEORETICAL ASPECTS OF S/U ANALYSIS APPLIED TO NCS VALIDATION ... 4-1
  4.1 Sensitivity Coefficients, Adjoint Perturbation Theory, and Nuclear Data ... 4-1
    4.1.1 Sensitivity Coefficients ... 4-1
    4.1.2 Adjoint Perturbation Theory ... 4-5
    4.1.3 Nuclear Data ... 4-6
  4.2 TSUNAMI Implementation of keff Sensitivity Methods ... 4-7
    4.2.1 Multigroup Methods ... 4-7
    4.2.2 Continuous-Energy Methods ... 4-8
  4.3 Nuclear Covariance Data ... 4-10
  4.4 Uncertainty Analysis ... 4-11
  4.5 Similarity Assessment ... 4-14
5 APPLICATION RECOMMENDATIONS FOR S/U METHODS IN NCS VALIDATION ... 5-1
  5.1 Direct Perturbation Calculations ... 5-1
    5.1.1 DP Candidates ... 5-1
    5.1.2 Number and Magnitude of Perturbations ... 5-2
    5.1.3 Perturbed Input Creation ... 5-3
    5.1.4 Result Post-Processing ... 5-4
    5.1.5 Comparison of DP and TSUNAMI Sensitivities ... 5-6
  5.2 SCALE Multigroup Methods ... 5-8
    5.2.1 TSUNAMI-1D ... 5-8
    5.2.2 TSUNAMI-3D ... 5-8
  5.3 SCALE Continuous-Energy Methods ... 5-15
    5.3.1 Iterated Fission Probability ... 5-15
    5.3.2 CLUTCH ... 5-17
  5.4 Uncertainty Analysis ... 5-20
  5.5 Similarity Assessment ... 5-25
  5.6 Sources of Available Sensitivity Data for Benchmark Experiments ... 5-29
6 CASE STUDIES ... 6-1
  6.1 BWR Fresh Fuel Shipping Package ... 6-1
    6.1.1 Sensitivity Coefficient Generation with TSUNAMI-3D ... 6-2
    6.1.2 Identification of Applicable Benchmarks ... 6-7
    6.1.3 Benchmark Set Gaps and Weaknesses ... 6-17
  6.2 Drum-Type Package Containing TRISO Fuel ... 6-21
    6.2.1 Sensitivity Coefficient Generation with TSUNAMI-3D ... 6-23
    6.2.2 Identification of Applicable Benchmarks ... 6-27
    6.2.3 Benchmark Set Gaps and Weaknesses ... 6-30
  6.3 Generic Storage Cask for PWR SNF ... 6-32
    6.3.1 Sensitivity Coefficient Generation with TSUNAMI-3D ... 6-34
    6.3.2 Identification of Applicable Benchmarks ... 6-37
    6.3.3 Benchmark Set Gaps and Weaknesses ... 6-37
7 ADVANCED CAPABILITIES ... 7-1
  7.1 TSURFER ... 7-1
    7.1.1 TSURFER Example ... 7-2
    7.1.2 TSURFER Limitations ... 7-4
  7.2 TSAR and Reactivity Sensitivity Coefficients ... 7-7
    7.2.1 TSAR Example ... 7-7
    7.2.2 TSURFER with TSAR Reactivity Sensitivities ... 7-10
8 SUMMARY AND CONCLUSIONS ... 8-1
9 REFERENCES ... 9-1

LIST OF FIGURES

Figure 2-1 Cross Sections for 238U (n,γ), 1H Elastic Scattering, and 10B (n,α) ... 2-4
Figure 4-1 Energy-Dependent Total Sensitivity Profiles for 235U and 238U in LEU-COMP-THERM-042-004 ... 4-2
Figure 4-2 Energy-Dependent Sensitivity Profiles for Moderator 1H in LEU-COMP-THERM-042-004 ... 4-3
Figure 4-3 LCT Benchmark C/E Values Compared with Data-Induced Uncertainty in keff ... 4-13
Figure 5-1 Example Plot from D.A. Reed Spreadsheet ... 5-5
Figure 5-2 Overview of the DP Calculation Process [28] ... 5-6
Figure 5-3 Sample Gridgeometry Input for PU-SOL-THERM-034-001 [14] ... 5-10
Figure 5-4 Inputs Showing Manual Subdivision to Improve Flux Tally Resolution ... 5-11
Figure 5-5 Renderings of Regions Added via Manual Subdivision ... 5-11
Figure 5-6 A Model of LCT-010-001 Showing Three Water Mixtures ... 5-12
Figure 5-7 Fission Cross Section (Top) and Distribution (Bottom) for 235U ... 5-14
Figure 5-8 F*(r) Mesh and Function for a 32 PWR Assembly Storage Cask [64] ... 5-18
Figure 5-9 F*(r) Statistical Convergence Edit ... 5-19
Figure 5-10 F*(r) Visualization without Filtering to Remove Default 1.0 Values ... 5-19
Figure 5-11 Recommended Covariance Patching Parameters ... 5-22
Figure 5-12 235U Sensitivities for GBC-68 and Two ICSBEP Benchmarks ... 5-28
Figure 6-1 Cross-Sectional View of the Single Package Model Containing Two BWR Assemblies ... 6-2
Figure 6-2 Illustration of Models with Different Package Array Sizes ... 6-2
Figure 6-3 Normalized keff Results Plotted vs. ... 6-5
Figure 6-4 235U Sensitivity for the Single Package Model Using ENDF/B-VII.1 CE Data ... 6-9
Figure 6-5 235U Sensitivity Profiles for the Single Package Model with Both Data Sets ... 6-10
Figure 6-6 Uncertainty in 235U in ENDF/B-VII.1 and ENDF/B-VIII.0 ... 6-10
Figure 6-7 235U Sensitivity for the Infinite Array Models with Both Libraries ... 6-11
Figure 6-8 235U Uncertainty in ENDF/B-VII.1 and ENDF/B-VIII.0 ... 6-11
Figure 6-9 235U Sensitivities for Each Array Size ... 6-13
Figure 6-10 1H, 16O, and 238U Scattering Sensitivities for the Single Package Model ... 6-13
Figure 6-11 1H, 16O, and 238U Scattering Uncertainty Data ... 6-14
Figure 6-12 Sensitivities to Average Total Number of Neutrons Released per 235U Fission for Each of the Different Array Size Models with ENDF/B-VII.1 Data ... 6-14
Figure 6-13 1H Elastic Scattering Uncertainties for ENDF/B-VII.1 and ENDF/B-VIII.0 ... 6-16
Figure 6-14 1H (n,γ) Uncertainties for ENDF/B-VII.1 and ENDF/B-VIII.0 ... 6-16
Figure 6-15 1H (n,γ) Sensitivity in the Single Package Model ... 6-16
Figure 6-16 155Gd and 157Gd (n,γ) Sensitivities for the Infinite Array Model ... 6-19
Figure 6-17 155Gd and 157Gd (n,γ) Uncertainty Data ... 6-20
Figure 6-18 Cutaway Rendering of the Flooded Generic Drum Package ... 6-22
Figure 6-19 Flooded Generic Drum Package Enlarged to Show Texture ... 6-22
Figure 6-20 Individual TRISO Grain with UO2 Kernel (Black) with Carbon (Gray) and SiC (Off-White) Coatings ... 6-23
Figure 6-21 235U and 238U Total Sensitivity Profiles in the Drum Package ... 6-23
Figure 6-22 Energy-Dependent Sensitivity Profiles for Moderator 1H in the Drum Package ... 6-24
Figure 6-23 1H and C-Graphite Scattering Sensitivity Profiles in the Drum Package ... 6-25
Figure 6-24 Normalized keff Results Plotted vs. for 235U Perturbations ... 6-26
Figure 6-25 Normalized keff Results Plotted vs. for 1H Perturbations ... 6-27
Figure 6-26 Sensitivity to Average Total Number of Neutrons Released per 235U Fission Profiles for Drum Package and LCT-028-017 ... 6-29
Figure 6-27 56Fe (n,γ) Sensitivity Profiles for Drum Package and LCT-028-017 ... 6-30
Figure 6-28 3D Rendering of the Bottom Half of the GBC-32 Model ... 6-33
Figure 6-29 Radial Slice of the GBC-32 Model ... 6-33
Figure 6-30 Major Sensitivities in Node 17 of the GBC-32 Model ... 6-35
Figure 6-31 10B and 149Sm Sensitivity Profiles in the GBC-32 Model ... 6-35
Figure 6-32 DP Results Plotted as Normalized keff vs. ... 6-37
Figure 7-1 keff C/E Results Before and After Adjustment ... 7-3
Figure 7-2 TSURFER Cross-Section Adjustments for 238U (n,γ), 235U Fission, and 239Pu Fission ... 7-3
Figure 7-3 TSAR Input for LCT-079 Cases 2 and 5 ... 7-8
Figure 7-4 keff Sensitivity Profiles for 235U, 1H, and 103Rh in LCT-079-005 ... 7-8
Figure 7-5 Reactivity Sensitivity Profiles for 235U, 1H, and 103Rh for LCT-079 Case 5 Compared to LCT-079 Case 2 ... 7-9
Figure 7-6 Reactivity Sensitivity Profiles for 103Rh for LCT-079 Cases 3 and 5 Compared to LCT-079 Case 2 ... 7-10
Figure 7-7 Example Input of Reactivity Sensitivity Coefficients to TSURFER ... 7-10
Figure 7-8 keff and Reactivity C/E Results Before and After Adjustment ... 7-11
Figure 7-9 TSURFER Cross Section Adjustments with and without Reactivity Sensitivity Data for 238U (n,γ), 235U Fission, and 239Pu Fission ... 7-11
Figure 7-10 TSURFER Cross Section Adjustments with and without Reactivity Sensitivity Data for 103Rh (n,γ) ... 7-12
Figure 7-11 Correlation Matrix for 238U (n,γ) ... 7-12
Figure 7-12 Correlation Matrix for 103Rh (n,γ) ... 7-13

LIST OF TABLES

Table 4-1 Comparison of Bias and Data-Induced Uncertainty for Multiple Systems ... 4-13
Table 5-1 Calculated 1H and Water Sensitivities in LCT-010-001 Model ... 5-12
Table 5-2 Default Values for Adjoint Monte Carlo Calculation in TSUNAMI-3D ... 5-13
Table 5-3 Uncertainty Terms for 19F in Example Model ... 5-24
Table 5-4 Top ck Contributors for Two Benchmarks Compared to GBC-68 ... 5-27
Table 5-5 Uncertainty Contributors for GBC-68 Cask and LCT-010-001 ... 5-29
Table 6-1 TSUNAMI and DP Integral Sensitivities for 235U ... 6-3
Table 6-2 TSUNAMI and DP Integral Sensitivities for 238U ... 6-4
Table 6-3 TSUNAMI and DP Integral Sensitivities for Moderator 1H ... 6-4
Table 6-4 Raw and Normalized keff Results for 235U Single Package DP Calculations ... 6-5
Table 6-5 Integral Total Sensitivities for 235U with Both Libraries ... 6-6
Table 6-6 Integral Total Sensitivities for 238U with Both Libraries ... 6-6
Table 6-7 Integral Total Sensitivities for Moderator 1H for Both Libraries ... 6-7
Table 6-8 Summary of ck Results Considering ENDF/B-VII.1 and ENDF/B-VIII.0 ... 6-7
Table 6-9 Top Contributor to Highest ck Value for Each Array Size ... 6-9
Table 6-10 Top Five ck Contributors for All Array Sizes with ENDF/B-VII.1 Data ... 6-12
Table 6-11 Top Five ck Contributors for All Array Sizes with ENDF/B-VIII.0 Data ... 6-15
Table 6-12 ICSBEP LCT Benchmarks with Steel Separators ... 6-17
Table 6-13 ICSBEP LCT Benchmarks Containing Gadolinium Absorber ... 6-18
Table 6-14 Integral Total Sensitivity for Gadolinium Isotopes in the Infinite Array Model ... 6-19
Table 6-15 Data-Induced Uncertainty in the Infinite Array Model from Gd Isotopes ... 6-20
Table 6-16 TSUNAMI and DP Integral Sensitivities for Drum-Type Package ... 6-25
Table 6-17 Raw and Normalized keff Results for 235U Drum-Type Package DP Calculations ... 6-26
Table 6-18 Four Identified Experiments with ck Values Above 0.9 ... 6-27
Table 6-19 Evaluations Containing Experiments with ck Values Above 0.8 ... 6-28
Table 6-20 Top 10 Contributors to ck Between LCT-028-017 and Drum Package ... 6-29
Table 6-21 Top 10 Nuclide/Reaction Pair Contributors to Uncertainty in Drum Package ... 6-31
Table 6-22 Top Five Nuclide Contributors to Uncertainty in Drum Package ... 6-31
Table 6-23 Enrichments of Applicable Experiments ... 6-32
Table 6-24 Nuclides Included in SNF Composition in GBC-32 Model ... 6-34
Table 6-25 DP Summary for GBC-32 Model ... 6-36
Table 6-26 DP Calculation Results ... 6-36
Table 6-27 Data-Induced Uncertainty for Minor Actinides and Major Fission Products ... 6-39
Table 7-1 LCT-079 Case Matrix ... 7-8

EXECUTIVE SUMMARY

Sensitivity/uncertainty (S/U) tools were first introduced for validation activities in the late 1990s, and their use has evolved with practical application over the last quarter century. This report summarizes current best practices and recommendations for using S/U tools for nuclear criticality safety validation activities.

Sensitivity coefficients describe the expected response of a system to a change in an input to that system. In criticality safety analyses, the response of interest is almost always the neutron multiplication factor, keff. Within the context of S/U methods, the input changes being examined are related to nuclear data. The determination of sensitivity coefficients via adjoint perturbation theory allows for accurate predictions of keff effects caused by small changes to reaction cross sections, fission neutron energy distributions, and neutron emission distributions. Sensitivity coefficients are calculated for specific application systems so that nuclear data changes can be propagated directly to the safety systems of interest.
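As a hedged illustration of what a sensitivity coefficient is (not an excerpt from this report's tools), the relative sensitivity S = (Δk/k)/(Δσ/σ) can be checked by direct perturbation with a simple central difference; the function name and all numerical values below are invented for demonstration:

```python
# Illustrative only: estimating a k_eff sensitivity coefficient by direct
# perturbation (DP). All numbers are invented toy values.

def dp_sensitivity(k_plus, k_minus, k_ref, delta_frac):
    """Central-difference relative sensitivity S = (dk/k) / (dsigma/sigma).

    k_plus / k_minus : k_eff with the parameter increased / decreased
    k_ref            : unperturbed k_eff
    delta_frac       : fractional perturbation applied (e.g., 0.02 for +/-2%)
    """
    return (k_plus - k_minus) / (2.0 * delta_frac * k_ref)

# Toy example: a +/-2% perturbation of a cross section.
s = dp_sensitivity(k_plus=1.0042, k_minus=0.9958, k_ref=1.0000, delta_frac=0.02)
print(round(s, 3))  # 0.21
```

In practice, such DP results are used to confirm adjoint-based sensitivity coefficients rather than to replace them, since a separate transport calculation is needed for each perturbed case.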

Generally, the nuclear data changes posited in S/U analysis are drawn from the covariance data evaluated to characterize the uncertainties associated with the nuclear data. These covariance data come from a variety of data evaluation projects and compilations, and they are obtained in much the same manner as the best-estimate values for these parameters, which in the United States are drawn from Evaluated Nuclear Data File (ENDF) libraries.

S/U methods have been used extensively to assess system similarity to support criticality safety validation. The use of neutron transport methods in establishing process limits, process controls, and design parameters for nuclear criticality safety must be validated by comparison to measured critical benchmark experiments that are similar to the safety system of interest.

Traditionally, engineering judgment has been used to assess the similarity of benchmarks and application systems based on the materials and neutron energy spectra of the systems. S/U tools allow a much more rigorous comparison by quantifying the nuclear data-induced uncertainty shared between a benchmark and an application system, and these tools may be used in coordination with engineering judgment and other methods of benchmark selection to support validation.

Nuclear data-induced uncertainty is a useful parameter for judging system similarity because errors in the nuclear data are the primary source of bias in contemporary neutron transport codes used in nuclear criticality safety. Errors in the nuclear data are bounded by reported uncertainties, so shared nuclear data uncertainties provide an indication of shared sources of bias. Benchmarks with the same bias sources that are exercised in the same way as the relevant safety application system will manifest the same bias as the application system and are therefore the appropriate benchmarks to use in validation.
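The shared data-induced uncertainty is commonly condensed into the correlation coefficient ck. A minimal sketch of the underlying arithmetic, assuming group-wise sensitivity vectors for the application and benchmark and a relative covariance matrix (all values below are invented, not taken from this report):

```python
import numpy as np

def ck(s_app, s_bench, cov):
    """c_k = (S_a C S_b^T) / (sigma_a * sigma_b): the correlation of the
    nuclear data-induced k_eff uncertainties of two systems."""
    var_a = s_app @ cov @ s_app      # data-induced variance of the application
    var_b = s_bench @ cov @ s_bench  # data-induced variance of the benchmark
    return (s_app @ cov @ s_bench) / np.sqrt(var_a * var_b)

# Toy two-group example with invented sensitivities and covariance.
cov = np.array([[4.0e-4, 1.0e-4],
                [1.0e-4, 9.0e-4]])   # relative covariance matrix (toy)
s_a = np.array([0.30, 0.10])         # application sensitivities
s_b = np.array([0.28, 0.12])         # benchmark sensitivities
print(round(ck(s_a, s_b, cov), 3))   # 0.996
```

Values of ck near 1.0 indicate that the two systems exercise the same nuclear data in the same way and therefore share the same potential bias sources.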

S/U tools can also be used to estimate reactivity margins for gaps in the benchmark data set.

The nuclear data uncertainty should bound the error in the underlying data, so it also provides a bounding estimate of the potential bias arising from that particular piece of data. Propagating that uncertainty with system-specific sensitivities can help determine the potential impact on system keff caused by a lack of validation for that nuclide.
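A hedged sketch of this propagation, using the standard first-order "sandwich rule" with invented toy sensitivities and covariance values for a hypothetical gap nuclide (none of these numbers come from this report):

```python
import numpy as np

def data_induced_uncertainty(s, cov):
    """Relative k_eff uncertainty from one nuclide: sigma_k/k = sqrt(S C S^T)."""
    return float(np.sqrt(s @ cov @ s))

# Toy two-group values for an unvalidated absorber nuclide.
s_gap = np.array([-0.020, -0.005])    # k_eff sensitivities to the gap nuclide
cov_gap = np.array([[0.01, 0.002],
                    [0.002, 0.04]])    # relative covariance (invented)

margin = data_induced_uncertainty(s_gap, cov_gap)
print(f"{margin:.5f}")  # 0.00232, i.e., ~0.2% dk/k
```

The resulting dk/k value can then be carried as an additional margin (penalty) in the validation for the nuclide that the benchmark set fails to cover.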

This report provides an overview of the theoretical basis for S/U methods and practical guidance for their application. A series of three case studies is also provided to demonstrate application of the tools in real-world validation scenarios. These examples illustrate the utility of S/U tools, when used correctly, for identifying similar benchmarks for nuclear criticality safety validation and for justifying penalties for identified benchmark data gaps.

ACKNOWLEDGMENTS

This work was performed under contract with the U.S. Nuclear Regulatory Commission (NRC) Office of Nuclear Regulatory Research (RES). The authors thank Lucas Kyriazidis, the NRC Project Manager, Drew Barto and Jeremy Munson of the Office of Nuclear Material Safety and Safeguards (NMSS), and Nate Hudson, Mike Rose, and Andy Bielen of RES. The authors also wish to thank Jordan McDonnell and Ugur Mertyurek of Oak Ridge National Laboratory (ORNL) for their reviews, and Rose Roberts and Kathy Jones, also of ORNL, for assistance in editing, formatting, and preparing the final document.

ABBREVIATIONS AND ACRONYMS

1D      one dimensional
2D      two dimensional
3D      three dimensional
AEF     average energy of fission
AEG     average energy group of fission
AFP     actinide and fission product
AGN=    number of adjoint generations
AOA     area of applicability
APG=    number of particles per generation
ASG=    targeted uncertainty in the adjoint calculation
ASK=    number of initial skipped generations
BUC     burnup credit
BWR     boiling water reactor
CE      continuous-energy
C/E     calculated over expected
CFP     number of latent generations
CLUTCH  contributon-linked eigenvalue sensitivity/uncertainty estimation via track-length importance characterization
CSAS    Criticality Safety Analysis Sequence
CSEWG   Cross Section Evaluation Working Group
CSI     criticality safety index
DICE    Database for the International Criticality Safety Benchmark Experiments
DP      direct perturbation
EALF    energy of the average lethargy causing fission
FCSS    Fuel Cycle Safety and Safeguards
GEN=    number of generations
GLLSM   generalized linear least squares methodology
HALEU   high-assay low-enriched uranium
HEU     highly enriched uranium
HTC     Haut Taux de Combustion (French for high burnup)
H/X     ratio of moderating to fissile nuclei, strictly hydrogen-to-fissile ratio
ICSBEP  International Criticality Safety Benchmark Evaluation Project
IEU     intermediate enrichment uranium
IFP     iterated fission probability
ISG     Interim Staff Guidance
KSEN    keff sensitivity coefficients
LANL    Los Alamos National Laboratory
LCT     LEU-COMP-THERM
LEU     low enriched uranium
MADD    models and derived data
MCNP    Monte-Carlo N-Particle
MG      multigroup
NEA     Nuclear Energy Agency
NCS     nuclear criticality safety
NPG=    number of particles per generation
NSK=    number of generations to skip for initial source convergence
OECD    Organisation for Economic Co-operation and Development
ORNL    Oak Ridge National Laboratory
PBMR    pebble bed modular reactor
PNNL    Pacific Northwest National Laboratory
PWR     pressurized water reactor
SAMS    Sensitivity Analysis Module for SCALE
SDF     sensitivity data file
SIG=    desired final uncertainty
S/U     sensitivity/uncertainty
SNF     spent nuclear fuel
TRG     technical review group
TRISO   tristructural-isotropic
TSL     thermal scattering law
USL     upper subcritical limit
VALID   Verified, Archived Library of Inputs and Data
WPEC    Working Party on Nuclear Data Evaluation Cooperation

1 INTRODUCTION

1.1 Purpose

The purpose of this report is to summarize best practices regarding the use of sensitivity/uncertainty (S/U) methods for nuclear criticality safety (NCS) validation assessments.

Separate guidance is available in multiple sources [1, 2, 3, 4] on the broader topic of validation, and those other sources provide more complete recommendations in many aspects of the validation activity. A brief overview of relevant portions of the validation process is provided in Section 2 of this document to provide context, but the primary focus of this document is S/U methods and their applications for benchmark experiment selection and validation gap assessment.

In 1999, NUREG/CR-6655 Volumes I and II introduced the use of S/U methods in criticality safety validation [5, 6]. The intervening quarter century has seen vastly expanded development and use of these methods, leveraging greater computing capabilities and applying the methods to a wide variety of systems. This document aims to provide updated guidance based on the accumulated experience of developers and users since the publication of NUREG/CR-6655. These tools are expected to be of even greater use in the future to demonstrate the applicability of existing critical benchmark experiments to the validation of non-light-water reactors and fuel forms.

The primary focus of this document is the S/U methods in the SCALE code package [7]. Use of the SCALE S/U tools dates from the late 1990s, as documented in NUREG/CR-6655 [5, 6] and other sources. The extensive use of these tools provides the experience base from which the recommendations in this report are drawn. In some cases, comparisons are made to sensitivity calculation methods included in other code packages, especially when the same methods have been implemented. The majority of the discussion and user recommendations are relevant to the SCALE TSUNAMI tools and may not be equally valid for other code packages or implementations. The generic descriptions of S/U methods and their usefulness in NCS validation are expected to hold consistently across code packages, irrespective of implementation details.

1.2 Background

NCS evaluations set limits on processes to ensure the safety of fissionable material in storage, transportation, handling, and processing [8]. If these limits are derived from computer calculations, then the computational method used must be validated by comparison to measured critical configurations [1, 9]. Well-characterized critical configurations are referred to as benchmark experiments and are the preferred systems for use in this validation. These benchmark experiments must be neutronically similar to the safety analysis system being analyzed so that the calculational margin (bias and bias uncertainty) derived from the validation can be applied when setting limits for the system of interest [4].

One of the primary advantages of S/U methods is the ability to quantify the similarity of two systems. This rigorously quantified similarity can form a defensible basis for benchmark experiment selection. The theoretical background for this is discussed in Section 4.5, and application recommendations are summarized in Section 5.5.

S/U methods also provide a capability to estimate the potential bias in the effective neutron multiplication factor, keff, using nuclear data. This bias estimate can be used as a basis for deriving a validation gap penalty for system components for which sufficient validation is not possible. The uncertainty propagation theory is discussed in Section 4.4, and the guidance for application is provided in Section 5.4.
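The uncertainty propagation involved folds group-wise sensitivity coefficients with nuclear data covariance data via the so-called sandwich rule, σ²(keff) = SᵀCS. The sketch below is a minimal illustration of that arithmetic only; the 3-group sensitivity vector and covariance matrix are invented for demonstration and are not taken from any real library.

```python
import numpy as np

# Hypothetical 3-group relative sensitivities for one nuclide-reaction pair:
# S_i = (dk/k)/(dsigma_i/sigma_i)
S = np.array([0.12, 0.05, 0.30])

# Hypothetical relative covariance matrix for the same cross sections
# (diagonal = relative variances, off-diagonal = covariances).
C = np.array([
    [2.5e-4, 1.0e-4, 0.0],
    [1.0e-4, 4.0e-4, 5.0e-5],
    [0.0,    5.0e-5, 9.0e-4],
])

# "Sandwich rule": relative variance of keff induced by the nuclear data.
var_k = S @ C @ S
sigma_k_pcm = np.sqrt(var_k) * 1.0e5  # relative std. dev. expressed in pcm
print(f"nuclear-data-induced uncertainty: {sigma_k_pcm:.0f} pcm")
```

In a real TSUNAMI-IP calculation, the sum runs over all nuclides, reactions, and energy groups, but the structure of the computation is the same.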

2 VALIDATION OVERVIEW

Validation establishes the applicability of a computational method to a particular safety analysis model or class of safety analysis models [1]. In practical terms, this generally involves the determination of a bias and bias uncertainty for the computational method, although these terms may be combined into a single calculational margin. As mentioned in Section 1.2, one of the most important steps in the validation process is the selection of similar, and thus applicable, benchmark experiments to use in the validation. The use of S/U tools for supporting these similarity assessments is a focus of this report and is discussed in Sections 4.5 and 5.5. S/U selection of experiments specifically for criticality safety analyses using burnup credit has been demonstrated in multiple efforts, as described in the literature [10, 11, 12]. Demonstrations of using S/U tools for similarity assessment are also presented for example systems in Section 6 of this report.

The remainder of this section discusses more relevant background on the validation process to provide context for both similarity and gap assessments. Complete discussion and guidance on NCS validation is readily available in other sources [2, 3, 4]. This section discusses impacts of the safety analysis model, sources and assessment of benchmark critical experiments, and sources of bias and bias uncertainty in the validation process.

2.1 Safety Analysis Model

Validation is performed to provide an estimate of the bias and bias uncertainty of the computational method for a specific safety analysis model or a class of safety analysis models.

As discussed in Section 2.3 below, the nuclear data used in the calculations are the primary source of bias in most cases. It is therefore essential that the nuclear data exercised by the safety analysis model(s), meaning the same energy-dependent data for the same nuclides, also be the nuclear data exercised by the benchmark models included in the validation suite. This requirement is the driving consideration in selecting benchmark experiments similar to the intended safety analysis model for validation.

The nuclear data being used in the safety analysis model are a function of the materials in the system and the neutron energy spectrum in the system. A fast-spectrum system only exercises the nuclear data in the 100s of keV to a few MeV range, but a thermal-spectrum system exercises this fast range, as well as the intermediate energy range for neutron thermalization, and the thermal range. Biases in the data at any energy can be relevant for thermal systems, but only biases in the fast data are likely to be relevant for fast systems. Similarly, the complex interactions of thermal neutrons with light nuclei in the system, as characterized by a thermal neutron scattering law (TSL) or S(α,β) data, are only relevant to thermal neutrons. The TSL is also material dependent and can have a significant impact on the calculated keff for a system.

The benchmark experiments should have the same primary fissile material, such as 235U or 239Pu; the same moderating species, such as light water or polyethylene; and the same major absorbers, such as 10B or fission products, as the safety application system(s).

Characterization of the neutron energy spectrum is more complicated. In some cases, a coarse characterization of thermal, fast, or intermediate may be sufficient. Several NCS codes, including SCALE [7] and Monte Carlo N-Particle (MCNP) [13], report tabulated parameters characterizing the neutron energy spectrum. These parameters include the energy of the average lethargy causing fission (EALF), the average energy of fission (AEF), and/or the average energy group of fission (AEG). Still other parameters may be used to characterize

systems based on the mixture of fissile and moderating species. The ratio of moderating to fissile species, commonly referred to as H/X, is particularly common for characterizing fissile solutions. The spectrum present in the safety analysis model defines the desired spectrum for the benchmarks.

For cases in which a generic validation is applied to several different safety analysis models, an area of applicability (AOA) is defined during the validation process. The analyst must confirm that the safety analysis model is within the defined AOA for each system using a generic validation. It should also be noted that the consensus standard on NCS validation [1] uses the term validation applicability for the range of parameters in which the validation is applicable.

2.2 Benchmark Critical Experiments

Explicit critical experiments demonstrating safe limits for handling or storage of material were common decades ago, when each facility handling fissionable material had the facilities, staff, and expertise to perform such experiments. Over time, these expensive facilities have closed, and very few remain. Fortunately, computing capabilities have expanded dramatically over the same period, and adequate safety can be demonstrated through modeling and simulation. It is important that these computational methods be validated through comparison to critical configurations. The best of these configurations have been thoroughly characterized and documented and can thus be used as benchmark critical experiments. The characterization allows precise uncertainty quantification and reliable model construction. Complete documentation provides confidence in the models and in the assessment of the expected value of keff for the documented benchmark model. The assessment of the characterization and documentation varies significantly among individuals and has varied significantly over time. Improved computational capacity allows analysts to resolve effects that were beyond the resolution of computational methods in the past. This has generally increased the expected level of detail in the characterization and documentation of experiments. Benchmark models have also become significantly more detailed in the last 5-10 years because more explicit components can be included without exceeding the capabilities of the codes and computers.

The best public source for high-quality critical experiment evaluations is the International Criticality Safety Benchmark Evaluation Project (ICSBEP) Handbook [14], published by the Organisation for Economic Co-operation and Development (OECD) Nuclear Energy Agency (NEA). The ICSBEP has a rigorous process to review new benchmark experiment evaluations.

Each proposed evaluation is reviewed by an internal reviewer and an external reviewer before being presented to a technical review group (TRG). Before the evaluation can be published in the next annual version of the ICSBEP Handbook, it must be approved by the TRG and, if necessary, a subgroup of TRG members charged with ensuring that comments generated during the review are addressed. Each evaluation contains a description of the experiment, an evaluation of the experimental data, a description of the benchmark model to be used, sample results, and relevant references. Evaluation of the experimental data and generation of the benchmark model result in the assessment of the expected keff value for the benchmark model and its uncertainty. These values are essential for normalization of calculated results during validation [2, 4].

The ICSBEP was started in the 1990s, and at this writing it has amassed evaluations of over 5,000 individual critical configurations covering various fissile materials and energy spectra. The ICSBEP Handbook is freely available from the NEA to citizens of NEA member countries working in relevant technical areas.

Thousands of critical experiments have been performed around the world that are not documented in the ICSBEP Handbook. Many of these experiments are relevant to systems in use today and may be useful for validation. It is important that analysts performing a validation that includes these experiments develop and document their own benchmark models and associated uncertainty estimates. No model of a real system can be created without simplifications and approximations, and the impact of those modeling approximations on the calculated keff must be considered.

2.3 Sources of Bias and Bias Uncertainty

There are many potential sources of bias and bias uncertainty in neutron transport calculations.

This section highlights a few of the most important and common sources of bias and bias uncertainty to inform the benchmark selection process and increase the probability that the calculational margin developed in the validation process is applicable to the safety analysis model(s).

2.3.1 Sources of Bias

The primary source of bias in high-fidelity radiation transport codes (e.g., continuous-energy [CE] three-dimensional [3D] Monte Carlo codes) is most likely to be nuclear data. These codes can provide exhaustive detail of the geometric aspects of systems. Approximations of the physical system are not part of the validation process, strictly speaking, but they are part of safety analysis model development. The models used in the safety analysis must be described and defended separately from the computational method validation. The computational validation assesses the ability of the method to correctly calculate a fully characterized system.

It does not assess the ability of the code system to faithfully represent a real-world configuration.

Rigorous particle tracking and physics algorithms have been developed over the years and implemented in production codes such as SCALE [7] and MCNP [13]. These and other production codes are tested and validated extensively by the software developers [15, 16]. The user base also provides significant testing and interrogation while using the codes for a range of applications. Although no code is free of bugs, the significant amount of testing performed by developers and users provides evidence that production codes are performing neutron transport simulations reliably.

In general, fidelity of the CE data representations is designed to match the evaluation descriptions at all points to within 1% or less. This is typically sufficient for accurate simulations, but in some cases, more accurate libraries are generated to target 0.1% as the maximum discrepancy from the evaluation values. Ultimately, the limiting consideration for CE data fidelity is the amount of disk space needed to contain and distribute the data. Exceptionally large data libraries also increase computational run-times while the computer searches for the necessary data. As ever, there is a trade-off between accuracy and time.

However, representation of evaluated data is not the primary source of error in nuclear data.

Nuclear data are exceedingly complex and variable. The range of neutron energies relevant to criticality safety calculations covers 10 to 12 orders of magnitude, from mega- to milli-electronvolts (MeV to meV). Neutrons are born from fission at high energies, scatter off surrounding nuclei, pass through the resonance energy region in which cross sections can vary by orders of magnitude within a few eV, and eventually reach thermal energies. At thermal energies, atomic and molecular physics become relevant in addition to nuclear physics: the scattering

properties of nuclei become dependent not only on the target nucleus, but also on the molecule or crystal in which that target nucleus is located. As an example of the complexity of nuclear data, Figure 2-1 provides the ENDF/B-VII.1 [17] cross sections for (1) 238U radiative capture (n,γ), (2) 1H elastic scattering, and (3) 10B neutron absorption with alpha particle emission (n,α).

The variability of the cross sections between reactions and within a single reaction at different energies demonstrates the difficulties encountered when measuring, evaluating, processing, and using such nuclear data. Remarkably, these challenges are generally overcome to yield small biases for most categories of systems of general interest in nuclear safety.

Figure 2-1 Cross Sections for 238U (n,γ), 1H Elastic Scattering, and 10B (n,α)

Aside from the obvious difficulties associated with measurement and evaluation of such complicated data, further evidence that the data are an underlying cause of bias comes from validation studies such as those conducted by Scaglione et al. [10], Greene and Marshall [15],

and by Posey et al. [16]. Such reports always show variations among the biases reported for data from different nuclear data libraries, even for the same code. The changes in the bias can generally be traced to specific nuclear data changes.

2.3.2 Sources of Bias Uncertainty

Sources of uncertainty in nuclear systems, including benchmark experiments, are more numerous than sources of bias. The discussion here focuses on sources of uncertainty in benchmark experiments, which can generally be sorted into four categories: material composition, geometry, temperature, and reactivity measurement.

Reactivity measurement uncertainties are typically the smallest uncertainties. These uncertainties are related to detectors and detector placement, electronics, and counting systems. The detector systems for critical experiments are designed to be accurate and sensitive to neutron leakage multiplication because this is the primary measurement for most of these systems. The reactor period can be determined with high accuracy. The larger component of uncertainty is typically the effective delayed neutron fraction used to convert this into a reactivity and thus a keff value. The uncertainty of the reactivity measurement can be on the order of a few cents [18], where a cent is one one-hundredth of the delayed neutron fraction.

For 235U systems, a cent is thus on the order of 0.00007 Δk, or 7 pcm. The smaller delayed neutron fraction for 239Pu means that the corresponding reactivity uncertainty is only about 2 pcm for those systems.
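The cent-to-pcm conversion above is simple arithmetic and can be sketched directly. The effective delayed neutron fractions used below are representative assumptions for illustration (roughly 0.0065 for thermal 235U systems and 0.0021 for 239Pu), not measured values from any specific experiment.

```python
# A "cent" of reactivity is one one-hundredth of the effective delayed
# neutron fraction (beta_eff); 1 pcm corresponds to 1e-5 in delta-k.
def cent_in_pcm(beta_eff, cents=1.0):
    """Convert a reactivity expressed in cents to an approximate delta-k in pcm."""
    return cents * (beta_eff / 100.0) * 1.0e5

print(cent_in_pcm(0.0065))  # 235U thermal system: ~6.5 pcm per cent
print(cent_in_pcm(0.0021))  # 239Pu system: ~2.1 pcm per cent
```

These values are consistent with the ~7 pcm and ~2 pcm figures quoted in the text.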

Similarly, temperature uncertainties are typically small. The vast majority of critical experiments have been performed at ambient room temperature, which presents different challenges for system validation at other temperatures. Regardless, for the purposes of evaluation, the temperature is well known and has relatively small impacts on the uncertainty of the benchmark models. Uncertainties of 5 pcm or less are not uncommon [18, 19]. The impact of temperature on the density of materials in the experiment may be greater and would likely be categorized separately in composition and dimension uncertainties. There is also evidence from recent critical experiment design studies that temperature may have a large impact on the TSL for plastic solid-moderator systems [20]. These experiments have not yet been performed, so it is not yet clear how strong this sensitivity really is.

The majority of the uncertainty in the evaluated critical experiment comes from the inability to completely characterize the materials and arrangements in the experiment. These uncertainties relate to the composition of the materials and their geometrical arrangements. Generally, the experiments are carefully designed with stringent quality assurance on procurement of parts and materials to minimize uncertainties. Samples of most materials are analyzed for composition and impurity descriptions. Components are measured using state-of-the-art calibrated methods and tools to minimize and characterize dimension uncertainties. Each known uncertainty is then propagated to an uncertainty in keff. Most of the content of many recent ICSBEP evaluations is dedicated to characterizing, analyzing, and describing composition and geometry uncertainties.

The uncertainties are combined using appropriate uncertainty propagation rules, as described in the relevant sections of the ICSBEP evaluations. Again, for benchmark experiments that are not included in the ICSBEP Handbook, this uncertainty analysis must be performed by the validation analyst. No simple rule exists for the acceptable final keff uncertainty for a benchmark experiment. A rule of thumb for many ICSBEP evaluations has been to use 0.01 Δk, or 1,000 pcm. Many benchmark uncertainties in the ICSBEP Handbook range between 100 and 500 pcm, although some are higher, and a few are lower. Solution experiments tend to have higher uncertainties given the additional uncertainties associated with characterizing the solution properties. Experiments involving fuel rod arrays near optimum moderation can have especially low uncertainties because the uncertainty in the pitch has essentially no reactivity impact. The LEU-COMP-THERM-102 evaluation [14] provides a useful example of this phenomenon. The first few and last few cases have uncertainties of approximately 100-120 pcm, but the middle cases near optimum moderation have uncertainties as low as 65 pcm.

The Monte Carlo stochastic uncertainty associated with the calculation is typically significantly lower than the evaluated uncertainty of the benchmark model. Contemporary Monte Carlo calculations can routinely achieve uncertainties of 0.01-0.02% Δk, or 10-20 pcm. These uncertainties must be accounted for in a complete assessment of bias uncertainty [4], but when propagated with the evaluation uncertainty, the stochastic uncertainty nearly disappears.
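The near-disappearance of the stochastic component follows directly from combining independent uncertainties in quadrature. A quick sketch, using assumed representative values (300 pcm benchmark evaluation uncertainty, 15 pcm Monte Carlo stochastic uncertainty):

```python
import math

def combined_uncertainty_pcm(benchmark_pcm, monte_carlo_pcm):
    """Combine two independent uncertainties in quadrature."""
    return math.hypot(benchmark_pcm, monte_carlo_pcm)

# The stochastic term adds well under 1 pcm to a 300 pcm evaluation uncertainty.
print(combined_uncertainty_pcm(300.0, 15.0))
```

Because the quadrature sum is dominated by the larger term, a stochastic uncertainty an order of magnitude smaller than the evaluation uncertainty is effectively negligible.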

3 PREVIOUS GUIDANCE ON THE USE OF S/U METHODS

Several different reports have been generated over the years to provide guidance on the use of S/U methods, specifically in NCS validation. The first such report, as mentioned in Section 2, was NUREG/CR-6655 [5, 6]. This two-volume report lays out the theoretical basis for sensitivity methods, uncertainty analysis, and similarity assessment, and a case study is presented in Volume 2. Another key reference is an article in a special issue of Nuclear Technology [21]

presenting the theory and application of the TSUNAMI S/U tools included with the SCALE 6 release. A TSUNAMI Primer was also developed to provide step-by-step instructions on the use of the tools and relevant interfaces, also specifically for SCALE 6 [22]. Development of CE methods for TSUNAMI required generation of new guidance [23], although many older, more complete references have not been expanded to include discussion of CE TSUNAMI-3D. Other conference papers and presentations, training classes, and workshops have provided guidance and recommendations over the years but are not covered here because they lack the breadth of the documents mentioned above. Many detailed recommendations derived from these studies are included in the application recommendations provided in Section 5.

Outside the SCALE/TSUNAMI system, documentation of the MCNP keff sensitivity coefficients (KSEN) capability is available in the MCNP manual [24]. A detailed review of this guidance is not provided here because code-specific recommendations for MCNP are outside the experience base of the authors. Guidance and recommendations specific to MCNP are available from Los Alamos National Laboratory (LANL).

3.1 NUREG/CR-6655

As noted in Section 3, NUREG/CR-6655 introduces S/U methods for NCS validation. This report predates the TSUNAMI sequences in SCALE, using the original sequence name of SEN1. As is also evident from the use of only one-dimensional (1D) methods in the report, the implementation of sensitivity calculations in the KENO V.a and KENO-VI 3D Monte Carlo codes had not yet been performed. Most of the sensitivity data used in NCS validation today come from 3D Monte Carlo models. Calculation of sensitivity coefficients using adjoint perturbation theory predates its application to NCS validation by decades, so implementation in 3D Monte Carlo codes did not change the theoretical background or many aspects of practical implementation. NUREG/CR-6655 discusses the integral parameter ck and recommends a value in excess of 0.8 for identifying similar experiments. The ck parameter is still used today, and discussion and research toward a robust, quantitative basis for similarity assessment with this parameter continue [25]. More discussion on ck is provided in Sections 4.5 and 5.5.
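Conceptually, ck is a correlation coefficient between the nuclear-data-induced keff uncertainties of two systems: ck = SₐᵀCS_b / √((SₐᵀCSₐ)(S_bᵀCS_b)). The sketch below illustrates the calculation with invented 3-group sensitivities and a diagonal covariance matrix; a real TSUNAMI-IP calculation uses full multigroup, multi-nuclide, multi-reaction data.

```python
import numpy as np

def c_k(s_app, s_bench, cov):
    """Correlation of data-induced keff uncertainties between two systems."""
    num = s_app @ cov @ s_bench
    den = np.sqrt((s_app @ cov @ s_app) * (s_bench @ cov @ s_bench))
    return num / den

# Hypothetical 3-group sensitivity vectors and relative covariance matrix.
s_app = np.array([0.10, 0.20, 0.40])    # safety application
s_bench = np.array([0.12, 0.18, 0.35])  # candidate benchmark
cov = np.diag([3e-4, 5e-4, 8e-4])

print(round(c_k(s_app, s_bench, cov), 3))  # close to 1.0 for similar profiles
```

A system compared with itself yields ck = 1.0 by construction, which is why similarity thresholds such as 0.8 are expressed as a fraction of perfect correlation.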

Other validation approaches and similarity metrics discussed in NUREG/CR-6655 have not achieved wide acceptance. The generalized linear least squares methodology (GLLSM) has been implemented in the TSURFER sequence starting with SCALE 6 and is incorporated into the LANL NCS validation tool Whisper [26]. The GLLSM approach is discussed further in Section 7.1, but significant challenges remain to be solved before GLLSM can be used directly for determining bias and bias uncertainty in NCS validation. NUREG/CR-6655 also makes use of a series of D parameters to quantify the difference between the sensitivity profiles for ν̄, scattering, and absorption between the application and a potential benchmark experiment. These parameters were largely superseded by the E parameter, which itself has never seen widespread use. Naturally, NUREG/CR-6655 is silent on practical guidance for Monte Carlo-based sensitivity coefficient calculations because those methods were still under development.

The details of the multigroup (MG) cross section processing implemented in SCALE for both the TSUNAMI-1D and TSUNAMI-3D sequences have also advanced beyond the treatments described in NUREG/CR-6655.

The sensitivity theory provided in NUREG/CR-6655 remains applicable. No derivation of the sensitivity coefficients or adjoint perturbation equations is necessary here because the discussion provided in Section 2 of NUREG/CR-6655 is still applicable and is the basis for all the sensitivity calculation methods discussed in Section 4.2 of this document. The physical interpretation of sensitivity coefficients and covariance data also remains unchanged. As is generally true in many areas of modeling and simulation, the fundamental principles have remained unchanged for the last 25 years, but more computing power has allowed further development and more comprehensive implementation of these first-principles models.

Volume 2 of NUREG/CR-6655 presents a case study applying S/U tools to NCS validation.

Sensitivity data are generated for a series of critical experiments to be applied to the validation of a UO2 system enriched to 11 wt% 235U. It is interesting that this demonstration application was chosen, given the renewed interest in S/U tools driven by the potential use of increased-enrichment fuel in commercial power plants. The validation demonstration uses trending techniques. Initially, traditional trending parameters such as EALF, H/X, and enrichment are used. Subsequent sections present investigations of trending with ck and D values. Ultimately, the sum of the three D values, denoted Dsum, was chosen as the trending parameter to investigate the efficacy of validation as a function of D value. NUREG/CR-6655 notes that one shortcoming of trending with either D or ck is that the trend must be extrapolated because an exact match to the system itself is not possible with other critical benchmark experiments. This is not necessarily unique to these S/U-based parameters, but experiment selection for traditional parameters usually aims to create a suite of benchmarks whose parameter values bracket the value for the safety analysis model, so that the bias is interpolated. A small extrapolation from a sufficiently large set of sufficiently similar experiments is reasonable, but it can be a challenge to identify benchmarks that are reasonably similar to some systems.

NUREG/CR-6655 concludes that benchmarks with ck values of 0.8 or higher are sufficiently similar for use in validation. The NUREG/CR also recommends having at least 20 such benchmarks, although this recommendation is based on results of GLLSM validation and may not be appropriate for a trending validation approach. Volume 1 of NUREG/CR-6655 concludes that only 10 experiments were needed for a GLLSM bias estimate to converge, but it does not consider parametric or nonparametric validation techniques without trending.

Validation with S/U-based parameters also requires that a trend be generated for each application to be validated. In a traditional trending analysis, multiple application systems can be validated by determining the bias and bias uncertainty values from the trend for different system-specific values of the trending parameter. For the S/U-based parameters, however, the system of interest has the target set of sensitivities, and it is impossible to extract meaningful calculational margin values from that trend for other systems. In reality, this is not a significant burden because the effort to calculate a set of ck values is minimal compared to the effort to calculate the sensitivity data. The subsequent statistical analysis of trending and extrapolating bias and bias uncertainty values is also trivial in comparison.
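As a simplified illustration of trending with an S/U-based parameter, the sketch below fits a linear trend of the calculated-over-expected (C/E) keff ratio versus ck and extrapolates to ck = 1.0, where the application itself would fall. All values are invented, and a real analysis would also quantify the bias uncertainty and apply statistical tolerance limits (e.g., with USLSTATS) rather than reporting a point estimate.

```python
import numpy as np

# Hypothetical validation suite: ck similarity to the application and
# C/E keff ratio for each benchmark experiment (values invented).
ck = np.array([0.82, 0.85, 0.88, 0.91, 0.95, 0.98])
c_over_e = np.array([0.9982, 0.9985, 0.9978, 0.9981, 0.9975, 0.9973])

# Linear trend of C/E versus ck; polyfit returns (slope, intercept) for degree 1.
slope, intercept = np.polyfit(ck, c_over_e, 1)

# Extrapolate the trend to ck = 1.0 to estimate the bias for the application.
bias_at_ck1 = slope * 1.0 + intercept - 1.0
print(f"estimated bias at ck=1: {bias_at_ck1 * 1.0e5:.0f} pcm")
```

The need for this extrapolation, and the fact that the trend is tied to one application's sensitivity profile, are exactly the points raised in the surrounding discussion.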

3.2 Rearden et al., Nuclear Technology Article

The SCALE 6 Nuclear Technology article [21] is currently the most commonly cited reference for a description of the SCALE TSUNAMI tools. It was written in 2009 and 2010 and published in 2011, after the development of the TSUNAMI-1D and TSUNAMI-3D sequences. Significant development and application work was performed in the decade following publication of NUREG/CR-6655, and the TSUNAMI-IP code had been developed and deployed to facilitate uncertainty propagation and similarity determinations. Much of the discussion and many of the recommendations in this article are still applicable today. This article is the reference for much of the theory and application guidance provided in Sections 4 and 5 of this document. The primary shortcoming of Rearden et al. [21] is that it predates the development of CE TSUNAMI-3D; later references document these capabilities [23, 27].

Rearden et al. begins with a brief discussion of the validation process. The TSUNAMI suite of tools is then presented, including TSUNAMI-1D, TSUNAMI-3D, TSUNAMI-IP, TSURFER, TSAR, and USLSTATS. TSUNAMI-1D uses the XSDRNPM 1D discrete ordinates code to perform forward and adjoint keff calculations for the purposes of sensitivity coefficient calculations.

TSUNAMI-3D performs the same function, but with KENO V.a or KENO-VI for 3D Monte Carlo neutron transport. At that time, both of these sequences were only capable of generating sensitivities from MG calculations. Further development added CE sensitivity capabilities to TSUNAMI-3D [27]. TSUNAMI-IP calculates similarity indices such as ck and propagates nuclear covariance data with system sensitivities to calculate the nuclear data-induced uncertainty in keff. The TSUNAMI-3D and TSUNAMI-IP sequences are the most frequently used TSUNAMI tools in NCS validation analysis today. TSURFER is an implementation of the GLLSM validation approach and is discussed in Section 7.1. TSAR is used to calculate reactivity sensitivities that differ from keff sensitivities; this is particularly useful for highlighting the importance of materials used in substitution experiments. These experiments can be static experiments, like LEU-COMP-THERM-079, in which typically small amounts of a material of interest are introduced into a critical assembly or reactor to isolate the impact of the added material. In the case of LEU-COMP-THERM-079, 103Rh foils were introduced to enhance validation of this important fission product. Oscillation measurements have also been performed in which a small sample of a material of interest is cycled into and out of a reactor to allow reactivity measurements associated with the sample. The reactivity sensitivities calculated in TSAR are used within a TSURFER GLLSM analysis, so they are not widely used in the NCS community today. Further discussion of TSAR and reactivity sensitivities is provided in Section 7.2.

The second main section of Rearden et al. introduces sensitivity coefficients. Calculation of sensitivities by directly perturbing the model inputs, a process referred to as direct perturbation (DP), is included. Rearden et al. do not provide guidance on performing DP calculations or criteria for comparing reference DP results with TSUNAMI-calculated sensitivities; a subsequent series of papers published years later provides guidance on performing DP calculations and comparing the sensitivity estimates [28, 29, 30]. A derivation of the adjoint perturbation theory methods used in the TSUNAMI sequences is also provided. It is initially the same as the derivation provided in NUREG/CR-6655 [5], but it provides more explicit equations for the calculation of different sensitivity coefficients. The six specific sensitivity equations provided are capture, fission, scattering, total, ν, and χ. As a reminder, ν is the number of fission neutrons produced per fission, and χ is the energy distribution of neutrons emerging from fission.

The details of the implicit sensitivity treatment necessary for the calculation of a correct sensitivity coefficient within a MG calculation are also presented. MG cross sections must be generated with an appropriate flux spectrum for the system of interest. The impact of changes in the nuclear data (that is, the sensitivity) is thus composed of the explicit change in the transport calculation and the implicit change in the MG cross section. The total of these two, referred to as the complete sensitivity coefficient, is needed for accurate predictions of system sensitivity. The implicit sensitivities are calculated using the BONAMIST code.

The next major area of discussion is uncertainty theory and nuclear covariance data. The discussion covers a number of topics on uncertainties and uncertainty propagation. It also provides a thorough discussion of sources of nuclear covariance data available at the time. A complete description of the SCALE 6 covariance library is provided, along with its sources of data. This library is still in use today as the 44-group covariance library distributed with SCALE [7]. At the time of its release, the library was considered one of the best available, complete, reliable compilations of nuclear covariance data.

Section V in Rearden et al. introduces similarity metrics. The first suggested approach is a simple visual comparison of sensitivity profiles. Different tools have been developed over the years for visual display of sensitivity profiles. NUREG/CR-6655 used a module called PLOT to generate profiles. Javapeno was a Java-based plotting tool developed for SCALE 5 that plotted sensitivity data from the sensitivity data files (SDFs). Fulcrum, which has been the SCALE graphical user interface since the release of SCALE 6.2, is capable of plotting sensitivity data contained in SDFs. Rearden et al. presents a number of quantitative indices developed to provide rigorous similarity assessments. The ck parameter is included, as are metrics for similarity of individual nuclides, including individual ck and g. The index g, also known as little g, integrates the differences in sensitivity profiles between an application and an experiment. This sum is normalized by the total application sensitivity and subtracted from 1 so that numbers near 1 indicate high similarity, and numbers near 0 indicate low similarity. The little g index has not seen wide use in NCS validation, primarily because benchmark selection is expected to be based on system similarity and not individual nuclide similarity. Comparisons of individual nuclides are therefore of little direct applicability. Likewise, although the individual ck index has not been widely used, it can be useful for assessing similarity of a particular nuclide between systems. The individual ck is defined in the same way as the integral index ck, except it only considers shared data-induced uncertainty for a single nuclide.
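The integral index ck is commonly defined as the correlation coefficient of the nuclear data-induced uncertainties of the application and the experiment. A sketch of that definition, with invented sensitivity vectors and a diagonal relative covariance matrix purely for illustration:

```python
import numpy as np

def c_k(S_app, S_exp, C):
    """Integral index ck: correlation coefficient of the nuclear
    data-induced uncertainties of application and experiment."""
    shared = S_app @ C @ S_exp
    return shared / np.sqrt((S_app @ C @ S_app) * (S_exp @ C @ S_exp))

# Invented 3-group sensitivities and a diagonal relative covariance.
C = np.diag([4.0e-4, 9.0e-4, 2.5e-4])
S_app = np.array([0.10, 0.05, 0.25])   # application model
S_exp = np.array([0.12, 0.04, 0.22])   # benchmark experiment
ck = c_k(S_app, S_exp, C)
print(f"ck = {ck:.3f}")   # values near 1 indicate high similarity
```

The same function with a covariance matrix restricted to a single nuclide's data gives the individual ck described above.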

After a brief discussion of validation by trending analysis, the TSUNAMI penalty assessment is introduced. The penalty assessment forms a composite sensitivity profile from all the benchmarks in the validation suite. The construction of this composite profile ignores sensitivity greater than that shown in the application model, and it ignores multiple occurrences of sensitivity in the same reactions and the same energy ranges in different benchmarks. These shortcomings eventually came to be recognized as problematic. It is entirely possible that the bias in a particular nuclide/reaction is nonconservative, so additional sensitivity may lead to a nonconservative result for the benchmark or the entire suite of benchmarks. Because of this, the penalty assessment is no longer recommended. As discussed in Sections 4.4 and 5.4, the currently recommended approach to validation gap assessment relies on uncertainty analysis using the application model sensitivities. This approach has been used in assessing the validation gap penalties needed for fission products and minor actinides in pressurized water reactor (PWR) and boiling water reactor (BWR) burnup credit (BUC) [10, 11].

An extensive exposition of the GLLSM and its implementation in TSURFER follows the gap assessment section in Rearden et al. This is a logical next step because the promise of the GLLSM validation technique, as discussed in Section 7.1, is that relevant information can be extracted from each benchmark experiment, regardless of its similarity to the target application system being validated. Combining this information from a large set of experiments thus provides a more complete view of the biases in all nuclide/reaction pairs. These individual biases can then be broadcast to the target system by propagating the reaction bias estimates with the sensitivities of the safety analysis system. Some practical implementation difficulties remain and will be discussed in Section 7.1.

Rearden et al. also includes a small section discussing the use of S/U tools in the design of critical experiments. The similarity of a critical experiment to a specific target safety analysis model can be greatly enhanced through calculation of sensitivities during the design process. This process can ensure that a useful, similar, and thus applicable experiment is designed and performed. This assessment has been integrated into the experiment design process for the US Department of Energy Nuclear Criticality Safety Program and will likely be leveraged in design of new experiments for advanced reactor fuel forms. Further discussion of S/U tools improving experiment design can be found in Rearden et al. [31] and Clarity et al. [32].

The remaining part of Rearden et al. is mostly dedicated to an example validation exercise for the GBC-32 BUC cask model [33]. The example provides a thorough demonstration of the process, beginning with calculation of sensitivity coefficients for the application model in TSUNAMI-3D, confirmation of their accuracy with DP calculations, examination of uncertainty contributors, and performance of a similarity assessment based on the ck index. A more nuanced view of similarity determination with the ck index is presented based on studies in Broadhead et al. [34]. The recommendation presented in Rearden et al. is that systems with ck values over 0.9 are highly similar to the target application, whereas those with values between 0.8 and 0.9 are marginally similar. The ck cutoff used in the demonstration was 0.7, which allowed a larger number of critical benchmarks to be used in the trending analysis. It does not appear that the use of 0.7 is endorsed; instead, it was adopted as a convenience for the application and the experiments available. A penalty assessment is also demonstrated. Finally, bias assessments using TSURFER and TSURFER with TSAR data are presented.

The article concludes with a brief discussion of the availability of sensitivity data and a general conclusion. At the time of the publication of Rearden et al., the primary source of available sensitivity data was generated at Oak Ridge National Laboratory (ORNL) using the Verified Archived Library of Inputs and Data (VALID) [35], originally known as the archive of Models and Derived Data (MADD). The NEA subsequently developed sensitivity data for most of the configurations in the ICSBEP Handbook [36], so sensitivity data are available for a significant number of critical benchmark experiments using the Database for the International Criticality Safety Benchmark Experiments (DICE). DICE is distributed with the ICSBEP Handbook. These sensitivity data and their use are discussed further in Section 5.6.

3.3 TSUNAMI Primer

The TSUNAMI Primer [22] was written in 2008 and published in early 2009. It contains much of the same guidance as Rearden et al. [21], given that it was generated at about the same time and by the same authors. However, the focus of the primer is different because its purpose is to provide detailed, step-by-step instructions for generating input and performing analysis. Even the demonstration portion of Rearden et al. does not provide the detailed guidance analysts need to use the tools. There is almost no theory discussion and few recommendations. Many interfaces presented and discussed in the primer were eliminated, and their functionalities were incorporated into the Fulcrum interface in SCALE 6.2.

One expanded area of discussion in the primer that is lacking from Rearden et al. is the extended ck edit from TSUNAMI-IP. This option generates a table in the output, providing the ck contribution from each nuclide/reaction pair in the calculation. The individual ck contributions will sum to the integral index ck. The ck contributions are not uncertainties and therefore are combined using simple addition and not uncertainty propagation rules. The table also includes individual ck values for each nuclide/reaction pair. The extended ck table can be very useful in understanding the contributing reactions to system similarity.
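Because the covariance data are additive over nuclide/reaction pairs, the per-pair contributions in the extended ck edit necessarily sum to the integral index by plain addition, as the following sketch illustrates (the nuclide labels, sensitivities, and covariances are invented, and the full covariance is taken as block-diagonal for simplicity):

```python
import numpy as np

# Invented data: two nuclide/reaction "blocks", each with its own
# relative covariance; the full covariance is block-diagonal here.
S_app = np.array([0.10, 0.05, 0.25, -0.08])
S_exp = np.array([0.12, 0.04, 0.22, -0.07])
blocks = {"U-235 fission": (slice(0, 2), np.diag([4.0e-4, 9.0e-4])),
          "U-238 capture": (slice(2, 4), np.diag([2.5e-4, 6.0e-4]))}

C = np.zeros((4, 4))
for sl, Cb in blocks.values():
    C[sl, sl] = Cb
norm = np.sqrt((S_app @ C @ S_app) * (S_exp @ C @ S_exp))

# Each block's shared-uncertainty contribution to ck.
contrib = {name: (S_app[sl] @ Cb @ S_exp[sl]) / norm
           for name, (sl, Cb) in blocks.items()}
ck = (S_app @ C @ S_exp) / norm

# Simple addition of the contributions reproduces the integral index.
assert abs(sum(contrib.values()) - ck) < 1e-12
```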

3.4 Jones Thesis

A master's thesis published by E. L. Jones [23] provided the first user guidance on CE TSUNAMI-3D calculations [27]. The thesis presents a set of 11 benchmark experiments from the ICSBEP Handbook [14] and identifies sets of parameters that result in acceptable sensitivity calculations in TSUNAMI-3D for each experiment. CE TSUNAMI-3D is discussed in more detail in Section 4.2.2. Two CE TSUNAMI-3D methods have been implemented: the Iterated Fission Probability (IFP) method and the contributon-linked eigenvalue S/U estimation via track-length importance characterization (CLUTCH) method. The CLUTCH method is the CE TSUNAMI method investigated in Jones's thesis, alongside the established MG method.

The recommendations of Jones's thesis include running CLUTCH in parallel to reduce the wall time associated with the large runtime requirements of CE TSUNAMI calculations. Sensitivity calculations are slower than forward keff calculations, and CE calculations generally require more runtime than MG calculations. The F*(r) importance function used in CLUTCH is tabulated during the skipped generations, so Jones recommends a much larger number of these generations than is typically required for source convergence. This is somewhat mitigated by the goal of running large generations to improve the efficiency of parallel calculations. Several simulations skipped 500 generations, and some skipped 1,000 or even 2,000 generations.

Generations were 10,000 to 200,000 histories each, representing a huge number of discarded histories invested in calculating reliable F*(r) functions. Jones also recommends using the FST=yes option to generate a 3dmap file containing the F*(r) function for visualization with Fulcrum. This is a helpful diagnostic step in assessing the reasonableness of the importance function. The other important feature of the F*(r) function is the mesh on which it is tabulated. Jones typically used voxels that were 12 cm on a side. In some models, the axial dimension was increased slightly if there were no noticeable axial gradients.

It is important to note that there is no evidence that Jones attempted to arrive at optimum parameters. Rather, the reported parameters were used and resulted in acceptable agreement between the TSUNAMI-3D and reference DP sensitivities. It is likely that less extreme parameters could be identified to reduce the computational burden of individual CLUTCH calculations, but the search for such parameters would itself require a significant expansion in the total computational burden.

3.5 Summary

In summary, two guidance documents are readily available that discuss the theory and recommended application of S/U methods to NCS validation. Both NUREG/CR-6655 [5, 6] and the Rearden et al. Nuclear Technology article [21] were comprehensive at the time but are now incomplete because of developments since their publication. The TSUNAMI Primer [22] provides guidance on using TSUNAMI tools in SCALE 6, but updates to the interfaces and available methods have also recast many of the details in that document. The recommendations from Jones in her thesis are still applicable, but they may yield slightly longer runtimes than possible with optimized parameters.

4 THEORETICAL ASPECTS OF S/U ANALYSIS APPLIED TO NCS VALIDATION

Analysts performing any modeling and simulation activity should be familiar with the theoretical underpinnings of their applications and tools, and S/U tools are no exception. A thorough understanding of the relevant theories can help identify unanticipated aberrant results. A working understanding of the tools can also lead to more efficient application of the tools, allowing greater focus on the analysis aspects of the work. The theoretical discussions here are targeted for a user or analyst and not for a code or methods developer. As discussed in Sections 3.1 and 3.2, the derivation of the adjoint perturbation theory equations used to calculate sensitivity coefficients is available in the literature [5, 7, 24, 27]. Interested individuals seeking these additional details are directed to these sources for the relevant information.

4.1 Sensitivity Coefficients, Adjoint Perturbation Theory, and Nuclear Data

The first concepts that are prerequisites to any meaningful discussion of the implementation of S/U techniques are sensitivity coefficients, adjoint perturbation theory, and nuclear data. The sensitivity coefficients are extremely useful for understanding system behavior, propagating uncertainties from nuclear data, and assessing system similarity. Defining the sensitivity coefficients and specifying some terminology is essential for understanding the details that follow. Adjoint perturbation theory is the basis for the calculation of sensitivity coefficients in MG and CE, so it must also be introduced to support understanding of the implementation discussions presented in Section 4.2. Furthermore, the sensitivity coefficients are related to changes in nuclear data, and the uncertainties are propagated from the nuclear data. Therefore, a brief discussion of the relevant aspects of nuclear data is also important here.

4.1.1 Sensitivity Coefficients

A sensitivity coefficient is conceptually simple. It describes the change in some system parameter that results from a change to some system input. The sensitivity coefficients used in NCS applications are almost always the sensitivity of keff to a change in some particular nuclear data. Other sensitivity coefficients can be determined; see Section 7.2 for a discussion of reactivity coefficients. These other sensitivity coefficients will always be explicitly identified in this work. In other words, the default sensitivity coefficient being discussed is the sensitivity of keff to a change in nuclear data.

The physical interpretation of a sensitivity coefficient is straightforward. The sensitivity coefficient is the change in system keff caused by a change in data. Although this could be expressed simply as the ratio of the change in keff to the change in nuclear data, in practice it is more useful to express the sensitivity via a dimensionless ratio, as shown in Eq. (1):

S = (Δk/k) / (Δσ/σ)   (1)

where S is the sensitivity coefficient, Δk is the change in keff, k is the unperturbed system keff, Δσ is the nuclear data perturbation, and σ is the unperturbed nuclear data value.

As an example, if a sensitivity coefficient equals 0.2, and the relative change in the macroscopic cross section is 0.5%, then the expected change in keff would be 0.2 × 0.005 = 0.001 Δk/k. In a critical system with unperturbed keff equal to 1, the result of this change is 100 pcm.
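The worked example can be reproduced in a few lines (a trivial sketch of Eq. (1) used in the forward direction):

```python
# Sensitivity of 0.2 and a 0.5% relative cross-section change,
# per the worked example above.
S = 0.2
rel_data_change = 0.005                 # 0.5%
rel_k_change = S * rel_data_change      # expected dk/k
pcm = rel_k_change * 1.0e5              # 1 pcm = 1e-5 dk/k
print(f"{pcm:.0f} pcm")                 # prints "100 pcm"
```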

In this document, the terms sensitivity and sensitivity coefficient are essentially interchangeable. This is generally reflective of the use of the terms in the domestic NCS community. In some communities, usage differs slightly: the sensitivity is the overall change in the output parameter of interest, such as the change in keff, and the sensitivity coefficient is created by dividing the change in the output parameter of interest by the change in the input parameter of interest so that it is a relative change. The definition provided in Eq. (1) is the sensitivity coefficient.

Sensitivity coefficients can be positive or negative, and the sign of the sensitivity may change at different energies. Processes that increase keff, such as fission, have positive sensitivities, whereas reactions such as capture have negative sensitivities. An increase in an absorption cross section will lower keff, so the negative sign of the sensitivity coefficient is logical and consistent with its definition. The energy-dependent total sensitivity coefficients for 235U and 238U in the LEU-COMP-THERM-042-004 benchmark are shown in Figure 4-1. Note that the 235U sensitivity is positive nearly everywhere, although there is a small negative component in the resonance region. The 238U sensitivity coefficient is more interesting. It is positive at high energies where 238U experiences fission, and it is negative throughout the resonance region and thermal region where radiative capture is the dominant reaction.

Figure 4-1 Energy-Dependent Total Sensitivity Profiles for 235U and 238U in LEU-COMP-THERM-042-004

Sensitivity coefficients can also be simply added, so multiple reactions can be summed for a total sensitivity coefficient for a nuclide, and multiple nuclide sensitivity coefficients can be added to determine the sensitivity coefficient for an entire material or mixture. For example, the elastic scattering, radiative capture, and total sensitivities for 1H in the water around the fuel rods in the LEU-COMP-THERM-042-004 benchmark are shown in Figure 4-2.

Figure 4-2 Energy-Dependent Sensitivity Profiles for Moderator 1H in LEU-COMP-THERM-042-004

A large amount of detailed physics information is illustrated in Figure 4-2. First, it is clear that the system (1) is highly sensitive to 1H elastic scattering above approximately 0.3 eV, and (2) is essentially insensitive to radiative capture above 1 eV. The sensitivity to the elastic scattering reaction is generally positive, but there is a point below 0.1 eV at which it becomes negative. At these low energies, the cross sections for most absorption reactions increase proportionally with the inverse of neutron velocity (1/v), so further energy loss increases the probability of absorption. Perhaps the most striking aspect is the appearance of resonance features in the 1H scattering sensitivity. A review of the 1H elastic scattering cross section shown in Figure 2-1 clearly shows that there are no resonances, which is as expected because the 1H nucleus is a single proton. These resonance features demonstrate that if a neutron has an elastic scatter at these specific energies, then it will downscatter and escape an absorption resonance in some other nuclide. In the case of the LEU-COMP-THERM-042-004 sensitivities shown in Figure 4-2, 238U resonances create these features. The elastic scattering sensitivity is strongly influenced by the resonant absorbers present in the system.

The energy-dependent sensitivity for a particular reaction can be integrated over energy to determine the integral sensitivity for that reaction. The integrated total sensitivity for 235U in LEU-COMP-THERM-042-004 shown in Figure 4-1 is 0.242, whereas for 238U, it is -0.140. The integral total sensitivity coefficient for 1H in water around the fuel rods is 0.240, which is almost as high as the primary fissile species in the model. This indicates an undermoderated lattice because increases in moderation increase keff. An overmoderated system would have a negative sensitivity coefficient, and the sensitivity of a system at optimum moderation would be near zero because small changes in the moderation would have little impact on the system.
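Both additivity properties can be demonstrated numerically: reaction profiles sum to a nuclide total, and the energy-dependent profile sums (integrates) to the integral sensitivity. A sketch with an invented three-group profile for a moderator nuclide:

```python
import numpy as np

# Invented 3-group sensitivity profiles (fast -> thermal) for one nuclide.
elastic = np.array([0.020, 0.060, 0.150])
capture = np.array([-0.001, -0.004, -0.010])

total = elastic + capture       # reaction profiles sum to the nuclide total
integral = float(total.sum())   # energy integration is a simple group sum

# A positive integral sensitivity for a moderator nuclide indicates an
# undermoderated system: added moderation raises keff.
print(f"integral total sensitivity: {integral:+.3f}")
```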

The intuitive process for calculating a sensitivity coefficient is to make a change in the input parameter of interest and to rerun the model. The change in the relevant output parameter is the sensitivity to the modified input. Sensitivity calculations of this sort are performed routinely in NCS analyses to quantify the impact of uncertainty parameters such as manufacturing tolerances. This approach is simple and direct, but it provides a limited amount of insight because only the integral result can be determined. However, because of its reliability, this approach is the reference method for calculating sensitivity coefficients.

This reference methodology is referred to as a direct perturbation, as described in Section 5.1.

Changing the input number density for a nuclide, element, or mixture has the same impact in the model as changing the total cross section for that species by perturbing the macroscopic total cross section. The sensitivity coefficient is calculated by dividing the change in keff by the change in the number density of interest, as shown in Eq. (2):

S_DP = (N/k) × [(k+ − k−) / (N+ − N−)]   (2)

where S_DP is the direct perturbation sensitivity coefficient, N is the input quantity, typically a number density, k is the calculated keff value, N+ is the positive (increased) perturbation of the number density, N− is the negative (decreased) perturbation of the number density, k+ is the calculated keff value for the positive perturbation, and k− is the calculated keff value for the negative perturbation.

Recommendations for what species to consider for DP calculations, the magnitude of the perturbation, and the number of explicit calculations to perform are provided in Section 5.1.
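A direct perturbation evaluation per Eq. (2) is easy to script. The sketch below uses invented keff values and, as an added assumption not taken from Eq. (2) itself, propagates the Monte Carlo uncertainties of the two perturbed keff values in the usual quadrature fashion:

```python
import math

def dp_sensitivity(n0, k0, n_plus, k_plus, n_minus, k_minus,
                   sig_plus=0.0, sig_minus=0.0):
    """Central-difference direct perturbation sensitivity per Eq. (2),
    with quadrature propagation of the Monte Carlo sigmas of the two
    perturbed keff values (the propagation is an added assumption)."""
    s = (n0 / k0) * (k_plus - k_minus) / (n_plus - n_minus)
    sig_s = (n0 / k0) * math.hypot(sig_plus, sig_minus) / (n_plus - n_minus)
    return s, sig_s

# Invented example: +/-2% number density perturbations around n0 = 1.0.
s, sig = dp_sensitivity(1.0, 1.0000, 1.02, 1.0010, 0.98, 0.9990,
                        sig_plus=1.0e-4, sig_minus=1.0e-4)
print(f"S_DP = {s:.4f} +/- {sig:.4f}")
```

The quoted uncertainty makes clear whether a TSUNAMI-calculated sensitivity agrees with the DP reference within statistics, which is the comparison discussed in Section 5.1.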

Detailed sensitivity coefficients with respect to specific reactions as a function of energy are more useful, although they are more difficult to calculate. These sensitivities could be calculated by perturbing the nuclear data before providing it to the transport code, but a huge number of such perturbations would be needed to generate the sensitivity profiles for all energies of all reactions of all nuclides. The alternative is to perform a more complicated calculation using adjoint perturbation theory to determine the sensitivity coefficients for all reactions or all nuclides at all energies simultaneously.

Several methods for calculating sensitivity coefficients have been developed and implemented; the SCALE implementations of these methods are reviewed in Section 4.2. The sensitivity coefficients are representative of the system being modeled, and if they are correct, then they are invariant to the method used to generate them. This means that sensitivity data from different methods can be compared directly among different codes or methods.

One final note on sensitivity coefficients relevant here is the separation of implicit and explicit components of the complete sensitivity coefficient for MG calculations. The implicit sensitivity coefficient, mentioned briefly in Section 3.2, is a result of changes in nuclear data impacting the neutron slowing down solution used to weight the MG cross sections used in transport. The explicit portion of the sensitivity coefficient is the impact of changing the nuclear data in the transport solution. The impact of the perturbed cross section on the flux solution of the transport calculation itself is not considered; this is reasonable because the entire process assumes that the cross section perturbations are small. The term complete sensitivity coefficient is used for the combination of the two components to avoid confusion with the total sensitivity coefficient, which denotes the sensitivity to the total cross section.

Generally, users will not see the two components, but MG TSUNAMI sequences can provide a table in the output with the data. Rearden et al. [21] notes that the implicit sensitivity can be up to 40% of the complete sensitivity coefficient. Resonant absorbers such as 238U typically have large implicit contributions caused by the large impacts of the resonances in the MG cross sections. Fast neutron energy spectrum systems tend to have smaller implicit sensitivities because few neutrons slow down into the resonance region.

The implicit sensitivity is calculated in BONAMIST for MG TSUNAMI sequences. Fundamentally, the chain rule is used to follow the change in one reaction for one nuclide to others. The result of this calculation is the sensitivity in nuclide j with respect to changes in the data for nuclide i. The explicit sensitivity coefficient for nuclide i can then be used to determine the magnitude of the implicit sensitivity coefficient for nuclide j. Summing the implicit and explicit sensitivity coefficients for nuclide j results in the complete sensitivity coefficient. The calculation of the explicit sensitivity coefficients, propagation with the implicit coefficients, and summing to form the complete sensitivity coefficients all occur in the Sensitivity Analysis Module for SCALE (SAMS) with the MG TSUNAMI sequences.

CE sensitivity calculations do not have an implicit sensitivity coefficient because the pointwise nuclear data are used directly, and no average cross sections are generated or used.

4.1.2 Adjoint Perturbation Theory

Adjoint perturbation theory can be used to estimate the impact of small changes on system keff [34]. One important limitation of this approach is that the nuclear data perturbations must be small [21], although a strict definition of small is impossible to state in a manner that is applicable to all scenarios. Fundamentally, this is one reason that sensitivities calculated with adjoint perturbation theory should be confirmed with reference DP calculations, as discussed in Section 5.1. This limitation holds for MG and CE sensitivity calculations because the same theory is the basis for both approaches.

A derivation of the adjoint perturbation theory equations for the keff sensitivity coefficients is not reproduced here but is available in Rearden et al. [21], Broadhead et al. [34], and in a simplified form in Volume 1 of NUREG/CR-6655 [5]. Equation (12) from Rearden et al. is repeated here as Eq. (3) to facilitate discussion.

S_{k,Σx}(ξ) = (Σx(r)/k) ∂k/∂Σx(r) = −(Σx(r)/k) ⟨φ†(ξ) ([∂A(ξ)/∂Σx(r)] − (1/k)[∂B(ξ)/∂Σx(r)]) φ(ξ)⟩ / ⟨φ†(ξ) (1/k²) B(ξ) φ(ξ)⟩   (3)

where S_{k,Σx} is the keff sensitivity coefficient of a particular cross section Σx, A is the operator for all reactions except fission in the Boltzmann transport equation, B is the fission operator in the Boltzmann transport equation, φ is the forward neutron flux, φ† is the adjoint neutron flux, r is the position, ξ is the phase-space vector, and angle brackets denote integration over all variables.

Further discussion of how Eq. (3) is integrated into the MG TSUNAMI sequences is provided in Rearden et al. [21] and is briefly reviewed in Section 4.2.1. The important consideration at this point in the discussion is that with adjoint perturbation theory, once the operators and fluxes are known, the sensitivity for any reaction in any energy group can be determined. This is the power of adjoint perturbation methods: reasonable estimates for the impact of small changes in any piece of nuclear data can be generated from the results obtained by solving the forward and adjoint transport equation.
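This property can be made concrete with a toy two-group, infinite-medium eigenvalue problem in which small matrices stand in for the transport operators A and B of Eq. (3). All constants below are invented for illustration; after one forward and one adjoint solve, the sensitivity to any datum follows from the perturbation formula and can be checked against a direct recalculation:

```python
import numpy as np

# Toy 2-group infinite-medium problem; all constants are invented.
# A = removal (absorption + downscatter), B = fission production,
# standing in for the transport operators of Eq. (3).
def operators(sig_a, sig_s12, nu_sig_f, chi):
    A = np.array([[sig_a[0] + sig_s12, 0.0],
                  [-sig_s12,           sig_a[1]]])
    B = np.outer(chi, nu_sig_f)        # chi_g * nu-sigma_f,g'
    return A, B

sig_a = np.array([0.010, 0.080]); sig_s12 = 0.020
nu_sig_f = np.array([0.005, 0.125]); chi = np.array([1.0, 0.0])
A, B = operators(sig_a, sig_s12, nu_sig_f, chi)

# Forward A phi = (1/k) B phi and adjoint A^T phi+ = (1/k) B^T phi+.
w, V = np.linalg.eig(np.linalg.solve(A, B))
i = np.argmax(w.real); k = w.real[i]; phi = V[:, i].real
wa, Va = np.linalg.eig(np.linalg.solve(A.T, B.T))
phi_adj = Va[:, np.argmax(wa.real)].real

# Sensitivity of keff to thermal absorption via the Eq. (3) analogue.
dA = np.array([[0.0, 0.0], [0.0, 1.0]])   # dA/d(sig_a, thermal)
num = phi_adj @ dA @ phi                  # dB/d(sig_a) = 0 here
den = phi_adj @ (B / k**2) @ phi
S = -(sig_a[1] / k) * num / den

# Confirm with a direct recalculation (forward finite difference).
eps = 1.0e-4
Ap, Bp = operators(sig_a + np.array([0.0, eps]), sig_s12, nu_sig_f, chi)
kp = np.max(np.linalg.eigvals(np.linalg.solve(Ap, Bp)).real)
S_dp = (sig_a[1] / k) * (kp - k) / eps
print(f"adjoint S = {S:.4f}, direct perturbation S = {S_dp:.4f}")
```

The two estimates agree to within the first-order finite-difference error, mirroring the DP confirmation process recommended in Section 5.1, and the same forward and adjoint fluxes would serve for any other cross section perturbation.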

Implementation of the perturbation theory capabilities into the transport codes was a significant effort. Several years and millions of dollars were invested from the late 1990s into the early 2000s to add these capabilities into SCALE in the MG TSUNAMI sequences. Significant development was required again in the 2010s to develop and implement the CE TSUNAMI-3D methodologies. Direct perturbation is an inexpensive, easy way to examine a few coarse sensitivities, whereas adjoint perturbation methods are expensive, but they provide comprehensive, detailed sensitivity information for the system being evaluated.

4.1.3 Nuclear Data

Nuclear data can be complicated, but fortunately, a cursory understanding is sufficient for most analysts, even within S/U applications. Three types of nuclear data generally relevant to S/U analyses are reaction cross sections, neutron multiplicity (ν), and the fission neutron energy spectrum (χ). Neutron multiplicity and the fission neutron spectrum are distributions and are not probabilities. The detailed equations for calculating the sensitivities to each of the different reaction cross sections and distributions are provided in Rearden et al. [21]. A limited number of points can be made here about the best-estimate data. The covariance data are discussed in Section 4.3.

The sensitivities of the two distributions ν and χ cannot be confirmed by manual DP calculations in the same way that the reaction cross sections can. The DP reference solutions can be calculated by modifying the number density because this perturbs the macroscopic cross section in a manner equivalent to perturbing the microscopic cross section. The neutron multiplicity, the number of neutrons emitted per fission, and the energy distribution of those fission neutrons cannot be manipulated in the user input. These sensitivities are omitted from the calculation of the total sensitivity for fissionable nuclides in TSUNAMI. This is consistent with the definition of the total sensitivity as the sensitivity to the total cross section, and it means that the ν and χ sensitivities do not need to be removed from the total sensitivity before it is compared with results of DP calculations. This is a convenient benefit for the user.

The original implementation of the calculation for the χ sensitivity was unconstrained: that is, it was not subject to any normalization limitation. Because χ is the energy distribution of fission neutrons, it must integrate to one. An unconstrained χ result can be similar to the ν sensitivity because it essentially acts to increase or decrease the number of neutrons emitted from a fission event. This is clearly nonphysical, so a constrained χ sensitivity was developed that targets an integrated keff sensitivity of zero [21]. This constraint was first developed for the SAGEP code [37].

Sensitivities rarely change significantly with different nuclear data libraries. Although the libraries contain updated cross sections and distributions that can have noticeable impacts on calculated keff, those changes generally do not change the system response to changes in the nuclear data. The best published study of this phenomenon is in Section 4.3.1 of Greene and Marshall [38].

4.2 TSUNAMI Implementation of keff Sensitivity Methods

This section provides additional details on the implementation of keff sensitivity coefficients in the TSUNAMI sequences within SCALE [7]. Additional details of more interest to developers are available in the referenced material, particularly in Rearden et al. [21] for the MG methods, and in Perfetti [27] for the CE TSUNAMI-3D methods. The focus of this discussion is to provide sufficient background for users to understand the strengths and potential pitfalls to help assess the accuracy of the results of TSUNAMI sensitivity coefficient calculations. Ultimately, all TSUNAMI calculations should be confirmed by comparison to DP calculations. Practical recommendations for this DP comparison process are provided in Section 5.1.

4.2.1 Multigroup Methods
There are two SCALE MG sequences for calculating keff sensitivity coefficients that are potentially relevant to NCS analysis: TSUNAMI-1D and TSUNAMI-3D. The theoretical underpinnings are largely the same for both approaches, but the implementations differ somewhat given the differences in the transport codes used. TSUNAMI-1D incorporates the XSDRNPM 1D discrete-ordinates solver, and TSUNAMI-3D uses the KENO V.a or KENO-VI 3D Monte Carlo solvers. SCALE also deploys a TSUNAMI-2D sequence that uses the NEWT two-dimensional (2D) discrete-ordinates code, but the primary application for this sequence is reactor physics, so it is not discussed here. The MG sensitivity methods have not been implemented in the Shift 3D Monte Carlo code in SCALE 6.3 because Shift does not have a solver for the adjoint keff problem [7]. The available TSUNAMI-1D and TSUNAMI-3D sequences are discussed here, with practical analysis suggestions provided in Section 5.2.

4.2.1.1 TSUNAMI-1D
Conceptually, the calculation of keff sensitivity coefficients in TSUNAMI-1D flows directly from the perturbation theory equation provided in Eq. (3). TSUNAMI-1D uses the XSDRNPM discrete-ordinates transport solver to calculate the operator terms and the fluxes. Before the transport solutions are performed, MG cross section processing is performed with XSProc.

Two separate XSDRN calculations are then performed by the sequence: a forward calculation and an adjoint calculation. BONAMIST calculates the derivatives needed to determine the implicit sensitivity coefficients. The forward and adjoint fluxes and the derivatives calculated in BONAMIST are then provided to SAMS, which calculates the sensitivities. The sensitivities are stored in an SDF in the TSUNAMI/A format because there are no stochastic uncertainties associated with the sensitivities. Section 9.1 of the SCALE manual [7] presents the XSDRNPM code, Section 6.1 addresses the TSUNAMI-1D sequence, and Section 6.3 describes SAMS.

The TSUNAMI-1D sequence has limited applicability because it can model only systems that are describable as 1D models: primarily, spherical systems in NCS applications. A number of benchmark experiments involving fast metal systems, as well as a few solution benchmarks, can be accurately modeled with TSUNAMI-1D. One published comparison of TSUNAMI-1D and TSUNAMI-3D, specifically for the HEU-MET-FAST-028 (Flattop) benchmark, is provided in Marshall et al. [39].

4.2.1.2 TSUNAMI-3D
The calculation of keff sensitivity coefficients in TSUNAMI-3D also follows from Eq. (3), but another layer of implementation is used to gather the necessary fluxes as a function of location, energy, and angle in a 3D Monte Carlo transport code. The code development to allow this is the tallying of flux moments in KENO V.a and KENO-VI to capture the directional dependence of the flux through the spherical harmonics approximation [21]. This means that Legendre moments are calculated; a higher-order calculation should be more accurate, whereas a lower-order calculation requires less computational memory. The large increases in run-time and memory associated with these tallies have historically been challenges when performing TSUNAMI-3D calculations. Recent increases in computational power, especially in available memory, have largely reduced this burden.

The tally implementation in the KENO codes was not originally implemented to facilitate this sort of large-scale flux tabulation. Flux is collected in the code at the level of units and regions. In practice, this means that a single material region is represented by a single average flux value.

The unit level restriction becomes particularly relevant in repeated structures such as arrays. A single average flux value will be tallied by KENO across all instances of a unit in a model. For instance, if 264 fuel rods are modeled in an array with the same unit, then KENO only tallies one average value across those 264 regions. This leads to poor results in many systems, so a mesh tally capability was added to KENO to separate these repeated regions into separate voxels [7]. Optimization of this user-specified mesh is discussed in Section 5.2.2.

As with TSUNAMI-1D, in the TSUNAMI-3D sequence the forward and adjoint fluxes are tabulated during two transport calculations. These data are provided to SAMS on the KENO restart files and are used to calculate the sensitivity coefficients. XSProc provides cross-section processing for the transport calculations, and BONAMIST determines the derivatives necessary for SAMS to calculate the implicit portion of the complete sensitivity coefficient. The sequence generates a TSUNAMI/B-formatted SDF containing the sensitivities, and it also generates estimates of the stochastic uncertainty in these sensitivity coefficients. Relevant sections of the SCALE manual [7] are Section 8.1 for the KENO Monte Carlo codes, Section 6.2 for the TSUNAMI-3D sequence, and Section 6.3 for SAMS. The majority of benchmark and application models developed for NCS applications use the TSUNAMI-3D sequence.

4.2.2 Continuous-Energy Methods
There are two CE methods for calculating keff sensitivity coefficients starting with SCALE 6.2 [27]: the IFP method and the CLUTCH method. The IFP method was originally developed by Kiedrowski [40] and deployed in MCNP. CLUTCH was developed by Perfetti [27] for inclusion in SCALE. The IFP method has also subsequently been implemented in the Shift Monte Carlo code with the release of SCALE 6.3 [41]. As with the MG methods presented in Section 4.2.1, a brief synopsis of the relevant theory behind each of the CE sensitivity coefficient methods is presented here. Interested readers are referred to the references for more details.

The CE methods do not directly solve the adjoint transport equation because of complexities in the adjoint radiation transport physics [40]. Because the adjoint is not available, other methods must be used to estimate the importance of events in the system. The difference in the proxies for the adjoint is the primary difference between the IFP and CLUTCH methods.

Both CE methods are implemented only in TSUNAMI-3D using 3D Monte Carlo transport calculations. The sensitivity coefficients are calculated directly in the Monte Carlo code and not in a subsequent module such as SAMS. As mentioned in Section 4.1.1, there is no implicit contribution to the sensitivity calculation because the pointwise data are used directly. Both methods generate a TSUNAMI/B-formatted SDF which contains estimates of the stochastic uncertainties in the sensitivity coefficients. The sensitivity edits in the TSUNAMI-3D output are generated by SAMS, so they have the same output format as the MG TSUNAMI-3D sequence.

4.2.2.1 Iterated Fission Probability
The importance of an event in the IFP method is derived from the number of neutrons that are descendants of that event in the asymptotic neutron population [40]. In other words, the progeny are tracked to the end of time after an event has occurred. The fraction of the asymptotic neutron population descended from the event is representative of the importance of the original event. This is a logical and physically intuitive interpretation, but it is not practical to implement in radiation transport simulations. Fortunately, the asymptotic population can be estimated reasonably accurately after only a few subsequent neutron generations are simulated. These generations between the event of interest and the assessment of the neutron population are referred to as latent generations. The number of latent generations is a user-input parameter in all implementations of the IFP method [7, 24], although there are terminology differences. The SCALE input parameter is CFP= and refers only to the number of latent generations [7]. The MCNP input parameter is blocksize and encompasses the number of latent generations, the reference generation, and the asymptotic generation; blocksize is therefore CFP+2 for an equal number of latent generations.

The number of latent generations necessary to reach a reasonable estimate of importance, and thus also of sensitivity, is a system-dependent parameter. Generally, a larger number of latent generations will result in a more accurate sensitivity estimate, but the stochastic uncertainty will be larger. The increased uncertainty is a result of fewer progeny surviving all the latent generations to tally in the sensitivity result. In many systems, 5–10 latent generations are required, and some guidance from developers indicates that 20 generations are generally sufficient for all systems [7, 40, and 41]. There is some evidence that in some benchmark and application systems, the IFP-calculated sensitivity coefficients are still changing after 30 or more latent generations [42]. More discussion of this parameter is provided in Section 5.3.1.
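The bookkeeping between the SCALE and MCNP parameters, together with a simple convergence check on the latent-generation count, can be sketched as follows. The converged helper is purely illustrative and is not a feature of either code; it asks whether successive sensitivity estimates agree within their combined stochastic uncertainties:

```python
import math

def mcnp_blocksize(latent_generations):
    """MCNP's blocksize counts the latent generations plus the
    reference and asymptotic generations, so blocksize = CFP + 2."""
    return latent_generations + 2

def converged(sens_by_latent, sigma_by_latent, n_sigma=2.0):
    """Illustrative convergence heuristic: successive IFP sensitivity
    estimates (e.g., tallied with 5, 10, 20 latent generations) agree
    within n_sigma combined stochastic uncertainties."""
    pairs = list(zip(sens_by_latent, sigma_by_latent))
    for (s1, u1), (s2, u2) in zip(pairs, pairs[1:]):
        if abs(s2 - s1) > n_sigma * math.hypot(u1, u2):
            return False
    return True

# Hypothetical sensitivities tallied with increasing latent generations:
print(mcnp_blocksize(10))  # SCALE CFP=10 corresponds to blocksize 12
print(converged([0.310, 0.305, 0.304], [0.004, 0.005, 0.006]))
```

Note that the uncertainties grow as the latent-generation count increases, reflecting the loss of surviving progeny described above.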

The IFP method contains few assumptions or approximations and generally yields accurate results. The number of latent generations is a tunable parameter that impacts the sensitivity coefficients, however, so results should still be confirmed with DP calculations. The user input is very simple because the only parameter relevant to the sensitivity calculation that was not already present in the keff calculation is the number of latent generations. The primary drawback of the IFP method is the large memory requirement to track all region-, isotope-, reaction-, and energy-dependent information throughout the latent generations. This memory requirement increases with the number of latent generations in the SCALE implementation, which acts as another constraint on the number of latent generations that can be considered in a calculation.

Parallel calculations using the IFP method are not supported in KENO in SCALE 6.2 or 6.3, but this capability is supported by Shift [7]. This has obvious implications for reducing lengthy runtime requirements for the IFP calculation and for distributing the memory load across processors.

4.2.2.2 CLUTCH
The CLUTCH method calculates the sensitivities by calculating an importance function, F*(r), which represents the expected importance in the system of a neutron generated at point r. This importance is applied to all the events in the subsequent neutron history, thus allowing the calculation of sensitivity coefficients for all reactions and not just fission. CLUTCH calculates the F*(r) function by estimating the unconstrained sensitivity using the IFP method during the inactive cycles of the neutron transport simulation [7]. F*(r) is also accumulated on a Cartesian mesh and not as a continuous function, so all fission chains originating in a voxel are assigned the same importance value. Recommendations for the number of skipped generations, the number of latent generations, and the F*(r) mesh are discussed in Section 5.3.2.

CLUTCH has a more complicated derivation and is more complex to use, but it offers some advantages relative to IFP. The primary advantage is that its memory use is significantly lower because the importance of events has been pre-tabulated. This eliminates the requirement to retain all the collision information through all the latent generations in the calculation. The calculation of F*(r) itself does not generally require significant memory.

Parallel calculations are supported in CLUTCH in KENO in both SCALE 6.2 and 6.3 [7]. The implementation of CLUTCH in Shift, specifically the tabulation of the F*(r) function, was not completed in SCALE 6.3 and should not be used.

4.3 Nuclear Covariance Data
The "uncertainty" in sensitivity/uncertainty methods comes from nuclear covariance, or uncertainty, data. Covariance is the technically rigorous and correct term for these uncertainties because they often include correlations across energy groups, between reactions, and sometimes even between nuclides.

Considerable disagreement still exists within the nuclear data community regarding the evaluation of nuclear data uncertainties. This disagreement has led to significant differences in estimates of covariance in various evaluations over the years. Some efforts led through the NEA Working Party on Nuclear Data Evaluation Cooperation (WPEC) have attempted to develop standardized approaches for estimating covariance [43] or templates for documenting the necessary data [44].

The covariance data distributed with evaluated nuclear data have never been as thoroughly evaluated, reviewed, or validated as the best-estimate nuclear data. The deployment of S/U methods in several different code systems over the last few decades has significantly increased the focus on the covariance data, although it is not clear that this has translated into increased accuracy [45, 46]. In fact, a README included in the published ENDF/B-VIII.0 data clearly identifies that using the covariance data will result in incorrect overestimates of the nuclear data-induced uncertainty [47].

Early work at ORNL in support of the deployment of S/U methods established a fairly complete covariance library by surveying multiple sources of evaluated data [21, 48]. This was supplemented with low-fidelity estimates generated in a collaboration with Brookhaven National Laboratory and LANL [49]. This culminated in the 44-group covariance data distributed with SCALE 6 [21]. The 56-group and 252-group covariance libraries released with SCALE 6.2 also included curated covariance data; the two libraries are based on the same data but represent the data with different energy group structures [50]. The 56-group covariance data library distributed with SCALE 6.3 simply contains the covariance data processed onto the 56-group structure [7]. No improvements or corrections to the data to conform with the expectations of the nuclear data group at ORNL are included.

Despite the previous and continued difficulties in generating reliable covariance data consistent with the evaluated nuclear data distributed in the ENDF libraries, a review of some of the uncertainty theory and relevant matrix algebra can provide a foundation for understanding the processing and use of nuclear covariance data in NCS applications. A covariance matrix, C, can be constructed in which the numbers of rows and columns are equal to the product of the number of nuclide/reaction pairs in the model and the number of energy groups used to represent the data. The diagonal elements of this matrix represent the variance of the relevant nuclear data, and the off-diagonal elements represent the covariance between the relevant energy groups and/or nuclide/reaction pairs. This covariance matrix can then be multiplied by a vector of sensitivities to propagate the uncertainties with the covariance, as discussed in Section 4.4. The reality of the covariance data in the library is more complex than described here: there are separate covariance matrices for each nuclide, but this detail is not imperative for a conceptual understanding of the process. Using several smaller matrices is more computationally efficient than using a single huge, very sparse matrix.

It should also be noted that the covariances are stored as relative uncertainties averaged over the energy group. This can lead to nonphysical uncertainties, especially in the first group of a threshold reaction. This scenario results from a particularly low average cross section in the denominator of the relative uncertainty because the reaction only occurs in the upper extreme of the group. These issues are unavoidable in arbitrary MG structures, so the data processing codes such as AMPX and the uncertainty analysis codes such as TSUNAMI-IP contain checks for such large uncertainties. The default treatment is to cap these uncertainties at 100%, but TSUNAMI-IP allows user input to patch covariance data in this scenario and for reactions with missing covariance data. More information regarding the patching of covariance data is provided in Sections 5.4 and 5.5.
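The default 100% cap can be illustrated with a small sketch that caps the relative standard deviations while preserving the correlation structure. This is an illustrative analogue of the treatment described above, not the actual AMPX or TSUNAMI-IP implementation, and all numbers are hypothetical:

```python
import numpy as np

def cap_relative_covariance(cov_rel, cap=1.0):
    """Cap groupwise relative standard deviations at `cap` (1.0 = 100%)
    while preserving correlations (illustrative sketch)."""
    cov = np.asarray(cov_rel, dtype=float)
    std = np.sqrt(np.diag(cov))
    safe = np.where(std > 0, std, 1.0)   # avoid division by zero
    corr = cov / np.outer(safe, safe)    # recover correlation matrix
    capped_std = np.minimum(std, cap)    # apply the 100% cap
    return corr * np.outer(capped_std, capped_std)

# Threshold reaction with a nonphysical 400% relative uncertainty in
# its first group (hypothetical 2-group relative covariance matrix):
cov = np.array([[16.0, 0.4],
                [0.4, 0.01]])
capped = cap_relative_covariance(cov)
print(np.sqrt(np.diag(capped)))  # capped relative standard deviations
```

The off-diagonal terms are rescaled consistently so that the capped matrix remains a valid covariance matrix with the original correlations.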

Finally, it is expected that nuclear data uncertainties are larger than the errors in the data. The best estimate values may not be completely accurate: the real values of the parameters are expected to be within the range indicated by the best-estimate value plus or minus the uncertainty. In other words, the covariance data are believed to bound the data errors.

4.4 Uncertainty Analysis
Uncertainty analysis in NCS analyses covers a wide range of uncertainties and tolerances. In this context, the focus is on propagating the nuclear data covariance with sensitivities to examine the uncertainty in keff resulting from the uncertainty in the nuclear data. Throughout this document, this is referred to as data-induced uncertainty.

As alluded to in Section 4.3, this process is conceptually straightforward: the covariances are simply multiplied by the sensitivities. A dimensional analysis from Eq. (1) shows that if some nuclear data uncertainty, Δσ/σ, is multiplied by a sensitivity coefficient, S, the result is an impact on keff, Δk/k. The reality is not quite as simple, because the covariance matrix is multiplied with a vector of sensitivities. For such an operation to be well formed, the transpose of the sensitivity vector is also needed. The result is the so-called sandwich equation shown in Eq. (4), with the covariance matrix between the two sensitivity vectors:

σk² = S C Sᵀ ,  (4)

where σk² is the variance (square of the standard deviation) in keff resulting from the covariance data, S is the vector of sensitivity coefficients for a system, C is the covariance data, and Sᵀ is the transpose of the vector of sensitivity coefficients for the system.

The result of the covariance propagation is a single number, which is the data-induced uncertainty for the system. Technically, the result is a variance, the square of the standard deviation; TSUNAMI-IP presents the result as a standard deviation.
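The sandwich rule of Eq. (4) is easy to demonstrate numerically. The following sketch uses hypothetical groupwise sensitivities and a small relative covariance matrix; none of the numbers come from an actual SCALE library:

```python
import numpy as np

# Toy 3-group sandwich rule, sigma_k^2 = S C S^T (Eq. 4), for a single
# hypothetical nuclide/reaction pair.
S = np.array([0.05, 0.20, 0.10])  # groupwise sensitivities (%Dk/k per %Ds/s)
C = np.array([[4.0e-4, 1.0e-4, 0.0],
              [1.0e-4, 2.5e-4, 5.0e-5],
              [0.0,    5.0e-5, 1.0e-4]])  # relative covariance matrix

variance = S @ C @ S        # for a 1-D array, the transpose is implicit
std_dev = np.sqrt(variance)  # the standard deviation TSUNAMI-IP reports
print(f"data-induced uncertainty: {std_dev * 1e5:.0f} pcm")
```

Note that the off-diagonal covariance terms contribute to the result; dropping them and summing only the diagonal variances would understate the uncertainty whenever the correlations are positive.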

As mentioned in Section 4.3, it is expected that the errors in the nuclear data are smaller in magnitude than the corresponding uncertainty estimates in the covariance data. As discussed in Section 2.3, the primary source of bias in NCS computational methods is the nuclear data. The propagated covariance data should therefore provide an estimate of an upper bound on the bias of the computational method. If the error in the data (that is, the bias) is less than the uncertainty, then propagating the uncertainties to the system keff value with its specific sensitivities will provide a bound on the bias.

The above assumptions can be tested. This has been done graphically in some instances [10], and a similar plot is presented in Figure 4-3. The individual calculated-over-expected (C/E) values are plotted for a series of LEU-COMP-THERM (LCT) benchmarks, which are systems of UO2 rods in water. The data-induced uncertainty is calculated for each benchmark and plotted as a pair of dashed lines showing the width of this uncertainty band about unity. It is evident from the figure that the majority of points are within this uncertainty band. For the CE library included in SCALE 6.2 based on ENDF/B-VII.1 and the associated SCALE 6.2 56-group covariance library, all 140 LCT benchmarks are within one sigma of the expected benchmark value [51]. A summary of these results for 8 categories of ICSBEP benchmarks from the SCALE 6.2.2 validation report [51] was presented to the Cross Section Evaluation Working Group (CSEWG) in 2017 [52] and is reproduced here in Table 4-1. It demonstrates that the LCT results are not unique: in all 8 categories of experiments, the observed bias is less than that predicted by the propagated covariance data. The results in Table 4-1 also show the standard deviation of the C/E values for each category of experiments. This is a reasonable estimate of the variability of the C/E values and may be more directly comparable with the predicted uncertainty from the covariance data. The observed variability is still significantly lower than the data-induced uncertainty, thus providing confidence that the data-induced uncertainty bounds the systemic keff bias for the entire system. This continues to be true for ENDF/B-VIII.0 [45] and will almost certainly also be true for ENDF/B-VIII.1 [46]. The misprediction of system keff is the primary concern of validation, so it is useful to have evidence that the covariance data provide a bounding estimate of the computational bias.

Figure 4-3 LCT Benchmark C/E Values Compared with Data-Induced Uncertainty in keff

Table 4-1 Comparison of Bias and Data-Induced Uncertainty for Multiple Systems

Benchmark Category   Number of    Bias (Δk)   St. Dev. of C/E   Data-Induced
                     Benchmarks               Values (pcm)      Uncertainty (pcm)
HEU-MET-FAST         49           0.00014     477               1,366
HEU-SOL-THERM        52           0.00198     588               1,050
IEU-MET-FAST         13           0.00329     367               1,528
LEU-COMP-THERM       140          0.00044     167               677
LEU-SOL-THERM        19           0.00134     266               716
MIX-COMP-THERM       49           0.00351     337               633
PU-MET-FAST          10           0.00020     128               586
PU-SOL-THERM         81           0.00302     420               850

The covariance data should also provide a bounding estimate for the bias for any single nuclide. This means that an estimate for the potential bias in the total system keff from any nuclide can be generated by determining the data-induced uncertainty from that nuclide in the system of interest. This reactivity can be used as an estimate of the bias contribution from that nuclide. This information is particularly useful for cases in which no validation data are available in applicable benchmark experiments for a nuclide of interest. The data-induced uncertainty estimate of the bias can be used as a basis for a validation gap margin for the unvalidated nuclide. This approach has been used in developing a basis for fission product and minor actinide validation for BUC analyses [10, 11, 53, and 54].

One area of concern in this gap assessment process is the limited range of testing available for covariance data. Aside from cross correlations that are rigorously treated as covariances, nuclear data-induced uncertainties are independent, so the uncertainties are combined in quadrature. This means that the large contributors to uncertainty overwhelm the smaller contributors. An example of this is seen in Tables 3 and 4 of Jessee et al. [55], which show the total data-induced uncertainty and the top 10 contributors for several systems. The total data-induced uncertainty for the UO2/Zircaloy-4 system in Table 3 is 544 pcm. The contributions from 235U and 238U are 445 and 307 pcm, respectively, as shown in Table 4. The combined uncertainty of these two actinides is therefore approximately 540 pcm, which is more than 99% of the total uncertainty. This indicates that the primary covariance data tested by comparisons with benchmark variability are those of the major actinides in the system. It is possible to test some additional pieces of covariance data through examination of substitution experiments, especially when combined with data adjustment techniques, but applicable experiments do not exist for all nuclides. On the one hand, this causes concern about the potential gap assessment because it is not possible to directly test the covariance data used in the assessment. On the other hand, if such data were available, then they would be directly incorporated into the validation. Validation gap assessments inherently require an estimate of the reactivity impact of unvalidated data because there are no experiments to provide validation. The actual performance of the computational method in the unvalidated area can only be quantified by additional experiments, but estimates should be generated to inform the safety basis presented in the analysis.
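The quadrature combination can be checked directly with the numbers quoted above from Jessee et al. [55]:

```python
import math

# Independent contributors combine in quadrature:
u_235 = 445.0   # pcm, 235U contribution (Table 4 of the cited paper)
u_238 = 307.0   # pcm, 238U contribution (Table 4)
total = 544.0   # pcm, total data-induced uncertainty (Table 3)

combined = math.hypot(u_235, u_238)  # sqrt(445**2 + 307**2)
print(f"combined: {combined:.0f} pcm ({combined / total:.1%} of the total)")
```

The two major actinides alone account for essentially all of the total uncertainty, which is the point made in the text: smaller contributors are overwhelmed when combined in quadrature.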

4.5 Similarity Assessment
Because most of the bias in the computational method comes from nuclear data errors, and those errors are bounded by their uncertainties, a significant degree of shared nuclear data-induced uncertainty should indicate a high degree of similarity. This is the fundamental premise of S/U-based similarity assessment. This is especially true for the consideration of bias itself: nuclear data uncertainty functions as an indicator of potential bias in a system, so if the same bias potentials are exercised in the same ways in two systems, then the systems will have the same bias. This comparison can be performed in a rigorous, quantitative way through an expansion of the uncertainty propagation in Eq. (4). Instead of the S terms being vectors of sensitivities, they are themselves matrices; each row can be considered a sensitivity vector for a system. In this case, the result of the uncertainty propagation is not a single value but a square matrix of values with the numbers of rows and columns equal to the number of systems considered in the S and Sᵀ matrices. An example assuming three input systems is shown in Eq. (5):

S C Sᵀ =
[ σ₁₁²  σ₁₂²  σ₁₃² ]
[ σ₂₁²  σ₂₂²  σ₂₃² ]
[ σ₃₁²  σ₃₂²  σ₃₃² ] ,  (5)

where S is the matrix containing the three systems' sensitivity vectors, C is the covariance data, Sᵀ is the transpose of the systems' sensitivity matrix, and each σᵢⱼ² is a resulting variance (i = j) or covariance (i ≠ j).

The resulting covariance matrix contains the variance for each system on the diagonal and covariance terms on the off-diagonals. The matrix is symmetric, so σ₁₃² is equal to σ₃₁². The system numbering is essentially arbitrary, but if system 1 is an application and systems 2 and 3 are benchmarks, then the covariance between the two benchmarks and the one application can be calculated in this manner. This calculated covariance is the nuclear data-induced uncertainty shared between the two systems. Covariance itself is often difficult to interpret, but it can be scaled by the individual system standard deviations to calculate the linear, or Pearson, correlation coefficient [56], as shown in Eq. (6):

r = σ₁₂² / √(σ₁₁² σ₂₂²) ,  (6)

where r is the correlation coefficient, σ₁₂² is the covariance between systems 1 and 2, σ₁₁² is the variance of system 1, and σ₂₂² is the variance of system 2.

The three variance terms needed in Eq. (6) are all present in the final results matrix in Eq. (5). This correlation coefficient, given the statistical symbol r, is used in S/U analysis as the integral index ck. It is derived as a correlation coefficient of keff, specifically the correlation with respect to nuclear data-induced uncertainty.
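The construction of ck values from Eqs. (5) and (6) can be sketched numerically. The function below applies the sandwich rule to several systems at once and then normalizes the resulting covariance matrix; the sensitivity vectors and covariance values are hypothetical:

```python
import numpy as np

def ck_matrix(S, C):
    """Matrix of c_k correlation coefficients for several systems,
    given their sensitivity vectors (rows of S) and a shared nuclear
    data covariance matrix C, per Eqs. (5) and (6)."""
    cov = S @ C @ S.T                # Eq. (5): system-to-system covariance
    std = np.sqrt(np.diag(cov))
    return cov / np.outer(std, std)  # Eq. (6) applied elementwise

# Three hypothetical 2-group systems: row 0 is an application, rows 1
# and 2 are candidate benchmarks.
S = np.array([[0.30, 0.10],
              [0.28, 0.12],
              [0.05, 0.40]])
C = np.array([[1.0e-4, 2.0e-5],
              [2.0e-5, 4.0e-4]])
ck = ck_matrix(S, C)
print(ck[0, 1], ck[0, 2])  # benchmark 1 is far more similar than benchmark 2
```

The diagonal of the result is identically one (each system is perfectly correlated with itself), and the matrix is symmetric, mirroring the properties noted above for Eq. (5).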

It is clear that higher correlation coefficients, or ck values, represent more similarity between the systems that are being compared. It is not entirely obvious what value of ck indicates meaningful similarity or denotes that a benchmark is sufficiently similar to be useful in validation.

Recommendations have been generated by ORNL in several reports [5, 6, 21, and 34] and generally indicate that ck values in excess of 0.9 are indicative of systems that are highly similar and that ck values between 0.8 and 0.9 indicate systems that are marginally similar. The ORNL reports are not completely consistent in word use or specific recommendations, but there is a clear theme that ck values above 0.9 are best and that benchmarks with ck values between 0.8 and 0.9 deserve consideration. Most ORNL-generated reports using the ck integral index for validation [6, 10, and 11] use experiments that exceed a value of 0.8.

An earlier recommendation made by NRC staff was to consider ck values of 0.95 and 0.9 indicative of a very high degree of similarity and a high degree of similarity, respectively. This recommendation was integrated into NUREG-1520, Appendix 5B [57]. This appendix is incorporated directly from Fuel Cycle Safety and Safeguards Interim Staff Guidance 10 (FCSS ISG-10) [58], which was initially issued in 2006. The relevance of this history and publication date is that the staff recommendation specifically mentions the limited use of the code to date. Limitations on recommendations based on limited use were prudent and appropriate in 2006, but they are no longer relevant today.

Most ORNL guidance currently rests on GLLSM studies described in NUREG/CR-6655 [5] or documented in Broadhead et al. [34]. The results presented, especially those in Broadhead et al., appear to indicate some changes in predicted bias behavior for experiments with lower ck values. The ck value associated with the change in behavior varies for different systems, but it generally occurs in the range of 0.8–0.9. This appears to form the basis for the current recommended cutoffs of 0.8 and 0.9. Recent studies attempting to assess ck threshold values using more typical validation approaches [25] have not yet reached the point of providing generically applicable recommended ck thresholds. More discussion of recommended practices for assessing similarity for selection of applicable benchmark experiments is provided in Section 5.5.

As mentioned in Section 4.1.1, sensitivity coefficients are independent of the methodology used to calculate them and are representative of the underlying system. This means that SDFs generated by different codes or using different methods can be used for similarity assessment.

5 APPLICATION RECOMMENDATIONS FOR S/U METHODS IN NCS VALIDATION
Section 4 summarizes the theoretical considerations underlying S/U analysis techniques focused on NCS validation activities. This section focuses on practical guidance and implementation suggestions to improve the use of such tools and to increase the probability that the tools are used correctly. One of the strengths of S/U tools within the validation context is greater confidence in benchmark experiment selection and defense; this can be seriously undermined if the tools are misused. Practical guidance and suggestions can also increase the efficiency of S/U implementation.

This section addresses the guidance in six main areas: direct perturbation calculations, MG sensitivity coefficient generation, CE sensitivity coefficient generation, uncertainty analysis, similarity assessment, and sources of available SDFs. Recommendations in each of these areas are provided in subsections, and sample demonstrations are provided in Section 6.

5.1 Direct Perturbation Calculations
As discussed in Section 4.1.1, DP calculations provide a reference solution for the total sensitivity coefficient for a particular nuclide, element, or mixture. The perturbations are conceptually simple because the number density or mass density of the desired species is modified in the Criticality Safety Analysis Sequence (CSAS) input. The practical aspects considered here include determining which species to study, how many perturbations to use and of what magnitudes, how to create the perturbed inputs, how to post-process the results, and how to compare the DP results with the TSUNAMI results. General guidance on performing DP calculations and comparing the results to TSUNAMI can be found in a series of papers by Jones et al. [28, 29, and 30].

5.1.1 DP Candidates
The purpose of DP calculations is to provide reference solutions that allow an analyst to confirm that sensitivity coefficients generated in a TSUNAMI sequence are accurate. In general, this process confirms that the model and the options selected work well and resolve the important flux and/or importance gradients in the problem. The MG methods explicitly calculate these forward and adjoint fluxes, whereas the CE methods (IFP and CLUTCH) use different proxies for importance. The conclusion drawn is that if the large, important sensitivities are confirmed to be correct, then the TSUNAMI-calculated sensitivities, including the details of energy- and reaction-dependent results, are deemed accurate. The parameters that support the accurate calculation of these large sensitivities should also be sufficient for smaller sensitivities. This reasoning may be clearer for the MG methods, with their direct calculation of forward and adjoint fluxes, but it also holds for the CE methods, which have fewer user-selected inputs.

The primary DP candidates are the principal fissile, moderating, and absorbing species in the system to the extent that they exist. A fast metal system such as Lady Godiva (HMF-001) or Jezebel (PMF-001) has no moderating species and no appreciable absorption. Pin array benchmarks have moderation, and in some cases, the system sensitivity to moderation is larger than the sensitivity to the primary fissile nuclide. This is particularly true for significantly undermoderated arrays. Recall that moderators in overmoderated systems and absorbers will have negative sensitivities because increasing the cross section for these nuclides will decrease keff.

DP calculations should also be performed for any special nuclide or particularly important component of the system. Strong absorbers frequently have small sensitivities, despite having huge integral effects in a system. These black absorbers tend to remove neutrons from the system so efficiently that small changes in their cross sections do not noticeably change the absorption rate, so keff remains unchanged. This counterintuitive result is a direct consequence of the fact that sensitivity coefficients only measure the impacts of small changes.

Small sensitivities can be difficult to confirm with DP calculations in Monte Carlo transport sequences. The details are discussed in Section 5.1.2, but increasing the magnitude of the perturbation can lead to a nonlinear response. Alternatively, small perturbations may not create a large enough change in keff to be statistically meaningful. In this scenario, the reference solution is not useful and cannot confirm the TSUNAMI-calculated sensitivity. The generally recommended minimum sensitivity for confirmation with DP calculations is 0.02 [28]. Note that it is not necessary to confirm every sensitivity that exceeds this threshold.

DP calculations can be performed on an individual nuclide, a single element, or an entire mixture. Total sensitivities are reported in the TSUNAMI output for each nuclide in each mixture.

These results can be summed over all isotopes in an element to generate the total sensitivity for an element. This can be helpful if elemental number densities are input; iron is a frequent example of this situation with multiple isotopes having a noticeable sensitivity. Mixture sensitivities are also provided in the output. Water is probably the most common compound input with a mass density and no further details. The DP inputs can be modified to provide number densities explicitly for both hydrogen and oxygen or even for all five nuclides, but mass perturbations can also be used to perform DP calculations for the entire water mixture. Safety analysis models are more likely to have element or mixture definitions, and ICSBEP benchmarks are more likely to have elemental or isotopic number densities. A DP calculation at the nuclide level is not necessarily better than one at the mixture level, so simplicity of analysis should guide these selections. A mix of nuclide, element, and/or mixture DP calculations is also reasonable, depending on the input specifications.

In some cases, sensitivity coefficients calculated for a similar model can be used to confirm accurate results instead of DP calculations. For example, if sensitivities for several cases within an evaluation are similar, then a limited set of cases can be confirmed with DP calculations, and the other cases can be confirmed by comparison. This approach can also be used for a set of similar application models.

5.1.2 Number and Magnitude of Perturbations

Most DP calculations can be performed with the nominal result and two perturbations. DP calculations for confirming TSUNAMI-1D sensitivities are frequently performed with +/-2% modifications to the underlying number density inputs.

Perturbations for confirming TSUNAMI-3D sensitivities generally target a keff change of approximately +/-0.5 %Δk. This has been found to provide a reasonable balance between the magnitude of the density change and the statistical significance of the resulting trend line. The formula for calculating the density change to cause this impact is provided in Eq. (7) based on the calculated sensitivity in the TSUNAMI output:

Δρ/ρ = (0.005 / keff) / STotal    (7)

where Δρ/ρ is the fractional density change of the species of interest, keff is the reference nominal keff value, and STotal is the total sensitivity of the species of interest.

For example, for a critical benchmark experiment with a reference nominal keff of 1.0 and a total sensitivity of 0.25 for the species of interest, the perturbation would be +/-0.02 (a 2% density change). Conversely, the recommended minimum sensitivity magnitude of 0.02 for confirmation with DP calculations keeps the required density change at 25% or less. The impact of the reference keff is more noticeable for application systems that are deeply subcritical. With the same integral sensitivity of 0.25 but a nominal keff of 0.8, the resulting density change is +/-0.025.
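The arithmetic of Eq. (7) and the worked examples above can be checked with a short script; dp_perturbation is an illustrative helper name, not a SCALE utility:

```python
def dp_perturbation(k_eff, s_total, target_dk=0.005):
    """Fractional density change needed to shift k-eff by target_dk,
    per Eq. (7): delta_rho/rho = (target_dk / k_eff) / S_Total."""
    return (target_dk / k_eff) / s_total

# Worked examples from the text above
print(f"{dp_perturbation(1.0, 0.25):.3f}")   # 0.020 -> +/-2% density change
print(f"{dp_perturbation(0.8, 0.25):.3f}")   # 0.025 -> larger change when subcritical
print(f"{dp_perturbation(1.0, 0.02):.3f}")   # 0.250 -> 25% at the minimum sensitivity
```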

A typical 3-point DP calculation can yield an uncertainty of approximately 1% if the nominal case and both perturbed cases are executed until the Monte Carlo uncertainty has been reduced to approximately 7 pcm. A DP uncertainty of approximately 1.3% will result from calculations with stochastic uncertainties of 10 pcm. In most cases, these uncertainties of 1 to 1.5% in the sensitivity are sufficiently small for reliable comparisons, as discussed in Section 5.1.5.
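These figures can be roughly reproduced with a back-of-envelope propagation that assumes equal stochastic uncertainties on all three cases, keff near 1.0, and perturbations sized to shift keff by 0.5 %Δk; this simplified formula is an assumption for illustration, not the exact spreadsheet propagation:

```python
import math

def dp_rel_uncertainty(sigma_k_pcm, target_dk=0.005):
    """Approximate relative (1-sigma) uncertainty of a 3-point DP slope:
    sigma_k / (sqrt(2) * target_dk), assuming equal per-case stochastic
    uncertainty and perturbations sized to a target_dk shift in k-eff."""
    sigma_k = sigma_k_pcm * 1.0e-5  # convert pcm to delta-k
    return sigma_k / (math.sqrt(2.0) * target_dk)

print(f"{dp_rel_uncertainty(7):.1%}")   # 1.0%
print(f"{dp_rel_uncertainty(10):.1%}")  # 1.4%, close to the ~1.3% quoted above
```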

Fast metal systems tend to be sensitive only to the primary fissile species. The sensitivities also tend to be quite large, on the order of 0.8, thus creating challenges for comparisons of the DP- and TSUNAMI-calculated sensitivities. As discussed in Section 5.1.5, the desired agreement between DP- and TSUNAMI-calculated sensitivities is within an absolute magnitude of 0.01 and within 2 standard deviations. A typical 3-point DP calculation can yield an uncertainty of approximately 1.5%, which for a sensitivity as large as 0.8 is an uncertainty of 0.012. This leaves an uncertainty band that is larger than the desired absolute agreement. The best approach to resolve this issue is to run multiple calculations at each end of the perturbation using different random number seeds (rnd= parameter in CSAS) to reduce the uncertainty in the DP sensitivity [59]. The number of separate calculations to be performed can be estimated from the desired uncertainty reduction and the 1/√N scaling of uncertainty with the number of independent Monte Carlo calculations.
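The number of runs needed for a target uncertainty can be estimated from the scaling of independent Monte Carlo results; runs_needed is a hypothetical helper shown under the assumption of statistically independent seeds:

```python
import math

def runs_needed(sigma_single, sigma_target):
    """Independent calculations (e.g., different rnd= seeds) needed for the
    combined uncertainty to fall from sigma_single to sigma_target, using
    the square-root scaling of uncertainty for independent results."""
    return math.ceil((sigma_single / sigma_target) ** 2)

# Halving the uncertainty takes four runs per perturbed point
print(runs_needed(0.02, 0.01))  # 4
```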

5.1.3 Perturbed Input Creation

Making the necessary number of input changes to several files can be a cumbersome process, and some level of scripting to make these changes and handle creation of the perturbed inputs is generally a benefit. Various in-house scripts have been developed to facilitate this at ORNL, but the development of a SCALE sequence to do so has not been completed as of the SCALE 6.3.2 release. Development is underway, with a goal of delivering a DP sequence supporting TSUNAMI-3D calculations in the near future.
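Until such a sequence is released, the scripting can be minimal. The sketch below assumes a simplified composition line of the form '<name> <mixture> den=<value> ...' rather than full CSAS syntax, and the function name is illustrative:

```python
import re

def perturb_densities(text, mixture_id, factor):
    """Scale the den= mass density of one mixture in a composition block.
    Assumes simplified '<name> <mixture> den=<value> ...' lines; a real
    CSAS input may need a more careful parser."""
    pattern = re.compile(rf"(^\s*\S+\s+{mixture_id}\s+den=)([0-9.Ee+-]+)",
                        flags=re.MULTILINE)
    return pattern.sub(
        lambda m: m.group(1) + f"{float(m.group(2)) * factor:.6g}", text)

base = "uo2 1 den=10.30 1.0 293 end\nh2o 2 den=0.9982 1.0 293 end\n"
for f in (0.98, 1.02):  # write the -2% and +2% perturbed inputs
    print(perturb_densities(base, 1, f))
```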

The TSUNAMI-1DC sequence has been available for years, but it is functionally equivalent to the CSAS1X sequence, and it only provides a mechanism to run a single forward XSDRN calculation. It does not provide any support for perturbing inputs or gathering output results.

TSUNAMI-1D cases are relatively rare and generally fairly simple, so few DP calculations are usually needed for these applications. Using TSUNAMI-1DC by adding C to the end of the initial TSUNAMI-1D sequence identifier allows users to perform DP calculations with XSDRN.

The only other necessary change is to the intended number density in the composition block.

Sampler is another option that can facilitate the creation of multiple inputs. The parametric option is particularly well suited to DP calculations because it can generate multiple inputs with variations in a single parameter. Unfortunately, Sampler has no ability to read and modify pieces of input, so the new number densities must still be determined by the user. The user could define a variable block in the Sampler input to calculate the perturbed number densities, but both the nominal number density and the density multiplier would be required user input.

The base case for the DP calculations may be different from the CSAS model generated for the original benchmark model or safety application model. Some model modifications are introduced to capture sensitivity differences, as discussed in Sections 5.2 and 5.3. The most common of these changes is to introduce duplicate mixtures to separate different sensitivities for the same material. A frequent example of this is water, especially in pin array systems. The water within the fissile lattice is acting primarily as a moderator, and its sensitivity will be dictated by the moderation regime of the lattice. Water external to the lattice acts primarily as a reflector and has a very different energy-dependent sensitivity. SCALE generates sensitivities for each nuclide in each mixture, so the different 1H sensitivities can only be captured if the water in the two different zones is represented with different mixtures. The composition will be identical, but the artificial separation allows more detailed and accurate sensitivities to be calculated and reported. These changes must be reflected in the base case for the DP calculations to modify the correct mixtures consistently in the model geometry.

A final note relevant to this section is that the DP calculations should be performed in the same transport code as the TSUNAMI calculations. In other words, XSDRN should be used for DP calculations supporting TSUNAMI-1D calculations and KENO V.a, KENO-VI, or Shift for TSUNAMI-3D calculations. The Monte Carlo solvers should also be used consistently in MG or CE mode between TSUNAMI and DP calculations, but there are some applications in which this is not true. For example, CE DP calculations can be used to demonstrate that sensitivities calculated with MG TSUNAMI-3D are applicable to CE transport with the same code. This approach has been recommended in the past when CE TSUNAMI-3D was not available to allow S/U-based experiment selection based on ck for the validation of CE Monte Carlo transport calculations. DP calculations can also be used in a similar way to provide confidence that SDFs from outside sources are sufficiently accurate for similarity determinations, as discussed in Section 5.6.

5.1.4 Result Post-Processing

5.1.4.1 TSUNAMI-1D

Many NCS analysts are unfamiliar with XSDRN input and output because the CSAS1X sequence for 1D analysis is not used routinely. The TSUNAMI-1DC sequence was developed to facilitate DP calculations with minimal changes to the user input. The keff output is labeled as lambda in the output based on the historical use of λ in the transport equation as the divisor of ν̄, the neutron multiplicity, to balance the production and loss terms. This choice is somewhat unfortunate because the output is the keff, not its inverse, as the use of λ implies.

XSDRN is a deterministic transport solver and therefore has no stochastic uncertainty in the solution. The simplicity of the systems analyzed with TSUNAMI-1D tends to reduce the number of DP calculations necessary and results in a simplified analysis. A linear trend of the two perturbed keff results combined with the nominal keff value is typically used to generate the DP result. Recall that the slope of the best fit line represents the sensitivity coefficient because it is the change in keff per change in the input parameter. The LINEST() function in Excel can be used to generate the linear fit, and a trendline can be added to a scatter plot of the results.
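The same 3-point fit is straightforward outside of Excel as well; the numbers below are illustrative, not results from the report:

```python
import numpy as np

# Slope of normalized k-eff vs. fractional density change = DP sensitivity
x = np.array([-0.02, 0.0, 0.02])           # +/-2% density perturbations
k = np.array([0.99500, 1.00000, 1.00500])  # XSDRN k-eff results (no stochastic noise)
slope, intercept = np.polyfit(x, k / k[1], 1)
print(round(slope, 4))  # 0.25 -> the DP sensitivity coefficient
```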

5.1.4.2 TSUNAMI-3D

DP calculations for TSUNAMI-3D models tend to be more numerous and more complex than those for TSUNAMI-1D because of the greater complexity of the systems considered. The recommended keff value to use from a CSAS output is labeled as the best estimate system k-eff [60, 61]. This value is the minimum variance estimate based on discarding at least the number of requested skipped generations after termination of the simulation [7].

A spreadsheet first developed by D.A. Reed has historically been used at ORNL to collect the DP result keff values and their uncertainties, along with the nominal case keff and uncertainty.

The spreadsheet considers density changes from the nominal and normalizes the keff values to the calculated nominal keff value. An uncertainty-weighted linear regression is performed for normalized keff vs. density change, and the resulting slope and its uncertainty are generated, along with a plot of the trend line and the keff results. The plot provides a quick visual confirmation that the results follow expectations and that the response is linear over the range examined. An example of the resulting plot and the reported DP sensitivity data are shown in Figure 5-1. This information can be particularly important with smaller sensitivities that result in larger perturbations. Higher order fits can be used with the associated derivatives calculated at a density change of zero, but uncertainty propagation for these fits is more complicated and is generally ignored. The higher order fits can also be used in some cases to provide confidence in the linear regression of points that may appear somewhat nonlinear.

Figure 5-1 Example Plot from D.A. Reed Spreadsheet
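The weighted regression performed by the spreadsheet can be sketched in a few lines; the normalization and weighting below are a plausible reconstruction of the approach described, not the spreadsheet's exact formulas:

```python
import numpy as np

def weighted_dp_slope(x, k, sigma):
    """Uncertainty-weighted linear fit of normalized k-eff vs. fractional
    density change. Returns the slope (the DP sensitivity) and its 1-sigma
    uncertainty from standard weighted least-squares formulas."""
    k0 = k[np.argmin(np.abs(x))]                 # nominal (zero-change) case
    y, sy = k / k0, sigma / k0
    w = 1.0 / sy**2
    d = w.sum() * (w * x**2).sum() - (w * x).sum() ** 2
    slope = (w.sum() * (w * x * y).sum() - (w * x).sum() * (w * y).sum()) / d
    return slope, np.sqrt(w.sum() / d)

x = np.array([-0.02, 0.0, 0.02])
k = np.array([0.99498, 1.00002, 1.00501])            # illustrative KENO results
s, ds = weighted_dp_slope(x, k, np.full(3, 1.0e-4))  # ~10 pcm per case
print(f"S = {s:.3f} +/- {ds:.4f}")                   # S = 0.251 +/- 0.0035
```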

This spreadsheet has typically been distributed with TSUNAMI training materials as part of SCALE training, but it is by no means the only acceptable method for post-processing CSAS DP calculations. More recent post-processing tools have been developed at ORNL in Perl and Python, so many solutions can be created. Propagation of the uncertainties in the keff values to uncertainty in the slope of the best fit line is desired because it informs the comparison of the DP and TSUNAMI sensitivities, as discussed in Section 5.1.5.

5.1.5 Comparison of DP and TSUNAMI Sensitivities

The DP sensitivity is the reference result used to provide confidence that the large number of sensitivities calculated with TSUNAMI are accurate and can be used for further analysis.

Comparison of the reference results with the TSUNAMI results is therefore of the utmost importance in confirming their validity. A flow chart of the entire DP process, taken from Marshall et al. [28], is reproduced here as Figure 5-2. It emphasizes the importance of confirming sensitivities with DP calculations prior to using the SDF.

Figure 5-2 Overview of the DP Calculation Process [28]

Two points should be emphasized before proceeding to the detailed discussion of the quantitative comparisons. First, the DP calculation is the reference solution and should be considered the correct sensitivity. The TSUNAMI calculations are generally more complicated and may suffer from violations of methodology assumptions. Second, and directly related, if a discrepancy is identified, then the first step is to confirm that the DP calculations have been performed and analyzed correctly. The direct perturbation process is simple, but because the process includes several manual steps, it can be error prone. Errors can be made in the creation of perturbed inputs, and results can be copied or manipulated incorrectly. The DP calculations are also typically much shorter in duration, so fixing any errors on the DP side can be resolved more quickly than issues with TSUNAMI calculations.

As discussed in Section 5.1.3, it is important that the DP base case matches the TSUNAMI model. Differences in the base case model can cause DP results to differ from those calculated in TSUNAMI and can be a difficult source of discrepancy to detect.

The ultimate goal of the DP calculations is to provide confidence that the TSUNAMI-calculated sensitivities are correct and can be used in further analysis. Clearly, the desired outcome is for close agreement between the two sets of sensitivities, but a rigorous definition of close agreement is difficult to defend. No concrete set of criteria has ever been established, studied, or vetted for TSUNAMI-1D calculations. This is at least partially because TSUNAMI-1D calculations are typically in very good agreement with DP calculations, and discrepancies of less than 1% are typical [39, 62].

During the development of TSUNAMI-3D at ORNL in the 2000s, however, a heuristic set of criteria was developed for assessing the accuracy of TSUNAMI calculations [28]. The difference between the DP-and TSUNAMI-calculated sensitivity is assessed with respect to three criteria:

the absolute difference in the sensitivities, the relative difference in the sensitivities, and the difference in terms of propagated standard deviations. The goal is for all the sensitivities checked with DP calculations to agree within 0.01 in sensitivity, 5% relative to the DP sensitivity, and within 2 standard deviations. These are general guidelines and not strict acceptance criteria. There is no formal logic of two-out-of-three or all three failing to indicate a problematic difference in sensitivities. Several nuclides with minor discrepancies may be less worrisome than a single nuclide with a large discrepancy.
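For convenience, the three guidelines can be expressed as a simple screening check; the function and its output format are illustrative, and as noted above no formal pass/fail logic is implied:

```python
def dp_agreement(s_dp, sigma_dp, s_tsunami, sigma_tsunami=0.0):
    """Evaluate the three heuristic agreement guidelines between a DP
    reference sensitivity and a TSUNAMI-calculated sensitivity:
    within 0.01 absolute, within 5% of the DP value, and within 2
    propagated standard deviations."""
    diff = abs(s_dp - s_tsunami)
    sigma = (sigma_dp**2 + sigma_tsunami**2) ** 0.5
    return {
        "absolute": diff <= 0.01,
        "relative": diff <= 0.05 * abs(s_dp),
        "two_sigma": diff <= 2.0 * sigma,
    }

print(dp_agreement(0.250, 0.003, 0.246))  # all three guidelines satisfied
```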

As mentioned in Section 5.1.2, these criteria imply some constraints on one another. The constraint encountered most frequently relates to very high sensitivities common in some simple, typically fast-spectrum metal benchmarks. The 235U and the 239Pu sensitivities in the single sphere models of HEU-MET-FAST-001 and PU-MET-FAST-001, respectively, are 0.8 or higher. The uncertainty in a typical DP sensitivity coefficient resulting from the stochastic uncertainty in a 3-point central difference calculation is on the order of 1 to 1.5%. This 1σ uncertainty applied to these large sensitivities is approximately 0.008-0.012 in sensitivity, which makes a 2σ criterion incredibly wide in terms of absolute difference. The most efficient approach to lowering the overall uncertainty was investigated in Greene et al. [59] and was determined to be running multiple independent calculations with the perturbation targeted to induce a change of approximately 0.5 %Δk. A similar approach could be used to reduce the uncertainty in a species with a low sensitivity but for which DP confirmation was desired because of its importance in the overall safety case. Generally, the low sensitivity would suffice to provide confidence in the small potential impact of small data errors, but explicit confirmation of the sensitivity may be possible.

Investigations in Jones et al. [28, 29, 30] point to another root cause of discrepancies between DP and TSUNAMI calculations. In the HEU-MET-MIXED-017 evaluation, a series of disks made of highly enriched uranium (HEU), tungsten, and polyethylene is reflected with polyethylene. The sensitivity of 235U in the HEU disks varies axially along with the flux, with more important disks and thus higher sensitivities in the central region of the assembly. A single mixture could not resolve these differences, and the DP calculations revealed a mismatch.

Modeling each disk with a unique copy of the fuel composition yielded good agreement between the DP-and TSUNAMI-calculated sensitivities. This scenario of using multiple mixture numbers to successfully calculate sensitivities in different regions of a problem is mentioned in Section 5.1.1.

DP calculations should include all perturbed cases created. In some cases, selecting a subset of results may yield a DP result in better agreement with the TSUNAMI-calculated sensitivity than the full set of results. Similarly, points should not be discarded to increase the uncertainty in the DP sensitivity such that the standard deviation criterion can be satisfied. A large number of essentially equivalent statistical manipulations can be created, but in all cases, this approach should be avoided. The purpose of the DP calculations is to provide a reference to which the TSUNAMI-calculated sensitivities can be compared to provide confidence of their accuracy. A significant amount of analysis can be performed once the TSUNAMI sensitivities have been confirmed.

5.2 SCALE Multigroup Methods

As discussed in Section 4.2.1, SCALE uses MG methods in TSUNAMI-1D, and an MG option also exists in TSUNAMI-3D. Recommendations for running each of these sequences are provided in the next two subsections.

One piece of input that is common to both sequences is the celldata block, which provides the necessary input for MG cross-section processing. In general, SCALE supports four types of unit cells for MG processing: infinite homogeneous, lattice cell, multiregion, and doubly heterogeneous [7]. A complete description of selecting and implementing appropriate cross-section processing models is beyond the scope of this report, but it is important to note that appropriate cross-section processing is essential for accurate sensitivity results. Within the TSUNAMI-1D and -3D sequences, XSProc still performs cross-section processing for transport, and by default, the capabilities are incorporated from BONAMI and CENTRM/Worker [21]. The information from the celldata block is also used by BONAMIST to generate the derivatives necessary for SAMS to calculate the implicit portion of the complete sensitivity coefficient, as discussed in Section 4.1.1. Finally, there is no implicit sensitivity support for doubly heterogeneous cells at this time. This functionality is desired to support analysis of tristructural isotropic (TRISO) fuel forms and may be developed in the near future.

5.2.1 TSUNAMI-1D

As discussed in Section 4.2.1.1, TSUNAMI-1D uses the XSDRNPM 1D discrete-ordinates transport code to perform explicit forward and adjoint transport calculations. The fluxes are used in SAMS to calculate sensitivities. SAMS may also propagate nuclear covariance data with the sensitivities to determine the data-induced uncertainty in keff. Experience has shown that the default parameters are generally sufficient to generate sensitivity coefficients in good agreement with DP calculations [39, 62]. In some cases, the angular quadrature (isn=) may be increased to improve results. The default Legendre expansion order for cross-section data is 5, which is typically sufficient for the complexity of systems that can be represented in 1D.

5.2.2 TSUNAMI-3D

The theoretical aspects of TSUNAMI-3D are discussed in Section 4.2.1.2, and the implementation is simple in principle. Flux moments are tallied in either the KENO V.a or KENO-VI Monte Carlo transport code and are provided to SAMS for sensitivity coefficient generation. Unlike TSUNAMI-1D, it is not uncommon for MG TSUNAMI-3D [28] to generate incorrect sensitivity coefficients, and a number of parameters are available to improve sensitivity calculations.

Most TSUNAMI-3D calculations should incorporate a mesh for accumulating the flux moments necessary for the calculation of sensitivity coefficients. As discussed in Section 4.2.1.2, the mesh allows for significant refinement of the tallied flux moments beyond the default tally capability in KENO, which is limited to region and unit. The mesh can be used to capture flux gradients within a single region, such as a radial reflector, or to differentiate the flux in different elements of a geometrical array. This can significantly improve calculated sensitivity coefficients in fuel assemblies.

The downside to using the mesh flux is that it can be memory intensive. A finer mesh is more likely to generate accurate results, but a larger memory footprint will be required. The large arrays needed to store the tallies are also more difficult to traverse and thus slow to execute.

CSAS calculations that take only a few hours can require tens of gigabytes of memory and can continue running for several days after conversion to TSUNAMI-3D. An optimized mesh can manage the memory usage and runtime while maintaining accurate results. The optimum mesh will typically have a higher mesh density in regions with steeper flux gradients and fewer mesh intervals elsewhere.

The memory demands are also not as difficult to accommodate on current computing platforms as they were when TSUNAMI-3D was first developed two decades ago.

Generic guidance for an initial mesh size has been developed over the years through modeling a range of critical benchmark experiments and application systems. There is no one-size-fits-all solution, but a good initial guess can significantly reduce iterations and thereby increase analysis efficiency. For single-unit models, the recommended initial mesh size is one-tenth of the outer dimension of the unit. For arrays, the recommended mesh size is the rod pitch, with mesh lines placed so that individual rods in the array are divided into quarters. Especially for arrays typical of reactor fuel assemblies, the axial variation is significantly lower than the radial gradients. This allows for use of much larger mesh intervals in the axial dimension. Accurate sensitivities have been generated using models with axial mesh intervals of 10 to 20 cm or more in some cases. In all cases, for all models of all types, finer mesh intervals should be targeted for regions with greater flux gradients.

The mesh flux tally option is activated by setting MFX=yes in the parameter block in the TSUNAMI-3D input. A uniform mesh can be specified with the MSH= option, which is also in the parameter block. KENO will create this uniform mesh over the entire geometry with the uniform user-specified mesh size. A user-specified variable mesh can be created in the Gridgeometry block in the TSUNAMI-3D input, as described in Section 8.1.3.14 of the SCALE 6.3 manual [7].

This mesh must cover the entire extent of the global unit in MG TSUNAMI-3D, and any portions of uncovered geometry are identified at execution and terminate the calculation. As discussed in the manual, the mesh can be constructed with a combination of explicit plane specifications and linear interpolations. Duplicate planes are identified and removed in KENO. A sample Gridgeometry block is shown in Figure 5-3. The mesh dimensions are specified in the global coordinate system.

By default, the flux moments are collected up to the third order. Higher order moment tallies may increase accuracy, but they tend to significantly increase memory requirements. It is therefore recommended that the pnm= parameter not be changed for most calculations. In some large models with fine mesh requirements, pnm= can be decreased to reduce the memory allocation required for the large number of mesh cells. The tradeoff between accuracy and memory use can be investigated, keeping in mind that the DP calculations provide the reference total sensitivity value. Typically, mesh refinements are the most effective approach to resolving discrepancies with DP results.

read grid 1
' mesh is ~2cm in x & y in tank region: 30 mesh across tank
' ~3cm in x & y in reflector: 7 mesh on each side in reflector tank
' ~2cm in z in solution: 8 mesh in solution
' ~5cm axially below and above solution: 1 mesh below and above tank
' total mesh is 44 x 44 x 16 = 30,976 mesh intervals
xlinear 30 -30.514 30.514
ylinear 30 -30.514 30.514
xlinear 7 -50.81 -30.514
xlinear 7 50.81 30.514
ylinear 7 -50.81 -30.514
ylinear 7 50.81 30.514
zlinear 8 22.177 37.617
zplanes -0.1 8.5885 17.177 42.617 67.6 92.6 117.6 143.1 end
end grid

Figure 5-3 Sample Gridgeometry Input for PU-SOL-THERM-034-001 [14]
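The memory impact of a candidate mesh can be estimated before running the calculation. The sketch below assumes one double-precision tally per mesh cell, energy group, and flux moment, with (pnm+1)^2 moments and a 252-group library; these are rough scoping assumptions, not values reported by SCALE:

```python
def mesh_tally_memory_gb(nx, ny, nz, groups=252, pnm=3):
    """Rough memory footprint of mesh flux-moment tallies, assuming
    (pnm + 1)**2 spherical-harmonic moments and 8 bytes per tally."""
    moments = (pnm + 1) ** 2
    return nx * ny * nz * groups * moments * 8 / 1.0e9

# The 44 x 44 x 16 mesh of Figure 5-3 with third-order moments:
print(f"{mesh_tally_memory_gb(44, 44, 16):.2f} GB")  # 1.00 GB
```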

The initial implementation of flux moment tallies in KENO for TSUNAMI-3D did not include a mesh tally option. Therefore, in some spherical or cylindrical geometries, the flux moments could cancel when crossing opposite sides of the same surface. To prevent this, a coordinate transformation capability was added to allow the unit coordinate system to be transformed and thus to eliminate this cancelation. This option is turned on by default in TSUNAMI-3D calculations to minimize the chances that a user will experience this difficulty. The transform is effective, but it slows down tracking by as much as 20%. The mesh tally is a better option for generating accurate sensitivities, so the recommendation to users is to disable the coordinate transform. This is accomplished by specifying tfm=no in the parameter block in the TSUNAMI-3D input.

Flux gradients can also be captured through manual subdivision of a region in the geometry, although care must be used with this approach for spherical or cylindrical regions which may experience moment cancellation. Because KENO tallies flux by region and by unit, the introduction of an artificial region containing the same mixture in the geometry can improve the accuracy of calculated sensitivity coefficients. Example inputs are provided in Figure 5-4, in which a cylindrical water-filled region has been added around a fuel rod. The lefthand example is a unit in a KENO V.a model, and the righthand example is from KENO-VI. In both examples the region added as a manual subdivision is highlighted in boldface type. The geometries for both units are shown in Figure 5-5, again with the KENO V.a geometry on the left and the KENO-VI geometry on the right. In both renderings, fuel is shown as black, cladding as gray, and water as light blue. Manual subdivision can be slightly more memory efficient than using additional mesh because the additional tallies are only in the impacted units. Added mesh planes traverse the entire geometry and often tally detailed flux information where it is not needed, but no adaptive mesh has been implemented in KENO.

Figure 5-4 Inputs Showing Manual Subdivision to Improve Flux Tally Resolution

Figure 5-5 Renderings of Regions Added Via Manual Subdivision

Section 5.1.3 mentions modifying models to create multiple copies of a mixture so that different sensitivities in different regions of a problem are calculated correctly. A common example of this situation is when water is being modeled within a fuel storage array or pin array benchmark. The water within the fuel assembly or pin array acts primarily as a moderator, but the water outside the fuel region is primarily a reflector. There can be more complex arrangements, as well.

Several pin array experiments performed at Pacific Northwest National Laboratory (PNNL) contain three arrays of rods with the separation distance between the central and side arrays controlling criticality. Many of these experiments also have metal reflectors along the long side of the fuel array, with a variable thickness water reflector separating the fuel from the metal reflector. Depending on the separation distance of the metal reflector and the fuel, this water region can have a positive or negative sensitivity. Capturing the behavior of this region requires that it be represented with another water mixture. The same composition is used in all the different mixtures, but the artificial separation into different mixtures facilitates more accurate sensitivity calculations and more insight into the physics of the system.


Figure 5-6 shows a model of LCT-010-001 [14] with three different water mixtures. In this case, the lead reflecting wall is in contact with the sides of the fuel arrays, so there is no water gap between the fuel arrays and the reflector along the long side of the array. The water mixture shown in the lightest blue is in the fuel rod unit cells and provides moderation. The second water mixture, shown in a medium blue, separates the fuel arrays. The third water mixture, in dark blue, is farthest from the fuel and is a reflector. The 1H and total mixture sensitivities for these different water compositions are provided in Table 5-1 to demonstrate the magnitude of the differences captured with this approach. To some degree, this use of multiple mixtures is relevant in MG keff calculations because the different fluxes in different regions of the problem lead to different group-average cross sections. The impact in water mixtures is rarely large enough to be noticed in keff, but the tabulation of different sensitivities is a clear benefit here. As mentioned in Section 5.1.3, this approach also may be necessary to calculate directly comparable DP sensitivities.

Figure 5-6 A Model of LCT-010-001 Showing Three Water Mixtures

Table 5-1 Calculated 1H and Water Sensitivities in LCT-010-001 Model

                          1H                                  H2O
Mixture        Sensitivity   Uncertainty (%)   Sensitivity   Uncertainty (%)
Moderator      0.19211       1.08              0.22589       0.921
Interstitial   0.018155      8.62              0.012678      12.4
Reflector      0.0054063     16.7              0.0018437     49.0

A volume calculation is also needed to determine the region mesh volumes. KENO V.a can analytically determine the volumes of all regions in a problem and does so as part of the CSAS5 sequence, but these analytic expressions may not apply once a mesh has been generated. The region mesh volumes are needed to calculate sensitivities in SAMS.

The region volumes calculated by TSUNAMI-3D are provided in the output file, along with uncertainties in the volume estimates. The edited values apply to the entire mixture used in the geometry, so the uncertainty estimates are low with respect to the individual mesh volumes.

Generally, mesh volume calculations using tens of millions to a few billion points should provide sufficiently low uncertainty for typical benchmark or application models. The calculation time for these volume calculations ranges from only a few minutes to a few hours, so it is a fairly small burden within the context of a MG TSUNAMI-3D calculation. There are rare scenarios, typically involving benchmark models that contain foils, such as LCT-079 [14], in which a much lengthier volume calculation is needed to generate low uncertainty estimates for small but important regions of a model. The volume calculation is controlled with input in the volume block, as described in Section 8.1.3.13 of the SCALE 6.3.1 manual [7]. The recommended approach is to use the RANDOM option to sample points randomly in the geometry to determine the volumes. The points and batches parameters function like NPG= and GEN= in a neutron transport simulation, except that there is no need to discard initial batches in a volume calculation. There is no source to iterate, so all points can be used for the volume calculation.
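As a rough illustration of how random-point volume estimation works (sampling uniformly in a bounding region and converting hit fractions to volumes), consider the following Python sketch. The function names and toy geometry are invented for illustration and have no connection to the actual KENO implementation.

```python
import random

def mc_region_volumes(box_volume, classify_point, sample_point,
                      points=200_000, seed=12345):
    """Estimate region volumes by uniform random-point sampling.

    classify_point(p) returns a region label for point p; sample_point(rng)
    returns a point sampled uniformly in the bounding box. Every point is
    used -- there is no source to iterate, so no batches are discarded.
    """
    rng = random.Random(seed)
    hits = {}
    for _ in range(points):
        region = classify_point(sample_point(rng))
        hits[region] = hits.get(region, 0) + 1
    # Each region's volume is the bounding-box volume times its hit fraction.
    return {r: box_volume * n / points for r, n in hits.items()}

# Toy geometry: a unit sphere inside a 2 x 2 x 2 bounding box.
def sample(rng):
    return (rng.uniform(-1, 1), rng.uniform(-1, 1), rng.uniform(-1, 1))

def classify(p):
    return "sphere" if p[0]**2 + p[1]**2 + p[2]**2 <= 1.0 else "water"

vols = mc_region_volumes(8.0, classify, sample)
# vols["sphere"] approaches 4*pi/3 (about 4.19) as the point count increases
```

The statistical uncertainty of each estimated volume falls off as the inverse square root of the number of sampled points, which is why tens of millions of points or more may be needed for small regions such as foils.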

The Monte Carlo transport parameters for the number of particles per generation, NPG=, the number of generations, GEN=, the number of generations to skip for initial source convergence, NSK=, and desired final uncertainty, SIG=, are user-supplied input for the forward calculation.

The forward simulation is essentially the same as a standard keff calculation, but with additional tallies, so the generic guidance on these parameters is applicable. The adjoint Monte Carlo calculation is performed after the forward solution and uses analogous parameters in controlling the calculation. The TSUNAMI-3D input parameters are APG= for the number of particles per generation, AGN= for the number of adjoint generations, ASK= for the number of initial skipped generations, and ASG= for the targeted uncertainty in the adjoint calculation. These parameters and their default values are listed in Table 6.2.4 of the SCALE manual [7] and are reproduced here in Table 5-2.

Table 5-2 Default Values for Adjoint Monte Carlo Calculation in TSUNAMI-3D

Adjoint     Default Value       Forward
Parameter   for TSUNAMI-3D      Parameter   Description
AGN         GEN - NSK + ASK     GEN         Total number of adjoint generations; the default value leads to the same number of active generations in both the forward and adjoint calculations
ASK         NSK x 3             NSK         Number of initial skipped generations in the adjoint calculation
APG         NPG x 3             NPG         Number of adjoint particles per generation
ASG         SIG                 SIG         If > 0, keff standard deviation at which to terminate the adjoint calculation

The number of adjoint particles per generation can be a particularly important parameter, especially in small fast-spectrum systems with large neutron leakage rates. In the adjoint Monte Carlo simulation, the cross sections are reversed, so a particle is born from the energy distribution of the fission cross section and must scatter to the χ distribution to cause an adjoint fission. The fission cross section is large in the thermal energy region, where χ is zero. It is essentially impossible for a neutron in a fast system to experience a large enough number of scattering events to traverse this difference, so only particles sampled in the fast region of the fission spectrum will tally in the χ distribution. This mismatch is shown in Figure 5-7, with the fission cross section shown in the top pane and the χ distribution in the lower pane. Note that the fission cross section is on the order of one barn in the range above 100 keV in which χ is nontrivial. The fission cross section below 1 eV ranges from approximately 100 to 10,000 barns.
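The default relations in Table 5-2 can be expressed as a short helper function. This Python sketch is illustrative only; the function name is invented and is not part of SCALE.

```python
def adjoint_defaults(gen, nsk, npg, sig):
    """Default adjoint parameters derived from the forward parameters,
    following the relations in Table 5-2."""
    ask = 3 * nsk          # ASK: initial skipped adjoint generations
    agn = gen - nsk + ask  # AGN: yields the same number of active generations
    apg = 3 * npg          # APG: adjoint particles per generation
    asg = sig              # ASG: target adjoint keff standard deviation
    return {"AGN": agn, "ASK": ask, "APG": apg, "ASG": asg}

# Example forward input: GEN=1000, NSK=100, NPG=10000, SIG=0.0
defaults = adjoint_defaults(1000, 100, 10_000, 0.0)
# Active generations match: 1000 - 100 forward, 1200 - 300 adjoint
assert defaults["AGN"] - defaults["ASK"] == 1000 - 100
```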

Clearly the probability of randomly selecting a point above 100 keV from the fission cross section is quite small. If no histories in a generation of the adjoint calculation tally in the χ distribution, then the calculation terminates. This is analogous to a generation in the forward calculation creating no fissions, so that no source is available for the next generation. There is no simple solution to this problem except to increase APG to large numbers. Adjoint generations of tens of thousands of particles may be necessary to successfully complete these simulations.

Figure 5-7 Fission Cross Section (Top) and χ Distribution (Bottom) for 235U

Production calculations at ORNL frequently target forward keff uncertainties of 0.00010 Δk for CSAS calculations and may maintain this level of convergence for TSUNAMI-3D calculations as well. The convergence for the adjoint keff estimate is typically significantly less stringent, with values in the range of 0.00100–0.00250 Δk. Experience has shown that these uncertainties in keff also yield fairly low uncertainties in the sensitivity coefficients. No rigorous study results have been published to establish the desired uncertainties of the sensitivity coefficients. The uncertainty target is at least in part dependent on the intended use of the sensitivities, and it is likely that a strong correlation with uncertainty in ck values applicable for all systems would be difficult to establish.

Common iterated-source Monte Carlo simulation metrics such as source convergence have not been studied at the same level of detail for adjoint simulations as for forward simulations.

Experience has shown that the forward and adjoint keff values are typically statistically equivalent, and large deviations, especially those larger than 0.5 %Δk, should be investigated. Good agreement between TSUNAMI-calculated and DP sensitivities provides confidence that the adjoint flux solutions are adequate for the purpose in these simulations.

Some of the problem summary edits printed in the output, especially the EALF, may differ significantly between the forward and adjoint calculations. It is common for the EALF values to differ by two orders of magnitude between the forward and adjoint calculations for a thermal system. This does not represent a significant difference in the simulations, but rather, a difference in the physical meaning of the EALF parameter as determined in the reversed group structure used in the adjoint calculation. It is not clear whether the adjoint EALF value has a meaningful physical interpretation.

5.3 SCALE Continuous-Energy Methods

As discussed in Section 4.2.2, SCALE contains two CE methods for calculating sensitivity coefficients in TSUNAMI-3D. Both the IFP and CLUTCH options are available in KENO in SCALE 6.2 and SCALE 6.3. Only the IFP method has been implemented in the Shift Monte Carlo transport code in SCALE 6.3. Recommendations for using IFP and CLUTCH within KENO are provided in the next two subsections. Although it is expected that the guidance for using IFP in KENO is also applicable to Shift, the Shift implementation will not be emphasized here given the limited experience performing calculations with it at the time of writing.

Two differences between the MG and CE sensitivity methods are worth highlighting at this stage because they apply generically to CE calculations compared to MG calculations. The first difference is that no implicit sensitivity component exists with CE methods because the pointwise nuclear data are used directly. Without the generation of flux-weighted average cross sections, there is no connection among cross sections as there is in the MG process. The elimination of cross-section processing for the transport calculation and the implicit effects allows for accurate calculations for systems that have heterogeneity in multiple dimensions. The limitation of cross-section processing requiring representative 1D models can be impossible to overcome in MG TSUNAMI-3D: implementation of CE TSUNAMI-3D allows for sensitivity coefficient calculations in systems that were previously very difficult or impossible [29, 30, and 63]. The second significant difference is that both CE methods calculate sensitivity coefficients in a single forward calculation and do not perform an explicit adjoint calculation. The primary difference between the two CE methods is the different importance determination methods used in lieu of the explicit adjoint. Also note that the sensitivity coefficients are calculated directly in the 3D Monte Carlo transport code in the CE methods. SAMS is still used in the CE sequences, but only to generate edits from the SDF generated by the transport codes. Some of the SAMS options described in the SCALE manual [7] apply only to sensitivity coefficient calculations and thus are inapplicable in the CE TSUNAMI-3D sequences.

One important similarity between the CE and MG methods is that sensitivities are still tabulated and reported by mixture. The use of duplicate mixtures to capture different sensitivities in different regions of a model, as discussed in Sections 5.1 and 5.2, is still relevant for both IFP and CLUTCH.

The coordinate transform discussed in Section 5.2.2 is also not needed in CE TSUNAMI-3D calculations because they do not tabulate flux moments for sensitivity calculations. This function is still activated by default in TSUNAMI-3D to improve accuracy in MG calculations. It should be disabled by setting TFM=no in the parameter block, thus saving runtime in the execution of CE TSUNAMI-3D calculations.

5.3.1 Iterated Fission Probability

The theory behind the IFP method is discussed in Section 4.2.2.1. Essentially, there is only a single user-specified parameter: the number of latent generations, CFP=. Greene [42] provides a study of the impact of the number of latent generations on sensitivity coefficient calculations.

Both Shift and KENO results are presented for IFP, and KENO results using the CLUTCH method are also presented, as discussed in Section 5.3.2. As expected from the theory, a larger number of latent generations leads to more accurate but more uncertain sensitivity coefficients.

Five to ten latent generations are generally sufficient to calculate accurate sensitivities for most nuclides in the two benchmark models considered in Greene [42]. In some cases, especially the large spent nuclear fuel (SNF) storage canister model, 20–40 latent generations are required.

More study of large, complex systems may be warranted, because methods development tends to focus on benchmark experiments that are smaller and therefore easier to model and calculate.

The memory requirement for the calculation also increases significantly with an increased number of latent generations. In the KENO and Shift implementations of IFP, each generation contains reference events that can contribute to the sensitivity coefficients. This means that with more latent generations, more sets of histories are tracked from initiation to the asymptotic population. In other words, with 5 latent generations, all the history information must be stored for 5 generations before the first generation's information can be released. This increases to 10 generations of stored history information if the number of latent generations is increased to 10. The block implementation in MCNP avoids this memory increase with the number of latent generations and instead requires more total generations to be simulated to reach the same number of generations contributing to sensitivity tallies. Aside from the algorithmic difference, there is also a difference in the user input to specify the number of latent generations. The TSUNAMI-3D input, CFP=, specifies the number of latent generations, whereas the MCNP KOPTS input, BLOCKSIZE=, includes the latent generations, the reference generation, and the asymptotic generation. To specify 5 latent generations, the TSUNAMI-3D input would be CFP=5, and the KOPTS input would be BLOCKSIZE=7.
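The relationship between the two input conventions can be captured in a one-line helper. This is an illustrative sketch only, not part of either code's input processing.

```python
def blocksize_from_cfp(cfp):
    """MCNP KOPTS BLOCKSIZE= counts the latent generations plus the
    reference generation and the asymptotic generation, whereas the
    TSUNAMI-3D CFP= input counts only the latent generations."""
    return cfp + 2

assert blocksize_from_cfp(5) == 7  # CFP=5 corresponds to BLOCKSIZE=7
```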

The default number of latent generations for IFP calculations in SCALE 6.3 is 5 [7], which is probably a reasonable balance among accuracy, uncertainty, and memory usage. It might be possible to reduce this value slightly for high-leakage fast metal systems, and it would likely need to be increased some for large, complex systems. The IFP method has proved to be generally reliable for calculating accurate sensitivity coefficients, and there are no known generic types of models at this writing for which IFP struggles to generate accurate sensitivities.

As with all TSUNAMI-3D calculations, however, DP calculations should be used to confirm that the parameters selected yield accurate sensitivity coefficients.

IFP calculations are often run until the keff uncertainty is 0.00020–0.00050 Δk, depending on model complexity and runtime. The uncertainties in the resulting sensitivities are generally low enough to be useful for similarity assessments and other analyses. However, as discussed in Section 5.2.2, there is no solid guidance on how low the uncertainties in the sensitivities really should be. The ability to run the calculations in parallel in Shift makes lower uncertainties feasible with shorter wall times.

5.3.2 CLUTCH

The theory behind the CLUTCH methodology is discussed in Section 4.2.2.2. An extensive study of CLUTCH performance for a range of different benchmark configurations from the VALID library [35] is provided in Jones [23], and a more limited study of sensitivity coefficient calculations for large fuel storage systems is provided in Marshall et al. [64].

CLUTCH requires more user input than IFP. The mesh on which the F*(r) importance function is to be calculated must be specified, along with the number of latent generations for the IFP calculation of the F*(r) function. The number of skipped generations, NSK=, during which the F*(r) function is calculated is also specified and should have a different value than in a traditional forward keff calculation. The F*(r) function can also be written into a 3dmap file for visualization in Fulcrum; this highly recommended option is activated by setting FST=yes in the parameter block in the TSUNAMI-3D input. Instructions for visualizing a 3dmap file in Fulcrum can be found in Section 8.3 of the KENO V.a primer [60] and in Section 9.3 of the KENO-VI primer [61].

As discussed in Section 4.2.2.2, the F*(r) function is used as the importance function for fission chains based on the location of the initiating fission event. Therefore, F*(r) is the proxy for an explicit adjoint calculation in the CLUTCH methodology. The F*(r) mesh must at least cover the regions of the model containing fissionable material. Generally, this is less than the requirement to cover the entire global unit geometry with the mesh in MG TSUNAMI-3D, as discussed in Section 5.2.2. Covering the entire global unit is acceptable but unnecessary in CLUTCH. A uniform mesh over the entire geometry can be specified by setting CGD=yes in the parameter block and specifying the size of the cubic mesh with the MSH= parameter. Alternatively, a variable mesh can be defined in the Gridgeometry block as discussed in Section 5.2.2. If multiple mesh geometries are created in the TSUNAMI-3D input, then the mesh to be used for F*(r) must be specified with the CGD= parameter. An error message is generated and execution is terminated if a fission occurs outside of the F*(r) mesh.

The geometric resolution of this mesh is recommended by Perfetti [27] and in the SCALE manual [7] to be approximately 1–2 cm. This generally worked well for Jones [23] but can be problematic for large models such as fuel storage casks [63]. The challenge for larger models is to achieve statistical convergence of the tallied importance in a large number of small voxels.

The generic guidance in the SCALE manual is to select the number of particles per generation (NPG=) and the number of skipped generations (NSK=) such that, on average, each F*(r) voxel will be scored in by 10–100 histories. As an example, for a 10 x 10 x 10 F*(r) mesh with a total of 1,000 voxels, a total of 10,000 to 100,000 histories would be desired in the discarded generations. For a reasonable NPG value of 10,000, this requires only 1–10 skipped generations. More skipped generations than this are likely needed to converge the fission source, so this is not an additional burden on the simulation. In the case of a storage cask, however, even a 2 cm cubic mesh could be on the order of 120 x 120 x 270 for a total of almost four million voxels. This would then suggest 40,000,000 to 400,000,000 inactive histories, corresponding to thousands of inactive generations. Fortunately, a much coarser F*(r) mesh allowed accurate calculations with 500 discarded generations of 50,000 particles each. The radial detail of the mesh was half the assembly storage cell size, quartering each storage cell.
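The voxel-count arithmetic above can be captured in a short helper. This Python sketch is illustrative only; the function name is invented.

```python
import math

def skipped_generations(mesh_dims, npg, hits_per_voxel=(10, 100)):
    """Range of skipped generations needed so that, on average, each F*(r)
    voxel is scored by the target number of histories (10-100 per the
    generic SCALE manual guidance)."""
    voxels = math.prod(mesh_dims)
    lo, hi = hits_per_voxel
    return (math.ceil(voxels * lo / npg), math.ceil(voxels * hi / npg))

# Benchmark-sized mesh: 10 x 10 x 10 voxels at NPG=10,000 -> 1-10 generations
print(skipped_generations((10, 10, 10), 10_000))     # (1, 10)

# Cask-sized 2 cm mesh: 120 x 120 x 270 voxels -> thousands of generations
print(skipped_generations((120, 120, 270), 10_000))  # (3888, 38880)
```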

The axial mesh was variable, with very large intervals in the bottom portion of the assembly, which experiences very few fissions, and smaller intervals of approximately 2.5 in. in the upper portions of the assembly [64]. The F*(r) function and its uncertainties were examined to ensure reasonable values. The F*(r) function presented in Figure 1 of Marshall and Greene [64] is reproduced here as Figure 5-8. The left-hand image shows the cask geometry, F*(r) mesh, and F*(r) function. The right-hand image shows only the fueled portion of the cask and eliminates the mesh lines so that the underlying function can be viewed more clearly.

The recommended mesh size of 1–2 cm is a reasonable value for most benchmark experiment models. As discussed above, some application models may require a user-specified mesh tailored with an understanding of the expected variation of neutron importance within the model.

Jones recommends 1,000 skipped generations [23], which is likely reasonable for the NPG values typically used in serial CSAS calculations. A smaller number of skipped generations is likely feasible if the number of particles per generation is increased significantly to improve the efficiency of parallel calculations. As noted in the previous paragraph, 500 generations of 50,000 particles were sufficient for accurate sensitivity results for the SNF storage cask model shown in Figure 5-8.

Figure 5-8 F*(r) Mesh and Function for a 32 PWR Assembly Storage Cask [64]

CLUTCH calculations can be performed in parallel in KENO in SCALE 6.2 and SCALE 6.3.

Larger generations are more efficient for parallel calculations, so NPG values of 100,000 or more are not uncommon for these calculations. These large generations enable CLUTCH calculations with only a few hundred skipped generations but still simulate the millions of histories needed to generate an accurate F*(r) function. The elimination of the explicit adjoint calculation can make CLUTCH calculations more efficient than those performed using MG TSUNAMI-3D, especially for fast systems that have particular difficulties with the adjoint calculation [23], as discussed in Section 5.2.2. This improved efficiency for fast systems allowed the use of CLUTCH for the TSUNAMI-3D calculations for the fast benchmarks in the SCALE 6.2.2 and 6.2.4 validation reports [51, 15].

The statistical convergence of the F*(r) function is likely important for obtaining accurate sensitivity calculations with CLUTCH. An F*(r) convergence edit is printed in the output directly after the table of generations and the execution termination message. An example edit is provided in Figure 5-9. The edit provides the fraction of voxels in the F*(r) mesh with non-zero values and uncertainties above 5%, 10%, 20%, and 50%. This quantifies the statistical uncertainty distribution in the F*(r) function. Unfortunately, Jones [23] found no clear correlation between these values and the accuracy of the resulting sensitivity coefficients. A review of the F*(r) function and its uncertainties in Fulcrum is the recommended approach for assessing its quality.

F*(r) Convergence Statistics:

WARNING: Of the 44 F*(r) mesh intervals that scored tallies...
    29.55% of the F*(r) tallies contain more than 5% uncertainty;
    11.36% of the F*(r) tallies contain more than 10% uncertainty;
    9.09% of the F*(r) tallies contain more than 20% uncertainty; and
    0.00% of the F*(r) tallies contain more than 50% uncertainty.

Figure 5-9 F*(r) Statistical Convergence Edit

The default value assigned to all voxels of the F*(r) function is 1. This implementation prevents a fission that occurs in a voxel that did not experience any fission events in the discarded generations from having a zero importance. The default value is of little importance to the execution of a CLUTCH calculation or to the assessment of the resulting sensitivities, but it can make the visual display and assessment of the F*(r) function more difficult. In SCALE 6.2 releases, this default value of 1 is included in the 3dmap file used for visualization, although this has been removed in SCALE 6.3. For 3dmap files generated with SCALE 6.2, the MAVRIC utilities for manipulating 3dmap files, discussed in Section 4.3.2 of the SCALE manual [7], can be used to remove the values of exactly 1.0 and to improve the visualization of the F*(r) function. The mtFilter utility can be used twice on the underlying F*(r) 3dmap file to create one 3dmap file with only values above 1 and one with only values below 1. The mtAdder utility can then be used to add these 3dmap files together to create a final 3dmap of the F*(r) function that can be visualized in Fulcrum. The left-hand image of Figure 5-8 is reproduced below in Figure 5-10 without this filtering process applied to demonstrate the additional clarity that the process provides. This process is probably not necessary for routine analysis work but can be helpful in generating improved figures for reports and presentations.
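The statistics in the convergence edit and the removal of default-valued voxels can both be mimicked with a few lines of Python. These helpers are illustrative only; they operate on bare lists rather than actual 3dmap files.

```python
def fstar_convergence_stats(rel_unc, thresholds=(0.05, 0.10, 0.20, 0.50)):
    """Fraction of scored voxels whose relative uncertainty exceeds each
    threshold, mirroring the convergence edit in Figure 5-9."""
    return {t: sum(u > t for u in rel_unc) / len(rel_unc) for t in thresholds}

def drop_default_voxels(fstar_values, default=1.0):
    """Discard voxels still holding the default value (never scored), the
    same net effect as the mtFilter/mtAdder manipulation of a 3dmap file."""
    return [v for v in fstar_values if v != default]

stats = fstar_convergence_stats([0.02, 0.06, 0.12, 0.25])
# 3 of 4 voxels exceed 5%, 2 of 4 exceed 10%, 1 of 4 exceeds 20%, none 50%
```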

Figure 5-10 F*(r) Visualization without Filtering to Remove Default 1.0 Values

The CLUTCH methodology has some shortcomings that manifest most often in poor estimates of reflector sensitivities [39, 65]. The poor results for fissionable material reflectors [39] result from insufficient fission events during the skipped generations. Accurate sensitivity coefficients for the reflector region can be achieved given a sufficiently large number of discarded generations. Inaccurate sensitivity coefficients for polyethylene reflectors [65] are most likely caused by an F*(r) mesh that is too coarse. Mesh refinements could result in improved calculations, but this would require larger numbers of skipped generations to converge the F*(r) function in these smaller regions. KENO only supports Cartesian mesh geometries, so sufficient mesh refinement for cylindrical or spherical systems could be essentially impossible. Generally, other TSUNAMI-3D methods are preferred for systems with significant reflector sensitivities.

These shortcomings may be addressed with ongoing research to develop a hybrid method for calculating the F*(r) function in a deterministic calculation [66].

CLUTCH calculations are typically executed in parallel at ORNL given the availability of compiled parallel executables. This allows for longer calculations with shorter total runtimes.

Calculations are often run to a keff uncertainty of 0.00010 Δk, typically resulting in very low uncertainties on the relevant sensitivity coefficients. This can make CLUTCH calculations quite efficient. CLUTCH does not typically reduce the uncertainty in hydrogen or other moderator materials as quickly, which contributes to the preference for using CLUTCH for fast neutron spectrum systems. Implementation in Shift, future methodology improvements such as deterministic F*(r) calculation, and more complex mesh support may improve CLUTCH performance in the future and could enable significantly more efficient sensitivity coefficient calculations than are possible with the IFP approach.

5.4 Uncertainty Analysis

As discussed in Section 4.3, the uncertainty in S/U analysis is related to uncertainty in the nuclear data. This uncertainty is tabulated in a series of covariance matrices that comprise a covariance library. Much of the discussion in Sections 4.3–4.5 simplifies the description to a single matrix, but the reality is more complicated. Fortunately, that complexity is largely handled for the analyst behind the scenes by the analysis codes.

Within the TSUNAMI suite, uncertainty propagation is handled by SAMS and TSUNAMI-IP.

SAMS will calculate and print the total data-induced uncertainty in keff, along with an extended uncertainty edit that provides the uncertainty contribution from each covariance matrix. In this context, each relationship is referred to as a matrix because it is an energy-dependent matrix of data. These matrices are all contained within the single covariance library used in the calculation, aside from cases in which data are patched. Covariance data patching is discussed in more detail below. TSUNAMI-IP can also perform these uncertainty propagation calculations.

Specifying the uncert parameter causes TSUNAMI-IP to calculate the total data-induced uncertainty for all SDFs provided in the experiments and applications blocks in the input.

TSUNAMI-IP also requires the values keyword to print the table of values in the output file. The uncert_long option will generate and print the extended uncertainty edit for each SDF listed in the applications block.

The propagated uncertainty is physically relevant because it represents the potential bias in the application system resulting from each nuclide. The primary source of bias is nuclear data errors, and the errors are expected to be bounded, at least at a 1σ level, by the uncertainties in the data. As discussed in Section 4.4 and shown in Figure 4-3 and Table 4-1, there is ample evidence that this is true, at least for common nuclear systems. For fast and thermal spectrum systems, whether fueled by low enriched uranium (LEU), HEU, or Pu, the data-induced uncertainty bounds the observed bias. The sensitivity coefficients allow for precise application of the covariance data to the application system through the uncertainty propagation shown in Eq. (4). The generic data uncertainties are converted into application-specific keff margins because, by definition, the sensitivity coefficient converts a change in data into a change in keff, as discussed in Section 4.4. A dimensional analysis, as shown in Eq. (8), reinforces this:

    (Δk/k) = (Δα/α) x ((Δk/k) / (Δα/α)),    (8)

where Δk/k is the data-induced uncertainty in keff, Δα/α is the uncertainty in the nuclear data, expressed as a relative uncertainty, and the final factor is the sensitivity coefficient S, as defined in Eq. (1).
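The propagation in Eq. (4), often called the sandwich rule, can be illustrated with a toy two-group example. The numbers here are invented for illustration.

```python
def data_induced_uncertainty(S, C):
    """Sandwich rule: the variance in keff is S^T C S for a sensitivity
    vector S and a relative covariance matrix C."""
    n = len(S)
    var = sum(S[i] * C[i][j] * S[j] for i in range(n) for j in range(n))
    return var ** 0.5

# Two "groups" with 5% and 2% relative data uncertainty, no cross correlation
S = [0.20, 0.10]               # sensitivity coefficients
C = [[0.05**2, 0.0],
     [0.0,     0.02**2]]       # relative covariance matrix
print(data_induced_uncertainty(S, C))  # about 0.0102, i.e., ~1% in keff
```

In practice, SAMS and TSUNAMI-IP perform this propagation over every covariance matrix in the library and report both the total and the per-matrix contributions.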

The entire effort to develop, implement, deploy, learn, and use S/U methods exists to perform this task of recasting generic nuclear data uncertainty into an application-specific understanding of potential impacts on keff.

The quantification of bias potential in this way also allows for quantification of similarity for the purposes of bias estimation. In other words, the similar benchmark experiments that should be used in validation can be selected on the basis of shared nuclear data-induced uncertainty to provide a measure of how much of the most likely bias sources are shared between an application and a benchmark. The mechanics of this similarity assessment are discussed in Section 5.5. It is important to understand in this context that the main contributors to uncertainty are expected to be the main contributors to bias. The same nuclides must be primary sources of uncertainty, and therefore bias, to ensure reasonable similarity between the systems in terms of bias manifested in a code system.

The SCALE nuclear data libraries contain data on over 400 nuclides for ENDF/B-VII.1 and for over 500 nuclides for ENDF/B-VIII.0 [7]. The distributed data contain at least some covariance data for 187 nuclides in ENDF/B-VII.1 and 252 nuclides in ENDF/B-VIII.0. The missing nuclides use data from the SCALE 6.1 covariance library [21], mostly from the low-fidelity evaluation project [49]. Even with this compilation of data, some nuclide/reaction pairs still lack data for at least some energy groups. There are also some energy groups in which the uncertainty is unrealistic, typically for threshold reactions that begin near the top of an energy group. This can lead to a situation in which the relative uncertainty is more than 100% given the small value of the cross section averaged across the entire energy group. As mentioned in Section 4.3, TSUNAMI-IP allows user-specified data to be patched into the covariance library to allow for missing data or for relative uncertainties above a user-specified threshold.

Two options are recommended for use in tandem within either the SAMS or TSUNAMI-IP modules to patch missing or aberrant covariance data: the cov_fix and use_dcov parameters. The cov_fix option activates patching for zero or high relative uncertainties. The threshold for high uncertainties is controlled with the large_cov parameter, which has a default value of 10. This means that, by default, only uncertainties of more than 1,000% are patched for being too large. The data that are patched into the working covariance matrices for uncertainty propagation can be user defined, or a default value of 5% can be used. The default 5% value is treated as fully correlated across all energy groups.
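The effect of cov_fix can be sketched in a few lines. This toy Python function operates on a bare relative covariance matrix and is not the SCALE implementation; the function and variable names are invented.

```python
def patch_covariance(cov, default_std=0.05, large_cov=10.0):
    """Toy version of cov_fix: find groups whose diagonal relative variance
    is zero or whose relative standard deviation exceeds large_cov (10 =
    1,000%), and patch them with a default standard deviation, fully
    correlated among the patched groups."""
    n = len(cov)
    bad = [g for g in range(n)
           if cov[g][g] == 0.0 or cov[g][g] ** 0.5 > large_cov]
    for i in bad:
        for j in bad:
            cov[i][j] = default_std ** 2  # full correlation, equal std
    return cov, bad

# Group 0 has no data; group 2 has an implausible ~1,414% uncertainty.
cov = [[0.0, 0.0, 0.0],
       [0.0, 0.04, 0.0],
       [0.0, 0.0, 200.0]]
patched, bad_groups = patch_covariance(cov)
# bad_groups is [0, 2]; group 1 is left untouched
```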

The relevant TSUNAMI tools also support a more complex user-specified covariance treatment, which is recommended for use in most cases. Users are allowed to specify covariance data to apply in the thermal, intermediate, and fast ranges by providing the udcov_therm=, udcov_inter=, and udcov_fast= parameters, respectively. The user-supplied values are assumed to be fully correlated within the relevant energy zone. The correlation within the zone can also be specified by the user but is generally unnecessary. It should also be noted that the input provided is the standard deviation and not the variance. This approach may seem inconsistent with the name of the parameter, but it is much better aligned with the typical engineering practice of specifying uncertainties as standard deviations rather than variances.

ORNL has historically used a thermal uncertainty of 5%, an intermediate uncertainty of 10–20%, and a fast uncertainty of 40%. The bases for using these values are not documented, but they were determined based on expert judgment gleaned from reviewing typical values of covariance data. Figure 5-11 provides an example section of the parameter block for specifying and applying patched covariance data.

cov_fix  use_dcov  udcov_therm=0.05  udcov_inter=0.15  udcov_fast=0.40

Figure 5-11 Recommended Covariance Patching Parameters

The extended uncertainty edit identifies nuclide/reaction pairs for which default covariance data have been used. A single asterisk indicates that default data have been used because no covariance data are available for the specified nuclide/reaction. Three asterisks indicate that default data have been used to patch zero or large values in specific energy ranges. A list of all nuclide/reaction pairs with patched data is also generated in an output table labeled "Covariance Warnings in creating working COVERX library." Generally, the patches are applied to rare, unimportant reactions or to the covariance in low-energy regions in which it has been set to zero. This table should be reviewed to ensure that patched data are not applied to nuclide/reaction pairs important to the system. This mismatch can happen when the SCALE ID for a nuclide differs between the covariance library and the SDF, generally when they were generated from different data libraries. An example of this mismatch was documented for graphite in the initial testing of ENDF/B-VIII.0 covariance data [45]. In this case, the issue was caused by the introduction of isotopic cross sections for graphite in ENDF/B-VIII.0, which led to an inconsistency with the SDF generated with elemental data from ENDF/B-VII.1.

The uncertainty analysis capability also makes it possible to quantify validation gap margins. Without sufficient validation for a system component, a margin should be assessed that reduces the upper subcritical limit (USL) to account for the additional uncertainty represented by this missing data. See Clarity et al. [4] and the consensus standard on validation [1] for more discussion of the derivation of the USL, and see specifically Section 7 of Clarity et al. for more discussion of identifying and addressing validation gaps and weaknesses. Historically, this margin has been estimated based on engineering judgement, possibly combined with sensitivity calculations, to determine the impact of changes in the unvalidated component on the system.

Using S/U techniques, the potential bias resulting from a nuclide can be quantitatively estimated by propagating the covariance data for that nuclide with the sensitivity of the nuclide in the application system of interest.

This process has been used to determine validation penalty terms for PWR BUC in NUREG/CR-7109 [10] and for BWR BUC in NUREG/CR-7252 [11]. The recommendations from NUREG/CR-7109 have been incorporated into NUREG-2215 [53] and NUREG-2216 [54]. The conclusions in NUREG/CR-7109 were determined using the SCALE code package and were later confirmed to also be applicable for MCNP in NUREG/CR-7205 [67]. This demonstration was achieved by performing DP calculations with MCNP to show that the TSUNAMI-calculated sensitivities were also accurate predictions of the sensitivities in MCNP. Such a confirmation could be accomplished today via the KSEN capability within MCNP, but KSEN had not been deployed when NUREG/CR-7205 was written.

The mechanics for generating an estimate of such a validation gap start with an extended uncertainty edit either from SAMS or from TSUNAMI-IP. The uncertainty contribution of each reaction of each nuclide is tabulated in the extended uncertainty edit. The contribution from each of the reactions for a nuclide must be combined to determine the overall contribution from that nuclide. Most individual reaction uncertainties are believed to be independent, so most of the contributions can be combined using standard uncertainty propagation and taking the square root of the sum of the squares of the components.

The situation is more complicated for some reactions for which the sum of multiple reactions may be known with more precision than each of the components. Neutron absorption is one example, and scattering is another. Scattering events are characterized as elastic or inelastic and are thus represented using different cross sections. The total number of scattering events is measured more precisely than the identification of the type of scattering event that occurred. This leads to uncertainty in the total scattering cross section and additional uncertainty in the partitioning of the scattering events between elastic and inelastic. This creates a covariance term in the data linking the uncertainties of the two reactions. These cross terms are reported in the code output as negative uncertainties. Logically, there is no such thing as a negative uncertainty, but this term represents the amount of additional uncertainty that would be present in the total if the cross correlation were not considered. The details of neutron absorption are similar but more complicated, as neutron absorption reactions include fission, (n,γ), (n,α), (n,2n), and others. Again, the total absorption cross section is known with more precision than the individual reactions.

The arithmetic necessary to account for these cross correlation terms is slightly more complicated than typical uncertainty propagation. The sum of the squares of the negative terms is subtracted from the sum of the squares of the positive terms. The square root of the difference then represents the total data-induced uncertainty for the nuclide in question. This process is shown in Eq. (9):

    σk = √( Σ(σpos)² − Σ(σneg)² ) ,    (9)

where

    σk is the total data-induced uncertainty for a nuclide,
    Σ(σpos)² is the sum of the squares of the positive uncertainties, and
    Σ(σneg)² is the sum of the squares of the negative uncertainties.
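Eq. (9) can be applied mechanically to the contribution list from an extended uncertainty edit. The sketch below is illustrative only; the contribution values are hypothetical and are not taken from any TSUNAMI output.

```python
import math

def nuclide_data_uncertainty(contributions):
    """Eq. (9): combine per-reaction uncertainty contributions (%dk/k)
    for one nuclide. Negative entries are the cross-correlation terms
    that the extended uncertainty edit reports as negative
    uncertainties; their squares are subtracted rather than added."""
    pos_sq = sum(c * c for c in contributions if c > 0)
    neg_sq = sum(c * c for c in contributions if c < 0)
    return math.sqrt(pos_sq - neg_sq)

# Hypothetical contributions in %dk/k: two positive terms and one
# negative cross-correlation term.
print(nuclide_data_uncertainty([0.30, 0.40, -0.10]))  # about 0.49
```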

A numerical example can be used to clarify this process. A notional UF6 package was developed as an example application model with a keff value of 0.93592 +/- 0.00019. A validation gap could be derived for fluorine under the assumption that no fluorine would be present in the set of potential benchmark experiments available for validation. This assumption may not be valid but is made for the purpose of this exercise. After sensitivities were calculated for the application model, the covariances from the 56-group SCALE library based on ENDF/B-VII.1 [7] were propagated. All of the uncertainty terms for 19F were extracted from the output file and are presented in Table 5-3. The presence of negative terms associated with cross correlations between elastic and inelastic scatter, denoted as (n,n′), is noted. There are other cross correlations among scattering and absorption reactions. Note that the uncertainty contributions are reported by TSUNAMI in units of %Δk/k, making the numbers 100 times larger than if they were edited in Δk/k.

Table 5-3 Uncertainty Terms for 19F in Example Model

Covariance Matrix (19F Reaction with 19F Reaction)   Uncertainty Resulting from This Matrix (%Δk/k)
elastic with elastic        7.7453E-2 +/- 1.1507E-4
(n,n′) with elastic         5.4268E-2 +/- 9.2636E-5
(n,n′) with (n,n′)          4.9825E-2 +/- 1.0833E-4
(n,α) with (n,α)            3.3215E-2 +/- 1.1823E-6
(n,α) with elastic          1.0290E-2 +/- 1.9386E-6
(n,p) with (n,p)            3.3352E-3 +/- 3.1509E-8
(n,γ) with (n,γ)            2.1309E-3 +/- 1.1539E-9
(n,p) with elastic          1.9835E-3 +/- 5.3537E-8
(n,d) with elastic          4.4065E-4 +/- 3.1825E-9
(n,γ) with elastic          3.9572E-4 +/- 1.6918E-9
(n,d) with (n,d)            3.5893E-4 +/- 1.0069E-9
(n,2n) with elastic         1.9932E-4 +/- 1.9929E-9
(n,t) with elastic          1.2846E-4 +/- 2.9849E-10
(n,t) with (n,t)            5.6156E-5 +/- 5.4188E-11
(n,n′) with (n,2n)          1.8009E-5 +/- 7.4920E-11
(n,2n) with (n,2n)          1.0539E-5 +/- 1.0567E-11

From the 16 nuclide/reaction uncertainty contributions listed above, the 12 positive and four negative values can be extracted. The sum of the squares of the positive terms is 0.00971 (%Δk/k)², and the sum of the squares of the negative terms is 0.00295 (%Δk/k)². The difference between these two numbers is 0.00677 (%Δk/k)² (allowing for roundoff), and the square root of the difference is 0.082 %Δk/k. This value should be multiplied by the system keff of 0.93592 to determine the reactivity margin in %Δk, but this step could be neglected conservatively here because it lowers the validation penalty. Including the system keff reduces the total 19F data-induced uncertainty to 0.077 %Δk, or 77 pcm. This nominally provides a 1σ estimate of the uncertainty, so it could be increased to a 95% confidence interval by multiplying by 1.96, assuming a normally distributed uncertainty and two-sided statistics. Two-sided statistics seem reasonable given that a data error could raise or lower keff. The one-sided multiplier is also quantitatively lower at 1.645, so in this case, the application of two-sided statistics is also conservative. The final result of this calculation is a validation gap penalty of 151 pcm. It is not clear that a validation gap penalty such as this needs to be defended as representing a 95%

confidence interval because the engineering judgement-based values used historically for this sort of assessment cannot realistically be interpreted with any statistical rigor. It should also be noted that the uncertainties in the uncertainty contributions were not propagated, but again, this seems like a reasonable omission given that they are two or more orders of magnitude less than the uncertainty estimates themselves.
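The arithmetic in this example can be checked in a few lines. The inputs below are the rounded sums quoted in the text; the conversion of %Δk to pcm (a factor of 1,000) is the only step not spelled out above.

```python
import math

pos_sq = 0.00971   # sum of squares of positive terms, (%dk/k)^2
neg_sq = 0.00295   # sum of squares of negative terms, (%dk/k)^2
k_eff = 0.93592    # application system keff

unc = math.sqrt(pos_sq - neg_sq)        # total 19F uncertainty, %dk/k
margin_pcm = unc * k_eff * 1000.0       # convert %dk/k to %dk, then to pcm
penalty_pcm = margin_pcm * 1.96         # two-sided 95% multiplier

print(round(margin_pcm), round(penalty_pcm))  # 77 151
```

Small differences from the values quoted in the text are roundoff, since the report's numbers were computed from unrounded contributions.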

There are reasonable questions regarding the reliability of such a gap assessment process. As noted in Section 4.3, covariance estimates have varied significantly among different evaluations.

The impact of this can be assessed by performing the analysis with multiple covariance evaluations, as demonstrated by Marshall [62]. That study considered all four covariance libraries distributed with SCALE 6.3 [7], which ultimately contain only two distinct covariance evaluations for the nuclide of interest in the application models, 35Cl. The limited number of independent evaluations is a limitation of this approach, and the result of considering both covariance estimates varied noticeably between the two application models considered. In one application model, the difference was only approximately 10 pcm, but for the other application it was over 200 pcm.

Although it is impossible to draw generic conclusions from these sorts of results, it is informative that such results can be generated specifically for safety analysis models. A more difficult issue raised in Section 4.4 is that it is nearly impossible to assess the quality of covariance data for non-actinide nuclides. Clearly, these concerns mean that using covariance-based assessments of validation gap penalties should be approached cautiously, but the ability to generate a quantitative estimate of the reactivity bias that may be represented by an unvalidated nuclide could be an extremely powerful tool.

5.5 Similarity Assessment

The uncertainty analysis described in Section 5.4 provides the basis for a rigorous, quantitative assessment of similarity between systems. As has been mentioned in Sections 2.3, 4.3, 4.5, and 5.4, the majority of the bias in contemporary 3D Monte Carlo computational tools used in NCS assessments comes from errors in the nuclear data. The direct corollary of this is that two systems should have similar computational biases if they exercise the same nuclear data. This is the reason that similar benchmark experiments must be used to determine an applicable computational bias in validation.

The greatest benefit of S/U methods in NCS validation is precisely the ability to make defensible, informed decisions on benchmark experiment applicability. The historical engineering judgement-based approach to experiment selection has generally worked well, but it relies on experts and requires the development of experience. A rigorous physics-based justification for experiment selection is also sometimes difficult to generate and defend.

The use of S/U tools does not eliminate these challenges, but it does shift many of them.

Experience and expertise with the S/U tools are desirable to improve confidence in the results and to identify unexpected results. This is no different from any other aspect of computational analysis associated with NCS or nuclear engineering in general. Opinions differ on sufficient similarity, but the discussions are focused on the relevant physics of the system. Challenges abound in generating reliable, consistent covariance data, but multiple libraries exist, allowing for a range of assessments. Use of S/U tools in validation does not guarantee smaller biases or lower bias uncertainties, but it should improve the justification of experiment selection and therefore the entire validation analysis.

The recommended parameter for similarity assessment in the TSUNAMI suite is the integral index ck. As discussed in Section 4.5, the ck index is a correlation coefficient describing how much data-induced uncertainty is shared between two systems. The calculation is performed in TSUNAMI-IP by specifying the c parameter in the parameter block. As with the uncertainty assessment discussed in Section 5.4, the values keyword must also be provided to print the table of values. The ck value is calculated for each SDF provided in the experiments block paired with each SDF provided in the applications block. A summary table can also be generated by TSUNAMI-IP that contains the experiments with ck values exceeding a user-specified threshold. The table is created by specifying csummary in the parameter block, and the threshold can be set with cvalue=. The default value for this threshold is 0.9.
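Once ck values have been extracted from the TSUNAMI-IP output, the csummary-style screening can also be reproduced or customized offline. The experiment names and ck values below are hypothetical, chosen only to illustrate the screening step.

```python
# Hypothetical ck results for one application model; in practice these
# would be parsed from the TSUNAMI-IP output tables.
ck_values = {
    "LEU-COMP-THERM-001-001": 0.93,
    "LEU-COMP-THERM-042-005": 0.86,
    "HEU-MET-FAST-015-001": 0.41,
}

threshold = 0.9  # mirrors the cvalue= default in TSUNAMI-IP

similar = sorted(name for name, ck in ck_values.items() if ck >= threshold)
print(similar)  # ['LEU-COMP-THERM-001-001']
```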

As discussed in Section 3.2, the guidance for ck values in validation contained in Rearden et al. [21] is taken from Broadhead et al. [34]. The exact wording in Rearden et al., Section IX.E, is as follows: "Past studies have indicated that systems with ck values of 0.9 and above are highly similar to the application, those with values of 0.8 to 0.9 are marginally similar, and those with values <0.8 may not be similar in terms of computational bias." As discussed in Section 4.5, the practical implementation of this in several ORNL studies and reports is concerned with the number of experiments with ck values of at least 0.8 [6, 10, and 11]. Clarity et al. [4] make no specific recommendation on the value of the ck index that represents sufficient similarity to use an experiment in validation. Guidance for NRC-regulated fuel cycle facilities also exists in NUREG-1520 [57], but as mentioned in Section 4.5, this guidance may be outdated. Generally, it appears that using a ck threshold of 0.8 or higher is likely reasonable. A lower threshold would require some justification, and a higher threshold would likely be viewed as rigorous.

The covariance patching issue discussed in Section 5.4 is also relevant in similarity assessment because it impacts the data-induced uncertainty. The approach illustrated in Figure 5-11 is also recommended in similarity assessment because it is the best available means of dealing with holes in the covariance data. The use of patched data can impact ck if important reactions have data patched with inappropriately large covariance data. A large ck discrepancy was identified in work supporting an intercomparison of USLs determined using S/U methods [68] because patched data were used for a vanadium reflector in a fast spectrum HEU benchmark. The user-specified 40% default uncertainty was applied instead of the evaluated covariance data. This issue was identified by reviewing the extended ck and uncertainty edits and recognizing that the top uncertainty contributor in the benchmark was patched data. In this case, the SCALE ID had changed when an updated nuclear data evaluation replaced elemental data with isotopic data. The SDF was updated to include the new SCALE ID, and the appropriate covariance data were applied. A questioning attitude and understanding of the available tools can help identify aberrant results and provide better understanding of the potential causes and explanations of the unexpected values.

Guidance is lacking to address the situation in which no benchmark experiments with ck values of 0.8 or higher can be identified. The first step in this situation should always be a review of the ICSBEP Handbook [14] to identify other experiments. The available SDFs, discussed more in Section 5.6, facilitate this search. The DICE tool, distributed with the ICSBEP Handbook and available from the NEA website, can also be extremely helpful in identifying candidate experiments. If there are still none or only a limited number of experiments with high ck values, then a reasonable approach is to perform a validation with the most similar experiments available and to take an additional validation gap penalty for poor similarity. The use of nonparametric methods, as addressed in Section 6.2 of Clarity et al. [4], may also be warranted to increase conservatism. The data adjustment methods discussed in Section 7.1 may also provide a viable bias estimate or method for generating a validation gap penalty.

TSUNAMI-IP will generate an extended ck edit if the c_long parameter is specified in the input. Like the extended uncertainty edit discussed in Section 5.4, this option generates and prints a table of contributions to the integral index ck coming from each nuclide/reaction covariance matrix. Unlike the uncertainty edit, the ck contributions are summed directly to determine the total ck value. This edit identifies the top contributors to ck and is helpful in understanding the shared sources of uncertainty between an application and an experiment.

As an example, NUREG/CR-7252 [11] examines potential critical experiments for use in validation of BWR BUC calculations. In Section 5.1 of the document, a similarity assessment is performed for fuel with a burnup of 25 GWd/MTU modeled with the actinide-only isotope set stored in the GBC-68 computational storage cask model [69]. The LCT-008-009 experiment has a ck value of 0.8850, whereas the LCT-010-001 experiment has a ck value of only 0.6438. The top five contributors to ck and their contributions are given in Table 5-4. The same reactions are the top contributors to ck for both experiments, but the similarity is clearly higher for LCT-008-009. A plot of the three relevant sensitivities is provided in Figure 5-12 showing that the sensitivity is much lower for the GBC-68 model than for either benchmark. The lower sensitivity of LCT-008-009 is clearly a closer match than the LCT-010-001 sensitivity. Translating the sensitivity differences into quantitative differences in the ck index is difficult, but this process can be used to gain understanding of the sources of and differences in similarity among different candidate benchmark experiments.

Table 5-4 Top ck Contributors for Two Benchmarks Compared to GBC-68

LCT-008-009                            LCT-010-001
Nuclide/reaction   ck contribution     Nuclide/reaction   ck contribution
235U ν̄             0.40558             235U ν̄             0.30959
238U (n,γ)         0.21637             238U (n,γ)         0.09580
235U (n,γ)         0.06966             235U (n,γ)         0.06303
235U (n,fission)   0.05445             1H (n,γ)           0.05884
1H (n,γ)           0.03701             235U (n,fission)   0.03157

Figure 5-12 235U Sensitivities for GBC-68 and Two ICSBEP Benchmarks

In a similar fashion, the extended uncertainty edit discussed in Section 5.4 can be used to help identify the dissimilar uncertainty contributors and important contributions from missing nuclides. A comparison of the top uncertainty contributors between the application case and LCT-010-001 can be informative regarding the missing similarity. Logistically, it is important to note that TSUNAMI-IP will only generate extended uncertainty edits for SDFs listed in the applications block. For this reason, it is sometimes necessary to list an experiment SDF as an application.

The top five contributors to uncertainty for LCT-010-001 and the GBC-68 application are provided in Table 5-5. The total data-induced uncertainty is also provided, as is the running total from the top contributors. This provides an indication of how much of the uncertainty is coming from the top contributor, the top two contributors, and so on. The results in the table indicate that the 235U uncertainty is a much larger contributor in the GBC-68 application model than in the benchmark. Radiative capture in 238U is the second largest contributor to uncertainty in the GBC-68 application, but it is not even in the top five contributors for LCT-010-001. Those top five nuclide/reaction pairs contribute more than 90% of the data-induced uncertainty in the benchmark, so it becomes clear why this benchmark is not applicable for validation of this application: the top uncertainty contributors are simply different nuclide/reaction combinations.
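The running totals in Table 5-5 accumulate the contributors in quadrature, so each new row combines with the previous running total by root-sum-of-squares. This can be checked with the GBC-68 values from the table; small differences from the tabulated percentages are roundoff, since the table was generated from unrounded values.

```python
import math

# Top five GBC-68 uncertainty contributors from Table 5-5, in %dk/k.
contributors = [0.254, 0.174, 0.110, 0.101, 0.095]
total = 0.419  # total data-induced uncertainty for GBC-68, %dk/k

running = 0.0
for unc in contributors:
    running = math.hypot(running, unc)  # quadrature accumulation
    print(round(running, 3), round(100.0 * running / total, 1))
```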

Table 5-5 Uncertainty Contributors for GBC-68 Cask and LCT-010-001

GBC-68 (Total Data-Induced Uncertainty: 0.419 %Δk/k)
Nuclide/Reaction     Unc. (%Δk/k)   Running Total   Percentage of Total
235U ν̄               0.254          0.254           60.7
238U (n,γ)           0.174          0.308           73.5
235U (n,γ)           0.110          0.327           78.0
239Pu (n,fission)    0.101          0.342           81.7
1H (n,γ)             0.095          0.355           84.7

LEU-COMP-THERM-010-001 (Total Data-Induced Uncertainty: 0.713 %Δk/k)
Nuclide/Reaction     Unc. (%Δk/k)   Running Total   Percentage of Total
235U ν̄               0.388          0.388           54.4
235U χ               0.364          0.532           74.6
238U (n,n′)          0.265          0.594           83.3
1H (n,γ)             0.186          0.622           87.3
235U (n,γ)           0.172          0.645           90.5

5.6 Sources of Available Sensitivity Data for Benchmark Experiments

As discussed in Section 3.2 and in Clarity et al. [70], the largest collection of available sensitivity data for benchmark experiments is distributed with the DICE tool, along with the ICSBEP Handbook. Approximately 5,000 SDFs are available in the collection. Over 4,000 SDFs have been generated at NEA using an automated process [36]. A total of 543 SDFs generated at ORNL are also distributed with DICE. Both sets of SDFs are discussed briefly in the remainder of this section.

The ORNL-generated SDFs come from two sources: the VALID library [35] and a project to perform validation for storage and processing of 233U [71]. All the SDFs are in an ornl folder within the DiceData folder on the ICSBEP distribution. There are TSUNAMI-1D and TSUNAMI-3D SDFs generated for 233U benchmarks, all of which were generated for the work documented in Mueller et al. [71]. This effort also generated SDFs in TSUNAMI-3D for the LEU-COMP-THERM-049 benchmark. The remaining 295 SDFs were provided to NEA after they were added to the VALID library. Distribution of SDFs to NEA as they were added to VALID stopped after NEA began generation of their own SDFs [36]. The ORNL-generated SDFs were mostly generated using the 238-group library based on ENDF/B-VII data, but the results are still applicable to more modern data sets [10, 38]. An advantage to the ORNL SDFs is that they were all checked with DP calculations and known to be accurate. The TSUNAMI-1D SDFs are also applicable for similarity assessments because the sensitivities to nuclear data are independent of the method used to generate them, as discussed in Sections 4.1.1 and 4.5. The independence of the sensitivity coefficients from the underlying calculational methodology has a strong theoretical basis and has also been demonstrated for multiple systems [39, 65].

Therefore, it can be concluded that all 543 ORNL-generated SDFs should be useful for performing similarity assessments.

The NEA-generated SDFs represent a rich resource for similarity assessment. Over 4,000 SDFs are available for many different categories of benchmark experiments. These SDFs are frequently used at ORNL for performing validation applicability assessments [11, 55, 72, and 73]. Most of these SDFs were also generated with libraries based on ENDF/B-VII, but again, this does not invalidate the calculated sensitivities for similarity assessment. The automated process used to generate the SDFs confirmed that MG KENO keff calculations agreed well with CE keff calculations, but this was the only confirmation performed on calculated results. ORNL staff performed informal comparisons of some NEA-generated SDFs with ORNL-generated sensitivity data that had been confirmed with DP calculations, and the agreement was excellent.

The SDFs are therefore believed to be sufficiently accurate for use in screening experiments for similarity for validation [70]. It is unlikely that the sensitivity data contain any errors of sufficient magnitude to invalidate such comparisons, and S/U-based experiment selection is generally superior to engineering judgement-based approaches. However, it is recommended that the sensitivities be confirmed with DP calculations if the data or ck values are going to be used directly in the validation [70]. The most likely scenario for this would be a validation trend based on the ck values, but data adjustment using the NEA-generated SDFs may also be performed.

Some logistical advice is provided in Clarity et al. [70] regarding efficient use of the NEA SDFs in TSUNAMI-IP to assist users in making use of this resource.

6 CASE STUDIES

It is informative to provide some case studies demonstrating the S/U methods for which the theoretical background is provided in Section 4 and for which some application-specific recommendations are given in Section 5. These case studies provide further clarification and demonstration of the application of the SCALE TSUNAMI S/U tools in NCS validation.

Hall et al. [73] studied the capability of some current transportation packages to incorporate high-assay low-enriched uranium (HALEU), that is, uranium with an enrichment between 5 and 20 wt% 235U, as allowed contents. This assessment included an examination of the reactivity effect of the higher enrichment and potential limits to keep the package in compliance with 10 CFR 71 [74] requirements, or tradeoffs to return the package to compliance. Hall et al. also examined the impact of the increased enrichment on the benchmarks applicable for validation.

The applicability assessment was performed with the integral index ck.

Three case studies are presented: a fresh fuel package containing two BWR fuel assemblies, a drum-type package containing TRISO fuel, and a generic SNF storage canister containing irradiated PWR assemblies. Each of the case studies presents different challenges and highlights different aspects of using S/U tools in NCS validation. The case studies do not proceed to actual bias and bias uncertainty determination because other guidance is available for processing a selected set of benchmark experiment results, including Dean and Tayloe [2], Lichtenwalter et al. [3], and Clarity et al. [4]. The first two case studies are selected at least in part based on analysis results included in Hall et al. [73].

6.1 BWR Fresh Fuel Shipping Package

One of the packages included in Hall et al. [73] is a fresh fuel transportation package for BWR assemblies. Different size arrays of packages were considered as part of the analysis to determine the criticality safety index (CSI) for the selected package. It also became evident that there was a relationship between the size of the array modeled and the ck values with many of the benchmark experiments. This effect is noted in Appendix C of Hall et al. and is the basis for selecting this package as one of the case studies to include in this document.

This section includes a demonstration of the analysis, including the generation of sensitivity data, confirmation with direct perturbation calculations, similarity assessment, and a discussion of validation gaps and weaknesses. The similarity assessment section includes an examination of the relationship between array size and similarity, along with an explanation for the effect. It also includes a comparison of similarity assessed with ENDF/B-VII.1 data vs. with ENDF/B-VIII.0 data. These examples demonstrate how to use the TSUNAMI tools to perform in-depth analysis of the ck results and to understand the physics involved in the similarity assessment.

Models were generated for a single package containing two assemblies, as shown in Figure 6-1. In Figure 6-1, the fuel rods are shown in blue, with gadolinia-bearing rods in yellow. Some fuel rods were omitted from the lattice in the safety analysis report to bound the reactivity of different fuel assembly types. The gadolinia rods are grouped in one quadrant of the lattice to minimize their worth for a given number and loading of rods. Taken together, the missing rods and clustered gadolinia rods are intended to generate a conservative keff value for the licensing of the package. These assumptions are neither confirmed nor tested here and are generally not relevant to the generation of sensitivity coefficients, similarity assessments, or gap assessments performed in this section.

Figure 6-1 Cross-Sectional View of the Single Package Model Containing Two BWR Assemblies

Different arrays of packages were also modeled, and sensitivity coefficients were generated for each model to calculate ck values for different array sizes. The array sizes used were 2 x 1 x 2, 5 x 1 x 5, 10 x 1 x 10, 15 x 1 x 15, 50 x 1 x 50, and 100 x 1 x 100. A model was also created with a reflective boundary condition applied to a single unreflected package to represent an infinite array of packages. An illustration of the models with increasing array sizes is provided in Figure 6-2, which also shows a 30 cm water reflector around the outside of the arrays in red.

This is a different water mixture from that inside the fuel lattice, capturing the difference in sensitivities between these two regions, as discussed in Sections 5.1.1, 5.1.3, and 5.2.2.

Figure 6-2 Illustration of Models with Different Package Array Sizes

6.1.1 Sensitivity Coefficient Generation with TSUNAMI-3D

All sensitivity calculations were performed with SCALE 6.3.0 using the KENO V.a transport code with the TSUNAMI-3D sequence. The CLUTCH method was used to generate sensitivities for small models, but the IFP method was used for the 15 x 1 x 15, 50 x 1 x 50, and 100 x 1 x 100 array models after it demonstrated better agreement with DP calculations. The KENO V.a transport code was selected because it has sufficient geometric capabilities to model the package and is significantly faster than KENO-VI. The CLUTCH method was selected to take advantage of the parallelism available for calculating sensitivities, and IFP was used for the large models because it provided more accurate results for the largest arrays explicitly modeled in the study. The largest arrays of packages would likely have required too much memory for a sufficiently resolved mesh for MG TSUNAMI-3D.

The CLUTCH calculations simulated 100,000 particles per generation. The F*(r) function was tabulated on a user-defined mesh focusing on the fuel assemblies in each model. Five latent generations were used in the IFP calculation of F*(r), which was tabulated over 1,000 skipped generations for the single package model, the 2 x 1 x 2 array model, the 5 x 1 x 5 array model, and the infinite array model. The 10 x 1 x 10 array model used 2,000 skipped generations to tabulate F*(r) because this resulted in better agreement with the DP calculations. KENO simulates at least twice as many active generations as skipped generations, so the models simulating 1,000 skipped generations complete after 3,001 total generations, and the models simulating 2,000 skipped generations terminate after 6,001 generations. The final stochastic uncertainty on the system keff was approximately 6 pcm for the calculations skipping 1,000 generations and approximately 4 pcm for the calculations skipping 2,000 generations.
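The generation counts quoted above are mutually consistent if the number of active generations equals twice the skipped generations plus one. A quick check (this formula is inferred from the quoted counts, not taken from the KENO documentation):

```python
def total_generations(skipped):
    """Total generations assuming KENO runs 2*skipped + 1 active
    generations, which reproduces the counts quoted in the text."""
    active = 2 * skipped + 1
    return skipped + active

print(total_generations(1000), total_generations(2000))  # 3001 6001
```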

The IFP calculations used five latent generations. The calculations were performed with 100,000 particles per generation, skipping the first 100 for source convergence. The simulations were targeted for an uncertainty of 10 pcm on the final keff value, so different numbers of generations were used to achieve this uncertainty in each calculation.

Sensitivities were generated using both the ENDF/B-VII.1 [17] and ENDF/B-VIII.0 [47] CE libraries to determine if the observed relationship between array size and ck was library dependent. This approach also provides the opportunity to study sensitivity coefficient generation with two different libraries and the effects of using two different covariance libraries.

DP calculations were performed for select nuclides to confirm sensitivities greater than 0.02.

The selected nuclides varied among the different models as the integral sensitivity values changed. Generally, DP calculations were performed for 235U, 238U (except in the single package model), and 1H in the fuel assembly. Comparisons of the 235U, 238U, and 1H sensitivities are provided in Table 6-1, Table 6-2, and Table 6-3, respectively, for the ENDF/B-VII.1 library.
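The comparison columns in these tables can be reproduced from the tabulated values. The sketch below uses a common central-difference form of the DP sensitivity (symmetric ±f fractional perturbations, with the nominal keff uncertainty neglected); the function names and exact formulas are illustrative assumptions, not the codes' internals, and recomputing from the rounded table entries can differ from the tabulated σ column in the last digit.

```python
import math

def dp_sensitivity(k_plus, sig_plus, k_minus, sig_minus, k_nom, f):
    """Central-difference direct perturbation sensitivity for a +/-f
    fractional perturbation, with the stochastic uncertainty propagated
    from the two perturbed keff results."""
    s = (k_plus - k_minus) / (2.0 * f * k_nom)
    sig = math.sqrt(sig_plus**2 + sig_minus**2) / (2.0 * f * k_nom)
    return s, sig

def compare(s_tsu, sig_tsu, s_dp, sig_dp):
    """Comparison statistics as tabulated: absolute difference,
    relative difference (%), and difference in combined sigma."""
    ds = s_tsu - s_dp
    return ds, 100.0 * ds / s_dp, ds / math.hypot(sig_tsu, sig_dp)

# Single package 235U row of Table 6-1:
ds, rel, nsig = compare(0.2170, 0.0002, 0.2136, 0.0028)
print(round(ds, 4), round(rel, 2), round(nsig, 2))
```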

Table 6-1 TSUNAMI and DP Integral Sensitivities for 235U

Array Size       TSUNAMI              DP                   Comparison
                 S         σS         S         σS         ΔS        ΔS/S (%)   ΔS (σ)
Single           0.2170    0.0002     0.2136    0.0028     0.0034    1.59       1.22
2 x 1 x 2        0.2206    0.0002     0.2147    0.0042     0.0059    2.74       1.39
5 x 1 x 5        0.2191    0.0001     0.2188    0.0036     0.0003    0.14       0.09
10 x 1 x 10      0.2182    0.0001     0.2178    0.0033     0.0004    0.16       0.11
15 x 1 x 15      0.2162    0.0003     0.2128    0.0032     0.0034    1.59       1.06
50 x 1 x 50      0.2165    0.0003     0.2225    0.0037     0.0061    2.72       1.63
100 x 1 x 100    0.2162    0.0003     0.2151    0.0034     0.0011    0.51       0.32
Infinite         0.2159    0.0001     0.2128    0.0017     0.0032    1.49       1.89

Table 6-2 TSUNAMI and DP Integral Sensitivities for 238U

Array Size      TSUNAMI S ± σ      DP S ± σ           ΔS      ΔS/S (%)   ΔS (σ)
Single          0.0159 ± 0.0002    No DP calculations because sensitivity is less than 0.02.
2 x 1 x 2       0.0358 ± 0.0002    0.0357 ± 0.0007    0.0001  0.30       0.15
5 x 1 x 5       0.0591 ± 0.0001    0.0581 ± 0.0010    0.0009  1.59       0.95
10 x 1 x 10     0.0698 ± 0.0001    0.0718 ± 0.0010    0.0020  2.72       1.91
15 x 1 x 15     0.0736 ± 0.0003    0.0727 ± 0.0011    0.0010  1.35       0.88
50 x 1 x 50     0.0772 ± 0.0003    0.0786 ± 0.0015    0.0014  1.82       0.95
100 x 1 x 100   0.0778 ± 0.0003    0.0778 ± 0.0012    0.0000  0.00       0.00
Infinite        0.0831 ± 0.0001    0.0826 ± 0.0011    0.0005  0.59       0.43

Table 6-3 TSUNAMI and DP Integral Sensitivities for Moderator 1H

Array Size      TSUNAMI S ± σ      DP S ± σ           ΔS      ΔS/S (%)   ΔS (σ)
Single          0.3797 ± 0.0010    0.3729 ± 0.0050    0.0068  1.83       1.34
2 x 1 x 2       0.2780 ± 0.0009    0.2847 ± 0.0054    0.0067  2.35       1.23
5 x 1 x 5       0.1691 ± 0.0006    0.1736 ± 0.0028    0.0046  2.63       1.62
10 x 1 x 10     0.1110 ± 0.0006    0.1125 ± 0.0014    0.0015  1.38       1.04
15 x 1 x 15     0.0919 ± 0.0012    0.0929 ± 0.0013    0.0009  1.01       0.55
50 x 1 x 50     0.0680 ± 0.0011    0.0668 ± 0.0011    0.0012  1.79       0.74
100 x 1 x 100   0.0660 ± 0.0011    0.0679 ± 0.0010    0.0019  2.84       1.31
Infinite        0.0251 ± 0.0008    0.0283 ± 0.0003    0.0031  11.12      3.81

The agreement is generally good between the TSUNAMI and DP results. The difference in the moderator 1H sensitivity for the infinite array is larger than desired, with a discrepancy of over 3σ and more than 11%, but it represents an absolute error of only 0.0031 in sensitivity, which is regarded as generally acceptable. The impact of this misprediction is small for similarity assessment because 1H has historically been a small contributor given its small associated nuclear data uncertainties. This assumption may not be entirely appropriate starting with ENDF/B-VIII.0 and its larger 1H uncertainties. The impact of this discrepancy on uncertainty propagation could also be of concern, but in this case, there is no reason to believe that 1H sensitivities will be used for uncertainty propagation because many water-moderated benchmarks are available. Based on studies presented in Greene et al. [42], it is likely that an increased number of latent generations would improve the TSUNAMI result for 1H. However, the results of the DP calculations for 235U and 238U provide confidence that the parameters used in the CLUTCH calculation for the infinite array case are generally acceptable.
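The difference columns in the tables above can be reproduced from the tabulated sensitivities and uncertainties. This is a minimal sketch; the relative difference is computed against the DP value here, and that denominator convention is an assumption since the tables do not state it.

```python
import math

def dp_agreement(s_tsunami, sigma_tsunami, s_dp, sigma_dp):
    """Agreement metrics of the kind tabulated above: the absolute
    difference, the relative difference (taken against the DP value;
    the tables' exact convention is an assumption), and the difference
    in units of the combined one-sigma uncertainty."""
    delta_s = abs(s_tsunami - s_dp)
    rel_pct = 100.0 * delta_s / s_dp
    n_sigma = delta_s / math.hypot(sigma_tsunami, sigma_dp)
    return delta_s, rel_pct, n_sigma

# Single-package 235U row of Table 6-1
delta_s, rel_pct, n_sigma = dp_agreement(0.2170, 0.0002, 0.2136, 0.0028)
print(f"dS = {delta_s:.4f}, dS/S = {rel_pct:.2f}%, {n_sigma:.2f} sigma")
```

With the rounded table inputs this reproduces the tabulated 0.0034 and 1.59%; the sigma ratio comes out near 1.21 versus the tabulated 1.22, consistent with the tables being built from unrounded values.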

The details of the DP calculations for 235U for the single package are presented here as a demonstration of the process described in Sections 5.1.2–5.1.4. The integral total sensitivity is 0.2170 ± 0.0002, so by using Eq. (7) and a calculated keff value of 0.62984 ± 0.00006, the recommended perturbation is ±0.037. The actual perturbation selected was ±0.023, and the raw and normalized keff values are provided in Table 6-4. The results are plotted in Figure 6-3 and show excellent linear behavior. The slope of the uncertainty-weighted linear regression is 0.2136 ± 0.0028, as shown in Table 6-1. It is important to note that this is a reliable DP result, even though the recommended perturbation was not used. Users can assess the reliability of the DP calculations independent of the resulting keff changes. This sort of mismatch can result from a preliminary, high-uncertainty TSUNAMI-3D calculation being performed to generate sensitivity estimates for DP calculations. This is a single illustration of the effort required to generate the DP results for all the values shown in Table 6-1, Table 6-2, and Table 6-3. This effort can be time consuming, but it is important for ensuring that the TSUNAMI-3D sensitivities are accurate and can be used in further analyses such as similarity and validation gap assessments.

This example also demonstrates that the density perturbations do not need to strictly adhere to the recommended guidance presented in Section 5.1.2 to yield accurate results. Input creation errors can result in asymmetric perturbations, but these are also likely acceptable as long as the actual values are used in the slope calculation.

Table 6-4 Raw and Normalized keff Results for 235U Single Package DP Calculations

Perturbation   keff                  Normalized keff
-0.023         0.62665 ± 0.00006     0.99493 ± 0.00009
0              0.62984 ± 0.00006     1.00000 ± 0.00009
+0.023         0.63285 ± 0.00006     1.00478 ± 0.00009

Figure 6-3 Normalized keff Results Plotted vs. Perturbation
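The uncertainty-weighted fit described above can be sketched directly from the Table 6-4 values. This is an illustrative reconstruction: with the rounded table entries, the slope comes out near 0.214, versus the reported 0.2136 ± 0.0028 obtained from the unrounded keff values.

```python
import math

# Table 6-4: relative perturbations and normalized keff values, each with
# a stochastic uncertainty of about 0.00009.
x = [-0.023, 0.0, 0.023]
y = [0.99493, 1.00000, 1.00478]
sigma = 0.00009
weights = [1.0 / sigma**2] * len(x)

# Uncertainty-weighted straight-line fit y = a + b*x; the slope b is the
# direct-perturbation estimate of the integral sensitivity.
sw = sum(weights)
xbar = sum(w * xi for w, xi in zip(weights, x)) / sw
ybar = sum(w * yi for w, yi in zip(weights, y)) / sw
sxx = sum(w * (xi - xbar) ** 2 for w, xi in zip(weights, x))
slope = sum(w * (xi - xbar) * (yi - ybar)
            for w, xi, yi in zip(weights, x, y)) / sxx
slope_sigma = math.sqrt(1.0 / sxx)  # propagated fit uncertainty

print(f"sensitivity = {slope:.4f} +/- {slope_sigma:.4f}")
```

The propagated fit uncertainty of about 0.0028 matches the value quoted in Table 6-1.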

The integral total sensitivities calculated with ENDF/B-VII.1 and ENDF/B-VIII.0 are compared for 235U, 238U, and moderator 1H in Table 6-5, Table 6-6, and Table 6-7, respectively. The sensitivities are generally in very good agreement, as mentioned in Section 4.1.3 and in Greene and Marshall [38]. Both 235U and 238U show very good agreement: none of the noted differences are as large as 0.002, and all the sensitivity coefficients are within 3% between the two libraries for cases with sensitivity magnitudes larger than 0.02. The agreement between the 1H sensitivities is also good, with all differences less than 0.006 and only one in excess of a 5% change. As discussed in Section 5.1.1, this comparison and the corresponding agreement could be used to confirm the sensitivities calculated with ENDF/B-VIII.0 in lieu of DP calculations because the ENDF/B-VII.1 results have been confirmed.

Table 6-5 Integral Total Sensitivities for 235U with Both Libraries

Array Size      ENDF/B-VII.1 S ± σ   ENDF/B-VIII.0 S ± σ   ΔS      ΔS/S (%)
Single          0.2170 ± 0.0002      0.2159 ± 0.0002       0.0011  0.50
2 x 1 x 2       0.2206 ± 0.0002      0.2193 ± 0.0002       0.0013  0.57
5 x 1 x 5       0.2191 ± 0.0001      0.2177 ± 0.0001       0.0014  0.64
10 x 1 x 10     0.2182 ± 0.0001      0.2167 ± 0.0001       0.0015  0.69
15 x 1 x 15     0.2162 ± 0.0003      0.2145 ± 0.0003       0.0017  0.79
50 x 1 x 50     0.2165 ± 0.0003      0.2149 ± 0.0003       0.0016  0.76
100 x 1 x 100   0.2162 ± 0.0003      0.2145 ± 0.0003       0.0017  0.79
Infinite        0.2159 ± 0.0001      0.2144 ± 0.0001       0.0015  0.69

Table 6-6 Integral Total Sensitivities for 238U with Both Libraries

Array Size      ENDF/B-VII.1 S ± σ   ENDF/B-VIII.0 S ± σ   ΔS      ΔS/S (%)
Single          0.0159 ± 0.0002      0.0169 ± 0.0002       0.0010  6.12
2 x 1 x 2       0.0358 ± 0.0002      0.0368 ± 0.0002       0.0010  2.85
5 x 1 x 5       0.0591 ± 0.0001      0.0593 ± 0.0002       0.0003  0.44
10 x 1 x 10     0.0698 ± 0.0001      0.0700 ± 0.0001       0.0002  0.25
15 x 1 x 15     0.0736 ± 0.0003      0.0734 ± 0.0003       0.0002  0.23
50 x 1 x 50     0.0772 ± 0.0003      0.0770 ± 0.0003       0.0002  0.22
100 x 1 x 100   0.0778 ± 0.0003      0.0771 ± 0.0003       0.0007  0.94
Infinite        0.0831 ± 0.0001      0.0821 ± 0.0001       0.0010  1.18

Table 6-7 Integral Total Sensitivities for Moderator 1H for Both Libraries

Array Size      ENDF/B-VII.1 S ± σ   ENDF/B-VIII.0 S ± σ   ΔS      ΔS/S (%)
Single          0.3797 ± 0.0010      0.3823 ± 0.0010       0.0025  0.67
2 x 1 x 2       0.2780 ± 0.0009      0.2792 ± 0.0009       0.0012  0.44
5 x 1 x 5       0.1691 ± 0.0006      0.1691 ± 0.0009       0.0000  0.02
10 x 1 x 10     0.1110 ± 0.0006      0.1104 ± 0.0006       0.0006  0.50
15 x 1 x 15     0.0919 ± 0.0012      0.0866 ± 0.0011       0.0053  5.81
50 x 1 x 50     0.0680 ± 0.0011      0.0666 ± 0.0011       0.0014  2.05
100 x 1 x 100   0.0660 ± 0.0011      0.0646 ± 0.0011       0.0014  2.13
Infinite        0.0251 ± 0.0008      0.0263 ± 0.0008       0.0012  4.58

6.1.2 Identification of Applicable Benchmarks

Hall et al. [73] assembled a set of 1,584 SDFs for benchmark experiments taken from the VALID library [35] and the NEA-generated data. This set contained benchmarks from the LEU and intermediate enrichment uranium (IEU) ranges, where the ICSBEP definition of IEU is 10–60 wt% 235U [14]. A larger set of 2,104 benchmarks was generated in NUREG/CR-7309 [72] and is used here. This set contains SDFs from the VALID library, the NEA data, and the Haut Taux de Combustion (HTC) experiments for BUC validation [75–78]. A summary of the ck results is shown in Table 6-8, including results based on the ENDF/B-VII.1 and ENDF/B-VIII.0 data. In this case, the same library was used both for the TSUNAMI-3D calculations to determine sensitivities and for the covariance data used in TSUNAMI-IP.

Table 6-8 Summary of ck Results Considering ENDF/B-VII.1 and ENDF/B-VIII.0

                ENDF/B-VII.1                   ENDF/B-VIII.0
Array Size      Number ≥ 0.8   Number ≥ 0.9    Number ≥ 0.8   Number ≥ 0.9
Single          1,288          641             1,326          1,008
2 x 1 x 2       1,363          892             1,354          1,069
5 x 1 x 5       1,340          784             1,341          976
10 x 1 x 10     1,075          175             1,332          826
15 x 1 x 15     730            99              1,322          760
50 x 1 x 50     390            19              1,312          569
100 x 1 x 100   355            19              1,311          566
Infinite        128            11              1,298          331

Some clear patterns are seen in the results in Table 6-8. The number of applicable or marginally applicable experiments decreases as array size increases, and this trend is particularly stark for the ENDF/B-VII.1 data. There are also generally more applicable experiments for the ENDF/B-VIII.0 data than for the ENDF/B-VII.1 data. Both trends can be examined using the extended ck and uncertainty edits and, where applicable, differences in the sensitivity coefficients. The two trends are examined separately in the following subsections.

6.1.2.1 ENDF/B-VII.1 vs. ENDF/B-VIII.0

There are modest differences in the number of marginally applicable benchmarks (ck ≥ 0.8) identified using ENDF/B-VII.1 data compared to ENDF/B-VIII.0 data for the single package case and small finite arrays. The difference increases dramatically as array size increases until it is more than an order of magnitude for the infinite array. The number of benchmarks with ck values of 0.9 or more shows larger differences between the two libraries. The differences increase from almost 60% for the single package case to more than a factor of 30 for the infinite case. The same benchmark sensitivity data are used in both comparisons, so the differences shown in Table 6-8 can only be explained by differences in the application sensitivity coefficients generated with the different libraries or by differences in the covariance data libraries. The results shown in Table 6-5, Table 6-6, and Table 6-7 provide strong evidence that the differences in assessed similarity are caused by covariance data changes between ENDF/B-VII.1 and ENDF/B-VIII.0.
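The ck similarity coefficient compared here is the standard sensitivity-weighted correlation: the shared data-induced variance of keff for two systems, normalized by each system's data-induced uncertainty. A minimal sketch of the calculation, using made-up sensitivities and a diagonal covariance for brevity, is:

```python
import math

def ck(s_app, s_bench, cov):
    """Correlation coefficient ck between an application and a benchmark.

    s_app and s_bench are sensitivity vectors over nuclide/reaction/group
    pairs; cov is the relative covariance matrix of the nuclear data.
    """
    def quad(u, v):
        # quadratic form u^T * cov * v
        return sum(u[i] * cov[i][j] * v[j]
                   for i in range(len(u)) for j in range(len(v)))
    return quad(s_app, s_bench) / math.sqrt(quad(s_app, s_app) * quad(s_bench, s_bench))

# Toy three-parameter illustration (values are illustrative, not real data)
cov = [[0.02, 0.0, 0.0],
       [0.0, 0.01, 0.0],
       [0.0, 0.0, 0.03]]
s_application = [0.20, 0.05, 0.01]
s_benchmark = [0.21, 0.04, 0.015]
print(round(ck(s_application, s_benchmark, cov), 4))
```

Because ck is driven by the covariance-weighted overlap of the sensitivity profiles, changing only the covariance data changes the assessed similarity even when the sensitivities are identical, which is the behavior observed in Table 6-8.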

The extended ck edit, activated with the c_long parameter in TSUNAMI-IP, provides a table listing the contribution of each nuclide/reaction pair to the ck value. From this table, the top contributor is extracted for the experiment with the highest ck value for each array size and both libraries. This information is presented in Table 6-9. The results show that for the ENDF/B-VII.1 data, the top contributor is 235U χ for the single package and for small arrays, and then the top contributor shifts to 235U ν̄ (nubar). This is an important indication relating to the next investigation, which explores the trend of ck values with array size. The ENDF/B-VIII.0 results are dominated by 235U ν̄ for all array sizes. It is also clear that the ck contribution from 235U ν̄ is higher for ENDF/B-VIII.0 than for ENDF/B-VII.1.

The 235U χ covariance data are patched, as shown in Table 6-9. This could be an indication of default data being used in the top contributor to similarity, and as such, it should be further investigated. All groups and reactions for which data gaps are patched are listed in the TSUNAMI-IP output, as discussed in Section 5.4. This edit reveals that default values of 15% will replace zero values in groups 33–40, and default values of 5% will replace zero values in groups 41–56. This is the lower portion of the intermediate range and the entire thermal range. The 56-group energy structure is provided in Table 10.1.9 of the SCALE 6.3.1 manual [7] and shows that group 33 starts at 7 eV. Essentially no fission neutrons are emitted below 7 eV, so patched data in this regime are not a concern. As confirmation, the 235U χ sensitivity for the single package model using the ENDF/B-VII.1 CE data, shown in Figure 6-4, clearly indicates that there is no sensitivity to χ below 1 keV. The use of patched data is therefore of no concern in this analysis.

Table 6-9 Top Contributor to Highest ck Value for Each Array Size

ENDF/B-VII.1
Array Size      Experiment     ck       Nuclide/Reaction   ck Contribution
Single          LCT-018-001    0.9842   235U χ             4.13E-01***
2 x 1 x 2       LCT-018-001    0.9842   235U χ             3.69E-01***
5 x 1 x 5       LCT-047-002    0.9848   235U ν̄             3.59E-01
10 x 1 x 10     LCT-047-001    0.9758   235U ν̄             4.29E-01
15 x 1 x 15     LCT-047-001    0.9749   235U ν̄             4.38E-01
50 x 1 x 50     LCT-047-001    0.9654   235U ν̄             4.44E-01
100 x 1 x 100   LCT-047-001    0.9635   235U ν̄             4.45E-01
Infinite        LCT-051-002    0.9330   235U ν̄             4.45E-01

ENDF/B-VIII.0
Array Size      Experiment     ck       Nuclide/Reaction   ck Contribution
Single          LCT-018-001    0.9711   235U ν̄             5.32E-01
2 x 1 x 2       LCT-047-002    0.9750   235U ν̄             5.67E-01
5 x 1 x 5       LCT-047-003    0.9771   235U ν̄             5.95E-01
10 x 1 x 10     LCT-047-003    0.9723   235U ν̄             6.09E-01
15 x 1 x 15     LCT-047-001    0.9699   235U ν̄             5.98E-01
50 x 1 x 50     LCT-047-001    0.9673   235U ν̄             6.00E-01
100 x 1 x 100   LCT-047-001    0.9675   235U ν̄             5.99E-01
Infinite        LCT-043-003    0.9646   235U ν̄             5.85E-01

*** Corrections made to the covariance data

Figure 6-4 235U χ Sensitivity for the Single Package Model Using ENDF/B-VII.1 CE Data

The 235U χ sensitivities for the single package models resulting from the ENDF/B-VII.1 and ENDF/B-VIII.0 calculations are shown in Figure 6-5. The sensitivity profiles are nearly identical, which is consistent with expectations from Greene and Marshall [38] and with the results shown in Table 6-5. A change in the ck contribution must therefore come from the covariance data. The 235U χ covariance data for both libraries are shown in Figure 6-6, revealing higher uncertainty in the ENDF/B-VII.1 data in the regions near and above 100 keV. This is the energy regime in which nearly all of the χ sensitivity is located, making the overall uncertainty contribution larger for the ENDF/B-VII.1 covariance data. The larger uncertainty indicates a higher potential for bias and thus a larger ck contribution. The differences in the covariance explain the differences in the observed ck contributions between the two libraries.

Figure 6-5 235U χ Sensitivity Profiles for the Single Package Model with Both Data Sets

Figure 6-6 Uncertainty in 235U χ in ENDF/B-VII.1 and ENDF/B-VIII.0

A comparison of the 235U ν̄ sensitivity profiles for the infinite array case is shown in Figure 6-7, indicating very little difference between the sensitivities. This high degree of similarity is expected and consistent with the results for 235U χ. The uncertainty in the 235U ν̄ data in both libraries is shown in Figure 6-8. It is evident in the figure that the ENDF/B-VIII.0 uncertainty is larger at all energies. The majority of the sensitivity is just below 0.1 eV, where the ENDF/B-VIII.0 uncertainty is approximately 21% higher than that of ENDF/B-VII.1. The larger uncertainty provides a greater weight to the 235U ν̄ in the ck calculation because the larger uncertainty indicates a greater potential for bias in the data. As with χ, the change in the covariance data explains the difference in the ck values between the two data sets.

Figure 6-7 235U ν̄ Sensitivity for the Infinite Array Models with Both Libraries

Figure 6-8 235U ν̄ Uncertainty in ENDF/B-VII.1 and ENDF/B-VIII.0

In summary, the differences in assessed similarity are a direct result of differences in covariance data between ENDF/B-VII.1 and ENDF/B-VIII.0 and are not caused by the minor differences in sensitivity profiles. The extended ck edit in TSUNAMI-IP provides helpful data to identify the key nuclide/reaction pairs responsible for the differences, and examination of the underlying sensitivities and covariances provides the necessary information to understand the observed differences in assessed similarity. The impacts of these changes vary somewhat among different benchmarks because the similarity in the key profiles varies, but the approach demonstrated here can be extended to understand the full impact of these changes in the covariance data.

6.1.2.2 ck and Array Size

As shown in Table 6-8, there is a clear trend of decreasing ck values with increasing array size.

This trend was first identified for this system in Hall et al. [73], which includes a study of the effect in Appendix C. Three system sizes were considered by Hall: a single package, a 10 x 1 x 10 array, and an infinite array. More package array sizes are considered here to further investigate the relationship between ck values and array size.

Table 6-10 shows the top five contributors to ck for the benchmark with the highest ck value for each array size with the ENDF/B-VII.1 data. The top contributor for the single and 2 x 1 x 2 array models is 235U χ. For the larger arrays, the top contributor is 235U ν̄, and the capture reactions in 238U, 56Fe, and 235U are the second, third, and fourth largest contributors, respectively. The 238U inelastic scattering contribution drops from the second largest contribution in the single package model to the third largest in the 2 x 1 x 2 and 5 x 1 x 5 arrays to fifth in the 10 x 1 x 10 array model. For larger arrays, 238U inelastic scattering is not a top contributor to similarity, even though the reaction has a fairly high uncertainty. Similarly, the 16O elastic scattering reaction is a large contributor for the single and 2 x 1 x 2 array models. These scattering reactions reduce neutron leakage and are therefore important in the small systems.

These trends illustrate that leakage dominates the similarity for the small systems, whereas neutron production and absorption are the key reactions for large systems. This observation is reinforced by a review of the 235U χ sensitivity for all eight application systems, the results of which are shown in Figure 6-9. The χ sensitivity starts out large for the single package and becomes progressively smaller as the model size increases. This is a logical progression because leakage is important for the small systems, but as the models grow larger, the keff approaches k∞. In the infinite system, neutron multiplication is simply the ratio of production to absorption, k∞ = (neutron production rate) / (neutron absorption rate), exactly as indicated by the ck contributions. The leakage effect is indicated in the small systems by the large impacts of the sensitivities to χ and inelastic scattering. Neutrons born at lower energies are less likely to leak from the system, so the contribution of the χ sensitivity is significant when leakage is a significant effect.

Table 6-10 Top Five ck Contributors for All Array Sizes with ENDF/B-VII.1 Data

Single          2 x 1 x 2       5 x 1 x 5       10 x 1 x 10
235U χ          235U χ          235U ν̄          235U ν̄
238U (n,n')     235U ν̄          235U χ          238U (n,γ)
235U ν̄          238U (n,n')     238U (n,n')     56Fe (n,γ)
16O elastic     235U (n,γ)      238U (n,γ)      235U (n,γ)
235U (n,γ)      16O elastic     235U (n,γ)      238U (n,n')

15 x 1 x 15     50 x 1 x 50     100 x 1 x 100   Infinite
235U ν̄          235U ν̄          235U ν̄          235U ν̄
238U (n,γ)      238U (n,γ)      238U (n,γ)      238U (n,γ)
56Fe (n,γ)      56Fe (n,γ)      56Fe (n,γ)      56Fe (n,γ)
235U (n,γ)      235U (n,γ)      235U (n,γ)      235U (n,γ)
235U fission    235U fission    235U fission    235U fission

Figure 6-9 235U χ Sensitivities for Each Array Size

It is reasonable to assume that 1H should also be a contributor to ck if scattering is an important reaction for the small systems. The scattering sensitivities for 1H, 16O, and 238U are presented in Figure 6-10 and show that the 1H elastic scattering sensitivity is significantly larger than the 16O elastic scattering or 238U inelastic scattering sensitivities. The uncertainties for the three reactions are provided in Figure 6-11 and show that the 1H uncertainty is smaller than the 16O elastic scattering uncertainty and that 238U inelastic scattering has a much higher uncertainty.

The 238U inelastic scattering uncertainty is so large that it is the second largest contributor to uncertainty in the single package model, with 16O elastic scattering the fourth largest contributor and 1H elastic scattering making the eighth largest contribution. The total data-induced uncertainty for the model is 911 pcm, with 466 pcm from 238U inelastic scattering, 201 pcm from 16O elastic scattering, and 147 pcm from 1H elastic scattering. Recall that uncertainties are combined via the square root of the sum of the squares, so the larger components contribute significantly more than the smaller ones. Ultimately, the 1H elastic scattering contribution to ck is sixth, so it falls just short of being included in Table 6-10. The extended ck and uncertainty edits provide a deeper understanding of why the 1H elastic scattering reaction is not as large a contributor to ck as the other two scattering reactions.
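The square-root-of-sum-of-squares combination described above can be illustrated with the three quoted scattering components. Note that the 911 pcm total includes additional contributors not listed here, so this sketch only combines the three quoted values.

```python
import math

def rss_pcm(components):
    """Combine data-induced uncertainty components in quadrature
    (square root of the sum of the squares)."""
    return math.sqrt(sum(c * c for c in components))

# The three scattering contributors quoted for the single package model (pcm)
scattering = {"238U (n,n')": 466.0, "16O elastic": 201.0, "1H elastic": 147.0}

combined = rss_pcm(scattering.values())
without_h1 = rss_pcm([466.0, 201.0])
print(round(combined, 1), round(without_h1, 1))
```

Dropping the 147 pcm 1H elastic term changes the combined value by only about 21 pcm, which illustrates why the largest components dominate the quadrature sum.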

Figure 6-10 1H, 16O, and 238U Scattering Sensitivities for the Single Package Model

Figure 6-11 1H, 16O, and 238U Scattering Uncertainty Data

The same physics obviously determine system behavior with ENDF/B-VIII.0 nuclear data, but the ck contributions are different, as shown in Table 6-9. Section 6.1.2.1 discusses the differences between the two data sets, illustrating that the uncertainty in 235U χ drops in ENDF/B-VIII.0 relative to ENDF/B-VII.1 and that the uncertainty in 235U ν̄ increases. This causes the ck contribution from 235U ν̄ to be higher with the ENDF/B-VIII.0 data, even for smaller packages, as shown in Table 6-9. The importance of the 235U ν̄ reaction also explains why the similarity assessed with ENDF/B-VIII.0 does not change as much as a function of array size as the similarity assessed with ENDF/B-VII.1 nuclear data. The 235U ν̄ sensitivities calculated with ENDF/B-VII.1 data for all array sizes are provided in Figure 6-12, showing that the dominant reaction is essentially invariant to model size. The impacts of the scattering and capture reactions still shift with array size, as shown in Table 6-11, but the impacts are reduced because of the prevalence of the 235U ν̄ contribution.

Figure 6-12 Sensitivities to Average Total Number of Neutrons Released per 235U Fission for Each of the Different Array Size Models with ENDF/B-VII.1 Data

Table 6-11 Top Five ck Contributors for All Array Sizes with ENDF/B-VIII.0 Data

Single          2 x 1 x 2       5 x 1 x 5       10 x 1 x 10
235U ν̄          235U ν̄          235U ν̄          235U ν̄
1H elastic      1H (n,γ)        1H (n,γ)        1H (n,γ)
1H (n,γ)        1H elastic      1H elastic      235U fission
235U fission    235U fission    235U fission    1H elastic
56Fe (n,γ)      56Fe (n,γ)      56Fe (n,γ)      56Fe (n,γ)

15 x 1 x 15     50 x 1 x 50     100 x 1 x 100   Infinite
235U ν̄          235U ν̄          235U ν̄          235U ν̄
1H (n,γ)        1H (n,γ)        1H (n,γ)        56Fe (n,γ)
235U fission    235U fission    235U fission    1H (n,γ)
56Fe (n,γ)      56Fe (n,γ)      56Fe (n,γ)      235U fission
1H elastic      1H elastic      1H elastic      238U (n,γ)

It is also clear when comparing the reactions listed in Table 6-10 and Table 6-11 that 1H plays a larger role in similarity with ENDF/B-VIII.0 covariance data than it does with ENDF/B-VII.1 data. At this point, the covariance data are the known culprit because the sensitivities to several reactions have been shown several times to be nearly identical with different data sets. The covariance for the 1H elastic scattering cross section is shown for both libraries in Figure 6-13. This is one of the major changes resulting from the end of curation of the covariance data released with SCALE, as discussed in Section 4.3. Significant disagreement exists within the nuclear data community about the uncertainty in the 1H elastic scattering reaction, which is clearly evident in the figure. It is also clear that the 1H elastic scattering reaction will have larger ck contributions in thermal systems when assessed with ENDF/B-VIII.0 covariance data, given that the uncertainty is higher by a factor of six. The impact is moderated by the fact that the ENDF/B-VIII.0 uncertainty is lower above approximately 500 keV, and as shown in Figure 6-10, a significant fraction of the 1H scattering sensitivity is at these high energies. The ENDF/B-VIII.0 covariance data for 1H (n,γ) are also significantly larger than the ENDF/B-VII.1 data, as shown in Figure 6-14. The uncertainty has increased by nearly a factor of two below 1 keV, and as shown in Figure 6-15, essentially all of the 1H (n,γ) sensitivity is below 100 eV for both libraries.

Figure 6-13 1H Elastic Scattering Uncertainties for ENDF/B-VII.1 and ENDF/B-VIII.0

Figure 6-14 1H (n,γ) Uncertainties for ENDF/B-VII.1 and ENDF/B-VIII.0

Figure 6-15 1H (n,γ) Sensitivity in the Single Package Model

6.1.3 Benchmark Set Gaps and Weaknesses

The two features of the package most likely to cause validation gaps or weaknesses are the presence of Gd2O3 rods in the assembly and the steel package wall. The thin steel walls acting as absorbers in the large array models are not necessarily well represented in critical benchmark experiments. Steel reflectors are present in many benchmarks, so the steel in the package wall in the smaller array models and single package model is of less concern. Each of these potential gaps or weaknesses is discussed for each set of identified applicable benchmark experiments in the next two subsections.

The numbers of benchmarks with ck values of at least 0.8 and at least 0.9 are provided in Table 6-8 and in Section 6.1.2, considering both ENDF/B-VII.1 and ENDF/B-VIII.0 covariance data.

This exercise only considers the experiments with ck values greater than 0.9 because they are fewer in number. Also, only the infinite array case is examined because it leads to the smallest number of applicable benchmarks with both data sets. Neither of these decisions is inherently appropriate for validation, but both are made to increase the probability of a significant identified gap or weakness in the applicable benchmark set.

6.1.3.1 Interstitial Steel Absorption

One concern, especially for the large array models, is that the presence of many package bodies creates a configuration that is not well represented in many benchmark experiments.

Many storage applications face this issue because fuel assemblies are often stored in stainless-steel racks or canisters. This limited data set was a primary motivator for recommendation 4 of NUREG/CR-7109 [10], which is to ensure that structural materials are accounted for in validation. There are a limited number of ICSBEP benchmarks with stainless steel separating arrays of LEU fuel rods. A search in the DICE tool identifies 6 LCT evaluations with a total of 22 configurations with steel separation material [14]. The 8 stainless-steel separating wall cases in the LCT-051 benchmark are particularly relevant because the experiment was designed to represent fuel assemblies in a storage rack or a transportation package. The other cases, included in experiments performed at PNNL, are less prototypic, but they may still have some validity for testing radiative capture in Fe. A list of all 22 ICSBEP benchmarks that DICE identifies as having steel separators is provided in Table 6-12.

Table 6-12 ICSBEP LCT Benchmarks with Steel Separators

LCT-009-001   LCT-009-002   LCT-009-003   LCT-009-004
LCT-012-001   LCT-013-001   LCT-016-001   LCT-016-002
LCT-016-003   LCT-016-004   LCT-016-005   LCT-016-006
LCT-016-007   LCT-042-001   LCT-051-002   LCT-051-003
LCT-051-004   LCT-051-005   LCT-051-006   LCT-051-007
LCT-051-008   LCT-051-009

There are 11 critical benchmarks with ck values above 0.9 for the infinite array model with similarity assessed using the ENDF/B-VII.1 covariance data. Eight of those experiments are the LCT-051 cases with steel separating walls. This is not a large number of experiments with which to judge the impact of interstitial steel, but it may be sufficient to address this concern. A comparison of the C/E values for the highly similar experiments with and without these interstitial steel absorbers would provide some indication of any associated potential bias difference. This comparison is not explicitly performed here because it is unrelated to S/U methods and thus outside the scope of this discussion. Note that the small total number of experiments is much more concerning than the fact that only 8 of them contain steel separating panels. As discussed above, this constraint is artificially imposed here to highlight validation set shortcomings.

The ENDF/B-VIII.0 covariance library identifies a more robust set of 331 benchmarks with ck values above 0.9. This set includes the LCT-051 experiments, as well as LCT-012-001, LCT-013-001, and LCT-042-001, for a total of 11 steel separator experiments. It may be difficult to extract a statistically meaningful difference between these 11 experiments and the remaining 320, but almost all of the available experiments are applicable for validating this model. As with the much smaller set of experiments identified as highly applicable using the ENDF/B-VII.1 covariance data, the bias resulting from the 11 steel separator cases can be compared to the bias resulting from the other experiments to determine if a potential issue exists. This comparison would not necessarily involve S/U methods and is therefore not provided here. It is also conceivable that either set of experiments could be used to make a case that no gap exists for steel separation because several cases in both sets include this feature.

6.1.3.2 Gadolinium

There are only a limited number of Gd-bearing benchmarks in the ICSBEP Handbook. A search in DICE results in 6 LCT evaluations with a total of 50 configurations containing Gd [14]. A summary of these benchmarks is presented in Table 6-13. Burnable absorber materials such as gadolinium are identified in NUREG/CR-7109 [10] as a potential validation concern, along with the structural materials discussed in Section 6.1.3.1.

Table 6-13 ICSBEP LCT Benchmarks Containing Gadolinium Absorber

LCT        Case(s)
LCT-036    27–44
LCT-043    1–9 (all cases)
LCT-046    12–17
LCT-054    1
LCT-058    1–6
LCT-091    1–9 (all cases)

None of the 11 cases identified as being applicable for validation of the infinite array model using ENDF/B-VII.1 data contain gadolinium. This represents a gap in the validation set that was also identified in NUREG/CR-7252 [11]. An uncertainty assessment can be performed to generate a quantitative basis for a validation gap penalty. A similar approach was used in NUREG/CR-7252, but this scenario is slightly different because it involves fresh fuel instead of burned fuel. This means that all naturally occurring isotopes are present in the model, although the primary absorbing isotopes are 155Gd and 157Gd. The integral sensitivity for all seven isotopes in the infinite array model is provided in Table 6-14.

Table 6-14 Integral Total Sensitivity for Gadolinium Isotopes in the Infinite Array Model

Isotope   Sensitivity   Uncertainty
152Gd     1.0314E-05    3.8384E-07
154Gd     4.0093E-05    7.5401E-07
155Gd     3.4293E-03    2.7610E-06
156Gd     1.5949E-04    2.4506E-06
157Gd     1.0167E-02    8.8341E-06
158Gd     1.1167E-04    2.5123E-06
160Gd     1.9273E-05    2.4344E-06

As expected, 157Gd has the largest sensitivity, followed by 155Gd. The sensitivity for 157Gd is approximately three times that of 155Gd, which in turn is more than an order of magnitude higher than that of 156Gd, the third highest absorber. The (n,γ) sensitivity profiles of 155Gd and 157Gd are shown in Figure 6-16, and the (n,γ) uncertainties are shown in Figure 6-17. The covariance data for these reactions of these isotopes were not changed in ENDF/B-VIII.0, so even though the uncertainty data are drawn from the ENDF/B-VII.1 library, they are identical to the ENDF/B-VIII.0 data.

There are other reactions for both relevant isotopes of Gd, but all have sensitivities more than two orders of magnitude lower than the radiative capture sensitivities. Radiative capture in these two isotopes will therefore dominate the data-induced uncertainty for Gd.

Figure 6-16 155Gd and 157Gd (n,γ) Sensitivities for the Infinite Array Model

Figure 6-17 155Gd and 157Gd (n,γ) Uncertainty Data

The propagated nuclear data uncertainties for all reactions of all 7 Gd isotopes are provided in Table 6-15; these values account for the infinite array model keff of 1.045391 ± 0.00006 and all cross correlations in the data. As expected, 157Gd is the dominant isotope, and there is a small additional effect from 155Gd. The other five isotopes combine to increase the uncertainty by almost 0.1 pcm. The total data-induced uncertainty for Gd in the infinite array model is just under 46 pcm. Applying a two-sided 95% confidence multiplier would increase the uncertainty to just over 89 pcm. Part of the reason this uncertainty is so small is that Gd is an extremely strong absorber, and when it is lumped in a fuel rod, it becomes essentially black to neutrons. Consequently, small differences in the cross section have very little impact because essentially all neutrons are absorbed. The integral sensitivities shown in Figure 6-16 are 0.0034 and 0.0102, which are both relatively small. The integral reactivity suppression provided by these absorbers is quite large, but the impact of small changes in the cross sections is small. The propagated nuclear data uncertainty can be used as a basis to propose a validation gap penalty. It is expected that the bias in the infinite array application from Gd nuclear data is less than 89 pcm with 95% confidence. As discussed in Sections 4.4 and 5.4, although there may be reason for skepticism of the absolute reliability of this approach, it provides insight regarding the potential bias in the application system from Gd nuclear data.

Table 6-15 Data-Induced Uncertainty in the Infinite Array Model from Gd Isotopes

Isotope   Data-Induced Uncertainty (pcm)
152Gd     0.03
154Gd     0.12
155Gd     13.94
156Gd     2.32
157Gd     43.33
158Gd     1.68
160Gd     0.43
Total     45.61
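If the cross-isotope covariances are negligible, the per-isotope contributions in Table 6-15 combine approximately in quadrature to the quoted total. The short sketch below (illustrative arithmetic only, not part of the SCALE calculation) reproduces the roughly 46 pcm total and the roughly 89 pcm two-sided 95% value quoted in the text:

```python
import math

# Per-isotope data-induced uncertainties from Table 6-15, in pcm.
gd_pcm = {
    "Gd-152": 0.03, "Gd-154": 0.12, "Gd-155": 13.94, "Gd-156": 2.32,
    "Gd-157": 43.33, "Gd-158": 1.68, "Gd-160": 0.43,
}

# Combine in quadrature (valid when cross-isotope covariances are small).
total = math.sqrt(sum(v ** 2 for v in gd_pcm.values()))
print(f"total: {total:.1f} pcm")               # ~45.6 pcm
print(f"95% (x1.96): {1.96 * total:.1f} pcm")  # ~89.4 pcm
```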

When assessed with ENDF/B-VIII.0 nuclear data, the applicable benchmark set includes experiments from the LCT-043, LCT-046, LCT-054, LCT-058, and LCT-091 series. A total of 31 of these benchmarks with Gd are in the set, so there is no validation gap related to Gd in this data set. Once again, the larger data set allows coverage of a potential validation weakness.

6.2 Drum-Type Package Containing TRISO Fuel

Hall et al. [73] include an analysis of the Versa-Pac [80, 81] because it is licensed to transport uranium material enriched up to 100% 235U. This work uses a similar generic drum-type package developed by Elzohery et al. [82]. The package analyzed by Elzohery consists of a 55-gallon drum containing 364 fuel pebbles based on the Pebble Bed Modular Reactor (PBMR) 400 design. In this work, the pebble design was modified to increase reactivity: the grain packing fraction was increased from approximately 9% to 17%, and the enrichment was increased from 9.6 wt% 235U to 20 wt% 235U. Both changes were made to bring keff closer to the expected limiting value for a commercial-scale transportation package.

The analysis condition is an infinite square array of flooded packages with air between the packages. The packages in the array are essentially touching. The maximum impact of array spacing reported by Elzohery is only approximately 300 pcm [82]. The air between the packages contains approximately 1 wt% water, 75.5 wt% nitrogen, 22.2 wt% oxygen gas, and 1.3 wt% argon. The calculated keff value for the model is 0.96061 +/- 0.00020.

The Shift Monte Carlo transport code supports a random geometry capability for TRISO grains up to a packing fraction of 17%, which is why this packing fraction was selected for the model. This capability allows the grains to be placed randomly within the inner volume of the pebble without intersections or clipping of the grains. The 364 pebbles were placed as holes in the model generated by Elzohery [82] so that there are no overlaps and no clipping of the pebbles. A 3D rendering of a single package is shown in Figure 6-18, and an enlarged image illustrating the random grains in some of the pebbles is shown in Figure 6-19. In the figures, the carbon steel structure of the package is shown in gray, fiberglass radial insulation in yellow, polyurethane axial insulation in red, water in blue, and graphite in tan. The region outlines are shown in Figure 6-19, where the fuel grains appear as black dots; the fuel kernel and coating layers are shown in Figure 6-20. The fuel kernel is UO2, shown in black, and the coating layers are carbon or SiC. The inner low-density carbon coating is shown in light gray, and the outer, higher density carbon is shown in a darker gray. The SiC layer in between is rendered in off-white.

Figure 6-18 Cutaway Rendering of the Flooded Generic Drum Package

Figure 6-19 Flooded Generic Drum Package Enlarged to Show Texture

Figure 6-20 Individual TRISO Grain with UO2 Kernel (Black) and Carbon (Gray) and SiC (Off-White) Coatings

6.2.1 Sensitivity Coefficient Generation with TSUNAMI-3D

The random grain modeling for TRISO fuel is only available in SCALE 6.3 with the Shift Monte Carlo code, so Shift must be used to calculate sensitivity coefficients. As noted in Section 5.3, only the IFP method is available for sensitivity coefficient calculations in Shift. The calculations are run with five latent generations. Each generation simulates 50,000 histories, and the first 100 generations are skipped to ensure source convergence. Execution was terminated upon achieving a stochastic uncertainty of approximately 20 pcm.

The total sensitivity profiles for 235U and 238U are shown in Figure 6-21. The physics of graphite-moderated pebble systems is very different from that of water-moderated oxide rod arrays. The intermixing of fuel and moderator in such close proximity causes essentially immediate thermalization and eliminates the fast flux and fast fission events. The fission neutrons have essentially exited the kernel and entered the surrounding graphite sea before undergoing any interactions. This explains the almost complete lack of fast sensitivity in either uranium isotope.

Figure 6-21 235U and 238U Total Sensitivity Profiles in the Drum Package

The 1H sensitivity profiles for scattering, radiative capture, and the total are shown in Figure 6-22.

This figure can be compared with Figure 4-2 in Section 4.1.1, which shows 1H sensitivities for the LCT-042-004 benchmark. The scattering sensitivity just above 1 MeV is approximately 0.01 in the pebble system compared to approximately 0.04 in the LEU fuel rod array system. The negative integral total sensitivity indicates that a reduction in the 1H cross section will increase keff. This means the system is overmoderated, but it does not necessarily mean that a dry system is more reactive. The maximum reactivity likely occurs with intermediate-density water filling the interstitial space inside the drum.

Figure 6-22 Energy-Dependent Sensitivity Profiles for Moderator 1H in the Drum Package

The graphite and 1H scattering sensitivities are shown in Figure 6-23. The presence of a significant amount of graphite provides significant moderation at much shorter distances from the fission sites than the water does. The total amount of moderator available between the graphite and the water reduces the importance of both and yields small scattering sensitivities at high energy. Almost all neutrons collide very quickly with a light element, so the effect of a small cross section change (the sensitivity) is correspondingly small. The sensitivity profiles are also highly similar in their positive high-energy sensitivity, peaks near the U resonances, and thermal behavior that is positive at the top of the range and negative below approximately 0.4 eV. The 1H sensitivity is probably larger in magnitude through the intermediate range, but the uncertainties in both sensitivities are large enough to overlap. These sensitivities illustrate the fundamental similarity among moderators and indicate that elastic scattering is physically consistent among light nuclei.

Figure 6-23 1H and C-Graphite Scattering Sensitivity Profiles in the Drum Package

DP calculations were performed to assess the accuracy of TSUNAMI-calculated sensitivities for 235U and 1H in the drum-type package. A summary of the comparisons is provided in Table 6-16. The uncertainty in the DP calculations is larger than would be desired, but the random geometry model requires extremely long computation times. Lower uncertainties are not practical, especially given the good agreement between TSUNAMI- and DP-calculated sensitivities. The total integral sensitivities for 56Fe and for graphite in the pebble matrix also had magnitudes in excess of 0.02, but DP calculations for these nuclides were not performed given the computational burden imposed by Shift in the random geometry model.

Table 6-16 TSUNAMI and DP Integral Sensitivities for Drum-Type Package

Nuclide   TSUNAMI S              DP S                   ΔS       ΔS/S (%)   ΔS (σ)
235U       0.3797 +/- 0.0010      0.3729 +/- 0.0050     0.0068   1.83%      1.34
1H        -0.2780 +/- 0.0009     -0.2847 +/- 0.0054     0.0067   2.35%      1.23

Based on the 235U integral sensitivity of 0.3797 and the calculated keff of 0.96061, the recommended density perturbation resulting from Eq. (7) in Section 5.1.2 is +/-0.015. For 1H, the recommended perturbation is +/-0.042, but perturbations of +/-0.040 were performed. The raw and normalized keff values for both sets of DP calculations are provided in Table 6-17, and the plotted results are shown in Figure 6-24 for 235U and in Figure 6-25 for 1H. Both sets of results may show slightly quadratic behavior, but it is impossible to confirm this with the uncertainty in the individual perturbed results. Lower uncertainty calculations would be necessary to confirm quadratic behavior, which is not necessarily problematic in any case. The DP results show generally good behavior and are sufficient to confirm the accuracy of the TSUNAMI-calculated sensitivities.

Table 6-17 Raw and Normalized keff Results for Drum-Type Package DP Calculations

Nuclide   Perturbation   keff                  Normalized keff
235U      -0.015         0.95534 +/- 0.00020   0.99452 +/- 0.00021
235U       0             0.96061 +/- 0.00020   1.00000 +/- 0.00021
235U      +0.015         0.96488 +/- 0.00020   1.00445 +/- 0.00021
1H        -0.040         0.96478 +/- 0.00020   1.00435 +/- 0.00021
1H         0             0.96061 +/- 0.00020   1.00000 +/- 0.00021
1H        +0.040         0.95546 +/- 0.00020   0.99465 +/- 0.00021

Figure 6-24 Normalized keff Results Plotted vs. Perturbation for 235U

Figure 6-25 Normalized keff Results Plotted vs. Perturbation for 1H

6.2.2 Identification of Applicable Benchmarks

The drum-type package was compared against the experiments in the set of 2,104 critical benchmarks discussed in Section 6.1.2. Only the ENDF/B-VII.1-based covariance library distributed with SCALE 6.3 is considered for the similarity assessment of the drum-type package. Only four experiments are identified with a ck value in excess of 0.9, and 127 more have a ck value between 0.8 and 0.9. Table 6-18 presents the four cases with ck values above 0.9, and Table 6-19 lists the evaluations from which the 127 cases with ck values between 0.8 and 0.9 are drawn. Note that the two evaluations containing experiments with ck values above 0.9 also contain additional cases with ck values above 0.8. The 131 benchmarks identified with ck values of at least 0.8 should be sufficient to perform a validation, although gaps and weaknesses are discussed in Section 6.2.3.

Table 6-18 Four Identified Experiments with ck Values Above 0.9

Benchmark Experiment       ck Value
LEU-COMP-THERM-028-016     0.9081
LEU-COMP-THERM-028-017     0.9370
LEU-COMP-THERM-028-018     0.9277
IEU-COMP-THERM-002-003     0.9004
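For reference, ck is the correlation coefficient of the nuclear data-induced uncertainties of two systems. A schematic implementation is sketched below with toy sensitivity vectors and a toy covariance matrix; the real calculation in TSUNAMI-IP uses group-wise sensitivity profiles for every nuclide/reaction pair and the full covariance library:

```python
import numpy as np

def c_k(s_app, s_bench, cov):
    """ck = (Sa C Sb) / sqrt((Sa C Sa) * (Sb C Sb)), where Sa and Sb are
    sensitivity vectors and C is the relative nuclear data covariance."""
    num = s_app @ cov @ s_bench
    den = np.sqrt((s_app @ cov @ s_app) * (s_bench @ cov @ s_bench))
    return num / den

# Toy three-parameter example (values are illustrative only).
cov = np.array([[4.0, 0.5, 0.0],
                [0.5, 1.0, 0.0],
                [0.0, 0.0, 9.0]])
s_application = np.array([0.38, -0.28, 0.03])
s_benchmark = np.array([0.35, -0.25, 0.10])
print(round(c_k(s_application, s_benchmark, cov), 4))  # ~0.957
```

A ck value of 1.0 indicates that the two systems' data-induced uncertainties are perfectly correlated; an application compared with itself returns exactly 1.0.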

Table 6-19 Evaluations Containing Experiments with ck Values Above 0.8

Evaluation              No. of Cases
IEU-COMP-THERM-002      4
LEU-COMP-THERM-022      2
LEU-COMP-THERM-025      1
LEU-COMP-THERM-028      9
LEU-COMP-THERM-032      3
LEU-COMP-THERM-043      9
LEU-COMP-THERM-044      10
LEU-COMP-THERM-045      4
LEU-COMP-THERM-046      22
LEU-COMP-THERM-047      3
LEU-COMP-THERM-054      8
LEU-COMP-THERM-058      9
LEU-COMP-THERM-074      1
LEU-COMP-THERM-077      5
LEU-COMP-THERM-082      5
LEU-COMP-THERM-083      3
LEU-COMP-THERM-084      1
LEU-COMP-THERM-089      4
LEU-COMP-THERM-090      9
LEU-COMP-THERM-091      9
LEU-COMP-THERM-092      6
The top 10 contributors to ck for the LCT-028-017 experiment are shown in Table 6-20. This is the experiment with the highest similarity to the application. The top contributor is 235U nubar, and the 10 contributors shown in Table 6-20 provide over 99% of the similarity between the systems.

The sensitivity profiles of the benchmark and the drum package are shown in Figure 6-26 for the top contributor, 235U nubar, and in Figure 6-27 for the second contributor, 56Fe (n,γ).

The similarity in the 235U nubar profiles is excellent, as would be expected given their large contribution to ck. In Section 6.2.3.1, Table 6-21 shows the top 10 uncertainty contributors for the package. The agreement in the 56Fe profiles is not as good, but strong similarities are clear. The top two uncertainty contributors are 56Fe (n,γ) and 235U nubar, reinforcing why these highly similar profiles are also large contributors to ck. The better agreement of the 235U nubar profiles compared with that of the 56Fe profiles explains why 235U nubar is the higher contributor to ck despite being the lower contributor to data-induced uncertainty.

Table 6-20 Top 10 Contributors to ck Between LCT-028-017 and Drum Package

Covariance Matrix (Nuclide/Reaction with Nuclide/Reaction)   ck Contribution   Running Total   Percent of Total ck
Total                                 N/A      0.9370   100
u-235 nubar with u-235 nubar          0.3429   0.3429   36.6
fe-56 n,gamma with fe-56 n,gamma      0.2363   0.5791   61.8
h-1 n,gamma with h-1 n,gamma          0.1403   0.7194   76.8
u-235 n,gamma with u-235 n,gamma      0.0561   0.7754   82.8
u-235 fission with u-235 fission      0.0517   0.8271   88.3
u-235 chi with u-235 chi              0.0480   0.8751   93.4
u-235 fission with u-235 n,gamma      0.0220   0.8971   95.7
u-235 n,gamma with u-235 fission      0.0209   0.9180   98.0
u-238 n,gamma with u-238 n,gamma      0.0091   0.9271   98.9
h-1 elastic with h-1 elastic          0.0044   0.9315   99.4

Figure 6-26 Sensitivity to Average Total Number of Neutrons Released per 235U Fission Profiles for Drum Package and LCT-028-017
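The running-total and percentage columns of Table 6-20 follow directly from the individual contributions. The quick check below uses the rounded values transcribed from the table, so the final running total differs from the tabulated 0.9315 only by rounding:

```python
# ck contributions from Table 6-20, top to bottom (rounded values).
contributions = [0.3429, 0.2363, 0.1403, 0.0561, 0.0517,
                 0.0480, 0.0220, 0.0209, 0.0091, 0.0044]
ck_total = 0.9370  # total ck between LCT-028-017 and the drum package

# Accumulate the running total and express it as a percent of total ck.
running = 0.0
for c in contributions:
    running += c
    print(f"{running:.4f}  {100 * running / ck_total:.1f}%")
```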

Figure 6-27 56Fe (n,γ) Sensitivity Profiles for Drum Package and LCT-028-017

6.2.3 Benchmark Set Gaps and Weaknesses

All the applicable experiments identified in Section 6.2.2 are water-moderated systems, and most of them are fueled with LEU. A review of the evaluations and specific cases identified as applicable shows that there is no significant source of graphite or carbon in any form in the experiments. This lack of graphite validation is certainly a gap, and the enrichment may be a weakness. Although a more thorough review of the applicable experiments associated with a safety basis validation might identify more gaps or weaknesses, only these two potential issues are considered here.

6.2.3.1 Graphite

The lack of graphite in any of the benchmarks identified as applicable seems counterintuitive. Logically, if graphite is the primary moderator, then similar benchmarks should contain graphite. However, similarity assessment via the ck parameter focuses on sources of nuclear data-induced uncertainty, not necessarily on reaction rates. The top 10 nuclide/reaction pairs contributing to uncertainty in the flooded drum-type package are provided in Table 6-21, and the top five nuclide total uncertainty contributions are shown in Table 6-22. Graphite does not appear in either list; elastic scattering in graphite is the eleventh highest contributor on a nuclide/reaction basis, and graphite is the sixth highest nuclide. This explains the identification of similar experiments without graphite: it does not contribute sufficient uncertainty to be identified as a top contributor to similarity.

Table 6-21 Top 10 Nuclide/Reaction Pair Contributors to Uncertainty in Drum Package

Covariance Matrix (Nuclide/Reaction with Nuclide/Reaction)   Uncertainty Resulting from This Matrix (%Δk/k)
56Fe (n,γ) with 56Fe (n,γ)                 4.0022E-01 +/- 6.3462E-05
235U nubar with 235U nubar                 3.8051E-01 +/- 3.8810E-05
1H (n,γ) with 1H (n,γ)                     2.6147E-01 +/- 8.3909E-06
235U (n,γ) with 235U (n,γ)                 1.5296E-01 +/- 5.5957E-06
235U (n,fission) with 235U (n,fission)     1.5031E-01 +/- 1.5267E-05
235U (n,fission) with 235U (n,γ)           1.3527E-01 +/- 6.7802E-06
235U chi with 235U chi                     1.2754E-01 +/- 5.5580E-03
54Fe (n,γ) with 54Fe (n,γ)                 2.9049E-02 +/- 3.9156E-07
238U (n,γ) with 238U (n,γ)                 2.7836E-02 +/- 3.9592E-07
1H elastic with 1H elastic                 2.7654E-02 +/- 5.2962E-06

Table 6-22 Top Five Nuclide Contributors to Uncertainty in Drum Package

Nuclide   Uncertainty (%Δk/k)   Running Total   Percent of Total
Total     0.677                 N/A             100
235U      0.475                 0.475           70.1
56Fe      0.401                 0.621           91.8
1H        0.263                 0.675           99.6
238U      0.029                 0.675           99.7
54Fe      0.029                 0.676           99.8

The total uncertainty contribution from graphite in the flooded drum-type package model is 0.029 %Δk/k, so accounting for the model keff reduces the impact to 0.028 %Δk. Applying a 95% confidence multiplier of 1.96 increases the potential validation gap penalty to 55 pcm. As discussed in Section 5.4, it is not clear that the confidence factor is needed in this process. Regardless, the impact of this small factor on a safety analysis would be quite small.
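The conversion from the tabulated graphite contribution to the quoted penalty can be sketched as follows (illustrative arithmetic only):

```python
# Graphite contribution to data-induced uncertainty, from the text.
unc_rel = 0.029e-2        # 0.029 %dk/k expressed as a fraction
keff = 0.96061            # flooded drum-type package model keff

unc_dk = unc_rel * keff   # convert %dk/k to dk (~0.028 %dk)
penalty_pcm = unc_dk * 1.96 * 1e5  # two-sided 95% multiplier, in pcm
print(f"{penalty_pcm:.0f} pcm")    # ~55 pcm
```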

A separate discussion regarding the validity of this approach for one of the two primary moderating species in the model may be warranted. Most of the accepted uncertainty assessment validation gap penalties have applied to minor constituents of the model, not to major fissile species or moderators. Further study of models like these and a thorough review of the performance of graphite-moderated benchmarks compared with light-water-moderated systems might provide confidence in this approach. Again, the existence of applicable data to generate this proof would likely eliminate the need for it in the first place because the graphite-moderated benchmarks could be used directly in the validation set.

6.2.3.2 Enrichment

The enrichments of the 21 evaluations from which experiments with ck values of at least 0.8 are drawn are provided in Table 6-23. The enrichment in the flooded drum model is 20 wt% 235U, which is above the maximum enrichment of 17.0 wt% 235U in the IEU-COMP-THERM-002 experiments [14]. A review of a wide range of benchmarks covering the relevant enrichment range, such as that presented in Marshall et al. [83], might demonstrate that no penalty is needed for this system. The particulars of such a justification are outside the scope of this report because they would likely not rely on S/U methods.

Table 6-23 Enrichments of Applicable Experiments (wt% 235U)

Evaluation              Enrichment
IEU-COMP-THERM-002      17.0
LEU-COMP-THERM-022      9.83
LEU-COMP-THERM-025      7.41
LEU-COMP-THERM-028      4.31
LEU-COMP-THERM-032      9.83
LEU-COMP-THERM-043      4.35
LEU-COMP-THERM-044      4.35
LEU-COMP-THERM-045      4.46
LEU-COMP-THERM-046      4.35
LEU-COMP-THERM-047      3.01/7.00
LEU-COMP-THERM-054      4.35
LEU-COMP-THERM-058      4.35
LEU-COMP-THERM-074      4.74
LEU-COMP-THERM-077      4.35
LEU-COMP-THERM-082      4.35
LEU-COMP-THERM-083      4.35
LEU-COMP-THERM-084      4.35
LEU-COMP-THERM-089      4.35
LEU-COMP-THERM-090      4.35
LEU-COMP-THERM-091      4.35
LEU-COMP-THERM-092      4.35

6.3 Generic Storage Cask for PWR SNF

The GBC-32 cask model was developed by Wagner [33] to be a nonproprietary model representative of typical SNF storage casks in use around the turn of the millennium. The model has been used in numerous ORNL studies and reports since that time and continues to be a valuable computational workhorse for examining BUC methods, validation techniques, S/U tools, and other relevant NCS studies. The GBC-32 cask was used in NUREG/CR-7109 [10] and NUREG/CR-7309 [72]; the recent update of NUREG/CR-7109 investigates new nuclear data libraries, increased initial enrichments, and higher discharged burnups.

The GBC-32 cask is made up of 32 storage cells, each with an inside dimension of 22 cm. The storage cell walls are 304 stainless steel and are 0.75 cm thick. A neutron absorber panel is sandwiched between the cell walls and consists of a 0.2057 cm thick central absorber with 0.0254 cm aluminum cladding on both sides of the absorber core. The absorber is a mixture of boron carbide and aluminum with an areal density of 0.0225 g 10B/cm2. This density was selected as 75% of the nominal areal density of 0.030 g 10B/cm2 based on the recommendations of the relevant NRC standard review plans in effect at the time the GBC-32 cask model was developed [84-86]. The fuel storage cells and absorber panels are 144 in. (365.76 cm) in height. The inner diameter of the cask body is 175 cm, and its inside height is 410.76 cm. The cask wall is stainless steel that is 20 cm thick in the radial direction, and the lid and baseplate are both 30 cm thick. The Westinghouse 17 x 17 optimized fuel assemblies are modeled centered in the storage cells, and only the active height of the fuel itself is modeled. Fuel assembly hardware, including top and bottom nozzles, grids, and so on, is neglected in the model. These details are all provided in NUREG/CR-6747 [33].

The computational model of the GBC-32 cask used in NUREG/CR-7309 is a half-cask model containing only the half in the positive Y half-space. A reflective boundary condition is placed on the negative Y face to model the entire cask. This simplification is routinely used for MG TSUNAMI analysis of the GBC-32 cask so that the flux tally mesh can be more refined without using excessive amounts of computer memory. A 3D rendering of the bottom half of the model is shown in Figure 6-28, and a 2D radial slice through the region containing fuel is shown in Figure 6-29.

Figure 6-28 3D Rendering of the Bottom Half of the GBC-32 Model Figure 6-29 Radial Slice of the GBC-32 Model

The results reported here include the actinide and fission product (AFP) isotope set used extensively in ORNL reports, including Wagner [33] and NRC standard review plans [53, 54].

The set includes 9 major actinides, 3 minor actinides, and 16 fission products, as shown in Table 6-24.

Table 6-24 Nuclides Included in SNF Composition in GBC-32 Model

Major actinides:          234U, 235U, 238U, 238Pu, 239Pu, 240Pu, 241Pu, 242Pu, 241Am
Minor actinides:          236U, 237Np, 243Am
Major fission products:   95Mo, 99Tc, 101Ru, 103Rh, 109Ag, 133Cs, 147Sm, 149Sm, 150Sm, 151Sm, 152Sm, 143Nd, 145Nd, 151Eu, 153Eu, 155Gd

The results and discussion presented here are based on a limited subset of the analysis presented in NUREG/CR-7309. Interested readers are referred to NUREG/CR-7309 [72] for a more complete set of study results and a more extensive investigation of BUC validation.

NUREG/CR-7109 [10] also provides more complete validation study results for BUC than are within the scope of this report.

Three main areas are examined in the remainder of this section. First is a discussion of generating sensitivity coefficients for the model with the TSUNAMI-3D sequence. Second is selection of applicable benchmark experiments using TSUNAMI-IP and the integral index ck.

Finally, gaps and weaknesses in the validation set are presented, and potential resolutions are discussed.

6.3.1 Sensitivity Coefficient Generation with TSUNAMI-3D

All sensitivity calculations were performed with SCALE 6.3.0 using the KENO V.a transport code with the MG TSUNAMI-3D sequence. The GBC-32 model has been used in multiple ORNL-generated reports, and prior experience with the system has allowed the generation of an optimized user-defined mesh that produces accurate sensitivity coefficients for the SNF cask.

The results presented here focus on an initial enrichment of 8 wt% 235U depleted to a final assembly-averaged burnup of 80 GWd/MTU. The calculations were run with the 252-group nuclear data library based on ENDF/B-VIII.0.

The KENO V.a calculations simulated 10,000 particles per generation in the forward calculation and 100,000 particles per generation in the adjoint calculation. Both forward and adjoint calculations skipped the initial 100 generations for source convergence. The forward calculation targeted a final keff uncertainty of 10 pcm, and the adjoint targeted 100 pcm. This resulted in 6,255 generations in the forward calculation and 2,194 in the adjoint calculation. The forward keff value was 0.95188 +/- 0.00010, and the adjoint keff value was 0.95042 +/- 0.00099. The difference between the two keff estimates is 0.00146 +/- 0.00099, which is less than 2σ and less than 0.5 %Δk. As discussed in Section 5.2.2, this is an acceptably small difference between the forward and adjoint keff estimates.
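The forward/adjoint consistency check described above amounts to comparing the keff difference with its combined Monte Carlo uncertainty; a minimal sketch using the values quoted in the text:

```python
import math

k_fwd, sig_fwd = 0.95188, 0.00010  # forward keff and uncertainty
k_adj, sig_adj = 0.95042, 0.00099  # adjoint keff and uncertainty

# Combined one-sigma uncertainty of the difference, assuming the two
# estimates are statistically independent.
diff = k_fwd - k_adj
sigma = math.sqrt(sig_fwd ** 2 + sig_adj ** 2)
print(f"difference: {diff:.5f} +/- {sigma:.5f} ({diff / sigma:.2f} sigma)")
```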

The sensitivity profiles for 235U, 238U, 239Pu, and 1H in node 17 are shown in Figure 6-30. This node was selected for DP confirmation because the top end of the fuel assembly experiences the majority of the fissions for high-burnup fuel in storage configurations. At this high burnup, the top node, node 18, has slightly higher fission rates and sensitivities, but node 17 was used for consistency with other state points considered in NUREG/CR-7309. It should also be noted that the presence of 240Pu in the model can be deduced from the positive spike in the 1H sensitivity opposite the large capture resonance in 240Pu at approximately 1 eV. Figure 6-31 shows sensitivity profiles for 10B and 149Sm integrated over all mixtures. 10B is included because of its importance in the model as a neutron absorber, and 149Sm is the fission product with the highest sensitivity. Of the six nuclides presented, only 149Sm has a total integral sensitivity of magnitude less than 0.02. Direct perturbation results for all six nuclides are summarized in Table 6-25.

Figure 6-30 Major Sensitivities in Node 17 of the GBC-32 Model

Figure 6-31 10B and 149Sm Sensitivity Profiles in the GBC-32 Model

Table 6-25 DP Summary for GBC-32 Model

Nuclide   TSUNAMI S                DP S                     ΔS       ΔS/S (%)   ΔS (σ)
Node 17
235U      6.06E-02 +/- 7.37E-05    6.08E-02 +/- 7.58E-04    0.0002   0.39%      0.31
238U      2.84E-02 +/- 6.35E-05    2.80E-02 +/- 3.55E-04    0.0004   1.51%      1.17
239Pu     3.75E-02 +/- 5.23E-05    3.70E-02 +/- 4.70E-04    0.0006   1.51%      1.18
1H        8.40E-02 +/- 9.16E-04    8.39E-02 +/- 1.06E-03    0.0001   0.08%      0.05
Entire Model
10B       3.07E-02 +/- 2.36E-05    3.09E-02 +/- 3.88E-04    0.0002   0.72%      0.57
149Sm     1.41E-02 +/- 5.39E-06    1.38E-02 +/- 1.71E-04    0.0003   2.00%      1.61

The results shown in Table 6-25 demonstrate excellent agreement between MG TSUNAMI and DP calculations. All six nuclides agree, with a maximum difference in sensitivity of less than 0.001, an order of magnitude smaller than the difference generally deemed acceptable. The relative differences are also excellent, with all differences being 2% or less. This also corresponds to differences for all six nuclides of less than 2σ.

The TSUNAMI-calculated total integral sensitivity for 235U in node 17, as shown in Table 6-25, is approximately 0.0606. Given this sensitivity and the keff value presented earlier, the recommended perturbation resulting from application of Eq. (7) in Section 5.1.2 is +/-0.087. In this case, the analyst performed four DP calculations, at +/-0.087 and at +/-0.0435. The additional points with half the recommended perturbation were included to reduce the uncertainty in the DP sensitivity and to provide greater confidence in the linear behavior of the trend over the range of the perturbed number densities. The results of the individual calculations are provided in Table 6-26, and the plot of the uncertainty-weighted linear regression is provided in Figure 6-32. The results show excellent behavior for both linearity and symmetry.

Table 6-26 DP Calculation Results

Perturbation   keff                  Normalized keff
-0.0870        0.94673 +/- 0.00010   0.99473 +/- 0.00011
-0.0435        0.94934 +/- 0.00009   0.99747 +/- 0.00010
0              0.95175 +/- 0.00010   1.00000 +/- 0.00011
+0.0435        0.95439 +/- 0.00010   1.00277 +/- 0.00011
+0.0870        0.95676 +/- 0.00010   1.00526 +/- 0.00011

Figure 6-32 DP Results Plotted as Normalized keff vs. Perturbation
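The DP sensitivity in Table 6-25 is essentially the slope of the normalized keff values in Table 6-26 versus the fractional perturbation. The unweighted fit below (the report uses an uncertainty-weighted regression) reproduces approximately 0.0606:

```python
import numpy as np

# Fractional 235U density perturbations and normalized keff (Table 6-26).
delta = np.array([-0.0870, -0.0435, 0.0, 0.0435, 0.0870])
k_norm = np.array([0.99473, 0.99747, 1.00000, 1.00277, 1.00526])

# DP sensitivity = slope of normalized keff vs. fractional perturbation.
slope = np.polyfit(delta, k_norm, 1)[0]
print(f"DP sensitivity ~ {slope:.4f}")  # ~0.0606 (Table 6-25: 6.08E-02)
```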

6.3.2 Identification of Applicable Benchmarks

As mentioned in Section 6.1.2, a set of 2,104 benchmark experiments with available sensitivity data was collected as part of the work described in NUREG/CR-7309 [72]. These experiments include SDFs from the ORNL VALID library [35], the NEA DICE dataset [14, 36], and ORNL-generated results for the HTC experiments [75-78]. A complete survey of applicable benchmarks for a range of initial enrichments and discharge burnups using both ENDF/B-VII.1 and ENDF/B-VIII.0 data is available in NUREG/CR-7309, but only the case with an initial enrichment of 8 wt% 235U, a discharge burnup of 80 GWd/MTU, and ENDF/B-VIII.0 data is considered here.

In total, 107 experiments are identified as having ck values above 0.9 for the GBC-32 application with 8 wt% initial enrichment fuel depleted to an assembly-average burnup of 80 GWd/MTU. All the applicable experiments come from the HTC dataset, including cases from Phase 1; Phase 2, with both soluble boron and soluble Gd; Phase 3; and Phase 4, with both steel and lead reflectors. The set of 107 experiments represents a statistically significant sample that can be used to reliably perform validation analysis.

6.3.3 Benchmark Set Gaps and Weaknesses

An assessment of gaps and weaknesses in the set of 107 applicable experiments reveals one notable weakness and one significant gap. The weakness is that all 107 experiments come from the same set of experiments performed at the same facility with the same fuel, and the gap is the lack of minor actinides and fission products in the identified applicable experiments. Each of these issues is discussed further in this section.

There is no way to use S/U methods to assess the potential for correlations among experiments from a single facility with a single set of fissile material. Considerable work has been performed in the last 15 years in an attempt to quantify correlations among critical experiments [79]. For this type of fissile array, the primary concern for correlations is likely the placement of the fuel rods.

Random uncertainties in fuel rod locations will lead to small correlations, as in Scenario E in Stuke and Hoefer [79]. However, random uncertainties applied to a constant fuel rod pitch will lead to much larger correlations, as in Scenario A in Stuke and Hoefer [79]. A detailed review of the HTC experiments [75-78] would be required to determine which scenario is more applicable and to generate a reasonable estimate of the correlations; this work is beyond the scope of this analysis.

It is also impossible to use S/U methods to test for a potential systematic bias in the HTC results. A review of the data assessment included in the evaluation of the HTC experiments [75-78] might provide insight, or at least confidence that reasonable measurements were made.

The only way to establish that there is no error in the characterization of the HTC experiments is to compare them with results from other experiments. Ideally, such experiments would be similar to the application, but this is not strictly required. Clearly, some other benchmarks with mixtures of uranium and plutonium should be used to assess performance of the fissile material. Other aspects of the experiments could be tested with dissimilar experiments that contain similar features. Experiments with soluble boron or gadolinium could be used to assess the Phase 2 results. Steel- and lead-reflected LEU arrays could be used, for example, to identify potential issues with these reflectors in water-moderated systems and thus to assess the Phase 4 results. The details of such assessments are beyond the scope of this report, but they should be provided in a complete safety basis validation.

The lack of minor actinides and fission products was a long-standing issue addressed with S/U methods for PWR BUC in NUREG/CR-7109 [10] and for BWR BUC in NUREG/CR-7252 [11].

One of the primary purposes of NUREG/CR-7309 [72] is to ensure that new nuclear data released in ENDF/B-VII.1 and ENDF/B-VIII.0 did not alter the conclusions of NUREG/CR-7109 or existing standard review plans [53, 54] with respect to PWR BUC validation. An uncertainty analysis approach such as that described in Sections 4.4 and 5.4 can be used here to generate a bounding estimate of the reactivity effect possible from a lack of validation for these 19 nuclides. A generic approach is justified in NUREG/CR-7109 based on a fraction of the worth of the unvalidated nuclides, but such generalization is not performed here because the assessment of the validation gap is performed only for the specific model with 8 wt% initial enrichment and a discharge burnup of 80 GWd/MTU.

The nuclear data-induced uncertainty in the 16 fission products and 3 minor actinides credited in the GBC-32 model and absent from the benchmark experiment set is provided in Table 6-27.

The results are based on the ENDF/B-VIII.0 covariance data, and all contributing reactions and cross correlations are accounted for. The uncertainty has also been corrected for the model keff value provided in Section 6.3.1. The total uncertainty of 86 pcm is also provided in Table 6-27.

Applying a multiplier of 1.96 to reach 95% confidence increases the uncertainty to 168 pcm. As discussed in Section 5.4, it is unclear that this level of rigor is required because previous efforts to quantify validation gap penalties have been incapable of assigning statistical confidence to them. These values of 86 pcm and 168 pcm provide a basis to begin discussion of an appropriate validation gap penalty for lack of validation for minor actinides and major fission products. As mentioned in Sections 4.4, 5.4, and 6.1.3.2, additional studies and justification may be warranted for using these values directly in a safety basis validation assessment.

Table 6-27 Data-Induced Uncertainty for Minor Actinides and Major Fission Products

Nuclide                         Data-Induced Uncertainty (pcm)
Minor Actinides
  236U                                        27
  237Np                                       26
  243Am                                        3
Major Fission Products
  95Mo                                         7
  99Tc                                        12
  101Ru                                       13
  103Rh                                       27
  109Ag                                        3
  133Cs                                       23
  147Sm                                        8
  149Sm                                       20
  150Sm                                        7
  151Sm                                       14
  152Sm                                        8
  143Nd                                       48
  145Nd                                       27
  151Eu                                        0
  153Eu                                       13
  155Gd                                       18
Total (Minor Actinides + FP)                  86
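The total in Table 6-27 is consistent with a root-sum-square combination of the per-nuclide entries, and the 95% value applies the 1.96 multiplier discussed in the text. A short arithmetic check (because cross correlations are included in the report, the rounded entries reproduce the stated total only approximately):

```python
import math

# Arithmetic check of Table 6-27 (values in pcm, copied from the table).
minor_actinides = {"236U": 27, "237Np": 26, "243Am": 3}
fission_products = {
    "95Mo": 7, "99Tc": 12, "101Ru": 13, "103Rh": 27, "109Ag": 3,
    "133Cs": 23, "147Sm": 8, "149Sm": 20, "150Sm": 7, "151Sm": 14,
    "152Sm": 8, "143Nd": 48, "145Nd": 27, "151Eu": 0, "153Eu": 13,
    "155Gd": 18,
}
values = list(minor_actinides.values()) + list(fission_products.values())
total = math.sqrt(sum(v ** 2 for v in values))  # root-sum-square, ~86 pcm
total_95 = 1.96 * total                          # ~168 pcm at 95% confidence
```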

7 ADVANCED CAPABILITIES

Two S/U tools deployed within SCALE have yet to be discussed significantly in this document, primarily because they have not seen significant production use in validation. The first is the TSURFER code (see Section 3.1), which is an implementation of the GLLSM for validation. The second is the TSAR sequence, which calculates reactivity sensitivities based on keff sensitivities generated for two state points. Each of these tools is discussed briefly in the subsequent subsections. There is also a discussion of remaining challenges facing the use of TSURFER in safety basis applications for determination of bias and bias uncertainty.

7.1 TSURFER

The TSURFER code is an implementation of the GLLSM for performing data adjustment calculations. The code minimizes the generalized χ² parameter to create the most consistent possible set of nuclear data and measured responses. The nuclear covariance data are used to constrain the adjustments such that more uncertain parameters are more likely to be adjusted, and by larger amounts, than less uncertain parameters. Measured and calculated results are also adjusted within their uncertainties, accounting for covariance data related to the measurements. Multiple options are available to filter outlier responses that are likely incorrect to prevent these aberrant results from affecting the adjustment process. The GLLSM approach is a simplified version of Bayesian statistics and represents a completely different approach from those recommended in current NCS validation guidance used in the United States, such as in Dean and Tayloe [2] and in Lichtenwalter et al. [3]. TSURFER results are presented in Appendix C of NUREG/CR-7109 [10] and provide a defense-in-depth argument for fission product credit without representing the primary method used to justify the margins derived in the main body of the report.

The potential usefulness of the GLLSM approach is that nuclear data can be adjusted based on a wide range of critical experiments and other responses. The importance of the similarity between application systems and benchmarks is removed because each benchmark is used to constrain the data to which it is most sensitive. This is most advantageous for extracting useful validation data from a dissimilar benchmark system. For example, the LCT-079 evaluation contains LEU fuel rod arrays and similar arrays with Rh foils. This is a strong test of the nuclear data for Rh, but the experiment is generally of no value for BUC validation because the LEU fuel is too dissimilar from the mixture of uranium and plutonium in SNF. A similar situation exists with Sm in the LCT-050 evaluation. Using all the applicable data can reduce the magnitude of validation gaps and weaknesses in application systems.

The bias calculation within TSURFER is performed by propagating the proposed data adjustments with the application sensitivities. Once again, a set of data changes is multiplied by the system sensitivities to determine the impact in the application system. This is the same matrix algebra discussed for uncertainty propagation, except the data adjustments are used in place of uncertainties. In principle, the adjustments that make all the measurements and nuclear data most consistent should represent the errors in the nuclear data. The impact of these errors in a particular application system is the bias of the computational method for that system.
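The matrix algebra described above can be illustrated with a toy generalized linear least-squares (GLLS) update. Every number here is hypothetical, and the sign and normalization conventions in TSURFER may differ; this is a structural sketch only, not the SCALE implementation:

```python
import numpy as np

# Toy GLLS data-adjustment sketch (hypothetical numbers, not SCALE output).
S = np.array([[0.30, 0.10, 0.02],    # benchmark keff sensitivities
              [0.25, 0.05, 0.15],    # (rows: experiments, cols: data params)
              [0.10, 0.20, 0.05]])
M = np.diag([0.02, 0.03, 0.05]) ** 2     # prior relative data covariance
V = np.diag([0.002, 0.002, 0.003]) ** 2  # experiment uncertainty covariance
d = np.array([0.004, -0.002, 0.001])     # C/E - 1 discrepancies

G = S @ M @ S.T + V                      # covariance of the discrepancies
dalpha = -M @ S.T @ np.linalg.solve(G, d)          # proposed data adjustments
M_post = M - M @ S.T @ np.linalg.solve(G, S @ M)   # reduced posterior covariance

S_app = np.array([0.28, 0.08, 0.10])     # application sensitivities
bias = S_app @ dalpha                    # bias: adjustments x sensitivities
u_pre = np.sqrt(S_app @ M @ S_app)       # data-induced uncertainty before...
u_post = np.sqrt(S_app @ M_post @ S_app) # ...and after adjustment
```

The last three lines mirror the text: the same sensitivity-weighted propagation yields the bias when fed adjustments and the uncertainty when fed covariances, and the posterior covariance is always reduced relative to the prior.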

It is important to note that TSURFER does not generate adjusted data libraries for use in transport calculations. It generates a set of proposed adjustments which should represent the nuclear data and measurement errors present in the benchmarks. These data adjustments can then be propagated with the system sensitivities to determine the bias these adjustments represent in the application system of interest.

The remainder of this section presents a brief demonstration of TSURFER, followed by a discussion of some limitations that should be addressed before TSURFER is used directly for safety basis bias and bias uncertainty determinations.

7.1.1 TSURFER Example

An example drawn from the TSUNAMI training classes taught at ORNL is summarized here. A GBC-32 SNF cask model is the application, and a set of 199 benchmarks is used. The benchmark set is selected to cover a wide range of systems, including HEU, LEU, and Pu, as well as fast and thermal benchmarks. Most of these benchmarks are not similar to the SNF cask application.

The results of the benchmark adjustments are reported in the output and can be plotted in Fulcrum. An example of this plot is provided in Figure 7-1. In the figure, the nominal calculated C/E results are shown as red dots, and the post-adjustment C/E results are shown as blue triangles. Most experiments can be successfully adjusted to a C/E of 1.0. Cases in which there is a red dot for a nominal calculation but no blue triangle for the adjusted result represent experiments that are rejected in the adjustment process as being discrepant. The adjustments required to make these particular experiments consistent with the other experiments exceed their uncertainties. This most likely indicates an error in the experiment description. Rejection could also indicate an error in the model if the benchmark models have not been thoroughly reviewed and verified. In this case, 17 of the 199 experiments are rejected by the χ² filter. There are several options to control the χ² filter, as discussed in Section 6.8.5.1 of the SCALE manual [7]. The default filter is the slowest, but it is the least likely to reject experiments. This means that the user could select a less rigorous filter in the input that would result in shorter runtimes for larger benchmark sets. These approaches may eliminate more experiments than strictly necessary. The reduced number of experiments included in the adjustment process does not imply that the resulting bias would be conservative or nonconservative. The number of experiments in the adjustment process does not directly impact the uncertainty as it does in a traditional frequentist validation technique.

Figure 7-1 keff C/E Results Before and After Adjustment

The cross section adjustments identified by TSURFER are provided in the text output and are also available to plot in Fulcrum in the xs-adjust ptp file. An example of these adjustments for 238U (n,γ), 235U fission, and 239Pu fission is provided in Figure 7-2. By default, Fulcrum presents these data on a linear energy scale. This can be changed by selecting the axes option in the Plot options window and changing the scale option from linear to log.

Figure 7-2 TSURFER Cross-Section Adjustments for 238U (n,γ), 235U Fission, and 239Pu Fission

The Application and Bias Summary table in the TSURFER output provides the final estimated bias, residual uncertainty, and adjusted keff value for the application. In this case, the bias was estimated to be 2.8e-4 Δk, or 28 pcm. The KENO-calculated keff value is 0.94289, and the adjusted value is therefore 0.94261. The adjustment of all the nuclear data incorporating all the measured results from the experiments allows the data-induced uncertainty to be lowered from a pre-adjustment value of 0.420% Δk to a post-adjustment value of 0.180% Δk. In other words, the data-induced uncertainty is lowered from 420 pcm to 180 pcm. Exactly what this remaining 180 pcm of uncertainty represents is one of the challenges discussed in Section 7.1.2.
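The bias and uncertainty bookkeeping quoted above is simple arithmetic (1 pcm = 1e-5 in keff; values taken from the text):

```python
# Bookkeeping behind the Application and Bias Summary numbers.
k_nominal = 0.94289            # KENO-calculated keff
bias = 2.8e-4                  # estimated bias, 28 pcm
k_adjusted = k_nominal - bias  # adjusted keff

def to_pcm(dk):
    """Convert a delta-k value to pcm (1 pcm = 1e-5 in keff)."""
    return dk * 1e5

u_pre_pcm = to_pcm(0.00420)    # 0.420% dk before adjustment -> 420 pcm
u_post_pcm = to_pcm(0.00180)   # 0.180% dk after adjustment  -> 180 pcm
```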

The TSURFER output also provides estimates of the bias for each nuclide/reaction pair. This calculation is straightforward because it is the adjustment to that data propagated with its sensitivity in the application model. For this GBC-32 model, the top contributors to bias are 238U (n,γ), 235U, 238U, 239Pu (n,γ), and 56Fe (n,γ). Extended uncertainty edits are also provided for both the pre- and post-adjusted total data-induced uncertainty. The largest contributor to uncertainty is 235U in both cases, but the uncertainty contribution is lowered from 200 pcm before adjustment to 138 pcm after adjustment.

7.1.2 TSURFER Limitations

The six points discussed here are limitations on using TSURFER for direct bias and bias uncertainty determination in validation. To different degrees, these limitations can be addressed by analysts, or they require solutions or demonstrations in code development and testing. The limitations are (1) coverage of all relevant materials from the safety analysis model in the validation suite, (2) exclusion of erroneous measurements, (3) correlations among the measurements, (4) accuracy of the covariance data, (5) uniqueness of solution, and (6) compliance with regulatory requirements. Each of these areas is discussed in this section.

7.1.2.1 Coverage of Relevant Materials

The simplest of the issues to be addressed for TSURFER calculations is the coverage of all relevant materials in the benchmarks present in the validation set. This is not a problem unique to TSURFER, but the basis for the problem is somewhat different from that in traditional validation techniques. The data adjustment process can only assess the accuracy of data for which relevant measurements are provided. In other words, no adjustments can be made to data for which no measurements are provided to TSURFER. These adjustments are then used to calculate the bias in the application system because they are interpreted as the corrections to be applied to the nuclear data. This means that any bias present in nuclear data will be undetected if no benchmarks that contain that nuclide or that are sensitive to that reaction are present in the benchmark set. In the benchmark set used in the example presented in Section 7.1.1, there are no experiments containing samarium. No adjustment of samarium nuclear data is possible, which is equivalent to assuming that there is no bias in the samarium data.

Again, this problem is not unique to the GLLSM for validation and is in fact made somewhat easier to solve. Relevant data can be drawn from dissimilar experiments, thus allowing TSURFER to adjust the data and determine the bias based on any experiment that is sensitive to relevant nuclides in the applicable spectrum. Benchmarks such as LCT-050 can be used to adjust thermal samarium data, and those adjustments can then be applied to the SNF cask application, regardless of the fact that the SNF composition has a significant amount of plutonium and is no longer highly similar to the LEU composition in the benchmark. The great promise of the GLLSM is this ability to extract bias data from relevant benchmarks, regardless of similarity. The GLLSM can also use measurements other than keff values, such as reactivity sensitivity coefficients derived from substitution experiments, to adjust data. This additional information can dramatically improve the knowledge of specific nuclides. See Section 7.2 for more information about TSAR and reactivity coefficients.

In principle, an analyst can select a broad set of benchmarks for analysis in TSURFER that contain all relevant materials in the safety analysis model. Finding benchmarks to constrain data adjustment is easier when the similarity requirement of traditional validation techniques is relaxed. Failing that, the typical approaches to addressing gaps and weaknesses can be applied as additional margins to ensure a conservative USL.

7.1.2.2 Exclusion of Erroneous Experiments

The data adjustment process requires correct measurements as inputs. Errors in measurements or models would bias the results input to the adjustment process, and the final result could only be incorrect adjustments. The result of the process will always appear to be reduced uncertainty, even though the adjusted parameters are not correct. This issue is not unique to criticality safety validation; it is an inherent limitation of all Bayesian updating schemes.

TSURFER includes different χ² filtering approaches to identify and exclude outlier results from the adjustment process. The process essentially identifies measurements that are too different from the other results to be made consistent when accounting for measurement uncertainty.

Measurements are progressively excluded until the target χ² value is met. The default χ² value is 1.2, but it can be specified by the user in the parameter block of the TSURFER input. The default filtering option is delta_chi.
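The progressive exclusion can be sketched as follows. This is a simplified stand-in for TSURFER's delta_chi option, not the SCALE implementation: it drops the largest χ² contributor until the mean χ² per experiment meets the target.

```python
import numpy as np

# Simplified chi-squared outlier filter (illustrative only, not SCALE's).
def filter_outliers(d, var, target=1.2):
    """d: C/E - 1 discrepancies; var: total variance per experiment."""
    keep = list(range(len(d)))
    while len(keep) > 1:
        contrib = {i: d[i] ** 2 / var[i] for i in keep}
        if sum(contrib.values()) / len(keep) <= target:
            break                                    # target chi^2 met
        keep.remove(max(contrib, key=contrib.get))   # reject worst experiment
    return keep

d = np.array([0.001, -0.002, 0.015, 0.0005])  # one clearly discrepant result
var = np.array([1e-6, 4e-6, 4e-6, 1e-6])
kept = filter_outliers(d, var)
```

With these hypothetical inputs, only the third experiment is rejected; the remaining three are consistent within their uncertainties.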

A review of the pre- and post-adjustment keff C/E values can provide confidence that outlier experiments have been excluded from the adjustment process. The data to perform this review are available in the text output and are also provided in the ptp file for visual inspection with Fulcrum.

7.1.2.3 Correlations among Experiments

Correlations can exist among critical experiments that use the same materials, machines, or procedures. A significant amount of research has been performed since approximately 2012 focused on methods to determine correlations of uncertainties among different critical experiments. A fairly brief summary is presented in Section 6.5 of Clarity et al. [4]. The results of a lengthy study comparing different methods for determining these correlations for LEU pin arrays were published by NEA in 2023 [79]. A smaller study examining HEU solution experiments was published in 2019 [87].

Correlations among experiments are relevant in a TSURFER calculation because they place additional constraints on the adjustments that can be made to the measured values from a series of experiments. For example, if a series of experiments in an LCT evaluation is highly correlated, then those experiments cannot be adjusted in dramatically different ways. Without accounting for this correlation, incorrect adjustments are allowed.

Stuke and Hoefer [79] demonstrated that robust methods exist for determining the correlations among the experiments in the ICSBEP Handbook [14]. The difficulty in implementation generally stems from a lack of sufficient detail in the evaluation to unambiguously identify all the shared aspects in a series of experiments. This makes correlations more challenging to determine, and it also makes it more difficult to defend them in a regulatory proceeding. Work continues to establish consensus on the impact of correlations, or the lack of correlations, on NCS validation studies.

The impact of correlations could be addressed in several ways. An analyst could determine correlations for all experiments input to TSURFER, although this is likely time- and cost-prohibitive. Another approach would be to select only experiments that are believed to be uncorrelated, as demonstrated by Perfetti and Rearden [88]. A third approach would be to assume multiple levels of correlation and determine the impact on the TSURFER-calculated bias and bias uncertainty for the application system. The most conservative result of such a study could then be selected, or, if there are no significant differences, any of the results could be used.
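The third approach can be sketched by constructing the experiment covariance matrix for several assumed correlation levels and rerunning the adjustment with each. The benchmark uncertainties and correlation levels below are hypothetical:

```python
import numpy as np

# Build an experiment covariance matrix with a single assumed correlation
# level rho among all experiments (illustrative values only).
def experiment_covariance(sigmas, rho):
    sig = np.asarray(sigmas, dtype=float)
    V = rho * np.outer(sig, sig)   # off-diagonal terms: rho * sigma_i * sigma_j
    np.fill_diagonal(V, sig ** 2)  # diagonal terms: the experiment variances
    return V

sigmas = [0.0010, 0.0012, 0.0011]  # benchmark keff uncertainties (delta-k)
matrices = {rho: experiment_covariance(sigmas, rho) for rho in (0.0, 0.5, 0.9)}
# Each matrix would replace the uncorrelated default in the adjustment; the
# spread of the resulting biases indicates sensitivity to the assumption.
```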

7.1.2.4 Accuracy of Covariance Data

The persistent question of whether covariance data are accurate has impacts on uncertainty analysis, similarity assessment, and data adjustment. Sections 4.3 and 5.4 include discussions of the nuclear covariance data and assessment of their accuracy. The impact of the accuracy of covariance data is likely smaller for data adjustment than for margin estimation or ck calculations because the covariances are used as a constraint on how much the data can be adjusted and not as a direct input to the calculation. The covariances provide an important constraint, and if the relative covariances are inaccurate, then adjustments could be made to the wrong nuclide. It is also possible that if the absolute covariances are too high, as seems likely, then the adjustments made may be too large. Large adjustments may be conservative because they would propagate to overestimates of the bias in the application of interest.

Various tests of the accuracy of covariance data have been performed, but the most rigorous tests compare the variability of the measured benchmarks to the predicted variability based on the covariance data [10, 45, 46, 50, 52]. As discussed in Section 4.4, there is considerable difficulty in assessing the accuracy of non-actinide covariance data. An analyst could test different covariance libraries to assess the effect of the different libraries. This was demonstrated for uncertainty analysis by Marshall [62], but it could also be performed with different covariance libraries as input to TSURFER. The results of such a study could demonstrate that the covariance data do not change enough among different evaluations to have significant impacts on the calculated bias and bias uncertainty from the GLLSM. If the effects are significant, then an analyst could use the results of the different calculations to select or generate a defensible method to determine and apply the bias and bias uncertainty values in validation.

7.1.2.5 Uniqueness of Solution

The GLLSM provides potential data adjustments based on creation of a consistent set of measurements and data. There is no guarantee that there is only one unique set of adjustments that can yield this result given the large number of adjustable parameters and the relatively small number of experiments. Approximately 5,000 configurations are included in the ICSBEP Handbook [14], but when considering the entire energy range of all reactions of all nuclides on the ENDF libraries, there are tens of thousands of parameters. This underconstrained situation means that, mathematically, there is no guarantee of a unique solution from the GLLSM.

However, the problem may yet be solvable because most of the reactions for many of the nuclides are of relatively small importance to the safety application model. This acts to reduce the effective number of free parameters and may improve the ability of the GLLSM to generate unique solutions. Perfetti and Rearden investigated this, but the results were not definitive [88].

An analyst could implement a similar approach and perform a series of TSURFER estimates of the bias and bias uncertainty and then examine the results to understand the potential for nonunique solutions for their particular benchmark sets.

7.1.2.6 Regulatory Requirements

Specific uncertainty requirements for NCS analysis are stated in standard review plans [53, 54, 57] and in 10 CFR 50.68 [89]. Typically, these require 95% probability at a 95% confidence level. Historically, there has been no known process for converting the residual uncertainty from a TSURFER adjustment calculation into a confidence interval to demonstrate compliance with regulations. The other challenges discussed in this section may seem more strongly related to safety and reliability, but the method cannot be used in safety basis work if it cannot be shown to comply with the regulations.

Work by Abdel-Khalik et al. [90] has provided a potential path forward to convert the post-adjustment uncertainty into a confidence interval. This methodology is under review by the SCALE development team and may be deployed within TSURFER in a future SCALE release.

Other approaches may be possible. Perfetti and Rearden [88] proposed an approach, and a sampling-based framework may allow the generation of a confidence interval through a large number of TSURFER calculations.

7.2 TSAR and Reactivity Sensitivity Coefficients

TSAR is a SCALE module used to calculate reactivity sensitivity coefficients. In this case, the reactivity is the difference between two states and is not necessarily one state measured relative to the critical state. The reactivity sensitivity isolates the impact of factors that change between the two configurations in a way that keff sensitivities do not. TSAR calculates these reactivity sensitivities given the keff sensitivities from both state points, making the TSAR calculation an additional step after two TSUNAMI-1D or -3D calculations. The reactivity sensitivities can be provided to TSURFER as additional measurements with which to adjust nuclear data. This pair of tools is particularly useful for extracting additional information from substitution experiments.

TSAR also propagates nuclear data covariances with the reactivity sensitivities to determine the data-induced uncertainty in the reactivity response.
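The first-order relation between two keff sensitivities and an absolute reactivity sensitivity can be sketched as below. It follows from ρ_i = 1 − 1/k_i for each state; the signs and normalizations here are assumptions for illustration, and the SCALE manual should be consulted for TSAR's exact conventions.

```python
# Absolute reactivity sensitivity from two relative keff sensitivities
# (first-order result; conventions assumed, see the SCALE manual for TSAR's).
def reactivity_sensitivity(s_k1, k1, s_k2, k2):
    """s_ki: relative keff sensitivity (sigma/k)(dk/dsigma) in state i.

    Returns sigma * d(rho2 - rho1)/d(sigma) with rho_i = 1 - 1/k_i.
    """
    return s_k2 / k2 - s_k1 / k1

# Example: an absorber present only in state 2 with a keff sensitivity of
# -0.05 near k = 1 yields a reactivity sensitivity of about -0.05
# (roughly -5000 pcm per 100% cross-section change).
s_rho = reactivity_sensitivity(0.0, 1.0, -0.05, 1.0)
```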

The use of TSAR in NCS validation is limited because the use of TSURFER is still limited. The largest technical impediment to implementing TSAR in NCS validation alongside critical benchmark experiments is likely the issue of correlations among the experiments and correlations with the keff benchmarks from which the reactivity sensitivities are extracted. A consensus would also have to be formed that accepts a broader range of experiments in NCS validation.

The remainder of this section presents a short demonstration of TSAR and the calculation of reactivity sensitivity coefficients for the LCT-079 evaluation [14], followed by a return to the TSURFER example from Section 7.1.1 incorporating reactivity sensitivity data to improve the adjustment to the 103Rh data.

7.2.1 TSAR Example

The LCT-079 evaluation documents a series of critical experiments performed at Sandia National Laboratories with clean arrays of LEU fuel rods and rods containing rhodium foils. These experiments are a perfect example of the utility of TSAR for generating reactivity sensitivity coefficients to gain a better understanding of the 103Rh nuclear data. The 10 cases in the evaluation included two reference cases with no experimental elements and two base cases with experimental elements containing no foils. The two sets of measurements were performed using different fuel rod pitches so that 103Rh absorption was tested in two different spectra. The experimental elements could be opened so that foils could be inserted between fuel pellets. The cases with experimental elements containing no foils isolated the impact of introducing these different fuel rods. Three different foil loadings were measured relative to each base case. Cases 3, 4, and 5 were compared to Case 2. Cases 8, 9, and 10 were compared to Case 7. Cases 1 and 6 were the reference cases. After TSUNAMI-3D calculations were performed for all 10 cases, TSAR inputs could be created to generate reactivity sensitivities for each of the 6 pairs of measurements.

Table 7-1 summarizes the experimental configurations compared with TSAR calculations.

Table 7-1 LCT-079 Case Matrix

                   2.0 cm Pitch    2.8 cm Pitch
Reference Case           1               6
Base Case                2               7
25 µm Foils              3               8
50 µm Foils              4               9
100 µm Foils             5              10

A TSAR input was created for each pair of cases comparing a different foil thickness with its respective base case. A sample input for the case generating reactivity sensitivities for Cases 2 and 5 is shown in Figure 7-3, and the total keff sensitivity profiles for 235U, 1H, and 103Rh are shown in Figure 7-4. The reactivity sensitivity profiles calculated for these same three nuclides are shown in Figure 7-5. The units of the sensitivity are also changed and are now in pcm per change in cross section.

=tsar LCT-79 5-2
read parameter
  sdf_file_1=C:\\Users\\wm4\\Desktop\\LEU-COMP-THERM-079-002.sdf
  sdf_file_2=C:\\Users\\wm4\\Desktop\\LEU-COMP-THERM-079-005.sdf
  type=absolute
end parameter
end

Figure 7-3 TSAR Input for LCT-079 Cases 2 and 5

Figure 7-4 keff Sensitivity Profiles for 235U, 1H, and 103Rh in LCT-079-005

Figure 7-5 Reactivity Sensitivity Profiles for 235U, 1H, and 103Rh for LCT-079 Case 5 Compared to LCT-079 Case 2

It is clear that the reactivity sensitivity of the 103Rh was much larger than its keff sensitivity. The Rh foils were added between cases, and the required number of additional fuel rods was added to the outside of the array to balance the negative reactivity inserted by the Rh. Logically, 235U and 103Rh were significant contributors to the reactivity of Case 5 relative to Case 2. Clearly, 1H was also a significant actor as the primary moderator in the array.

The reactivity sensitivity profiles also reveal some of the competing effects in the experiments. A small positive feature in the 235U profiles is apparent just above 1 eV, coincident with the 103Rh resonance. Neutrons interacting with 235U at this energy were no longer available to be absorbed in 103Rh, so this was a positive reactivity impact at a very specific energy. The impact of the 103Rh resonance on the 1H sensitivity was even more pronounced. It is also evident that the 100 µm foil was self-shielded because the reactivity sensitivity at the peak of the resonance was less negative than the sensitivity on either side of it. This is further illustrated in Figure 7-6, which compares the reactivity sensitivities of Case 3 with Case 2 to the profile for Case 5 with Case 2. The thicker foils in Case 5 provided a stronger test for the 103Rh cross sections below about 0.5 eV, but they did not provide any more information about the approximately 1 eV resonance. Studies like this can be useful for honing experiment designs to ensure that the experiments test the intended features efficiently [31].

Figure 7-6 Reactivity Sensitivity Profiles for 103Rh for LCT-079 Cases 3 and 5 Compared to LCT-079 Case 2

7.2.2 TSURFER with TSAR Reactivity Sensitivities

The reactivity sensitivity coefficients from TSAR can be combined with a reactivity C/E value as an additional input to TSURFER. This allows a targeted data adjustment for the nuclide that was the subject of the substitution. This approach is particularly helpful for scenarios such as the 103Rh in LCT-079, because 103Rh is a major fission product credited in BUC. As discussed in Section 6.3.3, there are no publicly available benchmarks that contain fission products and actinides applicable to spent fuel validation.

As mentioned above, the input change to TSURFER is to add another experiment. The expected value is the reactivity difference of the expected keff values of the two evaluation cases, and the calculated value is the reactivity difference of the two model calculations. For LCT-079 Case 2 and Case 5, for example, the expected keff values are 1.00019 +/- 0.00102 and 1.00046 +/- 0.00102, respectively. This represents a reactivity change of 0.00027 +/- 0.00144. The calculated values are determined by TSURFER based on the keff values contained in the SDFs.
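The expected reactivity change and its uncertainty quoted above follow from the two benchmark keff values, assuming the two case uncertainties combine as uncorrelated:

```python
import math

# Expected reactivity change for LCT-079 Cases 2 and 5 (values from the text).
k2, u2 = 1.00019, 0.00102   # Case 2 expected keff and uncertainty
k5, u5 = 1.00046, 0.00102   # Case 5 expected keff and uncertainty
d_rho = k5 - k2                       # ~0.00027 (keff values near 1.0)
u_rho = math.sqrt(u2**2 + u5**2)      # uncorrelated combination, ~0.00144
# These correspond to the ev=27 and uv=144 entries (in pcm) of the
# TSURFER input shown in Figure 7-7.
```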

In this case, the response also must be identified as an absolute reactivity coefficient, so abs and type=rho must be provided, along with the path to the TSAR-produced SDF and the expected reactivity and its uncertainty. An example input for LCT-079 Cases 2 and 5 is provided in Figure 7-7.

C:\\Users\\wm4\\Desktop\\tsar-lct-79-5-2.react.sdf abs type=rho ev=27 uv=144

Figure 7-7 Example Input of Reactivity Sensitivity Coefficients to TSURFER

After adding all 6 reactivity SDFs from LCT-079, TSURFER can be run again to determine a new set of adjusted data, with more data feeding the adjustment process. The plot of pre- and post-adjustment values is generated with the additional measurements, although they are initially off scale given the different units. The reactivity responses are presented at the right end of Figure 7-8 because they were the last six experiments entered. It is worth noting that all six reactivity responses were retained in the adjustment process.

Figure 7-8 keff and Reactivity C/E Results Before and After Adjustment

The data adjustments resulting from the GLLSM process can be examined in the same manner that they are examined from a TSURFER calculation with only critical benchmarks. Figure 7-2 is expanded to include the adjusted cross-section values, including the TSAR data, as shown in Figure 7-9. Small differences in the adjustments can be noted, especially in the thermal energy range. The 103Rh adjustments based on only the keff data and on both the keff and reactivity sensitivities are provided in Figure 7-10. The impact of adding the reactivity sensitivities is larger here, which is expected given the additional data and larger sensitivity of the substitution experiments. This example illustrates the utility of the TSAR tool to improve nuclear data adjustment for nuclides with available substitution experiments.

Figure 7-9 TSURFER Cross Section Adjustments with and Without Reactivity Sensitivity Data for 238U (n,γ), 235U Fission, and 239Pu Fission

Figure 7-10 TSURFER Cross Section Adjustments with and Without Reactivity Sensitivity Data for 103Rh (n,γ)

A comparison of the adjustments in Figure 7-9 with those in Figure 7-10 shows the impacts of covariance data on the adjustments themselves. There are fewer different levels of adjustment in the 103Rh (n,γ) data than in the 238U (n,γ) data. Certainly, many more sensitivity profiles impact the 238U data, but the energy fidelity of the sensitivity data is the same. The difference comes from the high correlations present in the 103Rh covariance data. The correlation coefficient matrices for the 238U (n,γ) and 103Rh (n,γ) reactions are provided in Figure 7-11 and Figure 7-12, respectively. The correlation coefficients are easier to inspect visually than the covariance matrices because the correlation coefficients are scaled from -1 to 1. Correlations among a small number of groups are evident in the 238U data, but the 103Rh data are fully correlated within almost the entire thermal range and almost the entire intermediate range. These correlations force the adjustment made by TSURFER to be the same across all correlated groups. It is also worth noting that some entirely anticorrelated groups exist in the fast range for the 103Rh data, but no negative correlation coefficients are present in the 238U data.

Figure 7-11 Correlation Matrix for 238U (n,γ)

Figure 7-12 Correlation Matrix for 103Rh (n,γ)

With the addition of the 103Rh reactivity sensitivity data, the final estimated bias from TSURFER was increased to 63 pcm, changing the adjusted application keff value to 0.94227. This is lower than the adjusted keff value that results from only keff data, as reported in Section 7.1.1. The reduced post-adjustment keff value for the application makes sense given the sizeable increase in the 103Rh radiative capture cross section. The additional data allowed for a stronger adjustment, increasing 103Rh (n,γ) from the eleventh to the eighth largest source of bias in the GBC-32 application system. The post-adjustment residual data-induced uncertainty is 1 pcm lower when including the LCT-079 reactivity sensitivities, but the contribution from 103Rh (n,γ) is reduced by approximately 5%. Although the impact of the additional data was small in absolute terms, it underscores the importance of gathering and applying as much measured data as possible in the validation process.

8 SUMMARY AND CONCLUSIONS

This report provides user guidance and recommendations for S/U methods, especially within the context of NCS validation exercises. These tools offer powerful insights into the systems of interest, and they provide methods for propagating nuclear data uncertainties to specific systems of interest and for assessing similarity between application systems and potentially applicable benchmark experiments. Advanced tools provide the ability to extract applicable data from dissimilar systems and estimate the computational method bias and bias uncertainty for a system of interest. S/U tools can also examine substitution experiments to extract reactivity sensitivity coefficients to provide more rigorous tests of specific nuclear data.

The first two sections introduce this report and establish the context with an overview of the NCS validation activity. The safety analysis model provides a list of important materials and processes, and it is these nuclides and reactions that must be validated. Benchmarks can be constructed for rigorously described critical experiments to provide high-quality reference results with which computational methods can be assessed. Because the majority of the bias in modern neutron transport methods comes from the nuclear data, system similarity with respect to nuclear data use is the primary criterion for identifying applicable benchmark experiments.

Section 3 provides a brief overview of important guidance in the literature. NUREG/CR-6655 [5, 6], a landmark report issued in 1999, introduces many fundamental concepts of S/U methods and tools for NCS validation. These tools reached a level of maturity in the TSUNAMI tools introduced throughout the first decade of the 21st century, culminating in the release of SCALE 6. Rearden et al. provide an extensive theoretical and practical introduction to the TSUNAMI tools and their use in a 2011 Nuclear Technology article [21] that remains the basis for the tools and their use more than a decade later. The introduction of CE TSUNAMI-3D [27] with the release of SCALE 6.2 in 2016 was the most recent significant capability added to the TSUNAMI suite, and the initial guidance for its use is provided by Jones [23].

Users of any computational method should comprehend its theoretical basis to enhance their ability to understand the results of the analyses and to identify potentially erroneous results.

Section 4 introduces the definition and use of sensitivity coefficients and provides an overview of the adjoint perturbation theory methods underpinning their calculation. MG and CE methods for generating keff sensitivity coefficients in 1D and 3D are also introduced. The TSUNAMI suite provides a range of flexible methods to efficiently calculate sensitivity coefficients based on the characteristics of the systems being analyzed and the computational method being used in the analysis. The use of nuclear covariance data in uncertainty analysis and similarity assessment is also presented.
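A sensitivity coefficient of the kind introduced in Section 4 is the fractional change in keff per fractional change in a cross section, S = (dk/k)/(dσ/σ), and its calculated value can be checked with a finite-difference direct perturbation estimate. The sketch below uses hypothetical keff values and a hypothetical ±2% perturbation; it illustrates the arithmetic only, not the adjoint-based TSUNAMI calculation itself.

```python
# Hedged sketch of a central-difference direct perturbation (DP) estimate
# of an energy-integrated sensitivity coefficient, S = (dk/k)/(dsigma/sigma).
# The keff values and the +/-2% perturbation are hypothetical; real DP
# checks use pairs of Monte Carlo transport calculations.

def dp_sensitivity(k_plus, k_minus, k_nominal, rel_perturbation):
    """Central-difference estimate of (dk/k)/(dsigma/sigma)."""
    dk_over_k = (k_plus - k_minus) / k_nominal
    dsig_over_sig = 2.0 * rel_perturbation  # total swing between the two runs
    return dk_over_k / dsig_over_sig

# keff computed with the cross section perturbed by +2% and by -2%:
S = dp_sensitivity(k_plus=1.00230, k_minus=0.99870, k_nominal=1.00050,
                   rel_perturbation=0.02)
print(round(S, 4))  # approximately 0.09 for these hypothetical values
```

The Monte Carlo statistical uncertainty on each keff propagates directly into this estimate, which is why the perturbation must be large enough to produce a keff swing well outside the combined uncertainties.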

An extensive set of user recommendations for the use of S/U methods in NCS validation is provided in Section 5. These recommendations are intended to help analysts use the TSUNAMI tools not only efficiently, but also correctly. Suggestions are provided for using TSUNAMI-1D and three different TSUNAMI-3D methods and for confirming the results of these calculations through the use of DP calculations. Practical guidance is also provided for using nuclear data uncertainty propagation to address validation gaps and weaknesses and for performing similarity assessments with the integral parameter ck. This parameter is probably the most widely discussed output from the TSUNAMI tools, but it is not reliable without a solid foundation of correct sensitivity coefficients and covariance data. Section 5.6 discusses the available sensitivity data generated by the NEA [36] and distributed with the ICSBEP Handbook [14], which allow initial applicability screening with large numbers of benchmarks.
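The ck index is, in essence, the correlation coefficient between the nuclear-data-induced uncertainties of two systems, formed from their sensitivity profiles and a shared covariance matrix. A minimal sketch of that computation follows; the 3-group covariance matrix and sensitivity vectors are hypothetical, whereas real assessments use full multigroup profiles and covariance libraries through tools such as TSUNAMI-IP.

```python
import numpy as np

# Hedged sketch of the ck similarity index:
#   ck = (Sa C Sb^T) / sqrt((Sa C Sa^T)(Sb C Sb^T))
# where Sa, Sb are sensitivity profiles and C is the relative covariance
# matrix of the nuclear data. All numbers below are hypothetical.

def c_k(S_a, S_b, C):
    """Correlation of data-induced uncertainty between systems a and b."""
    num = S_a @ C @ S_b
    den = np.sqrt((S_a @ C @ S_a) * (S_b @ C @ S_b))
    return num / den

C = np.array([[4.0, 1.0, 0.0],          # hypothetical 3-group relative
              [1.0, 9.0, 2.0],          # covariance matrix (units 1e-6)
              [0.0, 2.0, 16.0]]) * 1.0e-6
S_app = np.array([0.10, 0.25, 0.05])    # application sensitivity profile
S_bench = np.array([0.12, 0.22, 0.07])  # benchmark sensitivity profile

print(round(c_k(S_app, S_bench, C), 4))  # ~0.992 for these profiles
```

By construction ck = 1 when a system is compared with itself, and values near 1 indicate that the two systems draw their data-induced uncertainty from the same nuclear data in the same proportions.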

The theory outlined in Section 4 and the recommendations provided in Section 5 are combined in the three case study demonstrations presented in Section 6. The first example examines benchmark similarity for a fresh BWR assembly transportation package. This assessment provides insight into the impact of the array size in the application model, as well as the impact of different covariance libraries on the calculated ck index. The second study looks ahead to the shipment of TRISO fuel with HALEU for advanced reactor applications. An assessment of applicable benchmark experiments and validation gaps indicates that transportation packages could be validated with the critical benchmarks that are currently available. The final case study is excerpted from a recent examination of PWR BUC validation [72], which itself expands on a landmark report that provides a basis for minor actinide and major fission product credit in SNF storage and transportation [10]. Each of these case studies demonstrates TSUNAMI-3D techniques, including CLUTCH, IFP, and the MG method. Each study also includes DP calculations, similarity assessment via ck, and assessment of validation gaps and weaknesses.

The analysis of the results from the benchmarks to generate a bias and bias uncertainty in validation is left to other guidance documents developed specifically for this purpose, including Dean and Tayloe [2], Lichtenwalter et al. [3], and Clarity et al. [4].

Section 7 examines some of the advanced capabilities incorporated into the TSURFER and TSAR codes within SCALE. The discussion includes the potential of these techniques to expand the validation basis to include nearly all benchmark experiments. The ability of the GLLSM to extract useful bias information from any benchmark and to apply it to the application of interest has the potential to significantly reduce validation gaps. However, some technical and regulatory limitations hinder the use of TSURFER at this writing. Active research is being performed to address these issues and to fully incorporate this powerful new tool into the NCS validation toolbox for systems that are particularly difficult to validate because of the limited number of applicable critical benchmark experiments.
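The GLLSM adjustment at the heart of TSURFER can be sketched in a few lines of linear algebra: benchmark discrepancies d = (E - C)/C are explained by a least-squares adjustment of the nuclear data, which is then projected onto the application through its sensitivities. Every number below is hypothetical (two benchmarks, two nuclear data parameters), and real TSURFER runs add chi-squared consistency filtering and far larger data sets; this only illustrates the shape of the calculation.

```python
import numpy as np

# Hedged sketch of a generalized linear least squares (GLLS) adjustment of
# the kind TSURFER performs. All inputs are hypothetical.

def glls_adjustment(S, d, C_sigma, C_d):
    """Return the relative cross-section adjustment (delta sigma / sigma)."""
    G = S @ C_sigma @ S.T + C_d  # combined data + benchmark covariance
    return C_sigma @ S.T @ np.linalg.solve(G, d)

S = np.array([[0.30, -0.10],             # 2 benchmarks x 2 data parameters
              [0.25, -0.05]])
d = np.array([-0.0020, -0.0015])         # (E - C)/C benchmark discrepancies
C_sigma = np.diag([1.0e-4, 4.0e-5])      # prior relative data covariance
C_d = np.diag([1.0e-6, 1.0e-6])          # benchmark uncertainty covariance

dsig = glls_adjustment(S, d, C_sigma, C_d)
S_app = np.array([0.28, -0.08])          # application sensitivity profile
bias = S_app @ dsig                      # predicted application bias (dk/k)
print(dsig, bias)
```

Because the application shares sensitivities with the benchmarks, the data adjustment that reconciles the benchmark discrepancies also predicts a bias for the application, which is the mechanism that lets GLLSM extract bias information even from dissimilar experiments.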

This document focuses on the methods and tools available within the SCALE TSUNAMI suite, but a number of radiation transport code systems around the world now deploy S/U capabilities. These other codes provide similar capabilities for keff sensitivity coefficient generation and similarity assessment. The same theoretical basis applies to all of these methods, and many of the specific recommendations included here also apply to these other codes.

In conclusion, S/U tools have proven themselves to be valuable for improving the rigor and defensibility of NCS validation efforts for the last quarter century. A thorough understanding of the theoretical bases for the methods helps ensure appropriate use. Actionable guidance from code developers and expert users enables efficient use throughout the NCS community. Correct and efficient use of S/U tools will enhance validation studies in the future and will allow for the maximum utilization of available measured benchmark data.

9 REFERENCES

1. ANSI/ANS-8.24-2017, Validation of Neutron Transport Methods for Nuclear Criticality Safety Calculations, American Nuclear Society, La Grange Park, IL, 2017.
2. J. C. Dean and R. W. Tayloe, Guide for Validation of Nuclear Criticality Safety Calculational Methodology, NUREG/CR-6698, January 2001.
3. J. J. Lichtenwalter, S. M. Bowman, M. D. DeHart, and C. M. Hopper, Criticality Benchmark Guide for Light-Water-Reactor Fuel in Transportation and Storage Packages, NUREG/CR-6361, March 1997.
4. J. B. Clarity, W. J. Marshall, D. E. Mueller, S. S. Powers, B. T. Rearden, and S. M. Bowman, Determination of Bias and Bias Uncertainty for Criticality Safety Computational Methods, NUREG/CR-7311, May 2025.
5. B. L. Broadhead, C. M. Hopper, R. L. Childs, and C. V. Parks, Sensitivity and Uncertainty Analyses Applied to Criticality Safety Validation: Methods Development, NUREG/CR-6655, Vol. 1, November 1999.
6. B. L. Broadhead, C. M. Hopper, and C. V. Parks, Sensitivity and Uncertainty Analyses Applied to Criticality Safety Validation: Illustrative Applications and Initial Guidance, NUREG/CR-6655, Vol. 2, November 1999.
7. W. A. Wieselquist and R. A. Lefebvre, Eds., SCALE 6.3.1 User Manual, ORNL/TM-SCALE-6.3.1, February 2023.
8. R. A. Knief, Nuclear Criticality Safety: Theory and Practice, American Nuclear Society, La Grange Park, IL, 1985.
9. ANSI/ANS-8.1-2014, Nuclear Criticality Safety in Operations with Fissionable Materials Outside Reactors, American Nuclear Society, La Grange Park, IL, 2014.
10. J. M. Scaglione, D. E. Mueller, J. C. Wagner, and W. J. Marshall, An Approach for Validating Actinide and Fission Product Burnup Credit Criticality Safety Analyses - Criticality (keff) Predictions, NUREG/CR-7109, April 2012.
11. W. J. Marshall, J. B. Clarity, and S. M. Bowman, Validation of keff Calculations for Extended BWR Burnup Credit, NUREG/CR-7252, December 2018.
12. F. Johansson and H. Liljenfeldt, Use of TSUNAMI in Validation of SCALE 6.1 for the Swedish Spent Fuel Repository - Selection of Experiments, Proceedings of NCSD 2013, Wilmington, NC, October 2013.
13. M. E. Rising, J. C. Armstrong, S. R. Bolding, et al., MCNP Code Version 6.3.0 Release Notes, LA-UR-22-33103, Rev. 1, January 2023.
14. International Handbook of Evaluated Criticality Safety Benchmark Experiments, NEA/NSC/DOC(95)03, Organisation for Economic Co-operation and Development, Nuclear Energy Agency, Paris, France, 2024.
15. T. M. Greene and W. J. Marshall, SCALE 6.2.4 Validation: Nuclear Criticality Safety, ORNL/TM-2020/1500v2, November 2022.
16. C. J. Posey, A. R. Clark, J. A. Kulesza, E. J. Pearson, and M. E. Rising, MCNP Code Version 6.3.0 Verification & Validation Testing, LA-UR-22-32951, December 2022.
17. M. B. Chadwick et al., ENDF/B-VII.1 Nuclear Data for Science and Technology: Cross Sections, Covariances, Fission Product Yields and Decay Data, Nuclear Data Sheets, Volume 112, pp. 2887-2996, December 2011.
18. M. A. Marshall, ORSPHERE: Critical, Bare, HEU(93.2)-Metal Sphere, HEU-MET-FAST-100, International Handbook of Evaluated Criticality Safety Benchmark Experiments, NEA/NSC/DOC(95)03, Paris, France, 2023.
19. G. A. Harms, Partially-Reflected Water-Moderated Square-Pitched U(6.90)O2 Fuel Rod Lattices with 0.52 Fuel to Water Volume Ratio (0.855 cm pitch), LEU-COMP-THERM-101, International Handbook of Evaluated Criticality Safety Benchmark Experiments, NEA/NSC/DOC(95)03, Paris, France, 2023.
20. J. D. Norris, N. J. Killingsworth, C. G. Percher, E. E. Aboud, V. Ravindra, S. Graham, A. O'Neill, and D. Hill, Integral Experiment Final Design for Thermal/Epithermal eXperiments (TEX) Using Highly Enriched Uranium and Polyethylene at Low Temperature (IER-479 CED-2 Report), LLNL-TR-838819, August 2022.
21. B. T. Rearden, M. L. Williams, M. A. Jessee, D. E. Mueller, and D. A. Wiarda, Sensitivity and Uncertainty Analysis Capabilities and Data in SCALE, Nuclear Technology, Volume 174, pp. 236-288, May 2011.
22. B. T. Rearden, D. E. Mueller, S. M. Bowman, R. D. Busch, and S. J. Emerson, TSUNAMI Primer: A Primer for Sensitivity/Uncertainty Calculations with SCALE, ORNL/TM-2009/027, January 2009.
23. E. L. Jones, User Perspective and Analysis of the Continuous-Energy Sensitivity Methods in SCALE 6.2 using TSUNAMI-3D, Master of Science Thesis, University of Tennessee, May 2015.
24. J. A. Kulesza, T. R. Adams, J. C. Armstrong, et al., MCNP Code Version 6.3.0 Theory & User Manual, LA-UR-22-30006, Rev. 1, September 2022.
25. K. L. Reed, W. J. Marshall, and V. Karriem, Assessing the Impact of Sensitivity/Uncertainty Selection Criteria on Computational Bias Prediction, Transactions of the American Nuclear Society, Volume 129, pp. 582-585, November 2023.
26. B. C. Kiedrowski, Methodology for Sensitivity and Uncertainty-Based Criticality Safety Validation, LA-UR-14-23202, Los Alamos National Laboratory, Los Alamos, NM, 2014.
27. C. M. Perfetti, B. T. Rearden, and W. R. Martin, SCALE Continuous-Energy Eigenvalue Sensitivity Coefficient Calculations, Nuclear Science and Engineering, Volume 182, Number 3, pp. 332-353, March 2017.
28. W. J. Marshall, E. L. Jones, B. T. Rearden, and M. E. Dunn, A Case Study in the Application of TSUNAMI-3D - Part 1, Multigroup, Transactions of the American Nuclear Society, Volume 115, pp. 673-676, November 2016.
29. E. L. Jones, W. J. Marshall, B. T. Rearden, M. E. Dunn, and G. I. Maldonado, A Case Study in the Application of TSUNAMI-3D - Part 2, Continuous Energy, Transactions of the American Nuclear Society, Volume 115, pp. 677-680, November 2016.
30. E. L. Jones, J. B. Clarity, W. J. Marshall, B. T. Rearden, and G. I. Maldonado, A Case Study in the Application of TSUNAMI-3D - Part 3, Continuous Energy - Iterated Fission Probability Method, Transactions of the American Nuclear Society, Volume 119, pp. 845-848, November 2018.
31. B. T. Rearden, W. J. Anderson, and G. A. Harms, Use of Sensitivity and Uncertainty Analysis in the Design of Reactor Physics and Criticality Benchmark Experiments for Advanced Nuclear Fuel, Nuclear Technology, Volume 151, Number 2, pp. 133-158, August 2005.
32. J. B. Clarity, W. J. Marshall, B. T. Rearden, and I. Duhamel, Selected Uses of TSUNAMI in Critical Experiment Design and Analysis, Transactions of the American Nuclear Society, Volume 123, pp. 804-807, November 2020.
33. J. C. Wagner, Computational Benchmark for Estimation of Reactivity Margin from Fission Products and Minor Actinides in PWR Burnup Credit, NUREG/CR-6747, October 2001.
34. B. L. Broadhead, B. T. Rearden, C. M. Hopper, J. J. Wagschall, and C. V. Parks, Sensitivity- and Uncertainty-Based Criticality Safety Validation Techniques, Nuclear Science and Engineering, Volume 146, Number 3, pp. 340-366, March 2004.
35. W. J. Marshall and B. T. Rearden, The SCALE Verified Archived Library of Inputs and Data - VALID, Proceedings of NCSD 2013, Wilmington, NC, October 2013.
36. I. Hill, J. Gulliford, J. B. Briggs, B. T. Rearden, and T. Ivanova, Generation of 1800 New Sensitivity Data Files for ICSBEP Using SCALE 6.0, Transactions of the American Nuclear Society, Volume 109, pp. 867-869, November 2013.
37. A. Hara, T. Takeda, and Y. Kikuchi, SAGEP: A Two-Dimensional Sensitivity Analysis Code Based on Generalized Perturbation Theory, JAERI-M 84-027, February 1984.
38. T. M. Greene and W. J. Marshall, Nuclear Data and Cross Section Testing Using ENDF/B-VIII.0, ORNL/TM-2020/1868, February 2021.
39. W. J. Marshall, J. B. Clarity, and E. M. Saylor, Sensitivity Calculations for Systems with Fissionable Reflector Materials Using TSUNAMI, Transactions of the American Nuclear Society, Volume 119, pp. 787-790, November 2018.
40. B. C. Kiedrowski, F. B. Brown, and P. P. H. Wilson, Adjoint-Weighted Tallies for k-Eigenvalue Calculations with Continuous-Energy Monte Carlo, Nuclear Science and Engineering, Volume 168, Number 3, pp. 226-241, July 2011.
41. K. B. Bekar, S. R. Johnson, C. M. Perfetti, et al., Iterated Fission Probability Sensitivity Capability in SCALE via Shift, ORNL/TM-2020/4, February 2020.
42. T. M. Greene, W. J. Marshall, and J. B. Clarity, Impact of Increased Latent Generations on Sensitivity Calculations with SCALE, Proceedings of NCSD 2022, Anaheim, CA, June 2022.
43. Investigation of Covariance Data in General Purpose Nuclear Data Libraries, NEA/NSC/R(2021)4, February 2023.
44. D. Neudecker, B. Hejnal, F. Tovesson, et al., Template for Estimating Uncertainties of Measured Neutron-Induced Fission Cross Sections, Proceedings of the 4th International Workshop on Nuclear Data Covariances, October 2017.
45. V. Sobes, W. J. Marshall, D. Wiarda, et al., ENDF/B-VIII.0 Covariance Data Development and Testing for Advanced Reactors, ORNL/TM-2018/1037, March 2019.
46. W. J. Marshall, Covariance Testing Progress for ENDF/B-VIII.1 at ORNL, presentation at CSEWG 2023, November 2023.
47. D. A. Brown et al., ENDF/B-VIII.0: The 8th Major Release of the Nuclear Reaction Data Library with CIELO-Project Cross Sections, New Standards and Thermal Scattering Data, Nuclear Data Sheets, Volume 148, pp. 1-142, February 2018.
48. M. L. Williams and B. T. Rearden, SCALE-6 Sensitivity/Uncertainty Methods and Covariance Data, Nuclear Data Sheets, Volume 109, Number 12, pp. 2796-2800, December 2008.
49. R. C. Little, T. Kawano, G. D. Hale, et al., Low-Fidelity Covariance Project, Nuclear Data Sheets, Volume 109, Number 12, pp. 2828-2833, December 2008.
50. W. J. Marshall, M. L. Williams, D. Wiarda, et al., Development and Testing of Neutron Cross Section Covariance Data for SCALE 6.2, Proceedings of the International Conference on Nuclear Criticality Safety, Charlotte, NC, September 2015.
51. E. M. Saylor, W. J. Marshall, J. B. Clarity, Z. J. Clifton, and B. T. Rearden, Criticality Safety Validation of SCALE 6.2.2, ORNL/TM-2018/884, September 2018.
52. W. J. Marshall, D. Wiarda, and M. L. Williams, Evaluation of ENDF/B-VIII Covariance Data, presentation at mini-CSEWG, May 2017.
53. Standard Review Plan for Spent Fuel Dry Storage Systems and Facilities, NUREG-2215, April 2020.
54. J. Borowski, M. Call, D. Dunn, et al., Standard Review Plan for Transportation Packages for Spent Fuel and Radioactive Material, NUREG-2216, August 2020.
55. M. Jessee, J. Yang, U. Mertyurek, W. Marshall, and A. Holcomb, SCALE Lattice Physics Code Assessments of Accident-Tolerant Fuel, ORNL/TM-2019/1400, June 2020.
56. D. Lurie, L. Abramson, and J. Vail, Applying Statistics, NUREG-1475, Rev. 1, March 2011.
57. Standard Review Plan for Fuel Cycle Facilities License Applications, NUREG-1520, Rev. 2, June 2015.
58. Justification for Minimum Margin of Subcriticality for Safety, FCSS ISG-10, June 2006.
59. T. M. Greene, W. J. Marshall, and J. B. Clarity, Reducing Direct Perturbation Uncertainty for High-Sensitivity Coefficients, Transactions of the American Nuclear Society, Volume 124, pp. 372-375, June 2021.
60. K. Bekar, J. Clarity, M. Dupont, et al., KENO V.a Primer: Performing Calculations Using SCALE's Criticality Safety Analysis Sequence (CSAS5) with Fulcrum, ORNL/TM-2020/1664, December 2020.
61. K. Bekar, J. Clarity, M. Dupont, et al., KENO-VI Primer: Performing Calculations Using SCALE's Criticality Safety Analysis Sequence (CSAS6) with Fulcrum, ORNL/TM-2020/1601, December 2020.
62. W. J. Marshall, Lost and Found Opportunities Around the Chlorine Worth Study, Transactions of the 12th International Conference on Nuclear Criticality Safety (ICNC2023), October 2023.
63. W. J. Marshall and T. M. Greene, Applicability of the ORCEF UF4/CF2 Experiments to Validation of 30 UF6 Cylinders, Proceedings of NCSD 2022, Anaheim, CA, June 2022.
64. W. J. Marshall, J. B. Clarity, and K. Banerjee, Performing keff Validation of As-Loaded Criticality Safety Calculations using UNF-ST&DARDS: Sensitivity Calculations, Transactions of the American Nuclear Society, Volume 122, pp. 479-482, June 2020.
65. W. J. Marshall and A. Lang, Sensitivity Calculations for Systems with Polyethylene Reflector Materials Using CLUTCH, Transactions of the American Nuclear Society, Volume 124, pp. 376-378, June 2021.
66. T. M. Greene, K. Bekar, and W. J. Marshall, Deterministic-Monte Carlo Hybrid Methods for Eigenvalue Sensitivity Coefficient Calculations, Proceedings of the 12th International Conference on Nuclear Criticality Safety (ICNC 2023), Sendai, Japan, October 2023.
67. D. E. Mueller, W. J. Marshall, D. G. Bowen, and J. C. Wagner, Bias Estimates in Lieu of Validation of Fission Products and Minor Actinides in MCNP keff Calculations for PWR Burnup Credit Casks, NUREG/CR-7205, September 2015.
68. J. Alwin, F. Brown, J. Clarity, I. Duhamel, F. Fernex, L. Leal, R. Little, B. J. Marshall, M. Rising, E. Saylor, and K. Spencer, S/U Comparison Study with a Focus on USLs, Transactions of the American Nuclear Society, Volume 123, pp. 780-783, November 2020.
69. D. E. Mueller, J. M. Scaglione, J. C. Wagner, and S. M. Bowman, Computational Benchmark for Estimated Reactivity Margin from Fission Products and Minor Actinides in BWR Burnup Credit, NUREG/CR-7157, February 2013.
70. J. B. Clarity, W. J. Marshall, and E. M. Saylor, User Experiences with ICSBEP Distributed Sensitivity Data Profiles with the SCALE Sensitivity and Uncertainty Methods as of Winter 2019, Transactions of the American Nuclear Society, Volume 120, pp. 550-553, June 2019.
71. D. E. Mueller, B. T. Rearden, and D. F. Hollenbach, Application of the SCALE TSUNAMI Tools for the Validation of Criticality Safety Calculations Involving 233U, ORNL/TM-2008/196, January 2009.
72. W. Metwally, M. Dupont, A. Lang, et al., Validation Studies for High Burnup and Extended Enrichment Fuels in Burnup Credit Criticality Safety Analyses, NUREG/CR-7309, April 2025.
73. R. A. Hall, W. J. Marshall, and W. A. Wieselquist, Assessment of Existing Transportation Packages for Use with HALEU, ORNL/TM-2020/1725, September 2020.
74. Title 10, Code of Federal Regulations, Part 71, Packaging and Transportation of Radioactive Material, October 2020.
75. F. Fernex, Programme HTC - Phase 1: Réseaux de Crayons dans L'eau Pure (Water-Moderated and Reflected Simple Arrays), Réévaluation des Expériences, DSU/SEC/T/2005-33/D.R., May 2008.
76. F. Fernex, Programme HTC - Phase 2: Réseaux Simples en eau Empoisonnée (Bore et Gadolinium) (Reflected Simple Arrays Moderated by Poisoned Water with Gadolinium or Boron), Réévaluation des Expériences, DSU/SEC/T/2005-38/D.R., May 2008.
77. F. Fernex, Programme HTC - Phase 3: Configurations "Stockage en Piscine" (Pool Storage), Réévaluation des Expériences, DSU/SEC/T/2005-37/D.R., May 2008.
78. F. Fernex, Programme HTC - Phase 4: Configurations "Châteaux de Transport" (Shipping Cask), Réévaluation des Expériences, DSU/SEC/T/2005-36/D.R., May 2008.
79. M. Stuke and A. Hoefer, Role of Integral Experiment Covariance Data for Criticality Safety Validation: EG UACSA Benchmark Phase IV, NEA/NSC/R(2021)1, January 2023.
80. Daher-TLI, Versa-Pac Safety Analysis Report, Rev. 10, US NRC Accession Number ML18087A454, March 2018.
81. US NRC, Certificate of Compliance #9342, Revision 18, April 2023.
82. R. Elzohery, D. Hartanto, F. Bostelmann, and W. Wieselquist, "SCALE & MELCOR Non-LWR Fuel Cycle Demonstration Project - High Temperature Gas-Cooled Reactors," presented at NRC Public Workshop, US NRC Accession Number ML23058A213, February 2023.
83. W. J. Marshall, O. M. Belcher, N. H. Byrne, et al., Expanded Validation of Uranium Systems with the KENO Monte Carlo Codes and SCALE 6.2.4, Proceedings of PHYSOR 2022, pp. 2664-2673, May 2022.
84. Spent Fuel Project Office, Standard Review Plan for Spent Fuel Dry Storage Facilities, NUREG-1567, March 2000.
85. Spent Fuel Project Office, Standard Review Plan for Transportation Packages for Radioactive Material, NUREG-1609, March 1999.
86. Spent Fuel Project Office, Standard Review Plan for Transportation Packages for Spent Nuclear Fuel, NUREG-1617, March 2000.
87. F. Sommer, W. Marshall, and M. Stuke, Correlation of HST-001 Due to Uncertain Technical Parameters - Comparison of Results from SUnCISTT, Sampler, and DICE, Proceedings of the 11th International Conference on Nuclear Criticality Safety (ICNC2019), October 2019.
88. C. M. Perfetti and B. T. Rearden, Estimating Code Biases for Criticality Safety Applications with Few Relevant Benchmarks, Nuclear Science and Engineering, Volume 193, Number 10, pp. 1090-1128, May 2019.
89. Title 10, Code of Federal Regulations, Part 50.68, Criticality Accident Requirements, 71 FR 66648, November 2006.
90. H. S. Abdel-Khalik, D. Huang, U. Mertyurek, W. J. Marshall, and W. A. Wieselquist, Overview of the Tolerance Limit Calculations with Application to TSURFER, Energies, Volume 14, Number 21, 7092, October 2021.
NUREG/CR-7308
ORNL/TM-2024/3277

Sensitivity/Uncertainty Methods for Nuclear Criticality Safety Validation

William J. Marshall, Travis M. Greene, Alex M. Shaw, Cihangir Celik, Mathieu N. Dupont
Oak Ridge National Laboratory, Oak Ridge, TN 37831

Division of Systems Analysis, Office of Nuclear Regulatory Research, U.S. Nuclear Regulatory Commission, Washington, D.C. 20555-0001
L. Kyriazidis, NRC Project Manager

April 2025

Abstract: The US Code of Federal Regulations (CFR) requires validation of the numerical methods used in criticality safety analyses. This validation requires the comparison of computational results with measurements of physical systems which are neutronically similar to those used in the safety analysis being performed. To this end, this document examines sensitivity/uncertainty (S/U) methods and their applications primarily to nuclear criticality safety validation activities. This document reviews relevant prior written guidance issued between 1999 and 2015. A brief theoretical background is provided on sensitivity coefficients, methods of calculating keff sensitivity coefficients, nuclear covariance data, uncertainty analysis, and similarity assessment. Specific recommendations for using S/U methods to calculate sensitivity coefficients, confirm their accuracy, perform uncertainty analysis of validation gaps, and assess benchmark similarity are also provided. There is also a brief review of publicly available sensitivity data which can be used to perform similarity assessments. Three case studies are provided demonstrating the use of S/U methods for the generation of sensitivity coefficients, similarity assessment, and validation gap margin estimation. Finally, advanced S/U capabilities are summarized, including a discussion of challenges associated with deployment of these techniques.

Keywords: Sensitivity/uncertainty methods; Nuclear criticality safety validation