ML25044A147
Recommendations Report for SSHAC Level 1
Issue date: 03/31/2025
From: Miriam Juckett, Clifford Munson, Payne R, Rodriguez-Marek A, Stamatakos J, Scott Stovall, Ulmer K, Thomas Weaver
NRC/RES/DE/SGSEB, Pacific Northwest National Laboratory, Southwest Research Institute, Virginia Tech
Shared Package: ML25044A146
References: RIL 2025-11

RIL 2025-11
Recommendations Report for SSHAC Level 1 Demonstration Project
March 2025

Cliff Munson(1), Ryan Payne(1), Adrian Rodriguez-Marek(2), John Stamatakos(3), Scott Stovall(1), Kristin Ulmer(3), Thomas Weaver(1), Miriam Juckett(4)

(1) Nuclear Regulatory Commission
(2) Department of Civil and Environmental Engineering, Virginia Tech
(3) Center for Nuclear Waste Regulatory Analysis, Southwest Research Institute
(4) Pacific Northwest National Laboratory

Thomas Weaver, NRC Project Manager
Research Information Letter
Office of Nuclear Regulatory Research

DISCLAIMER

Legally binding regulatory requirements are stated only in laws, NRC regulations, licenses (including technical specifications), or orders; not in Research Information Letters (RILs). A RIL is not regulatory guidance, although the NRC's regulatory offices may consider the information in a RIL to determine whether any regulatory actions are warranted.

The authors of this report are members of the Technical Integration and Hazard Analysis teams, who, in accordance with the requirements for a SSHAC Level 1 process described in NUREG-2213 (NRC, 2018), developed the methods, models, and analyses provided in this report. The quality assurance measures follow those recommended in Section 3.3.2 of NUREG-2213, including full documentation of the study and a participatory peer review. According to Section 3.4 of NUREG-2213, individuals selected for this study are all recognized as subject matter experts with the requisite training, education, and knowledge to take on their assigned responsibilities. Nearly all team members had prior SSHAC experience, and all team members conducted themselves according to their role on the Technical Integration Team, the Hazard Analysis Team, or as members of the Participatory Peer Review Panel. Most importantly, all team members acted impartially and honored the SSHAC commitment to serve as impartial project participants and not as representatives of any external agencies or their own professional organizations.

EXECUTIVE SUMMARY

The U.S. Nuclear Regulatory Commission (NRC) co-sponsored development of the Senior Seismic Hazard Analysis Committee (SSHAC) process in 1997 to provide guidance on how to perform a probabilistic seismic hazard analysis (PSHA), with special attention to capturing uncertainty and using expert judgment. This process has been successfully implemented many times to characterize seismic hazard at nuclear power plants (NPPs) and other nuclear facilities in the U.S. and internationally. Based on years of NRC and industry experience performing SSHAC studies, the NRC developed implementation guidance for SSHAC hazard studies in 2012 and again in 2018. The SSHAC guidance defines four levels of study; as the SSHAC level increases from one to four, the number of participating experts, the transparency afforded by formal workshops, the level of documentation, and the degree of regulatory assurance all increase. The time and cost of a SSHAC study also increase at higher SSHAC levels.

SSHAC Level 3 studies have been the standard of practice for probabilistic seismic hazard studies used in siting large light water reactors. Because new and advanced reactors will likely contain less radiological material and/or have passive systems that control the release of radiation in an accident, lower level SSHAC studies are expected to provide sufficient regulatory assurance for developing the Safe Shutdown Earthquake Ground Motion used to design structures, systems, and components, while maintaining adequate protection of people and the environment. This document provides recommendations, which the authors believe will meet the intent of the siting regulations, for how industry can perform a SSHAC Level 1 study for such new and advanced reactors. Those regulations require investigating a site and its environs in sufficient scope to permit an adequate evaluation supporting estimates of the Safe Shutdown Earthquake Ground Motion. In particular, the recommendations in this report are intended to guide development of a SSHAC Level 1 PSHA that is more streamlined and efficient than a SSHAC Level 2 or Level 3 study, while still meeting the central goal of a SSHAC study: capturing the center, body, and range (CBR) of technically defensible interpretations (TDI) of the data and models used in the PSHA.

This report documents a new approach to capture uncertainty in seismicity that occurs at locations where there are no known faults (spatially smoothed seismicity) and a new generic ground motion model applicable for use in the western United States (WUS). The approach to capture uncertainty in spatially smoothed seismicity and the new generic ground motion model are easy to implement in future SSHAC studies and their implementation can increase the efficiency of future hazard studies.
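The spatially smoothed seismicity approach follows the general pattern of adaptive nearest-neighbor Gaussian kernel smoothing (filter the declustered catalog, compute event rates, smoothing distances, kernel normalization, and smoothed rates, per the steps listed in Section 3.3). The sketch below is a minimal illustration of that general technique, not the report's implementation; the function name, the k-th-nearest-neighbor bandwidth rule, and the bandwidth floor `h_min` are illustrative assumptions.

```python
import numpy as np

def adaptive_smoothed_rates(event_xy, grid_xy, k=4, h_min=1.0):
    """Spatially smoothed seismicity rate density via adaptive Gaussian kernels.

    event_xy: (n_events, 2) epicenter coordinates in km on a local projection.
    grid_xy:  (n_cells, 2) grid-cell center coordinates in km.
    k: nearest neighbor number; the kernel width for each event is the
       distance to its k-th nearest neighboring epicenter, so kernels are
       narrow in dense clusters and broad in sparse regions.
    Returns events per unit area; multiply by cell area for per-cell rates.
    """
    event_xy = np.asarray(event_xy, float)
    grid_xy = np.asarray(grid_xy, float)
    density = np.zeros(len(grid_xy))
    for e in event_xy:
        d = np.sort(np.linalg.norm(event_xy - e, axis=1))   # d[0] = 0 (self)
        h = max(d[min(k, len(d) - 1)], h_min)               # adaptive bandwidth
        r2 = np.sum((grid_xy - e) ** 2, axis=1)
        # Normalized 2-D Gaussian kernel: integrates to one event
        density += np.exp(-r2 / (2.0 * h * h)) / (2.0 * np.pi * h * h)
    return density
```

Varying the nearest neighbor number `k` broadens or sharpens the smoothed rate map, which is the lever Section 3.3.7 uses (with weighted alternative values of `k`) to capture the center, body, and range of the smoothed rates.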

This report also provides practical recommendations that can be implemented in future SSHAC Level 1 studies to increase efficiency while still meeting SSHAC goals. These recommendations include leveraging existing data and models and incorporating sensitivity analyses to support technical integration team decisions used in developing the PSHA logic tree.
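The sensitivity analyses referenced above compare mean and fractile hazard curves computed from alternative logic tree branch sets (as in the full 36,450-combination versus reduced 2,025-combination comparisons listed among the figures). The helper below is a generic sketch of that bookkeeping, not the project's hazard software; the function name and the weighted-empirical-fractile convention are assumptions.

```python
import numpy as np

def mean_and_fractiles(branch_curves, weights, fractiles=(0.05, 0.5, 0.95)):
    """Weighted mean and fractile hazard curves across logic-tree branches.

    branch_curves: (n_branches, n_levels) annual frequencies of exceedance,
                   one row per end branch of the logic tree.
    weights:       branch weights, summing to 1.
    Fractiles are taken from the weighted empirical distribution of branch
    AFEs independently at each ground-motion level.
    """
    curves = np.asarray(branch_curves, float)
    w = np.asarray(weights, float)
    result = {"mean": w @ curves}
    for f in fractiles:
        frac = np.empty(curves.shape[1])
        for j in range(curves.shape[1]):
            order = np.argsort(curves[:, j])          # sort branches at this level
            cum_w = np.cumsum(w[order])               # weighted empirical CDF
            idx = min(np.searchsorted(cum_w, f), len(order) - 1)
            frac[j] = curves[order[idx], j]
        result[f] = frac
    return result
```

Overlaying the mean and fractile curves from the full and reduced branch sets shows directly whether pruning low-weight branches changes the hazard characterization, which is the kind of evidence a technical integration team can use to justify a simplified logic tree.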

TABLE OF CONTENTS

DISCLAIMER ..... ii
EXECUTIVE SUMMARY ..... iii
LIST OF FIGURES ..... vi
LIST OF TABLES ..... ix
ACKNOWLEDGMENTS ..... x
ABBREVIATIONS AND ACRONYMS ..... xi
1 INTRODUCTION ..... 1-1
1.1 Project Background ..... 1-1
1.2 Project Objective ..... 1-1
1.3 SSHAC Process ..... 1-2
2 HAZARD SIGNIFICANT ISSUES ..... 2-1
2.1 Seismic Sources ..... 2-1
2.1.1 Source Zones ..... 2-1
2.1.2 Fault Sources ..... 2-3
2.2 Ground Motion Models ..... 2-8
2.3 Site Response Analyses ..... 2-8
3 SPATIAL SMOOTHING OF SEISMICITY ..... 3-1
3.1 Objective ..... 3-1
3.2 Background ..... 3-1
3.3 Implementation Details ..... 3-2
3.3.1 Filter Declustered Catalog ..... 3-2
3.3.2 Compute Event Rate ..... 3-3
3.3.3 Compute Smoothing Distance ..... 3-3
3.3.4 Compute Normalization for Gaussian Kernels ..... 3-4
3.3.5 Compute Smooth Seismicity Rates ..... 3-4
3.3.6 Information Gain ..... 3-4
3.3.7 Center, Body, and Range of Smoothed Rates ..... 3-5
3.4 Summary ..... 3-10
4 GENERIC WESTERN UNITED STATES (GWUS) MODEL ..... 4-12
4.1 Approach for Quantifying Epistemic Uncertainty in Ground Motions ..... 4-12
4.2 Selection of Ground Motion Prediction Equations (GMPEs) ..... 4-13
4.3 Sampling GMMs from a Continuous Distribution of Ground Motions ..... 4-15
4.4 Variance Model ..... 4-16
4.5 Correlation Model ..... 4-21
4.6 Screening Models for Physicality ..... 4-23
4.7 Visualization of Ground Motion Space ..... 4-24
4.8 Discretization of Ground Motion Space ..... 4-27
4.9 Final Median Models ..... 4-29
4.10 Comparison with Previous Studies ..... 4-31
4.11 Median Adjustments ..... 4-35
4.11.1 Reverse Faulting Adjustment ..... 4-35
4.11.2 Normal Faulting Adjustment ..... 4-38
4.11.3 Hanging Wall Adjustment ..... 4-41
4.11.4 Implementation of Adjustments with the GWUS Median Model ..... 4-43
4.12 Sigma Model ..... 4-46
4.12.1 Background on Partially Non-ergodic PSHA ..... 4-47
4.12.2 Sigma Logic Tree ..... 4-52
5 SIMPLIFYING SSHAC STUDIES ..... 5-1
5.1 Capturing Epistemic Uncertainty ..... 5-1
5.2 Leveraging Existing Models and Data Sources ..... 5-4
5.3 Leveraging Existing Site Data ..... 5-8
5.4 Determining Hazard Significance to Simplify Calculations ..... 5-9
5.5 Capturing the CBR of TDI ..... 5-22
6 CONCLUSIONS AND RECOMMENDATIONS ..... 6-1
7 REFERENCES ..... 7-1

LIST OF FIGURES

Figure 1-1 Topographic image of the northern Great Basin showing the location of Skull Valley in north central Utah ..... 1-2
Figure 2-1 Illustration of the depth parameters required to model future earthquake ruptures ..... 2-3
Figure 3-1 a) Information gain as a function of nearest neighbor number, b) estimated cumulative mass function developed from the information gain ..... 3-7
Figure 3-2 Seismicity rates for M5 obtained using a nearest neighbor number of 4 and a b-value of 0.82 ..... 3-8
Figure 3-3 Seismicity rates for M5 obtained using a nearest neighbor number of 20 and a b-value of 0.82 ..... 3-9
Figure 3-4 Seismicity rates for M5 obtained using a nearest neighbor number of 50 and a b-value of 0.82 ..... 3-10
Figure 4-1 Comparison of the center, body, and range of ground motions from previous SSHAC Level 3 studies for a magnitude 6 event at distances of 1, 5, 10, 20, 50, and 100 km ..... 4-14
Figure 4-2 Variance from the Seed models for 1 Hz (left) and 10 Hz (right) ..... 4-18
Figure 4-3 Variance for DCPP SWUS, Palo Verde SWUS, Hanford SSHAC Level 3, and INL SSHAC Level 3 GMMs (1 Hz) ..... 4-19
Figure 4-4 Variance for DCPP SWUS, Palo Verde SWUS, Hanford SSHAC Level 3, and INL SSHAC Level 3 GMMs (10 Hz) ..... 4-20
Figure 4-5 Seed Correlation and Modeled Correlation for 1 Hz, Magnitude 5, Rupture Distance of 10 km ..... 4-22
Figure 4-6 Seed Correlation and Modeled Correlation for 1 Hz, Magnitude 7, Rupture Distance of 100 km ..... 4-23
Figure 4-7 Sammons Map covered by 10,000 Models and Seed Models for 1 Hz case ..... 4-26
Figure 4-8 Sammons Map covered by 10,000 Models and Seed Models with Signposts for 1 Hz case ..... 4-26
Figure 4-9 Sammons Map covered by 10,000 Models and Seed Models with Signposts and 10%, 75%, and 95% Iso-Contours for 1 Hz case ..... 4-27
Figure 4-10 Discretized Sammons Map covered by 10,000 Models and Seed Models with Signposts for 1 Hz case ..... 4-28
Figure 4-11 Discretized Sammons Map covered by 10,000 Models and Seed Models with Signposts for 10 Hz case ..... 4-28
Figure 4-12 Median Models for M = 6, Rrup = 20 km ..... 4-30
Figure 4-13 The 17 final median models and seed models for M = 6, Rrup = 20 km ..... 4-31
Figure 4-14 Comparison of the center, body, and range of ground motions from the GWUS and previous SSHAC Level 3 studies for 1 Hz magnitude 6 event for distances of 1, 5, 10, 20, 50, and 100 km ..... 4-32
Figure 4-15 Comparison of the center, body, and range of ground motions from the GWUS, NSHM, and previous SSHAC Level 3 studies for 10 Hz magnitude 6 event for distances of 1, 5, 10, 20, 50, and 100 km ..... 4-33
Figure 4-16 Comparison of the center, body, and range of ground motions from the GWUS, NSHM, and previous SSHAC Level 3 studies for 1 Hz magnitude 7 event for distances of 1, 5, 10, 20, 50, and 100 km ..... 4-34
Figure 4-17 Comparison of the center, body, and range of ground motions from the GWUS, NSHM, and previous SSHAC Level 3 studies for 10 Hz magnitude 7 event for distances of 1, 5, 10, 20, 50, and 100 km ..... 4-35
Figure 4-18 Reverse fault adjustment terms for the INL, SWUS-DC, and GWUS GMMs for a) 1 Hz and b) 10 Hz, where the solid blue lines for the GWUS reverse adjustment term represent the 10th, 50th, and 90th percentiles ..... 4-37
Figure 4-19 Normal adjustment terms for the SWUS-DC and GWUS GMMs for a) 1 Hz and b) 10 Hz, where the solid orange lines for the GWUS normal adjustment term represent the 10th, 50th, and 90th percentiles of the SWUS adjustment term ..... 4-39
Figure 4-20 Comparison of the INL and GWUS normal adjustment terms for a) 1 Hz and b) 10 Hz ..... 4-40
Figure 4-21 Simplified adjusted median GMMs logic tree for a normal fault with the site on the hanging wall ..... 4-45
Figure 4-22 Adjusted median ground motion distribution represented by the 5th, 16th, 50th, 84th, and 95th percentiles for the simplified and full ground motion logic trees ..... 4-46
Figure 4-23 Coefficients a and b for the model ..... 4-49
Figure 4-24 Uncertainty in at the magnitude breakpoints of 5.0 and 6.5 (coefficients a and b, respectively, for the model) ..... 4-50
Figure 4-25 Coefficients of the model ..... 4-51
Figure 4-26 Standard deviation of at the four magnitude breakpoints ..... 4-51
Figure 4-27 Proposed sigma logic tree ..... 4-54
Figure 5-1 Impact of aleatory variability on mean hazard curves ..... 5-3
Figure 5-2 Sample representation of aleatory variability from a logic tree ..... 5-4

Figure 5-3 Logic tree for Stansbury fault ..... 5-11
Figure 5-4 1 and 10 Hz hazard curves for Stansbury fault showing mean and 5 percentile values for the total number (36,450) of alternative SSM and GMM logic tree combinations ..... 5-13
Figure 5-5 Mean 10,000- and 100,000-year return period UHRS comparison plots for three alternative Stansbury fault configurations (FC1 [ABC], FC2 [ABB], FC3 [ABC]) and three alternative seismogenic thicknesses (ST1 [13 km], ST2 [15 km], ST3 [19 km]) ..... 5-14
Figure 5-6 Mean 10,000- and 100,000-year return period UHRS comparison plots for three alternative Stansbury fault dip angles (DP1 [45°], DP2 [55°], DP3 [65°]) and three alternative fault slip rates (SR1 [0.26 mm/yr], SR2 [0.40 mm/yr], SR3 [0.50 mm/yr]) ..... 5-15
Figure 5-7 Mean 10,000- and 100,000-year return period UHRS comparison plots for two alternative INL GMM long period adjustments (LPA1, LPA2) and three alternative normal faulting adjustments (AL0, AL1, AL2) ..... 5-16
Figure 5-8 Mean 10,000- and 100,000-year return period UHRS comparison plots for five alternative INL GMM anelastic attenuation adjustments (EPA1, EPA2, EPA3, EPA4, EPA5) and five alternative host-to-target adjustments (DCM1, DCM2, DCM3, DCM4, DCM5) ..... 5-17
Figure 5-9 Mean 10,000- and 100,000-year return period UHRS comparison plots for three alternative INL GMM single-station sigma levels (SD1, SD2, SD3). UHRS comparison plot shows the maximum difference in spectral acceleration between the total mean UHRS and the alternative logic tree branches, with the weight of each branch shown in parentheses ..... 5-18
Figure 5-10 Stansbury fault reference hazard curves showing a comparison of the fractile hazard curves for 0.5 Hz and 1 Hz for the complete set of alternative logic tree combinations (36,450) and the reduced set of alternative logic tree combinations (2,025) ..... 5-19
Figure 5-11 Stansbury fault reference hazard curves showing a comparison of the fractile hazard curves for 10 Hz and 100 Hz for the complete set of alternative logic tree combinations (36,450) and the reduced set of alternative logic tree combinations (2,025) ..... 5-20
Figure 5-12 1 and 10 Hz hazard mean and fractile curves comparing two alternative values for the host source zone for the Skull Valley site ..... 5-21

LIST OF TABLES

Table 3-1 Completeness years for the Gardner-Knopoff earthquake catalog obtained using the Stepp (1972) method ..... 3-3
Table 3-2 Nearest neighbor numbers used for adaptive kernel Gaussian spatial smoothing with fractiles and weights from Miller and Rice (1983) used to capture the CBR of nearest neighbor numbers and smoothed rates ..... 3-8
Table 4-1 Reverse fault adjustment terms for the GWUS ground motion model, where R1, R2, and R3 represent the 10th, 50th, and 90th percentiles of the SWUS-DC GMM adjustment factors ..... 4-36
Table 4-2 Normal fault adjustment terms for the GWUS ground motion model, where N1, N2, and N3 represent the 10th, 50th, and 90th percentiles of the SWUS-PV GMM adjustment factors ..... 4-38
Table 4-3 Coefficients for the GWUS hanging wall adjustment model ..... 4-42
Table 4-4 Adjustment terms applied to the GWUS median GMMs at 1 Hz ..... 4-44
Table 4-5 Terminology used for residual components and their standard deviations. SD denotes the standard deviation operator ..... 4-47
Table 5-1 Suggested Data Sources for Future Seismic Source Characterization ..... 5-7

ACKNOWLEDGMENTS

The TI Team would like to acknowledge all members of the technical community who have contributed to the extensive set of Senior Seismic Hazard Analysis Committee (SSHAC) studies and other hazard evaluations that make the analyses and recommendations in this report possible. The TI Team and the Project Manager are very appreciative of the support from the Participatory Peer Review Panel (Drs. Jon Ake and Gabriel Toro). Their insights, scientific knowledge, technical comments, and editorial reviews, along with the open, frank, and collegial discussions the TI Team had with them during our weekly meetings and formal briefings, were invaluable. The authors of this report also extend sincere thanks to the administrative support staff who assisted in the finalization and publication of this report.

ABBREVIATIONS AND ACRONYMS

AFE  annual frequency of exceedance
ASCE/SEI  American Society of Civil Engineers/Structural Engineering Institute
CBR of TDI  center, body, and range of technically defensible interpretations
CEUS  central and eastern United States
DCPP  Diablo Canyon Power Plant
GMC  ground motion characterization
GMM  ground motion model
GMPE  ground motion prediction equation
GMRS  ground motion response spectra
GWUS  generic ground motion model for the Western United States
LLWR  large light water reactors
MECE  mutually exclusive and collectively exhaustive
MFD  magnitude-frequency distribution
NGA-East  Next Generation Attenuation Relationships for Central & Eastern North America
NGA-West2  Next Generation Attenuation Relationships for the Western United States
NPP  nuclear power plant
NSHM  National Seismic Hazard Model
NRC  U.S. Nuclear Regulatory Commission
NUREG  Nuclear Regulatory Report
PFS  Private Fuel Storage
PPRP  participatory peer review panel
PSHA  probabilistic seismic hazard analysis
PVNGS  Palo Verde Nuclear Generating Station
RG  Regulatory Guide
SDC  seismic design category
SSCs  structures, systems, and components
SSM  seismic source model
SSHAC  Senior Seismic Hazard Analysis Committee
SwRI  Southwest Research Institute
SWUS  Southwestern United States
TI Team  Technical Integration Team
UHRS  uniform hazard response spectrum
USGS  U.S. Geological Survey
shear wave velocity
WUS  Western United States

1 INTRODUCTION

1.1 Project Background

Discussions between U.S. Nuclear Regulatory Commission (NRC) staff and nuclear power plant (NPP) vendors have demonstrated that advanced reactors are expected to have smaller amounts of radiological materials and enhanced safety features compared to existing large light water reactors (LLWRs). As a result, some of the structures, systems, and components (SSCs) for these smaller advanced reactors could be designed to higher target performance goal frequencies (or less stringent seismic design categories) and still meet NRC regulations, guidance, and safety goals. Current practice for NPPs is a seismic design based on the ground motion response spectra (GMRS) as defined in Regulatory Guide (RG) 1.208 (NRC, 2007). For LLWRs, the GMRS is derived from probabilistic seismic hazard analysis (PSHA) hazard curves with annual exceedance frequencies between 1x10-4/year (yr) and 1x10-5/yr. This is equivalent to the most stringent seismic design category (SDC) [i.e., SDC-5 in American Society of Civil Engineers/Structural Engineering Institute (ASCE/SEI) 43-19 (ASCE/SEI, 2020)].
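The performance-based mapping from the two hazard levels to the GMRS, as commonly summarized from RG 1.208, scales the 1x10-4/yr uniform hazard response spectrum by a design factor that depends on the ratio of the two spectra. The sketch below illustrates that calculation under the design factor relation DF = max(1.0, 0.6*AR^0.8) as the authors of this rewrite understand it from the guidance; the function name is an assumption, and RG 1.208 itself governs the exact procedure.

```python
import numpy as np

def gmrs_from_uhrs(sa_1e4, sa_1e5):
    """Sketch of the performance-based GMRS computation (per RG 1.208).

    sa_1e4, sa_1e5: spectral accelerations (g) of the 1e-4/yr and 1e-5/yr
    uniform hazard response spectra at the same set of frequencies.
    AR is the ratio of the two spectra (a proxy for hazard curve slope);
    the design factor DF scales the 1e-4/yr spectrum, with a floor of 1.0
    so the GMRS never drops below that spectrum.
    """
    sa4 = np.asarray(sa_1e4, float)
    sa5 = np.asarray(sa_1e5, float)
    ar = sa5 / sa4                        # amplitude ratio between hazard levels
    df = np.maximum(1.0, 0.6 * ar ** 0.8) # design factor with floor of 1.0
    return df * sa4
```

For large AR (slowly decaying hazard curves), DF exceeds 1.0 and the GMRS lies between the two UHRS; for smaller AR the 1x10-4/yr UHRS governs.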

Given the potential for less stringent seismic design requirements for advanced reactors, the NRC staff believe that lower level Senior Seismic Hazard Analysis Committee (SSHAC) studies (i.e., Levels 1 and 2), as defined in NUREG-2213 (NRC, 2018), could be sufficient to address uncertainties in the seismic hazard evaluation for some advanced NPPs located in moderate-to-low hazard regions of the western United States (WUS) and central and eastern United States (CEUS). A SSHAC Level 3 study is currently the accepted approach for analyzing the earthquake hazard for traditional LLWRs, with lower level SSHAC studies used to assess the continued viability of previously developed and accepted SSHAC regional models. However, the higher level of resources typically needed to perform a SSHAC Level 3 study is not commensurate with the siting characterizations that may be necessary for the variety of designs for advanced, smaller, lower risk reactors (e.g., microreactors) across the United States (U.S.).

As documented in NUREG-2213, the importance of implementing the SSHAC approach is that, if executed properly, the approach captures the center, body, and range (CBR) of technically defensible interpretations (TDI) with a sufficient level of documentation to meet the requirements of Title 10 of the Code of Federal Regulations (10 CFR) 100.23, "Geologic and Seismic Siting Criteria."

1.2 Project Objective

Because there is little industry or regulatory experience with lower level SSHAC studies, the NRC staff, working with staff at Southwest Research Institute (SwRI) and their consultants, developed this SSHAC Level 1 study as a pilot or demonstration study. The goal of the study is to demonstrate the scope and workflow of an acceptable Level 1 study and to determine what simplifications to a higher level SSHAC study are possible (e.g., simplifying logic tree branches) while still adequately capturing the CBR of the TDI. The Technical Integration (TI) Team assembled for this SSHAC Level 1 study comprises technical staff from the NRC and SwRI as well as consultants to SwRI. The project also included a Hazard Analysis Team and two members of the Participatory Peer Review Panel (PPRP). For this SSHAC Level 1 Demonstration Project, the TI Team selected the Private Fuel Storage (PFS) site in Skull Valley, Utah (PFS, 2006) as a pilot site (Figure 1-1). The basis for site selection and details about this site are documented in Stamatakos et al. (2024).

The SSHAC Level 1 Demonstration Project culminated in two main products: (i) the SSHAC L1 PSHA Report (Stamatakos et al., 2025), which is designed to contain what would be the typical product of a SSHAC L1 PSHA, and (ii) this recommendations report. The objective of this recommendations report is to describe and document the supporting technical bases for the steps taken throughout the SSHAC L1 demonstration study to simplify the analyses while ensuring that the CBR of TDI was achieved and that the integrity of the SSHAC process was maintained.

Figure 1-1 Topographic image of the northern Great Basin showing the location of Skull Valley in north central Utah. The map is derived from the U.S. Geological Survey U.S. Quaternary Interactive Fault Map.

Importantly, this report and the TI Team's evaluation and integration in the companion SSHAC Level 1 report (Stamatakos et al., 2025) discuss why SSHAC Level 1 (or higher) studies are needed to meet the regulatory requirements for advanced nuclear power reactors. That discussion is provided in Section 5.5.

1.3 SSHAC Process

As described at length in NUREG-2213 (NRC, 2018) and its predecessors, NUREG-2117 (NRC, 2012) and Budnitz et al. (1997), the SSHAC process was developed to ensure that the complex technical analyses needed to develop PSHAs for commercial NPPs are reliable and defensible and achieve sufficient regulatory assurance to protect public health and safety and the environment. The essence of the SSHAC process is the structured interaction among experts to achieve a well-documented hazard study that captures the CBR of TDI.

Central to the success of the process, therefore, are (1) a clear definition of the different roles and responsibilities of each project team member; (2) comprehensive and objective evaluation of available data, models, and methods; (3) transparent and technically defensible integration of the data, models, and methods in the seismic source and ground motion models (GMMs); (4) sufficient documentation to allow other experts to reproduce the results; and (5) participatory review to confirm that the SSHAC process was followed, including adequate technical bases for all key decisions made by the TI Team and complete and transparent documentation of the analyses and results. In addition to these five essential features of a SSHAC study, a sixth essential feature discussed separately in NUREG-2213 is the recognition of cognitive bias.

Because the SSHAC process relies in part on expert judgment, it is important to be aware of cognitive bias, whether intentional or unintentional, in the evaluation and integration processes. NUREG-2213 Section 2.4 provides a detailed summary of the specific types of cognitive bias that can permeate a SSHAC study. In addition, and consistent with the training provided in NUREG-2213, the TI Lead continually reminded the TI Team members of the potential for bias and the importance of adhering to the roles and responsibilities assigned to them as TI Team members or members of the Hazard Analysis Team. Consistent with this philosophy, all the authors of this report worked together as a team, independent of their roles as members of the NRC staff, contractors to the NRC, or industry consultants.

The four SSHAC levels are intended to bring ever-increasing regulatory assurance to the results, with Level 4 studies culminating in the most complete and most rigorous outcomes.

However, based on the lessons learned during the many applications of the SSHAC process to nuclear projects around the world, the authors of NUREG-2117 concluded that the Level 4 process was cumbersome and costly. In NUREG-2117, the NRC clarified that Level 3 and Level 4 studies should be viewed as equally rigorous alternative approaches without making any distinction in terms of regulatory assurance. Except for the U.S. Department of Energy (DOE) Yucca Mountain SSHAC studies (DOE, 1998) and the PEGASOS PSHA projects in Switzerland (Renault, 2014; Renault et al., 2010; Abrahamson et al., 2002), all SSHAC studies for LLWR NPP applications have been conducted as Level 3 studies. However, it is important to note that the basic attributes of SSHAC studies (e.g., clearly identified roles, evaluation, integration, documentation, and participatory review) apply to all SSHAC levels. Central to the successful implementation of the SSHAC process, irrespective of SSHAC level, is adherence to the five SSHAC attributes described previously.

The practical challenges in implementing the SSHAC Level 3 and Level 4 processes to date have been high costs and lengthy schedules. Most recent SSHAC Level 3 studies have taken more than 24 months to complete. Because of the large teams of participants (Technical Integration, Hazard Analysis, and Project Management teams; the PPRP; and numerous Resource and Proponent Experts), three workshops, four working meetings, and, often, extensive supporting data collection and external studies, these SSHAC studies cost several million dollars or more. The motivation of this report is, thus, to identify how to adapt and streamline the SSHAC process for a study that is commensurate with its application to advanced, lower risk commercial NPPs. The practical goal of the SSHAC Level 1 study for advanced reactors was an approximately six-month schedule costing a fraction of the cost of a typical SSHAC Level 3 study.

To achieve this goal, the TI Team developed the following specific simplifications and SSHAC process improvements while capturing the CBR of TDI in the resulting PSHA. The specific application of these simplifications and process improvements is more fully described in Chapter 5 of this report. For this Level 1 demonstration project,

1. The TI Team relied extensively (or exclusively) on existing geological, geophysical, and geotechnical data. As described in this report and its companion PSHA report, the TI Team mainly adopted fault and earthquake data from U.S. Geological Survey (USGS) and the State of Utah earthquake and fault databases coupled with existing site data from the PFS PSHA report (Geomatrix, 1999) and other local and regional geophysical and seismic studies.
2. The TI Team developed a generic GMM for the WUS (GWUS), as described and documented in Chapter 4 of this report. This model or other similar NRC-endorsed GMMs could be used in future seismic hazard analyses. GWUS was derived from existing GMMs and incorporates sufficient epistemic uncertainty to capture the CBR of TDI in future applications for shallow crustal earthquakes in the WUS.
3. The Hazard Analysis Team was embedded within the TI Team to continually develop hazard insights throughout the project. This continual feedback kept the TI Team's evaluation and integration activities focused on only those aspects of the source and GMMs that contributed significantly to the hazard. This also allowed the TI Team to simplify logic trees for significantly reduced hazard computations while achieving the same level of hazard reliability and distribution of hazard uncertainty.
4. The TI Team focused on the contributions to hazard levels consistent with the application to smaller advanced reactor seismic designs and seismic risk analyses. While PSHA studies for LLWRs center on hazard levels at annual frequencies of exceedance (AFE) of 10⁻⁴/yr to 10⁻⁵/yr (consistent with SDC-5 in ASCE/SEI 43-19), the AFEs for the design and risk analysis of advanced reactors are less stringent. Therefore, this demonstration project focused on hazard levels in the range of 10⁻³/yr to 10⁻⁴/yr (consistent with SDC-2 to SDC-4 in ASCE/SEI 43-19).

5. The TI Team took advantage of the experiences and lessons learned from recent SSHAC Level 3 studies, especially those developed for sites in the WUS, including Diablo Canyon (PG&E, 2015), Palo Verde (APS, 2015), Hanford (PNNL, 2014), Idaho National Laboratory (INL) (INL, 2022), and the Southwestern United States (SWUS) Ground Motion Characterization SSHAC (GeoPentech, 2015). In all these recent studies, the components of the source and GMMs that contributed most to the resulting hazards were very similar (e.g., site shear wave velocity and spectral decay parameter (kappa), epistemic uncertainty in the GMMs, and fault slip rates).
6. The TI Team relied on continual feedback from the two members of the PPRP during weekly team meetings as well as the more formal presentations made to the PPRP to help ensure that implementation of this project met the SSHAC objective of capturing the CBR of the TDI. This is described more fully in Section 5.5 of this document.

In contrast to the SSHAC process, most non-SSHAC approaches to characterizing the seismic hazard at a site do not provide a similar level of regulatory assurance that the CBR of TDI has been captured. Significant deficiencies of non-SSHAC studies often include (1) the limited range of key parameters or models necessary to capture sufficient epistemic uncertainty as required by 10 CFR 100.23; (2) use of generic site amplification factors due to a lack of site-specific subsurface data, resulting in a potentially underestimated and inaccurate response of the soil and/or rock to a range of ground motion amplitudes; (3) lack of field investigations of local and regional geologic features in instances where adequate information to characterize the site does not already exist; (4) insufficient documentation of the data, models, and methods used to develop the PSHA and of the underlying technical bases for model integration; and (5) lack of participatory peer review to ensure that the SSHAC goal of capturing the CBR of TDI is met and appropriate methods are used, potentially leading to lower levels of regulatory assurance.

Given the potentially significant differences between non-SSHAC approaches and the more rigorous and site-specific SSHAC approaches for developing seismic hazards at a site, and considering the differences in the underlying safety goals of the various approaches, the TI Team for this SSHAC Level 1 Demonstration Project concludes that the use of SSHAC approaches is better aligned with regulatory guidance (NUREG-2213) to characterize seismic hazards for power reactors, as required to meet 10 CFR 100.23.

2 HAZARD SIGNIFICANT ISSUES

2.1 Seismic Sources

The seismic source model (SSM) is a conceptual and mathematical representation of the physical characteristics of earthquake sources that are deemed capable of producing hazard-significant ground motions at the site. In a SSHAC study, these sources are identified and assessed by the TI Team from all the information evaluated during the project, including records of past earthquakes, geologic evidence of active tectonic deformation, and an understanding of the current seismotectonic setting. This information is used by the TI Team to model the size, location, characteristics (e.g., dip and style-of-faulting), and timing of future earthquake activity that can impact the site and is a critical factor in all PSHA studies.

There are two common types of sources in the SSM. Seismic source zones are regions of the Earth's crust with diffuse seismicity. Fault sources are planar fractures or fracture zones in the Earth's crust that localize seismicity. For sites that are relatively proximal to active subduction zones (within less than 1,000 km), a third type of source is the subduction plate interface zone.

For the contiguous United States, the only subduction zone is the Cascadia interface, which accounts for subduction earthquakes that occur as the Juan de Fuca and Gorda tectonic plates are thrust beneath the North American plate. The INL SSHAC Level 3 study (INL, 2022) provides the most recently updated Cascadia interface zone source characterization and summarizes all the information used to construct this model. Any future site in Alaska would likely need to develop a subduction source model for the Alaska-Aleutian subduction zone.

Seismic source zones are used to model the temporal and spatial distribution of seismicity in a volume of the Earth's crust where there is insufficient geologic or geophysical evidence to allow the TI Team to assign past recorded earthquakes to a mapped fault. These earthquakes could have been produced by a fault that did not rupture the ground surface and thus did not leave geological evidence for the earthquake. Alternatively, the fault could have produced surface rupture, but this surface rupture remains obscure and unidentified in the landscape. Seismic source zones are thus constructed in the SSM to account for future earthquakes that may occur on such unidentified fault sources. The modeling details ascribed to each seismic source zone are based on the geological, geophysical, and seismological characteristics of the source zone and the surrounding region of interest. In contrast, fault sources are identified and characterized from geological, geophysical, and seismological evidence of past fault slip or concentrated seismicity that clearly aligns on the fault plane. In the SSM, they are modeled to represent the occurrence of repeated future earthquakes that remain localized along the fault.

2.1.1 Source Zones

In prior SSHAC PSHA studies, multiple seismic source zones were used to define volumes of crust with relatively consistent seismotectonic characteristics when compared to adjoining seismic source zones. The specific criteria that TI Teams relied on in these past SSHAC studies were the type of crust (e.g., Mesozoic extended crust vs. non-Mesozoic extended crust), structural grain, style-of-faulting, topography, and crustal stress state. Where the GMM requires that future earthquakes be modeled as fault ruptures rather than point sources, future seismicity within each seismic source zone was treated by the TI Team as virtual fault ruptures.

The virtual fault ruptures are modeled to capture the relevant seismicity parameters that are consistent with the seismotectonic setting of the source zone. These parameters include maximum magnitude, thickness of the seismogenic crust, style-of-faulting, and fault geometry.

Defining multiple source zones is needed for large regional PSHA studies that characterize multiple sites, such as the CEUS source model (EPRI/DOE/NRC, 2012) or the Hanford PSHA (PNNL, 2014). However, when a hazard sensitivity analysis shows that only earthquakes relatively close to the site (within 200 km or less) contribute significantly to hazard, a single host zone is generally sufficient (depending on the specifics of the site or region), as was the case for the PFS demonstration PSHA described in the companion SSHAC report to this document (Stamatakos et al., 2025). In fact, for several of the recent SSHAC Level 3 PSHA studies (e.g., INL, Hanford, and Diablo Canyon), only the host source zones contributed significantly to the total hazard, especially in cases where fault sources were also present and were important hazard contributors. The application of a single source zone as the host zone is not as certain in the CEUS, where there is considerably slower attenuation such that more distant source zones may contribute to the site hazard and where significant changes in the characteristics of the crust (e.g., seismogenic thickness, tectonic grain, or heat flow) require more than one source zone in the SSM.

Once the seismic source zones were established, the TI Team assessed the characteristics of future earthquakes within each zone. Alternative interpretations of future earthquake characteristics are accounted for in the SSM by alternative branches in the logic tree. These alternative branches capture the epistemic uncertainty in the future characteristics of earthquakes in the source zones. Aleatory variability in the future characteristics of earthquakes, which is only included in the development of virtual ruptures, is accounted for in the SSM by continuous probability distributions of strike and dip, represented by relative frequency distributions (e.g., strike = 135° +/- 25°). In PSHA studies, these distributions are most commonly uniform (equally likely within the specified range) or normal (centered on a mean value), depending on the underlying geologic information. Because logic trees, by convention, are intended to represent only epistemic uncertainties and the branches are assessed as mutually exclusive alternatives, aleatory variabilities are not incorporated in the SSM logic tree. Instead, they are captured through integration over the parameter distributions within the PSHA code.
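To make the integration step concrete, here is a minimal, hedged sketch (the function name and discretization scheme are assumptions for illustration, not taken from any PSHA code) of how a uniform aleatory strike distribution could be discretized into weighted values for numerical integration:

```python
import numpy as np

def discretize_uniform(center, half_width, n_points):
    """Return bin-midpoint values and probability weights for a uniform
    aleatory distribution (e.g., strike = 135 deg +/- 25 deg)."""
    edges = np.linspace(center - half_width, center + half_width, n_points + 1)
    mids = 0.5 * (edges[:-1] + edges[1:])        # midpoint of each bin
    weights = np.full(n_points, 1.0 / n_points)  # equal probability mass per bin
    return mids, weights

# Illustrative values from the example in the text above
strikes, w = discretize_uniform(135.0, 25.0, 5)
# weights sum to 1, so integrating over the branches preserves total probability
```

A normal distribution would be handled the same way, except that the weights would come from the probability mass in each bin rather than being equal.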

Future earthquake characteristics in the source zones include seismogenic thickness, spatial distribution of earthquakes, style of rupture (e.g., normal, reverse, and strike-slip), focal depth distribution, maximum magnitude, and recurrence. Figure 2-1 illustrates how the different depth parameters are determined. The extent of the brittle crust in which ruptures can originate is represented by the seismogenic thickness. This parameter has often been estimated as the depth above which 90% of the observed earthquake hypocenters occur (D90) and its uncertainty is represented by the depths corresponding to the 85% and 95% occurrences of seismicity. In the hazard calculations, contributions from earthquakes with magnitudes from the minimum to maximum magnitude are integrated. For each magnitude, appropriate empirical models linking magnitude to rupture area and the down-dip geometry are used to obtain rupture parameters such as rupture length and rupture width. To calculate the distance from the site to the source, the rupture needs to be placed in the volume of brittle crust, and this requires a model for the location of the hypocenter within the seismogenic thickness. Finally, to determine the depth of the top of rupture (ZTOR), a model representing the relative location of the hypocenter along the down-dip width of the rupture plane is needed.
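As a hedged illustration of the D90 concept described above, the following Python sketch (with illustrative depth values, not a real catalog) estimates D90 and its D85/D95 uncertainty range from a set of hypocentral depths:

```python
import numpy as np

def depth_percentiles(depths_km, percentiles=(85, 90, 95)):
    """Depths above which the given percentages of hypocenters occur."""
    return {p: float(np.percentile(depths_km, p)) for p in percentiles}

# Illustrative hypocentral depths (km); a real analysis would use a
# declustered, quality-screened catalog for the source zone.
depths = np.array([2.1, 4.5, 5.0, 6.3, 7.8, 8.2, 9.9, 11.4, 12.7, 15.0])
d = depth_percentiles(depths)
# d[90] approximates the seismogenic thickness D90; d[85] and d[95]
# bracket its epistemic uncertainty
```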

Of all these necessary parameters, earthquake recurrence and the spatial distribution of future earthquakes are the most important contributors to hazard. These are also site-specific and thus require analysis by the TI Team in any future SSHAC Level 1 study. These analyses are derived from the catalog of past earthquakes. Chapter 3 of this report provides a detailed description of how spatial smoothing of past earthquakes from the USGS catalog was developed and simplified.

The rate of future earthquakes modeled within each seismic source zone in the SSM is represented by magnitude-frequency distribution curves. These curves are developed for each source zone using the recurrence parameters obtained from the smoothed seismicity.
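As a hedged illustration of such a magnitude-frequency curve, the following Python sketch evaluates annual exceedance rates for a doubly truncated exponential (Gutenberg-Richter) distribution; the parameter values are assumptions for demonstration only:

```python
import numpy as np

def truncated_gr_rate(m, rate_mmin, b, m_min, m_max):
    """Annual rate of earthquakes with magnitude >= m for a doubly
    truncated exponential MFD, given the rate above m_min and the b-value."""
    beta = b * np.log(10.0)
    num = np.exp(-beta * (m - m_min)) - np.exp(-beta * (m_max - m_min))
    den = 1.0 - np.exp(-beta * (m_max - m_min))
    return rate_mmin * np.clip(num, 0.0, None) / den

# Illustrative source-zone parameters (not from this study)
mags = np.arange(5.0, 7.6, 0.5)
rates = truncated_gr_rate(mags, rate_mmin=0.05, b=1.0, m_min=5.0, m_max=7.5)
# rates decrease with magnitude and fall to zero at Mmax
```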

Figure 2-1 Illustration of the depth parameters required to model future earthquake ruptures. The red square is the epicentral location at the ground surface.

2.1.2 Fault Sources

In PSHA studies, fault sources are defined as localizers of moderate-to-large magnitude earthquakes that have the potential to contribute to the seismic hazard at the site. They are typically defined and modeled as planar features in the upper crust of the Earth that can be depicted in map view as line traces, or as a series of line segments, and in the subsurface as a vertical or dipping plane. The parameters needed to define fault sources in the SSM are (1) fault geometry (strike, dip angle, down-dip width, and length); (2) style-of-faulting (normal, reverse, strike-slip, oblique); (3) seismogenic probability (the probability that an identified fault is active); (4) magnitude (Mmax, Mchar, and Mmin); (5) rate (slip rate or recurrence rate); and (6) a model for the magnitude-frequency distribution (e.g., truncated exponential, characteristic, or maximum magnitude).

Most of these parameters can be drawn from existing data sources such as published geologic maps, prior SSHAC studies, and/or state or national fault databases (e.g., the USGS Quaternary Fault and Fold Database of the United States¹). Of these, slip rates and the choice of the magnitude-frequency distribution are the most important contributors to the hazard. For fault sources in the WUS, and as exemplified by the PSHA the TI Team developed for the Skull Valley site, slip rate or recurrence rate is the most significant contributor to hazard; thus, this aspect of the SSM deserves critical evaluation and integration by the TI Team. The seismogenic probability, or p[S], establishes whether a fault is seismically active, and thus whether the fault has the potential to generate future earthquakes that could contribute to the seismic hazard at the site. Faults that the TI Team definitively concludes to be seismically active are assigned a p[S] = 1.0. For uncertain fault activity, the TI Team assigns a p[S] between 0.95 and 0.05. A fault source that is assigned a p[S] equal to zero is considered by the TI Team to be inactive within the current tectonic regime. The criteria used by the TI Team to assess the p[S] of a fault include geological evidence of activity within the contemporary tectonic regime, dynamic linkage to a nearby active fault, and the potential contribution of fault rupture to the seismic hazard at the site.

If a fault is an important contributor, specific attention must be paid to its slip rate and recurrence interval because these are the most important contributors to hazard. Following the guidance in ANS 2.27 (ANSI/ANS, 2020), the following information should be considered in evaluating the rate of fault slip in the current seismotectonic environment: (a) historical and geological evidence regarding the displacement history of the fault (including the measured or estimated tectonic offset of a marker of known or estimated age), (b) pre-instrumental and instrumental seismicity data, (c) structural relationships that indicate kinematic linkages to a Quaternary fault with a known slip rate, and (d) the seismotectonic framework. Recent studies have evaluated whether slip rates are consistent with other fault data and parameters such as rupture geometry, magnitude, and recurrence. Analyses such as moment balancing are useful for checking for inconsistencies and probing the implications of rates. In addition to the guidance in the ANS standard, the TI Team highly recommends field visits to assess active faults. As discussed in the companion SSHAC Level 1 report (Stamatakos et al., 2025), the geological context of fault data (slip rates, length, seismogenic probability) requires critical evaluation by the TI Team that is often aided by first-hand observations of geological relationships in the field.
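As a hedged illustration of the moment-balancing check mentioned above, the following Python sketch converts an assumed fault geometry and slip rate into the implied recurrence rate of a single characteristic magnitude. All parameter values are illustrative; the rigidity and the Hanks-Kanamori moment-magnitude relation are standard assumptions, not values from this study:

```python
# Moment balancing: long-term moment rate from slip = moment released
# by repeated characteristic earthquakes (maximum magnitude model).
SHEAR_MODULUS_PA = 3.0e10  # typical crustal rigidity (assumption)

def char_event_rate(length_km, width_km, slip_rate_mm_yr, m_char):
    area_m2 = (length_km * 1e3) * (width_km * 1e3)
    moment_rate = SHEAR_MODULUS_PA * area_m2 * (slip_rate_mm_yr * 1e-3)  # N*m/yr
    m0_char = 10.0 ** (1.5 * m_char + 9.05)  # Hanks-Kanamori seismic moment, N*m
    return moment_rate / m0_char             # characteristic events per year

# Illustrative normal-fault parameters, not from the Skull Valley SSM
rate = char_event_rate(length_km=35.0, width_km=15.0,
                       slip_rate_mm_yr=0.4, m_char=6.9)
recurrence_yr = 1.0 / rate  # implied mean recurrence interval, ~thousands of years
```

A rate implied by the slip rate that is wildly inconsistent with paleoseismic recurrence estimates would flag a problem in the geometry, magnitude, or slip rate.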

Magnitude-frequency distributions (MFDs) define the relative occurrence of various earthquake magnitudes generated by a fault source and are represented by an MFD curve (Table 2-1 and Figure 2-2). The shape and slope of the MFD curve express the relative frequency of increasingly larger magnitude earthquakes (up to Mmax) as a function of magnitude. The appropriate MFD for a fault source has been the subject of research for many years. Gutenberg and Richter (1954) observed that the appropriate MFD describing regional recurrence, such as for a source zone, is an exponential distribution (truncated at Mmax). Based on rich historical seismicity and geological datasets, Schwartz and Coppersmith (1984) showed that recurrence relationships on both the Wasatch and San Andreas faults in the WUS did not follow a Gutenberg-Richter model. For moderate-to-large magnitude earthquakes, the exponential distribution underestimated observations from paleoseismic trench studies. In response to these observations, Youngs and Coppersmith (1985) developed the characteristic earthquake distribution model, which they concluded appears to capture the fundamental behavior of earthquake recurrence on the Wasatch and San Andreas faults and appears to apply to many other faults as well (e.g., Hecker et al., 2013).

1 https://usgs.maps.arcgis.com/apps/webappviewer/index.html?id=5a6038b3a1684561a9b0aadf88412fcf accessed in the fall of 2024.

In prior SSHAC Level 3 studies, the Characteristic model is almost always applied, largely based on the results of Hecker et al. (2013). The maximum moment model (Wesnousky et al., 1983; Wesnousky et al., 1984; Wesnousky, 1986) is a derivative of the Characteristic model in which all strain is released in the large characteristic earthquakes, with no exponential lower-magnitude tail in the magnitude-frequency distribution. Smaller magnitude earthquakes occurring on the fault are either assumed to be dependent events, mostly aftershocks, or earthquakes that are already being captured as part of the host zone seismicity. The Wooddell, Abrahamson, Acevedo-Cabrera, and Youngs (WAACY) model (Wooddell et al., 2015) is a modified version of the Youngs and Coppersmith (1985) characteristic model, modified such that rare large magnitude earthquakes above Mchar, which may be associated with multi-segment ruptures of strike-slip systems, can occur.

For the SSHAC Level 1 study, as described in Stamatakos et al. (2025), the TI Team solely implemented the maximum moment or maximum magnitude model (Mmax model) rather than a weighted combination of the Mmax model and the Characteristic model or the Characteristic model by itself. The basis for this choice by the TI Team is that implementation of the Characteristic model in combination with the host source zone may result in higher recurrence rates for the virtual faults located within the host zone grid cells adjacent to the fault source unless the earthquakes that fall within these grid cells are removed from the catalog. This potential for slightly higher host zone grid cell rates is unlikely to have a significant impact on the final hazard results unless there are fault sources that are very close to the site. For the SSHAC Level 1 study site in Skull Valley, there are three fault sources that dominate the hazard, all of which are close to the site (Stamatakos et al., 2025).

Table 2-1 Magnitude-Frequency Distributions

Truncated Exponential: From Gutenberg and Richter (1954), the magnitude probability density function (PDF) is a doubly truncated exponential distribution defined by Mmin, Mmax, and the b-value, resulting also in an exponential complementary cumulative distribution (except near the maximum magnitude, where the two diverge). Useful for source zone recurrence but shown to underpredict fault source recurrence for moderate-to-large magnitude earthquakes.

Characteristic Earthquake: From Youngs and Coppersmith (1985), the magnitude PDF has two components: a characteristic portion and a lower-magnitude exponential portion. The characteristic portion is a uniform distribution centered on Mchar. A 0.5 (+/- 0.25) magnitude-wide uniform (boxcar) aleatory distribution centered about the mean Mchar captures the aleatory variability in Mchar. The lower-magnitude portion is a doubly truncated exponential distribution from Mmin to Mchar with a uniform b-value. The input parameters are Mmin, b-value, Mchar, +/- 0.25 Mchar, and the percentage of the total moment rate in the low-magnitude tail.

Maximum Magnitude (also known as the Maximum Moment model): Originally from Wesnousky et al. (1983), in which Mmin = Mchar = Mmax. Modified in Wesnousky et al. (1984) and Wesnousky (1986) to be just the characteristic earthquake without the lower-magnitude portion. Mc is the characteristic magnitude and ΔMc is the distribution about this magnitude.

WAACY: From Wooddell and Abrahamson (2013) and Wooddell et al. (2014), the magnitude PDF has three components: a characteristic portion, a lower-magnitude exponential portion, and a high-magnitude exponential tail. The characteristic portion is a Gaussian distribution with mean Mchar, standard deviation σM, and range from M1 to M2, where M1 = Mchar - DM1, with DM1 being the offset between the low-magnitude tail and Mchar, and M2 = Mchar + σM·Nsig, with Nsig representing a multiple of standard deviations above the mean. The low-magnitude portion fits an exponential distribution from Mmin to M1 with a slope of -b. The high-magnitude tail fits a doubly truncated exponential distribution from M2 to Mmax with a b-value of btail. Parameters are Mmin, b, Mchar, σM, DM1, Nsig, btail, Mmax, and the percentage of the total moment rate in the low-magnitude portion.

Figure 2-2 Earthquake recurrence relationships, shown in non-cumulative (a-c) and cumulative (d-f) plots redrafted from Figure 2.3 of Bommer and Stafford (2009): (a and d) G-R model, (b and e) maximum magnitude model, and (c and f) characteristic earthquake model. The WAACY model is shown in (g), redrafted from Figure 10-1 of PG&E (2015). See Table 2-1 for explanations of the terms shown in the figure.

2.2 Ground Motion Models

The GMM component of a PSHA typically includes several components, each addressing the characteristics of the source, path, and site effects that contribute to ground shaking at a given location. In a SSHAC Level 2, 3, or 4 study, the expectation is that significant effort would be spent developing each component of the GMM. This might include:

- Gathering, processing, and developing a database of ground motion data for the study region.

- Characterizing regional parameters for rock motions, such as stress drop, distance attenuation, and anelastic attenuation, possibly through inversion analyses using the ground motion database.

- Adjusting a backbone ground motion prediction equation (GMPE) (e.g., from NGA-West2, https://ngawest2.berkeley.edu) or developing an alternative GMM.

The scope of a SSHAC Level 1 study would not allow for the same process outlined above; rather, the TI Team would likely evaluate available GMMs, select a GMM that is appropriate for the region, and justify the selection as an appropriate representation for the site, including sufficient accounting for epistemic uncertainty. For example, a TI Team may select the NGA-East GMM (Goulet et al., 2018) for sites within the CEUS (i.e., east of longitude 105° W), the NGA-Subduction GMMs (Parker et al., 2022) for subduction sources, or the GWUS model described in Section 4 of this report for sites in the WUS. The GWUS GMM developed by the TI Team for this project can be easily implemented for future SSHAC studies in the WUS. For sites that are located close to a previously completed SSHAC study, the TI Team may find that the GMM described by one of those studies would be appropriate (e.g., PNNL, 2014).

2.3 Site Response Analyses

In addition to selecting an appropriate GMM, the TI Team needs to adjust the reference conditions specified by the GMM to site-specific conditions through the development of site adjustment factors (SAFs) determined by site response analysis (SRA). Site effects have been shown in multiple SSHAC studies to contribute significantly to the overall uncertainty in a PSHA and may have a significant impact on spectral shape relative to generic approaches. Thus, TI Teams in a SSHAC Level 1 study would spend significant effort developing this critical component. The elements of the site-specific adjustment include:

- Identifying appropriate analyses for characterizing site response (e.g., equivalent-linear, kappa-corrected equivalent-linear, or nonlinear).

- Developing or selecting modulus reduction and damping curves that adequately represent the soil types (e.g., sand or clay) and characteristics (e.g., plasticity and density).

- Developing shear wave velocity profiles that capture the range of measured velocities of the various soil layers, both shallow and deep, across the site.

- Selecting representative values of the spectral decay factor, kappa, to describe the high-frequency decay of ground motions.
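As a hedged illustration of the kappa effect listed above, the following Python sketch applies the exponential high-frequency decay, exp(-pi*kappa*f), to an illustrative flat Fourier amplitude spectrum (the kappa value is an assumption for demonstration only):

```python
import numpy as np

def apply_kappa(freqs_hz, fourier_amps, kappa_s):
    """Scale a Fourier amplitude spectrum by the kappa decay filter."""
    return fourier_amps * np.exp(-np.pi * kappa_s * freqs_hz)

# Illustrative flat spectrum; kappa = 0.04 s is a made-up example value
f = np.linspace(0.1, 50.0, 500)
flat = np.ones_like(f)
decayed = apply_kappa(f, flat, kappa_s=0.04)
# larger kappa -> stronger attenuation of the high frequencies
```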

For each of the elements described above, the best estimate and the uncertainties should be carefully considered and accounted for in the PSHA. Details about this process and recommended practices are outlined in a report by Rodriguez-Marek et al. (2021).

3 SPATIAL SMOOTHING OF SEISMICITY

3.1 Objective

Earthquake sources may be located where active faults have not yet been identified. To capture the contribution of unknown seismic sources to the seismic hazard, statistical methods are used with historical seismicity to quantify the spatial distribution of earthquakes in the region near the site. The main consideration in using these models is whether stationarity of past earthquakes can be established from the earthquake record. The assumption of spatial stationarity in seismic hazard analyses posits that the locations of past earthquakes provide a reliable basis for predicting where future earthquakes are likely to occur, at least over the lifetime of the proposed facility (e.g., the next 50 to 100 years). This assumption implies that future earthquakes are not equally likely at all locations within the source zone (at least within the next 50 years) and that there is a preference for future earthquakes to reoccur in areas where past earthquakes were concentrated. The assumption of stationarity also underlies the concept of earthquake clusters, so long as the clusters are likely to continue long enough into the future to predict areas of higher seismicity within the timeframe of the seismic hazard application. Thus, if stationarity can be established, it provides a technical basis to develop probability density maps that control the spatial distribution of future events in the SSM based on the occurrence of past earthquakes in the earthquake record. The assumption of stationarity has been tested and accepted in many recent SSHAC Level 3 studies, including Hanford (PNNL, 2014) and INL (2022). For the CEUS, stationarity was established based on the analysis methods first proposed by Kafka and Walcott (1998) and developed in subsequent analyses (e.g., Kafka and Levin, 2000; Kafka, 2002, 2007).
In this study, the TI Team performed analyses using the method proposed by Kafka to demonstrate that stationarity is applicable for the region surrounding the Skull Valley site.
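A minimal, hedged sketch of a Kafka-style stationarity check might look like the following Python fragment, which splits a catalog in time and computes the fraction of later epicenters falling within a given distance of any earlier epicenter (the coordinates and distance are illustrative, not the analysis actually performed for this study):

```python
import numpy as np

def fraction_near_past(past_xy, future_xy, dist_km):
    """Fraction of 'future' epicenters within dist_km of any 'past' epicenter.
    Coordinates are assumed to be km in a local projection."""
    past = np.asarray(past_xy, dtype=float)
    fut = np.asarray(future_xy, dtype=float)
    # pairwise distances, shape (n_future, n_past)
    d = np.linalg.norm(fut[:, None, :] - past[None, :, :], axis=2)
    return float(np.mean(d.min(axis=1) <= dist_km))

# Illustrative split of a tiny catalog into earlier and later events
past = [(0.0, 0.0), (10.0, 5.0), (40.0, 40.0)]
future = [(1.0, 1.0), (11.0, 4.0), (80.0, 80.0), (39.0, 41.0)]
frac = fraction_near_past(past, future, dist_km=5.0)
# a fraction well above the areal coverage of the distance-d zones
# supports the stationarity assumption
```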

Both the Gaussian kernel and penalized likelihood smoothing methods have been used in prior SSHAC studies to evaluate spatial smoothing of seismicity (PG&E, 2015; NRC, 2012). An NRC study on earthquake recurrence rate models for the CEUS (Anooshehpoor et al., 2023) shows that both the Gaussian kernel and penalized likelihood approaches can be used to capture epistemic uncertainty in the spatial distribution of earthquake rates. Based on its experience with both approaches, the TI Team believes that the adaptive Gaussian kernel smoothing approach can be more easily implemented than the penalized likelihood approach. Therefore, because both methods yield similar results (Anooshehpoor et al., 2023), the TI Team decided to use the adaptive kernel method to achieve the objective of capturing the CBR of spatially smoothed seismicity rates. This approach is described next.

3.2 Background

Helmstetter et al. (2007) presents an adaptive kernel smoothing procedure using a Gaussian distribution. The smoothing process consists of dividing the region of interest into a grid and then using a Gaussian kernel to distribute the rate of each independent event to adjacent cells within the grid. The Gaussian kernel, K, is defined as:

K(r) = C exp(−r² / (2σ²))     Eq. 3-1

where r is the distance between the considered event and the center of an adjacent cell, σ is the Gaussian distribution standard deviation, which is also known as the smoothing distance, and C is a normalizing factor so the integral of K over an infinite area is equal to 1. Once the normalizing factor, C, has been obtained for each event in the declustered catalog and a smoothing distance has been chosen for each event, Eq. 3-1 can be used to distribute the rate to all cells in the grid.
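As a minimal illustration (not the project code), the kernel of Eq. 3-1 and a numerical computation of its normalizing factor (as described in Section 3.3.4) might be sketched in Python as follows; the grid extent of eight smoothing distances follows the text, while the grid resolution here is an arbitrary choice:

```python
import numpy as np

def gaussian_kernel(r, sigma):
    # Gaussian kernel of Eq. 3-1 without the normalizing factor C
    return np.exp(-r**2 / (2.0 * sigma**2))

def normalization_factor(sigma, extent_factor=8.0, n=400):
    # Integrate the kernel over a square grid extending extent_factor * sigma
    # from the event (Section 3.3.4) and return C so that C * integral = 1.
    half = extent_factor * sigma
    edges = np.linspace(-half, half, n + 1)
    centers = 0.5 * (edges[:-1] + edges[1:])
    dx = edges[1] - edges[0]
    xx, yy = np.meshgrid(centers, centers)
    integral = np.sum(gaussian_kernel(np.hypot(xx, yy), sigma)) * dx * dx
    return 1.0 / integral

sigma = 10.0                      # hypothetical smoothing distance, km
C = normalization_factor(sigma)
# For a 2-D Gaussian kernel the analytic normalization is 1 / (2*pi*sigma^2);
# the numerically integrated value should agree closely.
print(C, 1.0 / (2.0 * np.pi * sigma**2))
```

For a pure Gaussian the factor is available analytically; the numerical quadrature shown here generalizes to the subdivided-cell grids actually used in the study.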

The adaptive kernel procedure defines the smoothing distance, σ, for each event in the catalog based on the distance to the nth closest event (nth nearest neighbor, where n is referred to as the nearest neighbor number). This means that if σ is defined by a nearest neighbor number of 3, the distance from the event being smoothed to the 3rd closest event is used as the smoothing distance. This results in shorter smoothing distances where events are clustered near each other and larger smoothing distances where events are sparse.

Because each event has a different smoothing distance, the kernel smoothing is referred to as an adaptive kernel. Moschetti (2014) used information gain as the basis for selecting the nearest neighbor number to use in determining the smoothing distance.

Information gain describes the probability gain per earthquake of the smoothed model relative to a model with a uniform rate. Moschetti (2014) performed smoothing calculations for a range of nearest neighbor numbers and selected the nearest neighbor number associated with the maximum information gain for computing the smoothed rates used in the PSHA. To capture epistemic uncertainty in smoothed rates, the USGS National Seismic Hazard Model (NSHM) uses the adaptive kernel as described above along with smoothed rates computed using a fixed (specified) smoothing distance. In the WUS, the fixed kernel distance used by the USGS is set equal to 50 km (Petersen et al., 2024) for all events. As noted by Petersen et al. (2024), the fixed smoothing approach can result in questionably low rates in areas with few or no observed events, whereas smoothed rates obtained using the adaptive kernel smoothing presented by Anooshehpoor et al. (2023) do not appear to produce questionably low rates. The constraints on the minimum nearest neighbor distance should consider the location uncertainty of the earthquake events and ensure that the smoothing results only in positive rates. For this study, instead of using adaptive kernel smoothing with a single nearest neighbor number or fixed kernel smoothing with a smoothing distance chosen by expert judgment, the TI Team used the information gain as a function of the nearest neighbor number to select a set of nearest neighbor numbers, resulting in an objective adaptive kernel smoothing procedure that captures the CBR of smoothed rates.

3.3 Implementation Details

The concepts for smoothing seismicity rates using a Gaussian kernel and computing information gain are relatively simple once the assumption of spatial stationarity is established by the TI Team. Details on how the adaptive smoothing procedure was implemented for this SSHAC Level 1 study at the Skull Valley site are provided here to assist with implementation for future SSHAC Level 1 projects. The process includes (1) filtering the declustered catalog to retain only desired events; (2) computing the rate for each event; (3) calculating the smoothing distance that will be used in the Gaussian kernel for each event; (4) computing the kernel normalization factor; (5) applying the kernel to distribute the rate of each event; and (6) using information gain to identify the nearest neighbor numbers that should be used to capture the CBR of smoothed seismicity rates.

3.3.1 Filter Declustered Catalog

The TI Team used a declustered earthquake catalog, from which dependent events (foreshocks and aftershocks) were removed, to compute smoothed rates. The TI Team used the Gardner-Knopoff declustered catalog. Other declustering methods were considered but were not used, either because they produced results similar to the Gardner and Knopoff (1974) approach or because the TI Team judged that the resulting declustered catalog was not representative of a Poisson process. The USGS uses the raw catalog in its final estimate of recurrence. As noted by Jordan et al. (2023), use of the raw catalog for determining seismicity rates was a contentious issue. The TI Team's position is that the declustered catalog is more consistent with the Poisson assumption employed in a time-independent seismic hazard analysis and should therefore be used in evaluating seismicity rates.

Prior to using a Gaussian kernel to distribute the rate of each event to adjacent cells in the grid, earthquakes may be removed from the catalog if the event occurred prior to the completeness year for the event magnitude. The completeness year for each event is based on the event magnitude and was determined by the TI Team using the method developed by Stepp (1972). Other methods for estimating completeness, such as Mulargia et al. (1987), may be used. Table 3-1 provides the completeness years for the declustered catalog in the 320 km region surrounding the Skull Valley site where hazard was calculated for this demonstration SSHAC Level 1 project. This process resulted in removing all events with magnitude less than 3.0, the lowest magnitude for which a completeness year was determined. Events were also filtered out of the catalog if they occurred more than 320 km from the site. Filtering out events beyond this distance is not expected to impact rates in the region that contributes to hazard, which only considers background events within 200 km of the site.
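The filtering logic described above can be sketched as follows. This is a simplified illustration, not the project code: the example events (magnitude, year, distance) are hypothetical, and the completeness years are those of Table 3-1.

```python
# Completeness years from Table 3-1 (Stepp, 1972), as (min_magnitude, year).
COMPLETENESS = [(3.0, 1962), (4.0, 1928), (5.0, 1894)]

def completeness_year(mag):
    """Return the completeness year for an event magnitude, or None if the
    magnitude falls below the smallest bin (the event is dropped)."""
    year = None
    for m_lo, y in COMPLETENESS:
        if mag >= m_lo:
            year = y
    return year

def filter_catalog(events, max_dist_km=320.0):
    """Keep events at or after their completeness year and within the
    smoothing region. `events` are (magnitude, year, distance_km) tuples."""
    kept = []
    for mag, year, dist in events:
        cy = completeness_year(mag)
        if cy is not None and year >= cy and dist <= max_dist_km:
            kept.append((mag, year, dist))
    return kept

catalog = [(2.8, 2001, 50.0),   # below M 3.0 -> dropped
           (3.4, 1950, 80.0),   # before 1962 completeness -> dropped
           (4.2, 1975, 150.0),  # kept
           (5.1, 1900, 400.0)]  # beyond 320 km -> dropped
print(filter_catalog(catalog))  # -> [(4.2, 1975, 150.0)]
```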

Table 3-1 Completeness years for the Gardner-Knopoff earthquake catalog obtained using the Stepp (1972) method.

Magnitude Bin    Completeness Year
3 - 4            1962
4 - 5            1928
>= 5             1894

3.3.2 Compute Event Rate

The rate of each event in a declustered catalog is computed prior to being distributed to adjacent cells. When computing the rate for each event, the counting factor N* as described by Mueller (2019) is used for the number of events. The time in the denominator is the number of years from the magnitude completeness year through the final catalog year. This computed rate is distributed to cells in a grid that extends 320 km from the site. This approach to computing the rate for each event will likely have greater uncertainty than a maximum likelihood approach to smoothing rates.

3.3.3 Compute Smoothing Distance

A user-specified nearest neighbor number, n, is used to determine the smoothing distance (kernel standard deviation), which is the distance to the nth closest neighbor for each event. This distance is determined by calculating the distance from the event to which the kernel will be applied to all other events in the catalog. The distances are then sorted from smallest to largest, and the nth distance is taken as the smoothing distance. For this study, smoothing distances were constrained by the TI Team to a minimum value of 3 km and a maximum value of 200 km. The minimum value was selected to account for location accuracy. Selecting a maximum value maintains consistency with the assumption and observation of stationarity. In addition, given that the area source zone extends 320 km for the purposes of smoothing seismicity rates, there is no practical effect on hazard results from limiting the smoothing standard deviation to 200 km.
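The nearest-neighbor distance selection of Section 3.3.3 can be sketched as below. This is an illustrative sketch that assumes planar (km) event coordinates; an actual implementation would use geographic distances. The event locations are hypothetical.

```python
import numpy as np

def smoothing_distances(xy, n_neighbor, d_min=3.0, d_max=200.0):
    """For each event (rows of `xy`, planar km coordinates), return the
    distance to its n-th nearest neighbor, clipped to [d_min, d_max] km
    as in Section 3.3.3."""
    xy = np.asarray(xy, dtype=float)
    diff = xy[:, None, :] - xy[None, :, :]
    dist = np.sqrt((diff**2).sum(axis=-1))   # pairwise distance matrix
    dist_sorted = np.sort(dist, axis=1)      # column 0 is self-distance (0 km)
    sigma = dist_sorted[:, n_neighbor]       # distance to n-th closest event
    return np.clip(sigma, d_min, d_max)

# Hypothetical event locations (km east, km north of the site):
events = [(0, 0), (1, 0), (2, 0), (300, 0)]
print(smoothing_distances(events, n_neighbor=2))
```

Note how clipping acts in both directions: the tightly clustered events are raised to the 3 km floor (location accuracy), while the isolated event is capped at 200 km (stationarity).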

3.3.4 Compute Normalization for Gaussian Kernels

The Gaussian kernel is used to distribute the rate of each event to cells in the grid surrounding the site. Prior to using the Gaussian kernel, the normalization factor, C, must be computed. The procedure used by the TI Team for computing C is described next.

The TI Team created a grid that surrounds each event in the catalog. In theory, the grid surrounding the event should extend infinitely to compute the normalization term. The TI Team extended the grid out to eight times the smoothing distance. Extending the grid out further will not result in significant changes to the normalization term. The TI Team used a grid spacing of 0.1 degrees and then subdivided each of the 0.1° x 0.1° cells to create four smaller cells. The four smaller cells were generated to capture the fluctuation in density over the 0.1° grid cell to obtain a reasonably accurate density for the grid. The density is computed for each of the four subdivided cells using the distance from the event to the center of the subdivided cell. This is repeated for each cell in the grid that surrounds the event, and the density for each subdivided cell is summed to determine the normalization term. The density function produces a density per area, so when computing the density in each subdivided cell, the Gaussian kernel function is first multiplied by the area of the subdivided cell.

3.3.5 Compute Smoothed Seismicity Rates

With the smoothing distance and normalization term computed, the rate associated with each event can be distributed to the cells surrounding the site. The rate in each cell of the gridded zone around the site is initially set to zero. The rate from each event in the catalog is then distributed to each cell in the grid, and the contributions from all events within a cell are summed to obtain the rate for each grid cell. When performing the calculations, each cell in the grid is subdivided into four cells; the rate is distributed to the subdivided cells and summed to determine the rate in the cell. The density computed using the kernel function is a density per area, so the kernel function is multiplied by the subdivided cell area to determine the rate distributed to the subdivided cell.

3.3.6 Information Gain

Information gain describes the probability gain per earthquake of the smoothed model relative to a model with a uniform rate. Information gain is computed using the following equation:

IG = exp[(LL_s − LL_u) / N]     Eq. 3-2

where LL_s is the log likelihood for the smoothed model, LL_u is the log likelihood for a model having a uniform rate, and N is the number of events in a subset of the earthquake catalog called the test catalog.

The catalog is split into training and test catalogs for evaluating the log likelihood. For this demonstration project, the training catalog included 249 events from 1962 through 1989, and the test catalog included 28 events from 1990 through 2022. In addition to covering different time periods, the catalogs have different minimum magnitudes: the training catalog has a minimum magnitude of 3.0, and the test catalog has a minimum magnitude of 4.0. Magnitude 3.0 was selected by the TI Team for the training catalog because it is the minimum magnitude of completeness evaluated for the region around the site. The TI Team selected a minimum magnitude of 4.0 for the test catalog to ensure there were sufficient events of moderate size. In an area of higher seismicity, a larger minimum magnitude could reasonably be used for the test catalog.

To compute the log likelihood values, the TI Team adopted the assumption that earthquake occurrence follows a Poisson distribution. Given this assumption, probabilities for the predicted number of events in each cell obtained from the smoothed rates can be determined as follows:

P_i = (α N_i)^(n_i) exp(−α N_i) / n_i!     Eq. 3-3

where

α = N_test / Σ_i N_i     Eq. 3-4

is the rate normalization that equalizes the number of modeled and observed earthquakes for likelihood testing. Here, N_i is the number of events in cell i obtained by multiplying the smoothed rate in the cell (or uniform rate for the reference model) by the number of years in the training catalog, N_test is the number of events in the test catalog, and n_i is the number of events in cell i in the test catalog. The probability is computed for each cell, and the log likelihood is computed by taking the log of the probability in each cell and summing the log probabilities.
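The likelihood test of Eqs. 3-2 through 3-4 can be sketched as follows. The four-cell catalog is purely illustrative; a real application would use the full grid and the training/test split described above.

```python
import numpy as np
from math import lgamma, exp

def log_likelihood(pred_counts, obs_counts, n_test):
    """Poisson log likelihood of Eq. 3-3, with the rate normalization of
    Eq. 3-4 applied so modeled and observed totals match."""
    pred = np.asarray(pred_counts, dtype=float)
    obs = np.asarray(obs_counts, dtype=float)
    alpha = n_test / pred.sum()              # Eq. 3-4
    lam = alpha * pred                       # normalized expected counts
    # log P_i = n_i*log(lam_i) - lam_i - log(n_i!), summed over cells
    log_fact = np.array([lgamma(n + 1) for n in obs])
    return float(np.sum(obs * np.log(lam) - lam - log_fact))

def information_gain(ll_smoothed, ll_uniform, n_test):
    """Probability gain per earthquake (Eq. 3-2)."""
    return exp((ll_smoothed - ll_uniform) / n_test)

# Tiny 4-cell example: the smoothed model concentrates rate where the test
# events actually occur, so it should outperform the uniform model.
pred_smooth = [8.0, 1.0, 0.5, 0.5]    # predicted counts, smoothed model
pred_uniform = [2.5, 2.5, 2.5, 2.5]   # predicted counts, uniform model
observed = [3, 1, 0, 0]               # test-catalog counts per cell
n_test = sum(observed)
ll_s = log_likelihood(pred_smooth, observed, n_test)
ll_u = log_likelihood(pred_uniform, observed, n_test)
print(information_gain(ll_s, ll_u, n_test))  # > 1: smoothed model wins
```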

After computing the log likelihood for the smoothed and reference models, the information gain can be computed. Where the information gain is high, the TI Team expects the smoothed model to be more consistent with observations in the testing portion of the catalog.

3.3.7 Center, Body, and Range of Smoothed Rates

A review of the information gain in Figure 3-1a shows that using a nearest neighbor number of 3 produces event counts in grid cells that are most likely to be consistent with the historical record. The information gain associated with a nearest neighbor number of 20 is similar to the information gain with a nearest neighbor number of 5. The relatively high information gain for nearest neighbor numbers of 5 to 20 suggests that multiple nearest neighbor numbers may be used to capture the CBR of technically defensible smoothed rates. As the nearest neighbor number increases beyond 20, the information gain decreases, as expected, until a large enough nearest neighbor number produces a near uniform rate (i.e., an information gain of approximately 1).

When plotting the information gain (IG) as a function of the nearest neighbor number, the TI Team observed that the information gain could be considered to represent a probability distribution for the nearest neighbor number. To make the information gain consistent with a probability distribution, the TI Team subtracted 1 from the information gain, which yields a probability of approximately zero for a nearest neighbor number of 80. Based on the justification provided for stationarity, the TI Team believes it is justifiable to assign zero probability to a nearest neighbor number that produces uniform or near uniform seismicity rates. The TI Team then integrated IG − 1 as a function of nearest neighbor number to obtain a cumulative mass function. The resulting integral was normalized by the total integral to ensure that the sum of the probability masses is 1. The cumulative mass function is shown in Figure 3-1b.

Using this cumulative mass distribution for the nearest neighbor number, the TI Team selected nearest neighbor numbers associated with three fractiles to capture the CBR of nearest neighbor numbers using a Gaussian quadrature approach for developing a discrete approximation to a probability distribution. Weights for the discrete approximation are from Miller and Rice (1983). The three fractiles, weights, and nearest neighbor numbers are provided in Table 3-2. These nearest neighbor numbers were implemented into the logic tree for the host zone to provide three alternative smoothing rates for the PSHA.
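The fractile-based selection can be sketched as follows. The information-gain curve here is a hypothetical placeholder (the real curve comes from the likelihood testing of Section 3.3.6); only the fractiles and weights, taken from Miller and Rice (1983) as cited in the text, are from the study.

```python
import numpy as np

# Hypothetical information-gain curve: IG as a function of nearest neighbor
# number, decaying toward 1 (uniform rates) near n = 80 (Section 3.3.7).
nn = np.arange(1, 81)
ig = 1.0 + 1.1 * np.exp(-nn / 18.0)     # illustrative shape only

# Subtract 1, cumulatively integrate, and normalize to get a cumulative
# mass function for the nearest neighbor number.
mass = np.cumsum(ig - 1.0)
cmf = mass / mass[-1]

# Three-point discrete approximation (fractiles/weights per Miller and
# Rice, 1983): map each fractile back to a nearest neighbor number.
fractiles = [0.084669, 0.5, 0.915331]
weights = [0.247614, 0.504771, 0.247614]
selected = [int(np.interp(f, cmf, nn)) for f in fractiles]
print(list(zip(selected, weights)))
```

With the study's actual curve, this procedure yields the nearest neighbor numbers 4, 20, and 50 of Table 3-2.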

The seismicity rates for M 5 for nearest neighbor numbers of 4, 20, and 50 are presented in Figure 3-2, Figure 3-3, and Figure 3-4. These rates were computed using a b-value of 0.82 estimated using the maximum likelihood approach of Weichert (1980), which is de-coupled from the Gaussian kernel smoothing calculations. The standard deviation in the b-value obtained from the maximum likelihood calculations was approximately 0.04. Accounting for this uncertainty in the b-value will not result in a significant change in rates and hazard. Therefore, the TI Team did not add epistemic uncertainty associated with the b-value in the hazard calculations.

Figure 3-1 a) Information gain as a function of nearest neighbor number; b) estimated cumulative mass function developed from the information gain.

Table 3-2 Nearest neighbor numbers used for adaptive Gaussian kernel spatial smoothing, with fractiles and weights from Miller and Rice (1983) used to capture the CBR of nearest neighbor numbers and smoothed rates.

Nearest Neighbor Number    Fractile    Weight
4                          0.084669    0.247614
20                         0.500000    0.504771
50                         0.915331    0.247614

Figure 3-2 Seismicity rates for M 5 obtained using a nearest neighbor number of 4 and a b-value of 0.82. The black and blue circles denote radii of 320 km and 200 km, respectively. The blue dots are locations of earthquakes.

Figure 3-3 Seismicity rates for M 5 obtained using a nearest neighbor number of 20 and a b-value of 0.82. The black and blue circles denote radii of 320 km and 200 km, respectively. The blue dots are locations of earthquakes.

Figure 3-4 Seismicity rates for M 5 obtained using a nearest neighbor number of 50 and a b-value of 0.82. The black and blue circles denote radii of 320 km and 200 km, respectively. The blue dots are locations of earthquakes.

3.4 Summary

The TI Team used adaptive kernel smoothing to determine background seismicity rates in the region surrounding the study site. Adaptive kernel smoothing has been used in previous SSHAC studies and is implemented in the USGS NSHM. Details of how the TI Team performed the adaptive kernel and information gain calculations are provided in this chapter to facilitate future use of this approach for smoothing. A unique aspect of this work was the use of information gain to define the probability mass distribution of the nearest neighbor number and associated smoothed rates. Using a probability mass function for the nearest neighbor number allowed the TI Team to capture the CBR of background seismicity rates. While this approach captures the parametric uncertainty associated with smoothed seismicity, it does not capture the epistemic statistical uncertainty. TI Team experience with implementing the CEUS SSM for the hazard calculations documented in Anooshehpoor et al. (2023) showed that there was very little difference between hazard fractiles when using 24 smoothing branches that capture the epistemic statistical uncertainty compared to fractiles when using only 3 smoothing branches that do not include the statistical epistemic uncertainty.

4 GENERIC WESTERN UNITED STATES (GWUS) MODEL

This section describes the generic Western United States (GWUS) GMM developed for use in characterizing the seismic hazard from shallow crustal earthquakes in active tectonic regions within the WUS. The GWUS GMM provides a set of 17 weighted median predictions for a wide range of magnitudes and distances over a range of oscillator frequencies. For each of the median predictions, the GWUS GMM also includes a set of three adjustments: for reverse faulting, for normal faulting, and for sites located on the hanging wall of the fault. Thus, the GWUS GMM can be easily implemented for sites in the WUS. Because it is intended to be generic, the GWUS GMM purposefully captures a wider range of median predictions relative to previously developed SSHAC Level 3 GMMs for the WUS. This wider range of median ground motion predictions increases the mean hazard relative to the narrower range of median predictions from previously developed backbone GMMs (Hanford, INL) that were customized to account for source and path host-to-target adjustments (at a regional or site-specific scale). Therefore, although the GWUS GMM can be readily implemented in hazard studies for sites in the WUS, users should be aware of the increase in mean hazard relative to more site-specific GMMs used to develop site-specific design response spectra (DRS).

The TI Team developed the GWUS GMM using an approach like that of the Next Generation Attenuation for Central and Eastern North America (NGA-East) project. NGA-East was a SSHAC Level 3 study that developed a complete ground motion characterization (GMC) model that captured the CBR of TDI, considering available data and models. The following section provides an overview of the GWUS GMM, with subsequent sections describing the approaches used to develop the GMM.

4.1 Approach for Quantifying Epistemic Uncertainty in Ground Motions

This section summarizes the approaches used to quantify the epistemic uncertainty in the median GWUS GMM. The GWUS GMM aleatory variability model is discussed in Section 4.12.

The approach used to characterize the ground motion epistemic uncertainty for the GWUS GMM differs from the traditional logic tree approach where individual branches of the tree are defined by GMPEs whose assigned weights are counted as probabilities. As pointed out in Bommer and Scherbaum (2008), if weights on a logic tree are to be counted as probabilities, then the set of branches at any node of the logic tree should be both mutually exclusive and collectively exhaustive (MECE). If the branches on the logic tree do not meet the MECE requirement, then the weights assigned are considered to be merely subjective. The mutually exclusive requirement is violated when two or more of the GMPEs informing the tree have been derived using the same dataset and/or assumptions. The collectively exhaustive requirement is violated because using a limited number of GMPEs only provides a coarse discretization of the ground motion distribution and most likely excludes tail values unless a formal assessment has been conducted that justifies the range of existing models as sufficient.

In developing the GWUS GMM, the TI Team approached the MECE problem by describing the epistemic uncertainty in median ground motions as a continuous distribution from which a manageable set of median ground motion models is computed and considered to meet the MECE requirement. For each oscillator frequency of interest, the TI Team used a set of GMPEs, referred to for the remainder of this section as seed models, to generate ground motion estimates over a range of magnitude (M) and distance (R) combinations. The TI Team used the ground motion estimates to form a joint probability distribution f(Y), where Y is a high-dimensional vector of ground motions from each of the combinations of M and R. Each marginal distribution is defined by the vector of logarithmic response spectral values predicted by the seeds for the corresponding M and R pair. f(Y) is then approximated using a multivariate normal distribution from which 10,000 sample GMMs are drawn to characterize the continuous distribution of ground motions. Using the 10,000 sampled GMMs directly in hazard calculations would be ideal because it describes the CBR in ground motion estimates, but it is computationally unrealistic. Therefore, the continuous ground motion distribution is discretized into a manageable set of representative median models considered to capture the epistemic uncertainty in ground motions. Because the sample GMMs are multi-dimensional, it is necessary to use high-dimensional visualization techniques to reduce the sample distribution into a lower dimension that allows for discretization. For this project, the Sammon mapping approach (Sammon, 1969) to domain reduction was used to reduce the sampled high-dimensional distribution of ground motions to two dimensions where discretization is performed. The resulting 2-dimensional ground motion distribution is discretized, using an objective approach, into 17 median models with associated weights to be used in PSHA calculations for capturing epistemic uncertainty in ground motions. The following sections provide details on the models and methods used to develop the GWUS GMM.

Footnote: GMC is a term often used in past SSHAC studies, but its replacement, Ground Motion Model (GMM), is now preferred and has been used in very recent SSHAC studies.
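The sampling step described above can be illustrated with a toy sketch. The values here are placeholders: four hypothetical (M, R) scenarios stand in for the 154 used in the study, the mean vector and correlation form are invented for illustration, and only the constant variance of 0.3 matches the value adopted in Section 4.4.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical means of ln ground motion for four (M, R) scenarios (real
# values would come from the four seed GMPEs of Section 4.2).
mu = np.array([-1.0, -1.8, -2.9, -4.0])      # ln units, 4 scenarios
sigma = np.full(4, np.sqrt(0.3))             # constant variance of 0.3

# Placeholder correlation that decays smoothly with "scenario separation"
# (a stand-in for the Gaussian-process correlation model of Section 4.5).
idx = np.arange(4)
rho = np.exp(-np.abs(idx[:, None] - idx[None, :]) / 2.0)
cov = rho * np.outer(sigma, sigma)           # Eq. 4-2

# Each sample row is a correlated vector of medians across the scenarios,
# i.e., one candidate GMM drawn from the continuous distribution.
samples = rng.multivariate_normal(mu, cov, size=10_000)
print(samples.shape, samples.mean(axis=0).round(1))
```

The smooth correlation is what makes each sampled row vary coherently across scenarios and thus behave like a continuous GMM rather than independent noise.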

4.2 Selection of Ground Motion Prediction Equations (GMPEs)

The TI Team used four of the NGA-West2 GMPEs as the seed GMPEs for developing the GWUS GMM: Abrahamson et al. (ASK14; 2014), Boore et al. (BSSA14; 2014), Campbell and Bozorgnia (CB14; 2014), and Chiou and Youngs (CY14; 2014). These seed GMPEs were selected based on their consideration or implementation in previous SSHAC Level 3 studies (Hanford, 2014; INL, 2022; SWUS, 2015) conducted for sites in active tectonic regions in the WUS. Figure 4-1 shows the CBR of ground motions from these studies for a magnitude 6 event at multiple distances for a 5 Hz oscillator frequency.

The Hanford Site is a 1,518 km² area where multiple nuclear facility sites are located, most notably the Columbia Generating Station, located 16 km north of Richland, Washington. The Hanford Sitewide PSHA (PNNL, 2014) was a SSHAC Level 3 study conducted under sponsorship by the U.S. DOE and the electric utility owner, Energy Northwest. In the Hanford study (PNNL, 2014), four qualifying WUS models (ASK14, BSSA14, CB14, and CY14) were selected, from which a single model (CY14) was chosen for the scaled backbone approach in PSHA calculations.

Figure 4-1 Comparison of the center (line inside colored box), body (colored box), and range (whiskers) of ground motions from previous SSHAC Level 3 studies for a magnitude 6 event at distances of 1, 5, 10, 20, 50, and 100 km.

The SWUS GMC project (GeoPentech, 2015) was a SSHAC Level 3 study to assess the seismic hazard for the Diablo Canyon NPP (DCNPP), located near Avila Beach, California, and the Palo Verde Nuclear Generating Station (PVNGS), located about 45 miles west of Phoenix, Arizona. Both the DCNPP and PVNGS GMCs are informed by the four candidate GMPEs used in the NSHM (ASK14, BSSA14, CB14, and CY14), with four additional models [Akkar et al. 2014 (ASB14), Idriss 2014 (Id14), Zhao et al. 2006 (ZH06), and Zhao and Lu 2011 (ZL11)] used for DCNPP and two additional models [ASB14 and Bindi et al. 2014 (Bi14)] used for PVNGS.

The DOE also sponsored a sitewide SSHAC Level 3 PSHA for the Idaho National Laboratory (INL, 2022). Like the Hanford study, the INL study considered three candidate WUS GMPEs (ASK14, CB14, and CY14), from which a single model (CY14) was chosen for a scaled backbone approach in PSHA calculations.

The four selected seed GMPEs used to develop the GWUS GMM (ASK14, BSSA14, CB14, and CY14) inform the joint probability distribution of ground motions. However, for this study, the TI Team recognized that these four seed GMPEs alone would not capture the CBR of the TDI of median ground motions because they are not fully independent models. To develop the GWUS GMM, the TI Team selected the following oscillator frequencies (f) common to all the seed GMPEs and consistent with the frequencies for which the median adjustments are implemented, as discussed in Section 4.11:

f = 0.1, 0.133, 0.2, 0.333, 0.5, 0.667, 1.0, 1.333, 2.0, 2.5, 3.333, 5.0, 6.667, 10.0, 13.333, 20.0, 33.333, 50.0, and 100 Hz.

4.3 Sampling GMMs from a Continuous Distribution of Ground Motions

As discussed in Section 4.1, the joint probability distribution f(Y) can be approximated by a multivariate normal distribution:

Y ~ N(μ, Σ)     Eq. 4-1

where μ is a vector characterizing the mean of the marginal distribution for each M and R_RUP combination in Y, and Σ is the covariance between elements in Y, defined as:

Σ_ij = ρ_ij σ_i σ_j     Eq. 4-2

where σ_i and σ_j are the standard deviations of the marginal distributions for the ith and jth M and R_RUP scenarios, respectively, and ρ_ij describes the correlation between the ith and jth M and R_RUP scenarios. The vector R from Section 4.1 is replaced with R_RUP because it is the distance metric used to inform the candidate GMPEs for the prediction of ground motion values.

For seed models requiring Joyner-Boore distance (R_JB), the conversion of R_RUP to R_JB for the vertical strike-slip ruptures considered, with depth to top of rupture Z_TOR, is calculated by:

R_JB = (R_RUP² − Z_TOR²)^(1/2)     Eq. 4-3

Individual random samples drawn from the multivariate normal distribution are vectors of ground motion values that behave like a GMM due to the inherent correlation in Σ. Therefore, for a sufficiently large number of M and R_RUP combinations, each sample is considered a continuous function of M and R_RUP and thus a representative GMM. The number of M and R_RUP combinations selected for Y is not arbitrary. The values for M and R_RUP should be selected based on their expected level of significance to hazard and should account for trends in magnitude and distance scaling. The TI Team selected the following M and R_RUP values to model, resulting in 154 M and R_RUP combinations:

M = 5, 5.5, 6, 6.5, 7, 7.5, 8

R_RUP = 0.1, 1, 5, 10, 15, 20, 25, 30, 40, 50, 60, 70, 80, 90, 100, 110, 120, 130, 140, 150, 175, 200 km

The TI Team selected a magnitude 5 event at a close rupture distance (R_RUP) of 0.1 km as the lower bound M and R_RUP scenario expected to cause adverse effects on NPPs. The upper bound of R_RUP is set to 200 km. Beyond this distance, it is not expected that ground motions from the largest magnitude (M 8) event would significantly impact the hazard for sites in the WUS for oscillator frequencies of engineering interest for smaller advanced reactors (above 0.5 Hz).

Each of the seed GMPEs was evaluated for a strike-slip fault with a reference condition of 760 m/s. The 760 m/s reference condition was chosen as a reasonable shear wave velocity above which nonlinear site response is not expected and no significant amplification effects are inherent in the seed GMMs. The reference condition was also chosen for easy implementation of the one-step approach to site response analyses as described in the NRC RIL documenting a SSHAC Level 2 site response study (Rodriguez-Marek et al., 2021).

Although sampling from Equation 4-1 appears straightforward, generating physically realizable GMMs from a multivariate normal distribution informed by a limited set of seed GMPEs can be challenging. If the variance or correlation in ground motions computed from the seeds reflects either erratic or large variations between neighboring M and R_RUP scenarios, it can be difficult to produce random models that behave in a physically realizable manner. Specifically, magnitude and distance scaling effects in individual models may reflect unrealistic behaviors. This can result in an inability to capture models near the tails of the distribution, resulting in an underestimation of the epistemic uncertainty in ground motions. The following sections describe how the TI Team overcame these challenges in modeling both the variance and correlation in ground motions.

4.4 Variance Model

This section describes the alternative approaches investigated by the TI Team to model the variance in ground motions in Equation 4-2. The variance (diagonal entries of the covariance matrix) associated with the seed models for the ith M and R_RUP scenario can be calculated as:

σ_i² = [1 / (N − 1)] Σ_k (y_k,i − μ_i)²     Eq. 4-4

where N is the number of seed models informing the variance calculation, y_k,i is the ground motion estimate from the kth seed model for the ith M and R_RUP scenario, and μ_i is the mean ground motion of the seed models for the ith M and R_RUP scenario. However, as discussed in the following subsections, the variance associated with the predicted ground motions from the four seeds may not be sufficient to capture the CBR of expected ground motions (Al Atik and Youngs, 2014). Therefore, the TI Team chose to explore alternative approaches to modeling the variance that allowed for generating physically realizable GMMs that sufficiently capture the CBR of expected ground motions in the WUS.
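Eq. 4-4 is simply the sample variance across the seed-model medians at each scenario. A minimal sketch, with hypothetical seed medians, follows; rows are seed models and columns are (M, R_RUP) scenarios:

```python
import numpy as np

def seed_variance(seed_predictions):
    """Sample variance across seed GMPE medians (Eq. 4-4) for each
    (M, R_RUP) scenario; rows are seed models, columns are scenarios."""
    y = np.asarray(seed_predictions, dtype=float)
    return y.var(axis=0, ddof=1)   # ddof=1 gives the 1/(N-1) normalization

# Hypothetical ln-ground-motion medians from four seeds at three scenarios:
seeds = [[-1.00, -2.10, -3.00],
         [-1.05, -2.00, -3.20],
         [-0.95, -2.20, -2.90],
         [-1.10, -2.05, -3.10]]
print(seed_variance(seeds).round(4))
```

As the text notes, with only four closely related seeds these variances can be very small, which motivates the constant-variance alternative adopted below.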

One approach investigated by the TI Team was to use the variance of the predicted motions from the four seeds directly. However, as shown in Figure 4-2, the variance in the seed models can be very small, with large magnitude and distance ranges having almost no appreciable variance. Figure 4-3 and Figure 4-4 show the variance in ground motions computed from the SSHAC Level 3 GMMs described in Section 4.2. The within-study variance values for the SSHAC Level 3 studies are significantly larger than those produced by the four seed models used in this study. It should be noted that the Diablo Canyon Power Plant (DCPP) GMM was developed for estimating ground motions < 50 km from the site, and therefore the variance in ground motions predicted by the DCPP model at 50 km and beyond may not be accurately reflected. The TI Team concluded that using the variance model directly from the seed models would significantly limit the range of the random samples drawn from Equation 4-1, resulting in a GWUS GMM that would not necessarily capture the CBR of expected ground motions for the WUS.

To capture the CBR of the TDI, the TI Team used a constant variance model for all magnitudes and distances. Using a constant variance is advantageous because it allows GMMs to be sampled from Equation 4-1 that are more likely to be physically realistic, and it allows for easy adjustment of the variance values to ensure that the final GWUS GMM captures the CBR of the expected median ground motions in the WUS. The TI Team reviewed the variance in ground motions from the previous SSHAC Level 3 studies to inform the selection of a constant variance value. As seen in Figure 4-3 and Figure 4-4, the variance for the DCPP and PVNGS GMMs varies quite significantly across the range of M and R_RUP combinations, with values ranging between 0.01 and 0.8 across all frequencies. The Hanford and INL GMMs reflect a nearly constant variance, with values ranging between 0.05 and 0.2 across all frequencies. After reviewing this information, the TI Team ran multiple analyses across alternative constant variance models and ultimately decided on a constant variance of 0.3 for all frequencies. The value 0.3 was chosen because it results in the GWUS GMM capturing the CBR of the previously developed SSHAC Level 3 WUS GMMs for all frequencies.

Figure 4-2 Variance from the seed models for 1 Hz (left) and 10 Hz (right)

Figure 4-3 Variance for the DCPP SWUS, Palo Verde SWUS, Hanford SSHAC Level 3, and INL SSHAC Level 3 GMMs (1 Hz)

Figure 4-4 Variance for the DCPP SWUS, Palo Verde SWUS, Hanford SSHAC Level 3, and INL SSHAC Level 3 GMMs (10 Hz)

4.5 Correlation Model

This section describes the TI Team's approach to modeling the correlation used in Equation 4-2 for generating sample models. The covariance of the seed models can be calculated as

C[(M_i, R_j), (M_i', R_j')] = (1/(N − 1)) Σ_{k=1}^{N} [y_k(M_i, R_j) − ȳ(M_i, R_j)] [y_k(M_i', R_j') − ȳ(M_i', R_j')]   (4-5)

where N is the number of seed models informing the covariance, y_k(M_i, R_j) and y_k(M_i', R_j') are the ground motion estimates from the kth seed model for the (M_i, R_j) and (M_i', R_j') scenarios, respectively, and ȳ(M_i, R_j) and ȳ(M_i', R_j') are the mean ground motions of the seed models for those scenarios. However, as shown in Figure 4-5 (left) and Figure 4-6 (left), the correlation in predicted ground motions across different magnitude and distance combinations may not vary smoothly. This can lead to randomly generated models that are neither physically realizable nor defensible because their magnitude and distance scaling do not behave in a manner consistent with seismological observations and theory, which in turn results in limited sampling in the tails of the ground motion distribution. To overcome this effect, the TI Team adopted the approach used in NGA-East for estimating the correlation coefficients. NGA-East modeled the correlation coefficient used in Equation 4-2 using a functional form for the covariance borrowed from the field of Gaussian process regression (Chapter 4 of Rasmussen and Williams, 2006):

k(x, x') = θ_1 exp[ −(M − M')² / (2 θ_2²) − (R − R')² / (2 θ_3²) ] + θ_4 M M' + θ_5 R R'   (4-6)

where x and x' are vectors describing the (M, R) and (M', R') scenarios. The first part of Equation 4-6 is known as the isotropic covariance function, and the second part (the dot product) models the linear trend in ground motions with magnitude and distance. Parameter θ_1 describes the variance, θ_2 and θ_3 describe how much the correlation is preserved between (M, R) scenarios, and θ_4 and θ_5 control the slopes of the linear trend in M and R.
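A minimal sketch of the Equation 4-6 covariance and its Equation 4-7 normalization to a correlation coefficient, assuming the squared-exponential-plus-dot-product reading given above. The theta values used in the usage note are illustrative placeholders, not the project's optimized parameters.

```python
import numpy as np

# Sketch of the Equation 4-6 covariance: an isotropic squared-exponential term
# plus a dot-product term for the linear M and R trend. theta holds the five
# parameters (variance, M length scale, R length scale, M slope, R slope).
def covariance(x1, x2, theta):
    t1, t2, t3, t4, t5 = theta
    (m1, r1), (m2, r2) = x1, x2
    iso = t1 * np.exp(-(m1 - m2) ** 2 / (2 * t2 ** 2)
                      - (r1 - r2) ** 2 / (2 * t3 ** 2))
    linear = t4 * m1 * m2 + t5 * r1 * r2
    return iso + linear

def correlation(x1, x2, theta):
    # Equation 4-7: covariance scaled by the two marginal standard deviations.
    return covariance(x1, x2, theta) / np.sqrt(
        covariance(x1, x1, theta) * covariance(x2, x2, theta))
```

With placeholder parameters such as `theta = (0.3, 1.0, 50.0, 1e-4, 1e-6)`, the self-correlation of any scenario is exactly 1 and the correlation decays smoothly as the magnitude and distance separation grows, which is the smoothness property the TI Team sought.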

The parameters θ_1 through θ_5 were estimated by maximizing the marginal likelihood, ln p(y | X, θ), where y is the vector of the median predictions from the seed models and X contains all (M, R) scenarios. Once the parameters θ_1 through θ_5 have been optimized, the correlation coefficients are computed by

ρ(x, x') = k(x, x') / √[ k(x, x) k(x', x') ]   (4-7)

where k(x, x') is the covariance computed from Equation 4-6 for the (M, R) and (M', R') scenarios. Examples of the modeled correlation with respect to two reference (M, R) scenarios using Equation 4-7 are shown in Figure 4-5 (right) and Figure 4-6 (right). The resulting modeled correlations are much smoother across (M, R) scenarios than those computed directly from the four seed models. The benefit of the modeled correlation is that it is more likely to produce randomly sampled models from the multivariate distribution (Equation 4-1) that have physically realizable magnitude and distance scaling, allowing for better sampling of the tails of the ground motion distribution.

Figure 4-5 Seed Correlation (Left) and Modeled Correlation (Right) for 1 Hz, Magnitude 5, Rupture Distance of 10 km.

Figure 4-6 Seed Correlation (Left) and Modeled Correlation (Right) for 1 Hz, Magnitude 7, Rupture Distance of 100 km.

4.6 Screening Models for Physicality

This section describes the acceptance/rejection process for GMMs sampled from the multivariate normal distribution. As discussed in Section 4.1, the joint probability distribution of ground motions can be approximated by a multivariate normal distribution informed by a vector of means and a covariance matrix (Equation 4-1). The covariance matrix was constructed from the variance and correlation models selected by the TI Team, as described in Sections 4.4 and 4.5, respectively.

A vector of mean values computed directly from the marginal distributions of the seed models could be used to inform Equation 4-1. However, as pointed out in NGA-East, this can result in some loss of the magnitude and distance scaling effects inherent in the individual seed models. Therefore, the TI Team chose to randomly select one of the seed models to inform the mean vector in Equation 4-1 for each sample GMM drawn.

Each random sample GMM was checked to ensure that both magnitude and distance scaling effects are realizable. This is accomplished through the following screening criteria:

For each spectral frequency, the ground motions at M = 7 and a given distance must be larger than those for M = 6 at the same distance.

For each spectral frequency, the ground motions at M = 6 and a given distance must be larger than those for M = 5 at the same distance.

For each spectral frequency, the ground motions at a specified M and a distance of 20 km must be larger than those for the same M at a distance of 100 km.

For each spectral frequency, the ground motions at a specified M and a distance of 100 km must be larger than those for the same M at a distance of 200 km.

Any random sample drawn from the distribution that did not meet the screening criteria was rejected, and another sample was drawn. This process continued until a pre-defined number of samples was obtained. For this study, the pre-defined number was set at 10,000. The TI Team chose this number based on the work in NGA-East, which showed that for 374 magnitude and distance combinations, 10,000 models were sufficient to represent the ground motion space. Since the GWUS GMM considers less than half the number of magnitude and distance combinations of NGA-East, the TI Team determined 10,000 samples to be sufficient for this study.
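The accept/reject loop can be sketched as follows on a toy three-magnitude by three-distance grid; the mean matrix and the 0.2 sampling standard deviation are hypothetical stand-ins for the full Equation 4-1 multivariate draw.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy scenario grid: rows are magnitudes 5, 6, 7; columns are rupture
# distances 20, 100, 200 km. Values are ln ground motion; the mean matrix is
# hypothetical, whereas the study samples the full Equation 4-1 distribution.
mean = np.array([[-2.0, -3.5, -4.5],   # M 5
                 [-1.2, -2.6, -3.6],   # M 6
                 [-0.6, -1.8, -2.8]])  # M 7

def passes_screening(gm):
    # Magnitude scaling: M 7 > M 6 > M 5 at every distance.
    if not ((gm[2] > gm[1]).all() and (gm[1] > gm[0]).all()):
        return False
    # Distance scaling: motions decay from 20 to 100 km and 100 to 200 km.
    return bool((gm[:, 0] > gm[:, 1]).all() and (gm[:, 1] > gm[:, 2]).all())

# Accept/reject loop: redraw until the target number of passing samples.
accepted = []
while len(accepted) < 100:
    sample = mean + 0.2 * rng.standard_normal(mean.shape)
    if passes_screening(sample):
        accepted.append(sample)
```

In the study the same loop runs against the full 154-dimensional samples and continues until 10,000 passing GMMs are collected.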

4.7 Visualization of Ground Motion Space

This section describes the domain reduction technique known as Sammons mapping, used to visualize the high-dimensional ground motion space sampled from the multivariate normal distribution. An ideal path for capturing the full epistemic uncertainty in ground motions would be to simply perform PSHA calculations using all 10,000 sampled GMMs. However, this is computationally unrealistic, and an alternative approach is required. Determining where each of the 10,000 sampled models falls in ground motion space is non-trivial because each sampled GMM represents a single point in a 154-dimensional ground motion space developed from the magnitude and distance combinations. To overcome this challenge, Sammons mapping was used to reduce the ground motion space from 154 dimensions down to two dimensions, where discretization is achievable.

Sammons mapping is a nonlinear dimension reduction technique that maps the distance distribution of points from a higher dimension to a lower dimension by minimizing the Sammons stress, defined here for ground motions:

E = [1 / Σ_{i<j} d*_{ij}] Σ_{i<j} (d*_{ij} − d_{ij})² / d*_{ij}   (4-8)

where d*_{ij} is the difference in ground motion between sampled GMMs i and j in the high-dimensional space and d_{ij} is the difference in ground motion between sampled GMMs i and j in the two-dimensional Euclidean space. It is important to understand that the Sammons approach does not alter the GMMs in any way; it only attempts to maintain the same relative distances between individual GMMs in the low-dimensional space as in the high-dimensional space.
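The Sammons stress of Equation 4-8 can be evaluated directly from the two point sets. A minimal sketch follows; the full algorithm iteratively moves the two-dimensional points to minimize this quantity.

```python
import numpy as np

def sammon_stress(high, low):
    """Equation 4-8: mismatch between pairwise distances in the original
    high-dimensional space and the projected two-dimensional space."""
    n = high.shape[0]
    iu = np.triu_indices(n, k=1)  # each pair (i, j) counted once
    d_high = np.sqrt(((high[:, None, :] - high[None, :, :]) ** 2).sum(-1))[iu]
    d_low = np.sqrt(((low[:, None, :] - low[None, :, :]) ** 2).sum(-1))[iu]
    return ((d_high - d_low) ** 2 / d_high).sum() / d_high.sum()
```

A projection that preserves all pairwise distances exactly has zero stress; any distortion of the relative distances increases the stress.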

Figure 4-7 shows the 1 Hz Sammons map for 10,000 models sampled from the multivariate normal distribution, projected from 154 dimensions down to two dimensions. The red dots in the figure show where the four seed models fall in the ground motion space relative to the sampled models. The results of the Sammons map are considered a representation of the high-dimensional ground motion space; therefore, each map (19 maps, one for each oscillator frequency) approximates the joint distribution that describes the epistemic uncertainty in ground motions. Because the joint distribution is multivariate normal, the resulting two-dimensional projection is also normally distributed. This is important in understanding how an appropriate discretization of the Sammons map results in an approximation of the CBR of the ground motion space. Checks are also made on the first two principal components to provide confidence that the desired level of uncertainty in the full distribution is being captured. While not necessary for discretization, the maps can be rotated so that each map is oriented in a consistent way, allowing for visualization of magnitude and distance scaling effects in the ground motion space.

This is accomplished by adding what are referred to as signposts to the maps. The signposts added to the maps are as follows:

S- and S+ are scaled versions of the average model (mix) of the ground motion space, used for a consistent orientation of the maps

M- and M+ represent the average model scaled with change in the direction of magnitude scaling, defined at M = 6.5 with ΔM = -0.4 and +0.4, respectively

R- and R+ represent the average model scaled with change in the direction of distance scaling, defined at R = 100 km

Figure 4-7 shows the Sammons map without signposts, and Figure 4-8 shows the same map with the signposts included.

Figure 4-7. Sammons Map covered by 10,000 Models (grey dots) and Seed Models (red dots) for the 1 Hz case.

Figure 4-8. Sammons Map covered by 10,000 Models (grey dots) and Seed Models (red dots) with Signposts for 1 Hz case.

4.8 Discretization of Ground Motion Space

Understanding that the models in Sammons space represent a two-dimensional normal (Gaussian) distribution of ground motions makes it possible to discretize the ground motion space using iso-contour lines representing the 10th (center), 75th (body), and 95th (range) percentiles of the ground motion space. The iso-contour lines are constructed with the understanding that a confidence interval for a two-dimensional normal distribution can be represented by an error ellipse (Figure 4-9). The ellipses are constructed by computing the Rayleigh inverse cumulative distribution function for the 10th, 75th, and 95th percentiles, resulting in scale factors of 0.46, 1.66, and 2.45, respectively, that are then applied to the standard deviations of the two-dimensional distribution to form the ellipses.
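The three scale factors follow directly from the Rayleigh inverse cumulative distribution function: for a bivariate standard normal, the radius enclosing probability p is r = sqrt(-2 ln(1 - p)).

```python
import math

# Rayleigh inverse CDF: radius (in standard deviations) of the ellipse that
# encloses probability p of a bivariate standard normal distribution.
def rayleigh_icdf(p):
    return math.sqrt(-2.0 * math.log(1.0 - p))

# Evaluated at the 10th, 75th, and 95th percentiles, these reproduce the
# 0.46, 1.66, and 2.45 scale factors cited above (to the stated precision).
scales = [rayleigh_icdf(p) for p in (0.10, 0.75, 0.95)]
```

Each factor multiplies the standard deviations of the two-dimensional distribution along its principal axes to produce the corresponding iso-contour ellipse.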

The ellipses are considered to capture the CBR of the ground motions but are not sufficient to discretize the ground motion distribution. Therefore, the TI Team adopted the approach used by the NGA-East project that partitioned the three ellipses into 17 individual cells (Figure 4-10 and Figure 4-11), where the central ellipse defines a single cell, and the outer two ellipses create areas that are partitioned into 16 cells based on equal angular distances of 45 degrees. The NGA-East TI Team investigated several partitioning schemes and determined that an ellipse discretized by a total of 17 cells was sufficient and stable in capturing the epistemic uncertainty.

The final step in discretizing the ground motion space is to choose a representative median model from each cell to capture the epistemic uncertainty in the ground motion distribution.

Figure 4-9. Sammons Map covered by 10,000 Models (grey dots) and Seed Models (red dots) with Signposts and 10%, 75%, and 95% Iso-Contours for 1 Hz case.

Figure 4-10 Discretized Sammons Map covered by 10,000 Models (grey dots) and Seed Models (red dots) with Signposts for the 1 Hz case.

Figure 4-11 Discretized Sammons Map covered by 10,000 Models (grey dots) and Seed Models (red dots) with Signposts for the 10 Hz case.

4.9 Final Median Models

The final models to be used in PSHA calculations are constructed by computing the arithmetic average of the models in each cell for all magnitude and distance scenarios. However, each frequency is treated independently, which can result in the median model response spectra for a given magnitude and distance scenario exhibiting a somewhat jagged shape between adjacent frequencies (Figure 4-12, circles). This can be attributed to the varying magnitude and distance scaling effects for individual frequencies. These effects can appear more significant if the frequency resolution used to compute the median models is sparse, making it appear as if the response spectra are significantly different over large frequency ranges. Similar jagged response spectra were seen in the initial NGA-East GMM results. The NGA-East TI Team chose to smooth the median models using a functional form. For this study, the TI Team instead computed the average spectral shape from all 17 median models, resulting in a smooth shape that was then anchored to the peak ground acceleration of each median model (Figure 4-12, solid line). This approach was used because the median models for all oscillator frequencies exhibited a shape similar to the mean model for all magnitude and distance scenarios. Figure 4-13 shows an example of the final smoothed spectra for all 17 median models along with the seed models. There is a set of 17 median predictions for the range of magnitude and distance combinations described above for each of the 19 oscillator frequencies. The final smoothed median models for all oscillator frequencies are provided in Appendix C of the companion SSHAC Level 1 report for the study site (Stamatakos et al., 2025).

The approach used to weight the median models is identical to the most heavily weighted approach used in NGA-East: for each cell from which a median model was computed, the fraction of sampled models occupying that cell, relative to the total number of sampled models generated for the Sammons map, defines the weight applied to that model. NGA-East also considered two alternative data-based approaches to weighting the median models, the maximum likelihood and minimized residuals approaches, each given 0.1 weight. These two approaches were weighted low due to the NGA-East TI Team's assessment that heavily weighting such models would lead to a distribution of GMMs that was too narrow because of the narrow range of available data. While there is significantly more recorded data available in the WUS than in the CEUS, the TI Team judged that significant gaps remain in WUS recorded data across the magnitude and distance scenarios used to develop the GWUS GMM; data-based approaches would therefore have been assigned low weight and would have had little impact on the final weights.
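The cell-occupancy weighting can be sketched in a few lines; the 17 cell counts below are hypothetical, not the study's actual occupancies.

```python
import numpy as np

# Heaviest-weighted approach in miniature: each median model's weight is the
# fraction of all sampled GMMs landing in its Sammons-map cell. These 17
# counts are illustrative placeholders that sum to the study's 10,000 samples.
cell_counts = np.array([900, 700, 650, 600, 580, 560, 540, 620, 640, 660,
                        560, 540, 520, 500, 480, 470, 480])
weights = cell_counts / cell_counts.sum()  # sums to 1 by construction
```

Because the sampled GMMs are distributed according to the approximated joint distribution, cells near the center of the map accumulate more samples and their median models receive proportionally higher weight.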

Figure 4-12 Median Models for M = 6, Rrup = 20 km. Circles Represent Median Values Computed from the Discretized Ground Motion Space and the Solid Black Line Represents the Mean Shape Scaled to Each Median Model's PGA.

Figure 4-13 The 17 Final Median Models and Seed Models for M = 6, Rrup = 20 km.

4.10 Comparison with Previous Studies

As stated in Section 4.1, the first purpose for developing a generic GMM was to provide a model that could be easily implemented in WUS PSHA studies to improve the efficiency of future SSHAC studies. Sections 4.2 through 4.9 describe the development of the GWUS GMM. The second purpose for developing the GWUS GMM was to capture the CBR of the expected ground motions in the WUS. Figure 4-14 through Figure 4-17 show comparisons between the GWUS GMM and the other SSHAC Level 3 studies. These figures show that the GWUS GMM captures a wider body and range than the previously developed SSHAC GMMs for the WUS. As seen in the figures, at a distance of 100 km the DCPP GMM shows a larger range than the GWUS GMM. However, as stated in Section 4.4, the DCPP GMM was developed for estimating ground motions < 50 km from the site, and therefore ground motions predicted by the DCPP model at 50 km and beyond may not be accurately reflected. With this caveat, the TI Team expects the body and range of ground motions from the GWUS GMM to be larger than those of site-specific GMMs developed in the WUS.

Figure 4-14 Comparison of the center (line inside colored box), body (colored box), and range (whiskers) of ground motions from the GWUS and previous SSHAC Level 3 studies for a 1 Hz, magnitude 6 event at distances of 1, 5, 10, 20, 50, and 100 km.

Figure 4-15 Comparison of the center (line inside colored box), body (colored box), and range (whiskers) of ground motions from the GWUS, NSHM, and previous SSHAC Level 3 studies for a 10 Hz, magnitude 6 event at distances of 1, 5, 10, 20, 50, and 100 km.

Figure 4-16 Comparison of the center (line inside colored box), body (colored box), and range (whiskers) of ground motions from the GWUS, NSHM, and previous SSHAC Level 3 studies for a 1 Hz, magnitude 7 event at distances of 1, 5, 10, 20, 50, and 100 km.

Figure 4-17 Comparison of the center (line inside colored box), body (colored box), and range (whiskers) of ground motions from the GWUS, NSHM, and previous SSHAC Level 3 studies for a 10 Hz, magnitude 7 event at distances of 1, 5, 10, 20, 50, and 100 km.

4.11 Median Adjustments

The spectral accelerations obtained from the median models must be adjusted to account for reverse and normal faulting and for hanging wall effects if the site is located on the hanging wall of a fault. The TI Team adopted median adjustments developed for the SWUS GMC SSHAC Level 3 project (PG&E, 2015), as described next. The adjustments are natural log (ln) based and are intended to be added to the median base model predictions.

4.11.1 Reverse Faulting Adjustment

The TI Team used the reverse faulting adjustment distribution from the SWUS GMC SSHAC Level 3 project (PG&E, 2015) to develop three alternative reverse adjustments, based on the reverse fault adjustment for the SWUS-Diablo Canyon (SWUS-DC) GMMs. Figure 4-18 shows the SWUS-DC reverse adjustment terms, which span a range of approximately 0 to 0.27 for 1 Hz and 0 to 0.35 for 10 Hz. The TI Team used the weights associated with each of the SWUS-DC adjustment terms and developed a cumulative distribution from which the 10th, 50th, and 90th percentile adjustment values were obtained; these are referred to as R1, R2, and R3, respectively.
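The percentile extraction can be sketched as a weighted cumulative distribution lookup; the adjustment values and weights below are illustrative placeholders, not the SWUS-DC terms.

```python
import numpy as np

# Sketch of deriving R1/R2/R3: build the weighted cumulative distribution of
# a model's adjustment terms and read off the 10th, 50th, and 90th
# percentiles. These adjustment values and weights are illustrative only.
adjustments = np.array([0.00, 0.05, 0.10, 0.20, 0.27])
weights = np.array([0.10, 0.30, 0.30, 0.20, 0.10])

order = np.argsort(adjustments)
cdf = np.cumsum(weights[order])

def weighted_percentile(p, tol=1e-9):
    # First adjustment whose cumulative weight reaches the target percentile
    # (tol guards against floating-point round-off in the cumulative sum).
    return float(adjustments[order][np.argmax(cdf >= p - tol)])

r1, r2, r3 = (weighted_percentile(p) for p in (0.10, 0.50, 0.90))
```

The same procedure, applied per oscillator frequency, would also yield the N1/N2/N3 normal faulting terms described in Section 4.11.2.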

These three reverse adjustment terms are also shown in Figure 4-18, along with the INL reverse adjustment. Although the INL reverse adjustment is close to or exceeds the 90th percentile of the SWUS reverse adjustments, reverse adjustments from most NGA-W2 ground motion models are close to zero. Therefore, the TI Team believes that the CBR of the adjustments is being captured. Reverse fault adjustment terms for the GWUS GMM are provided in Table 4-1.

Table 4-1 Reverse fault adjustment terms for the GWUS ground motion model, where R1, R2, and R3 represent the 10th, 50th, and 90th percentiles of the SWUS-DC GMM adjustment factors.

Frequency (Hz)  Period (s)  R1        R2        R3
0.1             10          0.000302  0.015157  0.108215
0.133           7.5         0.000302  0.015157  0.108215
0.2             5           0.000302  0.015157  0.108215
0.25            4           0.000302  0.015157  0.108215
0.333           3           0.000302  0.015157  0.108215
0.5             2           0.000432  0.037095  0.160651
0.667           1.5         0.002168  0.028611  0.121245
1               1           0.003420  0.021800  0.121195
1.333           0.75        0.004641  0.085697  0.199768
2               0.5         0.009102  0.111115  0.223088
2.5             0.4         0.021596  0.088255  0.151346
3.333           0.3         0.004454  0.057373  0.203251
4               0.25        0.003873  0.054550  0.133430
5               0.2         0.001067  0.074359  0.122112
6.667           0.15        0.002075  0.025637  0.134542
10              0.1         0.001296  0.034661  0.157248
13.333          0.075       0.006006  0.022625  0.089351
20              0.05        0.000987  0.067738  0.114926
33.333          0.03        0.004203  0.038320  0.129958
50              0.02        0.009144  0.031798  0.168838
100             0.01        0.005195  0.051177  0.124411

Figure 4-18 Reverse fault adjustment terms for the INL, SWUS-DC, and GWUS GMMs for a) 1 Hz and b) 10 Hz, where the solid blue lines for the GWUS reverse adjustment term represent the 10th, 50th, and 90th percentiles

4.11.2 Normal Faulting Adjustment

The TI Team used the normal faulting adjustment distribution from the SWUS GMC SSHAC Level 3 project (GeoPentech, 2015) to develop three alternative normal adjustments, based on the normal fault adjustment for the SWUS-Palo Verde (SWUS-PV) GMMs, which is implemented only when the faulting mechanism is normal. Figure 4-19 shows the SWUS-PV normal adjustment terms, which span a range of approximately -0.4 to 0 for 1 Hz and -0.48 to 0 for 10 Hz. The TI Team used the weights associated with each of the SWUS-PV normal adjustment terms and developed a cumulative distribution from which the 10th, 50th, and 90th percentile adjustment values were obtained; these are referred to as N1, N2, and N3, respectively.

These three normal adjustment terms are also shown in Figure 4-19. The INL normal adjustment is compared with the GWUS adjustments in Figure 4-20. The GWUS normal adjustments are similar to the INL normal adjustments but capture a slightly larger range. While INL used a larger dataset of normal-faulting ground motions to capture the CBR of the TDI for its normal adjustment, the TI Team decided to use the slightly larger range of normal adjustments captured by the SWUS-PV model. The TI Team believes that the SWUS-PV normal adjustment epistemic uncertainty is adequate for use in a generic model for the WUS. Normal faulting adjustment terms for the GWUS GMM are provided in Table 4-2.

Table 4-2 Normal fault adjustment terms for the GWUS ground motion model, where N1, N2, and N3 represent the 10th, 50th, and 90th percentiles of the SWUS-PV GMM adjustment factors.

Frequency (Hz)  Period (s)  N1        N2        N3
0.1             10          -0.16870  -0.01728  -0.00219
0.133           7.5         -0.16870  -0.01728  -0.00219
0.2             5           -0.16870  -0.01728  -0.00219
0.25            4           -0.16870  -0.01728  -0.00219
0.333           3           -0.16870  -0.01728  -0.00219
0.5             2           -0.19343  -0.01584  -0.00465
0.667           1.5         -0.20424  -0.04854  -0.00833
1               1           -0.18800  -0.04080  -0.00331
1.333           0.75        -0.24504  -0.10107  -0.01033
2               0.5         -0.25518  -0.10962  -0.01153
2.5             0.4         -0.28128  -0.05578  -0.00378
3.333           0.3         -0.18701  -0.06297  -0.00734
4               0.25        -0.19393  -0.09170  -0.01876
5               0.2         -0.27009  -0.05637  -0.00934
6.667           0.15        -0.27141  -0.13781  -0.03413
10              0.1         -0.36785  -0.12636  -0.05638
13.333          0.075       -0.32414  -0.12909  -0.03166
20              0.05        -0.33524  -0.13345  -0.05968
33.333          0.03        -0.26846  -0.13411  -0.05569
50              0.02        -0.21402  -0.12851  -0.02315
100             0.01        -0.42561  -0.20510  -0.06600

Figure 4-19 Normal adjustment terms for the SWUS-PV and GWUS GMMs for a) 1 Hz and b) 10 Hz, where the solid orange lines for the GWUS normal adjustment term represent the 10th, 50th, and 90th percentiles of the SWUS adjustment term

Figure 4-20 Comparison of the INL and GWUS normal adjustment terms for a) 1 Hz and b) 10 Hz

4.11.3 Hanging Wall Adjustment

The hanging wall adjustment developed for the SWUS GMC SSHAC Level 3 project (GeoPentech, 2015) was used to capture hanging wall effects for the GWUS model. The hanging wall adjustment captures the following five aspects of hanging wall effects:

1. Hanging wall effects directly above the rupture plane for faults rupturing to the surface,
2. The effect of moving off the hanging wall,
3. The effect of depth to the top of the fault rupture,
4. The effect of magnitude, and
5. The effect of the dip of the rupture plane.

The following function is used to capture the hanging wall effect:

f_HW = C1 cos(δ) [1 + tanh(C2 R_x cos(δ) / (W + C3))] [1 + C4 (M − 7)] T_HW1(R_JB, R_rup) T_HW2(Z_TOR)   (4-9)

where M is magnitude, δ is the dip angle of the fault rupture, W is the fault rupture width, R_x is the horizontal distance to the top edge of the rupture measured perpendicular to the strike, T_HW1(R_JB, R_rup) is a taper function to account for moving away from the hanging wall, and T_HW2(Z_TOR) is a taper function to account for the effect of depth to the top of the rupture. The taper functions are provided below.

T_HW1(R_JB, R_rup) = 1 − R_JB / (R_rup + 0.1)   (4-10)

T_HW2(Z_TOR) = max(0, 1 − Z_TOR / 12)   (4-11)

where R_JB is the horizontal distance to the surface projection of the rupture, R_rup is the closest distance to the rupture plane, and Z_TOR is the depth to the top of the rupture.
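The two taper functions can be sketched directly; this is a reading of Equations 4-10 and 4-11 as stated in the text, and the exact published forms should be confirmed against the SWUS GMC report (GeoPentech, 2015).

```python
# Sketch of the Equation 4-10 and 4-11 taper functions, as read from the
# surrounding text; verify against the SWUS GMC report before use.
def taper_off_hanging_wall(r_jb, r_rup):
    # Decays the hanging wall effect as the site moves off the hanging wall
    # (r_jb grows); r_jb = 0, directly over the rupture, gives no reduction.
    return 1.0 - r_jb / (r_rup + 0.1)

def taper_ztor(z_tor):
    # Linearly removes the hanging wall effect as the depth to the top of
    # rupture approaches 12 km; deeper ruptures get no hanging wall term.
    return max(0.0, 1.0 - z_tor / 12.0)
```

Both tapers equal 1 for a surface-rupturing fault with the site directly above the rupture, so the full C1-scaled effect applies there and fades smoothly with off-hanging-wall distance and rupture depth.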

PG&E (2015) developed five equally likely hanging wall factors. The five hanging wall factors implement a model-dependent coefficient C1, while the coefficients C2, C3, and C4 are held constant for all five models. The TI Team reduced the five model-dependent coefficients to three. Given that the epsilon values for the five equally weighted probability bins are -1.5, -0.5, 0, 0.5, and 1.5 (PG&E, 2015), an associated standard deviation was computed for models 1, 2, 4, and 5. The average standard deviation from these four models was then computed and smoothed. The smoothed standard deviation with epsilon values of -1.282, 0, and 1.282, corresponding to the 10th, 50th, and 90th percentiles of a normal distribution, was used to determine the three model-dependent coefficients. The TI Team assigned weights of 0.3, 0.4, and 0.3 to these three models. The coefficients for the models are provided in Table 4-3. A comparison of the five and three hanging wall model adjustments is shown in Figure 4-21, which shows that the three models capture the same CBR as the five models.

When using the hanging wall adjustment, the TI Team implemented the three hanging wall models with the 17 median GMMs by randomly selecting one of the three hanging wall models for each median GMM. Due to the weighting of the models, model 1 is associated with five GMMs, model 2 with seven GMMs, and model 3 with five GMMs.

Table 4-3 Coefficients for the GWUS hanging wall adjustment model

                            Model-dependent C1 Coefficients   Coefficients held constant for all three models
Frequency (Hz)  Period (s)  Model HW1  Model HW2  Model HW3   C2      C3      C4
0.1             10          0          0          0           0.1616  1.6740  0.3314
0.133           7.5         0          0          0           0.1616  1.6740  0.3314
0.2             5           0          0          0           0.1616  1.6740  0.3314
0.25            4           0          0.088      0.217       0.1616  1.6740  0.3314
0.333           3           0.154      0.304      0.454       0.1616  1.6740  0.3314
0.5             2           0.424      0.609      0.794       0.1559  1.7996  0.3246
0.667           1.5         0.540      0.740      0.940       0.1559  1.8336  0.3195
1               1           0.658      0.872      1.086       0.1571  1.8526  0.3143
1.333           0.75        0.779      0.997      1.215       0.1713  1.8697  0.1866
2               0.5         0.776      0.982      1.188       0.2053  2.0041  0.1719
2.5             0.4         0.812      1.011      1.210       0.2090  2.0249  0.1624
3.333           0.3         0.869      1.041      1.213       0.2019  2.0179  0.1658
4               0.25        0.894      1.044      1.194       0.1988  1.9931  0.1767
5               0.2         0.883      1.082      1.281       0.2131  1.9746  0.1834
6.667           0.15        0.895      1.080      1.265       0.2169  2.0162  0.1814
10              0.1         0.890      1.135      1.380       0.2213  1.9974  0.1717
13.333          0.075       0.896      1.133      1.370       0.2218  1.9906  0.1817
20              0.05        0.880      1.121      1.362       0.2199  1.9870  0.1699
33.333          0.03        0.886      1.067      1.248       0.2178  2.0163  0.1670
50              0.02        0.893      1.046      1.199       0.2172  2.0260  0.1666
100             0.01        0.893      1.038      1.183       0.2160  2.0289  0.1675

Figure 4-21 Hanging wall adjustment term

4.11.4 Implementation of Adjustments with the GWUS Median Model

Three adjustments have been developed to modify the GWUS median model to account for reverse and normal focal mechanisms and for hanging wall effects. The TI Team implemented these models by randomly assigning adjustment terms to the 17 median GMMs with consideration for the adjustment weights of 0.3, 0.4, and 0.3. This results in five R1, N1, and HW1 adjustment terms, seven R2, N2, and HW2 adjustment terms, and five R3, N3, and HW3 adjustment terms being randomly assigned to the 17 median models, with a unique assignment of adjustment terms for each frequency. An example assignment of adjustment terms for 1 Hz is shown in Table 4-4. An example simplified median GMM logic tree with inclusion of adjustment factors for a normal fault with a site on the hanging wall is illustrated in Figure 4-21.
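The random assignment under the 0.3/0.4/0.3 weights can be sketched as a shuffle of a fixed 5/7/5 pool for each adjustment type; the seed and naming below are illustrative, not the study's actual draw.

```python
import random

# Sketch of randomly assigning adjustment levels to the 17 median GMMs under
# the 0.3/0.4/0.3 weights: five models receive level 1, seven level 2, and
# five level 3, shuffled independently for each adjustment type.
rng = random.Random(1)  # illustrative seed

def assign_levels(prefix):
    pool = [f"{prefix}1"] * 5 + [f"{prefix}2"] * 7 + [f"{prefix}3"] * 5
    rng.shuffle(pool)
    return pool  # pool[i] is the term assigned to median model i + 1

assignment = {name: assign_levels(p)
              for name, p in (("FN", "N"), ("FRV", "R"), ("FHW", "HW"))}
```

One such draw per oscillator frequency produces a table of the form shown in Table 4-4, with each adjustment level appearing exactly 5, 7, and 5 times.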

Table 4-4 Adjustment terms applied to the GWUS median GMMs at 1 Hz.

Median Model (1 Hz)  FN  FRV  FHW
1                    N3  R1   HW2
2                    N2  R2   HW3
3                    N1  R1   HW2
4                    N3  R1   HW1
5                    N2  R2   HW1
6                    N1  R3   HW2
7                    N2  R2   HW3
8                    N1  R2   HW2
9                    N1  R3   HW2
10                   N2  R3   HW1
11                   N3  R2   HW1
12                   N1  R3   HW3
13                   N2  R3   HW2
14                   N3  R2   HW3
15                   N2  R2   HW2
16                   N3  R1   HW3
17                   N2  R1   HW1

Figure 4-21 Simplified adjusted median GMM logic tree for a normal fault with the site on the hanging wall.

To evaluate the effect of randomly assigning adjustment terms on the distribution of ground motion, the TI Team compared ground motion fractiles from the simplified logic tree described above with fractiles from a full logic tree for a Stansbury fault scenario (a normal fault) with the site located on the hanging wall. The Stansbury fault was a significant source of ground motion evaluated in the SSHAC Level 1 Demonstration Project (Stamatakos et al., 2025). The full logic tree consisted of 17 median GMMs, three normal adjustment branches, and three hanging wall adjustment branches, for a total of 153 adjusted median ground motion branches. Figure 4-22 compares the 5th, 16th, 50th, 84th, and 95th percentile median ground motions from the simplified and full logic trees. Because of the general agreement of the fractiles shown in this comparison, the TI Team concluded that the simplified logic tree captures the CBR of ground motions.

Figure 4-22 Adjusted median ground motion distributions represented by the 5th, 16th, 50th, 84th, and 95th percentiles for the simplified and full ground motion logic trees. The whiskers represent the 5th and 95th percentile motions, the lower and upper bounds of the box represent the 16th and 84th percentile median motions, and the line in the middle of the box represents the 50th percentile motion.

4.12 Sigma Model

This section summarizes recommendations for a generic model for aleatory variability (sigma) for use in the WUS. The proposed model is that of Al Atik (2015), which was developed for the SSHAC Level 3 NGA-East project (Goulet et al., 2018) and was adopted, with little to no variation, in several recent SSHAC projects, including the SWUS SSHAC Level 3 project (GeoPentech, 2015) and the INL SSHAC Level 3 project (INL, 2022).

The aleatory variability model is postulated within the framework of a partially non-ergodic seismic hazard analysis (Anderson and Brune, 1999), which implies the use of a single-station sigma model. Section 4.12.1 introduces the concept of a partially non-ergodic PSHA. The elements of the proposed sigma model are also presented in Section 4.12.1. The full sigma model, including the sigma model logic tree, is presented in Section 4.12.2.

4.12.1 Background on Partially Non-ergodic PSHA

The PSHA for a site-specific study entails the prediction of future ground motions over time for the site under analysis. To this effect, the GMM used in the PSHA is tailored to the source, path, and site characteristics of the given site. The modification for site characteristics is achieved either via the use of recordings at the site or, more commonly, through modeling via site response analyses (SRA) (Rodriguez-Marek et al., 2021). The objective of the SRA is to estimate the repeatable site effects at the site of interest. When these repeatable effects are accounted for in a PSHA, the analysis is said to be partially non-ergodic (the adjective partially is used because non-ergodicity can also result from accounting for repeatable source effects). In contrast, the aleatory variability in common GMMs is estimated within an ergodic framework, where variability in time (e.g., from earthquake to earthquake) and space (e.g., from site to site) are both incorporated into the aleatory variability (Rodriguez-Marek et al., 2014; Stewart et al., 2017).

The residuals of an ergodic GMM, δ_es, are given by

δ_es = δB_e + δS2S_s + δWS_es   (4-12)

where the subscripts denote an observation for event e at station s, δB_e is the repeatable event term, δS2S_s represents the systematic deviation of the observed ground motion at site s (i.e., the site term) from the median event-corrected ground motion predicted by the GMM, and δWS_es is the site- and event-corrected residual. The standard deviations of the δB_e, δS2S_s, and δWS_es terms are denoted by τ, φ_S2S, and φ_SS, respectively. Table 4-5 lists the components of the total residual, their respective standard deviations, and the terminology used for each standard deviation component.

Table 4-5 Terminology used for residual components and their standard deviations. SD denotes the standard deviation operator (from PNNL, 2014).

Residual Component                  Residual Notation  Standard Deviation Component  Definition of Standard Deviation Component
Total residual                      δ_es               σ                             Total or ergodic standard deviation
Event term                          δB_e               τ                             Between-event (or inter-event) standard deviation (tau)
Event-corrected residual            δW_es              φ                             Within-event (or intra-event) standard deviation (phi)
Site term                           δS2S_s             φ_S2S                         Site-to-site variability
Site- and event-corrected residual  δWS_es             φ_SS                          Event-corrected single-station standard deviation (single-station phi)

In traditional (i.e., ergodic) PSHA, all residual components are considered part of the aleatory variability, such that:

σ = √(τ² + φ_S2S² + φ_SS²)    (4-13)

In the partially non-ergodic approach, the site term (δS2S_s) is assumed to be known (or knowable); hence, its standard deviation (φ_S2S) is excluded from the aleatory variability in Equation 4-14. In this case, the standard deviation is known as the single-station standard deviation and is given by:

σ_SS = √(τ² + φ_SS²)    (4-14)

The principal motivation to adopt a single-station sigma approach for this project is to avoid double counting uncertainty. This double counting would result if the site-to-site variability (φ_S2S) were included in the total sigma while the site term is also assigned an epistemic uncertainty. An additional motivation for the adoption of a single-station sigma approach is that the value of single-station phi (φ_SS) has proven to be relatively constant across different regions and tectonic environments (Rodriguez-Marek et al., 2013).

The single-station sigma values are lower than their ergodic counterparts, but the use of single-station sigma requires that the median site term (δS2S_s) be estimated, along with its epistemic uncertainty. The emphasis on this requirement is important: without a proper estimate of the median and the epistemic uncertainty of δS2S_s, it is not possible to use the lower single-station sigma values. In addition, the value of single-station sigma itself has epistemic uncertainty that needs to be accounted for.
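The reduction from the ergodic to the single-station standard deviation follows directly from Equations 4-13 and 4-14 and can be illustrated numerically (the component values below are assumed for illustration and are not from this study):

```python
from math import sqrt

# Assumed illustrative values in natural-log units (not project-specific)
tau     = 0.40   # between-event standard deviation
phi_ss  = 0.45   # single-station phi
phi_s2s = 0.35   # site-to-site variability

sigma_ergodic = sqrt(tau**2 + phi_ss**2 + phi_s2s**2)  # Equation 4-13
sigma_ss      = sqrt(tau**2 + phi_ss**2)               # Equation 4-14

# sigma_ss is lower because the repeatable site effect is modeled
# explicitly (with epistemic uncertainty) rather than treated as aleatory.
```

With these assumed values the ergodic sigma is about 0.70 while the single-station sigma is about 0.60, which is representative of the order of reduction discussed above.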

When estimates of the site term (δS2S_s) are made via site response modeling, the epistemic uncertainty of δS2S_s must be estimated using site response logic trees that account for the uncertainty in the input parameters to the site response analyses. In addition, the epistemic uncertainty of δS2S_s must also account for the modeling error in the site response analyses. Approaches to capture these uncertainties are discussed in detail in Rodriguez-Marek et al. (2021a; see also Rodriguez-Marek et al., 2021b).

Sigma Model Elements

The proposed model for aleatory variability (sigma) has the following elements:

- a model for the median values of φ_SS and τ,
- a model for the uncertainty in φ_SS and τ, and
- a model for the shape of the distribution of residuals.

In addition, the final model accounts for the effects of spatial correlation on the estimate of aleatory variability. The models for φ_SS and τ are based on the Al Atik (2015) model, as adopted in the SSHAC Level 3 project for the INL (INL, 2021). Al Atik (2015) developed the model from the residuals of four NGA West2 models as part of the SSHAC Level 3 project for the CEUS (Goulet et al., 2018). These elements are discussed in the following subsections.

Median and aleatory variability model for φ_SS

The selected φ_SS is magnitude dependent and is given by:

φ_SS(M) = a                            for M ≤ 5.0
φ_SS(M) = a + (b − a)(M − 5.0)/1.5     for 5.0 < M < 6.5
φ_SS(M) = b                            for M ≥ 6.5    (4-15)

where a and b are model coefficients and correspond to the values of φ_SS at the magnitude breakpoints (M 5.0 and M 6.5, respectively) shown in Figure 4-23.
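The tri-linear form of Equation 4-15 can be written as a small function; the coefficients a and b are placeholders for the Figure 4-23 values, which are not reproduced here:

```python
def phi_ss_model(mag, a, b):
    """Single-station phi vs. magnitude, Equation 4-15.
    a and b are the phi_SS values at the M 5.0 and M 6.5 breakpoints."""
    if mag <= 5.0:
        return a
    if mag < 6.5:
        # linear interpolation between the two breakpoints
        return a + (b - a) * (mag - 5.0) / 1.5
    return b
```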

The model for the epistemic uncertainty of φ_SS is expressed in terms of the variance component (i.e., φ_SS²). The standard deviation of φ_SS² was computed by Al Atik (2015) and is given in Figure 4-24.

Figure 4-23 Coefficients a and b for the φ_SS model (from Al Atik, 2015).

Figure 4-24 Uncertainty in φ_SS at the magnitude breakpoints of M 5.0 and M 6.5 (coefficients a and b, respectively, for the φ_SS model) (from Al Atik, 2015).

Median and aleatory variability model for τ

The proposed model for the between-event aleatory variability (τ) is given by:

τ(M) = τ1                              for M ≤ 4.5
τ(M) = τ1 + (τ2 − τ1)(M − 4.5)/0.5     for 4.5 < M ≤ 5.0
τ(M) = τ2 + (τ3 − τ2)(M − 5.0)/0.5     for 5.0 < M ≤ 5.5
τ(M) = τ3 + (τ4 − τ3)(M − 5.5)/1.0     for 5.5 < M ≤ 6.5
τ(M) = τ4                              for M > 6.5    (4-16)

where M is magnitude and τ1, τ2, τ3, and τ4 are the model coefficients at the magnitude breakpoints of 4.5, 5.0, 5.5, and 6.5, respectively. These coefficients are shown in Figure 4-25.
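The piecewise-linear form of Equation 4-16 can likewise be sketched as a function; the coefficients t1 through t4 stand in for the Figure 4-25 values:

```python
def tau_model(mag, t1, t2, t3, t4):
    """Between-event tau vs. magnitude, Equation 4-16.
    t1..t4 are the tau values at the M 4.5, 5.0, 5.5, and 6.5 breakpoints."""
    if mag <= 4.5:
        return t1
    if mag <= 5.0:
        return t1 + (t2 - t1) * (mag - 4.5) / 0.5
    if mag <= 5.5:
        return t2 + (t3 - t2) * (mag - 5.0) / 0.5
    if mag <= 6.5:
        return t3 + (t4 - t3) * (mag - 5.5) / 1.0
    return t4
```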

A particularity of the tau model is that the values of tau are frequency independent. The parameters are obtained by first averaging four NGA West2 models (Abrahamson et al., 2014; Boore et al., 2014; Campbell and Bozorgnia, 2014; and Chiou and Youngs, 2014) at the magnitude breakpoints (i.e., where tau has a break in magnitude for each individual model) and then averaging the resulting values across frequency. Note in Figure 4-25 that the tau model has, prior to averaging, a peak value at frequencies between 10 and 20 Hz; this peak is attributed to regional kappa differences in California and not to systematically higher uncertainty associated with the source (GeoPentech, 2015; INL, 2021). This line of argumentation led to the decision in past SSHAC Level 3 projects to remove frequency dependency from the tau model.

The epistemic uncertainty in tau is computed from the standard deviation of the variance (τ²) by considering the model-to-model variability in τ between the four NGA West2 models and the within-model variability computed as part of the regression of the Chiou and Youngs (2014) model. The resulting values are shown in Figure 4-26.

Figure 4-25 Coefficients of the τ model. The dashed lines show the values of the frequency-independent coefficients. The thin solid lines are the average values from the NGA West2 models (Al Atik, 2015).

Figure 4-26 Standard deviation of τ² at the four magnitude breakpoints (Al Atik, 2015).

Shape of the distribution

The traditional assumption in the derivation of GMMs is that the distribution of ground motion residuals is log-normal (i.e., the natural log of ground motion residuals is normally distributed).

An evaluation of several datasets in the SWUS SSHAC Level 3 study (Shahi et al., 2015) indicated that the tails of the within-event single-station ground motion residuals (in log space) are heavy and not properly captured by a normal distribution. The SWUS, INL, and Hanford SSHAC Level 3 studies (GeoPentech, 2015; PNNL, 2014; INL, 2021) used a mixture model of two equally weighted normal distributions, with standard deviations given by 1.2φ_SS and 0.8φ_SS, to capture the observed heavy tails. With this mixture model, the conditional probability of exceeding a ground motion level z for ground motion parameter Z can be written as:

P(Z > z) = 0.5 [1 − Φ((z − μ)/σ1)] + 0.5 [1 − Φ((z − μ)/σ2)]    (4-17)

where σ1 and σ2 are the standard deviations obtained by combining 1.2φ_SS and 0.8φ_SS, respectively, with the between-event standard deviation, Φ is the standard normal cumulative distribution function, and μ is the mean of the ground motion parameter. Note that the mixture model is still parameterized by the two components of Equation 4-14 (i.e., the mixture model does not add any parameters to the traditional normal distribution). The Hanford and SWUS studies assigned the mixture model a weight of 0.8 and the traditional normal distribution a weight of 0.2, but the INL study (INL, 2021) assigned the mixture model a weight of 1.0. The latter choice was based on the fact that the normal distribution is not supported by statistical evidence from multiple datasets. Moreover, hazard sensitivity studies for these projects did not show strong sensitivity to the choice of sigma model. For this reason, the proposed sigma model in these projects assumed a mixture model for the shape of the distribution.

4.12.2 Sigma Logic Tree

The elements of the sigma model presented in the previous section are combined by assuming that τ and φ_SS (Table 4-5) are statistically independent (Al Atik, 2015). Thus:

σ_SS² = τ² + φ_SS²    (4-18)

and

σ²[σ_SS²] = σ²[τ²] + σ²[φ_SS²]    (4-19)

where σ²[·] denotes the variance of the estimate of the bracketed quantity. Ang and Tang (2007) indicate that the sample variance of a normal distribution follows a scaled chi-square distribution with ν degrees of freedom, denoted by χ²_ν. This distribution has a mean of σ_SS² and a variance of 2σ_SS⁴/ν. The sigma logic tree is built using a discrete three-point distribution by sampling the scaled χ²_ν distribution at the 5th, 50th, and 95th percentiles, with weights of 0.185, 0.63, and 0.185, respectively (Keefer and Bodily, 1983). The percentiles of the scaled χ²_ν distribution were obtained from (GeoPentech, 2015, Appendix P.1):

σ_SS,central² = c χ²ν⁻¹(0.50)    (4-20)

σ_SS,high² = c χ²ν⁻¹(0.95)    (4-21)

σ_SS,low² = c χ²ν⁻¹(0.05)    (4-22)

where χ²ν⁻¹(p) is the inverse cumulative distribution function of the χ²_ν distribution for percentile p, and c is the scaling factor given by:

c = σ²[σ_SS²] / (2 σ_SS²)    (4-23)

where σ_SS,central, σ_SS,high, and σ_SS,low (the square roots of Equations 4-20 through 4-22) are the values of the central, high, and low single-station sigma branches, and σ[σ_SS²] is the standard deviation of the single-station variance. The degree-of-freedom parameter ν is given by:

ν = 2 (σ_SS² / σ[σ_SS²])²    (4-24)

A final element of the sigma logic tree accounts for the impact of spatial correlation. Jayaram and Baker (2009) observed that ground motion residuals are spatially correlated. Shahi et al. (2015) performed an evaluation of residuals in the NGA West2 database using the Chiou and Youngs (2014) and Abrahamson et al. (2014) datasets and concluded that the impact of spatial correlation results in a net increase of φ_SS of about 5 percent. To account for this bias, the SWUS SSHAC Level 3 Project modified the weights of the three-point distribution to increase the value of σ_SS consistent with the observed increase in φ_SS. These weights are illustrated in the proposed logic tree in Figure 4-27.
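The three-point discretization can be sketched with only the standard library. The scaled chi-square parameterization below follows the mean and variance implied by Equations 4-18 through 4-24 (the exact GeoPentech, 2015, Appendix P.1 implementation may differ in detail), and the chi-square CDF uses the standard series expansion of the regularized lower incomplete gamma function:

```python
from math import exp, lgamma, log

def chi2_cdf(x, k):
    """CDF of the chi-square distribution with k degrees of freedom,
    via the series expansion of the regularized lower incomplete gamma."""
    if x <= 0.0:
        return 0.0
    a, s = 0.5 * k, 0.5 * x
    term = exp(a * log(s) - s - lgamma(a + 1.0))
    total, n = term, 1
    while term > 1e-16 * total and n < 10000:
        term *= s / (a + n)
        total += term
        n += 1
    return min(total, 1.0)

def chi2_ppf(p, k):
    """Inverse chi-square CDF by bisection (k may be non-integer)."""
    lo, hi = 0.0, k + 20.0 * (2.0 * k) ** 0.5 + 20.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if chi2_cdf(mid, k) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def sigma_ss_branches(var_ss, sd_var_ss):
    """Central/high/low single-station sigma branches from the scaled
    chi-square model. var_ss is the central estimate of sigma_SS**2
    (Eq. 4-18); sd_var_ss is its standard deviation (Eq. 4-19)."""
    c = sd_var_ss**2 / (2.0 * var_ss)       # scaling factor, Eq. 4-23
    nu = 2.0 * (var_ss / sd_var_ss) ** 2    # degrees of freedom, Eq. 4-24
    values = {p: (c * chi2_ppf(p, nu)) ** 0.5 for p in (0.05, 0.50, 0.95)}
    weights = {0.05: 0.185, 0.50: 0.63, 0.95: 0.185}
    return values, weights
```

With this parameterization the mean of the scaled chi-square distribution equals the central variance and its variance equals the squared uncertainty from Equation 4-19, so the three sampled branches bracket the central single-station sigma.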

Figure 4-27 Proposed sigma logic tree.

5 SIMPLIFYING SSHAC STUDIES

The SSHAC guidelines in NUREG/CR-6372 specify four levels of SSHAC hazard studies, increasing in complexity from Level 1 to Level 4. Regardless of the level of the study, the objective of a SSHAC study remains the development of a hazard estimate that reflects the CBR of TDI, and all SSHAC studies must include the key features of a SSHAC process (NUREG-2213). An increase in SSHAC level is associated with an increased scope for data collection, an increase in the size of the TI Team(s) and the PPRP, greater resources for the hazard calculation, a generally longer project duration, and broader participation of the technical community. These changes increase the likelihood of effectively capturing the CBR of TDI, which in turn increases the likelihood of regulatory assurance (NRC, 2018).

The reduced complexity of a lower level SSHAC study implies that resources for the development of the SSM, the GMM, and the hazard calculations are reduced. The direct result is that lower level SSHAC projects are likely to draw more heavily on existing data, models, and methods. The larger involvement of the technical community and the additional resources invested in higher level SSHAC studies are evident in the fact that the state of the art in hazard assessment tends to be advanced in higher level SSHAC studies. These advancements include the use of single-station sigma in the PEGASOS Refinement Project (Renault, 2014; Renault et al., 2010) and a new approach to evaluate the host-to-target correction (INL, 2022). It has also become a common feature of higher level SSHAC studies to extend the data compilation to include the generation of ground motion catalogs and the conduct of ground motion inversions using small magnitude data to support the development of the GMM.

The increase in SSHAC level also results in a significant increase in the cost and duration of the study, which incentivizes the use of lower level studies for facilities with lower risk. The teams developing SSMs and GMMs for lower level SSHAC studies still have the onus of properly capturing the CBR of the TDI, which remains a requirement for regulatory assurance. Given the absence of extended ground motion databases and detailed field investigations of active faults, and the limited time and effort invested in the development of new models and methods, this will likely be accomplished by establishing a broader range of epistemic uncertainty, consistent with the data, models, and methods under consideration.

5.1 Capturing Epistemic Uncertainty

The uncertainties in a seismic hazard analysis are generally classified as aleatory variability or epistemic uncertainty. Aleatory variability refers to uncertainty that is inherent to a process and as such cannot be reduced with additional data, whereas epistemic uncertainty refers to uncertainty related to lack of knowledge; thus, it can be reduced with additional data or improved models (Bommer and Abrahamson, 2006; Baker et al., 2021). Both sources of uncertainty must be accounted for in an analysis.

Epistemic uncertainty can arise from multiple sources. The primary driver of epistemic uncertainty is the lack of data to properly constrain a model, leading to alternative interpretations of the most appropriate way to represent the physical processes underlying seismic hazard. Aleatory variability, in contrast, results from variability that is inherent to a process. In some cases, the distinction between epistemic uncertainty and aleatory variability can be difficult to make. However, the distinction is easier when postulated in terms of a model: once a model is selected, the variability that is not captured by changes in the model parameters is aleatory, while uncertainty in the choice of model or in the values of its input parameters is epistemic. In this context, it is the duty of the TI Team members to make the proper distinction (Der Kiureghian and Ditlevsen, 2009).

The distinction between aleatory variability and epistemic uncertainty is also important because the two have different impacts on the hazard. Aleatory variability is incorporated in a hazard analysis via random variables and is accounted for in the hazard integral. Changes in aleatory variability result in changes to the slope of the hazard curve (Figure 5-1). Epistemic uncertainty, on the other hand, is accounted for via logic trees (Kulkarni et al., 1984). Each end-branch in a logic tree results in a different hazard curve. The results are aggregated either via the mean hazard or by considering hazard fractiles (Figure 5-2). The use of logic trees has received much attention in the literature (e.g., Atkinson et al., 2014) due to its importance for hazard analysis. For NPP hazard analyses, the stated purpose of a logic tree is to capture the CBR of the TDI (NRC, 2018).
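The aggregation of end-branch hazard curves into a mean curve and fractiles can be sketched as follows; the branch curves and weights below are illustrative inputs, not results from this study:

```python
import numpy as np

def aggregate_branches(afe_curves, weights, fractiles=(0.05, 0.50, 0.95)):
    """Weighted mean and weighted fractiles of logic-tree end-branch
    hazard curves. afe_curves: (n_branches, n_levels) array of annual
    frequencies of exceedance; weights: branch weights summing to 1."""
    afe_curves = np.asarray(afe_curves, dtype=float)
    w = np.asarray(weights, dtype=float)
    mean_curve = w @ afe_curves   # mean hazard at each ground-motion level

    frac = {}
    for q in fractiles:
        vals = np.empty(afe_curves.shape[1])
        for j in range(afe_curves.shape[1]):
            # weighted empirical quantile across branches, per level
            order = np.argsort(afe_curves[:, j])
            cum_w = np.cumsum(w[order])
            vals[j] = afe_curves[order, j][np.searchsorted(cum_w, q)]
        frac[q] = vals
    return mean_curve, frac
```

Note that the mean hazard curve can exceed the median fractile when a low-weight branch carries a much higher hazard, which is why both aggregations are typically reported.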

The focus of the development of a logic tree for a hazard study at a critical facility must be on hazard-significant issues. The determination of which issues are hazard significant can be made from preliminary studies. For example, the hazard curve from a logic tree that includes a node with a broad range of epistemic uncertainty is compared with a hazard curve that considers only the best-estimate value (or model) from that node. As demonstrated below in Section 5.5, if the hazard curves do not differ significantly, that logic tree node (i.e., the hazard input that is sampled in the node) is not considered hazard significant. Whether an element is hazard significant depends on many considerations, such as the AFE of importance in the hazard, the amplitude of the spectral accelerations, and the oscillator periods under consideration.

Another factor to consider in assessing hazard significance is the relative contribution of the various sources in the SSM. For instance, if source A contributes 80 percent to the mean hazard and source B contributes 20 percent, it is clear that both sources make significant contributions to the mean hazard and must be retained. However, if the uncertainty in the hazard from both sources is similar, a very simplified representation of the epistemic uncertainties in source B (but with the same or nearly identical mean hazard) will produce nearly the same fractiles of the total hazard as the full representation. Care must be exercised in preliminary analyses used to determine the hazard significance of the branches for a logic tree node.

The focus on hazard-significant issues yields several benefits. Primarily, in a lower level SSHAC process, it allows limited resources to be focused on the issues that are the primary drivers of the hazard. In addition, simplified logic trees can lead to non-trivial reductions in computational time.

A common fallacy in constructing logic trees is to build the tree, in the presence of limited data, to fully sample the existing data without considering the potential range of models or parameters that may not be reflected in those limited data. A common example of this fallacy is when only one measurement is available for a certain parameter and a modeler assumes that the measured value is the best estimate of that parameter. To avoid this fallacy, it is essential that, in the absence of data, a logic tree first be built to cover a wide range of uncertainty, which can be reduced only when additional knowledge (e.g., data or physical models) is brought to bear.

An illustrative example of this process is the construction of GMC logic trees. Several recent SSHAC Level 3 projects have adopted a backbone approach (Atkinson et al., 2014), whereby a single model is scaled (the scaling can occur across multiple parametric dimensions) to fully capture the epistemic uncertainty. Generally, the host model is selected from a host region with ample ground motion measurements and is thus a well-constrained GMM. This host model is then scaled with the application of host-to-target conversion approaches so that it covers the range of expected ground motions in the target region. The host-to-target scaling not only changes the median of the GMM, but also ensures that the range of motions covers the epistemic uncertainty in the source and path characteristics of the target region (Boore et al., 2022).

In lower level SSHAC studies, where complex host-to-target conversion approaches are beyond the scope of a given study, it is imperative that the GMM start from a generic model with broad enough uncertainty to capture the CBR of the TDI. Such a model was presented in Section 4. The epistemic uncertainty in the model can be reduced only if additional data are used to constrain the host-to-target conversions.

An alternative approach to reduce epistemic uncertainty is to leverage existing models and data sources from analogous regions, as discussed further in Section 5.2. An element of epistemic uncertainty that cannot rely on generic models is the capture of local site effects, which Section 5.3 discusses in more detail.

Figure 5-1 Impact of aleatory variability (σ) on mean hazard curves [from Baker et al., 2021].

Figure 5-2 Sample representation of epistemic uncertainty from a logic tree [from Baker et al., 2021].

5.2 Leveraging Existing Models and Data Sources

The lengthy time and high cost associated with some past SSHAC studies are due to the additional data collection and the additional effort to develop complex logic trees to capture the CBR of the TDI. Additional data collection often includes independently developing site-specific earthquake catalogs. The TI Team for this study believes that focusing on existing models and data sources to the maximum extent practicable will improve efficiency in future SSHAC studies.

The GWUS GMM from Section 4 is one example of an existing model that can be leveraged for future SSHAC studies in the WUS. USGS data resources for earthquake catalogs and fault characterization should also be evaluated and considered for incorporation into a SSHAC Level 1 study. In general, the TI Team should perform an extensive review of the literature, including regional studies (e.g., the Utah Earthquake Working Group3) and any existing seismic hazard studies in the same tectonic region. Often, sources of these studies are power and energy generation companies as well as agencies such as the U.S. Army Corps of Engineers and the Bureau of Reclamation.

The TI Team expects that many seismic source characterization data sources used for the Skull Valley SSHAC Level 1 study are applicable to other WUS sites that will use a SSHAC Level 1 study. These data sources are listed in Table 5-1. The TI Team found that the NSHM-Fault-Sections repository did not include all faults that significantly contribute to hazard at the Skull Valley site. Therefore, the TI Team recommends reviewing the USGS Interactive Quaternary Fault Database and initially considering that all Quaternary faults may significantly contribute to hazard at the site. The TI Team's survey of available information on fault characterization provided in the USGS data sources showed some variability in information, and the TI Team recommends that future SSHAC Level 1 studies review and evaluate the USGS data sources following the SSHAC process. As indicated previously, it is important to remember that for all elements of the ground motion and source characterization models, epistemic uncertainty should be larger for cases where available data are limited.

3 https://geology.utah.gov/hazards/info/workshops/working-groups/, accessed December 2024.

In the case of the SSHAC Level 1 PSHA study for the Skull Valley site (Stamatakos et al., 2025), the PSHA results highlight the need for a thorough evaluation of data, models, and methods. For the Skull Valley site, the PSHA hazard curves in Figure 7-7 of Stamatakos et al. (2025) show that the two local fault sources (Stansbury and Mid-Valley East) were the main contributors to hazard at both the 10 Hz and 1.0 Hz oscillator frequencies. At the 10⁻⁴ AFE, these two faults were responsible for nearly 90 percent of the total hazard. At the 10⁻³ AFE, these two faults were also the main contributors to hazard, accounting for nearly 50 percent of the total hazard. However, at this higher AFE and for the 10 Hz hazard, two other fault sources (the East Cedar Mountain and Oquirrh faults) also contributed significantly, accounting for nearly 14 percent of the total hazard. These results underscore the benefits of a properly conducted SSHAC Level 1 study. Specifically:

1. Although the Mid-Valley East fault appears in the USGS Quaternary Fault and Fold database, it is not included in the USGS NSHM. The Mid-Valley East fault source would have been missing from the PSHA results had the USGS NSHM been adopted without the careful evaluation by the TI Team in a SSHAC study.
2. Because of the potential to design SSCs for advanced reactors to lower intensity ground motions (e.g., SDC-2, SDC-3, or SDC-4), these results also show that a broader set of faults (including other local faults) needs to be evaluated because of the potential contributions to the hazard at higher AFEs (e.g., 10⁻⁴ to 10⁻³). Like the Mid-Valley East fault, the East Cedar Mountain fault is not included in the USGS NSHM, but it was a contributor to the hazard at the Skull Valley site. Its contribution would have been omitted without the careful evaluation of the TI Team.

SSHAC Level 1 studies for the CEUS should make use of the CEUS SSHAC study (EPRI/DOE/NRC, 2012) for the SSM and NGA-East (Goulet et al., 2018) for the GMM. Since these regional SSHAC Level 3 studies were developed and published, they have been adopted and used for updated and new PSHA studies. In many cases, these updated and new PSHA studies made modifications to the CEUS SSM or NGA-East GMM to account for new data, models, and methods. They included hazard updates developed following the Fukushima accident, as summarized in Munson et al. (2021). There have also been additional SSHAC studies for several DOE facilities, including the Savannah River Site and the Pantex Plant in west Texas.

Should the CEUS SSM or the NGA-East GMM be used in a SSHAC Level 1 or SSHAC Level 2 study, the TI Teams adopting these models have the responsibility to evaluate and integrate (as necessary) new data, models, and methods that have arisen since the CEUS SSM and NGA-East GMM were first developed and published. SSHAC Level 1 studies should also implement guidance from the NRC SSHAC Level 2 Site Response Research Information Letter (Rodriguez-Marek et al., 2021) for capturing uncertainty in site response. Because the CEUS SSHAC study was completed in 2012, new information on faults with recurring large magnitude earthquakes and on the earthquake catalog may need to be incorporated as part of the updating, refining, replacing, and correcting procedures in a SSHAC process, as described in Chapter 4 of NUREG-2213 (NRC, 2018).

The TI Team concludes that the USGS earthquake catalogs are adequate for evaluating source zone seismicity for future SSHAC Level 1 studies. The USGS earthquake catalog is primarily developed from the Advanced National Seismic System (ANSS) Comprehensive Earthquake Catalog (ComCat), which combines data from multiple contributing networks. The TI Team should evaluate any available catalogs that are not incorporated into ComCat to determine whether events should be added to the USGS catalog.

The TI Team recommends reviewing available deformation models to inform the CBR of the TDI of fault slip rates. Based on its experience with the use of geodetic data on other SSHAC projects, the TI Team places more weight on geologic evidence for slip rates than on geodetic-based rates. The TI Team believes that the geodetic models may still be too immature to rely on solely for slip rates. The TI Team also recommends critically evaluating rates obtained from the WUS geologic deformation model, having found that some slip rates in that model did not seem realistic (outside the range of the TDI) given the existing topography and reasonable assumptions about erosion rates.

The TI Team also recommends a thorough review of existing subsurface data sets in preparation for a SRA or site investigation. Information on the shallow velocity structure should always come from site-specific investigations. For the deeper portion of the profile, resources at the state or local levels, in particular, may contain the most relevant data, including borehole and well logs and seismic, hydrologic, or geologic data. This study reviewed data from the Utah Geological Survey; the Utah Division of Water Rights and Division of Oil, Gas, and Mining, both within Utah's Department of Natural Resources; and the University of Utah's Seismograph Stations.

The TI Team also relied on peer-reviewed journal publications to evaluate the most recent work in the vicinity of the study area (e.g., Zeng et al., 2022). National-level databases, such as the USGS Compilation Dataset (McPhillips et al., 2020) or the U.S. Community Shear Wave Velocity Profile Database (Kwak et al., 2021), can provide confirmatory data to supplement the construction of a site-specific profile.

In addition to the available data sources, the TI Team recommends utilizing existing SSHAC Level 3 studies where possible to assist with capturing the CBR of technically defensible data and models. Publicly available SSHAC studies include the following: the Hanford Sitewide Probabilistic Seismic Hazard Analysis (PNNL, 2014; INL, 2021), the Seismic Source Characterization for the PVNGS, SSHAC Level 3 (APS, 2015), the Seismic Source Characterization for the DCPP, San Luis Obispo County, California (PG&E, 2015), the SWUS GMC SSHAC Level 3 (GeoPentech, 2015), and the CEUS Seismic Source Characterization for Nuclear Facilities (EPRI/DOE/NRC, 2012). Additional SSHAC Level 2 studies in the CEUS also have data that may be made available upon request, including the hybrid SSHAC Level 2/Level 3 study for the Pantex Plant and the updated SSHAC Level 2 study for the Savannah River Site.

Table 5-1 Suggested Data Sources for Future Seismic Source Characterization

Product: USGS seismicity catalogs for the conterminous U.S.
Use: Characterizing source zone seismicity
Web Address: https://www.sciencebase.gov/catalog/item/64ff902fd34ed30c2057b527

Product: USGS Interactive Quaternary Fault Database
Use: Identifying Quaternary faults near the site of interest
Web Address: https://usgs.maps.arcgis.com/apps/webappviewer/index.html?id=5a6038b3a1684561a9b0aadf88412fcf

Product: NSHM-Fault-Sections
Use: Identifying faults used in the 2023 NSHM
Web Address: https://code.usgs.gov/ghsc/nshmp/nshm-fault-sections/-/tree/main?ref_type=heads

Product: Western U.S. Geologic Deformation Model for Use in the U.S. NSHM 2023, version 1.0
Use: Estimating slip rates for identified faults
Web Address: https://www.sciencebase.gov/catalog/item/612d61abd34e40dd9c08c7d6

Product: Geodetic Deformation Model Results and Corrections for use in the U.S. NSHM 2023
Use: Estimating slip rates for identified faults
Web Address: https://www.sciencebase.gov/catalog/item/62bf3457d34e82c548ced92a

5.3 Leveraging Existing Site Data

A variety of site data will be acquired prior to and during any construction project to help refine the design process and lessen risks during construction. With careful thought, such data can be critical in defining the uncertainty of key site parameters used in seismic hazard analysis, in particular the site amplification, seismic sources, and GMMs.

For the current study, the project team deliberately chose a data-rich site so that the TI Team would have sufficient information to make the informed recommendations provided in this report. The TI Team recognizes that most sites will not have a similar abundance of site data, but the team provided details of all analyses with the goal of demonstrating potentially simpler methods at sites that might have limited data. Data for the current study included seismic reflection profiles (three lines), seismic refraction profiles (two lines), magnetometer data, gravity data, extensive borehole logging (24 boreholes), seismic cone penetration tests (136 measurements), downhole seismic measurements for a single borehole, and an extensive suite of geotechnical laboratory measurements, including resonant column testing (PFS, 2006).

Borehole logs, along with several seismic reflection lines, were used by the TI Team to determine the soil layering for the site. Seismic cone penetration test (SCPT) data and downhole measurements were cross-referenced with the soil layering to create a near-surface profile.

The TI Team used modulus reduction and damping (MRD) curves from published studies; however, a single pair of curves previously developed from a resonant column test on site samples was also used for a limited portion of the subsurface layers. Gravity data assisted the TI Team in determining the details of faulting under the site.

Each component of the seismic hazard analysis relies on similar types of information about the site, namely the stratigraphy and lithology, the depths to bedrock and the water table, and approximations of the shear wave velocity (V_S) profile and the MRD behavior under loading conditions. In general, the estimation of the near-surface profile will benefit significantly from the availability of data from multiple types of tests, including invasive (e.g., SCPT, downhole, cross-hole, suspension logging) and non-invasive (e.g., microtremor array, active surface wave) geophysical tests. In addition, passive geophysical tests to estimate horizontal-to-vertical spectral ratios (HVSR), combined with dispersion curve data, are simple and effective approaches for establishing the fundamental site characteristics and developing realistic profiles that capture the CBR of velocities across the site. Without being prescriptive, there are a multitude of geophysical and geotechnical methods that may be used to constrain any of the above parameters. While the specific techniques and methods to include in a site investigation will be governed by the planned construction and the site itself, a well-planned site investigation should increase the technical knowledge of each of these parameters in a way that balances cost with data needs.

The methods appropriate for the site investigation will depend on the existing uncertainties for the site. For example, uncertainty in the extent of faulting at a site could be reduced by using several seismic techniques (refraction, surface wave analysis), completing a resistivity survey, or combining both approaches. Other methods exist to help identify faults and fracture zones, potentially even estimating displacement on key faults, as part of the seismic source characterization, or to estimate the stiffness of subsurface material as part of the SRA. Just as important, site data can be used to evaluate the applicability of published GMMs or MRD curves for the site location. These applications, and the methods used for each, can vary based on the specifics of the site, the proposed design, and the uncertainty allowed in the hazard analysis.

Finally, consideration should be given to the extent of the site investigation. A less extensive or less detailed site investigation should be compensated for with an increase in the level of epistemic uncertainty, as long as the site investigation properly captures the variability in site properties. It is probable that a less extensive investigation, or no site-specific investigation at all, will miss significant changes in ground conditions, which could invalidate the design choices made prior to the investigation. A site investigation should balance the consequence of missing any variations with the minimum data required for the planned analyses. For all sites, the TI Team will need to determine the extent to which site data is representative and justify the relative weight given to different data sources.

The Geomatrix (1999) report and supporting studies, including the Bay Geophysical Associates (1999) report, offered the TI Team the existing data, models, and methods needed for the evaluation of fault sources in Skull Valley. This existing information provided the TI Team the technical basis it needed to make technical assessments without a significant need for additional data collection or field checks of geological information. However, this may not always be the case, and the TI Team recommends that future SSHAC Level 1 TI Teams conduct field work to obtain new data and develop a first-hand appreciation of the geological setting to make informed technical decisions, per the guidance provided in ANS 2.27 (ANSI/ANS, 2020). Two specific examples from the Skull Valley SSHAC Level 1 study (Stamatakos et al., 2025) illustrate this point.

1. As described in Section 4.1.2 of Stamatakos et al. (2025), the subsurface stratigraphy of Skull Valley is well known based on extensive field work, borings, and seismic profiles. This subsurface stratigraphy included several prominent and easily identified paleosols that limited the uncertainty in the slip rates of the local faults in the Geomatrix (1999) PSHA. Absent these kinds of data, the fault slip rates in the SSM logic tree would need to include significantly more epistemic uncertainty to account for this lack of knowledge. Alternatively, the TI Team could have conducted targeted field investigations to develop the geological information needed to constrain the fault slip rates. The effort to develop well-constrained fault slip rates is usually warranted because these rates are almost always one of the most significant contributors of the SSM to the total hazard.

2. In the Skull Valley SSM, the TI Team assigned a p[S] = 0.5 to the East Cedar Mountain fault because of the equivocal nature of the geological data indicating active faulting. In their examination of aerial photography of the region, Geomatrix (1999) noted that the scarps identified by Sack (1993) are at the same elevation as sinuous lake shoreline features to the southwest, and thus may be a Provo paleo-shoreline and not tectonic in origin. For future SSHAC Level 1 studies, existing studies may not be available for the TI Team to rely on to assess p[S]. In those cases, field reconnaissance may be needed for the TI Team to make their assessments.

5.4 Determining Hazard Significance to Simplify Calculations

As part of its preliminary hazard evaluation for the Skull Valley site, the TI Team conducted a sensitivity study to determine the relative contributions of each of the nodes from the SSM and GMM logic trees to the hazard for the Stansbury fault, which is located adjacent to the site and was expected to be a significant contributor to the total hazard. As both the SSM and GMM logic trees contain a large number of alternative branches for each of the logic tree nodes, the purpose of this preliminary evaluation was to determine whether the SSM and GMM logic trees could be simplified when determining the hazard for the distant fault sources, to save computer run time.

Based on the consolidation of the SRA logic tree to seven median site adjustment factors and a single standard deviation for each of the 19 oscillator frequencies, the TI Team concluded that further simplification of the SRA logic tree would be unwarranted.

Figure 5-3 shows the logic tree for the Stansbury fault, and Figure 5-4 shows the logic trees for the median predictions and sigma values from the INL SSHAC Level 3 GMM, which was used for this preliminary hazard sensitivity study. For the Stansbury fault logic tree, there are three alternatives each for (1) fault configurations, (2) fault dip angles, (3) seismogenic thicknesses, and (4) fault slip rates, for a total of 81 alternative combinations. For the INL GMM median prediction logic tree, there are branches for (1) two alternative long-period adjustments, (2) three alternative normal-faulting adjustments, (3) five alternative adjustments for anelastic attenuation, and (4) five alternative host-to-target adjustments, for a total of 150 alternative combinations.

For the INL GMM single-station sigma logic tree, there are three alternative levels (low, best estimate, and high). Combining these SSM and GMM logic trees results in 36,450 alternative hazard curve combinations for the Stansbury fault, which are shown for 1 and 10 Hz in Figure 5-4.
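The combination counts quoted above follow directly from multiplying the number of alternatives at each logic tree node. A small sketch (not part of the study's software) can verify the arithmetic:

```python
# Sketch: verifying the logic-tree combination counts quoted in the text
# by multiplying the number of alternatives at each node.
from math import prod

# SSM (Stansbury fault): 3 alternatives each for fault configuration,
# dip angle, seismogenic thickness, and slip rate.
ssm_nodes = [3, 3, 3, 3]
ssm_combos = prod(ssm_nodes)                 # 81

# INL GMM median logic tree: 2 long-period adjustments, 3 normal-faulting
# adjustments, 5 anelastic-attenuation adjustments, 5 host-to-target
# adjustments.
gmm_median_nodes = [2, 3, 5, 5]
gmm_median_combos = prod(gmm_median_nodes)   # 150

# Single-station sigma: low / best-estimate / high.
sigma_combos = 3

total = ssm_combos * gmm_median_combos * sigma_combos
print(ssm_combos, gmm_median_combos, total)  # 81 150 36450
```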

To assess the relative contribution of each of the logic tree nodes to the total hazard for the Stansbury fault, the TI Team isolated all of the branches associated with each node, calculated the mean hazard for each branch, and then calculated the UHRS for the 10-4 and 10-5 AFE levels. This process was repeated for all branches emanating from each node, and for all of the nodes in both logic trees. Figure 5-5 through Figure 5-9 show these spectra using one figure panel for each node, comparing the UHRS resulting from each individual branch with the overall mean UHRS. Tabulated on each figure is the maximum difference (maximum over frequency) in spectral acceleration (delta value) between the mean hazard from each logic tree branch and the mean UHRS from all the combined hazard curves. Based on this initial assessment, the TI Team was able to determine which nodes of the SSM and GMM logic trees for the distant fault sources could be collapsed to a single branch or eliminated. Based on the minor contributions of the alternative fault configurations and seismogenic thicknesses from the SSM logic tree and the long-period adjustments from the GMM logic tree, the TI Team determined that the two logic trees for the distant fault sources could be reduced to a total of 2,025 alternative combinations. For the final verification of the change to the epistemic uncertainty contributed by the simplification of the logic trees, the TI Team calculated the 5th, 10th, 50th, 90th, and 95th percentile hazard curves for the reduced set of 2,025 alternative hazard curves for comparison with the same set of percentile hazard curves from the full set of 36,450 hazard curves. Figure 5-10 and Figure 5-11 show that the two sets of percentile hazard curves match very closely for the 0.5 and 1 Hz (Figure 5-10) and 10 and 100 Hz (Figure 5-11) oscillator frequencies. Therefore, the TI Team concluded that simplified logic trees for the SSM for the distant fault sources would be warranted. Similar reductions to the GMM logic tree would also have been warranted had the TI Team continued to use the INL GMM; however, the TI Team decided to use the GWUS logic tree, which is described in Section 4 of this report. The approach used by the TI Team to assess the relative contributions of each of the logic tree alternatives could be simplified by assessing the SSM logic tree using a single alternative path of the GMM logic tree, and vice versa for the GMM logic tree using a single alternative path of the SSM logic tree; however, the relative comparison would then not be to the overall mean UHRS for the combined logic trees but rather to a simplified version of one of the logic trees.
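The node-by-node screening described above can be sketched in a few lines. The branch names, weights, and spectral values below are hypothetical placeholders, not values from the study; the structure simply illustrates computing a weighted-mean UHRS and the per-branch delta values:

```python
# Sketch of the node-sensitivity screen: for each branch of a node, take its
# UHRS, form the weighted-mean UHRS over all branches, and report the maximum
# spectral-acceleration difference (over frequency) from that mean.
# All numbers are hypothetical.

freqs = [1.0, 10.0]  # oscillator frequencies (Hz)

# Hypothetical UHRS values (g) per branch of a slip-rate node, with weights.
branch_uhrs = {
    "SR1": ([0.10, 0.40], 0.2),
    "SR2": ([0.13, 0.46], 0.6),
    "SR3": ([0.15, 0.50], 0.2),
}

# Overall mean UHRS = weighted average across branches, frequency by frequency.
mean_uhrs = [sum(sa[i] * w for sa, w in branch_uhrs.values())
             for i in range(len(freqs))]

# Delta value per branch: max |branch UHRS - mean UHRS| over frequency.
deltas = {name: max(abs(sa[i] - mean_uhrs[i]) for i in range(len(freqs)))
          for name, (sa, _w) in branch_uhrs.items()}

# Branches whose deltas all fall below a tolerance could be collapsed.
tol = 0.05
collapsible = all(d < tol for d in deltas.values())
print(mean_uhrs, deltas, collapsible)
```

Here the slip-rate node fails the (hypothetical) tolerance test, consistent with the study's finding that slip rate is a significant contributor and should not be collapsed.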

For the host source zone, the TI Team developed a logic tree that included three alternative seismogenic thicknesses and three alternative smoothing rate maps, but only one maximum magnitude (Mmax) branch, at magnitude 6.9. Generally, logic trees for source zones should include multiple alternative Mmax branches; however, for this study, the TI Team chose a single Mmax value given the high probability of expressed surface rupture associated with a magnitude 6.9 event and observations of faults in the region that have expressed surface rupture (Section 4.2.11 of the SSHAC report provides further rationale for a single Mmax). Furthermore, the TI Team postulated that the contribution to the total hazard from a moderate increase in Mmax would be minor at most. To test this postulate, the TI Team ran the hazard for the host zone using an Mmax value of 7.2. Figure 5-12 shows that the increase in hazard from the larger Mmax value of 7.2 is minor for 1 Hz (from 0.13 to 0.15g) and for 10 Hz (from 0.46 to 0.49g) at 10-4 AFE. It should be emphasized that use of a single Mmax may not be appropriate for a host zone; however, for this site in Skull Valley, UT, the TI Team was able to justify this simplification by reference to the regional geology and by performing this sensitivity analysis.

The TI Team concluded that sensitivity assessments used to simplify the models by determining the relative contribution of each element of the logic trees warrant a comparison of fractile hazard curves between the full and reduced models, not just a comparison of mean hazard results.
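The fractile comparison between a full and a reduced branch set can be illustrated with a minimal weighted-percentile sketch; all AFE values and weights below are hypothetical, and the real study compares full curves over many ground-motion levels rather than a single level:

```python
# Sketch: weighted percentile of annual exceedance frequencies (AFEs) across
# logic-tree branches, used to compare the full and reduced branch sets at one
# ground-motion level. All values are hypothetical.

def weighted_percentile(values, weights, p):
    """Percentile p (0-100) of values under the given weights (summing to 1)."""
    pairs = sorted(zip(values, weights))
    cum = 0.0
    for v, w in pairs:
        cum += w
        if cum >= p / 100.0:
            return v
    return pairs[-1][0]

# AFE at one ground-motion level for each branch combination (hypothetical).
full_afe    = [1e-4, 2e-4, 3e-4, 5e-4, 8e-4]
full_wts    = [0.10, 0.25, 0.30, 0.25, 0.10]
reduced_afe = [2e-4, 3e-4, 5e-4]
reduced_wts = [0.30, 0.45, 0.25]

# If the pruning preserved the epistemic spread, the fractiles should agree.
for p in (5, 50, 95):
    print(p, weighted_percentile(full_afe, full_wts, p),
             weighted_percentile(reduced_afe, reduced_wts, p))
```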

Figure 5-3 Logic tree for Stansbury fault.

Figure 5-4 Logic trees for INL SSHAC Level 3 GMM for median predictions (top) and single-station sigma (bottom).

Figure 5-4 1 and 10 Hz hazard curves for the Stansbury fault showing the mean (blue) and five percentile values for the total number (36,450) of alternative SSM and GMM logic tree combinations.

Figure 5-5 Mean 10,000- and 100,000-year return period UHRS comparison plots for three alternative Stansbury fault configurations (FC1 [ABC], FC2 [ABB ], FC3 [ABC ]) on the left plot and three alternative seismogenic thicknesses (ST1 [13 km], ST2 [15 km], ST3 [19 km]) on the right plot. Each UHRS comparison plot shows the maximum difference in spectral acceleration between the total mean UHRS and the alternative logic tree branches with the weight of each branch shown in parentheses.

Figure 5-6 Mean 10,000- and 100,000-year return period UHRS comparison plots for three alternative Stansbury fault dip angles (DP1 [45°], DP2 [55°], DP3 [65°]) on the left plot and three alternative fault slip rates (SR1 [0.26 mm/yr], SR2 [0.40 mm/yr], SR3 [0.50 mm/yr]) on the right plot. Each UHRS comparison plot shows the maximum difference in spectral acceleration between the total mean UHRS and the alternative logic tree branches with the weight of each branch shown in parentheses.

Figure 5-7 Mean 10,000- and 100,000-year return period UHRS comparison plots for two alternative INL GMM long period adjustments (LPA1, LPA2) on the left plot and three alternative normal faulting adjustments (AL0, AL1, AL2) on the right plot. Each UHRS comparison plot shows the maximum difference in spectral acceleration between the total mean UHRS and the alternative logic tree branches with the weight of each branch shown in parentheses.

Figure 5-8 Mean 10,000- and 100,000-year return period UHRS comparison plots for five alternative INL GMM anelastic attenuation adjustments (EPA1, EPA2, EPA3, EPA4, EPA5) on the left plot and five alternative host-to-target adjustments (DCM1, DCM2, DCM3, DCM4, DCM5) on the right plot. Each UHRS comparison plot shows the maximum difference in spectral acceleration between the total mean UHRS and the alternative logic tree branches with the weight of each branch shown in parentheses.

Figure 5-9 Mean 10,000- and 100,000-year return period UHRS comparison plots for three alternative INL GMM single-station sigma levels (SD1, SD2, SD3). The UHRS comparison plot shows the maximum difference in spectral acceleration between the total mean UHRS and the alternative logic tree branches with the weight of each branch shown in parentheses.

Figure 5-10 Stansbury fault reference hazard curves showing a comparison of the fractile hazard curves for 0.5 Hz and 1 Hz for the complete set of alternative logic tree combinations (36,450) shown in red and the reduced set of alternative logic tree combinations (2,025) shown in blue.

Figure 5-11 Stansbury fault reference hazard curves showing a comparison of the fractile hazard curves for 10 Hz and 100 Hz for the complete set of alternative logic tree combinations (36,450) shown in red and the reduced set of alternative logic tree combinations (2,025) shown in blue.

Figure 5-12 1 and 10 Hz hazard mean and fractile curves comparing two alternative Mmax values for the host source zone for the Skull Valley site.

5.5 Capturing the CBR of TDI

The fundamental purpose of the SSHAC process is to capture the CBR of TDI, and following the SSHAC process improves the likelihood that this goal is met. First, to capture the CBR of TDI, the TI Team members must evaluate all available data, models, and methods and then act as impartial and objective assessors of this information. During the integration phase of the project, TI Teams construct seismic source and ground motion models in a manner that captures the epistemic uncertainty of the data, models, and methods and the aleatory variability of the model components. One way to test whether the assessments of epistemic uncertainty and aleatory variability are sufficient is by comparing them with existing SSHAC source, ground motion, and site response models from prior SSHAC projects, thereby leveraging the technical expertise of those prior TI Teams. In addition, by working closely with the hazard analysis team, the TI Team can use sensitivity analyses to test the contributions of each of the model components and focus on those aspects of the source and ground motion models that contribute most to hazard.

Capturing the CBR of TDI also requires that the TI Team members avoid cognitive bias in their evaluations and assessments, as described in Section 2-4 of NUREG-2213 (NRC, 2018). To that end, an important responsibility of the TI Lead is discussing cognitive bias with the TI Team members and making them aware that efforts will be devoted throughout the project to countering bias, particularly in working meetings where the subject matter experts are offering their judgments. Likewise, the PPRP Chair is responsible for reminding the PPRP members of the importance of being attentive to cognitive bias, both in terms of the potential for it to be present among the TI Team members and within the PPRP itself. Expert interactions among the TI Team members that specifically include technical challenges intended to reveal potential biases in a TI Team member's or external expert's assessments are key to countering cognitive bias.

The PPRP is an indispensable element of any SSHAC study. The PPRP fulfills two parallel roles, conducting both a technical and a process review. In the technical review, the PPRP makes sure that the full range of data, models, and methods is duly considered in the evaluation phase and that the CBR of TDI is captured in the integrated seismic source, ground motion, and site response models. Importantly, the PPRP also ensures that all technical decisions are adequately justified and documented. As part of the process review, the panel ensures that the SSHAC process was followed. This process review also requires the PPRP to verify that (1) all TI Team members actively participated in the discussions during the evaluation and integration phases of the project, (2) technical decisions were openly and freely debated by the TI Team members, (3) final technical decisions were supported by an adequate and justifiable technical basis, and (4) these decisions reflected the consensus of the TI Team. It is this technical and process review by the PPRP that ultimately ensures that the goal of capturing the CBR of TDI is met.

Developing or applying defensible models and appropriately characterizing uncertainty is important to meet the regulatory requirements for nuclear power reactor licensing, thus underscoring the need for a structured, endorsed process such as SSHAC. The goal of the SSHAC process is to evaluate the CBR of TDI and integrate this uncertainty into the source and GMMs used in the PSHA. This ensures that there is adequate regulatory confidence, regulatory assurance, and regulatory stability in the site-specific PSHA. In this context, regulatory confidence is the belief that an organization's practices, systems, or operations comply with all applicable regulatory requirements. Regulatory assurance is the concept that, among the informed technical community of geologists, seismologists, and engineers, there is a high degree of certainty that the PSHA results constitute a reliable prediction of future ground shaking that could occur at a site over the engineering lifetime of a facility. Regulatory stability is the concept that the hazard results, characterized by the spread in uncertainty, will endure over the foreseeable future and remain valid even as new data are discovered and new models and methods are developed.

To create regulatory confidence, regulatory assurance, and regulatory stability, it is vital that the full range of technically defensible interpretations about seismic sources, ground motion prediction, and site response within the informed technical community are considered and objectively evaluated by the SSHAC TI Team. Regulatory assurance and regulatory stability protect the large capital investments needed to develop and build these facilities against costly and complicated design and construction changes. By contrast, hazard studies that do not involve a thorough evaluation of available information and a rigorous technical defense of the integrated models are much more susceptible to being challenged, either because they were built on a limited and constrained data set (e.g., exclusive application of geodetic data to predict slip rates without consideration of geological observations) or because other subject matter experts are advancing their own preferred models. New information can also undercut the validity of a PSHA if the original model did not properly account for variability and uncertainty.

Proper accounting for uncertainty and variability is especially important to the NRC licensing process because of the regulatory requirements to account for uncertainty. An important aspect of accounting for uncertainty is the development of hazard fractiles for comparison with the mean hazard results. Accurately capturing these uncertainties and variabilities is necessary to meet the requirements in 10 CFR 100.23. In addition, licensing requirements under 10 CFR Parts 50 and 52 (and under the proposed 10 CFR Part 53) include a seismic probabilistic risk assessment (SPRA) that uses the uncertainty and variability from the PSHA (e.g., ASME/ANS RA-S-1.4) to accurately determine the uncertainty in the plant risk from seismic loading.

In addition to properly accounting for uncertainty, by following the SSHAC process the TI Team's evaluation and integration will be thoroughly documented (providing transparency) and fully reviewed by the PPRP, which provides additional clarity and reproducibility for future uses of the hazard outputs. Applying the SSHAC process thus provides the regulatory confidence, regulatory assurance, and regulatory stability that otherwise cannot be acquired through less rigorous and less structured approaches.

Two specific examples of the significant utility of implementing the SSHAC approach for the siting of critical facilities arose during the course of the Skull Valley SSHAC Level 1 demonstration study:

During the Level 1 demonstration study, the TI Team reviewed the wealth of available geological, seismological, and geotechnical data, especially from the various USGS and state geological surveys. As expected, the TI Team discovered fairly significant differences among these sources in the characterization of faults. For example, the fault length and slip rate of the Stansbury fault in the USGS Interactive Quaternary Fault Database4 differed from its characterization in the USGS NSHM (Petersen et al., 2024).

Given these inconsistencies in available data, the SSHAC process provided a needed mechanism by which to objectively evaluate the available information, sort through inconsistencies, account for defensible alternative interpretations, and then integrate the evaluation into the SSM so that the resulting PSHA captures the CBR of TDI.

4 https://usgs.maps.arcgis.com/apps/webappviewer/index.html?id=5a6038b3a1684561a9b0aadf88412fcf, accessed on December 12, 2024.

During the Level 1 demonstration study, the TI Team reviewed multiple previously developed SSHAC Level 3 GMMs for nuclear facilities. In particular, the TI Team focused on the suitability of these GMMs for the Skull Valley site and concluded that none of the GMMs should be used because they had been built specifically for their given sites through the application of unique source, path, and site adjustments. Rather than following a similar approach for the Skull Valley site, the TI Team decided to develop a generic WUS GMM whose median predicted ground motions would match the mean of median predicted ground motions for the other previously developed WUS SSHAC GMMs, but whose range would exceed the range of previously predicted ground motions from the WUS SSHAC GMMs. Section 4 of this report describes how the TI Team developed the GWUS GMM to meet this objective.

In addition to these two examples, another significant development in the siting of critical facilities is the recognized importance of the SRA. Over the past several years, many seismic hazard characterizations for critical facilities have found significant differences in the site amplification factors developed in the SRA depending on the data, models, and methods used.

There are many examples in the literature where generic site response analyses (including the use of generic soil classes based on VS30) do not provide an accurate site hazard (e.g., Stamatakos et al., 2018; NRC, 2021).

In nearly all recent SSHAC studies conducted to produce a PSHA, the site amplification factors are consistently one of the most significant contributors to hazard uncertainty across the range of applicable AFEs (e.g., 10-3 to 10-5/yr). The importance of the site amplification factors led to the pursuit of extending the SSHAC approach to the SRA in order to systematically capture the uncertainties through the use of weighted logic trees and the evaluation of multiple data, models, and methods (Rodriguez-Marek et al., 2021). Key tasks that need to be performed to develop the site response model are to (1) assess the accuracy and reliability of existing data, (2) make decisions regarding the weights assigned to the various site characterization methods (and the associated technical bases), and (3) ensure that the resulting model accounts for the aleatory variability and epistemic uncertainty. The degree of scrutiny and careful evaluation needed for a reliable site response model can best be achieved under the disciplined approach offered by a SSHAC process. This underscores the need for a site-specific PSHA to capture and properly account for the seismological and geotechnical characteristics of a site, especially for critical facilities such as NPPs.

In summary, proper accounting for uncertainty and variability is especially important to the NRC licensing process because of the regulatory requirements to do so. Advanced reactor designs may have some SSCs that could be designed to higher target performance goal frequencies (or less stringent seismic design categories) than large light-water reactors and still meet NRC regulations, guidance, and safety goals. As such, the SSHAC process provides an endorsed approach that can be appropriately streamlined commensurate with the level of risk while still producing defensible, stable, and approved hazard characterizations.

6 CONCLUSIONS AND RECOMMENDATIONS

This document develops a set of conclusions and provides recommendations that industry can implement when performing a SSHAC Level 1 study for new and advanced reactors, and that the authors believe will meet the intent of regulations requiring that the site and its environs be investigated in sufficient scope to permit an adequate evaluation supporting estimates of the Safe Shutdown Earthquake Ground Motion. In particular, the recommendations in this report are intended to guide development of a SSHAC Level 1 seismic hazard analysis that is more efficient and cost effective than a SSHAC Level 2 or SSHAC Level 3 study, while still meeting the SSHAC goal of capturing the CBR of TDI of the data, models, and methods used in the PSHA. This report complements the PSHA study developed by the TI Team for the PFS site in Skull Valley, Utah, to demonstrate the feasibility of a SSHAC Level 1 study (Stamatakos et al., 2025).

The conclusions and recommendations of this report fall into four general categories: (1) SSHAC process implementation tools, (2) simplifications to the SSM developed through sensitivity studies and reliance on detailed existing databases, (3) development of a new generic GMM applicable to the WUS, and (4) application of lessons learned from prior SSHAC studies, especially the SSHAC Level 2 site response study (Rodriguez-Marek et al., 2021).

In terms of the SSHAC process, the TI Team recognizes that frequent and active engagement with the PPRP is vital to ensuring that the SSHAC goal of capturing the CBR of TDI is achieved. The experience and knowledge of the PPRP members guide the TI Team to find and evaluate a broad range of existing data, models, and methods. Their observations of the TI Team meetings provide the TI Team lead with timely and effective feedback to make sure all TI Team members are actively participating, and that each TI Team member's opinions and judgments are being objectively considered by the TI Team in making consensus decisions. The PPRP's observations of the TI Team members can also help identify cognitive bias (conscious or unwitting) as the TI Team formulates the models and develops the technical bases for all critical decisions. The PPRP reviews ensure that the entire process is adequately documented, as required in all SSHAC studies. Future SSHAC Level 1 studies would benefit from active participation of a knowledgeable and experienced PPRP.

The TI Team also recognized the benefits of active participation by the Hazard Analysis Team (HAT). Throughout the project, the HAT worked closely with the TI Team to continually simplify model development by helping the TI Team focus only on those aspects of the SSM and GMM that contribute to the hazard results. Two specific results from this study clearly demonstrate this improvement.

In the first example, the TI Team initially evaluated all identified Quaternary faults within the region of study but retained only those shown to contribute significantly to the total hazard. This subset was then carried forward for further investigation and characterization in the final SSM. For these characterizations, the TI Team was especially interested in understanding the contribution to the total hazard at low (1 Hz) and high (10 Hz) frequencies over the range of AFE values for which the DRS will be defined (e.g., lower SDC facilities with DRS at AFE values of 10-4/yr or higher). Future SSHAC Level 1 studies for advanced reactors should therefore focus on these oscillator frequencies and AFEs.

In the second example, the TI Team identified which aspects of the logic tree for the Stansbury fault were significant contributors to hazard (e.g., slip rate) and those that were not (e.g., seismogenic thickness or fault dip). The TI Team then compared the hazard results for the Stansbury fault from the full logic tree to a significantly pruned logic tree. This comparison showed that the pruned logic tree captures the full distribution of hazard results (mean and fractiles) as captured in the full logic tree. Based on this result, abbreviated logic trees were implemented for the more distant faults. These types of in-process sensitivity studies can greatly simplify the logic trees without any loss in their ability to capture the CBR of TDI.

This approach of using the HAT throughout the project differs from the more traditional approach in SSHAC studies, in which the HAT provides sensitivity results at two designated points in the schedule: first during Workshop 1, to highlight hazard sensitivities based on prior PSHA results for the site or region, and again during Workshop 3, based on the preliminary Hazard Input Document (HID) developed by the TI Team.

All SSHAC studies for sites in the WUS can benefit from evaluating readily available resources to determine parameters such as fault slip rates, fault geometries, and magnitudes from previous SSHAC studies, as well as the data used by the USGS for the NSHM. These include existing declustered earthquake catalogs (e.g., Mueller, 2019); fault locations, fault lengths, geometries, and slip rates from active fault databases (e.g., the USGS Quaternary Fault and Fold Database of the United States); and constraints on seismogenic thickness from prior SSHAC studies (e.g., PNNL, 2012; INL, 2022). Capturing epistemic uncertainty for earthquake recurrence parameters can be simplified by using an adaptive kernel approach and likelihood based on information gain, as discussed in Section 3.3.7. For sites in the CEUS, previously developed and approved regional SSHAC studies (EPRI/DOE/USNRC, 2012; Goulet et al., 2018) can be used to characterize the seismic hazard, together with a site-specific SSHAC study to assess the suitability of the regional SSHAC studies for the site.
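As an illustration of the adaptive kernel idea referenced above (the specific formulation in Section 3.3.7 may differ), the sketch below smooths a hypothetical declustered catalog with Gaussian kernels whose bandwidth adapts to local event density, set here to each event's distance to its k-th nearest neighbor:

```python
import math

# Hypothetical declustered catalog: (x, y) epicenters in km on a flat grid.
quakes = [(0.0, 0.0), (1.0, 0.5), (1.5, 0.2), (10.0, 10.0), (10.5, 9.5)]

def adaptive_bandwidths(events, k=2, min_bw=0.5):
    """Bandwidth per event = distance to its k-th nearest neighbor (floored)."""
    bws = []
    for i, (xi, yi) in enumerate(events):
        dists = sorted(math.hypot(xi - xj, yi - yj)
                       for j, (xj, yj) in enumerate(events) if j != i)
        bws.append(max(dists[k - 1], min_bw))
    return bws

def smoothed_rate(x, y, events, bws):
    """Gaussian-kernel rate density at (x, y); each event contributes 1 count."""
    total = 0.0
    for (xi, yi), h in zip(events, bws):
        r2 = (x - xi) ** 2 + (y - yi) ** 2
        total += math.exp(-r2 / (2.0 * h * h)) / (2.0 * math.pi * h * h)
    return total

bws = adaptive_bandwidths(quakes)
# Rate density is higher inside the tight cluster than far from all events,
# while the sparse pair gets broad kernels that spread its rate smoothly.
print(smoothed_rate(1.0, 0.3, quakes, bws) > smoothed_rate(5.0, 5.0, quakes, bws))
```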

Future WUS SSHAC studies can implement the GWUS model, developed as part of this project, for PSHA studies that do not have the resources to perform inversions to determine host-to-target adjustments for seismic source and path parameters for use with a backbone GMPE. The GWUS GMM was developed to provide a wider range of median predictions relative to previous SSHAC WUS GMMs. It includes median adjustments for normal faulting, reverse faulting, and hanging-wall effects. Because of the need to include sufficient epistemic uncertainty, an implementation of the GWUS GMM may result in higher mean hazard due to the wider range of median predictions.

The HID in the SSHAC Level 1 study (Stamatakos et al., 2025) provides the median and sigma ground motion tables as well as equations showing how to implement the adjustments to the median ground motions for normal faulting, reverse faulting, and sites located on the hanging wall of the fault.

Included in the HID are tables with the weights for the median and sigma ground motions, as well as instructions for how to interpolate the median values for specific magnitudes and distances.
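Ground-motion tables of this kind are commonly interpolated bilinearly in magnitude and log distance; the HID's exact interpolation instructions may differ, and the grid values below are hypothetical:

```python
import math

# Hypothetical median ground-motion table: ln(SA) on a small magnitude x
# rupture-distance grid (values are illustrative only, not from the HID).
mags = [5.0, 6.0, 7.0]
dists_km = [10.0, 50.0, 100.0]
ln_sa = [  # rows: magnitude, cols: distance
    [-1.0, -2.5, -3.5],
    [-0.5, -1.8, -2.7],
    [-0.1, -1.2, -2.0],
]

def interp_median(m, r):
    """Bilinear interpolation in magnitude and log10(distance)."""
    # Locate bracketing grid indices (assumes m and r lie inside the table).
    i = max(j for j in range(len(mags) - 1) if mags[j] <= m)
    k = max(j for j in range(len(dists_km) - 1) if dists_km[j] <= r)
    fm = (m - mags[i]) / (mags[i + 1] - mags[i])
    fr = ((math.log10(r) - math.log10(dists_km[k]))
          / (math.log10(dists_km[k + 1]) - math.log10(dists_km[k])))
    # Interpolate along distance at the two bracketing magnitudes, then blend.
    lo = ln_sa[i][k] * (1 - fr) + ln_sa[i][k + 1] * fr
    hi = ln_sa[i + 1][k] * (1 - fr) + ln_sa[i + 1][k + 1] * fr
    return math.exp(lo * (1 - fm) + hi * fm)  # median SA in g

# Median at M 6.5 and 30 km lies between the surrounding grid values.
print(round(interp_median(6.5, 30.0), 4))
```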

Finally, future SSHAC Level 1 and 2 studies should take advantage of the wealth of existing data, models, and methods in the SSHAC Level 3 studies that have been conducted nationwide, as well as the experience gained by all the experts who participated in those studies as TI Team members or as part of the PPRPs. These prior SSHAC studies should be used to establish the most important hazard contributors from the SSM, GMM, and SRA in order to focus a Level 1 or 2 study on key parameters and streamline the resources typically needed for a higher-level SSHAC study. In addition, the lessons learned from the SSHAC Level 2 site response study (Rodriguez-Marek et al., 2021) are extremely important and should be implemented in future SSHAC studies.

7 REFERENCES

Abrahamson, N.A., W.J. Silva, and R. Kamai. (2013). Update of the AS08 Ground-Motion Prediction Equations Based on the NGA-West2 Data Set. Pacific Earthquake Engineering Research Center Report 2013-04. University of California, Berkeley, CA. https://peer.berkeley.edu/sites/default/files/webpeer-2013 norman_a._abrahamson_walter_j._silva_and_ronnie_kamai_.pdf

Abrahamson, N.A., P. Birkhauser, M. Koller, D. Mayer-Rosa, P. Smit, C. Sprecher, S. Tinic, and R. Graf. (2002). PEGASOS - A Comprehensive Probabilistic Seismic Hazard Assessment for Nuclear Power Plants in Switzerland. 12th European Conference on Earthquake Engineering, London, Paper No. 633.

Al Atik, L. (2015). NGA-East: Ground-Motion Standard Deviation Models for Central and Eastern North America. PEER Report 2015-07. Pacific Earthquake Engineering Research Center, University of California, Berkeley, CA. https://peer.berkeley.edu/sites/default/files/webpeer-2015-07-linda_al_atik.pdf

Al Atik, L. and R. Youngs. (2014). Epistemic Uncertainty for NGA-West2 Models. Earthquake Spectra, 30, 1301-1318. doi:10.1193/062813EQS173M.

American Nuclear Society (ANS). (2020). Probabilistic Seismic Hazard Analysis. American National Standard ANSI/ANS-2.29-2020. American Nuclear Society, La Grange Park, Illinois, USA.

American Nuclear Society (ANS). (2020). Criteria for Investigations of Nuclear Facility Sites for Seismic Hazard Assessments. American National Standard ANSI/ANS-2.27-2020. American Nuclear Society, La Grange Park, Illinois, USA.

American Society of Civil Engineers/Structural Engineering Institute (ASCE/SEI). (2020). Seismic Design Criteria for Structures, Systems, and Components in Nuclear Facilities. ASCE/SEI 43-19. American Society of Civil Engineers, Reston, Virginia, USA.

Anooshehpoor, R., T. Weaver, J. Ake, C. Munson, M. Moschetti, D. Shelly, and P. Powers. (2023). Magnitude Conversion and Earthquake Recurrence Rate Models for the Central and Eastern United States. Research Information Letter (RIL) 2023-03. U.S. Nuclear Regulatory Commission, Washington, DC. ML23073A370.

Arizona Public Service (APS). (2015). Seismic Source Characterization for the Palo Verde Nuclear Generating Station, Technical Report. Prepared by Lettis Consultants International for Westinghouse Electric Company, Concord, CA.

Atkinson, G.M., J.J. Bommer, and N.A. Abrahamson. (2014). Alternative Approaches to Modeling Epistemic Uncertainty in Ground Motions in Probabilistic Seismic Hazard Analysis. Seismological Research Letters, 85(6), 1141-1144. doi:10.1785/0220140120.

Baker, J., B. Bradley, and P. Stafford. (2021). Seismic Hazard and Risk Analysis. Cambridge University Press.

Bommer, J.J. and N.A. Abrahamson. (2006). Why do Modern Probabilistic Seismic-Hazard Analyses Often Lead to Increased Hazard Estimates? Bulletin of the Seismological Society of America, 96(6), pp.1967-1977. doi: 10.1785/0120060043.

Bommer, J.J. and F. Scherbaum. (2008). The Use and Misuse of Logic Trees in Probabilistic Seismic Hazard Analysis. Earthquake Spectra, 24(4), 997-1009. doi:10.1193/1.2977755.

Boore, D.M., R.R. Youngs, A.R. Kottke, J.J. Bommer, R. Darragh, W.J. Silva, P.J. Stafford, L. Al Atik, A. Rodriguez-Marek, and J. Kaklamanos. (2022). Construction of a Ground-Motion Logic Tree Through Host-to-Target Region Adjustments Applied to an Adaptable Ground-Motion Prediction Model. Bulletin of the Seismological Society of America, 112(6), 3063-3080.

Boore, D., J. Stewart, E. Seyhan, and G. Atkinson. (2013). NGA-West2 Equations for Predicting Response Spectral Accelerations for Shallow Crustal Earthquakes. Pacific Earthquake Engineering Research Center Report 2013-05. University of California, Berkeley, CA. https://peer.berkeley.edu/sites/default/files/webpeer-2013 david_m._boore_jonathan_p._stewart_emel_seyhan_and_gail_m._atkinson.pdf

Budnitz, R.J., G. Apostolakis, D.M. Boore, L.S. Cluff, K.J. Coppersmith, C.A. Cornell, and P.A. Morris. (1997). Recommendations for Probabilistic Seismic Hazard Analysis: Guidance on Uncertainty and the Use of Experts. NUREG/CR-6372. US Nuclear Regulatory Commission, Washington, DC, USA.

Campbell, K.W. and Y. Bozorgnia. (2014). NGA-West2 Ground Motion Model for the Average Horizontal Components of PGA, PGV, and 5% Damped Linear Acceleration Response Spectra. Earthquake Spectra, 30(3), 1087-1115. doi:10.1193/062913EQS175M.

Department of Energy (DOE). (1998). Probabilistic Seismic Hazard Analyses for Fault Displacement and Vibratory Ground Motion at Yucca Mountain, Nevada. WBS 1.2.3.2.8.3.6. Civilian Radioactive Waste Management System and Management Operating Contractor, Las Vegas, Nevada. ML090690430.

Der Kiureghian, A. and O. Ditlevsen. (2009). Aleatory or Epistemic? Does It Matter? Structural Safety, 31(2), 105-112.

Electric Power Research Institute/Department of Energy/United States Nuclear Regulatory Commission (EPRI/DOE/USNRC). (2012). Central and Eastern United States Seismic Source Characterization for Nuclear Facilities. NUREG-2115. US Nuclear Regulatory Commission, Washington, DC, USA. http://www.ceus-ssc.com.

Geomatrix. (1999). Fault Evaluation Study and Seismic Hazard Assessment, Private Fuel Storage Facility, Skull Valley, Utah. Geomatrix Consultants, Inc., San Francisco, CA. ML010360150.

GeoPentech. (2015). Southwestern United States Ground Motion Characterization SSHAC Level 3 - Technical Report Rev. 2. GeoPentech, Inc., Santa Ana, CA.

Goulet, C., Y. Bozorgnia, N. Abrahamson, N. Kuehn, L. Al Atik, R. Youngs, R. Graves, and G. Atkinson. (2018). Central and Eastern North America Ground-Motion Characterization, NGA-East Final Report. Pacific Earthquake Engineering Research (PEER) Center Report No. 2018/08.

Gutenberg, B. and C.F. Richter. (1954). Seismicity of the Earth and Associated Phenomena. Princeton University Press, Princeton, NJ.

Hecker, S., N.A. Abrahamson, and K.E. Wooddell. (2013). Variability of Displacement at a Point: Implications for Earthquake-Size Distribution and Rupture Hazard on Faults. Bulletin of the Seismological Society of America, 103(2A), 651-674. doi:10.1785/0120120159.

Helmstetter, A., Y.Y. Kagan, and D.D. Jackson. (2007). High-Resolution Time-Independent Grid-Based Forecast for M ≥ 5 Earthquakes in California. Seismological Research Letters, 78(1), 78-86.

Idaho National Laboratory (INL). (2022). Idaho National Laboratory Sitewide SSHAC Level 3 Probabilistic Seismic Hazard Analysis. U.S. Department of Energy.

Idriss, I.M. (2014). An NGA-West2 Empirical Model for Estimating the Horizontal Spectral Values Generated by Shallow Crustal Earthquakes. Earthquake Spectra, 30(3), 1155-1177, doi: 10.1193/070613EQS195M.

Kafka, A. L. (2002). Statistical Analysis of the Hypothesis that Seismicity Delineates Areas where Future Large Earthquakes are Likely to Occur in the Central and Eastern United States. Seismological Research Letters, 73(6), 990-1001. doi: 10.1785/gssrl.73.6.992.

Kafka, A.L. (2007). Does Seismicity Delineate Zones Where Future Large Earthquakes Are Likely to Occur in Intraplate Environments? In Continental Intraplate Earthquakes: Science, Hazard, and Policy Issues, edited by Seth Stein and Stéphane Mazzotti, 35-48. doi:10.1130/2007.2425(03).

Kafka, A.L. and S.Z. Levin. (2000). Does the Spatial Distribution of Smaller Earthquakes Delineate Areas Where Larger Earthquakes are Likely to Occur? Bulletin of the Seismological Society of America, 90(3), 724-738. doi: 10.1785/0119990017.

Kafka, A.L. and J.R. Walcott. (1998). How Well Does the Spatial Distribution of Smaller Earthquakes Forecast the Locations of Larger Earthquakes in The Northeastern United States? Seismological Research Letters, 69(5), 428-439. doi: 10.1785/gssrl.69.5.428.

Kulkarni, R.B., R.R. Youngs, and K.J. Coppersmith. (1984). Assessment of Confidence Intervals for Results of Seismic Hazard Analysis. In Proceedings of the Eighth World Conference on Earthquake Engineering, (1), 263-270.

Kwak, D.Y., S.K. Ahdi, P. Wang, P. Zimmaro, S.J. Brandenberg, and J.P. Stewart. (2021). Web Portal for Shear Wave Velocity and HVSR Databases in Support of Site Response Research and Applications. UCLA Geotechnical Engineering Group. https://www.vspdb.org/ doi:10.21222/C27H0V.

McPhillips, D.F., J.A. Herrick, S. Ahdi, A.K. Yong, and S. Haefner. (2020). Updated Compilation of VS30 Data for the United States: U.S. Geological Survey Data Release, https://earthquake.usgs.gov/data/vs30/us/, doi: 10.5066/P9H5QEAC.

Miller, A.C. and T.R. Rice. (1983). Discrete Approximations of Probability Distributions. Management Science, 29(3), 352-362.

Moschetti, M.P. (2015). A Long-Term Earthquake Rate Model for the Central and Eastern United States from Smoothed Seismicity. Bulletin of the Seismological Society of America, 105(6), 2928-2941. doi:10.1785/0120140370.

Mueller, C.S. (2019). Earthquake Catalogs for the USGS National Seismic Hazard Maps. Seismological Research Letters, 90(1), 251-261. doi:10.1785/0220170108.

Mulargia, F., P. Gasperini, and S. Tinti. (1985). Contour Mapping of Italian Seismicity. Tectonophysics, 142, 203-216.

Munson, C., J. Ake, J. Stamatakos, and M. Juckett. (2021). Seismic Hazard Evaluations for U.S. Nuclear Power Plants: Near-Term Task Force Recommendation 2.1 Results. NUREG/KM-0017. US Nuclear Regulatory Commission, Washington, DC, USA. https://www.nrc.gov/reading-rm/doc-collections/nuregs/knowledge/km0017/index.html

NRC. (2007). A Performance-Based Approach to Define the Site-Specific Earthquake Ground Motion. Regulatory Guide 1.208. US Nuclear Regulatory Commission, Washington, DC, USA.

NRC. (2012). Practical Implementation Guidelines for SSHAC Level 3 and 4 Hazard Studies. NUREG-2117, Revision 1. US Nuclear Regulatory Commission, Office of Nuclear Regulatory Research, Washington, DC, USA.

NRC. (2018). Updated Implementation Guidelines for SSHAC Hazard Studies. NUREG-2213. US Nuclear Regulatory Commission, Office of Nuclear Regulatory Research, Washington, DC, USA.

Pacific Gas and Electric Company (PG&E). (2015). Seismic Source Characterization for the Diablo Canyon Power Plant, San Luis Obispo County, California; Report on the Results of a SSHAC Level 3 Study, Rev. A. Pacific Gas and Electric Company, San Luis Obispo, CA. http://www.pge.com/dcpp-ltsp

Pacific Northwest National Laboratory (PNNL). (2014). Hanford Sitewide Probabilistic Seismic Hazard. Prepared for the U.S. Department of Energy under Contract DE-AC06076RL01830, and Energy Northwest. Pacific Northwest National Laboratory Report PNNL-23361.

Parker, G.A., J.P. Stewart, D.M. Boore, G.M. Atkinson, and B. Hassani. (2022). NGA-Subduction Global Ground Motion Models with Regional Adjustment Factors. Earthquake Spectra, 38(1), 456-493. doi:10.1177/87552930211034889.

Private Fuel Storage, LLC (PFS). (2006). Final Safety Analysis Report. Docket 72-22. Private Fuel Storage Limited Liability Company, La Crosse, Wisconsin, USA. ML061590385.

Petersen, M.D., et al. (2024). The 2023 US 50-State National Seismic Hazard Model: Overview and Implications. Earthquake Spectra, 40(1), 5-88. doi:10.1177/87552930231215428.

Renault, P. (2014). Approach and Challenges for the Seismic Hazard Assessment of Nuclear Power Plants: The Swiss Experience. Bollettino di Geofisica Teorica ed Applicata, 55(1), 149-164.

Renault, P., S. Heuberger, and N.A. Abrahamson. (2010). PEGASOS Refinement Project: An Improved PSHA for Swiss Nuclear Power Plants. Proceedings of the 14th European Conference of Earthquake Engineering, Ohrid, Republic of Macedonia. Paper ID 991.

Rodriguez-Marek, A., E. Rathje, J. Ake, C. Munson, S. Stovall, T. Weaver, K. Ulmer, and M. Juckett. (2021). Documentation Report for SSHAC Level 2: Site Response. Research Information Letter (RIL) 2021-15. U.S. Nuclear Regulatory Commission, Washington, DC.

Schwartz, D.P. and K.J. Coppersmith. (1984). Fault Behavior and Characteristic Earthquakes: Examples from the Wasatch and San Andreas Fault Zones. Journal of Geophysical Research, 89(B7), 5681-5698.

Stamatakos, J., C. Munson, R. Payne, A. Rodriguez-Marek, S. Stovall, K. Ulmer, T. Weaver, and M. Juckett. (2025). Documentation Report for Skull Valley SSHAC Level 1 Demonstration Project. Research Information Letter (RIL) 2025-10. U.S. Nuclear Regulatory Commission, Washington, DC. ML25044A212.

Stamatakos, J., M. Juckett, T. Weaver, C. Munson, and A. Rodriguez-Marek. (2024). The Basis for Site Selection of the Private Fuel Storage Site in Skull Valley, Utah, for the SSHAC Level 1 Demonstration Project. Center for Nuclear Waste Regulatory Analysis, San Antonio, TX. U.S. Nuclear Regulatory Commission, Washington, DC. ML25050A192.

Stepp, J. C. (1972). Analysis of Completeness of the Earthquake Sample in the Puget Sound Area and its Effect on Statistical Estimates of Earthquake Hazard. In Proceedings of the International Conference on Microzonation, (2), 897-910.

Rasmussen, C.E. and C.K.I. Williams. (2006). Gaussian Processes for Machine Learning. MIT Press, Cambridge, MA.

Wesnousky, S.G., C.H. Scholz, K. Shimazaki, and T. Matsuda. (1983). Earthquake Frequency Distribution and the Mechanics of Faulting. Journal of Geophysical Research: Solid Earth, 88(B11), 9331-9340. doi:10.1029/JB088iB11p09331.

Wesnousky, S.G., C.H. Scholz, K. Shimazaki, and T. Matsuda. (1984). Integration of Geological and Seismological Data for the Analysis of Seismic Hazard: A Case Study of Japan. Bulletin of the Seismological Society of America, 74(2), 687-708. doi:10.1785/BSSA0740020687.

Wesnousky, S.G. (1986). Earthquakes, Quaternary Faults, and Seismic Hazard in California. Journal of Geophysical Research: Solid Earth, 91(B12), 12587-12631.

Wooddell, K.E., N.A. Abrahamson, A.L. Acevedo-Cabrera, and R.R. Youngs. (2015). Alternative Magnitude-Frequency Distribution for Evaluating the Hazard of Multi-Segment Ruptures. Diablo Canyon Seismic Source Characterization, Attachment G-1: WAACY Magnitude PDF Manuscript (Revision A). Pacific Gas and Electric (PG&E), Avila Beach, California.

Youngs, R.R. and K.J. Coppersmith. (1985). Implications of Fault Slip Rates and Earthquake Recurrence Models to Probabilistic Hazard Estimates. Bulletin of the Seismological Society of America, 75, 939-964.

Zhao, J.X., J. Zhang, A. Asano, Y. Ohno, T. Oouchi, T. Takahashi, H. Ogawa, K. Irikura, H.K. Thio, P.G. Somerville, Y. Fukushima, and Y. Fukushima. (2006). Attenuation Relations of Strong Ground Motion in Japan Using Site Classification Based on Predominant Period. Bulletin of the Seismological Society of America, 96(3), 898-913. doi:10.1785/0120050122.

Zeng, Q., F.-C. Lin, and A.A. Allam. (2022). 3D Shear Wave Velocity Model of Salt Lake Valley via Rayleigh Wave Ellipticity Across a Temporary Geophone Array. The Seismic Record, 2(2), 127-136, doi: 10.1785/0320220016.