ML20045F285

RIL-2001, Part 4 - Proceedings of NRC Annual Probabilistic Flood Hazard Assessment Research Workshops I-IV
Person / Time
Issue date: 02/14/2020
From: Thomas Aird, M'Lita Carr, Joseph Kanney
Office of Nuclear Regulatory Research
To:
M. Carr 415-6322
References: RIL-2001, Pt. 4
Download: ML20045F285 (551)


Text

RIL-2001
PROCEEDINGS OF NRC ANNUAL PROBABILISTIC FLOOD HAZARD ASSESSMENT RESEARCH WORKSHOPS I-IV
2015-2019
Rockville, MD
Date Published: February 2020

Prepared by:
M. Carr
T. Aird
J. Kanney
U.S. Nuclear Regulatory Commission
Rockville, MD 20852

Part 4: Fourth Annual NRC Probabilistic Flood Hazard Assessment Research Workshop

Research Information Letter
Office of Nuclear Regulatory Research

Disclaimer

Legally binding regulatory requirements are stated only in laws, NRC regulations, licenses (including technical specifications), or orders, not in Research Information Letters (RILs). A RIL is not regulatory guidance, although the NRC's regulatory offices may consider the information in a RIL to determine whether any regulatory actions are warranted.

ABSTRACT

The U.S. Nuclear Regulatory Commission (NRC) Office of Nuclear Regulatory Research (RES) is conducting a multiyear, multi-project Probabilistic Flood Hazard Assessment (PFHA) Research Program to enhance the NRC's risk-informed, performance-based regulatory approach to external flood hazard assessment and to the safety consequences of external flooding events at nuclear power plants (NPPs). RES initiated this research after staff recognized that the lack of guidance for conducting PFHAs at nuclear facilities required staff and licensees to use highly conservative deterministic methods in regulatory applications. Risk assessment of flooding hazards and of the consequences of flooding events remains a recognized gap in the NRC's risk-informed, performance-based regulatory framework. The program's objective, research themes, and specific research topics are described in the RES Probabilistic Flood Hazard Assessment Research Plan. While the technical basis research, pilot studies, and guidance development are ongoing, RES has been holding Annual PFHA Research Workshops to communicate results, assess progress, collect feedback, and chart future activities. These workshops have brought together NRC staff and management from RES and user offices, technical support contractors, interagency and international collaborators, and industry and public representatives.

These conference proceedings transmit the agendas, abstracts, presentation slides, and summarized question-and-answer periods and panel discussions for the first four Annual U.S. Nuclear Regulatory Commission (NRC) Probabilistic Flood Hazard Assessment Research Workshops, held at NRC Headquarters in Rockville, MD. The workshops took place on October 14-15, 2015; January 23-25, 2017; December 4-5, 2017; and April 30-May 2, 2019. The first workshop was an internal meeting attended by NRC staff, contractors, and partner Federal agencies. The subsequent workshops were public meetings attended by members of the public; NRC technical staff, management, and contractors; and staff from other Federal agencies. Each workshop began with an introductory session that included perspectives and research program highlights from the NRC Office of Nuclear Regulatory Research and, depending on the workshop, perspectives from the NRC Office of New Reactors, the Office of Nuclear Reactor Regulation, the Electric Power Research Institute (EPRI), and industry representatives. NRC and EPRI staff and contractors, as well as invited Federal and public speakers, gave technical presentations and participated in panel discussions in various formats. Later workshops added poster sessions and participation from academics and interested students. The workshops included five focus areas:

(1) leveraging available flood information
(2) evaluating the application of improved mechanistic and probabilistic modeling for storm surge, climate, and precipitation
(3) probabilistic flood hazard assessment frameworks
(4) potential impacts of dynamic and nonstationary processes
(5) assessing the reliability of flood protection and plant response to flooding events
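
Several of these focus areas center on flood hazard curves, which express flood magnitude in terms of annual exceedance probability (AEP), a term defined in the abbreviations list that follows. As a minimal illustrative sketch of this standard usage (assumed from common PFHA practice, not taken from the workshop material), the AEP of a flood magnitude x and its associated return period T are related by

% Illustrative LaTeX sketch of the standard AEP/return-period relationship;
% X_max denotes the annual maximum of the flood variable (e.g., peak
% discharge) and F its cumulative distribution function (CDF).
\[
  \mathrm{AEP}(x) = \Pr\left(X_{\max} > x\right) = 1 - F(x),
  \qquad
  T(x) = \frac{1}{\mathrm{AEP}(x)},
\]

where, for example, an AEP of 1E-4 per year corresponds to a 10,000-year return period.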

TABLE OF CONTENTS

ABSTRACT .... iii
ABBREVIATION AND ACRONYMS .... x
INTRODUCTION .... xxxvii
BACKGROUND .... xxxvii
WORKSHOP OBJECTIVES .... xxxvii
WORKSHOP SCOPE .... xxxviii
SUMMARY OF PROCEEDINGS .... xxxviii
RELATED WORKSHOPS .... xxxix

1 FIRST ANNUAL NRC PROBABILISTIC FLOOD HAZARD ASSESSMENT RESEARCH WORKSHOP .... 1-1
  1.1 INTRODUCTION .... 1-1
    1.1.1 Organization of Conference Proceedings .... 1-1
  1.2 WORKSHOP AGENDA .... 1-3
  1.3 PROCEEDINGS .... 1-5
    1.3.1 Day 1: Session I: Program Overview .... 1-5
      1.3.1.1 Opening Remarks .... 1-5
      1.3.1.2 NRC PFHA Research Program Overview .... 1-7
      1.3.1.3 NRO Perspectives on Flooding Research Needs .... 1-24
      1.3.1.4 Office of Nuclear Reactor Regulation Perspectives on Flooding Research Needs .... 1-36
    1.3.2 Day 1: Session II: Climate .... 1-50
      1.3.2.1 Regional Climate Change Projections: Potential Impacts to Nuclear Facilities .... 1-50
    1.3.3 Day 1: Session III: Precipitation .... 1-63
      1.3.3.1 Estimating Precipitation-Frequency Relationships in Orographic Regions .... 1-63
      1.3.3.2 Numerical Simulation of Local Intense Precipitation .... 1-86
      1.3.3.3 SHAC-F (Local Intense Precipitation) .... 1-129
    1.3.4 Day 2: Session IV: Riverine and Coastal Flooding Processes .... 1-147
      1.3.4.1 PFHA Technical Basis for Riverine Flooding .... 1-147
      1.3.4.2 PFHA Framework for Riverine Flooding .... 1-166
      1.3.4.3 State of Practice in Flood Frequency Analysis .... 1-174
      1.3.4.4 Quantification and Propagation of Uncertainty in Probabilistic Storm Surge Models .... 1-190
      1.3.4.5 USBR Dam Breach Physical Modeling .... 1-206
    1.3.5 Day 2: Session V: Plant Response to Flooding Events .... 1-220
      1.3.5.1 Effects of Environmental Factors on Flood Protection and Mitigation Manual Actions .... 1-220
      1.3.5.2 Flooding Information Digests .... 1-238
      1.3.5.3 Framework for Modeling Total Plant Response to Flooding Events .... 1-250
      1.3.5.4 Performance of Penetration Seals .... 1-261
  1.4 SUMMARY .... 1-265
  1.5 WORKSHOP PARTICIPANTS .... 1-267

2 SECOND ANNUAL NRC PROBABILISTIC FLOOD HAZARD ASSESSMENT RESEARCH WORKSHOP .... 2-1
  2.1 INTRODUCTION .... 2-1
    2.1.1 Organization of Conference Proceedings .... 2-1
  2.2 WORKSHOP AGENDA .... 2-3
  2.3 PROCEEDINGS .... 2-7
    2.3.1 Day 1: Session 1A - Introduction .... 2-7
      2.3.1.1 Welcome .... 2-7
      2.3.1.2 PFHA Research Needs for New and Operating Reactors .... 2-12
      2.3.1.3 Use of Flooding Hazard Information in Risk-Informed Decision-making .... 2-22
      2.3.1.4 Flooding Research Needs: Industry Perspectives on Development of External Flood Frequency Methods .... 2-30
      2.3.1.5 NRC Flooding Research Program Overview .... 2-38
      2.3.1.6 EPRI Flooding Research Program Overview .... 2-46
    2.3.2 Day 1: Session 1B - Storm Surge Research .... 2-50
      2.3.2.1 Quantification of Uncertainty in Probabilistic Storm Surge Models .... 2-50
      2.3.2.2 Probabilistic Flood Hazard Assessment - Storm Surge .... 2-75
    2.3.3 Day 2: Session 2A - Climate and Precipitation .... 2-85
      2.3.3.1 Regional Climate Change Projections: Potential Impacts to Nuclear Facilities .... 2-85
      2.3.3.2 Numerical Modeling of Local Intense Precipitation Processes .... 2-98
      2.3.3.3 Extreme Precipitation Frequency Estimates for Orographic Regions .... 2-148
      2.3.3.4 Local Intense Precipitation Frequency Studies .... 2-165
    2.3.4 Day 2: Session 2B - Leveraging Available Flood Information I .... 2-177
      2.3.4.1 Development of Flood Hazard Information Digest for Operating NPP Sites .... 2-177
      2.3.4.2 At-Streamgage Flood Frequency Analyses for Very Low Annual Exceedance Probabilities from a Perspective of Multiple Distributions and Parameter Estimation Methods .... 2-184
      2.3.4.3 Extending Frequency Analysis beyond Current Consensus Limits .... 2-199
    2.3.5 Day 2: Session 2C - Leveraging Available Flood Information II .... 2-213
      2.3.5.1 Collection of Paleoflood Evidence .... 2-213
      2.3.5.2 Paleofloods on the Tennessee River: Assessing the Feasibility of Employing Geologic Records of Past Floods for Improved Flood Frequency Analysis .... 2-224
    2.3.6 Day 2: Session 2D - Reliability of Flood Protection and Plant Response I .... 2-243
      2.3.6.1 EPRI Flood Protection Project Status .... 2-243
      2.3.6.2 Performance of Flood-Rated Penetration Seals .... 2-256
    2.3.7 Day 2: Daily Wrap-Up Question and Answer Period .... 2-266
    2.3.8 Day 3: Session 3A - Reliability of Flood Protection and Plant Response II .... 2-267
      2.3.8.1 Effects of Environmental Factors on Manual Actions for Flood Protection and Mitigation at Nuclear Power Plants .... 2-267
      2.3.8.2 Modeling Total Plant Response to Flooding Event .... 2-284
    2.3.9 Day 3: Session 3B - Frameworks I .... 2-303
      2.3.9.1 Technical Basis for Probabilistic Flood Hazard Assessment .... 2-303
    2.3.10 Day 3: Session 3C - Frameworks II .... 2-318
      2.3.10.1 Evaluation of Deterministic Approaches to Characterizing Flood Hazards .... 2-318
      2.3.10.2 Probabilistic Flood Hazard Assessment Framework Development .... 2-334
      2.3.10.3 Riverine Flooding and Structured Hazard Assessment Committee Process for Flooding (SHAC-F) .... 2-349
    2.3.11 Day 3: Session 3D - Panel Discussion .... 2-367
      2.3.11.1 National Oceanic and Atmospheric Administration/National Weather Service (NOAA/NWS) .... 2-367
      2.3.11.2 U.S. Army Corps of Engineers .... 2-370
      2.3.11.3 Tennessee Valley Authority (TVA) .... 2-375
      2.3.11.4 U.S. Department of Energy (DOE) .... 2-387
      2.3.11.5 Institut de Radioprotection et de Sûreté Nucléaire .... 2-391
      2.3.11.6 Discussion .... 2-396
    2.3.12 Day 3: Session 3E - Future Work in PFHA .... 2-402
      2.3.12.1 Future Work in PFHA at EPRI .... 2-402
      2.3.12.2 Future Work in PFHA at NRC .... 2-407
  2.4 SUMMARY .... 2-417
  2.5 PARTICIPANTS .... 2-419

3 THIRD ANNUAL NRC PROBABILISTIC FLOOD HAZARD ASSESSMENT RESEARCH WORKSHOP .... 3-1
  3.1 INTRODUCTION .... 3-1
    3.1.1 Organization of Conference Proceedings .... 3-1
  3.2 WORKSHOP AGENDA .... 3-3
  3.3 PROCEEDINGS .... 3-9
    3.3.1 Day 1: Session 1A - Introduction .... 3-9
      3.3.1.1 Welcome .... 3-9
      3.3.1.2 NRC Flooding Research Program Overview .... 3-11
      3.3.1.3 EPRI Flooding Research Program Overview .... 3-20
    3.3.2 Day 1: Session 1B - Climate and Precipitation .... 3-29
      3.3.2.1 Regional Climate Change Projections: Potential Impacts to Nuclear Facilities .... 3-29
      3.3.2.2 Numerical Modeling of Local Intense Precipitation Processes .... 3-42
      3.3.2.3 Research on Extreme Precipitation Estimates in Orographic Regions .... 3-70
    3.3.3 Day 1: Session 1C - Storm Surge .... 3-94
      3.3.3.1 Quantification of Uncertainty in Probabilistic Storm Surge Models .... 3-94
      3.3.3.2 Probabilistic Flood Hazard Assessment - Storm Surge .... 3-109
    3.3.4 Day 1: Session 1D - Leveraging Available Flood Information I .... 3-116
      3.3.4.1 Flood Frequency Analyses for Very Low Annual Exceedance Probabilities using Historic and Paleoflood Data, with Considerations for Nonstationary Systems .... 3-116
      3.3.4.2 Extending Frequency Analysis beyond Current Consensus Limits .... 3-135
      3.3.4.3 Development of External Hazard Information Digests for Operating NPP Sites .... 3-149
    3.3.5 Day 1: Session 1E - Paleoflood Studies .... 3-163
      3.3.5.1 Improving Flood Frequency Analysis with a Multi-Millennial Record of Extreme Floods on the Tennessee River near Chattanooga .... 3-163
      3.3.5.2 Collection of Paleoflood Evidence .... 3-179
    3.3.6 Day 2: Daily Wrap-up Session / Public Comments .... 3-191
    3.3.7 Day 2: Poster Session .... 3-195
      3.3.7.1 Poster Abstracts .... 3-195
      3.3.7.2 Posters .... 3-200
    3.3.8 Day 2: Session 2A - Reliability of Flood Protection and Plant Response I .... 3-227
      3.3.8.1 Performance of Flood-Rated Penetration Seals .... 3-227
      3.3.8.2 EPRI Flood Protection Project Status .... 3-234
      3.3.8.3 A Conceptual Framework to Assess Impacts of Environmental Conditions on Manual Actions for Flood Protection and Mitigation at Nuclear Power Plants .... 3-240
      3.3.8.4 External Flooding Walkdown Guidance .... 3-250
      3.3.8.5 Erosion Testing of Zoned Rockfill Embankments .... 3-258
    3.3.9 Day 2: Session 2B - Frameworks I .... 3-295
      3.3.9.1 A Framework for Inland Probabilistic Flood Hazard Assessments: Analysis of Extreme Snow Water Equivalent in Central New Hampshire .... 3-295
      3.3.9.2 Structured Hazard Assessment Committee Process for Flooding (SHAC-F) for Riverine Flooding .... 3-304
    3.3.10 Day 2: Session 2C - Panel Discussions .... 3-316
      3.3.10.1 Flood Hazard Assessment Research and Guidance Activities in Partner Agencies .... 3-316
      3.3.10.2 External Flooding Probabilistic Risk Assessment (PRA): Perspectives on Gaps and Challenges .... 3-351
    3.3.11 Day 2: Session 2D - Future Work in PFHA .... 3-375
      3.3.11.1 Future Work in PFHA at EPRI .... 3-375
      3.3.11.2 Future Work in PFHA at NRC .... 3-380
    3.3.12 Day 2: Final Wrap-up Session / Public Comment .... 3-388
  3.4 SUMMARY .... 3-389
  3.5 WORKSHOP PARTICIPANTS .... 3-391

4 FOURTH ANNUAL NRC PROBABILISTIC FLOOD HAZARD ASSESSMENT RESEARCH WORKSHOP .... 4-1
  4.1 INTRODUCTION .... 4-1
    4.1.1 Organization of Conference Proceedings .... 4-1
  4.2 WORKSHOP AGENDA .... 4-2
  4.3 PROCEEDINGS .... 4-9
    4.3.1 Day 1: Session 1A - Introduction .... 4-9
      4.3.1.1 Introduction .... 4-9
      4.3.1.2 NRC Flooding Research Program Overview .... 4-12
      4.3.1.3 EPRI External Flooding Research Program Overview .... 4-23
      4.3.1.4 Nuclear Energy Agency, Committee on the Safety of Nuclear Installations (CSNI): Working Group on External Events (WGEV) .... 4-28
    4.3.2 Day 1: Session 1B - Coastal Flooding .... 4-33
      4.3.2.1 KEYNOTE: National Weather Service Storm Surge Ensemble Guidance .... 4-33
      4.3.2.2 Advancements in Probabilistic Storm Surge Models and Uncertainty Quantification Using Gaussian Process Metamodeling .... 4-56
      4.3.2.3 Probabilistic Flood Hazard Assessment Using the Joint Probability Method for Hurricane Storm Surge .... 4-72
      4.3.2.4 Assessment of Epistemic Uncertainty for Probabilistic Storm Surge Hazard Assessment Using a Logic Tree Approach .... 4-80
      4.3.2.5 Coastal Flooding Panel .... 4-91
    4.3.3 Day 1: Session 1C - Precipitation .... 4-98
      4.3.3.1 KEYNOTE: Satellite Precipitation Estimates, GPM, and Extremes .... 4-98
      4.3.3.2 Hurricane Harvey Highlights: Need to Assess the Adequacy of Probable Maximum Precipitation Estimation Methods .... 4-111
      4.3.3.3 Reanalysis Datasets in Hydrologic Hazards Analysis .... 4-112
      4.3.3.4 Current Capabilities for Developing Watershed Precipitation-Frequency Relationships and Storm-Related Inputs for Stochastic Flood Modeling for Use in Risk-Informed Decisionmaking .... 4-125
      4.3.3.5 Factors Affecting the Development of Precipitation Areal Reduction Factors .... 4-142
      4.3.3.6 Precipitation Panel Discussion .... 4-156
    4.3.4 Day 2: Session 2A - Riverine Flooding .... 4-162
      4.3.4.1 KEYNOTE: Watershed Level Risk Analysis with HEC-WAT .... 4-162
      4.3.4.2 Global Sensitivity Analyses Applied to Riverine Flood Modeling .... 4-195
      4.3.4.3 Detection and Attribution of Flood Change Across the United States .... 4-206
      4.3.4.4 Bulletin 17C: Flood Frequency and Extrapolations for Dams and Nuclear Facilities .... 4-206
      4.3.4.5 Riverine Paleoflood Analyses in Risk-Informed Decisionmaking: Improving Hydrologic Loading Input for USACE Dam Safety Evaluations .... 4-227
      4.3.4.6 Improving Flood Frequency Analysis with a Multi-Millennial Record of Extreme Floods on the Tennessee River near Chattanooga, TN .... 4-243
      4.3.4.7 Riverine Flooding Panel Discussion .... 4-252
    4.3.5 Day 2: Session 2B - Modeling Frameworks .... 4-261
      4.3.5.1 Structured Hazard Assessment Committee Process for Flooding (SHAC-F) .... 4-261
      4.3.5.2 Overview of the TVA PFHA Calculation System .... 4-272
      4.3.5.3 Development of Risk-Informed Safety Margin Characterization Framework for Flooding of Nuclear Power Plants .... 4-287
      4.3.5.4 Modeling Frameworks Panel Discussion .... 4-306
    4.3.6 Day 2: Poster Session 2C .... 4-311
      4.3.6.1 Coastal Storm Surge Assessment using Surrogate Modeling Methods .... 4-312
      4.3.6.2 Methods for Estimating Joint Probabilities of Coincident and Correlated Flooding Mechanisms for Nuclear Power Plant Flood Hazard Assessments .... 4-312
      4.3.6.3 Modelling Dependence and Coincidence of Flooding Phenomena: Methodology and Simplified Case Study in Le Havre in France .... 4-315
      4.3.6.4 Current State-of-Practice in Dam Risk Assessment .... 4-315
      4.3.6.5 Hurricane Harvey Highlights Challenge of Estimating Probable Maximum Precipitation .... 4-320
      4.3.6.6 Uncertainty and Sensitivity Analysis for Hydraulic Models with Dependent Inputs .... 4-320
      4.3.6.7 Development of Hydrologic Hazard Curves Using SEFM for Assessing Hydrologic Risks at Rhinedollar Dam, CA .... 4-323
      4.3.6.8 Probabilistic Flood Hazard Analysis of Nuclear Power Plant in Korea .... 4-328
    4.3.7 Day 3: Session 3A - Climate and Non-Stationarity .... 4-329
      4.3.7.1 KEYNOTE: Hydroclimatic Extremes Trends and Projections: A View from the Fourth National Climate Assessment .... 4-329
      4.3.7.2 Regional Climate Change Projections: Potential Impacts to Nuclear Facilities .... 4-349
      4.3.7.3 Role of Climate Change/Variability in the 2017 Atlantic Hurricane Season .... 4-364
      4.3.7.4 Climate Panel Discussion .... 4-374
    4.3.8 Day 3: Session 3B - Flood Protection and Plant Response .... 4-378
      4.3.8.1 External Flood Seal Risk-Ranking Process .... 4-378
      4.3.8.2 Results of Performance of Flood-Rated Penetration Seals Tests .... 4-386
      4.3.8.3 Modeling Overtopping Erosion Tests of Zoned Rockfill Embankments .... 4-398
      4.3.8.4 Flood Protection and Plant Response Panel Discussion .... 4-419
    4.3.9 Day 3: Session 3C - Towards External Flooding PRA .... 4-423
      4.3.9.1 External Flooding PRA Walkdown Guidance .... 4-423
      4.3.9.2 Updates on the Revision and Expansion of the External Flooding PRA Standard .... 4-435
      4.3.9.3 Update on ANS 2.8: Probabilistic Evaluation of External Flood Hazards for Nuclear Facilities Working Group Status .... 4-446
      4.3.9.4 Qualitative PRA Insights from Operational Events of External Floods and Other Storm-Related Hazards .... 4-456
      4.3.9.5 Towards External Flooding PRA Discussion Panel .... 4-464
  4.4 SUMMARY .... 4-475
  4.5 WORKSHOP PARTICIPANTS .... 4-477

5 SUMMARY AND CONCLUSIONS .... 5-489
  5.1 SUMMARY .... 5-489
  5.2 CONCLUSIONS .... 5-489

ACKNOWLEDGEMENTS .... 5-490

ABBREVIATION AND ACRONYMS

σ  sigma, standard deviation

°C  degrees Celsius

°F  degrees Fahrenheit
13C-NMR  carbon-13 nuclear magnetic resonance
14C  carbon-14
17B  Guidelines for Determining Flood Flow Frequency, Bulletin 17B, 1982
17C  Guidelines for Determining Flood Flow Frequency, Bulletin 17C, 2018
1-D  one-dimensional
20C  20th Century Reanalysis
2BCMB  Level 2 DPR and GMI combined product
2-D  two-dimensional
3-D  three-dimensional
AAB  Accident Analysis Branch in NRC/RES/DSA
AB  auxiliary building
AC, ac  alternating current
ACCP  Alabama Coastal Comprehensive Plan
ACE  accumulated cyclone energy, an approximation of the wind energy used by a tropical system over its lifetime
ACM  alternative conceptual model
ACME  Accelerated Climate Modeling for Energy (DOE)

ACWI  Advisory Committee on Water Information
AD  anno Domini
ADAMS  Agencywide Documents Access and Management System
ADCIRC  ADvanced CIRCulation model
AEP  annual exceedance probability
AEP4  Asymmetric Exponential Power distribution
AFW  auxiliary feedwater
AGCMLE  Assistant General Counsel for Materials Litigation and Enforcement in NRC/OGC/GCHA
AGCNRP  Assistant General Counsel for New Reactor Programs in NRC/OGC/GCHA
AGFZ  Azores-Gibraltar Transform Fault
AGL  above ground level
AIC  Akaike Information Criterion

AIMS  assumptions, inputs, and methods
AIRS  Advanced InfraRed Sounder
AIT  air intake tunnel
AK  Alaska
AM  annual maxima
AMJ  April, May, June
AMM  Atlantic Meridional Mode
AMO  Atlantic Multi-Decadal Oscillation
AMS  annual maxima series
AMSR-2  Advanced Microwave Scanning Radiometer 2
AMSU  Advanced Microwave Sounding Unit
ANN  annual
ANO  Arkansas Nuclear One
ANOVA  analysis of variance decomposition
ANS  American Nuclear Society
ANSI  American National Standards Institute
ANVS  Netherlands Authority for Nuclear Safety and Radiation Protection
AO  Assistant for Operations in NRC/OEDO
AOP  abnormal operating procedure
APF  annual probability of failure
APHB  Probabilistic Risk Assessment Operations and Human Factors Branch
API  application programming interface
APLA/APLB  Probabilistic Risk Assessment Licensing Branch A/B in NRC/NRR/DRA
APOB  PRA Oversight Branch in NRC/NRR/DRA
AR  atmospheric river
AR  Arkansas
AR4, AR5  climate scenarios from the 4th/5th Intergovernmental Panel on Climate Change reports/working groups
ARA  Applied Research Associates
ArcGIS  geographic information system owned by ESRI
ARF  areal reduction factor
ARI  average return interval
ARR  Australian Rainfall-Runoff Method
AS  adjoining stratiform
ASM  annual series maxima

ASME  American Society of Mechanical Engineers
ASN  French Nuclear Safety Authority (Autorité de Sûreté Nucléaire)

ASTM  American Society for Testing and Materials
ATMS  Advanced Technology Microwave Sounder
ATWS  anticipated transient without scram
AVHRR  Advanced Very High Resolution Radiometer
B&A  Bittner & Associates
BATEA  Bayesian Total Error Analysis
BB  backbuilding/quasistationary
BC  boundary condition
Bel V  subsidiary of the Belgian Federal Agency for Nuclear Control (FANC)

BHM  Bayesian Hierarchical Model
BIA  Bureau of Indian Affairs
BMA  Bayesian Model Averaging
BQ  Bayesian Quadrature
BWR  boiling-water reactor
CA  California
CAC  common access card
CAPE  Climate Action Peer Exchange
CAPE  convective available potential energy
CAS  corrective action study
CAS2CD  CAScade 2-Dimensional model (Colorado State)

Cat.  category on the Saffir-Simpson Hurricane Wind Scale
CBR  center, body, and range
CC  Clausius-Clapeyron
CC  climate change
CCCR  Center for Climate Change Research
CCDP  conditional core damage probability
CCI  Coppersmith Consulting Inc.

CCSM4  Community Climate System Model version 4
CCW  closed cooling water
CDB  current design basis
CDF  core damage frequency
CDF  cumulative distribution function

CE  common era
CEATI  Centre for Energy Advancement through Technological Innovation
CEET  cracked embankment erosion test
CENRS  National Science and Technology Council Committee on Environment, Natural Resources, and Sustainability
CESM  Community Earth System Model
CFD  computational fluid dynamics
CFHA  comprehensive flood hazard assessment
CFR  Code of Federal Regulations
CFSR  Climate Forecast System Reanalysis
CHIPs  Coupled Hurricane Intensity Prediction System
CHiRPs  Climate Hazards Group infraRed Precipitation with Station Data
CHL  Coastal and Hydraulics Laboratory
CHRP  Coastal Hazard Rapid Prediction, part of StormSIM
CHS  Coastal Hazards System
CI  confidence interval
CICS-NC  Cooperative Institute for Climate and Satellites, North Carolina
CIPB  Construction Inspection Management Branch in NRC/NRO/DLSE
CIRES  Cooperative Institute for Research in Environmental Sciences
CL  confidence level
CL-ML  homogeneous silty clay soil
CMC  Canadian Meteorological Center forecasts
CMIP5  Coupled Model Intercomparison Project Phase 5
CMORPH / C-MORPH  Climate Prediction Center Morphing Technique
CNE  Consiliul National al Elevilor, Romania
CNSC  Canadian Nuclear Safety Commission
CO  Colorado
CoCoRaHS  Community Collaborative Rain, Hail & Snow Network (NWS)

COE  U.S. Army Corps of Engineers (see also USACE)

COL  combined license
COLA  combined license application
COM-SECY  NRC staff requests to the Commission for guidance
CONUS  Continental United States
COOP  Cooperative Observer Network (NWS)

COR  contracting officer's representative
CPC  Climate Prediction Center (NOAA)

CPFs  cumulative probability functions
CR  comprehensive review
CRA  computational risk assessment
CRB  Concerns Resolution Branch in NRC/OE
CRL  coastal reference location
CRPS  continuous ranked probability score
CSNI  Committee on the Safety of Nuclear Installations
CSRB  Criticality, Shielding & Risk Assessment Branch in NRC/NMSS/DSFM
CSSR  Climate Science Special Report (by the U.S. Global Change Research Program)

CSTORM  Coastal Storm Modeling System
CTA Note  note to Commissioners' Assistants
CTXS  Coastal Texas Study
CV  coefficient of variation
CZ  capture zone
DC  District of Columbia
DAD  depth-area-duration
DAMBRK  Dam Break Flood Forecasting Model (NWS)

DAR  Division of Advanced Reactors in NRC/NRO
DayMet  daily surface weather and climatological summaries
dBz  decibel relative to z, a measure of radar reflectivity
DCIP  Division of Construction Inspection and Operational Programs in NRC/NRO
DDF  depth-duration-frequency curve
DDM  data-driven methodology
DDST  database of daily storm types
DE  Division of Engineering in NRC/RES
DHSVM  distributed hydrology soil vegetation model, supported by the University of Washington
DIRS  Division of Inspection and Regional Support in NRC/NRR
DJF  December, January, February
DLBreach  Dam/Levee Breach model developed by Weiming Wu, Clarkson University
DLSE  Division of Licensing, Siting, and Environmental Analysis in NRC/NRO

DOE  U.S. Department of Energy
Dp  pressure deficit
DPI  power dissipation index
DPR  Division of Preparedness and Response in NRC/NSIR
DPR  Dual-Frequency Precipitation Radar
DQO  data quality objective
DRA  Division of Risk Assessment in NRC/NRR
DRA  Division of Risk Analysis in NRC/RES
DREAM  Differential Evolution Adaptive Metropolis
DRP  Division of Reactor Projects in NRC/R-I
DRS  Division of Reactor Safety in NRC/R-I and R-IV
DSA  Division of Systems Analysis in NRC/RES
DSEA  Division of Site Safety and Environmental Analysis, formerly in NRC/NRO, now in DLSE
DSFM  Division of Spent Fuel Management in NRC/NMSS
DSI3240  NCEI hourly precipitation data
DSMS  Dam Safety Modification Study
DSMS  digital surface models
DSPC  USACE Dam Safety Production Center
DSRA  Division of Safety Systems, Risk Assessment and Advanced Reactors in NRC/NRO (merged into DAR)

DSS  Division of Safety Systems in NRC/NRR
DSS  Hydrologic Engineering Center Data Storage System
DTWD  doubly truncated Weibull distribution
DUWP  Division of Decommissioning, Uranium Recovery, and Waste Programs in NRC/NMSS
DWOPER  Operational Dynamic Wave Model (NWS)
dy  day
EAD  expected annual damage
EB2/EB3  Engineering Branch 2/3 in NRC/R-IV/DRS
EBTRK  Tropical Cyclone Extended Best Track Dataset
EC  Eddy Covariance Method
EC  environmental condition
ECC  ensemble copula coupling
ECCS  emergency core cooling system pump

ECs  environmental conditions
EDF  Électricité de France
EDG  emergency diesel generator
EF  environmental factor
EFW  emergency feedwater
EGU  European Geosciences Union
EHCOE  NRC External Hazard Center of Expertise
EHID  External Hazard Information Digest
EIRL  equivalent independent record length
EIS  environmental impact statement
EKF  Epanechnikov kernel function
EMA  expected moments algorithm
ECMWF  European Centre for Medium-Range Weather Forecasts
EMDR  eastern main development region (for hurricanes)

EMRALD  Event Model Risk Assessment using Linked Diagrams
ENSI  Swiss Federal Nuclear Safety Inspectorate
ENSO  El Niño-Southern Oscillation
EPA  U.S. Environmental Protection Agency
EPIP  emergency plan implementing procedure
EPRI  Electric Power Research Institute
ER  engineering regulation (USACE)

ERA-40  European ECMWF reanalysis dataset
ERB  Environmental Review Branch in NRC/NMSS/FCSE
ERDC  Engineer Research and Development Center (USACE)

ERL  equivalent record length
ESCC  Environmental and Siting Consensus Committee (ANS)

ESEB  Structural Engineering Branch in NRC/RES/DE
ESEWG  Extreme Storm Events Work Group (ACWI/SOH)

ESP  early site permit
ESRI  Environmental Systems Research Institute
ESRL  Earth Systems Research Lab (NOAA/OAR)

EST  Eastern Standard Time
EST  empirical simulation technique
ESTP  enhanced storm transposition procedure

ET  event tree
ET  evapotranspiration
ET/FT  event tree/fault tree
ETC  extratropical cyclone
EUS  eastern United States
EV4  four-parameter extreme value distribution function
EVA  extreme value analysis
EVT  extreme value theory
EXHB  External Hazards Branch in NRC/NRO/DLSE
Exp  experimental
f  annual probability of failure (USBR, USACE)

F1, F5  tornado strengths on the Fujita scale
FA  frequency analysis
FADSU  fluvial activity database of the Southeastern United States
FAQ  frequently asked question
FAST  Fourier Amplitude Sensitivity Test
FBPS  flood barrier penetration seal
FBS  flood barrier system
FCM  flood-causing mechanism
FCSE  Division of Fuel Cycle Safety, Safeguards & Environmental Review in NRC/NMSS
FD  final design
FDC  flood design category (DOE terminology)

FEMA  Federal Emergency Management Agency
FERC  Federal Energy Regulatory Commission
FFA  flood frequency analysis
FFC  flood frequency curve
FHRR  flood hazard reevaluation report
FITAG  Flooding Issues Technical Advisory Group
FL  Florida
FLDFRQ3  U.S. Bureau of Reclamation flood frequency analysis tool
FLDWAV  flood wave model (NWS)

FLEX  diverse and flexible mitigation strategies
Flike  extreme value analysis package developed by the University of Newcastle, Australia

FLO-2D  two-dimensional commercial flood model
FM  Approvals Testing and Certification Services Laboratories, originally Factory Mutual Laboratories
f-N  annual probability of failure vs. average life loss, N
FOR  peak flood of record
FPM  flood protection and mitigation
FPS  flood penetration seal
FRA  Flood Risk Analysis Compute Option in HEC-WAT
FRM  Fire Risk Management, Inc.

FSAR  final safety analysis report
FSC  flood-significant component
FSG  FLEX support guidelines
FSP  flood seal for penetrations
FT  fault tree
ft  foot
FXHAB  Fire and External Hazards Analysis Branch in NRC/RES/DRA
FY  fiscal year
G&G  geology and geotechnical engineering
GA  generic action
GCHA  Deputy General Counsel for Hearings and Administration in NRC/OGC
GCM  Global Climate Model
GCRP  U.S. Global Change Research Program
GCRPS  Deputy General Counsel for Rulemaking and Policy Support in NRC/OGC
GEFS  Global Ensemble Forecasting System
GeoClaw  routines from Clawpack-5 (Conservation Laws Package) that are specialized to depth-averaged geophysical flows
GEO-IR  Geostationary Satellites InfraRed Imagery
GEV  generalized extreme value
GFDL  Geophysical Fluid Dynamics Lab (NOAA)

GFS  Global Forecast System
GHCN  Global Historical Climatology Network
GHCND  Global Historical Climatology Network-Daily
GIS  geographic information system
GISS  Goddard Institute for Space Studies (NASA)

GKF  Gaussian kernel function
GL  generic letter
GLO  generalized logistic distribution
GLRCM  Great Lakes Regional Climate Model
GLUE  generalized likelihood uncertainty estimation
GMAO  Global Modeling and Assimilation Office (NASA)

GMC  ground motion characterization
GMD  geoscientific model development
GMI  GPM microwave imager
GMSL  global mean sea level
GNO  generalized normal distribution
GoF  goodness-of-fit
GPA/GPD  generalized Pareto distribution
GPCP SG  Global Precipitation Climatology Project, Satellite Gauge
GPLLJ  Great Plains lower-level jet
GPM  Gaussian process metamodel
GPM  Global Precipitation Measurement
GPO  generalized Pareto distribution
GPROF  Goddard profile algorithm
GRADEX  rainfall-based flood frequency distribution method
Grizzly  RISMC tool simulating component aging and damage evolution
GRL  Geophysical Research Letters
GRS  Gesellschaft für Anlagen- und Reaktorsicherheit (Global Research for Safety)
GSA  global sensitivity analysis
GSFC  Goddard Space Flight Center
GSI  generic safety issue
GUI  graphical user interface
GW-GC  well-graded gravel with clay and sand
GZA  a multidisciplinary consulting firm
h  second shape parameter of the four-parameter Kappa distribution
h/hr  hour
H&H  hydraulics and hydrology
HAMC  hydraulic model characterization

HBV  rainfall-runoff model (Hydrologiska Byråns Vattenbalansavdelning), supported by the Swedish Meteorological and Hydrological Institute
HCA  hierarchical clustering analysis
HCTISN  Supreme Committee for Transparency and Information on Nuclear Safety (France)

HCW  hazardous convective weather
HDSC  NOAA/NWS/OWP Hydrometeorological Design Studies Center
HEC  Hydrologic Engineering Center, part of USACE/Institute for Water Resources
HEC-1  see HEC-HMS
HEC-FIA  Hydrologic Engineering Center Flood Impact Analysis Software
HEC-HMS  Hydrologic Modeling System
HEC-LifeSim  Hydrologic Engineering Center life loss and direct damage estimation software
HEC-MetVue  Hydrologic Engineering Center Meteorological Visualization Utility Engine
HEC-RAS  Hydrologic Engineering Center River Analysis System
HEC-ResSim  Hydrologic Engineering Center Reservoir System Simulation
HEC-SSP  Hydrologic Engineering Center Statistical Software Package
HEC-WAT  Hydrologic Engineering Center Watershed Analysis Tool
HEP  human error probability
HF  human factors
HFRB  Human Factors and Reliability Branch in NRC/RES/DRA
HHA  hydrologic hazard analysis
HHC  hydrologic hazard curve
HI  Hawaii
HLR  high-level requirement
HLWFCNS  Assistant General Counsel for High-Level Waste, Fuel Cycle and Nuclear Security in NRC/OGC/GCRPS
HMB  Hazard Management Branch in NRC/NRR/JLD (realigned)
HMC  hydraulic/hydrologic model characterization
HMR  NOAA/NWS Hydrometeorological Report
HMS  hydrologic modeling system
HOMC  hydrologic model characterization
hPa  hectopascals (unit of pressure)

HR  homogeneous region
HRA  human reliability analysis
HRL  Hydrologic Research Lab, University of California at Davis
HRRR  NOAA High-Resolution Rapid Refresh Model
HRRs  Fukushima Hazard Reevaluation Reports (EPRI term)

HRU  hydrologic runoff unit approach
HUC  hydrologic unit code for watershed (USGS)

HUNTER  RISMC tool for human actions
HURDAT  National Hurricane Center's HURricane DATabases
Hz  hertz (1 cycle/second)

IA  integrated assessment
IA  Iowa
IAEA  International Atomic Energy Agency
IBTrACS  International Best Track Archive for Climate Stewardship
IC  initial condition
ICOLD  International Commission on Large Dams
ID  information digest
IDF  intensity-duration-frequency curve
IDF  inflow design flood
IE  initiating event
IEF  initiating event frequency
IES  Dam Safety Issue Evaluation Studies
IHDM  Institute of Hydrology Distributed Model, United Kingdom
IID  independent and identically distributed
IL  Illinois
IMERG  Integrated Multi-satellitE Retrievals for GPM
IMPRINT  Improved Performance Research Integration Tool
in  inch
IN  information notice
INES  International Nuclear and Radiological Event Scale
INL  Idaho National Laboratory
IPCC  Intergovernmental Panel on Climate Change
IPE  individual plant examination
IPEEE  individual plant examination for external events

IPET  Interagency Performance Evaluation Taskforce for the Performance Evaluation of the New Orleans and Southeast Louisiana Hurricane Protection System
IPWG  International Precipitation Working Group
IR  infrared
IR  inspection report
IRIB  Reactor Inspection Branch in NRC/NRR/DIRS
IRP  Integrated Research Projects (DOE)

IRSN  Institut de Radioprotection et de Sûreté Nucléaire (France's Radioprotection and Nuclear Safety Institute)

ISG  interim staff guidance
ISI  inservice inspection
ISR  interim staff response
IT  information technology
IVT  integrated vapor transport
IWR  USACE Institute for Water Resources
IWVT  integrated water vapor tendency
J  joule
JJA  June, July, August
JLD  Japan Lessons-Learned Directorate or Division in NRC/NRR (realigned)
JPA  Joint Powers Authority (FEMA Region II)

JPA  joint probability analysis
JPM  joint probability method
JPM-OS  Joint Probability Method with Optimal Sampling
K  degrees Kelvin
KAERI  Korea Atomic Energy Research Institute
KAP  Kappa distribution
kd  erodibility coefficient
kg  kilogram
kHz  kilohertz (1,000 cycles/second)
km  kilometer
KS  Kansas
LA  Louisiana
LACPR  Louisiana Coastal Protection and Restoration Study
LAR  license amendment request

L-Cv  coefficient of L-variation
LEO  low Earth orbit
LER  licensee event report
LERF  large early release frequency
LIA  Little Ice Age
LiDAR  light imaging, detection, and ranging; a surveying method that uses reflected pulsed light to measure distance
LIP  local intense precipitation
LMI  lifetime maximum intensity
LMOM / LMR  L-moment
LN4  Slade-type four-parameter lognormal distribution function
LOCA  localized constructed analog
LOCA  loss-of-coolant accident
LOOP  loss of offsite power event
LOUHS  loss of ultimate heat sink event
LPIII / LP-III, LP3  Log Pearson Type III distribution
LS  leading stratiform
LS  local storm
LSHR  late secondary heat removal
LTWD  left-truncated Weibull distribution
LULC  land use and land cover
LWR  light-water reactor
LWRS  Light-Water Reactor Sustainability Program
m  meter
MA  Massachusetts
MA  manual action
MAAP  RISMC tool coupling accident conditions
MAE  mean absolute error
MAM  March, April, May
MAP  mean annual precipitation
MASTODON  RISMC tool for structural dynamics and stochastic nonlinear soil-structure interaction in a risk framework
mb  millibar
MCA  medieval climate anomaly
MCC  mesoscale convective complex

MCI  Monte Carlo integration
MCLC  Monte Carlo Life-Cycle
MCMC  Markov chain Monte Carlo method
MCRAM  streamflow volume stochastic modeling
MCS  mesoscale convective system
MCS  Monte Carlo simulation
MCTA  Behrangi Multisatellite CloudSat TRMM Aqua Product
MD  Maryland
MDL  Meteorological Development Laboratory (NWS)

MDR  Main Development Region (for hurricanes)
MDT  Methodology Development Team

MDT Methodology Development Team MEC mesoscale storm with embedded convection MEOW Maximum Envelopes of Water MetStorm storm analysis software by MetStat, second generation of SPAS MGD meta-Gaussian distribution MGS Engineering engineering consultants MHS microwave humidity sounder MIKE SHE/ MIKE 21 integrated hydrological modeling system MLC mid-latitude cyclone MLE maximum likelihood estimation mm millimeter MM5 fifth-generation Penn State/NCAR mesoscale model MMC mesh-based Monte Carlo method MMC meteorological model characterization MMF multimechanism flood MMP mean monthly precipitation MN Minnesota MO Missouri Mode 3 Reactor Operation Mode: Hot Standby Mode 4 Reactor Operation Mode: Hot Shutdown Mode 5 Reactor Operation Mode: Cold Shutdown MOM Maximum of MEOWs MOU memorandum of understanding MPE multisensor precipitation estimates xxiv

mph  miles per hour
MPS  maximum product of spacings
MRMS  Multi-Radar Multi-Sensor project (NOAA/NSSL)

MS  Mississippi
MSA  mitigating strategies assessment
MSFHI  mitigating strategies flood hazard information
MSL  mean sea level
MSWEP  multisource weighted-ensemble precipitation dataset
MVGC  multivariable Gaussian copula
MVGD  multivariable Gaussian distribution
MVTC  multivariable Student's t copula
N  average life loss (USBR, USACE)

NA14  NOAA National Atlas 14
NACCS  North Atlantic Coast Comprehensive Study
NAEFS  North American Ensemble Forecasting System
NAIP  National Agricultural Imagery Program
NAM-WRF  North American Mesoscale Model-WRF
NAO  North Atlantic Oscillation
NARCCAP  North American Regional Climate Change Assessment Program
NARR  North American Regional Reanalysis (NOAA)

NARSIS  European Research Project New Approach to Reactor Safety Improvements
NASA  National Aeronautics and Space Administration
NAVD88  North American Vertical Datum of 1988
NBS  net basin supply
NCA3/NCA4  U.S. Global Change Research Program Third/Fourth National Climate Assessment
NCAR  National Center for Atmospheric Research
NCEI  National Centers for Environmental Information
NCEP  National Centers for Environmental Prediction (NOAA)

ND  North Dakota
NDFD  National Digital Forecast Database (NWS)

NDSEV  number of days with severe thunderstorm environments
NE  Nebraska
NEA  Nuclear Energy Agency

NEB  nonexceedance bounds
NEI  Nuclear Energy Institute
NESDIS  NOAA National Environmental Satellite, Data, and Information Service
NEUTRINO  a general-purpose simulation and visualization environment including an SPH solver
NEXRAD  next-generation radar
NHC  National Hurricane Center
NI DAQ  National Instruments Data Acquisition Software
NID  National Inventory of Dams
NIOSH  National Institute for Occupational Safety and Health
NLDAS  North American Land Data Assimilation System
nm  nautical miles
NM  New Mexico
NMSS  NRC Office of Nuclear Material Safety and Safeguards
NOAA  National Oceanic and Atmospheric Administration
NOED  notice of enforcement discretion
NPDP  National Performance of Dams Program
NPH  Natural Phenomena Hazards Program (DOE)

NPP  nuclear power plant
NPS  National Park Service
NRC  U.S. Nuclear Regulatory Commission
NRCS  Natural Resources Conservation Service
NRO  NRC Office of New Reactors
NRR  NCEP-NCAR Reanalysis
NRR  NRC Office of Nuclear Reactor Regulation
NSE  Nash-Sutcliffe model efficiency coefficient
NSIAC  Nuclear Strategic Issues Advisory Committee
NSIR  NRC Office of Nuclear Security and Incident Response
NSSL  National Severe Storms Laboratory (NOAA)

NSTC  National Science and Technology Council
NTTF  Near-Term Task Force
NUREG  NRC technical report designation
NUVIA  a subsidiary of Vinci Construction Group, offering expertise in services and technology supporting safety performance in nuclear facilities
NWS  National Weather Service

NY  New York
OAR  NOAA Office of Oceanic and Atmospheric Research
OE  NRC Office of Enforcement
OECD  Organization for Economic Co-operation and Development
OEDO  NRC Office of the Executive Director for Operations
OGC  NRC Office of the General Counsel
OHC  ocean heat content
OK  Oklahoma
OR  Oregon
ORNL  Oak Ridge National Laboratory
OSL  optically stimulated luminescence
OTC  once-through cooling
OWI  Ocean Wind Inc.

OWP  NOAA/NWS Office of Water Prediction
P  present
P/PET  precipitation over PET ratio, aridity
Pa  pascal
PB1  Branch 1 in NRC/R-I/DRP
PBL  planetary boundary layer
PCA  principal component analysis
PCHA  probabilistic coastal hazard assessment
PCMQ  Predictive Capability Maturity Quantification
PCMQBN  Predictive Capability Maturity Quantification by Bayesian Net
PD  performance demand
PDF  probability density function
PDF  performance degradation factor
PDS  partial-duration series
PE3  Pearson Type III distribution
PeakFQ  USGS flood frequency analysis software tool based on Bulletin 17C
PERSIANN-CCS  Precipitation Estimation from Remotely Sensed Information using Artificial Neural Networks, Cloud Classification System (University of California at Irvine Precipitation Algorithm)

PERT program evaluation review technique PET potential evapotranspiration P-ETSS Probabilistic Extra-Tropical Storm Surge Model xxvii

PF paleoflood PF/P-F precipitation frequency PFAR precipitation field area ratio PFHA probabilistic flood hazard assessment PFM potential failure mode PI principal investigator P-I pressure-impulse curve PIF performance influencing factor PILF potentially influential low flood PM project manager PMDA Program Management, Policy Development & Analysis in NRC/RES PMF probable maximum flood PMH probable maximum hurricane PMP probable maximum precipitation PMW passive microwave PN product number PNAS Proceedings of the National Academy of Sciences of the United States of America PNNL Pacific Northwest National Laboratory POANHI Process for Ongoing Assessment of Natural Hazard Information POB Regulatory Policy and Oversight Branch in NRC/NSIR/DPR POR period of record PPRP participatory peer review panel PPS Precipitation Processing System PR Puerto Rico PRA probabilistic risk assessment PRAB Probabilistic Risk Assessment Branch in NRC/RES/DRA PRB Performance and Reliability Branch in NRC/RES/DRA PRISM a gridded dataset developed through a partnership between the NRCS National Water and Climate Center and the PRISM Climate Group at Oregon State University, developers of PRISM (the Parameter-elevation Regressions on Independent Slopes Model)

PRMS USGS Precipitation Runoff Modelling System Prométhée IRSN software based on PROMETHEE, the Preference Ranking Organization METhod for Enrichment Evaluation PRPS Precipitation Retrieval Profiles Scheme xxviii

PS parallel stratiform PSA probabilistic safety assessment, common term for PRA in other countries PSD Physical Sciences Division in NOAA/OAR/ESRL PSF performance shaping factor psf pounds per square foot PSHA probabilistic seismic hazard assessment PSI paleostage indicators PSSHA probabilistic storm surge hazard assessment P-Surge probabilistic tropical cyclone storm surge model PTI project technical integrator PVC polyvinyl chloride Pw/PW precipitable water PWR pressurized-water reactor Q quarter QA quality assurance QC quality control QI Quality Index QPE quantitative precipitation estimates QPF quantitative precipitation forecast R a statistical package R 2.1 NTTF Report Recommendation 2.1 R&D research and development R2 coefficient of determination RAM regional atmospheric model RASP Risk Assessment of Operational Events Handbook RAVEN risk analysis in a virtual environment probabilistic scenario evolution RISMC tool RC reinforced concrete RCP (4.5, 8.5) representative concentration pathways RELAP-7 reactor excursion and leak analysis program transient conditions RISMC tool RENV Environmental Technical Support Branch in NRC/NRO/DLSE REOF rotated empirical orthogonal function RES NRC Office of Nuclear Regulatory Research xxix

RF riverine flooding RFA regional frequency analysis RFC River Forecast Center (NWS)

RG regulatory guide RGB red, green, and blue imagery (NAIP)

RGB-IF red, green, blue, and infrared imagery (NAIP)

RGC regional growth curve RGGIB Regulatory Guidance and Generic Issues Branch in NRC/RES/DE RGS Geosciences and Geotechnical Engineering Branches now in NRC/NRO/DLSE, formerly in NRC/NRO/DSEA RHM Hydrology and Meteorology Branch formerly in NRC/NRO/DSEA RI Rhode Island R-I, R-II, R-III, R-IV NRC Regions I, II, III, IV RIC Regulatory Information Conference, NRC RIDM risk-informed decisionmaking RILIT Risk-Informed Licensing Initiative Team in NRC/NRR/DRA/APLB RISMC risk information safety margin characterization Rmax radius to maximum winds RMB Renewals and Materials Branch in NRC/NMSS/DSFM RMC USACE Risk Management Center RMSD root-mean-square deviation RMSE root mean square error ROM reduce order modeling ROP Reactor Oversight Process RORB-MC an interactive runoff and streamflow routing program RPAC formerly in NRC/NRO/DSEA RRTM Rapid Radiative Transfer Model Code in WRF RRTMS RRTM with GCM application RS response surface RTI an independent, nonprofit institute RV return values SA storage area SACCS South Atlantic Coastal Comprehensive Study SAPHIR Sounding for Probing Vertical Profiles of Humidity xxx

SAPHIRE Systems Analysis Programs for Hands-on Integrated Reliability Evaluations
SBDFA simulation-based dynamic flooding analysis framework
SBO station blackout
SBS simulation-based scaling
SC safety category (ANS 58.16-2014 term)
SC South Carolina
SCAN Soil Climate Analysis Network
SCRAM immediate shutdown of a nuclear reactor
SCS Soil Conservation Service curve number method
SD standard deviation
SDC shutdown cooling
SDP significance determination process
SDR Subcommittee on Disaster Reduction
SECY written issues paper the NRC staff submits to the Commission
SEFM Stochastic Event Flood Model
SER safety evaluation report
SGSEB Structural, Geotechnical and Seismic Engineering Branch in NRC/RES/DE
SHAC-F Structured Hazard Assessment Committee Process for Flooding
SHE Système Hydrologique Européen
SITES model that uses the headcut erodibility index, by USDA-ARS and the University of Kansas, "Earthen/Vegetated Auxiliary Spillway Erosion Prediction for Dams"
SLC sea level change
SLOSH Sea, Lake, and Overland Surges from Hurricanes (NWS model)
SLR sea level rise
SMR small modular reactor
SNOTEL snow telemetry
SNR signal-to-noise ratio
SOH Subcommittee on Hydrology
SOM self-organizing map
SON September, October, November
SOP standard operating procedure
SPAR standardized plant analysis risk
SPAS Storm Precipitation Analysis System (MetStat, Inc.)
SPH smoothed-particle hydrodynamics
SPRA PRA and Severe Accidents Branch in NRC/NRO/DESR (formerly in DSRA)
SRA senior reactor analyst
SRES A2 NARCCAP A2 emission scenario
SRH2D/SRH-2D USBR Sedimentation and River Hydraulics-Two-Dimensional model
SRM staff requirements memorandum
SRP standard review plan
SRR storm recurrence rate
SSAI Science Systems and Applications, Inc.
SSC structure, system, and component
SSHAC Senior Seismic Hazard Assessment Committee
SSM Swedish Radiation Safety Authority (Strålsäkerhetsmyndigheten)
SSMI Special Sensor Microwave Imager
SSMIS Special Sensor Microwave Imager/Sounder
SSPMP site-specific probable maximum precipitation
SST sea surface temperature
SST stochastic simulation technique
SST stochastic storm transposition
SSURGO soil survey geographic database
ST4 or Stage IV precipitation information from multisensor (radar and gauges) precipitation analysis
STEnv severe thunderstorm environment
STM stochastic track method
StormSim stochastic storm simulation system
STSB Technical Specifications Branch in NRC/NRR/DSS
STUK Finland Radiation and Nuclear Safety Authority
STWAVE STEady-state spectral WAVE model
SÚJB Czech Republic State Office for Nuclear Safety
SWAN Simulating WAves Nearshore model
SWE snow-water equivalent
SWL still water level
SWMM EPA Storm Water Management Model
SWT Schaefer-Wallis-Taylor Climate Region Method
TAG EPRI Technical Assessment Guide

TC tropical cyclone
TCI TRMM Combined Instrument
Td daily temperature
TDF transformed extreme value type 1 distribution function (four parameter)
TDI technically defensible interpretations
TELEMAC two-dimensional hydraulic model
TELEMAC 2D a suite of finite element computer programs owned by the Laboratoire National d'Hydraulique et Environnement (LNHE), part of the R&D group of Électricité de France
T-H thermohydraulic
TI technical integration
TI technology innovation project
TL training line
TMI Three Mile Island
TMI TRMM Microwave Imager
TMPA TRMM Multisatellite Precipitation Analysis
TN Tennessee
TOPMODEL two-dimensional distributed watershed model by Keith Beven, Lancaster University
TOVS Television-Infrared Observation Satellite (TIROS) Operational Vertical Sounder
TP-# Test Pit #
TP-29 U.S. Weather Bureau Technical Paper No. 29
TP-40 Technical Paper No. 40, Rainfall Frequency Atlas of the U.S., 1961
TR USACE technical report
TREX two-dimensional runoff, erosion, and export model
TRMM Tropical Rainfall Measuring Mission
TRVW Tennessee River Valley Watershed
TS technical specification
TS trailing stratiform
TSR tropical-storm remnant
TUFLOW two-dimensional hydraulic model
TVA Tennessee Valley Authority
TX Texas
U.S. or US United States
UA uncertainty analysis
UC University of California
UH unit hydrograph
UKF uniform kernel function
UKMET medium-range (3- to 7-day) numerical weather prediction model operated by the United Kingdom METeorological Agency
UL Underwriters Laboratories
UMD University of Maryland
UNR user need request
UQ uncertainty quantification
URMDB Uranium Recovery and Materials Decommissioning Branch in NRC/NMSS/DUWP
USACE U.S. Army Corps of Engineers (see also COE)
USACE-NWD USACE Northwestern Division
USBR U.S. Bureau of Reclamation
USDA U.S. Department of Agriculture
USDA-ARS U.S. Department of Agriculture-Agricultural Research Service
USFWS U.S. Fish and Wildlife Service
USGS United States Geological Survey
UTC coordinated universal time
VA Virginia
VDB validation database
VDMS Validation Data Management System
VDP validation data planning
VIC Variable Infiltration Capacity model
VL-AEP very low annual exceedance probability
W watt
WAK Wakeby distribution
WASH-1400 Reactor Safety Study: An Assessment of Accident Risks in U.S. Commercial Nuclear Power Plants [NUREG-75/014 (WASH-1400)]
WB U.S. Weather Bureau
WBT wet bulb temperature
WEI Weibull distribution
WGEV Working Group on External Events
WGI Working Group I
WI Wisconsin
WinDamC USDA/NRCS model for estimating erosion of earthen embankments and auxiliary spillways of dams
WL water level
WMO World Meteorological Organization
WRB Willamette River Basin
WRF Weather Research and Forecasting model
WRR Water Resources Research (journal)
WSEL/WSL water surface elevation
WSM6 WRF Single-Moment 6-Class Microphysics Scheme
WSP USGS Water Supply Paper
XF external flooding
XFEL external flood equipment list
XFOAL external flood operation action list
XFPRA external flooding PRA
yr year
yrBP years before present
Z Zulu time, equivalent to UTC

INTRODUCTION

Background

The NRC is conducting a multiyear, multi-project Probabilistic Flood Hazard Assessment (PFHA) Research Program. It initiated this research in response to staff recognition of a lack of guidance for conducting PFHAs at nuclear facilities, which required staff and licensees to use highly conservative deterministic methods in regulatory applications. The staff described the objective, research themes, and specific research topics in the Probabilistic Flood Hazard Assessment Research Plan, Version 2014-10-23, provided to the Commission in November 2014 (ADAMS Accession Nos. ML14318A070 and ML14296A442). The PFHA Research Plan was endorsed in a joint user need request by the NRC Office of New Reactors and Office of Nuclear Reactor Regulation (UNR NRO-2015-002, ADAMS Accession No. ML15124A707). This program is designed to support the development of regulatory tools (e.g., regulatory guidance, standard review plans) for permitting new nuclear sites, licensing new nuclear facilities, and overseeing operating facilities. Specific uses of flooding hazard estimates (i.e., flood elevations and associated effects) include flood-resistant design for structures, systems, and components (SSCs) important to safety, as well as advance planning and evaluation of flood protection procedures and mitigation.

The lack of risk-informed guidance with respect to flooding hazards and flood fragility of SSCs constitutes a significant gap in the NRC's risk-informed, performance-based regulatory approach to the assessment of hazards and potential safety consequences for commercial nuclear facilities.

The probabilistic technical basis developed will provide a risk-informed approach for improved guidance and tools to give staff and licensees greater flexibility in evaluating flooding hazards and potential impacts to SSCs in the oversight of operating facilities (e.g., license amendment requests, significance determination processes (SDPs), notices of enforcement discretion (NOEDs)) as well as licensing of new facilities (e.g., early site permit applications, combined license (COL) applications), including proposed small modular reactors (SMRs) and advanced reactors. This methodology will give staff more flexibility in assessing flood hazards at nuclear facilities so the staff will not have to rely on the use of the current deterministic methods, which can be overly conservative in some cases.

The main focus areas of the PFHA Research Program are to (1) leverage available frequency information on flooding hazards at operating nuclear facilities and develop guidance on its use, (2) develop and demonstrate a PFHA framework for flood hazard curve estimation, (3) assess and evaluate application of improved mechanistic and probabilistic modeling techniques for key flood-generating processes and flooding scenarios, (4) assess potential impacts of dynamic and nonstationary processes on flood hazard assessments and flood protection at nuclear facilities, and (5) assess and evaluate methods for quantifying reliability of flood protection and plant response to flooding events. Workshop organizers used these focus areas to develop technical session topics for the workshop.

Workshop Objectives

The Annual PFHA Research Workshops serve multiple objectives: (1) inform and solicit feedback from internal NRC stakeholders, partner Federal agencies, industry, and the public about PFHA research being conducted by the NRC Office of Nuclear Regulatory Research (RES), (2) inform internal and external stakeholders about RES research collaborations with Federal agencies, the Electric Power Research Institute (EPRI), and the French Institute for Radiation Protection and Nuclear Safety (IRSN), and (3) provide a forum for presentation and discussion of notable domestic and international PFHA research activities.

Workshop Scope

Scope of the workshop presentations and discussions included:

  • Current and future climate influences on flooding processes
  • Significant precipitation and flooding events
  • Statistical and mechanistic modeling approaches for precipitation, riverine flooding, and coastal flooding processes
  • Probabilistic flood hazard assessment frameworks
  • Reliability of flood protection and mitigation features and procedures
  • External flooding probabilistic risk assessment

Summary of Proceedings

These proceedings transmit the agenda, abstracts, and slides from presentations and posters presented, and chronicle the question and answer sessions and panel discussions held, at the U.S. Nuclear Regulatory Commission's (NRC's) Annual Probabilistic Flood Hazard Assessment (PFHA) Research Workshops, which take place approximately annually at NRC Headquarters in Rockville, MD. The first four workshops took place as follows:
  • 1st Annual NRC PFHA Research Workshop, October 14-15, 2015
  • 2nd Annual NRC PFHA Research Workshop, January 23-25, 2017 (Agencywide Documents Access and Management System (ADAMS) Accession No. ML17040A626)
  • 3rd Annual NRC PFHA Research Workshop, December 4-5, 2017
  • 4th Annual NRC PFHA Research Workshop, April 30-May 2, 2019

These proceedings include presentation abstracts and slides and a summary of the question and answer sessions. The first workshop was limited to NRC technical staff and management, NRC contractors, and staff from other Federal agencies. The three workshops that followed were public meetings attended by members of the public; NRC technical staff, management, and contractors; and staff from other Federal agencies. Public attendees over the course of the workshops included industry groups, industry members, consultants, independent laboratories, academic institutions, and the press. Members of the public were invited to speak at the workshops. The fourth workshop included more invited speakers from the public than from the NRC and the NRC's contractors.

The proceedings for the second through fourth workshops include all presentation abstracts and slides and submitted posters and panelists' slides. Workshop organizers took notes and audio-recorded the question and answer sessions following each talk, during group panels, and during end-of-day question and answer sessions. Responses are not reproduced here verbatim and were generally from the presenter or co-authors. Descriptions of the panel discussions identify the speaker when possible. Questions were taken orally from attendees, on question cards, and over the telephone.


Related Workshops

An international workshop on PFHA took place on January 29-31, 2013. The workshop was devoted to sharing information on PFHAs for extreme events (i.e., annual exceedance probabilities (AEPs) much less than 2x10^-3 per year) from the Federal community. The NRC issued the proceedings as NUREG/CP-0302, Proceedings of the Workshop on Probabilistic Flood Hazard Assessment (PFHA), in October 2013 (ADAMS Accession No. ML13277A074).


4 FOURTH ANNUAL NRC PROBABILISTIC FLOOD HAZARD ASSESSMENT RESEARCH WORKSHOP

4.1 Introduction

This chapter details the 4th Annual NRC Probabilistic Flood Hazard Assessment (PFHA) Research Workshop held at the U.S. Nuclear Regulatory Commission (NRC) Headquarters in Rockville, MD, on April 30-May 2, 2019. These proceedings include presentation abstracts and slides, selected posters, and a summary of the question and answer sessions and panel discussions. The workshop was a public meeting attended by members of the public; NRC technical staff, management, and contractors; and staff from other Federal agencies.

The workshop began with an introduction from Ray Furstenau, Director, NRC Office of Nuclear Regulatory Research (RES). Following the introduction, RES and Electric Power Research Institute (EPRI) staff described their flooding research programs. Additionally, John Nakoski, RES, provided an overview of international flood hazard efforts underway by the Nuclear Energy Agency, Committee on the Safety of Nuclear Installations (CSNI), Working Group on External Events (WGEV).

Technical sessions followed the introduction session. Most sessions began with an invited keynote speaker, followed by several technical presentations, and concluded with a panel of all speakers, who discussed the session topic in general. At the end of each day, participants provided feedback and asked generic questions about research related to PFHA for nuclear facilities.

4.1.1 Organization of Conference Proceedings

Section 4.2 provides the agenda for this workshop. The agenda is also available in the NRC's Agencywide Documents Access and Management System (ADAMS) at Accession No. ML19156A448.

Section 4.3 presents the proceedings from the workshop, including abstracts, presentation slides, selected posters, and summaries of the question and answer sessions and panel discussions for each technical session.

The summary document of session abstracts for the technical presentations is available at ADAMS Accession No. ML19156A447. The complete workshop presentation package is available at ADAMS Accession No. ML19156A446.

Section 0 summarizes the workshop, and Section 0 lists the workshop attendees, including remote participants.


4.2 Workshop Agenda

4th Annual NRC Probabilistic Flood Hazard Assessment Research Workshop at NRC Headquarters in Rockville, Maryland

AGENDA: TUESDAY, APRIL 30, 2019

09:00-09:10 Welcome & Logistics

Session 1A - Introduction
Session Chair: Meredith Carr, NRC/RES

09:10-09:25 Introduction (1A-1)
Raymond Furstenau*, Director, Office of Nuclear Regulatory Research

09:25-09:45 NRC Flooding Research Program Overview (1A-2)
Joseph Kanney*, Meredith Carr, Tom Aird, Elena Yegorova, Mark Fuhrmann and Jacob Philip, NRC/RES

09:45-10:05 EPRI External Flooding Research Program Overview (1A-3)
Marko Randelovic*, EPRI

10:05-10:20 Nuclear Energy Agency: Committee on the Safety of Nuclear Installations (CSNI): Working Group on External Events (WGEV) Flooding Overview (1A-4)
John Nakoski*, NRC/RES

10:20-10:35 BREAK

Session 1B - Coastal Flooding
Session Chair: Joseph Kanney, NRC/RES

10:35-11:05 KEYNOTE: National Weather Service Storm Surge Ensemble Guidance (1B-1)
Arthur Taylor*, National Weather Service/Office of Science and Technology Integration/Meteorological Development Laboratory

11:05-11:30 Advancements in Probabilistic Storm Surge Models and Uncertainty Quantification Using Gaussian Process Metamodeling (1B-2)
Norberto C. Nadal-Caraballo*, Victor M. Gonzalez and Alexandros Taflanidis, USACE R&D Center, Coastal and Hydraulics Laboratory

11:30-11:55 Probabilistic Flood Hazard Assessment Using the Joint Probability Method for Hurricane Storm Surge (1B-3)
Michael Salisbury^, Atkins North America, Inc.; Marko Randelovic*, EPRI

* denotes presenter, ^ denotes remote presenter

Session 1B - Coastal Flooding, continued
Session Chair: Joseph Kanney, NRC/RES

11:55-12:20 Assessment of Epistemic Uncertainty for Probabilistic Storm Surge Hazard Assessment Using a Logic Tree Approach (1B-4)
Bin Wang*, Daniel C. Stapleton and David M. Leone, GZA GeoEnvironmental, Inc.

12:20-13:00 Coastal Flooding Panel (1B-5)
Arthur Taylor, National Weather Service
Victor Gonzalez, USACE Coastal and Hydraulics Laboratory
Michael Salisbury, Atkins North America, Inc.
Bin Wang, GZA GeoEnvironmental, Inc.
Guest Panelist: Chris Bender, Taylor Engineering

13:00-14:00 LUNCH

Session 1C - Precipitation
Session Chair: Elena Yegorova, NRC/RES

14:00-14:30 KEYNOTE: Satellite Precipitation Estimates, GPM, and Extremes (1C-1)
George J. Huffman*, National Aeronautics and Space Administration/Goddard Space Flight Center (NASA/GSFC)

14:30-14:55 Hurricane Harvey Highlights: Need to Assess the Adequacy of Probable Maximum Precipitation Estimation Methods (1C-2)
Shih-Chieh Kao, Scott T. DeNeale and David B. Watson, ORNL

14:55-15:20 Reanalysis Datasets in Hydrologic Hazards Analysis (1C-3)
Jason Caldwell, USACE, Galveston District. Presented by John England, USACE/RMC

15:20-15:35 BREAK

15:35-16:00 Current Capabilities for Developing Watershed Precipitation-Frequency Relationships and Storm-Related Inputs for Stochastic Flood Modeling for Use in Risk-Informed Decisionmaking (1C-4)
Mel Schaefer*, MGS Engineering Consultants, Inc.

16:00-16:25 Factors Affecting the Development of Precipitation Areal Reduction Factors (1C-5)
Shih-Chieh Kao* and Scott DeNeale, ORNL

16:25-17:05 Precipitation Panel Discussion (1C-6)
George J. Huffman, NASA/GSFC
Shih-Chieh Kao, ORNL
John England, USACE, Risk Management Center
Mel Schaefer, MGS Engineering Consultants
Guest Panelist: Kevin Quinlan, NRC/NRO/DLSE/EXHB

17:05-17:20 Daily Wrap-up

AGENDA: WEDNESDAY, MAY 1, 2019

08:20-08:30 Day 2 Welcome

Session 2A - Riverine Flooding
Session Chairs: Meredith Carr and Mark Fuhrmann, NRC/RES

08:30-09:00 KEYNOTE: Watershed Level Risk Analysis with HEC-WAT (2A-1)
Will Lehmann*, Lea Adams and Chris Dunn, USACE, Institute for Water Resources, Hydrologic Engineering Center (IWR/HEC)

09:00-09:25 Global Sensitivity Analyses Applied to Riverine Flood Modeling (2A-2)
Claire-Marie Duluc*, Vincent Rebour, Vito Bacchi, Lucie Pheulpin and Nathalie Bertrand, Institut de Radioprotection et de Sûreté Nucléaire (IRSN, Radiation Protection and Nuclear Safety Institute)

09:25-09:50 Detection and Attribution of Flood Change Across the United States (2A-3)
Stacey A. Archfield*, Water Mission Area, U.S. Geological Survey - Presentation Cancelled

09:50-10:15 Bulletin 17C: Flood Frequency and Extrapolations for Dams and Nuclear Facilities (2A-4)
John F. England* and Haden Smith, USACE, Risk Management Center; Brian Skahill, USACE R&D Center, Coastal and Hydraulics Laboratory

10:15-10:35 BREAK

Session 2A - Riverine Flooding, continued
Session Chairs: Meredith Carr and Mark Fuhrmann, NRC/RES

10:35-11:00 Riverine Paleoflood Analyses in Risk-Informed Decisionmaking: Improving Hydrologic Loading Input for USACE Dam Safety Evaluations (2A-5)
Keith Kelson*, USACE, Sacramento Dam Safety Production Center; Justin Pearce, USACE, Risk Management Center; and Brian Hall, Dam Safety Modification Mandatory Center of Expertise

11:00-11:25 Improving Flood Frequency Analysis with a Multi-Millennial Record of Extreme Floods on the Tennessee River near Chattanooga, TN (2A-6)
Tess Harden*, Jim O'Connor and Mackenzie Keith, USGS

11:25-12:05 Riverine Flooding Panel Discussion
Will Lehmann, USACE/IWR Hydrologic Engineering Center
Claire-Marie Duluc, IRSN
John F. England, USACE, Risk Management Center
Keith Kelson, USACE, Sacramento Dam Safety Production Center
Tess Harden, U.S. Geological Survey

12:05-13:25 LUNCH

Session 2B - Modeling Frameworks
Session Chair: Thomas Nicholson, NRC/RES

13:25-13:50 Structured Hazard Assessment Committee Process for Flooding (SHAC-F) (2B-1)
Rajiv Prasad^ and Philip Meyer, Pacific Northwest National Laboratory; Kevin Coppersmith, Coppersmith Consulting

13:50-14:15 Overview of the Tennessee Valley Authority (TVA) PFHA Calculation System (2B-2)
Shaun Carney*, RTI International, Water Resource Management Division; Curt Jawdy, Tennessee Valley Authority

14:15-14:40 Development of Risk-Informed Safety Margin Characterization Framework for Flooding of Nuclear Power Plants (2B-3)
M.A. Andre, George Washington University; E. Ryan, Idaho State University, Idaho National Laboratory; Steven Prescott, Idaho National Laboratory; N. Montanari and R. Sampath, Centroid Lab; L. Lin, A. Gupta and N. Dinh, North Carolina State University; and Philippe M. Bardet*, George Washington University

14:40-15:20 Modeling Frameworks Panel Discussion (2B-4)
Rajiv Prasad, Pacific Northwest National Laboratory
Shaun Carney, RTI International
Philippe M. Bardet, George Washington University
Will Lehmann, USACE/IWR, HEC
Guest Panelist: Joseph Kanney, NRC/RES

15:20-15:35 Daily Wrap-up

15:35-16:50 Session 2C - Poster Session
Session Chair: Meredith Carr, NRC/RES

2C-1 Coastal Storm Surge Assessment using Surrogate Modeling Methods
Azin Al Kajbaf and Michelle Bensi, Department of Civil and Environmental Engineering, University of Maryland

2C-2 Methods for Estimating Joint Probabilities of Coincident and Correlated Flooding Mechanisms for Nuclear Power Plant Flood Hazard Assessments
Michelle (Shelby) Bensi and Somayeh Mohammadi, Center for Disaster Resilience, University of Maryland; Scott DeNeale and Shih-Chieh Kao, Environmental Sciences Division, Oak Ridge National Laboratory

2C-3 Modelling Dependence and Coincidence of Flooding Phenomena: Methodology and Simplified Case Study in Le Havre in France
A. Ben Daoued, Sorbonne University-Université de Technologie de Compiègne; Y. Hamdi, Institut de Radioprotection et de Sûreté Nucléaire; Mouhous-Voyneau, Sorbonne University-Université de Technologie de Compiègne; and P. Sergent, Cerema

2C-4 Current State-of-Practice in Dam Risk Assessment
Scott DeNeale, Environmental Sciences Division, Oak Ridge National Laboratory; Greg Baecher, Center for Disaster Resilience, University of Maryland; and Kevin Stewart, Environmental Sciences Division, Oak Ridge National Laboratory

2C-5 Hurricane Harvey Highlights the Challenge of Estimating Probable Maximum Precipitation
Shih-Chieh Kao, Scott T. DeNeale and David B. Watson, Environmental Sciences Division, Oak Ridge National Laboratory

2C-6 Uncertainty and Sensitivity Analysis for Hydraulic Models with Dependent Inputs
Lucie Pheulpin, Vito Bacchi and Nathalie Bertrand, Institut de Radioprotection et de Sûreté Nucléaire, Fontenay-aux-Roses, France

2C-7 Development of Hydrologic Hazard Curves using SEFM for Assessing Hydrologic Risks at Rhinedollar Dam, CA
Bruce Barker, MGS Engineering Consultants, Inc.; Nicole Novembre, Brava Engineering, Inc.; Matthew Muto and John Dong, Southern California Edison; Blake Allen and Katie Ward, MetStat, Inc.; Jason Caldwell, Weather & Water, Inc.

2C-8 Probabilistic Flood Hazard Analysis of Nuclear Power Plant in Korea
Beomjin Kim, Ph.D. Candidate, Kyungpook National University, Korea; Kun-Yeun Han, Professor, Department of Civil Engineering, Kyungpook National University; Minkyu Kim, Principal Researcher, Korea Atomic Energy Institute, Korea

18:00 Group Dinner

AGENDA: THURSDAY, MAY 2, 2019

08:20-08:30 Day 3 Welcome

Session 3A - Climate and Non-stationarity
Session Chair: Joseph Kanney, NRC/RES

08:30-09:00 KEYNOTE: Hydroclimatic Extremes Trends and Projections: A View from the Fourth National Climate Assessment (3A-1)
Kenneth Kunkel*, North Carolina State University

09:00-09:25 Regional Climate Change Projections: Potential Impacts to Nuclear Facilities (3A-2)
L. Ruby Leung*, Rajiv Prasad, Pacific Northwest National Laboratory

09:25-09:50 Role of Climate Change/Variability in the 2017 Atlantic Hurricane Season (3A-3)
Young-Kwon Lim*, NASA/GSFC, Global Modeling and Assimilation Office, Goddard Earth Sciences, Technology, and Research/I.M. Systems Group; Siegfried Schubert and Robin Kovach, NASA/GSFC, Global Modeling and Assimilation Office and Science Systems and Applications, Inc.; Andrea Molod and Steven Pawson, NASA/GSFC, Global Modeling and Assimilation Office

09:50-10:30 Climate Panel Discussion (3A-4)
Kenneth Kunkel, North Carolina State University
L. Ruby Leung, Pacific Northwest National Laboratory
Young-Kwon Lim, NASA/GSFC
Guest Panelist: Kevin Quinlan, NRC/NRO/DLSE/EXHB

10:30-10:50 BREAK

Session 3B - Flood Protection and Plant Response
Session Chair: Thomas Aird, NRC/RES

10:50-11:15 External Flood Seal Risk-Ranking Process (3B-1)
Ray Schneider*, Westinghouse; and Marko Randelovic*, EPRI

11:15-11:40 Results of Performance of Flood-Rated Penetration Seals Tests (3B-2)
William (Mark) Cummings, Fisher Engineering, Inc.

11:40-12:05 Modeling Overtopping Erosion Tests of Zoned Rockfill Embankments (3B-3)
Tony Wahl^, U.S. Bureau of Reclamation

12:05-12:45 Flood Protection and Plant Response Panel Discussion (3B-4)
Ray Schneider, Westinghouse
William (Mark) Cummings, Fisher Engineering, Inc.
Tony Wahl^, U.S. Bureau of Reclamation
Guest Panelist: Jacob Philip, NRC/RES/DRA/DE

12:45-13:45 LUNCH

Session 3C - Towards External Flooding PRA
Session Chair: Joseph Kanney, NRC/RES

13:45-14:10 External Flooding PRA Walkdown Guidance (3C-1)
Andrew Miller*, Jensen Hughes; and Marko Randelovic*, EPRI

14:10-14:35 Updates on the Revision and Expansion of the External Flooding PRA Standard (3C-2)
Michelle (Shelby) Bensi*, University of Maryland

14:35-15:00 Update on ANS 2.8: Probabilistic Evaluation of External Flood Hazards for Nuclear Facilities Working Group Status (3C-3)
Ray Schneider, Westinghouse

15:00-15:25 Qualitative PRA Insights from Operational Events of External Floods and Other Storm-Related Hazards (3C-4)
Nathan Siu, Ian Gifford*, Zeechung (Gary) Wang, Meredith Carr and Joseph Kanney, NRC/RES

15:25-16:05 Towards External Flooding PRA Discussion Panel (3C-5)
Andrew Miller, Jensen Hughes
Michelle (Shelby) Bensi, University of Maryland
Ray Schneider, Westinghouse
Ian Gifford, NRC/RES
Guest Panelist: Suzanne Denis, NRC/RES
Guest Panelist: Jeremy Gaudron, EDF

16:05-16:25 Wrap-up Discussion

4.3 Proceedings

4.3.1 Day 1: Session 1A - Introduction

Session Chair: Meredith Carr, NRC/RES/DRA/FXHAB

There are no abstracts for this introductory session.

4.3.1.1 Introduction. Raymond Furstenau*, Director, Office of Nuclear Regulatory Research (Session 1A-1)

4.3.1.1.1 Presentation

4.3.1.2 NRC Flooding Research Program Overview. Joseph Kanney*, Meredith Carr, Thomas Aird, Elena Yegorova, Mark Fuhrmann and Jacob Philip, NRC/RES (Session 1A-2; ADAMS Accession No. ML19156A449)

4.3.1.2.1 Presentation

4.3.1.3 EPRI External Flooding Research Program Overview. Marko Randelovic*, EPRI (Session 1A-3; ADAMS Accession No. ML19156A450)

4.3.1.3.1 Presentation

4.3.1.4 Nuclear Energy Agency, Committee on the Safety of Nuclear Installations (CSNI): Working Group on External Events (WGEV). John Nakoski*, NRC/RES (Session 1A-4; ADAMS Accession No. ML19156A451)

4.3.1.4.1 Presentation

4.3.2 Day 1: Session 1B - Coastal Flooding

Session Chair: Joseph Kanney, NRC/RES/DRA/FXHAB

4.3.2.1 KEYNOTE: National Weather Service Storm Surge Ensemble Guidance. Arthur Taylor*, National Weather Service/Office of Science and Technology Integration/Meteorological Development Laboratory (Session 1B-1; ADAMS Accession No. ML19156A452)

4.3.2.1.1 Abstract

The National Weather Service (NWS) Meteorological Development Laboratory (MDL) is tasked with developing storm surge guidance to help protect life and property from disastrous storms.

After developing the Sea, Lake, and Overland Surges from Hurricanes (SLOSH) storm surge model in the 1980s, we recognized that storm surge is highly dependent on the location of the winds with regard to the underlying bathymetry, so one of the largest sources of error in storm surge guidance was the quality of the wind forecasts. Thus, to save lives, we had to account for wind uncertainty in a timely manner, even if that meant erring on the side of caution in regard to the storm surge guidance.

Initially, our approach was to develop Maximum Envelopes of Water (MEOWs) and Maximum of MEOWs (MOMs). MEOWs and MOMs are the maximum storm surge attained in each grid cell from a set of hypothetical hurricanes. As such, they approximate the potential inundation for an area from a specific type of hurricane. MEOWs and MOMs form the basis of the hurricane evacuation plans in the United States [7]. The National Storm Surge Hazard map, developed by the National Hurricane Center (NHC), was created by merging the MEOWs and MOMs from each computational domain onto a uniform grid.

[7] Shaffer WA, Jelesnianski CP, Chen J (1989) Hurricane storm surge forecasting. Preprints, 11th Conf. on Probability and Statistics in Atmospheric Sciences, Monterey, CA, Amer. Meteor. Soc., 53-58.
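In code terms, a MEOW is simply a per-grid-cell maximum over the surge fields of the hypothetical storms in one storm class, and a MOM is the per-grid-cell maximum over the MEOWs. A minimal sketch of that construction follows; the storm classes, grid size, and surge values are synthetic stand-ins, not NWS data or code.

```python
import numpy as np

rng = np.random.default_rng(0)
ny, nx = 100, 120  # hypothetical grid dimensions

# Stand-in peak-surge fields (m) for two hypothetical storm classes,
# e.g., Category 3 / NW track at slow vs. fast forward speed.
classes = {
    "cat3_NW_slow": rng.gamma(2.0, 0.8, size=(25, ny, nx)),
    "cat3_NW_fast": rng.gamma(2.0, 0.7, size=(25, ny, nx)),
}

# MEOW: per-grid-cell maximum surge over all storms in one class.
meows = {name: fields.max(axis=0) for name, fields in classes.items()}

# MOM: per-grid-cell maximum over all MEOWs.
mom = np.stack(list(meows.values())).max(axis=0)
```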

In the 2000s, we addressed the issue via the Probabilistic Tropical Cyclone Storm Surge model (P-Surge) [8]. P-Surge is a real-time ensemble based on parameterizing an active storm and permuting it via NHC's 5-year average forecasting errors. MEOWs and MOMs are based on an undefined error space, hypothetical storms, and an unknown time and tide, whereas P-Surge is based on a defined error space, an active storm, and the current time and tide. NWS's storm surge watch and warning is primarily based on P-Surge.

More recently, in the 2010s, we treated wind uncertainty for extratropical and post-tropical storms via the Probabilistic Extra-Tropical Storm Surge model (P-ETSS). Extratropical storms do not lend themselves to parameterization, so instead of permuting through an error space, P-ETSS is based on running a storm surge model with each of the 21 members of the Global Ensemble Forecasting System (GEFS). P-ETSS is intended for storms that are not well represented by a parametric hurricane wind model, such as broader extratropical storms, weaker tropical storms, or post-tropical depressions.
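Conceptually, both products reduce their guidance to probability-weighted exceedance statistics over an ensemble of surge runs: P-Surge weights each permuted storm by the probability of its track, intensity, and size errors, while P-ETSS weights the 21 GEFS-driven runs equally. A minimal sketch of that reduction follows; the member surges, weights, and threshold are hypothetical stand-ins, not NWS data or code.

```python
import numpy as np

rng = np.random.default_rng(1)
n_members = 21
peak_surge = rng.gamma(2.0, 0.7, size=n_members)  # stand-in peak surge per member (m)

# Equal weights mimic the 21 GEFS members of P-ETSS; a P-Surge-style ensemble
# would instead weight each permuted storm by the NHC error distributions.
weights = np.full(n_members, 1.0 / n_members)

threshold = 2.0  # m, hypothetical
p_exceed = weights[peak_surge > threshold].sum()
print(f"P(surge > {threshold} m) ~ {p_exceed:.2f}")
```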

This talk will briefly describe the SLOSH model and then focus on the details of P-Surge and P-ETSS and future plans for development. The audience should learn why NWS uses P-Surge and P-ETSS for the forecasting problem (as opposed to design).

[8] Taylor, AA, & Glahn, B (2008). Probabilistic guidance for hurricane storm surge. In 19th Conference on Probability and Statistics: Vol. 74.

4.3.2.1.2 Presentation

4.3.2.1.3 Questions and Answers

Question:

I'm interested in how you would apply this to the Great Lakes. How would you use this approach for nuclear power plants (NPPs) on Lake Ontario?

Answer:

Unfortunately, we haven't done basins in the Great Lakes. The Great Lakes have a different model. We've done basins for Lake Okeechobee and so we have an Okeechobee result. You would do something very similar for the Great Lakes. You would first need to build a computational basin. With P-Surge, you are not ending up with a hurricane in the Great Lakes region, so it wouldn't be applicable for P-Surge. However, for an extratropical event, you could easily do the P-ETSS runs in the Great Lakes area. The only piece missing is having a basin. We would need funding to build a basin in that area, just as we need to have funding to build some higher resolution basins in the Bering Sea. We are also working with P-ETSS on using the GEFS ensemble members to provide quantitative precipitation forecasts (QPFs). At some point, I want to take the QPFs from the 21 different GEFS ensemble members as river inputs. I think that would be very important for the Great Lakes to complete the river ensembles. I plan to go first to the West Coast for the river work, but the Great Lakes would be very interesting.


4.3.2.2 Advancements in Probabilistic Storm Surge Models and Uncertainty Quantification Using Gaussian Process Metamodeling. Norberto C. Nadal-Caraballo*, Victor M. Gonzalez*, Alexandros Taflanidis, USACE R&D Center, Coastal and Hydraulics Laboratory (Session 1B-2; ADAMS Accession No. ML19156A453)

4.3.2.2.1 Abstract

The application of probabilistic storm surge models for the PFHA of critical infrastructure in coastal zones requires a comprehensive uncertainty quantification framework. The new approach for treating uncertainty in probabilistic storm surge studies consists of quantifying aleatory and epistemic uncertainties associated with the application of data, methods, and models in each step of the analysis. The U.S. Army Engineer Research and Development Center, Coastal and Hydraulics Laboratory (ERDC-CHL), is performing a comprehensive assessment of uncertainties in probabilistic storm surge models in support of the NRC's efforts to develop a framework for probabilistic storm surge hazard assessment for NPPs.

In the case of the joint probability method (JPM), which is the standard probabilistic model used to assess coastal storm hazard in hurricane-prone coastal regions of the United States, the error is incorporated in the integration of the hazard curve. Recent advancements by ERDC-CHL include the computation of spatially varying hydrodynamic modeling errors and the development of Gaussian process metamodels (GPMs) based on existing JPM storm suites. The GPMs emulate the response of hydrodynamic numerical models, such as the Advanced CIRCulation model (ADCIRC), and enable the development of augmented storm suites consisting of tens of thousands to millions of tropical cyclones without introducing significant error. This, in turn, facilitates the evaluation of JPM and Monte Carlo methods and other probabilistic models and approaches for the integration of uncertainty that would otherwise be infeasible due to computational burden constraints.
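As a rough illustration of the metamodeling step, the sketch below fits a generic Gaussian process surrogate to a synthetic storm suite and then evaluates a large augmented suite. It uses scikit-learn as a stand-in and is not the ERDC-CHL implementation; the storm parameters, ranges, and response function are invented for illustration.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(2)
# Hypothetical storm parameters: central pressure deficit (hPa), radius to
# maximum winds (km), forward speed (m/s); the response is peak surge (m).
lo, hi = [30, 20, 2], [100, 80, 12]
X_train = rng.uniform(lo, hi, size=(150, 3))          # JPM storm suite (synthetic)
y_train = (0.05 * X_train[:, 0] - 0.01 * X_train[:, 1]
           + 0.10 * X_train[:, 2] + rng.normal(0, 0.05, 150))  # stand-in surge response

gpm = GaussianProcessRegressor(
    kernel=RBF(length_scale=[20.0, 15.0, 3.0]) + WhiteKernel(1e-3),
    normalize_y=True,
).fit(X_train, y_train)

# Fast emulation of an augmented suite far larger than the training suite.
X_aug = rng.uniform(lo, hi, size=(20_000, 3))
surge_mean, surge_sd = gpm.predict(X_aug, return_std=True)
```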

The treatment of epistemic uncertainty in the present study expands upon the traditional JPM approach of considering this uncertainty as an error term in the JPM integral by estimating the epistemic uncertainty that arises from the selection and application of alternate technically defensible data, methods, and models at each step of the probabilistic storm surge modeling. The approach followed is based on NRC guidance on probabilistic seismic hazard assessments (PSHAs), where the uncertainty is propagated through the use of logic trees. The epistemic uncertainty is then quantified through the development of a family of hazard curves, with individual curves corresponding to evaluated data sources and methods associated with the different applications of probabilistic storm surge models. The range of the epistemic uncertainty is conveyed through the fractiles of the family of storm hazard curves, which are equivalent to nonexceedance confidence limits (e.g., 0.05, 0.16, 0.5 (median), 0.84, and 0.95).


4.3.2.2.2 Presentation


4.3.2.2.3 Questions and Answers

No time was available for questions.

4.3.2.3 Probabilistic Flood Hazard Assessment Using the Joint Probability Method for Hurricane Storm Surge. Michael Salisbury^, Atkins North America, Inc.; and Marko Randelovic*, EPRI (Session 1B-3; ADAMS Accession No. ML19156A454)

4.3.2.3.1 Abstract

Hurricane-induced storm surge can be significant, depending on the circumstances. For nuclear plants to adequately assess the risk posed by storm surge, we need to better understand the expected frequencies of storms that would produce a storm surge that could affect a site. A PFHA can be used to determine the frequency of a storm surge that would be expected to exceed a particular flood height. This report explains how to use the JPM and numerical simulations to obtain an estimate of annual exceedance probabilities as a function of surge height. This method can be used to determine the appropriate design basis and develop the flood hazard curve (the relationship between surge level and frequency) for a probabilistic risk assessment. The report includes discussions on developing and validating storm surge models for project sites, determining storms to be simulated, and identifying storm surge parameters and their associated probabilistic uncertainties. The methodology presented in the report is applicable to any coastal NPP that could be impacted by tropical cyclones (hurricanes). A practical example is presented to help demonstrate the method.
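In outline, the JPM discretizes the hazard integral: the annual exceedance rate at a given surge level is approximated by the storm recurrence rate times a probability-weighted sum over the simulated storm suite, with an error term for unresolved uncertainty. A minimal sketch with hypothetical numbers (not the report's worked example) follows.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)
lam = 0.3                              # hypothetical storm recurrence rate (storms/yr)
n = 200
p_i = rng.dirichlet(np.ones(n))        # discrete probability mass of each synthetic storm
surge_i = rng.gamma(2.0, 1.0, size=n)  # simulated peak surge per storm (m), stand-ins
sigma = 0.25                           # error term standard deviation (m), assumed

eta = np.linspace(0.5, 8.0, 50)        # surge levels for the hazard curve
# P[surge_i + eps > eta] with eps ~ N(0, sigma^2):
exceed = norm.sf((eta[:, None] - surge_i[None, :]) / sigma)
aep = lam * (exceed * p_i[None, :]).sum(axis=1)   # annual exceedance rate vs. eta
```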

4.3.2.3.2 Presentation

4.3.2.3.3 Questions and Answers

Question:

One of the big issues with this topic is that you are in a part of the parameter space that we have not really encountered. As you mentioned, there are a lot of details about setting up the model, like picking friction values and the wind drag model. Did you recommend how to deal with the fact that those issues, and others like erosion, involve conditions we have not really observed at the scale of the storms we are probably looking at here? It is hard to know how one might extrapolate those sorts of things to calibrate the model, as we do not have representative events.

Answer:

That's a good point. Adding to that point, you often have topographic or bathymetric data that represent recent conditions while you are trying to simulate a storm from 1950 or 1960 as one of your validation storms, where obviously coastal landscapes could have looked different, land use looked different, and the data available to develop the wind field are certainly sparse and limited.

That is certainly built into it when you are quantifying the uncertainty in the model. If you have measured data, you are able to quantify the uncertainty. But certainly, it is somewhat of a limitation when you are factoring in erosion and similar considerations dynamically for this type of event. Because when you are looking at a million-year return interval storm event, or types of storms that contribute to that risk level, there are no historical data. At some level, you are taking a leap of faith that the available historical data represent that million-year event. There are some limitations with the abilities of the model, even though there has been a lot of progress in advancing the state of the art.

4.3.2.4 Assessment of Epistemic Uncertainty for Probabilistic Storm Surge Hazard Assessment Using a Logic Tree Approach. Bin Wang*, Daniel C. Stapleton and David M. Leone, GZA (Session 1B-4; ADAMS Accession No. ML19156A455)

4.3.2.4.1 Abstract

Probabilistic storm surge hazard assessment (PSSHA) requires the characterization of the mean storm surge frequency, inclusive of consideration of epistemic uncertainty. The PSSHA inevitably involves significant uncertainty, especially at the low-frequency range that is often applicable for hazard evaluations at critical infrastructures and facilities. Epistemic uncertainty arises due to the use of models to characterize the hazard input, such as data source, probability distribution, storm rate, and storm surge modeling. A logic tree approach uses alternatives at various input nodes and generates a family of hazard curves from which a mean hazard curve can be derived. Branch weights are assigned based on the analyst's confidence that the selected alternatives are the best representation of the hazard input. The logic tree provides a transparent, structured framework for systematic characterization and quantification of potential epistemic uncertainties. The final product is a weighted mean flood hazard curve with confidence intervals (CIs) based on the family of flood hazard curves from the logic tree.

This presentation provides an overview of the logic tree methodology and key engineering considerations, including the role of engineering judgment. In the presented examples, each hazard curve was developed using the JPM with Optimal Sampling and Response Surface method (JPM-OS-RS), which allows a full coverage of the hurricane parameter space with minimum modeling effort and surge interpolation with reasonable accuracy. However, alternative methods (e.g., empirical track method) can also be used for the logic tree. Multiple data sources are discussed, including historical and synthetic hurricane tracks. The presentation also discusses the sensitivity of various nodes and engineering decisions.
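A minimal sketch of the combination step described above, using hypothetical branch weights and curves (real weights come from the engineering judgments discussed in the abstract): the weighted mean hazard curve is a weighted average across the family, and the confidence intervals are weighted fractiles taken at each surge level.

```python
import numpy as np

rng = np.random.default_rng(4)
n_curves, n_levels = 1261, 40               # family size echoes the panel discussion below
weights = rng.dirichlet(np.ones(n_curves))  # product of branch weights along each path
base = np.logspace(-2, -7, n_levels)        # a decreasing stand-in AEP shape
curves = base[None, :] * rng.lognormal(0.0, 0.8, size=(n_curves, 1))

mean_curve = weights @ curves               # weighted mean hazard curve

def weighted_fractile(q):
    """Weighted fractile of the curve family at each surge level."""
    out = np.empty(n_levels)
    for j in range(n_levels):
        order = np.argsort(curves[:, j])
        cw = np.cumsum(weights[order])
        idx = min(np.searchsorted(cw, q), n_curves - 1)
        out[j] = curves[order[idx], j]
    return out

median_curve = weighted_fractile(0.5)
ci_lo, ci_hi = weighted_fractile(0.05), weighted_fractile(0.95)
```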

4.3.2.4.2 Presentation

4.3.2.4.3 Questions and Answers

There were no questions.

4.3.2.5 Coastal Flooding Panel. (Session 1B-5)

Moderator: Joseph Kanney, NRC/RES/DRA/FXHAB

Arthur Taylor, National Weather Service/Office of Science and Technology Integration/Meteorological Development Laboratory
Victor Gonzalez, U.S. Army Corps of Engineers (USACE) Research and Development (R&D) Center, Coastal and Hydraulics Laboratory
Norberto Nadal-Caraballo, USACE R&D Center, Coastal and Hydraulics Laboratory
Michael Salisbury, Atkins North America, Inc.

Bin Wang, GZA GeoEnvironmental, Inc.

Guest Panelist: Chris Bender, Taylor Engineering

Moderator:

For the purposes of the panel discussion, we have invited Chris Bender, a senior engineer at Taylor Engineering, to participate. Chris Bender and Taylor Engineering have supported the NRC on several reviews of storm surge hazard assessment submittals over the years.

Moderator Question:

I'd like to start off with a question for the entire panel related to Bin Wang's presentation. Bin did a good job talking about all the different epistemic uncertainties, presenting the logic tree approach.

One of the places where the logic tree approach has been used extensively is in probabilistic seismic hazard analysis. One of the other tools that's used in probabilistic seismic hazard analysis is the so-called SSHAC approach (the Senior Seismic Hazard Assessment Committee process),

where essentially you try to represent in your analysis not just your individual judgment as an analyst, but the center, body, and range of the technically defensible data, models, and methods.

One way of sketching that out is using the logic tree approach. But a lot of questions or issues remain. For example, how extensive does that logic tree have to be? Who decides on the weights? How do we decide on the weights? So, I would ask any one of the panelists who is familiar with the SSHAC process whether it has a role in something like storm surge hazard analysis.

Norberto Nadal-Caraballo:

I think SSHAC definitely needs to be part of the overall approach. We have seen, just as discussed in the previous presentation, that every path sometimes ends with very different results. So it is not just using one path as we typically do in USACE and FEMA studies (and sometimes there is justification for a single-path approach). If you want to expand the methodologies and to innovate on some of the approaches, including an additional path, then we need to determine how to assign weights, or there has to be a consensus on how to assign either credibility or weights, some formal way to do that. I think that's clearly most important.


Question:

This question is for Chris Bender and follows up on Joe's approach of asking about SSHAC if you are developing a full range of scenarios. We heard from Arthur Taylor, who emphasized that P-Surge is real time. He puts in the time component because of tides and currents, but another perspective is the evolution of the flooding. If an approaching storm causes inundation, the wind fetch may increase dramatically because areas that were not inundated become inundated, especially for a slow-moving hurricane. How important is it to put this into some real-time perspective?

Chris Bender:

One thing that I think was brought up by all the different presentations today is that each study needs to first define its purpose. As Arthur Taylor mentioned, NWS has an hour to develop its estimates, given the latest advisory. Many decisions follow, related to model resolution, number of runs, and the goal of the NWS objective. Other analyses are looking at a probable maximum hazard for design application. Analysts are considering a completely different set of constraints in terms of how many model runs they are going to make, how long it can take, and the resolution that's needed. For the North Atlantic Coast Comprehensive Study (NACCS), USACE had to consider that the model grid went from Virginia all the way up to Maine. Hundreds of miles of coast then required a huge storm set (a thousand storms). One takeaway from all these talks, the work that we have done for the NRC, and other studies is that each study is unique.

When it comes to the real-time component, the individual storm simulation that is conducted in the NACCS study or in Bin Wang's study or in other NRC independent studies is a full, 5-day storm surge simulation for a tropical storm. You can obtain the individual time history of the surge, the duration of the surge, and how the waves are influenced based on the fetch increasing as areas are inundated. The state of the practice in modeling can capture that time evolution. To be real time, the models need to run quickly, based on fresh information such as the latest update on the winds, for the purposes of Arthur Taylor's studies.

It is clear that there are a lot of little decisions along the way that can have some major implications on the storm surge. It would be great to move towards a SSHAC approach so that it is not just Bin Wang's solution or GZA's solution, for example. If there were a consensus of experts, then there would be maybe more of a community-of-practice estimate. But the challenge I see is that each site and each area is going to have different decisions and values that are required. You would need to bring those experts together and have a set of decisions for each specific site. Also, sometimes the hazard level influences the amount of uncertainty. As was mentioned, for a 100-year record, you may have enough data to reduce certain aspects of the uncertainty. But then if you are looking at an annual exceedance probability level of 10^-5, there is a lot more uncertainty just because there is not enough storm history. I think the effort going into having a kind of national comprehensive SSHAC approach would need to involve determining how to divide up all the decisions that are necessary to create that consensus.

Moderator Question:

With respect to Arthur Taylor's emphasis on real time, one of the things that we have observed over the years is that NWS uses SLOSH for forecasting, but it has also made the model available broadly. Consultants and other researchers can use the SLOSH model in ways other than forecasting. In fact, people have used SLOSH to perform many runs fast and then refine those with a higher resolution or higher fidelity model like ADCIRC. Sometimes they were not aware of some of the simplifications that are built into SLOSH to make it run fast. Could you speak to some of those simplifications, for people who are trying to use SLOSH for some of these other applications, like design calculations? What are some of the key simplifications that are built into SLOSH that someone using it should really be aware of and make sure that it's consistent with their application?

Arthur Taylor:

This is going to be a hard one to answer because I'm not entirely sure of all the different things others have done. One of the primary things is bottom stress. For example, the bottom stress in SLOSH is one value instead of a spatially varying one. We are starting to work on modifying that.

There are assumptions made about how you define the grid and create the grid. But if someone is building their own grid, then they are going to introduce those types of errors. SLOSH deals with the first-order terms and does not go into the second-order differentials. That missing component would need to be taken into consideration when running with a higher fidelity model. The tides were more recently added. Historically, the model had only used a high-tide component. With regard to the issue of real time, I have been trying to determine, long term, whether it is better to use a database of runs and come up with a surface or whether there is a need for real-time runs. I think it comes down to the second-order interactions, the combinations of the tide and the surge. Linear addition is something that you could do without having real-time information, but to get the second-order interactions with a tide, you have to be concerned about exactly when you ran it. You can simulate that with enough runs at a spring tide, but that is something that needs to be strongly considered. The other aspects would be the initial water conditions that surround you.

A storm like Hurricane Harvey, with a lot of rain, will have a lot of impact on the rivers. In addition, although SLOSH does not do this yet, if you start getting into the spatially varying bottom frictions, you have to worry about whether the land has been saturated already. That saturation caused by a very large rain event will have impacts. The different choices of models are mainly affected by identifying the problem you are trying to solve. In my case, I had MEOWs and MOMs, which would be a first-order path through the logic tree. I came up with a maximum of all those answers rather than assigning a weight or uncertainty to each. That gave sort of an annual assessment, the worst case that you can plan for. That challenge is different from the challenge of initial real-time response. For a scenario to determine where to place a power plant, you can just run the model at a high tide. But if you are trying to get more precise about how you want to respond to a particular event that is occurring right now, you need to have the real-time information. It depends on which problem you are dealing with. In the NPP realm, siting decisions can be made years in advance. When the storm is here, and you need to respond to it, you need to shut down the power plant and worry about whether to involve responders and whether you need to build a bridge to get to the power plant. Those are different problems.
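As a purely numerical illustration of the linear-addition point above (idealized tide and surge signals; the depth modulation is invented for illustration and is not the SLOSH formulation):

```python
import numpy as np

t = np.linspace(0, 48, 48 * 12)              # hours
tide = 0.7 * np.sin(2 * np.pi * t / 12.42)   # M2-like tide (m), idealized
surge = 2.0 * np.exp(-((t - 24) / 6) ** 2)   # idealized surge pulse (m)

twl_linear = tide + surge                    # linear addition: no interaction

# A coupled run evolves the surge on the tidally varying depth; here a crude,
# purely illustrative depth modulation stands in for that second-order effect.
depth = 10.0                                 # nominal depth (m), assumed
twl_coupled = tide + surge * depth / (depth + tide)

interaction = twl_coupled - twl_linear       # what linear addition misses
```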

Question:

With the notion of logic trees being an inevitable step to really get at all the layers of complexity in a PFHA, it seems to me that the use of surrogate models is also going to become necessary just because of the number of simulations that would otherwise need to be run. What are the panel members' thoughts on the following question: There's everything that occurs after simulations and doesn't depend on your choice of simulation structure. But everything that comes before, like changes in drag terms or changes in tides, adds in all sorts of layers of complexity. Do you envision much more complicated surrogate models being constructed, using a much larger parameter space, or multiple discrete ones being built with different model setups?


Norberto Nadal-Caraballo:

Right now, we are employing Gaussian process metamodels, and the short answer is that we take what comes out of ADCIRC or other models, the wave models, as the right answer, or at least the best answer. I think there needs to be additional research, additional processes evaluating the different sets of surrogate models. We also need to look at the different components and parameters in the meteorological models, even different wind models that generate the wind and pressure fields. Hopefully, there will be new developments in terms of the metamodels or the sorts of models that can take into account those variations because that is something that is certainly lacking. I think we have made several improvements in terms of how to compute the probabilities of the storms, but we need better acknowledgment of the uncertainties arising from the hydrodynamic models and better validation, in many cases. We also need better ways to propagate that uncertainty and integrate the uncertainty into the logic tree approach. A lot of work is still needed in those areas.

Question (Bin Wang):

I just wanted to point out that this session is called coastal flooding, not storm surge flooding. I understand that my talk and most of the presentations that we see in this session are about storm surge. I just wanted to ask the other panelists for your thoughts on how we carry the logic tree process forward. My logic tree basically ended at the storm still water level. However, coastal flooding usually has other components, such as wave runup, loading due to wave actions, and sometimes even overtopping or erosion if it's a beach or dune. Do you think that the Gaussian process modeling, maybe the stochastic part of the method, could actually carry forward from the storm surge into the so-called combined-effect flooding world?

Chris Bender:

I do agree. I see the opportunity for an extension of the logic tree approach, just adding more boxes at the end, because now you do have different estimates of the still water level. Then, similar to the branches before that, there are different options for runup equations and runup coefficients, with uncertainty in those, and for overtopping equations, methods, and approaches.

If you have a suite of water levels for which you've defined the mean and range of values, then that can provide input into moving to the right with additional boxes. For example, we might have three different runup equations. Similar to the storm surge calculation, it depends on how complicated you want to get. You can do runup with a Boussinesq model, or you can do runup with a simple Excel toolkit. There would be different options for what you need. You could end up with a range of runups and associated probabilities. Once again, decisions are required about how much you believe each one of those boxes, but I do see the potential for that.
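The weighted-branch bookkeeping described here can be sketched in a few lines of Python. Everything in this example is an illustrative assumption: the still-water-level branches, the three runup methods, and all weights are hypothetical placeholders, not values from any study.

```python
import numpy as np

# Hypothetical still-water-level (SWL) branches from a surge logic tree,
# each carrying a degree-of-belief weight (assumed values).
swl_branches = [(12.0, 0.25), (14.0, 0.50), (16.0, 0.25)]  # (ft, weight)

# Three hypothetical runup methods, each with its own weight, standing in
# for the "different runup equations" mentioned above.
runup_methods = [
    (lambda swl: 0.15 * swl, 0.4),   # e.g., a simple empirical equation
    (lambda swl: 0.20 * swl, 0.4),   # e.g., an alternative coefficient set
    (lambda swl: 0.25 * swl, 0.2),   # e.g., a more conservative method
]

# Enumerate the extended tree: every SWL branch times every runup branch.
branches = []
for swl, w_swl in swl_branches:
    for runup, w_runup in runup_methods:
        branches.append((swl + runup(swl), w_swl * w_runup))

levels, weights = map(np.array, zip(*branches))
print(f"{len(levels)} total-water-level branches")
print(f"weighted mean TWL: {np.average(levels, weights=weights):.2f} ft")
print(f"P(TWL > 16 ft)   = {weights[levels > 16.0].sum():.3f}")
```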

Follow-up/Elaboration (Bin Wang):

Thanks, Chris. We were aware that we could do additional paths or build additional small logic trees after we ended up with that gigantic storm surge logic tree. However, then we realized that one still water elevation is often associated with numerous storm tracks. The combined effects are very sensitive to these storm tracks. For example, a smaller, slow-moving storm may produce very different wave characteristics versus a large, slower moving storm (or a faster moving storm).

We found that the duration and the wave characteristics often depend on storm tracks, not just the still water elevation that we just defined. So that becomes a challenge.


Arthur Taylor:

With P-SURGE, we are starting to introduce waves. Initially, waves had been included by taking the still water elevation and just estimating the waves on top of that. The Great Lakes wave model is a second-generation wave model, which is faster than the usual third-generation wave models. We are coupling SLOSH with that second-generation wave model, which will allow us to run an ensemble of coupled surge, wave, and tide simulations, each with a probability associated with it. ADCIRC has been taking a similar approach with the SWAN wave model (ADCIRC-SWAN). The challenge there has been that the sample set is not very large, posing sensitivity problems with regard to direction (not having enough samples). Maybe it would help to use the second-generation wave model coupled with SLOSH, once we finish it, or a second-generation wave model coupled with ADCIRC. We are doing waves because we want to move to Puerto Rico and Hawaii, which are wave-dominated areas, so we need a probabilistic surge and wave model.
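The probabilistic ensemble idea described here, where each coupled run carries a probability, reduces to a simple weighted exceedance calculation. A minimal sketch with synthetic peak water levels and assumed member probabilities (none of these numbers come from P-SURGE):

```python
import numpy as np

# Synthetic ensemble: each member is one coupled surge/wave/tide run with
# an associated probability, in the spirit of the ensembles described above.
rng = np.random.default_rng(1)
n = 200
peaks = rng.gamma(shape=4.0, scale=1.2, size=n)   # peak water levels (ft), fake
probs = rng.dirichlet(np.ones(n))                 # member probabilities, sum to 1

# Exceedance probability at a threshold = total probability of the members
# whose peak exceeds that threshold.
for t in (4.0, 8.0, 12.0):
    print(f"P(peak > {t:4.1f} ft) = {probs[peaks > t].sum():.3f}")
```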

Victor Gonzalez:

I want to circle back to the issue of runup and overtopping, in terms of the complexity. The logic tree for the family of hazard curves in our presentation had already been culled: some branches were already taken out. It was a reduced number of combinations, and it still resulted in a family of 1,261 hazard curves. Each additional layer that we add after that, in terms of runup or overtopping, gets multiplied by that number. More research is needed to home in on what is really driving the variation, to get to a simplified approach.

Question (from Phone Line):

Norberto Nadal-Caraballo made a good point about correlations being different if you bin your data rather than grouping them all together. However, in his plots, some of the correlations were based on a very small number of points. How do you keep that balance?

Norberto Nadal-Caraballo:

In the plots shown, we just wanted to illustrate an example. We were using data from the International Best Track Archive for Climate Stewardship (IBTrACS) database. The advantage of that dataset is that it has estimates of Rmax (radius to maximum winds), but it is very limited and goes back only to 1988 or 1990. Once the partitioning was done, we had approximately 40 values for the high-intensity bin in those plots. We just wanted to illustrate the effect of having different correlations based on different bins of intensity. There are other datasets that we can explore, for example, Applied Research Associates (ARA) or Ocean Wind, Inc. (OWI) hurricane wind speed maps. Some other estimates of Rmax have been documented in the literature. To implement these, we need to explore additional datasets that provide a more robust estimate of the correlation between the parameters, specifically the correlation between Rmax and the other parameters.
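The binning effect described in this answer can be reproduced with a toy calculation: the pooled correlation between intensity and Rmax differs from the correlations within intensity bins, and the per-bin sample size controls how trustworthy each estimate is. The data below are synthetic, not IBTrACS values.

```python
import numpy as np

# Synthetic stand-in for an IBTrACS-style parameter table: pressure deficit
# (hPa) and Rmax (nmi).  Values are illustrative, not real storm data.
rng = np.random.default_rng(7)
n = 400
dp = rng.uniform(20, 110, n)                    # intensity (pressure deficit)
rmax = 60 - 0.3 * dp + rng.normal(0, 8, n)      # weak negative dependence

print(f"pooled r = {np.corrcoef(dp, rmax)[0, 1]:+.2f}")
for lo, hi in [(20, 50), (50, 80), (80, 110)]:  # low / mid / high intensity
    m = (dp >= lo) & (dp < hi)
    r = np.corrcoef(dp[m], rmax[m])[0, 1]
    # The per-bin sample size n governs how noisy each correlation is.
    print(f"bin {lo}-{hi} hPa: n = {m.sum()}, r = {r:+.2f}")
```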

Moderator Question:

I have two questions for Michael Salisbury. The first is just a scope question. In the report that you talked about in your presentation, did you look at methods for total water level analysis, runup, and things like that?


The second question is, in your report, do you focus strictly on tropical cyclones? Or did you also look at methods for extratropical cyclone storm surge analysis?

Michael Salisbury:

With respect to the first question, we looked just at storm surge. We did mention additional steps that would be required, particularly for possible design applications, such as calculating wave runup and converting to total water level. We have done that for a site using an extension of the reported methodology, but the report mentioned in our talk focused just on the storm surge still water values.

With respect to the second question, we focused just on tropical cyclones. The parameterized storm event, using combinations of discretized storm parameters, is a tropical storm system.

Extratropical storms are not defined by parameters like that.

Moderator Question:

One quick question for Norberto Nadal-Caraballo or Victor Gonzalez: Do you think that the copula method might be useful for combining extratropicals and tropicals?

Norberto Nadal-Caraballo:

It could probably be a solution. Typically, we assume that there is zero correlation between tropical and extratropical events. But the copula method can be used to incorporate the effects of multiple events occurring at the same time. We know it is unlikely that it happens, but if we can establish or estimate the correlation, then we can use a joint probability model to account for it, similar to accounting for the joint probability between storm surge and river flow due to the occurrence of tropical cyclone rainfall.
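As a sketch of the copula idea in this answer, a Gaussian copula can impose an estimated correlation between a tropical and an extratropical surge contribution while keeping assumed marginal distributions. Everything in this example (the correlation, the Gumbel marginals, the threshold) is an illustrative assumption, not a recommended model.

```python
import numpy as np
from scipy import stats

rho = 0.2                                   # assumed TC/ETC correlation
cov = np.array([[1.0, rho], [rho, 1.0]])

# Gaussian copula: correlated normals mapped to uniforms, then to marginals.
rng = np.random.default_rng(3)
z = rng.multivariate_normal([0.0, 0.0], cov, size=100_000)
u = stats.norm.cdf(z)                       # uniforms carrying the dependence

tc_surge = stats.gumbel_r.ppf(u[:, 0], loc=2.0, scale=0.8)   # tropical (m)
etc_surge = stats.gumbel_r.ppf(u[:, 1], loc=1.0, scale=0.4)  # extratropical (m)

combined = tc_surge + etc_surge
print(f"P(combined > 5 m) = {np.mean(combined > 5.0):.4f}")
```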

Question:

As you move north in the Gulf of Mexico, you can use all the data from all the hurricanes because there seems to be some regional commonality. However, as you go north up the Atlantic coast, if you have a shortage of data for both tropical and extratropical storms, would it be wrong to take information from, for example, North Carolina and extend it to New Jersey or up into New England? If you believe that there is climatic variability or climate change, would it be wrong to take data from that area and project it north?

Arthur Taylor:

My take is that it would be problematic. The map that Bin Wang showed (which was done by the National Hurricane Center) basically combined all the MEOWs and MOMs and stopped showing Category 5 storms above a certain latitude. This is because, climatologically, it is highly unlikely that a Category 5 storm will occur that far north. Perhaps with climate change, that will change.

But currently, that would be problematic. The characteristics of storms in the North Carolina area are not going to be the same as the characteristics of storms in Maine. Storms in Maine will be faster and bigger. They will not be as tight, because they have been affected either by landfall or by various interactions with the atmosphere. North Carolina, however, will experience nice, tight storms coming right off Puerto Rico. The storms will have completely different characteristics.


Chris Bender:

The sea surface temperature provides the energy for those tropical systems. If a climate model showed increases in sea surface temperature up in the New England area in the future, whether that's 50 years, 100 years, or 150 years from now, that increase in temperature could allow for changes to the amount of energy that those tropical systems can receive from the water. Other meteorological aspects could also have an impact, such as winds and the Gulf Stream. Future sea level change estimates have a huge uncertainty band, and I think the climate change effect on future storminess, storm intensity, and storm frequency has an even broader uncertainty range. There is a lot of uncertainty associated with that.

Arthur Taylor:

Additionally, Maine has larger tide ranges than North Carolina. The sea surface temperature would impact how well you would be able to get energy into the storm. I think the tide would also have an impact on how that energy gets transferred from the surface. In a hypothetical situation where you had higher temperatures in Maine, you would still have problems getting that energy because of the churning of the tide.

Victor Gonzalez:

For the NACCS, the North Atlantic was divided into three regions specifically to account for the variation in parameters as you go north. That was appropriate for NACCS, as we had enough data to produce good hazard curves and quantify the variation. But it would matter if we partitioned the hurricanes into low, high, and extreme intensity (instead of low and high intensity); we did not have enough data for that. As you go north, you stop getting hurricanes that belong to the extreme intensity population. In that case, we would rather scale back to just using low and high intensity than try to transport additional data from farther south.

Question (Bin Wang):

To account for spatial variability in the storm recurrence rate, USACE used a Gaussian kernel to weight different tracks differently for each specific location. Do you think there is also a systematic way to weight an intensity parameter spatially, rather than dividing the data source into three distinct regions? In other words, rather than treat the whole dataset as one cohesive set, could we weight the storms using different kernel functions?

Norberto Nadal-Caraballo:

We are currently using a similar approach based on a Gaussian kernel function. To account for the limited data at some locations, we are, at least as a statistical correction, expanding the radius of the capture zone to add storms. We compute statistics based on that sampling set, but we make corrections based on the distance from our location. For example, if we are doing a study in New York, we can expand the radius within which we are capturing storms, but by using the Gaussian kernel function, we can adjust the statistics. We can adjust, for example, the mean based on the Gaussian kernel. It's a way of carrying that same approach beyond the simple calculation of the storm recurrence rate and incorporating some of that knowledge into the computation of the marginal distributions.
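The kernel adjustment described in this answer can be sketched in a few lines: storms captured within an expanded radius are weighted by distance from the site, and the same weights adjust both the storm recurrence rate and the parameter statistics. All numbers below, including the kernel bandwidth, are hypothetical.

```python
import numpy as np

# Hypothetical storms inside an expanded capture radius around a study site.
dist_km = np.array([15, 40, 90, 140, 190, 230])    # track-to-site distances
intensity = np.array([65, 80, 45, 95, 70, 55])     # e.g., pressure deficit (hPa)

sigma_km = 100.0                                    # kernel bandwidth (assumed)
w = np.exp(-0.5 * (dist_km / sigma_km) ** 2)        # Gaussian kernel weights

# Kernel-weighted storm recurrence rate over an assumed record length, plus a
# kernel-adjusted mean of the intensity parameter.
record_years = 70.0
print(f"weighted storm recurrence rate: {w.sum() / record_years:.3f} storms/yr")
print(f"kernel-adjusted mean intensity: {np.average(intensity, weights=w):.1f} hPa")
```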


4.3.3 Day 1: Session 1C - Precipitation

Session Chair: Elena Yegorova, NRC/RES/DRA/FXHAB

4.3.3.1 KEYNOTE: Satellite Precipitation Estimates, GPM, and Extremes.

George J. Huffman*, NASA/GSFC (Session 1C-1; ADAMS Accession No. ML19156A456)

4.3.3.1.1 Abstract

The satellite precipitation retrievals considered high quality come from passive microwave sensors, and they have been available in sufficient quantity to construct fairly high-resolution time sequences of maps for about 20 years. The resulting publicly available, quasi-global, long-term datasets are listed in the tables at http://www.isac.cnr.it/~ipwg/data/datasets.html. In particular, the GPM mission, a joint project of the National Aeronautics and Space Administration (NASA) and the Japan Aerospace Exploration Agency, is working to create a long-term (1998-present) data record that is relatively homogeneous across the many individual precipitation-related satellites that have flown during that time. From the U.S. Science Team, the Integrated Multi-satellitE Retrievals for GPM (IMERG) algorithm intercompares and merges these satellite data together with other inputs to create a best map of global precipitation every half hour with a resolution of 0.1° x 0.1° of latitude/longitude. IMERG processing back to June 2000 should take place in the first 4 months of 2019. The ongoing IMERG processing occurs three times to serve different needs: an Early Run 4 hours after observation time (rapid analysis of flood, landslide, and other high-impact events), a Late Run 14 hours after (a better estimate for crop, drought, and water resource analysis), and a Final Run 3.5 months later (the research-grade product that incorporates the most data, including monthly precipitation gauge information).

One key issue in defining extreme precipitation events is the strong interdecadal variability in extremes for any reasonable index of the term (Fu et al. 2010).9 The intermittent occurrence and non-Gaussian statistics that characterize precipitation make the analysis much more challenging than for temperature. Nonetheless, a recent study (Demirdjian et al. 2018)10 demonstrates reasonable skill at computing average recurrence interval (ARI) maps from 15 years of a predecessor of IMERG (the Tropical Rainfall Measuring Mission (TRMM) Multi-satellite Precipitation Analysis, or TMPA) that are comparable to ARIs (up to about 20 years) computed with 65 years of gauge data over the continental United States. The advantage of the satellite datasets is that they cover much of the globe, providing information that is not otherwise obtainable from surface data in remote, developing, and oceanic regions.

9 Fu, G., N.R. Viney, S.P. Charles, J. Liu, 2010, Long-Term Temporal Variation of Extreme Rainfall Events in Australia: 1910-2006. J. Hydrometeor., 11, 950-965. doi:10.1175/2010JHM1204.1.

10 Demirdjian, L., Y. Zhou, and G.J. Huffman, 2018, Statistical Modeling of Extreme Precipitation with TRMM Data. J. Appl. Meteor. Climatol., 57(1), 15-30. doi:10.1175/JAMC-D-17-0023.1.


4.3.3.1.2 Presentation

4.3.3.1.3 Questions and Answers

Question:

You mentioned that you have subhourly data and then monthly data. When you refer to monthly data, do you mean monthly maximum intensity, or do you mean the monthly mean, that is, a total depth?

Answer:

We add up the half-hourly satellite estimates that we process for the entire calendar month; then we do a combination of the monthly satellite estimate with the monthly gauge data. We think of that as being the best estimate of the month. Then we basically just apply that back to the half-hourly values as a ratio. So, the monthly value is the average for the calendar month and is used in the final product; it is used to adjust every half hour, so the half-hourly values approximately add up to the monthly number.
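A minimal sketch of the ratio adjustment described in this answer, using synthetic half-hourly values and an assumed merged satellite-gauge monthly total: the half-hourly estimates are scaled by a single monthly factor so they add up to the merged value.

```python
import numpy as np

# Synthetic half-hourly satellite estimates for a 30-day month (mm/half hour).
rng = np.random.default_rng(5)
half_hourly = rng.gamma(0.2, 0.5, size=30 * 48)

sat_monthly = half_hourly.sum()             # satellite-only monthly total
merged_monthly = 1.15 * sat_monthly         # assumed satellite+gauge monthly value

# One ratio for the month, applied to every half-hourly value.
adjusted = half_hourly * (merged_monthly / sat_monthly)
print(f"before: {sat_monthly:.1f} mm, after: {adjusted.sum():.1f} mm")
```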

Question:

How does the Climate Prediction Center (CPC) Morphing Technique (CMORPH) work? Are you just advecting the field based upon some other dataset that you have access to?

Answer:

By chance, IMERG uses CMORPH technology, so I know how it works. The original CMORPH simply took one overpass and advected it forward and then took the next overpass and advected it backward, with just a linear average from one time to the next; I call it Lagrangian time interpolation. Since then, it has become more complicated, something Pingping Xie calls a Kalman filter, where you have the backward advected and the forward

advected fields, and then you have the infrared. Each has a correlation structure. You conduct an optimal interpolation among the three, but you do not use the infrared if you are within a half hour of overpass time. It is no longer strictly a Lagrangian time interpolation, because now there is a background infrared field that is keeping the correlations up some. In the original scheme, with just backward and forward, if you are 10 hours apart, you have 5 hours of interpolation one way and 5 hours of interpolation the other way. Using the infrared keeps you closer to reality. GSMaP uses only a forward morphing. In my case, the Early Run is forward only, because we do not have time to get the next overpass into a backward advection. But all the rest use our full Kalman filter.
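A toy sketch of the Lagrangian time interpolation idea described above, with a 1-D array roll standing in for real advection and a linear time weight blending the forward- and backward-advected fields; the infrared blending step is omitted, and all values are synthetic.

```python
import numpy as np

def advect(field, cells):
    # Stand-in for advection along the storm motion vector.
    return np.roll(field, cells)

earlier = np.array([0, 0, 5, 8, 5, 0, 0, 0], float)   # rain band at overpass t0
later = np.array([0, 0, 0, 0, 5, 8, 5, 0], float)     # same band at overpass t1

t0, t1, t = 0.0, 10.0, 4.0          # hours; estimate at t between overpasses
speed_cells_per_hr = 0.3

fwd = advect(earlier, round(speed_cells_per_hr * (t - t0)))   # forward advected
bwd = advect(later, -round(speed_cells_per_hr * (t1 - t)))    # backward advected

w = (t - t0) / (t1 - t0)            # linear weight toward the later overpass
estimate = (1 - w) * fwd + w * bwd
print(np.round(estimate, 1))
```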

Question:

What is the spatial resolution, the pixel or grid cell size?

Answer:

The grid cells are a 10th of a degree.

4.3.3.2 Hurricane Harvey Highlights: Need to Assess the Adequacy of Probable Maximum Precipitation Estimation Methods. Shih-Chieh Kao, Scott T. DeNeale and David B. Watson, ORNL (Session 1C-2)

This presentation was based on Kao, S.-C., S.T. DeNeale, and D.B. Watson (2019), Hurricane Harvey Highlights: Need to Assess the Adequacy of Probable Maximum Precipitation Estimation Methods, J. Hydrol. Eng., 24(4), 05019005, doi:10.1061/(ASCE)HE.1943-5584.0001768.

4.3.3.2.1 Questions and Answers

Question:

Thank you for your presentation. With regard to the slide with the trend in precipitable water, I think that period of record is quite short because, as the previous speaker mentioned, there is the issue of multidecadal variability or persistence, which I think has a strong role to play in that. I look at long-term precipitation records on the West Coast, for example, and I see exactly that same trend. But if I back it up 50 years, to 1900-1950, I see the exact opposite. You have basically relatively high rainfall in the early 1900s; midcentury, relatively low; now relatively high again. The question is, was this just a one-time situation? If you look at the paleo records, you can see that we have had that periodic signal change, from wet to dry, wet to dry, wet to dry, over and over again for a thousand years. The periods tend to be random; there is no significant signal in the data. It is suggested that our climate, certainly regionally, remains in a dry regime for a long period of time, multiple decades, and remains in a wet regime for a long period of time. When looking at model performance, I think that the global climate models (GCMs) do not handle that multidecadal persistence very well yet. I am reluctant to rely too much on trends that come out of modeling efforts, in particular to make long-term extrapolations.


Answer:

Thank you for your good comment. First, I need to clarify that, so far, neither of these two analyses is related to GCMs. One is purely based on observation, and the other is reanalysis data. The reanalysis does involve a model calculation, but we all know reanalysis and how we usually use it as a proxy for observations.

Follow-up Comment:

That still gets to my point about the timeframe. Your timeframe of 60 years is still relatively short when you look at the multidecadal persistent signal that seems to be in our climate record.

Answer:

I agree with that. That actually brings up two questions, assuming that we do believe the data do not cover a long enough period and that there is some uncertainty about your calculation of the value. As civil engineers, when we have something that we believe is more uncertain, we insure against the risk, or ensure safety, by using a safety factor. We will use different ways to account for that. That is basically one aspect: if we do not have good data, what should we do? Second, the current method does not actually account for any of the trends. If the methods had a way to remove the trend, incorporate the analysis, and produce a better precipitable water (PW) estimate, that would be fine. But right now, if the data input to our analysis contains a trend, that violates the prerequisite we have for statistical analysis, namely that the data be independent and identically distributed. So, what should we do?

4.3.3.3 Reanalysis Datasets in Hydrologic Hazards Analysis. Jason Caldwell, USACE, Galveston District. Presented by John England*, USACE/RMC (Session 1C-3; ADAMS Accession No. ML19156A458)

4.3.3.3.1 Abstract

As the spatio-temporal resolution of reanalysis products and the quality of numerical weather prediction models continue to improve, the utility of these data may serve to supplement historical storm analyses. There is great potential to enrich the sample set of temporal and spatial distributions of precipitation for stochastic modeling approaches (e.g., those rooted in extreme value theory/frequency analyses and probable maximum precipitation (PMP)).

In situ measurements of precipitation are now assimilated into some numerical weather models for initialization; therefore, the future may hold opportunity to use model-generated data as a surrogate in low- and mid-level hydrologic hazard analysis (HHA) studies. Comparisons of model forecast and reanalysis representations of Hurricane Harvey to quantitative precipitation estimates (QPE) are provided, focusing on the perspective of dam safety HHA, including (1) evaluation of depth-area statistics, (2) spatial/temporal evolution, and (3) ensemble-based probabilistic measures of confidence. Additional potential applications of these products will be summarized.


4.3.3.3.2 Presentation

This presentation was given by John England, USACE/RMC, using the notes included here with the presentation slides.

This talk is focused on the use of readily available reanalysis products, primarily meteorological in nature, that are available to support hydrologic hazard analyses through model inputs for hydraulics and hydrology (H&H) models and hydrometeorological design criteria. The goal of the presentation is to describe ongoing and potential applications of reanalysis data and to spur the vision toward the future use of these data in on-the-ground applications. Although Jason Caldwell was recently employed at USACE, the presentation's perspective comes from his background across State and Federal agencies and private industry.


Today, we will discuss primarily precipitation and precipitation-related datasets, either raw or postprocessed, and how these are used in stochastic approaches and probabilistic hazard assessments, primarily from the presenter's time at the U.S. Bureau of Reclamation (USBR).

Mel Schaefer's talk later today will elaborate on some of these items. Historically, engineers have been limited to point precipitation data and, for about 15 years, multisensor precipitation estimates (MPEs); however, as time marches on with technological advances, so do the confidence, resolution, and availability of atmospheric data and the reanalyses representative of those data.


Observational data (gauges) come from a variety of sources and have been interpolated using PRISM-based technologies into historical and real-time versions of storm analysis systems.

Reanalyses using these observations can also include numerical weather prediction model output constrained to these observations, which provides an opportunity to harvest additional variables of interest in the hydrometeorological community such as moisture availability and temperatures.

Most recently (top image), satellite-derived 3-hour precipitation estimates have been produced globally and published in the most recent Bulletin of the American Meteorological Society journal.

The growth in meteorological data is expected to continue, and the data to be further refined and improved.

In addition to the historical data shown, forecast data are produced several times daily by weather models. The harvesting of these data is, I would say, in its infancy, but 1-kilometer (km), subhourly precipitation forecasts are at hand. While not necessarily the focus of this conference, the spatial-temporal information available can be archived to represent large events of PMP and stochastic modeling interest (i.e., annual maxima) or used to identify areas of concern in operational decisionmaking processes. Private industry and now the Weather Prediction Center offer these products for flood monitoring.


As before, the reanalysis data continue to improve, with the Climate Forecast System Reanalysis (CFSR) providing hourly forecast fields for the entire 1979-2014 period at approximately 12-km resolution. Others offer 3- and 6-hour outputs, particularly useful for larger watersheds or longer duration events, or for analyzing frequency patterns of multiple events and quasistationary patterns (see central Texas in 2019 or the Midwest floods in the mid-1990s).


At USBR, I initiated the use of Livneh data for creating a Continental United States-wide precipitation-frequency analysis and continued that work in the private sector at MetStat. For perspective, Livneh data are daily data used for the Variable Infiltration Capacity (VIC) models under historical and future climate scenarios. I believe the information from high-resolution weather models can inform the subdaily time steps for short-duration weather events. Furthermore, as Mel Schaefer will describe later, storm typing is the largest advancement in many years in precipitation frequency analysis and can be applied with this coincident time series data to construct reasonable estimates quickly, compared to the massive data quality procedures for more refined precipitation frequency analyses such as NOAA Atlas 14 or site-specific analyses. This method is limited, however, by the interpolation methods, because of issues at the tails and in parameter selection, and for small basins or short durations. With storm typing and generalized extreme value (GEV) convergence (see Mel Schaefer's presentation), better understanding of the tail behavior is underway and will soon allow this process to be further refined.


The days of the smooth isohyetal pattern are gone. Now we have access to MPE and models that show the spatial and temporal variability of storm precipitation.

Discoveries in areal reduction factor (ARF) variability came from these and other studies, showing, for example, how different a thunderstorm ARF is relative to that of a large mid-latitude cyclone, or how ARFs vary with basin orientation relative to the general storm motion. We have a lot to learn, but the quality of meteorological data provides the ability to investigate the simplicity of elliptical storm patterns and dictated temporal patterns and their effects on hydrology, and specifically on annual exceedance probabilities (AEPs). Lastly, from the PMP world, we continue to move toward understanding the moisture maximization problem: is surface dewpoint sufficient? Are we missing important considerations? Is having consistency for computation important (e.g., gauge dewpoint temperature, old map stochastic storm transposition/precipitable water)? These are industrywide research questions we should be asking.


Hourly observations, even in today's world, continue to be sparse; subhourly, even more so. But past studies have shown that the disaggregation of observed data, done manually in a quite time-consuming process of painful spreadsheets, can be reasonably reconstructed from normalized time series from numerical models and reanalyses.


As we've walked through, I'm trying to focus on the items needed for hydrologic models (precipitation, spatial and temporal); now on to temperatures. In southern locations, the snowmelt component is trivial; transition areas are perhaps the most difficult because of rain on snow, where hydrologic risk is a mixed bag, from event-driven to seasonal pack and anomalously warm spring seasons. Shown here are a few examples of seasonal variability in freezing level and time series that could be normalized to an average value for the period for scaling using these distributional properties. In the future, perhaps the categorical snow, freezing rain, sleet, and liquid fields from numerical weather prediction reanalyses might be useful to eliminate the need for lapse rates in modeling efforts, or to establish the correct criteria for a watershed rather than assuming them from location-specific literature. We should not constrain ourselves to single-trajectory answers but rather explore the understanding of moisture for storms and what is truly the best criterion; maybe it is dewpoint, though that is doubted.


I hope by now it is clear that many components can be accessed from these data. Important strides are being made to include these in tools and formats for ease of access by the larger community. Past studies for the NRC, TVA, and others elucidate this fact, and the onus lies with this group to forge a path forward. The next few slides will discuss the potential applications, some already in practice in industry and government. How do we turn these data into meaningful products? Statistics are useful from the government and operator perspectives, but for public communication they continue to be a struggle point. Placing these into a categorical perspective (minor, moderate, major) or some relative amount of PMP or AEP may help. Probabilistic products from the Weather Prediction Center provide a focus for where the largest precipitation amounts should occur based on ensembles: can we use this prestorm to inform SST for H&H models? Will simple ones suffice, and where, and why or why not? How can this be coupled with national efforts like the National Water Model? Literally, the options are limitless, and social scientists will need to be involved to describe impacts. Frameworks are needed for everything from one-dimensional (1-D) to three-dimensional (3-D) coastal compound flood issues. Where might this feed in? This is a presentation of questions, motivating thought and future goals.


Dam safety is discussed from the perspective of USBR. Discuss how dam safety analyses were conducted before. Now, improvements include the Centre for Energy Advancement through Technological Innovation (CEATI)-sponsored work in private industry (MetStat) for Federal Energy Regulatory Commission (FERC) Level 2 support. MetStat images; credit to MGS Engineering.


These two slides are estimates based on plots from the Trinity River study, and the net reduction may be overestimated due to gradients in at-site means near the coast between river basins. The goal is the same: to highlight the differences that may exist and show the importance of storm typing for assessing hazards. The relative rate of tropical cyclone (TC) occurrence is small (about 0.30, or roughly one every third year), which yields a much more gentle tail relative to other types. This will be important in scaling storms for stochastic models, or for the relative magnitude in frequency or PMP space.

Again, just to emphasize the mixed distribution issue, and to describe how you might get the plot at the bottom from reanalysis: it is likely that a similar time series has occurred, albeit at a different magnitude, in the historical past. Can we answer whether this is unique to a tropical storm, or could a more general storm produce the same dimensionless answer? Talk about moisture source and PMP and how close Hurricane Harvey was; do we really know the climate of SST and moisture when using a single value?


How do we get where we are going? Where are we, and where were we? Do we want to stay there? Other presentations today, I believe, show the movement toward the right column. I am personally pleased to be here and to be part of that discussion.


Finally, a few ideas to ponder on operational and storm archival efforts (i.e., the USACE effort mentioned here and the Extreme Storm Events Work Group). And thanks to those at left and to the NRC for their contributions.

4.3.3.3.3 Questions and Answers

Question:

I'm curious about the NOAA Atlas: it basically looks at all storm types, does a frequency analysis, and gives you the highest amount. But when you just look at the tropical-storm remnant (TSR) or another specific storm type, it gives a much smaller value. What is the reason for that?

Answer:

The reason for that is the splitting out specifically by type. In NOAA Atlas 14, the trouble is that when you use all seasons, it combines everything into one, which inflates the value. In terms of mixtures of populations, you essentially get a maximum rather than a mixed-population probability.

The numbers are lower, in some cases, because of the relationship between the higher order L-moments. With L-skewness and L-kurtosis, you sometimes get a flatter curve, and then the tail picks up at the very end. These are comparisons at about 10-3; Jason Caldwell did not carry this down to the probabilities of interest, where those estimates will pick up. By splitting out by type, or sometimes by season, you can do the same thing and may, in some cases, get some lower estimates.

These are estimates, and I do not know whether Jason Caldwell actually included Hurricane Harvey or not.
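The mixed-population point can be made concrete with a small calculation: instead of fitting one all-season curve, per-type exceedance probabilities are combined. The per-type values below are assumed for illustration, and independence among storm types is an assumption.

```python
# Per-type annual exceedance probabilities at one depth (assumed values).
p_mlc, p_mec, p_tsr = 4e-4, 2e-4, 8e-5

# Assuming independence among storm types, the annual probability that ANY
# type produces an exceedance is the complement of no type exceeding:
p_all = 1.0 - (1.0 - p_mlc) * (1.0 - p_mec) * (1.0 - p_tsr)
print(f"combined AEP at this depth: {p_all:.2e}")   # ~6.8e-4, MLC-dominated here
```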

4.3.3.4 Current Capabilities for Developing Watershed Precipitation-Frequency Relationships and Storm-Related Inputs for Stochastic Flood Modeling for Use in Risk-Informed Decisionmaking. Mel Schaefer*, MGS Engineering Consultants, Inc.

(Session 1C-4; ADAMS Accession No. ML19156A460)

4.3.3.4.1 Abstract

Several advancements in watershed precipitation frequency analysis over the past 5 years have increased the practicality and reduced uncertainties in using stochastic flood modeling for estimating the hydrologic characteristics of extreme floods with AEPs of 10-5 and rarer. Stochastic flood modeling is now the preferred method for PFHA for high-consequence dams owned by Federal agencies (e.g., TVA, USACE, USBR), where information and decisions are required about the hydrologic performance of large capital projects under extreme flood-loading conditions.

The introduction of storm typing in 2014 for assembling precipitation maxima datasets for a given storm type for use in regional precipitation-frequency (PF) analysis was a major advancement that provides increased data homogeneity and reduction of uncertainties for estimation of extreme precipitation. This approach allows separate point and watershed PF relationships to be developed for each storm type. This is important because different storm types have different spatial and temporal patterns and storm seasonality that must be matched with the appropriate watershed PF relationship for proper hydrologic modeling. For watersheds subjected to snowmelt and rain-on-snow floods, compatibility of storm-related inputs must also extend to air temperature and freezing level time series and associated temperature lapse rates. The storm-typing approach allows development of separate stochastic flood models and separate hydrologic hazard curves for each storm or flood type and addresses the problem of mixed populations of flood types common to many watersheds and climatic environments.


Stochastic storm generation methods have been developed for synoptic scale mid-latitude cyclone (MLC) and TSR storm types. Stochastic storm transposition methods with resampling of historical convective spatial patterns have been developed for the Mesoscale Storm with Embedded Convection (MEC) storm type. These stochastic storm generation methods, in combination with the Schaefer-Wallis-Taylor (SWT) method of regional PF analysis, provide for the development of the watershed PF relationship for a given storm type.

The current capabilities and practices for development of watershed PF relationships will be presented along with the associated topics of storm typing, SWT method of regional PF analysis, storm seasonality, storm spatial and temporal patterns, stochastic storm generation methods, and characterization of uncertainties.

4.3.3.4.2 Presentation


http://www.mgsengr.com/downloads/RegionalPrecipFrequencyReports_2019.zip

4.3.3.4.3 Questions and Answers

Question:

What are the sources of data for your storm typing? You mentioned the reanalysis?

Answer:

In the early studies, we were using gauge data, basically ground-based data. TVA has a long-term network of gauges; there may be as many as 200 gauges, and over 100 years of record. We set up a network, and then, because of the various areas that we work in, we saw that the Livneh data gave us

a little better coverage and helped us identify whether there is a big footprint or small footprint.

The other things on that list were the pressure fields, basically the surface and the 850- and 500-millibar levels, and the pressure gradients, to get an idea of whether the event was typically associated with an MLC with a pretty well-defined pressure field. However, I am a surface water hydrologist, and while I have had exposure to the meteorology business for a long time, I am not a meteorologist, so I have made mistakes on a couple of things. There are precipitable water fields from the reanalysis and convective available potential energy (CAPE). The meteorologists also look at seasonality to help inform them as to the various types and when they occur.

Follow-up Question:

So, just to recap, do you use rain gauges plus the reanalysis, or purely reanalysis?

Answer:

It depends on the location. When we did the TVA precipitation analysis, we used rain gauges.

When we did the Colorado/New Mexico statewide PMP analysis, we were using Livneh data to help give us the spatial footprint.

Question:

On some of the slides you presented earlier, you had both time history plots and spatial maps. Are those time series plots for a single point location within the watershed or are those spatially averaged?

Answer:

Those are at the storm center; sometimes we present them as basin averages. This particular dataset was given to us by MetStat.

Question (from the Webinar):

This question relates to standard deviations and multiple n-series; is the procedure additive or subtractive with regard to standard deviations?

Answer:

On the variance charts, that is basically a Latin Hypercube approach, where the sources are treated as independent. So, the variances are additive.

4.3.3.5 Factors Affecting the Development of Precipitation Areal Reduction Factors.

Shih-Chieh Kao* and Scott DeNeale, ORNL (Session 1C-5; ADAMS Accession No. ML19156A461)

4.3.3.5.1 Abstract

Probabilistic precipitation estimates (e.g., T-year rainfall) are widely used to inform hydrologic and hydraulic simulation, urban planning, critical infrastructure protection, and flood risk mitigation.

Such estimates are quantified through precipitation frequency analysis, such as those provided in the NOAA Atlas. Nevertheless, many precipitation frequency products (such as NOAA Atlas 14) only provide point precipitation estimates. For watershed-scale applications, further adjustment through ARFs is needed to establish spatially representative probabilistic precipitation estimates at a large-area size.


Compared to the modern precipitation frequency products, the progress of ARF development in the United States is lagging. Technical Paper No. 29 (TP-29) ARFs published in the 1950s are still widely used in practice. In addition to being based on limited data, TP-29 ARFs do not vary with geographic location, seasonality, or return period and are only provided for area sizes up to 400 square miles. To support the development and implementation of PFHA for U.S. NPPs, clearly the values of ARFs should be updated based on the most recent data, methods, and suitable models.

To help improve our understanding of ARFs, we conducted a comprehensive study to explore factors that should be considered during ARF development. We started by identifying multiple gauge- and radar-based precipitation data products that can be used for ARF development. We then conducted a literature review to summarize recent ARF methods that may address the deficiencies in the conventional approach. Using a watershed-based annual maximum precipitation searching approach, we demonstrated how these factors may quantitatively affect the values of ARFs for three hydrologic regions in the United States. Our results suggest that ARFs could vary widely by return periods, data sources, methods, and geographical regions; thus, there is a need to determine site-specific ARF for unbiased flood estimates. This study will provide a technical basis to help develop ARF guidance for NPP-PFHA applications. The study also demonstrates the values of modern, gridded precipitation products for computing ARFs and offers a framework for ARF evaluation with different datasets and methods.
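A sketch of one common fixed-area ARF construction on a gridded product, using synthetic daily rainfall: areal-mean annual maxima are compared to at-site annual maxima. Real studies compare areal and point estimates at a fixed return period; the ratio of mean annual maxima here is a deliberate simplification for illustration.

```python
import numpy as np

# Synthetic daily rainfall (mm) on an 8x8 grid over 30 years.
rng = np.random.default_rng(11)
years, days, ny, nx = 30, 365, 8, 8
daily = rng.gamma(0.3, 8.0, size=(years, days, ny, nx))

point_annmax = daily[:, :, ny // 2, nx // 2].max(axis=1)   # at-site annual maxima
areal_annmax = daily.mean(axis=(2, 3)).max(axis=1)         # annual max of areal means

# ARF < 1 by construction: the areal mean smooths out point extremes.
arf = areal_annmax.mean() / point_annmax.mean()
print(f"ARF for the full grid area: {arf:.2f}")
```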

4.3.3.5.2 Presentation

4.3.3.5.3 Questions and Answers

There were no questions.

4.3.3.6 Precipitation Panel Discussion. (Session 1C-6)

Moderator: Elena Yegorova, NRC/RES/DRA/FXHAB

George J. Huffman, NASA/Goddard Space Flight Center (GSFC)

Shih-Chieh Kao, Oak Ridge National Laboratory (ORNL)

John England, USACE, Risk Management Center

Mel Schaefer, MGS Engineering Consultants

Guest Panelist: Kevin Quinlan, NRC/Office of New Reactors (NRO)

Moderator:

To begin, do the panel members have any questions for each other? Everyone is from very different backgrounds: we have satellite scientists, some from PMP work, and an NRC staff member. Do you have any questions?

Moderator:

Shih-Chieh Kao ended his presentation with a very good point. We are very much limited by data availability, both spatially and temporally. How do we deal with performing these estimates of AEPs for very low frequency events? I think this has been a challenge for all of us for many years and decades. How do we deal with this?

John England:

Mel Schaefer referred to trading space for time. In terms of storm typing, I look to our meteorology staff, since I am the hydrologist, to imagine what the ingredients are that cause really big extreme rainfalls, which cause the big floods, at least for the dam/levee safety program that I work for at USACE, which is still about floods. Those ingredients, and scaling them up, are some of the basics behind PMP. We may not be using them deterministically, but putting those ingredients into models, whether through space for time or storm transposition, and getting physical insights from the numerical models is the key pathway toward that, as well as synoptic-scale observations and looking at anomalies.

Mel Schaefer:

The term space for time is used quite a bit; some have a good feel for it, while others do not. I think the easiest way to think about it is that in most minds, when you first glance at the problem, you note that we have 50 or 60 years of record. So how do we get to one in 100, 1,000, or 10,000? The slides for our work for TVA show that we are using over 1,000 stations, over 50,000 station-years of record. We're doing not only the state of Tennessee, but the five surrounding states. We're looking at a very massive area that we're taking data from. When you actually look at the datasets, you might have 50 or 60 years of record. In the case of TVA, we had a number of stations, maybe as many as 100, with over 100 years of record. But still, what you're trying to do is get out to one in 1,000 or one in 10,000. The reality is that PF, when you do the point analysis, is obviously based at that particular point, while these storms are happening spatially, somewhat randomly throughout that area. The easiest place to see it is with

thunderstorms. As an example, most people here would agree that weather forecasters have a hard time predicting thunderstorms out a couple of days. If you ask what is going to happen 3 weeks from now in a thunderstorm at a city 60 miles away, you do not have a chance; they are independent events. If you change your perspective to thinking about a thunderstorm, a local-scale thunderstorm or even a mesoscale event with embedded convection, each one of those represents a unique observation or realization of how these things take place. When you look at a dataset like the TVA dataset, you might have 80 or 90 or 100 events in a single year at different locations, randomly. Each one is treated as an independent event because, from a statistical standpoint, we cannot predict what will happen 2 or 3 weeks from now over these very large areas, and the result is an independent equivalent record length (ERL). When you compute the ERL and consider such things as storm dates, going through the dataset and looking at annual maximum events on those dates, you find that Tennessee had something like an ERL of 4,000 or 5,000. In other words, there were roughly 5,000 separate storm dates over a 70-year period on which these thunderstorms happened. So again, if you change your perspective and are not interested in this location but are interested in the thunderstorm phenomenon and its characteristics, you now have 5,000 realizations of what that looks like. If that is true, and we have a way to normalize it by standardizing by the mean at a given location, in an index flood-type approach, we now have a pretty large dataset to estimate things such as the upper moments, like the coefficient of L-variation, and probability distributions, and so forth. That becomes more problematic as we get down to the really large-scale events because we have fewer of them in a given year; for an MLC or tropical storm, we may only have a couple of them in the Tennessee Valley area. Formerly, we just looked at correlation structures. However, with the correlation approach, the ERL can be distorted by drought effects: if there is a drought affecting a number of stations in a year with an annual maximum, it increases the correlation when, in fact, that has nothing to do with the big storms in which we are interested. Also, in some years you get a little more from randomness, combinations where more stations in that particular area have a larger event. So, the correlation approach typically used in the past is not a good way to obtain the ERL. Swapping space for time is big; go to storm typing to get information on the homogeneity in the region. Then, when we start to get to the spatial and temporal patterns, we use storm transposition to move a storm from one location to another. Again, for all of these things, we are trying to get big samples to reduce sampling variability and get more realizations of what these things look like.
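The index-flood normalization described in this answer can be sketched with synthetic station data: annual maxima are standardized by each station's at-site mean and pooled into one large dimensionless sample. Real applications fit a regional distribution via L-moments rather than reading quantiles empirically; the empirical quantile below only shows the effect of the enlarged sample.

```python
import numpy as np

# Synthetic annual maxima at many stations with differing at-site means.
rng = np.random.default_rng(2)
n_stations, n_years = 50, 60
at_site_mean = rng.uniform(40, 120, size=n_stations)            # mm, site index
maxima = rng.gumbel(0.8, 0.25, size=(n_stations, n_years)) * at_site_mean[:, None]

# Standardize each station by its own mean, then pool all station-years.
normalized = maxima / maxima.mean(axis=1, keepdims=True)
pooled = normalized.ravel()

print(f"pooled sample size: {pooled.size}")                     # 3,000 station-years
print(f"empirical 0.999 growth factor: {np.quantile(pooled, 0.999):.2f}")
```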

Moderator Question:

As far as storm transposition goes, how do you treat that in your analysis or situation?

Shih-Chieh Kao:

I additionally want to say that using numerical weather forecasting models, as John England mentioned, has good potential as well. If you think that means we should use the rainfall depths that numerical weather forecasting models give you, it is quite the opposite; we know they are not mature yet. But many of the features, like storm structure or the reaction to topography, can be resolved quite nicely in numerical forecasting models with some initial condition perturbation. I think certainly that can be done as well.

John England:

I want to add a comment for those who aren't aware. Mel Schaefer said in his presentation that sometimes the research on the practical aspects shows up in conference papers. This is the case for the Colorado/New Mexico extreme precipitation study. If you have gone to the Association of

Dam Safety Officials Workshop, you may be aware of some of the work. The dam safety officials for Colorado and New Mexico put together a team to answer three questions in three different areas. I served on a board of consultants for one part. One task was for the NOAA Earth System Research Laboratory (ESRL), run by Robin Webb in Boulder; their team looked at bringing HRRR (the NOAA High-Resolution Rapid Refresh model) into the analysis. Task 1 was to do PMP across the two States. Mel Schaefer described Task 2, on precipitation frequency, a little bit. Task 3 was the NOAA part; that is, how do we bring HRRR into the conversation, describing how you can get thunderstorm triggering at the very high elevations in Colorado and New Mexico and look at those spatial and temporal aspects. It was their attempt at the time to bring numerical models into part of the picture.

Mel Schaefer:

As a follow-on to that, we actually used the NOAA HRRR model and simulations that they have done on spatial patterns as part of our resampling approach for stochastic storm transposition.

Storms that we used were a combination of actual storms that had historically occurred, for which we had a combination of gauge data and radar data. Some of the spatial patterns came from the HRRR model, so the set was augmented. We used the Weather Research and Forecasting (WRF) model to try to recreate some historical events for which there were questions about the accuracy of some of the point measurements. One interesting aspect of PMP is that the Smethport Storm estimate was driven by point precipitation measurements: billions and billions of dollars have been spent on dams, with regard to spillways and so forth, based on precipitation collected in a pickle jar. Storms in the Cherry Creek area of Colorado were based on what was caught in a horse trough. One of the storms was the Rattlesnake Storm, and there were concerns about it: a rancher measured a certain amount, and researchers were trying to help verify the validity of it. The WRF model is also something that has been used, and I expect it will continue to be used in the future.

George Huffman:

The traditional way of looking at precipitation is in an Eulerian framework: there is a physical location (referred to as a station), and you ask how much precipitation occurred and how much precipitation was possible. But thinking about the storm in a Lagrangian sense is starting to be reflected within the field; you can hear it in the discussion on this panel. But just to be really explicit, the storms don't care where they are, except with regard to topographic forcing. Lacking the topographic forcing, if you think about an event like Hurricane Harvey, it wasn't necessarily the most extraordinary rainfall rates. The issue was that the rates happened in the same place. If you get off the Eulerian framework and think more about what the systems are doing: what's the probable maximum of the system? There's a separate question of whether, by chance, the rates will line up in the same place. With regard to some of the recent flash flood events near here, there was training, so the rates were not that extreme, but they happened in the same place. The question is, can we get to the point where we are looking at system rates and then apply them spatially and in limiting cases? How long did they last in the same place because of either training or redevelopment or something else?

Moderator:

I agree with you. Also, antecedent moisture conditions are very important. I think, personally and professionally, that numerical weather models are the way of the future, because you can have all of those factors in your model: soil moisture, some of the physically based atmospheric processes going on, and also the ingestion of some of the reanalysis data. So honestly,

numerical weather models are the way to go. We had a project that ended last year working on that, and I hope we will see more of that work going forward.

Question:

This is a question for Mel Schaefer about storm typing. In your presentation, you gave some of the ingredients: the things you are looking at, the pressure gradients, and the different isobars. What happens once you have all that? You mentioned a manual process for training, but ultimately, there is an automated system. I assume it is some sort of regression-based algorithm, such as logistic regression or multiple linear regression?

Mel Schaefer:

This is a topic for the meteorologists. However, when the NRC sends out the slides, I will include Web links to a lot of the different technical memoranda and reports that contain specific information about those kinds of aspects. But a lot of the storm typing is threshold based. As an example, a certain areal coverage of the storm sets a threshold for whether it is synoptic scale, mesoscale, or local scale. Certain levels of CAPE identify whether there was really enough convection that particular day to suggest that it was a convective event or that there was embedded convection inside of the event. So, a lot of it is threshold based.

While I do not remember the pressure gradient values, a certain level of pressure gradient, suggestive of a synoptic system moving through, was used as a threshold that also helped indicate a storm type. And so, there was a logic tree. A diagram in the back of the TVA report, for which I will give a Web link,11 shows the logic and the thresholds that were used to come up with that system.
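A schematic of such a threshold-based typing rule follows. The structure, variable names, and every threshold are placeholders, not the TVA criteria; the report linked in the footnote documents the actual logic.

```python
# Schematic threshold-based storm typing; all thresholds are placeholders.
def storm_type(areal_coverage_km2, cape_j_per_kg, pressure_gradient_hpa):
    # Large footprint plus a strong pressure gradient suggests a synoptic system.
    if areal_coverage_km2 > 100_000 and pressure_gradient_hpa > 8.0:
        return "MLC"             # synoptic-scale mid-latitude cyclone
    # Mid-size footprint with substantial CAPE suggests embedded convection.
    if areal_coverage_km2 > 10_000 and cape_j_per_kg > 1000.0:
        return "MEC"             # mesoscale storm with embedded convection
    if cape_j_per_kg > 1500.0:
        return "local convective"
    return "unclassified"

print(storm_type(150_000, 400.0, 10.0))   # -> MLC
print(storm_type(30_000, 1800.0, 3.0))    # -> MEC
print(storm_type(2_000, 2000.0, 1.0))     # -> local convective
```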

Moderator:

A lot of groups are working with huge datasets and doing all these analyses for the same times and types of storms. When you are working on your projects, how do you deal with the computational difficulty of working with these huge meteorological datasets while still preserving the accuracy and the temporal and spatial resolution of the data?

John England:

USACE-specific projects for the dam and levee safety program are triggered by the type of decision being made. We have a hierarchy, starting from a very simple screening level, called periodic assessments, for the dam safety program. Those just grab existing information, so the short answer is none of the above; you can use TP-29, for example, even though we don't agree in the field on TP-29's application and think we could do better. The next level is initial evaluation, where we are trying to make a decision on some major dams. There we can go into something like storm typing; we can probably share the Trinity River study report. Another consideration is the team's ability to handle some of the PF output. Getting into the hydrologic models, we tried to streamline, for that particular study, the product the contractors delivered to USACE. The contractors are managing a bit more on the data side in the analysis. The handoff was the precipitation estimates and the spatial and temporal patterns into the Hydrologic Engineering Center Watershed Analysis Tool (HEC-WAT), which we will hear a little bit about tomorrow. There are

some things in between, such as research projects, that indicate we will make an advance to get a better ARF (hopefully working with ORNL).

11 http://www.mgsengr.com/downloads/RegionalPrecipFrequencyReports_2019.zip

Mel Schaefer:

The big datasets are mostly used during the analysis stage. On the hydrological side, we have consolidated everything down to much more manageable pieces for the modeling. As an aside, how do you get out to 10-7 or 10-8 type of things, just from the standpoint of sampling? One of the things that we use is the total probability theorem, and George Kuczera from Australia put together a procedure whereby we can use stratified sampling across the watershed precipitation range and then sample the other inputs. To get to 10-8, we might need a sample run of 10,000. So, we might be computing 10,000 floods, with a stratified sample of precipitation, and then run Monte Carlo on all of the other inputs that go with it. In some cases, we'll use a Latin Hypercube approach, and in other cases just straight Monte Carlo.
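A toy sketch of the stratified-sampling idea (total probability theorem) described here: precipitation is stratified into AEP bins, a simple runoff model is run with Monte Carlo sampling of the other inputs, and the bin results are probability-weighted. The strata, the depth-AEP mapping, and the runoff model are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(9)

# Hypothetical AEP strata for basin-average precipitation.
edges = np.array([1e-1, 1e-2, 1e-3, 1e-4, 1e-5, 1e-6, 1e-7, 1e-8])
bin_prob = edges[:-1] - edges[1:]              # P(precip falls in each stratum)

def simulate_peak_flow(precip_mm, n):
    """Toy rainfall-runoff model; the other inputs are randomly sampled."""
    loss = rng.uniform(10, 60, n)              # initial loss (mm), sampled
    runoff_coeff = rng.uniform(0.4, 0.9, n)    # sampled (Latin Hypercube in practice)
    return np.maximum(precip_mm - loss, 0.0) * runoff_coeff * 5.0   # fake cms

threshold = 1500.0                             # peak flow of interest (cms)
aep = 0.0
for p_bin, hi, lo in zip(bin_prob, edges[:-1], edges[1:]):
    # Representative precipitation for the stratum (invented depth-AEP mapping).
    precip = 150.0 + 60.0 * -np.log10(np.sqrt(hi * lo))
    flows = simulate_peak_flow(precip, 2000)
    aep += p_bin * np.mean(flows > threshold)  # total probability theorem

print(f"AEP of peak flow > {threshold:.0f} cms: {aep:.2e}")
```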

Shih-Chieh Kao:

First, I want to say that maybe those problems are more challenging because there's more to the application. From the ORNL perspective, the kind of research we are supporting is usually on a national scale and tries to provide national insights. Therefore, we try to use the highest resolution model possible to drive our insight analysis. Today, we try to be as realistic as possible. But the type of work we are doing is not yet for a site-specific application.

John England:

USACE's Angela Durham, in the Northwest Division in Portland, OR, is working on a project in the Columbia River Basin. Soon, I think, USACE will release a public Web page that will be useful for operational folks throughout the division. USACE has processed and used the Web application to deliver frequency-based snow water equivalent grids, for example. USACE is also aiming for PF grids. These could be used in the rainfall-runoff models, trying to simplify a little bit of the translation from the meteorological/atmospheric side to the production side of hazard curves, or flood forecasting and predictions. USACE's goal is to supply this to anyone in the Pacific Northwest for their use in any way they see fit. The aim is flood risk management for the Columbia River Basin.

Question:

George Huffman was talking about looking at systems that move through a watershed. You talked about the Tennessee River Valley and also the Ohio River Valley. But you did not discuss landforms, the role of orographics, or the direction of the storm. You mentioned storm training.

What is the relationship between geomorphology, landform, and storms? When you divide storms into types, how do you go about doing that with regard to landforms in the upper watershed and certain basins and mountainous regions? What about east-facing or north-facing landforms?

George Huffman:

Well, the shorter answer is badly. When I was in graduate school, a paper by Rich Pastorally, an undergraduate student, looked at rainfall over the Berkshires in New England. If you know the Berkshires, you wonder, so what? With the radar at the Massachusetts Institute of Technology, it turns out that you can see the differences between storms where the airflow was from the east, which is upslope of the Berkshires, with the topography sloping into the Connecticut River Valley, and storms where the flow was from the south, going up the river valley. If you were riding a bike, you would notice some elevation change; otherwise, it is minimal. Even the Berkshires experience orographic precipitation. In terms of the satellite estimates, it is really a zero-order problem that is a current area of research. In terms of the gauge analysis, models like PRISM try to do this. When you start down this path, meteorologists always end up doing little toy models. We should be doing a real model, except real models have other problems because you must deal with microphysics. This is current research that we have not figured out yet. One of my colleagues in Japan has done some nice work suggesting that maybe you can get away with characterizing the vertical stability of the atmosphere. If you look in South Asia at the Western Ghats, you have one kind of response to orographic forcing, as opposed to the western mountains along the coast of Myanmar (Burma); it turned out to be the static stability. PRISM is probably as close as we have right now. But the method is climatological, so if the wind is from a different direction, you are back to the same problem with the Berkshires. If the wind is now going up the Connecticut River Valley, you get a different answer. You need a conditional climatology, but no one is doing that, as far as I know.

Mel Schaefer:

Most of the advances on the hydrology side in the last 6 or 7 years have come primarily because we got out of our silo, started talking to meteorologists, tried to incorporate more realistic aspects of how systems actually work, and used combinations of data and information from them. That is where most of the advancements have taken place, and that will continue to show in the future.

Moderator:

One of the challenges I have had as an atmospheric scientist is the definition of the word extreme. We're talking about extremes in the climate modeling space, extremes for dam safety, and extremes for nuclear. There's no definition for extreme, but the term is used in publications.

John England:

I think I may have a slide on this tomorrow when I talk about Bulletin 17C. For normal, large, and extreme, the best illustrative cartoon, at least on the hydrology side, is from our friends in Australia. They have a nice graph on a frequency curve that shows large to extreme and further definitions for floods, primarily for dam and levee safety. So, extremes are usually on the order of 10⁻³ annual probability and smaller, down to 10⁻⁶.

George Huffman:

I would like to take one step into the future, from my perspective. Many satellite estimates these days are made with what are called Bayesian schemes. It's as if you are looking at the distributions, except then they give you the most probable value, which means you will never come up with an estimate of the rare event because you are always given the most probable value. I have encouraged Chris Kummerow, who is in charge of GPROF (the GSFC profiling algorithm), to blow up the system and tell me the spread. Now he actually has a spread distribution, but its meaning is not clear. But we are having a very serious discussion in the satellite community that the next generation, which actually was intended to solve the uncertainty problem, will give an estimate of the distribution and not a single number. Now, for certain classes of users, you still need a single number. But the really interesting question is when do we start to make estimates from satellites of the probability distribution and then propagate that forward, as opposed to giving one number and then trying to make up what the uncertainty must be? I have just laid out 10 years of research, probably. But I think that it has some really interesting features.

The community really cares about what the extremes are. If we can start to talk about the real uncertainty in precipitation estimates, where uncertainty is not just an abstraction: in a given synoptic situation, these events all look the same, but sometimes you get a lot of rain and sometimes you get a little. You can start to tease out what the extremes are, as opposed to just the really high values and the median.
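A toy illustration of the point above, with made-up numbers: a retrieval that reports only the most probable value of its posterior can never emit a rare event, whereas carrying the distribution (or its quantiles) forward preserves the tail.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical posterior for rain rate at one pixel: lognormal (made-up values).
mu, sigma = np.log(5.0), 0.8
posterior = rng.lognormal(mu, sigma, 100_000)

mode = np.exp(mu - sigma**2)            # most probable value of a lognormal
p99 = np.quantile(posterior, 0.99)      # a tail quantile of the same posterior

print(f"most-probable value (all a point retrieval reports): {mode:.1f} mm/h")
print(f"99th percentile of the posterior:                    {p99:.1f} mm/h")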

Kevin Quinlan:

On extremes: I feel that as an agency, the NRC is a little bit all over the place with this, depending on the hazard. I am a meteorologist and only started getting into the precipitation side and the flooding side about 5 years ago, once we started getting into the Near-Term Task Force (NTTF) Recommendation 2.1 flood hazard analysis. I have dealt a lot with wind speeds that are 10⁻⁶ and temperatures that are either the historical extreme or the 100-year return period. So again, we're really all over the place. Hopefully, we are moving to become a bit more standardized. With groups like this, we can come up with something that is a little more consistent across our external hazards.

4.3.4 Day 2 Session 2A - Riverine Flooding

Session Chair: Meredith Carr and Mark Fuhrmann, NRC/RES/DRA/FXHAB

4.3.4.1 KEYNOTE: Watershed Level Risk Analysis with HEC-WAT. Will Lehman*, Lea Adams and Chris Dunn, USACE, Institute for Water Resources, Hydrologic Engineering Center (Session 2A-1; ADAMS Accession No. ML19156A462)

4.3.4.1.1 Presentation

4.3.4.1.2 Questions and Answers

Question:

I was asked to go back to the slides, so people can read them. [Where has HEC-WAT been deployed?]

Answer:

We have deployed this in quite a few different places. We did it on Trinity River, where we ran stratified sampling and were able to estimate the uncertainty of the million-year event, which is kind of absurd.


Question:

I would like to ask if you are communicating this to the public. Imagine that I live in Sacramento and I'm in a public meeting. How would you convey this information to members of the public so they have confidence that yes, in fact, your model is coming close to reality?

Answer:

That's a great question. I must qualify that I am a model developer and not a public communications expert. First, we have to acknowledge what we do not know. Second, I think that we need to incorporate uncertainty into our estimates rather than rely on static estimates. Right now, we say you are in the 100-year floodplain. That, to me, is one thing I can guarantee is a lie.

By describing our inability to estimate perfectly the 100-year floodplain, we are actually improving our ability to communicate to someone. I would communicate our results by representing them as a five-number summary because I think that is a really effective tool to say the likelihood of you getting wet is between zero percent chance and 0.002 percent chance, with the most likely estimate of 0.001 percent chance.
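A minimal sketch of the five-number-summary idea described above; the AEP samples are hypothetical stand-ins for the spread of estimates a model ensemble might produce.

import numpy as np

rng = np.random.default_rng(1)

# Hypothetical: 1,000 equally plausible annual-exceedance-probability (AEP)
# estimates for one location, e.g. from sampling a flood model's uncertain inputs.
aep_samples = rng.lognormal(np.log(1e-5), 0.7, 1000)

# Five-number summary: min, first quartile, median, third quartile, max.
labels = ["min", "Q1", "median", "Q3", "max"]
summary = np.percentile(aep_samples, [0, 25, 50, 75, 100])
for name, value in zip(labels, summary):
    print(f"{name:>6}: {value:.2e} annual chance of flooding")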

4.3.4.2 Global Sensitivity Analyses Applied to Riverine Flood Modeling.

Claire-Marie DuLuc*, Vincent Rebour, Vito Bacchi, Lucie Pheulpin and Nathalie Bertrand, Institut de radioprotection et de sûreté nucléaire (IRSN; Radioprotection and Nuclear Safety Institute) (Session 2A-2; ADAMS Accession No. ML19156A463)

4.3.4.2.1 Presentation

4.3.4.2.2 Questions and Answers

Question:

Would you take some additional steps on the failure mechanisms? Besides overtopping, do you have a framework? Can your framework be flexible so that you can do seepage through the levee as well?

Answer:

It would be very interesting to have a different set of failure modes. But you are limited by what the model can take into account, and we simplified it to overtopping only for this study. With the Hydrologic Engineering Center River Analysis System (HEC-RAS), I think it is possible to model more failure modes because the model is very well developed to take into account different parameters. If we have the knowledge to supply boundary conditions, and that is not an easy point, we should try to extend the study to different types of failure.

Question:

For the uncertainty source quantification, how did you determine the probability density function (PDF) for each parameter?


Answer:

For the moment, it's an expert point of view that defines the bounds of the possible values for the input parameters and also the shape of the distribution itself. We basically use uniform laws, or triangular laws if we think there is a best value. That is the state of practice now.
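A minimal sketch of sampling from such expert-elicited distributions; the parameter names and bounds are illustrative assumptions, not the study's actual inputs.

import numpy as np

rng = np.random.default_rng(2)
n = 10_000

# Expert gives only bounds -> uniform law (e.g., Manning's roughness).
manning_n = rng.uniform(0.025, 0.045, n)

# Expert also gives a best estimate -> triangular law (e.g., levee crest error, m).
crest_error_m = rng.triangular(-0.3, 0.0, 0.3, n)

# Each (manning_n, crest_error_m) pair would then drive one run of the
# hydraulic model in the sensitivity study.
print(manning_n.mean(), crest_error_m.mean())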

Question:

Have you thought about epistemic uncertainty, specifically for the Loire test case? You have pointed out that there are historical areas that have had levee failures. Have you considered randomizing the levee failures in space?

Answer:

Yes, that is a very challenging issue, and our study does not take it into account. There is only the one breach. We have previous studies with different locations of the breach, but we were limited; we had the same parameter value for each breach at the same time, which is not realistic. We have to find something that will take into account this complexity of spatial possibilities for breaches.

4.3.4.3 Detection and Attribution of Flood Change Across the United States.

Stacey A. Archfield*, Water Mission Area, U.S. Geological Survey (Session 2A-3)

This presentation was cancelled due to illness.

4.3.4.4 Bulletin 17C: Flood Frequency and Extrapolations for Dams and Nuclear Facilities.

John F. England* and Haden Smith, USACE, Risk Management Center; Brian Skahill, USACE R&D Center, Coastal and Hydraulics Laboratory (Session 2A-4; ADAMS Accession No. ML19156A464)

4.3.4.4.1 Abstract

Flood frequency guidelines have been published in the United States since 1967 for nationwide flood risk management and flood damage abatement programs. These Guidelines for Determining Flood Flow Frequency have undergone periodic revisions. Bulletin 17C is the latest revision to these flood frequency guidelines. It was published in March 2018 and is available at https://doi.org/10.3133/tm4B5. Bulletin 17C contains statistical procedures using the Expected Moments Algorithm (EMA) and the log Pearson Type III (LP-III) distribution that efficiently and effectively utilize extreme flood data, including historical, paleoflood, and botanical information.

Interval flood data are now included in flood frequency analysis, such as large floods that exceeded some level, floods in a range, and/or floods that were less than some value. The Guidelines include procedures to properly estimate the extreme right-hand tail of the flood frequency distribution by carefully eliminating the influence of small floods or zero values, called potentially influential low floods. An enhanced focus is on collecting at-site and regional data on extreme floods, including expanding the record in time with data from historical and paleoflood sources. Extraordinary floods, those that are the largest magnitude at a site or region and substantially exceed other observations, are of critical importance and are included by judicious use of flow perception thresholds and time frames longer than the number of years at a gaging station. The Guidelines include a new section called Frequency Curve Extrapolation that provides guidance on estimating flood probabilities less than 0.01 AEP, relevant for dams, levees, and nuclear facilities. In these situations, such as estimating AEPs in the range of 10⁻³ to 10⁻⁶, a flexible, three-step approach using multiple lines of evidence is recommended. These three elements are (1) utilize EMA and LP-III with expanded historical, paleoflood, and extraordinary flood data at the site (temporal information expansion), (2) expand and improve regional skew models (spatial information expansion), and (3) expand and include regional independent information, such as from extreme flood rainfall-runoff models within the watershed or other physical and causal information. Information can be combined and weighted using quantile variances or other procedures. An overview of this approach is presented with example flood hazard estimates for dam safety.
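The abstract's note that information can be combined and weighted using quantile variances is, in its simplest form, inverse-variance weighting of independent quantile estimates. A minimal sketch with illustrative numbers:

import numpy as np

# Two independent estimates of the same flood quantile (log10 cfs), e.g. one
# from at-site EMA/LP-III and one from a regional rainfall-runoff analysis.
x1, var1 = 5.10, 0.020    # at-site estimate and its variance (illustrative)
x2, var2 = 5.35, 0.060    # regional estimate and its variance (illustrative)

w1, w2 = 1.0 / var1, 1.0 / var2
x_comb = (w1 * x1 + w2 * x2) / (w1 + w2)    # inverse-variance weighted mean
var_comb = 1.0 / (w1 + w2)                  # variance of the combined estimate

print(f"combined log10 quantile: {x_comb:.3f} (variance {var_comb:.4f})")
print(f"combined discharge: {10**x_comb:,.0f} cfs")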

4.3.4.4.2 Presentation

Acknowledgments: Greg Karlovits (USACE-HEC), precipitation frequency; Keith Kelson (USACE-DSPC), paleoflood data; Angela Duren (USACE-NWD), rainfall-runoff, streamflow hazard curves, many analyses; and David Margo (USACE), reviews, ideas, collaboration, and testing.


For a dam, a series of dams, or a system of dams, assess hydrologic risk for overtopping, spillway-related potential failures, gate misoperation, or other hydrologic failure modes. Examples: spillway stilling basin failure and erosion and failure of the spillway chute at Guajataca Dam, PR, during Hurricane Maria (2017); gates at Lookout Point Dam, Willamette River Basin.


This is the very last section in the main report of Bulletin 17C.


See page 34 and the sections titled "Comparisons of Frequency Curves" and "Weighting of Independent Frequency Estimates."


Here, we utilize a particular type of regional information applied to the watershed of interest: rainfall-runoff modeling with precipitation frequency.

Example of spatial (regional precipitation frequency) and causal (storm-specific population) information.

The four-panel figure at left displays 500-millibar pressure contours, 850-millibar pressure contours, precipitable water, and CAPE, used to classify storm type.

The Grapevine precipitation frequency curve (at right) is very steep, based on MLC regional data and the regional Kappa distribution.


Example concepts: draft work in progress.


4.3.4.4.3 Questions and Answers

Question:

You stated that this flooding event in the upper Midwest was a joint snowfall-rainfall event. As an outsider to this whole process, how do snowfall and snowpack factor into these sometimes pretty catastrophic events in the spring?

Answer:

As shown in the videos on YouTube, the ice jam drove the breaking of these gates. This is a run-of-the-river dam, so there was not really a dam failure that was going to cause loss of life. Yet the people that run this small power dam on a small river were affected. The idea is that you can tailor two things: one is your structure of interest, such as a levee, dam, or nuclear reactor, and the other is the things that might break it. Using the global sensitivity analysis, you can identify the key factors of your structure up front and then look at the factors in the rainfall-runoff model that you should include. In this simulation, one of the bigger factors on volume was actually the snowmelt. We did not put any uncertainty on that, but it was a big factor in the volume of water that filled the reservoir in this particular case. It means spending a bit more time on what is inside your model to determine whether you have included those factors. In a lot of cases we do, and in some cases we don't. It's a case-by-case situation and an evolving learning process. We found that out for this particular dam in the Willamette River Basin in western Oregon. After doing all the modeling, we found a sensitivity we had overlooked: we could have done a better job on the timing of the rainfall, because the decision criteria to open gates and move water out had a lot to do with the assumed temporal pattern. We sampled only a couple of patterns, and that was a key factor. We knew we could have done a better job refining that piece.

Question:

I've observed that the storm surge modeling community fairly routinely uses the JPM (joint probability method). There is an actual error term in there, and the community tries to model the errors, because we know that when you get the really big storm surge we are not really sure of the quality of the physics or the numerical output of our models. Should we be doing this in riverine flooding?

Answer:

Yes; we first want to identify the target in broad terms. Adding error terms will inflate our uncertainty. We are still wrestling with some of these things. But the challenges at 10⁻³, 10⁻⁴, and 10⁻⁵ mean that we want to try everything possible, as I described. That is challenging for a lot of people to even do. We have challenges operationally getting precipitation frequency analyses that are credible estimates we can feel comfortable with. We are spending a bit more time on that. We are also trying to spend more time on data collection on paleofloods. So, adding error terms is good. But we also then have to go backwards and make sure that our simple models are at least somewhat close. Our model is essentially a simple loss-adjusted model that is very similar to the GRADEX method of Électricité de France (EDF). We take a constant loss off of the red curve to get the green curve to put into the analysis, because we are really quite uncomfortable with all these rainfall-runoff dots, the diamonds, which suggest the curve drops farther than the historical and paleoflood information. We are wondering if we are comfortable with this. So, this is in-process work that we can add some features to.
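A minimal sketch of the constant-loss, GRADEX-style transform mentioned above: shift a precipitation frequency curve by a constant loss to get a runoff frequency curve. All numbers are illustrative.

import numpy as np

# Precipitation quantiles at a set of annual exceedance probabilities (made up).
aep = np.array([1e-2, 1e-3, 1e-4, 1e-5, 1e-6])
precip_mm = np.array([150.0, 220.0, 300.0, 390.0, 490.0])
constant_loss_mm = 60.0                      # infiltration/abstraction assumption

runoff_mm = np.maximum(precip_mm - constant_loss_mm, 0.0)
for p, r in zip(aep, runoff_mm):
    print(f"AEP {p:.0e}: runoff depth {r:.0f} mm")

# At rare AEPs the runoff curve parallels the precipitation curve, the GRADEX
# premise: once the watershed is saturated, additional rain becomes runoff.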

Question:

This is probably a bit of a sensitive question. It gets to the influence of the experts, the sensitivity of the results to the experts' inputs. What quality controls do you have in place over the experts? How well calibrated are the experts, and what firsthand experience do they have? The answer is usually none. How do they set out their logic? Is there any guidance in the document in relation to controlling the quality of the expert opinions that are going in? We know that when you put randomness in, you get randomness out.

Answer:

We do not have formal, written guidelines on selecting experts, like the SSHAC process, for this yet. We do use open peer review. We try to go to the literature as some sort of validation of the science and the methods behind it. In addition to formal peer review of the report and analysis, we expect to eventually send it to journal publication. We also perform software validation. You also alluded to the choice of inputs and defining those inputs clearly. This is preliminary work, and we have some good ideas. We have done some very simple things like a triangular distribution here, some normal, triangular, and lognormal; all okay, and they make sense. In choosing those, we want to have a feedback loop. We have independent judgments made by myself and the team. Then we have some reviewers who serve as a re-caliper on the technical parts. We actually may want to corrupt the process with some decisionmakers: do the best technical things we do and then see if it really matters to the decision. Let's get the curve anchored to 10⁻⁵ up here and then do some sensitivity. We pick something at 1 in 1,000. Does it change the answer by an order of magnitude? We will defer the formal use of expert elicitation and Bayesian analysis for now, but at least push forward on a Bayesian route as opposed to a strictly frequentist one. Another coworker of mine will touch on a little bit of this and on the sensitivity of the analysis to the data. These data have an overwhelming influence, rather than the priors you put on the analysis in the first place.

4.3.4.5 Riverine Paleoflood Analyses in Risk-Informed Decisionmaking: Improving Hydrologic Loading Input for USACE Dam Safety Evaluations. Keith Kelson*, USACE, Sacramento Dam Safety Production Center; Justin Pearce, USACE, Risk Management Center; Brian Hall, USACE, Dam Safety Modification Mandatory Center of Expertise (Session 2A-5; ADAMS Accession No. ML19156A465)

4.3.4.5.1 Abstract

USACE dam safety assessments consider potential hazards (loading), system responses (fragility), and associated consequences as part of the risk assessment of its national dam portfolio. Where possible, paleoflood data are incorporated in dam safety evaluations to reduce uncertainties in the hydrologic loading component of the risk assessment. To date, the reaches of interest to the dam safety assessments have focused primarily on riverine paleostage indicators (PSI) and nonexceedance bounds (NEB), rather than established characterization methods using slackwater deposits, cave-deposit stratigraphy, dendrochronology, or others. For sites deemed justified within the risk-informed decisionmaking framework, the geomorphic or hydrologic conditions commonly are not ideal, whether because of sparsely preserved PSI or NEB, channel geometric nonstationarity, temporal hydrologic nonstationarity, processes that shift stage-discharge relationships (e.g., ice jams, debris blockage), or a host of other possible problems. These challenges are addressed during site selection in order to eliminate or minimize uncertainties that could diminish applicability of the results to the risk assessment.

Data collection in riverine settings demands an integrated approach among geomorphic, hydrologic, and hydraulic disciplines. Following initial geomorphic identification of feasible riverine reaches, the peak flood of record (FOR) is defined or estimated, using systematic gauge data, historical observations, and reservoir-inflow calculations. The existing probable maximum flood (PMF) for the site is used or refined, as needed for the risk assessment. Existing hydraulic models are utilized to develop information on the extents and down-valley elevation of both the FOR and PMF, which are then used to constrain geologic and geomorphic field reconnaissance.

Geologic/geomorphic field efforts focus on characterizing flood-related PSI (e.g., high-discharge fluvial terraces, stranded slackwater deposits, biologic features) or noninundation NEB (e.g., undisturbed alluvial fans, well-developed relict soils), and obtaining numerical or reasonable relative age estimates for the PSI and NEB. For the riverine reaches, the down-valley longitudinal profiles of the PSI and NEB are compared to floodwater surface elevations generated from either 1-D or two-dimensional (2-D) HEC-RAS hydraulic modeling of various discharges between the FOR and the PMF. Discharges associated with the PSI and NEB are estimated through comparing the geomorphic field observations, historic hydrologic data, and HEC-RAS hydraulic modeling.

Uncertainties are captured throughout the analysis and explicitly calculated or subjectively estimated for inclusion in flow-frequency calculations using the Bulletin 17C methodology. For analyses using riverine PSI or NEB analyses, primary sources of uncertainty are (1) ranges in estimated ages of PSI and NEB, (2) ranges in roughness and energy gradient for various discharges, (3) water depth and velocity required for sediment deposition (for PSI) or erosion (for NEB), and (4) inherent variability of the water surface and the PSI or NEB geomorphic datum in the down-valley profile. These potential uncertainties are captured via sensitivity analyses or acknowledged within ranges of paleoflood discharges or timing. The best estimates and ranges in discharge and age for each PSI and NEB are included into flow-frequency statistics through use of perception thresholds and flow intervals (ranges in discharge).
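A minimal sketch of how the abstract's perception thresholds and flow intervals might be represented as data; the field names and values are illustrative, mirroring the Bulletin 17C concepts rather than any specific tool's input format.

from dataclasses import dataclass

@dataclass
class FlowInterval:
    year: int                 # estimated year of the paleoflood (negative = BCE)
    q_low_cfs: float          # lowest discharge consistent with the PSI evidence
    q_high_cfs: float         # highest discharge consistent with the PSI evidence

@dataclass
class PerceptionThreshold:
    start_year: int
    end_year: int
    q_threshold_cfs: float    # floods above this would have left evidence (NEB)

# One paleoflood known only as a range, plus a nonexceedance bound stating that
# over roughly 3,400 years no flood exceeded 90,000 cfs (all values made up).
paleo_floods = [FlowInterval(year=-1200, q_low_cfs=30_000, q_high_cfs=80_000)]
thresholds = [PerceptionThreshold(start_year=-1500, end_year=1870,
                                  q_threshold_cfs=90_000)]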

This approach has assisted multiple recent and ongoing USACE analyses for dam safety.

Flow-frequency curves incorporating paleoflood information can be compared to those based solely on systematic and historical information. Often, the paleoflood analysis prompts reevaluation and improvement of the historical record, which extends the effective record length (ERL) regardless of prehistoric data. The analyses have allowed interpretations that shift flow-frequency curves both up and down or have provided a basis for narrowing confidence bands around existing curves. At this time, there appears to be no systematic shift toward greater or lesser frequencies of given extreme discharges when comparing flow-frequency curves using paleoflood data with those using only systematic and historical data. While uncertainties can be significant in paleoflood analyses, conscientious site selection, data collection, and analytical efforts allow uncertainties to be minimized and captured for appropriate inclusion in risk-informed hydrologic loading estimates.


4.3.4.5.2 Presentation

4.3.4.5.3 Questions and Answers

Question:

I'm fascinated by how some people look at paleofloods and evidence as basically a treasure hunt: go find a single piece of evidence and then, from that, make some conclusions. I like what you said about the boulder that rolled down the stream; you could see the evidence of it moving, and so it says something about the energy of the system at the time it was deposited. Thinking about trying to reconstruct the event, the features, and the processes at the time of deposition, it isn't just finding a log; it's the material surrounding the log. Is that highly chaotic material, and therefore a high-energy regime? Or was it a series of very thin sediments, like clay, around it, which would be indicative of a much quieter energy regime? When you do this, do you actually try to reconstruct what happened at the time of the paleo event and understand the processes?

Answer:

Yes, that is what geologists do. As sedimentologists, we look at these test pits and the stratigraphy in these exposures. The grain size characteristics, the bedding, and the laminations all tell us how that deposit got there. In the case of the large concrete boulder, it was not just a large concrete boulder encased in a sand body. It was actually within a bed of other similarly sized boulders that were imbricated; that is a geologic or sedimentologic term indicating transport in a waterway environment. Those different clasts were imbricated. We use little clues like that, as geologists and sedimentologists, to determine that this was a flood deposit. The alternative is there are some places where we dig these pits, and they are clearly windblown sands. We can tell the difference between a windblown sand and a water-lain sand. That's important because a windblown sand has not been laid down by water; that becomes an NEB. But a water-lain sand, even with no big boulders, just laminated sand, tells us it was water transported. Whether it was by a flood or just a little rivulet is another debate, but it was at least water-lain, and we use that information. We also use, for the Lookout Point hydraulic model (slide 12), the modeling in the HEC-RAS capability. In the red part in the central thread of the channel, there are little white flow lines, scaled to velocity, so we are actually looking at the velocities along the channel. In the blue part, the velocities are relatively low. There's uncertainty and we can nitpick, but basically, it's a lot lower velocity up on the terrace than it is in the channel. When we find some sand up there and gravel down in the channel, the modeling reflects reality. We have comfort in the model. We use both the sedimentological characteristics and the geologic characteristics that we encounter, and we use the modeling input to double-check the results.

Question:

It seems like there's a lot of information in the paleo record from different locations. Are other universities and other researchers using paleo data? Are you able to take advantage of other studies?

Answer:

We are definitely not the only people doing this: Tess Harden and Jim O'Connor have been doing this at the U.S. Geological Survey (USGS) for a long time. John England and his group at [when he was at] USBR have been working on this as well for a long time. Academic research is in process throughout the country, and we are just tagging on. By no means do we think we are the only ones doing this; we are actually just trying to catch up. The hard part is that the hydrologic and geographic regimes are so different across the country that academic researchers are looking at different aspects. For example, in the western United States, where most of USBR's dams are located, they use certain types of features. In other places, such as Tennessee, which Tess Harden will talk about later, there are other techniques. Those in academia are performing some new research evaluating the frequency of flooding within oxbow lakes. There are multiple avenues of academic research. Most of the research is academic, and a lot of it is in Europe, particularly France, Germany, and Spain. We are trying to just tap into that and are not at the forefront of this.

We are just applying the techniques that people have developed over the last 20-30 years to our particular facilities.

Joseph Kanney, Organizing Committee:

The report that Keith Kelson talked about, O'Connor et al. 2014, was actually a result of a project with the NRC. The researchers took a very broad look at several different river basins within the United States; came up with a set of metrics and an index to rate them; and gave them a high, medium, or low viability for paleoflood study.


4.3.4.6 Improving Flood Frequency Analysis with a Multi-Millennial Record of Extreme Floods on the Tennessee River near Chattanooga, TN. Tess Harden*, Jim O'Connor and Mackenzie Keith, U.S. Geological Survey (Session 2A-6; ADAMS Accession No. ML19156A466)

4.3.4.6.1 Presentation

4.3.4.6.2 Questions and Answers

Question:

On the frequency plot, it looks like the slope of the peaks is a little flatter than the slope of the final curve. Is that partly influenced by the dams that are storing water in the Tennessee River Basin, or did you back out the pre-dam peaks?

Answer:

I did not use the regulated part of the record. This is pre-dam. The early part of the record still provides about 40 or 50 years of data; records began in 1867 and have been continuous since about 1874. I think the first part of the curve is really influenced by these lower floods, and the far tail is influenced by our big paleofloods.

Question:

Your third slide showed a number of curves with a so-called gain in record period versus the number of paleofloods. The one-in-100-year flood line sort of crossed the rest of the lines. Do you know why, or is it random?

Answer:

Unfortunately, Tim Cohn, who passed away a couple of years ago, created this curve and performed the statistics. We are trying to find someone to recreate it, so we can put something similar, using real data, in different publications. We are looking for volunteers. I do think that these paleofloods will not have that much influence on the 100-year or the 200-year flood; a good gauge record will have more influence there. You really start getting the gain when you have more information on these rarer floods.

Question:

When you have only one event, you start to set a middle and, going with just precedents, you gain a lot of information. With regard to the censored value, the censored flood: you observed several floods but only used one here?

Answer:

We use three, but this does not account for the perception thresholds. This chart gives just the number of paleofloods and how that affects the curve at these certain flood quantiles. Then we add these perception thresholds where we had information about floods. We do not know each individual flood magnitude. We are just adding information that the floods did not exceed a certain level. This graph does not take that into account.

Question:

You know you have several huge floods, even if they are below a certain threshold. This is very important information for the frequency analysis. Why does your calculation not take it into account?

Answer:

Although we have information on eight floods, we used only the benchmark site and considered three floods because this was just more of a sensitivity analysis. These are the three biggest floods, the floods we have the highest confidence in and found over and over, and the record there is very similar to the record at this site. We are pretty confident that these floods occurred and that they are the largest floods. For the sensitivity analysis, let's put that large flood in a frequency analysis and see what it does. Then we added the other floods. We were confident these happened, but there is slightly less evidence, or more uncertainty with, for example, the age, although the actual age of the flood is much less important than the timeframe for the flood frequency. You should use all your floods in your overall approach, but this is more of a sensitivity analysis. Sometimes it really affects the curve in the analysis if you take one flood out, and sometimes it doesn't. In this case, it just does not affect the curve. It is probably the combination of the perception threshold and the large flood that was really driving the curve; the other floods just did not have that big of an effect. But for best practices, you would use all these floods because you do actually have information about them.

Question:

With regard to the hydraulic modeling of the paleo floodplain, did you adjust the topography to match the time? Are you using current topography?

Answer:

TVA actually created and modified the model. It represents the topography before the intervention of building dams and other structures. However, when the dams and other such features are taken out, there is always the question of whether the environment shown in the model really looks like it did when these floods occurred. That is why we pick our sites where we do. There is no human activity in the gorge, no human modification. In 1867, the river was flowing across bedrock. Even later than that, before the dam, the river was flowing across bedrock, so it has not moved much. This is limestone, but it is very resistant and is not going to change much in 4,000 years. Since the dams were installed, there is probably a little more alluvium at the bottom. I do not know when TVA performed its surveys, but even if there were a couple feet of alluvium, the cross section is like a V, and the alluvium is at the very bottom of the V, so it is not going to change the stage very much. Even with some of these unknowns, I think it is fairly safe to assume that it has not changed much in just 4,000 years. That is the benefit of having a stable bedrock site.

Question:

I do not always think in terms of channels; I also think in terms of watersheds. If, for instance, a river has been developed, it could have become regulated. I am more interested in knowing the paleo history of the watershed. Do people look at the tributaries that feed into these major river systems and think: I may not see it in the Susquehanna River, but I could see it in some of the very large tributaries like Pine Creek and Lycoming Creek, and they may be where I'll find the paleoflood evidence, not in the main valley, because of human development?

Answer:

That's an excellent point. Where mainstem channels are more modified, tributaries are an excellent place to find information about more than just the tributaries. If you do a basinwide study, you can look at all the tributaries and get flooding data; maybe it will be a similar signal, and maybe it will be different. Hopefully, it will be somewhat similar if these large storms cover the basins. You can also find evidence of mainstem flooding in these tributaries, so they are good places to look, and in heavily modified areas they are a good place to look if they have not been modified as well.

4.3.4.7 Riverine Flooding Panel Discussion. (Session 2A-7)

Moderators: Meredith Carr and Mark Fuhrmann, NRC/RES/DRA/FXHAB

Will Lehman, USACE/IWR/Hydrologic Engineering Center
Claire-Marie DuLuc, IRSN
John F. England, USACE, Risk Management Center
Keith Kelson, USACE, Sacramento Dam Safety Production Center
Tess Harden, U.S. Geological Survey

Moderator:

I would like to start by opening this up to the panel to see if the panel members have any ideas, based on what they heard, that they'd like to talk about or ask their co-panelists.

Keith Kelson:

You said that there is no change in the curves? Even with the additions, that small amount of change, with the additional paleoflood information with respect to the systematic? Is this correct?


Tess Harden:

Between adding just three floods and adding the eight floods, there was very little change.

Between using just the gauge record and adding the paleoflood record, there is quite a bit of change.

Keith Kelson:

Is there value in going to the extra effort to define eight events versus less effort to define one, two, or three?

Tess Harden:

There certainly is value, because you do not know if there is value until you get to the end. This is the first case where I've seen that the extra floods did not really have much effect on the curve. But there is definitely value, because you can't say at the beginning of the study that you will just look for the three biggest floods or look just at this level. You can't say until after the fact. This is kind of a unique find, I think. I'm sure there's some limit. It's definitely worth finding as many floods as you can that you have confidence in. You do not want to just do it and have low-quality data. The largest number of high-quality flood data points, or intervals, is definitely the best.

Keith Kelson:

As a follow-up to either John England or Will Lehman, can you explain why, if we had paleoflood data and they were well constrained, the curve did not actually change with respect to the systematic, historic record? Is there value in doing the paleoflood work? Since it did not change, you can look back and say, well, I don't know why we did that, because it didn't actually change.

John England:

It may not change your median model, your frequency analysis, but in part of my presentation I alluded to our expected curve. When you roll in full uncertainty, the value will show up as a reduction, in some cases, of that uncertainty. It comes back to the decisionmaking. Can you tolerate using a 50- to 100-year gauge record, extrapolating to 10⁻⁶, and trusting that the fitted distribution represents the complete tail? In my view, adding that piece of information gives you qualitative value even if it does not change the number; that is beside the point. It gives you additional, admittedly subjective, confidence in the model you chose to perform the extrapolation and in the uncertainty about it. But then we want to roll it into the HEC-WAT and try to couple it: does that really mean something in terms of the consequences and the risks for the project of interest?

Will Lehman:

One of the things Tess said about the additional floods is that they were more uncertain about their details. The added information of the eight floods, in conjunction with their uncertainty, may have yielded the result that they observed. Had they been more certain about those, then we would expect the confidence intervals to come in.

Claire-Marie DuLuc:

It's very interesting that this kind of information gives a big advantage to the Bayesian framework. John England, what about the predictive approach, which requires some quite different information from what we use in the frequentist effort? How do you propose to deal with this concept? I don't think the word Bayesian occurs in Bulletin 17C.

John England:

My coworkers and I had some long discussions when preparing this report. They wanted to go Bayesian and thought that I would disagree. The short answer is no, because, historically, at USBR, we got into this in the mid-1990s. We jumped into using geology and paleoflood information. I was working on updating Bulletin 17C at the same time, so it took a long time to do both. Bulletin 17C was a bit of a straitjacket at the time for floodplain management, using straight frequentist methods. What I mean by straitjacket is the committee: the committee chose to implement an improvement, and we decided to keep log Pearson and moments estimation, so straight frequentist. At the same time, Daniel O'Connell at USBR was doing Bayesian maximum likelihood. We thought we could include multiple distributions and some model uncertainty and take into account paleoflood ages. The engineers were disturbed because it was like saying, "It's between 1,000 and 3,000 years." We said that you could put a uniform distribution on that and account for the uncertainty in ages, just like the flows. One of the flows in one of today's examples had a best estimate of 60,000 cubic feet per second for one event; 30,000 was a lower estimate and 80,000 was a higher estimate. That larger uncertainty sometimes is very discomforting for some people who run hydraulic models and like precise answers. The Bayesian framework can readily account for that, routinely. Broadly speaking, we are headed toward using Bayesian approaches when we have these more complicated, critical dam safety problems. Bulletin 17C does a lot for the frequentist side. For more frequent floods, we want to combine them. We need to use both the scientists' and the engineers' knowledge, so not necessarily relying on the fancy Bayesian and Monte Carlo methods or these expected moments with the complicated perception thresholds. Look at plots, evaluate your data, and then make some inference based on that. We have to keep the human involved in the calculations, not just have the statistics take over.

Question:

John England and Will Lehman, from the dam safety perspective, as flows increase during reservoir operations, you get to a point where you can no longer influence the flow. Have you considered how much the reservoir operators' decisions influence the results, looking at the probability of getting to those extreme levels at dams? Second, there is a lot of uncertainty in knowing what the reservoir operators will do. How do we account for that?

Will Lehman:

When you are overtopping your dams in these very infrequent events, dam operation will have less of an impact, while at the lower frequencies, it can have a tremendous impact. Within the HEC-WAT, we would be able to evaluate a system of dams operating together in concert. These big systems, such as the Trinity, present interesting artifacts. Because of the storm centering and the storm patterns, you would see certain reservoirs operate in certain ways that would help other reservoirs alleviate the pressure in the system. That is based off of the guide curves that you input into the HEC-ResSim model. With regard to the incorporation of deviation from operation curves, we have conducted a lot of analysis on forecast-informed reservoir operations and are considering how different duration forecasts help us to make decisions to evacuate pools earlier. We see that there can be a much better treatment of flood risk in the mid-range frequency events. The system operates as a system and needs to be modeled that way.


John England:

Will Lehman described the Trinity River system, which is in Texas, with most of the dams above Dallas. We have five dams: of those, two are in series, and one is in a levee section protecting downtown Dallas. All those facilities work in conjunction to provide flood risk benefits and management for the city of Dallas and other parts of the Dallas-Fort Worth metro area. The HEC-WAT is a nice way to incorporate that. The challenge, in my view, is how it affects flood risk management. Those operations are critically important at frequencies of 1 in 100 and 1 in 500. You can use that to explore deviations. On the dam safety side, besides the HEC-WAT, we have a reservoir frequency analysis tool. It's pretty simple to do reservoir stage frequency curves. Yet it has to go back to the user's input, and the input there is very simple. It's not using the beauty of the little logic that's in HEC-ResSim. The user enters outflows, a rating curve, and a storage relationship. You still have to calculate sensitivity, or global sensitivity, and do perturbations to identify what happens when there are mistakes or changes in operations. [Those operational errors] will have to be constructed, usually, for this simple little outside tool, one at a time. The other challenge we have on the dam safety side is that accidents happen, such as at the Oroville Dam in Northern California. We have very large spillways, which may not operate as intended and can fail in this situation. We can sort of throw out the frequency part. I can't speak directly from a USACE position, but I can say that spillway had operated for flows of equal magnitude or slightly less many times in the past. The accumulation in the drain caused the spillway issue. We use the risk process to deal with these challenges. Yet we need all these ingredients to help characterize that, whether it's HEC-WAT, global sensitivity, or the full frequency analysis.
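A minimal sketch of the kind of simple reservoir stage-frequency calculation described above, where the user supplies a storage-elevation relationship and an outflow rating and the tool routes inflow hydrographs; the curves and numbers are illustrative assumptions, not the actual USACE tool's data.

import numpy as np

# User-entered relationships (illustrative): elevation (ft), storage (acre-ft),
# and outflow rating (cfs) at the same elevations.
elev = np.array([100.0, 105.0, 110.0, 115.0])
storage = np.array([0.0, 2.0e4, 6.0e4, 1.4e5])
outflow = np.array([0.0, 500.0, 4_000.0, 20_000.0])

CFS_HR_TO_AF = 3600.0 / 43560.0        # one cfs flowing for one hour, in acre-ft

def peak_pool_elevation(inflow_cfs, dt_hr=1.0, s0=2.0e4):
    """Level-pool routing, dS/dt = I - O, with O looked up from the stage."""
    s, peak = s0, 0.0
    for i in inflow_cfs:
        e = np.interp(s, storage, elev)            # stage from current storage
        o = np.interp(e, elev, outflow)            # outflow from stage
        s = max(s + (i - o) * dt_hr * CFS_HR_TO_AF, 0.0)
        peak = max(peak, e)
    return peak

# Routing many sampled inflow hydrographs (e.g., from the flood frequency
# analysis) and ranking the peak elevations yields a stage-frequency curve.
hydrograph = np.concatenate([np.linspace(0, 30_000, 24), np.linspace(30_000, 0, 48)])
print(f"peak pool elevation: {peak_pool_elevation(hydrograph):.1f} ft")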

Will Lehman:

The Trinity River has two dams in series. If the upstream one fails, the downstream one can't pass the flood. In the HEC-WAT, we are also assessing inline failure locations throughout that system and the subsequent failures of levees and reservoirs, which is what happened in reality.

Question:

Keith Kelson, is the use of archaeological data, such as paleo-Indian sites located on or adjacent to Pleistocene terraces, helpful in your analysis?

Keith Kelson:

Absolutely; archaeologists provide context for the sediments and age dating. For example, on the Missouri upstream of Bismarck, there are archaeological sites that have been demonstrated to have been flooded. The occupants left the site after that flooding. We can almost identify to the year when that flood occurred. Archaeological data provide an incredible gauge to figure out whether something such as flooding occurred or not.

Tess Harden:

We do try to avoid archaeological sites and not dig them up. A lot of the relevant information can be found in the literature.


Question:

What are the similarities and differences in using paleo data for coastal big flood events? Who if anyone is at the forefront of doing this?

Tess Harden:

My knowledge of coastal and storm surge is limited to what I heard yesterday, essentially. But there have been studies, I think, of the marsh areas that use the same principles as far as sediments and stratigraphy. I am not sure about doing more hurricane counts. For surge, it may be useful to know if the water is making it into certain areas, and you can find out if there is a limit to the deposits. After big events such as hurricanes, there is always talk that we have seen this before. There was an effort with the coastal community and the river community after Hurricane Harvey to try to determine whether the records mesh. I don't know of anyone doing that.

Keith Kelson:

There are definitely people working on paleohurricane stratigraphy. This is not an area of interest to USACE, because coastal areas are not locations for dams, but there are definitely groups working on that, and they could likely be identified through an Internet search on paleohurricanes. Riverine cases are also influenced by storm surge, as in Washington, DC, where you have the tidal influence and potential storm surge; in North Dakota, we have the ice jam problem. If you were to do a paleoflood geologic analysis and find evidence of a flood, you would have to figure out whether it came from the ocean or the mountains, to put it simply. To my knowledge, no one has considered a case with that dual source, ocean and riverine, although it could be done.

John England:

I can't recall whether it is researchers at the State University of New York, Stony Brook, or at Woods Hole who are working in the marshes in New England. Both Massachusetts and Connecticut are very interested in tracking hurricanes that pass through Long Island, especially direct hits, after Hurricane Sandy. The 1938 hurricane went up the Connecticut coast, and so people in that community are working on it. Usually, it's the geologists and oceanographers.

Comment (Joseph Kanney):

In O'Connor et al. (2014), the USGS Scientific Investigations Report 2014-5207 that I talked about earlier, we did a review of paleotsunami and paleosurge research. Although a little bit dated now, it surveys the state of practice and the state of research on paleosurge deposits at that time. It was not very optimistic. There's a lot of difficulty with preservation of those sediments if you are in a bay with a lot of frequent storm surge, where the sediment gets reworked a lot. There is more to it, as described in that report.

Moderator:

My impression is that a lot of historical data are available for storm surge because of the records that tend to be kept for shipping and for various types of harbor work. There may not be paleo data, but there might be a way to get back at what occurred and there may be reliable records.


Keith Kelson:

This brings up an interesting point about our historical record: is it good enough? It applies to what you just said and also to the riverine side. In some cases it is; I think that's probably true on the Tennessee River near Chattanooga, where you have multiple large events in the historical record, either because the historical record is long enough or because that's the way the system works. But think about the extremely flashy systems in Southern California, where we do not have as long a historical record, and we have no historical events that are even close to anything we're seeing in the paleoflood record. Either our window is too narrow, or the return period on these really big floods is longer than our window. Whether the paleoflood information makes a difference or not depends in part on what the system is like. It might change the curve a little bit where you have multiple large historical events, but it is going to change the curve a lot, either greater or lesser, if you do not have very many large historical events.

Question:

How do you deal with nonstationarity? How do you handle a site where you may just have one piece of data from a flood, and you are not accounting for nonstationarity or climate change?

Keith Kelson:

Understanding how the system works is important: the runoff processes, given certain meteorological inputs. It is important to understand that system and to look back in time to help us understand those runoff processes. If you will project forward using meteorological models, you also need to understand how those models work based on past history. If we do not look at the longer record, then our models are only as good as what we have from that limited historical period.

Follow-up Comment:

I was glad that during your presentation you had that slide where you lowered the annual exceedance probability for low-frequency events. You said this should not be used for decisionmaking, that you have to take it with a grain of salt and not start modifying guidance and regulatory action based on these data. It is the same thing for GCMs. It's a tradeoff of what you're getting out of paleoflood data and models. We need all the data we can get.

Will Lehman:

We talked a lot about downscaling the GCMs. I think there is a lot of benefit to that. There are other ways to make decisions without a full definition of the likelihood of something occurring. For example, you can look at a kind of bottom-up approach, where we look at the variability that's expected. We run through a stratified sample of the expected outcomes across certain parameters and see how they impact our ability to perform as a system. With this, you can describe when you would start to have regrets for not taking action, and then monitor how we progress toward that point across time to enable us to make decisions. It's a bit of a different approach. Coupling that with downscaling, once you have these response surfaces, you can actually take the downscaled models, assign to them some predictive ability, and create a surface that has a probability distribution on top of it to say, what's the likely outcome? I think that we might need to start heading in that direction. We've been doing that with the HEC-WAT.


John England:

With regard to sampling in space, we have relied on work by USGS, principally John Costa, on very small basins. Keith Kelson alluded to Southern California, with really big floods and shorter records. We encourage the teams to look spatially, to look at multiple watersheds to get the signal of the events happening across multiple locations and, as Tess Harden pointed out, at multiple sites to see that signal. This will allow you to state, "We get this broad signal of really big floods of about this magnitude." With regard to nonstationarity, we wrestled with this issue when we came out with Bulletin 17C, and so that guidance is really flexible. We advocated there to try to go toward non-time-varying parameters. When we do frequency analysis, we can use distributions like the GEV or log Pearson III. It turns out we know nothing about the shape parameter, except from theory. There are some interesting parts where you can use rainfall and constrain what that shape parameter might be with the assumption of stationarity. That's where the theory can help you. Second, if you have a really big flood, like the Tennessee flood (something bad enough that it's in the paleoflood record), you may never be able to answer the question, "Is the record quasistationary?" But what matters is whether a flood of this magnitude will break our system or cause risks sufficient that we have to investigate it. The frequency analysis does not consider whether it's 50 years in the future or a thousand years ago. What matters is having an event of those magnitudes in the analysis in the first place. The National Academy of Sciences published a wonderful report on tree rings and the Colorado River in about 2000. Those tree ring records show a couple of big droughts in the Colorado River system and the southern and southwestern United States that would essentially break the system. We can additionally encourage people to look at downscaling from GCMs and look at warming in the future. Dave Curtis brought this up yesterday. For a longer time window, you need to do both sides.

Question:

First, related to Will Lehman's presentation about the HEC-WAT model, I understand the model has one component that relates damage to inundation elevations. My recommendation is to include the velocity. Right now, floodplain management for flood insurance uses a factor, the velocity multiplied by the water depth, as a major parameter to estimate inundation damage. This recommendation may help the floodplain management community use this model. Second, Bulletin 17C treats the lower values, the lower tail of the flood frequency curve. It has an approach to deal with lower values, the outliers, but it has no new approach to deal with the upper tail. You plot it, for example, using a type of plotting position, and then you may find that the paleoflood data show an upper-tail outlier. It might be good to have some reasons, or an approach, to justify these upper-tail outliers rather than no justification at all.

Will Lehman:

The Hydrologic Engineering Center Flood Impact Analysis Software (HEC-FIA), which is included in the HEC-WAT, does use depth-times-velocity in the damage-driving parameters. If one knew that the structures being modeled were in a high-velocity zone, one could use high-velocity depth-damage relationships to describe that additional damage. HEC-LifeSim explicitly uses velocity in its ability to assess damages as well. It would depend upon the plugin, whatever application was there, and whether or not you could address it; also, the modeler's decision on whether or not it would be used in the output of HEC-WAT. It is kind of handled already, and we can do better.
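A minimal sketch of a depth-times-velocity damage driver in the spirit of what is described above; the breakpoints and damage fractions are illustrative assumptions, not the actual HEC-FIA or HEC-LifeSim relationships.

def damage_fraction(depth_m: float, velocity_ms: float) -> float:
    """Map the depth-velocity product (m^2/s) to an illustrative damage fraction."""
    dv = depth_m * velocity_ms
    if dv < 0.5:
        return 0.05                              # nuisance flooding
    if dv < 3.0:
        return 0.05 + 0.25 * (dv - 0.5) / 2.5    # rising structural damage
    return 0.90                                  # high-velocity zone near a breach

print(damage_fraction(1.2, 0.3))   # slow, shallow water: light damage
print(damage_fraction(2.0, 2.5))   # breach-adjacent flow: severe damage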

Keith Kelson:

To expand on that, when we are going through our levee screening tool to assign levee safety action classes, we will use the HEC-WAT information and then qualitatively adjust that action class depending on where the breach occurs. We have some information on spatial variability of fragility of the levee. If that most likely breach location is in an area of high population, then we will adjust that safety action class, depending on the anticipated velocity. You will have a high velocity right at the breach location and it will decrease out from there. We will qualitatively adjust the action class if needed to capture that velocity gradient.
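As a concrete illustration of the depth-times-velocity damage parameter raised in the question and the two answers above, here is a minimal sketch; the 1.0 m²/s threshold is an illustrative assumption, not a value from HEC-FIA or HEC-LifeSim.

```python
# Hedged sketch: coarse hazard classification from the depth-velocity
# product; the threshold value is hypothetical.
def depth_velocity_hazard(depth_m: float, velocity_mps: float,
                          threshold: float = 1.0) -> str:
    """Return a coarse hazard class from the d*v product (m^2/s)."""
    dv = depth_m * velocity_mps
    return "high" if dv >= threshold else "low"

print(depth_velocity_hazard(0.8, 1.5))  # d*v = 1.2 m^2/s -> "high"
```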

John England:

USGS, USACE, USBR, and FEMA collaborated to produce a document, and progress has been slow in some areas. Bulletin 17C mentions a Web site that has some additional information. About 8 or 9 years ago, we had a frequently asked questions (FAQ) page for Bulletin 17B. Some of the issues you have raised are appropriate for us to put on a Web site for broader dissemination on Bulletin 17C. In particular, we have a very specific test to take care of the low floods, called potentially influential low floods (PILFs), because the analysis tends to focus on the big flood of interest.

Essentially, we removed their influence, and there's a specific test to figure that out. Then you can use perception thresholds. We also made a conscious choice to deviate from Bulletin 17B, which had a high outlier test. We decided that we did not want to use a statistical test to figure out that the largest flood is a high outlier. Instead, we said to plot your data. We have a plotting position formula that includes ways to adjust for historical information. We wrote a whole section on extraordinary floods. Instead of using a statistical test, look at your biggest events and the ratio of the largest flood to the next one. An example in Bulletin 17C describes this for a site in Colorado, on Plum Creek, where the largest flood was about 35 times the next largest one.

You can easily see with the points on a frequency curve that one flood is an outlier and that more information is needed on the paleofloods and the historical record. Look regionally to see whether a flood of that magnitude has occurred at other gauges. In this particular example, Plum Creek was a very famous flood in the Denver metropolitan area in June 1965. Multiple gauges were broken. It was a very, very large regional flood. In some other cases, though, you may find out that there's measurement error. We encourage folks to go look at that flood to see if there is an error in the database or the hydraulic calculation. A specific plotting position addresses that, along with a way to obtain additional information, whether it's a historical record or a paleoflood record on those large events, and then to look at that in context.

We are in the process of developing an FAQ to explain some of the ingredients that seem new to people and some implementation details we overlooked. We have the Hydrologic Engineering Center Statistical Software Package (HEC-SSP), just as USGS has PeakFQ, that has graphs with perception thresholds. It takes time to communicate that, so we have some PowerPoint slides and other resources on my Web site that I can share with you, and we are working on better documentation. The last section of the report discusses what is not included, including topics we have been discussing, such as regulated flows, nonstationarity, and land use changes. Those things are not in that bulletin, and we really would like to have the community help us in those aspects. Particularly on nonstationarity, I had a site visit with my coworkers from the Galveston District on some work on dams in Houston, and the land use change signal is as clear as anything in that part of Texas. That is a very important feature to include in the full frequency analysis but one not addressed in Bulletin 17C.
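For readers unfamiliar with plotting positions, the following sketch shows the idea with the simple Weibull formula; this is an illustration only, not the Bulletin 17C procedure, which generalizes plotting positions to historical data and perception thresholds (Hirsch-Stedinger).

```python
# Hedged sketch: Weibull plotting positions for ranked annual peaks,
# used to eyeball whether the largest flood plots far off the curve.
import numpy as np

peaks = np.array([3200., 4100., 2800., 15000., 3600., 3900., 2500.])
ranked = np.sort(peaks)[::-1]            # largest first
n = len(ranked)
aep = np.arange(1, n + 1) / (n + 1)      # exceedance probability i/(n+1)

for q, p in zip(ranked, aep):
    print(f"AEP {p:.3f}  peak {q:>8,.0f} cfs")

# Ratio of the largest flood to the next largest, the screening check
# the answer describes (35x at Plum Creek):
print("ratio:", ranked[0] / ranked[1])
```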

Follow-up Question:

In the United States, a lot of States have installed stormwater management plans that follow these policies. So, there are now physical interruptions in the stochastic process. These physical installations in the watersheds, or any kinds of drainage systems, will influence the stochastic process for the future, maybe the next 20 years, in ways we do not know. Is it right to consider this now?

Will Lehman:

Human intervention is natural and will happen, as will other things. We are modeling life loss across a 50-year life cycle. For perspective, Hurricane Katrina caused the single largest migration since the American Civil War. If we have a Katrina event in our stochastic model, and then we model that the population is coming back for the next event, then we are way off. Many aspects are very difficult to model, and these are excellent questions that need lots of empirical research.

Some of them may not be able to be modeled. I am just trying to provide a framework to get closer.

John England:

We could use the HEC-WAT or some other existing tools. Considering your question about stormwater management and floodplain management, from the perspective of FEMA and infrastructure, I would look at Houston, with two reservoirs upstream, as an example. Some consultants have conducted continuous rainfall-runoff modeling with stochastic rainfall inputs using various continuous rainfall models like the Hydrologic Simulation Program-FORTRAN or even the Stormwater Management Model. It's more in the U.S. Environmental Protection Agency (EPA) domain; that could be applied to urban flood problems for floodplain management to take into account changes in the system for the present day, and then to test your forecasts. Speaking for USACE, for our Dam and Levee Safety Programs and in risk assessments, we look at future conditions without action. We take a simple rainfall-runoff model, like the Hydrologic Engineering Center Hydrologic Modeling System (HEC-HMS) with a unit hydrograph. Then we look at changes in runoff response in riverine systems to take into account those potential future conditions as scenarios and look at some upper bound situations. Although one can build a formal framework to integrate all of that, we have not quite taken that step.

4.3.5 Day 2: Session 2B - Modeling Frameworks

Session Chair: Thomas Nicholson, NRC/RES/DRA

4.3.5.1 Structured Hazard Assessment Committee Process for Flooding (SHAC-F).

Rajiv Prasad*, Philip Meyer, Pacific Northwest National Laboratory; Kevin Coppersmith, Coppersmith Consulting (Session 2B-1; ADAMS Accession No. ML19156A467)

4.3.5.1.1 Abstract

This research project is part of the NRC's PFHA research plan in support of development of a risk-informed analytical approach for flood hazards. Risk-informed approaches require full expression of flood hazards probabilistically. The Structured Hazard Assessment Committee for Flooding (SHAC-F) process is expected to support reviews of license applications, license amendment requests, and reactor oversight activities. Pacific Northwest National Laboratory (PNNL) is leading the development of SHAC-F. In previous years, we described virtual studies following a Senior Seismic Hazard Analysis Committee (SSHAC) Level 3 process for local intense precipitation (LIP)-generated floods and riverine floods. These studies indicated a need to define the basic aleatory model, adopt explicit characterization of epistemic uncertainties, document all aspects of the hazard assessment, and describe SHAC-F studies progressively from the simplest to the most complex. Lessons learned from these studies contributed to the development of SHAC-F.

SHAC-F is structured at three levels, with the levels defined in terms of the purpose of the assessment. Level 1 SHAC-F studies support screening assessments for structures, systems, and components (SSCs) or binning the flood hazard into significant or nonsignificant categories.

Level 2 studies are appropriate to (1) refine a screening analysis (e.g., where a Level 1 study could not adequately support binning of flood hazards) and (2) update an existing Level 3 assessment. Level 3 assessments support licensing reviews, design reviews, and probabilistic risk assessment (PRA) for new and existing power reactors. For all three SHAC-F levels, the expected outcome of the study is generation of a family of flood hazard curves appropriate for the purpose of the assessment. Project structures and roles and responsibilities of team members are clearly defined in SHAC-F. The composition of SHAC-F analysis teams is specific to the needs of flooding analyses and depends on the complexity of the study. At all three SHAC-F levels, an appropriately sized participatory peer review panel oversees the hazard assessment.

Data and methods used for SHAC-F are also defined to be commensurate with the purpose of the study. A SHAC-F Level 1 study uses existing data, possibly for an at-site flood-frequency analysis or simplified hydraulics simulation. A Level 1 study may use alternative conceptual models (ACMs) to represent epistemic uncertainty (e.g., various parametric or nonparametric distributions in the case of flood-frequency studies) and may include regionalization and accounting for nonstationarities. A SHAC-F Level 2 study might combine flood-frequency analyses with existing simulation model studies to refine the flood hazard estimation. ACMs in this case could include alternative simulation models that can reasonably represent the flood behavior at the site. A SHAC-F Level 3 study needs to account for spatiotemporal resolution of flood hazard predictions that can support licensing and PRA needs. Existing data can be used in a Level 3 study, but a site-specific, detailed analysis would be needed with explicit accounting of nonstationarities to support licensing timeframes. At all levels of SHAC-F, explicit characterization of uncertainties, both aleatory and epistemic, is required.
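As one way to picture the family of flood hazard curves and the alternative conceptual models described in this abstract, here is a minimal sketch under stated assumptions: two candidate distributions are fit to the same record and combined with illustrative logic-tree weights. It is an added illustration, not the SHAC-F procedure itself, and all data and weights are hypothetical.

```python
# Hedged sketch: epistemic uncertainty via alternative conceptual models
# (ACMs), each producing one hazard curve; weights are hypothetical.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
peaks = stats.lognorm.rvs(s=0.5, scale=4000, size=60, random_state=rng)

aep = np.logspace(-1, -4, 25)            # annual exceedance probabilities

ln_params = stats.lognorm.fit(peaks, floc=0)   # ACM 1: log-normal
gev_params = stats.genextreme.fit(peaks)       # ACM 2: GEV
q_ln = stats.lognorm.ppf(1 - aep, *ln_params)
q_gev = stats.genextreme.ppf(1 - aep, *gev_params)

weights = {"lognormal": 0.6, "GEV": 0.4}       # illustrative logic-tree weights
q_mean = weights["lognormal"] * q_ln + weights["GEV"] * q_gev
print(np.round(q_mean[:3]))
```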

4.3.5.1.2 Presentation

4.3.5.1.3 Questions and Answers

Question:

What value does this SHAC-F have over the present way that people do flood assessments?

What would you say is the value that warrants the extra cost of organizing and running the SHAC-F?

Answer:

We have known for a long time that probabilistic flood assessments are possible. However, there's a lot of uncertainty, especially when you consider the tails of the distribution that we need to get at, the very low annual exceedance probabilities that we need to define. That is where the epistemic uncertainty starts mattering a lot, because different approaches can either result in more accurate estimates or start diverging, because you may not have considered some of the processes that are more prevalent in the lower parts of the tail. SHAC-F provides a consistent framework within which to bring all of this information together, including both aleatory and epistemic uncertainty. It allows a process that is rigorous and ongoing through the project review.

That gives you assurance as a regulator. So, this is more from a regulatory perspective. It provides assurance that when multiple studies are done at different sites, they have followed a consistent, similar process; that the process is reproducible; and that there is sufficient documentation of where decisions were made, which models were picked, and how the model parameters were estimated. This allows for reproduction or efficient review of the whole assessment process. That is where I think some of the additional expense may be justified.

Saying that we can do these things gives the regulator a lot more confidence in trying to come up with what the risk may be, and how that risk is informing some of the decisions that need to be made.

Question:

Is this a published document or is this theory saying we should be doing this kind of thing?

Answer:

This is not a NUREG document yet. We are writing the report currently, and it should be available in the next couple of months, at least to Joseph Kanney, who is our Contracting Officer's Representative. Then it needs to go through the NRC clearance process. We will probably need to go through a review period and then update some of the report based on comments.

Then the NUREG publication process needs to happen. We are a little bit away from actually having a NUREG.

4.3.5.2 Overview of the TVA PFHA Calculation System. Shaun Carney*, RTI International, Water Resource Management Division; Curt Jawdy, Tennessee Valley Authority (Session 2B-2; ADAMS Accession No. ML19156A468)

4.3.5.2.1 Abstract

Around 2015, TVA began a large-scale program to develop inputs to support its risk-informed decisionmaking process, including estimation of economic and life loss consequences along with development of hydrologic hazard curves based on PFHA. Through this process, RTI International, MGS Engineering, and MetStat helped build a computational framework for performing the PFHA for reservoirs and other critical locations throughout its system. The framework uses computational modules from the Stochastic Event Flood Model (SEFM) in combination with a suite of hydrologic models and RiverWare operations models. The use of SEFM modules provides flexibility to implement different models within the computational system.

MetStat developed a customized storm transposition interface to generate representative storm patterns needed for the stochastic modeling. Other unique aspects of the computational framework include sampling for different storm types, the use of synthetic long-term time series, intelligent sampling and convergence evaluation, both natural and regulated hazard curve development, and advanced data-mining techniques to explore output from tens of thousands of simulations. This presentation will review the unique aspects of the computational framework and will discuss plans for ongoing development.

4.3.5.2.2 Presentation

4.3.5.2.3 Questions and Answers

Question:

Slide 23 makes the case for SHAC-F, because you've got two totally different results coming from totally different methods. What do you do as a decisionmaker?

Answer:

I think this is a real challenge. We are looking at streamflow-based and rainfall-based information; it is all historical. Then you get this crazy event from the paleoflood perspective. Ultimately, TVA needs to make decisions. This event is really high there, and that's a concern.

Question:

Are synthetic storms, characteristic storms, and storm templates referring to the same thing?

Answer:

I think so; they come from the work of Mel Schaefer and his team. They were looking at the precipitation frequency, so the volume of the storm event and the probability of that event. In addition to that, you have the storm characteristics: if I get an 8-inch storm, does that storm hit mostly the lower watershed or the upper watershed, and what's the timing of that? Those would be the storm templates. The synthetic storm takes the precipitation volume, scales the storm template up to that volume, and then drops it in the historical record to generate the event.

Follow-up Question:

How is this done for something like the temporal distribution of the rainfall?

Answer:

For the storm template, what we have done is pick about 80 of the largest historical events that have happened over the watershed, or nearby. We take the pattern of each and transpose it to the watershed of interest. You have both the spatial pattern and the temporal pattern of how the storm moves across the watershed. So, it's essentially historical.
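A minimal sketch of the template-scaling idea described in these answers, assuming a gridded storm template; this is an added illustration, not the SEFM or MetStat implementation, and all array shapes and depths are hypothetical.

```python
# Hedged sketch: scale a historical storm template to a sampled total
# depth while keeping its spatial and temporal pattern.
import numpy as np

rng = np.random.default_rng(7)
# Hypothetical (time, y, x) incremental precipitation template, inches.
template = rng.gamma(shape=0.5, scale=2.0, size=(24, 10, 10))

sampled_depth = 8.0                              # inches, from frequency analysis
basin_total = template.mean(axis=(1, 2)).sum()   # basin-average storm total

scaled_storm = template * (sampled_depth / basin_total)
print(scaled_storm.mean(axis=(1, 2)).sum())      # ~8.0 inches
```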

Question:

Inaudible.

Answer:

The hydrologic models are not failing. In this case, there are actually a number of failures because of flaws in our rule definitions; there are unique cases we did not consider. This put the reservoirs into some odd places that the river model can't solve. We do track those. We are trying to get rid of those failures of the river model. In terms of accounting for those failures and how that impacts the hazard curve, you obviously do not want the failures. But we need to be sure that we are not saying that every time the peak goes above a certain level, the model fails, and so we're safe. We've tried to account for that and make sure we aren't hitting those, and generally we're not. We're not accounting for them in the modeling otherwise.

4.3.5.3 Development of Risk-Informed Safety Margin Characterization Framework for Flooding of Nuclear Power Plants. M.A. Andre, George Washington University; E. Ryan, Idaho State University, Idaho National Laboratory; Steven Prescott, Idaho National Laboratory; N. Montanari, R. Sampath, Centroid Lab; L. Lin, A. Gupta, and N. Dinh, North Carolina State University; and Philippe M. Bardet*, George Washington University (Session 2B-3; ADAMS Accession No. ML19156A469)

4.3.5.3.1 Abstract

There are six categories of external flooding (XF) listed in the NRC's external flooding documentation. Because of site specificities, NPPs have different concerns for flooding. Some flooding mitigation methods, such as temporary or permanent flood walls to guard against storm surge or stream and river flooding, have high uncertainties on their effectiveness due to conditions such as debris and wave overtopping. Thus, flood mitigation systems would benefit from validated Risk-Informed Safety Margin Characterization (RISMC) methodologies. Hence, a computationally efficient and flexible 3-D fluid-structure interaction, smoothed-particle hydrodynamics (SPH) code, NEUTRINO, is being validated in this context with existing (published) datasets as well as with a dedicated experimental facility. The experimental facility is a large sloshing tank (6 meters x 2.4 meters x 1.2 meters), where a variety of scenarios can be tested. Large impact forces can be generated in a controllable and repeatable manner, and a broad range of diagnostics can be deployed. The code and facility will be presented, as well as the organization and collaboration of the numerical and experimental teams, who are working closely to define the experimental and numerical campaigns to validate NEUTRINO and its use in RISMC. Additionally, this work is being used to demonstrate an uncertainty propagation methodology when using multiple sources of data.

4.3.5.3.2 Presentation

4.3.5.3.3 Questions and Answers

Question:

Will the modeling results be used to support flood protection assessments and guidance? I think the U.S. Department of Energy (DOE) is funding you in part to look at that issue of flood protection.

Answer:

I did not mention the latest results we have on assessing the scalability of the code, because we are simulating small-scale experiments in the lab and the barriers are going to be on a smaller scale. How do we scale that up? We have a significant effort with Purdue University on understanding the scalability of the experiments in the code. As a caveat, the project is ending in September, so time is short. We are also deploying some of the diagnostics with which we are exploring the velocity and pressure inside the liquid.

Question:

Who is collaborating with you on the experimental design and testing of the model and the physical model?

Answer:

The people doing the simulation all know what to do. We do the simulations ourselves as well.

This is all done by one postdoc in my lab. We design everything from scratch ourselves. We are, first and foremost, engineers. We like to do new things. Now we have lasers and cameras moving with the tanks. We have a million dollars' worth of cameras starting to shake with the tank, and so far, we didn't break anything. We're going to try to keep it this way.

4.3.5.4 Modeling Frameworks Panel Discussion. (Session 2B-4)

Moderator: Thomas Nicholson, NRC/RES/DRA

Rajiv Prasad, Pacific Northwest National Laboratory
Shaun Carney, RTI International
Philippe M. Bardet, George Washington University
Guest Panelist: Will Lehman, USACE/IWR, Hydrologic Engineering Center
Guest Panelist: Joseph Kanney, NRC/RES

Moderator:

The panel discussion this afternoon is on modeling frameworks. We have asked Will Lehman, who talked this morning on the various HEC models, to be part of a panel. As a guest panelist, we have Joseph Kanney, who works at NRC/RES in hydrology.

I'd like to open the questions by talking in generalities about modeling frameworks. Rajiv Prasad, you heard an attendee point to the discussion of interpreting data from Shaun Carney's presentation as a good reason for you to do, or propose to do, SHAC-F. How difficult would it be to do Levels 2 and 3 of SHAC-F, and what kinds of difficulties do you see? You talked about LIP and riverine flooding, but what about storm surge and other types of flood events?

Rajiv Prasad:

In terms of difficulty, we don't know yet. We have not done any of these assessments yet. The virtual studies that we did provide some information that we used to streamline how the process would look. The process is tailored after the SSHAC Level 3 study. In a Level 3 study, the teams would consist of the flood experts, the flooding data experts, and others of that sort. It still involves three workshops and four working meetings. A lot of the time that goes into this is related to the time that you need these experts to come in and talk about the data, the models, and how to put it all together. In SHAC-F, we are trying to reduce that burden a little bit by making the teams, and the topics that you discuss, more appropriate for the flooding assessment at the level that we want to run the SHAC-F process for a particular review or assessment. So, while we do not know yet about the cost, hopefully in the near future, when we do some pilot studies, we will sort out some of these thorny questions.

Joseph Kanney:

We actually have a project in place that will begin, hopefully in the next month or so, applying SHAC-F ideas to storm surge hazard analysis. This work will be done jointly between the NRC, our colleagues at USACE (Norberto Nadal-Caraballo, Victor Gonzalez, and their group), and PNNL.

Moderator:

One of the dilemmas, not just for SHAC-F, but for the other modeling frameworks we have heard about today, is formulating credible scenarios both now and projected into the future. I would think, especially for TVA, which has dams, NPPs, and coal-fired plants, there are probably a lot of issues that you ought to address in the selection of scenarios. Shaun Carney, could you tell us about the formulation of scenarios that you and your colleagues did?

Shaun Carney:

So far, I've really focused on using precipitation frequency and looking at individual dams that are doing their semiquantitative risk assessments and PRAs. In those cases, we are focused on those specifically. The precipitation frequency analysis was based on historical data. It does not consider climate change or future scenarios. But the first step is looking at this historical precipitation and then incorporating some paleoflood data. We first look back to understand, as best we can, what has happened historically. Then we start considering what happens if the present frequency is actually shifting. That would be another level of complexity.

Moderator:

Will Lehman, could you comment on how you and John England and others at USACE look at risk? You look at different scenarios, using the full complement of the HEC models, some watershed models, and hydraulic models of the channels. You were talking about, in particular, breach of levees. How does USACE focus on certain locales and issues associated with those locales, how do you then formulate these scenarios, and how rigorously do you test them?

Will Lehman:

First of all, John England showed an f-N diagram (annual probability of failure vs. average life loss) for a hypothetical cartoon dam. But in terms of our risk assessment and our prioritizing of our dams, we assign Dam Safety Action Classification ratings. That's trying to combine the likelihood of failure with the consequences, so that we prioritize our analysis toward the dams that pose the greatest risk, which could be driven by either one of those two. As Shaun Carney was saying, that's focusing on the current condition and historical data, not on future conditions. In the levee screenings, once we get beyond the levee screening tool, which is used to support a Levee Safety Action Classification rating, the mapping, modeling, and consequence standard operating procedure would look at multiple failure locations along the levee system and identify the one with the greatest consequences, and then we would use that to drive the risk assessment. We do this if we are trying to find a conservative estimate of the risk, which is not necessarily a true statement of risk. We need to be cautious about that. That is not done necessarily within the HEC-WAT itself.

When we are within the HEC-WAT, aside from the issues associated with linked effects on a levee segment, it is traditional to look at the most probable locations of failure, such as a highway going over a levee with a bridge pier going in, or near a shortened seepage path. You might look at that seepage location as a probable failure location. Other locations might be transition lines between different types of infrastructure. We would assess a geotechnical fragility curve at those locations, driven by that failure mechanism. Another one might be the lowest point relative to the water surface, which might be an overtopping location. You would plot your fragility curves there, and HEC-RAS would be set to breach based on the proper mechanism for those breaching locations, triggered by the fragility curve. That would allow, within the HEC-WAT, multiple failures to happen simultaneously depending on the shape of the hydrograph and the timing of the hydrograph magnitude.
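To illustrate how a fragility curve can trigger a breach in a simulation of this kind, here is a minimal sketch assuming a lognormal fragility in water surface elevation; the median and spread are hypothetical, and this is not HEC-WAT or HEC-RAS code.

```python
# Hedged sketch: sample a levee breach from a lognormal fragility curve.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

def failure_probability(stage_ft, median=30.0, beta=0.05):
    """Probability of breach given peak stage (hypothetical fragility)."""
    return float(stats.lognorm.cdf(stage_ft, s=beta, scale=median))

def breach_triggered(stage_ft):
    """Bernoulli draw against the fragility curve for one event."""
    return rng.random() < failure_probability(stage_ft)

for stage in (28.0, 30.0, 32.0):
    print(stage, round(failure_probability(stage), 3), breach_triggered(stage))
```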

Moderator:

How do you formulate these scenarios? In SSHAC and now SHAC-F, are you actually looking at truly different alternative conceptual models or are you just changing the variability on certain parameters? How uniquely different do those scenarios have to be to legitimately look at the full realm, or the center, body, and range? Am I capturing the range properly for the scenario selection and formulation?

Will Lehman:

That's really difficult to answer and outside of my domain. In terms of how that process happens at the Risk Management Center (RMC) for a dam or levee, they would have a probable failure modes analysis workshop. Through expert elicitation, they would look at what the probable failure modes were and how significant those might be, to load into an event tree analysis. There, they would be looking at a failure tree that could be used to create a fragility curve that might be input into the HEC-WAT at those locations.

Shaun Carney:

This wasn't necessary for TVA but came up for another project. With respect to looking at the full range of uncertainty, in particular the epistemic uncertainty in the models, we're basically using some multi-objective calibration approaches to develop, based on different objectives, a set of different hydrologic parameters that would all be adequately representative of the simulation quality at the different locations for the historical record. But then, for a more extreme event, each is going to produce a different result. It's getting toward what we're talking about. It isn't like taking a Sacramento Soil Moisture Accounting model, an HMS model, and a 2-D model and running those together to see if the model structure differs between them. But it's at least at the parameterization level that we are characterizing the different pieces.

Moderator:

At the NRC, people are saying, "You may have floods, but we can quickly go in and manage the flood using the diverse and flexible mitigation strategies (FLEX) approach," and so flood protection measures are possible, especially if you have early warning of an impending flood.

What are your thoughts with regard to flood protection? You are doing some physical and numerical models; how do you inform DOE on risk-informed safety issues?

Philippe Bardet:

DOE hires me to do these measurements using lasers. While I cannot make the decision on risk, hopefully DOE will use the work we are doing for it and the methodology we are developing, such as working with NEUTRINO, to make informed decisions.

Will Lehman:

I would tend to shy away from the term "flood protection" and maybe speak more to "flood risk reduction." I've learned that we need to always remember that nature is stronger than we are. We should be very cautious about saying that we can go in with heroic efforts to mitigate in the event of a storm that might overtop the defenses that currently exist. First, that increases the risk to the people who are doing those heroic efforts. Second, it compromises our ability to react to a situation that is outside of our control. While the preservation of infrastructure is important, so is the preservation of life.

Joseph Kanney:

Philippe Bardet, could you compare what we see in natural wind wave effects versus your slosh tank?

Question: Inaudible.

Philippe Bardet:

That's the most important question about the design we chose for this project. Hopefully I can convince you of the approach we chose. Currently, we're measuring the wave profile in 2-D. We are now developing a technique to do it in 3-D plus time. We are getting to a level of resolution that measures down to capillary waves, which is way beyond what would be needed, but the tools we have capture that. We started talking to people in oceanography to measure the spectrum. Currently, we have sets of cameras mounted at the tank top, moving with the tank and monitoring the wave form continuously. Do the waves look the same as wind-driven waves? I would most likely say no. However, if NEUTRINO, the smoothed-particle hydrodynamics computational fluid dynamics code we are using, can reproduce my wave profile, and it does so fairly well, then we can reproduce the wave impact while measuring the right wave profile for wind-driven or tsunami-driven waves, and we have confidence, with quantified uncertainties, that it should capture the right impact force and peak pressure. This has been our approach.

Even if you look into a laboratory-scale setup with a large wave tank, in which they create a big wave (and they can create any pattern they want on this wave), you are still limited with scales because you will not have control of the surface tension of water. To understand the effect of bubbles, which are going to create some compressibility and also dampen the peak pressure you will reach in your system, for example, you will have to accept some distortions coming from that as well. We went with the approach where we could get many events and quantify the initial shape to make sure the solver was getting it right; therefore, we'll be more confident that we can capture what will happen in nature.

Moderator:

The major driver for many of these analyses, especially with regard to flooding of dams and NPPs, is the precipitation scenario. We heard a lot of discussion yesterday, especially from Mel Schaefer, about how important it is to partition the precipitation record and look at it with regard to certain characteristics in formulating your scenarios. How important is it to understand the nature of the precipitation record with regard to seasonality and the areas involved? For instance, how do you think about coupling rainfall in the subbasins within the TVA region? For example, for a 1,000-year simulation of the synthetic rainfall, how do the statistical methods do that?

Shaun Carney:

There are a few pieces there. One is the work that Mel Schaefer is focused on, and the other is the 1,000-year simulation. First, there's an extended-period simulation where we are capturing how the system responds. Then we drop individual stochastic events on top of that. With the 1,000-year simulation, we are using an alternating renewal method, where we break up the historical record into chunks of wet and then dry. We sample from those distributions and then we reshuffle. We are maintaining the characteristics of real storms but placing them in a different order. For example, the largest storm from one year is combined with the next year, within a given month. That is one piece of it, that continuous record. Then there are the characteristics of the precipitation. Mel Schaefer put a lot of effort into getting the volumes of the whole regional point precipitation frequency right and converting them to the watershed scale. But then also, using storm templates, these patterns from the largest historical storms are used to capture the spatial and temporal variability across those really large storms. We are trying to get the combination of how events could sequence in time; that's from the long-term simulations that cause the reservoirs to get to some level. That, along with the characteristics of the extreme events, all in one.
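A minimal sketch of the alternating renewal reshuffling described above, under simplifying assumptions (a synthetic daily record, uniform resampling of intact wet and dry spells); TVA's actual implementation preserves seasonality and other storm characteristics that this illustration ignores.

```python
# Hedged sketch: split a daily record into wet/dry spells, then rebuild
# a long synthetic series by alternating resampled spells.
import numpy as np

rng = np.random.default_rng(11)
# Hypothetical 10-year daily precipitation record (zeros on dry days).
daily = rng.gamma(0.3, 10.0, size=3650) * (rng.random(3650) < 0.3)

wet = (daily > 0).astype(int)
breaks = np.flatnonzero(np.diff(wet)) + 1        # wet/dry transitions
spells = np.split(daily, breaks)                 # alternating spells
wet_spells = [s for s in spells if s[0] > 0]
dry_spells = [s for s in spells if s[0] == 0]

target_days = 365 * 1000                         # ~1,000 years
out, total, pick_wet = [], 0, True
while total < target_days:
    pool = wet_spells if pick_wet else dry_spells
    s = pool[rng.integers(len(pool))]            # resample a whole spell
    out.append(s)
    total += len(s)
    pick_wet = not pick_wet                      # alternate wet and dry
synthetic = np.concatenate(out)[:target_days]
print(synthetic.shape, synthetic.max())
```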

Joseph Kanney:

How much spinup time do you need to do that 1,000-year simulation?

Shaun Carney:

For the 1,000-year simulation, we essentially just drop the first year or something. It depends on how much memory there is in the hydrologic model. Usually, you'll get over those initial conditions within a month or two on a continuous hydrological model. But we'll just drop one year, so that it is actually 999 years. That part of it is less of a concern.

Joseph Kanney:

I thought the spinup might be longer once you have all the reservoirs in the system.

Shaun Carney:

That's a good point, but we try to start the reservoir at a normal elevation. If we are starting the simulation in January, we use whatever the typical level is that we would be starting at for that point in the season.

Will Lehman:

On this point, it's about initial conditions, or rather the uncertainty in our initial conditions and when that storm hits. There are multiple ways of achieving that, such as taking continuous simulation and dropping in storms with a selected date. In general, in the HEC-WAT, we randomize initial conditions. Inside of ResSim, our pool elevations would be based on monthly or seasonally stratified distributions for the starting pool. That way, we remove the need for continuous simulation by randomly starting each event. This is okay when speaking about certain parameters like flow frequency. However, if we think about expected annual damage, there are other things within our system that have memory, such as structures, and whether or not they have been rebuilt, or whether or not people have evacuated. There are other pieces with memory other than just the natural water and reservoir operations. That question is really complex. In terms of storm typing, the best predictor we have for streamflow in a river is streamflow in that river. In some cases, the best way to get around the issue of knowing whether we should do storm typing is to look at how our system has behaved historically. We bundled up a lot of parameters into that. Within the HEC-WAT, we use correlated flow frequency curves within the hydrologic sampler to allow for that. But again, there are things that we've never seen. In order to get out to the 10^-6 or 10^-8 range, stochastic hydrology is the best approach. That means weather generation of some sort, and things like SEFM are critical to that. Selecting the wrong storm types or, if you're stratifying, selecting the wrong parameters for stratification can also lead to inefficient and inaccurate outcomes.

So, a lot of care is needed. As was said earlier, we always need to keep the human in the loop.

We can't let the computer do everything. It's only going to do as well as it's told to do.
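A minimal sketch of the seasonally stratified initial-condition sampling Will Lehman describes, with hypothetical monthly pool statistics; ResSim's actual stratified distributions are empirical, not the normal distribution assumed here.

```python
# Hedged sketch: randomize each event's starting pool elevation from a
# monthly stratified distribution instead of a continuous simulation.
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical monthly starting-pool statistics (feet): (mean, std).
monthly_pool = {1: (1020.0, 4.0), 6: (1035.0, 2.5), 9: (1030.0, 3.0)}

def sample_initial_pool(month):
    mean, std = monthly_pool[month]
    return float(rng.normal(mean, std))

print([round(sample_initial_pool(1), 1) for _ in range(5)])  # January events
```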

Moderator:

In a previous workshop, I think it was last year, we heard about a storm catalog that USACE is developing that looks at certain significant historical floods on the Missouri, or whatever river system you are concerned about. Then USACE looked at the rainfall that contributed to that flood event. We understand that USACE is developing this catalog now. When will that become available? It goes back to the comment that you have to look at the history and understand when the floods occurred on the Missouri, going back in time, and what the causative mechanism was, the rainfall or snowmelt that caused that flooding. How do you turn that into an understanding on a watershed basis?

John England:

In extreme storms, those spatial and temporal patterns are really important. We have a live database internally at USACE. It is under active development and population, and the structure is set. We used to have an open public Web site for anyone to access. It is not accessible currently to the public. We hope that the information technology portion will go through this fall, so that the public can access it just like the National Levee Database or other USACE facilities where you can have a spatial map, search for events, and pull down varying levels of information. It could be gridded information, hourly 4-km grids for a storm, or it could be just qualitative information in PDF form. We are also working with a larger group, the Extreme Storm Work Group that Tom Nicholson has been working with, to propagate this information to NOAA. USACE is working internally to develop a database to answer some questions that we have at our facilities and will share it as we can.

4.3.6 Day 2: Poster Session 2C

Session Chair: Meredith Carr, NRC/RES/DRA/FXHAB

4.3.6.1 Coastal Storm Surge Assessment using Surrogate Modeling Methods.

Azin Al Kajbaf and Michelle Bensi, Department of Civil and Environmental Engineering, University of Maryland. (Poster 2C-1); not included in these proceedings.

4.3.6.2 Methods for Estimating Joint Probabilities of Coincident and Correlated Flooding Mechanisms for Nuclear Power Plant Flood Hazard Assessments. Michelle Bensi and Somayeh Mohammadi, Center for Disaster Resilience, University of Maryland; Scott DeNeale and Shih-Chieh Kao, Environmental Sciences Division, Oak Ridge National Laboratory. (Poster 2C-2; ADAMS Accession No. ML19156A470)

4.3.6.3 Modelling Dependence and Coincidence of Flooding Phenomena: Methodology and Simplified Case Study in Le Havre in France. A. Ben Daoued, Sorbonne University, Université de Technologie de Compiègne; Y. Hamdi, Institut de Radioprotection et de Sûreté Nucléaire; N. Mouhous-Voyneau, Sorbonne University, Université de Technologie de Compiègne; and P. Sergent, Cerema (Poster 2C-3); not included in these proceedings.

4.3.6.4 Current State-of-Practice in Dam Risk Assessment. Scott DeNeale, Environmental Sciences Division, Oak Ridge National Laboratory; Greg Baecher, Center for Disaster Resilience, University of Maryland; Kevin Stewart, Environmental Sciences Division, Oak Ridge National Laboratory. (Poster 2C-4; ADAMS Accession No. ML19156A471)

4.3.6.5 Hurricane Harvey Highlights Challenge of Estimating Probable Maximum Precipitation. Shih-Chieh Kao, Scott T. DeNeale and David B. Watson, Environmental Sciences Division, Oak Ridge National Laboratory. (Poster 2C-5); not included in these proceedings.

4.3.6.6 Uncertainty and Sensitivity Analysis for Hydraulic Models with Dependent Inputs.

Lucie Pheulpin, Vito Bacchi, Nathalie Bertrand, Institut de Radioprotection et de Sûreté Nucléaire, Fontenay-aux-Roses, France. (Poster 2C-6; ADAMS Accession No. ML19156A473)

4.3.6.7 Development of Hydrologic Hazard Curves Using SEFM for Assessing Hydrologic Risks at Rhinedollar Dam, CA. Bruce Barker, MGS Engineering Consultants, Inc.; Nicole Novembre, Brava Engineering, Inc.; Matthew Muto, John Dong, Southern California Edison; Blake Allen, Katie Ward, MetStat, Inc.; and Jason Caldwell, Weather & Water, Inc. (Poster 2C-7; ADAMS Accession No. ML19156A474)

4.3.6.8 Probabilistic Flood Hazard Analysis of Nuclear Power Plant in Korea. Beomjin Kim, Ph.D. Candidate, Kyungpook National University, Korea; Kun-Yeun Han, Professor, Department of Civil Engineering, Kyungpook National University; Minkyu Kim, Principal Researcher, Korea Atomic Energy Institute, Korea. (Poster 2C-8); not included in these proceedings.

4.3.7 Day 3: Session 3A - Climate and Non-Stationarity

Session Chair: Joseph Kanney, NRC/RES/DRA/FXHAB

4.3.7.1 KEYNOTE: Hydroclimatic Extremes - Trends and Projections: A View from the Fourth National Climate Assessment. Kenneth Kunkel*, North Carolina State University (Session 3A-1; ADAMS Accession No. ML19156A475)

4.3.7.1.1 Abstract

This presentation will summarize the findings in the Fourth National Climate Assessment as well as recent work by the presenter. Extreme precipitation has increased overall in the United States.

There has been regional variability in the trends, with large increases in the eastern half of the United States and small changes in the far western United States. Future global warming is highly likely to cause further increases in extreme precipitation because atmospheric water vapor concentration will increase as surface ocean waters warm. Analysis of historical precipitation extremes provides a basis for this projection; the magnitude of extreme precipitation events is positively correlated with atmospheric water vapor content. Future changes in floods are much less certain because floods depend on a number of factors in addition to the magnitude of extreme precipitation, such as antecedent soil moisture and flood type.

4.3.7.1.2 Presentation

4.3.7.1.3 Questions and Answers

Question:

Inaudible.

Answer:

In terms of total precipitation not changing, in my interpretation of the models, when you have the right ingredients in terms of a weather system plus water vapor in the atmosphere, you produce these very heavy rain amounts. You do kind of deplete the atmosphere of water vapor, and it has to recharge itself. The recharging, or the eventual total precipitation changes in the system, is not driven by Clausius-Clapeyron, but rather by changes in the overall energy budget at the surface. Those changes are much more modest, maybe 1 or 2 percent per degree Celsius change in temperature. By virtue of using up the water vapor more efficiently in the storm systems, there's really less available for other systems that may come along later. You have a greater number of no-precipitation days, which is what the models show.
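As a back-of-envelope check on the contrast drawn in this answer (not a calculation from the presentation), the Clausius-Clapeyron relation puts the growth of saturation water vapor near 7 percent per degree Celsius, versus the 1 to 2 percent energy-budget limit on mean precipitation; the constants and temperature below are standard textbook values.

```python
# Hedged sketch: Clausius-Clapeyron rate d(ln e_s)/dT = L_v / (R_v * T^2).
L_v = 2.5e6   # latent heat of vaporization, J/kg
R_v = 461.0   # gas constant for water vapor, J/(kg K)
T = 288.0     # typical surface temperature, K

cc_rate = L_v / (R_v * T**2)
print(f"~{100 * cc_rate:.1f}% more saturation vapor per degree C")  # ~6.5%
```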

Question/Comment:

That also implies a smaller number of storms than we expect. What we get is going to be more intense. But a smaller number of storms overall.

Answer:

Generally, one of the things we showed with the models in the assessment was that if you look at a metric of drought, or dry spells, or consecutive dry days, those generally increase in the future; you have longer dry spells in the future. That's one of the counteracting effects in the models.

Question:

What about the issue of multidecadal persistence, and how that affects these trends? From my view of some of the modeling, the models do not handle the longer term, multidecadal persistence particularly well.

Answer:

You have this multidecadal climate mode variability. The Atlantic Multidecadal Oscillation is an example of one multidecadal oscillation. That's probably a stochastic process that happens in the system. It can happen in some models. But the timing of it, when it happens, is probably largely unpredictable. We have confidence that under this forcing, we should get increases in precipitation. But regionally, there could be modulation by these other factors, such as these kinds of multidecadal oscillations. It's unpredictable. But we could definitely see that in the future.

Historically, we have seen some of that; we see no change in the Southwest. It is probably related to some of the changes in the Pacific Decadal Oscillation that have been going on.

Question:

You said you are beginning to look at North American monsoons. What have you discovered in looking at the data? What is causing changes, if there are changes, in the North American monsoons?

Answer:

For the North American monsoon, we have been looking at various metrics that historically are related to the occurrence of local extreme precipitation. The one that seems to be the most robust is lower-level moisture convergence. It is a larger-scale field that models can satisfactorily simulate.

When we look at the future, we find that the number of high-convergence days, which is related to extreme precipitation, increases. From that, we would expect larger precipitation amounts during North American monsoon events in the future. Events that would normally be large anyway will become larger. Is that just because of the change in water vapor, or are other dynamic things going on? We are just digging into that now to find out whether we can separate the two and how much of it is purely a water-vapor phenomenon.

Question (from the Webinar):

What is the next step in this research?

Answer:

Right now, we are trying to quantify both the water vapor/thermodynamic part of this and the weather system/dynamic part. Our goal in the research is actually to develop future intensity-duration-frequency curves for the United States that incorporate the nonstationary process. I hope that we will have some results about a year from now.

4.3.7.2 Regional Climate Change Projections: Potential Impacts to Nuclear Facilities.

L. Ruby Leung* and Rajiv Prasad, Pacific Northwest National Laboratory (Session 3A-2; ADAMS Accession No. ML19156A476)

4.3.7.2.1 Abstract

This project is part of the NRC's PFHA research plan that aims to build upon recent advances in deterministic, probabilistic, and statistical modeling to develop PFHA regulatory tools and guidance. To provide improved understanding of large-scale climate patterns and associated changes in the context of PFHA, this project provides a literature review, focusing on recent studies of the mechanisms of and changes in climate parameters in the future, including discussions of the robust and uncertain aspects of the changes and future directions for reducing uncertainty in projecting those changes. During the first year, the project reviewed various aspects of climatic changes nationwide, the second year focused on more detailed changes in the southeastern United States, and the third year focused on changes in the midwestern United States. The current focus, during the fourth year, is on the northeastern region and on updating the first three years' reports. The Midwest region consists of eight states (Minnesota, Wisconsin, Michigan, Iowa, Missouri, Illinois, Indiana, and Ohio). The third-year report discusses observed historical changes and the projected future changes, drawing on major reports from the National Climate Assessment and peer-reviewed journal papers. Overall, mean and annual 5-day maximum temperatures are projected to increase. With increasing moisture accompanying the warmer temperatures, precipitation is projected to increase in the cool season, but the changes in warm season precipitation are not statistically significant. Despite inconsistency in mean precipitation changes across the seasons, extreme precipitation (99th percentile) is projected to increase by more than 10 and 30 percent by the end of the 21st century under the Representative Concentration Pathway (RCP)4.5 and RCP8.5 emissions scenarios, respectively. Consistent with observational evidence, a regional climate modeling study at 4-km resolution projected more than a tripling in the frequency of intense summertime mesoscale convective systems. Lake-effect snow storms are projected to increase as reduction of the surface area of lake ice with warming increases evaporation from the surface; larger warming farther into the future may shift snowfall into rain. The Great Lakes water levels have exhibited large variability historically. Models projected small variations in the lakes' levels with a large range of uncertainty, possibly indicating incomplete understanding of the lakes' water budget. Observational records show strong evidence of increasing flood frequency but limited evidence of increasing flood magnitudes. With the projected increase in extreme precipitation and storm events, flooding is projected to increase notably in the future. Both land use and climate changes affect streamflow, with greater effects from the latter. The fourth-year report on the northeastern United States is being developed to include literature review on historical and future changes in mean temperature, heat waves, wet bulb temperature, mean and extreme precipitation, extratropical storms, heavy snowfall, severe storms and strong winds (tropical cyclones, severe convective storms, lake effect snow storms),

sea level rise, storm surge, nuisance flooding, Lake Ontario water level, floods and droughts, and stream temperature.

4.3.7.2.2 Presentation

4.3.7.2.3 Questions and Answers

Question (from the webinar):

Slide 3 has no observed temperature data for the period from 1961 to 1985. Why is that?

Answer (Ruby Leung):

There are observed data. This figure is just comparing two time periods, and the study selected comparable time periods of 16 versus 18 years. It's not a problem of not having data; it is just the way they chose to select two time periods for the comparison.

Question:

The title of your presentation included potential impacts to nuclear facilities. Did you perform any climate change projections related to any particular nuclear facilities?

Answer (Ruby Leung):

No, this project is mainly about reviewing the literature that we find about climate projections or hydrological projections. We are not specifically looking at nuclear facilities. The type of extreme discussed in the literature, that exceedance probability, is not really similar to what the NRC is concerned about in terms of safety.

Joseph Kanney:

The project was designed to look into the climate science literature to find out if there is useful information for nuclear safety. We found that there is a gap in terms of what is considered extreme in the dam safety and nuclear safety worlds, versus what is typically referred to as extreme in the climate sciences. There is a difference in terminology. But we think that understanding what the climate trends are, seeing what the climate projections are, informs our work. But this project is not collecting data that we can put into an analysis at present.

Question (John England):

Rajiv Prasad, USBR has a long history of doing downscaled analyses and propagating them through VIC, a hydrological model. I want to point out a gap and see if you found any related areas in your research or in your literature review, particularly around the flows. We have had problems with model calibration. What I showed yesterday was to include rainfall-runoff models for extreme flood prediction. We want the variances of those estimates. One of the slides you showed earlier had statistical analysis of the flows for a time window of perhaps 90 days. In my estimation, those models perform quite poorly. They do bias correction at a monthly timescale. If we are not able to get the monthly flows right, and we correct them after we run the models, what can we say about the tails? Have you found anything? We are searching the climate literature with regard to this gap, such as 3-day flows, the flood, and the runoff response side.

Answer (Rajiv Prasad):

You are right that bias correction is a major issue. As far as I remember, in the study we showed, the bias correction was not done on the flows but on the climate projections, and then there wasn't that much of a rigorous calibration. It was mostly visual, trying to determine whether the spread, using those 90 realizations of daily sequences, could actually bracket what the history was doing, looking at the historical period of 3 months. That was the basis for determining that the models were performing well for the historical timeframe, and then PRISM was used. But I have not seen bias correction, particularly at the frequencies or annual exceedance probabilities used in licensing.

4.3.7.3 Role of Climate Change/Variability in the 2017 Atlantic Hurricane Season. Young-Kwon Lim*, NASA/GSFC, Global Modeling and Assimilation Office and Goddard Earth Sciences, Technology, and Research/I.M. Systems Group; Siegfried Schubert and Robin Kovach, NASA/GSFC, Global Modeling and Assimilation Office and Science Systems and Applications, Inc.; Andrea Molod and Steven Pawson, NASA/GSFC, Global Modeling and Assimilation Office (Session 3A-3; ADAMS Accession No. ML19156A477)

4.3.7.3.1 Abstract

The 2017 Atlantic hurricane season was extremely active with six major hurricanes, the third most on record. The sea-surface temperatures (SSTs) over the eastern Main Development Region (EMDR), where many tropical cyclones (TCs) developed during active months of August and September, were ~0.96 degree Celsius above the 1901-2017 average (warmest on record):

about ~0.42 degree Celsius from a long-term upward trend and the rest (~80 percent) attributed to the Atlantic Meridional Mode (AMM). The contribution to the SST from the North Atlantic Oscillation (NAO) over the EMDR was a weak warming, while that from El Nino-Southern Oscillation (ENSO) was negligible. Nevertheless, ENSO, the NAO, and the AMM all contributed to favorable wind shear conditions, while the AMM also produced enhanced atmospheric instability.

Compared with the strong hurricane years of 2005 and 2010, the ocean heat content (OHC) during 2017 was larger across the tropics, with higher SST anomalies over the EMDR and Caribbean Sea. On the other hand, the dynamical and thermodynamical atmospheric conditions, while favorable for enhanced TC activity, were less prominent than in 2005 and 2010 across the tropics. The results suggest that unusually warm SST in the EMDR, together with the long fetch of the resulting storms in the presence of record-breaking OHC, may be key factors in driving the strong TC activity in 2017.

4.3.7.3.2 Presentation

4.3.7.3.3 Questions and Answers

Because of time limitations, questions for Session 3A-3 were moved to the panel discussion (see Section 4.3.7.4).

4.3.7.4 Climate Panel Discussion. (Session 3A-4)

Moderator: Joseph Kanney, NRC/RES

Kenneth Kunkel, North Carolina Institute for Climate Studies, North Carolina State University
L. Ruby Leung, Pacific Northwest National Laboratory
Young-Kwon Lim, NASA/GSFC
Guest Panelist: Kevin Quinlan, NRC/NRO

Question (from the Webinar):

Young-Kwon Lim, you focused on years with a lot of TCs. Did you look at years with fewer TCs?

Young-Kwon Lim:

I looked at the 2013, 2014, and 2015 hurricane years. I also investigated the 2006 TC season, which was a very inactive hurricane year. The conclusion was that the climate variability was not so favorable in those years. There were very negative impacts on the TCs.

Question:

I'm a hydrologist. On the hurricane and TC activity, can you apply it, for example, to nuclear hazard questions, to refine it to particular locations? For example, if you look at the 1999 hurricane season in North Carolina, the interesting story that provided big impacts is Dennis in August and Floyd in September. We are most interested in the clustering of hurricanes and their landfall locations and subsequent impacts. Are there ways you can use the Rotated Empirical Orthogonal Functions (REOF) work you did to get some favorable conditions, particularly in the Gulf at Texas, as opposed to parts of the southern Atlantic coast? That refinement would be really helpful.

Young-Kwon Lim:

On the seasonal timescale, this analysis technique can indicate what the TC tracks may be, for example, more toward the Texas area or just the north Atlantic Ocean region without much landfall. The analysis technique can provide that useful information. But even if there is a seasonal characteristic, individual storms are influenced not only by the seasonal climate characteristics but also by mesoscale and synoptic-scale features. So, individual storms can be exceptions to the seasonal trend results from the REOF analysis technique.

Question (Ray Schneider):

Were having this meeting at the NRC, so we want to understand what the impact is going forward for the NPPs. We want to know how this information will be used. The title of the second presentation indicated that it would cover how we are using the information that we are collecting about climate change. It was not that long ago that we in the industry prepared the flood hazard reevaluation reports (FHRRs) for all the plants in the United States. You have expanded that discussion to talk about the temperature changes, which may be related to ultimate heat sink and also the potential for precipitation and ice/snow loads. From the point of view of a nuclear industry practitioner point of view, which is probably unique among people here, we are confronted about 4-374

how to screen out events and how to analyze and get quantitative information on events. We are identifying that there may be thicker tails at the 2-percent AEP and such. But we can't really wait until we get to the 10^-6 AEP level to start figuring out how to use that information. How would we start feeding some of this information back into the modeling, procedures, or guidance to make sure that we have the proper protections? How do we make sure that we understand what's important and what's not, rather than just saying things are changing?

Ruby Leung:

When we look at the climate science literature, it includes literature that looks at climate models and climate data but also includes studies that look at hydrologic modeling. As we mentioned before, almost all of the studies look at extreme events only at a scale of around 10^-2 AEP. There is a lot of interannual variability, or noise, in the climate system as well as the hydrological system.

We currently lack two things to be able to look at that AEP range of 10^-3 or 10^-4. We do not have a long enough record of data for projecting into the future. We do not have enough ensemble members to look at the 10^-3 to 10^-4 uncertainty range. The community is moving slowly towards that, but I think the gap is still very significant. Also, the information that we provide is not local yet.

Determining whether a specific location of an NPP might be experiencing more flooding, or similar, will require a lot more regional information, which is not quite available.
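For context (an illustrative note, not part of the panel discussion): an annual exceedance probability (AEP) p corresponds to a mean return period T = 1/p, so AEPs of 10^-3 and 10^-4 correspond to 1,000-year and 10,000-year events, respectively. A record of N years combined with M ensemble members provides roughly N × M years of pseudo-data, and empirically constraining an AEP of p requires on the order of 1/p years or more; a 10^-4 AEP therefore calls for tens of thousands of simulated years, far beyond the few decades of observations available.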

Follow-up Question (Ray Schneider):

But the fact is, we are operating and working with the information that we have available now, with the understanding of what's changing going forward. If we are supposed to be integrating that into our thinking, is there a process? Does the NRC plan on having a process? Or is it just basically, "There's a lot of information out there and we have to think about it"? Is it going to be turned into practice in any practical way within our timescale of the life of the industry?

Moderator:

You are entirely correct. This is a challenge. We are working to try to understand how we can incorporate some of the information from the climate science community along with what we know about hydrologic engineering practicalities. It's a struggle, and I can't tell you that we will have specific guidance on what you need to do about climate change in the next year.

Andy Campbell:

I can answer some of that question. I'm Andy Campbell, the Deputy Director in the NRC/NRO Division of Siting, Licensing, and Environmental Analysis (DSLE). The External Hazards Center of Expertise (EHCOE) comes under my purview. We have a process that was approved by the Commission called the Process for Ongoing Assessment of Natural Hazard Information (POANHI). We do get frequent questions about sea level rise and climate change impacts.

POANHI takes information from this research; it is an internal staff process, and the PFHA research is part of it. The NRC staff will use POANHI if an issue arises of a significance that could challenge the capabilities of the plants; in other words, the current design basis, the plants' flood protection, their FLEX strategies, and their mitigating strategies, similar to what we did in response to the accident at the Fukushima Dai-ichi NPP in Japan. As we look at that, we, with the help of a technical advisory committee, will make a decision. For example, does the issue need to go into the generic issues program, or is it very specific to one plant, or maybe a couple of plants?


The POANHI is being finalized. I am currently reviewing an office instruction as well as informing the Commission about this process. So, we do have a process for dealing with climate change as new information comes our way. I will note that in the Commission memorandum and order approving the license application for Turkey Point Units 6 and 7, the Commission specifically dealt with sea level rise (on pages 25 to 28). The applicant used deterministic models to come up with a storm surge level, which included some sea level rise. If sea level rises higher in the future, and it significantly changes our decision, then we would look at that in more detail. The Commission did not believe, given all the conservative decisions built into that deterministic analysis, that the sea level rise in and of itself would change our decision. But we will monitor the situation. That is the answer the Commission gave a year ago. That is the answer I give whenever I get a letter asking about how climate change impacts this or how sea level rise impacts that. We are aware. We're cognizant. We look at the new information that comes along. But does it really change the likelihood of occurrence? A lot of what you see on sea level rise in the public sphere has to do with the most extreme curve from the NOAA projections out to 100 years. There are enormous uncertainties associated with those projections, and we have to consider them.

The entire agency is going through what we call a transformative process. We are becoming more risk informed. We are now capable of reviewing probabilistic storm surge analyses. The staff went through a steep learning curve on that. We have been looking at frequency-based analyses so we can incorporate any new information that comes along, if it would significantly change and have a significant impact on the plant's ability to basically keep the core cool and stay safe. I think that's the best way to answer that. I don't know what the ultimate answer is. Some of the most extreme projections require 30 percent of the Greenland ice sheet to melt into the North Atlantic. That will take a while. If that happens, then we're going to be paying attention to it. But that's not currently what the best estimates are. Those are the most extreme. We consider the best estimates, such as the median from a probabilistic analysis, when we start looking at a risk-informed approach.

Moderator:

Kenneth Kunkel and Ruby Leung, can you fill in a bit of the background on the National Climate Assessment (NCA) to explain how the NCA is developed? Different research groups have their models. The models get put together in the Coupled Model Intercomparison Project (CMIP) collection that becomes a de facto basis. Could you lay out how that process works?

Kenneth Kunkel:

The NCA is mandated by the Global Change Research Act of 1990. It's supposed to be put out every 4 years. I've been involved in the last two of these. The U.S. Global Change Research Program (GCRP) has developed a process to nominate authors, and then GCRP selects authors for various report chapters. I've been part of a committee on the climate science side of it to develop the approaches that are used to produce foundational climate science material from climate model simulations and other derived products. Generally, this consists of people from the GCRP agencies that contribute and make recommendations. The foundational climate science material has come out of the CMIP process of the World Meteorological Organization (WMO). We are always trying to connect to these global efforts to produce these climate model simulations.

For the rest of the report, the sectoral and regional chapter authors are selected through this nomination process. Then we go through a very intensive review process that involves a public comment period and review by the GCRP government agencies. For both the third and fourth NCAs, the report was reviewed by the National Academies of Sciences, Engineering, and Medicine.

The National Academies performed two reviews for the third NCA.


Moderator:

This workshop is focused mainly on flooding. However, there are a lot of other factors and parameters when performing safety reviews. Kevin Quinlan, could you talk a little bit about some of those other things, beyond flooding, that you look at?

Kevin Quinlan:

As part of the safety reviews and environmental reviews, we really look at most hazards that could occur at the site. Speaking from a meteorology perspective, we look at things like tornadoes and tornado wind speeds, and hurricanes and potential missiles. We look at the high and low temperatures and humidity. We do look at different recurrence intervals for low and high temperature and humidity, looking at 100-year return periods. For winds, it's out to 10^-6, generally.

So usually, there is enough margin to account for the limited period of record that we have. We take into account the climate at the site to see what kind of changes have occurred. However, a weakness, I think, in our guidance right now is how we actually account for climate change in the different site characteristics. Currently, we are dealing with the climate record and trying to build in some safety margin based on the highest occurrence of a hazard that we see in a site region, projecting it out so that we have some idea of a margin that won't be exceeded during the life of the plant, or, if it is exceeded, of how that is dealt with in the actual design of the plant.

Follow-up Comment/Question (from Kevin Quinlan):

One thing I was hoping to get some of you to comment on is PMP. Kenneth Kunkel and I worked together on the Colorado-New Mexico statewide PMP study. Later this year, hopefully later this spring or early summer, we are putting out for public comment a document of considerations for site-specific PMP estimates at NPPs. It has a section on long-term climate change. With site-specific PMPs, the assumption has been, up to this point, that if climate change is being taken into consideration, if the climate is changing, it will show up in the storm record and be accounted for that way. It's never been a terribly satisfying answer to any of us (that we'll just see it in the storm record) because it doesn't account for future climate change over the next hundred years during the life of the plant. So, based on the presentations that we just saw, what is a reasonable way to account for climate change on precipitation in a site-specific PMP study, an analysis that is generally deterministic, based on the historical storm record and then maximized for moisture and location? I think part of the problem is that we do not really have a grasp on what is a reasonable way to account for it. I think we all agree that it should be accounted for, but we don't really know how. Even this guidance document that we will be putting out for public comment states, "Future site-specific PMP studies should account for the effects of climate change, especially in consideration of precipitable water." So, we're really putting the onus back on the industry. It's kind of an open-ended question. What would be a reasonable way to go about considering this?

Ray Schneider:

I was very interested in the presentation that Kenneth Kunkel from North Carolina State gave because you have the extrapolations. We have the data. To some extent, we have processes, we have good methods, and we have design methods. Now we have a little bit more information that, for example, the tails are getting a little thicker. With that information, and knowing approximately the size of the tail thickness (for example, either 20 or 40 percent on a local basis), it seems like

we should be able to come up with a reasonably comfortable way of dealing with that and not leave it up to the individual to basically say, "I believe climate change could do this" or "I believe climate change can do that." It seems like enough information is available without having to go to 10^-6 and get that data. Use the concepts of what we've learned combined with some overlay of extreme value statistics to get an idea of where we should be ending up.
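To make the suggested extreme-value overlay concrete, the following is a minimal sketch (illustrative only; the synthetic data, distribution parameters, and the 20- and 40-percent scaling factors are assumptions for demonstration, not workshop results):

# Illustrative sketch: fit a GEV distribution to annual-maximum precipitation
# and apply a simple "thicker tail" climate scaling to the 100-year event.
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(42)
# Hypothetical 60-year annual-maximum series (inches); stands in for station data.
annual_max = genextreme.rvs(c=-0.1, loc=6.0, scale=1.5, size=60, random_state=rng)

# Fit the GEV to the historical record.
shape, loc, scale = genextreme.fit(annual_max)

# 100-year event (AEP = 0.01) from the fitted distribution.
x100 = genextreme.isf(0.01, shape, loc=loc, scale=scale)

# Crude overlay: multiply the variable by 1.2 or 1.4 (equivalently, scale both
# loc and scale), mimicking a 20- or 40-percent increase in extreme precipitation.
for factor in (1.2, 1.4):
    x100_cc = genextreme.isf(0.01, shape, loc=loc * factor, scale=scale * factor)
    print(f"100-yr event: {x100:.1f} in -> {x100_cc:.1f} in at +{(factor - 1) * 100:.0f}%")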

Kenneth Kunkel:

The process of radiative imbalance in the energy budget of the atmosphere that's being driven by increasing greenhouse gas concentrations is a very fundamental aspect of what's going on in the climate system. It's warming the earth. More specifically, it's warming the ocean, and the ocean is serving as a reservoir of that heat. That will continue. You can't stop it. The other thing you can't stop is the increase in water vapor over the ocean driven by the Clausius-Clapeyron relationship between temperature and saturation vapor pressure. The future is not going to be what the past has been. The increase will continue. The degree to which it continues depends entirely on how fast we increase greenhouse gas concentrations in the atmosphere. We are virtually certain that the magnitude of that increase in water vapor depends on the pathway of greenhouse gas concentrations. But it seems like the worst assumption one can make is that it is not going to happen in the future, so that we can simply rely on the historical record to tell us when it has happened.

There's only one direction it is going to go in. It seems shortsighted to rely on historical records to tell us what's happening when we are as close to 100-percent certain what direction things will go in. We can discuss the magnitude, but there's only one direction it will go in. These really big storms seem to be controlled by water vapor concentration. We know what direction that is going in. Could the meteorology change with these systems in ways that would offset that? Perhaps, but I'm skeptical that these really, really big storms wouldn't increase in direct proportion to the water vapor concentration or water vapor availability.
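For reference (an illustrative note, not part of the panel discussion): the Clausius-Clapeyron relation referred to here gives d(ln e_s)/dT = L_v/(R_v T^2) ≈ 0.065 per kelvin near surface temperatures (with latent heat of vaporization L_v ≈ 2.5 × 10^6 J/kg, water vapor gas constant R_v ≈ 461 J/(kg·K), and T ≈ 288 K), i.e., roughly a 6- to 7-percent increase in saturation vapor pressure, and hence in available atmospheric moisture, per degree Celsius of warming.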

4.3.8 Day 3: Session 3B - Flood Protection and Plant Response
Session Chair: Thomas Aird, NRC/RES/DRA/FXHAB

4.3.8.1 External Flood Seal Risk-Ranking Process. Ray Schneider*, Westinghouse; and Marko Randelovic*, EPRI (Session 3B-1; ADAMS Accession No. ML19156A478)

4.3.8.1.1 Abstract
Preventing water from entering areas of NPPs that contain significant safety components is the function that various flood-protection components serve across the industry. Several types of flood barriers, both permanent and temporary, are used at NPPs. These barriers include external walls, flood doors, and flood barrier penetration seals (FBPSs) that allow cables, conduits, cable trays, pipes, ducts, and other items to pass between different areas in the plant. Comprehensive guidance on the design, inspection, and maintenance of flood-protection components has been assembled in EPRI's technical report, Flood Protection System Guide.12 This document includes information related to these topics for a variety of flood-protection components, while focusing specifically on FBPSs. NRC/RES has initiated a project to develop testing standards and protocols to evaluate the effectiveness and performance of seals for penetrations in flood-rated barriers at NPPs. EPRI is currently developing a qualitative risk-ranking process for plants to categorize, or risk-rank, installed penetration seals according to the likelihood and consequence of seal failure(s), considering various metrics regarding seal condition, design, and location. In addition to identifying potentially risk-significant FBPSs for prioritization of surveillance and/or replacement, plants performing an external flood PRA may use this process to identify which penetrations may need to be explicitly modeled in the PRA. The intent of this guidance is to provide a process to categorize and rank penetration seals with regard to the likelihood of failure and the significance of a loss of the penetration sealing capability.

12 Nov. 2015, no cost to members, https://www.epri.com/#/pages/product/3002005423/?lang=en-US

4.3.8.1.2 Presentation

4.3.8.1.3 Questions and Answers

Question:

I am very interested in the other side of the wall; not in the plant, but out where the backfill is. I am very interested in learning about causative mechanisms for those significant seals that are important to risk. You showed on the second slide the Blayais site in France, where some of those seals failed. Have you thought about how you can go back to EDF and ask how we could better understand what caused those seals to fail? It is not just the water level; you talked about hydrostatic load. But perched water systems could be created, and obviously subsurface condensation and local pockets of water. Could you get EDF and its contractors to say something about the causative mechanisms that caused them to fail? Could you use special geophysical methods, like electric probes, that could be associated with those seals? For example, when the water does get to a certain level or a certain condition, you would then want to have a closer inspection of these significant seals that make a difference in risk. Have you talked to the French?

Ray Schneider:

We have not talked to the French. Blayais was designed as a dry site. There was never any intention for the seals to be fully waterproof, other than for minor rain events. We can try to find more information from them. But we have talked to utilities that have tested different seal types. We have actually started to evaluate those data and found that certain seal types can take up to 80 pounds of hydrostatic head with just weeping. We also see other kinds of leakage modes, such as peripheral leaks around the outer periphery of the seal penetration where it is connected to the wall; the gap there is basically the size of the surface roughness plus a few millimeters, which seems to be consistent with the data. Certain other seals will actually lose traction and, with enough pressure, could actually be pushed out of the gap. We are trying to identify which groups they fall in and figure out what the exact pressure levels will be when dislodgement actually occurs. We think this will be enough information to basically make that kind of judgment. You can use that information to determine how leakage is going to drive the water levels inside the plant.

We have also looked at the cable issues and the impact on equipment. We will look at the potential for leakage through the cable-to-cable penetrations if they pass over equipment, and we will flag that as a potentially risk-significant item if the leakage is passing over an important running motor.

Those things are kind of built in or buried in the process; we did not want to get into that level of detail, although we are collecting some of that information. There is also a lot of anecdotal information out there, but EDF can't release it to us because third-party companies have done some of this work.

Marko Randelovic:

The information we are getting is for the test performance of new seals; the aging effects are extremely hard to quantify. I think the NRC's contractor performed the tests on new seals as well.

There is an effort among the NRC, the INL, and EPRI to potentially harvest seals in the field and characterize the aging effects on the performance of the seals. So, this project will not look into that. This is just a high-level risk-ranking process to screen out most of the seals at the plants.

Question:

Could you give some more information about the seals in the stations in France, and especially the Blayais site, as it used the dry site concept? The seals we saw that failed are mainly fire seals that

were, in fact, designed only for fire. Then the water coming into the installation had pressure sufficient to push out the seals. This could be another aspect of the French feedback. After the Blayais event, there was a large review of all the NPPs, and EDF developed the concept of volumetric protection: identify what should be protected, and then close all the penetrations around this volume just to limit the number of seals to be controlled or improved.

Did you have some feedback from that type of method?

Ray Schneider:

I have heard about the volumetric closure process but was not sure how EDF implemented it. I think to some extent it is the same. It sounds like what you described is similar to what we are recommending in terms of identifying the important seals; you'll look at the components you are protecting that are important, in our case, to the flood risk. We would be evaluating the seals for maintenance. We would not necessarily require replacement of, for example, an elastomer seal with a concrete grout seal. But clearly that kind of understanding of which seals are important will help with the ability to mitigate the event, and there are multiple ways of trying to mitigate a flood event.

Question:

It is really important to be doing this sort of situational awareness in the field and thinking about it from a systems perspective. This process is very much based on having information and knowledge about the system. How do you envision this process being used to identify where you need to look for unsealed penetrations? We know that can be a potential challenge. Can you prioritize what areas you might want to search for unsealed penetrations? My second question is, what are you thinking with regard to inaccessible seals that are hidden behind something or hard to get to, or seals with properties that are hard to understand once they are installed? With a mechanical seal, we can see what kind of seal it is and understand it, but what are you thinking with regard to elastomer seals? Third, will the process consider unknown propagation pathways, such as a barrier that may not be an external floodwall but that may suddenly become more important if there is an unknown propagation path on the other side?

Ray Schneider:

For the propagation pathways, we will look at the potential for new sources of water, such as around walls that are assumed to wall off water. We will also look at some submerged penetration seals that are internal to the plant and rank those. The last phase would be to determine whether there are internal penetration seals, not external seals, that could create new pathways that would cause an issue, and we would tag and identify those, although there is more of a standard PRA process to deal with that. For your other question, the unsealed penetrations are interesting, but we have not thought about that in particular. You want to look at where the penetrations are going through and determine what the potential impact would be. What are the seal sizes supposed to be? If we're really unsure about a penetration (we don't have that in the process yet), we could assume that it has a total flow area through it, know which areas it's going to propagate through, and then the process is the same. We would need to determine whether that is an important area in terms of the amount of water. For penetrations you can't reach to see or repair, or whose properties are unclear, you have to know their importance.

4-385

Question Clarification:

For example, a lot of these seals need to have a certain length of seal around the penetration to have resistance against blowout, and you don't necessarily know how, or whether, it has been injected to 6 inches.

Ray Schneider:

We are looking at the test data, but if the seal is injected and is basically in place and you have a leakage pathway around it, that leakage will be a function of the length of the seal. We could do some sensitivity studies on installation designs to determine how important that is, but you don't need that much installation length in order to keep the leakage rates low. Of more concern is the adhesion to the outside surface.

Marko Randelovic:

The process is designed assuming you have the seals, because we can't build a process for every single case where you have to assume that there is no seal. When they do the walkdown, they will look at those hundreds of seals and will eliminate most of them. They will have to spend more time looking into some of the more important ones and seeing if there is a seal or what kind of design is there. We did a tabletop exercise for one specific plant, and that information is available although not easy to get to.

4.3.8.2 Results of Performance of Flood-Rated Penetration Seals Tests.

William (Mark) Cummings, Fisher Engineering, Inc. (Session 3B-2; ADAMS Accession No. ML19156A479)

4.3.8.2.1 Abstract
Overall risk analyses of NPPs include the need for protection against potential flooding events, both internal and external. Typically, a primary means to mitigate the effects of a flooding event is to construct flood-rated barriers that isolate areas of the plant to prevent the intrusion or spread of flood waters. Any penetrations through flood-rated barriers to facilitate piping, cabling, and similar items must be properly protected to maintain the flood resistance of the barrier.

Numerous types and configurations of seal assemblies and materials are being used at NPPs to protect penetrations in flood-rated barriers. However, no standardized methods or testing protocols exist to evaluate, verify, or quantify the performance of these, or any newly installed, flood seal assemblies. In fiscal year 2016, the NRC implemented a research program to develop a set of standard testing procedures that will be used to evaluate and quantify the performance of any penetration seal assembly that is, or will be, installed in flood-rated barriers. Although this presentation represents a summary of the project, which was completed in September 2018, its primary focus is the results of the August 2018 testing of candidate seal assemblies using the draft test protocol. The test results were used to evaluate whether changes/updates to the test protocol were needed.


4.3.8.2.2 Presentation

4.3.8.2.3 Questions and Answers

Question:

Fire barrier penetration seals are pretty robust because they have to be tested with a fire test and then subsequent hose stream tests. Can we assume that they will remain intact during an internal flood event?

Answer:

It depends on the material. There are many fire barrier seals for which you will use a low-density material that may not have the adhesion properties (the friction properties). It can easily stop a fire. Your question does get to other parameters that are not covered by this test methodology.

That's an impact issue that is primarily an external issue, although it can also be an internal issue since you can still have material flying around in there. You could calculate the dynamic pressure of a hose stream against the seal. I do not know if that has ever been done. Even then, if you have ever watched Underwriters Laboratories run a hose test, the tester is just spraying the item to see whether the materials in there contract and cause a failure. The same kind of analogy could apply to how seals exposed to salt water perform. Is there a compatibility issue there versus a freshwater scenario? A lot of this will come down to the manufacturers stating the specific uses of these materials. For example, if you have to do a seal through bare concrete, and it does not adhere well to the concrete, you will need to apply a layer of paint or a similar substance that would improve the adhesion properties. Ultimately, the manufacturers will have to define the performance parameters, at least for new seals, and how they believe the seals will perform, and then run a test.

4-396

Question:

Why are fire barriers and fire dampers not on your list of tested material?

Answer:

Again, we are only talking about flood penetrations. You could have dual-rated flood and fire penetrations. We know fire dampers have been modified to basically become smoke dampers. In many cases, a gasket is added to prevent the migration of smoke when that damper is closed. I don't know of any tests that look at the fire or flood rating on a fire damper. This goes back to why the whole intent is to be performance based. Leakage through a seal is not necessarily failure.

What is on the other side to suck out or drain the water? If I can quantify to some degree the leakage through a flood seal based on a certain pressure parameter, at least it gives those doing the PRA a quantifiable number.
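As an illustration of turning an observed leak into the kind of quantifiable number mentioned here, a minimal orifice-flow sketch follows (the gap area, flood head, and discharge coefficient are assumed values for demonstration, not results from the test program):

# Illustrative sketch: convert a leakage gap and flood head into a flow rate
# using the standard orifice equation Q = Cd * A * sqrt(2 * g * h).
import math

def leakage_rate_gpm(area_in2: float, head_ft: float, cd: float = 0.6) -> float:
    """Estimate water leakage (gal/min) through a small opening under a static head."""
    g = 32.174                    # gravitational acceleration, ft/s^2
    area_ft2 = area_in2 / 144.0   # in^2 -> ft^2
    q_cfs = cd * area_ft2 * math.sqrt(2.0 * g * head_ft)
    return q_cfs * 448.831        # ft^3/s -> gal/min

# Example: a 0.1-in^2 peripheral gap under 10 ft of water leaks a few gallons per minute.
print(f"{leakage_rate_gpm(0.1, 10.0):.1f} gpm")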

Question:

Have you tested anything other than the polyvinyl chloride (PVC)? What was the rationale for using the PVC?

Answer:

The main rationale was ease of availability. However, some of the sleeve materials were steel.

Unfortunately, the lab had problems getting big steel pipe. Since we were testing the test, we were not specifically looking at material compatibility, although we know that's a factor. We wanted to have different materials that we could at least address and look at to see if anything obvious was evident. In this case, the elastomer did not like the PVC. It adhered much better to bare concrete than to PVC. It did fine with the steel pipe. Issues such as what you are penetrating through, what the penetrations consist of, and whether it's sleeved or not will need to be accounted for in terms of performance when you try to qualify and quantify what material to use or what is in the hole.

Question:

When do you plan to release the test report?

Answer (NRC):

A draft report has been completed. We have received Mark Cummings' comments on it.

NRC/RES will perform an internal review before sending it to the Office of Nuclear Reactor Regulation and NRO for their comments.


4.3.8.3 Modeling Overtopping Erosion Tests of Zoned Rockfill Embankments. Tony Wahl^,

U.S. Bureau of Reclamation (Session 3B-3; ADAMS Accession No. ML19156A480)

4.3.8.3.1 Abstract
A 3-foot-high physical model of a zoned rockfill embankment dam was tested in the USBR hydraulics laboratory to gain a better understanding of erosion and dam breach processes associated with overtopping flow. Erosion rates of the model embankments were evaluated from visual records, and erodibility parameters of the soils were compared to small-scale submerged jet erosion tests performed on the test embankments and other compacted soil samples. This presentation discusses attempts to simulate the test using two computational dam breach models, WinDAM C and DL Breach. The idealized erosion process frameworks of each model are presented and compared to one another and to the erosion processes observed in the physical model test.
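For background (an illustrative note, not part of the abstract): jet erosion tests are commonly reduced to the excess-stress detachment model, ε = k_d (τ − τ_c), where ε is the soil detachment rate, k_d is the erodibility coefficient, τ is the applied hydraulic shear stress, and τ_c is the critical shear stress below which no erosion occurs; breach models such as WinDAM take soil erodibility inputs of this form.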

4.3.8.3.2 Presentation

4.3.8.3.3 Questions and Answers
Specific audience questions were included in the panel discussion immediately following the presentation.

4.3.8.4 Flood Protection and Plant Response Panel Discussion. (Session 3B-4)

Moderator: Thomas Aird, NRC/RES/DRA/FXHAB
Ray Schneider, Westinghouse
William (Mark) Cummings, Fisher Engineering, Inc.
Tony Wahl, U.S. Bureau of Reclamation
Guest Panelist: Jacob Philip, NRC/RES/DE

Moderator:

In addition to our three speakers, we added one new member to this panel: Jacob Philip, a senior geotechnical engineer at NRC/RES.

Question:

The erosion process is very important in calculating the dam breach hydrograph. The hydrograph will be influenced by the submergence effect immediately downstream of the dam. Tony Wahl, did you consider this aspect in your experiment?

Tony Wahl:

I agree that submergence of the downstream side of the embankment can be important. Our intention in the way we set up our lab experiment was to try to maximize the volume of water that we could release through the embankment before we completely submerged the downstream side. We elevated the embankment somewhat above the downstream channel. In retrospect, since we did not see development of a head cut there, I wish we hadn't done that, although I don't know for certain that it would have changed what we saw. I wish we could have run it both ways, but we did not have the opportunity to do that.

Question:

Mark Cummings, did you take into consideration the jacket material of the cables that you used in your test?

William (Mark) Cummings:

In a broad sense, yes, although we didn't specifically request a particular type of jacket material. A lot of this was just spare material, or we got it from a hardware store, to try to formulate the bundles. However, we see that it can be an issue, whether it's the jacket material for the cables themselves, the material for sleeving, or the penetrant. All of that needs to be factored in.

4-419

Follow-up Question:

A lot of balance-of-plant cabling will have PVC jacketing, so adherence is an issue. You did not talk much about the internal seals, although you considered them and are testing them. Will the testing methodology that you are developing be applied to manufacturers of the internal seals as well?

William (Mark) Cummings:

We actually did have some conduit with cable that had some internal seals. I was just showing a few representative examples. When the NUREG comes out, you'll get the full matrix and see all the different penetrations that were part of the series. It wasn't exhaustive, but certainly items like internal conduit seals are all things that can be tested. In terms of the test apparatus, what you see in the methodology is just an example; a wide range of designs could be used. You could have a test apparatus specific to internal conduit seals, which would be much simpler in configuration if that was your focus, depending on whether it's a plant or a manufacturer doing the testing.

Question (from the Webinar):

The question concerns the time duration, in particular. One of the instances we may be looking at is tsunami-induced flooding. With tsunamis, there's a lot of uncertainty about the full wave runup and the maximum head height and the maximum hydrostatic pressures those seals could see before the flooding starts to taper down as the wave recedes. Is there any developed methodology behind how we can look at reasonable time durations using a graded approach with applied pressures?

William (Mark) Cummings:

We did not run any of the tests for long durations. It depends on the flood curve, or hazard curve, for your plant. We could have run these tests for 2, 3, or 4 days at a time. Obviously, we were time constrained and trying to look at as many different aspects of the methodology as we could. So, we used a step function to take it up to a maximum pressure. Some steps we held for minutes on end or much longer to try to see any changes in any of the seals or leakage rates that we were measuring; during other steps, we ramped up the pressure.

That will be a function of whatever flood scenario you are trying to mimic and that the seals you are testing would be exposed to.

NRC Response:

We designed the test procedure to be flexible enough so that you could design whatever pressure regime you wanted. It did not specify some kind of 1- or 2-hour pressure event.

Question:

You have a test facility that is able to look at a number of different seal concepts and combinations. From industry, we have seen that a lot of these concepts basically fit into classifying groups: foams behave one way, elastomers behave another way, and they behave differently with and without cables. If you have one source of testing, it could provide useful information in terms of how the seal leakage occurs and the size of the leak. We have estimates of what we are getting now, which is basically just leakage around the edges in gaps as the seals partially dislodge. You are looking for a protocol to make everyone do this individually. Why didn't you try to answer some of those

fundamental questions that might be helpful in reducing the scope of what everyone will have to do?

William (Mark) Cummings:

Although we were there to test the test, we certainly were trying, within the parameters that we had to work with, to obtain lessons learned, and the report will include some of those. For example, some of the lower density foams actually performed better in some cases than we expected. In some of those tests, you will actually see where there was leakage, especially with the elastomer around the PVC. But as we increased pressure, it behaved like some plumbing plugs, where a bolt through two washers makes the plug expand radially. We think something like that was occurring here, because as we increased pressure, the leakage actually slowed, then finally stopped, and the seal held until, in this particular case, it ejected. There will be a wide range of leakage pathways or reasons for leakage to occur. A lot of that relates to material compatibility, and some is just a function of the material itself, not necessarily the compatibility of it and its surroundings. Ideally, we would have a large research effort to test all of these aspects, but that was not the case here. Hopefully, as more tests occur and the data are made publicly available to all plants as different plants test, we can begin to develop a database, and maybe some of the manufacturers will have more knowledge of how their products perform. The proprietary nature of this is a problem. When we are trying to assess how a seal penetration will perform under certain pressures, hopefully there is a range of pressures within which the seal performs, regardless of who makes it. We can build a database and begin to assess, and those insights could be very useful and could minimize the scope in terms of what other people have to do under the protocol. One of the subjects we discussed with manufacturers was having them come up with this, because they know their products. That is, we would ask the manufacturer to develop a test using a standardized methodology that is acceptable to the NRC, so we are not testing every single seal assembly in every single configuration. Instead, based on tests that bound the issue, with engineering evaluations in the middle, the manufacturer can generally tell how the seal will react and perform.

Joseph Kanney:

It's important to keep in mind that our project was based upon one fundamental observation: there wasn't any protocol for testing.

Jacob Philip:

When we started this project, we did not know the types of seals in these areas, and therefore we made the first task to look at all the NRC documents and to determine what the seals are like, how many there are, the shapes and sizes of the openings, and how many items go through those penetrations. These considerations and the seal materials to be used were very important for us at the beginning. We were not going to actually test a seal and say that it is bad or not good. We wanted to have a protocol on testing because there wasn't any such protocol available in the literature. We felt that we needed some type of methodology to test the seals that could be applied in the lab, rather than at a site, but that can replicate a site test. At some stage, we also want to standardize the test, potentially working with an organization like the American Society for Testing and Materials (ASTM).

4-421

William (Mark) Cummings:

As part of Task 1, we tried to get as much information as possible on the types of seal materials currently in use when the plants originally responded to the NRC request for information on flood seals. Some were low-density foams that were there for fire, where the plant had installed a silicone caulk on the top just to make it waterproof. We did test a configuration such as that to see how it performed. One had just the low-density foam and one had the same configuration but with the silicone caulk, and we did see an improvement. We did try to look at some of the configurations that might be nonstandard, which some plants had been trying to use to develop their flood mitigation strategies. But these are all things that we will look into more.

Question:

When the project is over, have you considered turning over the testing procedure and your information to ASTM and finding out if there is enough interest in the industry to pick up on what you started and to actually do an ASTM standard? I'm sure others are worried about cable integrity and seals. Here in Washington, the organization that operates the Metrorail system is worried about this. By starting an ASTM standard, based upon the work you began, other groups and industry might contribute and decide to think about testing procedures.

William (Mark) Cummings:

Yes, I think that is one of the NRC's goals for the protocol/methodology.

Moderator:

It wouldn't be unprecedented. It has been done for other research projects.

Jacob Philip:

We have done that before. When we were working on the high-level waste repository, we developed a lot of guides for issues related to rock mechanics and high pressures and high temperatures that were not available in the literature at the time. We worked with ASTM to develop some of those guides. There is a precedent, and we can actually do that for this particular function if there is really interest in it.

William (Mark) Cummings:

We tried not to make the protocol and the test apparatus very specific. They are simply one of many ways to do it. Organizations like ASTM and Underwriters Laboratories do like to have set protocols. If we move that way, I hope they can develop something flexible enough to allow a range of test apparatus designs for testing certain penetrations, hopefully in a more cost-effective manner.

Comment from Andy Campbell (NRC):

This is Andy Campbell, NRC/NRO DSLE. Remember that this project originated when the NRC accompanied the industry on a walkdown of the plants. We found a lot of issues with seals: seals degraded, seals not there. There was also the St. Lucie Plant video that went viral after the walkdowns had taken place. Seals are a focus of importance for a variety of reasons. Ultimately, it is up to the licensees to ensure that their systems are appropriate. I think this provides important

information. But in and of themselves, research reports are not regulatory requirements; they can become requirements through a process, but they are not necessarily regulatory requirements. So again, I think this is great information for the industry. I believe the industry has an ongoing effort to come back to the NRC with how it will approach seals for the future. But it is ultimately the licensees' responsibility, and they have the flexibility to show how their seals will continue to hold up.

4.3.9 Day 3: Session 3C - Towards External Flooding PRA
Session Chair: Joseph Kanney, NRC/RES/DRA/FXHAB

4.3.9.1 External Flooding PRA Walkdown Guidance. Andrew Miller*, Jensen Hughes; Marko Randelovic*, EPRI (Session 3C-1; ADAMS Accession No. ML19156A481)

4.3.9.1.1 Abstract
As a result of the accident at Fukushima Dai-ichi, the need to understand and account for external hazards (both natural and manmade) has become more important to the industry. A major cause of loss of alternating current power at Fukushima Dai-ichi was a seismically induced tsunami that inundated the plant's safety-related SSCs with flood water. As a result, many NPPs have reevaluated their XF hazards to be consistent with current regulations and methodologies. As with all new information obtained from updating previous assumptions, inputs, and methods, the desire exists to understand the changes in the characterization of the XF hazard and the potential impact on the plant's overall risk profile. This has led to an increased need to develop comprehensive external flooding probabilistic risk assessments (XFPRAs) for more NPPs. One of the steps in developing an XFPRA is the plant walkdown, which is the central focus of this research. This research provides guidance on preparing for and conducting XF walkdowns to gather the necessary information to better inform the XFPRA process. Major topics addressed include defining key flood characteristics, preparing for the pre-walkdown, performing the initial walkdown, identifying the need for refined assessments or walkdowns, and documenting the findings in a notebook. This guidance also addresses walkdown team composition, guidance on useful plant drawings, and utilizing previous walkdowns or PRAs to inform the XF walkdown process.


4.3.9.1.2 Presentation

4.3.9.1.3 Questions and Answers

Question:

Are there any plans for pilots for the walkdown and the other guidance?

Answer:

We did try to pilot the walkdown guidance, but the interest in flooding in the United States and internationally is fading; it is very low. We were unable to find anyone who would be willing to actually pilot either the walkdown guidance or the penetration seal project. EDF actually reviewed the document and provided very valuable comments. But we do not have a pilot plant.

Question:

How does the guidance consider combined effects, such as a high wind with storm surge or heavy rain?

Answer:

The guidance is not intended to tell you what to evaluate. It lists the things that need to be evaluated, or verified as having been evaluated. For example, in the United States, the combined hazards would have been included in the FHRR. The walkdown guidance says to find your scenarios based on what is in that guidance already and what is required to be evaluated. There is always some wind speed attributed to flooding events, and that would need to be included in the FHRR, which is considered during the PRA review.

Question (Suzanne Dennis, NRC):

In your slides, you had a bullet on defining risk-significant SSCs before you do the walkdown. Is that based on your internal events PRA? Does the guidance include a methodology for determining those?

Answer:

The guidance talked about different methods. People have different ways to do this. One of the ways is to go into your internal events model, strip out the event tree branches or initiators that don't matter, set loss of offsite power to "true," and see which items are shown as being important. For sites that will do a flood PRA, they already know that they have a problem. They have done a lot of work already to identify which pieces of equipment are considered critical. That's the other part of what this guidance is saying: look at the mitigating strategies assessment (MSA) and the focused evaluation and see what's identified as necessary during the flood.

Question:

Would the walkdown identify potential pathways related to FLEX? Would the walkdown also look at external flood-related manual actions?

Answer:

Yes. The MSAs were done back in 2014-2015. Some are not yet finished. But the MSA should have been reviewed. If part of your strategy is that you have FLEX to mitigate a flood

and the effects of the flood, then yes, it would identify those penetration fields that should be reviewed. Make sure there are no pathways that would defeat implementing FLEX during a flooding event. For operator actions, the guidance does talk about taking environmental factors and inundation pathways into account and making sure that those operator actions are not impacted by the effects of the flood.

Question:

We have two regional response facilities around the United States, and some of these sites will be inundated for 7 days. However, they need the ability to get that equipment from the regional response facilities to support sites. Do your PRA walkdowns account for that? Or how are you accounting for it? The roads and bridges will not be available.

Answer:

That is far beyond the walkdown scope, but I will answer anyway based on my involvement in writing the guidance for the focused evaluation and MSA. Essentially, every site needed to look at what its coping capability is during and after a flood. You have to have a safe, stable end-state for the focused evaluation and graded assessment. I believe that term is "indefinitely," or at least long enough to let the flood recede. When preparing for the walkdown, you create a scenario; you should have already looked at what you are using to mitigate a flood, whether that is on site or not. Everyone has capability for FLEX on site as well, according to the MSA. Everyone has already evaluated that they have at least one train of systems available on site that can be implemented during the reevaluated floods, whether that's LIP, storm surge, dam break, or some other event. The regional response would only be required if the plant did not have an installed capability or an onsite portable capability. The guidance does not explicitly talk about regional response for FLEX. But if that is something that the plant would rely on, and it is part of its strategy, then the plant would have to consider that.

4.3.9.2 Updates on the Revision and Expansion of the External Flooding PRA Standard.

Michelle Bensi*, University of Maryland (Session 3C-2; ADAMS Accession No. ML19156A482)

4.3.9.2.1 Abstract
Recently, a significant effort has been undertaken to revise the American Nuclear Society/American Society of Mechanical Engineers (ANS/ASME) XFPRA standard. The draft standard is currently in the process of being reviewed and revised through the standards development process. The proposed revision is significantly more detailed than the existing standard, reflects lessons learned since the last revision, and recognizes the current state of practice. This presentation will share insights from the development of the proposed revision as well as other recent experience with various aspects of XFPRA.


4.3.9.2.2 Presentation

4.3.9.2.3 Questions and Answers

Question:

There is probably a delicate balance between the screening that should be done during the hazard analysis and the PRA team that is taking that hazard analysis on board. I think there needs to be some thought about how the screening duties may be divided between those two groups.

Answer:

There is an entire technical element that is done on the hazard. It does not matter if the hazard analysis was done before or done fresh for the PRA; the requirements have to be met.

Within that hazard technical element, there are requirements for screening. If you are using analysis, and you have done screening, it needs to meet those requirements, even if it was done 5 years ago.

Question:

The NRC is in a transformative era, an era of risk-informed decisionmaking. All the probabilistic approaches and dealing with uncertainty are incredibly important now. People are now bringing in much smaller reactor designs or even looking at microreactor designs. All the reviews need a graded approach. We have to have an approach that considers what can go wrong, how likely it is, and what the consequences are. I think this PRA standard is timely. We've received reviews under Title 10 of the Code of Federal Regulations 50.69, "Risk-Informed Categorization and

Treatment of Structures, Systems and Components for Nuclear Power Reactors," and are going through a process for storm surge. We are seeing a variety of risk-informed, performance-based approaches. My question: does the geology part of this consider volcanic hazards?

Answer:

I believe volcanic hazards will go under Part 9 of the guidance. If it is landslides, it might go under Part 5, but probably under Part 9 because Part 9 is a catchall. It could also be screened under Part 6.

Question:

What do you consider to be within the scope and outside the scope of your PRA standard? For instance, do you consider ground water flooding or combined events? How do you determine what is fair to include? You said one of the dilemmas is that certain groups have to assume responsibility because they are combined events. How do you determine what is within the scope and what is outside the scope? I hope that you do not just use screening to determine what is in and out of the scope.

Answer:

This is not a final standard yet; we are still revising it. A combination, such as a hurricane that can induce a storm surge event, LIP, and a river flood because you happen to be located in an estuary, would fall under Part 8 because it is all within flooding. If it crosses between hazard groups, such as a seismic event that induces a flood, then we have to worry about that assignment of responsibility. Right now, if the seismic event is affecting the plant and also causing a flood, it is considered within the seismic PRA because the seismic event is affecting the site.

But if it is a seismic event that happens to fail a dam 500 miles away and it does not also affect the plant, then that is within the scope of flooding. With regard to high winds, we are still making sure we know where winds are handled. In practice, the scope will be defined by whoever gets that PRA done first. But I think that's the pragmatic aspect of how it will be done. If you never did a flooding PRA, then you are not going to deal with high winds in your flooding PRA. The guidance does have a note to think about other hazards that we didn't catch. We do not have requirements directed specifically at ground water; we do have specific requirements directed at storm surge. But that is not to say that a hazard that doesn't have a specific technical requirement is out of scope in terms of analysis.

4-445

4.3.9.3 Update on ANS 2.8: Probabilistic Evaluation of External Flood Hazards for Nuclear Facilities Working Group Status. Ray Schneider*, Westinghouse (Session 3C-3; ADAMS Accession No. ML19156A483)

4.3.9.3.1 Presentation

4.3.9.3.2 Questions and Answers

Question:

You say that ANS 2.8, "Determining Design Basis Flooding at Power Reactor Sites," is now being expanded to all nuclear facilities because DOE wanted you to look at other nuclear facilities. Will fuel fabrication facilities, in situ leach mining, waste storage, the Hanford Site with its tanks, and similar facilities be included in the scope? If the scope really includes all nuclear facilities, that changes the objective of why a flood may have consequences.

Answer:

What we are looking at is the frequency, or the frequency and the hazard. If it is a lower-risk facility, and you could justify it, you still could just have a much simpler PFHA. You can still go through the process; determining what the issues are then becomes a Level 1 analysis, and maybe you pick your criteria as 10^-2 or 10^-3, and then the analysis does not have to use any of the extreme value methods. You do not necessarily have to make everything 10^-6 if they are different levels of facilities.

Follow-up Question:

The issue is that, with nuclear power sites, the emphasis was always on safety, not environmental considerations. If you expand the standard to cover all nuclear facilities, depending upon what happens, the flood may cause consequences that have both safety and environmental impacts. It changes the scope of a PRA dramatically with regard to consequences.


Answer:

This is not a PRA. It is meant to identify your hazard at the site and for you to take the appropriate protective measures for the level of the site. It is not necessarily tied to the PRA, but you can use it for a nuclear site PRA.

Comment (Joseph Kanney, NRC):

To provide some clarification: this is not meant to support environmental reviews. We do environmental reviews for NPPs anyway; we do that with different processes. This is to support safety analysis. With regard to different types of facilities, that is why the matrix is there, so that when you are deciding what level of analysis you need to perform, it will be based upon the safety category of the facility. That safety category is based upon the consequences of an unmitigated release from that facility and is borrowed from other American National Standards Institute standards; couple that with the complexity of the hazard that you are looking at, and the matrix provides some guidance about what level of analysis you need to do for the facility in question. The matrix was developed to provide some way to choose what level you need for different types of facilities. Obviously, an NPP is always going to fall into the highest safety category.

Question:

Will the standard give any guidance in relation to making a distinction between aleatory and epistemic uncertainty? There are those who will argue that epistemic uncertainty does not really exist but is actually an artifact of the limitations of our phenomenological knowledge, and therefore we use it. At the other end of the scale, I can build a model and, depending on how I partition the model, put the boundary between aleatory and epistemic within the model. Since the two things work through the risk analysis differently, you can get different answers depending on how you define the boundary between aleatory and epistemic. Will that be sorted out?

Answer:

That will depend on the models and the hazards that you are dealing with. I don't think there's really detailed guidance on that.

Comment (Joseph Kanney, NRC):

For the purposes of using the standard, we do define what we consider aleatory and epistemic with respect to the standard. Your comment is correct, that in some sense, how you partition actually depends upon what modeling tools or analysis tools you are using. But we considered that you will have a fixed set. You can use the definitions that are in the standard to chunk it into aleatory or epistemic. Although your comment is well taken, I don't think that's an issue that we could solve in the standard, although we did spend hours talking about it.
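
As one illustration of why the partition matters in propagation, a minimal two-loop Monte Carlo sketch (all distributions and parameter values are invented): epistemic uncertainty is sampled in an outer loop and aleatory variability in an inner loop, so moving a quantity from one loop to the other changes how it flows through the analysis:

    import numpy as np

    rng = np.random.default_rng(42)

    # Outer loop: epistemic uncertainty in model parameters (hypothetical).
    # Inner loop: aleatory variability of the annual maximum flood itself.
    n_epistemic, n_aleatory = 200, 10_000
    level = 10.0  # flood level of interest, arbitrary units

    exceed = np.empty(n_epistemic)
    for i in range(n_epistemic):
        mu = rng.normal(1.5, 0.1)           # uncertain log-mean (epistemic)
        sigma = abs(rng.normal(0.4, 0.05))  # uncertain log-std (epistemic)
        annual_max = rng.lognormal(mu, sigma, n_aleatory)  # aleatory
        exceed[i] = np.mean(annual_max > level)

    # The spread across outer samples is the epistemic uncertainty band.
    print(f"mean exceedance probability: {exceed.mean():.2e}")
    print("5th/95th percentiles:", np.percentile(exceed, [5, 95]))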

Question/Comment (Andy Campbell, NRC):

We have provided comments and we will provide more comments on this standard. When you talk about a SSHAC process for seismic, most people think seismic SSHAC, Level 3, which is a very involved process. We now have guidance, NUREG-2213, Updated Implementation Guidelines for SSHAC Hazard Studies, issued October 2018, for SSHAC Level 1 and Level 2 processes. In our minds, we are thinking SSHAC Level 2 is the upper limit for a flooding type of analysis. You have to consider the uncertainties and the competing models. For flooding, that's a narrower field than for seismic, generally.

Answer:

That comment would be very useful as a review comment on the issue. These are consensus standards, and we certainly want to consider that.

4.3.9.4 Qualitative PRA Insights from Operational Events of External Floods and Other Storm-Related Hazards. Nathan Siu, Ian Gifford*, Zeechung (Gary) Wang, Meredith Carr and Joseph Kanney, NRC/RES (Session 3C-4; ADAMS Accession No. ML19156A484)

4.3.9.4.1 Presentation

[Presentation slides not reproduced here; see ADAMS Accession No. ML19156A484.]

4.3.9.4.2 Questions and Answers

Question:

Given the unique nature of some of these events, the current PRAs are not set up to deal with losses of offsite power (LOOPs). Do you have any comments on how well you think the current suite of PRA tools is adapted to capturing the risk contributions from these types of events, or to making sure that the insights we get out of a PRA are going to point us to these types of potential issues?

Answer:

This is something we talked about a lot as a group. We had a lot of discussions, specifically on LOOP; your PRA model is not going to be able to model that scenario. But does what you are modeling bound all other conditions? We were trying to think of an example: when you have already had a LOOP, and you have recovered power, is a second LOOP a short time after that worse than being in the original LOOP? We were playing around with those ideas and determining what would be most limiting. Our team could not think of an example where the subsequent LOOP would necessarily challenge the plant. We did find some issues with the reliability of power coming back on during a multi-LOOP scenario. There are the power recoveries, especially when you have your PRA data and you are looking at the mean recovery times. There was some confusion, occasionally, on the definition of when power is recovered. If you are having continuous LOOPs, technically, you have recovered power in order to have another LOOP, but is the power reliable? Can you count on that power supply? We did not come up with an elegant answer for how to analyze that. But it did raise the question: are the data actually limiting enough to rely on for a realistic PRA?

Question:

Whenever there is a LOOP event, EPRI has its own reevaluation, and the NRC has its evaluation. Often, they do not agree because of the instability of the power. Did you look at both of those to see if following this guidance would mean that it is a longer LOOP or a longer duration? I think some of that is there, but the only way it really affects the PRA models is how you estimate recovery time.

Answer:

No, we did not directly look at that. But we did look into the significance of some of these findings and thought, let's assume that we could not use this recovery time. Let's use a longer recovery time in the model. Does it impact our result enough to be worth pursuing? We did not find evidence that it was worth digging into for this project, but we noted it as something for the future. We were also interested in looking at where the data come from. When plants are reporting the recovery time of these offsite power recoveries, what guidance do they have? When can they call it recovered? But we did not go in and look at different tables.
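
A hedged sketch of the kind of sensitivity check described here, with an invented lognormal recovery-time model and an invented 4-hour coping time standing in for plant data:

    import numpy as np

    rng = np.random.default_rng(0)

    def prob_not_recovered(median_hr, coping_hr=4.0, n=100_000):
        """P(offsite power still unrecovered when the coping time runs out)."""
        recovery = rng.lognormal(np.log(median_hr), 1.0, n)
        return np.mean(recovery > coping_hr)

    # Stretch the assumed recovery time and see whether the answer moves
    # enough to matter (all numbers hypothetical).
    for median in (0.5, 1.0, 2.0):
        print(f"median recovery {median} h -> "
              f"P(no recovery by 4 h) = {prob_not_recovered(median):.3f}")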

Follow-up Answer (Suzanne Dennis, NRC):

The NRC and EPRI are collaborating on our LOOP and LOOP recovery data. This will be covered in a joint report, so you won't have to worry about any of those decisions.


Question:

Ian Gifford combined flood with electrical systems. I can't help but think of Benjamin Franklin; they're always dangerous together. You talked about a PRA bounding analysis. I'm familiar with a deterministic approach in which you have a conservative bounding analysis, but what do you mean in the PRA world? Because usually, SHAC-F is meant to look at the full complement of alternative conceptual models, and the significant processes and consequences are determined through that analysis. Bounding often implies worst case: I do that analysis, and if I clear that, then everything else is okay. What do you mean by PRA bounding analysis?

Answer:

When we have LOOP after LOOP after LOOP, is there any worse initial plant condition? If you are at 100 percent power and you have a LOOP, and then you recover power 30 minutes later, is there any reason that would be a more dangerous initial condition for the plant to be in than at 100 percent power? Our focus is on identifying whether there is anything worse than being at full power when you have a LOOP, and is there any reason why having 30 minutes of decay heat taken off the core would put you in a worst-case scenario? Our group could not think of a reason.

That is what I meant by bounding.

Follow-up Answer (Joseph Kanney, NRC):

We also made observations about some of the high-wind events, related to the risk of a tornado striking a plant. When you actually look at many of the events, they were actually tornado outbreaks, which could have affected facilities in the wider area around the plant that you might be relying upon, and that might have some implications for how the plant responds.

Follow-up Answer (Ian Gifford, NRC):

I think that was the Browns Ferry Nuclear Plant in 2011. We talked a little bit more about that in the paper, but it was more of a cluster, and so you had simultaneous or near-simultaneous tornadoes across multiple States. That had interesting offsite consequences, and you could actually envision a scenario where multiple sites with multiple units are affected.


4.3.9.5 Towards External Flooding PRA Discussion Panel. (Session 3C-5)

Moderator: Joseph Kanney, NRC/RES/DRA/FXHAB
Panelists: Andrew Miller, Jensen Hughes; Michelle (Shelby) Bensi, University of Maryland; Ray Schneider, Westinghouse; Ian Gifford, NRC/RES
Guest Panelist: Suzanne Dennis, NRC/RES
Guest Panelist: Jeremy Gaudron, EDF

Moderator:

For the panel discussion for this session, we have two guest panelists. Jeremy Gaudron from EDF is a hazards probabilistic safety analysis engineer. Suzanne Dennis is a risk analyst in our PRA branch in RES.

Question (from the Webinar):

In updating ANS 2.8-1992 for guidance on performing flooding PRAs, there was not any mention of NUREG/CR-7046, Design-Basis Flood Estimation for Site Characterization at Nuclear Power Plants in the United States of America, issued November 2011 [13], which was the guidance used to develop the flood hazard reevaluation reports (FHRRs). NUREG/CR-7046 appears to carry over a lot from ANS 2.8-1992. Will it be used in updating the ANS standard? Also, other guidance documents were used during the post-Fukushima flooding evaluations, such as Nuclear Energy Institute (NEI) 16-05 (External Flood Assessment Guidelines, Rev. 1 [14]). It appears this would be vital information for screening processes, as a similar screening process was used in the focused evaluations or integrated assessments, as required, in accordance with NEI 16-05. This was not mentioned in the presentation. Would the completed post-Fukushima flooding reports be used as leverage in moving forward with the PRA, and would the people who were involved in developing those reports be consulted as needed, given that a lot of work went into developing them? It would appear this work would also carry over into the PRA.

Ray Schneider:

It varies. In terms of the PRA, everything you just said will be considered. The new ANS 2.8 does not specifically reference NUREG/CR-7046. That would be the follow-on to ANS 2.8-1992.

However, in the new ANS 2.8, we shift the methodology towards a probabilistic hazard approach, and that would be more of an extension of the existing standards. The goal was to move it into a probabilistic environment. When you deal with the PRA, though, everything is included.

Michelle Bensi will confirm that we will look at all the information that's available, because it includes the human factors, the integrated assessment, all the protective equipment, and all of that information. That would be in the PRA itself. But ANS 2.8 is just for the hazard.

[13] NUREG/CR-7046, Design-Basis Flood Estimation for Site Characterization at Nuclear Power Plants in the United States of America, issued November 2011, ADAMS Accession No. ML11321A195.

[14] NEI 16-05, External Flood Assessment Guidelines, Rev. 1, ADAMS Accession No. ML16165A178.

Joseph Kanney:

NUREG/CR-7046 is a document that was used as guidance in the post-Fukushima flood hazard reevaluations. But that document addresses deterministic methods. The new ANS 2.8 is probabilistic, so NUREG/CR-7046 isn't really applicable. Some of the other documents produced during the post-Fukushima process address some risk and probability, and to that extent they will be relevant.

Michelle Bensi:

Maybe the screening input?

Joseph Kanney:

Yes, there was a part in NUREG/CR-7046 about screening.

Question:

Two of the talks today were about the Blayais site. It suffered through a flood. We saw that there was subsurface flooding. There was talk about failures. But EDF made corrections. They evaluated the consequences of that flood and they made repairs. They did things to lower the flood hazard at that site. The next time there was a flooding condition, the plant survived that, and everything was fine. Vincent Rebour told us about guidance that was put forward after that flood.

Could you give us some insight as to EDF's approach with regard to mitigation of flood hazard at that site?

Jeremy Gaudron:

Following Blayais, EDF defined a lot of different flood mechanisms and tried to put some mitigation means in front of each of them. We have some riverine flooding, we have some waves with storm surge, we have a lot of things. As Vincent Rebour already said, we put some big protections in place. For the volumetric protection, that covers all the buildings with safety SSCs. We protect them up to the 10^-4 water level so that water cannot come in up to that level. We also have some perimeter protections, so that almost all the plants are like a bathtub. After the accident at the Fukushima Dai-ichi NPP, we added some more protections. We added some lower protections in front of all building entrances to protect from LIP. We will have some higher protection to cover the 10^-5 water level on sites.

Moderator:

How did the experience at Blayais translate into changes in how probabilistic safety assessments (PSAs) are performed? Are there insights or actual changes to how they're performed at EDF or France in general? Are there significant changes that came from that?

Jeremy Gaudron:

We are just beginning to consider PSA, so that's quite new for us. Blayais did not change anything, because we did not have any external flooding PSAs beforehand. The first PSAs were released last year, 20 years after the Blayais event. After Blayais, we also improved onsite procedures and guidance to train and enhance how people react to such events. For the first PSA we performed last year, Blayais was not the focus. We integrate all our knowledge in the PSA, and Blayais was just part of the knowledge.

Moderator:

Over the last few days, we have looked at several different types of flooding hazards. We have talked a little bit about coincident or correlated processes or flooding events. We have talked a little bit about the fact that for flooding, there may be a very significant component of manual actions or procedures. What do the panelists think about which one of these we tackle first? Since risk assessment is about trying to find the most risk-significant thing and work on that, in terms of balance between analyzing this extended use of procedures and manual actions versus getting the probability right on a particular flooding event, what do you think is the most important in terms of safety?

Michelle Bensi:

I don't necessarily think getting the probabilities right is most important. I think it is much more important to really understand how the hazards are affecting the plants and to be very realistic about the operating experience with regard to the potential for water where you don't want it, from unsealed penetrations, and some of the human action challenges. I think it is important to understand those things and not assume that everything will work the way you wanted it to; work with an understanding that things will go wrong. Working through that thought process is the most important thing. That being said, when it comes to the numbers, my biggest concern is how they might be used for screening. I think they become most important when numerical frequency-of-exceedance arguments are being used to screen out a hazard, because then we don't get any of those insights with regard to how the hazard affects the plant and what could go wrong. So as much as I spend my whole life calculating frequencies on floods, I think that it's most important that things don't get screened out improperly, because we really just need to understand how these things affect the plants.

Ray Schneider:

Yes, screening is one issue. Human factors and organizational responses are also important.

Floods encompass a whole variety or spectrum of events, and you have human actions that deal with them. For example, putting up the floodgates, making the plant less vulnerable to the hazard that's coming, and knowing when to establish your triggers are important. When we did a flood analysis for one of the plants, having to deal with an event that they knew was coming, the highest risk scenario that we came up with involved the plant's resources and not being able to put all the flood protections in place for other reasons. This then caused some vulnerabilities that should not have existed. Had we started earlier and had proper organizational behavior, we would have a lower risk, and that portion of it could diminish. I think we should focus on the organizational behavior, the human hazards: not so much the response, but a lot of the preparatory actions that can be taken in advance. I'm also concerned that we put too much faith in the numbers if they are too low, like at Blayais. For example, you assume you have a 10^-3 site and put in a wall. Now it's a 10^-4 site, or maybe it's a 10^-5 site. But what happens to the event that occurs beyond that? Have you hardened the facilities? Do I still have, for example, fire seals in the area of the electrical penetrations, or do I now have flood seals there? Those are the kinds of things that provide the protection that you need. I think these are some of the insights to carry forward.
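
A back-of-envelope illustration of that point, with invented numbers: the wall lowers the frequency of floods reaching the site, but the risk from the beyond-wall event is then set by whatever was not hardened:

    # Hypothetical numbers: a wall protects the site up to the 10^-4/yr flood.
    overtopping_freq = 1e-4    # /yr, floods exceeding the new wall
    p_fail_fire_seals = 1.0    # penetration seals not rated for flooding
    p_fail_flood_seals = 0.1   # flood-rated seals, illustrative
    print("with fire seals: ", overtopping_freq * p_fail_fire_seals, "/yr")
    print("with flood seals:", overtopping_freq * p_fail_flood_seals, "/yr")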


Andrew Miller:

I've given up waiting for consensus on how to do frequency analysis for any type of flood. I gave that up maybe 7 years ago. I decided we need to move on and think of another way to do it. I couldn't agree more with Ray Schneider about operator actions and procedures, if that's something that your plant relies on. Inappropriate screening and losing the insights is certainly a good topic to address. But since the accident at Fukushima Dai-ichi, we've gotten a better grasp of our hazards. A lot of sites have upgraded their strategies and maybe are able to handle their PMF, which, presumably, would be a low frequency. PRA should really be focused on how we get benefit and what insights we learn in order to increase actual safety. If it's a human action or organizational response, I think that is definitely where it is important: doing reasonable simulations. Every time we talk with operators, they say that having clear procedural guidance and having training seems to be the most important thing. I would agree: know what to do when floods come.

Jeremy Gaudron:

I also agree with that. One of the main lessons learned in our first external flooding PSA is that whatever the hazard curve is, whatever your protection heights are, at the end you always find a level where there is a cliff edge effect. So what matters is how you prepare for such events, all the ways to protect your plants, and how you train people to be as well prepared as they can be.

Ian Gifford:

When we were doing the project covered in the presentation, one of the events that I paid particular attention to was Turkey Point Nuclear Generating Station during Hurricane Andrew. Although this is a 1992 event, I think some of the things we saw are still relevant today. One of the most important things they had was early notification of a hazard, so they could begin preparations. I think early notification is really key. We also saw the issues with human factors, which people have already mentioned. I think it's important to have as many of the features as possible designed into the plant before the hazard occurs. Turkey Point was without offsite power for 5 days. They had a lot of plans for actions that operators would be able to take during the LOOP. It turned out that there was such wide-scale damage at the plant. A lot of the plant operators at that point were also concerned about their homes and their families; there was no communication. Relying on them to follow procedures as they normally would is a bit unrealistic. Also, so much of the equipment that they would use for mitigating strategies was damaged in the event. They did not even have trucks, because many of them had been wiped away, and there were downed power lines all over the plant. They ended up throwing large chains across the power lines to see whether they were energized or not. These are the things that I'm not sure are always considered when you're saying, "Well, if this happens, we're going to take out this procedure and do this." I mean, they ran a food drive to feed the first responders. So, although that was 1992, I think a lot of those lessons still apply today.

Suzanne Dennis:

Your original question was, "What should we do first?" I think we have the benefit of things happening first, whether or not we think we have made a decision. Human reliability is important.

We are already looking at ex-control-room actions, so that's an area that's being explored outside of flooding. We will be able to pull those insights into the flooding world without flooding necessarily needing to be the initiator. I think some things are just going to happen that aren't flooding specific.


Moderator:

Could you say a little bit more about those other ex-control-room activities that are being analyzed that might be useful for flooding?

Suzanne Dennis:

NRC/RES is currently doing a study to look at how to model human reliability for ex-control-room actions. The PRA assumes a lot about what's going on in the control room: you have a heating, ventilation, and air conditioning system; you have a floor; things that you might not necessarily have elsewhere, especially in some of these more extreme events.

Moderator:

Andrew Miller, you mentioned that, in preparing for the walkdown, looking at the mitigating strategies assessments could be valuable. What value, and what particular elements, have you seen?

Andrew Miller:

In a deterministic world, maybe mitigating strategies are not part of your flood strategy that you would take credit for. But in a PRA, you can certainly include additional capability regardless of whether it fits into a design basis or a licensing basis. So, with mitigating strategies, since everyone had to do a mitigating strategies assessment (MSA), the first place we'll look if there are some issues with current flood protection is the mitigating strategies. When we do those walkdowns, can we reasonably get the equipment where it needs to go? Can it be hooked up? Are there procedures? I can't stress enough the importance of conducting reasonable simulations, where folks go out and actually do it and you watch them do it. Those types of things are invaluable to all PRA analysts, especially for human reliability analysis (HRA). That's what we would be looking for: going through the MSA to understand and confirm that those assumptions are still valid even in a probabilistic framework, not just our deterministic box that stops here.

Moderator:

In terms of incorporating the flood hazard information into the PRA itself, we talked a lot over the last several days about the need to incorporate the uncertainty in your flood hazard. When it comes to inserting that into the PRA, I think it's probably a lot simpler if you are using mean values. If the hazard analysts are giving you this full family of hazard curves with the epistemic uncertainties as well as the aleatory uncertainty, one of the first things that the PRA analysts are probably going to do is try to simplify it. How would you simplify that, because I think in some cases, the hazard analyst may be giving you too much?

Andrew Miller:

To my knowledge, no one has actually run a full external flooding PRA with the uncertainties propagated, but the parallel to seismic would be to run your uncertainty analysis with whatever parameters they are, and then you find that the hazard dominates the uncertainty. Are those uncertainties acceptable? From what I've seen, most people are not quite sure what to do with that. Then the uncertainties in the hazard curve get stripped out to look at other things: are we sensitive to, or do we have high uncertainties with, operator actions? A lot of times, to simplify it, the hazard gets removed to get the insights because it usually dominates the entire uncertainty analysis. So, you sometimes can mask those other things if you don't know how to take those out and relook at them.
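
For concreteness, a minimal sketch of the usual collapse from an epistemic family of hazard curves to the mean curve that PRA quantification typically starts from (the three curves and their weights below are invented):

    import numpy as np

    levels = np.array([5.0, 6.0, 7.0, 8.0])   # flood level (m), hypothetical
    family = np.array([                        # annual exceedance frequencies
        [1e-2, 1e-3, 1e-4, 1e-5],              # epistemic curve 1
        [3e-2, 5e-3, 8e-4, 1e-4],              # epistemic curve 2
        [5e-3, 4e-4, 2e-5, 1e-6],              # epistemic curve 3
    ])
    weights = np.array([0.5, 0.3, 0.2])        # epistemic weights, sum to 1

    # The mean hazard curve averages exceedance frequencies level by level,
    # not the curves' underlying parameters.
    mean_curve = weights @ family
    for lv, f in zip(levels, mean_curve):
        print(f"{lv:4.1f} m : {f:.2e} /yr")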

Ray Schneider:

We actually did an external flood analysis for a plant that's no longer operating. The issue in this case was dam failures. There are two issues: the frequency, which itself has uncertainty, and then the elevation and the type of hazard that drives the event. In this case, it was a riverine event that could be the result of opening floodgates or could be caused by a dam failure.

What you had to do is identify and map the hazard profiles (and Andrew Miller showed examples of types of hazard profiles), and those drove different human actions, and those drove different abilities to mitigate the event. You are propagating mainly the uncertainty, especially with the frequency of that event, but you also have to have enough precision or granularity to identify that not all dam floods are going to occur the same way. Some of them come with a lot of pre-flooding on the site; some will occur very rapidly and then totally inundate the site. You have to break it into pieces. It's like anything else; it becomes essentially different hazards with different frequencies.

That's the way I try to estimate how you propagate it through. We can understand that these kinds of events are important, because I only have a given amount of time. But if I don't meet my deadline in a certain amount of time, my flood's going to be 3 feet on the site and I can't get my staff to do anything at that point. I have actions that are taken with equipment that is underwater at this point. So, you have to basically do this mapping, although that's a complicated aspect of it.
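
A minimal sketch of that mapping, with invented scenarios: each hazard profile keeps its own frequency, warning time, and severity, and is carried through the PRA as a distinct initiator:

    # Hypothetical dam-related flood profiles for one site. Each tuple is
    # (profile, annual frequency, warning time in hours, peak depth in feet).
    scenarios = [
        ("gate misoperation, slow rise", 2e-4, 48.0, 1.0),
        ("hydrologic dam failure",       5e-6, 12.0, 3.0),
        ("sunny-day dam breach",         1e-6,  1.0, 3.0),
    ]
    for name, freq, warn_hr, depth_ft in scenarios:
        # Different warning times drive different credited human actions.
        print(f"{name:30s} {freq:.0e}/yr, {warn_hr:4.1f} h warning, {depth_ft} ft")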

Suzanne Dennis:

I think that I heard earlier this week that we can develop all sorts of uncertainties, but what's the end result? What are decisionmakers going to do with that information? I think sometimes the question is not how we propagate and get this very detailed uncertainty analysis; both the hazard community and the risk community struggle with what decisionmakers do with that information. The bigger question might be what we do with that information once we have it, rather than how we do it.

Michelle Bensi:

From a decisionmaking perspective, understanding the conditional response to these hazards (essentially a plant fragility curve, although it's not quite that simple because it's not just one dimension on the axis) is a key piece of information. Once you have that sort of understanding, the sensitivity to the hazards and the quantification for decisionmaking purposes are a bit more straightforward. But then again, I think a lot of decisionmaking is made on the mean hazard estimate.
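
A hedged numerical sketch of that combination, borrowing the seismic-PRA pattern of discretizing the mean hazard curve and convolving it with a conditional failure probability (all numbers invented):

    import numpy as np

    levels = np.array([5.0, 6.0, 7.0, 8.0, 9.0])       # flood level (m)
    exceed = np.array([1e-2, 2e-3, 3e-4, 4e-5, 5e-6])  # mean exceedance (/yr)
    p_fail = np.array([0.00, 0.05, 0.30, 0.80, 1.00])  # plant "fragility"

    # Frequency of floods landing in each level bin, times the conditional
    # failure probability at the bin's lower edge (a coarse discretization).
    bin_freq = exceed[:-1] - exceed[1:]
    failure_freq = np.sum(bin_freq * p_fail[:-1]) + exceed[-1] * p_fail[-1]
    print(f"illustrative flood-induced failure frequency: {failure_freq:.1e} /yr")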

Andrew Miller:

The main point is we have to be okay with the mean estimate. Even though the uncertainties may be orders of magnitude on the range we care about, in seismic we're okay with the mean now.

Flooding needs to be there too to make decisions.

Michelle Bensi:

I think the key thing about understanding the epistemic is also just getting away from the point estimate. I think a lot of what has been done in practice is that you come up with a point estimate, which yields one hazard curve. If you do not know whether that point estimate is near the mean, you do not necessarily know whether you have the right decision point. I think it's understanding where the mean is and not just doing one analysis where you get just one point estimate of the parameters and one hazard curve.

Comment (Andy Campbell):

The NRC has done a review of a PFHA that is very similar to a PSHA. It's in process, so I can't talk in detail about it. But we have looked at that. We have a way of reviewing those. You end up with a hazard curve. You make a decision based upon their hazard curve and what your own analysis does, very similar to what we do routinely for PSHA. So, we do have at least one example that we will eventually publish.

Comment (Des Hartford, BCHydro):

I'm coming from the hydropower perspective and what to do with the results. If you look at what USACE and USBR do with the results, they actually compare the quantitative results from their event tree analysis, the numerical values, without much consideration of these uncertainties, because the methods available to us sometimes do not allow us to capture that to any significant degree. So, it's on the rough scale, compared to where we really need to be. It's compared with an acceptable risk guideline. We saw a little bit of that from John England the other day. But that whole process is fraught with difficulty. What the dam owner is going to do with these risk numbers (and the same will apply to the nuclear power operator) will not necessarily be to compare them with a tolerable risk value. In fact, there are very good reasons not to compare with these f-N curves that they've been using. There are good reasons why things must move on. So, the whole area of what to do with these risk numbers is going to take a huge amount of policy work, much more than has been done for the dams, even though they're making decisions, basically prioritizing all the work now. A lot will then relate to the policy principles, the actual values of the society or the values of the authorities that are driving the decisionmaking process. How are they going to be factored into what you do with the numbers? That's a whole new area that needs to be reconsidered for the 21st century, given public perceptions today, compared with what Farmer [15] said back in 1967 for the siting of nuclear reactors. Now, concerning what Farmer said back in the 1967 seminal paper, and in relation to screening: fundamentally, screening is nonconservative. The second thing to deal with is that you can screen deterministically, up to a point, because you can't cover everything deterministically. If you're screening in a probabilistic sense, you resort to credibility, which is actually a binary decision based on a degree of belief: it's either credible or it's not credible; it's in relation to the physical possibility of whether it can happen. If you screen on the basis of probability, you are faced with the problem that you have assigned a probability, and it might be 10^-10, but you can't be surprised if it happens. The 10^-10 event will happen because, as Farmer sagely pointed out, the credible event can have an incredibly low probability, whereas the incredible event can be made up of a combination of very credible events, but the combination is unusual. What we recently had to grapple with on the dam side is unusual combinations of usual conditions. We concluded, looking at a small hydropower plant (not very complicated), that there are 10^27 different system states. Which ones do you screen out? How do you know? You don't. There's a whole raft of things to be done in relation to screening on the one hand, and what to do with the numbers on the other, which I think needs to be revamped. Copying what was done in the 1970s, 1980s, and 1990s is not fit for purpose today in relation to what to do with the results.

[15] Farmer, F.R. (1967). Siting Criteria - A New Approach. In Containment and Siting of Nuclear Power Plants: Proceedings of a Symposium held by the International Atomic Energy Agency (IAEA), Vienna, 3-7 April 1967. IAEA. P. 303-318.
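
For scale, a back-of-envelope check (an illustration, not a figure from the talk): roughly 90 components with two states each already give 2^90 = (2^10)^9 ≈ 1.024^9 × 10^27 ≈ 1.2 × 10^27 distinct system states, so counts of this order arise from quite modest systems.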

Moderator:

There is this idea of PRA capability. You can have a PRA, and it can have different levels of capability. How would you characterize our capability level today, in terms of an external flooding PRA? What capability level do you think we can get to in the near future? What capability level do you think we ultimately need?

Ray Schneider:

Historically, we ended up with three capability categories. The first one, Capability Category I, was basically saying: this is what we know now; this is the state of knowledge now. That was 25 years ago. That became the basis to say that this is our state of knowledge; this is good enough to make reasonable decisions. The second capability category, Capability Category II, is what we'd like to be able to do, what we think we should be doing. The third, Capability Category III, was almost aspirational: we can do this, we have enough knowledge, but it's a lot of work, and is it really worth it? But if you do it, we want to give you credit for it; we want to say that's really good. Nowadays, I think Capability Category II is where everyone should be. But in some cases, we know that still may be overkill. In certain specific areas, you may want Capability Category I if you can justify a conservative frequency. If you can say, "I'm doing it bounding; I've done the analysis, and I've made so many assumptions that I know it's a bounding frequency, but it saves me a lot of work," then you can get away with Capability Category I. But I'm not really doing the state of the art at that point. If you do Capability Category II, you can get maybe lower results and more realistic calculations. So, Capability Category I is probably okay if you can do it conservatively and you can justify the conservatism, and Capability Category II is really probably where we should be as an industry. It's basically right; it's a realistic mean. But again, if the cost of that realistic mean is enormous and you can live with Capability Category I, stay there.

Michelle Bensi:

To clarify: it used to be three capability categories; in the current PRA standard revision that's coming out, it's going to be just those two (I and II). Capability Category I PRAs have different pedigrees in terms of regulatory decisionmaking. If you submit one of these (Category I), the NRC may respond differently. But the one challenge with some of the Capability Category I PRAs is that when you start making these conservative assumptions, you may be masking important risk insights. So, it's a tradeoff. Not just because the answer might be higher in terms of whatever risk metrics, but you might actually lose some risk insights coming out of that activity.

Ray Schneider:

It's a balance. If you are using Capability Category I, you do not want to lose those insights in the process.

Andrew Miller:

To answer the question of where we are now: we are not very far. Then where can we get to? I think it's going to get harder with regard to some of the participatory peer reviews that you have to do, an additional level of rigor that would be state of practice these days. I think there are only a few sites for which the cost would justify maintaining a tool with that capability. That's the other balance: how much effort does the utility or a site want to put in? A lot of sites are very high up and do not have flood problems, or not nearly as many flood problems. I think the sample set of people who are interested in doing a flood PRA is much, much smaller than for all other hazards, period. It's going to take a while to get there, I think.

Moderator:

Jeremy Gaudron, is this capability model process used in France or in Europe?

Jeremy Gaudron:

We don't have such capability categories. But we have just begun on the subject. It's not quite mature.

Suzanne Dennis:

I don't think anyone has done a really full-scope external flooding PRA. So, we're at Capability Category 0 right now, and hopefully moving to something more than that.

Meredith Carr:

We've talked a lot about trying to assign numbers to things: hazards, fragility, human actions. We have not talked much about forecasts and warnings, which are really out of the control of the plant itself. Is there any thought of trying to include the reliability of forecasts, either spatial or temporal, in there? Do you think it's a significant issue when you're putting all those numbers together?

Andrew Miller:

That's going to vary organizationally. Some utilities, some sites that I've been to, have onsite meteorologists, where they can more directly control how they monitor and when they get forecasts. Others rely on NWS. I think the general trend that we've seen since the accident at Fukushima Dai-ichi has been utilities adding time and steps along the way. It is difficult to assign a reliability to that. We are still waiting on a reliability number for just putting up a sandbag wall, let alone how well NWS is going to perform. But I do think that it's warranted to look at that. I think a lot of sites did upgrade their forecasting capabilities. What does that look like post-Fukushima, especially? I know for a few of the sites we worked with after NTTF Recommendation 2.3, those procedures got a lot better: more set points, more defined actions that were not ambiguous. For example, you need to start shutting down if it gets to this level, not merely consider doing this. To me, that's more beneficial to safety than trying to put a reliability number on the forecasts.

Ray Schneider:

From the point of view of the standard, there's no intent for the standard to try to make that prediction better. But from the point of view of the response, the key things are the triggers. If you have triggers and good organization, and that defines your organizational behavior, you will be able to respond. The key to that is that the plants that have those triggers actually should use them, before the hazard comes upon them. In a couple of instances, people actually started deploying the floodgates when events occurred far away, because they knew that if they waited too long, they would not have enough time. That is one way of getting confidence that the organizational behaviors will be okay.

4-472

Michelle Bensi:

I don't think it's necessary to quantify the reliability of forecasts. I think the key message is that you have to understand what the uncertainty is. You know, 24 hours out, you really still might not know where landfall is going to happen on the hurricane. Recognizing that, I think, probably more so before all the NTTF Recommendation 2.1 and 2.3 activities were put in place, some of those procedures were pretty optimistic about how much confidence, or how much warning time, we really have. We think that we wouldn't miss the fact that there's a hurricane making landfall, but maybe we don't recognize that these things' tracks change really late in the game. Or you can have these large-scale precipitation events fall on top of the weather center, and they didn't see it coming. These things can happen. I think the key message is to understand the limitations of the forecasting capabilities so that the procedures are not taking advantage of the best case. For example, you know 72 hours out where the flood stage is going to be, and you take 72 hours to do the procedure. You can account for those potential cases where things change more than expected, in terms of the forecast, and you don't end up waiting too long because you think you have more time than you actually do.
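
A minimal sketch of that timing concern, with an invented normal model for forecast timing error: if the protective actions take about as long as the nominal warning time, even a modest timing error leaves a large chance of coming up short:

    import numpy as np

    rng = np.random.default_rng(1)

    nominal_warning_hr = 72.0   # hypothetical forecast lead time
    procedure_hr = 72.0         # hypothetical time to emplace protections
    timing_error = rng.normal(0.0, 12.0, 100_000)  # hours, invented spread
    actual_warning = nominal_warning_hr + timing_error

    p_short = np.mean(actual_warning < procedure_hr)
    print(f"P(flood arrives before protections are in place) = {p_short:.2f}")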

Suzanne Dennis:

I think that we've made a lot of advances just in the last decade. When I started at the NRC, we had a potential licensee say that they'll know there's a flood when the water comes on the site. They literally said, "Well, when our feet get wet." I think we're far beyond that, just in the last 10 years. I think we're probably not quite at quantifying any sort of reliability in forecasts, but there's a much broader recognition of those kinds of things.

Joseph Kanney:

As part of the post-Fukushima flood hazard reassessment process, the NRC asked all the power reactor licensees to reevaluate their flooding hazards. If their reevaluated flooding hazard was higher than their current design basis, that tipped them into a process called an integrated assessment. In the discussions with NEI and licensees on what should be put into an integrated assessment, there was a specific focus on warning time. NEI developed a white paper [16] on warning that provides basic information on the types of warnings and forecasts that are routinely provided by NWS. It had some guidance about considerations for developing your trigger. So, with respect to that, I think there have been some considerable improvements. I think that's one of the things we had a question about: what sorts of things from the post-Fukushima flooding reevaluations should be rolled into external flooding PRAs? I think that white paper is probably one example of something that developed out of the post-Fukushima process that would be a valuable input.

Andy Campbell:

The NRC did endorse the NEI white paper. We reviewed it thoroughly and endorsed it as one of the things they would do for LIP. A lot of the sites were able to close out their LIP analyses with simple things like sandbags being available, procedures, and warning times. You don't want to be overly optimistic about warning. But I'm a liaison director in the NRC Operations Center, and we had Hurricane Florence coming in last summer. At the NRC Operations Center and NRC Region II, eight plants were already going through their procedures 24 hours beforehand, knowing the hurricane was going to hit somewhere in the Carolinas. They were already going through their procedures, doing lockdowns, removing loose items that could become missiles, a variety of different things. In the end, none of the plants was threatened. The storm surge was well below what Brunswick Steam Electric Plant had for its site grade. The rainfall events did not exceed what they already had protection for. But the plants do all of this. We are right there with them, watching and monitoring. Our resident inspectors are in the plant. They hunker down with the plant personnel. So that's a difference, I think.

[16] NEI 15-05, Rev. 6, Warning Time for Local Intense Precipitation Events, ADAMS Accession No. ML18005A076.

Meredith Carr:

In a risk-informed space, we have a lot of forecasts where the event doesn't happen. Maryland, Virginia, and Washington, DC, were all under a tornado warning recently. There is different reliability in different places. But it sounds like the approach so far has been deterministic and conservative in how to respond to it.

Andrew Miller:

Yes, that is the way it has been approached thus far. There are definitely set points along the way for every site that has that as a hazard, to remove the ambiguity as much as possible.


4.4 Summary

This report documents the 4th Annual NRC Probabilistic Flood Hazard Assessment Research Workshop held at NRC Headquarters in Rockville, MD, on April 30-May 2, 2019. These proceedings include the following:

  • Section 4.3: Proceedings (abstracts at ADAMS Accession No. ML17355A081 and complete workshop presentation package, including slides and questions and answers, at ADAMS Accession No. ML17355A071)
  • Section 4.4: Summary
  • Section 4.5: Workshop Participants

4.5 Workshop Participants

Hosung Ahn Hydrologist Frank Arner U.S. NRC/NRO/DLSE/EXHB Senior Reactor Analyst hosung.ahn@nrc.gov U.S. NRC/R-1/DRS frank.arner@nrc.gov Thomas AirdC, SC Env. Engineer Jennifer ArrigoR U.S. NRC/RES/DRA/FXHAB Science Coordination Lead thomas.aird@nrc.gov U.S. Global Change Research Program jsaleem-arrigo@usgcrp.gov Azin Al Kajbaf Research Assistant, PhD student Taylor Asher University of Maryland Graduate student akajbaf@terpmail.umd.edu University of North Carolina, Chapel Hill tgasher@live.unc.edu Reuben AniekwuR Research Coordinator Riadh AtaR U.S. Global Change Research Program Expert researcher raniekwu@usgcrp.gov EDF riadh.ata@edf.fr Michael C. Annon Owner Christopher Azmeh I&C Engineering Associates Corporate Flood Engineer iceng2008@aol.com Exelon christopher.azmeh@exeloncorp.com Rasool AnooshehpoorS Geophysicist Gregory B. Baecher U.S. NRC/RES/DE/SGSEB Professor rasool.anooshehpoor@nrc.gov University of Maryland gbaecher@umd.edu John AntignanoR Fisher Engineering, Inc. Victoria Bahls john.antignano@feifire.com Senior Hydrometeorologist MetStat, Inc.

Stacey ArchfieldS vbahls@metstat.com Research Hydrologist U.S. Geological Survey Philippe M. BardetS sarch@usgs.gov Associate Professor George Washington University Mario Alejandro Ariza bardet@gwu.edu Journalist Freelance Bruce Barker marioalejandroariza@gmail.com Principal MGS Engineering Consultants Inc.

Hans Arlt bbarker@mgsengr.com Sr. Systems Performance Analyst U.S. NRC/NMSS/DUWP hans.arlt@nrc.gov

S - Speaker; P - Panelist; SC - Session Chair; C - Organizing Committee; R - Remote; RS - Remote Speaker

Dan Barnhurst Megan BurkeR Hydrogeologist Research Environmental Engineer U.S. NRC/NRO/DLSE/RENV USACE/ERDC dob1@nrc.gov megan.e.burke@usace.army.mil Laurel Bauer Chris Cahill Geologist SRA U.S. NRC/NRO/DLSE/EXHB U.S. NRC/R-I/DRS laurel.bauer@nrc.gov christopher.cahill@nrc.gov Joseph V. BelliniR R. Jason CaldwellS Vice President Hydraulic Engineer Aterra Solutions, LLC USACE Galveston joe.bellini@aterrasolutions.com raymond.j.caldwell@usace.army.mil Christopher BenderP Andy Campbell Senior Coastal Engineer Deputy Division Director Taylor Engineering U.S. NRC/NRO/DLSE cbender@taylorengineering.com andy.campbell@nrc.gov Michelle (Shelby) BensiS Karen CarboniR Assistant Professor Tennessee Valley Authority University of Maryland kcarboni@tva.gov mbensi@umd.edu Matt CarneyR R

Earl Bousquet Hydrologist Reliability and Risk Analyst Bechtel Corporation U.S. NRC/NRR/DRA/APLB/RILIT carney.matt@gmail.com earl.bousquet@nrc.gov Shaun CarneyS R

Stephen Breithaupt Senior Water Resources Engineer Earth Scientist RTI International Pacific Northwest National Laboratory scarney@rti.org stephen.breithaupt@pnnl.gov Meredith CarrC, SC Adrienne BrownR Hydrologist Reliability and Risk Analyst U.S. NRC/RES/DRA/FXHAB U.S. NRC/NRR/DRA/APLA meredith.carr@nrc.gov Adrienne.Driver@nrc.gov Chris CauffmanR Laura Bruschi Reactor Operations Engineer Safety Analysis Engineer U.S. NRC/NRR/DIRS/IRIB First Energy Solutions ccc2@nrc.gov lbruschi@firstenergycorp.com Jill Caverly Kristy BucholtzR Senior Project Manager Reactor Engineer U.S. NRC/NMSS/FCSE/ERB U.S. NRC/NRR/DSS/STSB jill.caverly@nrc.gov kristy.bucholtz@nrc.gov

Laura ChapR Biswajit Dasgupta Sr. Engineer Staff Engineer Atkins Southwest Research Institute laura.chap@atkinsglobal.com bdasgupta@swri.org Yuan Cheng Richard Deese Hydrologist SRA Region 4 U.S. NRC/NRO/DLSE U.S. NRC/R-IV/DRS/EB2 yuan.cheng@nrc.gov rick.deese@nrc.gov Donald Chung Darryl Deist Risk and Reliability Analyst Senior Manager Business Development U.S. NRC/NMSS/DSFM/CSRB Curtiss-Wright Nuclear Division donald.chung@nrc.gov ddeist@curtisswright.com Zach Coffman Huseyin DemirR Reliability and Risk Analyst Senior Coastal Engineer U.S. NRC/NRR/DRA/APLB/RILIT INTERA Inc zachary.coffman@nrc.gov hdemir@intera.com Christopher Cook Scott DeNeale Executive Technical Assistant Water Resources Engineer U.S. NRC/OEDO/AO Oak Ridge National Laboratory christopher.cook@nrc.gov denealest@ornl.gov Marcela Liliam Severiano Covarrubias Suzanne DennisP Master Degree Risk and Reliability Engineer Instituto de Ingeniería, UNAM U.S. NRC/RES/DRA/PRAB marceliliam1210@gmail.com suzanne.dennis@nrc.gov William (Mark) CummingsS Thinh Dinh Sr. Engineer Fire Protection Engineer Fisher Engineering, Inc. U.S. NRC/NRR/DRA/APLB mark.cummings@feifire.com thinh.dinh@nrc.gov David C. Curtis Claire-Marie DuLucS Sr. Vice President Head of Flood Hazard Assessment Section WEST Consultants IRSN dcurtis@westconsultants.com claire-marie.duluc@irsn.fr Clodoaldo Jose Da SilvaR John F. England, Jr.S Engineer/Scientist III Lead Civil Engineer Electric Power Research Institute USACE Risk Management Center csilva@epri.com john.f.england@usace.army.mil Sheri DahlkeR Julie EzellR Technical Director Attorney American Polywater Corporation U.S. NRC/OGC/GCHA/AGCNRP sheri.dahlke@polywater.com julie.ezell@nrc.gov

Ken FearonR Joseph Giacinto Civil Engineer Hydrologist FERC U.S. NRC/NRO/DLSE/EXHB kenneth.fearon@ferc.gov joseph.giacinto@nrc.gov Randall FedorsR Emily GibsonR Senior Hydrogeologist Project Engineer U.S. NRC/NMSS/DUWP/RDB Schnabel Engineering Randall.Fedors@nrc.gov egibson@schnabel-eng.com Joel Feldmeier Ian GiffordS Faculty Program Manager Naval Postgraduate School U.S. NRC/OE/CRB jwfeldme1@nps.edu ian.gifford@nrc.gov Celso Moller Ferreira Tatiana D. Gonzalez Associate Professor Meteorologist George Mason University DOC/NOAA/NWS/OSTI/MDL cferrei3@gmu.edu tatiana.gonzalez@noaa.gov Mark FuhrmannC, SC Victor M. GonzalezS Geochemist Research Civil Engineer U.S. NRC/RES/DRA/FXHAB USACE/ERDC/CHL mark.fuhrmann@nrc.gov victor.m.gonzalez@usace.army.mil Raymond FurstenauS Director, Office of Nuclear Regulatory Corinna Goodwin Senior Project Engineer Research Parsons U.S. NRC/RES corinna.goodwin@parsons.com raymond.furstenau@nrc.gov Orli GottliebR Jeff Gangai Hydraulic Engineer Associate Vice President Alden Dewberry ogottlieb@aldenlab.com jgangai@dewberry.com Kevin GriebenowR Stanley Gardocki Branch Supervisor Senior Reactor Engineer FERC U.S. NRC/RES/DE/RGGIB kevin.griebenow@ferc.gov sjg1@nrc.gov Eric GrossR John Duncan Gatenbee Senior Civil Engineer Civil Engineer, Dam Safety FERC FERC eric.gross@ferc.gov john.gatenbee@ferc.gov Jin-Ping (Jack) Gwo R

Anubhav Gaur Systems Performance Analyst Project Manager U.S. NRC/NMSS/DSFM/RMB Enercon Services, Inc. jin-ping.gwo@nrc.gov agaur@enercon.com

Seung Gyu HyunR George J. HuffmanS Korean Institute of Nuclear Safety Research Meteorologist mgodo@kins.re.kr NASA Goddard Space Flight Center george.j.huffman@nasa.gov Kenneth HamburgerC Fire Protection Engineer Doug HultstrandR U.S. NRC/RES/DRA/FXHAB Senior HydroMeteorologist kenneth.hamburger@nrc.gov Applied Weather Associates dhultstrand@appliedweatherassociates.com Kun Yeun Han Professor Matt Humberstone Kyungpook National University Reliability and Risk Analyst kshanj@knu.ac.kr U.S. NRC/NRR/DRA/APOB matthew.humberstone@nrc.gov Salman Haq Reactor Engineer Kelly Hunter U.S. NRC/RES/DSA/AAB Senior Vice President salman.haq@nrc.gov INTERA Inc khunter@intera.com Tess HardenS Hydrologist Yoko Iwabuchi U.S. Geological Survey Researcher tharden@usgs.gov Secretariat of Nuclear Regulation Authority yoko_iwabuchi@nsr.go.jp Des Hartford Principal Engineering Scientist Sharon Jasim-Hanif BC Hydro DOE NPH Program Manager des.hartford@bchydro.com U.S. DOE Office of Nuclear Safety sharon.jasim-hanif@hq.doe.gov Katherine Hawley Management and Program Analyst Patricia JehleR NOAA's Satellite and Information Service Attorney katherine.hawley@noaa.gov U.S. NRC/OGC/GCRPS/HLWFCNS patricia.jehle@nrc.gov Jermaine Heath Technical Assistant Jeremy GaudronP U.S. NRC/NRR/DRA Hazard PSA Engineer jermaine.heath@nrc.gov EDF SA jeremy.gaudron@edf.fr Mark Henry SalleyC Branch Chief Andrea JohnsonR U.S. NRC/RES/DRA/FXHAB Project Manager markhenry.salley@nrc.gov U.S. NRC/NRO/DLSE/CIPB andrea.johnson@nrc.gov Victor Hom NOAA NWS Andrew Kalukin victor.hom@noaa.gov Senior Lead Scientist Booz Allen Hamilton kalukin_andrew@bah.com

Joseph KanneyS, C, SC Hydrologist Kellie KvarfordtR Software Engineer U.S. NRC/RES/DRA/FXHAB Idaho National Laboratory joseph.kanney@nrc.gov kellie.kvarfordt@inl.gov Shih-Chieh KaoS Olivier Lareynie Senior Research Staff Nuclear Safety Inspector Oak Ridge National Laboratory French Nuclear Safety Authority (Foreign kaos@ornl.gov Assignee at the NRC) olivier.lareynie@asn.fr Bill KappelR President/Chief Meteorologist Christina Leggett Applied Weather Associates Reactor Systems Engineer billkappel@falconbb.net U.S. NRC/RES/DSA/AAB christina.leggett@nrc.gov Muhammad Kashfy Bin Zainalfikry PhD Student William LehmanS River Engineering & Urban Drainage Economist Research Centre USACE/IWR/HEC kashfy_zaf@yahoo.com william.p.lehman@usace.army.mil Keith KelsonS Shizhong LeiR Engineering Geologist Geoscience Technical Specialist U.S. Army Corps of Engineers Canadian Nuclear Safety Commission keith.i.kelson@usace.army.mil shizhong.lei@canada.ca Julie Kiang Rebecca LenewayR Chief, Analysis and Prediction Branch Project Manager U.S. Geological Survey American Electric Power jkiang@usgs.gov raleneway@aep.com Beom Jin Kim David M. LeoneR Instructor Associate Principal Kyungpook National University Civil GZA Engineering davidm.leone@gza.com diamond982@naver.com Lai-yung (Ruby) LeungS Minkyu Kim Battelle Fellow Principal Researcher Pacific Northwest National Laboratory Korea Atomic Energy Research Institute ruby.leung@pnnl.gov minkyu@kaeri.re.kr Young-Kwon LimS Yonas Kinfu Scientist Sr. Eng. Specialist NASA Goddard Space Flight Center, Global Bechtel Corporation Modeling and Assimilation Office ypkinfu@bechtel.com young-kwon.lim@nasa.gov Kenneth E. KunkelS Marco Rodrigo LópezR Research Professor PhD student North Carolina State University Instituto de Ingeniería, UNAM kekunkel@ncsu.edu mlopezlo@iingen.unam.mx

Devan Mahadevan Maria Fernanda Morales-Oreamuno Senior Civil Engineer Hydraulic Design Engineer FERC DEHC Ingenieros Consultores devan.mahadevan@ferc.gov mfmo45@gmail.com Kelly MahoneyR Norberto C. Nadal-CaraballoS Research Meteorologist Research Civil Engineer NOAA USACE/ERDC/CHL kelly.mahoney@noaa.gov norberto.c.nadal-caraballo@usace.army.mil Robert MasonR Kaveh Faraji Najarkolaie Extreme Events Coordinator Graduate Assistant- PhD student U.S. Geological Survey University of Maryland rrmason@usgs.gov kfaraji@terpmail.umd.edu Petr MasopustR John A. NakoskiS Principal Water Resources Engineer Branch Chief Aterra Solutions, LLC U.S. NRC/RES/DRA/PRAB petr.masopust@aterrasolutions.com john.nakoski@nrc.gov Stephen McDuffie Keil Jason NeffR Seismic Engineer Hydrologic Engineer U.S. Department of Energy USBR stephen.mcduffie@rl.doe.gov kneff@usbr.gov Alfonso Mejia Ching NgR Associate Professor Reliability and Risk Analyst Penn State University U.S. NRC/NRR/DRA/APOB aim127@psu.edu ching.ng@nrc.gov Bob MeyerR Kit Yin Ng nucbiz@hotmail.com Chief H&H Engineer Bechtel Corporation Melissa L. MikaR kyng@bechtel.com Civil Engineer U.S. Army Corps of Engineers Thomas NicholsonSC melissa.l.mika@usace.army.mil Senior Technical Advisor U.S. NRC/RES/DRA Andrew MillerS thomas.nicholson@nrc.gov Senior Engineer Jensen Hughes Catalin ObrejaR amiller@jensenhughes.com Physical Science Office Environment Canada and Climate Change Somayeh Mohammadi catalin.obreja@canada.ca PhD student University of Maryland Rafael A. Oreamuno somayeh@terpmail.umd.edu Head of the Sustainable Development Research Centre University of Costa Rica rafael.oreamuno@ucr.ac.cr

Mark PallinR Sean PringleR Project Manager PG&E Zachry Nuclear Engineering snpu@pge.com pallinm@zachrynuclear.com Kevin QuinlanP R

Tye W. Parzybok and other staff Physical Scientist President/Chief Meteorologist U.S. NRC/NRO/DLSE/EXHB MetStat, Inc. kevin.quinlan@nrc.gov tyep@metstat.com Ronald Raedeke Chandra S. Pathak R&D Senior Engineer American Polywater Corporation U.S. Army Corps of Engineers ron@polywater.com chandra.s.pathak@usace.army.mil Efrain Ramos-Santiago Hanh Phan Research Civil Engineer Senior Lead PRA Analyst USACE/ERDC/CHL U.S. NRC/NRO/DAR/ARTB efrain.ramos-santiago@usace.army.mil hanh.phan@nrc.gov John RandallR P

Jacob Philip NRC retiree Senior Civil (Georechnical) Engr. Formerly RES/DRA/ETB U.S. NRC/RES/DE/SGSEB johnrandall2@comcast.net jacob.philip@nrc.gov Marko RandelovicS R

Frances Pimentel Senior Technical Leader Sr. Project Manager EPRI NEI mrandelovic@epri.com fap@nei.org Vincent RebourS Marie Pohida IRSN Senior Reliability and Risk Analyst vincent.rebour@irsn.fr U.S. NRC/NRO/DSRA/SPRA marie.pohida@nrc.gov Emma Redfoot Reactor Engineer Alexandra PopovaR Oklo Inc.

Director of Licensing emma@oklo.com Oklo Inc.

alex@oklo.com Mehdi Reisi-Fard Team Leader Rajiv PrasadRS U.S. NRC/NRR/DRA/APLB Senior Research Scientist mehdi.reisifard@nrc.gov Pacific Northwest National Laboratory rajiv.prasad@pnnl.gov Andrew Rosebrook Senior Project Engineer Steven PrescottR U.S. NRC/R-I/DRP/PB1 Software Analysis Engineer andrew.rosebrook@nrc.gov Idaho National Laboratory steven.prescott@inl.gov

Mehrdad Salehi Kenneth See Senior Coastal Engineer Senior Hydrologist Sargent & Lundy U.S. NRC/NRO/DLSE/EXHB mehrdad.salehi@sargentlundy.com kenneth.see@nrc.gov Michael SalisburyRS Michael SeeringR Senior Engineer Water Resources Engineer Atkins North America, Inc. AECOM michael.salisbury@atkinsglobal.com mike.seering@aecom.com Periandros SamothrakisR Marcela SeverianoR Engineering Specialist H&H marceliliam1210@gmail.com Bechtel Corporation psamothr@bechtel.com Ken Shelley Engineering Analyst Selim Sancaktar Southern Nuclear Senior Reliability and Risk Analyst klshelly@southernco.com U.S. NRC/RES/DRA/PRAB selim.sancaktar@nrc.gov William F. SjobergR Senior Systems Engineer Sivaramakrishnan Sangameswaran NESDIS JPSS Program Project Manager bill.sjoberg@noaa.gov Dewberry ssangameswaran@gmail.com Harold Sprague SME John SaxtonR Parsons Corp.

Hydrogeologist harold.sprague@parsons.com U.S. NRC/NMSS/DUWP/URMDB john.saxton@nrc.gov Mathini Sreetharan Senior Engineer Mel SchaeferS Dewberry President msreetharan@dewberry.com MGS Engineering Consultants, Inc.

mgschaefer@mgsengr.com Mike St. Laurent Associate Scientist Lisa Schleicher NOAA Office of Water Prediction (OWP)

Geophysicist michael.stlaurent@noaa.gov DNFSB lisas@dnfsb.gov Kate SteinR Journalist Tim Schmitt Freelance Project Engineer katestein281@gmail.com Framatome, Inc.

timothy.schmitt@framatome.com Thomas Steinfeldt Attorney Raymond SchneiderS U.S. NRC/OGC/GCHA/AGCMLE Westinghouse thomas.steinfeldt@nrc.gov schneire@westinghouse.com

Christian Strack Carl Trypaluk Nuclear Safety Expert Associate Scientist GRS gGmbH NOAA Office of Water Prediction (OWP) christian.strack@grs.de carl.trypaluk@noaa.gov Martin Stutzke Marc Tuozzolo Senior Technical Advisor for PRA Infrastructure Associate U.S. NRC/NRO/DAR AECOM Martin.Stutzke@nrc.gov marc.tuozzolo@aecom.com Craig J. Talbot Dale UnruhR Principal Engineering Specialist Associate Scientist Bechtel Corporation NOAA Office of Water Prediction (OWP) ctalbot@bechtel.com dale.unruh@noaa.gov Arthur TaylorS Ajaya UpadhyayR Physical Scientist Data Scientist DOC/NOAA/NWS/OSTI/MDL Dewberry arthur.taylor@noaa.gov aupadhyay@dewberry.com Stewart Taylor Shilp VasavadaR Manager of Geotechnical & Hydraulics Reliability and Risk Analyst Engineering Services U.S. NRC/NRR/DRA/APLB/RILIT Bechtel Global Corporation shilp.vasavada@nrc.gov swtaylor@bechtel.com Gabriele Villarini Jeffrey Temple Director Response Program Manager IIHR-Hydroscience & Engineering U.S. NRC/NSIR/DPR/POB gabriele-villarini@uiowa.edu jeffrey.temple@nrc.gov Tony WahlRS Keith Tetter Hydraulic Engineer Reliability and Risk Analyst US Bureau of Reclamation U.S. NRC/NRR/DRA/APLB/RILIT twahl@usbr.gov keith.tetter@nrc.gov Bin WangS Mark Thaggard Senior Technical Specialist Deputy Director, RES/DRA GZA U.S. NRC/RES/DRA bin.wang@gza.com mark.thaggard@nrc.gov Shuyi WangR Wilbert O. Thomas, Jr. Student Senior Technical Consultant Western University Michael Baker International swang737@uwo.ca wthomas@mbakerintl.com Weijun WangR

Cathie Tiernan Sr. Geotechnical Engineer Generation Project Manager U.S. NRC/NRO Dominion Energy weijun.wang@nrc.gov catherine.l.tiernan@dominionenergy.com 4-486

Katie WardR Senior Meteorologist MetStat, Inc.

katie@metstat.com Megan Watkins Senior Civil Design Engineer Duke Energy megan.watkins@duke-energy.com David WatsonR Engineering Manager SEPI, Inc.

djwatson@sepiengineering.com Thomas Weaver Seismologist U.S. NRC/RES/DE/SGSEB thomas.weaver@nrc.gov Page Weil Water Resources Engineer Lynker Technologies pweil@lynkertech.com De Wesley WuR Reliability and Risk Analyst U.S. NRC/NRR/DRA/APLB/RILIT de.wu@nrc.gov Jason White Physical Scientist U.S. NRC/NRO/DLSE/EXHB jason.white@nrc.gov Cindy Williams Engineer / PRA Engineer NuScale Power cwilliams01@nuscalepower.com Elena YegorovaC, SC Meteorologist U.S. NRC/RES/DRA/FXHAB elena.yegorova@nrc.gov Kashfy ZainalfikryR kashfy_zaf@yahoo.com 4-487

5 SUMMARY AND CONCLUSIONS

5.1 Summary

This report has presented agendas, presentations, and discussion summaries for the first four NRC Annual PFHA Research Workshops (2015-2019). These proceedings include presentation abstracts and slides and a summary of the question-and-answer sessions. The first workshop was limited to NRC technical staff and management, NRC contractors, and staff from other Federal agencies. The three workshops that followed were public meetings attended by members of the public; NRC technical staff, management, and contractors; and staff from other Federal agencies. Public attendees over the course of the workshops included industry groups, industry members, consultants, independent laboratories, academic institutions, and the press. Members of the public were invited to speak at the workshops. The fourth workshop included more invited speakers from the public than from the NRC and the NRC's contractors.

The proceedings for the second through fourth workshops include all presentation abstracts and slides, as well as submitted posters and panelists' slides. Workshop organizers took notes and made audio recordings of the question-and-answer sessions following each talk, the group panel discussions, and the end-of-day question-and-answer sessions. Responses, which generally came from the presenter or co-authors, are not reproduced here verbatim. Descriptions of the panel discussions identify the speaker when possible. Questions were taken orally from attendees, on question cards, and over the telephone.

5.2 Conclusions

As reflected in these proceedings, PFHA is a very active area of research at the NRC and its international counterparts, as well as at other Federal agencies, in industry, and in academia. Readers of this report will have been exposed to current technical issues, research efforts, and accomplishments in this area within the NRC and the wider research community.

The NRC projects discussed in these proceedings represent the main efforts in the first phase (technical-basis phase) of the NRC's PFHA Research Program. This technical-basis phase is nearly complete, and the NRC has initiated a second phase (pilot project phase) that synthesizes various technical-basis results and lessons learned to demonstrate the development of realistic flood hazard curves for several key flooding scenarios (site-scale, riverine, and coastal flooding). The third phase (development of selected guidance documents) is an area of active discussion between RES and the NRC User Offices. NRC staff look forward to further public engagement regarding the second and third phases of the PFHA Research Program at future PFHA Research Workshops.


ACKNOWLEDGEMENTS

These workshops were planned and executed by an organizing committee in the U.S. Nuclear Regulatory Commission's (NRC's) Office of Nuclear Regulatory Research (RES), Division of Risk Analysis, Fire and External Hazards Analysis Branch, with the assistance of many NRC staff.

Organizing Committees

1st Workshop, October 14-15, 2015: Joseph Kanney and William Ott.

2nd Workshop, January 23-25, 2017: Co-Chairs: Meredith Carr, Joseph Kanney; Members: Thomas Aird, Thomas Nicholson, MarkHenry Salley; Workshop Facilitator: Kenneth Hamburger.

3rd Workshop, December 4-5, 2017: Chair: Joseph Kanney; Members: Thomas Aird, Meredith Carr, Thomas Nicholson, MarkHenry Salley; Workshop Facilitator: Kenneth Hamburger.

4th Workshop, April 30-May 2, 2019: Co-Chairs: Meredith Carr, Elena Yegorova; Members: Joseph Kanney, Thomas Aird, Mark Fuhrmann, MarkHenry Salley; Workshop Facilitator: Kenneth Hamburger.

Many NRC support offices contributed to all of the workshops and these proceedings. The organizing committee would like to highlight the efforts of the RES administrative staff; the RES Program Management, Policy Development and Analysis Branch; and the audiovisual, security, print shop, and editorial staff. The organizers appreciated office and division direction and support from Jennene Littlejohn, William Ott, MarkHenry Salley, Mark Thaggard, Michael Cheok, Richard Correia, Mike Weber, and Ray Furstenau. Michelle Bensi, Mehdi Reisi-Fard, Christopher Cook, and Andrew Campbell provided guidance and support from the NRC Office of New Reactors and the Office of Nuclear Reactor Regulation. The organizers thank the Electric Power Research Institute (EPRI) for its assistance with planning, its contributions, and its help organizing several speakers. EPRI personnel who participated in the organization of the workshops include John Weglian, Hasan Charkas, and Marko Randelovic.

Tammie Rivera assisted with planning and organized the registration area during the workshops. David Stroup and Don Algama assisted with room organization.

Notes were studiously scribed by Mark Fuhrmann, David Stroup, Nebiyu Tiruneh, Michelle Bensi, Hosung Ahn, Gabriel Taylor, Brad Harvey, Kevin Quinlan, Steve Breithaupt, Mike Lee, Jeff Wood, and organizing committee members. The organizers appreciate the assistance of the audiovisual, security, and other support staff during the conferences. The organizers thank the panelists, technical presenters, and poster presenters for their contributions; Thomas Aird and Mark Fuhrmann for performing a colleague review of this document; and the Probabilistic Flood Hazard Assessment Research Group for transcript reviews.

Members of the Probabilistic Flood Hazard Assessment Research Group:

MarkHenry Salley (Branch Chief), Joseph Kanney (Technical Lead), Thomas Aird, Meredith Carr, Mark Fuhrmann, Jacob Philip, Elena Yegorova, and Thomas Nicholson (Senior Technical Advisor)

APPENDIX A: SUBJECT INDEX 17B, Bulletin, 1-48, 1-178, 1-189, 2-36, 2- annual maximum series, 1-72, 1-165, 2-155, 187, 2-200, 4-215, 4-262, 4-265 2-201, 2-373, 3-75 17C, Bulletin, 2-36, 2-187, 2-194, 2-244, 3- searching, 4-149 121, 3-332, 4-163, 4-208, 4-214, 4-220, ANS, 3-377, 4-442, 4-452, 4-461, 4-471 4-230, 4-232, 4-236, 4-252, 4-257, 4- areal reduction factor. See ARF 261, 4-265, 4-289 ARF, 1-84, 2-374, 2-383, 2-417, 3-224, 3-2D, 1-34, 3-385, 4-314 401, 4-18, 4-120, 4-133, 4-142, 4-144, model, 1-183, 1-186, 2-52, 2-211, 2-362, 4-149, 4-152, 4-162 2-367, 2-377, 3-367, 4-202, 4-313, 4- averaging, temporal and spatial, 4-148 326 dynamic scaling model, 4-151 CASC2D, 1-151 methods, 4-147 HEC-RAS. See HEC-RAS empirical, 4-151 TELEMAC, 4-203, 4-206, 4-328 test cases, 4-147 3D, 4-314 arid, 1-61, 2-217, 2-223, 3-163, 3-200, 4-132 coastal, 4-123 semi-, 4-131 model, 1-252, 1-261, 2-288, 2-295, 2-302, ARR. See rainfall-runoff: model: Austrailian 2-306, 2-393, 3-22, 3-25, 3-199, 3-378, Rainfall and Runoff Model 4-24, 4-126, 4-291 ASME, American Society of Mechanical terrain mapping, 1-252 Engineers, 4-442 accumulated cyclone energy, ACE, 4-372 associated effects, 1-12, 1-31, 1-34, 2-43, 3-ADCIRC, 1-196, 2-78, 2-334, 2-379, 2-403, 15, 4-15, 4-31 4-57, 4-94 atmospheric AEP, xxxvii, 1-12, 1-17, 1-36, 1-50, 1-54, 1- conditions, 1-90 69, 1-149, 1-166, 1-191, 1-198, 2-22, 2- dispersion, 2-16 43, 2-54, 2-154, 2-187, 2-201, 2-204, 2- environment, 2-71 219, 2-225, 2-270, 2-307, 2-340, 3-15, instability, 4-377 3-21, 3-74, 3-97, 3-116, 3-117, 3-132, 3- interactions, 4-98 135, 3-138, 3-337, 3-355, 4-15, 4-60, 4- moisture, 3-28, 4-125, 4-346, 4-353 74, 4-94, 4-120, 4-127, 4-132, 4-194, 4- parameters, 2-81, 3-111 209, 4-214, 4-253, 4-286, 4-381 patterns, 2-85 drainage area based estimate, 1-69 processes, 3-310 low AEP, 2-187, 3-117, 3-135 rivers, 1-56, 1-59, 1-162 neutral, 1-185 stability, 4-163 return periods, 1-51 variables, 4-122 very low AEP, 1-166, 2-187, 3-117, 4-158 at-site, 1-84, 1-180, 2-31, 2-152, 2-155, 2-AEP4. See distribution:Asymmetric 160, 2-163, 2-188, 2-206, 2-209, 3-70, Exponential Power 3-75, 3-79, 3-84, 3-132, 3-139, 3-310, 3-aleatory uncertainty. See uncertainty, 315, 4-125, 4-137, 4-208, 4-214, 4-264 aleatory Austrailian Rainfall and Runoff Model. See American Nuclear Society. See ANS rainfall-runoff: model: Austrailian Rainfall AMM. See Multi-decadal:Atlantic Meridional and Runoff Model Mode autocorrelation, 3-126 AMO. See Multi-decadal:Atlantic Multi- BATEA. See error:Bayesian Total Error Decadal Oscillation Analysis AMS. See annual maximum series Bayesian, 2-151, 2-162, 2-165, 2-313, 2-400, ANalysis Of VAriance, ANOVA, 4-201 2-402, 3-70, 3-88, 3-93, 3-140, 3-304, 4-annual exceedance probability. See AEP 163, 4-220, 4-223, 4-229, 4-257, 4-294 A-1

analysis, 1-171, 4-308 mass wasting, 4-419 approach, 1-167, 2-168, 2-308, 4-257, 4- models, 3-301, 4-425 366 tests, 3-269, 4-406 BHM, 1-86, 1-175, 2-338, 2-345, 3-304 Bulletin 17B. See 17B, Bulletin estimation, 1-156 Bulletin 17C. See 17C, Bulletin framework, 1-161, 1-163, 2-321, 2-369, 2- calibration, 1-89, 1-90, 1-101, 1-123, 1-158, 400, 4-257 1-161, 1-177, 2-207, 2-312, 2-317, 3-67, gridded, 3-90 3-70, 3-144, 3-146, 3-202, 4-25, 4-75, 4-hazard curve combination, 4-220 105, 4-217, 4-227, 4-313, 4-332, 4-369 inference, 2-338, 2-342, 2-347, 3-70, 3-78, CAPE, 1-60, 1-139, 2-96, 2-381, 4-136, 4-3-93, 3-304, 3-313, 3-387, 4-223 144, 4-161, 4-218 maximum likelihood, 1-186 CASC2D. See 2D:model CASC2D model, 2-321, 2-345, 2-353, 2-402, 3-307, CDB. See current design basis:

3-326 CDF, 1-152, 1-164, 4-66 posterior distribution, 1-161, 1-163, 1-171, center, body, and range, 1-136, 1-207, 2-2-163, 2-321, 2-338, 2-342, 3-78, 3-79, 354, 2-359, 3-94, 3-314, 3-320, 4-266, 3-88, 3-93, 4-223 4-313 prior distribution, 1-161, 1-171, 2-163, 3-78 CFHA. See flood hazard:flood hazard Quadrature, 1-196, 2-68, 4-69 assessment:comprehensive regional, 2-163, 3-79 CFHA. See coastal flood hazard assessment Bayesian Hierarchical Model. See CFSR. See reanalysis:Climate Forecast Bayesian:BHM System Reanalysis best practice, 1-15, 1-151, 2-34, 2-45, 2-248, CHS. See Coastal Hazard System 2-259, 2-405, 3-17, 3-22, 3-25, 3-242, 3- Clausius-Clapeyron, 1-58, 2-89, 4-353, 4-246, 3-301, 3-361, 4-18, 4-24, 4-254, 4- 384 318 cliff-edge effects, 1-12, 1-31, 2-43, 3-15, 3-Blayais, 2-9, 2-266, 3-27, 3-240, 4-390, 4- 373, 3-382, 4-15, 4-474 472 climate, 1-51, 1-54, 1-98, 1-151, 1-196, 1-bootstrap 209, 1-267, 2-16, 2-77, 2-88, 2-223, 2-1000 year simulation, 3-359 372, 2-402, 3-29, 3-81, 3-120, 3-133, 3-resampling, 4-64 136, 3-179, 3-189, 3-208, 4-11, 4-105, boundary condition, 1-90, 1-95, 1-196, 2- 4-113, 4-119, 4-125, 4-132, 4-137, 4-102, 2-113, 2-150, 2-312, 2-320, 2-326, 335, 4-354, 4-369, 4-379, 4-380, 4-383 2-354, 2-366, 2-413, 3-43, 3-47, 3-68, 4- anomalies, 1-61, 3-196 30, 4-39, 4-203, 4-266, 4-271, 4-298 hydroclimatic extremes, 4-335 bounding, 2-323, 2-337, 3-28, 4-457, 4-470, index, 2-338, 2-345, 3-304, 3-310, 3-313 4-478 mean precipitation projections, 4-341 analyses, 2-268, 2-322, 3-28, 4-470 mean precipitation trends, 4-339 assessments, 3-370 models, 1-58, 1-63, 1-95, 2-97, 2-100, 2-assumptions, 2-322 112 estimates, 2-37 downscaling, 4-341 tests, 2-268 patterns, 1-56, 2-88, 3-29, 3-192 BQ. See Bayesian:Quadrature predictions, 1-96 breach, dam/levee, 1-21, 1-148, 1-209, 1- projections, 1-22, 1-51, 1-55, 1-96, 2-48, 214, 1-220, 2-34, 2-322, 2-325, 2-329, 2-89, 2-112, 2-373, 3-19, 3-30, 3-47, 3-3-267, 3-268, 3-314, 4-198, 4-204, 4- 67, 3-162, 4-335, 4-356, 4-369 262, 4-312, 4-404, 4-405, 4-425 precipitation, 4-344 computational model, 4-415, 4-417 regional, 1-74, 1-123 development, 3-267 scenarios, 4-341 initiation, 3-198 science, 1-22, 1-52, 2-90, 2-405, 3-193, 4-location, 4-262, 4-313 381 A-2

temperature changes, 3-32 design basis flood, 4-454 trends, 4-335 event, 3-245 variability, 2-100, 4-137, 4-225, 4-371, 4- return period, 3-352 377 flood walkdown, 2-254 climate change, 1-22, 1-51, 1-63, 1-95, 1- dam, 1-210, 2-201, 2-244, 2-307, 2-329, 2-162, 1-188, 2-48, 2-77, 2-88, 2-98, 2- 338, 2-400, 3-15, 3-136, 3-149, 3-194, 102, 2-114, 2-168, 2-199, 2-307, 2-366, 3-197, 3-267, 3-314, 3-338, 3-405, 4-14, 3-19, 3-29, 3-35, 3-38, 3-115, 3-195, 3- 4-130, 4-208, 4-224, 4-228, 4-253, 4-398, 4-20, 4-30, 4-33, 4-98, 4-260, 4- 257, 4-278, 4-281, 4-312, 4-404, 4-425, 355, 4-364, 4-370, 4-378, 4-380, 4-383, 4-451, 4-476 4-454 assessments, 4-196 high temperature event frequency breach. See breach, dam/levee increase, 2-94 case study, 1-65, 1-74, 2-348, 2-378, 3-hydrologic implacts, 2-99 143, 3-333, 3-336, 3-355, 3-358, 4-125, mean changes, 2-99 4-213, 4-218, 4-238, 4-298, 4-329 precipitation changes, 2-91 computational model, 4-405 scenarios, 2-93 embankment. See embankment dam streamflow change, 2-98 erosion. See erosion: dam coastal, 1-148, 1-267, 4-34, 4-93, 4-317 failure, 1-6, 1-11, 1-37, 1-172, 1-227, 2-12, CSTORM, 2-379 2-34, 2-52, 2-276, 2-288, 2-322, 2-325, StormSim, 2-379 2-329, 2-340, 2-353, 2-409, 3-22, 3-26, coastal flood hazard assessment, 1-194 3-136, 3-197, 3-217, 3-266, 3-353, 3-Coastal Hazard System, 2-379, 3-328 371, 3-374, 3-378, 3-388, 3-395, 4-14, coincident and correlated flooding, 2-40, 3- 4-228, 4-295, 4-318, 4-322, 4-455, 4-10, 3-15, 3-395, 3-403, 4-15, 4-19, 4- 476 318, 4-448 failure analysis, 4-324 coincident events, 1-12, 2-43, 2-332, 3-15, models, 1-159, 3-191 4-15, 4-86 operations, 2-384 combined effects, 1-12, 1-30, 2-43, 4-432, Oroville, 3-339, 3-361, 3-389, 4-258 4-440 overtopping, 3-277, 3-303, 3-367, 4-330, combined events, 1-25, 1-31, 1-37, 1-133, 4-333, 4-407 2-89, 2-356, 2-419, 3-318, 3-380, 3-386, physical model, 1-209, 1-216, 3-268, 4-4-95, 4-440, 4-451, 4-454, 4-456, 4-477 405 combined processes, 1-25 potential failure modes, 2-340 compound event framework, 4-320 regulation, 1-155, 1-188, 4-289 concurrent hazards, 1-228, 2-276, 3-374, releases, 2-97, 3-37, 4-287, 4-318, 4-363 3-377 risk, 1-24, 2-378, 2-416, 3-138, 3-197, 3-correlated hazards, 2-52, 2-410, 3-26 369, 3-400, 4-20, 4-287, 4-320, 4-334 confidence interval, 1-72, 1-157, 3-15, 3-139, risk assessment, 4-321 4-14, 4-199, 4-214 safety, 1-151, 1-211, 2-203, 2-400, 2-404, confidence limits, 1-178, 1-194, 1-199, 2-36, 3-135, 3-202, 3-331, 3-353, 4-114, 4-2-196, 3-94, 3-108, 4-57, 4-69, 4-232, 4- 124, 4-130, 4-158, 4-161, 4-163, 4-209, 253 4-217, 4-224, 4-227, 4-229, 4-231, 4-NOAA Atlas 14, 2-373 279, 4-323, 4-369 convective potential energy. See CAPE system of reservoirs, 3-334 correlation system response, 3-354 spatial and temporal, 2-340, 3-307 data cumulative distribution function. See CDF collection, 4-458 current design basis, 1-10, 1-23, 1-247, 2-21, regional information, 1-154 2-42, 2-202, 2-255, 3-12, 3-154, 4-381, transposition, 4-123 4-480 A-3

data, models and methods, 1-136, 1-197, 1- distribution, 1-71, 1-153, 2-151, 2-179, 2-207, 2-53, 2-57, 2-62, 3-94, 3-96, 3-99, 187, 2-245, 2-270, 2-307, 2-369, 3-70, 3-104, 3-320, 4-57, 4-59, 4-268 3-96, 3-143, 3-315, 4-81, 4-125, 4-159, model choice, 3-312 4-163, 4-256, 4-260, 4-275, 4-315 model selection, 3-312 Asymmetric Exponential Power (AEP4), 2-DDF. See depth-duration-frequency 193, 2-197, 2-200 decision-making, 1-23, 1-32, 1-36, 2-30, 2- empirical, 4-64 246, 2-271, 2-395, 3-136, 3-248, 3-337, exponential, 1-165, 1-208, 2-63, 2-207 3-400, 4-31, 4-34, 4-117, 4-129, 4-243, extreme value, 2-151, 2-155, 3-70, 3-74 4-276, 4-465, 4-476 flood frequency, 2-207, 2-246, 3-117, 3-dendrochronology, 2-220, 2-222, 3-124, 3- 126, 4-208 190, 4-229 full, 2-205 botanical information, 4-216 Gamma, 2-63, 2-347 tree ring estimate, 3-123 generalized skew normal (GNO), 1-80, 1-tree rings, 3-124, 3-183 83, 2-159, 2-187, 2-193, 2-200, 2-373, deposits, 2-216, 2-244, 3-116, 3-182, 3-188, 3-77 3-190, 3-212, 3-234, 4-241, 4-243, 4- generalized extreme value (GEV), 1-80, 1-259 83, 1-175, 1-207, 1-258, 2-63, 2-159, 2-alluvial, 2-245 163, 2-174, 2-179, 2-187, 2-193, 2-197, bluff, 3-187 2-200, 2-207, 2-318, 2-346, 2-373, 3-70, boulder-sheltered, 2-239, 3-188, 4-250 3-77, 4-111, 4-119, 4-149, 4-157, 4-224, cave, 2-220, 2-222, 2-240, 3-187, 4-229 4-261, 4-343, 4-360 flood, 2-223, 2-225, 2-227, 2-241, 2-242, generalized logistic (GLO), 1-83, 1-84, 2-2-245, 3-163, 3-171, 3-173, 3-185, 3- 159, 2-193, 2-197, 2-373, 3-77 190, 3-196, 3-200, 3-213, 4-238, 4-243 generalized Pareto (GPA or GPD), 1-83, paleoflood characterization, 4-239 1-155, 1-196, 1-207, 2-63, 2-159, 2-187, slackwater, 2-220, 3-124, 3-186, 3-362, 4- 2-193, 2-197, 3-77, 4-224 229, 4-230 GNO (generalized skew normal), 2-197 surge, 4-259 Gumbel, 1-155, 1-196, 1-207, 2-63, 2-346, terrace, 2-220, 2-245, 3-124, 3-183, 3-184 4-205, 4-328 depth-duration-frequency, 2-372, 4-330 Kappa (KAP), 2-174, 2-177, 2-193, 2-200, deterministic, 1-30, 1-35, 1-149, 1-151, 1- 2-373, 3-358, 4-218, 4-307, 4-332 257, 2-8, 2-38, 2-71, 2-83, 2-179, 2-205, log Pearson Type III (LP-III), 1-155, 1-178, 2-260, 2-286, 2-323, 2-337, 2-408, 2- 2-36, 2-187, 2-194, 2-199, 4-208, 4-214, 410, 3-10, 3-22, 3-28, 3-103, 3-140, 3- 4-257, 4-261 246, 3-259, 3-262, 3-374, 3-391, 3-393, lognormal, 1-155, 1-207, 2-63, 2-66, 2-3-395, 4-13, 4-27, 4-31, 4-56, 4-122, 4- 207, 3-100, 4-229 126, 4-130, 4-158, 4-175, 4-293, 4-383, lognormal 3, 2-200 4-386, 4-454, 4-475, 4-477, 4-481 low frequency tails, 2-65 analysis, 2-179, 2-246, 2-322, 2-337, 3- marginal, 4-60, 4-70 390, 4-85, 4-382 multiple, 2-53, 2-187, 2-403, 3-117, 4-257 approaches, 1-6, 1-28, 1-73, 2-26, 2-50, 2- mutltivariate Gaussian, 3-102 154, 2-322, 2-337, 2-409, 3-24, 4-24, 4- normal, 1-207, 2-63, 2-171, 4-49, 4-52, 4-199, 4-470 69, 4-205, 4-229 criteria, 2-168, 2-400 parameters, 2-179, 2-188 focused evaluations, 2-21 Pearson Type III (PE3), 1-83, 2-159, 2-Hydrometerological Reports, HMR, 1-185 193, 2-197, 2-373, 3-77, 4-224 increasing realism, 2-332 Poisson, 1-165, 1-198 methods, xxxviii, 1-29, 2-25, 2-202, 4-472 posterior. See Bayesian: posterior model, 1-151, 1-243, 2-88, 3-29, 3-304, 4- distribution 330, 4-355, 4-382 precipitation. See precipitation:distribution A-4

prior. See Bayesian: prior distribution Environmental Factors, 1-19, 1-21, 1-223, 1-probability, 3-99, 4-89 238, 2-31, 2-47, 2-271, 2-276, 2-415, 3-quantiles, 2-155 19, 3-250, 3-398, 4-20, 4-441 tails, 2-207 epistemic uncertainty. See uncertainty, temporal, 1-160, 2-179, 4-121, 4-290 epistemic triangle, 4-205, 4-208, 4-229, 4-328 erosion, 1-11, 1-153, 1-222, 2-245, 3-15, 3-type, 3-101 261, 4-14, 4-81, 4-96, 4-230, 4-330, 4-uniform, 4-205, 4-208, 4-257, 4-328 334, 4-404, 4-417 Wakeby (WAK), 1-83, 2-159, 2-193, 2- dam, 3-271, 3-284, 3-292, 3-302, 3-303, 4-197, 2-373, 3-77 407, 4-414, 4-424 Weibull (WEI), 1-155, 1-196, 1-207, 2-63, embankment, 1-19, 1-21, 2-47, 3-19, 3-2-69, 2-187, 2-193, 2-197, 2-200, 3-100, 277, 3-292, 3-301, 4-19, 4-407 3-103, 4-328 rockfill, 1-209, 4-404, 4-424 Weibull plotting position, 4-64 zoned, 3-267, 4-422, 4-424 Weibull type, 4-68 zoned rockfill, 3-267, 4-404 EC. See Environmental Conditions equations, 4-420 EHCOE. See External Hazard Center of erodibility parameters, 3-273, 3-303, 4-Expertise 404, 4-415, 4-422 EHID. See Hazard Information Digest headcut, 3-267, 4-414, 4-416, 4-418 EMA. See expected moments algorithm internal, 1-213, 3-136, 3-267, 3-272, 3-embankment dam, 1-21, 1-148, 1-209, 2-47, 290, 3-292, 3-300, 3-302, 3-303, 4-416 3-19, 3-267, 3-269, 3-272, 3-276, 3-336, parameters, 1-221, 3-285 4-19, 4-424 processes, 1-21, 1-148, 1-221, 3-270, 4-erosion. See erosion: embankment 407, 4-425 rockfill, 1-216, 3-273, 4-330, 4-404 rates, 1-221, 3-267, 3-285, 4-404, 4-415 zoned rockfill, 3-274 resistance, 3-267, 3-270, 4-407, 4-417 ensemble, 1-85, 1-124, 1-144, 2-100, 2-152, spillway, 3-136, 3-343, 4-211 2-161, 3-81, 3-86, 4-41, 4-52, 4-56, 4- surface, 2-330, 3-267, 3-284, 4-414, 4-97, 4-114, 4-117, 4-123, 4-381 416, 4-418, 4-422, 4-424 approaches, 4-123 tests, 1-209, 1-215, 1-217, 3-267, 3-286, Global Ensemble Forecasting System, 4-404, 4-405 GEFS, 4-35, 4-56 error, 1-35, 1-125, 1-166, 1-195, 2-56, 2-200, gridded precipitation, 2-152, 2-160, 3-71, 2-317, 3-67, 3-105, 4-34, 4-41, 4-57, 4-3-81, 3-86, 3-89 76, 4-87, 4-90, 4-95, 4-102, 4-228, 4-models, 4-55, 4-56 262, 4-468 real-time, 4-49 Bayesian Total Error Analysis, BATEA, 1-storm surge, 4-34, 4-35, 4-36 161 ENSO. See Multi-decadal:El Nino-Southern bounds, 3-116, 3-117 Oscillation defined space, 4-35 Environmental Conditions, 1-21, 1-224, 2- distribution, 2-56, 4-49 271, 3-248 epistemic uncertainty, 3-94 impact quantification, 3-257 estimation, 4-108 impacts on performance, 2-280 forecasting, 4-35 insights, 3-256 instrument characteristic, 4-102 literature, 2-278, 3-252, 3-257 mean absolute, 4-62 method limitations, 2-284 mean square, 3-130 multiple, simultaneously occuring, 3-257 measurement, 1-161, 1-164, 4-262 performance demands, 2-275, 3-251 model, 1-162, 2-193, 2-403, 4-57, 4-69, 4-proof-of-concept, 2-273, 2-281, 3-251 79 standing and moving water, 2-279 operator, 2-284, 3-247, 3-257 quantification, 2-189, 4-59 A-5

random, 4-105, 4-107 extreme precipitation, 1-58, 1-90, 1-100, 2-relative, 3-48 88, 2-89, 2-104, 2-105, 2-153, 2-167, 3-root mean square, RMSE, 4-151, 4-306 33, 3-35, 3-40, 3-45, 3-70, 3-398, 4-101, sampling, 1-71, 2-192, 3-332, 4-79 4-110, 4-347, 4-354 seal installation, 2-267 change, 2-91 simulation, 1-197, 2-57, 2-102, 3-42, 3-67, classification, 1-92, 2-105, 3-44 3-97, 3-105 climate projections, 4-342 space, 4-35, 4-52 climate trends, 4-339 term, 2-53, 2-57, 2-73, 3-94, 3-96, 4-57, 4- Colorado/New Mexico study, 4-144, 4-159, 60, 4-228 4-383 unbiased, 3-97, 4-60 event, 1-91 undefined space, 4-35 increases, 2-94 EVA. See extreme value analysis spatial coherence, 4-337 evapotranspiration, 3-40 temporal coherence, 4-337 event tree, 1-22, 1-46, 1-260, 2-28, 2-288, 2- variability, 4-337 297, 2-300, 2-401, 2-405, 2-417, 3-301, extreme storm data, 3-334 3-303, 3-389, 4-324, 4-440 extreme storm database, 2-377 analysis, 4-313, 4-477 increase, 4-359 EVT. See extreme value theory frequency, 4-364 ex-control room actions, 4-474, 4-475 intensity, 4-364 expected moments algorithm, 1-156, 1-186, model, 1-65, 2-153, 3-72 1-188, 2-187, 2-194, 2-199, 2-207, 2- advances, 2-341 212, 2-214, 3-117, 3-122, 3-139, 3-141, risk, 4-337 3-149, 4-208, 4-214, 4-252, 4-257 extreme value analysis, 1-194, 3-328 expert elicitation, 1-135, 2-338, 2-343, 2-347, extreme value theory, 3-304, 3-313, 4-114, 3-326, 4-220, 4-226, 4-229, 4-313 4-151 external flood, 2-247, 2-259, 2-288, 3-22, 3- fault tree, 1-46, 1-260, 4-324 198, 4-385, 4-429 FHRR. See Near Term Task Force: Flooding equipment list, 3-262, 3-264, 4-435 Hazard Re-Evaluations operator actions list, 3-262, 3-264 FLEX, 2-24, 2-288, 2-304, 3-199, 3-248, 3-human action feasibility, 3-264 258, 3-263, 4-314, 4-381, 4-440 warning time, 3-264 flood, 2-415, 3-31 risks, 3-260 causing mechanisms, 4-318 scenarios, 3-132, 3-261 complex event, 4-449 external flood hazard, 2-290, 4-455 depths, 1-34 frequency, 2-79 design criteria, 3-352 model validation, 2-394 duration, 1-31, 1-34, 1-255, 2-30, 2-291 external flooding PRA. See XFPRA dynamic modeling, 1-255, 2-291, 2-304 External Hazard Center of Expertise, 2-15 elevations, 1-51 extratropical cyclone, 1-11, 1-17, 1-18, 1-58, event, 1-253, 2-289 1-91, 1-196, 2-77, 2-89, 2-97, 4-55, 4- extreme events, 1-172, 2-207, 4-466 98, 4-346, 4-355 gates, 4-473 reduced winter frequency, 4-362 hazard, 1-12, 1-153, 2-44, 3-16, 4-15 extreme event, 4-290 diverse, 4-447 extreme events, xxxvii, 1-56, 2-30, 2-88, 2- increase, 4-364 101, 2-168, 2-201, 2-307, 2-400, 3-29, mechanisms, 1-31, 1-132, 2-309, 2-325, 2-3-42, 3-140, 3-181, 3-193, 3-304, 3-313, 356, 4-432 3-371, 4-281, 4-315, 4-349, 4-381, 4- mitigation, 2-30 475 operating experience, 4-11 external events, 4-29 organizational procedure, 3-245 meteorology, 4-352 response, 3-245 A-6

risk, 1-177 comprehensive, CFHA, 1-152 riverine, 1-6, 1-16, 1-133, 1-148, 1-150, 1- influencing parameters, 4-202 168, 1-175, 1-267, 2-46, 2-202, 2-227, probabilistic analysis, 1-30 2-288, 2-338, 2-353, 2-355, 3-15, 3-18, re-evaluated, 1-248 3-22, 3-27, 3-115, 3-198, 3-246, 3-314, riverine, 2-307 4-11, 4-14, 4-24, 4-31, 4-164, 4-197, 4- scenarios, 4-458 228, 4-255, 4-265, 4-295, 4-311, 4-455 static vs. dynamic, 3-368 routing, 1-11 Flood Hazard Re-Evaluations. See Near runoff-induced riverine, 4-318 Term Task Force: Flooding Hazard Re-SDP example, 1-43 Evaluations simulation, 2-52 flood mitigation, 4-20, 4-472 situation, 4-202 actions, 3-379 sources, 4-456 approaches, 4-449 sparse data, 4-30 fragility, 3-381 stage, 4-480 proceduralized response, 3-245 warning time, 1-34, 2-30 procedures, 4-473, 4-475 flood events strategies, 2-254 Blayais, 4-465 flood protection, 1-255, 2-51, 2-248, 2-250, Cruas, 4-466 2-291, 3-22, 3-25, 3-242, 4-21, 4-24, 4-Dresden, 4-466 33, 4-472 Hinkley Point, 4-466 barrier fragility, 2-52, 2-410, 3-26, 3-395 St. Lucie, 4-466 criteria, 2-250 flood frequency, 2-30, 3-118, 3-398, 4-252, failure modes, 3-374 4-330, 4-473 features, 2-250, 3-245, 3-262, 3-265, 4-27, analysis, 1-13, 1-148, 1-150, 1-153, 1-172, 4-435 1-176, 1-180, 2-45, 2-81, 2-187, 2-190, fragility, 3-377, 3-379 2-202, 2-227, 2-244, 3-17, 3-116, 3-119, inspection, 2-250 3-126, 3-129, 3-135, 3-137, 3-142, 3- maintenance, 2-254 163, 3-199, 3-234, 3-325, 4-18, 4-246, oversight, 3-246 4-265, 4-474 reliability, 1-37 gridded, 3-92 survey, 2-257 methods, 1-13, 2-45, 3-17 testing methods, 2-250 benchmark, 4-33 training, 2-254 curve, 3-112, 3-355, 4-176, 4-253 work control, 3-245 extrapolation, 2-218 flood protection and mitigation, 1-11, 1-21, 2-extrapolation, 3-139 21, 2-43, 2-180, 2-271, 2-415, 3-13, 3-limits, 2-170 16, 3-150, 3-250, 4-11, 4-14 methods, 1-191 training, 3-245 flood hazard, 1-10, 1-27, 1-30, 2-16, 2-42, 2- flood seals, 1-19, 1-44, 1-223, 1-265, 2-19, 43, 2-182, 2-309, 3-12, 3-151, 3-371, 4- 2-47, 2-247, 2-251, 2-260, 2-265, 3-19, 14, 4-327, 4-473 3-235, 3-240, 4-20, 4-384, 4-392, 4-393, curves, 4-266 4-402, 4-403, 4-426, 4-473 combining, 4-219 characeristic types and uses, 1-266, 2-family of, 2-54, 3-108, 3-380, 4-71, 4- 262, 3-237, 4-386, 4-394, 4-397 267, 4-475 condition, 4-387, 4-435 dynamics, 3-385 critical height, 4-435 flood hazard analysis, 3-354 failure mode, 4-387 case study, 4-191 fragility, 3-381 riverine pilot, 2-50 historic testing, 2-251 flood hazard assessment, 1-29, 3-328, 3- impact assessment, 4-387 336, 4-318 A-7

performance, 1-19, 2-47, 2-261, 3-19, 3- GLO. See distribution:generalized logistic 235, 4-393 Global Climate Model, 1-128, 1-162, 2-53, 2-ranking process, 4-388 55, 2-63, 2-67, 2-71, 2-77, 2-96, 2-99, 2-risk significance, 4-386 403, 3-41, 3-47, 3-94, 3-100, 3-103, 4-tests, 1-20, 1-265, 2-262, 3-236, 4-394 99, 4-114, 4-163, 4-260, 4-360 criteria development, 2-251 downscaling, 2-55, 3-102 plan, 2-264, 3-238, 4-395 model forcing, 2-71 procedure, 1-265, 3-239, 4-396 Global Precipitation Measurement, GPM, 4-results, 4-400, 4-401 100, 4-117 series, 4-397 global regression model, 4-61 Focused Evaluations. See Fukushima Near global sensitivity analysis, 4-198, 4-327 Term Task Force: Focused Evaluations case studies, 4-202 FPM. See flood protection and mitigation simple case, 4-205 fragility, 1-11, 3-13, 4-14 GNO. See distribution:generalized skew analysis, 1-259 normal curve, 4-324 goodness-of-fit, 2-102, 2-187, 2-194 flood barrier. See flood protection: barrier tests, 1-71 fragility GPA. See distribution: generalized Pareto framework GPD. See distribution:generalized Pareto NARSIS, 4-327 GPM. See Gaussian process metamodeling simulation based dynamic flood anlaysis Great Lakes, 3-31 (SBDFA), 1-253, 1-256, 2-292 water levels, 4-366 TVA Probabilistic Flood Hazard decreases, 4-368 Assessment, 2-320, 2-404, 4-277 lowered, 3-40 scenarios, 4-282 GSA. See global sensitivity analysis Fukushima Near Term Task Force, 1-9, 1- hazard 23, 1-27, 1-32, 2-17, 2-20, 3-263, 4-11, analysis, 3-349, 4-450 4-386 assessment, 3-22 Flooding Hazard Re-Evaluations, 1-23, 4- hydrologic, 3-136, 3-195, 4-115 440, 4-471, 4-480 identification, 2-82 Fukushima Flooding Reports, 4-471 probabilistic approach, 4-471 re-evaluated flooding hazard, 4-480 quantification, 2-315 Focused Evaluations, 3-263, 4-471 hazard curves, 1-11, 1-51, 1-164, 2-43, 2-68, Integrated Assessment, 2-21, 3-263, 4- 2-84, 2-218, 3-13, 3-100, 3-104, 3-332, 386 4-14, 4-90, 4-474, 4-477 Mitigating Strategies Assessments, 3-263, comparison, 4-281 4-440, 4-475 full, 1-12, 2-43, 3-15, 4-15 post Fukushima process, 4-472 full range, 2-30 Recommendation 2.1, 4-480 integration, 4-60, 4-70 Recommendation 2.3, 4-435, 4-479 MCI, 2-70 Gaussian, 2-67 MCLC, 2-69 Gaussian process metamodeling, 3-102, 4- weight and combine methods, 4-210 59, 4-61 Hazard Information Digest local correction, 4-61 External, 3-149, 3-399 uncertainty, 4-61 Flood, 1-13, 1-223, 1-241, 2-45, 2-180, 2-GCM. See Global Climate Model, See Global 181, 2-186, 2-413, 3-17, 3-149, 3-161, Climate Model 4-18 GEFS. See ensemble:Global Ensemble flood beta, 2-183, 3-152 Forecasting System flood workshop, 1-252, 2-183, 3-152 GEV. See distribution:generalized extreme Natural, 3-151 value population, 2-183, 3-152 A-8

hazardous convective weather, 1-57, 1-60, organizational behavior, 3-379, 3-382, 3-3-31, 3-36, 3-40, 4-368 385, 4-473 NDSEV, 3-35 organizational response, 4-473, 4-479 NDSEV increase, 4-361 humidity, 1-53, 4-358 severe weather, 4-30 HURDAT, 1-207 monitoring, 3-245 hurricane, 1-57, 1-95, 2-51, 2-53, 2-77, 2-81, HCW. See hazardous convective weather 2-89, 2-105, 2-407, 3-26, 3-37, 3-43, 3-headcut. See erosion: headcut 111, 3-247, 3-393, 4-25, 4-34, 4-35, 4-HEC, 3-195, 3-201 73, 4-98, 4-113, 4-259, 4-326, 4-370, 4-

-FIA, 4-261 380, 4-480

-HMS, 2-376, 3-202, 4-166, 4-263 2017 season, 4-371 MCMC optimization, 2-376 Andrew, 4-474

-LifeSim, 4-261 Category, 4-41, 4-98

-MetVue, 2-377 Florence, 4-481 models, 4-312 Frances, 1-101

-RAS, 4-166, 4-207, 4-230, 4-244 Harvey, 3-180, 3-329, 3-361, 3-367, 3-391,

-RAS 2D hydraulics, 2-377 4-95, 4-114, 4-124, 4-160, 4-259

-ResSim, 4-166, 4-258 Ike, 4-56

-SSP, 4-262 Isaac, 3-53, 3-69

-SSP, flood frequency curves, 3-334 Katrina, 1-194, 2-53, 4-263

-WAT, 2-378, 4-161, 4-165, 4-166, 4-256, Maria, 4-211 4-261, 4-263, 4-313, 4-316 Sandy, 4-259 FRA, 4-196 hydraulic, 2-226, 2-266, 2-288, 2-307, 2-354, hydrologic sampler, 4-191 2-400, 3-198, 3-199, 3-234, 3-315, 4-MCRAM runs, 2-378 144, 4-170, 4-230, 4-254, 4-257, 4-262, HEC-RAS, 4-191, 4-236 4-326 historical detailed channel, 1-11 data, 1-96, 3-117, 3-120, 3-122, 3-131, 4- models, 1-133, 1-158, 1-186, 2-311, 2-30, 4-215, 4-269 420, 3-195, 4-60, 4-70, 4-198, 4-326 flood information, 1-154 dependent inputs, 4-326 floods, 1-187 hydraulic hazard analysis, 2-324 intervals, 3-131 hydrologic observations, 1-55, 3-80 loading, 4-232 peak, 1-155, 3-123 models, 1-63, 1-133, 1-158, 2-311, 2-376, perception thresholds, 3-131 4-123, 4-282, 4-331, 4-381 records, 2-62, 3-21, 3-183 risk, 1-15, 2-46, 3-18, 4-329 records extrapolation, 2-80 routing, 2-387 spatial patterns, 4-141 runoff units (HRUs), 3-143 streamflow, 1-183 simplified model, 3-337 water levels, 2-50, 3-24, 3-113 simulation, 4-279 homogeneous region, HR, 1-71, 1-77, 2-151, hydrologic hazard, 2-378, 3-331, 4-211 2-155, 2-159, 2-167, 3-70, 3-75, 3-83 analysis, 3-334, 4-115 human factors, 3-388, 4-471 analysis, HHA, 1-85, 2-207, 3-136, 4-114, HRA, 2-30, 4-475 4-125 HRA/HF, 1-24 curve, 1-15, 1-170, 2-45, 2-204, 2-340, 3-human actions, 2-19, 3-385, 4-446, 4-473 17, 4-130, 4-219, 4-329 Human Error Probabilities, 2-280 stage frequency curve, 4-213 human errors, 2-293 Hydrologic Unit Code, HUC, 4-149 human performance, 2-273, 3-251 watershed searching, 4-150 human reliability, 4-474 hydrology, 2-151, 2-202, 2-226, 2-307, 2-operator actions, 4-474 338, 2-354, 2-369, 2-400, 2-411, 3-70, A-9

3-135, 3-195, 3-304, 3-315, 3-325, 3- 2-287, 2-291, 2-297, 2-322, 2-326, 2-366, 3-387, 4-114, 4-122, 4-127, 4-144, 337, 2-341, 2-353, 2-370, 2-421, 3-19, 4-161, 4-170, 4-211, 4-229, 4-244, 4- 3-22, 3-42, 3-47, 3-198, 3-246, 3-314, 3-276, 4-313, 4-381 315, 4-19, 4-24, 4-264, 4-295, 4-311, 4-initial condition, 1-90, 1-95, 2-104, 3-44 455 Integrated Assessments. See Fukushima analysis, 4-480 Near Term Task Force:Integrated framework, 1-17, 2-46, 2-104, 3-18 Assessment screening, 3-369 internal flooding, 3-25, 4-386 severe storm, 1-90, 3-46, 4-361 scenarios, 3-25 numerical simulation, 1-90, 1-95 inundation logic tree, 2-56, 2-63, 2-85, 2-369, 3-94, 3-mapping, 3-367, 3-368 97, 3-107, 3-114, 4-57, 4-81, 4-86, 4-93 dyanamic, 3-368 branch weights, 4-91 modeling, 4-176 LP-III. See distribution:log Pearson Type III period of, 3-261 manual actions, 1-21, 1-31, 2-272, 2-415, 3-river flood anlysis, 4-327 245, 3-250, 3-398, 4-449, 4-473 JPM, joint probability method, 1-35, 1-195, 1- decomposing, 2-275 199, 1-209, 2-34, 2-53, 2-56, 2-74, 2-77, modeling time, 3-257 3-94, 3-99, 3-112, 4-25, 4-57, 4-64, 4- reasonable simulation timeline, 3-246 73, 4-77, 4-88, 4-228, 4-318 timeline example, 3-256 integral, 1-199, 2-56, 3-97, 4-60 maximum likelihood, 1-156 parameter choice, 2-62 Bayesian, 1-186 storm parameters, 1-197, 1-207, 2-57, 3- estimation, 1-70, 2-404 97, 3-100, 4-68, 4-76 MCMC. See Monte Carlo:Markov Chain surge response function, 4-78 MCS. See mesoscale convective system JPM-OS, joint probability method, with MEC. See mesoscale storm with embedded optimal sampling, 1-194, 1-196, 2-53, 2- convection 55, 2-73, 2-77, 3-94, 3-102, 4-81 mesoscale convective system, 1-18, 1-57, 1-hybrid methodlogy, 2-68 59, 1-64, 1-91, 1-97, 1-100, 1-111, 1-KAP. See distribution:Kappa 123, 2-101, 2-104, 2-112, 2-150, 3-29, kernel function, 2-56, 3-99, 4-68 3-31, 3-33, 3-42, 3-47, 3-49, 3-52, 3-67, Epanechnikov, EKF, 2-58, 2-65, 3-98 4-133, 4-355 Gaussian, GKF, 1-200, 1-202, 2-58, 2-60, intense rainfall increase, 4-361 3-98, 4-99 precipitation increase, 3-40, 4-368 normal, 2-65 rainfall, 4-360 triangular, 2-65 reduced speed, 4-361 uniform, UKF, 2-60, 2-65, 3-98 simulations, 2-144 land use, 1-24, 2-420 mesoscale storm with embedded convection, urbanization, 2-98 2-381, 3-357, 4-128, 4-135, 4-142, 4-land-atmosphere interactions, 1-57 159, 4-161, 4-218 levee Meta-models, 4-61, 4-206 breach. See breach, dam/levee Meta-Gaussian Distribution, 4-59, 4-64, 4-likelihood, 3-78 69 functions, 1-166 example, 4-67 LIP. See local intense precipitation meteorological L-moment ratio, 2-194, 3-77 inputs, 4-132 diagram, 2-174 model, 1-133, 1-158, 2-311 local intense precipitation, 1-6, 1-17, 1-22, 1- MGD. See Meta-models:Meta-Gaussian 34, 1-54, 1-64, 1-76, 1-88, 1-100, 1-130, Distribution 1-133, 1-144, 1-223, 1-255, 2-34, 2-47, mid-latitude cyclone, 2-382, 4-120, 4-128, 4-2-50, 2-97, 2-101, 2-103, 2-168, 2-175, 133 A-10

Midwest, 4-357, 4-368 NACCS. See North Atlantic Coast floods, 4-363 Comprehensive Study intense snowpack, 4-363 NAO. See Multi-decadal:North Atlantic Region, 3-31 Oscillation MLC. See mid-latitude cyclone National Climate Assessment, 4th, 3-42, 4-model, 1-90 335 alternative conceptual, 4-470 NCA4. See National Climate Assessment, averaging, 2-352 4th dependence, 3-310 NEB. See non-exceeedence bound improved, 1-12, 2-44, 3-16, 4-15 NEUTRINO, 4-291, 4-297, 4-314, See also nested domain, 3-53 smoothed particle hydrodynamics, SPH nested grids, 4-55 NOAA Atlas 14, 1-72, 1-185, 2-158, 2-168, numerical modeling, 1-97, 4-327 2-171, 2-179, 2-181, 2-201, 3-87, 4-127, nested domain, 1-101 4-144 parameter estimation, 2-313 future needs, 2-372 parameters, 4-176 gridded, 1-73 selection, 2-346 tests, 2-373 warm-up, 2-385 non-exceedance bound, 4-229, 4-230, 4-moisture 236, 4-238 maximization, 3-45 nonstatitionarity/nonstationary, 1-37, 1-155, saturation deficit, 1-61 1-162, 1-177, 1-188, 1-191, 3-117, 3-saturation specific humiity profile, 1-58 133, 3-315, 4-264 sources, 1-76 change points, 3-125, 3-127 water vapor, 1-61, 4-347 model, 2-373 Monte Carlo, 1-163, 1-185, 2-77, 2-187, 2- processes, 1-12, 1-55, 2-44, 3-16, 4-15 286, 2-411, 3-23, 3-79, 3-93, 3-94, 3- trends, 3-125, 3-128 199, 4-57, 4-162, 4-175, 4-257, 4-330 North Atlantic Coast Comprehensive Study, analysis, 3-21, 3-111 1-196, 2-53, 3-102, 4-94, 4-99 Integration, 2-70, 3-103 numerical weather models, 1-18, 1-89, 1-95, Life-Cycle Simulation, 2-69, 3-103, 4-64 2-104, 3-44, 3-103, 4-55 Markov Chain, 1-161, 1-171, 2-402 regional, 2-104, 3-45 sampling, 4-201 observations, 1-71 simulation, 2-55, 2-74, 2-81, 2-85, 3-102, based, 3-81 3-111, 3-113, 3-328, 4-59 data, 1-95 MSA. See Fukushima Near Term Task record, 3-121 Force: Mitigating Strategies satellite Assessments combination algorithms, 4-105, 4-108, 4-Multi-decadal 112 Atlantic Meridional Mode (AMM), 4-370, 4- combinations, 4-104 373, 4-376, 4-379 mutli-satellite issues, 4-108 Atlantic Multi-Decadal Oscillation (AMO), operating experience, 1-31, 4-447, 4-473 4-373 data sources, 4-465 El Nino-Southern Oscillation (ENSO), 1- operational event, 4-464 206, 4-370, 4-373, 4-376, 4-379 chronology review, 4-466 North Atlantic Oscillation (NAO), 4-370, 4- orographic precipitation. See precipitation, 374, 4-376, 4-379 orographic Pacific Decadal Oscillation (PDO), 4-354 paleoflood, 1-24, 1-154, 1-181, 2-87, 2-216, persistence, 4-113, 4-354 2-217, 2-225, 2-369, 2-400, 2-407, 2-multivariate Gaussian copula, 3-104, 4-59 416, 3-21, 3-26, 3-116, 3-117, 3-136, 3-MVGC. See multivariate Gaussian copula 140, 3-163, 3-179, 3-181, 3-195, 3-207, A-11

3-325, 3-393, 4-18, 4-208, 4-228, 4-244, high level requirements, 4-459 4-253, 4-259, 4-290 paleoflood based, 4-289 analytical framework, 4-233 results, 4-459 analytical techniques, 4-242 river, 4-207 benchmark, 4-252 statistical case study, 4-234, 4-236 model, 2-84 data, 1-181, 1-186, 2-51, 2-81, 2-206, 2- team, 4-458 219, 3-113, 3-117, 3-120, 3-123, 3-141, PFSS 3-179, 3-333, 3-394, 4-30, 4-215, 4-221, historic water levels, 2-81, 3-111 4-246, 4-269 pilot studies, 3-70, 3-386, 3-404, 4-11, 4-16, database, 3-208, 3-213 4-22, 4-312, 4-440 deposits. See deposits pilot studies, 2-418 event, 3-139 plant response, 1-255, 2-20, 2-289, 2-291, 3-hydrology, 2-229, 3-164, 4-247 261, 3-398, 4-20 ice jams, 4-235 model, 1-260, 3-377 indicators, 3-181 proof of concept, 1-255 interpretation, 3-394 scenarios, 1-260 reconnaissance, 2-235, 3-168, 4-233, 4- simulation, 1-22 237 state-based PRA, 1-260 record length, 4-247 total, 1-253, 2-304, 2-415 screening, 4-242 PMF, 1-150, 2-25, 2-80, 2-202, 2-205, 2-400, studies, 3-333 3-21, 3-141, 3-149, 3-266, 3-355, 3-390, humid environment, 2-228, 3-163 4-230, 4-454, 4-474 suitability, 2-235, 3-167, 3-394 PMP, 1-50, 1-56, 1-66, 1-69, 1-73, 2-25, 2-terrace, 4-236, 4-242 153, 2-168, 2-169, 2-179, 2-405, 3-69, viability, 4-234 3-149, 3-391, 4-114, 4-117, 4-120, 4-partial-duration series, 1-165, 2-201, 2-373 158, 4-160, 4-383 PCHA. See Probabilistic Coastal Hazard State SSPMP Studies, 3-338 Assessment traditional manual approaches, 2-104 PDF. See probability density function PRA, 1-11, 1-42, 1-256, 2-24, 2-28, 2-43, 2-PDO. See Multi-decadal:Pacific Decadal 79, 2-168, 2-179, 2-202, 2-216, 2-268, Oscillation 2-287, 2-289, 2-337, 2-370, 2-401, 2-PDS. See partial-duration series 417, 2-421, 3-1, 3-13, 3-21, 3-25, 3-199, PFA. See precipitation frequency: analysis 3-259, 3-266, 3-315, 3-365, 3-368, 3-PFHA, 1-257, 2-79, 2-218, 3-307, 3-353, 4- 386, 3-390, 3-396, 3-405, 4-14, 4-264, 10, 4-453, 4-477 4-312, 4-323, 4-385, 4-391, 4-403, 4-case study, 2-380 429, 4-461, 4-462, 4-463, 4-469, 4-471, combining hazards, 4-207 4-474 documentation, 4-460 bounding analysis, 4-468 framework, xxxviii, 1-12, 1-16, 1-148, 1- dams, 1-24 157, 1-163, 1-166, 1-175, 2-44, 2-46, 2- dynamic, 1-22 307, 2-311, 2-322, 2-338, 2-345, 2-353, external flood. See XFPRA 2-401, 3-16, 3-18, 3-304, 3-359, 3-398, initiating event frequency, 1-47, 2-79 4-11, 4-15, 4-19, 4-455 inputs, 1-132 aleatory, 1-163 insights, 4-476 peer review, 2-87 internal flooding, 3-262, 4-440 regional analysis, 2-342, 2-348 LOOP, 4-469, 4-474 riverine, 1-16, 2-46, 2-308, 2-312, 2-413, peer review, 4-461 3-18 performance-based approach, 4-451 site-specific, 2-309 plant fragility curve, 4-476 hierarchical approach, 4-458 quantitative insights, 4-464 A-12

recovery times, 4-469 4-146, 4-158, 4-161, 4-218, 4-228, 4-risk 282, 4-290, 4-312, 4-315 information, 4-464 analysis, 1-66, 1-73, 1-175, 3-74, 4-128, 4-insights, 4-478 138 safety challenge indications, 4-465 curve, 3-75 Standard, 3-377 estimates, 4-144 precipitation, 1-11, 1-53, 1-64, 1-160, 1-267, exceedance, 2-95 2-88, 2-168, 2-179, 2-181, 2-201, 2-226, large watershed, 3-359 2-260, 2-270, 2-288, 2-307, 2-353, 2- regional analysis, 4-133 369, 2-381, 2-402, 3-15, 3-27, 3-31, 3- relationship, 1-67, 1-85, 1-87, 3-73, 4-129 38, 3-40, 3-42, 3-52, 3-56, 3-67, 3-115, precipitation, orographic 3-134, 3-136, 3-150, 3-162, 3-198, 3- linear model, 1-86 248, 4-11, 4-14, 4-56, 4-100, 4-113, 4- methodology, 1-66 127, 4-144, 4-158, 4-210, 4-218, 4-228, regions, 1-17, 1-65, 2-153, 2-156, 2-167, 4-315, 4-326, 4-335, 4-353, 4-359, 4- 2-414, 3-72, 3-398, 4-18 380 pressure setup, 4-36, 4-37 classification, 2-105, 3-45 Probabilistic Coastal Hazard Assessment, 3-cool season, 3-307 328 distribution, 3-363, 4-114 Probabilistic Flood Hazard Assessment. See duration, 2-155, 2-179, 3-74 PFHA field area ratio, 3-48 Probabilistic Risk Assessment. See PRA gridded, 2-161, 3-81 probabilistic safety assessments, 4-472, 4-historical analysis, 1-19 474 increases, 3-40, 4-359, 4-364, 4-368 probabilistic seismic hazard assessment, 1-instrumentation, 4-102 30, 2-58, 3-94, 4-57, 4-59, 4-477 modeling framework, 3-46 probabilistic storm surge hazard near-record spring, 3-37 assessment, 2-53, 2-78, 4-81 numerical modeling, 1-17 probability density function, 1-57, 1-133, 1-patterns, 4-120, 4-140 152, 1-163, 1-164, 1-201, 2-79, 2-85, 3-point, 2-382, 2-417, 3-359, 4-18, 4-101, 4- 113, 4-205, 4-207, 4-316 146 probable maximum flood. See PMF processes, 1-90 probable maximum preciptiationrecipitation.

quantile, 3-74 See PMP regional models, 4-117 PSHA. See probabilistic seismic hazard seasonality, 1-72, 2-171, 2-382, 3-32 assessment simulation, 1-89, 2-103, 3-48 PSSHA. See probabilistic storm surge warm season, 2-340, 3-33, 3-38 hazard assessment precipitation data, 3-156, 4-147 rainfall. See precipitation/rainfall fields, 1-125 rainfall-runoff, 4-210 gage, 1-79, 2-156, 3-83, 4-117 methods, 1-15, 2-46, 3-18 geo0IR, 4-102 model, 1-11, 1-152, 1-157, 1-183, 2-211, Liveneh, 3-308, 4-119, 4-143 2-384, 2-386, 2-398, 3-15, 3-143, 4-14, microwave imagers, 4-102 4-134, 4-217 observed, 1-96, 1-181, 2-154, 3-48, 3-140 Austrailian Rainfall and Runoff Model, 1-regional, 1-181 70, 1-73, 1-150, 1-185, 2-212 satellite, 4-101, 4-104, 4-112 SEFM, 1-151, 2-213, 2-216, 3-23, 3-28, precipitation frequency, 1-19, 1-64, 1-185, 2- 3-149, 4-276, 4-316, 4-329 151, 2-154, 2-168, 2-181, 2-211, 2-270, stochastic, 1-151 2-372, 3-70, 3-72, 3-81, 3-150, 3-198, 3- stochastic, HEC-WAT, 3-334 224, 4-119, 4-127, 4-132, 4-141, 4-144, VIC, 4-119, 4-369 A-13

reanalysis, 2-56, 2-151, 4-114, 4-122, 4-125, screening, 4-124, 4-233, 4-268, 4-471, 4-4-143, 4-160, 4-269 473, 4-477 Climate Forecast System Reanalysis external flood hazard, 4-31 (CFSR), 1-95, 2-102, 2-113, 2-150, 3- Farmer, 1967, 4-477 47, 4-118 flood, 4-456 PRISM, 4-117, 4-163, 4-370 hazard, 2-82 Stage IV, 1-96, 1-100, 2-113 methods, 4-328 record length non-conservative, 4-477 effective, 3-126 Probabilistic Flood Hazard Assessment, 3-equivalent independent, ERIL, 2-175 369 equivalent, ERL, 4-159, 4-221, 4-230 SDP, 1-10, 1-41, 1-51, 1-248, 2-28, 2-42, 2-historical, 2-66 180, 3-12, 3-116, 3-149, 3-325 period of record, 2-53, 2-151, 2-373, 3-70, floods, 2-30 3-83, 3-136, 4-113 Seals, 1-44 regional growth curve, RGC, 1-77, 1-80, 1- sea level rise, 1-53, 2-89, 2-97, 4-86, 4-92, 4-84, 2-151, 2-155, 2-166, 3-75, 3-85, 3- 355, 4-381 89, 3-91 nuisance tidal floods, 2-93 uncertainty, 1-82 projections, 2-100 regional L-moments method, 1-71, 1-73, 1- SLR, 1-57 87, 1-185, 2-151, 2-154, 2-159, 2-161, sea surface temperature, SST, 4-370, 4-373 2-165, 2-167, 2-174, 2-179, 2-187, 2- anomalies, 4-374, 4-377, 4-378 201, 2-404, 3-70, 3-72, 3-77, 3-85, 3-93, SEFM. See rainfall-runoff:model:SEFM 3-143, 3-387, 4-127, 4-332 seiche, 1-6, 2-52, 2-409, 3-395, 4-318, 4-455 regional precipitation frequency analysis, 2- seismic, 1-6, 4-451 151, 2-154, 2-167, 3-70, 3-71, 3-72, 3- self-organizing maps, SOM, 1-77, 2-151, 2-75, 3-93, 3-144, 3-334, 4-218 157, 2-167, 3-70, 3-83, 3-93 reservoir, 4-170 Senior Seismic Hazard Assessment operational simulation, 4-279 Committee. See SSHAC rule-based model, 4-281 sensitivity, 4-76 system, 4-287 analysis, 4-326 RFA. See regional precipitation frequency analysis ranking, 4-200 analysis quantification, 4-476 RIDM. See Risk-Informed Decision-Making to hazard, 4-476 risk, 1-39, 1-50, 2-20, 2-154, 2-340, 2-380, 3- SHAC-F, 1-16, 1-64, 1-130, 2-46, 2-353, 3-21, 3-138, 4-166 18, 3-314, 3-325, 3-388, 4-264, 4-290, analysis, 1-51, 1-177, 2-203, 2-205, 2-401, 4-311 3-136, 3-149, 3-197, 3-217, 3-361, 4- Alternative Models, 1-142, 4-266 175, 4-462 coastal, 2-419, 3-403, 4-19 assessment, 4-92, 4-196, 4-233, 4-473 framework, 1-132, 1-133 computational analysis, 3-378 highly site specific, 3-319 qualitative information, 3-385 key roles, 2-360 risk informed, 1-6, 1-10, 1-29, 1-40, 1-149, 2- Levels, 4-268, 4-269, 4-271 42, 2-182, 2-392, 3-12, 3-151, 3-202, 4- LIDAR data, 4-271 10, 4-14, 4-129, 4-322, 4-451 LIP, 1-138, 1-142, 4-19 approaches, 2-26 LIP Project Structure Workflow, 3-318 oversight, 2-28 participatory peer review, 4-266 use of paleoflood data, 2-51 project structure, 2-360 Risk-Informed Decision-Making, 1-151, 2-24, LIP, 2-363 2-246, 2-288, 3-135, 3-198, 3-332, 3- riverine, 2-367, 3-323 337, 4-127, 4-210, 4-229, 4-279, 4-323, redefined levels, 3-322, 3-324 4-330 riverine, 2-366, 4-19 A-14

site-specific, 3-324 approach, 3-332 Work Plan, 1-135 inputs, 4-119 significance determination process. See SDP storm parameters, 4-74 skew simulation, 3-103, 3-328, 4-279, 4-281, 4-at-site, 4-214 320 regional, 4-214 storm generation, 4-140 SLOSH, Sea Lake and Overland Surges storm template, 3-145 from Hurricanes, 4-38 storm transposition, SST, 4-120 smoothed particle hydrodynamics, SPH, 1- weather generation, 3-334 263, 3-25, 3-378, 4-291, 4-296, See also Stochastic Event-Based Rainfall-Runoff NEUTRINO Model. See rainfall-runoff:model:SEFM validation, 4-306 storm snowmelt, 1-133, 2-340, 3-307, 4-217 local scale, 4-133 energy balance, 2-376 maximization, 4-120 extreme snowfall, 1-60 parameters, 4-41 flood, 1-183 patterns, 3-144, 3-364, 4-120, 4-257, 4-rain on snow, 2-97 276, 4-286, 4-332 site, 3-308 precipitation templates, 2-383 snow water equivalent, SWE, 3-306, 4- seasonality, 4-134, 4-331 224, 4-332 synoptic scale, 4-133 snowpack increased, 3-37 storm recurrence rate. See SRR VIC, snow algrorithm, 3-308 storm surge, 1-6, 1-17, 1-35, 1-57, 1-192, 1-soil moisture, 3-40 193, 2-34, 2-47, 2-53, 2-78, 2-87, 2-97, reduction, 1-57 2-259, 2-288, 2-322, 2-337, 2-369, 2-space for time, 1-77, 2-207 411, 3-19, 3-22, 3-24, 3-26, 3-29, 3-94, spillway. See erosion: spillway 3-109, 3-110, 3-112, 3-115, 3-198, 3-SRR, 1-196, 1-202, 2-57, 2-59, 3-96, 4-60, 4- 229, 3-328, 3-361, 3-364, 3-396, 4-25, 70, 4-86 4-30, 4-34, 4-35, 4-57, 4-70, 4-73, 4-81, models, 2-58, 3-98, 3-99 4-93, 4-228, 4-259, 4-295, 4-311, 4-317, rate models, 2-60 4-355, 4-382, 4-451, 4-455 sensitivity, 4-88 case study, 2-84 variability, 2-59 data partition, 4-70 SSCs, xxxviii, 1-152, 1-260, 1-265, 2-288, 2- deterministic, 2-331 307, 2-309, 2-353, 3-198, 3-262, 3-264, wind-generated wave and runup, 2-333 4-264, 4-429, 4-435, 4-440, 4-445 hazard, 2-54, 2-55, 4-84 flood significant components, FSC, 4-387 hurricane driven, 3-394 fragility, 3-371, 3-381, 4-32 model, 1-194, 4-75 safety, 4-472 numerical surge simulation, 3-105 SSHAC, 1-30, 1-64, 1-132, 2-85, 2-354, 3- PCHA Studies, 2-379 317, 4-93, 4-229, 4-264, 4-274, 4-313 probabilistic approaches, 2-50 Project Workflow, 3-321 Probabilistic Flood Hazard Assessment, 2-state-of-practice, 1-176, 4-61, 4-321, 4-444, 407, 3-393, 4-24 4-447 probabilistic model, 3-97, 4-60 statistical approaches, 1-179, 4-320 P-Surge model, 4-53 copula-based methods, 4-320 tidal height, 3-111 extreme value analysis, 4-320 total water level, 2-86 statistical models, 4-268, 4-269 uncertainty, 3-398, 4-19 streamflow based, 1-15, 2-46, 3-18 storm transposition, 2-81, 2-377, 3-21, 3-47, stochastic, 1-185, 1-257, 3-143 3-54, 3-357, 4-133, 4-281 flood modeling, 4-129, 4-132 storm typing, 2-381, 3-334, 3-356, 4-119, 4-model, 3-100, 4-458 133, 4-138, 4-217, 4-282, 4-286 A-15

large winter frontal storms, MLC, 3-357 3-67, 3-99, 3-101, 3-193, 4-14, 4-35, 4-scaling and placement, 3-359 51, 4-57, 4-61, 4-68, 4-73, 4-98, 4-125, seperation, 3-359 4-138, 4-346, 4-355, 4-370, 4-380 summer thunderstorm complexes, MEC, parameters, 2-65 3-357 P-Surge, 4-49 tropical storm remants variable cross track, 4-51 TSR, 3-357, 4-134 tropical storm remnant, 3-357 stratified sampling, 4-282 TSR, 2-382, 4-127 stratiform tsunami, 1-6, 2-52, 2-409, 2-420, 3-395, 4-leading, 1-93, 1-94 318, 4-455 parallel, 1-93, 1-94 model, 1-25 trailing, 1-93, 1-94 uncertainty, 1-36, 1-72, 1-125, 1-148, 1-167, stratigraphy, 3-163, 3-183, 3-199, 3-200, 3- 1-178, 1-187, 1-197, 2-30, 2-53, 2-74, 2-234, 4-18, 4-250 78, 2-87, 2-152, 2-165, 2-177, 2-179, 2-analysis, 2-227 187, 2-219, 2-270, 2-320, 2-338, 2-340, record, 4-251 2-377, 2-400, 2-403, 3-21, 3-29, 3-40, 3-streamflow 67, 3-71, 3-90, 3-94, 3-105, 3-119, 3-data, 3-157 126, 3-136, 3-138, 3-149, 3-163, 3-194, gage regional data, 1-181 3-202, 3-246, 3-304, 3-315, 3-326, 3-historical, 3-38 334, 3-389, 4-30, 4-34, 4-35, 4-57, 4-81, Structured Hazard Assessment Committee 4-88, 4-95, 4-114, 4-163, 4-196, 4-197, Process for Flooding. See SHAC-F 4-207, 4-228, 4-244, 4-254, 4-256, 4-structures, systems, and components. See 264, 4-275, 4-282, 4-291, 4-313, 4-355, SSCs 4-381, 4-426, 4-450, 4-462, 4-477 synoptic storms, 1-91, 2-105, 3-45 analytical, 4-242 synthetic Bayesian, 1-86 datasets, 2-62, 4-269 bounds, 1-89 storm, 2-67, 2-81, 2-386, 3-21, 3-96, 3- discretized, 4-64 102, 4-60, 4-62, 4-70, 4-78, 4-279, 4- distribution choice, 2-187, 2-193, 2-197, 3-282 70 storm simulations sets, 2-73 full, 1-15, 2-45, 3-17 storms, 2-57 hazard curve evaluation, 2-317 systematic data hydrologic, 2-99, 3-338, 4-233 gage record, 1-177, 2-206, 3-119, 3-123, integration results, 2-76 3-130, 3-183, 4-252 joint probability analysist, 2-47, 3-19 TC. See tropical cyclone knowledge, 2-356, 3-317, 4-175, 4-233 TELEMAC. See 2D:model:TELEMAC PRA, 3-373 temperature, 1-53 reduced, 2-219, 3-357 change, 2-91 SLR projections, 2-100 high, 1-57 sources, 1-42 profiles, 4-122 SRR, 2-60 trends, 4-357 storm surge, 1-17, 1-193, 2-47, 2-54, 3-19, Tennessee River 3-95, 4-58 Valley, 2-153, 2-156, 3-83, 3-182 temporal, 1-257 Watershed, 4-246 tolerance, 4-215 TRMM,Tropical Rainfall Measuring Mission, uncertainty analysis, 2-87, 4-326, 4-476 4-100, 4-111 UA, 4-198 tropical cyclone, 1-11, 1-17, 1-64, 1-67, 1-91, uncertainty characterization, 1-15, 2-46, 2-1-100, 1-123, 1-194, 1-198, 1-204, 2-53, 74, 2-81, 2-341, 3-18, 3-105, 4-233 2-55, 2-59, 2-71, 2-89, 2-95, 2-101, 2-105, 2-112, 3-15, 3-29, 3-42, 3-47, 3-53, A-16

uncertainty propagation, 1-83, 1-87, 1-193, XFEL. See external flood equipment list 2-54, 2-58, 2-73, 2-398, 3-15, 3-95, 3- XFOAL. See external flood operator actions 102, 3-106, 4-14, 4-58, 4-60, 4-200 list uncertainty quantification, 1-161, 1-193, 1- XFPRA, 3-259, 3-370, 3-372, 3-377, 3-379, 200, 2-54, 2-189, 2-206, 2-420, 3-95, 4- 3-384, 3-402, 4-429, 4-441, 4-475, 4-30, 4-58, 4-60, 4-71, 4-206, 4-215, 4- 479 298 capability categories, 4-443 input parameter, 4-201 documentation, 4-438 river flood models, 3-404 flood event oriented review, 4-467 sources, 4-205, 4-327 flood progression, 4-433 uncertainty, aleatory, 1-12, 1-42, 2-43, 2-57, fragility, 4-30, 4-444, 4-445 2-192, 2-313, 3-15, 3-96, 3-106, 4-15, 4- guidance development, 4-27 60, 4-79, 4-267, 4-268, 4-269, 4-271 hazard analysis, 4-444, 4-445 natural variability, 4-86, 4-175 HRA, 3-265, 3-374 variability, 1-194, 2-54, 4-458 initial plant state, 3-379, 3-382 uncertainty, epistemic, 1-12, 1-42, 1-163, 1- initiating event, 4-446 194, 1-197, 1-202, 2-43, 2-54, 2-57, 2- key flood parameters, 4-433 62, 2-193, 2-313, 3-15, 3-93, 3-96, 3-98, multiple end states, 3-382 3-106, 4-15, 4-57, 4-71, 4-79, 4-81, 4- operating experience, 3-371 86, 4-92, 4-267, 4-458, 4-475 period of inundation, 4-433 knowledge, 4-86 period of recession, 4-433 SRR models, 4-68 physical margin assessment, 4-435 validation, 1-90, 1-95, 1-125, 2-312, 3-48, 4- pilots, 3-371 62, 4-76, 4-293, 4-298 plant response, 3-373, 4-444 warming, 1-60, 4-337, 4-368 preferred equipment position, 3-264 increased rates, 4-357 propagation pathways, 4-433 increased saturation water vapor, 4-346 requirements, 4-443 surface, 3-34 scenarios, 3-265, 3-373, 3-385, 4-433, 4-warning, 2-259, 3-362, 4-35, 4-314, 4-479 446, 4-464 time, 1-34, 1-153, 3-261, 3-371, 4-450 screening, 4-445 triggers and cues, 3-382, 4-473, 4-479 sources, 4-433 watershed, 1-157, 3-56 uncertainty, 3-385 model, 1-158 vulnerabilities, 3-265, 4-473 Watershed Level Risk Analysis, 4-166 walkdown, 2-51, 3-26, 3-260, 3-393, 3-wave, 4-295 395, 4-26, 4-437, 4-440, 4-445, 4-475 impacts, 4-299 walkdown guidance, 2-408, 3-259, 4-440 physical modeling, 4-300 warning time, 4-433 setup, 4-36 wind, 1-53 setup, 4-36 stress formulation, 4-76 tornado frequency increasing, 2-92 locations, 2-92 warning, 2-259 waves, 1-11 WRF, Weather Research and Forecasting model, 1-18, 1-85, 1-90, 1-95, 1-97, 1-185, 2-102, 2-114, 3-28, 3-42, 3-47, 3-52, 3-69, 4-160 parameterization, 1-123, 2-114, 3-47 A-17

APPENDIX B: INDEX OF CONTRIBUTORS

This index includes authors, co-authors, panelists, poster authors, and self-identified participants from the audience who spoke in question-and-answer or panel discussions.

Adams, Lea, 4-162
Ahn, Hosung, 5-490
Aird, Thomas, 2-38, 2-407, 3-11, 3-195, 3-380, 4-12, 4-378, 4-419, 5-490
Al Kajbaf, Azin, 4-312
Allen, Blake, 4-323
Anderson, Victoria, 3-354, 3-370, 3-374
Andre, M.A., 4-287
Archfield, Stacey A., 4-206
Asquith, William, 2-184
Bacchi, Vito, 4-195, 4-320
Baecher, Gregory, 3-197, 3-213, 4-315
Bardet, Philippe M., 4-287, 4-306, 4-309
Barker, Bruce, 4-323
Bellini, Joe, 2-30
Bender, Chris, 4-91, 4-92, 4-94, 4-97
Bensi, Michelle, 1-24, 4-312, 4-435, 4-464, 4-465, 4-466, 4-469, 4-471, 4-473, 5-490
Bertrand, Nathalie, 4-195, 4-320
Bittner, Alvah, 1-220, 2-267, 3-240
Blackaby, Emily, 3-5, 3-195, 3-209
Bowles, David, 2-396, 3-40
Branch, Kristi, 1-220, 2-267, 3-240
Breithaupt, Steve, 3-346, 5-490
Bryce, Robert, 1-129, 2-349
Byrd, Aaron, 1-166
Caldwell, Jason, 4-112, 4-323
Campbell, Andrew, 2-12, 4-375, 4-422, 4-455, 4-470, 4-473, 5-490
Carney, Shaun, 3-346, 4-272, 4-306, 4-307, 4-308, 4-310
Carr, Meredith, 2-38, 2-407, 3-9, 3-11, 3-380, 4-9, 4-12, 4-162, 4-252, 4-311, 4-456, 4-472, 4-474, 5-490
Charkas, Hasan, 5-490
Cheok, Michael, 5-490
Cohn, Timothy, 1-174, 4-250
Coles, Garill, 1-220, 2-267, 3-240
Cook, Christopher, 1-24, 3-351, 3-374, 5-490
Coppersmith, Kevin, 1-129, 2-349, 3-304, 4-261
Correia, Richard, 1-5, 5-490
Craven, Owen, 3-5, 3-195, 3-209
Cummings, William (Mark), 2-256, 3-227, 4-386, 4-419, 4-420, 4-421, 4-422
Dalton, Angela, 1-220, 2-267, 3-240
Daoued, A. Ben, 4-315
Davis, Lisa, 3-5, 3-179, 3-195, 3-209
DeNeale, Scott, 3-197, 3-198, 3-213, 3-219, 4-111, 4-142, 4-312, 4-315, 4-320
Denis, Suzanne, 4-464, 4-467, 4-468, 4-469, 4-472, 4-473
Dib, Alain, 3-42
Dinh, N., 4-287
Dong, John, 4-323
DuLuc, Claire-Marie, 2-391, 4-195, 4-252, 4-253
Dunn, Christopher, 2-370, 2-398, 4-162
England, John, 2-370, 2-396, 2-400, 2-401, 3-68, 3-319, 3-347, 3-348, 3-349, 3-372, 3-373, 4-112, 4-156, 4-157, 4-159, 4-160, 4-161, 4-206, 4-252, 4-253, 4-254, 4-255, 4-256, 4-258, 4-259, 4-260, 4-307, 4-311, 4-363
Fearon, Kenneth, 3-322, 3-347, 3-372
Ferrante, Fernando, 3-315, 3-351, 3-370, 3-372
Fuhrmann, Mark, 2-38, 2-407, 3-11, 3-163, 3-375, 3-380, 4-12, 4-162, 4-252, 5-490
Furstenau, Raymond, 4-1, 4-9, 5-490
Gage, Matthew, 3-209
Gaudron, Jeremy, 4-464, 4-465, 4-467, 4-472
Gifford, Ian, 4-456, 4-464, 4-467
Godaire, Jeanne, 3-195, 3-205
Gonzalez, Victor M., 1-190, 2-50, 3-94, 3-198, 3-223, 3-316, 3-347, 3-348, 3-349, 3-350, 4-56, 4-91, 4-95, 4-97
Gupta, A., 4-287
Hall, Brian, 4-227
Hamburger, Kenneth, 5-490
Hamdi, Y., 4-315
Han, Kun-Yeun, 4-328
Harden, Tessa, 2-224, 3-163, 3-194, 3-199, 3-226, 4-242, 4-243, 4-252, 4-253, 4-255, 4-256, 4-258
Hartford, Des, 4-470
Hockaday, William, 3-5, 3-195, 3-209
Holman, Katie, 1-63, 2-148, 3-70
Huffman, George J., 4-98, 4-156, 4-158, 4-160, 4-161
Ishida, Kei, 1-86, 2-98
Jasim-Hanif, Sharon, 3-335, 3-348
Jawdy, Curt, 2-375, 2-396, 2-400, 4-272
Kanney, Joseph, 1-7, 2-38, 2-266, 2-367, 2-407, 3-11, 3-94, 3-193, 3-316, 3-348, 3-349, 3-369, 3-380, 4-12, 4-33, 4-91, 4-242, 4-256, 4-306, 4-307, 4-309, 4-310, 4-329, 4-363, 4-374, 4-421, 4-423, 4-455, 4-456, 4-464, 4-465, 4-473, 5-490
Kao, Shih-Chieh, 3-197, 3-198, 3-213, 3-219, 4-111, 4-142, 4-156, 4-157, 4-160, 4-312, 4-320
Kappel, Bill, 3-41, 3-69
Kavvas, M. Levent, 1-86, 2-98, 3-42, 3-69
Keeney, David, 1-63, 2-148, 3-70
Keith, Mackenzie, 3-163, 4-243
Kelson, Keith, 3-192, 4-208, 4-227, 4-252, 4-253, 4-255, 4-256, 4-257, 4-259
Kiang, Julie, 2-184, 3-116
Kim, Beomjin, 4-328
Kim, Minkyu, 4-328
Klinger, Ralph, 3-195, 3-205
Kohn, Nancy, 1-220
Kolars, Kelsey, 3-116
Kovach, Robin, 4-364
Kunkel, Kenneth, 4-329, 4-376, 4-378
Kvarfordt, Kellie, 1-238, 2-177, 3-149
Lehman, Will, 4-162, 4-252, 4-253, 4-254, 4-255, 4-257, 4-258, 4-260, 4-306, 4-307, 4-308, 4-309, 4-311
Leone, David, 4-80
Leung, Ruby, 1-50, 2-85, 3-29, 3-115, 4-349, 4-363, 4-374, 4-375
Lim, Young-Kwon, 4-364, 4-374
Lin, L., 4-287
Littlejohn, Jennene, 5-490
Lombardi, Rachel, 3-209
Ma, Zhegang, 1-250, 2-284, 3-199, 3-223, 3-360
Mahoney, Kelly, 3-68, 3-69
McCann, Marty, 3-40, 3-388
Melby, Jeffrey, 1-190, 2-50
Meyer, Philip, 1-129, 2-303, 4-261
Miller, Andrew, 4-423, 4-464, 4-467, 4-468, 4-469, 4-471, 4-472, 4-474
Miller, Gabriel, 3-339, 3-345, 3-346
Mitman, Jeffrey, 1-36
Mohammadi, Somayeh, 4-312
Molod, Andrea, 4-364
Montanari, N., 4-287
Mouhous-Voyneau, N., 4-315
Mure-Ravaud, Mathieu, 1-86, 2-98, 3-42
Muto, Matthew, 4-323
Nadal-Caraballo, Norberto, 1-190, 2-50, 2-370, 2-399, 3-94, 3-198, 3-223, 3-316, 4-56, 4-91, 4-94, 4-95, 4-96, 4-97
Nakoski, John, 4-1, 4-28
Neff, Keil, 2-199, 3-135
Nicholson, Thomas, 3-347, 3-349, 3-369, 4-261, 4-306, 5-490
Novembre, Nicole, 4-323
O'Connor, Jim, 2-224, 3-163, 4-242, 4-243
Ott, William, 1-5, 5-490
Pawson, Steven, 4-364
Pearce, Justin, 4-227
Perica, Sanja, 2-367, 2-399, 2-400
Pheulpin, Lucie, 4-195, 4-320
Philip, Jacob, 1-261, 2-38, 2-407, 3-11, 3-380, 4-12, 4-419, 4-421, 4-422, 5-490
Pimentel, Frances, 3-354
Prasad, Rajiv, 1-50, 1-129, 1-147, 1-220, 2-85, 2-267, 2-303, 2-349, 2-365, 3-29, 3-192, 3-193, 3-240, 3-304, 3-315, 4-261, 4-306, 4-307, 4-349, 4-363
Prescott, Steven, 2-284, 3-194, 3-199, 3-223, 4-287
Quinlan, Kevin, 4-156, 4-162, 4-374, 4-377, 5-490
Ramos-Santiago, Efrain, 3-198, 3-223
Randelovic, Marko, 4-23, 4-72, 4-378, 4-384, 4-386, 4-423, 5-490
Rebour, Vincent, 2-391, 2-399, 4-195
Reisi-Fard, Mehdi, 2-22, 3-227, 5-490
Ryan, E., 4-287
Ryberg, Karen, 3-116, 3-192, 3-194
Salisbury, Michael, 4-72, 4-91, 4-96
Salley, MarkHenry, 5-490
Sampath, Ramprasad, 2-284, 3-199, 3-223, 4-287
Schaefer, Mel, 4-114, 4-117, 4-125, 4-156, 4-158, 4-159, 4-160, 4-161, 4-286
Schneider, Ray, 2-30, 3-350, 3-362, 3-371, 4-374, 4-375, 4-377, 4-378, 4-384, 4-385, 4-386, 4-419, 4-446, 4-464, 4-466, 4-469, 4-471, 4-472
Schubert, Sigfried, 4-364
Sergent, P., 4-315
Siu, Nathan, 3-257, 3-367, 3-369, 3-370, 3-372, 4-456
Skahill, Brian, 1-166, 2-334, 2-396, 2-397, 2-399, 2-400, 3-195, 3-200, 3-295, 4-206
Smith, Brennan, 3-197, 3-213
Smith, Curtis, 1-238, 1-250, 2-177, 2-284, 2-387, 2-397, 2-398, 3-149, 3-199, 3-223
Stapleton, Daniel, 4-80
Stewart, Kevin, 4-315
Stewart, Lance, 3-5, 3-195, 3-209
Stinchcomb, Gary, 3-5, 3-179, 3-195, 3-209
Taflanidis, Alexandros, 4-56
Taylor, Arthur, 4-33, 4-91, 4-93, 4-95, 4-96, 4-97
Taylor, Scott, 2-267, 3-240
Thaggard, Mark, 5-490
Therrell, Matthew, 3-209
Tiruneh, Nebiyu, 3-116, 5-490
Vail, Lance, 1-50, 1-129, 2-85
Verdin, Andrew, 2-148, 3-70
Vuyovich, Carrie, 3-295
Wahl, Tony, 1-206, 3-258, 4-398, 4-419
Wang, Bin, 4-80, 4-91, 4-94, 4-96, 4-97
Wang, Zeechung (Gary), 4-456
Ward, Katie, 4-323
Watson, David, 3-197, 3-213, 4-111, 4-320
Weber, Mike, 2-1, 2-7, 3-1, 3-9, 5-490
Weglian, John, 2-46, 2-75, 2-165, 2-213, 2-243, 2-318, 2-402, 3-20, 3-109, 3-191, 3-192, 3-193, 3-234, 3-250, 3-295, 3-357, 3-369, 3-370, 3-373, 3-374, 3-375, 5-490
Wille, Kurt, 3-195, 3-205
Wright, Joseph, 1-174, 2-199, 3-135, 3-345, 3-346, 3-347, 3-372, 3-373
Yegorova, Elena, 2-38, 2-407, 3-11, 3-29, 3-380, 4-12, 4-98, 4-156, 5-490
Ziebell, David, 2-243, 3-234

APPENDIX C: INDEX OF PARTICIPATING AGENCIES AND ORGANIZATIONS

AECOM, 4-485, 4-486
Agricultural Research Service - USDA, xxxiv
    ARS, xxxi, xxxiv
Alden Research Laboratory, 3-393, 4-480
Amec Foster Wheeler, 2-419, 3-392
American Polywater Corporation, 4-479, 4-484
Appendix R Solutions, Inc., 3-391
Applied Weather Associates, 3-41, 3-345, 3-394, 4-481, 4-482
Aterra Solutions, 2-3, 2-30, 2-419, 2-422, 3-391, 4-478, 4-483
Atkins, 2-420, 3-392, 4-2, 4-3, 4-72, 4-91, 4-479, 4-485
Battelle, Columbus, Ohio, 1-220, 2-5, 2-267, 3-6, 3-240, 3-395, 4-482
    BCO, 1-4, 1-220
Baylor University, 3-5, 3-195, 3-209
BC Hydro, 4-481
Bechtel Corporation, 3-396, 3-397, 4-478, 4-482, 4-483, 4-485, 4-486
Bittner and Associates, 2-5, 2-267, 2-419, 3-6, 3-240
    B&A, xii, 1-4, 1-220
Booz Allen Hamilton, 4-481
Brava Engineering, Inc., 4-6, 4-323
Canadian Nuclear Safety Commission, xiii, 3-394, 4-482
Center for Nuclear Waste Regulatory Analyses
    SwRI, 3-392, 3-398
Centroid PIC, 2-5, 2-284, 3-5, 3-199, 3-223, 4-5, 4-287
Cerema, 4-6
Coastal and Hydraulics Laboratory, xiii, 2-3, 2-6, 2-50, 2-334, 2-421, 2-423, 2-424, 3-4, 3-5, 3-94, 3-195, 3-198, 3-223, 3-393, 3-395, 3-397, 4-2, 4-3, 4-4, 4-56, 4-91, 4-206
Coppersmith Consulting, Inc., xii, 2-6, 2-349, 2-420, 3-6, 3-304, 3-392, 4-5, 4-261
    CCI, xii, 1-3, 1-63, 1-129
Curtiss-Wright, 4-479
Defense Nuclear Facilities Safety Board, 2-420
    DNFSB, 4-485
DEHC Ingenieros Consultores, 4-483
Department of Defense, 2-302
Department of Energy, xv, 2-6, 2-387, 3-7, 3-335, 3-394, 3-395, 4-483
    DOE, x, xv, xvii, xxii, xxvi, 2-397, 2-398, 3-348, 4-306, 4-309, 4-454, 4-481
Department of Health and Human Services, 3-392
Department of Homeland Security, 3-394, 3-396
Dewberry, 2-424, 3-397, 4-480, 4-485, 4-486
Dominion Energy, 4-486
Duke Energy, 2-422, 2-424, 3-395, 3-398, 4-487
Electric Power Research Institute, iii, xvi, 2-1, 2-425, 3-393, 4-1, 4-479
    EPRI, iii, xvi, xxi, xxxii, xxxvii, 2-1, 2-3, 2-4, 2-5, 2-6, 2-37, 2-46, 2-75, 2-165, 2-213, 2-223, 2-243, 2-318, 2-333, 2-402, 2-407, 2-421, 3-1, 3-3, 3-4, 3-6, 3-7, 3-20, 3-27, 3-28, 3-109, 3-115, 3-191, 3-193, 3-234, 3-238, 3-250, 3-257, 3-295, 3-315, 3-351, 3-357, 3-369, 3-370, 3-372, 3-374, 3-375, 3-392, 3-398, 4-2, 4-7, 4-8, 4-23, 4-72, 4-378, 4-379, 4-384, 4-423, 4-462, 4-484, 5-490
Électricité de France, xvi, xxxiii, 2-262, 3-232
    EDF, xvi, 3-232, 3-233, 4-8, 4-226, 4-384, 4-385, 4-434, 4-464, 4-465, 4-477, 4-481
Enercon Services, Inc., 2-422, 4-480
Engineer Research and Development Center, xvi, 2-3, 2-6, 2-50, 2-334, 2-421, 2-423, 2-424, 3-5, 3-6, 3-7, 3-94, 3-195, 3-198, 3-200, 3-223, 3-295, 3-316, 3-393, 4-56
    ERDC, xvi, 3-94, 4-56, 4-478, 4-480, 4-483, 4-484
Environment Canada and Climate Change, 4-483
Environmental Protection Agency, xvi, xxxii
    EPA, xvi, 4-260
Environmentalists Incorporated, 2-422, 2-424
Exelon, 4-477
Federal Emergency Management Agency, xvii, 2-50
    FEMA, xvii, xxii, 2-50, 2-399, 3-349, 3-396, 4-91, 4-259, 4-260
Federal Energy Regulatory Commission, xvii, 2-420, 2-421, 2-422, 3-7, 3-322, 3-393
    FERC, xvii, 2-424, 3-347, 3-393, 3-395, 4-122, 4-480, 4-483
Finland Radiation and Nuclear Safety Authority, xxxii
    STUK, xxxii
Fire Risk Management, xviii, 2-5, 2-256, 2-420, 3-6, 3-227, 3-392
    FRM, xviii
First Energy Solutions, 4-478
Fisher Engineering, Inc., 4-7, 4-386, 4-419, 4-477, 4-479
Framatome, Inc., 4-485
French Nuclear Safety Authority, xii, 4-482
George Mason University, 4-480
George Washington University, 4-5, 4-287, 4-306, 4-477
Global Modeling and Assimilation Office, xix, 4-7, 4-364, 4-482
Global Research for Safety, xix
    GRS, xix, 4-29, 4-486
Goddard Space Flight Center, xix, 4-7, 4-364, 4-481, 4-482
    Earth Sciences Division, 4-7, 4-364
    GSFC, xix, 4-3, 4-7, 4-98, 4-156, 4-374
GZA GeoEnvironmental Inc., xix, 2-422, 2-423, 2-424, 3-394, 3-395, 3-398, 4-3, 4-80, 4-91, 4-92, 4-482, 4-486
HDR, 3-393
Hydrologic Engineering Center, xv, xx, 2-399, 2-420, 3-5, 3-195, 3-200, 4-4, 4-252
    HEC, xviii, xx, 4-4, 4-5, 4-162, 4-208, 4-306, 4-482
HydroMetriks, 3-393
I&C Engineering Associates, 4-477
Idaho National Laboratory, xxi, 1-220, 2-4, 2-5, 2-6, 2-177, 2-284, 2-387, 2-422, 2-424, 3-4, 3-5, 3-7, 3-149, 3-199, 3-223, 3-360, 3-394, 3-395, 3-396, 3-397, 4-5, 4-287, 4-482, 4-484
    INL, xxi, 1-4, 1-220, 1-238, 1-250, 2-177, 2-178, 2-284, 2-397, 2-398, 3-149, 3-150, 3-193, 3-198, 3-315, 4-384
Idaho State University, 4-5, 4-287
IIHR-Hydroscience & Engineering, 4-486
Institut de Radioprotection et de Sûreté Nucléaire, xxii, 2-6, 2-391, 2-420, 4-6, 4-315, 4-320
    IRSN, xxii, xxviii, 2-6, 2-391, 2-397, 2-399, 2-420, 2-423, 4-4, 4-195, 4-252, 4-479, 4-484
Institute for Water Resources - USACE, xx, xxii, 4-4, 4-162
    IWR, xxii, 4-4, 4-5, 4-252, 4-306, 4-482
Instituto de Ingeniería, UNAM, 4-479, 4-482
INTERA Inc., 4-479, 4-481
International Atomic Energy Agency, xxi
    IAEA, xxi
Jensen Hughes, 2-422, 3-395, 4-8, 4-423, 4-464, 4-483
Korea Atomic Energy Research Institute, xxii, 3-392, 3-394, 4-6, 4-328, 4-482
    KAERI, xxii
Korean Institute of Nuclear Safety, 4-481
Kyungpook National University, 4-6, 4-328, 4-481, 4-482
Lawrence Berkeley National Laboratory, 3-391
Lynker Technologies, 4-487
Meteorological Development Lab, xxiv, 4-33
    MDL, xxiv, 4-33, 4-480, 4-486
MetStat, Inc., xxxi, 2-419, 2-421, 2-423, 3-391, 3-395, 3-396, 4-6, 4-323, 4-477, 4-484, 4-487
MGS Engineering Consultants, 2-401, 2-424, 4-3, 4-6, 4-125, 4-156, 4-323, 4-477, 4-485
Michael Baker International, 2-424, 4-486
Murray State University, 3-4, 3-5, 3-179, 3-195, 3-196, 3-209, 3-397
National Aeronautics and Space Administration, xxv
    NASA, xviii, xix, xxv, 4-3, 4-7, 4-98, 4-156, 4-374, 4-481, 4-482
National Environmental Satellite, Data, and Information Service
    NESDIS, xxvi, 4-485
National Geospatial-Intelligence Agency, 3-394, 3-396
    NGA, 3-392, 3-396
National Oceanic and Atmospheric Administration, xxvi, 2-6, 2-165, 2-367, 4-142
    NOAA, xiv, xvi, xviii, xx, xxi, xxv, xxvi, xxvii, xxix, 2-165, 2-176, 2-178, 2-198, 2-399, 2-400, 2-401, 2-421, 2-423, 3-150, 3-348, 3-395, 3-396, 4-125, 4-142, 4-158, 4-311, 4-376, 4-480, 4-481, 4-483, 4-485, 4-486
National Weather Service, xiv, xv, xvii, xxvi, 2-6, 2-99, 2-367, 3-42, 3-239, 4-2, 4-3, 4-33, 4-91, 4-92, 4-472
    NWS, xiii, xx, xxiv, xxv, xxvi, xxvii, xxxi, 2-99, 2-165, 2-256, 2-399, 2-400, 2-421, 2-423, 3-396, 4-2, 4-33, 4-34, 4-480, 4-481, 4-486
Natural Resources Conservation Service
    NRCS, xxvi, xxviii, xxxv, 3-393, 3-394
Naval Postgraduate School, 4-480
NIST, 3-395
North Carolina State University, 4-5, 4-7, 4-287, 4-329, 4-482
Nuclear Energy Agency, xxv, 4-1, 4-2, 4-28
    NEA, xxv
Nuclear Energy Institute, xxvi, 3-7
    NEI, xxvi, 2-333, 3-354, 3-369, 3-370, 3-374, 3-391, 3-396, 4-464, 4-473, 4-484
NuScale Power, 4-487
Nuvia USA, 3-391
Oak Ridge National Laboratory, xxvii, 2-424, 3-5, 3-198, 3-219, 3-392, 3-394, 3-397, 3-398, 4-6, 4-312, 4-315, 4-320, 4-479, 4-482
    ORNL, xxvii, 3-5, 3-197, 3-213, 4-3, 4-111, 4-142, 4-156, 4-160
Oklo Inc., 4-484
Oregon Water Science Center - USGS, 2-224, 2-421, 3-5, 3-199, 3-226
Pacific Northwest National Laboratory, xxviii, 2-4, 2-5, 2-6, 2-85, 2-267, 2-303, 2-349, 2-419, 2-420, 2-422, 2-423, 3-3, 3-6, 3-29, 3-240, 3-304, 3-395, 3-396, 4-5, 4-7, 4-261, 4-306, 4-349, 4-374, 4-478, 4-482, 4-484
    PNNL, xxviii, 1-3, 1-4, 1-50, 1-63, 1-129, 1-147, 1-220, 3-192, 3-193, 3-240, 4-307
Parsons, 4-480, 4-485
Penn State University, 4-483
PG&E, 4-484
PRISM Climate Group at Oregon State University, xxviii
RAC Engineers and Economists, LLC, 3-391
River Engineering & Urban Drainage Research Centre, 4-482
RTI International, 3-346, 3-391, 3-392, 4-5, 4-272, 4-306, 4-478
Sargent & Lundy, 2-423, 4-485
Schnabel Engineering, 4-480
Science Systems and Applications, Inc., 4-7, 4-364
Secretariat of Nuclear Regulation Authority, 4-481
SEPI, Inc., 4-487
Sorbonne University, Université de Technologie de Compiègne, 4-6, 4-315
Southern California Edison, 4-6, 4-323
Southern Nuclear, 3-397, 4-485
Southwest Research Institute, 2-420, 2-425, 3-398, 4-479
Taylor Engineering, 2-419, 3-391, 4-3, 4-91, 4-478
Technical Services Center - USBR, 2-4, 2-148, 2-199, 2-423, 2-424, 2-425, 3-3, 3-4, 3-5, 3-70, 3-135, 3-195, 3-395
Tennessee Valley Authority, xxxiii, 2-6, 2-375, 2-419, 2-421, 2-422, 3-339, 3-391, 3-395, 3-397, 4-5, 4-272, 4-478
    TVA, xxxiii, 2-223, 2-316, 2-396, 2-400, 2-401, 3-191, 3-345, 3-346, 3-397, 4-5, 4-121, 4-125, 4-142, 4-156, 4-157, 4-159, 4-251, 4-252, 4-272, 4-286, 4-307, 4-308, 4-310
U.S. Army Corps of Engineers, xiii, xvi, xxxiv, 1-147, 2-3, 2-6, 2-420, 2-421, 2-422, 2-423, 2-424, 3-5, 3-6, 3-7, 3-195, 3-198, 3-200, 3-223, 3-295, 3-316, 3-319, 3-393, 4-2, 4-56, 4-113, 4-307, 4-482, 4-483, 4-484
    COE, xiii, xxxiv
    Corps, xiii, xxxiv, 2-50, 2-334, 2-370, 3-347, 3-348, 3-349, 3-372, 3-373, 4-91, 4-156, 4-159, 4-160, 4-259, 4-260, 4-307, 4-309, 4-311, 4-470, 4-482, 4-483, 4-484
    Dam Safety Production Center, 4-208
    Galveston District, 4-3, 4-112, 4-478
    RMC, Risk Management Center, xxx, 2-420, 3-7, 3-319, 3-347, 3-348, 3-349, 3-393, 4-3, 4-4, 4-112, 4-156, 4-206, 4-208, 4-227, 4-252, 4-308, 4-479
    Sacramento Dam Safety Protection Center, xv, 3-394, 4-4, 4-227, 4-252
    USACE, xiii, xvi, xvii, xx, xxii, xxv, xxx, xxxiii, xxxiv, 1-4, 1-147, 1-166, 1-190, 2-50, 2-199, 2-396, 2-397, 2-398, 2-399, 2-400, 2-401, 3-68, 3-347, 3-348, 3-349, 3-350, 3-372, 3-373, 3-397, 4-3, 4-4, 4-5, 4-91, 4-97, 4-112, 4-125, 4-156, 4-162, 4-206, 4-208, 4-227, 4-228, 4-252, 4-306, 4-478, 4-479, 4-480, 4-482, 4-483, 4-484
U.S. Bureau of Reclamation, xii, xvii, xxxiii, xxxiv, 1-3, 1-63, 2-4, 2-148, 2-199, 2-421, 2-423, 2-424, 2-425, 3-3, 3-4, 3-5, 3-6, 3-70, 3-135, 3-136, 3-149, 3-192, 3-195, 3-205, 3-258, 3-345, 3-346, 3-347, 3-348, 3-350, 3-372, 3-373, 3-393, 3-394, 3-395, 3-397, 3-398, 4-7, 4-114, 4-117, 4-242, 4-254, 4-259, 4-363, 4-398, 4-419, 4-470, 4-483, 4-486
    USBR, xvii, xxv, xxxii, xxxiv, 1-3, 1-4, 1-63, 1-147, 1-174, 1-206, 2-213, 2-241, 2-396, 2-400, 3-192, 3-398, 4-125
U.S. Department of Agriculture, xxxiv
    USDA, xxxi, xxxiv, xxxv, 3-393
U.S. Fish and Wildlife Service, xxxiv
    USFWS, xxxiv
U.S. Geological Survey, xxxiv, 2-4, 2-178, 2-184, 2-419, 2-421, 2-423, 3-4, 3-5, 3-116, 3-117, 3-163, 3-199, 3-226, 3-391, 3-393, 3-394, 3-395, 3-396, 4-4, 4-206, 4-243, 4-252, 4-259, 4-477, 4-481, 4-482, 4-483
    USGS, xxi, xxvii, xxviii, xxxiv, xxxv, 1-4, 1-147, 1-174, 2-5, 2-178, 2-184, 2-198, 2-224, 3-150, 3-162, 3-192, 3-194, 3-196, 3-348, 3-394, 4-242, 4-256, 4-258, 4-259
UNC Chapel Hill, 4-477
University of Alabama, 3-4, 3-5, 3-179, 3-190, 3-195, 3-196, 3-209, 3-392, 3-395
University of California
    U.C. Davis, xxi, 1-3, 1-63, 1-86, 2-4, 2-98, 2-422, 2-423, 3-3, 3-42, 3-392, 3-395
University of Costa Rica, 4-483
University of Maryland, xxxiv, 3-5, 3-197, 3-226, 3-391, 4-6, 4-8, 4-312, 4-315, 4-435, 4-464, 4-477, 4-478, 4-483
US Global Change Research Program, 4-477
Utah State University, 2-396, 3-391
Virginia Tech, 2-422
Weather & Water, Inc., 4-6, 4-323
WEST Consultants, 4-479
Western University, 4-486
Westinghouse, 2-3, 2-30, 2-424, 3-7, 3-350, 3-362, 3-371, 3-397, 4-7, 4-8, 4-378, 4-419, 4-446, 4-464, 4-485
Wood, 2-149, 3-391, 5-490
World Meteorological Organization
    WMO, xxxv, 4-376
Zachry Nuclear Engineering, 4-484