ML12293A451
Site: Peach Bottom, Surry
Issue date: 10/22/2012
From: J. S. Armijo, Advisory Committee on Reactor Safeguards
To: B. Sheron, Office of Nuclear Regulatory Research
References: NUREG-1953, NUREG/CR-7040
UNITED STATES NUCLEAR REGULATORY COMMISSION
ADVISORY COMMITTEE ON REACTOR SAFEGUARDS
WASHINGTON, DC 20555-0001

October 22, 2012

Dr. Brian Sheron, Director
Office of Nuclear Regulatory Research
U.S. Nuclear Regulatory Commission
Washington, DC 20555-0001
SUBJECT:
ACRS ASSESSMENT OF THE QUALITY OF SELECTED NRC RESEARCH PROJECTS - FY 2012
Dear Dr. Sheron:
Enclosed is our report on the quality assessment of the following research projects:
- Confirmatory Thermal-Hydraulic Analysis to Support Specific Success Criteria in the Standardized Plant Analysis Risk Models - Surry and Peach Bottom, NUREG-1953. This project was found to be satisfactory, a professional work that satisfies research objectives.
- Evaluation of JNES Equipment Fragility Tests for Use in Seismic Probabilistic Risk Assessments for U.S. Nuclear Power Plants, NUREG/CR-7040. This project was found to be satisfactory, a professional work that satisfies research objectives.
These projects were selected from a list of candidate projects suggested by the Office of Nuclear Regulatory Research (RES).
We anticipate receiving a list of candidate projects for quality assessment in FY-2013 prior to our December 6-8, 2012 meeting.
Sincerely,
/RA/
J. Sam Armijo Chairman
Enclosure:
As stated
Assessment of the Quality of Selected NRC Research Projects by the Advisory Committee on Reactor Safeguards - FY 2012

October 2012

U.S. Nuclear Regulatory Commission
Advisory Committee on Reactor Safeguards
Washington, DC 20555-0001
ABOUT THE ACRS

The Advisory Committee on Reactor Safeguards (ACRS) was established as a statutory Committee of the Atomic Energy Commission (AEC) by a 1957 amendment to the Atomic Energy Act of 1954. The functions of the Committee are described in Sections 29 and 182b of the Act. The Energy Reorganization Act of 1974 transferred the AEC's licensing functions to the U.S. Nuclear Regulatory Commission (NRC), and the Committee has continued serving the same advisory role to the NRC.
The ACRS provides independent reviews of, and advice on, the safety of proposed or existing NRC-licensed reactor facilities and the adequacy of proposed safety standards. The ACRS reviews power reactor and fuel cycle facility license applications for which the NRC is responsible, as well as the safety-significant NRC regulations and guidance related to these facilities. The ACRS also provides advice on radiation protection, radioactive waste management, and earth sciences in the agency's licensing reviews for fuel fabrication and enrichment facilities and waste disposal facilities. On its own initiative, the ACRS may review certain generic matters or safety-significant nuclear facility items. The Committee also advises the Commission on safety-significant policy issues and performs other duties as the Commission may request. Upon request from the U.S. Department of Energy (DOE), the ACRS provides advice on U.S. Naval reactor designs and hazards associated with the DOE's nuclear activities and facilities. In addition, upon request, the ACRS provides technical advice to the Defense Nuclear Facilities Safety Board.
ACRS operations are governed by the Federal Advisory Committee Act (FACA), which is implemented through NRC regulations at Title 10, Part 7, of the Code of Federal Regulations (10 CFR Part 7). ACRS operational practices encourage the public, industry, State and local governments, and other stakeholders to express their views on regulatory matters.
MEMBERS OF THE ADVISORY COMMITTEE ON REACTOR SAFEGUARDS

Dr. J. Sam Armijo (Chairman), Adjunct Professor of Materials Science and Engineering, University of Nevada, Reno
Dr. Sanjoy Banerjee, Distinguished Professor of Chemical Engineering and Director of CUNY Energy Institute, The Grove School of Engineering at The City College of New York, New York
Dr. Dennis C. Bley, President, Buttonwood Consulting, Inc., Oakton, Virginia
Mr. Charles H. Brown, Senior Advisor for Electrical Systems, BMT Syntek Technologies, Inc., Arlington, Virginia
Dr. Michael L. Corradini, Wisconsin Distinguished Professor of Engineering Physics, University of Wisconsin, Madison, Wisconsin
Dr. Dana A. Powers, Senior Scientist, Sandia National Laboratories, Albuquerque, New Mexico
Mr. Harold B. Ray (Member-at-Large), Retired Executive Vice President, Southern California Edison Company, Rosemead, California
Dr. Joy L. Rempe, Laboratory Fellow and Group Leader, Idaho National Laboratory, Idaho Falls, Idaho
Dr. Michael T. Ryan, Principal, Michael T. Ryan and Associates LLC, Lexington, South Carolina
Dr. Stephen P. Schultz, Nuclear Engineering Consultant, Charlotte, North Carolina
Dr. William J. Shack, Retired Associate Director, Energy Technology Division, Argonne National Laboratory, Argonne, Illinois
Mr. John D. Sieber, Retired Senior Vice-President, Nuclear Power Division, Duquesne Light Company, Pittsburgh, Pennsylvania
Mr. Gordon (Dick) Skillman, Principal, Skillman Technical Resources Inc., Hershey, Pennsylvania
Mr. John W. Stetkar (Vice-Chairman), Principal, Stetkar & Associates, Lake Forest, California
ABSTRACT

In this report, the Advisory Committee on Reactor Safeguards (ACRS) presents the results of its assessment of the quality of selected research projects sponsored by the Office of Nuclear Regulatory Research (RES) of the NRC. An analytic/deliberative methodology was adopted by the Committee to guide its review of research projects. The methods of multi-attribute utility theory were utilized to structure the objectives of the review and develop numerical scales for rating the project with respect to each objective. The results of the evaluations of the quality of the two research projects are summarized as follows:
- Confirmatory Thermal-Hydraulic Analysis to Support Specific Success Criteria in the Standardized Plant Analysis Risk Models Surry and Peach Bottom, NUREG-1953
- This project was found to be satisfactory, a professional work that satisfies research objectives.
- Evaluation of JNES Equipment Fragility Tests for Use in Seismic Probabilistic Risk Assessments for U.S. Nuclear Power Plants, NUREG/CR-7040
- This project was found to be satisfactory, a professional work that satisfies research objectives.
CONTENTS

ABSTRACT
FIGURES
TABLES
ABBREVIATIONS
1. INTRODUCTION
2. METHODOLOGY FOR EVALUATING THE QUALITY OF RESEARCH PROJECTS
3. RESULTS OF QUALITY ASSESSMENT
3.1 Confirmatory Thermal-Hydraulic Analysis to Support Specific Success Criteria in the Standardized Plant Analysis Risk Models - Surry and Peach Bottom
3.2 Evaluation of JNES Equipment Fragility Tests for Use in Seismic Probabilistic Risk Assessments for U.S. Nuclear Power Plants
4. REFERENCES

FIGURES
1. The value tree used for evaluating the quality of research projects

TABLES
1. Constructed Scales for the Performance Measures
2. Summary Results of ACRS Assessment of the Quality of the Project on Confirmatory Thermal-Hydraulic Analysis to Support Specific Success Criteria in the Standardized Plant Analysis Risk Models - Surry and Peach Bottom
3. Summary Results of ACRS Assessment of the Quality of the Project on Evaluation of JNES Equipment Fragility Tests for Use in Seismic Probabilistic Risk Assessments for U.S. Nuclear Power Plants
ABBREVIATIONS

ACRS - Advisory Committee on Reactor Safeguards
AEC - Atomic Energy Commission
ANS - American Nuclear Society
ASME - American Society of Mechanical Engineers
BNL - Brookhaven National Laboratory
BWR - Boiling Water Reactor
CFR - Code of Federal Regulations
FACA - Federal Advisory Committee Act
FY - Fiscal Year
GPRA - Government Performance and Results Act
HCLPF - High Confidence Low Probability of Failure
HHSI - High Head Safety Injection
INL - Idaho National Laboratory
IORV - Inadvertent Open Relief Valve
JNES - Japan Nuclear Energy Safety Organization
LOCA - Loss of Coolant Accident
LPCI - Low Pressure Coolant Injection
LWR - Light Water Reactor
MAUT - Multi-Attribute Utility Theory
NPP - Nuclear Power Plant
NRC - Nuclear Regulatory Commission
ORNL - Oak Ridge National Laboratory
PORV - Power Operated Relief Valve
PRA - Probabilistic Risk Assessment
PWR - Pressurized Water Reactor
RCP - Reactor Coolant Pump
RCS - Reactor Coolant System
RCW - Reactor Cooling Water
RES - Office of Nuclear Regulatory Research
RWST - Refueling Water Storage Tank
SA - Spectral Acceleration
SGTR - Steam Generator Tube Rupture
SOARCA - State-of-the-Art Reactor Consequence Analyses
SOW - Statement of Work
SPAR - Standardized Plant Analysis Risk
U.S. - United States
WCAP - Westinghouse Commercial Atomic Power
ZPA - Zero Period Acceleration
1 INTRODUCTION

The Nuclear Regulatory Commission (NRC) maintains a safety research program to ensure that the agency's regulations have sound technical bases. The research effort is needed to support regulatory activities and agency initiatives while maintaining an infrastructure of expertise, facilities, analytical tools, and data to support regulatory decisions.
The Office of Nuclear Regulatory Research (RES) is required to have an independent evaluation of the effectiveness (quality) and utility of its research programs. This evaluation is required by the NRC Strategic Plan that was developed as mandated by the Government Performance and Results Act (GPRA). Since fiscal year (FY) 2004, the Advisory Committee on Reactor Safeguards (ACRS) has been assisting RES by performing independent assessments of the quality of selected research projects [1-8]. The Committee established the following process for conducting the review of the quality of research projects:
- RES submits to the ACRS a list of candidate research projects for review because they have reached sufficient maturity that meaningful technical review can be conducted.
- The ACRS selects a maximum of four projects for detailed review during the fiscal year.
- A panel of three to four ACRS members is established to assess the quality of each research project.
- The panel follows the guidance developed by the ACRS full Committee in conducting the technical review. This guidance is discussed further below.
- Each panel assesses the quality of the assigned research project and presents an oral and a written report to the ACRS full Committee for review. This review is to ensure uniformity in the evaluations by the various panels.
- The Committee submits an annual summary report to the RES Director.
Based on our later discussions with the RES, the ACRS made the following enhancements to its quality assessment process:
- After familiarizing itself with the research projects selected for quality assessment, each panel holds an informal meeting with the RES project manager and representatives of the User Office to obtain an overview of the project and the User Office's insights on the expectations for the project with regard to their needs.
- In addition, if needed, an additional informal meeting would be held with the project manager to obtain further clarification of information prior to completing the quality assessment.
The purposes of these enhancements were to ensure greater involvement of the RES project managers and their program office counterparts during the review process and to identify objectives, user office needs, and perspectives on the research projects.
An analytic/deliberative decisionmaking framework was adopted for evaluating the quality of NRC research projects. The definition of quality research adopted by the Committee includes two major characteristics:
- Results meet the objectives
- The results and methods are adequately documented

Within the first characteristic, the ACRS considered the following general attributes in evaluating the NRC research projects:
- Soundness of technical approach and results
- Has execution of the work used available expertise in appropriate disciplines?
- Justification of major assumptions
- Have assumptions key to the technical approach and the results been tested or otherwise justified?
- Treatment of uncertainties/sensitivities
- Have significant uncertainties been characterized?
- Have important sensitivities been identified?
Within the general category of documentation, the projects were evaluated in terms of the following measures:
- Clarity of presentation
- Identification of major assumptions

In this report, the ACRS presents the results of its assessment of the quality of the research projects associated with:
- Confirmatory Thermal-Hydraulic Analysis to Support Specific Success Criteria in the Standardized Plant Analysis Risk Models Surry and Peach Bottom
- Evaluation of JNES Equipment Fragility Tests for Use in Seismic Probabilistic Risk Assessments for U.S. Nuclear Power Plants

These two projects were selected from a list of candidate projects suggested by RES.
The methodology for developing the quantitative metrics (numerical grades) for evaluating the quality of NRC research projects is presented in Section 2 of this report. The results of the assessment and ratings for the selected projects are discussed in Section 3.
2 METHODOLOGY FOR EVALUATING THE QUALITY OF RESEARCH PROJECTS

To guide its review of research projects, the ACRS has adopted an analytic/deliberative methodology [9-10]. The analytical part utilizes methods of multi-attribute utility theory (MAUT)
[11-12] to structure the objectives of the review and develop numerical scales for rating the project with respect to each objective. The objectives were developed in a hierarchical manner (in the form of a value tree), and weights reflecting their relative importance were developed.
The value tree and the relative weights developed by the full Committee are shown in Figure 1.
Figure 1. The value tree used for evaluating the quality of research projects. [Figure: the top-level objective, Research Quality, branches into Results Meet the Objectives (weight 0.75) and Documentation (weight 0.25); the performance measures and their weights are Soundness of Technical Approach/Results (0.52), Clarity of Presentation (0.16), Justification of Major Assumptions (0.12), Uncertainties/Sensitivities Addressed (0.11), and Identification of Major Assumptions (0.09).]

The quality of projects is evaluated in terms of the degree to which the results meet the objectives of the research and of the adequacy of the documentation of the research. It is the consensus of the ACRS that meeting the objectives of the research should have a weight of 0.75 in the overall evaluation of the research project. Adequacy of the documentation was assigned a weight of 0.25. Within these two broad categories, research projects were evaluated in terms of subsidiary performance measures:
- justification of major assumptions (weight: 0.12)
- soundness of the technical approach and reliability of results (weight: 0.52)
- treatment of uncertainties and characterization of sensitivities (weight: 0.11)
Documentation of the research was evaluated in terms of the following performance measures:
- clarity of presentation (weight: 0.16)
- identification of major assumptions (weight: 0.09)
To evaluate how well the research project performed with respect to each performance measure, constructed scales were developed as shown in Table 1. The starting point is a rating of 5, Satisfactory (professional work that satisfies the research objectives). Often in evaluations of this nature, a grade that is less than excellent is interpreted as pejorative. In this ACRS evaluation, a grade of 5 should be interpreted literally as satisfactory. Although innovation and excellent work are to be encouraged, the ACRS realizes that time and cost place constraints on innovation. Furthermore, research projects are constrained by the work scope that has been agreed upon. The score was then increased or decreased according to the attributes shown in the table. The overall score of the project was produced by multiplying each score by the corresponding weight of the performance measure and adding all the weighted scores.
The value tree, weights, and constructed scales were the result of extensive deliberations of the whole ACRS. As discussed in Section 1, a panel of three ACRS members was formed to review each selected research project. Each member of the review panel independently evaluated the project in terms of the performance measures shown in the value tree. The panel deliberated the assigned scores and developed a consensus score, which was not necessarily the arithmetic average of individual scores. The panel's consensus score was discussed by the full Committee and adjusted in response to ACRS members' comments. The final consensus scores were multiplied by the appropriate weights, the weighted scores of all the categories were summed, and an overall score for the project was produced. A set of comments justifying the ratings was also produced.
Table 1. Constructed Scales for the Performance Measures

SCORE   RANKING        INTERPRETATION
10      Outstanding    Creative and uniformly excellent
8       Excellent      Important elements of innovation or insight
5       Satisfactory   Professional work that satisfies research objectives
3       Marginal       Some deficiencies identified; marginally satisfies research objectives
0       Unacceptable   Results do not satisfy the objectives or are not reliable
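To make the scoring arithmetic concrete, the following minimal sketch applies the weights from Figure 1 to a set of hypothetical consensus scores. The weights are taken from this report; the scores, the function name, and the dictionary layout are illustrative assumptions only.

```python
# Weights from Figure 1 and Section 2; the consensus scores used below are hypothetical.
WEIGHTS = {
    "clarity of presentation": 0.16,
    "identification of major assumptions": 0.09,
    "justification of major assumptions": 0.12,
    "soundness of technical approach/results": 0.52,
    "treatment of uncertainties/sensitivities": 0.11,
}

def overall_score(consensus_scores):
    """Weighted sum of the consensus scores over the five performance measures."""
    return sum(WEIGHTS[measure] * score for measure, score in consensus_scores.items())

hypothetical = {measure: 5.0 for measure in WEIGHTS}  # 'satisfactory' on every measure
print(round(overall_score(hypothetical), 1))          # 5.0, since the weights sum to 1.0
```

Because the weights sum to 1.0, a project rated satisfactory (5) on every measure receives an overall score of 5.0; the overall scores reported in Tables 2 and 3 follow the same weighted-sum procedure, with the tabulated weighted scores rounded to one decimal place.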
3 RESULTS OF QUALITY ASSESSMENT

3.1 Confirmatory Thermal-Hydraulic Analysis to Support Specific Success Criteria in the Standardized Plant Analysis Risk Models - Surry and Peach Bottom

One of the key strengths and challenges of probabilistic risk assessment (PRA) models is the integration of modeling capability from different disciplines, including human performance, thermal-hydraulics, severe accident progression, nuclear analysis, fuels behavior, structural analysis, and materials analysis. In some cases, thermal-hydraulic success criteria from the U.S. Nuclear Regulatory Commission's (NRC's) Standardized Plant Analysis Risk (SPAR) models are inconsistent when compared to counterpart licensee PRAs, other relevant SPAR models (i.e., models for similar plants), or relevant engineering studies. Such inconsistencies often reflect inconsistencies in licensee PRAs for similar plants because success criteria in the SPAR models are largely based on the success criteria used in the associated licensee PRA model. Licensees have used a variety of methods to determine success criteria, including conservative design-basis analyses and more realistic best-estimate methods. Consequently, in some situations plants that should behave similarly from an accident sequence standpoint have different success criteria for specific scenarios. This issue has been recognized for some time, but until recently the infrastructure was not in place at the NRC to support refinement of these success criteria.
The NRC staff, in collaboration with the staff of Idaho National Laboratory (INL), has recently completed an initial effort to strengthen the technical basis for thermal-hydraulic aspects of the SPAR models. The results of this effort are documented in NUREG-1953, Confirmatory Thermal-Hydraulic Analysis to Support Specific Success Criteria in the Standardized Plant Analysis Risk Models - Surry and Peach Bottom [13]. The scope of this quality review is limited to this report.
As documented in NUREG-1953, MELCOR calculations were completed for specific sequences to provide the basis for confirming or changing the corresponding SPAR model success criteria.
The Surry Power Station (Surry) and the Peach Bottom Atomic Power Station (Peach Bottom) plants were selected for these calculations because MELCOR input models were available from the State-of-the-Art Reactor Consequence Analyses (SOARCA) project [14, 15].
The analysis focused upon developing a basis for the use of a 2,200 °F (1,204 °C) peak cladding temperature as a surrogate for core damage because no universal quantitative definition of core damage exists. The American Society of Mechanical Engineers (ASME)/American Nuclear Society (ANS) PRA standard [16] defines core damage as uncovery and heatup of the reactor core to the point at which prolonged oxidation and severe fuel damage are anticipated and involving enough of the core, if released, to result in offsite public health effects. The core damage surrogate provides the linkage between the qualitative definition above and quantitative, measurable computer code outputs.
The NUREG-1953 report indicates that results were mapped to SPAR models for other similar plants. However, it is noted that additional work must be completed to perform similar analyses for other types of plant designs. In addition, the report notes that work is planned related to other aspects of this topical area, including the degree of variation typical in common PRA sequences and the quantification of conservatisms associated with core damage surrogates.
General Observations

The consensus scores for this project are shown in Table 2. The score for the overall assessment of this work was found to be 5.1 (satisfactory, a professional work that satisfies research objectives). Comments and conclusions within the evaluation categories are provided below. Our most significant recommendation pertains to the area "Treatment of Uncertainties/Sensitivities". As noted below, we strongly recommend that follow-on efforts include an activity to explicitly identify and address uncertainties.
Table 2. Summary Results of ACRS Assessment of the Quality of the Project, Confirmatory Thermal-Hydraulic Analysis to Support Specific Success Criteria in the Standardized Plant Analysis Risk Models - Surry and Peach Bottom

Performance Measures                        Consensus Scores   Weights   Weighted Scores
Clarity of presentation                     6.0                0.16      1.0
Identification of major assumptions         5.0                0.09      0.5
Justification of major assumptions          4.3                0.12      0.5
Soundness of technical approach/results     5.3                0.52      2.8
Treatment of uncertainties/sensitivities    2.3                0.11      0.3
Overall Score                                                            5.1

Clarity of Presentation (Consensus Score - 6.0)
The report was well-organized and well-written. The objective of the report was to document results from MELCOR analyses that were used to provide a basis for confirming or changing success criteria related to thermal hydraulics aspects of the SPAR models for the Surry and Peach Bottom plants. That objective was achieved.
The analysis approach and results are sufficiently documented that readers can understand what was done and find supporting information for specific analyses. The tables of applied input conditions for each analysis case could be improved if they were organized in logical progressions of various input conditions for some analyses. However, the reader can understand which combinations of conditions were run or can examine each set of cases for completeness. The summary tables provide much better documentation for such comparisons and completeness checks than separate descriptions of the input conditions for each analysis case.
The comparisons of the estimated timing of various events versus the core damage surrogates were new results and interesting to view. It was a useful way to display the basis selected for core damage.
Some minor corrections or clarifications (e.g., missing equations, missing references, undefined acronyms, and Appendix A plots with undefined variable identifiers in the curve legend and with 'missing' curves masked by other curves) are needed in the current version of this report. Appendix A figures in which all of the plotted values are zero should be eliminated. However, the staff is aware of these issues and plans to issue an updated version of this report.
Identification of Major Assumptions (Consensus Score - 5.0)
In general, readers can find and understand assumptions that were made for each analysis. However, some assumptions are "buried" in the supporting text. In trying to identify the reason for the observed behavior in some of the event sequence timing and plots, the relevant assumption was only stated in the narrative description, rather than listed in the "key assumptions" for each analysis.
Justification of Major Assumptions (Consensus Score - 4.3)
The assumptions about event scenario progression, equipment performance, and operator actions seem to have reasonable justification. However, some assumptions are simply stated, without explanatory material to describe their technical bases. This is especially true for the timing of operator actions that is included in some scenarios.
The authors note that the analysis was performed with MELCOR 1.8.6, and assert that future MELCOR updates won't change results presented in this document. It isn't clear how one can justify such a blanket assertion. For example, fuel thermal conductivity models in updated versions of MELCOR have not been updated to reflect degradation associated with burnup.
The report notes that the plant models were primarily based on the models used for the SOARCA evaluations (although some modifications were implemented because additional detail was required for this application). Although the nodalization scheme for each plant model is presented, the main report does not include any justification, such as sensitivities, for this nodalization. In Appendix D, the adequacy of the selected MELCOR Surry nodalization schemes was questioned in a public comment.
The authors responded that the nodalization used in the Surry model follows a well-established nodalization convention for the use of MELCOR in reactor applications and that past sensitivity studies have shown that the selected nodalization can reproduce the necessary physics for the types of accidents considered in this report.
However, the authors should also supply references to support their response.
Soundness of Technical Approach/Results (Consensus Score - 5.3)
The scope of the analyses has examined the key issues that affect the PRA success criteria for the targeted scenarios. The MELCOR analysis results seem reasonable and consistent with typical thermal-hydraulic results from other plant-specific models.
With the possible exception of Case 5 for the Peach Bottom Inadvertent Open Relief Valve analyses (discussed below in "Treatment of Uncertainties/Sensitivities"), none of the results seem counter-intuitive.
Treatment of Uncertainties/Sensitivities (Consensus Score - 2.3)
The study authors admit that no formal uncertainty analyses or systematic sensitivity analyses were performed (Table 1, Requirement SC-C3). The authors infer that the large margins to core damage indicated by sensitivity analyses compensate for the effects of uncertainty in the models or data.
A formal activity to identify and address sources of uncertainty could have an important effect on the overall results for specific analyses. Several examples are discussed below to illustrate why the selected approach may be inadequate. Hence, we recommend that follow-on efforts explicitly include an activity to identify and address uncertainties.
Safety/Relief Valve Re-closure Failure Rates

There is very large uncertainty in the assigned failure rates for PWR and BWR relief valve failures to reclose after steam relief and water relief. The report citations of point-estimate values from NUREG/CR-7037 imply that those values are derived from robust operating experience data. In fact, the vast majority of the cited values are derived from evidence of zero failures in a relatively small number of demands, which was generally used to update a non-informative prior distribution. That process typically results in very large uncertainties. The point-estimate values from NUREG/CR-7037 are then characterized in this report as very precise numbers (e.g., failure to reclose after 956 lifts).
The study does not use the failure rates from NUREG/CR-7037. It uses the point-estimate failure rates from the Surry and Peach Bottom PRAs, with no further discussion of the sources for the data or the uncertainties in those failure rates. The MELCOR models then assume that the respective valve sticks open after the number of cycles which result in a cumulative failure probability of 50%. For example, the point-estimate failure rate for the Surry Pressurizer PORVs is 2.8E-03 failure to reclose per demand. According to the applied valve failure model, there would be 50% cumulative probability that a PORV has failed to reclose after 247 cycles. Therefore, the MELCOR models assume that the valve sticks open at 247 cycles.
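As a minimal sketch of the cumulative-probability arithmetic described above (the 2.8E-03 per-demand value and the 247-cycle figure are from the text; the function name is illustrative only):

```python
def cumulative_failure_probability(p_per_demand, n_cycles):
    """Probability that a valve fails to reclose at least once in n_cycles independent demands."""
    return 1.0 - (1.0 - p_per_demand) ** n_cycles

# Point-estimate failure rate cited above for the Surry pressurizer PORVs: 2.8E-03 per demand
print(round(cumulative_failure_probability(2.8e-3, 247), 2))  # ~0.50, the assumed stick-open point
```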
Some of the analyses include sensitivity cases which assume that a PORV, safety valve, or relief valve sticks open on the first demand or never sticks open. Those cases help to better characterize the range of results that may occur. However, the sensitivity studies are not performed for all analyses, and the text does not address the potential effects from these uncertainties. Some potential large effects that aren't addressed include:
- Behavior of the PORV is important for the PWR small break Loss of Coolant Accident (LOCA) analyses. A stuck-open PORV is beneficial, because it allows the RCS to depressurize for automatic low pressure recirculation before core damage occurs (see Table 6, Cases 5, 6, 7, 8).
- Behavior of the PORV is also important for the PWR feed and bleed scenarios. Those scenarios assume that the PORV is not opened manually, and it cycles less than the 247 times calculated for the 50% cumulative probability to stick open (relieving either water or two-phase fluid from successful HHSI injection - see plots in Appendix A.3). It isn't immediately obvious if a stuck-open PORV is advantageous for these scenarios. It would increase heat removal but increase the rate of RWST depletion. It would also depressurize the RCS after the RWST is drained, leading to subsequent automatic low pressure recirculation.
- Behavior of the ruptured steam generator relief valve is somewhat important for SGTR scenarios. If the valve does not stick open until 119 cycles with water relief, core damage is not expected until after 24 hours, even with 5 ruptured tubes (see Table 11 and plots in Appendix A.4). However, this may not be too important for the overall conclusions, because 119 cycles are exceeded at 0.76 hour for one ruptured tube and at 0.35 hour for five ruptured tubes (see Table 12).
- Behavior of the main steam relief valves may be important for BWR station blackout scenarios. If a relief valve does not stick open until 187 cycles, core damage is not expected until about 7.2 hours (with only RCIC) or about 10.7 hours (with only HPCI). If the valve sticks open earlier, RCIC and HPCI will be disabled sooner, and less time will be available for AC power recovery or other mitigation actions (see Section 6.7, Case 6 and Case 10).
Modeling Uncertainties

We concur that characterization and quantification of the effects from uncertainties in the MELCOR model "physics" is beyond the scope or purpose of this study. Hence, the authors were not penalized for their lack of attention to "modeling uncertainty" in that context, beyond the sensitivity studies that were performed for specific phenomena such as reactor coolant pump (RCP) seal LOCA size. However, there are several areas where the authors should have at least discussed how phenomenological uncertainties might affect the results.
- The RCP seal LOCA analyses evaluated three seal leakage rates (21 gpm, 182 gpm, and 500 gpm). The 21-gpm leakage rate is normal leakage, which applies if no seal degradation occurs. All analyses initiated the seal failure (i.e., 182 gpm or 500 gpm) at time t = 13 minutes after loss of all AC power.
However, the authors also cite Westinghouse Commercial Atomic Power (WCAP) analyses which indicate that the time of seal failure may range from 8 minutes to 40 minutes after loss of all seal cooling. It would seem that a range of 8 minutes to 40 minutes for the onset of seal failure might significantly affect the amount of time that is available for recovery of offsite power or other operator actions to mitigate the event (e.g., active depressurization). The analyses in this report are focused on the derivation of those recovery time windows. The assigned time of seal failure at t = 13 minutes is not necessarily "conservative" for the purposes of these analyses.
It would be more useful if the analyses provided a matrix of results that apply for probabilistically-weighted combinations of the seal failure timing and the seal leakage rate (i.e., probability P that the seal failure occurs at time T, with leakage rate of L gpm), as illustrated in the sketch following this list.
- Case 5 for the Peach Bottom Inadvertent Open Relief Valve (IORV) analyses includes only Low Pressure Coolant Injection (LPCI). The plots for this case in Appendix B show that LPCI flow starts at about t = 2400 seconds after the initiating event. Reactor water level is below the core mid-plane from about t = 1700 seconds until about t = 2800 seconds (approximately 18 minutes) during this case. The minimum level occurs at about t = 2400 seconds, and is only slightly above the bottom of the active fuel. The MELCOR analyses conclude that no core damage occurs during this case (peak cladding temperature = 1212 K = 939 °C = 1722 °F). This could be a very important conclusion for the PRA success criteria, because it indicates that absolutely no high pressure injection is needed to mitigate this event. The discussion does not examine possible assumptions, modeling limitations, or other conditions that could affect this conclusion. For example, how well does MELCOR model steam cooling of the fuel under these conditions? Does MELCOR account for the effects from fuel thermal conductivity degradation? Would the conclusion be different if the safety relief valve (SRV) was not fully open? (Table 37 indicates that the LPCI-only success criteria were not used in the current SPAR models.)
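As referenced in the seal LOCA item above, a probabilistically-weighted matrix of seal failure timing and leakage rate could be assembled along the following lines. All of the probability values in this sketch are hypothetical placeholders; they are not taken from the report or from the WCAP analyses.

```python
# Hypothetical weights for illustration only; none of these probabilities are from the report.
failure_time_weights = {8: 0.25, 13: 0.50, 40: 0.25}  # P(seal failure at T minutes after loss of cooling)
leak_rate_weights = {182: 0.6, 500: 0.4}               # P(leakage rate of L gpm, given seal failure)

matrix = {
    (t_min, l_gpm): p_t * p_l
    for t_min, p_t in failure_time_weights.items()
    for l_gpm, p_l in leak_rate_weights.items()
}

for (t_min, l_gpm), weight in sorted(matrix.items()):
    print(f"failure at {t_min:>2} min with {l_gpm} gpm leak: weight {weight:.2f}")
```

Each (T, L) combination would then be paired with the corresponding MELCOR-calculated recovery time window.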
Uncertainty Effects on SPAR Model Success Criteria

For the results listed in Tables 36 and 37 of Section 7, uncertainties in the input parameters or the MELCOR models do not directly affect the updated SPAR model success criteria. However, such uncertainties may affect applied time windows for operator actions, electric power recovery options, alternate cooling strategies, etc.
that are not currently included in the Surry and Peach Bottom SPAR models, but may be considered in the future for those models or for other plants. Without appropriate identification and characterization of the uncertainties in the analyses in this report, there is a distinct likelihood that PRA analysts will simply refer to the tables and figures for specific analysis cases as "accepted" justification for specific event progression and timing, without an appreciation of how those results may be influenced by the associated uncertainties.
3.2 Evaluation of JNES Equipment Fragility Tests for Use in Seismic Probabilistic Risk Assessments for U.S. Nuclear Power Plants

The Japan Nuclear Energy Safety Organization (JNES) conducted a multi-year (2002-2012) equipment fragility test program to obtain realistic equipment fragility capacities for use in the seismic probabilistic risk assessments (PRAs) of nuclear power plants (NPPs) in Japan. As part of collaborative efforts between the United States and Japan on seismic issues, the NRC and Brookhaven National Laboratory (BNL) independently participated in this program by evaluating the results of the JNES equipment fragility tests. The goal of this research effort was to compare the JNES fragility results with the fragility data typically used in current U.S. seismic PRAs and to assess the impact that the new test results may have on current seismic PRAs and how these data can be utilized for future seismic PRAs. The results of BNL's (and its independent consultants') evaluation of the JNES equipment fragility test data are summarized in NUREG/CR-7040, Evaluation of JNES Equipment Fragility Tests for Use in Seismic Probabilistic Risk Assessments for U.S. Nuclear Power Plants [17]. The scope of this quality review is limited to this report.
Although a large amount of generic fragility data and estimates is available, very few of these data (except in the case of relays) have been obtained from full-scale tests of equipment under seismic excitations that greatly exceed design basis earthquake levels. In most of the equipment qualification tests from which equipment fragilities have been derived, the input seismic waves are only at or slightly higher than the design basis earthquake. The JNES equipment fragility test program is a very comprehensive effort to determine realistic seismic equipment fragility capacities based on full-scale high-level shaking table tests. The JNES fragility evaluation considered both structural and functional limit states.
The purpose of the full-scale tests was to identify critical acceleration levels and failure modes of the equipment. After critical elements were identified from the full scale tests, element tests were conducted with multiple samples for each element type to determine their median capacity and the associated statistical variability.
The NUREG/CR-7040 report includes a review of common U.S. practice for estimating seismic fragilities of equipment qualified by test and a comparison with the JNES approach to developing fragility data of equipment for nuclear facilities by shaking tests. U.S. practice defines fragilities in terms of a broad frequency 5% damped spectral acceleration (SA) at the base of the component. The response spectrum shape used in the JNES tests is very different.
The JNES tests had more highly amplified and narrower frequency response spectra that typically peaked in the 7 to 10 Hz frequency range. The Japanese results are also reported in terms of the zero period acceleration (ZPA) rather than the SA typically used in the U.S.
Because of these differences in test procedures, some judgment is required to compare results from the Japanese tests with fragility values used in U.S. seismic PRAs. Based on comparisons of the spectral shapes in the Japanese tests, the authors suggest that it is most appropriate, on average, to convert the JNES-reported ZPA fragilities to equivalent broad frequency 5% damped SA fragilities by the relation SA ≈ 2.4 × ZPA. Detailed results for the JNES tests are included as an appendix to the report so that users can do more rigorous comparisons for specific cases. They also note that caution must be applied to assess the applicability of the results to the specific equipment being considered. In particular, an analysis of the component anchorage and support fragility needs to be performed as a necessary supplement to the equipment fragility data for a proper application.
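A minimal sketch of how an analyst might apply the suggested conversion together with a standard lognormal fragility model follows. The 2.4 conversion factor is from the report, while the median capacity and composite uncertainty values are hypothetical and the names are illustrative only.

```python
import math
from statistics import NormalDist

ZPA_TO_SA = 2.4  # average conversion suggested in NUREG/CR-7040 (broad frequency, 5% damped SA)

def lognormal_fragility(demand_g, median_g, beta_c):
    """Conditional probability of failure at a given demand, for a lognormal fragility curve."""
    return NormalDist().cdf(math.log(demand_g / median_g) / beta_c)

# Hypothetical component: JNES-style median ZPA capacity of 6.0 g, composite uncertainty of 0.4
median_sa = 6.0 * ZPA_TO_SA                      # equivalent SA-based median capacity (~14.4 g)
print(lognormal_fragility(1.0, median_sa, 0.4))  # failure probability at a 1.0 g demand (very small)
```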
In some cases, the test results show that the fragility values typically used in seismic PRAs may be very conservative. For large horizontal pumps, such as Reactor Cooling Water (RCW) pumps and Charging High Pressure Injection pumps, it has been common U.S. fragility practice to base the fragility estimate on a review and scaling of the qualification stress report for the specific pump involved. For lower-seismicity Central and Eastern U.S. regions, and for less critical horizontal pumps, the median ZPA capacity of horizontal pumps has been estimated to be about 2.0 g, which is much less than the function-confirmed ZPA of 6.0 g obtained in the JNES RCW pump full-scale test.

Eight electrical panels were selected for the JNES full-scale tests, including a main control board, a reactor auxiliary control board, a logic circuit control panel, an instrumentation rack, a reactor protection rack, a reactor control center, a power center, and a 6.9 kV metal-clad switchgear. However, based on comparisons of both the natural frequencies of the panels and the response amplification factors, it appears that the JNES-tested electrical components are much stiffer than most electrical components in existing U.S. plants. Thus, the JNES-reported median fragilities should not be used for U.S. electrical components unless it can be shown that the component has stiffnesses similar to those tested by JNES.
JNES performed a full-scale test on a control rod drive mechanism, control rod, and fuel bundle assembly representative of 3- and 4-loop PWR plants. The JNES fragility results are applicable for failure modes associated with fuel assembly displacements. Within the U.S., control rod insertion fragilities are generally derived based on a detailed review and scaling of nuclear steam supply system (NSSS) vendor-submitted qualification report results. For PWR plants, the derived fragilities are generally controlled by the supports of the control rod drive mechanism. The failure modes that have typically been considered to be controlling in U.S. fragility assessments for control rod insertion could not have occurred during these tests because the entire fuel assembly was supported by very stiff frames in the JNES tests.
Table 3. Summary Results of ACRS Assessment of the Quality of the Project on Evaluation of JNES Equipment Fragility Tests for Use in Seismic Probabilistic Risk Assessments for U.S. Nuclear Power Plants

Performance Measures                        Consensus Scores   Weights   Weighted Scores
Clarity of presentation                     5.0                0.16      0.8
Identification of major assumptions         5.3                0.09      0.5
Justification of major assumptions          4.3                0.12      0.5
Soundness of technical approach/results     5.3                0.52      2.8
Treatment of uncertainties/sensitivities    4.3                0.11      0.5
Overall Score                                                            5.1
General Observations

The consensus scores for this project are shown in Table 3. The score for the overall assessment of this work was found to be 5.1 (satisfactory, a professional work that satisfies research objectives). Comments and conclusions within the evaluation categories are provided below.
Clarity of Presentation (Consensus Score - 5)
The document is adequately organized. Inclusion of the translated Japanese publication augmented the report. More attention could have been given to separation of description and assessment.
The report is well organized and reasonably clear even for non-seismic experts. The discussions of the U.S. and Japanese test approaches are helpful for understanding the difficulties that can arise in comparisons of the data. The inclusion of the detailed test data as an appendix is important.
The report is generally easy to follow, although there are a few places where it descends into jargon, because terms are not defined in the text and tables (and there is no table of acronyms or glossary). The executive summary and Chapter 1 are well written and very clear, laying out the objectives and major assumptions and their bases. They also warn of limitations in the study and provide caveats on the use of the results.
In Chapter 2, many specific parameters used in the discussion and accompanying tables are undefined and there is no glossary or table of acronyms to explain the usage. Chapter 2 is meant to be a summary of past and existing practice in the U.S.
Perhaps the authors expect all their readers to be familiar with this history and the terms. But then, why write the chapter? For new readers, it could be very confusing.
The authors also get a bit sloppy with definitions and usage. For example, their definition of high confidence low probability of failure (HCLPF) is not the definition, but a common usage approximation that has generally proven to be reasonable.
Also, in their discussion of the uncertainty factors βr [the aleatory component of uncertainty (called "randomness" when these expressions were originally developed)], βu [the epistemic part of the uncertainty (originally called "uncertainty")], and βc [the composite uncertainty (the square root of the sum of the squares of βr and βu for lognormal distributions)], they sometimes describe them correctly and sometimes lump them together as "variability."
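Assuming the common-usage approximation alluded to above is the familiar relation HCLPF ≈ Am·exp[-1.65(βr + βu)], where Am is the median capacity, a minimal sketch with hypothetical values would be:

```python
import math

def hclpf_approx(median_capacity_g, beta_r, beta_u):
    """Common-usage HCLPF approximation: median capacity reduced by exp(-1.65 * (beta_r + beta_u))."""
    return median_capacity_g * math.exp(-1.65 * (beta_r + beta_u))

# Hypothetical capacity and uncertainty values, for illustration only
print(round(hclpf_approx(2.0, 0.3, 0.3), 2))  # ~0.74 g
```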
Chapter 3 describes Japanese fragility tests fairly well and is easy to follow.
Embedded in the discussion are many important details of the tests, how they were performed, why they stopped where they did, and how those facts could affect their value to future seismic PRAs.
Chapters 4 and 5 compare the test results with fragility data commonly used in U.S. seismic PRAs and discuss whether it would be appropriate to use them in new U.S. seismic PRA calculations. Chapter 5 summarizes the authors' conclusions. This material is clearly written.
Identification of Major Assumptions (Consensus Score - 5.33)
Some important assumptions are not identified. For example, the very different approaches to selecting an input spectrum for the tests are described, but no attempt is made to justify or describe why either one is appropriate. Since this is a report primarily of interest to practitioners of the seismic fragility art, this lack of identification for fundamental assumptions is acceptable. The importance of elements such as supports and attachments is stated clearly.
The predominant assumptions of technical importance are associated with the design of the Japanese test program. Authors of the report are not responsible for this program. The report could have been improved with a more comprehensive exploration of this plan.
The authors are careful to define the main assumptions in the report, in the Executive Summary and in the body of the report. It is an unusual research report, in that it is not reporting the authors' original technical work, but examining the fragility test results developed in Japan for applicability to U.S. seismic PRAs. The authors also point out the reasons the test results may not match the design of current U.S. plants.

Justification of Major Assumptions (Consensus Score - 4.33)
As noted previously, some assumptions are acceptable simply as part of the terms of the art. However, the discussion of the development of the relation between the spectral acceleration SA and the zero period acceleration (ZPA) was not clear and could have been expanded.
Important assumptions made in the conduct of the test program are of course beyond the control of the report authors. Assessments made of the applicability are probably accurate but are based almost entirely on expert judgment. They could benefit from additional substantiation.
Soundness of Technical Approach/Results (Consensus Score - 5.33)
The technical approach for this study is sound, and the goals are admirable. As pointed out in the report, although the amount of generic fragility data is large, it has been extremely rare that fragility data are directly obtained from full-scale tests that greatly exceed the design basis earthquake. The idea that such data, generated in a Japanese test program, could be applicable to U.S. seismic PRAs needed to be examined. The report also points out that about half the time the overall component fragility is governed by anchorage or support capacity, which must be analyzed on a case-by-case basis; i.e., even if JNES tests showed much higher capacities than previously developed by analysis, the overall effect could be substantially less.
The report meets most of the broad objectives stated in the introduction in that it compares the JNES fragility results with fragility data typically used in current U.S. seismic PRAs and assesses the impact that the new test results may have on current seismic PRAs. The report was also supposed to assess how the new data can be used for future seismic PRAs. Here the authors simply said that the data might be applicable to equipment in new plant designs (no technical evaluation, and not helpful at all), except for electrical components, where they point out that if the specific components tested are used in new plants the test data are applicable; of course, that is a limited set.
The actual tests were performed by JNES and are not part of our review evaluation.
However, the authors' evaluation of those tests is considered. The authors were uncritical of the approach used for component selection and for setting the combined [accelerator (shake table) on a shake table] input acceleration capacity. Both used Fussell-Vesely importance from a previous seismic PRA. This approach devalues equipment that did not substantially contribute to the PRA results. Because this could have been due to the use of overly optimistic data in that PRA, it would have been more appropriate to use additional measures such as risk achievement worth (RAW); i.e., the program was designed only to generate lower risk calculations in seismic PRAs.
The conclusions varied substantially among the four types of components tested:
- Electrical components. Although the test results showed the components more rugged than the past seismic PRA fragilities indicate, they point out that the tested components are much more stiff than those in current use in the U.S. We agree with the recommendation that the test data not be used for current U.S. seismic PRAs. The natural frequency of tested devices is much greater than the 7-11 Hz test spectra (Fig 3-16). The authors point this out, but it is not clear how the test results are corrected for excitation so far from the natural frequency. For this kind of equipment, shaking at other than the natural frequency is very unlikely to move contacts off their normal positions long enough to have any circuit effects.
- Horizontal shaft pumps. The tested pumps are similar to those used in the U.S., and the test results are recommended for current U.S. seismic PRAs. The test spectra were about the same as for electrical components, but the natural frequencies of the pump failure modes are not given.
- Large vertical shaft pumps. Test results seem to confirm the reasonableness of the current approach of calculating 90% of the fully plastic moment capacity for mounting bolts; lower motor stand; and pump barrel, casing, column or shaft. Pump-specific analysis is recommended.
- Control rod insertion. Because the stiff frames used in the tests mask likely failure modes, vendor calculations are recommended.
The document is a welcome contribution. The critical assessment of the applicability of the test results is especially important.
These are unique experimental results that are valuable for themselves and to benchmark estimates of fragilities obtained by other methods.
Treatment of Uncertainties/Sensitivities (Consensus Score - 4.33)
The discussions are purely qualitative, but the authors are careful to point out limitations and uncertainties that must be considered in attempts to use the data.
Chapter 2 describes the historical approach for addressing uncertainty in seismic PRA analysis in the U.S. Chapter 3 provides some details of uncertainty affecting the specific Japanese fragility tests, but Chapters 4 and 5 do not even mention uncertainty.
The report never points out that most of the uncertainty included in the seismic PRA fragility curve βr and βu values is due to uncertainty in the specific earthquake that actually occurs. That is, many different earthquakes (in terms of detailed frequency content and time histories) are lumped into a single-parameter acceleration for the family, and that single parameter is used to characterize the fragility curve. The report does not mention that shake-table tests, with a single frequency and duration, will not exhibit these uncertainty characteristics. Again, the knowledgeable reader will understand this, but the report should make it clear.
The work does not lend itself to detailed quantification of uncertainties and sensitivities. The report does an adequate job at identification of sensitivities that affect applicability of the results.
4 REFERENCES

1. Letter dated November 18, 2004, from Mario V. Bonaca, Chairman, ACRS, to Carl J. Paperiello, Director, Office of Nuclear Regulatory Research, NRC, Subject: ACRS Assessment of the Quality of Selected NRC Research Projects.
2. Letter dated November 5, 2005, from William J. Shack, Acting Chairman, ACRS, to Carl J. Paperiello, Director, Office of Nuclear Regulatory Research, NRC, Subject: ACRS Assessment of the Quality of Selected NRC Research Projects - FY 2005.
3. Letter dated October 17, 2006, from Graham B. Wallis, Chairman, ACRS, to Brian Sheron, Director, Office of Nuclear Regulatory Research, NRC, Subject: ACRS Assessment of the Quality of Selected NRC Research Projects - FY 2006.
4. Letter dated October 19, 2007, from William J. Shack, Chairman, ACRS, to Brian Sheron, Director, Office of Nuclear Regulatory Research, NRC, Subject: ACRS Assessment of the Quality of Selected NRC Research Projects - FY 2007.
5. Letter dated October 22, 2008, from William J. Shack, Chairman, ACRS, to Brian Sheron, Director, Office of Nuclear Regulatory Research, NRC, Subject: ACRS Assessment of the Quality of Selected NRC Research Projects - FY 2008.
6. Letter dated September 16, 2009, from Mario V. Bonaca, Chairman, ACRS, to Brian Sheron, Director, Office of Nuclear Regulatory Research, NRC, Subject: ACRS Assessment of the Quality of Selected NRC Research Projects - FY 2009.
7. Letter dated November 15, 2010, from Said Abdel-Khalik, Chairman, ACRS, to Brian Sheron, Director, Office of Nuclear Regulatory Research, NRC, Subject: ACRS Assessment of the Quality of Selected NRC Research Projects - FY 2010.
8. Letter dated September 19, 2011, from Said Abdel-Khalik, Chairman, ACRS, to Brian Sheron, Director, Office of Nuclear Regulatory Research, NRC, Subject: ACRS Assessment of the Quality of Selected NRC Research Projects - FY 2011.
9. National Research Council, Understanding Risk: Informing Decisions in a Democratic Society, National Academy Press, Washington, DC, 1996.
10. Apostolakis, G.E., and Pickett, S.E., "Deliberation: Integrating Analytical Results into Environmental Decisions Involving Multiple Stakeholders," Risk Analysis, 18:621-634, 1998.
11. Clemen, R., Making Hard Decisions, 2nd Edition, Duxbury Press, Belmont, CA, 1995.
12. Keeney, R.L., and Raiffa, H., Decisions with Multiple Objectives: Preferences and Value Tradeoffs, Wiley, New York, 1976.
13. Esmaili, H., Helton, D., Marksberry, et al., Confirmatory Thermal-Hydraulic Analysis to Support Specific Success Criteria in the Standardized Plant Analysis Risk Models - Surry and Peach Bottom, U.S. Nuclear Regulatory Commission, NUREG-1953, 2011.
14. U.S. Nuclear Regulatory Commission, NUREG/CR-7110, Vol. 1, State-of-the-Art Reactor Consequence Analyses (SOARCA) Report, Volume 1: Peach Bottom Integrated Analysis, Sandia National Laboratories, Albuquerque, NM, January 2012.
15. U.S. Nuclear Regulatory Commission, NUREG/CR-7110, Vol. 2, State-of-the-Art Reactor Consequence Analyses (SOARCA) Report, Volume 2: Surry Integrated Analysis, Sandia National Laboratories, Albuquerque, NM, January 2012.
16. ASME/ANS RA-Sa-2009, Standard for Level 1/Large Early Release Frequency Probabilistic Risk Assessment for Nuclear Power Plant Applications, Addendum A to RA-S-2008, ASME, New York, NY, and American Nuclear Society, La Grange Park, IL, February 2009.
17. Kennedy, R., Nie, J., and Hofmayer, C., Evaluation of JNES Equipment Fragility Tests for Use in Seismic Probabilistic Risk Assessments for U.S. Nuclear Power Plants, Brookhaven National Laboratory, BNL-NUREG-94629-2011, NUREG/CR-7040, 2011.