Draft Regulatory Guide DG-1096, Transient and Accident Analysis Methods

U.S. NUCLEAR REGULATORY COMMISSION
OFFICE OF NUCLEAR REGULATORY RESEARCH
December 2000
Division 1
Draft DG-1096

Contact: N. Lauben, (301) 415-6762

DRAFT REGULATORY GUIDE DG-1096
TRANSIENT AND ACCIDENT ANALYSIS METHODS

A. INTRODUCTION

Section 50.34, "Contents of Applications; Technical Information," of 10 CFR Part 50, "Domestic Licensing of Production and Utilization Facilities," requires that:

1. Safety Analysis Reports be submitted that analyze the design and performance of structures, systems, and components provided for the prevention of accidents and the mitigation of the consequences of accidents, and
2. Analysis and evaluation of emergency core cooling system (ECCS) cooling performance following postulated loss-of-coolant accidents (LOCAs) be performed in accordance with the requirements of 10 CFR 50.46.

The technical specifications for the facility (10 CFR 50.36) are to be based on the safety analysis.

This regulatory guide is being developed to describe a process that is acceptable to the NRC staff for the development and assessment of evaluation models that may be used to analyze transient and accident behavior. Chapter 15 of the Standard Review Plan (SRP) (NUREG-0800, Ref. 1) and the Standard Format and Content Guide (Regulatory Guide 1.70, Ref. 2) describe these events (transients and accidents), which are a subset of those required by 10 CFR 50.34 to be addressed. These events are presented in Sections 15.1 through 15.6 of the SRP, except for the fuel assembly misloading event and all radiological consequence analyses. An appendix to this regulatory guide is provided for ECCS analysis. As appropriate, other appendices will be developed for other specific classes of events that are described in SRP Sections 15.1 through 15.6 to address phenomena, assessment, uncertainty analyses, and other factors important or unique to a particular class of events.

This regulatory guide is intended to provide guidance on realistic accident analyses, which will provide a more reliable framework for risk-informed regulation and a basis for estimating the uncertainty in understanding transient and accident behavior.

This regulatory guide is being issued in draft form to involve the public in the early stages of the development of a regulatory position in this area. It has not received complete staff review or approval and does not represent an official NRC staff position.

Public comments are being solicited on this draft guide (including any implementation schedule) and its associated regulatory analysis or value/impact statement. Comments should be accompanied by appropriate supporting data. Written comments may be submitted to the Rules and Directives Branch, Office of Administration, U.S. Nuclear Regulatory Commission, Washington, DC 20555-0001. Comments may be submitted electronically or downloaded through the NRC's interactive web site at <WWW.NRC.GOV> through Rulemaking. Copies of comments received may be examined at the NRC Public Document Room, 11555 Rockville Pike, Rockville, MD. Comments will be most helpful if received by February 15, 2001.

Requests for single copies of draft or active regulatory guides (which may be reproduced) or for placement on an automatic distribution list for single copies of future draft guides in specific divisions should be made to the U.S. Nuclear Regulatory Commission, Washington, DC 20555, Attention: Reproduction and Distribution Services Section; by fax to (301) 415-2289; or by email to DISTRIBUTION@NRC.GOV. Electronic copies of this draft guide are available through the NRC's interactive web site (see above), on the NRC's web site <www.nrc.gov> in the Reference Library under Regulatory Guides, and in the NRC's Public Electronic Reading Room at the same web site, under Accession Number ML003770849.

Section 15.0.2 of the SRP (Ref. 1) provides guidance to NRC reviewers of transient and accident analysis methods. This regulatory guide and SRP Section 15.0.2 cover the same subject material and are meant to be complementary documents, with Section 15.0.2 providing guidance to reviewers and this guide providing practices and principles for the benefit of methods developers. Chapter 15 of the SRP recommends that approved evaluation models or codes be used for the analysis of most identified events. The SRP suggests that an evaluation model review be initiated whenever an approved model for a specified plant event does not exist or whenever the applicant or licensee proposes to use a new model.

This guide is intended to be used by construction permit applicants, who must describe the design bases as required by 10 CFR 50.34 and relate them to the principal design criteria described in Appendix A to 10 CFR Part 50. Chapter 15 of the SRP (Ref. 1) describes the transients and accidents that the NRC staff reviews as part of the application, and the criteria of Appendix A that specifically apply to each class of transient and accident. Chapter 15 also states that acceptable evaluation models should be used to analyze these transients and accidents.

This guide is also intended to be used by operating license applicants, who must likewise describe the design bases as required by 10 CFR 50.34 and relate them to the principal design criteria described in Appendix A to 10 CFR Part 50.

This guide would be applicable to new evaluation models or changes to existing evaluation models proposed by operating reactor licensees that the NRC staff undertakes to review.

Regulatory guides are issued to describe to the public methods acceptable to the NRC staff for implementing specific parts of the NRC's regulations, to explain techniques used by the staff in evaluating specific problems or postulated accidents, and to provide guidance to applicants. Regulatory guides are not substitutes for regulations, and compliance with regulatory guides is not required. Regulatory guides are issued in draft form for public comment to involve the public in developing the regulatory positions. Draft regulatory guides have not received complete staff review; they therefore do not represent official NRC staff positions.

The information collections contained in this draft regulatory guide are covered by the requirements of 10 CFR Part 50, which were approved by the Office of Management and Budget, approval number 3150-0011. If a means used to impose an information collection does not display a currently valid OMB control number, the NRC may not conduct or sponsor, and a person is not required to respond to, the information collection.

B. DISCUSSION The two fundamental features of transient and accident analysis methods are (1) the evaluation model concept and (2) the basic principles important for the development, assessment, and review of those methods.

EVALUATION MODEL CONCEPT

The basis for analysis methods used to analyze a particular event or class of events is contained in the evaluation model concept. This concept is described in 10 CFR 50.46 for LOCA analysis but can be generalized to all analyzed events described in Chapter 15. An evaluation model (EM) is the calculational framework for evaluating the behavior of the reactor system during a postulated transient or design basis accident. It may include one or more computer programs, special models, and all other information necessary for application of the calculational framework to a specific event, such as:

1. Procedures for treating the input and output information, particularly the code input arising from the plant geometry and the assumed plant state at transient initiation,
2. Specification of those portions of the analysis not included in the computer programs for which alternative approaches are used, and
3. All other information necessary to specify the calculational procedure.

It is the entirety of an evaluation model that ultimately determines whether the results comply with applicable regulations. Therefore, the entire evaluation model must be considered during the development, assessment, and review process.

In this regulatory guide, the term model is also used and should be distinguished from the evaluation model or EM. In contrast to EM as defined here, model without the evaluation modifier is used in the more traditional sense to describe the representation of a particular physical phenomenon within a computer code or procedure.

Most evaluation models used to analyze the events in Chapter 15 of the SRP (Ref. 1) rely on a systems code that describes the transport of fluid mass, momentum, and energy throughout the reactor coolant systems. The extent and complexity of the physical models needed in the systems code are strongly dependent on the reactor design and the transient being analyzed. For a particular transient, a subsidiary device like a sub-channel analysis code may actually be more complex than the systems code. Regardless of its complexity, the systems code plays a key role in organizing and controlling other aspects of the transient analysis. Each computer code, analytical tool, or calculational procedure that composes the evaluation model is referred to as a calculational device in this guide.

In some cases, as many as 7 or 8 calculational devices may be used to define an evaluation model for a particular event, although the trend today is to integrate many of these components into a smaller set of computer codes, usually within the framework of the systems code.

Sometimes, a general purpose systems code may be developed to address similar phenomenological aspects of several diverse classes of transients. This presents unique challenges in the definition, development, assessment, and review of those codes as they apply to a particular transient evaluation model. A separate section of the Regulatory Position is devoted to the issues involved with general purpose computer codes.

BASIC PRINCIPLES OF EVALUATION MODEL DEVELOPMENT AND ASSESSMENT

Recent reviews have shown the need to provide guidance to applicants and licensees regarding transient and accident analysis methods. Such guidance should streamline the review process by reducing the frequency and extent of iterations between the methods developers and NRC staff reviewers. To produce a viable product, certain principles should be addressed during the model development and assessment process.

There are six basic principles that have been identified as important to follow in the process of evaluation model development and assessment. They are:

1. Determine requirements for the evaluation model. The purpose of this principle is to provide a focus throughout the evaluation model development and assessment process (EMDAP). An important outcome should be the identification of mathematical modeling methods, components, phenomena, physical processes, and parameters needed to evaluate the event behavior relative to the figures of merit described in Chapter 15 of the SRP and derived from the General Design Criteria (GDC) in Appendix A to 10 CFR Part 50. The phenomena assessment process is central to ensuring that the evaluation model can analyze the particular event appropriately and that the validation process addresses key phenomena for that event.
2. Develop an assessment base consistent with the determined requirements. Since an evaluation model can only approximate physical behavior for postulated events, it is important to validate the calculational devices, individually and collectively, using an appropriate assessment base. The data base may consist of already existing experiments or it may require the performance of new experiments, depending on the results of the requirements determination.
3. Develop the evaluation model. The calculational devices needed to analyze the events in accordance with the requirements determined in the first principle should be selected or developed. To define an evaluation model for a particular plant and event, it is also necessary to select proper code options, boundary conditions, and the temporal and spatial relationship among the component devices.
4. Assess the adequacy of the evaluation model. Based on the application of the first principle, especially the phenomena importance determination, an assessment should be made regarding the inherent capability of the evaluation model to achieve the desired results relative to the figures of merit derived from the GDC.

Some of this assessment is best made during the early phase of code development to minimize the need for corrective actions later. A key feature of the adequacy assessment is the ability of the evaluation model or its component devices to predict appropriate experimental behavior. Once again, the focus should be on the ability to predict key phenomena as described in the first principle. To a large degree, the calculational devices are collections of models and correlations that are empirical in nature. Therefore, it is important to assure that they are used within the range of their assessment.

5. Follow an appropriate quality assurance protocol during the EMDAP. Quality assurance standards, as required in Appendix B to 10 CFR Part 50, are a key feature of the development and assessment process. When complex computer codes are involved, peer review by independent experts should be an integral part of the quality assurance process.


6. Provide comprehensive, accurate, up-to-date documentation. This is an obvious requirement for a credible NRC review. It is also clearly needed for the peer review described in the fifth principle. Since the development and assessment process may lead to changes in the importance determination, it is most important that documentation of this activity be developed early and kept current.

The principles of an EMDAP were developed and applied in a study on quantifying reactor safety margins (Ref. 3). In that report, the code scaling, applicability, and uncertainty (CSAU) evaluation methodology was applied to a large-break LOCA. The purpose of that study was to demonstrate a method that could be used to quantify uncertainties as required by the best-estimate option described in the 1988 revision to the ECCS Rule (10 CFR 50.46). While the goal was related to code uncertainty evaluation, the principles derived to achieve that goal involved the entire process of evaluation model development and assessment. Thus, many of the same principles would apply even if a formal uncertainty evaluation were not the specific goal. Since the publication of Reference 3, there have been several applications of the CSAU process, with modifications to fit each particular circumstance (see References 4 through 12).

In References 4 and 5, a process was developed using an integrated structure and scaling methodology for severe accident technical issue resolution (ISTIR). ISTIR defined separate components for experimentation and code development. Although a code development component is included in ISTIR, the ISTIR demonstration did not include code development. An important feature of Reference 4 is the use of hierarchical system decomposition methods to analyze complex systems. In the ISTIR demonstration, the methods were used to investigate experimental scaling, but they are also well suited to provide structure in the identification of evaluation model fundamentals.

Reference 6 was an adequacy evaluation of RELAP5 for simulating AP600 small-break LOCAs (SBLOCAs). Most of that effort focused on demonstrating the applicability and assessment of a developed code for a new application.

The subjects addressed in References 3-6 are complex, and the structures used to address these subjects are very detailed. The EMDAP described in this guide is also detailed, so that it can be applied to the complex events described in SRP Chapter 15. This is particularly true if the application is new or the methods proposed are new. The risk importance of the event or the complexity of the problem should determine the level of detail needed to develop and assess an evaluation model. For simpler events, many of the steps in the process may only need to be addressed briefly. Also, if a new evaluation model only involves an incremental change to an existing evaluation model, the process may be shortened as long as the effect of the change is thoroughly addressed. An overall diagram of the EMDAP and the relationship of its elements is shown in Figure 1.


Guidance on methods for calculating transient and accident behavior is provided in the following Regulatory Position. Appendix A provides additional information important to ECCS analysis. The Regulatory Position addresses four related aspects of evaluation model development and assessment. They are:


1. Description of the four elements and included steps in the EMDAP based on the first four principles described above and shown in Figure 1.
2. The relationship of accepted quality assurance practices to this process and the incorporation of peer review as described in the fifth principle.
3. A description of what should be included in evaluation model documentation to be consistent with the sixth principle.
4. The unique aspects of general purpose computer programs.

C. REGULATORY POSITION

1. EVALUATION MODEL DEVELOPMENT AND ASSESSMENT PROCESS (EMDAP)

The basic elements developed to describe an EMDAP directly address the first four principles described in the Discussion section and are shown in Figure 1. This Regulatory Position addresses the four elements and the adequacy decision shown in Figure 1.

Adherence to an EMDAP for new applications or completely new evaluation models could involve significant iterations within the process. However, the same process applies even if the new evaluation model is the result of relatively simple modifications to an existing evaluation model. Feedback loops are not shown; rather, they are addressed in the adequacy decision described in Regulatory Position 1.5.

1.1 Element 1 - Establish Requirements for Evaluation Model Capability

It is very important to determine, at the beginning, the exact application envelope for the evaluation model and to identify and agree upon the importance of constituent phenomena, processes, and key parameters within that envelope. Figure 2 illustrates the steps within this element.

1.1.1 Step 1. Specify Analysis Purpose, Transient Class, and Power Plant Class

The first step in establishing evaluation model requirements and capabilities is specification of the analysis purpose and identification of the class of plants and class of transients to be analyzed. Specification of the purpose is important because any specific transient may be analyzed for different reasons. For instance, an SBLOCA may be analyzed to assess the potential for pressurized thermal shock (PTS) or to assess compliance with 10 CFR 50.46. The statement of purpose influences the entire process of development, assessment, and analysis. Evaluation model applicability is scenario-dependent because the dominant processes, safety parameters, and acceptance criteria change from one scenario to another. The transient scenario, therefore, dictates the processes that must be addressed. A complete scenario definition is plant-specific because the dominant phenomena and their interactions differ in varying degrees with the reactor design.


For events described in Chapter 15 of the SRP, these steps should be straightforward. The purpose is compliance with the GDC; the events and event classes are described in Chapter 15.

The licensee or applicant and the evaluation model developer should then specify the model's applicability to plants and plant types. As examples, fuel design, core loading, number and design of steam generators, number and design of coolant loops, safety injection system design, and control systems can differ significantly from plant to plant and will strongly influence scenario behavior.

1.1.2 Step 2. Specify Figures of Merit

Figures of merit are those quantitative standards of acceptance that are used to define acceptable answers for a safety analysis. The GDC in Appendix A to 10 CFR Part 50 describe general requirements for maintaining the reactor in a safe condition during normal operation and during transients and accidents. Chapter 15 of the SRP further defines these criteria in terms of quantitative fuel and reactor system design limits (departure from nucleate boiling ratio (DNBR) limits, fuel temperatures, etc.) for the events of interest. For ECCS design, five specific criteria described in 10 CFR 50.46 must be met for LOCA analysis. Thus, for Chapter 15 events, figures of merit are generally synonymous with criteria directly associated with the regulations, and their selection is usually a simple matter. During evaluation model development and assessment, a temporary surrogate figure of merit may be of value in evaluating the importance of phenomena and processes. Section 2.5 of Reference 7 describes a hierarchy of criteria that was used in SBLOCA assessment, in which vessel inventory was judged to be more valuable in defining and assessing code capability. Justification for using a surrogate figure of merit should be provided.
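For illustration only, the comparison of calculated results against figures of merit can be sketched in a few lines of Python. The two limits below are the peak cladding temperature and local oxidation criteria of 10 CFR 50.46(b); the calculated values are hypothetical placeholders, and the sketch is not part of the regulatory position.

    # Minimal sketch: compare calculated results against figures of merit.
    # Limits are the 10 CFR 50.46(b)(1) and (b)(2) criteria; the calculated
    # values below are hypothetical placeholders.
    FIGURES_OF_MERIT = {
        "peak_cladding_temperature_F": 2200.0,   # 10 CFR 50.46(b)(1)
        "max_local_oxidation_fraction": 0.17,    # 10 CFR 50.46(b)(2)
    }

    def check_figures_of_merit(results):
        """Return (name, value, limit, acceptable) for each figure of merit."""
        return [(name, results[name], limit, results[name] <= limit)
                for name, limit in FIGURES_OF_MERIT.items()]

    calculated = {"peak_cladding_temperature_F": 1850.0,
                  "max_local_oxidation_fraction": 0.05}
    for name, value, limit, ok in check_figures_of_merit(calculated):
        print(f"{name}: {value} (limit {limit}) -> {'PASS' if ok else 'FAIL'}")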

1.1.3 Step 3. Identify Systems, Components, Phases, Geometries, Fields, and Processes That Must Be Modeled

The purpose of this step is to establish the evaluation model characteristics. In References 4 and 5, hierarchical system decomposition methods are used to investigate scaling in complex systems. These methods can also be valuable in the identification of evaluation model characteristics. The ingredients at each hierarchical level described in References 4 and 5 are, in order from top to bottom:

1. System -- The entire system that must be analyzed for the proposed application.
2. Sub-systems -- Major components that must be considered in the analysis. For some applications, these may include the primary system, secondary system, and containment. For other applications only the primary system would need to be considered.
3. Modules -- Physical components within the sub-system, e.g., reactor vessel, steam generator, pressurizer, piping run, etc.
4. Constituents -- Chemical form of substance, e.g., water, nitrogen, air, boron, etc.
5. Phases -- Solid, liquid, or vapor.
6. Geometrical Configurations -- The geometrical shape that is defined for a transfer process, e.g., pool, drop, bubble, film, etc.
7. Fields -- The properties that are being transported (mass, momentum, energy).
8. Processes -- Mechanisms that move properties through the system.

Ingredients at each hierarchical level can be decomposed into the ingredients at the next level down. In References 4 and 5, this process is described in the following way:

1. Each system can be divided into interacting subsystems.
2. Each subsystem can be divided into interacting modules.
3. Each module can be divided into interacting constituents.
4. Each constituent can be divided into interacting phases.
5. Each phase can be characterized by one or more geometrical configurations.
6. Each geometrical configuration can be described by three field equations, that is, by conservation equations for mass, energy, and momentum.
7. Each field can be characterized by several processes.

By carefully defining the number and type of each ingredient at each level, the evaluation model developer should be able to establish the basic characteristics of the evaluation model. An important principle to note is that if a deficiency exists at a higher level, it is usually not possible to resolve it by fixing ingredients at lower levels. For relatively simple transients, the decomposition process should also be simple.
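As an illustration of this decomposition, the top levels can be represented as nested data. All entries in the sketch below are hypothetical fragments, not a complete decomposition of any actual plant.

    # Minimal sketch of a hierarchical decomposition, following the
    # system -> sub-system -> module -> constituent -> phase -> geometrical
    # configuration ordering of References 4 and 5. Entries are hypothetical
    # fragments, not a complete decomposition of any plant.
    decomposition = {
        "primary_system": {                          # sub-system
            "reactor_vessel": {                      # module
                "water": {                           # constituent
                    "liquid": ["pool", "film"],      # phase -> configurations
                    "vapor": ["bubble"],
                },
            },
            "pressurizer": {
                "water": {"liquid": ["pool"], "vapor": ["bubble"]},
            },
        },
    }

    def walk(node, depth=0):
        """Print one indented line per ingredient, level by level."""
        if isinstance(node, dict):
            for name, child in node.items():
                print("  " * depth + name)
                walk(child, depth + 1)
        else:                                        # list of configurations
            for config in node:
                print("  " * depth + config)

    walk(decomposition)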


1.1.4 Step 4. Identify and Rank Key Phenomena and Processes

Process identification is the last step in the decomposition described above and provides the logical beginning to this step. Plant behavior is not equally influenced by all processes and phenomena that occur during a transient. An optimum analysis reduces candidate phenomena to a manageable set by identifying and ranking the phenomena with respect to their influence on the figures of merit. Each phase of the transient scenario and each system component is investigated separately. The processes and phenomena associated with each component are examined. Cause and effect are differentiated. After the processes and phenomena have been identified, their importance should be determined with respect to their effect on the relevant figures of merit.

The importance determination should also be applied to high-level system processes, which may be missed if the focus is solely on components. High-level system processes, such as depressurization and inventory reduction, are often very closely related to figures of merit. Focus on such processes can also help to identify the importance of individual component behavior.

As noted in Step 2, it may be possible to show that a figure of merit other than the applicable Chapter 15 acceptance criterion is more appropriate as a standard for identifying and ranking phenomena. This is acceptable as long as it can be shown that, for all the scenarios being considered for the specific ranking and identification activity, the alternative figure of merit is consistent with plant safety.

The principal product of the process outlined above is a phenomena identification and ranking table (PIRT) (see References 3, 6, 9, and 12). Evaluation model development and assessment should be based on a credible and scrutable PIRT. The PIRT should be used to determine the requirements for physical model development, scalability, validation, and sensitivity studies. Ultimately, the PIRT is used to guide any uncertainty analysis and the assessment of overall evaluation model adequacy. The PIRT is not an end in itself, but rather a tool to provide guidance for the subsequent steps.

The processes and phenomena that evaluation models should simulate are found by examining experimental data, experience, and code simulations related to the specific scenario. Independent techniques to accomplish the ranking include expert opinion, selected calculations, and decision-making methods (such as the Analytical Hierarchical Process (AHP)). Examples of the first two are found in Reference 12, and an example of the last is found in Reference 13. Comparison of the results of these techniques provides assurance of the accuracy and sufficiency of the process.

The initial phases of the PIRT process described in this step can rely heavily on expert opinion, which can be subjective. Therefore, iteration of the PIRT based on experimentation and analysis is important. Although the experience is limited, development of other less subjective importance determination methods is encouraged.

Sensitivity studies can help determine the relative influence of phenomena identified early in the PIRT development and can support final validation of the PIRT as the EMDAP is iterated. Examples of sensitivity studies used for this purpose are provided in References 3, 6, 9, 11, and 12.
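A one-at-a-time perturbation is the simplest form of such a study. In the sketch below, the response function is a hypothetical surrogate for an evaluation model run; in practice each evaluation would be a full plant calculation.

    # Minimal one-at-a-time sensitivity sketch. The response function is a
    # hypothetical surrogate for an evaluation model run; real sensitivities
    # come from full plant calculations.
    def figure_of_merit(params):
        return 1500.0 + 40.0 * params["break_size"] - 2.0 * params["ecc_flow"]

    nominal = {"break_size": 10.0, "ecc_flow": 100.0}
    base = figure_of_merit(nominal)

    for name in nominal:
        perturbed = dict(nominal)
        perturbed[name] *= 1.10                     # +10% perturbation
        shift = figure_of_merit(perturbed) - base
        print(f"{name}: +10% input change shifts the figure of merit by {shift:+.1f}")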

The identification of processes and phenomena proceeds as follows:


1. The scenario is divided into operationally characteristic time periods in which the dominant processes and phenomena remain essentially constant.
2. For each time period, processes and phenomena are identified for each component following a closed circuit throughout the system. This is done to differentiate cause from effect.
3. Starting with the first time period, the activities continue, component by component, until all potentially significant processes have been identified.
4. The procedure is repeated sequentially, from time period to time period, until the end of the scenario.

When the identification has been completed, the ranking process begins. Numerically ranking the processes and phenomena provides a systematic and consistent basis for all subsequent EMDAP activities.

Sufficient documentation should accompany the PIRT to adequately guide the entire EMDAP. Development and assessment activities, including the identification and ranking, may be revisited during the process. In the end, however, the evaluation model, the PIRT, and all documentation should be frozen to provide the basis for a proper review. With a well-defined ranking of important processes, evaluation model capabilities, and calculated results, further modeling improvements can be prioritized more easily. An important principle is the recognition that the more highly ranked phenomena and processes require modeling with greater fidelity. References 6 and 7 describe the role of the PIRT process in experiments, code development, and code applications associated with reactor safety analysis.
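For illustration, the bookkeeping behind a PIRT can be sketched as a ranked list from which the highly ranked phenomena are drawn. The phenomenon names, components, ranks, and threshold below are hypothetical; in practice they come from the identification and ranking process described above.

    # Minimal PIRT bookkeeping sketch. All entries are hypothetical.
    pirt = [
        # (phenomenon, component, rank: 1 = low ... 9 = high)
        ("critical flow at break", "break", 9),
        ("core heat transfer", "core", 8),
        ("loop seal clearing", "loop piping", 6),
        ("pressurizer level swell", "pressurizer", 3),
    ]

    HIGH = 7   # threshold for "highly ranked" (a judgment call in practice)

    print("Phenomena requiring higher-fidelity modeling and focused assessment:")
    for phenomenon, component, rank in pirt:
        if rank >= HIGH:
            print(f"  rank {rank}: {phenomenon} ({component})")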

1.2 Element 2 - Develop Assessment Base

The second component of ISTIR (Refs. 4 and 5) is a scaling methodology that includes acquiring appropriate experimental data relevant to the scenario being considered and assuring that the experimental scaling is suitable. In References 4 and 5, the relationship of the severe accident scaling methodology (SASM) component to code development is shown but not emphasized in the SASM demonstration. For the EMDAP, the purpose is to provide the basis for development and assessment as shown previously in Figure 1. Figure 3 shows the steps in this element and their relationship. It should be noted that for simple transients, or transients where the scaling issues and assessment are well characterized, the implementation of this element should also be simple. The numbering of steps in this and subsequent elements continues from each previous element.


1.2.1 Step 5. Specify Objectives for Assessment Base

For analysis of Chapter 15 events, the principal need for a data base is to assess the evaluation model and, if needed, to develop correlations. The selection of the data base is a direct result of the requirements established in Element 1. The data base should include:

1. Separate effects experiments needed to develop and assess empirical correlations and other closure models,
2. Integral systems tests to assess system interactions and global code capability,
3. Benchmarks with other codes (optional),
4. Plant transient data (if available), and
5. Simple test problems to illustrate fundamental calculational device capability.


It should be noted that items 3 and 5 in the above list are not meant to be substitutions for obtaining appropriate experimental and/or plant transient data for evaluation model assessment.

1.2.2 Step 6. Perform Scaling Analysis and Identify Similarity Criteria

All experiments are compromises with full-scale plant systems. Even nominally full-scale experiments do not include complete similitude. Scaling analyses should be conducted to ensure that the data, and the models based on the data, will be applicable to the full-scale analysis of the plant transient. Scaling compromises that are identified here should ultimately be addressed in the bias and uncertainty evaluation in Element 4.

Scaling analyses are employed to demonstrate the relevancy and sufficiency of the collective experimental data base for representing the behavior expected during the postulated transient and to investigate the scalability of the evaluation model and its component codes for representing the important phenomena. The scope of these analyses is much broader than that of the scalability evaluations described in Element 4, which relate to individual models and correlations or to scaling-related findings from the code assessments. Here, the need is to demonstrate that the experimental data base is sufficiently diverse that the expected plant-specific response is bounded and that the evaluation model calculations are comparable to the corresponding tests in non-dimensional space. This demonstration allows extending the conclusions related to code capabilities, drawn from assessments comparing calculated and measured test data (Element 4), to the prediction of plant-specific transient behavior.

The scaling analyses employ both top-down and bottom-up approaches. The top-down scaling approach evaluates the global system behavior and systems interactions from integral test facilities that can be shown to represent the plant-specific design under consideration. A top-down scaling methodology is developed and applied in which:

1. The non-dimensional groups governing similitude between facilities are derived,
2. These groups are shown to scale the results among the experimental facilities, and
3. It is determined whether the ranges of the group values provided by the experiment set encompass the corresponding plant- and transient-specific values.

The bottom-up scaling analyses address issues raised in the plant- and transient-specific PIRT related to localized behavior. These analyses explain differences among tests in different experimental facilities; the explanations are then used to infer the expected plant behavior and to determine whether the experiments provide adequate plant-specific representation. Application of this scaling process is described in Section 5.3 of Reference 6.

In most applications, especially those with a large number of processes and parameters, it is difficult, if not impossible, to design test facilities that preserve total similitude between the experiment and the nuclear power plant. Therefore, based on the important phenomena and processes identified in Step 4 and the scaling analysis described above, the optimum similarity criteria should be identified, and the associated scaling rationales developed for selecting existing data or designing and operating experimental facilities.
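The third item of the top-down methodology above amounts to a range-coverage check, sketched below in Python with hypothetical facility names and group values.

    # Minimal sketch of the range-coverage check in the top-down scaling
    # methodology: do the non-dimensional group ranges spanned by the
    # experiment set encompass the plant- and transient-specific value?
    # Facility names and values are hypothetical.
    experiments = {
        "facility_A": (0.8, 1.6),    # (min, max) of the group over the tests
        "facility_B": (1.2, 2.4),
    }
    plant_value = 1.9                # plant- and transient-specific value

    covered = any(lo <= plant_value <= hi for lo, hi in experiments.values())
    span = (min(lo for lo, _ in experiments.values()),
            max(hi for _, hi in experiments.values()))
    print(f"experiment set spans {span}; plant value {plant_value}: "
          f"{'covered' if covered else 'not covered -- revisit the data base'}")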

1.2.3 Step 7. Identify Existing Data or Perform Integral Effects Tests (IETs) and Separate Effects Tests (SETs) To Complete Data Base

Based on the results of the previous steps in this element, it should be possible to complete the data base by selection and experimentation. To complete the assessment matrix, the PIRT developed in Step 4 is used to select experiments and data that best address the important phenomena and components. In selecting experiments, a range of tests should be employed to demonstrate that the calculational device or phenomenological model has not been tuned to a single test. A correlation derived from a particular data set may be identified for inclusion in the evaluation model. In such cases, an effort should be made to obtain additional data sets that may be used to assess the correlation. For integral behavior assessment, counterpart tests (similar scenarios and transient conditions) in different experimental facilities at different scales should be selected. Assessments using such tests lead to information concerning scale effects on the models used for a particular calculational device.

1.2.4 Step 8. Evaluate Effects of IET Distortion and SET Scaleup Capability

8A - IET Distortions. Distortions in the integral experimental data base may arise from scaling compromises (missing or atypical phenomena) in sub-scale facilities or from atypical initial and boundary conditions in all facilities. The effects of the distortions should be evaluated in the context of the experimental objectives determined in Step 5. If the effects are important, a return to Step 7 is probably needed.

8B - SET Scaleup. As noted in Step 7, correlations should be based on SETs at various scales. In the case of poor scaleup capability, it may be necessary to return to Step 6. Appendix C of Reference 3 describes rationale and techniques associated with evaluation of scaleup capabilities of computer codes and their supporting experimental data bases.

1.2.5 Step 9. Determine Experimental Uncertainties as Appropriate

It is important to know the uncertainties in the data base. These uncertainties arise from such items as measurement errors and experimental distortions. If the quantified experimental uncertainties are too large compared to the requirements for evaluation model assessment, the particular data set or correlation should be rejected.
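For independent contributions, a root-sum-square combination is one common way to quantify the total. The sketch below uses hypothetical contributions and a hypothetical acceptance threshold.

    # Minimal sketch: combine independent 1-sigma measurement uncertainty
    # contributions by root-sum-square and reject the data set if the total
    # exceeds what the assessment requires. All numbers are hypothetical.
    import math

    contributions = [2.0, 1.5, 3.0]   # e.g., instrument, calibration, distortion (%)
    required = 5.0                    # maximum tolerable uncertainty (%)

    total = math.sqrt(sum(u ** 2 for u in contributions))
    print(f"combined uncertainty = {total:.2f}% -> "
          f"{'acceptable' if total <= required else 'reject data set'}")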

1.3 Element 3 - Develop Evaluation Model

As discussed earlier, an evaluation model is a collection of calculational devices (codes and procedures) developed and organized to meet the requirements established in Element 1. The steps for developing the desired evaluation model are shown in Figure 4.

1.3.1 Step 10. Establish an Evaluation Model Development Plan

Based on the requirements established in Element 1, a development plan should be devised that includes development standards and procedures that will apply during the development activity. Specific areas of focus should include:

1. Calculational device design specifications,
2. Documentation requirements (see Regulatory Position 3 of this guide),
3. Programming standards and procedures,
4. Transportability requirements,
5. Quality assurance procedures (see Regulatory Position 2 of this guide), and
6. Configuration control procedures.

1.3.2 Step 11. Establish Evaluation Model Structure

The evaluation model structure includes the structure of the individual component calculational devices and the structure that combines the devices into the total evaluation model. This structure is based on the principles of Element 1, especially Step 3.

The structure for an individual device or code consists of:

1. Systems and components -- A structure should be present that can analyze the behavior of all the systems and components that play a role in the targeted application.
2. Constituents and phases -- The code structure should be able to analyze the behavior of all constituents and phases relevant to the targeted application.
3. Field equations -- Field equations are equations that are solved to determine the transport of the quantity of interest (usually mass, energy, and momentum).
4. Closure relations -- Closure relations are correlations and equations that provide code capability to model and scale particular processes; they are needed to model the terms in the field equations.
5. Numerics -- Numerics provide code capability to perform efficient and reliable calculations.
6. Additional features -- These address code capability to model boundary conditions and control systems.


Of course, the code structure should be based on the requirements established in Element 1 and Step 10. Because of the importance of selecting proper closure relationships for the governing equations, these models are treated separately in Step 12.

The six ingredients described above should be successfully integrated and optimized if a completed code is to meet its objectives determined in Step 10.

There are special concerns related to the integration of the component calculational devices into a complete evaluation model. This is frequently referred to as the evaluation model methodology. The way in which the devices are connected spatially and temporally should be described. How close the coupling needs to be is determined in part by the results of the analysis done in Step 3 and in part by the magnitude and direction of transfer processes between devices. The hierarchical decomposition described in References 4 and 5 would apply to how transfer processes are analyzed between devices. Since most devices include user options, all selections made should be justified as appropriate for the evaluation model.
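The temporal side of this coupling can be illustrated with a minimal sketch in which a systems code hands boundary conditions to a subsidiary device each time step. Both "devices" below are trivial placeholders, not actual plant models.

    # Minimal sketch of explicit temporal coupling between two calculational
    # devices: a systems code advances first and hands boundary conditions
    # to a subsidiary (e.g., sub-channel) device. Both devices are trivial
    # placeholder models.
    def systems_code_step(state, dt):
        state["pressure"] -= 0.5 * dt                  # placeholder physics
        return {"pressure": state["pressure"]}         # boundary condition handoff

    def subchannel_step(boundary, dt):
        return 600.0 + 0.1 * boundary["pressure"]      # placeholder local result

    state, t, dt = {"pressure": 2250.0}, 0.0, 1.0
    while t < 5.0:
        boundary = systems_code_step(state, dt)        # systems code first
        local_temperature = subchannel_step(boundary, dt)
        t += dt
    print(f"t = {t:.0f} s, pressure = {state['pressure']:.1f}, "
          f"local temperature = {local_temperature:.1f}")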

1.3.3 Step 12. Develop or Incorporate Closure Models

Models or closure relations that describe a specific process are developed using SET data. This includes models that can be used in a stand-alone mode or correlations that can be incorporated in a calculational device (usually a computer code). On rare occasions, sufficient experimental detail may be available to develop correlations from IET experiments. The scalability and range of applicability of a correlation may not be known a priori the first time it is developed or selected for use in this step. An iteration of scaleup evaluation (Step 8) and adequacy assessment (Element 4) may be needed to ensure correlation applicability. It should be noted that a path is shown from Element 2 to this step, since correlations may be selected from the existing data base literature.

Models developed here are key to successful evaluation model development. The basis, range of applicability, and accuracy of incorporated phenomenological models should be known and traceable. Justification should be provided for extension of any models beyond their original basis.
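A minimal sketch of this development path: fit a correlation to SET data by least squares and record its assessed range alongside it, so that later applicability checks (Step 8 and Element 4) have a traceable basis. The data points and correlation form below are hypothetical.

    # Minimal sketch: derive a linear closure correlation from SET data by
    # least squares and record the assessed range with it. Data points are
    # hypothetical.
    data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2), (4.0, 7.8)]   # (x, measured y)

    n = len(data)
    sx = sum(x for x, _ in data)
    sy = sum(y for _, y in data)
    sxx = sum(x * x for x, _ in data)
    sxy = sum(x * y for x, y in data)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)              # slope
    b = (sy - a * sx) / n                                      # intercept

    correlation = {
        "form": "y = a*x + b",
        "a": a,
        "b": b,
        # the assessed range travels with the correlation; use outside it
        # requires justification (see Step 8 and Element 4)
        "assessed_range_x": (min(x for x, _ in data), max(x for x, _ in data)),
    }
    print(correlation)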

1.4 Element 4 - Assess Evaluation Model Adequacy

Evaluation model adequacy can be assessed after the previous elements have been established and the evaluation model capability has been documented. Figure 5 is a diagram of Element 4.

The evaluation model assessment is divided into two parts, as shown in Figure 5. The first part (Steps 13 through 15) pertains to the bottom-up evaluation of the closure relations for each code. The second part (Steps 16 through 19) pertains to the top-down evaluations of code-governing equations, numerics, the integrated performance of each code, and the integrated performance of the total evaluation model.

In the first part, important closure models and correlations are examined by considering their pedigree, applicability, fidelity to appropriate fundamental or separate effects test data, and scalability. The term bottom-up is used because the review focuses on the fundamental building blocks of the code.


It is important to note that any changes to an evaluation model should include at least a partial assessment to assure that these changes do not produce unintended results in the code predictive capability.

1.4.1 Step 13. Determine Model Pedigree and Applicability To Simulate Physical Processes

The pedigree evaluation is related to the physical basis of a closure model, assumptions and limitations attributed to the model, and details of the adequacy characterization at the time the model was developed. The applicability evaluation is related to whether the model, as implemented in the code, is consistent with its pedigree or whether use over a broader range of conditions is justified.

1.4.2 Step 14. Prepare Input and Perform Calculations To Assess Model Fidelity or Accuracy

The fidelity evaluation is related to the existence and completeness of validation efforts (through comparison to data) or benchmarking efforts (through comparison to other standards, for example, a closed form solution or results obtained with another code) or some combination of these comparisons.

SET input for component devices used in model assessment (usually computer codes) should be prepared to represent the phenomena and test facility being modeled and the characteristics of the nuclear power plant design. In particular, nodalization and option selection should be consistent between the experimental facility and similar components in the nuclear power plant. When the calculations of the SETs are completed, the differences between calculated results and experimental data for important phenomena should be quantified for bias and deviation.
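In its simplest form, the quantification called for here reduces to computing the mean and scatter of the calculation-minus-measurement errors. The values in the sketch below are hypothetical.

    # Minimal sketch: quantify bias and deviation between calculated and
    # measured values of an important phenomenon. Values are hypothetical.
    import statistics

    measured   = [512.0, 530.0, 545.0, 561.0, 580.0]   # e.g., temperatures (K)
    calculated = [518.0, 527.0, 552.0, 566.0, 575.0]

    errors = [c - m for c, m in zip(calculated, measured)]
    bias = statistics.mean(errors)          # systematic offset
    deviation = statistics.stdev(errors)    # scatter about the bias
    print(f"bias = {bias:+.1f} K, standard deviation = {deviation:.1f} K")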

1.4.3 Step 15. Assess Scalability of Models

The scalability evaluation here is limited to whether the specific model or correlation is appropriate for application to the configuration and conditions of the plant and transient under evaluation. References 5 and 14-17 document recent approaches to scaling, ranging from theoretical methods to specific applications that are of particular interest here.

In the second part of the assessment, the evaluation model is evaluated by examining the field equations, numerics, applicability, fidelity to component or integral effects data, and scalability. This part of the assessment effort is called the top-down review because it focuses on the capabilities and performance of the evaluation model.

1.4.4 Step 16. Determine Capability of Field Equations To Represent Processes and Phenomena and the Ability of Numeric Solutions To Approximate Equation Set

The field equation evaluation considers the acceptability of the equations. An assessment of the governing equations in each of the component codes should consider their pedigree and the key concepts and processes culminating in the equation set solved by the code. The objective of this assessment is to characterize the relevance of the governing equations for the chosen application.

The numeric solution evaluation considers convergence, property conservation, and stability of code calculations to a solution of the original equations when applied to the target application. The objective of this review is to summarize information regarding the domain of applicability of the numerical techniques and user options that may impact the accuracy, stability, and convergence features of each component code.

A complete assessment within this step can only be performed after a sufficient foundation of assessment analyses is complete. Section 3 and Appendix A of Reference 6 provide an example for application of this step.
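One simple probe of numerical convergence is systematic refinement: repeat a calculation with successively finer discretization and confirm that the changes shrink. The "solver" below, an explicit Euler integration of a simple exponential decay, is a hypothetical stand-in for a code calculation.

    # Minimal convergence sketch: refine the time step and confirm that the
    # result settles. The solver (explicit Euler on dT/dt = -k*T) is a
    # hypothetical stand-in for a code calculation.
    def solve(n_steps, t_end=10.0, k=0.3, t0=1000.0):
        dt = t_end / n_steps
        temp = t0
        for _ in range(n_steps):
            temp += dt * (-k * temp)
        return temp

    previous = None
    for n in (10, 20, 40, 80, 160):
        result = solve(n)
        note = "" if previous is None else f"  (change {result - previous:+.3f})"
        print(f"n = {n:4d}: T_end = {result:.3f}{note}")
        previous = result
    # Successive changes should shrink as the step is refined; if they do
    # not, the numerical scheme or the user options warrant scrutiny.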

1.4.5 Step 17. Determine Applicability of Evaluation Model to Simulate Systems and Components

This applicability evaluation considers whether the integrated code is capable of modeling the plant systems and components. Before integrated analyses are performed, it should be determined that the various evaluation model options, special models, and input have the inherent capability to model the major systems and subsystems required for the particular application.


1.4.6 Step 18. Prepare Input and Perform Calculations To Assess System Interactions and Global Capability

The fidelity evaluation considers the comparison of evaluation-model-calculated results with measured data from component and integral tests and, where possible, with plant transient data. For these calculations, the entire evaluation model or its major components are used to compare against the integral data base selected in Element 2.

As was done in Step 14 for the SET assessments, the evaluation model input for IETs should best represent the facilities and should represent the characteristics of the nuclear power plant design. As before, nodalization and option selection should be consistent between experiment and nuclear power plant. When the IET simulations are complete, the differences between calculated results and experimental data for important processes and phenomena should be quantified for bias and deviation. The ability of the evaluation model to model system interactions should also be evaluated in this step.

Section 5 of Reference 6 provides an example application of this step.

In this step, plant input decks should also be prepared for the target applications. Sufficient analyses should be performed to determine the parameter ranges expected in the nuclear power plant. These input decks also provide the groundwork for the analyses performed in Step 20.

1.4.7 Step 19. Assess Scalability of Integral Calculations and Data for Distortions

The scalability evaluation here is limited to whether the assessment calculations and experiments exhibit otherwise unexplainable differences among facilities, or between the calculated and measured data for the same facility, that indicate experimental or code scaling distortions.

1.4.8 Step 20. Determine Evaluation Model Biases and Uncertainties

The analysis purpose established in Step 1 and the complexity of the transient will determine the substance of this step. For best-estimate LOCA analysis, descriptions of and guidance on uncertainty determination are given in References 3 and 18 and in Appendix A of this guide. In these examples, the uncertainty analyses have the ultimate objective of providing a singular statement of uncertainty with respect to the 10 CFR 50.46 acceptance criteria when using the best-estimate option in that rule. This singular uncertainty statement is accomplished when the individual uncertainty contributions are determined (see Regulatory Guide 1.157, Ref. 18).

For other Chapter 15 events, a complete uncertainty analysis is not required. However, in most cases the SRP guidance is to use suitably conservative input parameters. This suitability determination may involve a limited assessment of biases and uncertainties and is closely related to the analyses performed in Step 16. Based on the results of Step 4, individual device models can be chosen from those obtained in Step 9.

The individual uncertainty (in terms of range and distribution) of each key contributor is determined from the experimental data (Step 11) and input to the nuclear power plant model, and its effect on the appropriate figures of merit is evaluated by performing separate nuclear power plant calculations. The figures of merit and devices chosen should be consistent.

In most cases the analysis would involve the entire evaluation model. The last part of this step is to determine whether the degree of overall conservatism or analytical uncertainty is appropriate for the entire evaluation model. This is done in the context of the analysis purpose (Step 1) and the regulatory requirements.
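For illustration, one commonly used nonparametric approach propagates the sampled contributors through repeated plant calculations and supports a 95/95 statement via order statistics; the 59-run first-order sample size follows from Wilks' formula. The surrogate model and distributions below are hypothetical, and this sketch shows one approach among several, not a requirement of this guide.

    # Minimal sketch: propagate key-contributor uncertainties by random
    # sampling through repeated plant calculations. The surrogate model and
    # distributions are hypothetical; the 59-run first-order nonparametric
    # (Wilks) sample size for a 95/95 statement is one common choice.
    import random

    random.seed(1)

    def plant_calculation(break_size, ecc_temp):
        # hypothetical surrogate for a full evaluation model run (returns a
        # figure of merit, e.g., peak cladding temperature in F)
        return 1400.0 + 35.0 * break_size + 1.2 * ecc_temp

    results = []
    for _ in range(59):
        break_size = random.uniform(8.0, 12.0)   # sampled from its range
        ecc_temp = random.gauss(100.0, 10.0)     # sampled from its distribution
        results.append(plant_calculation(break_size, ecc_temp))

    print(f"bounding result of 59 runs: {max(results):.0f} F "
          "(basis for a 95/95 statement)")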


1.5 Adequacy Decision

The decision on the adequacy of the evaluation model is the culmination of the EMDAP described in Regulatory Positions 1.1 through 1.4. Throughout the EMDAP, questions concerning the adequacy of the evaluation model should be asked. At the end of the process, the adequacy should be questioned again to assure that all the answers are satisfactory and that intervening activities have not invalidated previously acceptable responses. If unacceptable responses indicate significant evaluation model inadequacies, the code deficiency is corrected and the appropriate steps in the EMDAP are repeated to evaluate the correction. The process continues until the ultimate question regarding adequacy can be answered positively. Of course, the documentation described in Regulatory Position 3 should be updated as code improvements and assessment are accomplished during the process. Analysis, assessment, and any sensitivity studies can also lead to a re-assessment of the phenomena identification and ranking. Therefore, that documentation should also be revised as appropriate.

It is helpful to develop a list of questions to be asked during the process and again at the end. To answer these questions, standards should be established by which the capabilities of the evaluation model and its composite codes and models can be judged. Section 2.2.2 of Reference 6 provides an example of the development of such standards.

2. QUALITY ASSURANCE

Much of what is described throughout this regulatory guide relates to good quality assurance practices. For that reason, it is important to establish an appropriate quality assurance protocol early in the development and assessment process. The development, assessment, and application of an evaluation model are all activities related to the requirements of Appendix B to 10 CFR Part 50. Section III of Appendix B is a key requirement for this activity; it requires that design control measures be applied to reactor physics, thermal, hydraulic, and accident analyses. Section III states that:

The design control measures shall provide for verifying or checking the adequacy of design, such as by the performance of design reviews, by the use of alternate or simplified calculational methods, or by the performance of a suitable testing program.

Section III also states that design changes should be subject to appropriate design control measures.

It is important to note that other parts of Appendix B are also relevant, such as Section V (which requires documented instructions, e.g., user guidance); Section XVI (corrective actions, e.g., error control, identification, and correction); and Sections VI and XVII, which address document control and records retention.

To capture the spirit and intent of Appendix B, independent peer review should be performed at key steps in the process, such as at the end of a major pass through an element.

In the early stages of evaluation model development, it is recommended that a review team be convened to review evaluation model requirements as developed in Element 1. Peer review should also be employed at the later stages, during major inquiries associated with the adequacy decision.

In addition to programmers, developers, and end users, it is recommended that the peer review team have independent members with recognized expertise in relevant engineering and science disciplines, code numerics, and computer programming. Expert peer review team members, who were not directly involved in the evaluation model development and assessment, can enhance the robustness of the evaluation models. Further, they can be of value in identifying deficiencies that are common to large system analysis codes.

Throughout the development process, configuration control practices should be adopted that protect program integrity and allow traceability of the development of both the code version and the plant input deck used to instruct the code in how to represent the facility or nuclear power plant. Configuration control of the code version and configuration control of the plant input deck are separate but related elements of the evaluation model development, and they require the same degree of quality assurance. Responsibility for these functions should be clearly established. At the end of the process, only the approved, identified code version and plant input deck should be used for licensing calculations.
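For illustration, the traceability record can be as simple as a cryptographic fingerprint of the frozen code version and plant input deck. The file names in the sketch below are hypothetical.

    # Minimal configuration-control sketch: record a cryptographic
    # fingerprint of the frozen code version and plant input deck so
    # licensing calculations are traceable to exactly those files.
    # File names are hypothetical.
    import hashlib

    def fingerprint(path):
        with open(path, "rb") as f:
            return hashlib.sha256(f.read()).hexdigest()

    for item in ("em_code_v2.1.tar", "plant_input_deck.inp"):
        try:
            print(f"{item}: {fingerprint(item)}")
        except FileNotFoundError:
            print(f"{item}: missing -- record must be completed before use")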

3. DOCUMENTATION

Proper documentation allows appraisal of the evaluation model application to the postulated scenario. The documentation for the evaluation model should cover all the elements of the EMDAP and should include the:
1. Evaluation Model requirements document
2. Evaluation Model methodology document
3. Code description manuals
4. User manuals and user guidelines
5. Scaling reports
6. Assessment reports
7. Uncertainty analysis reports

3.1 Requirements Document

The requirements determined in Element 1 should be documented so the evaluation model can be assessed against known guidelines. In particular, a documented, current PIRT is important in deciding whether a particular evaluation model feature should be modified before the evaluation model can be applied with confidence.

3.2 Methodology Document

Methodology documentation should include the inter-relationships of all the computational devices used for the plant transient being analyzed, including a description of input and output. It should also include a complete description and specification of those portions of the evaluation model not included in the computer programs, along with all other information necessary to specify the calculational procedure. A very useful part of this description would be a diagram illustrating how the various programs and procedures are related, both in time and in function. This methodology description is needed to know exactly how the transient will be analyzed in its entirety.

3.3 Computational Device Description Manuals

A description manual is needed for each computational device that is contained in the evaluation model. There are several important components to the manual. The first is a description of the modeling theory and associated numerical schemes and solution models, including a description of the architecture, hydrodynamics, heat structure, heat transfer models, trip systems and control systems, reactor kinetics models, and fuel behavior models.

A key ingredient of the documentation is a models and correlations quality evaluation (MC/QE) report. The MC/QE report provides a basis for the traceability of the models and detailed information on the closure relations. Information on correlation and model sources, data bases, accuracy, scale-up capability, and applicability to specific plant and transient conditions should also be documented in the MC/QE report. The MC/QE report represents a quality evaluation document that provides a blueprint as to what is in the computational device, how it got there, and where it came from.

The MC/QE document has three objectives:

1. To provide information on the sources and quality of closure equations, that is, on correlations and models or other criteria used.
2. To describe how these closure relations are coded in the device and to assure that the descriptions in the manual conform to the coding, and the coding conforms to the source from which the closure relations were derived.
3. To provide a technical rationale and justification for using these closure relations; that is, to confirm that the dominant parameters (pressure, temperature, etc.) represented by the models and correlations reflect the ranges expected in the plant and transient of interest.

Consequently, for correlations, models, and criteria used, the MC/QE should:

1. Provide information on the original source, the supporting data base, and the accuracy and applicability to the plant-specific transient conditions.
2. Provide an assessment of effects if used outside the supporting data base. A description of and justification for the extrapolation method should be provided. For certain applications, recommendations may be given to use options other than the default options. In such cases, instructions should be provided to ensure that appropriate validation is performed for the nonstandard option.
3. Describe the implementation in the device (i.e., actual coding structure).
4. Describe any modifications required to overcome computational difficulties.
5. Provide an assessment of effects caused by implementation (item 3) or modifications (item 4) on the overall code applicability and accuracy.


References 19 and 20 are examples of the MC/QE documents generated to meet the requirements listed above.

3.4 User's Manual and User Guidelines

The user's manual should be a complete description of how to prepare all required and optional input. The user guidelines should describe recommended practices for preparation of all relevant input. To minimize the risk of inappropriate program use, the guidelines should include:

1. The proper use of the program for the particular plant-specific transient or accident being considered,
2. The range of applicability for the transient or accident being analyzed,
3. The code limitations for such transients and accidents,
4. Recommended modeling options for the transient being considered, the equipment required, and the choice of nodalization schemes. Plant nodalization should be consistent with nodalization used in assessment cases.

3.5 Scaling Reports

Reports should be provided for all scaling analyses used to support the viability of the experimental data base, the scalability of models and correlations, and the scalability of the complete evaluation model. Section 5.3 of Reference 6 provides an example of, and references to, scaling analyses done to support adequacy evaluations.

3.6 Assessment Reports

Assessment reports are generally of three types:

1. Developmental assessment
2. Component assessment
3. Integral effects test assessment

Most developmental assessment (DA) reports should present a set of code analyses that focus on a limited set of ranked phenomena. That is, the code or other device should analyze experiments or plant data that demonstrate, in a separate effects manner, the capability to calculate the individual phenomena and processes determined to be important by the PIRT for the specific scenario and plant type.

A code or other device may model certain equipment in a special way; assessment calculations should be performed for these components.

Integral effects test (IET) assessments should show the evaluation model's integral capability by comparison to relevant integral effects experiments or plant data. Some IET assessments may be general in nature, but for evaluation model consideration, the IET assessments should include a variety of scaled facilities applicable to the plant design and transient.


For some plants and transients, code-to-code comparisons can be very helpful. In particular, if a new code or device is intended to have a limited application, its results may be compared to calculations using a previous code. However, the previous code should be well assessed against integral or plant data for the plant type and transient being considered for the new device. Differences in key input, such as system nodalization, should be explained so that favorable comparisons are shown to provide the right answers for the right reasons. Such benchmark calculations are not a replacement for assessment of the new code.

A significant amount of evaluation model assessment may be performed before selection of the plant-specific transient to be analyzed. In other cases, the assessment may be done outside the context of the plant- and transient-specific evaluation model. In still other cases, the assessment may be done by organizations other than those responsible for the plant-specific analysis. If these assessments are to be credited for the plant and transient under consideration, great care should be taken: their applicability to the present case should be thoroughly evaluated and documented.

To gain confidence in evaluation model predictive capability when applied to a plant-specific event, it is important for assessment reports to:

1. Assess calculational device capability, and quantify its accuracy, in calculating the various parameters of interest, in particular those described in the PIRT.
2. Determine whether or not the calculated results are due to compensating errors by performing an appropriate scaling analysis and sensitivity analysis.
3. Assess whether or not the calculated results are self-consistent and present a cohesive set of information that is technically rational and acceptable.
4. Assess whether the timing of events calculated by the evaluation model is in agreement with the experimental data.
5. Assess the evaluation model capability to scale to the prototypical nuclear plant.

Almost without exception, such assessment also addresses the experimental data base used in development or validation of the evaluation model.

6. Explain any unexpected or, at first glance, strange results calculated by the evaluation model or component devices. This is particularly important when experimental measurements are not available to give credence to the calculated results. In such cases, rational technical explanations will greatly support the credibility of, and confidence in, the evaluation model.

Whenever there is a disagreement between calculated results and experimental data, assessment reports must:

7. Identify and explain the cause for the discrepancy, that is, identify and discuss the deficiency in the device (or, if necessary, discuss the inaccuracy of experimental measurements).
8. Address the question of how important the deficiency is to the overall results, that is, to parameters and issues of interest.


9. Explain why a deficiency may not have an important effect on a particular scenario.

With respect to a calculational device input model and sensitivity studies, it is necessary for assessment reports to:

10. Provide a nodalization diagram along with a discussion of the nodalization rationale.
11. Specify and discuss the boundary and initial conditions, as well as the operational conditions for the calculations.
12. Present and discuss results of sensitivity studies (if performed) on closure relations or other parameters.
13. Discuss modifications to the input model (nodalization, boundary, initial or operational conditions) resulting from sensitivity studies (if performed).
14. Provide guidelines for performing similar analyses.

3.7 Uncertainty Analysis Reports

Documentation should be provided for any uncertainty analyses performed as part of Step 20 of the EMDAP.

4. GENERAL PURPOSE COMPUTER PROGRAMS

Very often, a general-purpose transient analysis computer program, such as RELAP5, TRAC, or RETRAN, is developed to analyze a number of different events for a wide variety of plants. These codes can constitute the major portion of an evaluation model for a particular plant and event. Generic reviews are often performed for these codes to minimize the amount of work required for plant- and event-specific reviews, and a certain amount of generic assessment may be performed as part of the generic code development. The EMDAP, on the other hand, starts with identification of the plant, the event, and directly related phenomena. This process, as previously described, may indicate that a generic assessment does not include all the appropriate geometry, phenomena, or the necessary range of variables to demonstrate code adequacy for some of the proposed plant-specific event analyses. Evidence of this is that safety evaluations for generic code reviews often contain a large number of qualifications on the use of the code. To avoid such problems, it is important to qualify the applicability of the generic code, including its models and correlations, and the applicability of any generic assessment that accompanies the code.

D. IMPLEMENTATION

The purpose of this section is to provide information to applicants and licensees regarding the NRC staff's plans for using this draft regulatory guide.


This draft guide has been released to encourage public participation in its development. Except in those cases in which an applicant or licensee proposes an acceptable alternative method for complying with the specified portions of the NRC's regulations, the methods described in the effective guide, reflecting public comments, will be used in the evaluation of submittals in connection with evaluation models used to analyze transients and accidents.


DEFINITIONS

These definitions are in the context of this regulatory guide and may not apply to other uses.

AHP Analytical Hierarchical Process -- An analytical, software-based methodology used to combine experimental data with expert judgment to efficiently rank the relative importance of phenomena and processes to the response of an NPP to an accident or other transient in a consistent and traceable manner.

AP600 Advanced Passive 600-MWe PWR designed by Westinghouse Electric Co.

Bottom-up The approach to a safety-related analysis similar to top-down (see below), but in which the key feature is to treat all phenomena and processes, including all those associated with the analysis tools for modeling, as equally important to the facility's response to an accident or transient. Therefore, the phenomena and processes are quantified in depth.

Calculational devices Computer codes or other calculational procedures that compose an evaluation model.

Chapter 15 events In this regulatory guide, Chapter 15 events refer to the transients and accidents that are defined in Chapter 15 of the SRP, NUREG-0800 (Ref. 1), to be analyzed to meet the requirements of the General Design Criteria (GDC) of Appendix A to 10 CFR Part 50, except for the fuel assembly misloading event and all radiological consequence analyses.

CFR Code of Federal Regulations

Closure relations Equations and correlations required to supplement the field equations that are solved to obtain the required results. This includes physical property definitions and correlations of transport phenomena.

Constituents Chemical form of any material being transported, e.g., water, air, boron.

CSAU Code scaling, applicability, and uncertainty -- A process to determine the applicability, scalability, and uncertainty of a computer code in simulating an accident or other transient. A PIRT process is normally embedded within a CSAU process. See Reference 3.

DA Developmental Assessment -- Calculations performed using the entire evaluation model or its individual calculational devices to validate its capability for the target application.

DNBR Departure from nucleate boiling ratio

EMDAP Evaluation model development and assessment process

ECCS Emergency core cooling system

Evaluation model (EM) Calculational framework for evaluating the behavior of the reactor system during a postulated Chapter 15 event, which includes one or more computer programs and all other information needed for use in the target application.

Fields The properties that are being transported (mass, momentum, energy).

Field equations Equations that are solved to determine the transport of mass, energy, and momentum throughout the system.

Frozen The condition whereby the analytical tools and associated facility input decks remain unchanged (and under configuration control) throughout a safety analysis, thereby ensuring traceability of and consistency in the final results.

GDC General Design Criteria -- Design criteria described in Appendix A to 10 CFR Part 50.

Geometrical configurations The geometrical shape that is defined for a transfer process, e.g., pool, drop, bubble, film.

H2TS Hierarchical two-tiered scaling -- Methodology that uses hierarchical systems analysis methods to evaluate experimental scaling. Described in References 4 and 5.

IET Integral Effects Test -- An experiment in which the primary focus is on the global system behavior and the interactions between parameters and processes.

ISTIR Integrated Structure for Technical Issue Resolution -- Methodology derived for severe accident issue resolution. Described in References 4 and 5.

LBLOCA Large-break loss-of-coolant accident

LOCA Loss-of-coolant accident

LWR Light water reactor

MC/QE Models and correlations quality evaluation -- A report documenting what is in a computer code, the sources used to develop the code, and the conditions under which the original source of information was developed.

Model (Without evaluation modifier) -- Equation or set of equations that represents a particular physical phenomenon within a calculational device.


Modules Physical components within the sub-system, e.g., reactor vessel, steam generator, pressurizer, piping run.

MYISA Maine Yankee Independent Safety Assessment

NPP Nuclear power plant

PCT Peak cladding temperature

Phase State of matter involved in the transport process, usually liquid or gas. A notable exception is heat conduction through solids.

PIRT Phenomena Identification and Ranking Table -- May refer to a table or to a process, depending on the context of use. The process relates to determining the relative importance of phenomena (or physical processes) to the behavior of an NPP following the initiation of an accident or other transient. A PIRT table is a listing of the results of application of the process.

Processes Mechanisms that move properties through the system.

QA Quality Assurance

SASM Severe accident scaling methodology

SBLOCA Small-break loss-of-coolant accident

Scalability (scaling) The process in which the results from a subscale facility (relative to an NPP) or the modeling features of a calculational device are evaluated to determine the degree to which they represent an NPP.

Scenario Description and time sequence of events.

Sensitivity studies The term is generic to several types of analyses; however, the definition of most interest here relates to those studies associated with the PIRT process and used to determine the relative importance of phenomena or processes. This may also involve analysis of experimental data that are a source of information used in the PIRT process.

SET Separate Effects Test -- An experiment in which the primary focus is on a single physical phenomenon or process.

SRP Standard Review Plan -- Acceptable plan for NRC reviewers, NUREG-0800, Standard Review Plan for the Review of Safety Analysis Reports for Nuclear Power Plants.

System The entire system that must be analyzed for the proposed application.

Systems code The principal computer code of an evaluation model that describes the transport of mass, momentum, and energy throughout the reactor coolant systems.


Sub-systems The major components that must be considered in the analysis. For some applications this would include the primary system, secondary system, and containment. For other applications only the primary system would need to be considered.

Target application The safety analysis for which a specific purpose, transient type, and NPP type have been specified.

Top-down The approach to a safety-related analysis in which one sequentially determines or performs (1) the exact objective of the analysis (regulatory action, licensing action, desired product, etc.), (2) the analysis envelope (facility or NPP, transients, analysis codes, facility-imposed geometric and operational boundary conditions, etc.), (3) all plausible phenomena or processes that have some influence on the facility or plant behavior, (4) a PIRT process, (5) applicability and scalability of the analysis tools, and (6) the influence of various uncertainties embedded in the analysis on the end product. A key feature of the top-down approach is to address those parts of the safety analysis associated with items 5 and 6 in a graduated manner based on the relative importance determined in item 4. Items 1 through 4 of the approach are independent of analysis tools; items 5 and 6 require the approach to become dependent on the analysis tools.

Uncertainty There are two separate but related definitions of primary interest: (1) the inaccuracy in experimentally derived data, typically generated by the inaccuracy of measurement systems, and (2) the inaccuracy of calculating primary safety criteria or related figures of merit, typically originating in the experimental data or assumptions used to develop the analytical tools. The analytical inaccuracies are related to approximations and uncertainties involved with solving the equations and constitutive relations.


REFERENCES

1. Draft Section 15.0.2, Review of Analytical Computer Codes, December 2000, of NUREG-0800, Standard Review Plan for the Review of Safety Analysis Reports for Nuclear Power Plants, USNRC, updated by section.1
2. Regulatory Guide 1.70, Standard Format and Content of Safety Analysis Reports for Nuclear Power Plants (LWR Edition), Revision 3, USNRC, November 1978.2
3. B. Boyack et al., Quantifying Reactor Safety Margins, Application of Code Scaling, Applicability, and Uncertainty Evaluation Methodology to a Large Break, Loss-of-Coolant Accident, NUREG/CR-5249, USNRC, December 1989.3

4. B. Boyack et al., An Integrated Structure and Scaling Methodology for Severe Accident Technical Issue Resolution, Draft NUREG/CR-5809, USNRC, November 1991.2
5. N. Zuber et al., An Integrated Structure and Scaling Methodology for Severe Accident Technical Issue Resolution: Development of Methodology, Nuclear Engineering and Design, 186 (pp. 1-21), 1998.
6. C. D. Fletcher et al., Adequacy Evaluation of RELAP5/MOD3, Version 3.2.1.2 for Simulating AP600 Small Break Loss-of-Coolant Accidents, INEL-96/0400 (nonproprietary version), April 1997.4 (Available in PERR by Accession Number ML003769921)
7. G.E. Wilson and B.E. Boyack, The Role of the PIRT Process in Experiments, Code Development and Code Applications Associated with Reactor Safety Analysis, Nuclear Engineering and Design, 186 (pp. 23-37), 1998.

1 Electronic copies are posted on NRC's web site, <WWW.NRC.GOV>, through Rulemaking, and are available from NRC's Distribution Section, Public Document Room, and Public Electronic Reading Room. See footnotes below.

2 Single copies of regulatory guides, both active and draft, and draft NUREG documents may be obtained free of charge by writing the Reproduction and Distribution Services Section, OCIO, USNRC, Washington, DC 20555-0001, or by fax to (301)415-2289, or by email to <DISTRIBUTION@NRC.GOV>. Active guides may also be purchased from the National Technical Information Service on a standing order basis. Details on this service may be obtained by writing NTIS, 5285 Port Royal Road, Springfield, VA 22161; telephone (703)487-4650; online

<http://www.ntis.gov/ordernow>. Copies of certain guides and many other NRC documents are available electronically on the internet at NRC's home page at <WWW.NRC.GOV> in the Reference Library. Documents are also available through the Public Electronic Reading Room (NRC's ADAMS document system, or PERR) at the same web site.

3 Copies are available at current rates from the U.S. Government Printing Office, P.O. Box 37082, Washington, DC 20402-9328 (telephone (202)512-1800); or from the National Technical Information Service by writing NTIS at 5285 Port Royal Road, Springfield, VA 22161 (telephone (703)487-4650). Copies are available for inspection or copying for a fee from the NRC Public Document Room at 11555 Rockville Pike, Rockville, MD; the PDR's mailing address is USNRC PDR, Washington, DC 20555; telephone (301)415-4737 or (800)397-4209; fax (301)415-3548; email is PDR@NRC.GOV.

4 Electronic copies are available in NRC's Public Electronic Reading Room, which can be accessed through the NRC's web site, <WWW.NRC.GOV>.


8. H. Holmstrom et al., Status of Code Uncertainty Evaluation Methodologies, in Proceedings of the International Conference on New Trends in Nuclear System Thermohydraulics, Dipartimento di Costruzioni Meccaniche Nucleari, Pisa, Italy, 1994.4 (Available in PERR under Accession Number ML003769914)
9. M. G. Ortiz and L. S. Ghan, Uncertainty Analysis of Minimum Vessel Liquid Inventory During a Small Break LOCA in a Babcock and Wilcox Plant, NUREG/CR-5818, USNRC, December 1992.3
10. W. Wulff et al., Uncertainty Analysis of Suppression Pool Heating During an ATWS in a BWR-5 Plant, NUREG/CR-6200, USNRC, March 1994.3
11. G. E. Wilson et al., Phenomena-Based Thermal Hydraulic Modeling Requirements for Systems Analysis of a Modular High Temperature Gas-Cooled Reactor, Nuclear Engineering and Design, 136 (pp. 319-333), 1992.
12. R. A. Shaw et al., Development of a Phenomena Identification and Ranking Table (PIRT) for Thermal-Hydraulic Phenomena During a PWR Large-Break LOCA, NUREG/CR-5074, USNRC, August 1988.3
13. J.C. Watkins and L.S. Ghan, AHP Version 5.1, User's Manual, EGG-ERTP-10585, Idaho National Engineering Laboratory, October 1992.4 (Available in PERR by Accession Number ML003769902)
14. J. Reyes and L. Hochreiter, Scaling Analysis for the OSU AP600 Test Facility (APEX), Nuclear Engineering and Design, pp. 53-109, November 1, 1998.
15. S. Banerjee et al., Scaling in the Safety of Next Generation Reactors, Nuclear Engineering and Design, pp. 111-133, November 1, 1998.
16. V. Ransom, W. Wang, M. Ishii, Use of an Ideal Scaled Model for Scaling Evaluation, Nuclear Engineering and Design, pp. 135-148, November 1, 1998.
17. M. Ishii et al., The Three-Level Scaling Approach with Application to the Purdue University Multi-Dimensional Integral Test Assembly (PUMA), Nuclear Engineering and Design, pp. 177-211, November 1, 1998.
18. Regulatory Guide 1.157, Best-Estimate Calculations of Emergency Core Cooling System Performance, USNRC, May 1989.2
19. RELAP5/MOD3 Code Manual, Models and Correlations, NUREG/CR-5535, Volume 4, USNRC, August 1995.3
20. J. Spore et al., TRAC-PF1/MOD2, Theory Manual, NUREG/CR-5673 (Draft), July 1993. (Available electronically at <www.nrc.gov/RES/TRAC-P> .)


Appendix A

ADDITIONAL CONSIDERATIONS IN THE USE OF THIS REGULATORY GUIDE FOR ECCS ANALYSIS

A.1 BACKGROUND

Section 50.46 of 10 CFR Part 50, as it existed prior to September 1988, provided the requirements for domestic licensing of production and utilization facilities using conservative analysis methods. The acceptance criteria for peak clad temperature, cladding oxidation, hydrogen generation, and long-term decay heat removal were listed in 10 CFR 50.46(b). Appendix K to 10 CFR Part 50 provided specific requirements related to ECCS evaluation models. The requirements of 10 CFR 50.46 were in addition to the requirements of Criterion 35 (GDC 35) of Appendix A to 10 CFR Part 50. GDC 35 states requirements for electric power and equipment redundancy for ECCS systems.

Section 15.6.5 of NUREG-0800, the Standard Review Plan, describes for reviewers the scope of review, acceptance criteria, review procedures, and findings relevant to ECCS analyses submitted by licensees. Section 15.0.2 of NUREG-0800 is the companion SRP section to this regulatory guide.

In September 1988, the NRC amended the requirements of 10 CFR 50.46 and Appendix K so that the regulations reflected the improved understanding of ECCS performance during reactor transients that was obtained through extensive research performed between the promulgation of the original requirements in January 1974 and September 1988. Examples of that body of research can be found in Reference A-1. Further guidance to licensees or applicants was provided in May 1989 by Regulatory Guide 1.157, Best-Estimate Calculations of Emergency Core Cooling System Performance. The amendment to 10 CFR Part 50 and Regulatory Guide 1.157 now permit licensees or applicants to use either the Appendix K conservative analysis methods or a realistic evaluation model (commonly referred to as best-estimate plus uncertainty analysis methods). If the realistic option is chosen, the uncertainty in the best-estimate analysis must be quantified and considered when comparing the results of the calculations with the applicable limits in 10 CFR 50.46(b) so that there is a high probability that the criteria will not be exceeded. It may be noted that the acceptance criteria for peak cladding temperature, cladding oxidation, hydrogen generation, and long-term decay heat removal did not change with the September 1988 amendment.

A.2 NEED FOR REGULATORY GUIDANCE UPDATE FOR ECCS ANALYSIS

The regulatory structure described above was strongly founded on the supporting work documented in Reference A-2. Therefore, it is important to update the regulatory structure to reflect the last eleven years of advancement in best-estimate plus uncertainty analysis methods. Examples of the extension of evolving best-estimate plus uncertainty analysis methods to both the old and new advanced reactor designs can be found in References A-3 through A-9 of this appendix.

A.3 UNCERTAINTY METHODOLOGY

The best-estimate option in 10 CFR 50.46(a)(1)(i), allowed since 1988, requires that:

Uncertainties in the analysis method and inputs must be identified and assessed so that the uncertainty in the calculated results can be estimated. This uncertainty must be accounted for, so that, when the calculated ECCS cooling performance is compared to the criteria set forth in paragraph (b) of this section, there is a high level of probability that the criteria would not be exceeded.

To support the revised 1988 ECCS rule, the NRC and its contractors and consultants developed and demonstrated an uncertainty evaluation methodology called code scaling, applicability, and uncertainty (CSAU) (Ref. A-2). While this regulatory guide is oriented toward the CSAU approach, including its embedded PIRT process, it is recognized that other approaches exist. Since the CSAU demonstration was not a plant-specific application, evaluation of input uncertainties related to plant operation was not emphasized. Proprietary methodologies have been submitted to and approved by the NRC that fully address uncertainties in analysis methods and input. Thus, other approaches to determine the combined uncertainty in the safety analysis are recognized as having potential advantages, as long as the evaluation model documentation provides the necessary validation of its approach.

The safety criteria (PCT, H2 generation, etc.) specified in 10 CFR 50.46 remain unchanged regardless of the uncertainty methodology used in a licensing or regulatory submittal. Similarly, the general guidelines in Regulatory Guide 1.157 with regard to the phenomena, components, and computer models also remain unchanged. Thus, the focus of the remainder of this section is those considerations primarily related to determining the:

+ Relative importance of the phenomena or processes and components, and those that should be included in the uncertainty analysis,

+ Method of establishing the individual phenomenon or process contribution to the total uncertainty in the safety criteria, and

+ Method to combine the individual contributions to uncertainty into the total uncertainty in the safety criteria.

CSAU and other methods address the relative importance of phenomena or processes; the difference lies in the approach. CSAU uses the PIRT process, in which relative importance is established by an appropriate group of experts based on experience, experimental evidence, or computer-based sensitivity studies. When finalized, the resulting PIRTs guide the degree of effort applied to determining each individual phenomenon or process contribution to the uncertainty in the safety criteria. The PIRT results also guide the method used to combine the individual contributions into an estimate of the total uncertainty in the safety analysis. Commonly, although it is not required, a response surface is developed to act as a surrogate for the computer codes used in estimating the total uncertainty; the response surface can then be extensively sampled by Monte Carlo methods to determine the total uncertainty. A limited number of computer calculations is used to develop an accurate response surface, which is then sampled as thoroughly as necessary while remaining as economical as possible. Therefore, the major cost of the CSAU methodology is the extensive staff-hours normally required by the expert panel to perform the PIRT process. Additional advantages of CSAU are that it has been used by the USNRC and that the details of the methodology have been well documented (Ref. A-2).
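To make the response-surface idea concrete, the following is a minimal sketch in Python with NumPy; the two normalized parameters, their assumed distributions, and the peak cladding temperature (PCT) values are invented for illustration and are not taken from this guide or from any approved methodology.

    import numpy as np

    # Results of a small matrix of systems-code runs in which two ranked
    # parameters were varied over normalized ranges [-1, 1]; PCT values (K)
    # below are hypothetical.
    X = np.array([[-1, -1], [-1, 0], [-1, 1],
                  [ 0, -1], [ 0, 0], [ 0, 1],
                  [ 1, -1], [ 1, 0], [ 1, 1]], dtype=float)
    pct = np.array([1310., 1335., 1365., 1340., 1366., 1398., 1372., 1401., 1436.])

    # Fit a quadratic response surface:
    # pct ~ b0 + b1*x1 + b2*x2 + b3*x1*x2 + b4*x1**2 + b5*x2**2
    x1, x2 = X[:, 0], X[:, 1]
    A = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])
    b, _, _, _ = np.linalg.lstsq(A, pct, rcond=None)

    # Monte Carlo sample the inexpensive surrogate in place of the systems code.
    rng = np.random.default_rng(0)
    s1 = rng.normal(0.0, 0.4, 100_000).clip(-1.0, 1.0)  # assumed input distribution
    s2 = rng.uniform(-1.0, 1.0, 100_000)                # assumed input distribution
    surrogate = (b[0] + b[1] * s1 + b[2] * s2 + b[3] * s1 * s2
                 + b[4] * s1**2 + b[5] * s2**2)
    print("Estimated 95th-percentile PCT (K):", np.percentile(surrogate, 95))

In an actual application, the surface would be built from the ranked PIRT parameters and validated against additional code runs before being sampled.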


A potential disadvantage is that the number of computer simulations needed to estimate the total uncertainty depends on the number of phenomena or processes identified in the PIRT. That is, at least two single-parameter change runs must be made for each required phenomenon or process. In addition, cross-product runs must be made when several of the phenomena or processes have significant covariance. The cross-product runs may involve simultaneous changes of two, three, or four parameters to adequately determine the effect of nonindependent phenomena or processes, as illustrated in the sketch below.
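As a purely hypothetical illustration of how this run count grows (the phenomenon names and the choice of covarying pair are invented, not drawn from any PIRT), the following Python sketch enumerates the minimum single-parameter runs and the pairwise cross-product runs:

    from itertools import product

    # Hypothetical ranked phenomena from a PIRT; names are invented.
    phenomena = ["gap_conductance", "chf_multiplier", "break_discharge",
                 "pump_degradation", "interfacial_drag"]
    # Suppose one pair of phenomena shows significant covariance.
    covarying_pairs = [("chf_multiplier", "interfacial_drag")]

    # At least two single-parameter change runs (low and high bound) per phenomenon.
    single_runs = [(p, level) for p in phenomena for level in ("low", "high")]

    # Cross-product runs: all low/high combinations over each covarying pair.
    cross_runs = [dict(zip(pair, levels))
                  for pair in covarying_pairs
                  for levels in product(("low", "high"), repeat=2)]

    print(len(single_runs), "single-parameter runs")            # 10
    print(len(cross_runs), "two-parameter cross-product runs")  # 4

With three- or four-parameter covariances, the cross-product set grows as 2^3 or 2^4 combinations per group, which is the cost driver noted above.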

In contrast, other methods (Ref. A-7) may use only panel or individual experience to determine which phenomena or processes may contribute to the total uncertainty in the safety criteria and to make adequate estimates of the variability of those phenomena or processes.

Similar to CSAU, the estimates of the individual parameter variations are based on expert experience, experimental data, and available sensitivity studies. A large number of computer simulations may not be required, however, because the number of computer calculations needed to determine the total uncertainty is independent of the number of contributors.

That is, the number of computer simulations depends only on the probability and confidence limits desired in the final results. For example, 95%/95% limits require approximately 90 simulations regardless of the number of phenomena or processes selected as contributors. This feature is achieved through the use of unique statistical assumptions with respect to how the individual contributor uncertainty domain is sampled.
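The approximate run count quoted above has a standard statistical basis, supplied here for illustration only since the guide itself does not derive it: the nonparametric (distribution-free) tolerance-limit formula attributed to Wilks. For a one-sided upper limit with probability content β at confidence level γ, taking the largest of N code results as the bound requires

    1 - β^N ≥ γ

With β = γ = 0.95, this gives N ≥ ln(0.05)/ln(0.95) ≈ 58.4, that is, 59 runs. Using the second-largest of the N results as the bound (a higher-order statistic that yields a tighter limit) raises the requirement to N = 93 runs, consistent with the approximately 90 simulations cited above. In both cases, N is independent of the number of uncertain contributors sampled.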

There is not a strong nonproprietary precedent that could be used a priori by the USNRC in approving such a licensing or regulatory submittal to evaluate overall uncertainty. Accordingly, such submittals would initially require significant validation of the methodology. The same is considered to be true of the uncertainty methodologies described in Reference A-7 that might be used.

An uncertainty methodology is not required for the original conservative option in 10 CFR 50.46. Rather, the required features of Appendix K provide sufficient conservatism without the need for an uncertainty analysis. It should be noted that Section II.4 of Appendix K requires that, "To the extent practicable, predictions of the evaluation model, or portions thereof, shall be compared with applicable experimental information."

Thus, Appendix K requires comparisons to data similar to those required for the best-estimate option, but without the need for an uncertainty analysis. However, poor comparisons with applicable data may prevent NRC acceptance of the Appendix K model.


APPENDIX A REFERENCES

A-1 Compendium of ECCS Research for Realistic LOCA Analysis, NUREG-1230, USNRC, December 1988.1

A-2 B. Boyack et al., Quantifying Reactor Safety Margins, Application of Code Scaling, Applicability, and Uncertainty Evaluation Methodology to a Large-Break Loss-of-Coolant Accident, NUREG/CR-5249, USNRC, December 1989.1

A-3 G.E. Wilson et al., Phenomena Identification and Ranking Tables for Westinghouse AP600 Small Break Loss of Coolant Accident, Main Steam Line Break, and Steam Generator Tube Rupture Scenarios, NUREG/CR-6541, USNRC, June 1997.1

A-4 M.G. Ortiz and L.S. Ghan, Uncertainty Analysis of Minimum Vessel Liquid Inventory During a Small Break LOCA in a Babcock and Wilcox Plant, NUREG/CR-5818, USNRC, December 1992.1

A-5 U.S. Rohatgi et al., Bias in Peak Clad Temperature Predictions Due to Uncertainties in Modeling of ECC Bypass and Dissolved Non-Condensable Gas Phenomena, NUREG/CR-5254, USNRC, September 1990.1

A-6 C.D. Fletcher et al., Adequacy Evaluation of RELAP5/MOD3, Version 3.2.1.2 for Simulating AP600 Small Break Loss-of-Coolant Accidents, INEL-96/0400 (Nonproprietary version), April 1997.2 (Available in PERR by Accession Number ML003769921)

A-7 H. Holmstrom et al., Status of Code Uncertainty Evaluation Methodologies, Proceedings of the International Conference on New Trends in Nuclear System Thermohydraulics, Dipartimento di Costruzioni Meccaniche Nucleari, Pisa, Italy, 1994.2 (Available in PERR under Accession Number ML003769914)

A-8 G.E. Wilson and B.E. Boyack, The Role of the PIRT Process in Experiments, Code Development and Code Applications Associated with Reactor Safety Analysis, Nuclear Engineering and Design, 186 (pp. 23-37), 1998.

A-9 RELAP5/MOD3 Code Manual, Models and Correlations, NUREG/CR-5535, Volume 4, USNRC, August 1995.1

1 Copies are available at current rates from the U.S. Government Printing Office, P.O. Box 37082, Washington, DC 20402-9328 (telephone (202)512-1800); or from the National Technical Information Service by writing NTIS at 5285 Port Royal Road, Springfield, VA 22161 (telephone (703)487-4650). Copies are available for inspection or copying for a fee from the NRC Public Document Room at 11555 Rockville Pike, Rockville, MD; the PDR's mailing address is USNRC PDR, Washington, DC 20555; telephone (301)415-4737 or (800)397-4209; fax (301)415-3548; email is PDR@NRC.GOV.

2 Electronic copies are available in NRC's Public Electronic Reading Room, which can be accessed through the NRC's web site, <WWW.NRC.GOV>.


REGULATORY ANALYSIS

1. PROBLEM

Section 50.34, Contents of Applications; Technical Information, of 10 CFR Part 50, Domestic Licensing of Production and Utilization Facilities, requires that:
1. Safety Analysis Reports be submitted that analyze the design and performance of structures, systems, and components provided for the prevention of accidents and the mitigation of the consequences of accidents,
2. Analysis and evaluation of ECCS cooling performance following postulated loss-of-coolant accidents (LOCAs) shall be performed in accordance with the requirements of Section 50.46, and
3. The technical specifications for the facility (Section 50.36) will be based on the safety analysis.

Various sections of Chapter 15 of the Standard Review Plan (SRP) (NUREG-0800) instruct transient and accident reviewers to initiate generic reviews of models used by applicants or licensees if the analytical models have not been previously reviewed and found acceptable by the staff. While the SRP discusses review of the input to these models, no guidance is provided regarding review of the analytical models themselves. Except for Regulatory Guide 1.157 on best-estimate ECCS analysis, no guidance exists for applicants3 on the development and assessment of transient and accident analysis methods. Recent reviews have shown that such guidance could have a positive effect in terms of clarifying expectations and streamlining the review process. To produce a viable product, certain principles should be addressed during the model development and assessment process; these would be described in Draft Regulatory Guide DG-1096. The accompanying new section of Chapter 15 of the SRP addresses the same principles as DG-1096 but focuses on the responsibilities of the analytical model reviewer. This regulatory analysis applies to both the proposed DG-1096 and the proposed Section 15.0.2 of the SRP.

2. ALTERNATIVE APPROACHES

Two alternative approaches were considered:
1. Take no action
2. Provide guidance on the development, assessment, application, and review of methods used to analyze transients and accidents as described in 10 CFR 50.34.

The first alternative, take no action, would require no additional direct cost for the NRC staff or applicants over current conditions, since no change to the process would occur.

This process would involve significant effort on the part of applicants to anticipate the type of information that would be acceptable to the NRC staff to demonstrate the capability of the analysis methods. In addition, the NRC staff review would involve considerable effort and iteration with the applicant to determine whether the proposed methods are acceptable.

3 In this regulatory analysis, an applicant means an applicant, a licensee, a vendor, a methods developer, or other entity that petitions for approval of an analytical model to be used on behalf of a licensee or applicant.

The second alternative, providing analytical guidance to industry and reviewers, was also considered. Providing guidance should reduce the NRC staff effort, and defining acceptable analytical modeling principles should streamline the review process. It may appear to involve more effort on the part of applicants, but if one considers the time and effort spent on iterations with the staff, real savings could be realized in the long run. The development, assessment, application, and review process described in the proposed regulatory guide and SRP chapter is based on an initial identification of the plant and transient, followed by assessment of phenomena and processes and application of these phenomena and processes to code development and assessment. A key principle here is that guidance would focus the development, assessment, and review as clearly as possible.

3. VALUES AND IMPACTS

In this analysis, the probability that guidance will have a positive effect, and the influence of that effect on the achievement of overall safety goals, are not known quantitatively. In the summary below, an impact is a cost in schedule, budget, or staffing, or an undesired attribute, that would accrue from taking the proposed approach.

3.1 Alternative 1 - Take No Action

This alternative has a perceived cost benefit since there are no start-up activities. It also provides flexibility, since each applicant would devise its own process for analytical methods development. However, the NRC would continue to receive requests to review and approve analytical methods that are prepared with no clear guidance on what the NRC staff considers to be an acceptable model. The lack of an identified set of guidelines and practices would have adverse effects on the level of staff effort required to conduct model reviews and to assure consistency of principle among reviews. It would take longer for the applicant to understand staff reviewer expectations and for the staff to have a clear understanding of what the applicant was providing to satisfy those expectations. Thus, although the initial cost would apparently be low, taking no action could result in greater total costs, to both the NRC staff and the applicant, during the review process.

Value - No value beyond the status quo.

Impact - Schedule, budget, and staffing costs, to the staff and applicant, associated with regulatory uncertainty.

3.2 Alternative 2 - Provide Guidance on Analytical Methods Development for Chapter 15 Events

A benefit to the NRC staff of this alternative would be a more comprehensive understanding of the focused basis for development of the analytical methods under review. From the applicant's perspective, there would be a reduction in the need to iterate with the staff as the review progressed. Using such a process would also produce a better product, more clearly suited to the specific analytical task at hand. The costs involved would be some additional work and documentation initially. This alternative would have the value of promoting a predetermined common understanding that has gained acceptance over the last dozen years within the affiliated technical community. The result would be higher confidence in the results of transient and accident analysis.

Value - Common understanding of good practices for development, assessment, and application of analysis methods.

1. Less burdensome developmental and review iterations between staff and applicant.
2. Minimization of regulatory uncertainty.

Impact - Additional initial work and documentation by the applicant.

4. CONCLUSIONS

Experience with recent model reviews has demonstrated the need for guidance in this area. On balance, it is believed that the benefit of guidance in the form of good principles of transient and accident code development and assessment outweighs the relatively small cost of initial work and documentation. Therefore, Draft Regulatory Guide DG-1096 and the proposed new Standard Review Plan Section 15.0.2 should be issued for public comment.

ADAMS Accession Number for DG-1096: ML003770849