Regulatory Guide 1.203

Transient and Accident Analysis Methods (Formerly Draft Regulatory Guide DG-1096, December 2000, and Draft Regulatory Guide DG-1120, December 2002)
ML053500170
Issue date: 12/30/2005
From: S. Marshall, NRC/RES/DSARE/SMSAB (415-5861)
References: DG-1096, DG-1120, RG-1.203


U.S. NUCLEAR REGULATORY COMMISSION
REGULATORY GUIDE
December 2005

OFFICE OF NUCLEAR REGULATORY RESEARCH

REGULATORY GUIDE 1.203 (Drafts were issued as DG-1096, dated December 2000, and DG-1120, dated December 2002)

TRANSIENT AND ACCIDENT ANALYSIS METHODS

A. INTRODUCTION

Title 10, Part 50, of the Code of Federal Regulations (10 CFR Part 50), "Domestic Licensing of Production and Utilization Facilities," Section 50.34, "Contents of Applications; Technical Information" (10 CFR 50.34), specifies the following requirements regarding applications for construction permits and/or licenses to operate a facility:

(1) Safety analysis reports must analyze the design and performance of structures, systems, and components, and their adequacy for the prevention of accidents and mitigation of the consequences of accidents.

(2) Analysis and evaluation of emergency core cooling system (ECCS) cooling performance following postulated loss-of-coolant accidents (LOCAs) must be performed in accordance with the requirements of 10 CFR 50.46.

(3) The technical specifications for the facility must be based on the safety analysis and prepared in accordance with the requirements of 10 CFR 50.36.

This regulatory guide describes a process that the staff of the U.S. Nuclear Regulatory Commission (NRC) considers acceptable for use in developing and assessing evaluation models that may be used to analyze transient and accident behavior that is within the design basis of a nuclear power plant. Evaluation models that the NRC has previously approved will remain acceptable and need not be revised to conform with the guidance given in this regulatory guide.1

1 Regulatory Guide 1.157, Best-Estimate Calculations of Emergency Core Cooling System Performance, describes acceptable models, correlations, data, model evaluation procedures, and methods for meeting the specific requirements for a realistic or best-estimate calculation of ECCS performance during a loss-of-coolant accident.

The U.S. Nuclear Regulatory Commission (NRC) issues regulatory guides to describe and make available to the public methods that the NRC staff considers acceptable for use in implementing specific parts of the agency's regulations, techniques that the staff uses in evaluating specific problems or postulated accidents, and data that the staff needs in reviewing applications for permits and licenses. Regulatory guides are not substitutes for regulations, and compliance with them is not required. Methods and solutions that differ from those set forth in regulatory guides will be deemed acceptable if they provide a basis for the findings required for the issuance or continuance of a permit or license by the Commission.

This guide was issued after consideration of comments received from the public. The NRC staff encourages and welcomes comments and suggestions in connection with improvements to published regulatory guides, as well as items for inclusion in regulatory guides that are currently being developed. The NRC staff will revise existing guides, as appropriate, to accommodate comments and to reflect new information or experience. Written comments may be submitted to the Rules and Directives Branch, Office of Administration, U.S. Nuclear Regulatory Commission, Washington, DC 20555-0001.

Regulatory guides are issued in 10 broad divisions: 1, Power Reactors; 2, Research and Test Reactors; 3, Fuels and Materials Facilities; 4, Environmental and Siting; 5, Materials and Plant Protection; 6, Products; 7, Transportation; 8, Occupational Health; 9, Antitrust and Financial Review; and 10, General.

Requests for single copies of draft or active regulatory guides (which may be reproduced) should be made to the U.S. Nuclear Regulatory Commission, Washington, DC 20555, Attention: Reproduction and Distribution Services Section; by fax to (301) 415-2289; or by email to Distribution@nrc.gov. Electronic copies of this guide and other recently issued guides are available through the NRC's public Web site under the Regulatory Guides document collection of the NRC's Electronic Reading Room at http://www.nrc.gov/reading-rm/doc-collections/ and through the NRC's Agencywide Documents Access and Management System (ADAMS) at http://www.nrc.gov/reading-rm/adams.html, under Accession No. ML053500170.

Chapter 15 of the NRC's Standard Review Plan (SRP) for the Review of Safety Analysis Reports for Nuclear Power Plants (NUREG-0800, Ref. 1) and the Standard Format and Content of Safety Analysis Reports for Nuclear Power Plants (Regulatory Guide 1.70, Ref. 2) describe a subset of the transient and accident events that must be considered in the safety analyses required by 10 CFR 50.34.

Sections 15.1 through 15.6 of the SRP also discuss many of these events.

This regulatory guide is intended to provide guidance for use in developing and assessing evaluation models for accident and transient analyses. An additional benefit is that evaluation models that are developed using these guidelines will provide a more reliable framework for risk-informed regulation and a basis for estimating the uncertainty in understanding transient and accident behavior.

Toward that end, the Discussion section of this guide addresses the fundamental features of transient and accident analysis methods. Next, the Regulatory Position section describes a multi-step process for developing and assessing evaluation models, and provides guidance on related subjects, such as quality assurance, documentation, general purpose codes, and a graded approach to the process.

The Implementation section then specifies the target audience for whom this guide is intended, as well as the extent to which this guide applies, and the Regulatory Analysis section presents the staff's related rationale and conclusion. For convenience, this guide also includes definitions of terms that are used herein. Finally, Appendix A provides additional information important to ECCS analysis, and Appendix B presents an example of the graded application of the evaluation model development and assessment process (EMDAP) for different analysis modification scenarios.

Section 15.0.2 of the SRP (Ref. 1) provides guidance to NRC reviewers of transient and accident analysis methods. This regulatory guide and SRP Section 15.0.2 cover the same subject material and are intended to be complementary, with Section 15.0.2 providing guidance to reviewers and this guide providing practices and principles for the benefit of method developers. Chapter 15 of the SRP recommends using approved evaluation models or codes for the analysis of most identified events. The SRP also suggests that evaluation model reviews should be initiated whenever an approved model does not exist for a specified plant event. If the applicant or licensee proposes to use an unapproved model, an evaluation model review should be initiated.

The NRC staff has consulted with the agency's Advisory Committee on Reactor Safeguards (ACRS) concerning this guide, and the Committee has concurred with the staff's regulatory position as stated herein.

The NRC issues regulatory guides to describe to the public methods that the staff considers acceptable for use in implementing specific parts of the agency's regulations, to explain techniques that the staff uses in evaluating specific problems or postulated accidents, and to provide guidance to applicants. Regulatory guides are not substitutes for regulations, and compliance with regulatory guides is not required.

This regulatory guide contains information collections that are covered by the requirements of 10 CFR Part 50, which the Office of Management and Budget (OMB) approved under OMB control number 3150-0011. The NRC may neither conduct nor sponsor, and a person is not required to respond to, an information collection request or requirement unless the requesting document displays a currently valid OMB control number.

RG 1.203, Page 2

B. DISCUSSION

The two fundamental features of transient and accident analysis methods are (1) the evaluation model concept, and (2) the basic principles important for the development, assessment, and review of those methods.

Evaluation Model Concept

The evaluation model concept establishes the basis for methods used to analyze a particular event or class of events. This concept is described in 10 CFR 50.46 for LOCA analysis, but can be generalized to all analyzed events described in the SRP.

An evaluation model (EM) is the calculational framework for evaluating the behavior of the reactor system during a postulated transient or design-basis accident. As such, the EM may include one or more computer programs, special models, and all other information needed to apply the calculational framework to a specific event, as illustrated by the following examples:

(1) procedures for treating the input and output information (particularly the code input arising from the plant geometry and the assumed plant state at transient initiation)

(2) specification of those portions of the analysis not included in the computer programs for which alternative approaches are used

(3) all other information needed to specify the calculational procedure

The entirety of an EM ultimately determines whether the results are in compliance with applicable regulations. Therefore, the development, assessment, and review processes must consider the entire EM.

The reader should note that this regulatory guide also uses the term "model," which should be distinguished from the "evaluation model" or EM. In contrast to the EM as defined here, "model" (without the "evaluation" modifier) is used in the more traditional sense to describe a representation of a particular physical phenomenon within a computer code or procedure.

Most EMs used to analyze the events in Chapter 15 of the SRP (Ref. 1) rely on a systems code that describes the transport of fluid mass, momentum, and energy throughout the reactor coolant systems.

The extent and complexity of the physical models needed in the systems code are strongly dependent on the reactor design and the transient being analyzed. For a particular transient, a subsidiary device like a subchannel analysis code may actually be more complex than the systems code. Regardless of its complexity, the systems code plays a key role in organizing and controlling other aspects of the transient analysis. In this guide, each computer code, analytical tool, or calculational procedure that makes up the EM is referred to as a "calculational device." In addition, the term "computer code" is not limited to executables of traditional compiled languages such as FORTRAN. Rather, computer code can also include calculations performed in spreadsheets or other mathematical analysis tools such as MathCAD and Mathematica, because such tools are often used in a manner that is indistinguishable from classical compiled programs.

In some cases, as many as seven or eight calculational devices may be used to define an EM for a particular event, although the trend today is to integrate many of those components into a smaller set of computer codes, usually within the framework of the systems code.
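As a rough illustration of the evaluation model concept described above, the following Python sketch treats an EM as a systems code coordinating subsidiary calculational devices together with its input procedures. All class names, device names, and the input procedure are hypothetical, invented for illustration; they do not correspond to any NRC-approved code.

```python
# Hypothetical sketch: an evaluation model (EM) as a coordinated set of
# calculational devices, with the systems code organizing the others.

class CalculationalDevice:
    """One computer code, analytical tool, or procedure within the EM."""
    def __init__(self, name):
        self.name = name

    def run(self, inputs):
        # Placeholder for the device's actual calculation.
        return {f"{self.name}_result": inputs}

class EvaluationModel:
    """Calculational framework: devices plus the procedures linking them."""
    def __init__(self, systems_code, subsidiary_devices, input_procedures):
        self.systems_code = systems_code          # e.g., a thermal-hydraulic systems code
        self.subsidiary = subsidiary_devices      # e.g., a subchannel analysis code
        self.input_procedures = input_procedures  # plant geometry, initial plant state

    def analyze(self, plant_state):
        inputs = self.input_procedures(plant_state)
        results = self.systems_code.run(inputs)
        for device in self.subsidiary:
            results.update(device.run(dict(results)))
        return results

em = EvaluationModel(
    systems_code=CalculationalDevice("systems_code"),
    subsidiary_devices=[CalculationalDevice("subchannel_code")],
    input_procedures=lambda state: dict(state),
)
print(sorted(em.analyze({"power_MW": 3400}).keys()))
```

The point of the sketch is only that the EM is the entire framework (devices plus linking procedures), which is what the development, assessment, and review processes must consider.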


Sometimes, a general purpose systems code may be developed to address similar phenomenological aspects of several diverse classes of transients. This presents unique challenges in the definition, development, assessment, and review of those codes as they apply to a particular transient EM.

This guide devotes a separate section of the Regulatory Position to the issues involved with general purpose computer codes.

Basic Principles of Evaluation Model Development and Assessment

Recent reviews have shown the need to provide applicants and licensees with guidance regarding transient and accident analysis methods. Providing such guidance should streamline the review process by reducing the frequency and extent of iterations between the method developers and NRC staff reviewers.

To produce a viable product, certain principles should be addressed during the model development and assessment processes. Specifically, the following six basic principles have been identified as important to follow in the process of developing and assessing an EM:

(1) Determine requirements for the evaluation model. The purpose of this principle is to provide focus throughout the evaluation model development and assessment process (EMDAP).

An important outcome should be the identification of mathematical modeling methods, components, phenomena, physical processes, and parameters needed to evaluate the event behavior relative to the figures of merit described in the SRP and derived from the general design criteria (GDC) in Appendix A to 10 CFR Part 50. The phenomena assessment process is central to ensuring that the EM can appropriately analyze the particular event and that the validation process addresses key phenomena for that event.

(2) Develop an assessment base consistent with the determined requirements. Since an EM can only approximate physical behavior for postulated events, it is important to validate the calculational devices, individually and collectively, using an appropriate assessment base. The database may consist of already existing experiments, or new experiments may be required for model assessment, depending on the results of the requirements determination.

(3) Develop the evaluation model. The calculational devices needed to analyze the events in accordance with the requirements determined in the first principle should be selected or developed.

To define an EM for a particular plant and event, it is also necessary to select proper code options, boundary conditions, and temporal and spatial relationships among the component devices.

(4) Assess the adequacy of the evaluation model. Based on the application of the first principle, especially the phenomena importance determination, an assessment should be made regarding the inherent capability of the EM to achieve the desired results relative to the figures of merit derived from the GDC. Some of this assessment is best made during the early phase of code development to minimize the need for later corrective actions. A key feature of the adequacy assessment is the ability of the EM or its component devices to predict appropriate experimental behavior. Once again, the focus should be on the ability to predict key phenomena, as described in the first principle. To a large degree, the calculational devices use collections of models and correlations that are empirical in nature. Therefore, it is important to ensure that they are used within the range of their assessment.

(5) Follow an appropriate quality assurance protocol during the EMDAP. Quality assurance standards, as required in Appendix B to 10 CFR Part 50, are a key feature of the development and assessment processes. When complex computer codes are involved, peer review by independent experts should be an integral part of the quality assurance process.

(6) Provide comprehensive, accurate, up-to-date documentation. This is an obvious requirement for a credible NRC review. It is also clearly needed for the peer review described in the fifth principle.

Since the development and assessment process may lead to changes in the importance determination, it is most important that documentation of this activity be developed early and kept current.


The principles of an EMDAP were developed and applied in a study on quantifying reactor safety margins (NUREG/CR-5249, Ref. 3), which applied the code scaling, applicability, and uncertainty (CSAU) evaluation methodology to a large-break LOCA. The purpose of that study was to demonstrate a method that could be used to quantify uncertainties as required by the best-estimate option described in the NRC's 1988 revision to the ECCS Rule (10 CFR 50.46). While the goal related to code uncertainty evaluation, the principles derived to achieve that goal involved the entire process of evaluation model development and assessment. Thus, many of the same principles would apply even if a formal uncertainty evaluation were not the specific goal. Since the publication of Reference 3 in December 1989, the CSAU process has been applied in several instances, with modifications to fit each particular circumstance (see References 4 through 12).

In References 4 and 5, a process was developed using an integrated structure and scaling methodology for severe accident technical issue resolution (ISTIR), which defined separate components for experimentation and code development. Although ISTIR includes a code development component, the ISTIR demonstration did not include that component. An important feature of Reference 4 is the use of hierarchical system decomposition methods to analyze complex systems. In the ISTIR demonstration, the methods were used to investigate experimental scaling, but they are also well-suited to provide structure in the identification of EM fundamentals.

Reference 6 was an adequacy evaluation of RELAP5 for simulating AP600 small-break LOCAs (SBLOCAs). Most of that effort focused on demonstrating the applicability and assessment of a developed code for a new application.

The subjects addressed in References 3 through 6 are complex, and the structures used to address those subjects are very detailed. The EMDAP described in this guide is also detailed, so that it can be applied to the complex events described in SRP Chapter 15. This is particularly true if either the application or the proposed methods are new. The complexity of the problem should determine the level of detail needed to develop and assess an EM. For simpler events, many of the steps in the process may only need to be briefly addressed. Also, if a new EM only involves an incremental change to an existing EM, the process may be shortened as long as the effect of the change is thoroughly addressed. These instances describe a graded approach to the EMDAP, which is discussed in detail in Section 5 of the Regulatory Position. Figure 1 augments that discussion by providing an overall diagram of the EMDAP and the relationships among its elements.


C. REGULATORY POSITION

This section discusses the NRC staff's regulatory position, which provides guidance concerning methods for calculating transient and accident behavior. Toward that end, this section describes the following five related aspects of evaluation model development and assessment:

(1) the four elements and included steps in the EMDAP, based on the first four principles previously described in the Discussion section and illustrated in Figure 1

(2) the relationship of this process to accepted quality assurance practices, and the incorporation of peer review as described in the fifth principle

(3) items that should be included in EM documentation to be consistent with the sixth principle

(4) unique aspects of general purpose computer programs

(5) a graded approach to application of the EMDAP

Appendix A provides additional information important to ECCS analysis, and Appendix B presents an example of the graded application of the EMDAP for different analysis modification scenarios.

1. Evaluation Model Development and Assessment Process (EMDAP)

The basic elements developed to describe an EMDAP directly address the first four principles described in the Discussion section and illustrated in Figure 1. This regulatory position addresses the four elements and the adequacy decision shown in Figure 1. Adherence to an EMDAP for new applications or a completely new EM could involve significant iterations within the process. However, the same process applies even if the new EM is the result of relatively simple modifications to an existing EM.

Feedback loops are not shown; rather, they are addressed in the adequacy decision described in Regulatory Position 1.5.

1.1 Element 1: Establish Requirements for Evaluation Model Capability

It is very important to determine, at the beginning, the exact application envelope for the EM, and to identify and agree upon the importance of constituent phenomena, processes, and key parameters within that envelope. Figure 2 illustrates the steps within this element, as described in the following subsections.

1.1.1 Step 1: Specify Analysis Purpose, Transient Class, and Power Plant Class

The first step in establishing EM requirements and capabilities is specifying the analysis purpose and identifying the transient class and plant class to be analyzed. Specifying the purpose is important because any given transient may be analyzed for different reasons. For instance, an SBLOCA may be analyzed to assess the potential for pressurized thermal shock (PTS) or compliance with 10 CFR 50.46.

The statement of purpose influences the entire process of development, assessment, and analysis.

Evaluation model applicability is scenario-dependent because the dominant processes, safety parameters, and acceptance criteria change from one scenario to another. The transient scenario, therefore, dictates the processes that must be addressed. A complete scenario definition is plant-class-specific or sometimes even plant-specific because the dominant phenomena and their interactions differ in varying degrees with the reactor design or a plant-specific configuration such as a specific fuel type or core loading.


For events described in Chapter 15 of the SRP, these steps should be straightforward. The purpose is compliance with the GDC; the events and event classes are described in Chapter 15. The licensee or applicant and the EM developer should then specify the EM's applicability to particular plants and plant types.

As examples, fuel design, core loading, number and design of steam generators, number and design of coolant loops, safety injection system design, and control systems can differ significantly from plant to plant and will significantly influence scenario behavior.

1.1.2 Step 2: Specify Figures of Merit

Figures of merit are those quantitative standards of acceptance that are used to define acceptable answers for a safety analysis. The GDC in Appendix A to 10 CFR Part 50 describe general requirements for maintaining the reactor in a safe condition during normal operation and during transients and accidents.

Chapter 15 of the SRP further defines these criteria in terms of quantitative fuel and reactor system design limits [departure from nucleate boiling ratio (DNBR) limits, fuel temperatures, etc.] for the events of interest. For ECCS design, five specific criteria described in 10 CFR 50.46 must be met for LOCA analysis. Thus, for Chapter 15 events, figures of merit are generally synonymous with criteria directly associated with the regulations, and their selection is usually a simple matter. During evaluation model development and assessment, a temporary surrogate figure of merit may be of value in evaluating the importance of phenomena and processes. Section 2.5 of Reference 7 describes a hierarchy of criteria that was used in SBLOCA assessment, in which vessel inventory was deemed more valuable in defining and assessing code capability. Justification for using a surrogate figure of merit should be provided.


In line with the surrogate figure of merit, it is also important to consider other related performance measures in conjunction with the principal objectives. Because compensating errors in the code can unintentionally lead to correct answers, additional performance measures serve as physical tracking points and additional proof of accuracy. While the code may calculate the correct peak cladding temperature (PCT), for example, incorrect or physically impossible parameter values could evolve in other areas of the calculation.
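The interplay between a primary figure of merit and the additional tracking points described above can be sketched as follows. The 2200 degree F PCT limit is the acceptance criterion from 10 CFR 50.46; the other parameter names and plausibility ranges are hypothetical examples, not regulatory values.

```python
# Illustrative sketch: checking a primary figure of merit (PCT) together with
# additional performance measures that guard against compensating errors.

def check_figures_of_merit(results):
    checks = {
        # Primary acceptance criterion (10 CFR 50.46): PCT <= 2200 F
        "peak_cladding_temperature_F": results["pct_F"] <= 2200.0,
        # Additional tracking points: physically plausible behavior elsewhere
        "void_fraction_in_range": 0.0 <= results["core_void_fraction"] <= 1.0,
        "pressure_positive": results["system_pressure_psia"] > 0.0,
    }
    return checks

results = {"pct_F": 1850.0, "core_void_fraction": 0.42, "system_pressure_psia": 1250.0}
print(all(check_figures_of_merit(results).values()))
```

A calculation that satisfies the PCT criterion but fails a tracking check would warrant investigation for compensating errors.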

1.1.3 Step 3: Identify Systems, Components, Phases, Geometries, Fields, and Processes That Must Be Modeled

The purpose of this step is to identify the EM characteristics. In References 4 and 5, hierarchical system decomposition methods were used to investigate scaling in complex systems. These methods can also be valuable in identifying EM characteristics. In order from top to bottom, References 4 and 5 describe the following ingredients at each hierarchical level:

(1) System: The entire system that must be analyzed for the proposed application.

(2) Subsystems: Major components that must be considered in the analysis. For some applications, these may include the primary system, secondary system, and containment. For other applications, only the primary system would need to be considered.

(3) Modules: Physical components within the subsystem (e.g., reactor vessel, steam generator, pressurizer, piping run, etc.)

(4) Constituents: Chemical form of substance (e.g., water, nitrogen, air, boron, etc.)

(5) Phases: Solid, liquid, or vapor.

(6) Geometrical Configurations (phase topology or flow regime): The geometrical shape defined for a given transfer process (e.g., pool, drop, bubble, film, etc.)

(7) Fields: The properties that are being transported (i.e., mass, momentum, and energy).

(8) Transport Processes: Mechanisms that determine the transport of and interactions between constituent phases throughout the system.

Ingredients at each hierarchical level can be decomposed into the ingredients at the next level down.

In References 4 and 5, this process is described as follows:

(1) Each system can be divided into interacting subsystems.

(2) Each subsystem can be divided into interacting modules.

(3) Each module can be divided into interacting constituents.

(4) Each constituent can be divided into interacting phases.

(5) Each phase can be characterized by one or more geometrical configurations (phase topology or flow regime).

(6) Each phase can be described by field equations (e.g., conservation equations for mass, energy, and momentum).

(7) The evolution of each field can be affected by several transport processes.

By carefully defining the number and type of each ingredient at each level, the evaluation model developer should be able to establish the basic characteristics of the EM. An important principle to note is that if a deficiency exists at a higher level, it is usually not possible to resolve it by fixing ingredients at lower levels. For relatively simple transients, the decomposition process should also be simple.
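The eight-level hierarchy and the level-by-level decomposition described above can be sketched as nested data. The specific entries shown (a primary system containing a reactor vessel, a water constituent, and so on) are hypothetical illustrations, not a complete decomposition.

```python
# Sketch of the eight-level hierarchical decomposition from References 4 and 5,
# represented as nested dictionaries. Each level decomposes into the next.

hierarchy_levels = [
    "system", "subsystems", "modules", "constituents",
    "phases", "geometrical_configurations", "fields", "transport_processes",
]

decomposition = {
    "system": "reactor_plant",
    "subsystems": {
        "primary_system": {
            "modules": {
                "reactor_vessel": {
                    "constituents": {
                        "water": {
                            "phases": {
                                "liquid": {
                                    "geometrical_configurations": ["pool", "film"],
                                    "fields": ["mass", "momentum", "energy"],
                                    "transport_processes": ["interfacial_heat_transfer"],
                                }
                            }
                        }
                    }
                }
            }
        }
    },
}

# A deficiency at a higher level (e.g., a missing subsystem) cannot be fixed
# by refining lower-level ingredients such as transport processes.
print(len(hierarchy_levels))
```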


1.1.4 Step 4: Identify and Rank Key Phenomena and Processes

Process identification is the last step in the decomposition described above and provides the logical beginning to this step. Plant behavior is not equally influenced by all processes and phenomena that occur during a transient. An optimum analysis reduces candidate phenomena to a manageable set by identifying and ranking the phenomena with respect to their influence on the figures of merit. Each phase of the transient scenario and each system component is investigated separately. The processes and phenomena associated with each component are examined, and cause and effect are differentiated. After the processes and phenomena have been identified, their importance should be determined with respect to their effect on the relevant figures of merit.

The importance determination should also be applied to high-level system processes, which may be missed if the focus is solely on components. High-level system processes, such as depressurization and inventory reduction, are often very closely related to figures of merit. Focus on such processes can also help to identify the importance of individual component behaviors.

As noted in Step 2, it may be possible to show that a figure of merit other than the applicable acceptance criterion is more appropriate as a standard for identifying and ranking phenomena. This is acceptable as long as it can be shown that, for all scenarios being considered for the specific ranking and identification activity, the alternative figure of merit is consistent with plant safety.

The principal product of the process outlined above is a phenomena identification and ranking table (PIRT) (see References 3, 6, 7, 9, and 12). Evaluation model development and assessment should be based on a credible and scrutable PIRT. The PIRT should be used to determine the requirements for physical model development, scalability, validation, and sensitivity studies. Ultimately, the PIRT is used to guide any uncertainty analysis or the assessment of overall EM adequacy. The PIRT is not an end in itself; rather, it is a tool to provide guidance for the subsequent steps.

The processes and phenomena that EMs should simulate are found by examining experimental data, experience, and code simulations related to the specific scenario. Independent techniques to accomplish the ranking include expert opinion, selected calculations, and decision-making methods, such as the Analytical Hierarchical Process (AHP). Reference 12 provides examples of expert opinion and selected calculations, while Reference 13 provides an example of decision-making methods.

Comparing the results of these techniques provides assurance of the accuracy and sufficiency of the process.
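As a minimal sketch of one decision-making ranking technique in the spirit of the AHP mentioned above, the following computes relative importance weights from a pairwise-comparison matrix using the common geometric-mean approximation to the principal eigenvector. The phenomena names and judgment values are hypothetical.

```python
# Illustrative pairwise-comparison ranking, AHP-style.
import math

phenomena = ["break_flow", "core_heat_transfer", "pump_degradation"]

# pairwise[i][j] = judged importance of phenomenon i relative to phenomenon j
pairwise = [
    [1.0,   3.0,   5.0],    # break flow judged more important than the others
    [1/3.0, 1.0,   3.0],
    [1/5.0, 1/3.0, 1.0],
]

# Normalized geometric mean of each row approximates the principal eigenvector.
geo_means = [math.prod(row) ** (1.0 / len(row)) for row in pairwise]
total = sum(geo_means)
weights = [g / total for g in geo_means]

ranked = sorted(zip(phenomena, weights), key=lambda p: -p[1])
for name, w in ranked:
    print(f"{name}: {w:.3f}")
```

In practice the judgments would come from the expert-opinion and calculation-based techniques described above, and the resulting ranks would be cross-checked against them.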

The initial phases of the PIRT process described in this step can rely heavily on expert opinion, which can be subjective. Therefore, it is important to validate the PIRT using experimentation and analysis.

Although the experience is limited, development of other less-subjective initial importance determination methods is encouraged.

Sensitivity studies can help determine the relative influence of phenomena identified early in PIRT development and can support final validation of the PIRT as the EMDAP is iterated. References 3, 6, 9, 11, and 12 provide examples of sensitivity studies used for this purpose.
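A sensitivity study of the kind referenced above can be sketched with a one-at-a-time perturbation. The surrogate model, parameter names, and coefficients below are stand-ins for illustration, not a real EM calculation.

```python
# Minimal one-at-a-time sensitivity sketch: perturb one input at a time and
# observe the change in a figure of merit.

def surrogate_pct(inputs):
    # Stand-in algebraic surrogate for an EM calculation of PCT (F)
    return 1000.0 + 400.0 * inputs["break_size"] - 0.2 * inputs["eccs_flow"]

base = {"break_size": 0.5, "eccs_flow": 500.0}
base_pct = surrogate_pct(base)

sensitivities = {}
for param in base:
    perturbed = dict(base)
    perturbed[param] *= 1.10                    # +10% perturbation
    sensitivities[param] = surrogate_pct(perturbed) - base_pct

# Larger magnitude => more influential phenomenon for this figure of merit
for param, delta in sorted(sensitivities.items(), key=lambda kv: -abs(kv[1])):
    print(f"{param}: {delta:+.1f} F")
```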

The identification of processes and phenomena proceeds as follows:

(1) The scenario is divided into operationally characteristic time periods in which the dominant processes and phenomena remain essentially constant.

(2) For each time period, processes and phenomena are identified for each component, following a closed circuit throughout the system, to differentiate cause from effect.


(3) Starting with the first time period, the activities continue, component by component, until all potentially significant processes have been identified.

(4) The procedure is repeated sequentially, from time period to time period, until the end of the scenario.
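The four-step identification procedure above can be sketched as nested iteration over time periods and components. The scenario phases, components, and candidate processes shown are hypothetical examples.

```python
# Sketch of the identification procedure: period by period, component by
# component, collect candidate (period, component, process) entries.

scenario = {
    "blowdown": {"reactor_vessel": ["flashing", "critical_flow"],
                 "steam_generator": ["heat_transfer_reversal"]},
    "refill":   {"reactor_vessel": ["ecc_bypass"],
                 "downcomer": ["condensation"]},
}

identified = []
for period, components in scenario.items():            # step (4): period by period
    for component, processes in components.items():    # steps (2)-(3): component by component
        for process in processes:
            identified.append((period, component, process))

print(len(identified))  # number of candidate entries to be ranked
```

The resulting list is the input to the ranking that follows.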

When the identification has been completed, the ranking process begins. The processes and phenomena are ranked numerically in order to provide a systematic and consistent approach to all subsequent EMDAP activities.

Sufficient documentation should accompany the PIRT to adequately guide the entire EMDAP.

Development and assessment activities may be revisited during the process, including the identification and ranking. In the end, however, the EM, PIRT, and all documentation should be frozen to provide the basis for a proper review. With well-defined ranking of important processes, EM capabilities, and calculated results, further modeling improvements can more easily be prioritized. An important principle is the recognition that the more highly ranked phenomena and processes require greater modeling fidelity.

References 6 and 7 describe the role of the PIRT process in experiments, code development, and code applications associated with reactor safety analysis.

1.2 Element 2: Develop Assessment Base

The second component of ISTIR (Refs. 4 and 5) is a scaling methodology that includes acquiring appropriate experimental data relevant to the scenario being considered and ensuring the suitability of experimental scaling. References 4 and 5 show the relationship of the severe accident scaling methodology (SASM) component to code development, although that relationship was not emphasized in the SASM demonstration. For the EMDAP, the purpose is to provide the basis for development and assessment as previously depicted in Figure 1. Figure 3 shows the steps and their relationships for Element 2.

Note that for simple transients, or transients whose scaling issues and assessment base are well characterized, the implementation of this element should be correspondingly simple. The numbering of steps in this and subsequent elements continues from the previous element.


1.2.1 Step 5: Specify Objectives for Assessment Base

For analysis of Chapter 15 events, the principal need for a database is to assess the EM and, if needed, develop correlations. The selection of the database is a direct result of the requirements established in Element 1. As such, the database should include the following records:

(1) separate effects experiments needed to develop and assess empirical correlations and other closure models

(2) integral systems tests to assess system interactions and global code capability

(3) benchmarks with other codes (optional)

(4) plant transient data (if available)

(5) simple test problems to illustrate fundamental calculational device capability

It should be noted that Records 3 and 5 in the above list are not intended to be substitutes for obtaining appropriate experimental and/or plant transient data for evaluation model assessment.


1.2.2 Step 6: Perform Scaling Analysis and Identify Similarity Criteria

All experiments involve compromises relative to the full-scale plant system; even nominally full-scale experiments do not achieve complete similitude. Scaling analyses should be conducted to ensure that the data, and the models based on those data, will be applicable to the full-scale analysis of the plant transient.

Scaling compromises identified here should ultimately be addressed in the bias and uncertainty evaluation in Element 4. Scaling analyses are employed to demonstrate the relevancy and sufficiency of the collective experimental database for representing the behavior expected during the postulated transient, and to investigate the scalability of the EM and its component codes for representing the important phenomena. The scope of these analyses is much broader than that of the scalability evaluations described in Element 4, which relate to individual models and correlations or to scaling-related findings from the code assessments. Here, the need is to demonstrate that the experimental database is sufficiently diverse that the expected plant-specific response is bounded and the EM calculations are comparable to the corresponding tests in non-dimensional space. This demonstration allows extending the conclusions related to code capabilities, drawn from assessments comparing calculated and measured test data (Element 4), to the prediction of plant-specific transient behavior.

The scaling analyses employ both top-down and bottom-up approaches. The top-down scaling approach evaluates the global system behavior and systems interactions from integral test facilities that can be shown to represent the plant-specific design under consideration. A top-down scaling methodology is developed and applied to achieve the following purposes:

(1) Derive the non-dimensional groups governing similitude between facilities.

(2) Show that these groups scale the results among the experimental facilities.

(3) Determine whether the ranges of group values provided by the experiment set encompass the corresponding plant- and transient-specific values.
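A minimal numerical illustration of purpose (3) follows. The non-dimensional group (a Froude-type number is used only as an example), the facility conditions, and the plant conditions are all hypothetical, chosen solely to show the encompassing check:

```python
import math

def froude_number(velocity, length, g=9.81):
    """Example non-dimensional group: Fr = v / sqrt(g * L)."""
    return velocity / math.sqrt(g * length)

# Hypothetical test conditions: velocity (m/s) and characteristic length (m).
facility_groups = {
    "facility_A": froude_number(1.2, 0.50),
    "facility_B": froude_number(2.5, 2.00),
    "facility_C": froude_number(0.8, 0.25),
}
plant_group = froude_number(1.7, 1.00)   # plant- and transient-specific value

lo, hi = min(facility_groups.values()), max(facility_groups.values())
encompassed = lo <= plant_group <= hi
print(f"plant Fr = {plant_group:.3f}; experimental range = [{lo:.3f}, {hi:.3f}]")
print("plant value encompassed by the experiment set:", encompassed)
```

If such a check fails for an important group, the database does not cover the plant-specific value of that group, and additional data would be needed in Step 7.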

The bottom-up scaling analyses address issues raised in the plant- and transient-specific PIRT related to localized behavior. These analyses are used to explain differences among tests in different experimental facilities, to infer the expected plant behavior from those explanations, and to determine whether the experiments provide adequate plant-specific representation. Section 5.3 of Reference 6 describes the application of this scaling process.

In most applications, especially those with a large number of processes and parameters, it is difficult, if not impossible, to design test facilities that preserve total similitude between the experiment and the plant. Therefore, based on the important phenomena and processes identified in Step 4 and the scaling analysis described above, the optimum similarity criteria should be identified, and the associated scaling rationales developed for selecting existing data or designing and operating experimental facilities.

1.2.3 Step 7: Identify Existing Data and/or Perform Integral Effects Tests (IETs) and Separate Effects Tests (SETs) To Complete the Database

Based on the results of the previous steps in this element, it should be possible to complete the database by selection and experimentation. To complete the assessment matrix, the PIRT developed in Step 4 is used to select experiments and data that best address the important phenomena and components.

In selecting experiments, a range of tests should be employed to demonstrate that the calculational device or phenomenological model has not been tuned to a single test. A correlation derived from a particular data set may be identified for inclusion in the EM. In such cases, an effort should be made to obtain additional data sets that may be used to assess the correlation. Ideally, both the data that will be used to develop the correlation and the data that will be used to assess the correlation should be identified before developing the correlation. This would help to ensure that the correlation is not tuned to a particular data set, and that the data used to assess the correlation have not been deliberately selected to make the correlation appear more accurate than it truly is. The data used for development and assessment should cover the full range of conditions for which the correlation will be used. For integral behavior assessment, counterpart tests (similar scenarios and transient conditions) in different experimental facilities at different scales should be selected. Assessments using such tests lead to information concerning scale effects on the models used for a particular calculational device.
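The separation of development and assessment data described above can be sketched numerically. In this hypothetical example, a simple linear correlation is fitted to one invented data set, and its bias and deviation are then judged only against an independent, held-out set that was never used in the fit:

```python
import statistics

# Hypothetical measurements: (independent variable x, measured quantity y).
development_data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2), (4.0, 7.8), (5.0, 10.1)]
assessment_data  = [(1.5, 3.0), (2.5, 5.2), (3.5, 6.9), (4.5, 9.0)]

def fit_linear(data):
    """Least-squares fit y = a*x + b, using the development data only."""
    xs, ys = zip(*data)
    n = len(data)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in data) / sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

a, b = fit_linear(development_data)

# Assess against the independent data the correlation was never tuned to.
errors = [a * x + b - y for x, y in assessment_data]
bias = statistics.mean(errors)
deviation = statistics.stdev(errors)
print(f"correlation: y = {a:.3f}*x + {b:.3f}")
print(f"assessment bias = {bias:.3f}, deviation = {deviation:.3f}")
```

Because the assessment set played no role in the fit, the quoted bias and deviation are an honest measure of correlation accuracy over the range the data cover.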

1.2.4 Step 8: Evaluate Effects of IET Distortions and SET Scaleup Capability

(a) IET Distortions. Distortions in the integral effects test (IET) database may arise from scaling compromises (missing or atypical phenomena) in sub-scale facilities or atypical initial and boundary conditions in all facilities. The effects of the distortions should be evaluated in the context of the experimental objectives determined in Step 5. If the effects are important, a return to Step 7 is probably needed.

(b) SET Scaleup. As noted in Step 7, correlations should be based on separate effects tests (SETs) at various scales. In the case of poor scaleup capability, it may be necessary to return to Step 6.

Appendix C of Reference 3 describes the rationale and techniques associated with evaluating scaleup capabilities of computer codes and their supporting experimental databases.

1.2.5 Step 9: Determine Experimental Uncertainties as Appropriate

It is important to know the uncertainties in the database. These uncertainties arise from measurement errors, experimental distortions, and other aspects of experimentation. If the quantified experimental uncertainties are too large compared to the requirements for evaluation model assessment, the particular data set or correlation should be rejected.

1.3 Element 3: Develop Evaluation Model

As previously discussed, an EM is a collection of calculational devices (codes and procedures) developed and organized to meet the requirements established in Element 1. Figure 4 depicts the steps for developing the desired EM.


1.3.1 Step 10: Establish an Evaluation Model Development Plan

Based on the requirements established in Element 1, an EM development plan should be devised. Such a plan should include development standards and procedures that will apply throughout the development activity, and should address the following specific areas of focus:

(1) design specifications for the calculational device

(2) documentation requirements (see Regulatory Position 3 of this guide)

(3) programming standards and procedures

(4) transportability requirements

(5) quality assurance procedures (see Regulatory Position 2 of this guide)

(6) configuration control procedures

1.3.2 Step 11: Establish Evaluation Model Structure

The EM structure includes the structure of the individual component calculational devices, as well as the structure that combines the devices into the overall EM. This structure should be based on the principles of Element 1 (especially Step 3), as well as the requirements established in Element 1 and Step 10. The structure for an individual device or code consists of the following six ingredients:

(1) Systems and components: The EM structure should be able to analyze the behavior of all systems and components that play a role in the targeted application.

(2) Constituents and phases: The code structure should be able to analyze the behavior of all constituents and phases relevant to the targeted application.

(3) Field equations: Field equations are solved to determine the transport of the quantities of interest (usually mass, energy, and momentum).


(4) Closure relations: Closure relations are correlations and equations that help to model the terms in the field equations by providing code capability to model and scale particular processes.

(5) Numerics: Numerics provide code capability to perform efficient and reliable calculations.

(6) Additional features: These address code capability to model boundary conditions and control systems.

Because of the importance of selecting proper closure relationships for the governing equations, these models are treated separately in Step 12. The six ingredients described above should be successfully integrated and optimized if a completed code is to meet the objectives determined in Step 10.

The special concerns related to integrating the component calculational devices into a complete EM are frequently referred to collectively as the EM methodology. The way in which the devices are connected spatially and temporally should be described. How close the coupling needs to be would be determined, in part, by the results of the analysis done in Step 3, but also by the magnitude and direction of transfer processes between devices. The hierarchical decomposition described in References 4 and 5 would apply to how transfer processes between devices are analyzed. Since most devices include user options, all selections made should be justified as appropriate for the EM.
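The spatial and temporal connection of component devices can be made concrete with a small driver loop. The two "devices" below are trivial stand-in functions, not models from any actual code; they are shown only to illustrate an explicit, loosely coupled transfer of boundary conditions between devices:

```python
# Hedged sketch of loose explicit coupling between two calculational devices:
# a system thermal-hydraulics stand-in supplies coolant conditions to a fuel
# rod stand-in, which returns a surface heat flux for the next time step.
def system_device(t, heat_flux):
    """Stand-in for a system code: returns coolant temperature (K)."""
    return 560.0 + 0.01 * t - 1e-5 * heat_flux

def fuel_device(coolant_temp):
    """Stand-in for a fuel rod code: returns rod surface heat flux (W/m2)."""
    return 8.0e5 * (1.0 - (coolant_temp - 560.0) / 200.0)

heat_flux = 8.0e5                 # initial guess for the transferred quantity
for step in range(5):             # explicit, fixed-step temporal coupling
    t = step * 1.0                # time (s)
    coolant_temp = system_device(t, heat_flux)
    heat_flux = fuel_device(coolant_temp)
    print(f"t={t:4.1f} s  Tcool={coolant_temp:7.2f} K  q''={heat_flux:9.1f} W/m2")
```

Whether a loose explicit exchange of this kind is adequate, or tighter implicit coupling is needed, would be governed by the magnitude and direction of the transfer terms, as noted above.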

1.3.3 Step 12: Develop or Incorporate Closure Models

Models or closure relationships that describe a specific process are developed using SET data. This includes models that can be used in a standalone mode or correlations that can be incorporated in a calculational device (usually a computer code). On rare occasions, sufficient experimental detail may be available to develop correlations from IET experiments. The scalability and range of applicability of a correlation may not be known (a priori) the first time it is developed or selected for use in this step. An iteration of scaleup evaluation (Step 8) and adequacy assessment (Element 4) may be needed to ensure correlation applicability. (Note that Figure 1 shows a path from Element 2 to this step, because correlations may be selected from the existing database literature.)

Models developed here are key to successful EM development. The basis, range of applicability, and accuracy of incorporated phenomenological models should be known and traceable. Justification should be provided for extension of any models beyond their original bases.

1.4 Element 4: Assess Evaluation Model Adequacy

Evaluation model adequacy can be assessed after the previous elements have been established and the EM capability has been documented. Figure 5 is a diagram of Element 4.

The EM assessment is divided into two parts as shown in Figure 5. The first part (Steps 13-15) pertains to the bottom-up evaluation of the closure relations for each code. In the first part, important closure models and correlations are examined by considering their pedigree, applicability, fidelity to appropriate fundamental or SET data, and scalability. The term bottom-up is used because the review focuses on the fundamental building blocks of the code.

The second part (Steps 16-19) pertains to the top-down evaluations of code-governing equations, numerics, the integrated performance of each code, and the integrated performance of the overall EM.

In the second part of the assessment, the EM is evaluated by examining the field equations, numerics, applicability, fidelity to component or integral effects data, and scalability. This part of the assessment is called the top-down review because it focuses on the capabilities and performance of the EM.

Calculations of actual plant transients or accidents can be useful as confirmatory supporting assessments for the EM in the top-down evaluation, even though such data do not usually contain enough resolution to determine the adequacy of individual models. Plant data can be used for code assessment if it can be demonstrated that the available instrumentation provides measurements of adequate resolution to assess the code.

It is important to note that any changes to an EM should include at least a partial assessment to ensure that these changes do not produce unintended results in the code's predictive capability.

1.4.1 Step 13: Determine Model Pedigree and Applicability To Simulate Physical Processes

The pedigree evaluation relates to the physical basis of a closure model, assumptions and limitations attributed to the model, and details of the adequacy characterization at the time the model was developed. The applicability evaluation relates to whether the model, as implemented in the code, is consistent with its pedigree or whether use over a broader range of conditions is justified.


1.4.2 Step 14: Prepare Input and Perform Calculations To Assess Model Fidelity or Accuracy

The fidelity evaluation relates to the existence and completeness of validation efforts (through comparison to data), benchmarking efforts (through comparison to other standards, such as a closed-form solution or results obtained with another code), or some combination of these comparisons.

SET input for component devices used in model assessment (usually computer codes) should be prepared to represent the phenomena and test facility being modeled, as well as the characteristics of the nuclear power plant design. In particular, nodalization and option selection should be consistent between the experimental facility and similar components in the nuclear power plant. Nodalization convergence studies should be performed to the extent practicable in both the test facility and plant models.

Some models are essentially lumped parameter models and, in those cases, a nodalization convergence study cannot be performed. In such cases, care should be taken to ensure that the model is applicable to both the test facility and the plant. When the calculations of the SETs are completed, the differences between calculated results and experimental data for important phenomena should be quantified for bias and deviation.
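A nodalization convergence study of the kind described above can be sketched as follows. The assumed temperature profile, node counts, and figure of merit are hypothetical stand-ins for an actual facility or plant model:

```python
import math

def average_temperature(n_nodes):
    """Midpoint-rule average of an assumed profile T(x) = Tmax * sin(pi * x)."""
    t_max = 500.0
    h = 1.0 / n_nodes
    return sum(t_max * math.sin(math.pi * (i + 0.5) * h) for i in range(n_nodes)) * h

exact = 500.0 * 2.0 / math.pi   # analytic average of the assumed profile
for n in (4, 8, 16, 32):
    approx = average_temperature(n)
    print(f"{n:3d} nodes: avg T = {approx:8.4f}, error = {approx - exact:+8.4f}")
# The error should shrink monotonically as the nodalization is refined,
# supporting a claim of nodalization convergence for this figure of merit.
```

In a real study the "exact" value is unknown; convergence is instead judged from the diminishing change in the figure of merit between successive refinements.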

1.4.3 Step 15: Assess Scalability of Models

The scalability evaluation is limited to whether the specific model or correlation is appropriate for application to the configuration and conditions of the plant and transient under evaluation.

References 5 and 14-17 document recent approaches to scaling, ranging from theoretical methods to specific applications that are of particular interest here.

1.4.4 Step 16: Determine Capability of Field Equations To Represent Processes and Phenomena and the Ability of Numeric Solutions To Approximate Equation Set

The field equation evaluation considers the acceptability of the governing equations in each component code. The objective of this evaluation is to characterize the relevance of the equations for the chosen application. Toward that end, this evaluation should consider the pedigree, key concepts, and processes culminating in the equation set solved by each component code.

The numeric solution evaluation considers convergence, property conservation, and stability of code calculations to solve the original equations when applied to the target application. The objective of this evaluation is to summarize information regarding the domain of applicability of the numerical techniques and user options that may impact the accuracy, stability, and convergence features of each component code.

A complete assessment within this step can only be performed after completing a sufficient foundation of assessment analyses. Section 3 and Appendix A to Reference 6 provide an example for application of this step.

1.4.5 Step 17: Determine Applicability of Evaluation Model To Simulate System Components

This applicability evaluation considers whether the integrated code is capable of modeling the plant systems and components. Before performing integrated analyses, the various EM options, special models, and inputs should be determined to have the inherent capability to model the major systems and subsystems required for the particular application.


1.4.6 Step 18: Prepare Input and Perform Calculations To Assess System Interactions and Global Capability

This fidelity evaluation considers the comparison of EM-calculated and measured test data from component and integral tests and, where possible, plant transient data. For these calculations, the entire EM or its major components should be exercised in comparing data against the integral database selected in Element 2.

As in the SET assessments for Step 14, the EM input for IETs should best represent the facilities and characteristics of the nuclear power plant design. Nodalization and option selection should also be consistent between the experiment and the nuclear power plant. In addition, nodalization convergence studies should be performed to the extent practicable in both the test facility and plant models. Some models are essentially lumped parameter models; in such cases, a nodalization convergence study cannot be performed, and care must be taken to ensure that the model is applicable to both the test facility and the plant. Once the IET simulations have been completed, the differences between calculated results and experimental data for important processes and phenomena should be quantified for bias and deviation.

The ability of the EM to model system interactions should also be evaluated in this step, and plant input decks should be prepared for the target applications. Sufficient analyses should be performed to determine parameter ranges expected in the nuclear power plant. These input decks also provide the groundwork for analyses performed in Step 20. Section 5 of Reference 6 provides a sample application of this step.

1.4.7 Step 19: Assess Scalability of Integrated Calculations and Data for Distortions

This scalability evaluation is limited to whether the assessment calculations and experiments exhibit otherwise unexplainable differences among facilities, or between calculated and measured data for the same facility, which may indicate experimental or code scaling distortions.

1.4.8 Step 20: Determine Evaluation Model Biases and Uncertainties

The purpose of the analysis (established in Step 1) and the complexity of the transient determine the substance of this step. For best-estimate LOCA analysis, References 3 and 18 describe the uncertainty determination and provide related guidance, augmented by Appendix A to this guide. In these examples, the uncertainty analyses have the ultimate objective of providing a singular statement of uncertainty, with respect to the acceptance criteria set forth in 10 CFR 50.46, when using the best-estimate option in that rule. This singular uncertainty statement is accomplished when the individual uncertainty contributions are determined (see Regulatory Guide 1.157, Ref. 18).

Other SRP events do not require a complete uncertainty analysis. However, in most cases, the SRP guidance is to use suitably conservative input parameters. This suitability determination may involve a limited assessment of biases and uncertainties, and closely relates to the analyses in Step 16 because what constitutes suitably conservative input depends on the set of field equations chosen for the EM. Based on the results of Step 4, individual device models can be chosen from those obtained in Step 9. The individual uncertainty (in terms of range and distribution) of each key contributor is determined from the experimental data (Step 11), input to the nuclear power plant model, and the effect on appropriate figures of merit evaluated by performing separate nuclear power plant calculations.

The figures of merit and devices chosen should be consistent. In most cases, this analysis should involve the entire EM. The last part of this step is to determine whether the degree of overall conservatism or analytical uncertainty is appropriate for the entire EM. This is done in the context of the purpose of the analysis (established in Step 1) and the regulatory requirements.


As an alternative to using suitably conservative input parameters, the EM may choose to perform an uncertainty analysis of the safety limit with an evaluation at the nominal technical specifications and setpoints being considered as the base case. The safety limit can then be analyzed with uncertainties in both phenomena and setpoints evaluated in a probabilistic manner similar to the way the 2200 °F limit is evaluated in a best-estimate LOCA analysis, as described in Regulatory Guide 1.157 (Ref. 18).

A hybrid methodology (where some parameters are treated in a bounding manner, and others are treated in a probabilistic manner) may also be acceptable.
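The probabilistic treatment described above can be illustrated with a minimal Monte Carlo sketch. The response function, input distributions, and sample size below are hypothetical placeholders for a full EM uncertainty analysis, not values endorsed by this guide:

```python
import random
import statistics

random.seed(20051230)   # fixed seed so the sketch is reproducible

def peak_clad_temperature(power_factor, chf_multiplier):
    """Hypothetical linear response surface standing in for a full EM run (deg F)."""
    return 1600.0 + 450.0 * power_factor - 300.0 * (chf_multiplier - 1.0)

# Sample the uncertain inputs from their assumed ranges and distributions.
samples = []
for _ in range(10_000):
    power = random.uniform(0.95, 1.05)    # assumed power measurement uncertainty
    chf = random.gauss(1.0, 0.05)         # assumed CHF correlation uncertainty
    samples.append(peak_clad_temperature(power, chf))

samples.sort()
pct95 = samples[int(0.95 * len(samples))]   # simple 95th-percentile estimate
print(f"mean PCT = {statistics.mean(samples):.1f} F; 95th percentile = {pct95:.1f} F")
print("meets the 2200 F acceptance criterion:", pct95 < 2200.0)
```

An actual analysis would propagate the full set of ranked uncertainty contributors through the EM itself and would justify the percentile and tolerance statements statistically.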

1.5 Adequacy Decision

The decision regarding the adequacy of the EM is the culmination of the EMDAP described in Regulatory Positions 1.1 through 1.4. Throughout the EMDAP, questions concerning the adequacy of the EM should be asked. At the end of the process, the adequacy should be questioned again to ensure that all the earlier answers are satisfactory and that intervening activities have not invalidated previously acceptable responses. If unacceptable responses indicate significant EM inadequacies, the code deficiency should be corrected and the appropriate steps in the EMDAP should be repeated to evaluate the correction.

The process continues until the ultimate question regarding EM adequacy is answered in the affirmative.

Of course, the documentation described in Regulatory Position 3 should be updated as code improvements and assessment are accomplished throughout the process. Analysis, assessment, and sensitivity studies can also lead to reassessment of the phenomena identification and ranking. Therefore, that documentation should also be revised as appropriate. It is helpful to develop a list of questions to be asked during the process and again at the end. To answer these questions, standards should be established by which the capabilities of the EM and its composite codes and models can be assessed. Section 2.2.2 of Reference 6 provides an example of the development of such standards.

2. Quality Assurance

Much of what is described throughout this regulatory guide relates to good quality assurance practices. For that reason, it is important to establish an appropriate quality assurance protocol, and to do so early in the development and assessment process. Moreover, the development, assessment, and application of an EM are three activities that relate to the requirements of Appendix B to 10 CFR Part 50. Section III of that Appendix specifies a key requirement for these activities, in that design control measures must be applied to reactor physics, thermal-hydraulic, and accident analyses. Specifically, Section III states that "[t]he design control measures shall provide for verifying or checking the adequacy of design, such as by the performance of design reviews, by the use of alternate or simplified calculational methods, or by the performance of a suitable testing program." In addition, Section III states that design changes should be subject to appropriate design control measures.

It is important to note that other parts of Appendix B are also relevant. In particular, these include Section V, which requires documented instructions (e.g., user guidance); Section XVI, which addresses corrective actions (e.g., error control, identification, and correction); and Sections VI and XVII, which address document control and records retention.

To capture the spirit and intent of Appendix B, independent peer review should be performed at key steps in the process, such as at the end of a major pass through an element. For that reason, the NRC staff recommends that a review team should be convened in the early stages of EM development, to review the EM requirements developed in Element 1. Peer review should also be employed during the later stages for major inquiries associated with the adequacy decision.


In addition to programmers, developers, and end users, the staff further recommends that the peer review team should include independent members with recognized expertise in relevant science and engineering disciplines, code numerics, and computer programming. Expert review team members who were not directly involved in developing and assessing the EM can enhance its robustness, and can be of value in identifying deficiencies that are common to large system analysis codes.

Throughout the development process, configuration control practices should be adopted to protect program integrity and allow traceability of the development of both the code version and the plant input deck used to instruct the code in how to represent the facility or nuclear power plant. Configuration control of the code version and plant input deck are separate but related elements of evaluation model development and require the same degree of quality assurance. Responsibility for these functions should be clearly established. At the end of the process, only the approved, identified code version and plant input deck should be used for licensing calculations.

3. Documentation

Proper documentation allows appraisal of the EM application to the postulated scenario.

The EM documentation should cover all elements of the EMDAP process, and should include the following information:

(1) EM requirements

(2) EM methodology

(3) code description manuals

(4) user manuals and user guidelines

(5) scaling reports

(6) assessment reports

(7) uncertainty analysis reports

3.1 Requirements

The requirements determined in Element 1 should be documented so that the EM can be assessed against known guidelines. In particular, a documented, current PIRT is important in deciding whether a particular EM feature should be modified before the EM can be applied with confidence.

3.2 Methodology

Methodology documentation should include the interrelationship of all computational devices used for the plant transient being analyzed, including the description of input and output. This should also include a complete description and specification of those portions of the EM that are not included in the computer programs, as well as a description of all other information necessary to specify the calculational procedure. A very useful part of this description would be a diagram to illustrate how the various programs and procedures are related, both in time and in function. This methodology description is needed to know exactly how the transient will be analyzed in its entirety.


3.3 Computational Device Description Manuals

A description manual is needed for each computational device that is contained in the EM. There are several important components to the manual. One component is a description of the modeling theory and associated numerical schemes and solution models, including a description of the architecture, hydrodynamics, heat structure, heat transfer models, trip systems, control systems, reactor kinetics models, and fuel behavior models.

Another component of the documentation is a models and correlations quality evaluation (MC/QE) report, which provides a basis for traceability of the models and detailed information regarding the closure relations. Information on model and correlation sources, databases, accuracy, scale-up capability, and applicability to specific plant and transient conditions should also be documented in the MC/QE report.

Thus, the MC/QE report represents a quality evaluation document that provides a blueprint as to what is in the computational device, how it got there, and where it came from. As such, the MC/QE report has three objectives:

(1) Provide information regarding the sources and quality of closure relations (that is, models and correlations or other criteria used).

(2) Describe how these closure relations are coded in the device, and ensure that the descriptions in the manual conform to the coding, and that the coding conforms to the source from which the closure relations were derived.

(3) Provide a technical rationale and justification for using these closure relations [that is, to confirm that the dominant parameters (pressure, temperature, etc.) represented by the models and correlations reflect the ranges expected in the plant and transient of interest].

Consequently, for models, correlations, and criteria used, the MC/QE report should achieve the following purposes:

(1) Provide information regarding the original source, supporting database, accuracy, and applicability to the plant-specific transient conditions.

(2) Assess the effects of using the models, correlations, and criteria outside the supporting database, and describe and justify the extrapolation method. For certain applications, the MC/QE report may recommend using options other than the defaults. In such cases, the report should provide instructions to ensure that appropriate validation is performed for the nonstandard options.

(3) Describe the implementation in the device (i.e., actual coding structure).

(4) Describe any modifications required to overcome computational difficulties.

(5) Assess the effects caused by implementation (item 3) or modifications (item 4) on the overall applicability and accuracy of the code.

Reference 19 is an example of MC/QE documents generated to meet the above requirements.


3.4 Users Manual and User Guidelines

The users manual should completely describe how to prepare all required and optional inputs, while the user guidelines should describe recommended practices for preparing all relevant input.

To minimize the risk of inappropriate program use, the guidelines should include the following information:

(1) proper use of the program for the particular plant-specific transient or accident being considered

(2) range of applicability for the transient or accident being analyzed

(3) code limitations for such transients and accidents

(4) recommended modeling options for the transient being considered, equipment required, and choice of nodalization schemes (plant nodalization should be consistent with nodalization used in assessment cases)

3.5 Scaling Reports

Reports should be provided for all scaling analyses used to support the viability of the experimental database, the scalability of models and correlations, and the scalability of the complete EM. Section 5.3 of Reference 6 provides an example and references to scaling analyses done to support adequacy evaluations.

3.6 Assessment Reports

Assessment reports are generally of three types:

(1) Developmental assessment

(2) Component assessment

(3) Integral effects test assessment

Most developmental assessment (DA) reports should comprise a set of code analyses that focus on a limited set of ranked phenomena. That is, the code or other device should analyze experiments or plant data that demonstrate (in a separate-effects manner) the capability to calculate individual phenomena and processes determined (by the PIRT) to be important for the specific scenario and plant type.

A code or other device may model certain equipment in a special way; assessment calculations should be performed for these components.

Integral effects tests (IETs) should show the EM's integral capability by comparison to relevant integral effects experiments or plant data. Some IET assessments may be general in nature, but for EM consideration, the IET assessments should include a variety of scaled facilities applicable to the plant design and transient.

For some plants and transients, code-to-code comparisons can be very helpful. In particular, if a new code or device is intended to have a limited application, the results may be compared to calculations using a previous code. However, the previous code should be well-assessed to integral or plant data for the plant type and transient being considered for the new device. Differences in key input (such as system nodalization) should be explained so that favorable comparisons provide the right answers for the right reasons. Such benchmark calculations would not replace assessment of the new code.


A significant amount of evaluation model assessment may be performed before selecting the plant-specific transient to be analyzed. In other cases, the assessment may be done outside the context of the plant- and transient-specific EM. In still other cases, the assessment may be done by organizations other than those responsible for the plant-specific analysis. If it is desired to credit these assessments to the plant and transient under consideration, great care should be taken to thoroughly evaluate and document the applicability of those assessments to the present case.

To gain confidence in the predictive capability of an EM when applied to a plant-specific event, it is important for assessment reports to achieve the following purposes:

(1) Assess the capability of the calculational device, and quantify its accuracy, in calculating the various parameters of interest (in particular, those described in the PIRT).

(2) Determine whether the calculated results are attributable to compensating errors by performing appropriate scaling and sensitivity analyses.

(3) Assess whether the calculated results are self-consistent and present a cohesive set of information that is technically rational and acceptable.

(4) Assess whether the timing of events calculated by the EM agrees with the experimental data.

(5) Assess the capability of the EM to scale to the prototypical nuclear plant. (Almost without exception, such assessments also address the experimental database used in developing or validating the EM.)

(6) Explain any unexpected or (at first glance) strange results calculated by the EM or component devices. (This is particularly important when experimental measurements are not available to give credence to the calculated results. In such cases, rational technical explanations greatly support credibility and confidence in the EM.)

Whenever the calculated results disagree with experimental data, assessment reports must also achieve the following purposes:

(7) Identify and explain the cause for the discrepancy; that is, identify and discuss the deficiency in the device (or, if necessary, discuss the inaccuracy of experimental measurements).

(8) Address how important the deficiency is to the overall results (that is, to parameters and issues of interest).

(9) Explain why a deficiency may not have an important effect on a particular scenario.

With respect to a calculational device input model and the related sensitivity studies, assessment reports must achieve the following additional purposes:

(10) Provide a nodalization diagram, along with a discussion of the nodalization rationale.

(11) Specify and discuss the boundary and initial conditions, as well as the operational conditions for the calculations.

(12) Present and discuss the results of sensitivity studies (if performed) on closure relations or other parameters.

(13) Discuss modifications to the input model (nodalization, boundary, initial, or operational conditions) resulting from sensitivity studies (if performed).

(14) Document the numerical solution convergence studies, including the basis for the time steps used and the chosen convergence criteria.

(15) Provide guidelines for performing similar analyses.
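Item (14) above calls for documented convergence studies, including the basis for the chosen time steps. A common practice is a time-step halving study on a figure of merit such as peak cladding temperature. The sketch below is purely illustrative and not from this guide: `run_case` is a hypothetical stand-in for an EM calculation, and the convergence tolerance is an assumed example value.

```python
# Illustrative time-step convergence study (hypothetical; not part of this guide).
# run_case stands in for an EM calculation that returns a figure of merit
# (e.g., peak cladding temperature in K) for a given time step dt.

def run_case(dt: float) -> float:
    # Placeholder model: a first-order-accurate result that approaches
    # a converged value of 1200.0 K as dt -> 0.
    return 1200.0 + 80.0 * dt

def convergence_study(dt0: float, tol: float = 1e-3, max_halvings: int = 10):
    """Halve the time step until the figure of merit changes by < tol (relative)."""
    dt, prev = dt0, run_case(dt0)
    history = [(dt, prev)]          # record (time step, result) for the report
    for _ in range(max_halvings):
        dt /= 2.0
        cur = run_case(dt)
        history.append((dt, cur))
        if abs(cur - prev) / abs(prev) < tol:
            return dt, cur, history  # converged time step, value, full record
        prev = cur
    raise RuntimeError("figure of merit did not converge; revisit tol or the model")

dt_conv, fom, hist = convergence_study(dt0=0.1)
```

The documented `hist` record provides exactly the kind of basis for the chosen time step and convergence criterion that item (14) asks for.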


3.7 Uncertainty Analysis Reports

Documentation should be provided for any uncertainty analyses performed as part of Step 20 of the EMDAP.

4. General Purpose Computer Programs

Very often, a general purpose transient analysis computer program (such as RELAP5, TRAC, or RETRAN) is developed to analyze a number of different events for a wide variety of plants. These codes can constitute the major portion of an EM for a particular plant and event. Generic reviews are often performed for these codes to minimize the amount of work required for plant- and event-specific reviews. These reviews, which are limited in terms of the applications and parameter ranges considered, establish the technical foundation for justifying the applicability of the codes in plant- or event-specific analyses conducted by licensees. A certain amount of generic assessment may be performed for such codes as part of the generic code development. Applying portions of the EMDAP process to an existing general purpose transient analysis computer program is useful in determining its suitability for use as the basis for an EM and can identify deficiencies in models and assessment that should be addressed before the code is submitted for NRC review.

The EMDAP starts with identification of the plant, event, and directly related phenomena. When applied to an EM that uses an existing general purpose transient analysis computer program, this process may indicate that the generic assessment does not include all of the appropriate geometry, phenomena, or necessary range of variables to demonstrate code adequacy for some of the proposed plant-specific event analyses. Evidence of this is the fact that safety evaluations for generic code reviews often contain a large number of qualifications on use of the code. To avoid such problems, it is important to identify the intended range of applicability of the generic code, including its models and correlations. The generic assessment that accompanies the code must support its intended range of applicability.

Use of the EMDAP before submitting a general purpose transient analysis computer program for review can ensure that the code models and assessment support the use of the code over its intended range of applicability. Application of the EMDAP should therefore be considered a prerequisite to submitting a general purpose transient analysis computer program for review as the basis for EMs that may be used for a variety of plant and accident types. For evaluation models that use an approved general purpose transient analysis computer program that has been scrutinized or developed using the EMDAP, the models and assessment that support the analysis of the specific plant and accident types for which the EM will be used can be identified efficiently.

5. Graded Approach to Applying the EMDAP Process

Application of the full EMDAP described in this regulatory guide may not be needed for all EMs submitted for review by the staff. Some EMs submitted for review are relatively minor modifications to existing EMs. Thus, the scope and depth of applying the development process to the EM can be based on a graded approach. The following four attributes of the EM should be considered when determining the extent to which the full model development process may be reduced for a specific application, as described in the following subsections:

(1) novelty of the revised EM compared to the currently acceptable model

(2) complexity of the event being analyzed

(3) degree of conservatism in the EM

(4) extent of any plant design or operational changes that would require reanalysis

5.1 Novelty of the Revised EM Compared to the Currently Acceptable Model

The level of effort involved in applying the development and assessment process should be commensurate with the extent of the changes made to an EM. Small changes to a robust, time-tested EM component, such as a change to a simple heat transfer or drag correlation (possibly required by an error correction), may not require full application of the EMDAP to the entire EM. In this case, scaling would only have to be considered within the context of how well the new model scales to full plant analysis if the model is developed from a reduced-scale test program. Consideration would also have to be given to how well the assessment cases for the model represent full-scale plant conditions. Implementation testing needs to be performed to show that the new model has been correctly implemented. A small subset of the entire code assessment matrix may be adequate to test the phenomena that are affected by the revised model. Another subset of the code test cases may need to be performed to ensure that other parts of the model are not inadvertently impacted by the changes. The impact of any changes attributable to an error correction would have to be evaluated for the current license analysis of record. A large model change may require application of the EMDAP on a much larger scale. Changing from an equilibrium drift-flux model to a two-fluid, nonequilibrium model is an example of a significant change that would require an extensive development and assessment process for the new EM.
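The implementation testing and non-regression checking described in Section 5.1 can be sketched as a simple automated comparison. Both correlations below are hypothetical placeholders, not from this guide or any approved EM; the sketch only illustrates confirming that a revised correlation changes results inside its intended range while leaving results outside that range untouched.

```python
# Hypothetical sketch of implementation testing for a revised correlation.
# Neither correlation is real; both are illustrative placeholders.

def htc_old(reynolds: float) -> float:
    # Placeholder "old" single-phase heat transfer correlation.
    return 0.023 * reynolds ** 0.8

def htc_new(reynolds: float) -> float:
    # Placeholder "new" correlation: revised only above Re = 1e5.
    if reynolds > 1e5:
        return 0.021 * reynolds ** 0.805
    return htc_old(reynolds)

def non_regression_check(cases, rel_tol=1e-12):
    """Confirm the change leaves results outside its intended range untouched."""
    return all(
        abs(htc_new(re) - htc_old(re)) <= rel_tol * abs(htc_old(re))
        for re in cases
    )

# Cases below the revised range must be unaffected by the change...
assert non_regression_check([1e3, 1e4, 5e4])
# ...while cases inside the revised range are expected to differ.
assert htc_new(2e5) != htc_old(2e5)
```

In practice, the "cases" would be the subset of the code assessment matrix that exercises the affected phenomena, plus a second subset confirming that unrelated results are unchanged.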

5.2 Complexity of the Event

The level of effort involved in applying the development process should be commensurate with the complexity of the EM. At first glance, the EMDAP may seem too burdensome to apply to simple events. However, application of the EMDAP to a simple event will automatically result in a simplified process. In simple events, the number of key physical phenomena should also be small, and the code assessment only needs to cover the important phenomena, even though the underlying general purpose transient analysis computer program may have models that cover a much wider range of conditions. An example of this is the system evaluation of a pressurized-water reactor pump trip analysis, in which the important phenomena may be limited to a few quantities, such as single-phase liquid wall drag, heat transfer, and pump inertia. In this case, very little assessment would need to be performed, and there may be adequate full-scale plant data for the code assessment, so there would be no need for a scaling analysis. The other extreme is an EM for a large-break LOCA, where the physical phenomena and the mathematical models are complex and cover a wide range of conditions. An extensive code development process and assessment would be required in this case.

5.3 Degree of Conservatism

The intended results of an analysis can be conservative due to a combination of code input and modeling assumptions. The amount of assessment required for a change to an EM may be significantly reduced if the documented degree of conservatism is large or if the new model can be shown to give more conservative results than the previous model. However, conservatism in just one aspect of the EM (e.g., a heat transfer correlation) cannot be used to justify conservatism in the EM as a whole, because other aspects of the model may be non-conservative and cause the overall model to be non-conservative. The degree of conservatism in the overall EM must be quantified and documented for the particular application in order to justify a reduction in assessment requirements using this argument. Showing the degree of conservatism in an EM for a simple transient may be accomplished by a relatively simple uncertainty analysis, even if the underlying computer code is a large multipurpose code. The key to simplifying the uncertainty analysis is identifying the small number of parameters and physical phenomena that are important in determining the behavior of the accident.
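Once the few important parameters are identified, the simplified uncertainty analysis described in Section 5.3 can take the form of a small Monte Carlo study. The sketch below is a minimal, hypothetical illustration: the figure-of-merit response, its coefficients, the two sampled parameters, and their uniform ranges are all invented for the example and do not come from this guide.

```python
# Illustrative Monte Carlo sketch of a simplified uncertainty analysis.
# The response model and parameter distributions are hypothetical.
import random

def figure_of_merit(htc_mult: float, power_mult: float) -> float:
    # Placeholder response: peak temperature rises with power and falls with
    # improved heat transfer; the 900/400 coefficients are invented.
    return 900.0 * power_mult + 400.0 / htc_mult

def uncertainty_study(n_samples: int = 10_000, seed: int = 0) -> float:
    """Sample the two key parameters and return the 95th-percentile result."""
    rng = random.Random(seed)
    results = sorted(
        figure_of_merit(rng.uniform(0.8, 1.2),   # heat transfer multiplier
                        rng.uniform(0.95, 1.05)) # power multiplier
        for _ in range(n_samples)
    )
    return results[int(0.95 * n_samples)]

pct95 = uncertainty_study()
```

Because only a handful of parameters are sampled, such a study remains tractable even when each evaluation of the figure of merit is a run of a large multipurpose code.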


5.4 Extent of Plant Design or Operational Changes That Require Reanalysis

The level of effort required to apply the process should be commensurate with the extent of changes made to the plant design or operation. Most changes to plant equipment or operations do not cause the plant to operate outside the range of validity of the EM. In such cases, no additional development and assessment needs to be performed. However, this may not be the case for all changes. Examples of changes that may require additional assessment of the EM are fuel bundle design changes (including grid spacer and intermediate flow mixer design changes), increases in the peak linear heat generation rate, or operational changes that may cause reliance on a different safety-grade trip that requires accurate prediction of a quantity not required in the previous analysis. In such cases, a limited application of the EMDAP (similar to that described in Section 5.1) should be sufficient.


D. IMPLEMENTATION

The purpose of this section is to provide information to applicants and licensees regarding the NRC staff's plans for using this regulatory guide. No backfitting is intended or approved in connection with the issuance of this guide. Licensees and applicants may propose means other than those specified by the Regulatory Positions expressed in this guide for meeting applicable regulations. The NRC staff has approved this guide for use as an acceptable means of complying with the Commission's regulations and for evaluating submittals in the following categories:

(1) Construction permit applications that must meet the design-basis description requirements of 10 CFR 50.34 and the relationship of the design bases to the principal design criteria described in Appendix A to 10 CFR Part 50. Chapter 15 of the SRP (Ref. 1) describes the transients and accidents that the NRC staff reviews as part of the application, and the criteria of Appendix A that specifically apply to each class of transient and accident. Chapter 15 also states that acceptable EMs should be used to analyze these transients and accidents.

(2) Operating license applications that must meet the design-basis description requirements of 10 CFR 50.34 and the relationship of the design bases to the principal design criteria described in Appendix A to 10 CFR Part 50.

(3) New or modified EMs proposed by vendors or operating reactor licensees that, in accordance with 10 CFR 50.59, require NRC staff review and approval. In such instances, a graded application of the principles of this regulatory guide can be undertaken, based on the nature and extent of the new model or proposed changes. When this graded approach is applied in proposing changes to existing EMs, the principles of this regulatory guide need only apply to the changes. The owner of the model does not need to backfit the entire model to comply with the principles of this regulatory guide. The question of whether the changes require a licensing amendment is beyond the scope of this regulatory guide. That question is addressed in 10 CFR 50.59, and its answer has no bearing on the EM development process.

REGULATORY ANALYSIS

Experience with recent model reviews has demonstrated the need for guidance in the area of transient and accident analysis methods. There is, however, a perception that new costs will be incurred as a result of the startup activities brought on by such guidance. After considering the merits of providing the guidance or taking no action, the staff concludes that the benefit of guidance embodying good principles of transient and accident code development and assessment outweighs the relatively small initial cost of startup activities and documentation.


GLOSSARY

The following definitions are provided in the context of this regulatory guide and may not apply to other uses.

AHP Analytical Hierarchical Process: A software-based analytical methodology used to combine experimental data with expert judgment to efficiently rank the relative importance of phenomena and processes to the response of an NPP to an accident or other transient in a consistent and traceable manner.

AP600 A 600-MWe advanced passive pressurized-water reactor designed by Westinghouse Electric Company.

Bottom-up An approach to a safety-related analysis similar to top-down (see below), but in which the key feature is to treat all phenomena and processes, including those associated with the analysis tools for modeling, as equally important to the facility's response to an accident or transient. Therefore, the phenomena and processes are quantified in depth.

Calculational devices Computer codes or other calculational procedures that comprise an evaluation model.

Chapter 15 events In this regulatory guide, Chapter 15 events refers to the transients and accidents that are defined in Chapter 15 of the SRP (Ref. 1) to be analyzed to meet the requirements of the General Design Criteria (GDC) of Appendix A to 10 CFR Part 50, except for the fuel assembly misloading event and all radiological consequence analyses.

CFR Code of Federal Regulations.

Closure relations Equations and correlations required to supplement the field equations that are solved to obtain the required results. This includes physical property definitions and correlations of transport phenomena.

Constituents Chemical form of any material being transported (e.g., water, air, boron).

CSAU Code scaling, applicability, and uncertainty: A process to determine the applicability, scalability, and uncertainty of a computer code in simulating an accident or other transient. A PIRT process is normally embedded within a CSAU process. See Reference 3.

DA Developmental Assessment: Calculations performed using the entire evaluation model or its individual calculational devices to validate its capability for the target application.


DNBR Departure from nucleate boiling ratio.

EMDAP Evaluation model development and assessment process.

ECCS Emergency core cooling system.

Evaluation model (EM) Calculational framework for evaluating the behavior of the reactor system during a postulated Chapter 15 event, which includes one or more computer programs and all other information needed for use in the target application.

Fields The properties that are being transported (mass, momentum, energy).

Field equations Equations that are solved to determine the transport of mass, energy, and momentum throughout the system.

Frozen The condition whereby the analytical tools and associated facility input decks remain unchanged (and under configuration control) throughout a safety analysis, thereby ensuring traceability of and consistency in the final results.

GDC General Design Criteria: Design criteria described in Appendix A to 10 CFR Part 50.

Geometrical configurations The geometrical shape that is defined for a transfer process (e.g., pool, drop, bubble, film).

H2TS Hierarchical two-tiered scaling: Methodology that uses hierarchical systems analysis methods to evaluate experimental scaling. Described in References 4 and 5.

IET Integral Effects Test: An experiment in which the primary focus is on the global system behavior and the interactions between parameters and processes.

ISTIR Integrated Structure for Technical Issue Resolution: Methodology derived for severe accident issue resolution. Described in References 4 and 5.

LBLOCA Large-break loss-of-coolant accident.

LOCA Loss-of-coolant accident.

LWR Light-water reactor.

MC/QE Models and correlations quality evaluation: A report documenting what is in a computer code, the sources used to develop the code, and the conditions under which the original source of information was developed.


Model [Without evaluation modifier] Equation or set of equations that represents a particular physical phenomenon within a calculational device.

Modules Physical components within the subsystem (e.g., reactor vessel, steam generator, pressurizer, piping run).

MYISA Maine Yankee Independent Safety Assessment.

NPP Nuclear power plant.

PCT Peak cladding temperature.

Phase State of matter involved in the transport process, usually liquid or gas. A notable exception is heat conduction through solids.

PIRT Phenomena Identification and Ranking Table: May refer to a table or process, depending on the context. The process relates to determining the relative importance of phenomena (or physical processes) to the behavior of an NPP following the initiation of an accident or other transient. A PIRT table lists the results of applying the process.

Processes Mechanisms that move properties through the system.

QA Quality Assurance.

SASM Severe accident scaling methodology.

SBLOCA Small-break loss-of-coolant accident.

Scalability (scaling) The process in which the results from a subscale facility (relative to an NPP) or the modeling features of a calculational device are evaluated to determine the degree to which they represent an NPP.

Scenario Description and time sequence of events.

Sensitivity studies The term is generic to several types of analyses; however, the definition of greatest interest here relates to those studies associated with the PIRT process and used to determine the relative importance of phenomena or processes. This may also involve analysis of experimental data that are a source of information used in the PIRT process.

SET Separate Effects Test: An experiment in which the primary focus is on a single physical phenomenon or process.

SRP Standard Review Plan: The acceptable plan for NRC reviewers, as defined in NUREG-0800, Standard Review Plan for the Review of Safety Analysis Reports for Nuclear Power Plants.

System The entire system that must be analyzed for the proposed application.


Systems code The principal computer code of an evaluation model that describes the transport of mass, momentum, and energy throughout the reactor coolant systems.

Subsystems The major components that must be considered in the analysis. For some applications, this would include the primary system, secondary system, and containment. For other applications, only the primary system would need to be considered.

Target application The safety analysis for which a specific purpose, transient type, and NPP type have been specified.

Top-down The approach to a safety-related analysis in which one sequentially determines (1) the exact objective of the analysis (regulatory action, licensing action, desired product, etc.), (2) the analysis envelope (facility or NPP, transients, analysis codes, facility-imposed geometric and operational boundary conditions, etc.), (3) all plausible phenomena or processes that have some influence on the facility or plant behavior, (4) a PIRT process, (5) applicability and scalability of the analysis tools, and (6) the influence of various uncertainties embedded in the analysis on the end product. A key feature of the top-down approach is to address those parts of the safety analysis associated with items 5 and 6 in a graduated manner based on the relative importance determined in item 4. Items 1 through 4 are independent of the analysis tools. Items 5 and 6 are dependent on the chosen analysis tools.

Uncertainty There are three separate but related definitions of primary interest: (1) the inaccuracy in experimentally derived data, typically generated by the inaccuracy of measurement systems; (2) the inaccuracy in calculating primary safety criteria or related figures of merit, typically originating in the experimental data or assumptions used to develop the analytical tools; or (3) the analytical inaccuracies related to approximations and uncertainties.


REFERENCES

(1) Draft Section 15.0.2, Review of Analytical Computer Codes, of NUREG-0800, Standard Review Plan for the Review of Safety Analysis Reports for Nuclear Power Plants, USNRC, updated by section, December 2000.2

(2) Regulatory Guide 1.70, Standard Format and Content of Safety Analysis Reports for Nuclear Power Plants (LWR Edition), Revision 3, USNRC, November 1978.3

(3) B. Boyack et al., Quantifying Reactor Safety Margins: Application of Code Scaling, Applicability, and Uncertainty Evaluation Methodology to a Large-Break, Loss-of-Coolant Accident, NUREG/CR-5249, USNRC, December 1989.4

(4) B. Boyack et al., An Integrated Structure and Scaling Methodology for Severe Accident Technical Issue Resolution, Draft NUREG/CR-5809, USNRC, November 1991.2

(5) N. Zuber et al., An Integrated Structure and Scaling Methodology for Severe Accident Technical Issue Resolution: Development of Methodology, Nuclear Engineering and Design, 186 (pp. 1-21), 1998.5

2 Electronic copies are posted on the NRC's public Web site, http://www.nrc.gov, through Rulemaking, and are available from the NRC's Reproduction and Distribution Services Section, Public Document Room, and Public Electronic Reading Room. See footnotes below.

3 Single copies of regulatory guides, both active and draft, and draft NUREG documents may be obtained free of charge by writing the Reproduction and Distribution Services Section, OCIO, USNRC, Washington, DC 20555-0001, or by fax to (301) 415-2289, or by email to DISTRIBUTION@nrc.gov. Active guides may also be purchased from the National Technical Information Service (NTIS) on a standing order basis. Details on this service may be obtained by contacting NTIS at 5285 Port Royal Road, Springfield, Virginia 22161, online at http://www.ntis.gov, or by telephone at (703) 487-4650. Copies are also available for inspection or copying for a fee from the NRC's Public Document Room (PDR), which is located at 11555 Rockville Pike, Rockville, Maryland; the PDR's mailing address is USNRC PDR, Washington, DC 20555-0001. The PDR can also be reached by telephone at (301) 415-4737 or (800) 397-4205, by fax at (301) 415-3548, and by email to PDR@nrc.gov. Copies of certain guides and many other NRC documents are available electronically through the Public Electronic Reading Room on the NRC's public Web site, http://www.nrc.gov, and through the NRC's Agencywide Documents Access and Management System (ADAMS) at the same Web site.

4 Copies are available at current rates from the U.S. Government Printing Office, P.O. Box 37082, Washington, DC 20402-9328 [telephone (202) 512-1800], or from the National Technical Information Service (NTIS), 5285 Port Royal Road, Springfield, Virginia 22161 [telephone (703) 487-4650]. Copies are available for inspection or copying for a fee from the NRC's Public Document Room (PDR), which is located at 11555 Rockville Pike, Rockville, Maryland; the PDR's mailing address is USNRC PDR, Washington, DC 20555-0001. The PDR can also be reached by telephone at (301) 415-4737 or (800) 397-4205, by fax at (301) 415-3548, and by email to PDR@nrc.gov.

5 Nuclear Engineering and Design is available for electronic download (by free subscription) through Science Direct, a service of the Reed Elsevier Group, at http://www.sciencedirect.com/science?_ob=JournalURL&_cdi=5756&_auth=y&_acct=C000050221&_version=1&_urlVersion=0&_userid=10&md5=5bc1bb0de40acc1a7b0e4129b21d809f.


(6) C.D. Fletcher et al., Adequacy Evaluation of RELAP5/MOD3, Version 3.2.1.2 for Simulating AP600 Small Break Loss-of-Coolant Accidents, INEL-96/0400 (nonproprietary version), April 1997.6

(7) G.E. Wilson and B.E. Boyack, The Role of the PIRT Process in Experiments, Code Development and Code Applications Associated with Reactor Safety Analysis, Nuclear Engineering and Design, 186 (pp. 23-37), 1998.4

(8) H. Holmstrom et al., Status of Code Uncertainty Evaluation Methodologies, in Proceedings of the International Conference on New Trends in Nuclear System Thermohydraulics, Dipartimento di Costruzioni Meccaniche Nucleari, Pisa, Italy, 1994.7

(9) M.G. Ortiz and L.S. Ghan, Uncertainty Analysis of Minimum Vessel Liquid Inventory During a Small-Break LOCA in a Babcock and Wilcox Plant, NUREG/CR-5818, USNRC, December 1992.3

(10) W. Wulff et al., Uncertainty Analysis of Suppression Pool Heating During an ATWS in a BWR-5 Plant, NUREG/CR-6200, USNRC, March 1994.3

(11) G.E. Wilson et al., Phenomena-Based Thermal Hydraulic Modeling Requirements for Systems Analysis of a Modular High Temperature Gas-Cooled Reactor, Nuclear Engineering and Design, 136 (pp. 319-333), 1992.4

(12) R.A. Shaw et al., Development of a Phenomena Identification and Ranking Table (PIRT) for Thermal-Hydraulic Phenomena During a PWR Large-Break LOCA, NUREG/CR-5074, USNRC, August 1988.3

(13) J.C. Watkins and L.S. Ghan, AHP Version 5.1, User's Manual, EGG-ERTP-10585, Idaho National Engineering Laboratory, October 1992.8

(14) J. Reyes and L. Hochreiter, Scaling Analysis for the OSU AP600 Test Facility (APEX), Nuclear Engineering and Design, 186 (pp. 53-109), November 1, 1998.4

(15) S. Banerjee et al., Scaling in the Safety of Next Generation Reactors, Nuclear Engineering and Design, 186 (pp. 111-133), November 1, 1998.4

(16) V. Ransom, W. Wang, M. Ishii, Use of an Ideal Scaled Model for Scaling Evaluation, Nuclear Engineering and Design, 186 (pp. 135-148), November 1, 1998.4

6 Electronic copies are available through the Public Electronic Reading Room on the NRC's public Web site, http://www.nrc.gov, and through the NRC's Agencywide Documents Access and Management System (ADAMS) under Accession #ML003769921.

7 Electronic copies are available through the Public Electronic Reading Room on the NRC's public Web site, http://www.nrc.gov, and through the NRC's Agencywide Documents Access and Management System (ADAMS) under Accession #ML003769914.

8 Electronic copies are available through the Public Electronic Reading Room on the NRC's public Web site, http://www.nrc.gov, and through the NRC's Agencywide Documents Access and Management System (ADAMS) under Accession #ML003769902.


(17) M. Ishii et al., The Three-Level Scaling Approach with Application to the Purdue University Multi-Dimensional Integral Test Assembly (PUMA), Nuclear Engineering and Design, 186 (pp. 177-211), November 1, 1998.4

(18) Regulatory Guide 1.157, Best-Estimate Calculations of Emergency Core Cooling System Performance, USNRC, May 1989.2

(19) RELAP5/MOD3 Code Manual, Models and Correlations, NUREG/CR-5535, Volume 4, USNRC, August 1995.3

APPENDIX A

ADDITIONAL CONSIDERATIONS IN THE USE OF THIS REGULATORY GUIDE FOR ECCS ANALYSIS

A.1 Background

As it existed prior to September 1988, 10 CFR 50.46 provided the requirements for domestic licensing of production and utilization facilities using conservative analysis methods. The acceptance criteria for peak cladding temperature, cladding oxidation, hydrogen generation, and long-term decay heat removal were specified in 10 CFR 50.46(b), while Appendix K to 10 CFR Part 50 provided specific requirements related to ECCS evaluation models. The requirements of 10 CFR 50.46 were in addition to the requirements of General Design Criterion (GDC) 35 in Appendix A to 10 CFR Part 50, which stated requirements for electric power and equipment redundancy for ECCS systems. In addition, Section 15.6.5 of NUREG-0800, the NRC's Standard Review Plan, describes for reviewers the scope of review, acceptance criteria, review procedures, and findings relevant to ECCS analyses submitted by licensees, while Section 15.0.2 of NUREG-0800 is the companion SRP section to this regulatory guide.

In September 1988, the NRC amended the requirements of 10 CFR 50.46 and Appendix K so that the regulations reflected the improved understanding of ECCS performance during reactor transients, which was obtained through extensive research performed since the NRC published the original requirements in January 1974. Examples of that body of research can be found in Reference A-1.

The NRC subsequently provided further guidance to licensees or applicants in May 1989 by publishing Regulatory Guide 1.157, Best-Estimate Calculations of Emergency Core Cooling System Performance.

The amendment to 10 CFR Part 50 and Regulatory Guide 1.157 now permit licensees or applicants to use either the conservative analysis methods defined in Appendix K or a realistic EM (commonly referred to as best-estimate plus uncertainty analysis methods). Under the realistic option, the uncertainty in the best-estimate analysis must be quantified and considered when comparing the results of the calculations with the applicable limits in 10 CFR 50.46(b), so that there is a high probability that the criteria will not be exceeded. It should be noted, however, that the acceptance criteria for peak cladding temperature, cladding oxidation, hydrogen generation, and long-term decay heat removal did not change with the September 1988 amendment.

A.2 Need for Regulatory Guidance Update for ECCS Analysis

The regulatory structure described above was strongly founded on the supporting work documented in Reference A-2. Therefore, it is important to update the regulatory structure to reflect the past 11 years of advancement in best-estimate plus uncertainty analysis methods. Examples of the extension of evolving best-estimate plus uncertainty analysis methods to both existing and new advanced reactor designs can be found in References A-3 through A-9 of this appendix.


A.3 Uncertainty Methodology

The best-estimate option in 10 CFR 50.46(a)(1)(i), allowed since 1988, requires that uncertainties in the analysis method and inputs be identified and assessed so that the uncertainty in the calculated results can be estimated. This uncertainty must be accounted for so that, when the calculated ECCS cooling performance is compared to the criteria set forth in paragraph (b) of that section, there is a high level of probability that the criteria would not be exceeded.

To support the revised 1988 ECCS rule, the NRC and its contractors and consultants developed and demonstrated an uncertainty evaluation methodology called code scaling, applicability, and uncertainty (CSAU) (Ref. A-2). While this regulatory guide is oriented toward the CSAU approach, including its embedded PIRT process, the NRC recognizes that other approaches exist. Since the CSAU demonstration was not a plant-specific application, it did not emphasize evaluation of input uncertainties related to plant operation. Proprietary methodologies that fully address uncertainties in analysis methods and inputs have been submitted to and approved by the NRC. Thus, other approaches to determining the combined uncertainty in the safety analysis are recognized as having potential advantages, as long as the EM documentation provides the necessary validation of the approach.

The safety criteria (PCT, H2 generation, etc.) specified in 10 CFR 50.46 remain unchanged, regardless of the uncertainty methodology used in a licensing or regulatory submittal. Similarly, the general guidelines in Regulatory Guide 1.157 with regard to the phenomena, components, and computer models also remain unchanged. Thus, the focus of the remainder of this section is on those considerations primarily related to determining the following:

  • relative importance of the phenomena or processes and components, and those that should be included in the uncertainty analysis
  • method of establishing the individual phenomenon or process contribution to the total uncertainty in the safety criteria
  • method to combine the individual contributions to uncertainty into the total uncertainty in the safety criteria.

CSAU and other methods address the relative importance of phenomena or processes, the difference being in the approach. CSAU uses the PIRT process, in which relative importance is established by an appropriate group of experts based on experience, experimental evidence, or computer-based sensitivity studies. When finalized, the resulting PIRTs guide the degree of effort applied to determine each individual phenomenon or process contribution to the uncertainty in the safety criteria. The PIRT process results also guide the method used to combine the individual contributions into an estimate of the total uncertainty in the safety analysis. Commonly, a response surface is developed (although it is not required) to act as a surrogate for the computer codes used in estimating the total uncertainty. The response surface can then be sampled extensively by Monte Carlo methods to determine the total uncertainty. Developing an accurate response surface from a limited number of computer calculations, followed by sufficient Monte Carlo sampling of that surface, keeps the analysis as thorough as necessary yet as economical as possible.

Therefore, the major cost of the CSAU methodology is the extensive expert staff-hours normally required for the expert panel to perform the PIRT process. Additional advantages of CSAU are that it has been used by the NRC and that the details of the methodology are well documented (Ref. A-2).


A potential disadvantage relates to the dependency of the number of computer simulations on the number of phenomena or processes determined in the PIRT that may be needed to estimate the total uncertainty. That is, at least two single parameter change runs must be made for each required phenomenon or process. In addition, cross-product runs must be made when several of the phenomena or processes have significant covariance. The cross-product runs may involve change runs of two, three, or four parameters to adequately determine the effect of nonindependent phenomena or processes.
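The response-surface and run-count ideas above can be sketched in miniature. In this sketch (a hypothetical two-parameter problem, not the CSAU demonstration itself), a handful of single-parameter and cross-product "code runs" determine the coefficients of a bilinear surrogate, which is then Monte Carlo sampled in place of the expensive code:

```python
import random
import statistics

random.seed(0)

# Hypothetical "code runs": peak cladding temperature (K) as a function of
# two uncertain parameters x1, x2, each normalized to [-1, 1].  In practice,
# each call would be one expensive thermal-hydraulic code calculation.
def code_run(x1, x2):
    return 1100.0 + 60.0 * x1 + 25.0 * x2 + 15.0 * x1 * x2  # illustrative

# Single-parameter change runs give the main sensitivities; a set of
# cross-product runs captures the covariance (interaction) term.
c0 = code_run(0.0, 0.0)
c1 = (code_run(1, 0) - code_run(-1, 0)) / 2.0            # x1 sensitivity
c2 = (code_run(0, 1) - code_run(0, -1)) / 2.0            # x2 sensitivity
c12 = (code_run(1, 1) - code_run(1, -1)
       - code_run(-1, 1) + code_run(-1, -1)) / 4.0       # interaction term

def surface(x1, x2):
    """Cheap surrogate for the code, used for extensive sampling."""
    return c0 + c1 * x1 + c2 * x2 + c12 * x1 * x2

# Monte Carlo sample the surrogate, not the code itself.
samples = sorted(surface(random.uniform(-1, 1), random.uniform(-1, 1))
                 for _ in range(100_000))
p95 = samples[int(0.95 * len(samples))]
print(f"mean PCT {statistics.mean(samples):.0f} K, 95th percentile {p95:.0f} K")
```

The nine runs of an actual factorial design are replaced here by direct finite-difference fits; the point is only that the expensive code is exercised a limited number of times, while the surrogate absorbs the 100,000-sample Monte Carlo cost.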

Methods other than the CSAU methodology may also be used for uncertainty analysis. Examples of other uncertainty methodologies that might be used are described in Reference A-7. Such submittals would require validation of the methodology (including any statistical assumptions used in the methodology) to show that it is applicable for determining the uncertainty of the parameter of interest.
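As one concrete illustration of the kind of statistical assumption that would require validation, a nonparametric (order-statistics) approach, one of the alternatives surveyed in the literature rather than a method endorsed here, bounds a population fraction using the largest of n code runs:

```python
import math

# First-order, one-sided tolerance limit from order statistics: with n
# independent code runs, the probability (confidence) that the largest
# observed value bounds at least a fraction `gamma` of the population
# is 1 - gamma**n.  The independence assumption itself is what a
# submittal would need to validate.
def confidence(n_runs, gamma=0.95):
    return 1.0 - gamma ** n_runs

def runs_needed(gamma=0.95, beta=0.95):
    """Smallest n giving confidence >= beta that the max run covers gamma."""
    return math.ceil(math.log(1.0 - beta) / math.log(gamma))

n = runs_needed()
print(n, confidence(n))  # 59 runs give >= 95% confidence on the 95th percentile
```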

An uncertainty methodology is not required for the original conservative Appendix K option in 10 CFR 50.46. Rather, the features required by Appendix K provide sufficient conservatism without the need for an uncertainty analysis. It should be noted, however, that Section II.4 of Appendix K requires that, "[t]o the extent practicable, predictions of the evaluation model, or portions thereof, shall be compared with applicable experimental information." Thus, Appendix K requires comparisons to data similar to those required for the best-estimate option, but without an accompanying uncertainty analysis. However, poor comparisons with applicable data may prevent NRC acceptance of an Appendix K model.


APPENDIX A

REFERENCES

A-1 Compendium of ECCS Research for Realistic LOCA Analysis, NUREG-1230, USNRC, December 1988.1

A-2 B. Boyack et al., Quantifying Reactor Safety Margins: Application of Code Scaling, Applicability, and Uncertainty Evaluation Methodology to a Large-Break Loss-of-Coolant Accident, NUREG/CR-5249, USNRC, December 1989.1

A-3 G.E. Wilson et al., Phenomena Identification and Ranking Tables for Westinghouse AP600 Small-Break Loss-of-Coolant Accident, Main Steam Line Break, and Steam Generator Tube Rupture Scenarios, NUREG/CR-6541, USNRC, June 1997.1

A-4 M.G. Ortiz and L.S. Ghan, Uncertainty Analysis of Minimum Vessel Liquid Inventory During a Small Break LOCA in a Babcock and Wilcox Plant, NUREG/CR-5818, USNRC, December 1992.1

A-5 U.S. Rohatgi et al., Bias in Peak Clad Temperature Predictions Due to Uncertainties in Modeling of ECC Bypass and Dissolved Non-Condensable Gas Phenomena, NUREG/CR-5254, USNRC, September 1990.1

A-6 C.D. Fletcher et al., Adequacy Evaluation of RELAP5/MOD3, Version 3.2.1.2, for Simulating AP600 Small-Break Loss-of-Coolant Accidents, INEL-96/0400 (Nonproprietary version), April 1997.2

A-7 H. Holmstrom et al., Status of Code Uncertainty Evaluation Methodologies, Proceedings of the International Conference on New Trends in Nuclear System Thermohydraulics, Dipartimento di Costruzioni Meccaniche Nucleari, Pisa, Italy, 1994.3

1 Copies are available at current rates from the U.S. Government Printing Office, P.O. Box 37082, Washington, DC 20402-9328 [telephone (202) 512-1800], or from the National Technical Information Service (NTIS), 5285 Port Royal Road, Springfield, Virginia 22161 [telephone (703) 487-4650]. Copies are available for inspection or copying for a fee from the NRC's Public Document Room (PDR), which is located at 11555 Rockville Pike, Rockville, Maryland; the PDR's mailing address is USNRC PDR, Washington, DC 20555-0001. The PDR can also be reached by telephone at (301) 415-4737 or (800) 397-4205, by fax at (301) 415-3548, and by email to PDR@nrc.gov.

2 Electronic copies are available through the Public Electronic Reading Room on the NRC's public Web site, http://www.nrc.gov, and through the NRC's Agencywide Documents Access and Management System (ADAMS) under Accession #ML003769921.

3 Electronic copies are available through the Public Electronic Reading Room on the NRC's public Web site, http://www.nrc.gov, and through the NRC's Agencywide Documents Access and Management System (ADAMS) under Accession #ML003769914.


A-8 G.E. Wilson and B.E. Boyack, The Role of the PIRT Process in Experiments, Code Development and Code Applications Associated with Reactor Safety Analysis, Nuclear Engineering and Design, 186, pp. 23-37, 1998.4

A-9 RELAP5/MOD3 Code Manual, Models and Correlations, NUREG/CR-5535, Volume 4, USNRC, August 1995.1

4 Nuclear Engineering and Design is available for electronic download (by free subscription) through Science Direct, a service of the Reed Elsevier Group, at http://www.sciencedirect.com/science?_ob=JournalURL&_cdi=5756&_auth=y&_acct=C000050221&_version=1&_urlVersion=0&_userid=10&md5=5bc1bb0de40acc1a7b0e4129b21d809f.


APPENDIX B

EXAMPLE SHOWING THE GRADED APPLICATION OF THE EMDAP

The EMDAP, in its entirety, guides the development of an evaluation model from the ground up. It presents all of the necessary considerations and assessments that should be addressed in order to ensure a complete, accurate, and robust model. For situations in which previously approved EMs require modifications, a graded approach to the EMDAP can be undertaken. In these cases, the following items should be considered in order to correctly address the issue:

(1) state of the old model
(2) extent of proposed changes
(3) new modeling
(4) change integration

To begin, every aspect of the old model should be gathered and arrayed in accordance with the EMDAP, and any necessary information that is missing or obsolete should be obtained or corrected.

This effort creates a solid foundation from which to start and makes it easier to consider the next item, the changes. Viewing the old model through the EMDAP template, determine where in the process the modifications have an effect. Does the modification introduce phenomena that have not been evaluated? Are new closure models being introduced? Depending upon the nature of the modification, these and other questions should be answered to determine the effect of the modification on the EM.

After establishing these effects, any new modeling (item 3) should be conducted in accordance with the four elements of the EMDAP:

(1) Establish Requirements for Model Capability.
(2) Develop Assessment Base.
(3) Develop Model.
(4) Assess Model Adequacy.

Finally, once all of the modeling and changes have been developed and assessed, the old EM, in its entirety, is again viewed through the EMDAP template, and the modifications are incorporated at the appropriate steps in the process. The remaining steps of the EMDAP are then followed to completion.

The example that follows demonstrates a graded application of the EMDAP for EM changes that are relatively small and that have relatively low safety significance. Each step in the process is addressed for relevance, with general guidance and example-specific information provided for clarity and instruction. The intent of the example is to focus attention on the level of consideration required for the EM modification, not on the technical details of the example, which are not complete in every respect.

Example B1: Changes to an NRC-Approved Multipurpose Thermal-Hydraulic Code

A vendor of products and technical services utilized by a number of NRC licensees has been refining a methodology for the analysis of non-LOCA transients occurring in certain PWRs. This methodology encompasses the full complement of NUREG-0800, Chapter 15, transients. As product improvements have evolved, it has become necessary for the vendor to introduce more realism into certain long-term heatup transient calculations to maintain compliance. Therefore, the vendor has proposed the following modifications to the transient EMs: the use of a more refined heat transfer model to credit heat absorbed by reactor coolant system metal, and the use of an auxiliary code to generate more realistic steam generator masses.


The multipurpose thermal-hydraulic computer code used for the analysis, called CODE1, constitutes the major portion of the EMs. The first modification is deemed necessary because prior analyses did not credit the heat energy deposited in the reactor coolant system (RCS) metal during heatup; furthermore, when crediting this phenomenon, it was determined that the current heat conduction model in CODE1 gave nonconservative results. Therefore, a more refined heat conduction model is proposed that helps maintain sufficient safety margin. The second modification is proposed because the steam generator (SG) model in CODE1 has been found to conservatively underpredict secondary-side SG water masses following transient initiation. Therefore, a stand-alone thermal-hydraulic computer code, known as AUX1, will be used as an auxiliary code to give more realistic, but still conservative, secondary-side SG water masses. Both modifications will be applied to future licensing actions.

1.1 Element 1 - Establish Requirements for Evaluation Model Capability

1.1.1 Step 1. Specify Analysis Purpose, Transient Class, and Power Plant Class

Old Evaluation Model

The analysis purpose of this EM is the evaluation of the following transients to establish the compliance of certain PWRs with the applicable general design criteria (GDC):

(1) Loss of External Load
(2) Turbine Trip
(3) Loss of Condenser Vacuum
(4) Closure of Main Steam Isolation Valve
(5) Steam Pressure Regulator Failure
(6) Loss of Non-Emergency AC Power
(7) Loss of Normal Feedwater Flow
(8) Feedwater System Pipe Breaks Inside and Outside Containment

New Evaluation Model

The analysis purpose of this EM is the evaluation of the following transients to establish the compliance of certain PWRs with the applicable GDC:

(1) Loss of Non-Emergency AC Power (LOAC)
(2) Loss of Normal Feedwater Flow (LONF)
(3) Feedwater System Pipe Breaks (FLB)

The same PWR models will be analyzed.

The PWRs analyzed are four-loop models. The cores consist of between 180 and 200 fuel rod bundles containing between 254 and 272 fuel pins in a 17 x 17 array, which generate between 3293 and 3411 MWt at normal full-power operation. Each of the four loops consists of a steam generator, reactor coolant pump (RCP), associated piping, and ECCS. The steam generators have a vertical U-tube configuration containing Inconel tubes, and the RCPs are of single-stage, centrifugal design. Each plant has an electrically heated pressurizer connected to the hot leg of one of its four loops.


1.1.2 Step 2. Specify Figures of Merit

Example: The same figures of merit apply for each transient.

Loss of Normal Feedwater Flow Figures of Merit:

(1) Pressure in the reactor coolant and main steam systems should be maintained below 110% of the design pressures for low-probability events and below 120% of the design pressures for very low-probability events such as double-ended guillotine breaks.

(2) The potential for core damage is evaluated on the basis that it is acceptable if the minimum DNBR remains above the 95/95 DNBR limit for PWRs based on acceptable correlations.

1.1.3 Step 3. Identify Systems, Components, Phases, Geometries, Fields, and Processes That Must Be Modeled

Note: When modifying existing codes that are a part of the EM, identify each by its frozen version number.

Old Evaluation Model

CODE1/V4.03 was used in the analysis. It is an advanced, best-estimate computer program designed to calculate the transient reactor behavior of a PWR. As such, it incorporates four-component (liquid water, liquid solute, water vapor, and noncondensible gas), two-fluid (liquid-gas) modeling of the thermal-hydraulic processes involved in such transients. No other calculational device is used in the EM.

New Evaluation Model

CODE1/V4.03 remains the analysis tool, with all of the systems, components, phases, geometries, fields, and processes modeled as before. However, two changes will be incorporated: (1) the use of a different variation of the heat conduction equation to model the heat transfer from the coolant to the thick metal of the vessel lower plenum, vessel bypass, vessel downcomer, hot leg, cold leg, and pressurizer; and (2) the use of the AUX1 code to calculate SG water mass following transient initiation.

1.1.4 Step 4. Identify and Rank Key Phenomena and Processes

Examine the original PIRT for the transients of concern, and identify the phenomena that are relevant to the proposed EM changes. Determine whether any new phenomena should be added to the original PIRT or whether the originally listed phenomena are affected by the modifications. Then develop a new PIRT, based solely on the changes, that incorporates the old and new information.

Example:

Change 1: Crediting the heat absorption characteristics of the RCS thick metal masses will ultimately result in a reduction of RCS fluid temperature and pressurizer water level following the transient. The transfer of heat from the coolant to the metal will reduce the RCS temperature changes and subsequently reduce the pressurizer insurge attributable to fluid expansion.
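A back-of-the-envelope energy balance illustrates why crediting the metal heat capacity reduces the insurge; every number below is an illustrative assumption, not plant data:

```python
# Illustrative energy balance: crediting heat stored in RCS metal lowers
# the coolant temperature rise and hence the pressurizer insurge.
# All values are assumptions for illustration only.
Q = 5.0e9                    # net heat added to RCS fluid over the transient, J
m_f, cp_f = 2.4e5, 5500.0    # coolant mass (kg), specific heat (J/kg-K)
m_m, cp_m = 3.0e5, 500.0     # credited metal mass (kg), specific heat (J/kg-K)

dT_no_credit = Q / (m_f * cp_f)               # all heat stays in the fluid
dT_credit = Q / (m_f * cp_f + m_m * cp_m)     # fluid and metal heat up together

V, beta = 350.0, 3.0e-3      # RCS liquid volume (m^3), expansivity (1/K)
insurge_no_credit = V * beta * dT_no_credit   # volume swept into pressurizer
insurge_credit = V * beta * dT_credit

print(f"dT: {dT_no_credit:.2f} K without credit, {dT_credit:.2f} K with credit")
print(f"insurge: {insurge_no_credit:.2f} m^3 vs {insurge_credit:.2f} m^3")
```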

Change 2: The AUX1 SG model is more detailed than the CODE1 model and will produce different values for parameters such as secondary-side mass and pressure, circulation ratio, and primary-side temperatures.


Original Non-LOCA PIRT for Heatup Transients

System   Modules          Phenomena                              LOAC  LONF  FLB
RCS      Core             Fuel Heat Transfer                      L     L     M
                          Decay Heat                              H     H     M
                          Thermal hydraulic-nuclear feedback
         Pressurizer      Thermal-hydraulics                      H     H     H
                          Surgeline hydraulics
         Coolant Loops    Natural Circulation                     H     L     H
                          Critical Flow                           L     L     H
                          1- and 2-phase pump behavior
                          Valve leak flow
                          Structural heat absorption and losses   L     L     L
Steam    Steam Generator  Primary side thermal hydraulics         H     H     H
System                    Secondary side thermal hydraulics       M     M     M
                          Separator behavior                      L     L     L

where:
LOAC = Loss of Non-Emergency AC Power
LONF = Loss of Normal Feedwater Flow
FLB = Feedline Break
H = Phenomenon has a high impact on the figure of merit
M = Phenomenon has a medium impact on the figure of merit
L = Phenomenon has a low impact on the figure of merit

Note: The old EM was used in the analysis of all eight of the heatup transients; however, since the proposed modifications deal with only three of the transients, only three are relevant to the present analysis.


PIRT Based on Proposed EM Changes

Change  System        Modules              Phenomena                              LOAC  LONF  FLB
1 & 2   RCS           Vessel-Lower Plenum  Structural heat absorption and losses   L     L     L
1 & 2                 Vessel-Bypass        Structural heat absorption and losses   L     L     L
1 & 2                 Downcomer            Structural heat absorption and losses   L     L     L
1 & 2                 Cold-Leg             Structural heat absorption and losses   L     L     L
1 & 2                 Hot-Leg              Structural heat absorption and losses   L     L     L
1 & 2                 Pressurizer          Thermal hydraulics                      H     H     H
                                           Surgeline hydraulics
2       Steam Supply  SG                   Primary side thermal hydraulics         H     H     H
        System                             Secondary side thermal hydraulics

The newly developed PIRT will serve as the basis and justification for how the remaining steps of the EMDAP are addressed; high-ranking phenomena require increased emphasis, whereas certain aspects of the EMDAP can be de-emphasized for low-ranking phenomena.


1.2 Element 2 - Develop Assessment Base

1.2.1 Step 5. Specify Objectives for Assessment Base

Identify assessment objectives, specifying any qualitative or quantitative acceptance criteria.

Old Evaluation Model

The old EM has been validated using a database that met required standards and objectives.

New Evaluation Model

Objective 1: Demonstrate the influence of nodalization on the calculation.

Objective 2: Demonstrate that the coded equations of the new heat transfer model achieve results that are in agreement with the known solutions to standard problems.

Objective 3: Demonstrate that the coded heat transfer model achieves results for localized behavior that are within the uncertainty bands of corresponding SET results.

Objective 4: Demonstrate that the modified EM results are similar in trend and magnitude to the old EM results and within the spread or uncertainty bands of IET data.

1.2.2 Step 6. Perform Scaling Analysis and Identify Similarity Criteria

If it has not been conducted previously, a scaling analysis and similarity criteria identification should be conducted for any experimental data sources needed to accomplish the objectives specified in the previous step.

Example: There is no need to perform any additional scaling analyses. IET data from experimental facilities that have been previously analyzed for scale will be used; the newly proposed heat transfer model is based on first principles; and no new separate or integral effects experimental data sources have been identified for inclusion into the assessment database.

1.2.3 Step 7. Identify Existing Data or Perform IETs and SETs To Complete Database

Complete the assessment database by identifying the existing data needed to accomplish the stated assessment objectives of Step 5. Based on the availability of assessment data and the PIRT of Step 4, a decision should be made as to the need for further testing or experimentation.


Example: The following table shows the experimental database that is available for the assessment of the proposed EM modifications. A "C" indicates coverage of the parameter group:

(1) Structural Heat Absorption and Losses
(2) Primary-Side SG Thermal-Hydraulics
(3) Secondary-Side SG Thermal-Hydraulics
(4) Pressurizer Thermal-Hydraulics
(5) Surgeline Hydraulics

FACILITY                                       TEST NO.  TRANSIENTS   (1) (2) (3) (4) (5)
IET: Loss-of-Fluid Test (LOFT)                 L6-5      LONF / LOAC   C   C   C   C   C
IET: Loop for Off-Normal Behavior
     Investigations (LOBI)                     BT-06     FLB           C   C   C   C   C
SET: CISE - Pressurizer Flooding (Italy)                                           C
SET: NEPTUNUS (Netherlands)                                                        C
SET: MB-2 (USA, Westinghouse)                                          C   C   C

1.2.4 Step 8. Evaluate Effects of IET Distortion and SET Scaleup Capability

Example: No further consideration is needed; see Step 6.


1.2.5 Step 9. Determine Experimental Uncertainties as Appropriate

For each parameter involved in the EM changes, quantify the experimental uncertainty and acceptance criteria.

Example: The following table shows the experimental uncertainty in the relevant parameter values.

Experimental Uncertainty of Relevant Parameters

PHENOMENA              RELATED PARAMETERS         EXPERIMENTAL UNCERTAINTY  FIGURES OF MERIT
Structural Heat        RCS Fluid Temperature      +/- 100 K                 ~ 100 K
Absorption and Losses  Fluid Enthalpy             +/-                       ~
                       Heat Transferred to Metal  +/-                       ~
                       Metal Temperature          +/-                       ~
Pressurizer            Pressurizer Flow Rate      +/-                       ~
Thermal-Hydraulics     Pressurizer Pressure       +/-                       ~
                       Pressurizer Water Volume   +/-                       ~
                       RCS Pressure               +/-                       ~
Steam Generator        SG Liquid Volume           +/-                       ~
Thermal-Hydraulics     Steam Mass Flow Rate       +/-                       ~


1.3 Element 3 - Develop Evaluation Model

1.3.1 Step 10. Establish an Evaluation Model Development Plan

Considering the areas of focus listed in this regulatory guide, along with procedures from the developer's quality assurance program, establish a strategy for the development and implementation of the EM changes. The following table provides excerpts from NUREG-1737, Software Quality Assurance Procedures for NRC Thermal-Hydraulic Codes, dated December 31, 2000.1

Life Cycle Activities        Development Product               Verification & Validation                       Example of Standard
Initial Planning             Software Quality Assurance Plan   Management Review
Requirements Definition      Software Requirements             • Verification of Requirements                  S1
                             Specification                     • Review of test plan and acceptance criteria
Software Design and          Software Design Documentation     • Review of Design                              S2
Implementation (Coding)      Source Code                       • Review / Inspection of Source Code            S3
Verification Testing         Verification Testing Report       • Verification of Program Integration
                                                               • Verification of Test Results
Validation Testing           Validation Testing Report         • Validation of Program                         S4
Installation and Acceptance  • Installation Package            • Verification of Installation Package          S5
                             • Program Upgrade Documentation   • Verification of Program Documentation

S1: Functional Requirements. The theoretical basis and mathematical model consistent with the phenomena to be modeled are described. The range of parameters over which the model is applicable is specified. All figures, equations, and references necessary to specify the functional requirements for the design of the software are documented.

Performance Requirements: Resolution of speed, accuracy, and scalability issues requires development of a test plan and acceptance criteria. The code should be exercised using the test plan, and the results should meet the acceptance criteria. The test plan should include the following information:

(a) The number and types of qualification problems to be completed;
(b) The rationale for the problem choice;
(c) The specific range of parameters and boundary conditions for which successful execution of the problem set will qualify the code to meet specific functional requirements;
(d) Descriptions of the code input test problems;
(e) A description of what code results will be compared against;
(f) Significant features not to be tested and the reasons;
(g) Acceptance criteria for each item to be tested;
(h) Discussion of scalability, if applicable.

1 Electronic copies are available through the NRC's Agencywide Documents Access and Management System (ADAMS) under Accession #ML010170081. See http://www.nrc.gov/reading-rm/adams/web-based.html.


Validation Requirements:

Excellent Agreement applies when the code exhibits no deficiencies in modeling a given behavior. Major and minor phenomena and trends are correctly predicted. The calculated results are judged to agree closely with the data.

Reasonable Agreement applies when the code exhibits minor deficiencies. Overall, the code provides an acceptable prediction. All major trends and phenomena are predicted correctly. Differences between calculated values and data are greater than those deemed acceptable for excellent agreement.

Minimal Agreement applies when the code exhibits significant deficiencies. Overall, the code provides a prediction that is not acceptable. Some major trends or phenomena are not predicted correctly, and some calculated values lie considerably outside the specified or inferred uncertainty bands of the data.

Insufficient Agreement applies when the code exhibits major deficiencies. The code provides an unacceptable prediction of the test data because major trends are not predicted correctly. Most calculated values lie outside the specified or inferred uncertainty bands of the data.

For PIRT high-ranked phenomena, the minimum standard for acceptability with respect to fidelity is generally reasonable agreement.

S2: The software design and implementation documentation shall describe the logical structure, information flow, data structures, the subroutine and function calling hierarchy, variable definitions, identification of inputs and outputs, and other relevant parameters. It shall include a tree showing the relationship among the modules and a database describing each module, array, variable, and other parameters used among code modules.

S3: The source code listing or update listing shall be reviewed for the following attributes. The listing shall contain sufficient explanation to permit review of these attributes:

(a) Traceability between the source code and the corresponding design specification: Analyze coding for correctness, consistency, completeness, and accuracy.

(b) Functionality: Evaluate coding for correctness, consistency, completeness, accuracy, and testability. Also, evaluate design specifications for compliance with established standards, practices, and conventions. Assess source code quality.

(c) Interfaces: Evaluate coding with hardware, operator, and software interface design documentation for correctness, consistency, and accuracy. At a minimum, analyze data items at each interface.

S4: All testing activities shall be documented and shall include information on the date of the test, code version tested, test executed, discussion of the test results, and whether the software meets the acceptance test criteria.

S5:

Installation Package: The program installation package shall consist of program installation procedures, files of the program, selected test cases for use in verifying installation, and expected output from the test cases.

Upgrading Program Documentation: The existing program documentation shall be revised and enhanced to provide a complete description of the program. Code manuals will be produced and upgraded concurrently with the code development process. The set of code manuals will cover the following subjects: Theory, Models & Correlations Manual; User's Manual; Programmer's Manual; and Developmental Assessment Manual.


1.3.2 Step 11. Establish Evaluation Model Structure

In accordance with the plan outlined in the previous step and this regulatory guide, describe the structure of the new methodologies. Describe how each is integrated into the EM (i.e., written into the source code, input as a user-defined function, etc.).

Example:

Change 1: The new heat transfer model will be written in FORTRAN and input using the User-Defined Subroutine option available in CODE1. The subroutine will only be used for the specific metal structures identified for the modification.

Change 2: CODE1 and AUX1 will receive the same input parameters and boundary conditions, and each code will run until a steady state is reached. From this point, AUX1 will be restarted for transient initiation and allowed to run to completion. The steam generator mass outputs from AUX1 will then be input as a tabular function at the restart of the CODE1 calculation. CODE1 will then run until completion of the transient.

In both cases, the software design requirements will be established in accordance with the plan described in the previous step. At a minimum, the supporting documentation will include the nodalization, defense of the chosen parameters, any needed sensitivity studies, and justification of the conservative nature of the input parameters.
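The one-way AUX1-to-CODE1 handoff described for Change 2 might be sketched as follows. The file layout, keyword names, and function names here are hypothetical, since the actual CODE1 and AUX1 input formats are proprietary:

```python
# Sketch of the one-way AUX1 -> CODE1 coupling: read a time history of
# secondary-side SG mass from AUX1 output and emit it as a tabular
# (time, mass) function for the CODE1 restart input deck.
# File layouts and keywords below are hypothetical.

def read_aux1_masses(lines):
    """Parse 'time(s)  sg_mass(kg)' pairs from AUX1 output lines."""
    table = []
    for line in lines:
        fields = line.split()
        if len(fields) == 2:  # skip headers and any non-data lines
            table.append((float(fields[0]), float(fields[1])))
    return table

def write_code1_table(table):
    """Render the pairs as a hypothetical CODE1 tabular-function block."""
    rows = [f"  {t:10.2f}  {m:12.1f}" for t, m in table]
    return "\n".join(["*TABULAR-FUNCTION SGMASS"] + rows + ["*END"])

aux1_output = ["0.00  44500.0", "10.00  41200.0", "20.00  38800.0"]
print(write_code1_table(read_aux1_masses(aux1_output)))
```

Because the coupling is one-way (no feedback from CODE1 to AUX1), the handoff reduces to a file translation of this kind at the CODE1 restart point.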

1.3.3 Step 12. Develop or Incorporate Closure Models

Example: In the strictest sense, the newly proposed heat transfer equation is not a closure model, but a variation of the heat conduction field equation currently used in the EM. However, because its application is based on particular geometries, the separate effects of the phenomenon must be assessed in the same manner as closure models.

At this point in the EMDAP, the proposed EM modifications are incorporated into the old EM, creating a newly revised EM. From this point forward, the EMDAP will be followed to assess the newly revised EM as a whole.

1.4 Element 4 - Assess Evaluation Model Adequacy

1.4.1 Step 13. Determine Model Pedigree and Applicability To Simulate Physical Processes

Example:

The information for the determination of model pedigree and applicability is given in accordance with the model development plan. For the change in the heat conduction equation, the assumption of a uniform temperature distribution within the metal must be justified; this assumption permits the use of the proposed heat transfer solution technique. For the model pedigree determination, information is provided showing that the internal resistance of the metal is negligible in comparison with the external resistance at its surface and that the Biot number is less than 0.1. In determining applicability, a consistency evaluation is conducted on the discretized heat equation. By expanding the equation in a Taylor series and evaluating the truncation error, the discretized equation is shown to be consistent with the analytical equation. The determination of model pedigree and applicability was conducted in the initial assessment of AUX1. The assessment of its incorporation into the EM will begin in Step 16.
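The lumped-parameter argument above follows the standard Biot-number criterion from conduction heat transfer. The relations below are textbook statements included for reference, not taken from the EM documentation:

```latex
% Lumped-capacitance validity criterion (L_c = V/A_s is the characteristic length):
\mathrm{Bi} = \frac{h L_c}{k} < 0.1

% Resulting lumped heat balance for the metal structure:
\rho V c \, \frac{dT}{dt} = -h A_s \left( T - T_\infty \right)

% Closed-form solution for constant h and T_\infty:
T(t) = T_\infty + \left( T_0 - T_\infty \right) e^{-h A_s t / (\rho V c)}
```

When Bi < 0.1, the internal conduction resistance is small relative to the surface convection resistance, so the uniform-temperature assumption introduces little error.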


1.4.2 Step 14. Prepare Input and Perform Calculations To Assess Model Fidelity or Accuracy

Perform as directed.

1.4.3 Step 15. Assess Scalability of Models

Example:

The scaling issue was resolved in Step 6.

1.4.4 Step 16. Determine Capability of Field Equations to Represent Processes and Phenomena and the Ability of Numeric Solutions to Approximate Equation Set

Example:

In the heat transfer modification, the coupling of the fuel/structural heat transfer to the thermal-hydraulic fluid behavior has not been altered, so there is no need to evaluate the remaining field equations, which have been reviewed previously. It is determined that there are no restrictions on the solution's range of applicability and that no additional constitutive relations are needed. Also, the new solution technique is in no way related to any restrictions placed on the usage of CODE1 in the code's original safety evaluation report (SER).

In the numeric solution evaluation, the key focus is the accuracy, stability, and convergence of code calculations to a solution of the original equations. In this review, the solution technique is shown to be accurate and numerically stable for any time step; errors approach zero as the thickness and time step approach zero, and errors due to finite thicknesses and large time steps always occur in the direction that minimizes heat transfer. For Change 2, the field equations and numerics of the AUX1 code were validated in the initial assessment; therefore, no further consideration is required.
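The unconditional-stability and convergence behavior claimed above can be illustrated with an implicit (backward Euler) integration of a lumped heat balance. This is a generic numerical sketch, not the actual CODE1 solution technique, and all constants are arbitrary:

```python
import math

def implicit_euler(T0, T_inf, tau, dt, t_end):
    """Backward Euler for dT/dt = -(T - T_inf)/tau:
    T_{n+1} = (T_n + (dt/tau) * T_inf) / (1 + dt/tau)."""
    n = round(t_end / dt)  # fixed step count avoids float-accumulation drift
    T = T0
    for _ in range(n):
        T = (T + (dt / tau) * T_inf) / (1.0 + dt / tau)
    return T

def exact(T0, T_inf, tau, t):
    return T_inf + (T0 - T_inf) * math.exp(-t / tau)

T0, T_inf, tau, t_end = 500.0, 300.0, 2.0, 10.0

# Stability: even one enormous time step stays bounded between T_inf and T0.
T_big = implicit_euler(T0, T_inf, tau, dt=100.0, t_end=100.0)

# Convergence: halving dt roughly halves the error (first-order scheme),
# so the error approaches zero as the time step approaches zero.
err_coarse = abs(implicit_euler(T0, T_inf, tau, 0.1, t_end) - exact(T0, T_inf, tau, t_end))
err_fine = abs(implicit_euler(T0, T_inf, tau, 0.05, t_end) - exact(T0, T_inf, tau, t_end))
```

Incidentally, this scheme also errs on the side of slower cooling (the numerical temperature stays above the exact solution for a cooling transient), loosely analogous to the conservative error direction attributed to the modified technique, though the actual EM would have to demonstrate that property for its own equations.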

1.4.5 Step 17. Determine Applicability of Evaluation Model to Simulate Systems and Components

Example:

The applicability of the EM to simulate systems and components was determined and documented in the initial assessment of the old EM and AUX1. The applicability of the new heat conduction equation was determined in Step 13.

1.4.6 Step 18. Prepare Input and Perform Calculations To Assess System Interactions and Global Capability

Perform as directed.

1.4.7 Step 19. Assess Scalability of Integral Calculations and Data for Distortions

Example:

Evaluate the code results according to the specified acceptance criteria. If distortions are present, assess the scalability of the integral calculation.

1.4.8 Step 20. Determine Evaluation Model Biases and Uncertainties

Given the relatively minor nature of the changes, no uncertainty analysis is warranted. Therefore, provide a qualitative statement on the degree of conservatism embedded in the EM, describing the assumptions or input values that ensure conservatism in the analysis.

1.5 Adequacy Decision

Based on the comparisons made in Step 18, the model change is judged to be either adequate or inadequate.
