ML20003C990

Comments on Steps Involved in NRC Plan for Independent Code Assessment. Summary of Assessment Steps Is Provided in Encl Table 1. Draft Ltr from GE Encl.
Person / Time
Issue date: 07/08/1980
From: Shotkin L
NRC OFFICE OF NUCLEAR REGULATORY RESEARCH (RES)
To: Fabic S
NRC OFFICE OF NUCLEAR REGULATORY RESEARCH (RES)
Shared Package: ML20003C956
References: NUDOCS 8103181017



UNITED STATES
NUCLEAR REGULATORY COMMISSION
WASHINGTON, D.C. 20555

JUL 08 1980

MEMORANDUM FOR: S. Fabic, Chief, Analysis Development Branch, RSR

FROM: L. M. Shotkin, Analysis Development Branch, RSR

SUBJECT: THOUGHTS ON THE ADB PLAN FOR INDEPENDENT CODE ASSESSMENT

The ADB plan for independent assessment of released codes consists of five separate steps:

1. Release of the code by the code developers.
2. Qualification of the released code against existing data.
3. Testing of the released code against new data.
4. Improvements suggested to code developers for the next released version (or code modification).
5. Extrapolation of code results to full-scale LWR's.

The qualification and testing of the released code against data, and the suggestion of improvements to code developers (steps 2, 3 and 4), involve the bulk of the effort of independent code assessment. This memo suggests that by treating each of these five steps separately, and by further subdividing steps 2, 3 and 4 from steps 1 and 5, the ADB plan can be made more acceptable to the technical community.

1. Release of the code by the code developer.

There are good reasons why the NRC must release codes based on practical schedule requirements, rather than on whether the code has reached a certain level of "maturity."

NRC requires calculations to be performed in support of experimental facilities, to audit vendor calculations, to participate in Standard Problem Exercises, to explore new accident scenarios, etc. When these calculation requirements arise, NRC must use the best tool available at the time and does not have the luxury to wait several years for the "ultimate" code.

It should thus be made clear that code versions are typically released with known deficiencies:

a. Approximate models for phenomena in accident scenarios for which the code is designed.
b. Lack of models for phenomena in accident scenarios for which the code is not designed.

This means, of course, that the code cannot predict "surprises" for phenomena which have not been properly modeled.

2. Qualification of the released code against existing data.

There are four aspects of this process which appear to be firmly supported by the technical community:

a. An accepted way to qualify a code is to compare its results against experimental data.
b. These comparisons should be performed systematically, according to a well thought out "test matrix."
c. Have several groups of individuals perform these comparisons to:
   i. prevent bias and get different points of view,
   ii. get the work done quicker, because it is done in parallel and not in series.
d. Present the results of code vs. data comparisons in a formal, defined way.

There is a fifth aspect which should be added to these four for the qualification process: namely, that the acceptance criteria can be specified separately for each test or test series. That is, if the loop seal blew, then the code should calculate it. Similarly, if the lower plenum emptied or if a pool developed in the upper plenum, then the code should be assessed relative to its ability to simulate these major phenomena for each test or test series. To be more explicit: for the code qualification process against existing data it is not necessary to define general integral acceptance criteria; specific ones are more suitable.
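The test-specific acceptance idea can be sketched in code: each test carries its own checklist of major phenomena, and the code is judged against that checklist rather than against a general integral criterion. All test and phenomenon names below are hypothetical illustrations, not taken from the memo.

```python
# Sketch of test-specific acceptance criteria for code qualification.
# Each test in the matrix lists the major phenomena observed in the
# experiment; the code "passes" that test only if it calculates each
# of them. All names here are hypothetical illustrations.

def assess_test(observed_phenomena, code_predicted_phenomena):
    """Return (passed, missed) for one test in the test matrix."""
    missed = observed_phenomena - code_predicted_phenomena
    return (len(missed) == 0, sorted(missed))

# Hypothetical test: the loop seal blew and the lower plenum emptied,
# but the code only captured the loop-seal clearing.
observed = {"loop_seal_clearing", "lower_plenum_emptying"}
predicted = {"loop_seal_clearing"}

passed, missed = assess_test(observed, predicted)
print(passed, missed)   # False ['lower_plenum_emptying']
```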

3. Testing of the released code against new data.

The procedure for this process is essentially the same as for (2) above, except that blind post-test predictions are made of the data. The administrative procedures for this process are in place and work reasonably well. Again, here, the acceptance criteria should be specific for each test, rather than general-integral.

4. Improvements suggested to code developers.

These improvements will, most logically, be suggested based on the specific data comparisons for each test or test series. Some restraint may be appropriate here to avoid over-criticism of adequate modeling. It does not appear that general acceptance criteria are helpful at this stage of the independent assessment process.

5. Extrapolation of code results to full-scale LWR's.

The best way, of course, is to test the code against data in full-scale facilities, and we are doing just that in several experimental programs (2D/3D, GE/Lynn, etc.). Beyond that, how can we use the results of scaled experiments, from steps 2 and 3 above, to justify extrapolatability? It is here that general criteria based on integral phenomena, as presented at the 6/27/80 meeting, would be most appropriate.

The above arguments are summarized in Table I.

L. M. Shotkin
Analysis Development Branch
Division of Reactor Safety Research

Enclosure: as stated

cc w/encl:
N. Zuber
F. Odar
P. Andersen

TABLE I. ASSESSMENT CRITERIA

1. Code Release
   Source of assessment criteria: developmental assessment against range of data, influenced by NRC schedule requirements.

2. Qualifying Code Against Existing Data
   Source of assessment criteria: comparisons of code with data for specific tests in test matrix; data should include "spread" in instruments within same computational cell.
   Type of assessment criteria: specific for test or test-series; specific for accident type.

3. Testing Code Against New Data
   Source of assessment criteria: same as 2 above.
   Type of assessment criteria: same as 2 above.

4. Suggested Improvements to Code Developers
   Source of assessment criteria: based on code's ability to calculate specific phenomena during accident scenarios for which code was designed.
   Type of assessment criteria: specific for test or test-series; specific for accident type.

5. Code Extrapolation to LWR's
   Source of assessment criteria: comparison with full-scale data relevant to accident type; comparison with data in 2 and 3 above, based on integral criteria.
   Type of assessment criteria: specific for accident type; integral for accident type.

DRAFT

To: Dr. S. Fabic, Branch Chief
    Analysis Development Branch
    Nuclear Regulatory Commission

From: G. E. Dix, Manager
      Safety & Thermal Hydraulic Technology
      General Electric Company

cc: P. S. Andersen
    R. B. Duffy
    P. North
    L. H. Sullivan
    L. S. Tong

Subject: NRC Code Assessment Approach

Reference: Fabic presentation, June 26-27, 1980.

I have carefully reviewed the subject approach as defined at the June Code Assessment Review Group meeting (reference). I believe that this thoughtful approach provides solutions to many of the problems associated with best-estimate code assessment and, ultimately, defining reactor application uncertainties. However, I believe there are several weaknesses in the defined approach that are critical to its success. Furthermore, since the assessment process is currently proceeding, I believe it is urgent to correct the problems related to data utilization and control before all opportunities for appropriate virgin facility and data comparisons are expended.

The key concerns with the current NRC approach are summarized in the attachment. The primary items concerning data utilization and scheduling are further amplified below.

From a model development viewpoint alone, it is most efficient to emphasize assessment of basic and component models first, then proceed to extensive integral predictions when basic models are judged acceptable.*

* Acceptance criteria for basic and component models should be established in advance, based upon estimates of the sensitivity of the final reactor predictions to each model. Such criteria would necessarily involve specifications of the data to be used. This should be part of the original model development specifications.
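The footnote's suggestion, setting acceptance criteria in advance from estimates of the sensitivity of the final reactor predictions to each model, can be sketched roughly as follows. The model names, sensitivity values, and base tolerance are hypothetical illustrations, not values from the letter.

```python
# Sketch: acceptance tolerances for basic/component models set in
# advance from sensitivity estimates. The more sensitive the final
# reactor prediction is to a model, the tighter that model's
# tolerance. Sensitivities and the base tolerance are hypothetical.

BASE_TOLERANCE = 0.20  # allowed fractional error at unit sensitivity

sensitivities = {  # estimated d(reactor prediction) / d(model output)
    "interfacial_drag": 2.0,
    "wall_heat_transfer": 1.0,
    "ccfl_correlation": 4.0,
}

def tolerance(model):
    """Allowed fractional model error, inversely scaled by sensitivity."""
    return BASE_TOLERANCE / sensitivities[model]

for name in sensitivities:
    print(name, round(tolerance(name), 3))
# The CCFL correlation gets the tightest tolerance (0.05) because the
# final prediction is most sensitive to it.
```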

Therefore, the integral system comparisons at the early stage would only be done for identification of any interface problems to be addressed during model development.

From this model development viewpoint, the issuance of periodic integral versions with extensive integral assessment is somewhat counterproductive (assessment resources could be more productively applied on basic/component models). However, the parallel need for current versions of the code for planning of experiments and as backup in case of emergencies complicates this process. This introduces a requirement for periodic updates and integral assessments. Unfortunately, these conflicting needs may sometimes confuse and slow the model development process, since integral system version releases are controlled somewhat by schedule requirements rather than just by milestones in incorporating and improving models in the code. Hence, the released version errors result from combinations of model deficiencies, interface problems, and set-up/noding application problems. This results in diversion of modeling resources to assist in evaluation and interpretation of the difficulties (many of which may not be applicable to later code versions).

Therefore, it is very important to carefully consider the trade-offs of early release and assessment of integral code versions. To the maximum extent practical, the model release schedules (and specific model features) should be coordinated with virgin facility data schedules (e.g., complete upper plenum models prior to Lynn 30° Sector CCFL tests). As discussed below, such coordination between experiments and code version release schedules is also very important to address a weakness in the current approach for credible extrapolation of uncertainties to reactor predictions.

The item of most concern in the current NRC assessment approach is the lack of an effective plan to utilize the first code prediction of new "virgin" facilities to provide a basis for subsequently extrapolating to the reactor. Since the phenomena included in a best estimate model can ultimately be sufficiently refined* with available data comparisons, the major uncertainty for predicting any "new facility" (which will always include full scale reactors) will primarily result from some relevant phenomena not being included in the models (either because they were not thought of or were not important in previous facility results). Once the data from any facility have been compared with the models, such phenomena omissions are quickly identified and can subsequently be rectified. Unfortunately, we will not have the luxury of such post-comparison corrections in the case of reactor application. Therefore, we must establish and demonstrate a record of improving success in first-time predictions of more complex facilities to be confident that no surprise phenomena will subsequently invalidate full scale reactor predictions.

I believe the solution to this problem is provided by keeping a separate accounting of the uncertainties for all virgin facility predictions. This should supplement the currently planned uncertainty accounting for the evolving code versions, and should provide the primary indicator of the confidence for predictions of new facilities. This approach better emphasizes the actual importance of the virgin facility predictions, and clearly points out the need to use appropriately mature code versions for each such prediction, and the need to avoid such predictions occurring incidentally as part of the Developmental Assessment process (thus, the data used for Developmental Assessment must be restricted). The work done in the current NRC approach to define appropriate parameters for comparison would apply directly to this extended virgin facility accounting. The only additional consideration would be for those facilities in which both basic separate effects and integral response experiments are conducted. In that case it may be appropriate to have two classes of virgin predictions (for the two classes of experiments).

* This degree of refinement is the primary element being quantified by the independent assessment under the current NRC approach.

I believe the most urgent need is to define which experiments are critical for such virgin facility accounting and eliminate those data from the ongoing Developmental Assessment process. The timing is critical for the TRAC-BWR assessment. I urge your early attention to this matter.

G. E. Dix

Comments on NRC Assessment Approach

1. It is appropriate to define a weighted temperature parameter, but should not refer to it as oxidation penetration (confusing).

2. Need to assure that such important parameters as 2φ levels in key regions are emphasized (particularly important for comparison with Lynn 30° Sector and 3D type facilities).

3. Number of fuel rods in facility is not truly indicative of facility complexity and degree of simulation of reactor for BWR due to isolated channels. Need to define a measure of reactor features included.

4. Need sensitivity assessment and acceptance criteria in advance of model comparisons to avoid subjective arguments after the fact (bite-the-bullet). Also, these will guide more effective expenditure of resources (emphasize experimental and model development work where needed).

5. Code uncertainty studies are useful for both code improvement and experiment specification guidance.

6. Limit Developmental Assessment access to integral data to avoid loss of facility "virginity".

7. Keep two sets of books:
   a) Virgin facility comparisons [possibly subdivide into separate effects results and integral results].
   b) Revised code comparisons.
   Comparisons (a) are the best basis for extrapolating expected uncertainty to new facility (including reactor).
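The "two sets of books" accounting can be sketched as a small bookkeeping structure in which first-time (virgin) facility prediction errors are recorded separately from comparisons made after the code has been revised against the data. Facility names and error values below are hypothetical illustrations.

```python
# Sketch of the "two sets of books" uncertainty accounting.
# Book "virgin": blind, first-time facility predictions.
# Book "revised": reruns after the code has seen the data.
# Facility names and error values are hypothetical.

books = {"virgin": [], "revised": []}

def record(book, facility, prediction_error):
    """File one code-vs-data comparison in the chosen book."""
    books[book].append((facility, prediction_error))

def mean_error(book):
    """Average fractional prediction error recorded in a book."""
    errors = [e for _, e in books[book]]
    return sum(errors) / len(errors)

record("virgin", "Facility A", 0.30)
record("virgin", "Facility B", 0.20)   # improving first-time record
record("revised", "Facility A", 0.05)  # after post-comparison fixes

# The virgin book is the basis for extrapolating expected uncertainty
# to a new facility; the revised book only shows how well the tuned
# code fits data it has already seen.
print(round(mean_error("virgin"), 2))   # 0.25
print(round(mean_error("revised"), 2))  # 0.05
```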

ENCLOSURE 3