ML20010C410
| ML20010C410 | |
| Person / Time | |
|---|---|
| Issue date: | 07/23/1981 |
| From: | Bernero R NRC OFFICE OF NUCLEAR REGULATORY RESEARCH (RES) |
| To: | |
| Shared Package | |
| ML20010C401 | List: |
| References | |
| NUDOCS 8108200056 | |
| Download: ML20010C410 (8) | |
Text
PROBABILISTIC RISK ASSESSMENT - PROBLEMS AND UNCERTAINTIES
Remarks by R. M. Bernero at NRC Workshop on Safety Goals - July 23, 1981

Introduction

I am Robert Bernero, Director of the Division of Risk Analysis at the NRC.
We are gathered here today for a purpose which I strongly support, the development of clear safety goals for the regulation of nuclear power.
The purpose of my remarks here is to temper the widespread enthusiasm for detailed quantitative safety goals with a brief survey of the methodology and data problems in probabilistic risk assessment.
For this survey I enlisted the help of senior staff in our division and at one of our contractors, Battelle Columbus Laboratories.
As a framework for discussing the methodology and data problems in current risk evaluations, we first summarize the various analysis steps which are performed in a risk evaluation:

1. Event trees are constructed for the possible accident sequences which are to be evaluated.
2. Fault trees are constructed for the system failures in the event trees.
3. The fault trees are Boolean-evaluated to obtain the minimal cut sets of the fault trees and event trees.
4. The minimal cut sets are quantified to obtain the system failure probabilities, accident sequence probabilities, and core melt probabilities.
The above four steps yield probabilities of accidents.
The following five steps are additionally required to quantify the consequences of the accidents:
5. For each event tree sequence, resulting accident variables are quantified, including resulting containment pressures and temperatures and core conditions.
6. For each event tree sequence, the possible containment failure modes are quantified, including break size and break location.
7. For each event tree sequence, the sizes of the radionuclide sources released to the environment are quantified, including quantification of plume characteristics.
8. The source term is transported, accounting for meteorological and topographical effects, to give resulting doses.
9. Finally, taking into account population distributions, the resulting doses are translated into health and property effects.
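To make steps 3 and 4 concrete, the quantification of minimal cut sets can be sketched in a few lines. The component names, cut sets, and failure probabilities below are hypothetical, and the rare-event approximation shown is only one of several quantification options:

```python
# Sketch: quantifying minimal cut sets (hypothetical events and numbers).
# A minimal cut set is a smallest set of basic-event failures that fails
# the system; under the rare-event approximation the system failure
# probability is the sum of the cut-set product probabilities.
from math import prod

# Hypothetical basic-event failure probabilities (per demand).
p = {"pump_A": 3e-3, "pump_B": 3e-3, "valve_C": 1e-4, "power_bus": 1e-5}

# Hypothetical minimal cut sets for one system failure.
cut_sets = [{"pump_A", "pump_B"}, {"valve_C"}, {"power_bus"}]

def rare_event_probability(cut_sets, p):
    """Upper-bound system failure probability: sum of cut-set products."""
    return sum(prod(p[e] for e in cs) for cs in cut_sets)

q_system = rare_event_probability(cut_sets, p)
print(f"system failure probability ~ {q_system:.3e}")
```

Under the rare-event approximation, the sum of cut-set products is an upper bound on the system failure probability, which is adequate when all cut-set probabilities are small.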
At every analysis step there are major methodology and data problems which can cause the risk evaluation to be very soft and very subjective.
In general, these problems exist in most, if not all, risk evaluations performed today, causing most risk results to be of questionable value. These problems are not unique to nuclear power plant risk evaluations but occur in every risk evaluation of high consequence, low probability events.
The problems in nuclear power plant risk evaluations will be discussed in the following order:
a. Problems in Event Tree and Fault Tree Modeling (Analysis steps 1 and 2)
b. Problems in Boolean Manipulation (Analysis step 3)
c. Problems in Event Tree and Fault Tree Quantification (Analysis step 4)
d. Problems in Accident Phenomenological Modeling (Analysis steps 5, 6, and 7)
e. Problems in Transport and Health and Property Effects Evaluations (Analysis steps 8 and 9)
For each area, specific problems will be listed and briefly discussed.
Some of the problems will be more difficult to resolve than others.
However, all the problems must be resolved in some uniform, reasonable manner before the quantitative results of risk analysis can be confidently used in NRC decision-making.
A. PROBLEMS IN EVENT TREE AND FAULT TREE MODELING

1. What specific accident initiators are to be included?
This is a basic problem and concerns the coverage and completeness of the risk analysis. It is not enough to simply say "Include LOCA (Loss of Coolant Accident) and Transient sequences." Specific types of LOCAs and specific types of transients need to be defined. Whether the root causes of initiating events should be modeled also needs to be addressed.
2. What specific system failures are to be included in the accident sequences?
For subsequent fault tree analysis and quantification, systems need to be defined in terms of associated hardware, and specific system boundaries need to be defined.
3. The time sequencing of events and failures which are to be considered.
Symptoms change with time and vary with operator performance. A good example is the Three Mile Island Unit 2 accident, where the plant shifted in behavior from transient to LOCA to transient status several times as the operators opened the block valve.
4. The definition of what constitutes system failure.
The WASH-1400 BWR modeling of low and high pressure injection system failures is a good example. The Appendix K modeling gives one set of conservative success criteria; realistic best-estimate calculations give less restrictive definitions. Also, whenever continuous variables such as flow, pressure, etc., are associated with performance, the definition of system failure becomes fuzzy.
5. The definition of how to treat external events in the construction of accident sequences.
This problem is related to the first problem cited, involving what accident initiators to consider. If external events are to be considered at all, then the specific types of external events to be considered, including all pertinent variables and characteristics, need to be explicitly defined.
6. The initial plant conditions which exist before an initiating event occurs.
Most analyses assume 100% steady state power and add an assumption that components are independently and randomly out for test and maintenance according to some distribution. Dependencies and risk contributions can thus be lost by not treating time in core life (important for ATWS) and by not considering other operating states (e.g., evaluating the effect of various bypasses during below-full-power operation).
7. The definition and identification of system interactions to consider.
Without subsequent fault tree analysis, the event trees show only those dependencies which are known based on plant knowledge. The construction of the event tree does not improve the knowledge of the plant, although it can focus attention on areas where knowledge is lacking. An example of an interaction that would require detailed fault tree analysis to uncover is that related to the Crystal River NNI bus transient. Extensive fault tree analysis might have uncovered it; the event tree process would not have.
8. The identification of source terms to be associated with each event tree sequence.
To minimize calculational effort, many studies group similar sequences into accident categories. Each category is usually characterized by attempting to choose the worst combination of fission product releases by chemical species. This characterization is in general conservative. However, if the phenomenology differs, e.g., event V, this characterization can be non-conservative.
9. The definition of possible, and not only probable, human actions which can occur in an accident scenario.
Most risk analyses look at a narrow class of operator maintenance errors and a narrower class of errors of omission during the accident. Operator errors other than these narrow classes are omitted. TMI is one example of an error of commission (turning HPI off) that would not have been analyzed correctly a priori. There is essentially an infinite variety of possible human errors which can occur, many of which may be significantly high in probability because of operator misinterpretation, confusion, or shortcuts.
10. The level of detail to develop failure causes.
The deeper one goes, the better the chance of detecting a subtle single failure, but the cost goes up exponentially. Also, more detailed fault trees may be unquantifiable because of lack of data.
11. What failures to specifically consider in fault tree analysis.
The greatest uncertainty probably lies in the modeling and inclusion of human errors which can occur. Also, failures related to design errors, fabrication errors, and installation errors can be significant contributors but are not often included.
12. How to treat partial failure and functional failure.
Partial failure and functional failure have happened and can be dominant contributors. Risk analyses in general ignore these failures, assuming complete success or failure and assuming that once a component or system operates, it functions successfully.
13. How to treat phasing (e.g., from injection to recirculation) and how to treat failure to continue operation.
This modeling involves hypothesizing the kinds of contingencies and actions that might occur in long term safety system operation. It also involves considering failure interactions and dependencies which can exist between different operational phases. This area is generally glossed over in a risk analysis, the analyst preferring to devote his attention to "failure to start" contributions only, since this is a cleaner problem.
14. What coding schemes to use for qualitative and quantitative analysis and what formats to use in presenting the models and results.
Fault tree and event tree coding and reporting formats are rather mundane problems, but if not addressed they will result in inconsistencies, models which are not usable in the future, and reports which are almost impossible to review.
B. PROBLEMS IN BOOLEAN MANIPULATION

1. How to modularize the fault trees for Boolean manipulation.
Detailed fault trees can be so large that the results are not comprehensible without some defined method for systematic organization and hierarchical grouping. A detailed scram fault tree, for example, can easily yield 10,000,000 minimal cut sets which represent basic causes of system failure. With modularization this number of cut sets can be reduced to a smaller size.
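The Boolean evaluation of step 3 can be sketched as a top-down (MOCUS-style) expansion on a toy tree. The gate and event names below are hypothetical, and real trees require exactly the truncation and modularization discussed here because this expansion explodes combinatorially:

```python
# Sketch of top-down (MOCUS-style) minimal cut set generation for a small
# fault tree of AND/OR gates. Gate names are hypothetical; real trees are
# far larger and need truncation by cut-set size or probability.

def cut_sets(gate, gates):
    """Return the minimal cut sets for `gate` in a tree of AND/OR gates."""
    if gate not in gates:                       # leaf: a basic event
        return [frozenset([gate])]
    op, inputs = gates[gate]
    if op == "OR":                              # OR: union of input cut sets
        sets = [cs for g in inputs for cs in cut_sets(g, gates)]
    else:                                       # AND: cross-product of inputs
        sets = [frozenset()]
        for g in inputs:
            sets = [a | b for a in sets for b in cut_sets(g, gates)]
    # Minimize: drop any cut set that strictly contains another.
    unique = set(sets)
    return [s for s in unique if not any(t < s for t in unique)]

# Hypothetical tree: TOP fails if power fails OR both pumps fail.
gates = {
    "TOP":   ("OR",  ["power", "PUMPS"]),
    "PUMPS": ("AND", ["pump_A", "pump_B"]),
}
for cs in cut_sets("TOP", gates):
    print(sorted(cs))
```

For this toy tree the result is the two minimal cut sets {power} and {pump_A, pump_B}; the minimization step is what keeps only the "basic causes" and is the expensive part at scale.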
2. How many cut sets to obtain and what maximum size of cut set to obtain?
This problem is related to the previous one. Obtaining large cut sets can result in thousands or tens of thousands of minimal cut sets, even with modularization. Yet the large size minimal cut sets - say involving four or more component failures - can be important contributors to risk when common cause failures are considered.
3. How to detect fault tree and event tree construction errors.
Checking schemes in computer codes will only detect a restricted class of coding and logic errors. To comprehensively review a fault tree requires checking almost every detailed step, involving a review effort which is comparable to the original effort.

4. Whether to consider system successes.
WASH-1400 in the main ignored system successes and considered them only when the system had single component failures. In certain instances ignoring system successes can cause gross conservatisms. However, considering system successes can blow up the Boolean manipulations required.
5. Are component minimal cut sets to be obtained for every event tree sequence?
Component minimal cut sets are those combinations of component failures causing either system or event tree failure. Even for fault trees, the number of minimal cut sets can be enormous. For event trees, the number of minimal cut sets grows factorially. However, the minimal cut sets for event trees can be important for common cause evaluations.
C. PROBLEMS IN EVENT TREE AND FAULT TREE QUANTIFICATION
1. How to quantify independent component failure contributions.
Included in this general problem are the following specific problems:
(1) How should failure times be modeled and what data sources should be used?
(2) How should test and maintenance be modeled and what data should be used?
Most risk analyses today use the simplest models, assuming constant failure rates (no wearout) and perfect testing and maintenance with only downtime contributions. The data for these calculations are a hodgepodge of whatever data are available and subjective guesses.
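The "simplest models" described here reduce to a few closed-form unavailability terms. The component data below (failure rate, test interval, repair time) are hypothetical:

```python
# Sketch of the simple constant-failure-rate unavailability models the
# text describes: a standby component with assumed constant failure rate
# (no wearout), tested every T hours, with average repair downtime tau.
# All parameter values are hypothetical.

lam = 1e-5      # failure rate, per hour (assumed constant)
T   = 720.0     # test interval, hours (monthly testing, an assumption)
tau = 8.0       # mean repair downtime per test interval, hours

q_standby = lam * T / 2.0    # average unavailability between tests
q_repair  = tau / T          # downtime contribution from repair
q_total   = q_standby + q_repair

print(f"standby: {q_standby:.2e}, repair: {q_repair:.2e}, "
      f"total: {q_total:.2e}")
```

Note that with these numbers the repair downtime term dominates the failure term, so the guessed maintenance data matter at least as much as the failure rate itself.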
2. How to quantify common cause component failure contributions.
The common cause approach used in most risk analyses today consists of:
(1) Subjectively selecting those multiple failures which are thought to have significant dependencies; there is often little or no rationale given for the selection.
(2) Guessing a beta factor or conditional failure probability for the dependent failures; the goal seems to be to guess a number such that common cause failures are contributors to risk but do not overwhelm all the other contributors.
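The beta-factor method described in (2) can be sketched as follows; the component probability and beta value are hypothetical guesses of exactly the kind criticized here:

```python
# Sketch of the beta-factor common cause model described above.
# beta is the assumed fraction of a component's failure probability that
# is shared (common cause) across the redundant set. All numbers are
# hypothetical.

q_component = 3e-3   # total failure probability of one pump (assumed)
beta = 0.1           # guessed beta factor, as the text says, often just that
n = 2                # redundancy: two parallel pumps

q_independent = (1 - beta) * q_component
q_common      = beta * q_component       # fails all n trains at once

# Failure of the redundant pair: both fail independently, or common cause.
q_pair = q_independent ** n + q_common
print(f"pair failure probability ~ {q_pair:.2e}")
```

Note that the guessed common cause term dominates the redundant-pair result, which illustrates how sensitive the bottom line is to the subjective beta.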
3. How do we model and quantify human errors?
Associated with this general question are the following specific problems:
(1) What individual pre-accident and post-accident human errors do we consider for quantification, and what is the basic data source?
(2) How do we account for those important performance shaping factors which modify the basic human error rates for the particular scenario being evaluated?
(3) How do we model and quantify dependent human errors which are coupled because of operator laxity, confusion, or misinterpretation ("mindset")?
(4) How do we model and quantify human actions which are "nonroutine," including mitigation actions and post-accident errors of commission?
In risk analysis today, human error treatments are almost entirely subjective, with little or no account taken of the subjectivity, such as by performing thorough sensitivity analyses.
4. How do we model and quantify the uncertainties in a risk analysis, including data uncertainties, modeling uncertainties, and uncertainties due to lack of completeness?
Most risk analyses today do not adequately treat uncertainties; in fact, many risk analyses make no attempt at treating uncertainties other than by paying lip service to the fact that there are large uncertainties and care must be taken - whatever that means. Since risk analyses generally have large uncertainties associated with them, a lack of treatment of the uncertainties in some systematic manner causes the results to be questionable at best.
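One systematic treatment of data uncertainty is to propagate it through the cut-set model by Monte Carlo sampling. The lognormal distributions, medians, and error factors below are all hypothetical:

```python
# Sketch: propagating data uncertainty through a two-cut-set model by
# Monte Carlo sampling. Distribution choices and parameters are assumed.

import math
import random

random.seed(0)

def sample_lognormal(median, error_factor):
    """Draw from a lognormal given its median and 95th/50th error factor."""
    sigma = math.log(error_factor) / 1.645
    return median * math.exp(random.gauss(0.0, sigma))

def system_q():
    qa = sample_lognormal(3e-3, 3.0)    # pump A (hypothetical data)
    qb = sample_lognormal(3e-3, 3.0)    # pump B (hypothetical data)
    qv = sample_lognormal(1e-4, 10.0)   # valve  (hypothetical data)
    return qa * qb + qv                 # rare-event sum of two cut sets

samples = sorted(system_q() for _ in range(10000))
median = samples[len(samples) // 2]
p95 = samples[int(0.95 * len(samples))]
print(f"median ~ {median:.2e}, 95th percentile ~ {p95:.2e}")
```

The spread between the median and the 95th percentile is the information that a single point estimate throws away.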
D. PROBLEMS IN ACCIDENT PHENOMENOLOGICAL MODELING

1. How to predict containment failure pressure and failure mode.
Models are capable of predicting the incipient failure pressure of the basic structure; however, the behavior of penetrations and discontinuities is very uncertain. The capability of predicting failure location and size does not really exist and is unlikely to be achievable in the near future.
2. How to model core meltdown mode and timing.
The modeling of core meltdown after initial heatup of the fuel to liquid fuel formation is highly uncertain. There is little basis for predicting the mode of entry of molten material into the lower plenum, the mode of attack of the pressure vessel, and the initial stage of entry of material into the reactor cavity.
3. How to model hydrogen production and associated interactions.
The evolution of hydrogen from the zirconium-water reaction during the early stages of core meltdown is more or less reasonably characterized. After the geometry of the core is significantly altered, the production rate of hydrogen is much less certain. Uncertainties also exist regarding hydrogen distribution in containment, ignition requirements, and conditions leading to detonation. There has, furthermore, been little evaluation of containment failure from a detonating hydrogen cloud.
4. How to model containment pressure-temperature transients.
With regard to prediction of pressure and temperature transients in containment, uncertainties in the analysis arise from the input masses and energies used. One of the major uncertainties is the rate of steam generation associated with hot fuel-water interactions in the reactor cavity following failure of the reactor pressure vessel.
5. How to model core-concrete penetrations.
In modeling the penetration of concrete by a molten core, what happens in the long-term attack of the concrete by the core, particularly after the core begins to freeze, is not well known. It is not currently possible to predict with confidence whether or not the containment basement would be penetrated in a core meltdown accident.
6. Whether to consider unusual containment failure modes.
Because of the importance of containment failure to accident consequences, it may be important to examine the potential for less obvious failure modes not generally considered in present risk evaluations. Such less obvious failure modes include: vessel motion leading to the tearout of penetrations, thrust forces on the vessel following pressure vessel melt-through, failure modes for penetration seals, and missile generation.
7. How to quantify the sizes and characteristics of radionuclide source terms from the core.
Large uncertainties presently exist on the magnitudes of source terms. Also, the rate and timing of radionuclide releases is potentially important in risk evaluations, and at present the modeling is gross and uncertain. Furthermore, little information presently exists on the chemical forms of the source terms.
8. How to model radionuclide transport and deposition within the primary system and containment.
The characterization of transport and deposition within the primary system and containment has significant uncertainties and lacks validation in a variety of areas. Key questions involve the extent of aerosol agglomeration, the effects of chemical forms, the effects of water in the flow path, and the characterization of washout in pool water.
E. PROBLEMS IN CONSEQUENCE MODELING

The largest source of uncertainty is the assumption that a system of immense complexity, containing subsystems that are themselves highly complex, such as the atmosphere (which transports and dilutes radioactive material), the food chain, or the human body, can be modeled in a simple enough way that computational expenses are kept within reasonable bounds while retaining enough realism for the results to remain credible. So, for example, many dispersion models assume that the basic Gaussian model gives answers that are sufficiently accurate even when used over complicated terrain. In short, there is a large assumption that the idealization which is inseparable from any consequence model, no matter how sophisticated, does not destroy the meaningfulness of the results.

Apart from this generalized assumption, there are specific uncertainties in each of the elements (data and models) of a consequence analysis. Examples are as follows:
1. Meteorological data gathered at the reactor site are used to define the prevailing weather conditions out to great distances from the source, perhaps several hundred miles.
2. It is frequently assumed that wind direction does not change once a release of radioactivity has taken place.
3. A single dry deposition velocity is usually used for particulate matter emitted in the aftermath of a reactor accident.
4. There are considerable simplifications involved in the health physics calculations. For example, for extremely low radiation doses and dose rates, a linear relationship between dose and the probability of the induction of cancer is assumed. Other authors assume a dose effectiveness factor whereby the effectiveness of very low doses is reduced by up to a factor of five. Both of these are assumptions which are subject to argument.
5. Members of the public are expected to behave in much the same way in the aftermath of an accident; e.g., they begin to evacuate at the same time and at the same speed, or the same shielding factor from externally delivered gamma radiation is applicable to everybody. In practice, the behavior of such a group of people would be much less homogeneous.
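The "basic Gaussian model" referred to above, with the steady-wind and flat-terrain simplifications criticized in examples 1 and 2, can be sketched as follows. All parameter values are hypothetical, and real dispersion coefficients vary with downwind distance and atmospheric stability rather than being fixed:

```python
# Sketch of a ground-level Gaussian plume concentration calculation,
# embodying the simplifications criticized above: steady wind, flat
# terrain, fixed dispersion coefficients. All parameter values assumed.

import math

def plume_concentration(Q, u, y, sigma_y, sigma_z, H=0.0):
    """Ground-level concentration (Bq/m^3) at crosswind offset y for a
    release rate Q (Bq/s), wind speed u (m/s), and release height H (m).
    In real models sigma_y and sigma_z are functions of downwind distance
    and stability class; here they are held fixed."""
    lateral  = math.exp(-y**2 / (2 * sigma_y**2))
    vertical = math.exp(-H**2 / (2 * sigma_z**2))
    return (Q / (math.pi * u * sigma_y * sigma_z)) * lateral * vertical

# Hypothetical release: 1e12 Bq/s, 5 m/s wind, plume centerline, with
# sigma_y = 100 m and sigma_z = 50 m assumed for some downwind distance.
chi = plume_concentration(Q=1e12, u=5.0, y=0.0,
                          sigma_y=100.0, sigma_z=50.0)
print(f"centerline concentration ~ {chi:.3e} Bq/m^3")
```

Every input here (source strength, wind, dispersion coefficients, shielding implicit in the dose step) carries exactly the site- and behavior-specific uncertainty listed in the five examples.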
This survey of problems may have sounded very negative to you. It should give you the impression that we cannot calculate the probability of death from nuclear accidents with precision - because we cannot. This is not to say that we propose to abandon probabilistic risk assessment in safety goal work or in reactor regulation. On the contrary, I believe it is a most promising tool to use in the systematic analysis of reactor hazards and how society should deal with them.