ML20097F177

Draft App D, Socio-Technical Approach to Assessing Human Reliability (Stahr), to Pressurized Thermal Shock Evaluation of Calvert Cliffs Unit 1 Nuclear Power Plant
Person / Time
Site: Calvert Cliffs
Issue date: 08/22/1984
From: Embrey D, Humphreys P, Phillips L
OAK RIDGE NATIONAL LABORATORY
To:
Shared Package
ML20097F171 List:
References
REF-GTECI-A-49, REF-GTECI-RV, TASK-A-49, TASK-OR NUDOCS 8409180414



Appendix D, "A Socio-Technical Approach to Assessing Human Reliability (STAHR)," of A PRESSURIZED THERMAL SHOCK EVALUATION OF THE CALVERT CLIFFS UNIT 1 NUCLEAR POWER PLANT, written by Lawrence D. Phillips and Patrick Humphreys, Decision Analysis Unit, London School of Economics and Political Science, and David Embrey, Human Reliability Associates, Lancashire, England, for the PTS Study Group of Oak Ridge National Laboratory.

Date of Draft: August 22, 1984

NOTICE: This document contains information of a preliminary nature. It is subject to revision or correction and therefore does not represent a final report.

  • Research sponsored by the U.S. Nuclear Regulatory Commission under Contract No. DE-AC05-84OR21400 between Martin Marietta Energy Systems, Inc., and the U.S. Department of Energy.


A PRESSURIZED THERMAL SHOCK EVALUATION OF THE CALVERT CLIFFS UNIT 1 NUCLEAR POWER PLANT

List of Chapters

Chapter 1 Introduction
Chapter 2 Description of the Calvert Cliffs Unit 1 Plant
Chapter 3 Development of Overcooling Sequences for Calvert Cliffs Unit 1 Nuclear Power Plant
Chapter 4 Thermal-Hydraulic Analysis of Potential Overcooling Transients Occurring at Calvert Cliffs Unit 1 Nuclear Power Plant
Chapter 5 Probabilistic Fracture-Mechanics Analysis of Calvert Cliffs Unit 1 Sequences
Chapter 6 PTS Integrated Risk for Calvert Cliffs Unit 1 and Potential Mitigation Measures
Chapter 7 Sensitivity and Uncertainty Analysis
Chapter 8 Summary and Conclusions


APPENDIX D. A SOCIO-TECHNICAL APPROACH TO ASSESSING HUMAN RELIABILITY (STAHR)

D.1. Introduction
D.2. General Description of the STAHR Approach
D.2.1. The Technical Component: The Influence Diagram
D.2.2. The Social Component: Human Judgments
D.3. Design of the STAHR Influence Diagram
D.4. Application of the STAHR Influence Diagram
D.5. Group Processes
D.6. Summary Statement
References


LIST OF FIGURES

D.1. Influence diagrams and their corresponding event trees. (a) Event A is influenced by event B; (b) event A is influenced by events B and C; and (c) event A is influenced by events B and C, and B is influenced by C
D.2. The STAHR influence diagram (as of June 1983)


APPENDIX D. A SOCIO-TECHNICAL APPROACH TO ASSESSING HUMAN RELIABILITY (STAHR)

D.1. Introduction

This appendix describes the status, as of June 1983, of a new approach for assessing human reliability in complex technical systems such as nuclear power plants. This approach was utilized in the present PTS study for Calvert Cliffs Unit 1, the results of which are described in Appendix E.

The new approach includes both a social component and a technical component. To help keep this in mind and also to provide an easily recognized acronym, we are calling our methodology a "socio-technical assessment of human reliability" - or the STAHR approach.

It is important to emphasize that the approach described here does not provide the definitive technical fix to a problem on which a great deal of effort has already been spent. It does, however, provide regulators and risk assessors with another methodology that has certain advantages and disadvantages compared to existing approaches. How useful it proves to be in practice is yet to be determined, but work to date indicates that additional research on this approach is warranted.

A key feature of the approach is that it draws on two fields of study: decision theory and group processes. Decision theory provides the form of the model that allows the desired error rates to be determined, while group processes provide the input data through the group interaction of experts who are knowledgeable about the factors influencing the event whose error rate is being assessed. The different perspectives of these experts, if managed effectively by the group, can lead to informed, useful inputs to the model. Thus, the validity of any error rates that are produced by the model depends not only on the technical model itself, but also on the social processes that help to generate the model inputs.

The impetus for the socio-technical approach began in 1982 at an Oak Ridge, Tennessee, meeting addressing methods for assessing human reliability in the PTS studies. One of us (Phillips) introduced influence diagram technology1 as a potentially easier modeling tool than event trees or fault trees. The main advantage of an influence diagram from a technical perspective is that it capitalizes on the independence between events and models only dependencies; that is, the influence diagram organizes the dependencies as a system of conditional probabilities, as explained in Section D.2. By the early spring of 1983, the Decision Analysis Unit at the London School of Economics and Human Reliability Associates, Lancashire, England, together had developed a human reliability assessment technology utilizing influence diagrams to the point that it could be tested in the field. In late May a field test was carried out at Hartford, Connecticut, to address operator actions associated with potential pressurized thermal shock events that could occur at the Calvert Cliffs Unit 1 nuclear power station, the results of which are described in Appendix E.

In the paragraphs that follow, a general discussion of influence diagrams is first presented, followed by a description of how group processes work to provide specific diagrams and the input data. Finally, the particularized STAHR approach is described.

D.2. General Description of the STAHR Approach

D.2.1. The Technical Component: The Influence Diagram

As stated above, STAHR consists of both a social component and a technical component. The technical component is the influence diagram. Influence diagrams were developed in the mid-70's by Miller et al.2 at the Stanford Research Institute and then were applied and further developed at Decisions and Designs, Inc. for intelligence analysis, all without a single paper being published in a professional journal. In 1980, Howard and Matheson1 extended the theory and showed that any event tree can be represented as an influence diagram, but not all influence diagrams can be turned into event trees unless certain allowable logical transformations are performed on the linkages between the influencing events.


The key principles of influence diagram technology are illustrated by the simple diagrams shown in Figure D.1. Diagram (a) shows the simplest kind of influence. Here Event A is influenced by Event B; that is, the probabilities that one would assign to the occurrence or non-occurrence of Event A are conditional on whether or not Event B has occurred. Shown with the influence diagram is an equivalent event tree representation, where Events A and B are assumed to have only two outcomes each (A and not-A, B and not-B). In the event tree the probability of B occurring is given by p1. The probability of A occurring, given that B has occurred, is shown by p2, and the probability of A occurring, given that B has not occurred, is given by p3.


[Figure D.1: three influence diagrams and their equivalent event trees, with branch probabilities p1 through p7]

Figure D.1. Influence diagrams and their corresponding event trees. (a) Event A is influenced by event B; (b) event A is influenced by events B and C; and (c) event A is influenced by events B and C, and B is influenced by C.

The point here is that p2 is not equal to p3. If p2 and p3 were equal, then the influence diagram would show two circles unconnected by any influencing link.

Diagram (b) shows a slightly more complex influence. Here, Event A is influenced by both Event B and Event C. The comparable event tree consists of three tiers because the probability assigned to A at the extreme right depends upon the previous occurrence or non-occurrence of both B and C. These probabilities for A, conditional on previous events, are shown by p3 through p6. Note that p2 appears in two places, indicating that the probability assigned to B is the same whether or not C occurs.

Finally, diagram (c) shows the same influences on A as diagram (b), but now Event C influences not only Event A but also Event B. Note that the event trees for diagrams (b) and (c) have the same structure, but for diagram (c) the probability assigned to B conditional on C is no longer the same as the probability of B conditional on not-C. Thus, while there are six different probabilities in the event tree for diagram (b), there are seven different probabilities in the event tree for diagram (c). It is easy to see that the influence diagram representation not only is compact, but also contains more information than the structure of the event trees without any probability assignments.

In practical situations for which an influence diagram has many nodes, it is typical for the actual number of influencing paths to be far fewer than the maximum that could occur if every node were linked to every other node. Any assessment procedure based on the influence diagram will require only the minimum number of probability assignments. For example, an influence diagram procedure for (b) in Figure D.1 would require only six probabilities and would recognize that the same probability is assigned to Event B whether or not Event C occurs. In an event tree representation of the same problem, dependencies between events are not obvious until probabilities have been associated with each branch, and keeping track of independent events within a large tree can be a tedious housekeeping chore.

In applying influence diagram technology, Event A is taken as the target event and assessments are made of only the necessary and sufficient conditional probabilities that enable the unconditional probability of the target event outcomes to be calculated. For example, in diagram (a) of Figure D.1, the probability of A is given by calculating the joint probabilities of all paths on which an A occurs and then summing the joint probabilities, i.e., p1p2 + (1 - p1)p3. For more complex influence diagrams, successive application of the addition and multiplication laws of probability are sufficient to enable the unconditional probability of the targeted event to be calculated.
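The path-summing rule can be made concrete with a short sketch. This is our illustration, not part of the original report; the function and variable names are hypothetical.

```python
# Sketch of the path-summing rule for diagrams (a) and (b) of Figure D.1
# (our illustration; names are hypothetical).

def p_target_diagram_a(p1, p2, p3):
    """Diagram (a): P(A) = p1*p2 + (1 - p1)*p3,
    where p1 = P(B), p2 = P(A | B), p3 = P(A | not B)."""
    return p1 * p2 + (1 - p1) * p3

def p_target_diagram_b(pB, pC, p_a_given):
    """Diagram (b): A is influenced by B and C, with B and C independent.
    p_a_given maps (b_occurred, c_occurred) -> P(A | B=b, C=c), so only
    six probabilities are needed in all, as the text notes."""
    total = 0.0
    for b, pb in ((True, pB), (False, 1 - pB)):
        for c, pc in ((True, pC), (False, 1 - pC)):
            # joint probability of this path, times P(A) on this path
            total += pb * pc * p_a_given[(b, c)]
    return total
```

Summing over every path on which A occurs reproduces p1p2 + (1 - p1)p3 for diagram (a); for diagram (b) the four conditional probabilities play the role of p3 through p6 in the text.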

It is, of course, important to recognize that no probability is ever unconditional. All events shown on an influence diagram occur within some context, and it is this context that establishes conditioning events that are not usually shown in the notation on the influence diagram. Thus, in applying this technology, it will be important to establish at the start of every assessment procedure what these common conditioning events are.

D.2.2. The Social Component: Human Judgments

The preceding discussion has illustrated how the influence diagram provides the technical means for organizing the conditional probability assessments that are required for calculating the unconditional probability of the target event. But where does the specific influence diagram needed come from, and how are the conditional probability assessments obtained? The answer is that they are developed mainly through human judgments obtained from experts working in groups, and it is these judgments that comprise the "socio" component of the STAHR approach.

The theory behind the socio component was developed and illustrated with a case study by Phillips.4 The key idea is that groups of experts are brought together to work in an iterative and consultative fashion to create a requisite model of the problem at hand. A judgmental model is considered requisite if it is sufficient in form and content to solve the problem. A requisite model is developed by consulting "problem owners," people who have the information, judgment and experience relevant to the problem.

The process of creating a model is iterative, with current model results being shown to the problem owners, who can then compare the current results with their own holistic judgments. Any sense of discrepancy is explored, with two possible results: intuition and judgment may be found lacking or wrong, or the model itself may be inadequate or incorrect. Thus, the process of creating a requisite decision model uses the sense of unease felt by the problem owners about current model results, and this sense of unease is used to develop the model further and to generate new intuitions about the problem. When the sense of unease has gone and no new intuitions emerge, then the model is considered requisite. The aim of requisite modeling is to help problem owners toward a shared understanding of the problem, thus enabling decision makers to act, to create a new reality.

A requisite model usually is neither optimal nor normative, is rarely descriptive, and is at best conditionally prescriptive. A requisite model is about a shared social reality, the current understanding by the problem owners. Requisite models are appropriate when there is a substantial judgmental element that must be made explicit in order to solve a problem. Because judgment, intuition and expertise are important ingredients of requisite models, there can be no external reality that can serve as a criterion against which optimality would be judged. Thus, requisite models are not optimal models. Nor are requisite models normative models in the sense that they describe the behavior of idealized, consistent decision makers; that claim would be too strong. Neither can they be considered as descriptive models in the sense that they describe the behavior of actual people. Requisite models are stronger than that; they serve as guides to action, though they may not themselves model alternative courses of action. A requisite model attempts to overcome limitations on human processing of information due to bounded rationality.

Requisite modeling seems ideally suited for the determination of human error rates in complex technical systems. The human operator in a complex system cannot, for the purpose of determining error rates, be treated as an unreliable machine component. In determining error rates for machines, two fundamental assumptions are made: first, that all machines of a particular type are identical as far as error rates are concerned, and second, that all machines of a particular type will be operating within environmental bounds over which the error rate remains unchanged. Neither of these assumptions is true for the human operator. Each person is different from the next, and not even requiring certain standards of training and competence can ensure that other factors, such as those affecting morale and motivation, will not have over-riding effects on the error rates. Moreover, environmental factors can have a substantial impact on human error rates. The same operator may perform differently at a new plant of the same design, if, for example, teams function differently in the two plants.

In short, people are different, and the environments they operate in are different, not only from plant to plant but also, from time to time, within a plant. Human error rates are not, then, unconditional figures that can be assigned to particular events. Rather, they are numbers that are conditional on the individual, and on the social and physical environment in which he is operating.

The effective assessment of error rates should take these conditioning influences into account. Technically, the STAHR approach does this by using the influence diagram to display the conditioning influences, and by using the educated assessments of experts to provide judgments that can take account of the uniqueness of the influences for a particular plant.

As yet, it is not known when the STAHR approach should be used in preference to other approaches. It is not even clear whether the STAHR approach should be considered as a competitor to other methods, for it may well turn out that different methods are called for in different circumstances.

Clearly, the STAHR approach focuses on the process of obtaining assessments, and in this respect it differs considerably from the handbook approach (the THERP approach) of Swain and Guttman.6 At this stage of research, it can only be said that the STAHR approach is different from THERP. Our guess is that both STAHR and THERP, and possibly other approaches as well, will each find their own uses, depending on the circumstances. Research is needed to identify those circumstances.

Finally, can experts provide assessments that are valid? Our view is that given the right circumstances people can provide precise, reliable and accurate assessments of probability. This viewpoint is elaborated in Phillips,7 but some authorities believe that bias is a pervading element in probability assessment.8 Unfortunately, virtually none of the research that leads to the observation of bias in probability assessments has been conducted under circumstances that would facilitate good assessments. Many of these circumstances are explained in Staël von Holstein and Matheson. Recent research by the Decision Analysis Unit with insurance underwriters suggests that two additional factors contribute to obtaining good probability assessments. One is the structure of the relationships of events whose probabilities are being assessed, and the other is the use of groups in generating good assessments. In the STAHR approach, the influence diagram presents a well-understood structure within which groups of experts generate assessments.

The success of the STAHR approach depends, in part, on the presence of a group facilitator who is acquainted with the literature on probability assessment and who is experienced in using techniques that facilitate good assessments. How crucial this role is we do not yet know, but we are sure that the necessary expertise and skills can be acquired with reasonable effort by potential group facilitators. In any event, there is nothing in the research literature to suggest that people are incapable of making good assessments. In the United States, weathermen do it now. For example, a review of weather predictions showed that when weathermen predicted a 60% chance of rain within 24 hours, 60% of the time it rained within 24 hours.

Thus, weather forecasts are said to be "well-calibrated"; the STAHR approach tries to arrange for circumstances that will promote "well-calibrated" probability assessments. However, calibrating the very low probabilities that emerge from the STAHR approach, or indeed any other approach, is technically difficult because of the low error rates implied. There are simply too few opportunities to determine whether the weatherman's low probability of rain in the desert is realistic.
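The calibration idea can be made concrete with a short sketch (ours, not the report's; the function name is hypothetical): group forecasts by the stated probability and compare each stated probability with the observed relative frequency of the event.

```python
from collections import defaultdict

# Illustrative calibration check (our example, not part of the report):
# a forecaster is well-calibrated if, of all the occasions a probability p
# was stated, the event occurred about a fraction p of the time.
def calibration_table(forecasts, outcomes):
    """forecasts: stated probabilities; outcomes: 1 if the event occurred, else 0.
    Returns {stated probability: observed relative frequency}."""
    hits = defaultdict(int)
    counts = defaultdict(int)
    for p, outcome in zip(forecasts, outcomes):
        counts[p] += 1
        hits[p] += outcome
    return {p: hits[p] / counts[p] for p in counts}
```

For the weatherman example, five "60% chance of rain" forecasts followed by rain on three of the five days gives an observed frequency of 0.6 - perfectly calibrated. The difficulty the text notes is that very low stated probabilities require far more occasions than are ever available.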

D.3. Design of the STAHR Influence Diagram

After several revisions, the influence diagram as of June 1983 for events that are influenced by operator actions in nuclear power stations is shown in Figure D.2. We do not yet know whether this influence diagram is generic in the sense that it can handle all events in which operators are expected to take actions. Possibly parts of the diagram are generic and others need to be developed to fit the specific situation. The STAHR approach is sufficiently flexible that modifications to the influence diagram can be made to suit the circumstances, or entirely different influence diagrams could be drawn.

[Figure D.2. The STAHR influence diagram (as of June 1983)]

The top node in Figure D.2 indicates the target event. For example, if an alarm in the control room signals that some malfunction has occurred and the operator attempts to correct the malfunction by following established procedures, one target event might be that the operator correctly performs a specified step in the procedures. The influence diagram shows three major influences on the target event. One is the quality of information available to the operator, the second is the extent to which the organization of the nuclear power station contributes to getting the work done effectively, and the third is the impact of personal and psychological factors pertaining to the operators themselves. Another way of saying this is that the effective performance of the target event depends on (A) the physical environment, (B) the social environment, and (C) personal factors.

Each of these three major factors is itself influenced by other factors. The quality of information available is largely a matter of good design of the control room and of the presence of meaningful procedures. The organization is requisite, i.e., it facilitates getting the required work done effectively, if the operations department has a primary role at the power station and if the organization at the power station allows the effective formation of teams. Personal factors will contribute to effective performance of the target event if the level of stress experienced by operators is helpful, if morale and motivation of the operators are good, and if the operators are highly competent. In other words, the following seven "bottom-level" influences actually describe the power station, its organization and its operators:

(A) Physical Environment
(1) Design of control room (good vs. poor).
(2) Meaningfulness of procedures (meaningful vs. not meaningful).

(B) Social Environment
(3) Role of Operations Department (primary vs. not primary).
(4) Effectiveness of teams (teamwork present vs. absent).

(C) Personal Factors
(5) Level of stress (helpful vs. not helpful).
(6) Level of morale/motivation (good vs. bad).
(7) Competence of operators (high vs. low).

These seven influences are discussed in more detail in Appendix E with respect to their application during the field testing of the STAHR methodology at Calvert Cliffs. Suffice it to say here that in considering the impact of these seven influences, most nuclear power stations will be found to have mixtures of "good" vs. "poor," "high" vs. "low," etc.

D.4. Application of the STAHR Influence Diagram

Using the STAHR influence diagram is a matter of applying the following ten steps:

(1) Describe all relevant conditioning events.
(2) Define the target event.
(3) Choose a middle-level event and assess the weight of evidence for each of the bottom-level influences leading into this middle-level event.
(4) Assess the weight of evidence for this middle-level influence conditional on the bottom-level influences.
(5) Repeat steps 3 and 4 for the remaining middle- and bottom-level influences.
(6) Assess probabilities of the target event conditional on the middle-level influences.
(7) Calculate the unconditional probability of the target event and the unconditional weight of evidence of the middle-level influences.
(8) Compare these results to the holistic judgments of the assessors; revise the assessments as necessary to reduce discrepancies between holistic judgments and model results.
(9) Iterate through the above steps as necessary until the assessors have finished refining their judgments.
(10) Do sensitivity analyses on any remaining group disagreements; report either point estimates if disagreements are of no consequence, or ranges if disagreements are substantial.

In step 1, participants would describe the general setting in which the target event might occur, as well as all conditions leading up to the target event. Assessors are reminded that this description and statement of initial conditions form a context for their subsequent assessments and that these assessments are conditional on this context.

In the second stage, the target event is defined in such a way that its occurrence or non-occurrence is capable, at least theoretically, of confirmation without additional information. Thus, "rain tomorrow" is a poorly defined event, whereas "less than 0.1 cm of precipitation falls in a rain gauge located at weather station X" is a well-defined event.

In carrying out step 3, the assessors might begin by focusing attention on the left-most middle node, quality of information, and assess weights of evidence for the two bottom influences, design and procedures. This is done with reference to the specific definitions of these bottom influences.* For example, with respect to the design influence, the group of assessors must decide whether, on balance, the design of the particular power station is more similar to the good definitions or to the poor definitions (see items A1 and A2 in the list of bottom-level influences in Section D.3). The assessors may find it helpful to imagine a continuous dimension between good and poor and then try to determine where on this dimension this particular power station lies with respect to the event in question. In short, the assessors are judging numbers that reflect the relative weight of evidence as between the poles of the design influence.

The weight of evidence would also be judged for the next bottom node, meaningfulness of procedures, but here six different factors, from realism to format, must be taken into account in making the judgment.

The weights of evidence placed on the poles of each dimension are assigned as numbers that sum to 1. Thus, by letting w1 represent the weight of evidence on the design being good and w2 represent the weight of evidence on the procedures being meaningful, the assessments for these two bottom nodes can be represented as follows:

             Good        Poor
Design       w1          1 - w1

             Meaningful  Not Meaningful
Procedures   w2          1 - w2

*For specific definitions, see Table E.2 in Appendix E.


Step 4 requires the assessment of probabilities for the quality of information, a middle-level influence, conditional on the lower-level influences. The poles of the two bottom-level influences combine to make four different combinations: good design and meaningful procedures; good design and not-meaningful procedures; poor design and meaningful procedures; and poor design and not-meaningful procedures. Each of these four combinations describes a hypothetical power station of the sort under consideration, and these hypothetical stations are kept in mind by the assessors when they determine the weight of evidence for the quality of information. This can be set out as follows:

If DESIGN & PROCEDURES is   then QUALITY OF INFORMATION is
                            HIGH    LOW        JOINT WEIGHTS
Good, Meaningful            w3      1 - w3     w1w2
Good, Not meaningful        w4      1 - w4     w1(1 - w2)
Poor, Meaningful            w5      1 - w5     (1 - w1)w2
Poor, Not meaningful        w6      1 - w6     (1 - w1)(1 - w2)

For example, w3 is the weight of evidence that the quality of information is high, given that design is good and the procedures are meaningful. Here high quality of information does not mean an ideally perfect power station; instead it means a power station in which both the design and the procedures are of a high, yet practically realizable standard. Neither does low quality of information mean some abysmally bad standard, but rather a standard that is minimally licensable. The assessments w3 through w6 capture possible interactions between design and procedures. This is a key feature of the influence diagram technology, and experience to date suggests that it is an important feature for human reliability assessment. For example, in some power stations good design may compensate to some extent for procedures that are not very meaningful, whereas if the design were poor the additional burden of procedures that were not meaningful could be very serious indeed.

At this point, a brief technical diversion from describing the ten-step procedure is warranted because it is now possible to illustrate the calculations that are involved in using influence diagrams. The weights are assessed in such a way that they are assumed to follow the probability calculus. Thus, the overall weights of evidence that would be assigned to those four hypothetical stations described at step 4 can be obtained by multiplying the two relevant weights of evidence. For example, the weight of evidence assigned to the actual power station under consideration being both good in design and meaningful in procedures is given by the product of w1 and w2. These are shown above as joint weights. Note that the product rule for probabilities is applied. The next stage in the calculation is to multiply these four joint weights by the weights w3 through w6 and then to add those four products to obtain the overall weight of evidence that quality of information is high for the power station under consideration. That is,

w(HIGH) = w3w1w2 + w4w1(1 - w2) + w5(1 - w1)w2 + w6(1 - w1)(1 - w2)
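This calculation can be sketched in a few lines of code. The sketch is our illustration; the function name and argument order are ours, not the report's.

```python
# Sketch of the w(HIGH) calculation: w1 and w2 are the bottom-level weights
# (design good, procedures meaningful); w3..w6 are the conditional weights
# that quality of information is HIGH for each of the four hypothetical
# stations in the step-4 table.
def w_high(w1, w2, w3, w4, w5, w6):
    joint = [w1 * w2,              # good design, meaningful procedures
             w1 * (1 - w2),        # good design, not-meaningful procedures
             (1 - w1) * w2,        # poor design, meaningful procedures
             (1 - w1) * (1 - w2)]  # poor design, not-meaningful procedures
    conditional = [w3, w4, w5, w6]
    # product rule for each joint weight, then the addition law over the four
    return sum(j * c for j, c in zip(joint, conditional))
```

With w1 = w2 = 0.5 the four joint weights are each 0.25, so w(HIGH) is simply the average of w3 through w6.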


Note that this calculation makes use of both the product and the addition laws of probability. It is the repeated application of these two laws that allows unconditional weights at higher nodes to be determined. The unconditional weights now determined for the quality of information will serve as weights on the rows of the matrix for the next-higher-level event, and the types of calculations just illustrated are repeated to obtain the unconditional probabilities for the target event.
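To make the arithmetic concrete, the propagation at one node can be sketched in a few lines of code. All of the numerical weights below are illustrative placeholders, not assessments taken from this study:

```python
# Propagating weights of evidence up one node of a STAHR influence diagram.
# All numerical values are illustrative placeholders only.

w1 = 0.8   # weight of evidence that the design is good
w2 = 0.7   # weight of evidence that the procedures are meaningful

# Conditional weights that quality of information is HIGH, one for each of
# the four hypothetical stations (good/poor design x meaningful/not):
w3, w4, w5, w6 = 0.9, 0.6, 0.5, 0.1

# Product rule gives the four joint weights; the addition rule sums them:
w_high = (w3 * w1 * w2
          + w4 * w1 * (1 - w2)
          + w5 * (1 - w1) * w2
          + w6 * (1 - w1) * (1 - w2))
w_low = 1 - w_high

print(round(w_high, 4))   # 0.724 with these placeholder weights
```

The same product-and-sum pattern is then repeated at each higher node of the diagram.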

Returning now to the ten-step procedure, step 5 requires that steps 3 and 4 be repeated for the rest of the middle- and bottom-level influences. Thus, weights of evidence would be assessed for the role of operations and for teams; then a matrix of conditional probabilities would be assessed for the organizational influence conditional on the lower-level influences. The same procedures would then be followed in making the necessary assessments for the personal factors.

Step 6 requires, for the first time, assessments of probabilities. However, these probabilities are for the target event conditional on the middle-level influences. In a sense, what is being assessed is conditional error rates; that is, assessors are giving their judgments about what the error rates would be under the assumption of particular patterns of influences. Since the quality of information can be either high or low, the organization can be either requisite or not, and personal factors can be either favorable or unfavorable, there are eight possible combinations of these influences. A separate error rate associated with the target event is assessed for each of those eight combinations. This is not a particularly easy job for assessors because they must keep in mind three different influences as well as their possible interaction. Favorable personal factors, for example, may well save the day even if the organization is not requisite, and may even compensate to some extent for low quality of information. Insofar as the middle-level influences interact, this stage in the assessment process is important, for it allows assessors to express the effect on error rates of these interactions.

Step 7 is best carried out by a computer, which can apply the multiplication and addition laws of probability to determine the unconditional probability of the target event as well as those of the next-lower influences.
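A sketch of the step 7 computation follows. It assumes, for illustration only, that the joint weight of an influence pattern can be formed as the product of the three unconditional influence weights; all numbers are hypothetical:

```python
# Step 7 sketch: unconditional target-event probability from the eight
# conditional error rates. The joint weight of each influence pattern is
# taken here, for illustration, as the product of the three unconditional
# influence weights; all numerical values are hypothetical.

# Unconditional weights that each middle-level influence is in its
# favorable state (information HIGH, organization REQUISITE,
# personal factors FAVORABLE):
p_info, p_org, p_pers = 0.724, 0.6, 0.8

# Assessed error rates for the target event, one per combination of
# middle-level influences (True = favorable state):
error_rate = {
    (True, True, True): 1e-4,   (True, True, False): 5e-4,
    (True, False, True): 1e-3,  (True, False, False): 5e-3,
    (False, True, True): 2e-3,  (False, True, False): 1e-2,
    (False, False, True): 2e-2, (False, False, False): 1e-1,
}

# Weight each conditional rate by the joint weight of its pattern and sum
# (repeated application of the product and addition laws):
unconditional = sum(
    rate
    * (p_info if info else 1 - p_info)
    * (p_org if org else 1 - p_org)
    * (p_pers if pers else 1 - p_pers)
    for (info, org, pers), rate in error_rate.items()
)

print(f"{unconditional:.4g}")   # 0.00517 with these hypothetical values
```

Note how the single worst pattern (all three influences unfavorable) dominates the total even though its joint weight is small, which is why the interactions assessed at step 6 matter.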

In step 8, the unconditional probabilities and weights of evidence for the middle-level influences are given to the group of assessors, who then compare these results to their own holistic judgments. Discrepancies are usually discussed in the group and revisions made as necessary to any assessment.

Step 9 indicates that iteration through the first eight steps may occur as individual assessors share their perceptions of the problem with each other, develop new intuitions about the problem, and revise their assessments. Eventually, when the sense of unease created by discrepancies between current model results and holistic judgments disappears, and when no new intuitions arise about the problem, model development is at an end, and the model can be considered requisite.

Since individual experts may still disagree about certain assessments, it is worthwhile in step 10 to do sensitivity analyses to determine the extent to which those disagreements affect the probability of the target event. An easy, but not entirely satisfactory, way to do this is first to put in all those assessments that would lead to the lowest probability for the target event and see what its unconditional value is, and then to put in all assessments that would lead to the largest probability, thus determining a range of possible results. The difficulty with this is that no individual in the group is likely to believe all of the most pessimistic or all of the most optimistic assessments, so the range established by this approach to sensitivity analysis is unduly large. It should not be too difficult, however, to develop easy and effective procedures for establishing realistic ranges for the probability of the target event, ranges that accommodate the actual variation of opinion in the group.
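The optimistic/pessimistic bounding procedure just described might be sketched as follows; the assessment ranges and joint weights are invented for illustration:

```python
# Step 10 sketch: crude sensitivity range for the target-event probability,
# formed by pushing every conditional error rate to its most optimistic and
# then its most pessimistic assessed value. Ranges and joint weights below
# are invented for illustration.

assessed_range = {        # (optimistic, pessimistic) error rate for each
    "pattern_1": (1e-4, 5e-4),   # combination of middle-level influences
    "pattern_2": (1e-3, 5e-3),
    "pattern_3": (2e-3, 1e-2),
    "pattern_4": (2e-2, 1e-1),
}
joint_weight = {"pattern_1": 0.4, "pattern_2": 0.3,
                "pattern_3": 0.2, "pattern_4": 0.1}

low = sum(joint_weight[k] * opt for k, (opt, pess) in assessed_range.items())
high = sum(joint_weight[k] * pess for k, (opt, pess) in assessed_range.items())

print(f"range: [{low:.4g}, {high:.4g}]")
```

As the text notes, the resulting interval is unduly wide, since no single assessor actually holds all of the extreme values at once.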

This has been only a very brief description of the stages that appear to be necessary for applying the influence diagram technology. As experience is gained with the STAHR approach, these steps no doubt will be modified and elaborated. The steps are certainly not intended as a rigid procedure to be followed without deviation. Instead, they should be thought of as an agenda that will guide the work of the group.

D.5. Group Processes

So far, little has been said about the group processes that form the "socio" component of the STAHR approach. A key assumption here is that many heads are better than one for probability assessments. Particularly for human reliability assessment in complex systems, there is unlikely to be any single individual with an unbiased perspective on the problem. Although each individual may be biased in his view, the other side of the coin is that each person has something worthwhile to contribute to the overall assessment. It is within the context of the group that different perspectives on the problem can most effectively be revealed and shared with others, so that the group's main function is the generation of assessments that take into account these different perspectives.

To ensure that all perspectives on the problem are fairly represented, it is important that a group climate be established within which information is seen as a neutral commodity to be shared by all, regardless of an individual's status or investment in the problem. The role of group consultant can be established to help create this climate. This individual needs to be conversant with the technical aspects of influence diagrams and with probability assessment, and to have a working knowledge of group processes. The group consultant should be seen by the group as an impartial facilitator of the work of the group, as someone who is providing structure to help the group think about the problem but is not providing any specific content. Although the group consultant needs some minimal acquaintance with the principles of nuclear power generation and with the key components in the plant itself, it is probably desirable that he not be a specialist in nuclear power; otherwise he might find it more difficult to maintain a neutral, task-oriented climate in the group. Thus, a major role for the group consultant is not to tell people what to think about the problem but how to think about it.

The other major role for the group consultant is to attend to the group processes and intervene to help the group maintain its task orientation. The group can easily become distracted from its main task because

viewpoints in the group will often be divergent. The cognitive maps that a design engineer and a reactor operator have of the same system may be quite different, yet each will at times insist on the validity of his particular viewpoint. The group consultant must help the group to legitimize each of these viewpoints and to explore them in generating useful assessments.

To a certain extent, adversarial processes may even operate in these groups. Operators will openly criticize certain aspects of design, and design engineers may well be contemptuous of procedures that they deem to be unnecessary if only people would operate the system properly. Trainers may be somewhat sceptical of the optimistic "can-do" attitude of the operators, while operators may feel that anyone who has not had "hands-on" experience in the real control room, rather than just simulator experience, is out-of-date at best and simply out of touch at worst. Unless the group consultant manages the group processes effectively, minor squabbles can easily turn into major confrontations that seriously divert the group from its effective work.

This discussion is not meant to imply that the group should be composed so as to reduce adversarial processes. On the contrary, an underlying assumption of the STAHR approach is that diversity of viewpoint is needed if good assessments are to be generated. Differences are to be confronted openly in the group and to be taken seriously regardless of the status of the holder of the viewpoint. Thus, diversity of viewpoint is a key criterion in composing the groups. As yet, we are not certain about the roles that should be represented in the groups, but it would appear that at least the following are necessary: group consultant, technical moderator to help direct the discussion on technical issues, trainer of nuclear power station operators, reliability and systems analyst, thermohydraulics engineer, possibly one or two other engineers with specialized knowledge of the power station, and, of course, reactor operators. Further work is needed to establish exactly who the problem owners are for these human reliability assessments.

D.6. Summary Statement

This appendix has described the STAHR approach as it was originally conceived for application to the assessment of the reliability of operator actions at a nuclear power station during potential PTS events. As will be apparent from Appendix E, the first field test of the methodology resulted in some modifications of the detailed definitions of the bottom-level influences, and further revisions are anticipated as the approach is more generally applied.
