ML20080T152

Testimony of G. Apostolakis Re Design QA. IDVP Weak Since Program Fails to Recognize Implications of Decision to Cast IDVP in Probabilistic Terms & Fails to Use Appropriate Probabilistic Analysis Principles & Methods
Person / Time
Site: Diablo Canyon (Pacific Gas & Electric)
Issue date: 10/16/1983
From: Apostolakis G
State of California; University of California, Los Angeles, CA
To:
Shared Package
ML20080T133 List:
References
ISSUANCES-OL, NUDOCS 8310200184
Download: ML20080T152 (21)


Text


NUCLEAR REGULATORY COMMISSION

BEFORE THE ATOMIC SAFETY AND LICENSING APPEAL BOARD

In the Matter of                        )
                                        )  Docket Nos. 50-275 O.L.
PACIFIC GAS AND ELECTRIC COMPANY        )              50-323 O.L.
                                        )
(Diablo Canyon Nuclear Project,         )
 Units 1 and 2)                         )


DIRECT TESTIMONY OF GEORGE APOSTOLAKIS

Q. Please state your name.

A. George Apostolakis.

Q. What is your business address?

A. 5532 Boelter Hall, University of California, Los Angeles, California 90024.

Q. What is the purpose of your testimony in this proceeding?

A. I have been asked to render my professional opinion on the applicability of probability theory, decision theory, and statistics to the verification of the design of a nuclear power plant and to evaluate the adequacy of the Independent Design Verification Program (IDVP) to insure the adequacy of the design of Diablo Canyon Nuclear Power Plant, Units 1 and 2. Specifically, my testimony pertains to contentions 1 and 7.

I.

QUALIFICATIONS

Q. What is your present position?

A. I am a Professor in the School of Engineering and Applied Science at the University of California, Los Angeles, where I have taught since July 1974. I am a member of the faculty of the Mechanical, Aeronautical, and Nuclear Engineering Department.

Q. Please summarize your education.

A. I hold a Ph.D. in Engineering Science and Applied Mathematics and an M.S. in Engineering Science, both from the California Institute of Technology. I also hold a diploma in Electrical Engineering from the National Technical University, Athens, Greece.

Q. Are you a member of any professional organizations?

A. I am a member of the American Nuclear Society and the Society for Risk Analysis. I am a past recipient of the Mark Mills Award from the American Nuclear Society.

Q. Please summarize your work experience in the fields of risk assessment and nuclear engineering.

A. For the past ten years, I have been continuously engaged in research in risk assessment, including the conduct of probabilistic risk analyses for nuclear power plants; probability theory, decision theory, and statistics; reliability analyses; and nuclear engineering.

Since 1977, I have served as a consultant to Pickard, Lowe and Garrick, Inc., where I participated in probabilistic risk analyses of the Oyster Creek, Zion, and Indian Point nuclear generating stations; I also served for Pickard, Lowe and Garrick on the technical review board for the Seabrook Probabilistic Safety Study. For the past three years, I have also served as a consultant to the Bechtel Power Corporation on probabilistic risk assessment. In the past I have served as a member of the Peer Review Panel for the Load Combination Program of the Lawrence Livermore National Laboratory, as a consultant to the Seismic Safety Margins Research Program of Lawrence Livermore National Laboratory, as a consultant on risk methodology for geologic disposal of radioactive waste for the Sandia National Laboratories, and as a member of a research review group for the Probabilistic Analysis Staff of the U.S. Nuclear Regulatory Commission.

My research work at UCLA has been both theoretical and applied. I have conducted research on the foundations and methods of probabilistic risk analysis, on data analysis, on fire risk analysis, and on the general area of risk-benefit. I have developed and taught two courses on probabilistic risk analysis. I have also taught courses in nuclear engineering as well as basic engineering courses.

Q. Do you regularly publish in the professional literature?

A. Yes. I have edited one book and contributed to another on risk analysis. I have published numerous articles on probabilistic risk assessment, nuclear engineering, and related matters. I also serve as a reviewer for Nuclear Safety, Nuclear Science and Engineering, Nuclear Technology, IEEE Transactions on Reliability, AIChE Journal, Risk Analysis, and Reliability Engineering. The list of my publications has been submitted separately in my affidavit of qualifications.


II.

PROBABILITIES AND STATISTICS

Q. What do you mean by statistical inference?

A. Statistical inference is the process by which evidence is incorporated in our body of knowledge. This body of knowledge is, in general, expressed by probabilistic statements.

Q. How is evidence incorporated in our body of knowledge?

A. I view this question in the context of the Bayesian (or Subjectivistic) Theory of Probability. According to this theory, we always have some degree of knowledge of any uncertain event of interest. Bayesian Theory asserts that our degree of knowledge can be expressed in terms of probabilities. As information becomes available, we modify our state of knowledge; that is, we revise our probabilities. This modification is done in a consistent manner, using Bayes' Theorem.
Q. What do you mean by "evidence"?

A. " Evidence" can be any kind of information. This includes '

20 what is commonly referred to as " statistical evidence" as 21 well as such qualitative information as opinions of people, 22 scholarly literature, the results of experiments, etc.

Q. What does the term "statistical evidence" mean?

A. For present purposes, I use the term "statistical evidence" to refer to information concerning the frequency with which a given attribute is observed in a specified population. This would include how many redheads we find in a given group of people, the number of times a coin turns up heads in a sequence of tosses, the proportion of American families within a given income bracket, and so on.

Q. What is the relationship between frequencies and probabilities?

A. Frequencies are observable quantities in a given sample or population. Often we express a frequency as a proportion of a sample or a population. Probabilities, on the other hand, are not observable. They are numerical measures of degrees of belief. In other words, frequencies are objective facts and probabilities are subjective beliefs.

Q. What is the distinction between probability theory and statistics?

A. Statistics is part of probability theory. Probability theory is a set of rules that, if obeyed, guarantee coherence. Statistics is that part of probability theory that deals with the coherent use of evidence.

Q. What do you mean by "coherent"?

A. Human beings dealing intuitively with uncertainty have been found to make inconsistent and unreliable use of the information at their disposal. Probability theory, or, more generally, decision theory, requires them to make their reasoning process, their assumptions, and their use of information consistent with certain principles of rational behavior. This makes the decision process explicit and visible.


Q. What is the virtue of making the process explicit and visible?

A. Probabilities are inherently subjective, as are decisions made under uncertainty, leading to differences of opinion among people. By making the process explicit and visible, we allow people holding different opinions, and third parties observing the differences, to approach resolution of the differences on a reasoned basis.

Q. What is the nature of the differences in opinion among people?

A. People differ in their assessments of probabilities. They also differ in their assessments of the costs and benefits of different consequences of decisions.

Q. What are the reasons for different probability assessments?

A. Different decision makers may have different states of knowledge. In addition, there is evidence that human beings have great difficulty expressing their knowledge in terms of probabilities.

There is a substantial body of evidence indicating that people perform poorly in assessing probabilities, that is, in dealing coherently with a body of incomplete evidence. For example, Slovic, Fischhoff, and Lichtenstein, in their article "Facts and Fears: Understanding Perceived Risk" (published in Societal Risk Assessment, R.C. Schwing and W.A. Albers, Jr., Editors, Plenum Press, 1980), state, on the basis of their own experiments and research and those of others, that people tend to deny uncertainty, misjudge risks, and express unwarranted confidence in their judgments. The same authors show that expert assessments are also susceptible to biases, particularly underestimation of risks.

Kaplan, Garrick, Duphily, and I found similar evidence of expert underestimation of failure rates in a study we did of the performance of several components of a nuclear plant. We found, somewhat to our surprise, that the statistical evidence of failures at that plant indicated substantially higher failure rates than the experts had predicted. (Apostolakis, Kaplan, Garrick and Duphily, "Data Specialization for Plant Specific Risk Studies," Nuclear Engineering and Design, 56:321-329 (1980).)

For rare events the difficulties people have assessing probabilities can lead to dramatically different opinions. Of course, this is one area where statistical evidence can be most useful. Bayes' Theorem tells us that when statistical evidence is strong, the prior beliefs (i.e., beliefs prior to obtaining the statistical evidence) become unimportant and the probability assessments are controlled by this evidence, that is, they are independent of the assessor. All this, of course, assumes that different assessors interpret the evidence in the same way, something that is not always true.
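A simple illustration of this effect, using hypothetical numbers chosen only for arithmetic convenience: suppose an assessor's prior beliefs about a failure frequency p are expressed as a Beta(a, b) distribution. After observing k failures in n demands, Bayes' Theorem yields a Beta(a + k, b + n - k) posterior, whose mean is (a + k)/(a + b + n). Two assessors starting from Beta(1, 9) and Beta(9, 1) priors would, after 2 failures in 10 demands, hold posterior means of 0.15 and 0.55, still far apart. After 200 failures in 1,000 demands, their posterior means would be approximately 0.199 and 0.207, both essentially the observed frequency of 0.20; the strong statistical evidence has made the choice of prior unimportant.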

III.

DESIGN ERRORS

Q. Has there been any formal research done on the frequency and significance of design errors in nuclear power plants?

A. Yes. Three studies are particularly pertinent here:


(1) J. R. Taylor, "A Study of Failure Causes Based on U.S. Power Reactor Abnormal Occurrence Reports," in Reliability of Nuclear Power Plants (Proceedings of a Symposium, Innsbruck, April 14-18, 1975), pp. 119-130, Unipub, Inc., N.Y., 1975. Taylor studied Abnormal Occurrence Reports (now known as Licensee Event Reports (LERs)) submitted to the Atomic Energy Commission and found that a large proportion of the failures in U.S. plants involved design, installation, and operation errors, with an unexpectedly large proportion of the incidents involving multiple failures. Of 490 failures, he classified 36 percent as being due to design errors. The largest single cause of design errors was found to be unforeseen conditions.

(2) T. M. Hsieh and D. Okrent, "On Design Errors and System Degradation in Seismic Safety," in Transactions of the 4th International Conference on Structural Mechanics in Reactor Technology, San Francisco, Calif., August 15-19, 1977, T. A. Jaeger and B. A. Boley (Eds.), Vol. K, Paper K9/4, Commission of European Communities, Luxembourg, 1977. Hsieh and Okrent investigated the possible number and influence of seismic-related design errors by examining the historical record of such errors for a specific reactor. Their estimates of the core melt frequency were substantially higher than those of the Reactor Safety Study (WASH-1400), which had not taken into account the possibility of design errors.


(3) P. Moieni, G. Apostolakis, and G. E. Cummings, "On Random and Systematic Failures," Reliability Engineering, 2:199-219 (1981). We analyzed the LERs for two power reactors plus 100 design errors compiled by Oak Ridge National Laboratory. We found that 18 percent of all licensee events at one of the two reactors and 13 percent at the other were due to design errors. We found that the most common design error was the failure to foresee environmental conditions. That design error alone accounted for nearly as many LERs as all operational procedure errors.

It is important to keep in mind that these results are based on each group of researchers' definitions of the term "design error" and on their interpretation of the events reported. Despite these reservations, there is a great deal of useful information in these studies. For example, they show that design errors are a more frequent cause of failures in nuclear power plants than has been widely assumed.

Q. What are the typical causes of design errors in nuclear power plants?

A. The cited studies indicate that major causes appear to be unforeseen environmental conditions, specification errors, and wrong analyses.

Q. Do these studies show that design errors are inevitable or widespread in commercial reactors?

A. Not necessarily. Each of these studies has examined previously identified operational failures and classified them in various ways. There is no evidence from which one could conclude how representative the plants experiencing these events are of all commercial U.S. reactors. I know of no study of how frequent design errors are in general and of what their impact on the margin of safety is.

" So while these studies show that design errors are a more significant factor in plant failures than was previously 8

thought, they do not tell us how frequent and how important 9

to safety such errors are.

Q. Is there any basis for evaluating the safety significance of the design errors described in the literature?

A. One must be very careful about the meaning of the term "safety significance." If by that we mean actually causing injuries to the public, then none of the errors were safety significant. But if we are speaking about an error having the potential for such harm under possible conditions that were not actually experienced before the error was detected, then it is more difficult to dismiss any error as not being safety significant.

I think that the most meaningful way to investigate these issues is based on the reduction in the presumed margin of safety. The only way I know to practically evaluate the safety significance of an error in these terms is to conduct a probabilistic risk assessment. This enables one to test the sensitivity of a given facility to designated system and component failures. In my experience, PRAs sometimes reveal failure paths not perceived by knowledgeable engineers involved in the design of the plant. Furthermore, the potential of multiple failures of redundant components due to design errors cannot be fully assessed without a PRA.

Q. In the probabilistic risk assessments with which you are familiar, how have design errors been treated?

A. Design errors have been treated only indirectly. By this I mean that, while something is usually done, the analysis is not as rigorous as other parts of PRAs are. For example, Appendix X to the Reactor Safety Study (WASH-1400, NUREG 75/014, October 1975) is entitled "Design Adequacy." The study team felt that they needed additional assurance that certain components would function as intended under severe conditions. Part of the reason for this was that the failure-rate distributions did not reflect experience with such environments. The design adequacy assessment was performed by the Franklin Institute Research Laboratories, which checked a sample of components, systems and structures. They found only minor problems, e.g., errors in assumptions used to calculate stresses and inadequate tests. The consequence of these errors was assessed to be a reduction in the safety margin.

In more recent PRAs, like those for the Zion and Indian Point nuclear power plants, the issue of design errors was in the minds of the analysts when they quantified their judgment, so that very low values for failure rates were avoided. Design errors were part of the "other" category of failure causes, which means causes not explicitly quantified. The notion of the "other" category has been proposed by Kaplan and Garrick (see Risk Analysis, vol. 1, p. 11, 1981), who were among the principal investigators performing these PRAs.

IV.

VERIFICATION OF DESIGN USING PROBABILITY THEORY

Q. Do you know of any case where the adequacy of a nuclear power plant's design was demonstrated using sampling?

A. No. There have been the studies of design errors I described above. But to the best of my knowledge, no nuclear power plant has ever been licensed using a sampling verification program as a substitute for a quality assurance program that was found to be inadequate.

Q. What is the significance of the decision to verify the design by sampling?

A. Ordinarily, licensing decisions are framed in deterministic terms, i.e., does the plant design comply with the NRC criteria? A relatively straightforward answer to this question could be obtained by checking the entire design and fixing any errors found. If one decides to verify the design by sampling less than 100 percent of the design, then one transfers the problem into the realm of probabilities, i.e., one is assessing the probability of an affirmative answer to the original question regarding compliance with the NRC criteria. In other words, one is no longer asking the deterministic question, "Does the design meet the licensing criteria?" Instead, one is asking, "What is the probability that the design meets the licensing criteria?" Or, more precisely, one is asking, "What is the probability that there are no deviations from the criteria in the existing design?" The nature of the problem has now been considerably changed. One is now explicitly accepting the possibility of a deviation from the licensing criteria remaining undetected.
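A simple calculation, with numbers assumed only for illustration, shows what this acceptance entails: if the unsampled portion of the design is regarded as N elements, each independently having a small probability p of containing a deviation, the probability that no deviation remains is (1 - p)^N. For N = 1,000 and p = 0.001, this is approximately 0.37, so even a low per-element error rate leaves a substantial probability that some deviation has gone undetected. The particular numbers and the independence assumption are mine; the point is that, once sampling is chosen, a question of this probabilistic form must be answered explicitly.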

Q. Can statistical techniques make a contribution to a program to verify the design of a nuclear power plant?

A. Yes, given my earlier discussion of statistics as part of probability theory. Once the decision has been made to characterize the problem in probabilistic terms, statistical techniques enable us to make full use of the information that we have available and furnish the discipline and guidance that insure we are using the data properly.

Q. How do statistical techniques do so?

A. These methods can provide guidance to the decision maker concerning both the qualitative aspects of the problem (e.g., what kinds of errors have been made, what can be done about them, etc.) and the quantitative aspects (e.g., how likely errors of a certain type are, how many errors remain undetected, etc.).

In this way, probability theory and statistics further the goal of making the analysis and evaluation explicit and visible.

Q. Is it possible to estimate the frequency of design errors in a nuclear power plant using statistical techniques?


A. Yes. Again, one has to be very careful with one's terminology. Because there is no general definition of "design errors," a definition would have to be established at the outset of the study. The definition would have to correspond to the purpose of the study and be precise enough to permit consistent classification of observations. These requirements are not substantially different from the requirements for any engineering study, whether or not statistics are used.

Assuming, however, that we are working with well-defined events, like selecting the wrong design pressure, we could, then, consider the universe of such selections and apply random sampling to estimate the frequency of such errors.
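As an illustration of the kind of estimate this would support (the figures are hypothetical and are not drawn from the IDVP record): suppose the universe consists of 2,000 such selections and a random sample of 200 of them is checked, of which 10 are found to contain the defined error. The natural estimate of the error frequency is 10/200, or 5 percent, and because the sample was drawn randomly, standard statistical methods can attach a quantified uncertainty to that estimate and extend it to the roughly 100 errors one would then expect in the universe as a whole.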

Q. What is a "random sample"?

A. A random sample of a population is one in which each element of the population has an equal chance of being drawn for the sample.

Q. What is "judgmental sampling"?

A. This is not a term I had encountered before my involvement in this case. I gather from the IDVP materials I have read that the IDVP uses this term to refer to the process of selecting elements from the population by using engineering judgment.

Q. Are both kinds of sampling used in statistical analysis?

A. There are places for the use of informed judgment, including engineering expertise, in a statistical study. For example, judgment is used to formulate hypotheses. However, once a population is identified for study, samples are drawn from the population randomly.

Q. Why?

A. In statistical terms, any sample that is not drawn randomly is suspect of biases. Once one departs from random selection, the danger exists that the selection mechanism contains a bias, presumably unintended, that will lead to an unrepresentative sample and results that cannot validly be generalized to the population from which the sample was drawn.

Q. Can you state a pertinent example?

A. There are many well known examples of biased samples rendering invalid results. One of the best known is the Presidential preference poll taken by the Literary Digest before the 1936 election. Over two million respondents to the poll showed a preference for Landon over Roosevelt by a 57% to 43% margin. In the election, President Roosevelt got 62% of the vote.

Any time one departs from random sampling one hazards similar errors. For example, it has been stated that the IDVP sampled the Diablo Canyon design work emphasizing complex designs on the assumption that those were the designs where errors were most likely to be found. However, it is entirely possible that the managers who oversaw the design work recognized the complex problems and assigned them to the most competent engineers and designers. If so, sampling in this way could underrepresent the work of those people most likely to make errors.


Q. Are you saying that what the IDVP calls judgmental sampling has no place in a design verification program?

A. No. If one has information leading one to suspect the location or type of errors, that information should be exploited. But I do not believe that a sample drawn non-randomly can validly be used to generalize about the frequency of errors in the unsampled portion of the population.

V.

EVALUATION OF THE IDVP

Q. What have you reviewed concerning the Diablo Canyon Independent Design Verification Program?

A. Parts of the Phase II Program Management Plan, the IDVP Final Report, NUREG-0675 (Safety Evaluation Report, Supplement 18), the IDVP Program Management Plan for Phase II, Interim Technical Reports 1, 8, 34, and 35, and certain depositions and interrogatory answers.

Q. What is your understanding of how the IDVP sought to verify the adequacy of the non-seismic design?

A. Three systems were selected (the auxiliary feedwater system, the control room ventilation and pressurization system, and the safety-related portions of the 4160-V electrical distribution system). I am told that the IDVP verified completely the design of these systems in Unit 1. The IDVP examined the design of these systems and identified errors. It grouped these errors into classes according to whether or not the errors caused criteria or operating limits to be exceeded.

The IDVP then sought to group some of these errors into "generic concerns." Five generic concerns were raised and all systems where these could apply were verified. No other samples were taken.

On the basis of this examination, the IDVP drew conclusions about the adequacy of the overall design of Unit 1, including the systems not sampled.

Q. In your opinion, did the IDVP proceed in an appropriate way?

A. It is not clear to me why they chose to sample and use probabilistic arguments rather than a full deterministic review. Given, however, that they decided to sample, the available statistical methods, particularly random sampling, that would justify extrapolation of their findings to parts of the plant not sampled, have not been used.

Q. In your opinion, was the IDVP's judgment concerning the five generic concerns sound?

A. I do not have enough information to judge. I do recognize that issues like this involve extensive use of judgment. Therefore, different analysts may classify errors in many different ways. Nevertheless, I find the presentation of the IDVP's classification unconvincing.

For example, the selection of system design pressure, temperature, and differential pressure across valves is identified as a generic concern. I can see a more general concern being the selection of system design parameters, which would also include other variables, such as stress, enthalpy, humidity, etc. Since the literature I cited above suggests that incorrect selection of design parameters in general is a common source of errors, I find no adequate justification for limiting this generic concern to incorrect selection of pressures, temperatures, and differential pressures across valves.

As a second example, it is stated on page 6.3.4-2 of the IDVP Final Report that three EOIs (8001, 963 and 1069) involve the misapplication of computer programs. Because there was no commonality between the programs involved in EOI 8001 and the other pair, and because the types of errors were different, a generic concern was not identified. It may be reasonable, however, to identify "misapplication of computer codes" as a generic concern.

Q. What is the significance of the fact that the IDVP found what it called "random errors," that is, errors that were not covered by the five generic concerns?

A. If the three sampled systems were really representative of the unsampled systems, this implies that there are similar errors remaining to be found in the unsampled parts of the plant. On the other hand, if the three systems are unrepresentative, we have almost no information about the unsampled elements of the design and no basis for confidence in the adequacy of the design.

Q. Is the safety significance of the errors uncovered relevant?


A. It depends on what the issue is. If the issue is whether the plant's design meets licensing requirements, safety significance of the design errors is not relevant.

If the issue is the safety of the plant, then safety significance of errors is obviously relevant, but, as I stated earlier, the only way I know to perform such an evaluation is in the context of a PRA.

Q. In your opinion, does the IDVP's work provide a basis for estimating the number of as yet undetected design errors?

A. No. The failure to use random sampling techniques makes a reliable extrapolation impossible and creates the suspicion that there may be errors whose types are not known yet. Furthermore, the same lack of random sampling does not allow the estimation of error frequencies or absolute numbers. The design of the IDVP was not amenable to providing a basis for estimating frequencies.

Q. Does the IDVP provide a basis for concluding that the rate of undetected errors is acceptable?

A. No. To decide that a given rate of errors is acceptable, one must know two things: what the rate of errors remaining in the plant is and what rate is acceptable. For the reasons I have just given, one cannot get from the IDVP's work an estimate of the rate of remaining errors at Diablo Canyon. And nowhere have I seen anyone attempt to set and justify an acceptable rate. The decision that I identified earlier, namely, to recast the problem in probabilistic terms, has created the need to have a criterion for acceptability. The issue of an acceptable rate of design errors has not been studied and resolved.

Q. Could one not attempt to set a rate that provides reasonable assurance of safety?

A. The term "reasonable assurance" is not defined. This term is usually used in NRC regulatory matters to refer to the level of assurance sought in setting the design criteria. Thus, we say that the criteria, if met, will provide a reasonable assurance of safety. It would be a significant departure to talk about a reasonable assurance that the criteria are even met. Then one is talking about a reasonable assurance of meeting license criteria that, if met, would provide a reasonable assurance that the plant is safe. This is a novel notion, the implications of which are not obvious.

Q. What can be said about the adequacy of Diablo Canyon Unit 2 from the verification program for Unit 1?

A. I have already said that the findings of the IDVP in Unit 1 cannot be generalized to the portions of Unit 1 not examined. That is obviously true of Unit 2, for which the IDVP does not have a sample at all.

Q. Do we know whether the rates and distribution of errors in the two units are the same?

A. No. We know of certain similarities and certain differences between the two units. To be able to say anything about the error rates in the two units, random samples would be needed from both units.


Q. What can now be done to achieve confidence in the design of Diablo Canyon?

A. As a first step, the decision to cast the problem in probabilistic terms should be fully understood. Given the decision to verify by sampling, the objectives of the study and the decision criteria should be explicitly stated, and the populations should be defined. Random samples should be drawn to determine the nature and frequency of the errors. This would permit one to draw valid conclusions about the design as a whole.

VI.

CONCLUSION

Q. How would you summarize your evaluation of the IDVP's work?

A. In general, it appears that a great deal of good engineering work has been done. In my opinion, the greatest weakness of the IDVP effort has been its failure to recognize the implications of the decision to cast the verification program in probabilistic terms and its failure to use the principles and methods appropriate to a probabilistic analysis. These shortcomings are particularly manifested in the lack of explicit and visible decision rules and the failure to use random samples.
