ML20205C606

Person / Time
Site: Braidwood
Issue date: 08/31/1986
From: Del George L., Frankel M. (City College of New York, New York, NY; Commonwealth Edison Co.)
Shared Package: ML20205C591
References: OL, NUDOCS 8608120404
Download: ML20205C606 (87 pages)
UNITED STATES OF AMERICA
NUCLEAR REGULATORY COMMISSION
BEFORE THE ATOMIC SAFETY AND LICENSING BOARD

In the Matter of                    )
                                    )
COMMONWEALTH EDISON COMPANY         ) Docket Nos. 50-456
                                    )             50-457
(Braidwood Station Units 1 and 2)   )

REBUTTAL TESTIMONY OF
LOUIS O. DEL GEORGE
MARTIN R. FRANKEL

August 1986
REBUTTAL TESTIMONY OF LOUIS O. DEL GEORGE
(ON ROREM Q.A. SUBCONTENTION 2)
(Harassment and Intimidation)

August 1986
TESTIMONY OF LOUIS O. DEL GEORGE
(ON ROREM Q.A. SUBCONTENTION 2)
(Harassment and Intimidation)
Q.1. Please state your full name for the record.
A.1. Louis Owen Del George.
Q.2. Who is your employer and what is your occupation?
A.2. I am employed by Commonwealth Edison Company (CECO) in its Corporate Offices in Chicago, Illinois.
I am an Assistant Vice-President, responsible for Engineering and Licensing activities.
Q.3. What are your responsibilities in this position?
A.3. In this capacity, I am responsible for all engineering activities related to the Company's operating nuclear reactors at Dresden, Quad Cities, Zion and LaSalle County Stations, as well as Byron Unit 1. The engineering organization that reports to me maintains functional oversight of the engineering activities related to the reactor facilities under construction. The purpose of this oversight is to provide for the uniform application of Commonwealth Edison's engineering procedures at both our operating nuclear plants and nuclear plants under construction.
In addition, I am responsible for all nuclear licensing activities for the Company's operating nuclear facilities and for the nuclear reactors which Commonwealth Edison is currently constructing at Byron and Braidwood.
Q.4. Please state your education and professional experience.
A.4. I received a Bachelor of Science Degree in Engineering Science from the Illinois Institute of Technology in 1970. I also received a Juris Doctor degree from the Chicago-Kent College of Law of the Illinois Institute of Technology in 1977.
I began my professional career at the Bettis Atomic Power Laboratory in 1969 where I held various positions of increasing responsibility related to the design and fabrication of nuclear reactor internals.
While employed at the Laboratory, I was appointed to the Shock and Vibration Design Review Committee which assessed the adequacy of vibration design practices for all pressurized water reactor plants designed at the Laboratory, including the Shippingport facility. I also attended the Laboratory's Reactor Engineering School which provided graduate level instruction in the design of nuclear power systems.
In 1974, I joined Commonwealth Edison and have held positions of increasing responsibility in the Station Nuclear Engineering and Licensing Departments. In connection with my engineering experience, I managed numerous backfit projects related to the Dresden and Quad Cities Stations between 1974 and 1978.
In connection with my licensing experience, from 1978 to 1981, I was responsible for all licensing activities related to the LaSalle County Station including development of the Company's response to all NRC questions concerning design and construction activities. In addition, I participated in the development of numerous construction verification programs developed to address questions raised concerning the adequacy of construction activities.
In July, 1981, I was appointed Director of Nuclear Licensing for Commonwealth Edison at which time I assumed responsibility for all licensing activities related to the Company's nuclear facilities, both operating and under construction. In this capacity, I participated directly in major decisions affecting the licensing of Braidwood Station including management meetings and enforcement conferences requested by the Nuclear Regulatory Commission.
In January, 1983, I assumed the position of assistant to the Assistant Vice-President of Engineering and Licensing with primary responsibility for coordinating all Company positions affecting the licensing of Byron Station, including hearings before the Atomic Safety and Licensing Board. I gave testimony to that Board in March, 1983 and July, 1984 on matters related to Commonwealth Edison's nuclear licensing activities and the Byron Quality Control Inspector Reinspection Program (QCIRP), which I helped develop. I directed Commonwealth Edison's review of that Program and its results, and I directed the preparation of the report documenting those results and the Company's conclusions.
In November, 1983, I assumed my current position of Assistant Vice-President. In this position, I am responsible for all engineering activities at Dresden, Quad Cities, Zion, LaSalle County and Byron Unit 1.
This responsibility includes the direction of all safety related modification work at those facilities.
I am also responsible for managing communications between Commonwealth Edison and the Nuclear Regulatory Commission. This responsibility involves directing those Edison employees who have any contact with the Nuclear Regulatory Commission, and assuring that all questions raised by that agency and directed to any Commonwealth Edison nuclear facility are understood, thoroughly reviewed and effectively answered. In this regard, I and the licensing personnel reporting to me review Commonwealth Edison's commitments to the NRC Staff to ensure that the Company's actions are adequate and responsive to the NRC Staff's questions or concerns. In this capacity, in 1984 I coordinated the efforts of several Commonwealth Edison senior managers who developed the Braidwood Construction Assessment Program (BCAP). Subsequently I monitored the implementation of that program by means of monthly technical reviews which I directed, and in which those same senior managers participated.
Since March 1985, I have been responsible for directing all Commonwealth Edison's technical and licensing actions in connection with litigation of the quality assurance contention raised by intervenor Bridget Rorem, et al.
Q.5. Mr. Del George, what is the purpose of this testimony?
A.5. The purpose of this testimony is to show how the results of the BCAP Construction Sample Reinspection (CSR) and the results of the Pittsburgh Testing Laboratory (PTL) Overinspection Program address the claims of pervasive production pressure and harassment and intimidation of L.K. Comstock (LKC) QC inspectors which have arisen in this proceeding. The CSR and the PTL overinspection program were conceived, designed and carried out independently of each other, and for reasons unrelated to intervenors' claims of production pressure, harassment and intimidation. Nevertheless the evaluation documented in my testimony is based on the reinspection data compiled in these two programs.

It is my opinion that the data from these two repeat inspection programs can be used to reliably evaluate what effects, if any, the alleged pervasive production pressure, harassment and intimidation had on the ability of LKC QC inspectors to identify quality problems, to initiate, recommend or provide solutions to any such problems, and to verify implementation of such solutions.
Q.6. What have you done to prepare this testimony?
A.6. In general, my preparation for this testimony entailed a review of the harassment contention submitted by intervenors in this proceeding, the regulations applicable to the conduct and activities put in issue by that contention and information relevant to the alleged harassment. That information included:
1. Deposition transcripts of all L.K. Comstock (LKC) QC inspectors identified by intervenor as having been involved in the specific activities included in the contention, and testimony before this Board given by these QC inspectors at the time my testimony was prepared.

2. The BCAP Program document and the results of the BCAP Construction Sample Reinspection related to LKC.

3. The Pittsburgh Testing Laboratory overinspection program and the results of those overinspections for the period from July 1982 to June 1986.

4. Inspection checklists employed by LKC, BCAP and PTL related to electrical construction activities at Braidwood.

5. Construction verification programs, with which I was involved at other nuclear sites, for related construction activities.

6. Rebuttal testimony prepared by other Commonwealth Edison witnesses in connection with this proceeding.
Q.7. What is your understanding of intervenors' claims with respect to L.K. Comstock QC inspector performance?
A.7. It is my understanding that intervenors claim "there is historic, tangible, powerful and pervasive production pressure at the Braidwood facility" (Guild at Tr. 7903-4), that this pressure was manifested through acts of harassment and intimidation by QC management of QC inspectors, and that these specific acts as well as the general management climate affecting the LKC quality control function provide a sufficient basis for inferring that "there have been effects of such pressure and harassment and intimidation" (Guild at Tr. 7934).
Most of intervenors' allegations in this case involve Mr. R. Saklak and Mr. I. DeWald, who assumed their QC management positions in July 1982 and August 1983 respectively. The most specific allegations of undue pressure, harassment and intimidation involve the alleged imposition of production quotas during the reduction of the LKC QC backlog in 1984, the transfer of inspector J. Seeders at the end of September 1984, the termination of inspector W. Puckett in August 1984, and the R. Snyder incident on March 28, 1985 which led to 24 LKC QC inspectors making allegations to the NRC on March 29, 1985. Intervenors characterize these specific allegations as "fitting into a larger scheme" caused by production pressure from Commonwealth Edison (Guild, Tr. 7908).
It is my understanding that the intervenors urge this Licensing Board to infer that the alleged production pressure, the alleged lack of independence from cost and schedule considerations, and the alleged incidents of harassment, intimidation and retaliatory treatment have had a deleterious effect on the adequacy of QC inspections. Intervenors argue that this presumed deleterious effect on QC inspections precludes licensing the Braidwood facility (Guild at Tr. 7903, 7912).
Q.8. How do the results of the CSR and the PTL overinspections pertain to intervenors' claims?
A.8. Both the CSR reinspections and PTL overinspections provide the results of a large number of repeat inspections of LKC QC accepted work. These repeat inspections were produced by qualified inspectors independent of the LKC QA/QC organizations. The extent of the deficiencies found through these repeat inspections can be used as a measure of the effectiveness of the original LKC QC inspections. In addition, since this data spans the entire time period of electrical construction at Braidwood, variations over time in the reinspection results can be used to trace the impact, if any, on inspector performance of the events which intervenors characterize as improper production pressure, harassment and intimidation.
Intervenors' witness Dr. Arvey has recommended a reinspection of some sort directed towards the hardware installed and QC inspected by LKC. Dr. Arvey appeared to indicate that a sample reinspection program should provide a clear indication of the existence of a true problem, if harassment and intimidation occurred and through "observational learning" in the work place reduced the effectiveness of QC inspector performance. In response to a question from Judge Callihan, Dr. Arvey stated that he did not know enough about the hardware to give an opinion on how such a reinspection program should be designed and implemented. (Tr. 4433) However, Dr. Arvey opined that if a 10% random sample of hardware items (consisting for example of 1000 welds) were reinspected and 30 percent of those welds had "significant defects," that would be a clear indication that additional remedial action should be undertaken. (Arvey, Tr. 4436)
It is my judgment, based on my familiarity with the electrical hardware at Braidwood and my experience in the design and review of reinspection programs, that the CSR and the PTL overinspection program are adequately structured and have generated sufficient information to provide a reliable answer to intervenors' claims. I explain the reasons why I believe the CSR and the PTL overinspection program together are adequate for this purpose later in this testimony.
Q.9. Based on intervenors' claims and the testimony of intervenors' experts Dr. Ilgen and Dr. McKirnan, what would you expect the results of the CSR and PTL overinspections to show?
A.9. I would expect those results to show two things: unacceptable work; and trends or variations of the results over time which could be correlated with periods of intense production pressure and acts of alleged harassment and intimidation.
If, as has been alleged, the production pressure on LKC QA/QC was "historic, tangible, powerful and pervasive," and as a result impaired the effectiveness of LKC QC inspections, the CSR and PTL overinspection results should identify numerous significant defects in the LKC QC accepted work. For example, intervenors' witness Dr. McKirnan contends that individual events of harassment and intimidation involving disincentives to quality inspector performance will "anchor" the employees' criterion for good or bad work, and that the resulting "quantity over quality" values anchored in the workplace would "lower the performance standards of the quality control inspectors generally, such that the norms differentiating 'good' from 'bad' work may become less clear" (McKirnan at 12). Dr. McKirnan infers that inspectors as a class or presumably in subclasses would "generally understand that the standard of quality for passable work is becoming lower". (McKirnan at 12)
Intervenors' consultant Dr. Ilgen has provided prefiled testimony which suggests that the inspector work force as a whole was subjected, perhaps unconsciously through "observational learning," to a model for behavior that promoted "quantity over quality". He remarks that he would be surprised "if the inspection behavior of at least some of the inspectors was not affected by the pressure" (Ilgen at 21). Dr. Ilgen goes on to say that the quality of actual workmanship, in this case the effectiveness of inspector performance, is an obvious behavior trace evidencing work behavior. (Ilgen at 24)
Second, I would expect some correlation in time between acts of harassment and poor inspector performance as revealed by the repeat inspection results. In his deposition, Dr. McKirnan predicted that the inspectors' "anchor" (i.e., their performance standards) would move in response to events such as the discharge of Mr. Saklak. (McKirnan Dep. Tr. 136-140, 204-209) Dr. Ilgen indicated in his deposition that it would be useful to examine the results of a sample reinspection program to determine whether the quality of Comstock QC inspection, plotted as a function of time, could be associated with alleged episodes of harassment. (Ilgen Dep. Tr. 72-76)
Q.10. Mr. Del George, please summarize the conclusions you have drawn from the results of the CSR and PTL repeat inspections.
A.10. Based on my review of the CSR reinspection data and the PTL overinspection data, I have concluded that: (1) the LKC QC inspections were effective, and (2) there is no apparent relationship between LKC QC inspector performance, as reflected by the variation over time in the CSR and PTL results, and the incidents of alleged harassment and intimidation and the periods of alleged intense production pressure described in the testimony presented to this Licensing Board.
Q.11. Mr. Del George, what is the basis for your conclusion that the LKC QC inspections were effective?

A.11. One basis for this conclusion is the results of the CSR as described by Dr. Kaushal, Mr. Kostal and Mr. Thorsell. The most important result of the CSR is that there were no design significant discrepancies in the electrical work. This means that whatever the work climate was at LKC, the QC inspectors did not ignore or overlook important construction defects. Moreover, the nature and rate of occurrence of discrepancies identified in the CSR, as discussed by Mr. Kostal and Mr. Thorsell, does not appear to me to be unusual, based on my own experience at Byron and elsewhere.
The PTL overinspection data provides a supporting basis for my conclusion. Although there are no engineering evaluations of discrepancies comparable to those performed for the CSR, the overall agreement rates for the work subjected to repeat inspection are in my judgment acceptable. Over the four-year period from July 1982 through June 1986, 92.56% of the welds which LKC QC inspectors found to be acceptable were also found to be acceptable by PTL overinspectors. When the results of S&L Level III inspections performed in the last year are taken into account, the agreement rate increases to 93.28%. The agreement rate for objective (non-welding) work overinspected by PTL from June 1985 through June 1986 is 95.98%, which I also consider to be acceptable. These results are comparable to the results for the reinspection of the same hardware attributes at Byron, where those results have been accepted as adequate.
Q.12. How did you calculate agreement rates for purposes of your review of the CSR and PTL data?
A.12. I calculated agreement rates in two ways. For the CSR, where the results have always been presented on an inspection point basis, I defined the agreement rate as being the number of inspection points determined by the CSR inspections to be acceptable divided by the total number of inspection points reinspected in the CSR for the same period. In addition, for CSR weld reinspections I also calculated agreement rates on a weld basis, that is, as the number of welds found acceptable by the CSR divided by the total number of welds reinspected in the CSR for the same period. This latter definition of agreement rate is the same as that used by myself and by Mr. Marcus in reviewing the results of the PTL overinspections for welding.
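Both definitions reduce to the same ratio and differ only in the unit counted. As an illustrative sketch (the counts below are invented for illustration and are not taken from the CSR or PTL records), the calculation is:

```python
# Illustrative only: the counts are invented, not drawn from the CSR or
# PTL data. The two definitions differ only in what is counted as a unit.

def agreement_rate(accepted: int, reinspected: int) -> float:
    """Fraction of reinspected units also found acceptable on reinspection."""
    if reinspected == 0:
        raise ValueError("no reinspections in this period")
    return accepted / reinspected

# Inspection point basis: each checklist attribute counts separately.
point_rate = agreement_rate(accepted=9_650, reinspected=10_000)

# Weld basis: a weld counts as acceptable only if the whole weld passes,
# so the weld-basis rate is generally lower than the point-basis rate.
weld_rate = agreement_rate(accepted=930, reinspected=1_000)

print(f"point basis: {point_rate:.2%}, weld basis: {weld_rate:.2%}")
```

Because a single weld comprises several inspection points, one failed point fails the whole weld on a weld basis while lowering the point-basis rate only slightly, which is why the two rates are reported separately.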
Q.13. In your judgment, is it reasonable to use these agreement rates as measures of Comstock QC inspector performance?
A.13. Yes. Calculating agreement rates on an inspection point basis allows one to focus on the numerous judgments which a QC inspector is required to make in inspecting a hardware item. However, presenting the agreement rates on a weld basis is also useful because it allows one to compare the CSR results with the PTL overinspection results displayed in the attachments to Mr. Marcus' testimony.
The approach I have taken, which focuses on agreement rates, assigns to the original LKC inspector responsibility for each discrepancy identified by the CSR or PTL inspector. In my experience with reinspection programs, the cause of such discrepancies is not always attributable to the performance of the original QC inspector. For example, procedural or inspection practice enhancements between the original inspection and the reinspection, design document clarifications after the original inspection, and to some extent the effectiveness of the reinspector's performance can all affect the actual acceptance rate of the original inspector. My review of the CSR and PTL programs indicates that each of these mechanisms was present to some small degree. These other effects, though individually small, collectively add a measure of conservatism to the assignment of all observed discrepancies to the original LKC inspector. It is for these reasons that I believe that the agreement rates presented in my testimony and in the testimony of Mr. Marcus and Dr. Frankel provide a reasonable measure of LKC QC inspector effectiveness.
Q.14. Mr. Del George, please describe how you reviewed the CSR and PTL data to determine whether there was any relationship between LKC QC inspector performance and the allegations of harassment, intimidation and production pressure which have been made in this case.
A.14. My review of both data bases took the following form. First, all repeat inspection results were sorted by the date of the original LKC hardware inspection. In this way the number of LKC inspections and the corresponding agreement rates between the original LKC inspection and the CSR reinspection could be displayed as a function of the time the original inspection was done. These agreement rates were used as a measure of the effectiveness of the LKC QC hardware inspection activity as a function of time. I then performed a review of the data, considering: (1) performance of all inspectors as a group; (2) performance of inspection sub-groups (i.e., subjective and objective attribute inspections); (3) performance of the complaining inspectors as a class (i.e., those identified inspectors who expressed concerns to the NRC on March 29, 1985 about the existence of harassment or intimidation); and finally, (4) performance of individual inspectors.
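The first step of this review, sorting repeat-inspection results by the date of the original inspection and computing an agreement rate per calendar quarter, can be sketched as follows (the sample records are invented purely for illustration; the actual CSR and PTL data sets were far larger):

```python
from collections import defaultdict
from datetime import date

# Invented sample records: (date of original LKC inspection, did the
# reinspection agree with the original accept decision?)
records = [
    (date(1984, 2, 10), True),
    (date(1984, 3, 5), False),
    (date(1984, 7, 21), True),
    (date(1984, 8, 2), True),
]

def quarter(d: date) -> str:
    """Label a date with its calendar quarter, e.g. 1984Q1."""
    return f"{d.year}Q{(d.month - 1) // 3 + 1}"

# Group repeat-inspection results by the calendar quarter of the
# ORIGINAL inspection, then compute an agreement rate per quarter.
by_quarter = defaultdict(lambda: [0, 0])  # quarter -> [agreed, total]
for when, agreed in records:
    tallies = by_quarter[quarter(when)]
    tallies[0] += int(agreed)
    tallies[1] += 1

rates = {q: agreed / total for q, (agreed, total) in sorted(by_quarter.items())}
print(rates)
```

Plotting these per-quarter rates against time is what allows dips in agreement to be compared with the dates of the alleged incidents.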
Q.15. Of the class of 24 LKC inspectors who expressed concerns to the NRC Staff on March 29, 1985, how many did inspections which were reinspected in the CSR and overinspected by PTL?
A.15. A total of 20 of these 24 LKC inspectors were included in the CSR sample. This includes 18 of the 20 weld inspectors who made allegations on March 29, 1985, and 2 of the 4 inspectors who were not certified as weld inspectors.
A total of 20 of the 24 LKC inspectors were captured in the PTL overinspections. This includes 19 of the 20 weld inspectors and 1 of the 4 inspectors who were not certified as weld inspectors.
Q.16. Mr. Del George, do you have any graphs which illustrate the variation over time of the CSR results?
A.16. Yes. Attachment 2C (DelGeorge - 1) has two graphs.
The top one shows the number of inspection points sampled in the CSR versus the time the corresponding Comstock QC inspections were performed (by calendar quarter). The bottom graph shows the CSR agreement rate, calculated on an inspection point basis, for the same time periods.
Attachment 2C (DelGeorge - 2) is in the same format. It shows agreement rates versus time for the CSR inspection point data for all welding sampled.
Attachment 2C (DelGeorge - 3) also shows agreement rates versus time for all welding sampled, but on a weld basis rather than an inspection point basis.
Attachments 2C (DelGeorge - 4) and (DelGeorge - 5) show the agreement rates versus time for all of the complaining LKC weld inspectors, on an inspection point basis and on a weld basis, respectively.
Eighteen of the 20 weld inspectors who made allegations to the NRC Staff are represented in these figures.
Attachment 2C (DelGeorge - 6) shows the number of CSR inspection points for the class of complaining inspectors for both welding and non-welding inspections.
In fact, as the bottom graph shows, two of the complaining LKC QC inspectors are responsible for most of the non-welding inspections performed by this class and captured in the CSR.
The format in which these attachments are prepared helps one to judge whether there is sufficient data so that variations in the CSR results from one time period to another are meaningful.
Q.17. In your opinion is there sufficient information in these graphs to trend the effectiveness of L. K. Comstock inspections as a function of time?
A.17. The CSR reinspection data base is quite large. In the case of the CSR data, 733 hardware equipment items were reinspected, including more than 10,000 welds and 276,000 separate inspection points distributed over the entire period covered by the CSR sample. The CSR resulted in the repeat inspection of the work of 79 inspectors, of whom 54 were welding inspectors. As previously indicated, 20 of the 24 inspectors who made allegations to the NRC on March 29, 1985 had some of their work reinspected under the CSR program. Thus I believe it is reasonable to evaluate variations in the agreement rates over time for all CSR electrical inspections, as shown in Attachment 2C (DelGeorge - 1).
When one starts focusing in on subsets of the CSR data, for example, welding inspections only, or the inspections performed by the class of complaining inspectors, the quantity of data is smaller and more caution has to be used in examining variations in agreement rates from one time period to another.
My determination of how much information is sufficient to allow trending is a qualitative judgment, unlike Dr. Frankel's calculation of the coefficient of variation for purposes of his testimony. In making this judgment, I considered the number of inspection points or welds in adjacent time periods, the variation in these quantities over time, and the number of LKC QC inspectors whose work was represented in the sample for each time period.
In my judgment, there is sufficient data for the cumulative sample of all CSR electrical inspections as represented by Attachment 2C (DelGeorge - 1), and for the subgroup of welding inspections, as represented by Attachments 2C (DelGeorge - 2) and (DelGeorge - 3), to support analysis of variations in agreement rates. I am less confident about the sufficiency of the data for the class of complaining inspectors, as shown in Attachments 2C (DelGeorge - 4) and (DelGeorge - 5). Attachment 2C (DelGeorge - 6) demonstrates that there is insufficient data to support analysis of trends in non-welding inspections performed by the complaining inspectors.
Q.18. To the extent there is sufficient CSR data to permit trending analysis, did you observe any trends?
A.18. No. In my review of the CSR data, including Attachments 2C (DelGeorge - 1 to DelGeorge - 5), I did not find any apparent trend attributable to the alleged undue pressure, harassment and intimidation claimed by intervenors to have occurred at Braidwood involving the work activities covered by the CSR.
Moreover, this finding is supported by the testimony of Dr. Frankel, whose analysis identifies no statistically significant trend in agreement rates derived through the CSR inspection program. I have reviewed Dr. Frankel's testimony and I believe his approach, including his method of combining construction categories and the levels of significance used in his calculations, is appropriate.
Q.19. Dr. Kaushal testifies that the CSR only included reinspection attributes which (1) were required by applicable codes and standards, (2) could potentially have an effect on the item's ability to perform its safety-related functions and (3) are currently observable. How, if at all, do these limitations affect the usefulness of the CSR results in assessing Comstock QC inspector performance?
A.19. In my opinion, the usefulness of the CSR data is derived from its predefined scope and distribution in time. Because the effectiveness of QC inspector performance is at issue, an evaluation of that effectiveness requires first an assessment of the character of the inspection activity at issue and then a measurement in understandable terms of the significance of deviations from defined inspection criteria.

My previous experience with the evaluation of inspector effectiveness through hardware reinspection indicates that inspection activities are divisible into two primary types, subjective and objective. A subjective inspection involves an inspection that cannot be truly measured and is dependent upon senses and judgment. An objective inspection involves an inspection element that can usually be quantified or measured. Examples are: material, size, shape, traceability, dimensional configuration, etc. If a reinspection program is defined in such a way that these subjective and objective inspection activities are adequately surveyed over time, inspection effectiveness can then be assessed. The design of the CSR sample addressed these considerations.
Furthermore, it is both reasonable and appropriate to direct the reinspection effort at those construction attributes having the potential for an effect on a hardware item's ability to perform its safety-related function. In such a way the attributes of significance to the safe operation of the nuclear facility can be assessed, and be assessed in a quantitative manner. Directing the focus of activities in this way may exclude non-safety related work activities from reinspection, and might limit the inferences that can be drawn with respect to such ancillary activities. However, where the question to be answered is whether a nuclear facility can be operated without undue risk to the health and safety of the public, such a limitation is of no consequence.
In addition, a limitation of reinspection activities to those attributes observable at the time of reinspection, though having the effect of reducing the construction population subject to reinspection, does not affect the inferences to be drawn from the reinspection effort if: (1) the population actually reinspected includes construction attributes of the type no longer observable which were inspected by persons having both observable and unobservable work, such that some of their work remains subject to reinspection, and (2) the sub-population of work no longer observable is not related to the question that has precipitated the reinspection effort in a manner different than the work that is observable.
Having reviewed the CSR data base and those attributes excluded from reinspection under the CSR, I find no measurable limitation on the usefulness of the CSR data for assessing collective QC inspector effectiveness for the period covered by the CSR. In this regard, the CSR sample limitations are not unlike those in the Byron Quality Control Inspector Reinspection Program. Certain work activities are either not accessible or not recreatable at the time of a reinspection. This minor limitation in the CSR is equivalent to that identified and accepted in the Byron Licensing Board's October 16, 1984 Supplemental Initial Decision with respect to non-recreatable attributes addressed in the Byron QCIRP. Moreover, there is no basis for concluding that harassment and intimidation would have a different effect on the limited number of unreinspectable or non-recreatable inspections, because such inspections are typically identical in character and were in fact performed by the same inspectors for whom observable inspections were reinspected.
A possible exception is the case of in-process inspections such as verification of weld preheat. However, such non-recreatable attributes are typically first inspected because of the greater likelihood that work deficiencies not detected in-process would result in physical hardware damage. For example, because weld preheat is itself required to reduce the potential of weld shrinkage cracks, a design significant consequence of a deficient in-process inspection of such an attribute would be manifested as a weld crack which itself would be observable by the reinspector. Moreover, such in-process inspections are often subjected to redundant owner oversight, reducing further the potential for inadequate in-process review. An example of such an in-process inspection is cable pull force, for which CECO site QA monitors the contractor's set-up for all cable pulls not made by hand. (Cable pulls by hand are unlikely to cause damage.) For these reasons, the limitation attendant to the inability to reinspect such in-process attributes is not significant.
Q.20. Dr. Kaushal also testifies that certain CSR reinspection observations were declared "invalid" or "out of scope." How, if at all, does the exclusion of these items affect the usefulness of the CSR results in assessing Comstock QC inspector performance?
A.20. I do not believe that these exclusions affect the usefulness of the CSR results. Repeating the point made previously in response to Question 19, the usefulness of such reinspection data is dictated by the scope of coverage of the program and the distribution in time of the attributes actually reinspected.
Having reviewed the LKC observations declared "invalid" or "out of scope", I have identified only one reinspectable inspection activity that has been precluded from evaluation either generally or as a function of time. That activity is cable pan hanger configuration which, because of a 100% walkdown effort then being conducted by Sargent & Lundy in conjunction with LKC QC personnel, fell outside the CSR scope.

The characteristics of this specific inspection activity are typical of other LKC objective activities actually reinspected under the CSR. Moreover, a review of the LKC inspector certification records for the period covered by the CSR indicates that only one LKC QC inspector who did field inspections and whose work was reinspected under the CSR was certified in cable pan hanger configuration only. In fact, most inspectors certified in that area also held welding inspection certifications, among others, for which the underlying inspection work was reinspectable and was reinspected under the CSR.
Of the 24 inspectors who expressed concerns to the NRC on March 29, 1985, 20 were previously certified in and conducted welding inspections. As previously stated, 18 of these welding inspectors are captured by the CSR data base. Only 4 of these 24 inspectors were not reinspected within the CSR sample for some work activity. Moreover, each LKC construction activity and each area of field-installed hardware inspection certification as a class was reinspected under the CSR sample.
Q.21. Mr. Del George, Dr. Frankel testifies that, based on his review of the CSR data, there is not a statistically significant difference between the discrepancy rate for L.K. Comstock QC inspectors prior to July 1, 1982 and after July 1, 1982. Why is July 1, 1982 a significant date (if it is) for purposes of determining whether harassment and intimidation may have affected the work of the Comstock QC inspectors?
A.21. Based upon my review of the Intervenors' harassment and intimidation contention, as well as the record in this case, I understand that intervenors contend that Mr. R. Saklak had a pervasive deleterious influence on the overall inspection system during his tenure.
Therefore, after consultation with Dr. Hulin, who will also be submitting rebuttal testimony in this case, I recommended that the significance of this date, which was the date on which Mr. Saklak assumed the position of QC Supervisor for LKC, be statistically evaluated with respect to the effectiveness of LKC inspector performance as manifested by the acceptance rate as a function of time deduced from the CSR data.
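The kind of before-and-after comparison recommended here is, in general form, a standard two-proportion significance test. The sketch below is purely illustrative: the counts are hypothetical, not the actual CSR figures, and this is not presented as Dr. Frankel's actual method.

```python
import math

def two_proportion_z(discrepant_a, total_a, discrepant_b, total_b):
    """Two-sided z-test for a difference between two discrepancy rates."""
    p_a = discrepant_a / total_a
    p_b = discrepant_b / total_b
    # Pooled rate under the null hypothesis of no difference
    p = (discrepant_a + discrepant_b) / (total_a + total_b)
    se = math.sqrt(p * (1 - p) * (1 / total_a + 1 / total_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal distribution
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical counts (NOT the actual CSR data): inspection points found
# discrepant before and after a July 1, 1982 cutoff.
z, p_value = two_proportion_z(140, 2000, 150, 2100)
```

A large p-value here would correspond to Dr. Frankel's finding that the discrepancy rates before and after the cutoff are not statistically distinguishable.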
Q.22. Mr. Del George, Dr. Frankel also testifies that he finds no statistically significant difference in the CSR data between the error rate prior to August 1, 1983 and after August 1, 1983. Why is August 1, 1983 a significant date (if it is) for purposes of determining whether harassment and intimidation may have affected the work of the Comstock QC inspectors?

A.22. On the basis of a review similar to that described in response to Q.21., it is my understanding that intervenors contend that Mr. I. DeWald had a pervasive deleterious effect on the overall LKC QC inspection system. Therefore, after consultation with Dr. Hulin, I recommended that the statistical significance of this date, which was the date on which Mr. DeWald assumed the position of QC Manager for LKC, be evaluated with respect to the effectiveness of LKC inspector performance as manifested by the acceptance rate deduced from the CSR data.
In this regard, it is also worth noting that one of the primary objectives to which Mr. DeWald testified he devoted himself upon assuming his position was the elimination of the LKC inspection backlog that existed in the Fall, 1983. Mr. DeWald's management initiatives in late 1983 through the Fall, 1984, when the major backlog item of hanger weld inspection was eliminated, are alleged to have demonstrated an interest in quantity of inspections to the detriment of the quality of those inspections. For this reason, a review of the relative effectiveness of welding inspection before and after August 1, 1983 should manifest a downward trend in inspector performance after August 1, 1983 if, as has been alleged, a pervasive environment of harassment and intimidation through the setting and enforcement of welding inspection quotas by Mr. DeWald and his management subordinates existed and, in fact, affected inspector effectiveness.
Q.23. Dr. Frankel testifies that, in performing his analysis, he used only the items from the CSR sample which were selected using random sampling. Based on Dr. Kaushal's testimony, this means a portion of the total CSR sample was excluded. Have you reviewed the remaining non-probability portion of the CSR sample for any changes in Comstock QC inspector performance over time which might be attributable to harassment or intimidation?
A.23. Yes.
Q.24. How did you go about performing this evaluation?
A.24. I described in response to Q.14. how I reviewed the CSR data to evaluate the effectiveness of LKC QC inspector performance. As part of that process, I reviewed the non-probability portion of the CSR sample as well as the combined random and non-probability sample data. I made a comparative evaluation of the shape and distribution in time of the data sorted in this manner to identify any apparent differences in the form of or trends manifested by the data. On this basis, I was able to develop certain engineering judgements with respect to the possibility of any changes in QC inspector performance manifested by the non-probability portion of the CSR sample which might be attributable to harassment or intimidation.
Q.25. What did you conclude?

A.25. I have concluded that the non-probability portion of the CSR sample does not manifest any visible changes which might be attributable to harassment or intimidation. Specifically, the shape and absolute value of the data defined by acceptance rate as a function of time displays only minor differences from that provided by the random portion of the CSR sample, and no trends that I believe are attributable to harassment or intimidation.
Q.26. Have you reviewed the CSR results for individual Comstock inspectors to determine whether there were any changes in an individual's performance over time, which might be attributable to harassment or intimidation?
A.26. Yes. However, in general, there is not sufficient CSR data to examine the effectiveness of individual inspector performance. This is due to the fact that individual inspectors did not have a sufficient number of inspections reinspected in multiple consecutive time intervals to allow for meaningful trending of the results.
Q.27. If there isn't sufficient CSR data to examine inspection performance on an individual basis, what value, if any, does the CSR data have for purposes of determining whether harassment and intimidation occurred?

A.27. Although quantitative conclusions with respect to individual inspectors would be difficult to support, this fact alone does not preclude reliable conclusions being drawn from the CSR data with respect to whether harassment and intimidation occurred at LKC during the time period covered by the CSR; and, perhaps of greater importance, whether harassment and intimidation, if it did occur, diminished the effectiveness of QC inspector performance.
The intervenors here contend that events and conditions within the LKC quality control department, brought about by specific named individuals with the knowledge and at least unvoiced support of QA/QC management generally, created a pervasive and debilitating atmosphere of harassment and intimidation. Intervenors rely on the fact that 24 LKC QC inspectors went to the NRC on March 29, 1985 with such complaints. Although an incident involving Mr. Saklak on March 28, 1985 triggered the complaints by these inspectors, intervenors would have us believe that LKC QC management had for a considerable time, i.e., at minimum during the tenure of Mr. Saklak as QC Supervisor and Mr. DeWald as QC Manager, promoted inspector production over inspection quality. Because this pressure on inspectors is alleged to be pervasive, data on group performance covering all work activities, the welding activity separately, or the work activity of the class of 24 inspectors should manifest any deleterious effects produced by such pressure. The CSR data base is sufficient to make an engineering evaluation of these three performance parameters.
Q.28. Mr. Del George, turning to the PTL overinspection results presented by Mr. Marcus, in your opinion what value, if any, do they have for assessing whether the performance of the Comstock QC inspectors was adversely affected by alleged harassment and intimidation?
A.28. The PTL overinspection results are also instructive with respect to whether the effectiveness of LKC QC inspectors was adversely affected by the harassment and intimidation which is alleged to have occurred. This data base consists of reinspections performed by PTL of generally 10% of all welding that was QC accepted by LKC between July, 1982, at Mr. Saklak's arrival, and June, 1986; and generally 10% of all objective hardware inspections performed by LKC between June, 1985 and June, 1986. This data base is described in greater detail by Mr. Marcus.
With respect to welding inspections, 19 of the 20 welding inspectors who expressed concerns to the NRC on March 29, 1985 were reviewed under the PTL overinspection program. The number of welds reinspected in the period reviewed is 28,422 overall, and 11,311 for the class of inspectors who made allegations to the NRC on March 29, 1985. This represents an extremely large, though not technically random, sample. The PTL weld overinspection sample reinspects the work of 100 LKC inspectors. The PTL non-weld overinspection conducted over the past year reinspects the work of 45 LKC inspectors, including 5 of the class of 24, and contains a total of 4,650 elements.
Because of the size of the PTL overinspection sample and the methods for selecting the sample, which are described by Mr. Marcus, I believe this data base can be used to evaluate changes in the effectiveness of LKC QC inspector performance as measured by the agreement rate between the results of the original LKC inspection and the PTL overinspection.
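The agreement-rate measure described here can be sketched in miniature. The records and field layout below are hypothetical illustrations, not the actual PTL data structure: each record carries the month of the original inspection, the inspector, the number of items reinspected, and the number of rejects found by the overinspector.

```python
from collections import defaultdict

# Hypothetical overinspection records (NOT actual PTL data):
# (month, inspector, items_reinspected, overinspection_rejects)
records = [
    ("1984-07", "insp_A", 120, 6),
    ("1984-07", "insp_B", 80, 2),
    ("1984-08", "insp_A", 100, 4),
    ("1984-08", "insp_C", 60, 9),
]

def monthly_agreement_rates(records):
    """Agreement rate per month: the fraction of reinspected items
    that the overinspector also accepted."""
    totals = defaultdict(lambda: [0, 0])  # month -> [reinspected, rejects]
    for month, _inspector, reinspected, rejects in records:
        totals[month][0] += reinspected
        totals[month][1] += rejects
    return {m: (n - r) / n for m, (n, r) in totals.items()}

rates = monthly_agreement_rates(records)
```

Tracking this rate month by month, for all inspectors and for a subgroup, is the basic comparison the testimony describes.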
Q.29 What evaluations, if any, have you made of the PTL overinspection results?
A.29 I have reviewed the overinspection data for welding for the period July, 1982 through June, 1986; and the non-welding data developed for the period June, 1985 through June, 1986. I have compared that data to the data generated by the CSR program. I have reviewed the data from the standpoints of inspection activities as a class, i.e., welding (subjective) and non-welding (objective), the performance of the class of 24 inspectors who made allegations to the NRC Staff on March 29, 1985, as well as the performance of individual inspectors. I have assessed the data in light of my knowledge of similar data developed at other Commonwealth Edison construction sites involving similar activities.
Q.30 What are the results of your review of the PTL overinspection data for all LKC inspectors?
A.30 My review did not identify any apparent trends or changes in the data that support intervenors' claims of undue pressure, harassment or intimidation of LKC inspectors. Although occasional changes in agreement rates were identified, there appeared to be no confirmatory indications or trends linking these changes to the alleged misconduct which intervenors contend existed and caused deleterious effects. This leads me to conclude that changes in these agreement rates are isolated and unrelated.
Q.31 What are the results of your review of the PTL overinspection results for the class of 24 LKC inspectors who made allegations to the NRC Staff on March 29, 1985?
A.31 The results of my evaluation of these inspectors are the same as for all LKC inspectors. In particular, as shown in Attachment 2C (Marcus-4), the agreement rates for the months preceding March 29, 1985 for this group are uniformly acceptable.
Q.32 Have you evaluated the PTL results for individual inspectors?
A.32 Yes. The data was sorted for review by Mr. Marcus and me in such a way that we could, for each inspector whose hardware inspections were reinspected, review which inspectors were captured by the PTL sample, in what months an inspector's work was originally inspected and how many inspections were reinspected, as well as the number of rejects identified by the overinspector in that month.
My review of this PTL data indicates that of the 100 weld inspectors whose work was overinspected, sufficient data exists to make a reliable assessment of the effectiveness of the performance of 78 of those inspectors. The limited amount of data available (less than 50 welds reinspected) precludes drawing definitive conclusions for the remaining 22, although Mr. Marcus' testimony addresses the three inspectors out of this group of 22 whose agreement rates were less than 90%. Of those 78 inspectors for whom sufficient data does exist, it is my opinion that the PTL overinspection data demonstrates the acceptability of the work of 75 inspectors. The performance of the remaining 3 inspectors is either unacceptable or is deficient in a way that requires remedial action. Those three inspectors are Messrs. Asmussen, Arndt and Hunter. Mr. Marcus and I performed this analysis jointly. I have reviewed the discussion reported in his testimony and I agree with it.
The small number of inspectors whose work appears questionable does not detract from the strength of the conclusions otherwise drawn with respect to the effectiveness of QC inspector performance as a class. In fact, the known inspector-specific deficiencies are themselves generally isolated and dissimilar and suggest no common cause. In addition, my review of the QC inspector testimony in this case suggests no obvious causal link between these isolated deficiencies and the alleged undue pressure, harassment and intimidation.
Q.33 What conclusion, if any, do you draw from Dr. Frankel's testimony that there is a slight linear trend of improving inspections in the data?
A.33 This conclusion by Dr. Frankel is instructive for a number of reasons. First, the PTL data base is entirely independent of the CSR data base. Because Dr. Frankel has reached similar conclusions with respect to both, I believe with an extremely high degree of confidence that no significant adverse trends actually exist in the population from which the two samples were chosen.

Second, the quantitative assessment by Dr. Frankel confirms my own engineering evaluation of the PTL data. My review did not identify any apparent trends or changes in the data that support intervenors' claims of alleged undue pressure, harassment or intimidation of LKC inspectors.
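A "slight linear trend of improving inspections" is, in general form, a small positive least-squares slope of agreement rate against time. The sketch below is a minimal illustration with hypothetical monthly rates; it is not Dr. Frankel's actual analysis or data.

```python
def linear_trend(rates):
    """Ordinary least-squares slope of agreement rate against a monthly
    time index. A positive slope indicates improving inspections."""
    n = len(rates)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(rates) / n
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, rates))
    sxx = sum((x - mean_x) ** 2 for x in xs)
    return sxy / sxx

# Hypothetical monthly agreement rates (NOT the actual PTL or CSR figures)
monthly = [0.91, 0.92, 0.90, 0.93, 0.94, 0.93, 0.95]
slope = linear_trend(monthly)  # small positive slope: mild improvement
```

Under the intervenors' hypothesis one would instead expect a negative slope following the alleged onset of production pressure.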
Q.34 Are the results of the CSR and the PTL overinspections consistent?
A.34 Yes. By way of foundation, I have previously testified that I reviewed the inspector checklists used to implement both of these programs. Although there were minor differences in these checklists, they are equivalent in terms of substantive inspection attributes. It is my impression that the CSR measurement techniques and acceptance criteria are sometimes more conservative than those employed by LKC. Furthermore, I have reviewed the sampling techniques used under both programs and, as I previously testified, I conclude that each individually provides a reliable indicator of the effectiveness of LKC QC inspection activity.

After reviewing each data base individually as I have previously discussed, I attempted to determine whether on the basis of a comparative analysis any material differences existed. The composite mean agreement rate from the PTL welding data base is about 93% and from the CSR welding data base is about 86%. The fact that the PTL agreement rate is greater than the CSR agreement rate is not surprising to me.
First, as Mr. Kostal testifies, the weld discrepancies identified in the CSR included a very high percentage of insignificant deficiencies. Second, the CSR was conducted long after most of the LKC inspections that were reinspected, while the PTL overinspections were generally performed soon after the LKC inspections. The agreement rate between contemporaneous inspections performed using equivalent procedures would be expected to be higher. Finally, there was intense oversight of the CSR inspections by CECO, the IEOG and the NRC Staff, which contributed to the very conservative inspection results produced by the CSR.
Q.35. Does the fact that prior to June, 1985 the overinspections only applied to welding affect the value of the results in determining whether the alleged harassment and intimidation of Comstock QC inspectors affected their performance?
A.35. In my opinion, the fact that prior to June, 1985 the PTL overinspections only applied to welding does not affect the value of the overinspection results in determining whether the alleged harassment and intimidation of LKC QC inspectors reduced the effectiveness of their performance. First, as I have observed previously, most of the inspectors who have expressed a concern upon which the intervenors' contention rests have been welding inspectors.
Because of the extremely large percentage of inspectors in this single certification class who expressed concern, the intervenors' psychologists' testimony strongly suggests that:

1. There exists a pervasive occupational learning model in the LKC weld inspection area that promotes quantity over quality.

2. This class of inspectors as a group, whether or not acknowledged or known by them, should have been influenced through occupational learning, with a resultant reduction in the quality performance norm for the class.
As previously stated, the PTL overinspection results do not show any reduction in the quality of Comstock inspections over time. This is true even though, as Mr. Marcus' testimony indicates, the monthly agreement rate figures are quite sensitive to poor performance by individual inspectors.
Moreover, although I have no expertise in industrial psychology, my experience with the review of inspection activities leads me to conclude that welding inspections may in fact be a more sensitive indicator of any real negative effects due to production pressure or other harassment and intimidation, where such effects would lead to a reduction in the quality values of the QC inspector work force. Because weld inspections are subjective and by their nature amenable to a greater degree of qualitative judgement by an inspector than are most objective hardware inspections, if a reduction in quality norms were to occur, a shift in performance might occur and be less obvious to the sensibilities of the individual inspector who is relying on qualitative judgement to make inspection decisions. Moreover, in the case of a subjective inspection, an inspector may be influenced to err non-conservatively where he knows the decision is recognized by others as judgemental. Welding reinspections should therefore be able to detect a shift in the performance of the original inspectors as a class. The PTL overinspection program demonstrates that no such shift in performance occurred.
Q.36. Does the gap in the PTL overinspection data in October and November 1982, the minimal amount of data for November and December of 1983 and May of 1985, and the fact that the PTL overinspection results have not been compiled for the period prior to July 1982, significantly affect the value of this data base in assessing the performance of Comstock QC inspectors?
A.36. In my opinion, taken collectively, these limitations do not have a significant effect on the value of the PTL data base in assessing the effectiveness of LKC QC inspector performance.

First, July, 1982 was chosen at my direction as the starting point for purposes of the review. Review of the CSR welding information indicates that no statistically significant difference exists in the inspected welding data base before and after July, 1982. Because none of the PTL data for those early years had been compiled in a computer data base, the extensive expense and, more importantly, the extensive time necessary to compile the pre-July, 1982 data was judged by me to be unnecessary. My decision was also influenced by the fact that July, 1982 was the time at which Mr. Saklak assumed supervisory oversight of inspection activity for LKC and could have affected weld inspection activity. This date also precedes all the time periods of specific production pressure identified by the intervenors. The PTL data base has been developed and used to assess inspection effectiveness from July, 1982 through June, 1986. In my opinion, the lack of analysis of PTL data prior to July, 1982 has no material effect on the review undertaken by Commonwealth Edison.
Second, the total time after July, 1982 during which very limited or no PTL overview data was generated is very short, involving only 5 months of the 48 months from July, 1982 to June, 1986 that were assessed. Based on my knowledge of the record developed in this case, there were no specific events of harassment or intimidation affecting either the welding area or other LKC inspection areas in those time periods. Had there been such an event, a review of the trends in the data before and after these short gaps would still be instructive as to any effect on performance due to such an event. This is particularly true of the data for May, 1985, which followed soon after the March 29, 1985 allegations to the NRC by 24 inspectors concerning harassment and intimidation. In that case, the entire period prior to the event was overviewed by PTL and the data is available in this study. Moreover, as Mr. Bowman (Tr. 6830-32) and other QC inspectors have testified, the March 28, 1985 incident that preceded the report to the NRC really represented a culmination of concerns that had allegedly been growing for some months. A review of the data in the period prior to April, 1985 should be instructive as to any developing impact on inspector effectiveness that might have occurred. As previously stated, the PTL results for the months preceding April, 1985 show no such impact.
Furthermore, intervenors' witness Dr. Ilgen testified that once such an effect is anchored, it is not easily modified. In discussing the perceptions developed by an inspector based on personal observations or discussions with other employees, Dr. Ilgen indicates that these perceptions "die slowly". He goes on to say, "It is often not enough to simply make some minor change of course, and certainly not enough to merely proclaim such changes" (Ilgen Prefiled at 17). With this intervenor testimony in mind, I believe that the one-month period in May, 1985 for which very little PTL overinspection data is available is of no material significance, especially where PTL overinspection results for an extended period from June, 1985 to June, 1986 are available and show no adverse trends in the effectiveness of LKC QC inspector performance.
Q.37. In your opinion, does the fact that the PTL overinspections are not a probability sample preclude their use in assessing Comstock QC inspector performance?
A.37. No. Because the PTL overinspection program typically covered 10% or more of all work done, including for a substantial period 10% or more of the work of all inspectors, the sophistication of a probability sample was not necessary to perform engineering trending of the reinspection results. In other words, the extensive sampling of almost all work, generally covering all inspectors over an extensive period of time, diminishes the need for more refined sampling techniques.
The fact that intervenors allege the existence of a pervasive debilitating environment covering the period after July, 1982, which, if it did exist, should have affected the LKC QC inspectors as a class, also reduces the need for a refined survey tool to determine the existence of a substantial problem. If such a pervasive problem did exist, the extensive sample used in the PTL overinspection program would have identified it.
Furthermore, in my view, the actual sample chosen has some features that are also used in formal random sampling programs, which increase its reliability. All the inspection work on a given inspection request was within the population subject to the sample. Moreover, the selection of a specific report or inspection within an inspection request was not biased in a way that excluded work activities as a class, inspectors as individuals, or time periods of potential interest.
Q.38. The PTL results for certain inspectors do not seem to be continuous. That is, not every weld inspector was overinspected every month. Have you tried to ascertain why?
A.38. Prior to June, 1985 the PTL overinspection program did not require that each inspector be reinspected regularly, even when the inspector had performed inspection work. So it is unsurprising that this observation can be made. In addition, very few inspectors were certified only in welding or some other sole inspection activity. Most inspectors had and used more than one of their certifications. Furthermore, as has been made clear through QC inspector testimony, redefinition of task priorities often resulted in inspectors rotating between various inspection activities. I believe these circumstances caused the observed gaps in the overinspection sample for individual inspectors.
Q.39. Would you compare and contrast the strengths and weaknesses of the CSR and the PTL overinspections in assessing whether the performance of the Comstock QC inspectors was adversely affected by alleged harassment and intimidation?
A.39. By way of summary, the CSR inspection and PTL overinspection data bases both provide relevant and, I believe, dispositive evidence that the effectiveness of LKC QC inspector performance was adequate throughout the period of LKC's involvement at Braidwood. The CSR data, developed as it was based on engineering judgement and statistical sampling methods, provides an extensive sampling of all previously inspected work activities from the start of LKC work through June 30, 1984, at which time all of the persons alleged to have harassed or intimidated employees had been at the site for almost 1 year and before which many of the events related to alleged production pressure involving the backlog had occurred. The CSR data base ends at June, 1984 and does not cover the period when other specific alleged acts of intimidation or harassment occurred. I do not believe this limitation significantly affects the strength of conclusions drawn from the CSR data, for the reasons discussed in response to Q.21., Q.22., and Q.27. As discussed in response to Q.26., the CSR data is of limited use in assessing individual inspector performance. However, as further developed in response to Q.27., this limitation has minimal significance.
The strength of the PTL data base is its sheer size. Its focus on the welding area, which is arguably the area most prone to the debilitating learning model that intervenors' psychological witnesses conclude existed, allows for an extensive review of this area. Although the PTL sample is not a probability sample, its size and distribution in time outweigh the effects of this limitation. In addition, the PTL data base allows a more extensive inspector-specific assessment for the welding area, which itself provides confirmation of conclusions otherwise drawn from the CSR and PTL data bases for all weld inspectors as a class.
Q.40. Mr. Del George, in response to Q.10. you stated that there is no apparent relationship between Comstock QC inspector performance, as reflected in the variation over time in the CSR and PTL inspection results, and the incidents of alleged harassment and intimidation and the periods of intense production pressure described in the testimony in this proceeding. How can you reconcile this conclusion with intervenors' claims?
A.40. In my opinion, there are three alternative explanations for the fact that the results of these two independent repeat inspection programs do not manifest a negative impact on QC inspection activities of the type hypothesized by intervenors.

First, the alleged undue production pressure and the alleged acts of harassment and intimidation were not perceived by the LKC inspectors as requiring them to sacrifice quality for quantity, and therefore would have had no effect.
An alternative explanation is that pressures normal to the construction work environment did exist; that sporadic instances of management-labor dispute did in fact occur; and that some QC inspectors may have believed that LKC management was encouraging them to sacrifice quality for quantity. However, the absence of any significant trends in the CSR and PTL repeat inspection data demonstrates that the LKC inspectors did not allow these perceptions to compromise the effectiveness of their inspections.
The final possibility is that the alleged undue production pressure and acts of harassment and intimidation of LKC QC inspectors actually took place and had an effect on the quality of inspections which was constant over time. Although in my opinion this alternative seems furthest from the facts of record, nevertheless, even accepting this third alternative explanation, the adequacy of the LKC QC accepted work can be and is established by the CSR and PTL repeat inspection data. The CSR data demonstrates with a high degree of reliability that LKC work as inspected by the LKC QC department was acceptable. Through June, 1984 the results of the CSR effort support with high confidence the conclusion that no unidentified programmatic or design significant problem existed in the LKC work scope. The PTL data base extends this conclusion for the LKC welding area, though not statistically, through June, 1986 and for a more limited period for all other QC inspected work. Those results, which are based on an extensive amount of data, lead me to conclude that the combination of conservative design, good LKC craft labor performance and the LKC QC inspections have produced work which was adequate and remains adequate at this time.
On the basis of the facts contained in this hearing record which I have reviewed, including the CSR and PTL data, as well as my review of the conditions and events which together comprise the environment in the LKC QA/QC department, I conclude that the persons and organizations performing quality assurance functions for LKC had sufficient authority and organizational freedom to identify quality problems; to initiate, recommend, or provide solutions; and to verify implementation of solutions. Those LKC persons and organizations had sufficient independence from cost and schedule when opposed to safety considerations to assure the effective implementation of the LKC quality program.
Q.41. Does this conclude your testimony?
A.41. Yes.
Attachment 2C (Del George-1)
CSR REINSPECTION RESULTS FOR LKC
All Populations, All Samples, Welding and Non-Welding
[Bar chart: number of inspection points and inspection point agreement rate, by quarter, 1979 through the second quarter of 1984.]

Attachment 2C (Del George-2)
CSR REINSPECTION RESULTS FOR LKC
All Populations, All Samples, Welding Only
[Bar chart: number of inspection points and inspection point agreement rate, by quarter, 1979 through the second quarter of 1984.]

Attachment 2C (Del George-3)
CSR REINSPECTION RESULTS FOR LKC
All Populations, All Samples, Welding Only
[Bar chart: number of welds and weld agreement rate, by quarter, 1979 through the second quarter of 1984.]

Attachment 2C (Del George-4)
CSR REINSPECTION RESULTS FOR LKC
24 LKC Inspectors, All Samples, Welding Only
[Bar chart: number of inspection points and inspection point agreement rate, by quarter, 1979 through the second quarter of 1984.]

Attachment 2C (Del George-5)
CSR REINSPECTION RESULTS FOR LKC
24 LKC Inspectors, All Samples, Welding Only
[Bar chart: number of welds and weld agreement rate, by quarter, 1979 through the second quarter of 1984.]

Attachment 2C (Del George-6)
CSR REINSPECTION RESULTS FOR LKC
24 LKC Inspectors, All Samples
[Bar chart: number of inspection points, welding and non-welding, by quarter, 1979 through the second quarter of 1984.]
UNITED STATES OF AMERICA
NUCLEAR REGULATORY COMMISSION
BEFORE THE ATOMIC SAFETY AND LICENSING BOARD
In the Matter of )
)
COMMONWEALTH EDISON COMPANY ) Docket Nos. 50-456
) 50-457
(Braidwood Station Units 1 and 2) )
REBUTTAL TESTIMONY OF MARTIN R. FRANKEL (ON ROREM Q. A. SUBCONTENTION 2)
(Harassment and Intimidation)
August, 1986
REBUTTAL TESTIMONY OF MARTIN R. FRANKEL ON THE CONSTRUCTION SAMPLE REINSPECTION ELEMENT OF THE BRAIDWOOD CONSTRUCTION ASSESSMENT PROGRAM

Q.1. Please state your full name for the record.
A.1. Martin R. Frankel.

Q.2. Please describe your present positions and your job responsibilities.
A.2. At the present time I am Professor of Statistics, Bernard Baruch College, City University of New York, 17 Lexington Avenue, New York, New York 10010. I am responsible for the teaching of all graduate and undergraduate courses in survey sampling. In addition, I teach courses in general statistics and in computer languages. I have been at Baruch College since 1971 with the exception of a two year period when I was an Assistant Professor of Statistics in the Graduate School of Business of the University of Chicago.
I also serve as Technical Director of the National Opinion Research Center, University of Chicago. In this position I am responsible for the statistical and technical quality of all contract survey research conducted by the Center.
Q.3. Please describe your educational and professional background.

A.3. I hold an AB degree in Mathematics from the University of North Carolina. I hold an M.A. degree in Mathematical Statistics and a Ph.D. degree in Mathematical Sociology from the University of Michigan. My doctoral dissertation was in the area of inference from complex probability samples. This dissertation, which was published by the Institute of Social Research of the University of Michigan under the title Inference From Complex Samples, is currently in its fifth printing.

I have been actively involved in the use of probability sampling techniques for a period of 21 years. Over this time period I have been involved in the design, selection and implementation of more than 100 different large scale samples. This work has been carried out for federal agencies, universities, an international organization and business firms.
The major professional organization for applied statisticians in the United States is the American Statistical Association. I was elected a Fellow of the Association in 1979 for my work in the area of probability survey sampling. I have served as Chairman of the Association's Section on Survey Research Methods and its Advisory Committee to the U.S. Bureau of the Census. I also served as an Associate Editor of the Journal of the American Statistical Association for a period of 8 years. In addition to the title mentioned above, I am co-author of 2 books in the area of survey sampling. I am coauthor and author respectively of the chapters on probability sampling in The Handbook of Marketing Research (McGraw Hill, 1974) and the Handbook of Survey Research (Academic Press). I have published articles on survey sampling in various scientific journals. I am one of the four members of the Editorial Board of the 8 volume Encyclopedia of Statistical Sciences (John Wiley and Sons).
I was elected to membership in the International Statistical Institute in 1983. At the present time I serve as a member of the National Academy of Sciences Panel on Occupational Safety and Health Statistics.
Q.4. Are you familiar with the Construction Sample Reinspection (CSR) element of the Braidwood Construction Assessment Program (BCAP)?
A.4. Yes. I have had an active involvement with BCAP and specifically with the CSR element of BCAP since August of 1984.
Q.5. Can you define the terms probability sample, non-probability sample and random sample?

A.5. A probability sample is a sample that is selected by a procedure that gives each element in a defined population a known, calculable, non-zero probability of being included in the sample.

A non-probability sample is any sample that does not fall under the definition of a probability sample.
The term random sample is often used three different ways.

In the formal theory of probability sampling it is used to describe a type of probability sample in which all combinations of elements of a given size in the population and all subsets of this size have an equal chance of being selected into the sample. In this context, random samples of elements may be defined as "selected without replacement" or "with replacement."

In general statistical theory, the term random sample is used to describe a sample from a population that may be treated mathematically as the product of independent, identically distributed random variables. As I will discuss later, there are numerous instances where samples which do not satisfy the probability sampling definition of random samples are treated as random samples in various analytical and inferential procedures.

The term random is also used by the general population and the media that serve this population. In this context the term does not seem to have any clearly defined meaning.
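The distinction drawn above between sampling "without replacement" and "with replacement" can be illustrated with a short sketch. This is illustrative only and not part of the record; the population of 1,000 numbered items and the sample size of 60 are invented for the example:

```python
import random

# Hypothetical population of 1000 numbered hardware items (invented).
population = list(range(1, 1001))
n = 60  # illustrative sample size

# Simple random sample WITHOUT replacement: no item can appear twice.
without_repl = random.sample(population, n)

# Simple random sample WITH replacement: an item may be drawn more than once.
with_repl = [random.choice(population) for _ in range(n)]

# Under simple random sampling without replacement, every item has the same
# known, calculable, non-zero inclusion probability n/N -- the defining
# property of a probability sample.
inclusion_prob = n / len(population)
```

Either draw yields a probability sample in the formal sense; the "without replacement" form is the one the CSR used.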
Q.6. As a general matter, in cases where it is possible to collect data about each item in a given population, is there any reason to sample the population instead?
A.6. There are various situations when it may be preferable to examine a sample of items from a population rather than the entire population. This preference for sampling may come about for three distinct reasons. First, time constraints may make it impossible to undertake a complete population examination but it may be possible to undertake and complete a sample study within the allotted time. Second, it may be difficult to justify the cost of a complete population examination but it may be cost effective to examine a sample. Third, and sometimes most importantly, the use of a sample may produce study results that are more precise than the results that would be produced by a complete population enumeration or study. This may occur because the imprecision and uncertainty that is introduced because a sample is used may be less than the imprecision and uncertainty associated with undertaking a large scale study. For example, data collectors may not do as good a job counting a large number of items as they would if they had fewer items to count. In other words, on a per item or case basis, the errors associated with undertaking a large scale study may be significantly greater than those that occur in a smaller study. This difference in imprecision may more than offset the errors that occur because only a sample is examined.
A somewhat extreme example of the superiority of a sample over a complete population enumeration or study is provided by the decennial US census. Given the number of temporary employees that must be hired and trained in order to conduct the decennial population census, it is generally recognized that a more accurate projection of the number of inhabitants in the entire United States, as a whole, would be obtained if the count were carried out on a sample basis. In fact, it is a sample study, known as the PES (Post Enumeration Survey), that is used in order to evaluate the degree to which a census over or under count may have occurred.
Q.7. Can you describe the role of probability samples in drawing inferences from a sample to a larger population?

A.7. The use of probability sampling methods generally assures that objective statistical inferences may be drawn about the larger population from which the sample was selected. More specifically, support for one of the assumptions that must be made in order to apply various theories of mathematical statistics may be directly linked to the sample selection process.

In less technical terms, the use of probability sampling methods makes questions about sample adequacy amenable to answer via objective statistical methods.

The methods and techniques of probability sampling were first introduced in the late 1930s and early 1940s. Since that time it has been recognized that probability sampling generally simplifies the drawing of inferences to the larger population. This simplification is directly linked to the built-in "objectivity" of the methods used in the selection of probability samples.
At the present time probability samples are used in order to produce a number of national level statistical series that inform Federal Government policy decisions. Examples of these statistics include the size of the labor force, the unemployment rate, the size of the money supply and various measures of health status.

Q.8. Can you describe the role of non-probability samples in drawing inferences from the sample to a larger population?
A.8. While it is true that probability sampling is generally preferred to non-probability sampling, this does not preclude the use of non-probability samples in making inferences about larger populations. There are many instances which involve public welfare and safety in which policy decisions are made on the basis of non-probability samples.
Examples of the use of non-probability samples in this context include the approval of drugs for general distribution and testing of products for the satisfaction of safety standards.

Non-probability samples are somewhat more difficult to use than probability samples, for the purpose of making inferences to a large population. This is because the adequacy of a non-probability sample is not assured by the sample selection method, but instead, must be evaluated by an individual or individuals who have substantive knowledge and experience related to the quantities under study.
Q.9. Can you please define the terms reliability and confidence level?

A.9. Both of these terms are used in a number of different contexts. In the present context the term reliability is used to denote the complement of the error or failure percentage rate. More specifically the reliability percentage is 100% minus the failure percentage. For example, if the manufacturer of a certain product guaranteed that parts would pass certain tests with a reliability rate of 99%, this means that the manufacturer was guaranteeing that not more than 1% of the parts would fail these tests.

The term confidence level is used in its standard statistical context. Specifically, the term confidence level is a measure of the probability associated with the truth of a statistical statement. If a statement has a 95% confidence level this means that the probability that the statement is true is 95%.
Q.10. What kind of sampling program is the CSR?
A.10. The CSR made use of both probability samples and non-probability samples in various construction category populations. The nature of these populations is described in Dr. Kaushal's testimony.
For each construction category population a probability sample was selected via the use of random selection without replacement. Under this type of sampling plan, each element in a population is given the same chance of selection as every other element in the population. Furthermore, the probabilities of selection among various elements are statistically independent.
As initially designed, the CSR sample sizes were specified in sequential form, which allowed for possible sample expansion. In this context the feasible sample sizes were chosen so as to assure a minimum statement of 95% reliability with 95%
confidence.
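The testimony does not show the arithmetic behind these sample sizes. Under the standard zero-failure attribute-sampling formulation, however, a "95% reliability with 95% confidence" statement requires the smallest n for which (0.95)^n <= 0.05. A minimal sketch, assuming that conventional formulation rather than the CSR's actual (unstated) sizing method:

```python
def min_sample_size(reliability, confidence):
    """Smallest n such that, with zero observed failures in the sample, one
    can state the population failure rate is at most (1 - reliability) at
    the given confidence: reliability ** n <= 1 - confidence."""
    n = 1
    while reliability ** n > 1 - confidence:
        n += 1
    return n

# Classic zero-failure 95/95 attribute sample size.
n_95_95 = min_sample_size(0.95, 0.95)
```

Under this formulation the minimum 95/95 sample size works out to 59 items, which is why sequential plans of this kind allow for expansion if a discrepancy is found.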
The non-probability component of the CSR involved the selection of additional population elements based on general engineering judgment. Criteria for selection within this context are discussed in the testimony of Dr. Kaushal.
Q.11. Is the use of both a probability and non-probability sample an unusual practice?

A.11. No. This "dual" approach is often used in quality assurance and auditing examinations in order to provide objective and subjective information. Each type of sample serves as a complement to the other.
Q.12. What inferential statements about the quality of the electrical work at Braidwood can be made based on the results of the CSR?
A.12. Given that no design significant discrepancies were found in any of the CSR construction categories, objective statistical statements concerning the absence of design significant discrepancies may be made at the 95% level of reliability with 95% confidence for each of the electrical construction categories for the period covered by the CSR program.

With respect to objective statistical inferences, it should be noted that for certain populations the sample sizes exceeded levels required for objective statements of 95% reliability at 95% confidence. For these populations the levels of reliability and confidence are correspondingly higher. In addition, higher levels of reliability and confidence will result when the results are combined across all electrical populations.
As a statistician I am not qualified to discuss the subjective inferences that may be made from the non-probability portion of the sample.

Q.13. Dr. Frankel, for purposes of the following questions, I define the term "agreement rate" as the number of inspection points determined by CSR inspections to be acceptable (i.e., non-discrepant) for any hardware item, divided by the total number of inspection points reinspected for the same item.

Is it possible to use objective statistical methods to evaluate whether the error rates or agreement rates for L. K. Comstock QC inspections differed when we compare the period prior to July 1, 1982 and the period on or after July 1, 1982?
A.13. Yes, I have performed such an evaluation.

Q.14. What were the conclusions of this evaluation?

A.14. My evaluation showed that there was not a statistically significant difference between the agreement rate for L. K. Comstock QC inspections prior to July 1, 1982 and the agreement rate for L. K. Comstock inspections on or after July 1, 1982.

Q.15. How did you go about doing that evaluation?

A.15. As is described in Dr. Kaushal's testimony, random sampling was one of the methods used to select samples of items in each of six electrical populations covered by the CSR.
For each item in an electrical population that was selected through random sampling, I was provided by Edison with counts of the number of inspection points, the number of discrepancy points, and the number of agreement points (inspection points - discrepancy points = agreement points). These counts, which were provided on an item by item basis, were combined for all six construction categories and divided into two parts: counts associated with Comstock inspections that were performed prior to July 1, 1982 and counts associated with Comstock inspections that were performed on or after July 1, 1982.

Since this data was based on a probability sample, it was possible, using objective statistical methods, to carry out a projection of these agreement rates to the population of all Comstock inspections covered by the CSR defined electrical populations. Furthermore, it was possible to carry out this projection separately for the time period prior to July 1, 1982 and the time period on or after July 1, 1982.
Q.16. What did these projections show?

A.16. These projections showed that the overall agreement rate for Comstock QC inspections prior to June 30, 1984 was 98.909%. The agreement rates for the periods prior to July 1, 1982 and on or after July 1, 1982 were 98.893% and 98.925% respectively.

In order to obtain an objective confirmation that this apparently small difference in sample projections was indeed insignificant and trivial, I conducted a statistical hypothesis test.
Q.17. What is a statistical hypothesis test?

A.17. A statistical hypothesis test is a paradigm for inference which involves a formal set of steps. These steps ultimately result in a test conclusion of "acceptance" or "rejection" of a stated hypothesis or assertion.
Q.18. Please describe how you carried out the statistical hypothesis test.

A.18. In the present context the statistical test was carried out in four steps:

STEP 1. Statement of assumptions and hypotheses;

STEP 2. Specification of significance level and formulation of test statistic and critical region;

STEP 3. Computation of the test statistic; and

STEP 4. Statement of results and conclusions.
Q.19. Please describe the first step of your statistical hypothesis test.

A.19. In the first step of a statistical test, the population and sample are defined and two mutually exclusive and exhaustive hypotheses are formulated.

In the present situation, there are two populations: Comstock inspections of work covered by the CSR that occurred prior to July 1, 1982 and Comstock inspections of work covered by the CSR that occurred on or after July 1, 1982.

The sample of these two populations consisted of a stratified random sample of elements from 6 strata (the six construction categories) with total sample size equal to 467. This represents the total number of hardware items in the six construction categories which were selected using random sampling methods and for which at least one inspection point was included in the data that I used in my projections.
The two hypotheses are as follows:
NULL HYPOTHESIS: The agreement rate for pre-July 1, 1982 Comstock inspections covered by CSR is the same as the agreement rate for post-July 1, 1982 Comstock inspections covered by CSR.
ALTERNATIVE HYPOTHESIS: The agreement rate for pre-July 1, 1982 Comstock inspections covered by CSR is different from the agreement rate for post-July 1, 1982 Comstock inspections covered by CSR.
Q.20. Please describe Step 2 of your statistical hypothesis test, i.e., your specification of significance level and formulation of test statistic and critical region.

A.20. The next step in conducting a statistical hypothesis test is the specification of the "level" of significance or alpha level. Level of significance specifies the probability that the statistical test will REJECT the NULL hypothesis if the NULL hypothesis is indeed TRUE. In general the lower the level of significance, the more conservative is the test. The most commonly used value for an alpha level in statistical hypothesis tests is 5%, but depending upon the application, alpha levels sometimes range between 20% and 0.1%. As an initial starting point the test was carried out using a 5% alpha level. As shall be seen later, the test results for this CSR data are invariant over a broad range of alpha levels.
The test statistic is the numerical quantity that will be derived from the data obtained from the two samples (Comstock inspections pre and post July 1, 1982).
The test statistic (TS) for the hypotheses stated in step 1 consists of the difference between the pre and post July 1, 1982 agreement rates divided by the standard error of this difference.

      AGREEMENT RATE PRE 7/1/82 - AGREEMENT RATE POST 7/1/82
TS =  ------------------------------------------------------
                 STANDARD ERROR OF DIFFERENCE

The formula for the standard error of the difference may be found in most statistical sampling texts. It is shown in Attachment 2C (Frankel-1).
The critical region for the test is defined as the values of the test statistic (TS) which will result in the rejection of the NULL hypothesis. If the test statistic does not fall within the critical region the NULL hypothesis will be ACCEPTED.

The formula for the critical region can also be found in most statistical sampling texts. Note that the critical region is a function of the alpha level selected. For the test statistic in this context, the critical region is 1.96 or greater or -1.96 or less. In other words, if the test statistic is greater than or equal to 1.96 or less than or equal to -1.96, the NULL hypothesis of no difference will be REJECTED. If the test statistic falls inside the range -1.96 to +1.96, the NULL hypothesis can not be REJECTED. It is therefore ACCEPTED.
Q.21. Please describe Step 3 in your statistical hypothesis test.

A.21. The third step in the conduct of a statistical test involves the computation of the test statistic based on the actual data.

In the present situation the two agreement rates are 98.893% and 98.925% respectively. The standard error of the difference is 0.24314%. Thus the test statistic TS is calculated as

      AGREEMENT RATE PRE 7/1/82 - AGREEMENT RATE POST 7/1/82
TS =  ------------------------------------------------------
                 STANDARD ERROR OF DIFFERENCE

      98.893 - 98.925
   =  ---------------
          0.24314

   =  -0.1316

Q.22. Please explain the results of the statistical hypothesis test.

A.22. The final step in a statistical test involves a statement of the test conclusions based upon the test statistic and the critical region. In the present situation, the test statistic (TS = -0.1316) falls well inside the range -1.96 to 1.96. Thus, the conclusion of the test is ACCEPTANCE of the NULL hypothesis. In other words, the test does NOT find a statistically significant difference between the agreement rates before and after July 1, 1982.
It should be noted that had the alpha level been different, the conclusion would have been the same.
For example if the test had been carried out at the
20% alpha level, the critical region would range from
-1.28 to +1.28. The test statistic remains -0.1316, which falls well within this region. Had the test been carried out at a 0.1% alpha level, the critical region would range from -3.29 to +3.29. Again, the test statistic falls well within this region.
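The four-step test, using the agreement rates and standard error quoted in this testimony, can be sketched numerically. The critical values here are the usual two-sided standard normal quantiles; this is an illustrative reconstruction, not the witness's own computation:

```python
from statistics import NormalDist

pre, post = 98.893, 98.925       # agreement rates (%), from the testimony
se_diff = 0.24314                # standard error of the difference (%)

ts = (pre - post) / se_diff      # test statistic, approximately -0.1316

def reject_null(ts, alpha):
    """Two-sided test: REJECT when |TS| meets or exceeds the critical value
    drawn from the standard normal distribution (e.g. 1.96 at alpha = 5%)."""
    critical = NormalDist().inv_cdf(1 - alpha / 2)
    return abs(ts) >= critical

# The conclusion is the same over a broad range of alpha levels:
# the NULL hypothesis is accepted at 0.1%, 5%, and 20%.
decisions = {alpha: reject_null(ts, alpha) for alpha in (0.001, 0.05, 0.20)}
```

Since |TS| is about 0.13 while even the most permissive critical value shown (1.28 at the 20% level) is an order of magnitude larger, the invariance of the conclusion across alpha levels is apparent at a glance.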
Q.23. Are there any additional features of statistical tests that should be noted?
A.23. Yes. In addition to considering the level of significance associated with a test, it is also sound practice to examine a quantity known as the statistical "power of the test" or simply the power.
Statistical power refers to the probability that the test process will REJECT the NULL hypothesis when it is, in fact, FALSE.
Examination of power is somewhat more complex since there are a number of different ways that the NULL hypothesis might be false. For example in the present context, the hypothesized difference in agreement rates before and after July 1, 1982 might be one percent (1.0%, absolute), or nine tenths of one percent (0.9%, absolute) or some other value.
Since the possible ways in which the NULL hypothesis might be false can vary, the statistical power of the test is determined for various alternative scenarios under which the NULL hypothesis might be false.
For the test described above, the power is in excess of 99.0% against a situation in which the agreement rates differ by 1% (absolute) or more. This power level in excess of 99% holds for differences as small as 0.7% (absolute). The power level is in excess of 95% for differences as small as 0.5% (absolute).
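Power of this kind is conventionally approximated with the normal-approximation formula sketched below. The testimony does not state the variance assumptions behind its quoted power figures, so this sketch illustrates the generic two-sided formula (using the standard error quoted for the overall test) rather than reproducing the witness's exact numbers:

```python
from statistics import NormalDist

def power_two_sided(true_diff, se, alpha=0.05):
    """Approximate probability that a two-sided z-test REJECTS the NULL
    hypothesis when the true (absolute) difference is `true_diff` and the
    standard error of the estimated difference is `se`."""
    z = NormalDist()
    crit = z.inv_cdf(1 - alpha / 2)
    shift = true_diff / se
    return z.cdf(shift - crit) + z.cdf(-shift - crit)

se = 0.24314  # standard error of the difference (%), from the testimony
# Power grows quickly as the hypothesized difference grows.
p_half = power_two_sided(0.5, se)
p_one = power_two_sided(1.0, se)
```

When the true difference is zero, the formula collapses to the alpha level itself, which is the expected behavior of any correctly sized test.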
Q.24. Dr. Frankel, what would be the conclusion of a statistical test if you were to compare the agreement rates for the period prior to August 1, 1983 and on or after August 1, 1983?
A.24. Application of the same test procedure shows the same conclusion of no statistically significant difference at any of the levels which are typically used for significance testing. This includes the 0.1% alpha level, the 1% alpha level, the 5% alpha level, the 10% alpha level, and the 20% alpha level.
Q.25. Dr. Frankel, could you please describe in more detail how you combined items in different construction categories for purposes of the analyses described in your previous answers?

A.25. On the basis of discussions with Mr. Del George and Dr. Kaushal it was determined that the parameter of choice for the assessment of the effectiveness of Comstock QC inspections was the item specific agreement rate over all reinspections within the scope of the CSR attributable to Comstock inspections.

Since the CSR included random samples from each of the six construction categories covered by Comstock inspections, it was possible to produce an objective statistical estimate of this population parameter.

In the terminology of survey sampling, the six random samples from each of the six construction categories may be viewed as a single, stratified, probability sample from the entire combined population. Each of the CSR construction categories or populations is referred to as a stratum in the terminology of survey sampling theory.
The fact that the sample qualifies as a probability sample follows immediately from the fact that a random sample of construction items was selected, with known probabilities, from each of the six electrical construction categories in the population. The probability of selection for an item in the h-th CSR population or stratum (h takes on values from 1 to 6) is f_h = n_h / N_h, where N_h and n_h are the sizes of the population and the random sample respectively.
Since this is a probability sample, it is possible to produce unbiaseo estimates of population totals from
( tne sum of each sample value weighted by the inverse of its probability of selection. Tnis procedure was followed for producing the component parts of the sample estimates describea below. For eacn randomly selected item in the CSR sample, Edison provided me witn the counts of the number of inspection points and 1 the number of agreement points (inspection points -
aiscrepancy points = agreement points). Tnese counts were provided separately for Comstock inspections prior to July 1, 1982 and for Comstock inspections on or after July 1, 1982. Using the inverse probability of selection for each item, I appliea weignts to these counts in order to obtain projections of the total number of inspection points and the total number of k agreement points for the entire population of Comstock QC inspections from which the CSR samples were drawn.
The agreement rates described in my previous answer ~
were the percent of projected CSR agreement points relative to the projected total number of inspection points for the entire population.
l A
, These estimates, which are termed ratio estimates in survey sampling theory, are described mathematically
- in Attachment 2C (Frankel-1).
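The inverse-probability weighting and ratio estimate described above can be sketched as follows. The stratum sizes and per-item counts here are invented placeholders, since the actual CSR counts are not reproduced in this record:

```python
# Each sampled item carries its stratum's population size N_h and sample
# size n_h; its weight is the inverse of the selection probability n_h/N_h.
# All numbers below are invented for illustration only.
sampled_items = [
    # (N_h, n_h, inspection_points, agreement_points)
    (1200, 60, 10, 10),
    (1200, 60, 8, 7),
    (800, 40, 12, 12),
]

total_points = 0.0
total_agree = 0.0
for N_h, n_h, points, agree in sampled_items:
    weight = N_h / n_h          # inverse probability of selection
    total_points += weight * points
    total_agree += weight * agree

# Ratio estimate: projected agreement points over projected inspection
# points, expressed as a percentage.
agreement_rate = 100.0 * total_agree / total_points
```

Because the weight attached to an item is N_h / n_h, items from lightly sampled strata count for more in the projected totals, which is exactly what makes the weighted totals unbiased estimates of the population totals.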
Q.26. What did you do for any items for which no Comstock QC inspector could be identified, or more than one Comstock QC inspector was identified?

A.26. Since the purpose of this sample projection was the estimation of an inspector level agreement rate and not a present hardware observation rate, only inspection points that could be definitively linked to Comstock inspectors and times were included in the estimate. If the inspector could not be identified, then the counts were not included. If more than one Comstock inspector and/or more than one date was associated with inspections for a particular item, the counts for the corresponding inspection points and agreement points were included in the appropriate time period or periods (even if this resulted in double-counting).
Q.27. Dr. Frankel, did you evaluate whether the L.K. Comstock agreement rate in visual weld inspections differed over time, based on the CSR data?

A.27. Yes. The agreement rates in visual weld inspections followed the same pattern shown by the overall agreement rates. The agreement rate on an inspection point basis for visual welds was higher in the second time period examined (in one instance post-July 1, 1982 and in the second post-August 1, 1983). This difference was not statistically significant.

Q.28. Dr. Frankel, did you evaluate whether the error rate differed over time for each of the six construction categories?

A.28. I attempted this evaluation but was unable to do so.

Q.29. Why not?
A.29. When a percentage estimate is based on a ratio of two separate sample estimates, the percentage is, in the terminology of statistical survey sampling, a ratio type. When ratio type estimates are used, certain conditions must be satisfied. One of these conditions involves the coefficient of variation of the denominator of the ratio. More specifically, the coefficient of variation of the denominator should be less than 0.20 (20%).1/ Examination of the appropriate coefficients of variation for the separate construction categories revealed that the sample sizes within most categories would not support separate analyses of the type carried out for the entire sample.

1/ This condition may be found in standard texts on probability sampling. See, for example, Kish, L., Survey Sampling. New York: John Wiley and Sons, 1965, pp. 207-209.
It should be noted, nowever, that in tne several instances wnere separate analyses were possible, the test conclusions were the same as those for tne total population. More specifically, for tnose construction categories and time period divisions where testing was possible, no statistically significant differences in agreement rates were found between the two time perious.
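The 20% coefficient-of-variation screen described above can be illustrated with a short calculation. This is an illustrative sketch only; the inspection-point counts below are hypothetical values, not the Braidwood CSR data, and the finite population correction is omitted for simplicity.

```python
import math

def coefficient_of_variation(values):
    """Estimated coefficient of variation of a sample total:
    standard error of the estimated total divided by the total."""
    n = len(values)
    total = sum(values)
    mean = total / n
    # Sample variance of the individual item counts (n - 1 denominator).
    var = sum((v - mean) ** 2 for v in values) / (n - 1)
    # Standard error of the estimated total for a simple random sample,
    # ignoring the finite population correction for simplicity.
    se_total = n * math.sqrt(var / n)
    return se_total / total

# Hypothetical inspection-point counts for two construction categories.
large_category = [12, 15, 9, 14, 11, 13, 10, 16, 12, 14]  # many, stable items
small_category = [3, 30, 1]                                # few, erratic items

for name, counts in [("large", large_category), ("small", small_category)]:
    cv = coefficient_of_variation(counts)
    verdict = "usable" if cv < 0.20 else "too unstable for a ratio estimate"
    print(f"{name}: cv = {cv:.3f} -> {verdict}")
```

In this sketch the well-populated category passes the 0.20 screen while the sparse category fails it, which is the situation the testimony describes for most of the separate construction categories.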
Q.30. Dr. Frankel, are you familiar with the results of the PTL overinspections of welds for the period July 1982 to June 1986, as described in Mr. Marcus' testimony?

A.30. Yes.

Q.31. Assume that the per-weld agreement rate between the PTL overinspections and the Comstock QC inspections is a measure of the effectiveness of Comstock QC inspections. What, if anything, can be said about the effectiveness of Comstock QC inspections over time, based on these PTL overinspection results?
A.31. Since the PTL sampling process was not a probability process, inferences from these sample results to the entire population must be based on subjective expertise. However, if we consider the sample as an entire population, it is possible to examine this population for a linear trend over time. Attachment 2C (Frankel-2) shows the agreement rates (without override) for the PTL overinspections of the L. K. Comstock visual weld inspections for the period July 1982 through June 1986. It should be noted that no overinspections were reported for the months of October and November 1982. These months are eliminated from the computations that follow in order to prevent distortions.

It is clear from this graph that there are variations in agreement rates over time. Further, there does not appear to be a strong trend over time, but there is some indication of increasing agreement rates.

In order to provide a more objective quantification of a possible trend over time, it is possible to apply a technique known as linear regression analysis. Basically, in linear regression analysis, a straight line is mathematically fit to the actual data points using a criterion known as "least squares."
For the data shown in Attachment 2C (Frankel-2) the least squares line is Y = 90.247 + 0.117295X, where Y = agreement rate and X = time (1, 2, 3, ..., etc.). This means that if we mathematically fit a straight line to this data we find a slight positive trend in the agreement rate over time (i.e., the inspections got slightly better over time). It should be emphasized that this trend is very slight.

In conjunction with this best fitting straight line, a quantity known as the coefficient of determination, or R squared, was also calculated. The coefficient of determination tells us the extent to which the fitted straight line can, in fact, explain variations in the data over time. R squared will fall somewhere between 0.0 and 1.0. A value of 1.0 would indicate a perfect explanation of the variation in the agreement rate on the basis of time. A value of 0.0 would indicate the total lack of a relationship between the agreement rate and time. For this data set, the value of R squared was 0.0702. This is quite small and indicates only a slight linear relationship between time and the agreement rate.
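The least-squares fit and coefficient of determination described above can be reproduced with a few lines of code. The monthly rates below are made-up illustrative values, not the actual PTL overinspection data; only the technique matches the testimony.

```python
def least_squares(xs, ys):
    """Fit Y = a + b*X by ordinary least squares; return (a, b, r_squared)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sxx = sum((x - mean_x) ** 2 for x in xs)
    b = sxy / sxx                     # slope of the fitted line
    a = mean_y - b * mean_x           # intercept
    # R squared: fraction of the variation in Y explained by the line.
    ss_tot = sum((y - mean_y) ** 2 for y in ys)
    ss_res = sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))
    r_squared = 1.0 - ss_res / ss_tot
    return a, b, r_squared

# Hypothetical monthly agreement rates (percent), indexed X = 1, 2, 3, ...
months = list(range(1, 13))
rates = [88.0, 91.5, 89.0, 92.0, 90.5, 93.0,
         89.5, 92.5, 91.0, 94.0, 90.0, 93.5]

a, b, r2 = least_squares(months, rates)
print(f"fitted line: Y = {a:.3f} + {b:.5f} X, R^2 = {r2:.4f}")
```

A small positive slope with a small R squared, as in this sketch, corresponds to the testimony's conclusion: a slight upward trend that explains little of the month-to-month variation.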
Q.32. Does this conclude your testimony?
A.32. Yes.
ATTACHMENT 2C (Frankel-1)
The population consists of H = 6 strata. Within each stratum a random sample of $n_h$ items was selected using simple random (without replacement) sampling from the population of $N_h$ items. Thus the uniform probability of selection for items within the h-th stratum is

$$ f_h = \frac{n_h}{N_h}. $$

Let $y_{hi}$ = the number of inspection points minus the number of discrepancy points found during the first time period associated with the i-th sample item in the h-th stratum. This quantity is defined as the number of agreement points for the i-th sample item in the h-th stratum.

Let $x_{hi}$ = the number of inspection points during the first time period associated with the i-th sample item in the h-th stratum.

Let $\tilde{y}_{hi}$ and $\tilde{x}_{hi}$ denote the agreement and inspection point counts during the first period weighted by the inverse of their probability of selection into the sample. Thus,

$$ \tilde{y}_{hi} = y_{hi} \cdot \frac{N_h}{n_h}, \qquad \tilde{x}_{hi} = x_{hi} \cdot \frac{N_h}{n_h}. $$

The projected totals y and x for the above variables are defined as the sample sums of the weighted values $\tilde{y}_{hi}$ and $\tilde{x}_{hi}$:

$$ y = \sum_{h=1}^{H} \sum_{i=1}^{n_h} \tilde{y}_{hi}, \qquad x = \sum_{h=1}^{H} \sum_{i=1}^{n_h} \tilde{x}_{hi}. $$

The agreement rate during the first period is a ratio estimate, r, defined as

$$ r = \frac{y}{x} = \frac{\sum_{h=1}^{H} \sum_{i=1}^{n_h} \tilde{y}_{hi}}{\sum_{h=1}^{H} \sum_{i=1}^{n_h} \tilde{x}_{hi}}. $$

In a corresponding fashion, define y', x' and r' as the projected total agreement points, inspection points and agreement rate for the second time period.

The statistic of interest is the difference between agreement rates between time periods, r - r'.
The estimated standard error of the difference r - r' is given by

$$ se(r - r') = \left[ \, var(r - r') \, \right]^{1/2} $$

where

$$ var(r - r') = \frac{1}{x^2} \sum_{h=1}^{H} dz_h^2 \; + \; \frac{1}{x'^2} \sum_{h=1}^{H} dz_h'^2 \; - \; \frac{2}{x x'} \sum_{h=1}^{H} dz_h \, dz_h'. $$

The term $dz_h^2$ is defined as

$$ dz_h^2 = (1 - f_h) \, \frac{n_h}{n_h - 1} \left( \sum_{i=1}^{n_h} z_{hi}^2 - \frac{z_h^2}{n_h} \right) $$

where

$$ z_{hi} = \tilde{y}_{hi} - r \, \tilde{x}_{hi} \qquad \text{and} \qquad z_h = \sum_{i=1}^{n_h} z_{hi}. $$

The terms $dz_h'^2$ are defined as above, using $z_{hi}' = \tilde{y}_{hi}' - r' \, \tilde{x}_{hi}'$ in place of $z_{hi}$.

The terms $dz_h \, dz_h'$ are defined as

$$ dz_h \, dz_h' = (1 - f_h) \, \frac{n_h}{n_h - 1} \left( \sum_{i=1}^{n_h} z_{hi} z_{hi}' - \frac{z_h z_h'}{n_h} \right). $$

These formulas may be found in Chapter 6 of Survey Sampling by Leslie Kish (John Wiley and Sons, 1965).
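The definitions above can be sketched in code. The stratum sizes and per-item counts below are hypothetical, not the Braidwood sample; the arithmetic follows the attachment's definitions (inverse-probability weighting, the ratio estimate r, and the $dz_h^2$ terms). For brevity this computes the standard error of a single-period r; the two-period $se(r - r')$ in the attachment adds the analogous $dz_h'^2$ and cross-product terms.

```python
import math

# Each stratum: population size N_h and sampled (agreement, inspection)
# point counts per item. Hypothetical illustrative values only.
strata = [
    {"N": 100, "items": [(9, 10), (8, 10), (10, 10), (7, 9)]},
    {"N": 50,  "items": [(5, 6), (6, 6), (4, 5)]},
]

def ratio_estimate(strata):
    """Projected agreement rate r = y / x, weighting each item by N_h / n_h.
    Returns (r, projected inspection-point total x)."""
    y = x = 0.0
    for s in strata:
        w = s["N"] / len(s["items"])                  # N_h / n_h
        y += w * sum(a for a, _ in s["items"])        # projected agreements
        x += w * sum(b for _, b in s["items"])        # projected inspections
    return y / x, x

def se_ratio(strata, r, x):
    """Standard error of r via the dz_h^2 terms of the attachment:
    se(r) = sqrt(sum_h dz_h^2) / x."""
    var = 0.0
    for s in strata:
        n = len(s["items"])
        w = s["N"] / n
        f = n / s["N"]                                # sampling fraction f_h
        z = [w * a - r * w * b for a, b in s["items"]]  # z_hi = y~ - r * x~
        dz2 = (1 - f) * (n / (n - 1)) * (sum(v * v for v in z)
                                         - sum(z) ** 2 / n)
        var += dz2
    return math.sqrt(var) / x

r, x = ratio_estimate(strata)
print(f"r = {r:.4f}, se(r) = {se_ratio(strata, r, x):.4f}")
```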
The coefficient of variation for x is computed as

$$ cv(x) = \frac{\left[ \displaystyle \sum_{h=1}^{H} (1 - f_h) \, \frac{n_h}{n_h - 1} \left( \sum_{i=1}^{n_h} \tilde{x}_{hi}^2 - \frac{x_h^2}{n_h} \right) \right]^{1/2}}{\displaystyle \sum_{h=1}^{H} x_h} $$

where

$$ x_h = \sum_{i=1}^{n_h} \tilde{x}_{hi}. $$
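The cv(x) screen can be computed with the same stratified machinery. The stratum sizes and counts below are hypothetical values chosen for illustration only.

```python
import math

def cv_of_total(strata):
    """Coefficient of variation of the projected inspection-point total x,
    per the attachment: cv(x) = sqrt(sum_h dx_h^2) / sum_h x_h."""
    x_total = 0.0
    var = 0.0
    for N, counts in strata:
        n = len(counts)
        w = N / n                         # weight N_h / n_h
        f = n / N                         # sampling fraction f_h
        xw = [w * c for c in counts]      # weighted counts x~_hi
        xh = sum(xw)                      # stratum projected total x_h
        x_total += xh
        dx2 = (1 - f) * (n / (n - 1)) * (sum(v * v for v in xw)
                                         - xh ** 2 / n)
        var += dx2
    return math.sqrt(var) / x_total

# (N_h, sampled inspection-point counts) per stratum -- hypothetical values.
strata_x = [(100, [10, 12, 9, 11]), (50, [6, 5, 7])]
cv = cv_of_total(strata_x)
verdict = "supported" if cv < 0.20 else "not supported"
print(f"cv(x) = {cv:.3f}; ratio estimate {verdict}")
```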
PTL OVERINSPECTION OF LKC: AGREEMENT RATE OVER TIME

[Graph: agreement rate in percent (vertical axis, 0 to 100) plotted by month (horizontal axis, labeled MONTH-YEAR, with ticks at JUL 82, JUL 83, JUL 84, and JUL 85), covering July 1982 through June 1986.]