ML20155C046

Assessment of Effectiveness of National Programs for Safety of Sources, Including Development of Performance Indicators, at 980914-18 Intl Conference on Safety & Radiation Sources & Security of Radioactive Matls in Dijon, France
Person / Time
Issue date: 08/08/1998
From: Camper L, NRC Office of Nuclear Material Safety & Safeguards (NMSS)
Shared Package: ML20155C038
References: IAEA-CN-70-R8, NUDOCS 9811020060


Text


." 6 ASSESSMENT OF THE EFFECTIVENESS OF NATIONAL PROGRAMS FOR THE l

SAFETY OF SOURCES, INCLUDING DEVELOPMENT OF PERFORMANCE INDICATORS I

i 1

OVERVIEW REPORT IAEA /CN-70/R8 AUGUST 8,1998 l

l l

i l

l l

l l

l l

LARRY W. CAMPER, MS,MBA CHIEF, MATERIALS SAFETY BRANCH US NUCLEAR REGULATORY COMMISSION l

t t

9811020060 981022 PDR ORG EFGIg

[')8}/opcot,p_,

t l.

ABSTRACT

Most developed and emerging nations have implemented national programs containing key elements such as regulations, authorization, inspection and enforcement processes designed to protect public health and safety regarding the uses of radiation sources. These programs have been generally successful. However, the degree to which those programs have undergone an assessment of their effectiveness as determined through the use of logical and measurable performance indicators is not as clear. The International Atomic Energy Agency and the U.S. Nuclear Regulatory Commission have taken steps to provide guidance or implement programs designed to employ performance indicators relative to licensees' radiation safety programs and to the conduct of national regulatory programs. Regulatory executives are increasingly under pressure to utilize their resources as effectively as possible and manage their programs to achieve desired outcomes through cost-effective and efficient means. While substantial progress has been made in developing systematic approaches to aid regulatory authority executives in assessing their programs through the use of peer reviews and objective performance indicators, much work remains to be done in this arena. Assessment of national programs through the use of performance indicators should receive increasing attention in the future. Focused efforts to share findings and disseminate information to all regulatory authorities responsible for protecting public health and safety relative to the uses of radiation sources are warranted.

1.0 INTRODUCTION

Throughout the world within medicine, industry, research and teaching, the use of radiation sources and radioactive materials is established, widespread and generally increasing. The need to ensure the safety and control of these materials is well established and understood. The majority of developed and emerging nations have established comprehensive national programs designed to ensure proper control of radiation sources and to ensure the prevention of accidents and mitigation of consequences when such accidents take place. As a result, the overall safety record for utilizing these materials has generally been satisfactory; however, there have been some serious accidents, including fatalities. Therefore, regulatory authorities remain vigilant in their efforts to improve their radiation safety programs, with an increasing emphasis upon assessing the effectiveness of their programs.

During the past fifty years, a great deal of effort has been exerted by governments, regulatory authorities, academia, international organizations and professional societies to define the critical components of radiation safety programs and essential elements of national radiation regulatory programs designed to protect public health and safety relative to the safe use of radioactive materials. As a result, a substantial amount of literature, professional presentations and carefully developed regulatory programs have emerged, most of which have served to improve the public's confidence in the use of radioactive materials. However, the degree to which regulatory authorities have applied logical or measurable approaches to the use of performance indicators in assessing the success of their regulatory programs is not obvious within the literature. This paper will provide a brief overview of recent or ongoing efforts by various organizations to develop safety criteria, enhance accountability of radiation sources, establish criteria for national regulatory programs and methods to identify performance indicators designed to evaluate the effectiveness of regulatory efforts. The listing is certainly not all-inclusive; it attempts to identify only those documents or regulatory activities which seem to readily lend themselves to the application of performance indicators for national programs, or that specifically address performance indicators within an overall process. The paper places an emphasis on activities underway by the International Atomic Energy Agency (IAEA) and the US Nuclear Regulatory Commission (NRC). This emphasis results from the scarcity of available literature on the complex topic of the relationship between the development of radiation safety regulatory programs, especially national programs, and efforts to assess their effectiveness. It is assumed that a number of other organizations or countries are in the midst of focusing attention on this important issue. In time, additional information will emerge and should be compiled in a meaningful manner to allow access and use by regulatory authorities working toward improving the overall effectiveness of their programs.

2.0 INTERNATIONAL ATOMIC ENERGY AGENCY

The IAEA has played a major leadership role in developing radiation safety standards, development of critical elements for radiation safety programs, development of critical elements for national programs designed to protect public health and safety, and investigation of major accidents and dissemination of information about them. The top level publication in the IAEA Safety Standards Series on Radiation Protection and Safety is Radiation Protection and the Safety of Radiation Sources, IAEA Safety Fundamentals, Safety Series No. 120, approved by the IAEA Board of Governors in June 1995. Requirements for the implementation of the Safety Fundamentals are provided in the International Basic Safety Standards for Protection Against Ionizing Radiation and for the Safety of Radiation Sources, IAEA Safety Series No. 115 (referred to as the BSS). Both publications are jointly sponsored by the Food and Agriculture Organisation of the United Nations (FAO), the International Atomic Energy Agency (IAEA), the International Labour Organisation (ILO), the Nuclear Energy Agency of the Organisation for Economic Co-operation and Development (OECD/NEA), the Pan American Health Organisation (PAHO) and the World Health Organization (WHO). Both the Safety Fundamentals and the BSS are based on the recommendations of the International Commission on Radiological Protection (ICRP) as given in Publication 60 [1].

The BSS established basic requirements for protecting human beings against the risks associated with both external and internal exposure to ionizing radiation and for ensuring the safety of all types of radiation sources. The BSS have been developed from widely accepted radiation protection and safety principles, such as those published in the Annals of the ICRP and the IAEA Safety Series. The standards are intended to ensure the safety of all types of radiation sources, and in doing so, to complement standards already developed for large and complex radiation sources, such as nuclear reactors and radioactive waste management facilities. For the complex sources, more specific standards, such as those issued by the IAEA, are typically needed to achieve acceptable levels of safety. As these more specific standards are generally consistent with the BSS, in complying with them, such more complex installations will also generally comply with the BSS. The standards are limited to specifying basic requirements of radiation protection and safety, with some guidance on how to apply them [2].

Within the Safety Series publications, the BSS is supported by a number of Safety Guides that provide guidance on fulfilling the requirements of the BSS. Each Safety Guide addresses a topic area, with a necessary degree of overlap on important issues. The primary audience of the Safety Guides is regulatory authorities, though they may be of use to any organization or institution utilizing ionizing radiation. The Safety Guides are supported by Safety Report publications. These publications focus on technical information typically common to a number of practices but may be specific to a given practice. The primary audience of the Safety Reports should be licensees or registrants for the use of radiation sources, but they may also be of interest to regulatory authorities [1].

The BSS, Safety Guides and Safety Reports emphasize key operational program elements such as basic obligations, administrative requirements, management requirements, radiation protection requirements and verification of safety through inspection and monitoring processes.

These documents do not address performance indicators designed to test the effectiveness of the regulatory programs but rather focus upon identifying and explaining key elements of radiation safety programs and providing methodologies for compliance with those elements. In terms of performance, the apparent underlying thinking is that if all elements of the program are satisfactorily addressed by licensees, registrants or regulatory authorities, then the resulting program will indeed achieve satisfactory and acceptable performance. Most countries have adopted the BSS or developed an essentially equivalent set of criteria as their standards for radiation protection. However, the degree to which they have successfully implemented those standards is not readily apparent. It may be valid to assume that the low frequency of occurrence for serious radiological accidents, especially fatalities, is proof of adequate implementation. However, such logic does not lend itself to incremental improvements in regulatory programs. Such improvements may prevent an incident which would otherwise have occurred had the improvements not been made.

2.1 Draft TECDOC, "Organization and Implementation of a National Regulatory Infrastructure Governing Protection Against Ionizing Radiation and the Safety of Radiation Sources"

This document, published on July 1, 1997, was jointly sponsored by the FAO, IAEA, OECD/NEA, PAHO and WHO. The document is primarily intended to assist those Member States needing to establish or improve their national radiation protection infrastructure in order to implement the basic requirements of the BSS. Member States receiving assistance in applications of nuclear energy or radiation technology from FAO, IAEA, ILO, PAHO or WHO are expected, as a condition of such Agreements, to implement the BSS or equivalent radiation protection and safety standards as may be appropriate for their respective situations. This requirement can only be ensured by developing and implementing an adequate radiation protection infrastructure [3].

The TECDOC addresses the elements of a radiation protection and safety infrastructure at the national level needed to apply the BSS to radiation sources such as those used in industry, medicine and research. The term "infrastructure" refers to the underlying structure of systems and organizations in place to achieve the regulatory agency's mission. As radiation technologies have emerged or advanced and as radiation protection correspondingly has become more complex, an adequate radiation protection and safety infrastructure has become even more necessary. It is this system, rather than individual specialists, which provides the depth of protection necessary to protect the public constituency of the regulatory authorities. The BSS can only be implemented through an effective radiation protection infrastructure that includes adequate laws and regulations, an efficient regulatory system, supporting experts and services, and a "safety culture" shared by all those with responsibilities for protection. The infrastructure requires clear lines of authority and responsibility, and adequate resources to operate the system at all levels. Obviously, this process will vary greatly amongst countries, and the TECDOC provides assistance for all nations regardless of the current stage of development of their radiation protection program. In addition to providing advice on the essential elements of a radiation protection and safety infrastructure at the governmental level, a further objective is to provide advice on the optimization of resource utilization within the infrastructure. As a result, advice is provided on approaches to the organization and operation of national radiation protection programs in view of factors that have a bearing on optimization [3]. This emphasis on optimization of program elements and focus on resources provides the key elements for a regulatory authority to consider when establishing performance indicators. In other words, key program elements such as licensure, event response and inspection are ideal candidates for evaluation utilizing performance indicators.

Section 3.29 of the TECDOC addresses performance indicators as they relate to licensee or registrant performance. The section points out that in addition to conducting inspections to determine compliance with all applicable regulatory requirements, an inspection should provide a general sense of the 'safety' of operations within the licensee's or registrant's program [3]. Performance indicators denote specific sets of circumstances that aid in identifying the potential for degraded safety performance. Performance indicators can be utilized as a basis to inform source users of the need to improve, and also as a basis for establishing the frequency of inspection of any particular source user. Section 6.0 of the document contains a useful but limited discussion on the assessment of the effectiveness of national regulatory programs. The discussion focuses upon the establishment of procedures relative to quality assurance and analysis of program data to ensure an effective regulatory program for radiation protection and the safety of sources [3]. The document does not provide any further information on this topic, per se; however, the need for such information has been recognized by the IAEA. As a result, a Draft TECDOC (described in Section 2.3 below) is currently in preparation. That document will set forth an approach for peer review of the regulatory infrastructure utilizing performance indicators.
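As an illustration of how negative performance indicators might feed into inspection scheduling, the following minimal sketch maps a source user's flagged indicators to a shortened inspection interval. It is not drawn from the TECDOC; the indicator names, weights and interval values are hypothetical.

# Hypothetical sketch: using negative performance indicators to adjust the
# inspection frequency of a radiation source user. Indicator names, weights
# and the baseline interval are illustrative only, not taken from the TECDOC.

BASELINE_INTERVAL_MONTHS = 36  # assumed routine inspection interval

# Each flagged indicator shortens the interval by a weighting factor.
INDICATOR_WEIGHTS = {
    "radiation_safety_officer_turnover": 0.8,
    "overdue_source_inventory": 0.7,
    "prior_enforcement_action": 0.6,
    "unreported_incident": 0.5,
}

def next_inspection_interval(flagged_indicators):
    """Return a recommended inspection interval in months.

    Every flagged negative indicator multiplies the baseline interval by its
    weight, so more warning signs mean earlier re-inspection.
    """
    interval = BASELINE_INTERVAL_MONTHS
    for indicator in flagged_indicators:
        interval *= INDICATOR_WEIGHTS.get(indicator, 1.0)
    return max(6, round(interval))  # floor the interval at 6 months

if __name__ == "__main__":
    print(next_inspection_interval([]))                             # 36
    print(next_inspection_interval(["overdue_source_inventory"]))   # 25
    print(next_inspection_interval(["overdue_source_inventory",
                                    "prior_enforcement_action"]))   # 15

In practice a regulatory authority would tie such weights to documented inspection history; the point here is only that indicator findings can translate directly into a measurable scheduling decision.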

2.2 Draft TECDOC, "Safety Assessment Plans for Authorization and Inspection of Radiation Sources"

The TECDOC is intended to assist regulatory authorities and those involved with the assessments and inspections covering radiation protection and safety of radiation sources. The document's primary objective is to enhance the efficacy, quality and efficiency of the overall regulatory process. The document provides advice on good practice administrative procedures for the regulatory process relative to preparation of applications, granting of authorizations, conduct of inspections and enforcement. It also provides guidance on the development and use of standard safety assessment plans for authorization (licensure) and inspections. These plans are to be used with more comprehensive guidance relative to specific technical modalities. As a result, this TECDOC provides advice on a systematic approach to evaluations of protection and safety, with other IAEA guides assisting in distinguishing the acceptable and the unacceptable [4].

Section 3.5 of the TECDOC addresses performance indicators as they relate to licensee or registrant performance. Similar to the TECDOC discussed in Section 2.1 above, the document focuses on a specific set of circumstances that aid in the identification of radiation source users with potential for degraded safety performance. In this sense, they are negative performance indicators which serve as early subjective warnings of degraded performance and are mainly management related. A list of performance indicators for radiation source users is contained within Annex X of the TECDOC.

The document does not address performance indicators relative to the processes to be used by the regulatory authorities in issuing authorizations, conducting inspections or addressing enforcement issues. However, all of these processes could be subject to review through performance indicators. For example, managers of regulatory programs could review their programs to determine the extent to which all of the criteria and standard checklists contained within the annexes of the document are followed by their regulatory staff. An alternative would be, in those instances where inspectors identify significant problems within the licensee's program, to conduct a retrospective comparative analysis to determine how much of the problem may have been discoverable during the licensing process, or, if the particular issue was identified by a reviewer, whether it was brought to successful closure prior to issuance of the authorization to possess and utilize radioactive materials.

2.3 Draft TECDOC, "Assessment by Peer Review of the Effectiveness of a Regulatory Programme for Protection Against Ionizing Radiation and for the Safety of Radiation Sources"

The document is currently in preparation. When published, the TECDOC will provide guidance on the conduct of an independent peer review of the effectiveness of a regulatory program and thus allow recommendations designed to strengthen the program. The document will address not only those aspects of a radiation program infrastructure but also those ancillary activities, such as the provision of dosimetry services, which directly affect the ability of a regulatory authority to discharge its responsibilities [5].

The document points out that the preamble to the BSS states that the Standards are based on the presumption that a national infrastructure is in place which enables the Government to discharge its responsibilities for radiation protection and safety. Essential components of the national radiation protection infrastructure are: legislation and regulations; a regulatory authority empowered to authorize and inspect regulated activities and to enforce the legislation and regulations; sufficient resources; and adequate numbers of trained personnel. A national radiation protection infrastructure includes all persons, organizations, qualified experts, systems, documents, facilities and equipment that are, in whole or in part, dedicated to protection and safety. The requirements of the BSS apply only to those who possess and use radiation sources, but the managerial issues addressed by the BSS, particularly safety culture and quality assurance, are also relevant to the overall effectiveness of the regulatory program infrastructure designed to oversee the users of radiation sources and materials.

The authors of the document will point out that the purpose of an independent peer review is to examine the various components and activities of a regulatory program, as they are established, organized and implemented by the regulatory authority, in order to assess whether they are performing their intended functions, to identify areas where adjustments might be made to optimize performance and to make recommendations for optimization of performance. The authors will point out that peer review is necessarily qualitative and therefore it is important that such reviews be conducted by persons who, collectively, have a good understanding of and experience with the main components of a regulatory program and the functions of a regulatory authority [5].

The authors will stress the importance of standardization of the review process through the use of questionnaires addressing the key areas of a regulatory program that assess the effectiveness of the program. The review should address those areas of a country's radiation protection infrastructure that bear directly on its effectiveness, even though some of those areas may not be a part of the actual program. The authors will draw a distinction between the two components. At this point in time, the authors identify the following areas as subject to peer review [5]:

1. Legislation / Regulations
2. Notification
3. Authorization (Licensing / Regulation)
4. Inspection
5. Enforcement
6. Emergency Response
7. Incident / Accident Investigation and Follow-up
8. Availability of Technical Services
9. Interagency Coordination and Cooperation
10. Staffing and Training of Regulatory Staff
11. Facilities and Equipment (including operating budget of the Regulatory Authority)
12. Information Dissemination

In addition to identifying these major areas for review, the authors will provide checklists for use in peer reviews. The checklists will serve as an aid to collect data and to better assure the completeness of the review and standardization of report development. The document will address the use of performance indicators through the use of standardized questions designed to measure the effectiveness of the regulatory programs examined. The answers to the questions are envisioned as relevant performance indicators. The authors recognize that some parts of the regulatory program will be deemed to be wholly effective and therefore satisfactory, while others will benefit from improvements of either a major or minor nature. The objective of the peer review is to identify those areas of the regulatory program warranting attention toward improvement. To facilitate this process, peer review recommendations would be prioritized with increasing importance on those aspects of the program which may cause significant problems with the regulatory program and thus require the attention of often limited resources. The proposed scheme for such prioritization is [5]:
1. ESSENTIAL, meaning that a delay in implementation represents a substantial and immediate hazard to health and/or it concerns a serious deficiency in the regulatory program.
2. IMPORTANT, meaning that delay represents a significant hazard and/or that the recommendation concerns a significant deficiency in the regulatory program.
3. ADVISED, meaning that the recommendation identifies a relatively minor deficiency in the regulatory program.
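A minimal sketch of how peer review recommendations might be recorded and sorted under this three-tier scheme follows. The data structure, field names and example findings are hypothetical and are not part of the draft TECDOC.

# Hypothetical sketch: prioritizing peer review recommendations using the
# ESSENTIAL / IMPORTANT / ADVISED scheme described above. Field names and
# the example findings are illustrative only.

from dataclasses import dataclass
from enum import IntEnum

class Priority(IntEnum):
    ESSENTIAL = 1   # substantial and immediate hazard or serious deficiency
    IMPORTANT = 2   # significant hazard or significant deficiency
    ADVISED = 3     # relatively minor deficiency

@dataclass
class Recommendation:
    review_area: str        # e.g. "Inspection", "Enforcement"
    finding: str
    priority: Priority

def prioritized(recommendations):
    """Sort recommendations so ESSENTIAL items appear first."""
    return sorted(recommendations, key=lambda r: r.priority)

if __name__ == "__main__":
    report = [
        Recommendation("Staffing and Training", "No refresher training plan", Priority.ADVISED),
        Recommendation("Inspection", "High-risk users not inspected in five years", Priority.ESSENTIAL),
        Recommendation("Emergency Response", "Response procedures never exercised", Priority.IMPORTANT),
    ]
    for rec in prioritized(report):
        print(f"{rec.priority.name:10s} {rec.review_area}: {rec.finding}")

Ordering the report this way keeps the limited resources of the reviewed authority focused first on the recommendations that address immediate hazards or serious program deficiencies.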

This document, when completed, promises to be one of the most definitive efforts to date by the IAEA to identify criteria and methods for review of regulatory programs employing the use of performance indicators. The peer review approach detailed in the draft document has a number of similarities to the Integrated Materials Performance Evaluation Program (IMPEP) currently utilized by the NRC and Agreement States in the U.S. The IMPEP methodology has been tested and shown to work, and has been generally well received by the affected regulatory authorities as a means of constructive peer review focusing upon critical program elements. While this process has been applied to generally large scale regulatory programs, its principles could also be applied to smaller programs such as those in emerging nations. The IMPEP is discussed in Section 4 of this paper. It should be noted that the document is currently under development and therefore has not undergone the customary review and approval by the IAEA prior to being published. As a result, the contents are subject to change, but the authors' current thinking is nonetheless thought-provoking and useful relative to the application of performance indicators for evaluating the effectiveness of regulatory programs designed to protect public health and safety relative to the use of radiation sources.

3.0 IMPROVING CONTROL OF GENERALLY LICENSED DEVICES

In the United States, much concern has been expressed by the metal recycling industries about unwanted radioactive materials, principally radioactive sources and devices containing radioactive materials, in metal scrap. For example, in the U.S. about two million radioactive sources and devices are used which contain radioactive materials regulated under the Atomic Energy Act, as amended. Each year, about 200 of these are reported lost, stolen or abandoned. As a result, radioactive sources have entered the public domain in an uncontrolled manner and become potential sources of radiation and contamination. The U.S. steelmaking industry, for example, has been put at risk as a result of radioactive sources becoming mixed with ferrous scrap metal intended for recycling. On 18 occasions, radioactive sources have been accidentally melted by U.S. steel mills, resulting in radiation exposures of mill workers and contamination of the mill and mill products and byproducts [6].

The NRC staff and the Commission have been concerned about general licensees maintaining control and accountability of devices for more than two decades. Throughout this period, there was substantial communication between the staff and the Commission as to the nature and scope of the problem. Several regulatory initiatives proposed by the staff in the early 1990's did not materialize, primarily due to resource constraints given the large staff effort necessary to adequately account for and establish control of the large population of devices distributed under the general license approach. In 1994, the Commission requested the staff, in part, to continue to explore the problem of accidental smeltings of licensed devices and develop recommendations for future staff actions in this area [7]. The NRC staff subsequently proposed the formation of a joint NRC-Agreement State Working Group (WG) to evaluate the problem and propose solutions. The NRC staff indicated that the formation of the WG was necessary to address the concerns from a national perspective, allow for a broad level of Agreement State input, and to reflect their experience given that several Agreement States had already implemented programs to improve the control and accountability of such devices. In June 1995, the Commission approved the staff's recommendation and provided guidance on how to proceed.

In October 1996, the NRC staff provided the Commission with the final report submitted by the WG and requested permission to proceed to develop an action plan to address the numerous recommendations provided within the report. The report contained a number of key recommendations dealing with: inadequate regulatory oversight; inadequate control over and accountability for devices by users; and improper disposal of devices and orphaned devices. Throughout this process the NRC staff and the WG identified a number of issues to be addressed by the WG, including: NRC and Agreement State compatibility; cost and fee considerations; radiation exposure savings; device design; changes affecting all devices versus newly acquired devices; device disposal; device identification; devices requiring increased oversight; generally licensed versus specifically licensed devices; identification of current users and devices; and imposing restrictions on portable devices and storage of devices.

During November 1997, the NRC staff provided the Commission with a recommendation to develop and implement a registration program for certain 10 CFR 31.5 general licensees. The recommended program included rulemaking to establish a regulatory basis for a registration program, an automated registration system and related policy and guidance documents, implementation of the registration program and follow-up activities. Subsequently, in April 1998, the Commission directed the staff to implement a registration and follow-up program for the generally licensed sources/devices identified by the WG, apply fees to these general licensees, and incorporate requirements for permanent labeling of sources/devices [8].

Implementation of the registration program will necessarily involve the development of a fully automated registration system, which will be a large scale and significant information technology (IT) project. Within the U.S., the Clinger-Cohen Act of 1996 required Federal agencies to develop a Capital Planning and Investment Control Process (CPIC) for IT investments and to impose performance and results-based management for such projects. The CPIC process places a great emphasis on the development and explicit clarification of the business needs for the IT system under development and requires that the project only move forward once the business need is clearly established and justified. Once the business case for a system has been developed and the project is approved, development of the IT system must be performed in accordance with the System Development Life Cycle (SDLC) methodology developed by the NRC Office of the Chief Information Officer (OCIO) [9].

Almost all regulatory authority managers throughout the world are required to conduct adequate programs designed to protect public health and safety, typically with inadequate or marginal resources. In this era of business process reengineering, organizational cost cutting and trimming the size of governmental functions (particularly regulatory functions within the US), these managers face new challenges. Increasingly, they are being required to manage their regulatory programs as executives responsible for a business. Regulated entities are concerned about the fees assessed to them by regulatory bodies and are prone to voice their concerns and seek relief from the legislative bodies responsible for the oversight of regulatory programs, including those dealing with radiation sources. None of this is inappropriate and in fact, it is a part of the democratic representative process, but it does require regulatory managers to reassess how they do business. For example, the type of justification required for IT systems under the CPIC process addressed above could be viewed as requiring managers to consider performance indicators on the front end of such projects. In developing the business case for these systems, managers must fully evaluate and understand their needs, assess alternatives and make their case through a requirements analysis. Once the project is approved, they must proceed with development of the project in a certain manner which carries built-in accountability for successful completion of the project on schedule and within cost. Excessive cost overruns and delays will no longer be tolerated; in fact, variances greater than 5% require certain actions and cost variances in excess of 10% require reporting to the US Congress [9].
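The variance thresholds described above lend themselves to a simple automated check. The sketch below uses the 5% (corrective action) and 10% (congressional reporting) figures from the text; the function name, project figures and the exact action labels are hypothetical.

# Hypothetical sketch: flagging cost variance on an IT project against the
# 5% and 10% thresholds noted above. Dollar figures and action wording are
# illustrative only.

def cost_variance_action(planned_cost, actual_cost):
    """Return the oversight action implied by the cost variance."""
    variance = abs(actual_cost - planned_cost) / planned_cost
    if variance > 0.10:
        return f"variance {variance:.1%}: report to the US Congress"
    if variance > 0.05:
        return f"variance {variance:.1%}: corrective action required"
    return f"variance {variance:.1%}: within tolerance"

if __name__ == "__main__":
    print(cost_variance_action(1_000_000, 1_030_000))  # within tolerance
    print(cost_variance_action(1_000_000, 1_070_000))  # corrective action required
    print(cost_variance_action(1_000_000, 1_150_000))  # report to the US Congress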

The extent to which regulatory authority managers in other countries will be required to conduct similar processes as required by the Clinger-Cohen Act within the U.S. remains to be seen. Certainly, they can expect to find the management of their programs under increasing scrutiny, as governments seek ways to manage scarce resources and increasingly expect their regulatory managers not only to address technical issues designed to protect the public from radiation sources but also to manage their programs in a cost effective and efficient fashion. As a result, regulatory managers will need to rethink the concept of "performance indicators" in the sense that they not only apply to the outcome of their regulatory efforts but also to how they manage their programs in achieving those desired and measurable outcomes.

The general license registration program currently under development by the NRC will ultimately prove extremely useful in improving the accountability of generally licensed devices. Similarly, the fully automated registration system will prove to be an effective means for processing the registration of generally licensed devices and for facilitating the necessary follow-up regulatory actions such as inspections. The challenging issue for the regulatory managers involved with the project will be the intense application of performance indicators up front as they justify and manage the development of the IT system to support the regulatory process directed by the Commission. Increasingly, regulatory authority managers will need to be cognizant of this function as they carry out their responsibilities and interface with their agency directors, governmental leaders and legislative bodies.

4.0 INTEGRATED MATERIALS PERFORMANCE EVALUATION PROGRAM

The Integrated Materials Performance Evaluation Program was developed, in part, in response to an April 1993 report prepared by the General Accounting Office (GAO) entitled "Better Criteria and Data Would Help Ensure Safety of Nuclear Material." Under the approaches in use at that time, GAO found that it would be difficult to ensure that a uniform level of protection of public health and safety was being provided nationwide within the U.S. GAO recommended that "the Chairman, NRC, establish (1) common performance indicators in order to obtain comparable information to evaluate the effectiveness of both [the Agreement States and NRC regional materials] programs in meeting NRC's goal..." Later that year, the NRC staff presented to the Commission its plan and schedule to develop a program employing such indicators [10].

In 1994, following substantial interaction with the Agreement States, the NRC staff documented the IMPEP program in the form of a proposed management directive using common performance indicators in review of NRC regional and Agreement State materials programs. The five draft indicators were [10]:

1. Status of Materials Inspection Program
2. Technical Staffing and Training
3. Technical Quality of Licensing Actions
4. Technical Quality of Inspections
5. Response to Incidents and Allegations
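To make the structure of such a review concrete, the sketch below records a rating against each of the five common indicators and derives an overall finding. The roll-up rule used here (any "Unsatisfactory" indicator makes the overall finding "Unsatisfactory") is a simplifying assumption for illustration, not the Management Review Board's actual assessment procedure.

# Hypothetical sketch: rolling up per-indicator IMPEP ratings into a single
# overall finding. The roll-up rule is an assumption made for illustration.

COMMON_INDICATORS = [
    "Status of Materials Inspection Program",
    "Technical Staffing and Training",
    "Technical Quality of Licensing Actions",
    "Technical Quality of Inspections",
    "Response to Incidents and Allegations",
]

def overall_finding(ratings):
    """Derive a single overall finding from per-indicator ratings."""
    missing = [name for name in COMMON_INDICATORS if name not in ratings]
    if missing:
        raise ValueError(f"missing ratings for: {missing}")
    if "Unsatisfactory" in ratings.values():
        return "Unsatisfactory"
    if any(r.startswith("Satisfactory, with") for r in ratings.values()):
        return "Satisfactory, with Recommendations for Improvement"
    return "Satisfactory"

if __name__ == "__main__":
    review = {name: "Satisfactory" for name in COMMON_INDICATORS}
    review["Technical Staffing and Training"] = (
        "Satisfactory, with Recommendations for Improvement")
    print(overall_finding(review))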

The staff also informed the Commission of its plan to implement the directive on a pilot basis in the NRC regions and in the Agreement States. In March 1994, the Commission approved the staff's plan, noting that the information gathered by the staff "should be used in reformulating the draft common performance indicators in response to the GAO recommendations." The Commission also directed that under the pilot program, the review of Agreement States should be done under the existing 1992 Policy Statement and procedures for determining adequacy and compatibility. The collection of information on common performance indicators should be done in addition to the normal review process for Agreement State programs [10].

During 1995, the NRC staff evaluated the pilot program and concluded that the IMPEP approach was viable. The common performance indicators were considered effective in evaluating a materials licensing and inspection program and were sufficiently broad to be applicable for both NRC regional and Agreement State reviews. The use of an interdisciplinary team from the Office of Nuclear Material Safety and Safeguards (NMSS), Office of State Programs (OSP) and the Regions under the IMPEP process was considered to be particularly effective. Other aspects of the pilot program, including the issuance of a draft report for comment, the use of a five member Management Review Board (MRB), and Regional and Agreement State management participation in MRB meetings, provided greater openness, participation, and management involvement in the review process [10]. In addition to these key points, the NRC staff also recommended implementation of IMPEP on an interim basis in the Regions and Agreement States using the five common performance indicators listed above, that Agreement State representatives be a part of the review teams, and that a non-voting Agreement State liaison to the MRB be appointed. Finally, the staff recommended that, as appropriate, certain program areas that are not part of both NRC regional and Agreement State programs (such as low level radioactive waste disposal licensing or sealed source and device approvals) would be evaluated using non-common indicators. In June 1995, the Commission approved the staff's recommendations [10].

The NRC staff issued Management Directive 5.6, "Integrated Materials Performance Evaluation Program (IMPEP)," during September 1995, outlining the responsibilities, authorities, review process and review criteria of the program. All aspects of the staff's proposal for involving Agreement States in the review process and participation on the MRB were satisfactorily brought to closure, and an initial cadre of 36 IMPEP team members was established. During FY 1996, the cadre conducted nine reviews under IMPEP. These reviews included two NRC regions and seven Agreement States. The performance of both of the NRC regions was found to be "Satisfactory" with respect to all common and non-common performance indicators and therefore adequate to protect public health and safety. In five of the Agreement State reviews, the programs were found adequate and compatible. The other two Agreement State program reviews resulted in findings ranging from "Satisfactory" to "Satisfactory, with Recommendations for Improvement" to "Unsatisfactory." These findings have resulted in the development of appropriate corrective action plans by the affected State regulatory entities and/or continuing scrutiny and review by the NRC [10]. The IMPEP process continues, with the majority of findings being "Satisfactory." In those cases of lesser findings, corrective actions or other appropriate responses have been implemented.

The Management Directive approved on September 12, 1995, was subsequently revised on November 25, 1997, to include NRC's final Policy Statement on Adequacy and Compatibility of Agreement State Programs, approved by the Commission on June 30, 1997. The directive sets forth a description of the program, including authority and responsibilities for the various aspects of the program. Part II of Handbook 5.6, which accompanies the Directive, describes the performance indicators (both common and non-common). Part III addresses the evaluation criteria, including detailed descriptions of what constitutes "Satisfactory," "Satisfactory With Recommendations for Improvement," "Unsatisfactory," and Category N, which addresses special conditions warranting a withholding of a rating. Part IV of the handbook addresses programmatic assessment. This section points out that the MRB will make the overall assessment of each NRC region's or Agreement State's program on the basis of the proposed final report and recommendations prepared by the team that conducted the review of that region or State, including any unique circumstances. For NRC regions, the MRB will only assess the adequacy of the program to protect public health and safety, but findings for Agreement States will also address compatibility with NRC regulations [11].

The NRC regions and Agreement States are currently being evaluated on their ability to conduct effective materials safety programs using both the common and non-common indicators. The evaluation process utilizes an experienced cadre of investigators and employs the evaluation criteria set forth in Handbook 5.6, which accompanies the same Management Directive. The evaluation criteria do not represent an exhaustive list of the factors that may be relevant in determining performance, as there may be additional considerations about a particular aspect of the regulatory program which warrants atypical consideration [11]. The evaluation criteria are designed to be objective and measurable. For example, under the category of Status of Materials Inspection Program, which is designed to ensure that periodic inspections of licensed operations are conducted, in order to achieve a rating of "Satisfactory" an NRC region must satisfactorily address the following [11]:

1. Core licensees (those with inspection frequencies of 3 years or less) are inspected at regular intervals in accordance with frequencies prescribed in the NRC manual chapter.

2. Deviations from these schedules are normally coordinated between working staff and management. Deviations consider the risk of licensee operation, past licensee performance and the need to temporarily defer the inspection to address more urgent or critical priorities.
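As a worked example of the first criterion, the short sketch below flags core licensees whose last inspection falls outside the prescribed frequency. The licensee records and frequencies are illustrative only; actual frequencies come from the NRC inspection manual chapter.

# Hypothetical sketch: flagging core licensees whose inspections are overdue
# relative to a prescribed frequency (in years). All records are illustrative.

from datetime import date

def overdue_core_licensees(licensees, as_of=None):
    """Return (name, years since last inspection) for licensees whose last
    inspection is older than their prescribed inspection frequency."""
    as_of = as_of or date.today()
    overdue = []
    for name, last_inspection, frequency_years in licensees:
        elapsed_years = (as_of - last_inspection).days / 365.25
        if elapsed_years > frequency_years:
            overdue.append((name, round(elapsed_years, 1)))
    return overdue

if __name__ == "__main__":
    core_licensees = [
        ("Radiography Service A", date(1994, 3, 1), 1),
        ("Medical Center B", date(1996, 6, 15), 3),
        ("Gauge User C", date(1993, 1, 10), 3),
    ]
    for name, years in overdue_core_licensees(core_licensees, as_of=date(1998, 1, 1)):
        print(f"{name}: last inspected {years} years ago")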