ML23304A006

NUREG-1021, Revision 12, Effectiveness Review Interim Report September 2023
Person / Time
Issue date: 09/30/2023
From: Maurin Scheetz
NRC/NRR/DRO/IOLB
To:
References
NUREG-1021, Rev 12



NUREG-1021 Revision 12 Effectiveness Review Interim Report, September 2023

Overview

In May 2022, the operator licensing program office (NRR/DRO/IOLB) initiated an effectiveness review of the major changes introduced in Revision 12 of NUREG-1021, Operator Licensing Examination Standards for Power Reactors, which went into effect on March 17, 2022. The original effectiveness review plan is available in the Agencywide Documents Access and Management System (ADAMS) under Accession No. ML22138A409. The plan was subsequently updated in August 2023, as a result of the Differing Professional Opinion (DPO) process and DPO-2021-002 (ML23059A216), to include additional tasks for the team's review of the generic fundamentals questions. The updated effectiveness review plan is available in ADAMS under Accession No. ML23236A119.

The effectiveness review focuses on the impact of the following changes in NUREG-1021:

1. The addition of generic fundamentals questions on the written examination and the discontinuation of the standalone NRC Generic Fundamentals examination.
2. The revised critical task methodology.
3. The identification and documentation of critical and significant performance deficiencies on the simulator operating test.
4. Licensee implementation and use of Revision 2 of ACAD-10-001, Guidelines for Initial Training and Qualification of Licensed Operators, as the standard set of eligibility requirements for accredited training programs, and the allowances made in Revision 2 to accept related science degrees for SRO-instant applicants and to reduce on-shift time under instruction.
5. The re-structuring of the NUREG into topic-based sections.

The purpose of the effectiveness review is to monitor the use of new and revised instructions and guidance in NUREG-1021, Revision 12, to determine if additional actions, such as training or clarifications, are needed to improve the implementation of the major changes made in the revision. The review also checks for any unexpected outcomes of the changes. Finally, the review team will assess the overall impact of the changes and determine if the results and outcomes justified the resources invested in making the revision.

This is an interim report to share observations from the review following the first year of Revision 12 examinations. This report primarily focuses on preliminary findings and observations from 39 initial licensing examinations that were developed, administered, and graded using Revision 12 from April 2022 through April 2023.

Activities

To perform the effectiveness review, the team collected data for each of the eight tasks according to the effectiveness review plan. The data collection included qualitative and quantitative information from the development, administration, and grading phases of initial licensing examinations, as well as operator licensing program office activities such as regional reports on interactions (ROIs), examination audits, and requests for administrative reviews. For their assigned examinations, chief examiners provided the information at the end of the examination, after licensing decisions were made, by responding to a post-examination questionnaire. The team also collected general feedback about the revision from qualified examiners during the biweekly operator licensing conference calls and through a formal survey distributed to all qualified NRC examiners in July 2023.

The team increased the scope of the review plan during the first year of the review to include information about how facility licensees were implementing allowances in the National Academy for Nuclear Training's ACAD-10-001, Guidelines for Initial Training and Qualification of Licensed Operators (ML21144A141), to modify requirements for time spent on shift as a trainee (also known as Extra Person on Shift, or EPOS, hours). In January 2023, this was added to task number seven in the effectiveness review plan. In August 2023, elements of the task for the review of the generic fundamentals questions were revised as a result of the NRC's Differing Professional Opinion process; the results of these changes will be addressed in a future report.

Assessment Areas and Preliminary Findings

Generic Fundamentals

Task 1:

o Review the integration of generic fundamentals topics into the site-specific examination and verify that the instructions in NUREG-1021 are being used as intended; identify any areas for improving clarity in the instructions/guidance related to these "new" questions.

Task 2:

o Verify that generic fundamental knowledge is being maintained through initial and continuing training programs.

Data Collected:

Title 10 of the Code of Federal Regulations (10 CFR) 55.41, Written Examination: Operators, and 10 CFR 55.43, Written Examination: Senior Operators, require that the written operator licensing examinations for reactor operators (ROs) and senior reactor operators (SROs) include questions concerning various mechanical components, principles of heat transfer, thermodynamics, and fluid mechanics. These regulations also require that the written examinations address fundamentals of reactor theory, including the fission process, neutron multiplication, source effects, control rod effects, criticality indications, reactivity coefficients, and poison effects. Written examination questions that test knowledge of these concepts are referred to as generic fundamentals (GF) questions. GF questions originate from knowledge and ability (K/A) statements in Section 5, Components, and Section 6, Theory, of the vendor- or technology-specific K/A catalog (herein referred to as the applicable K/A catalog).

To assess the integration of GF questions on the site-specific written examination and the adequacy of the instructions in NUREG-1021 for sampling generic fundamentals knowledge statements from the applicable K/A catalog, the team reviewed written examination outlines and tracked the number of generic fundamentals questions for all the examinations administered during the first year of NUREG-1021 Revision 12. According to the instructions in NUREG-1021, GF K/As are sampled in Tier 2 and Tier 4. Every examination must have six generic fundamentals questions in Tier 4 (all from Section 6 of the applicable K/A catalog), and, because of the random sampling method, there is a likelihood that a Components K/A is selected for one or more of the Tier 2 systems to test a concept about a component relevant to the selected system. In other words, some written examinations will have more than six GF questions if the random sampling process results in testing a component K/A in Tier 2.
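The relationship between the fixed Tier 4 questions and the variable Tier 2 component draws can be sketched as a quick simulation. This is an illustrative sketch only: the number of Tier 2 systems and the per-system probability of drawing a Section 5 (Components) K/A are placeholder assumptions, not figures from NUREG-1021.

```python
import random

def count_gf_questions(num_tier2_systems, p_component, rng):
    """Total GF questions on one outline: 6 fixed Tier 4 questions plus
    one question per Tier 2 system whose generic K/A draw lands on a
    Section 5 (Components) K/A.  p_component is a placeholder probability."""
    tier2_hits = sum(1 for _ in range(num_tier2_systems) if rng.random() < p_component)
    return 6 + tier2_hits

rng = random.Random(0)
counts = [count_gf_questions(num_tier2_systems=8, p_component=0.15, rng=rng)
          for _ in range(10_000)]
avg = sum(counts) / len(counts)
# Every simulated outline has at least the 6 fixed Tier 4 GF questions;
# component draws in Tier 2 push the average above 6.
```

With these placeholder inputs, the simulated average lands a bit above seven GF questions per examination, consistent with the first-year average reported below.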

To monitor the quality of the GF questions, the team reviewed results from six examinations chosen for an in-depth review via the operator licensing program office's program for the review of initial licensing examinations (also known as the exam audit program). The exam audit program is conducted in accordance with Operator Licensing Manual Chapter (OLMC)-320 (ML22216A198). For the six examinations that were audited, IOLB examiners conducted a detailed review of each generic fundamentals question against the criteria in NUREG-1021, Form 2.3-5, Written Examination Review Worksheet, and observed how chief examiners documented their review of proposed questions against the same criteria.

To verify that GF knowledge is being maintained through initial training programs, the team reviewed individual applicant performance on the GF questions on the first 39 Revision 12 initial licensing examinations. The team assumed that performance on the set of GF questions can be used to draw conclusions about an individual applicant's level of GF knowledge. Because 2022 was a transition year (the last NRC GFE was administered in September 2021), the team also tracked whether the applicants had taken a previous NRC GFE. Here the team assumed that studying for and taking the NRC GFE would likely skew applicant GF performance.

To verify that GF knowledge is being maintained through continuing training programs, the team reviewed trends from the NRC's Annual Effectiveness Review of Training in the Industry reports (ML22154A474 and ML23151A696) and periodic operating experience communications from the Operating Experience Branch to determine if any inspection findings or industry events could be attributed to a deficiency in generic fundamentals knowledge. The NRC's Annual Effectiveness Review of Training in the Industry report includes information about inspection findings associated with the cross-cutting aspect H.9 for training, reported industry events, and NRC observations of the training program accreditation process.

Interim Results and Observations:

During the first few weeks of this review, the team discovered an anomaly after reviewing several written examination outlines: there were almost no instances of component-related GF questions in Tier 2. The team found that Step B.5.f of the instructions in examination standard (ES)-4.1, which states, "For the generic K/A category in Tier 2 on the RO outline, to select topics from Section 2 and Section 5, Components, of the applicable K/A catalog that are relatable/relevant to the selected system," was not being followed in the development of these outlines. The team determined that the sampling algorithm for the sample plan generator tool was rarely selecting a component K/A because of the large number of Section 2 generic K/As. The team contacted the owner of the sample plan generator and was able to get the code fixed immediately. The generator was changed to alternate between Section 2 and Section 5 for all subsequent Tier 2 generic K/As.
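The corrective change to the generator can be illustrated with a short sketch. This is a hypothetical reconstruction of the alternating logic, not the actual tool's code; the K/A identifiers and pool sizes are made-up examples.

```python
import random

def pick_tier2_generic_kas(num_systems, section2_kas, section5_kas, seed=None):
    """Select one generic K/A per Tier 2 system, alternating between
    Section 2 (generic) and Section 5 (Components) of the K/A catalog.

    Hypothetical sketch: a uniform draw from the combined pool would let
    the much larger Section 2 pool crowd out Section 5 component K/As;
    alternating sections guarantees component K/As appear on the outline.
    """
    rng = random.Random(seed)
    selections = []
    for i in range(num_systems):
        # Alternate: even positions sample Section 2, odd positions Section 5.
        pool = section2_kas if i % 2 == 0 else section5_kas
        selections.append(rng.choice(pool))
    return selections

# Example pools: 50 Section 2 K/As versus 5 Section 5 K/As. A uniform draw
# from the combined pool would select a component K/A only ~9% of the time.
sec2 = [f"G2.{n}" for n in range(50)]
sec5 = [f"K5.{n}" for n in range(5)]
chosen = pick_tier2_generic_kas(4, sec2, sec5, seed=1)
```

Under this scheme, every other Tier 2 generic K/A comes from Section 5, regardless of how lopsided the pool sizes are.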

The team also discussed the issue with the chief examiners assigned to those exams and during an operator licensing biweekly teleconference. The team determined that the error in the sample plan generator tool did not impact the validity of those first examinations, since the random sampling methodology could also result in an absence of component K/As in Tier 2 of the written examination.

For the first year of NUREG-1021 Revision 12 examinations, there were, on average, seven questions per examination testing generic fundamentals K/As (i.e., from Sections 5 and 6 of the applicable K/A catalog). Apart from the anomaly mentioned above, the team finds that the sampling instructions are adequate and are being followed properly.

NUREG-1021 contains instructions to review the written examination for psychometric errors that are common to multiple-choice written examination questions. All GF questions were reviewed on six examinations as part of the operator licensing program office's detailed review of a sample of initial licensing examinations (the exam audit program). Psychometric deficiencies were identified in all six examination audits performed during the period of this report. In two audits, a GF question was marked as either new or modified but did not meet the criteria for a new or modified question; it should have been counted as a bank question. One audit found multiple GF questions that were not modified to make them plant specific; written examination questions are expected to be operationally valid, with plant-specific language used throughout the question. Another audit found an issue with a two-part question that could be answered solely by knowing the answer to the first part of the question, because the first-part answer choices were unique. Another audit had a question stem that cued the correct answer choice. Two other audits had GF questions with more than one correct answer choice, or with a partially correct answer choice in addition to the correct answer choice. One audit had a GF question with a low level of difficulty (i.e., LOD 1), which is not allowed by NUREG-1021. Despite these psychometric errors, these examinations were still found to be valid.

There were no informal staff reviews of GF questions related to an applicant's written examination failure.

To assess level of GF knowledge, the team reviewed the performance of 430 applicants on their given set of GF questions from the first 39 Revision 12 initial licensing examinations during the period of April 2022 through April 2023. Figure 1 below shows the spread of applicant performance on GF questions.

Figure 1: Histogram of applicant performance on GF questions (number of applicants by percent of GF questions answered correctly)

Average performance on the generic fundamentals questions was 88% correct, with a standard deviation of 14%. It is important to note that of the 430 applicants, 377 (approximately 88%) took the NRC GFE and 53 (approximately 12%) did not. In other words, the majority of the applicants took the NRC GFE before starting their initial license training class.

The team looked for any difference in average performance between applicants that took the NRC GFE and applicants that did not, and found no significant differences (see Figure 2); however, the sample size for applicants that did not take an NRC GFE (53 of 430 applicants) is very small for this comparison. The average percentage of correct GF questions for applicants that took a previous NRC GFE was approximately 89%, with a standard deviation of 13%, compared to an average of 86%, with a standard deviation of 15%, for applicants that did not take an NRC GFE.
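The team's qualitative finding can be sanity-checked with a standard two-sample (Welch's) t statistic computed from the reported summary statistics alone. This is an illustrative sketch using only the means, standard deviations, and group sizes quoted above; the underlying per-applicant scores were not available for this calculation.

```python
import math

def welch_t(mean1, sd1, n1, mean2, sd2, n2):
    """Welch's t statistic for comparing two group means
    without assuming equal variances."""
    se = math.sqrt(sd1**2 / n1 + sd2**2 / n2)
    return (mean1 - mean2) / se

# Reported figures: took NRC GFE: 89% +/- 13% (n=377);
# did not take NRC GFE: 86% +/- 15% (n=53).
t = welch_t(0.89, 0.13, 377, 0.86, 0.15, 53)
# |t| comes out well below ~2 (the usual 5% critical value), consistent
# with the team's finding of no significant difference between the groups.
```

The small size of the no-GFE group dominates the standard error here, which is why even a three-point difference in averages is not statistically distinguishable.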

Figure 2: Average performance on GF questions for applicants that took the NRC GFE (89%) and applicants that did not (86%)

Using the traditional NRC GFE cut score of 80%, the team looked at the number of applicants that answered less than 80% of their set of GF questions correctly. Out of 430 applicants, 87 (approximately 20%) answered less than 80% of their GF questions correctly. Table 1, below, shows the breakdown by license level for these 87 applicants.

Table 1: Applicants answering less than 80% of their GF questions correctly, by license level

Level         Number of Applicants Scoring <80%
RO            43
SRO Instant   30
SRO Upgrade   14
Total         87 out of 430

Out of 430 applicants, 12 (approximately 3%) answered half or fewer of their GF questions correctly. Table 2, below, shows the breakdown by license level for these 12 applicants.

Table 2: Applicants answering 50% or less of their GF questions correctly, by license level

Level         Number of Applicants Scoring 50% or Lower
RO            7
SRO Instant   1
SRO Upgrade   4
Total         12 out of 430

The team observed one occurrence of an applicant answering only one GF question correctly; this applicant answered 1 of 6 GF questions correctly. There were no observations of applicants answering all GF questions incorrectly.

Figure 3 displays average performance on GF questions by license level for the 430 applicants that took a Revision 12 examination this past year. In general, SRO applicants performed slightly better on the set of GF questions.

Figure 3: Average performance on GF questions by license level (RO, SRO-I, SRO-U)

Historically, applicant performance on the NRC GFE, from 1988 to 2021, averaged 91%. Table 3 below provides cumulative data for a variety of NRC GFE categories from September 1988 through September 2021. This average is calculated from the test scores of 202 exams taken by over 14,000 applicants. It is important to note that the NRC GFE was a fifty-question examination; an average of 91% equates to about 5 of 50 questions wrong. By contrast, an applicant who misses one GF question on a NUREG-1021 Revision 12 examination would likely be at about 90% for GF question performance.

Table 3: Cumulative NRC GFE data (historical)

Facility Type   No. of Exams   No. of Examinees   Mean Score (%)
BWR             101            5245               90.5
PWR             101            9357               91.3
Total           202            14602              91.0

For comparison purposes, Table 4 below shows the average percent of GF questions answered correctly on the 39 NUREG-1021 Revision 12 initial licensing examinations for 430 applicants.

Table 4: Average percentage of GF questions answered correctly (NUREG-1021 Revision 12 examinations)

Facility Type   No. of Exams   No. of Examinees   Mean Score (%)
Multiple        39             430                88.5

Because the two testing methods are very different (a 50-question examination versus a set of 6-9 questions), the team does not believe that conclusions can be drawn by comparing the average performance of the two tests. However, performance on the GF questions during the first year of this change can be used to set a baseline of GF performance for applicants that took the NRC GFE and are thus considered to have a typical level of generic fundamentals knowledge. This information can be used during the effectiveness review to analyze whether generic fundamentals knowledge is being maintained over time.
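One concrete reason the two formats resist direct comparison is score granularity: with only 6 to 9 GF questions per applicant, a single miss moves the score far more than one miss on a 50-question examination. A quick arithmetic illustration (the question counts are from the text above):

```python
# Score after missing exactly one question, by test length.
# 6-9 is the per-applicant GF set size on Revision 12 examinations;
# 50 is the length of the discontinued stand-alone NRC GFE.
one_miss = {n: (n - 1) / n for n in (6, 7, 9, 50)}
for n, score in one_miss.items():
    print(f"{n:2d} questions: one miss -> {score:.1%}")
```

A single miss costs 11-17 percentage points on a 6-9 question set but only 2 points on the 50-question GFE, so small samples of questions naturally produce a much wider spread of individual scores.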

The final NRC GFE was administered in September 2021; as a result, the majority of applicants that took the written examination during the period analyzed for this interim report took an NRC GFE prior to taking their NUREG-1021 Revision 12 initial licensing examination. The first instances of applicants who had not taken the NRC GFE occurred on examinations administered in December 2022. However, even after the December 2022 examinations, the majority of applicants during the period of this interim report had taken the NRC GFE. Overall, approximately 12% of the applicants (53 of 430) did not take the NRC GFE.

The team identified one event, described in an inspection report (ML23122A168), that involved generic fundamentals knowledge for reactivity. During a reactor startup on December 21, 2022, control room operators exceeded a plant procedure limit for startup rate due to excessive control rod withdrawal. The licensee determined the direct cause to be inadequate monitoring of diverse indications and the failure to establish critical parameters to minimize the significance.

The licensee's corrective action referenced use of its systematic approach to training process to address the use of diverse indications and critical parameters and to improve oversight intervention. As a result, the effectiveness review team associates aspects of this event with the following knowledge statements for criticality found in NUREG-1122, Knowledge and Abilities Catalog for Nuclear Power Plant Operators: Pressurized Water Reactors, Section 6.1, Reactor Theory: 192008 Reactor Operational Physics: K1.01, K1.05, and K1.08 (ML20260H083). Since the operators involved in this event were licensed using examination standards that preceded NUREG-1021 Revision 12, this is being captured by the review team as a data point for Task 2 in assessing the use of licensed operator continuing training to maintain operator knowledge of GF topics.

Using the summary of findings related to training shared in the Annual Effectiveness Review of Training in the Industry reports, the team reviewed each finding to determine if any could be attributed to GF knowledge and found none. This is summarized in Table 5 below.

Table 5: Reactor Oversight Program findings related to training in 2021 and 2022

Year   Total Number of Findings   Findings Related to H.9   Percent Related to H.9   H.9 Findings Attributable to GF Knowledge
2022   425                        6                         1.4%                     None
2021   278                        7                         2.5%                     None

Recommendations

There were some issues identified in the development and review of site-specific GF questions; however, the psychometric flaws identified in the GF questions are on par with those identified during audits of non-GF questions. Because only a small set of questions was reviewed in detail for this interim report, the team should continue to monitor for psychometric flaws in GF questions via the examination audit program to determine whether any themes emerge once there is a larger sample size.

Because the team found some GF questions that were included as-is from the Generic Fundamentals Examination question bank, without modifications to make them more plant or site specific, the team recommends additional communication of this issue with examiner-qualified staff and adding more information about how to properly modify GFE bank questions in the next revision of NUREG-1021. Examples of how to make GF questions appropriate for use on the site-specific written examination are available on the 1021 Toolbox site as a post on Nuclepedia and in the OLPF titled Gen 61.

Because there have been no informal staff reviews of GF questions during the interim period, no conclusions can yet be drawn. The team should continue to monitor for any informal staff reviews that involve GF questions.

Since most applicants took the NRC GFE during the period of this report, the average performance results on GF questions do not show how applicants are performing on the GF questions as a result of removing the requirement for the NRC GFE. The results are not yet usable for determining whether generic fundamentals knowledge is being maintained through initial training programs. The team should consider using the GF question performance during the period of this report to establish a baseline of performance on the new set of GF questions, for use in assessing how generic fundamentals knowledge changes as a result of the removal of the NRC GFE requirement going forward. In accordance with the effectiveness review plan, the team will continue to monitor operating experience and inspection results for any indication that operator knowledge of generic fundamentals is not being maintained.

Additionally, as a result of DPO-2021-002 (ML23059A216), the NRR operator licensing staff was directed to conduct additional activities related to the changes in NUREG-1021 Revision 12 that eliminated the NRC GFE and integrated GF questions onto the site-specific examination.

First, the staff was directed to evaluate NRC inspectors' ability to monitor and disposition potential generic fundamentals knowledge weaknesses in licensed operators, should they occur. The staff determined that inspectors, particularly resident inspectors, needed training and updated guidance so that the NRC could adequately monitor licensed operator performance.

The staff provided training to regional inspectors and all operator licensing examiners during the Spring 2023 counterpart meetings. The training informed inspectors about the changes in fundamentals testing for operators, how to identify potential knowledge deficiencies in licensed operators, and how to disposition potential issues. The staff is in the process of updating Inspection Procedure 71111.11, Licensed Operator Requalification Program and Licensed Operator Performance (ML21257A202). The training and updated guidance are in accordance with the Reactor Oversight Process and the Memorandum of Agreement Between the Institute of Nuclear Power Operations and the U.S. Nuclear Regulatory Commission (ML23026A093) and do not add any new processes or revise any inspection requirements. The effectiveness review team should continue to monitor inspection results and operating experience trends for any indication of deficiency in GF knowledge.

Next, the NRR operator licensing staff was directed to revise the effectiveness review plan so that the task that tracks applicant performance on the new Tier 2 and Tier 4 generic fundamentals questions includes the following enhancements:

a. The task has a sufficient duration to ensure that meaningful data is collected for generic fundamentals questions from applicants who had NOT previously taken a stand-alone NRC Generic Fundamentals Examination.
b. The task includes guidance to review any questions that may indirectly/implicitly test generic fundamentals and were incorrectly answered, and to include that data in the effectiveness review.
c. The task contains specific thresholds for performance that will trigger revisiting the sample plan distribution contained in NUREG-1021, Revision 12.

The team has updated the NUREG-1021 Effectiveness Review plan (ML23236A119) to include these changes and is already actively collecting this information.

Revised Critical Task Methodology

Task 3:

o Determine if the instructions for identifying critical tasks are clear and verify proper and consistent application of the critical task methodology; identify any areas for improving clarity in the instructions/guidance related to critical tasks.

Task 4:

o Verify that the critical task methodology is adequate for AP1000 simulator operating tests.

Data Collected

To determine if the instructions for identifying critical tasks are clear and to verify proper and consistent application of the critical task methodology, the team reviewed results from examinations chosen for an in-depth review as part of the operator licensing program office's review of initial licensing examinations (also known as the exam audit program). The exam audit program is conducted in accordance with OLMC-320, IOLB Review of Initial Licensing Examinations (ML22216A198); six examinations were audited during the period of this interim report. For these six examinations, the staff (1) conducted a detailed review of each critical task against the critical task methodology in ES-3.3, General Testing Guidelines for Dynamic Simulator Scenarios; (2) tracked the number of times that alternative boundary conditions were used; and (3) analyzed whether the boundary conditions were arbitrary.

Data was not collected for Task 4 for the period of this report because the first AP1000 examination using NUREG-1021 Revision 12 is scheduled for May 2024.

Interim Results and Observations

During the transition period after NUREG-1021 Revision 12 was published, but before it was used for grading Revision 12 examinations, the program office received several questions about the new and revised instructions for critical task identification, the elements of critical tasks, and grading performance deficiencies. As a result, the program office promulgated additional clarification of the critical task methodology in two separate reports on interactions to answer these questions. The program office also published these clarifications in the Operator Licensing Program Feedback database (ML23101A094) as OLPFs 3.3.1 through 3.3.5. Finally, the program office directed the team to add a subtask to Task No. 3 in the NUREG-1021 Revision 12 Effectiveness Review plan to monitor for the use of arbitrary alternative boundary conditions for critical tasks, due to the concern identified during the report on interaction process.

Critical Task Methodology

Overall, the team found that, within the scope of the exam audits, critical tasks met the critical task methodology in NUREG-1021 Revision 12. There was one occurrence of a task in a scenario guide that met the critical task criteria but was not marked in the scenario guide as a critical task. The task was to isolate a faulted steam generator. When asked why the task was not designated as critical, the examination author explained that the action did not appear to meet the critical task criteria because the related event did not challenge a safety function and the action to isolate a faulted generator is a checklist item on the balance-of-plant operator's procedure for the event. However, the task does meet the critical task criteria because it is an Emergency Operating Procedure-directed action essential to the event's overall mitigative strategy; it should have been marked as critical. There were no performance deficiencies associated with this task during administration of the operating test; thus, this is an administrative issue only.

In another examination, the measurable performance standard for a critical task had two standards: the task was to be completed prior to an orange or red path (emphasis added). Allowing two measurable standards introduces ambiguity in how to grade applicant performance, and this practice should be avoided.

Alternate Boundary Conditions

The team collected the following information, shown in Table 6, on the use of alternate boundary conditions (ABCs) from the six examinations reviewed by the audit program:

Table 6: Alternate Boundary Conditions Found During Examination Audits

Exam No.   Number of CTs with ABCs   Number of Arbitrary ABCs
1          3                         0
2          2                         1
3*         4                         4
4          1                         0
5          0                         Not applicable
6          2                         0

* This examination was developed before additional guidance and training about proper use of alternate boundary conditions was provided to NRC examiners and facility licensees.

In general, an operating test consists of approximately three to five scenarios with at least two CTs per scenario; there are usually about six to ten CTs per exam. Therefore, alternate boundary conditions seem to be used around 50% of the time, even though alternate boundary conditions are not the preferred type of boundary condition. NUREG-1021 does not limit the number of ABCs that can be used on the operating test. Given the small sample size (six examinations total), conclusions cannot be drawn as to whether the use of ABCs is high, low, or appropriate.

As shown in Table 6 above, arbitrary ABCs were identified in two examination audits. In one audit, the basis for the ABC was documented as agreement by management, without any additional basis information such as why the cited time period is reasonable or whether the time period was calculated in some fashion. The other audit found an exam with four instances of arbitrary ABCs for critical tasks; that examination was developed before additional guidance was provided about the proper use of ABCs. On this operating test, the four ABCs were similar in that each specified critical task completion within a set time period. Documentation of the basis for the selected time periods was missing, though there was a statement that the facility licensee examination developer and the NRC chief examiner agreed on the time-based ABCs. NUREG-1021, ES-3.3, requires that a time-based ABC include expiration of a reasonable amount of time (emphasis added); however, the NUREG does not explicitly state to document the basis for time-based ABCs in the simulator guide.

Recommendations

Critical Task Methodology

So far, there does not seem to be a significant issue in the identification and development of critical tasks that meet the methodology. The team found one instance where a task that met the critical task criteria was not identified; therefore, the team recommends expanding the scope of the review to evaluate whether simulator scenarios include actions that meet the critical task criteria but are not identified as critical tasks.

Alternate Boundary Conditions

The team should continue to monitor the use of alternate boundary conditions and also track the use of preferred boundary conditions for comparison purposes. This could help the team understand how ABCs affect the level of difficulty of simulator scenarios. The team also recommends communicating best practices for documenting ABCs on simulator operating test forms with NRC examiners. Finally, the team recommends that a future revision of NUREG-1021 explicitly state the need to document the basis for time-based ABCs, to ensure that time-based ABCs are reasonable.

Critical Performance Deficiencies and Significant Performance Deficiencies

Task 3:

o Verify proper and consistent application of the critical task methodology and grading of critical performance deficiencies; identify any areas for improving clarity in the instructions/guidance related to critical performance deficiencies.

Task 5:

o Verify that the instructions for identifying and grading significant performance deficiencies are clear and verify proper and consistent application of grading instructions; identify any areas for improving clarity in the instructions/guidance related to significant performance deficiencies.

Data Collected To verify proper and consistent application of critical performance deficiency (CPD) and significant performance deficiency (SPD) grading, the team reviewed all final CPDs and SPDs from the examinations administered during the first year of NUREG-1021 Revision 12. From the final results of 39 initial licensing examinations given during the period of April 2022 through April 2023, there were 5 CPDs and 18 SPDs. The team reviewed the operating test comments on the affected applicants' Form 3.6-4 and compared the write-ups for the CPDs and SPDs against the criteria in Revision 12 for grading SPDs and CPDs. There were no informal staff reviews associated with a CPD or SPD during this period.

The team also received feedback, via the program office's report on interaction process, about the new and modified instructions in NUREG-1021 Revision 12 for the critical task methodology, SPDs, and CPDs.

Interim Results and Observations From the review of the five CPDs assigned to applicants during the first year of Revision 12 examinations, the team found that the instructions in Revision 12 were applied properly.

However, the team identified an opportunity to improve the instructions for SPDs and CPDs when more than one performance deficiency leads to a critical task not being met. The instructions do not explain how to treat multiple, distinct performance deficiencies that would not individually cause the failure to meet the critical task but, when combined, could be determined to be a CPD, regardless of how the deficiencies are distributed among applicants (for example, whether one applicant made multiple errors or more than one applicant made an error). The program office is currently evaluating this issue for resolution.

From the review of the 18 SPDs assigned to applicants during the first year of Revision 12 examinations, the team found that the instructions in Revision 12 were applied properly but that the instructions need some minor clarifications. Specifically, during the review of the first SPD, the team found an editorial error in the description of an SPD in ES-3.6 on page 4 of 28: the words "or involve" were inadvertently left out of the description. The team corrected the deficiency via an errata to the NUREG issued in September 2022.

In most cases, the SPD write-up included how the performance deficiency met the grading criteria, although the reviewer noted that it was sometimes not clear why the SPD was assigned to a specific competency rating factor. Additionally, at one facility, multiple applicants had similar SPDs related to the diagnosis of plant indications. Some of these SPD write-ups did not include the actual performance deficiency that was observed and focused instead on how the SPD criteria were met.

Recommendations As described above, the operator licensing program office is evaluating the issue of multiple performance deficiencies meeting the threshold of a CPD or SPD to determine if any clarifications are necessary.

The team should continue to review the application of CPD and SPD instructions in the grading of performance deficiencies on the simulator operating test as directed in the effectiveness review plan.

ACAD 10-001 Revision 2 Task 7 o Assess the use of new allowances in ACAD-10-001 for SRO-instant applicants to be eligible for licensing class by possessing a "related sciences degree." Assess any reductions to time spent on shift as under-instruction. Determine the impact, if any, of the changes made to eligibility requirements.

Data Collected Data for this task were collected from the responses to the post-examination questionnaire that each chief examiner completed. One question on the questionnaire asked whether any applicants possessed a related sciences degree, as defined in ACAD 10-001, Guidelines for Initial Training and Qualification of Licensed Operators (ML21144A141).

During the review period, one chief examiner expressed concern that there may be inconsistencies in how licensees implement changes to the number of hours applicants complete as the extra-person-on-shift (EPOS) during initial license training. Under NUREG-1021, Revision 11, and previous versions of ACAD-10-001, specific guidance stated that applicants should have a minimum of 3 months of EPOS. The change in Revision 12 to cite ACAD-10-001 as the primary source of eligibility criteria for training programs accredited by the National Nuclear Accreditation Board resulted in a change to the EPOS timeframe. Operator Licensing Program Feedback No. 2.2.15 clarifies that:

absent a commitment to ensure applicants spend at least 3 months as an extra person on shift, stations with an accredited training program would need to determine the amount of time to schedule applicants as the extra person on shift using the systems approach to training process (e.g., by considering the time needed for applicants to successfully complete on-the-job training tasks and under instruction watches).

In response to a concern that inconsistencies may exist in how licensees make these determinations, the team was asked to track what changes are being made to EPOS. This data collection began in January 2023; specifically, the effectiveness review team tracked any reductions to the previous requirement of 3 months of extra person on shift time observed by chief examiners during application reviews. This question was added to the post-examination questionnaire.

Interim Results and Observations For the period of this interim report, the effectiveness review team collected data from 39 initial licensing examinations, which included 430 applicants. NRC chief examiners were asked to record whether, during their audit of at least 10% of applications, an SRO-instant applicant possessed a related science degree. From this sample, two applicants possessed a related science degree.

The two applicants with a related science degree had the following education and experience: 1) One had a Master of Science degree, leadership experience in the medical field, and nuclear navy experience. 2) One had a degree in Mathematics and Education and extensive experience as a non-licensed operator.

In both cases, the facility licensee convened a committee to evaluate the applicant's eligibility for senior reactor operator training. The committees determined that the applicants had diverse engineering and technical knowledge and experience and/or demonstrated leadership, command and control, and use of technology that met the definition of related science in ACAD 10-001. In each case, the committee and the Site Vice President approved the applicant for senior reactor operator training, and both applicants passed the written examinations and operating tests.

In summary, the team observed facility licensees using the new allowance for assessing related science degrees for two applications. Each of the applicants had previous nuclear experience in addition to relevant education and passed all portions of the written examinations and operating tests.

While collecting the data, the team noted that more than one chief examiner incorrectly reported an applicant as having a related science degree per ACAD 10-001. The team concluded that chief examiners may not fully understand the new definition and process.

The team also interviewed several training managers at facilities to obtain their opinions on the changes to eligibility criteria. The training managers expressed that it is difficult to recruit high-quality people to become operators. They believe that the added flexibility in the ACAD eligibility guidelines will increase the population of people eligible for licensed operator training and therefore increase the quantity and quality of job applicants and, in turn, licensed operator applicants. The review team does not have any data to substantiate these beliefs but includes them in this interim report as anecdotal information about the impact of NUREG-1021 Revision 12.

For the period of this interim report, the team observed that facility licensees maintained the previous requirement for applicants to spend at least three months as the extra-person-on-shift. However, the team received information indicating that some licensees may be in the process of changing their training program requirements, as detailed in the following observations:

  • A chief examiner received a phone call from a licensee training staff member about the change in EPOS time. The facility staff member stated that they are evaluating reducing the extra person on-shift time as long as the set of EPOS learning objectives can still be met through this format of training.
  • A chief examiner noted that a fleet procedure for EPOS time was changed. The procedure required a nominal three months of EPOS time but added allowances to change EPOS time for unanticipated leave or other unanticipated situations.
  • Several licensee training program staff members and formerly licensed operators shared their opinion that the focus should shift from the quantity of EPOS time to its quality for the purpose of increasing an applicant's knowledge and ability to perform licensed operator work. The focus should be on the quality of mentoring, the mixture of time as an extra person with time under instruction in a watchstanding position, and accomplishing the EPOS learning objectives.

Recommendations While the team has not observed any substantial changes being implemented in facility licensee training programs for the assessment of related science degrees, the review team should continue to track how facility licensees are implementing allowances in ACAD-10-001 for SRO-instant applicants to be eligible for licensing class by possessing a "related sciences degree." The review team should also continue to assess any reductions to time spent on shift under instruction. To ensure that the team is receiving accurate data, the effectiveness review team should clarify the definition of related science degree and provide more examples. This could be done during an OL bi-weekly teleconference and in the examination data collection questionnaire.

The team noticed that not all chief examiners fully understand the new ACAD 10-001 related science degree definition and the process for approving these applicants for SRO training. The team should provide more information to the chief examiners on the changes and follow up by determining whether chief examiners accurately report related science degrees under this effectiveness review.

Restructured NUREG Task 8 o Assess the ease of use of NUREG-1021 Revision 12 as a result of the changes made for streamlining, and assess the added value of these changes.

Data Collected To assess the ease of use of the restructured format of NUREG-1021 Revision 12, the effectiveness review team conducted both quantitative and qualitative analyses. First, the team compared the number of hours charged for Revision 12 examinations with the number of hours charged for Revision 11 examinations. Specifically, the team collected the total number of Cost Activity Code (CAC) hours charged for exam development, administration, and documentation from April 2022 to April 2023 in each region and compared that with the total hours charged to the same set of CACs in years 2017 - 2021. The team performed a separate analysis comparing the CAC hour totals in years 2017 - 2018 with the hours in 2019 - 2020 to rule out any differences in hours charged due to the COVID-19 pandemic, which would have impacted the Revision 11 data. The resulting data were analyzed for differences to determine the potential impact of the restructured format on the usability of Revision 12.

Second, the team collected qualitative information about the restructuring of NUREG-1021 through a survey (July 2023) and through regular solicitation of feedback throughout the first year that Revision 12 examinations were developed, reviewed, administered, and graded.

Interim Results and Observations From the review of CAC hours for exam development, the data show that the number of hours for exam development can vary greatly from exam to exam. To analyze whether NUREG-1021 Revision 12 resulted in a significant change in the number of hours required for exam development, a Pareto analysis was performed to compare the spread of exam hours. The results showed that exam development hours were concentrated in the lower bands of the hours charged. The data included a review of the exams from 2017 - 2021 (Revision 11 exams) and April 2022 - April 2023 (Revision 12 exams). A 90% barrier was used as a discrimination threshold to determine the concentration of exam development charges against the overall spectrum of hours charged against the CAC. The results showed that, for the Revision 12 exams, 90% of the hours charged were in the bottom quartile of the total number of hours charged; comparatively, for 2017 - 2021, 90% of the hours charged were in the bottom half. Overall, this demonstrates an increase in the number of exam development charges at the lower end of the total hours charged during the time period. See Figures 4 and 5 for additional details:

Figure 4: Examination development hours using NUREG-1021 Revision 11

Figure 5: Examination development hours using NUREG-1021 Revision 12

For exam administration, the results showed that, for NUREG-1021 Revision 12 examinations, the overall number of hours for exam administration-related activities decreased by an average of 10% per exam across the regions when compared against Revision 11 exams. Furthermore, the standard deviation of exam administration charges decreased from 15% to 8% of the average, possibly indicating that the use of Revision 12 resulted in more consistent exam administration charges across the exams in the regions.
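The "standard deviation as a percent of the average" quoted above is the coefficient of variation. A minimal sketch of that consistency measure, using illustrative hours rather than the actual charge data:

```python
import statistics

def cv_percent(values):
    """Standard deviation expressed as a percentage of the mean
    (coefficient of variation), the consistency measure used above."""
    return 100 * statistics.pstdev(values) / statistics.mean(values)

# Illustrative per-exam administration hours -- not actual CAC data.
rev11 = [85, 100, 115, 100, 70, 130]   # more spread around the mean
rev12 = [95, 100, 105, 100, 92, 108]   # tighter around the mean
print(round(cv_percent(rev11), 1))  # 19.4
print(round(cv_percent(rev12), 1))  # 5.4
```

A drop in this percentage, as reported for Revision 12, indicates that per-exam administration charges cluster more tightly around the average.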

Regarding exam documentation, the results show an overall consistent number of hours for exam documentation-related activities. While the data indicate that the number of hours varied by region and exam, depending on applicant performance, there is no indication of a step increase in documentation hours directly related to the implementation of NUREG-1021 Revision 12.

For the qualitative analysis, feedback was collected from NRC examiners via survey questions about using NUREG-1021 Revision 12 to gauge its effectiveness and efficiency in the administration of initial examinations. The survey results showed that, on average, the regional examiners considered administering and documenting an exam under Revision 12 to be "pretty much the same" level of effort as under Revision 11. Exam development activities related to identifying and defining critical tasks, including using the grading process (PD, SPD, CPD), were considered "a little harder/more burdensome."

However, the feedback regarding the overall ease of use and effectiveness of the NUREG-1021 Revision 12 changes showed that the changes resulted in "pretty much the same" level of effort for exam-related activities.

Recommendations Based on the above, the team recommends continuing to monitor the CAC hours charged to assess the ease of use of NUREG-1021 Revision 12.

Overall Impact of REV 12 Tasks 6 and 8 o Determine the overall impact of the Revision 12 changes on the pass/fail rate for NRC initial licensing examinations.

o Conduct other assessments of value

Data Collected For this task area, the team collected data from 39 NUREG-1021 Revision 12 examinations administered from April 1, 2022, to April 30, 2023. Specifically, the team tracked written examination scores and the pass/fail results for the operating test. This information was compared to historical data from the 2020 - 2021 exams implemented under NUREG-1021 Revision 11 to analyze for adverse trends specific to the implementation of Revision 12.

Additionally, the team tracked the number of ROIs, appeals, and hearings related to NUREG-1021 Revision 12 and compared this to the total received during implementation of NUREG-1021, Rev 11. The staff also collected feedback from the region-based examiners via a survey (July 2023) on the overall effectiveness of the major changes in NUREG-1021 Revision 12 and what areas, if any, should be considered for improvement in the next revision.

Interim Results and Observations The average pass rate for the NRC exams from April 2022 to July 2023 was 96.9%, with an average failure rate of 3.1%. This is comparable to the historical average failure rate of 3.5%. The written examination average pass rate is 97.1%, which is within the historical average pass rate range of 95 - 98%. The operating test pass rate is 99.5%, which is above the historical average pass rate range of 98 - 99%. See Table 7 for reference.

Table 7: Overall Pass and Failure Rates for Revision 11 and Revision 12 licensing examinations

Average Pass and Failure Rates    Rev 11 Exams    Rev 12 Exams
Pass Rate                         96.5%           96.9%
Failure Rate                      3.5%            3.1%
Written Test Pass Rate            95 - 98%        97.1%
Operating Test Pass Rate          98 - 99%        99.5%
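The comparisons above amount to checking each Revision 12 rate against its historical Revision 11 band. A minimal sketch of that check, using the written and operating test figures from Table 7:

```python
# Written and operating test pass rates from Table 7 of this report.
historical = {"written": (95.0, 98.0), "operating": (98.0, 99.0)}
rev12 = {"written": 97.1, "operating": 99.5}

def compare(metric):
    """Classify a Revision 12 pass rate against the Revision 11 band."""
    lo, hi = historical[metric]
    value = rev12[metric]
    if value < lo:
        return "below historical range"
    if value > hi:
        return "above historical range"
    return "within historical range"

print(compare("written"))    # within historical range
print(compare("operating"))  # above historical range
```

Only a rate falling below its historical band would suggest an adverse trend attributable to Revision 12; neither does here.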

In a review of the average grades, the average RO, SRO, and overall (combined RO and SRO) written examination grades for the effectiveness review period were 89.1%, 90.0%, and 89.6%, respectively. The historical averages for the RO, SRO, and overall sections of the written exam are 84 - 92%, 89 - 91%, and 90.1%, respectively. These results are similar to the previous performance results under NUREG-1021 and are well within an acceptable level of deviation for the data collected. See Table 8 for reference.

Table 8: Overall Written Examination Averages, Revision 11 and Revision 12 written examinations

Overall Written Examination Averages    Rev 11 Exams    Rev 12 Exams
RO Portion                              84 - 92%        89.1%
SRO Portion                             89 - 91%        90.0%
Overall Exam                            90.1%           89.6%

In a review of the ROIs, a total of 72 ROIs were received from March 2017 to March 2022, an average of approximately 1.36 ROIs per month. From March 2022 to July 2023, 25 ROIs were received, an average of approximately 1.31 ROIs per month. While the number of ROIs has varied per year, the overall trending average does not indicate an abnormally high number of ROIs as a result of using NUREG-1021 Revision 12.

From an assessment of the number of administrative reviews and hearings, from 2017 to March 2022 there was an average of 6 administrative reviews per year; this represents the average under NUREG-1021 Revision 11. From March 2022 to July 2023, there was an average of 3.5 administrative reviews per year. There has been only one hearing in the last ten years; it occurred in 2013. These numbers indicate a stable level of reviews and do not indicate an adverse response to the implementation of NUREG-1021 Revision 12.

With respect to the NUREG-1021 Revision 12 survey, which was distributed to 65 regional examiners, there were a total of 28 responses. Those who responded provided numerous comments regarding the operator examination process, including requests for revisiting the performance deficiency grading methodology, the use of critical tasks, and the competency areas and rating factors. Additional comments were received on the changes to the Tier 4 generic fundamentals question requirements made in Revision 12, which are being reviewed as part of the effectiveness review. Additional feedback related to post-scenario documentation was received regarding grading of potential critical task failures or significant performance deficiencies.

Finally, there were two positive comments pertaining to Revision 12, including feedback stating that the overall organization is much better than in previous revisions.

Recommendations The final part of the examiner survey asked for input on what to include in a future revision of NUREG-1021 and the team received the following recommendations:

  • The instructions for grading performance deficiencies should include direction to document any decisions about PDs, SPDs, or CPDs made as a result of information collected from simulator data collection or post-scenario questions. Specifically, examiners should document the basis for not grading a performance deficiency as an SPD or CPD in the operating test grading.
  • The program office should consider additional ways to reduce subjectivity in simulator operating test grading.

The data supports that the changes made in NUREG-1021 Revision 12 did not result in a significant deviation in exam scores or pass rates. The team should continue to collect data in accordance with the effectiveness review plan. Comments received in the survey with regard to making changes to the rating factor assignment process are outside the scope of the effectiveness review but can be addressed during a future revision to NUREG-1021.

Summary The team has not observed a substantial impact of the Revision 12 changes on the operator licensing program at this point in the effectiveness review. The team should continue to collect and analyze data in accordance with the effectiveness review plan and take action on the following recommendations from the task areas above:

1. Provide additional training to NRC examiners on the review of GF questions.
2. Consider adding specific instructions for how to modify GFE bank questions, along with examples of satisfactory GF questions, in the next revision of NUREG-1021.
3. For the effectiveness review, consider using the average GF question performance (from this first year of Revision 12 examinations) of applicants who have taken an NRC GFE to establish a baseline of performance on the new set of GF questions, for use in assessing how generic fundamentals knowledge changes as a result of the removal of the NRC GFE requirement.
4. Expand the scope of the effectiveness review to evaluate whether the simulator scenarios include actions that meet the critical task criteria but were not identified as critical tasks. (Complete)
5. Expand the scope of the effectiveness review to include the use of preferred boundary conditions for comparison with the use of alternative boundary conditions. (Complete)

6. Communicate best practices for documenting alternative boundary conditions on simulator operating test forms with NRC examiners.
7. Consider adding explicit instructions to document the basis for time-based alternative boundary conditions in the next revision of NUREG-1021.
8. For the effectiveness review data collection questionnaire, clarify the definition of related science degree. (Complete) Add this information to the Nuclepedia page, 1021 Toolbox, for knowledge management purposes.