ML11061A005
Site: Hope Creek
Issue date: 03/30/2011
From: Undine Shoop, Division of Inspection and Regional Support
To: Houghton T, Nuclear Energy Institute; Molly Keefe
Hope Creek SC Assessment Observations
- The following issues were identified by the NRC team through observation of the Hope Creek safety culture assessment and an interview with the team lead. As the team observed the pilot, the NRC guidance for conducting an independent or third-party safety culture assessment, located in Inspection Procedure 95003.02, was used as the basis for evaluating the assessment process.
I. Process Highlights
- The NRC team observed that the framework and overall NSCA assessment process can provide useful insights into a site's safety culture if implemented effectively. The NRC team noted several acceptable aspects of the NSCA, as follows.
The NRC team observed that the afternoon team meetings, which the team leader used as a learning opportunity for each assessment sub-team, seemed to be effective and allowed for robust team dialog. In addition, the day 4 (Thursday) afternoon roll-up exercise allowed the NSCA team to continue that dialog in greater depth, so that each team member gained a good understanding of the issues facing the site. The team members questioned each other, and used the dialog to develop thoughtful assessments.
After observing previous pilot assessments, the NRC provided feedback to NEI that the process should allow more opportunity for the assessor to follow up on issues for which there was little or conflicting information that may warrant further clarification. During day 4 of the Hope Creek assessment, the assessment team leader suggested to the teams that the afternoon roll-up session would be a good opportunity to collect additional information on subject areas that required more clarification. Several members of the Hope Creek assessment team followed this guidance, which allowed them to gather additional information on several issues. However, not every team member used the opportunity to their advantage. The NRC staff determined that the practice of allowing time to collect clarifying information is effective if implemented consistently, because it gives the team member an opportunity to better understand the extent of the issue.
The NRC team determined that the method of asking for specific examples from interviewees is a good practice/strength. Asking for specific examples from interviewees (1) provides a check on the interviewee's understanding of the safety culture principle or attribute that is the topic of the interview question by giving them an opportunity to demonstrate that they are able to apply the principle or attribute to real-world situations; (2) provides diagnostic information to the team by identifying past incidents which may have had an impact on individuals' perceptions of safety culture at the site; and (3) enhances the assessment team's ability to explain the issues identified by the team to site senior management.
II. Observations of Interviews and Interview Questions
- The NRC team noted that the interview questions used in this assessment differed from those used in previous pilots and identified three issues with the revised questions that, if addressed, would improve the process, as described below:
- 1. The first issue is that the interview questions are worded ambiguously. For example, several of the questions address two or more topics in one question, which can make it difficult for the respondent to understand what is really being asked. The ambiguity and abstract nature of the interview questions did not ensure that the answers received provided information pertaining to the INPO safety culture principle or attribute on which the question was based. In some instances observed by the NRC team, interviewees did not appear to understand the intent of the question and provided answers and examples that were unrelated to the question topic. In other instances, interviewees provided answers that focused on only one part of a question, rather than all of the topics included in a specific question. The NRC team observed that interviewers often restated the questions to clarify them, but their restatements also frequently focused on only one part of the complex interview questions. An example of a question that the NRC team observed to be confusing to interviewees was "Does employee knowledge of fundamentals establish a solid foundation for sound decisions?" rather than "Do you feel that you and those you work with have the knowledge, information, and training needed to make good decisions?"
- 2. The NRC team also observed that the interview questions were not tailored to the organizational level and work duties of the interviewees. As a result, the questions appeared to the NRC team to frequently confuse the interviewees.
One example of a question not tailored to the organizational level of an interviewee occurred during an interview with maintenance personnel, when the interviewer asked questions pertaining to upper management and corporate bonus structures. The maintenance personnel did not have access to information about these topics and so provided answers that did not address the question. Another example of a question that was irrelevant to the interviewees' work duties was asking contract security personnel whether support groups such as Human Resources "understand their roles in contributing to nuclear safety." This question could not elicit informed responses from contract security personnel because they do not interact with the licensee's Human Resources department. These weaknesses in the interview questions are of concern because the interviewees' lack of knowledge in the area will not allow the interviewer to gather the information he or she wants. Because interviewees may not have knowledge of the area in question, they may provide responses that are outliers, which may bias conclusions in the final report.
- 3. The process by which questions were selected for interviews did not appear to the NRC team to be clearly defined or consistently implemented. The practice appears to differ from what is prescribed in the NSCA manual. In some cases, the assessment team chose questions based on identified weak areas in the survey results, but in other cases, questions were chosen simply so that the question would be used, without consideration of its applicability. Although teams were given information about topic areas in which work groups scored low and high on the survey, the team members' focus often seemed to be on assessing all of the principles and attributes equally. For example, at the end of each day, the team would look at which questions had been asked less frequently and would try to fill those in during the next round of interviews. This strategy may come at the cost of not asking those questions for which the survey showed a very low score for a certain group.
There is not a specific strategy to ensure that the critical areas identified by the survey are the primary focus during the interviews, versus one that weights all
questions/principles equally. There is an opportunity to collect more detailed and diagnostic information during interviews than is possible through a survey.
Additional diagnostic information about work groups in which survey results suggest there might be a problem would be more useful for developing recommendations for corrective actions.
The following bullets describe additional Interview Method Observations:
- The process for selecting interviewees was not clear to the NRC team. Specifically, it was not clear to the NRC team who at the site was responsible for selecting individuals and scheduling the interviews. The NRC would anticipate that if the team leader requests a general number of individuals instead of specific names, the supervisors of those individuals would not be involved in the selection process. For an independent or third-party assessment, the site should not be involved in selecting individuals.
- It is not clear to the NRC team how and why the choice of interview type, individual or group, is made. Additionally, it is not clear to the NRC team why different types of interviews are suggested in the manual for different work groups. The Hope Creek team's process appeared to differ from what is prescribed in the NSCA manual, which says that some of the interviews with the line organization should be done in groups of two to four. For example, during the Hope Creek assessment, the choice of interview type appeared to be decided ad hoc rather than being based on a specific type/level of workgroup. The NRC team observed that a few group interviews were scheduled, but it was not clear how or why those particular people were chosen to be interviewed in groups versus individually. The NRC team observed that the NSCA process would be more effective and scrutable if criteria were established in the manual for when individual and group interviews should be used, based on best practices and lessons learned.
- It is not clear to the NRC team why 50 to 60 interviews, as described in the manual, are considered adequate for every site. Hope Creek is a single-unit site, but for sites with multiple units and larger work forces, 50 to 60 interviews might not capture enough information to make a valid assessment.
- The NRC team observed that the location of the interviews did not always ensure confidentiality, anonymity, and neutrality or limit the number of distractions present. In some cases, the rooms used by the assessment team for interviews had windows, were located in areas with a high volume of foot traffic, and were not marked to prevent intrusion. This allowed other site personnel to see who was being interviewed. The NRC team noted that interviews were frequently interrupted by others entering the interview rooms looking for other meetings. The NRC team determined that this assessment could have benefited from using clearly marked rooms to minimize interruptions during the interviews. Allowing privacy for interviewees may encourage them to provide more candid comments.
III. Observations on Selection/Training for Interviewers
- There appear to the NRC team to be no criteria specifying appropriate qualifications for assessment team members. Based on feedback from the assessment team, it is clear that thought was put into who should be included on the assessment team, primarily in terms of understanding of functional areas (e.g., ops, maintenance). However, the manual states that good interviewing and interpersonal skills should also be considered for inclusion on the team. While the NRC agrees that there is significant value in having team members with relevant experience in the area they are assessing, the desired results cannot be achieved without the skills to extract them. Based on the NRC team's observation, the skills needed to conduct effective interviews/focus groups were often not present. The NRC team did not observe that formal training was given to interviewers, nor was a dry-run interview conducted, which is inconsistent with the guidance in the manual. The results of the interviews would be improved if standardized training with role playing were provided.
Based on previous NSCA observations and discussions with the Hope Creek team, the NRC team observed that there may be inconsistencies in how sites and team leaders are conducting the training that is provided to the assessment teams. The NRC team observed that no standardized training on how to conduct individual interviews or focus groups (e.g., pulling the thread, rephrasing, avoiding leading questions) was given. For the Hope Creek assessment, the team leader did provide interviewing guidance in notebooks, which he handed out during the Sunday training session; the NRC team noted that this is a good practice if implemented during each NSCA. For an independent or third-party safety culture assessment, training needs to be emphasized, specifically on conducting focus groups and on good interviewing techniques.
IV. Observations of Assessment Scoring
- The NRC team observed that the scoring rules for interview responses are unclear and, as was observed at other pilot sites, were applied inconsistently by the different assessment sub-teams. The NRC team did not observe standardized training on the scoring rules and how to apply them. The practice at Hope Creek appeared to differ from what is prescribed in the NSCA manual. While at Hope Creek, the NRC team watched a recently developed training video which demonstrated the differences between a positive, neutral, and negative interview response and how to score each response on the score sheet. The NRC agrees that using a video which demonstrates each type of interview is a good practice; however, the scoring rules used by the actors in the video were unclear. For example, the NRC team could not tell whether the actors were scoring answers negatively simply because the interviewee could not provide a concrete example, versus the interviewee providing an example that was positive, neutral, or negative. Additionally, the NRC team observed that the interviews in the video did not demonstrate good interviewing practices.
V. Miscellaneous
- Field observations, although prescribed in the NSCA manual, were very limited during most of the week, although they were encouraged for follow-up during the fourth day.
- Document reviews appeared to be limited to a few procedures and site policies. For example, the NRC team did not observe that the assessment team reviewed training materials for safety culture or safety conscious work environment, which would provide the team with data on how site management communicates expectations in these areas.
For an independent assessment, the NRC team notes that a review of ECP files can provide insights into already existing trends or issues in specific work groups.