Integrated Human Event Analysis System for Human Reliability Data (IDHEAS-DATA)

Jing Xing, Y. James Chang, Jonathan DeJesus Segarra
U.S. Nuclear Regulatory Commission
(01)-301-802-3196, jing.xing@nrc.gov

Abstract

Human actions are a significant contributor to overall plant risk, and human reliability analysis (HRA) results directly affect the risk-informed decisionmaking of the U.S. Nuclear Regulatory Commission (NRC). Many conventional HRA methods were not developed with a strong data basis; therefore, their results can be associated with large uncertainties. From time to time, the uncertainties are large enough to affect regulatory decisions. Further, many conventional HRA methods lack the data basis to support HRA applications for emerging technologies, such as Diverse and Flexible Coping Strategies and digital instrumentation and control. The NRC developed the Integrated Human Event Analysis System (IDHEAS) series of products, the IDHEAS General Methodology (IDHEAS-G) and the IDHEAS Method for Event and Condition Assessment (IDHEAS-ECA), to address these issues by being human-centered (thus being expandable to novel situations) and data-based. Human reliability data are essential to IDHEAS-ECA. The NRC staff developed a database, referred to as IDHEAS-DATA, to generalize and document human error data to support human error probability calculations in IDHEAS-ECA. This paper describes the structure of IDHEAS-DATA and the process of generalizing human error data of various sources and of different formats.

IDHEAS-DATA documents the human reliability and performance data collected through a large-scale literature review. The data were classified based on the scientific foundations described in IDHEAS-G and generalized to support the development of the IDHEAS-ECA method. The data were from various sources, including operational experience and studies of human reliability and performance in nuclear and non-nuclear domains. The large data diversity and quantity establish a strong data basis. The data generalization process and scientific foundation provide a sound process to include new HRA data.

The data are generalized into 27 tables, referred to as IDHEAS-DATA TABLEs (IDTABLEs).

IDTABLE-1 through IDTABLE-20 document the data related to the effects of the performance influencing factors (PIFs) documented in IDHEAS-G. IDTABLE-21 includes data associated with optimal human reliabilities. IDTABLE-22 concerns the combined effects of more than one PIF.

IDTABLE-23 and IDTABLE-24 are data for assessing the uncertainty distribution of the time required to perform a task. The information documented in IDTABLE-23 and IDTABLE-24 is a small portion of the collected data. The NRC has analyzed a much larger portion of the literature to support guidance development on specifying the uncertainty distributions of task completion times. IDTABLE-25 and IDTABLE-26 are information on task dependency and error recovery, respectively. Finally, IDTABLE-27 documents the situations where a high percentage of human failures occurred.

IDTABLE-27 helps HRA analysts understand the main drivers to human error to help them quickly perceive similar conditions in their analyses. The human error data generalized in IDHEAS-DATA were independently verified in 2020.

IDHEAS-DATA, along with other IDHEAS-series products, modernizes the NRC's HRA techniques with a solid scientific, technology-inclusive foundation and a strong data basis.

1. Introduction

The U.S. Nuclear Regulatory Commission (NRC) uses probabilistic risk assessment (PRA) results and insights for risk-informed regulatory decisionmaking. The NRC continues to improve the robustness of PRA, including human reliability analysis (HRA), through many activities. To date, roughly fifty HRA methods have been developed worldwide to estimate human error probabilities (HEPs) to support PRA. Yet, the use of empirical data for HEP estimation has been limited due to the lack of databases for the HRA methods, discrepancies in the formats of available data, and the limited relevance of data to nuclear power plant (NPP) operation. Human error data are available from task performance in various domains, in different formats, and at a range of levels of detail. The available human error data cannot be directly used for HEP estimation. The lack of a strong data basis in HRA methods challenges the validity of HEP estimations.

The NRC staff developed the General Methodology of the Integrated Human Event Analysis System (IDHEAS-G) [1], which is built on a cognitive basis structure and has the capability of using human error data to improve HEP estimations. IDHEAS-G provides a hierarchical structure to analyze and assess the reliability of human actions. IDHEAS-G models human performance with five macrocognitive functions: Detection, Understanding, Decisionmaking, Action Execution, and Interteam Coordination. IDHEAS-G defines a set of cognitive failure modes (CFMs) for each macrocognitive function to describe the various ways of failing the corresponding macrocognitive function. IDHEAS-G also has a performance-influencing factor (PIF) structure that consists of a set of PIFs and their attributes to represent the context of a human event. IDHEAS-G analyzes an event in progressively more detailed levels: event scenario, human actions, critical tasks of the actions, macrocognitive functions and CFMs of the critical tasks, and PIFs and the associated attributes. This structure provides an intrinsic interface to generalize various sources of human error data for HEP estimation.

Along with the development of IDHEAS-G, the NRC staff developed IDHEAS-DATA [2], a data structure that generalizes and documents human error data from various sources. The sources of human error data reported in operational databases and literature are evaluated. For the human error data reported, the human task performed is analyzed for applicable CFMs; the context under which the task was performed is represented with PIF attributes. Thus, a piece of human error data is represented as one datapoint in IDHEAS-DATA, with the error rates for the CFMs under the given PIF attributes. Developing IDHEAS-DATA has been a continuous effort as more human error data are identified from the literature and new data become available. The data, once sufficiently populated, can provide a basis for estimating HEPs.

In 2019, the NRC staff developed the IDHEAS for Event and Condition Assessment (IDHEAS-ECA) method based on IDHEAS-G. The first version of the IDHEAS-ECA method is documented in NRC Research Information Letter RIL-2020-02 [3]. IDHEAS-ECA models human errors in a task with five CFMs, that is, the failures of the five macrocognitive functions in IDHEAS-G. IDHEAS-ECA models event context with the 20 PIFs in IDHEAS-G, but with fewer PIF attributes than are presented in IDHEAS-G. IDHEAS-ECA uses a set of base HEPs and PIF weights to calculate HEPs of the CFMs of a human task for the given context. In developing IDHEAS-ECA, the NRC staff integrated the human error data populated in IDHEAS-DATA to estimate the base HEPs and PIF weights.

This paper presents the IDHEAS-DATA framework and the process of generalizing human error data. IDHEAS-DATA provides the data foundation for IDHEAS-ECA and also provides a knowledge base for those who use IDHEAS-ECA and query the data basis. Moreover, the IDHEAS-DATA framework can serve as the hub for HRA data exchange and synthesis, which may be of interest to those who want to use human error data for HRA.

2. Overview of human error data

Human error data have been available in various work domains such as aerospace, aviation, manufacturing, and health care. Many cognitive behavioral studies produced human error data in controlled experimental contexts. Moreover, human performance data in nuclear power plant operations have become available in the last two decades. Several human performance databases have been developed to systematically collect operator performance data in nuclear power plants for HRA. Examples of recent efforts include the Scenario Authoring, Characterization, and Debriefing Application (SACADA) database [4] developed by the NRC and the Human Reliability Data Extraction (HuREX) database [5] developed by the Korea Atomic Energy Research Institute. While individual sources of human error data may not be enough to inform HEPs for all kinds of human tasks under a large breadth of contexts, consolidating and integrating the data available in various sources would yield more robust HEP estimates.

Ideally, the data to inform HEPs would have the following features:

  • The numerators and denominators of human error rates are collected, and the denominators are sufficiently large that the resulting error rates are statistically significant.
  • Human error rates are measured repetitively under the same context to minimize uncertainties in the data.
  • Human error rates are collected for a variety of personnel so that the data can represent average personnel or operators.
  • Human error data are collected for a range of task types or failure modes, and their contexts are sufficiently documented.

While such ideal data do not exist, these features can be used as criteria to evaluate real data for their applicability to HRA.
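As a rough illustration of the first feature, the sketch below computes an error rate and an approximate confidence interval from a numerator and denominator. The Wald interval used here is a standard approximation chosen for brevity; it is not a formula prescribed by IDHEAS-DATA, and the example counts are taken from the SACADA datapoint cited in Section 4.

import math

def error_rate_with_ci(errors: int, opportunities: int, z: float = 1.96):
    """Point estimate and approximate 95% (Wald) interval for an error rate errors/opportunities."""
    p = errors / opportunities
    half_width = z * math.sqrt(p * (1.0 - p) / opportunities)
    return p, max(0.0, p - half_width), min(1.0, p + half_width)

# Example: 8 unsatisfactory performances in 69 opportunities (the SACADA diagnosis datapoint in Section 4).
# The width of the interval illustrates why a large denominator is needed for statistically meaningful rates.
print(error_rate_with_ci(8, 69))    # approximately (0.116, 0.040, 0.192)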

Along with the development of IDHEAS-G, the NRC staff documented human error data from the literature and human performance databases. The data sources include the following categories:

A. Nuclear simulator data (e.g., SACADA) and operational data (e.g., nuclear power plant maintenance human error data)

B. Operation performance data from other domains (e.g., air traffic control operational errors)

C. Experimental data reported in the literature

D. Expert judgment data

E. Inference data (statistical data, ranking, and categorization, etc.)

The NRC staff examined data from a variety of sources for their applicability of informing HEPs. The following are several types of human error data with examples to demonstrate whether and how the data may be used to inform HEP estimation.

Human error rates with known PIFs

This type of data provides the numerators and denominators of human error rates for tasks performed under the same context or under the known range of contexts. Such data can directly inform the HEPs of the CFMs. The following are two examples:

  • Data of unsatisfactory task performance in nuclear power plant operator simulator training, as collected in the NRC's SACADA database. The database was built with the same macrocognitive model as that in IDHEAS-G and collects operator task performance for various training tasks under different contexts. The types of failures of performing the tasks can be mapped to the CFMs in IDHEAS-G, and the various contexts can be represented with IDHEAS-G PIF attributes. Thus, the SACADA database can inform the HEPs of the CFMs and the quantitative effects of PIF attributes on HEPs.
  • The analysis of human errors in maintenance operations of German nuclear power plants: Preischl and Hellmich [6,7] studied human error rates for various basic tasks in maintenance operations. The following are some example human error rates reported:

- 1/490 for operating a circuit breaker in a switchgear cabinet under normal conditions

- 1/33 for connecting a cable between an external test facility and a control cabinet

- 1/36 for reassembly of component elements

- 1/7 for transporting fuel assemblies

This type of data from operational databases inherits uncertainties in the data collection process. For example, the definitions of human failure vary from one database to another, so caution is needed when integrating human error rates from different sources.

Quantification of PIF effects

Many sources present the changes in human error rates when varying the states of one or more PIFs. Such data can inform the quantification of PIF effects in the IDHEAS-G quantification model.

The following are several examples:

  • Prinzo et al. [8] analyzed aircraft pilot communication errors and found that the error rate increased nonlinearly with the message complexity of the communication. The error rate was around 0.04 for a message complexity index of 4 (i.e., the number of messages transmitted per communication), 0.3 for an index of 12, and greater than 0.5 for indices greater than 20.
  • Patten et al. [9] studied the effect of task complexity and experience on driver performance. The PIFs manipulated in the experiment were low experience versus high experience, and low complexity versus high complexity. The mean error rates were 0.12, 0.21, 0.25, and 0.32, respectively, for the four combinations of PIF states: low complexity and high experience, low complexity and low experience, high complexity and high experience, and high complexity and low experience.

Human error rates with unknown or mixed contexts

This type of data presents human error rates calculated statistically across a mixture of contexts. Such data cannot inform HEPs because neither the failure modes nor the context was specified. The data might represent the best or worst possible scenarios or the average scenario. This type of data can be used to validate the ranges of HEPs obtained by other means.

Given the variety of data sources, the following criteria were used to select data sources for HRA:

1) Participants - The participants of the studies should be adults trained on the tasks for which human performance was measured; the sample size should be adequate to yield statistically significant results.
2) Tasks - The tasks studied should be at the macrocognitive function level; i.e., performing the task would demand most, if not all, of the processors of a macrocognitive function.
3) Measurements - Human error rates are the most preferred measures; task performance measures that can be related to human error rates can also be considered.
4) Specificity - The studies should be clearly described such that the CFMs of the tasks and PIFs in the context are identifiable.
5) Uncertainties - The uncertainties in the studies should be controlled, known, or traceable.
6) Breadth of representation - The study should be representative of the research field it represents; it is desirable that the reported results have been repeated or confirmed by other research organizations. On the other hand, the selection of data sources from similar studies should have the breadth to represent the entire field rather than overly representing a few research labs among a large variety of similar ones.

The above are the general criteria for selecting data sources. Often, the criteria must be compromised for areas, such as organizational factors, where human error data are sparse. Also, the criterion on participant sample size needs to be compromised for high-fidelity simulation studies with operational crews in complicated scenarios. Compromises of the data selection criteria should be clearly annotated as sources of uncertainty in the data.

3. Basis for generalizing human error data

IDHEAS-G is the basis for generalizing human error data. This section briefly introduces the IDHEAS-G cognitive model, which includes the cognitive basis structure and the PIF structure, as well as the HEP quantification model in IDHEAS-G.

3.1 The cognitive basis structure

The cognitive basis structure models the cognitive and behavioral process of success or failure of a task. The model explains the cognitive process of human performance in applied work domains where human tasks are complex and often involve multiple individuals or teams. The model is described as follows:

  • Macrocognition consists of five functions: Detection, Understanding, Decisionmaking, Action Execution, and Interteam coordination. The first four functions may be performed by an individual, a group or a team, and the Interteam coordination function is performed by multiple groups or teams.
  • Any human task is achieved through the macrocognitive functions; complex tasks typically involve all five macrocognitive functions. Failure of each macrocognitive function is a CFM; thus, the failure of a task can be represented with one or more CFMs.
  • Each macrocognitive function is processed through a series of basic cognitive elements (processors); failure of a cognitive element leads to the failure of the macrocognitive function.
  • Each element is reliably achieved through one or more cognitive mechanisms; errors may occur in a cognitive element if the cognitive mechanisms are challenged.
  • PIFs affect cognitive mechanisms.

3.2 The performance-influencing factor structure

The PIF structure describes how various factors in the event context affect the success or failure of human tasks. PIFs affect cognitive mechanisms and increase the likelihood of macrocognitive function failure. The PIF structure is independent of HRA applications and systematically organizes PIFs to minimize inter-dependency or overlapping of the factors. The PIF structure is described as follows:

  • PIF category: PIFs are classified into four categories, corresponding to characteristics of environment and situation, systems, tasks, and personnel.
  • PIFs: Each category has high-level PIFs describing specific aspects of the environment and situation, systems, tasks, or personnel.
  • PIF attributes: These are the specific traits of a performance influencing factor. A PIF attribute represents a poor PIF state that challenges cognitive mechanisms and increases the likelihood of errors in cognitive processes.

Table 1 shows the PIFs within the four categories.

Table 1. Performance-influencing factors in IDHEAS-G

Environment and situation:
  • Work Location Accessibility and Habitability
  • Workplace Visibility
  • Noise in Workplace and Communication Pathways
  • Cold/Heat/Humidity
  • Resistance to Physical Movement

System:
  • System and Instrumentation and Control (I&C) Transparency to Personnel
  • Human-System Interface (HSI)
  • Equipment and Tools

Personnel:
  • Staffing
  • Procedures, Guidelines, and Instructions
  • Training
  • Team and Organization Factors
  • Work Processes

Task:
  • Information Availability and Reliability
  • Scenario Familiarity
  • Multi-Tasking, Interruptions and Distractions
  • Task Complexity
  • Mental Fatigue
  • Time Pressure and Stress
  • Physical Demands
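As an aid to readers, the sketch below encodes a small slice of this hierarchy (category, PIF, attribute) as a nested mapping. The structure itself is ours, only a few PIFs are included, and the two attribute identifiers shown (C11 and SF3.1) are taken from the examples in Section 4; their descriptions are paraphrased, not quoted from IDHEAS-G.

# Minimal sketch of the IDHEAS-G PIF hierarchy: category -> PIF -> attribute -> description.
# Only a slice is shown; attribute descriptions paraphrase the Section 4 examples.
PIF_STRUCTURE = {
    "Task": {
        "Task Complexity": {
            "C11": "Number of key pieces of information to be kept in memory",
        },
        "Scenario Familiarity": {
            "SF3.1": "Anomaly scenario (unfamiliar situation)",
        },
    },
    "Personnel": {
        "Training": {},        # attributes omitted in this sketch
        "Work Processes": {},
    },
    "System": {
        "Human-System Interface (HSI)": {},
    },
    "Environment and situation": {
        "Workplace Visibility": {},
    },
}

# Example lookup: the description of Task Complexity attribute C11.
print(PIF_STRUCTURE["Task"]["Task Complexity"]["C11"])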

3.3 The human error probability quantification model

IDHEAS-G provides its HEP quantification model. The estimation has two parts: estimating the error probability attributed to the CFMs (Pc) and estimating the error probability attributed to the uncertainties and variability in the time available and time required to perform the action (Pt). The estimation of the HEP is the probabilistic sum of Pc and Pt:

P = 1 - (1 - Pc)(1 - Pt)     (1)

In Equation (1), P is the probability of the HFE being analyzed (i.e., the HEP), and Pc and Pt have already been defined. Note the following:

  • Pt can also be viewed as the probability that the time required to perform an action exceeds the time available for that action, as determined by the success criteria.
  • Pc captures the probability that the human action does not meet the success criteria due to human errors made in the problem-solving process.

Estimation of Pc is the probabilistic sum of the HEPs of all the CFMs of the critical tasks in a human action. The probability of a CFM applicable to a critical task is a function of the PIF attributes associated with the critical task. Provided that all the PIF impact weights and base HEPs are obtained, the probability of a CFM for any given set of PIF attributes is estimated as:

Pcfm = Re × C × Po × [1 + Σi (wi - 1)]
     = Re × C × Po × [1 + (w1 - 1) + (w2 - 1) + ... + (wn - 1)]     (2)

The terms in Equation (2) are defined as follows:

  • Po is the base HEP of a CFM for the given attributes of the following three PIFs: information availability and reliability, scenario familiarity, and task complexity. Po is also calculated as the probabilistic sum of the base HEPs for the three PIFs:

Po = 1 - [(1 - Po,IA)(1 - Po,SF)(1 - Po,TC)]     (3)

where Po,IA, Po,SF, and Po,TC are the base HEPs for information availability and reliability, scenario familiarity, and task complexity, respectively. In situations when no adverse conditions are identified in the three base PIFs, the lowest base HEP of the CFM is assigned to Po.

  • wi is the PIF impact weight for the given attributes of the remaining 17 PIFs and is calculated as:

wi = Ei / E0     (4)

where Ei is the human error rate at the given PIF attribute and E0 is the human error rate when the PIF attribute has no impact. The human error rates used in Equation (4) are obtained from empirical studies in the literature or operational databases that measured the human error rates while varying the PIF attributes of one or more PIFs.

  • Re is a factor that accounts for the potential recovery from failure of a critical task, and it is set to 1 by default. C is a factor that accounts for the interaction between PIFs, and it is set to 1 for the linear combination of PIF impacts unless there are data suggesting otherwise.

Given equations (1) through (4), calculating the HEP of any CFM requires the base HEPs of all the attributes of the three base PIFs and the weights of all the attributes of the other 17 PIFs. IDHEAS-DATA provides human error data for deriving the base HEPs and PIF weights.
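To make the quantification concrete, the following is a minimal sketch, in Python, of Equations (1) through (4) as written above. The function names and numeric inputs are ours and purely illustrative; they are not values taken from IDHEAS-DATA.

def probabilistic_sum(probabilities):
    """Probabilistic sum (probabilistic OR): 1 minus the product of (1 - p)."""
    result = 1.0
    for p in probabilities:
        result *= (1.0 - p)
    return 1.0 - result

def base_hep(p_info, p_familiarity, p_complexity):
    """Equation (3): base HEP Po from the three base PIFs."""
    return probabilistic_sum([p_info, p_familiarity, p_complexity])

def pif_weight(rate_with_attribute, rate_no_impact):
    """Equation (4): PIF impact weight wi as a ratio of error rates."""
    return rate_with_attribute / rate_no_impact

def cfm_hep(p_o, weights, recovery=1.0, interaction=1.0):
    """Equation (2): HEP of one CFM; Re and C default to 1 as stated in the text."""
    return recovery * interaction * p_o * (1.0 + sum(w - 1.0 for w in weights))

def hfe_hep(p_c, p_t):
    """Equation (1): HEP of the human failure event from Pc and Pt."""
    return probabilistic_sum([p_c, p_t])

# Illustrative numbers only (not from IDHEAS-DATA):
po = base_hep(0.001, 0.005, 0.01)            # Equation (3)
w = pif_weight(0.21, 0.12)                   # Equation (4); ratio in the style of the Patten et al. rates in Section 2
pc = probabilistic_sum([cfm_hep(po, [w])])   # Pc: probabilistic sum over the applicable CFMs (one CFM here)
print(hfe_hep(pc, p_t=0.02))                 # Equation (1)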

4. The process of generalizing human error data

This section introduces the process of generalizing human error data. All the numeric values in this section are for demonstrating the process. Human error data generalization is the mapping of the context and task from the data source onto the IDHEAS-G elements (e.g., CFMs and PIFs) and the documenting of uncertainties in the data. The generalized data are documented in IDHEAS-DATA.

4.1 Approach of generalizing human error data from various sources

Various sources of human error data provide instances of human errors, error rates (i.e., percent of errors), or task-related performance measures of human actions, tasks, or failure modes. The human error data are generally measured in a specific context. The NRC staff generalizes human reliability data from various sources into a common format using the IDHEAS-G cognition model (cognitive basis structure and PIF structure). Then, the generalized human reliability data are documented in IDHEAS-DATA.

[Figure 1. Illustration of IDHEAS-G data generalization and integration: the tasks and context reported by each data source are mapped to failure modes (CFMs) and PIFs, the human error rates are expressed for those failure modes and PIF states, and the data for the failure modes and PIFs are then integrated across sources.]

Figure 1 illustrates the approach of data generalization. IDHEAS-G is inherently capable of generalizing human error data of various sources because (1) IDHEAS-G can model any human task with its basic set of CFMs, (2) the CFMs are structured in different levels of detail, and (3) the PIF structure models the context of a human action with high-level PIFs and detailed PIF attributes. For example, two data sources have human error data for different kinds of tasks and in different contexts, but the failure of the tasks can be represented with the applicable IDHEAS-G CFMs, and the context can be represented with the relevant PIF attributes. Thus, both data sources provide human error information with respect to the common sets of CFMs and PIF attributes. Generalization of human error data refers to the process of mapping the data source into the corresponding CFMs and PIFs.

4.2 Process of generalizing human error data

The process of data generalization is essentially the same as that of performing a qualitative HRA using IDHEAS-G. The following process, as illustrated in Figure 1, is adapted from IDHEAS-G for generalizing human error data:

  • Analyzing the data source. This includes identifying the tasks of which human error information is reported, analyzing the context, characterizing the tasks, and assessing the time uncertainties of the tasks.
  • Mapping the data onto the IDHEAS-G structure. This includes representing the reported human errors of the tasks with applicable CFMs and representing the context of the tasks with PIF attributes.
  • Analyzing recovery of human failures and dependency between human actions for events. Such information is often available in operational and simulation data.

  • Analyzing uncertainties in the data source and the mapping process.

4.3 Examples of generalizing human error data

IDHEAS-DATA has 27 tables, referred to as IDTABLEs, to document different types of human error data. The details are described in the next section. A base HEP IDTABLE documents human error data in terms of the associated CFMs and PIF attributes. Each row of the IDTABLE is referenced as one datapoint, which may consist of one or several reported human error rates at different states of the PIF attribute. Each datapoint comes from one data source such as a technical report or a research paper. One data source may contain multiple datapoints for the same or different IDTABLEs because the reported study may have examined human error rates for different tasks or different PIF attributes. The columns of the table document the following dimensions of information for every datapoint:

  • Column 1: the base PIF attribute for the reported human error rates
  • Column 2: the applicable CFMs of the reported human error data - The CFMs are labeled as D, U, DM, E, and T for failure of Detection, Understanding, Decisionmaking, Action Execution, and Interteam Coordination, respectively. If the task for which the human error rates were reported contains more than one CFM, then the labels of all the applicable CFMs are presented in Column 2.
  • Column 3: human error rates - The human error rates reported in the data source. The error rates are percent of errors unless specified otherwise.
  • Column 4: the tasks for which the human error rates were reported in the data source, along with the definition of the human errors measured for the tasks.
  • Column 5: PIF attribute measure - The task-specific factor or variable used in the data source under which the tasks were performed and human error rates were measured.
  • Column 6: Other PIFs that are also present in the tasks, and uncertainties - In addition to the PIF attribute that was under study, the context of the tasks in a data source may have other PIF attributes present during task performance; therefore, they would contribute to the reported error rates. Column 6 documents the other PIF attributes that were present. In particular, Column 6 documents whether the tasks were performed under time constraints. Information about the time availability is important to infer the base HEPs from the reported human error data. If the time available is inadequate, then a reported human error rate corresponds to the probabilistic sum of the base HEPs and the error probability due to inadequate time (Pt). Column 6 also documents the uncertainties in the data source and in the mapping to the CFMs and PIF attributes. The uncertainties would affect how the reported error rates are to be integrated to inform base HEPs.

  • Column 7: The data source reference.

Next is an example to demonstrate the process of generalizing human error data to a base HEP IDTABLE. The data source is a report, The Outcome of [Air Traffic Control] Message Complexity on Pilot Readback Performance, by Prinzo et al. [8]. The study analyzed aircraft pilot communication errors and reported the error rates at varying message complexities of pilot communication. Table 2 shows the process of generalizing the data to IDTABLE-3 for Task complexity.

Table 2. Example of generalizing human error data

Analyze the data source: Prinzo et al. [8]. The task is that pilots listen to and read back messages from air traffic controllers. The pilots hold the information in their memory and read back at the end of the transmission. The cognitive activities involved are perceiving information and communicating it. The pilots perform the task individually without peer-checking, and the tasks are performed without time constraints.

Readback errors are defined as misreading or missing key messages. Message complexity is defined as the number of key messages in one transmission. The study calculates percent of readback errors at different levels of message complexity from thousands of transmissions.

Identified human error data for generalization: The readback error rates at different message complexity levels are identified as the data for this entry.

Applicable CFMs: The CFM for readback errors is failure of Understanding. While the task is to listen to and read back messages, the cognitive activities required are identifying, comprehending, and relating all the key messages in one transmission. Those are the elements in the macrocognitive function Understanding.

Relevant PIF attributes: The primary PIF is Task complexity. The PIF attribute is C11, the number of key messages to be kept. Another PIF present is the Work Process attribute, Lack of verification or peer-checking.

Other PIF attributes present: Some transmissions may be performed with the presence of other PIF attributes such as distraction, stress, or mental fatigue. Those PIFs were not prevalent in the transmissions analyzed but could increase the overall error rates. Pilots' flying experience was not correlated with the error rates.

Uncertainties in the data and mapping: The source audio transmissions are a mixture of normal and emergent operation.

The analysis results are documented in IDTABLE-3 as one datapoint. Table 3 shows the information documented for this datapoint. All the information items form one row of the IDTABLE; the seven columns are numbered for reference.

Table 3. Sample of IDTABLE-3: Base HEPs for Task Complexity

(1) PIF attribute: C11
(2) CFM: U
(3) Error rates (by number of key messages in one transmission): 5 - 0.036; 8 - 0.05; 11 - 0.11; 15 - 0.23; 17 - 0.32; >20 - greater than 0.5
(4) Task (and error measure): Pilots listen to and read back key messages; readback error rate
(5) PIF measure: Message complexity - number of key messages in one transmission
(6) Other PIFs (and uncertainty): Mixture of normal and emergent operation, so other PIF attributes may exist
(7) REF: [8]

A data source in Category A (see Section 2) is the SACADA database, which collects NPP operator task performance data in simulator training for requalification examination. Using the SACADA data available until April 2019, we calculated the rates of unsatisfactory performance (UNSAT) for training objective tasks when a situational factor is checked versus not checked. The UNSAT rates are generalized in IDTABLE-1 for the applicable CFMs of the tasks and the PIF attributes representing the situational factors. For example, the UNSAT rate for diagnosis tasks is 1.2E-1 and the UNSAT rate for decisionmaking is 1.1E-2, where the familiarity factor in SACADA was characterized as Anomaly among the three available options (Standard, Novel, and Anomaly). The generalized datapoints are shown in Table 4.

Table 4. Example of datapoints in IDTABLE-1 for Scenario Familiarity

Datapoint 1: (1) PIF attribute: SF3.1; (2) CFM: U; (3) Error rate: 1.2E-1 (8/69); (4) Task (and error measure): NPP operators diagnose in simulator training; (5) PIF measure: Anomaly scenario; (6) Other PIFs (and uncertainty): Other PIFs may exist; (7) REF: [4]

Datapoint 2: (1) PIF attribute: SF3.1; (2) CFM: DM; (3) Error rate: 1.1E-2 (1/92); (4) Task (and error measure): NPP operators decisionmaking in simulator training; (5) PIF measure: Anomaly scenarios; (6) Other PIFs (and uncertainty): Other PIFs may exist; (7) REF: [4]
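For illustration, one of these rows could be captured in code as below. The class and field names are ours and are not part of IDHEAS-DATA; the values restate the Prinzo et al. datapoint from Table 3.

from dataclasses import dataclass
from typing import Dict, List

@dataclass
class IDTableDatapoint:
    """One row of a base-HEP IDTABLE, following Columns 1-7 described in Section 4.3."""
    pif_attribute: str                 # Column 1: base PIF attribute
    cfms: List[str]                    # Column 2: applicable CFMs (D, U, DM, E, T)
    error_rates: Dict[str, float]      # Column 3: error rate per level of the PIF measure
    task: str                          # Column 4: task and definition of the measured error
    pif_measure: str                   # Column 5: task-specific factor varied in the source
    other_pifs_and_uncertainty: str    # Column 6: other PIFs present, time constraints, uncertainties
    reference: str                     # Column 7: data source reference

# The Prinzo et al. readback datapoint (Table 3) expressed in this form.
prinzo_datapoint = IDTableDatapoint(
    pif_attribute="C11",
    cfms=["U"],
    error_rates={"5": 0.036, "8": 0.05, "11": 0.11, "15": 0.23, "17": 0.32, ">20": 0.5},  # ">20" reported as greater than 0.5
    task="Pilots listen to and read back key messages in one transmission; readback error rate",
    pif_measure="Message complexity (number of key messages per transmission)",
    other_pifs_and_uncertainty="Mixture of normal and emergent operation; other PIF attributes may exist",
    reference="[8]",
)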

5. The structure of IDHEAS-DATA

Overall, IDHEAS-DATA includes 27 tables, referred to as IDHEAS-DATA TABLEs (IDTABLEs), each documenting the data in one element of IDHEAS-G. The generalized human error data for each PIF are documented in one IDTABLE. In addition, one IDTABLE documents the lowest human error rates for each CFM when all the PIFs have no impact on HEPs. The data in each of these IDTABLEs are integrated for calculating the HEPs of CFMs. The IDHEAS-G HEP quantification model also calculates the HEP due to time uncertainties based on the time available and time required for a human action.

Thus, there are two IDTABLES documenting the time required for operators completing typical human actions in nuclear power plant operations and the effects of PIFs on the time required. The HEP quantification model also addresses crediting recovery of human failures in an event. Moreover, IDHEAS-G has a dependency model to evaluate the effect of dependency between human actions on HEPs. IDHEAS-DATA documents data sources in these areas as well.

The IDTABLEs are briefly described as follows:

IDTABLE-1 to IDTABLE-3 are the base HEP Tables. IDTABLE-1 is for base HEPs of Scenario Familiarity, IDTABLE-2 is for base HEPs of Information Availability and Reliability, and IDTABLE-3 is for base HEPs for Task Complexity. The data of human error rates from various sources are analyzed for the applicable CFMs and relevant attributes of the three base PIFs.

IDTABLE-4 to IDTABLE-20 are for PIF weights. They document human error rates for the CFMs at different PIF attributes of the other 17 PIFs. The data sources contain human error rates or task performance measures varying with specific PIF attributes. The attribute weight can be inferred from the data in which human error rates were measured as a PIF attribute was varied from a no or low impact status to a high impact status.

IDTABLE-21 is for Lowest HEPs of the CFMs. It documents human error rates when the tasks were performed under the condition that none of the known PIF attributes was present so that all the PIFs presumably had no impact on human errors. The data inform the lowest HEPs for the CFMs.

IDTABLE-22 is for PIF Interaction. It documents human error data on PIF interaction. The data are from studies in which human error rates were measured as two or more PIF attributes varied independently as well as jointly. The data inform the PIF interaction factor C in the HEP quantification model (i.e., Equation (2)).

IDTABLE-23 is for Distribution of Task Completion Time, i.e., time required to perform a human action. IDHEAS-G has a time uncertainty model that calculates Pt as the convolution of the distributions of time required and time available. The data can be used to validate the IDHEAS-G time uncertainty model and inform the estimation of the time required distribution.
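The sketch below shows one way that convolution could be carried out numerically by sampling. The lognormal distributions and their parameters are illustrative assumptions on our part; IDHEAS-G leaves the choice of distributions for time required and time available to the analyst, informed by data such as IDTABLE-23.

import numpy as np

def estimate_pt(time_required_samples, time_available_samples):
    """Estimate Pt = P(time required > time available) from samples of the two distributions."""
    return float(np.mean(time_required_samples > time_available_samples))

rng = np.random.default_rng(seed=0)
n = 100_000
# Illustrative distributions only (minutes); not taken from IDTABLE-23 or IDTABLE-24.
time_required = rng.lognormal(mean=np.log(20.0), sigma=0.3, size=n)
time_available = rng.lognormal(mean=np.log(30.0), sigma=0.2, size=n)

print(estimate_pt(time_required, time_available))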

IDTABLE-24 is for Modification of Task Completion Time. It documents empirical data on how various factors modify the time required to complete a task. The IDHEAS-G time uncertainty model requires analysts to estimate the distribution of the time required for a human action. Many factors, such as environmental conditions, can modify the center, range, and/or shape of the time distribution. IDTABLE-24 provides the empirical basis for analysts to estimate the time-required distribution under different contexts.

IDTABLE-25 is for Dependency of Human Actions. It documents instances and empirical data on dependency between human actions. IDTABLE-25 provides the technical basis and reference information for HRA analysts to evaluate dependency between human actions.

IDTABLE-26 is for Recovery of Human Actions. It documents instances of recovery actions. Currently, the IDHEAS-G HEP quantification model uses the factor Re to represent crediting recovery. The information can help HRA analysts identify, assess, and credit recovery actions.

IDTABLE-27 is for Main Drivers to Human Failure Events. It documents empirical evidence on main drivers to human failures in nuclear power plant events. The information should guide HRA analysts to capture the main drivers and to not overlook important drivers in human events.

6. Discussion and concluding remarks

IDHEAS-DATA uses the IDHEAS-G framework to organize and document the characteristics of human error data. The structure of IDHEAS-DATA can generalize human error data of various sources into a common format that can be used for HEP quantification. The data generalized before July 2019 have been used to support HEP calculation in IDHEAS-ECA.

Only a portion of available nuclear operation and simulation data were generalized as of 2019. The effects of many PIF attributes on human error rates need to be analyzed before the data can be generalized to IDHEAS-DATA. Moreover, the Halden Reactor Project has conducted NPP simulation experiments over the last three decades. Most of the experimental results are not generalized to IDHEAS-DATA because the studies measured operator task performance indicators such as situational awareness and levels of workload. Using such data requires establishing the quantitative relationships between the performance indicators and human error rates based on empirical evidence in the experiments. The NRC staff expects that these efforts would greatly enrich IDHEAS-DATA.

The structure of IDHEAS-DATA is generic because it is based on the IDHEAS-G CFMs and PIFs that model human cognition and behavior. IDHEAS-DATA is also flexible because its 27 IDTABLEs are independent of each other and the datapoints in each IDTABLE can be at different levels of detail. These features make IDHEAS-DATA a candidate for serving as a hub for HRA data exchange.

Different human performance databases can be generalized to IDHEAS-DATA, and the generalized data can be used for different HRA applications.

References

[1] Xing, J., Y.J. Chang, and J. DeJesus Segarra, The General Methodology of an Integrated Human Event Analysis System (IDHEAS-G), U.S. Nuclear Regulatory Commission, NUREG-2198 (ADAMS Accession No. ML21127A272), May 2021.

[2] Xing, J., Y.J. Chang, and J. DeJesus Segarra, Human Error Data for Integrated Human Event Analysis System (IDHEAS-DATA), U.S. Nuclear Regulatory Commission, RIL-2022-xx (in preparation), 2022.

[3] Xing, J., Y.J. Chang, and J. DeJesus Segarra, Integrated Human Event Analysis System for Event and Condition Assessment (IDHEAS-ECA), U.S. Nuclear Regulatory Commission, RIL-2020-02 (ADAMS Accession No. ML20016A481), Feb. 2020.

[4] Chang, Y.J., D. Bley, L. Criscione, B. Kirwan, A. Mosleh, T. Madary, R. Nowell, R. Richards, E.M. Roth, S. Sieben, and A. Zoulis, The SACADA database for human reliability and human performance. Reliability Engineering & System Safety, 2014. 125: p. 117-133.

[5] Jung, W., J. Park, Y. Kim, S.Y. Choi, and S. Kim, HuREX - A Framework of HRA Data Collection from Simulators in Nuclear Power Plants. Reliability Engineering & System Safety, 2018.

[6] Preischl, W. and M. Hellmich, Human error probabilities from operational experience of German nuclear power plants. Reliability Engineering & System Safety, 2013. 109: p. 150-159.

[7] Preischl, W. and M. Hellmich, Human error probabilities from operational experience of German nuclear power plants, Part II. Reliability Engineering & System Safety, 2016. 148: p. 44-56.

[8] Prinzo, O.V., A.M. Hendrix, and R. Hendrix, The Outcome of [Air Traffic Control] Message Complexity on Pilot Readback Performance, Federal Aviation Administration, DOT/FAA/AM-06-25, 2006.

[9] Patten, C.J.D., A. Kircher, J. Östlund, L. Nilsson, and O. Svenson, Driver experience and cognitive workload in different traffic environments. Accident Analysis & Prevention, 2006. 38(5): p. 887-894.