ML23249A070

NRC Public AI Workshop Presentation
Issue date: 09/19/2023
From: Matt Dennis, NRC/RES/DSA



Data Science and AI Regulatory Applications Public Workshop
AI Characteristics for Regulatory Consideration
September 19, 2023
Matt Dennis, U.S. Nuclear Regulatory Commission, Office of Nuclear Regulatory Research

Outline

  • Artificial Intelligence (AI) Landscape and the NRC
  • AI Strategic Plan Development Background and Overview
  • AI Characteristics for Regulatory Consideration
  • Moving Forward and Stakeholder Engagement

Artificial Intelligence (AI) Landscape and the NRC

  • Other Considerations and Opportunities (External)

- OMB EO 13960 reporting requirements for implementing agencies
- NRC Evidence Building Priority Questions
- Wide range of AI meetings, conferences, and activities

  • Nuclear Industry (External)

- Industry wants to use AI
- AI-based tools ranging from AI embedded in commercial applications to custom programming

  • Internal to the NRC

- Internal interest in researching AI activities
- AI Strategic Plan to prepare staff to review AI

AI Strategic Plan Development Background

  • Formed an interdisciplinary team of AI subject matter experts (2021)

- Insights gained from Data Science and Artificial Intelligence Regulatory Applications Workshops*

- Engaged across the agency

  • Proactively researching AI usage across the nuclear industry, Federal government, and international counterparts

- Leveraging MOUs (e.g., EPRI and DOE)

- Maintaining federal awareness (e.g., FDA and NIST)

- International collaboration (e.g., CNSC, ONR and IAEA)

  • Early stakeholder engagement and data gathering to execute the AI Strategic Plan

- AI Strategic Plan comment-gathering public meeting (Summer 2022)

- Internal seminars and training opportunities

- Upcoming AI workshops

AI Strategic Plan Overview

Vision and Outcomes

  • Continue to keep pace with technological innovations to ensure the safe and secure use of AI in NRC-regulated activities
  • AI framework and skilled workforce to review and evaluate the use of AI in NRC-regulated activities

The AI Strategic Plan consists of five strategic goals:

  • Goal 1: Ensure NRC Readiness for Regulatory Decisionmaking
  • Goal 2: Establish an Organizational Framework to Review AI Applications
  • Goal 3: Strengthen and Expand AI Partnerships
  • Goal 4: Cultivate an AI-Proficient Workforce
  • Goal 5: Pursue Use Cases to Build an AI Foundation Across the NRC

Draft available at ML22175A206; final available at ML23132A305

KEEPING THE END IN MIND - DETERMINING THE DEPTH OF REVIEW

Goal 1. Ensure NRC Readiness for Regulatory Decisionmaking

  • AI Research: Determine approach to assess AI trustworthiness (e.g., XAI); development of AI standards and identification of where gaps exist
  • Framework and Tools: Clarify the process and procedures for AI regulatory reviews and oversight; consider options for long-range changes for AI regulatory reviews and oversight that may require rulemaking
  • Communications: Public meetings to inform key activities; agency-wide internal communications and coordination to harmonize AI activities

Outcome: Develop an AI framework to review the use of AI in NRC-regulated activities

Regulatory Considerations for AI Applications

  • Table 1, Notional AI and Autonomy Levels in Commercial Nuclear Activities

- Notional framework to consider the levels of human-machine interaction with AI systems

- Serves as a starting point in this public meeting to further discuss the variety of AI attributes that may affect regulatory considerations at each notional level

  • AI Attributes Working Group

- Formed May 2023 and includes members from agency offices

- Paul Krohn, Matt Dennis, Trey Hathaway, Jonathan Barr, Reed Anzalone, Josh Kaizer, Dave Desaulniers, Jesse Seymour, Tanvir Siddiky, Joshua Smith, Scott Rutenkroger, David Strickland, and Howard Benowitz

OFFICIAL USE ONLY - INTERNAL INFORMATION

Notional AI and Autonomy Levels in Commercial Nuclear Activities

  • Level 0 - AI Not Used: No AI or autonomy integration in systems or processes
  • Level 1 - Insight (human decision-making assisted by a machine): AI integration in systems is used for optimization, operational guidance, or business process automation that would not affect plant safety/security and control
  • Level 2 - Collaboration (human decision-making augmented by a machine): AI integration in systems where algorithms make recommendations that could affect plant safety/security and control; recommendations are vetted and carried out by a human decisionmaker
  • Level 3 - Operation (machine decision-making supervised by a human): AI and autonomy integration in systems where algorithms make decisions and conduct operations, with human oversight, that could affect plant safety/security and control
  • Level 4 - Fully Autonomous (machine decision-making with no human intervention): Fully autonomous AI in systems where the algorithm is responsible for operation, control, and intelligent adaptation, without reliance on human intervention or oversight, that could affect plant safety/security and control

The levels span a spectrum from human involvement to machine independence. A common understanding of the levels is key for regulatory readiness.
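The notional levels form an ordered scale, which can be sketched as a small lookup. This is a minimal illustration only; the enum names and the safety-impact helper below are assumptions for the sketch, not NRC definitions.

```python
from enum import IntEnum

class AutonomyLevel(IntEnum):
    """Illustrative encoding of the notional AI and autonomy levels in Table 1."""
    AI_NOT_USED = 0       # No AI or autonomy integration
    INSIGHT = 1           # Human decision-making assisted by a machine
    COLLABORATION = 2     # Human decision-making augmented by a machine
    OPERATION = 3         # Machine decision-making supervised by a human
    FULLY_AUTONOMOUS = 4  # Machine decision-making with no human intervention

def could_affect_safety(level: AutonomyLevel) -> bool:
    """Per the table, Level 1 uses would not affect plant safety/security;
    Levels 2 and above could. Hypothetical helper for illustration."""
    return level >= AutonomyLevel.COLLABORATION
```

Because the levels are ordered, `IntEnum` allows direct comparisons (e.g., screening an application by whether it sits at or above a given level).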

Disclaimer to AI Regulatory Considerations

  • Considering NIST AI Risk Management Framework (RMF)* and other frameworks for future alignment

  • The following AI characteristics and considerations for developing AI systems do not represent an exhaustive list of categories for consideration
  • The following AI characteristics are defined by a range of implementation levels that may impact regulatory decision-making
  • NRC has not endorsed using the NIST AI RMF as a means to meet current or future regulation

AI Characteristics for Regulatory Consideration

  • Safety Significance
  • AI Autonomy
  • Security
  • Explainability
  • Model Lifecycle
  • Regulatory Activity
  • Regulatory Approval
  • AI Maturity

Safety Significance

  • What is the safety significance of the use of AI?
  • Safety Principles using Risk or Determinism - In the absence of the ability to quantify risk, there are good engineering principles (e.g., defense-in-depth) that can be used to guard against unintended consequences.
  • Failure and Consequence Identification - As a first step in AI systems engineering, a formalized process to quantify the hazards and modes of operation can be considered to ensure adequate system design.

No impact on safety or implemented safety functions ↔ Potential consequences with significant safety implications

AI Autonomy

  • A transition point exists where AI controls the process without human intervention
  • A graded approach that considers a variety of AI characteristics may determine the level of regulatory review required

Automation with No AI Utilized ↔ Complete AI-Driven Autonomy

Clarifying Automation, Autonomy, and AI

  • AI technologies can enable autonomous systems

- Not all uses of AI are fully autonomous; many augment human decision-making rather than replace it.

- Higher autonomy levels indicate less reliance on human intervention or oversight and, therefore, may require greater regulatory scrutiny of the AI system.

  • Multiple definitions exist; however, it is important to have a clear understanding of the differences between automation and autonomy

- Automation - considered to be a system that automatically acts on a specific task according to pre-defined, prescriptive rules. For example, reactor protection systems are automatically actuated when process parameters exceed certain defined limits.

- Autonomy - a set of intelligence-based capabilities that allows the system to respond to situations that were not pre-programmed or anticipated (i.e., decision-based responses) prior to system deployment. Autonomous systems have a degree of self-governance and self-directed behavior resulting in the ability to compensate for system failures without external intervention.
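The automation definition above (a system acting on pre-defined, prescriptive rules, as in a reactor protection system actuating on a limit) can be made concrete with a short sketch. The setpoint value and names here are hypothetical, chosen only to illustrate that nothing is learned or adapted:

```python
# Automation as a fixed, prescriptive rule: the trip demand is a pure
# function of one process parameter against a pre-defined setpoint.
# The value below is a hypothetical illustration, not a real limit.
TRIP_SETPOINT_PRESSURE = 2385.0  # hypothetical setpoint, psig

def rps_trip_demand(pressure_psig: float) -> bool:
    """Actuate when the parameter exceeds the defined limit; nothing more."""
    return pressure_psig > TRIP_SETPOINT_PRESSURE
```

An autonomous system, by contrast, would decide how to respond to conditions not enumerated in advance; no fixed rule like the one above fully describes its behavior.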


AI applied to Automation and Autonomy

  • Manual Operation
  • Automation Assistance
  • Blended Automation and Dynamic Adaptation
  • Autonomous Operation with Backup Intervention
  • Autonomous Operation with Limited Intervention within Constraints
  • Autonomous Operation with No Intervention

Increasing levels of automation ↔ Increasing levels of autonomy

Graphics source: https://www.businessinsider.com/what-are-the-different-levels-of-driverless-cars-2016-10

Security

  • Can others influence the AI?
  • Open-Source Tools - The use of open-source tools is not precluded, but using non-specialized software solutions means that steps must be taken to rigorously confirm the safety and security of the implemented solution.

Open access to model, data, and code ↔ Closed access and fully isolated

Explainability

  • To what degree do we understand how the AI is working?
  • Establishing a Trustworthy System - Explainability exposes the chain of decision-making for potentially complex logic in a form that is easily interpretable, even by those unfamiliar with the AI system design. This applies to all stakeholders, including reviewers (e.g., regulators) as well as system users.

Black Box AI System (only inputs and outputs visible) ↔ Visibility into the What, How, and Why within an AI System

Model Lifecycle

  • How often is the AI updated and maintained?
  • Data Provenance - Based on a graded approach, the modeling data may have a variety of pedigrees based on the application area (e.g., safety significance).
  • Model Updating - Models need to be maintained to avoid performance degradation and kept consistent with the pre-determined change control and notification process for that application.

Frozen or Locked Model ↔ Continuous Model Updating

Regulatory Activity

  • Is AI being used in a regulated activity?
  • Human and Organizational Factors - The operational context needs to consider the handover to human operation, the immediacy of required human action, and whether placement in a safe, stable state is required.

Application Domain Outside Regulated Activity ↔ AI Supports Regulated Activity

Regulatory Approval

  • What is the level of regulatory approval required?
  • Extensive Application Areas - A variety of regulatory requirements apply to various potential AI application areas. Existing requirements may range from evaluation of sufficient functional performance up to specific requirements to ensure AI system safety and security.

Performance Requirements ↔ Prescriptive Requirements for Methods or Approaches

AI Maturity

  • Is AI commonly used in this way?
  • Existing Guidance - Traditional safety, security, software, and systems engineering practices are still applicable as the starting point for good engineering practice.

Novel AI Application with Minimal Experience ↔ Commonplace AI Application with Extensive Usage

Summary Considerations (1/2)

  • Existing Guidance - Traditional safety, security, software, and systems engineering practices are still applicable as the starting point for good engineering practice.
  • Establishing a Trustworthy System - Explainability exposes the chain of decision-making for potentially complex logic in a form that is easily interpretable, even by those unfamiliar with the AI system design. This applies to all stakeholders, including reviewers (e.g., regulators) as well as system users.
  • Safety Principles using Risk or Determinism - In the absence of the ability to quantify risk, there are good engineering principles (e.g., defense-in-depth) that can be used to guard against unintended consequences.
  • Open-Source Tools - The use of open-source tools is not precluded, but using non-specialized software solutions means that steps must be taken to rigorously confirm the safety and security of the implemented solution.


Summary Considerations (2/2)

  • Failure and Consequence Identification - As a first step in AI systems engineering, a formalized process to quantify the hazards and modes of operation can be considered to ensure adequate system design.
  • Data Provenance - Based on a graded approach, the modeling data may have a variety of pedigrees based on the application area (e.g., safety significance).
  • Model Updating - Models need to be maintained to avoid performance degradation and kept consistent with the pre-determined change control and notification process for that application.
  • Human and Organizational Factors - The operational context needs to consider the handover to human operation, the immediacy of required human action, and whether placement in a safe, stable state is required.
  • Extensive Application Areas - A variety of regulatory requirements apply to various potential AI application areas. Existing requirements may range from evaluation of sufficient functional performance up to specific requirements to ensure AI system safety and security.


NRC AI Considerations

Current

  • Traceable and Auditable Evaluation Methodologies
  • Understanding Licensee and Applicant AI Usage
  • Regulatory Guidance and Decision-Making Development

Future

  • Differentiating AI Usage for Reactor Design Versus Autonomous Control
  • Explainable AI and Trustworthy AI - Reliability and Assurance
  • Internal AI Budget Predicated on Emergent Industry Applications

Moving Forward and Stakeholder Engagement

  • Continued safety and security in the nuclear industry is paramount
  • Embrace new and innovative ways to meet the NRC's mission
  • Maintain strong partnerships with domestic and international counterparts
  • Engage with the NRC early and often on plans and operating experience

Future Activities

  • Regulatory framework applicability assessment of artificial intelligence in nuclear applications (Summer 2023-Spring 2024)

Contact Information

  • Matt Dennis
Data Scientist
Office of Nuclear Regulatory Research
matthew.dennis@nrc.gov

  • Luis Betancourt
Chief, Accident Analysis Branch
Division of Systems Analysis
Office of Nuclear Regulatory Research
luis.betancourt@nrc.gov

  • Victor Hall
Deputy Division Director
Division of Systems Analysis
Office of Nuclear Regulatory Research
victor.hall@nrc.gov

BACKUP SLIDES

Acronyms

  • AI - Artificial Intelligence
  • AICoP - Artificial Intelligence Community of Practice
  • AISC - Artificial Intelligence Steering Committee
  • DOE - U.S. Department of Energy
  • EO - Executive Order
  • EPRI - Electric Power Research Institute
  • FDA - U.S. Food and Drug Administration
  • FRN - Federal Register Notice
  • FY - Fiscal Year
  • GAO - U.S. Government Accountability Office
  • GSA - U.S. General Services Administration
  • IAEA - International Atomic Energy Agency
  • IEC - International Electrotechnical Commission
  • MOU - Memorandum of Understanding
  • NEI - Nuclear Energy Institute
  • NIST - National Institute of Standards and Technology
  • NLP - Natural Language Processing
  • NRC - U.S. Nuclear Regulatory Commission
  • OMB - U.S. Office of Management and Budget
  • ONR - U.K. Office for Nuclear Regulation
  • XAI - Explainable Artificial Intelligence

Other Regulatory and Risk Management Approaches

  • United Kingdom AI Regulation: A Pro-Innovation Approach
  • European Union AI Act
  • U.S. Food and Drug Administration AI Regulatory Framework for Medical Devices
  • U.S. Department of Health and Human Services Trustworthy AI Playbook
  • U.S. National Institute of Standards and Technology AI Risk Management Framework
  • U.S. Department of Energy AI Risk Management Playbook

Additional AI References

  • United Kingdom AI Standards Hub
  • United Kingdom Centre for Data Ethics and Innovation (CDEI) AI Assurance Techniques
  • OECD AI Policy Observatory
  • Partnership on AI
  • AI Incident Database