ML24081A110

| Person / Time | |
|---|---|
| Issue date: | 03/18/2024 |
| From: | Doug Eskins, NRC/RES/DE |
| Contact: | Doug Eskins, 301-415-3866 |
| Shared Package: | ML24075A025 |
| Download: | ML24081A110 (20) |
Some Issues in the Assurability of Safety-Critical Digital Systems
Part 1: Assurance and AI

Doug Eskins, Senior Computer Engineer
U.S. Nuclear Regulatory Commission, Office of Nuclear Regulatory Research

The views expressed herein are those of the author and do not represent an official position of the U.S. NRC.
IAEA Technical Meeting EVT2300917 on Deployment of Artificial Intelligence Solutions for the Nuclear Power Industry: Considerations and Guidance, 18-21 March 2024, U.S. Nuclear Regulatory Commission Headquarters, Rockville, MD, USA
Assurance
- A claim (about X) is supported by sound, valid evidence (under the assumptions and conditions identified in Y).
- X could be a system design or an O&M process.
- Y is a set of conditions and assumptions under which the claim holds.
- Assurance is sometimes referenced to a CAE triplet (claim, arguments, evidence).
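The CAE triplet above can be sketched as a small data structure. This is an illustrative model only, not an NRC or standardized encoding; the class names, fields, and the example claim about a trip function are assumptions for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class Claim:
    """A claim about X, scoped by the conditions/assumptions Y under which it holds."""
    statement: str
    conditions: list[str] = field(default_factory=list)  # the set Y

@dataclass
class AssuranceCase:
    """A CAE triplet: claim, arguments, evidence."""
    claim: Claim
    arguments: list[str] = field(default_factory=list)
    evidence: list[str] = field(default_factory=list)

    def is_supported(self) -> bool:
        # Minimal structural check: at least one argument backed by evidence.
        # Real assurance also requires the evidence to be sound and valid.
        return bool(self.arguments) and bool(self.evidence)

case = AssuranceCase(
    claim=Claim("The trip function actuates within its timing budget",
                conditions=["nominal power supply", "validated sensor inputs"]),
    arguments=["Timing analysis bounds worst-case latency"],
    evidence=["Static WCET report", "Hardware-in-the-loop test log"],
)
print(case.is_supported())  # True
```

The point of the sketch is that a claim is never free-standing: it carries its scope Y explicitly, so an evaluator can check whether the conditions still hold before accepting the evidence.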
Artificial Intelligence A machine-based system that can go beyond defined results and scenarios and has the ability to emulate human-like perception, cognition, planning, learning, communication, or physical action (NRC AI Strategic Plan).
Note: Each human-like capability is referenced to some (domain-specific) application.
AI & Assurance
- How can AI be assured?
- How can AI be used for assurance?
Assuring AI
- What are the bounds of application?
- In nuclear: safety or non-safety, design or O&M?
- Is assurance comparable between humans and AI?
- How will the CAE needed to assure an application differ for AI?
- Example: Can non-interference with a safety function be assured?
AI for Assurance
- Can AI facilitate the CAE needed for assurance?
- Data collection, processing, and analysis to support Evidence generation
- System modeling to support Argument construction and validation
- System and domain analysis to ensure a necessary and sufficient set of Claims to support assurance.
Assuring AI for Nuclear Cybersecurity Applications
- Ongoing NRC research exploring the use of AI to characterize nuclear cybersecurity states.
- Issues encountered relevant to assurance of cybersecurity classification models:
- Data artifacts & joint IT/OT data
- Model performance measures & coverage of plant states
- Answers can be very application dependent
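One of the coverage issues above can be made concrete with a toy check: does the labeled training data for a cybersecurity classification model cover all the plant states it must handle? The state names and the coverage measure are illustrative assumptions, not the NRC research method.

```python
# Hypothetical coverage check for a plant-state classifier's training data.
# State names are illustrative only.
required_states = {"startup", "full_power", "shutdown", "refueling"}
training_states = {"full_power", "shutdown"}

uncovered = required_states - training_states
coverage = 1 - len(uncovered) / len(required_states)
print(f"coverage={coverage:.2f}, uncovered={sorted(uncovered)}")
# -> coverage=0.50, uncovered=['refueling', 'startup']
```

A model whose performance measures were computed only over covered states says nothing assurable about the uncovered ones, which is one reason answers are so application dependent.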
Some Issues in the Assurability of Safety-Critical Digital Systems
Part 2: Knowledge Engineering Is on the Back Burner

Sushil Birla, Senior Technical Advisor
U.S. Nuclear Regulatory Commission, Office of Nuclear Regulatory Research

The views expressed herein are those of the author and do not represent an official position of the U.S. NRC.
IAEA Technical Meeting EVT2300917 on Deployment of Artificial Intelligence Solutions for the Nuclear Power Industry: Considerations and Guidance, 18-21 March 2024, U.S. Nuclear Regulatory Commission Headquarters, Rockville, MD, USA
Distinguish between data, information & knowledge

- Data: values of properties as acquired; raw or curated; not yet processed, not yet organized.
- Information: processed, organized, curated datasets; contextualized; meaningfully accessible.
- Knowledge: justified true belief; verifiable; predictive; cause-effect relationships (e.g., laws of physics); generalization within bounds.

Database (DB) vs. KnowledgeBase (KB): the KB holds a rule-set, which may be deterministic or fuzzy.
Knowledge Engineering (KE)

KE acquires, organizes, and validates knowledge for a KnowledgeBase (KB) within a well-defined domain D, to facilitate problem-solving and decision-making for a specific case, situation, or scenario in domain D. The KB, combined with an inference engine and a reasoning algorithm, turns information into a decision.
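A minimal sketch of the KB-plus-inference-engine idea: a deterministic rule-set and a forward-chaining reasoner that fires rules until no new facts are derived. The rule contents (sensor-channel names) are illustrative assumptions, not drawn from the presentation.

```python
# Deterministic rule-set: each rule is (premises, conclusion).
# Rule contents are illustrative only.
rules = [
    ({"sensor_fault", "channel_A"}, "channel_A_degraded"),
    ({"channel_A_degraded"}, "switch_to_channel_B"),
]

def infer(facts: set[str]) -> set[str]:
    """Forward chaining: repeatedly fire any rule whose premises all hold."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

print(sorted(infer({"sensor_fault", "channel_A"})))
# -> ['channel_A', 'channel_A_degraded', 'sensor_fault', 'switch_to_channel_B']
```

Because the rule-set is deterministic and the domain well-defined, the derivation is repeatable and auditable, which is what makes this style of KB attractive for assurance arguments; a fuzzy rule-set would instead attach degrees of membership to each conclusion.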
Knowledge Representation (KR)

KR: the field of artificial intelligence (AI) dedicated to representing knowledge about the world in a form that can be mechanized to solve complex tasks.

Example means of KR: an ontology, a set of concepts and categories in a subject area or domain that shows their properties and the relations between them.

KR formalisms - characteristics of interest: expressivity; tractability; comprehensibility; usability; learnability.
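A toy example of an ontology as a KR formalism: concepts linked by is-a relations, with a transitive subsumption query. The concept names are illustrative assumptions; a real ontology would use a dedicated formalism (e.g., OWL) rather than a Python dict.

```python
# Illustrative is-a hierarchy for a toy I&C domain ontology.
is_a = {
    "PressureSensor": "Sensor",
    "Sensor": "Component",
    "TripLogic": "Component",
}

def subsumes(general: str, specific: str) -> bool:
    """True if `specific` is-a (possibly indirect) kind of `general`."""
    node = specific
    while node is not None:
        if node == general:
            return True
        node = is_a.get(node)   # walk up the hierarchy; None at the root
    return False

print(subsumes("Component", "PressureSensor"))  # True
print(subsumes("Sensor", "TripLogic"))          # False
```

Even this tiny example shows the expressivity/tractability trade-off listed above: a plain is-a chain is cheap to query but cannot express properties, cardinalities, or constraints that richer formalisms support at higher reasoning cost.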
Domain Engineering

Development, evolution, and sustenance of domain-specific knowledge and artifacts to support the development and evolution of particular systems in the domain. Includes engineering of: domain models, components, methods, tools, and (possibly) asset management.

Reference model source: ISO/IEC 26550:2015(E)
Pre-certifiable objects

An object is certified when an accredited certifying authority evaluates it; rework and learning feed back into the process. Domain-engineered pre-certifiable assets include: people, tools, processes, procedures, methods & techniques, facilities, and other reusable assets (e.g., libraries).
CLARISSA TOOLS

Assurance Case to Logic Program:
- Step 1: ASCE assurance case -> structural & syntactic analysis -> logic program (Prolog, s(CASP))
- Step 2: Semantic reasoning over the logic program -> human-explainable, logically-reasoned assurance case in ASCE

Supporting elements: theories, defeaters, knowledge-assistance engines (e.g., ErgoAI, LLM); a logically integrated case, evidence & theories; a decision-support system; known vulnerabilities; semantic analysis; reasoning, analysis, and synthesis.

DISTRIBUTION C. Distribution authorized to U.S. Government Agencies and their contractors; administrative or operational use; 10/19/2021. Other requests for this document shall be referred to DARPA, I2O.

Source: CLARISSA team presentation at ARCOS meeting, Niskayuna, NY, March 12, 2024
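To illustrate the flavor of Step 1 (an assurance case rendered as a logic program), here is a sketch that serializes a small claim tree into Prolog-style clauses. The predicate names (`supports/2`, `evidence_for/2`) and the claim tree are illustrative assumptions, not the actual CLARISSA encoding.

```python
# Toy assurance-case tree; names are illustrative only.
case = {
    "claim": "system_safe",
    "supported_by": [
        {"claim": "timing_ok", "evidence": "wcet_report"},
        {"claim": "no_interference", "evidence": "partition_test"},
    ],
}

def to_clauses(node, parent=None):
    """Flatten a claim tree into Prolog-style facts a logic engine could load."""
    clauses = []
    if parent:
        clauses.append(f"supports({node['claim']}, {parent}).")
    if "evidence" in node:
        clauses.append(f"evidence_for({node['evidence']}, {node['claim']}).")
    for child in node.get("supported_by", []):
        clauses.extend(to_clauses(child, node["claim"]))
    return clauses

for clause in to_clauses(case):
    print(clause)
```

Once the case is in clause form, a reasoner such as Prolog or s(CASP) can query it mechanically, e.g., ask whether every leaf claim supporting `system_safe` has evidence, or explore what happens when a defeater removes a clause.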
ASSURANCE CASE SYNTHESIS

Synthesis Assistant is a research tool designed to synthesize claims, arguments, and evidence structures from a root or top-level claim.

Given:
- Top-level claim (defined in ErgoAI or a node imported from an ASCE file)
- Definition of the system structure
- Possible defeaters
- Theories used to develop the case
- Evidence for the case

The claim, theories, and evidence are formalised and synthesised in HiLog; the Synthesis Assistant returns a graphical and textual summary, with selection and integration back into Clarissa ASCE.
Supporting information
ISO/IEC 26550 family of standards (Software and systems engineering - product line engineering and management):
- ISO/IEC 26550:2015(E) Reference model for product line engineering and management
- ISO/IEC 26551:2016(E) Tools and methods for product line requirements engineering
- ISO/IEC 26552:2019(E) Tools and methods for product line architecture design
- ISO/IEC 26553:2018(E) Processes and capabilities of methods and tools for domain realization and application realization
- ISO/IEC 26554:2018(E) Methods and tools for domain testing and application testing
- ISO/IEC 26555:2015 Tools and methods for technical management
- ISO/IEC 26556:2018(E) Tools and methods for organizational management
- ISO/IEC 26557:2016(E) Methods and tools for variability mechanisms
- ISO/IEC 26558:2017(E) Methods and tools for variability modeling
- ISO/IEC 26559:2017(E) Methods and tools for variability traceability
- ISO/IEC 26560:2019(E) Methods and tools for product management
- ISO/IEC 26561:2019(E) Methods and tools for technical probe
- ISO/IEC 26562:2019(E) Processes and capabilities of methods and tools for transition management
- ISO/IEC 26563:2022(E) Processes and capabilities of methods and tools for configuration management of assets
- ISO/IEC 26564:2022(E) Methods and tools for product line measurement
- ISO/IEC 26850:2021(E) Methods and tools for the feature-based approach to software and systems product line engineering
- ISO/IEC 26565 to ISO/IEC 26599: to be developed