ML23262B205

RES Seminar 2023-07-26 - Redacted
Person / Time
Issue date: 07/26/2023
From: Jason Carneal, Chang Y, Polra S, Pringle S
NRC/RES/DRA/HFRB, Sphere of Influence
To:


Text

Use Unsupervised Machine Learning to Inform Inspection Priority

Y. James Chang (1), Scott Pringle (2), Stuti Polra (2), Jason Carneal (1)
(1) Nuclear Regulatory Commission; (2) Sphere of Influence, Inc. (SphereOI)

Presented at NRC/RES Seminar 2023-07-26

Presentation Outline

  • Objective and research preparation

- Y. James Chang (RES/DRA/HFRB)

  • Technical analysis

- Scott Pringle and Stuti Polra (SphereOI)

  • Applicability to Reactor Oversight Process (ROP)

- Jason Carneal (NRR/DRO/IOEB)

Objective and Research Preparation - Y. James Chang

Objective and Motivation

  • Objective: Perform a feasibility study on the use of unsupervised machine learning (ML) techniques to prioritize inspections
  • Motivation:

- The COVID-19 pandemic disrupted the NRC's inspection plans

- Support NUREG-2261, Artificial Intelligence Strategic Plan

Five AI Strategic Goals

- NUREG-2261, Artificial Intelligence Strategic Plan (FY 2023-2027)

1. Ensure NRC readiness for regulatory decision-making
2. Establish an organizational framework to review AI applications
3. Strengthen and expand AI partnerships
4. Cultivate an AI-proficient workforce
5. Pursue use cases to build an AI foundation across the NRC

Industry Use of AI/ML

EPRI 3002023821, Automating Corrective Action Programs in the Nuclear Industry (2022):

- Utility A: trained on about 600,000 CAP records spanning four years from several stations

- Utility D: used IBM Watson Cloud AI

  • Initially trained on 750 historical condition reports

- Utilities A & D planned to invest more in this area


Potential Use for NRC

  • Identified 5 power outage events that impacted security operations

- 2022 (2 events): human errors contributed to the events

- 2021 (3 events): not attributed to human errors

  • The OpE COMM suggested focusing on potential human impacts on power supply equipment when conducting Inspection Procedure (IP) 71130.04, Equipment Performance, Testing, and Maintenance.


Two Tasks

  • Task 1: provide a general evaluation of four AI/ML systems:

- Amazon's SageMaker, Microsoft's Azure, Google's Google AI, MATLAB, and others

  • Task 2: select an AI/ML system to identify safety clusters

Future Focused Research Contract

  • Contractor: Sphere of Influence, Inc. (SphereOI)
  • The team

- 3 SphereOI staff

- 7 NRC staff (RES, NRR, and Region 3)

  • Fast-paced operation: weekly meetings for about four months
  • Trained the algorithms with ~20,000 inspection reports/15,000 inspection findings, 269 NUREGs, 195 RILs, 1004 acronyms, and 407 common failure modes
  • Reports and slides will be available in Nuclepedia

Technical Analysis - Scott Pringle and Stuti Polra

Technical Analysis Agenda

  • Problem Statement
  • Solution Overview
  • Data
  • Neural Topic Modeling - BERTopic
  • BERTopic Configuration Experiments
  • Results

Problem Statement

The objective of this acquisition is to evaluate the suitability of commercially available machine learning (ML) systems to perform unsupervised learning to identify safety clusters among U.S. nuclear power plants, and to perform an in-depth evaluation of a selected ML system to identify safety clusters using Nuclear Regulatory Commission inspection reports as input data.


Task 1

Task 1 Environment Selection

  • All cloud environments support this type of analysis
  • Some are slightly better at Topic Modeling
  • Most modeling will be done in a Notebook environment that is cloud provider agnostic

Azure slightly outperformed the others, but the difference in scores was not significant

Task 1 Topic Modeling

  • Unsupervised discovery of topics from a collection of text documents
  • Latent Dirichlet Allocation (LDA) - a minimal sketch follows below
  • Describe a document as a bag-of-words
  • Model each document as a mixture of latent topics
  • A topic is represented as a distribution over the words in the vocabulary
  • Variants of Topic Modeling can be explored
  • Text embeddings from Language Models and Neural Topic Modeling can be used to improve the quality of results
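As a concrete illustration of the LDA baseline described above, the following is a minimal sketch using scikit-learn; the toy documents and topic count are illustrative only and are not the project's data or settings.

```python
# Minimal LDA sketch (scikit-learn); illustrative documents only.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "inspectors identified a green finding related to fire protection",
    "emergency diesel generator failed its surveillance test",
    "radiation monitoring instrumentation calibration was overdue",
]

# Bag-of-words representation: each document becomes a vector of word counts.
vectorizer = CountVectorizer(stop_words="english")
X = vectorizer.fit_transform(docs)

# Model each document as a mixture of latent topics; each topic is a
# distribution over the words in the vocabulary.
lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topic = lda.fit_transform(X)

# Show the top words per topic.
words = vectorizer.get_feature_names_out()
for k, topic in enumerate(lda.components_):
    top = [words[i] for i in topic.argsort()[-5:][::-1]]
    print(f"Topic {k}: {top}")
```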

Task 1 Method Selection

  • Topic Modeling vs Neural Topic Modeling
  • LDA vs BERTopic
  • LDA did not perform well for the technical language in the inspection reports
  • Neural Topic Modeling leverages language embeddings rather than words that commonly occur together
  • Different embedding algorithms are available and some work better on technical language
  • A more extensive assessment of the embedding algorithms is provided in Phases II and III

Neural Topic Modeling selected over LDA Topic Modeling

Task 2

Solution Overview

1. Topic Modeling Input
2. Topic Modeling Parameters: Embedding, Dimension Reduction, Clustering, Tokenizer & Vectorizer, Weighting, Representation
3. Topic Representation and Visualization

Topic Modeling Input

Input Optimization

~20,000 inspection reports spanning 25 years. Candidate inputs evaluated:

- Title: rejected - insufficient detail
- Item Introduction Full Text: selected - good balance of size and detail
- Item Intro Summary: Bart-large-cnn, T5-base, Flan-t5-base, Pegasus-xsum, Pegasus-arxiv, Pegasus-pubmed, Pegasus-cnn-dailymail
- Item Intro Key Phrase: Unsupervised KeyBERT, Guided KeyBERT, Unsupervised KeyBERT + KeyphraseVectorizers, Guided KeyBERT + KeyphraseVectorizers
- Question Answering: Flan-t5-base, Roberta-base-squad2, Bert-large-cased
- Full Report Text: rejected - embedding models using unlimited text did not perform well

Summarization Models

T5-Base
- Architecture: transformer-based encoder-decoder; converts all NLP problems into a text-to-text format (the model is fed text for context or conditioning with a task-specific prefix and produces the appropriate output text)
- Pre-training and fine-tuning: variety of unsupervised, self-supervised, and supervised training objectives; Colossal Clean Crawled Corpus (C4) dataset

Flan-t5-base
- Architecture: transformer-based encoder-decoder; enhanced version of the T5 model
- Pre-training and fine-tuning: instruction finetuning; chain-of-thought reasoning; increased number of fine-tuning tasks

BART-large-cnn
- Architecture: sequence-to-sequence transformer-based architecture; bidirectional encoder & left-to-right decoder
- Pre-training and fine-tuning: text infilling and sentence permutation objectives; pre-trained on a subset of common crawl, news, and book data; fine-tuned on the cnn daily mail dataset

Pegasus
- Architecture: sequence-to-sequence transformer-based architecture; bidirectional encoder & left-to-right decoder
- Pre-training and fine-tuning: masked language modeling (MLM) and gap sentence generation (GSG) pre-training objectives; pre-trained on the Colossal Clean Crawled Corpus (C4) dataset and the HugeNews dataset; fine-tuned versions: cnn daily mail, xsum, arxiv, pubmed
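For reference, a hedged sketch of how summaries like the examples shown later can be produced with the Hugging Face transformers summarization pipeline; the checkpoint IDs are the public Hub names for two of the models above, and the truncated item introduction is illustrative.

```python
from transformers import pipeline

# Illustrative, truncated item introduction (not the full report text).
item_intro = (
    "The inspectors identified a Green NCV of Unit 3 Technical Specification "
    "(TS) 5.4.1 when Entergy did not take adequate measures to control "
    "transient combustibles in accordance with established procedures ..."
)

checkpoints = {
    "BART-large-cnn": "facebook/bart-large-cnn",
    "Pegasus-cnn-dailymail": "google/pegasus-cnn_dailymail",
}

for name, model_id in checkpoints.items():
    summarizer = pipeline("summarization", model=model_id)
    summary = summarizer(item_intro, max_length=60, min_length=10, do_sample=False)
    print(f"{name}: {summary[0]['summary_text']}")
```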

Question-Answering Models

Flan-t5-base
- Architecture: transformer-based encoder-decoder; enhanced version of the T5 model
- Pre-training and fine-tuning: instruction finetuning; chain-of-thought reasoning; increased number of fine-tuning tasks from T5

Roberta-base-squad2
- Architecture: transformer model derived from BERT; hyperparameter modifications and removal of the next-sentence pre-training objective
- Pre-training and fine-tuning: pre-trained on the BookCorpus, English Wikipedia, CC News, OpenWebText, and Stories datasets; fine-tuned on the SQuAD2.0 dataset of question-answer pairs

Bert-large-cased-whole-word-masking-finetuned-squad
- Architecture: transformer model (BERT-large)
- Pre-training and fine-tuning: pre-trained on the BookCorpus and English Wikipedia datasets using a masked language modeling (MLM) objective; fine-tuned on the SQuAD dataset of question-answer pairs

No consistent results attained, so QA was not selected
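For reference, a hedged sketch of the extractive question-answering setup that was evaluated, using the deepset/roberta-base-squad2 checkpoint named above; the question and truncated context are illustrative.

```python
from transformers import pipeline

# Extractive QA: the model selects an answer span from the provided context.
qa = pipeline("question-answering", model="deepset/roberta-base-squad2")

context = (
    "The inspectors identified a Green NCV of Unit 3 Technical Specification "
    "(TS) 5.4.1 when Entergy did not take adequate measures to control "
    "transient combustibles in accordance with established procedures ..."
)

result = qa(question="What did the inspectors identify?", context=context)
print(result["answer"], round(result["score"], 3))
```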

Key Phrase Extraction Methods

KeyBERT
- Unsupervised key phrase extraction algorithm
- Tokenize text into words and phrases to obtain candidate keywords and phrases
- Embed the full text document and candidate keywords or phrases with a pre-trained sentence transformer model
- Compute cosine similarity between the embedded document and embedded key phrases
- Retrieve the top N keywords or phrases that are most similar to the document

Guided KeyBERT
- Slight variation of the KeyBERT approach
- A pre-defined list of important words and phrases is provided to the algorithm as seeded keywords
- Seeded words are embedded and combined with the document embeddings with a weighted average

KeyBERT + KeyphraseVectorizers
- PatternRank algorithm
- KeyphraseVectorizers used to extract candidate phrases that have zero or more adjectives followed by one or more nouns
- KeyBERT used to find the candidate phrases most similar to the full document in an unsupervised manner

Guided KeyBERT + KeyphraseVectorizers
- PatternRank algorithm
- KeyphraseVectorizers used to extract candidate phrases that have zero or more adjectives followed by one or more nouns
- Guided KeyBERT used to find the candidate phrases most similar to the full document in a semi-supervised manner, using a list of important words and phrases
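A minimal sketch of the four extraction variants above, assuming the keybert and keyphrase-vectorizers packages; the document excerpt and seed list are illustrative, not the project's curated vocabulary.

```python
from keybert import KeyBERT
from keyphrase_vectorizers import KeyphraseCountVectorizer

# Illustrative excerpt of an item introduction.
doc = "The inspectors identified a Green NCV when transient combustibles were not controlled ..."
kw_model = KeyBERT(model="all-MiniLM-L6-v2")

# Unsupervised KeyBERT: top phrases whose embeddings are most similar to the document.
print(kw_model.extract_keywords(doc, keyphrase_ngram_range=(1, 3), top_n=10))

# Guided KeyBERT: seed keywords are blended into the document embedding.
seeds = ["fire protection", "transient combustibles", "fire barrier"]  # illustrative seeds
print(kw_model.extract_keywords(doc, seed_keywords=seeds, top_n=10))

# KeyBERT + KeyphraseVectorizers (PatternRank): candidates limited to
# adjective/noun phrases extracted by the vectorizer.
print(kw_model.extract_keywords(doc, vectorizer=KeyphraseCountVectorizer(), top_n=10))

# Guided KeyBERT + KeyphraseVectorizers combines both ideas.
print(kw_model.extract_keywords(doc, seed_keywords=seeds,
                                vectorizer=KeyphraseCountVectorizer(), top_n=10))
```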

Item Introduction Input Variation Examples The inspectors identified a Green NCV of Unit 3 Technical Specification (TS) 5.4.1 when Entergy did not take adequate measures to control transient combustibles in accordance with established procedures and thereby did not maintain in effect all provisions of the approved fire protection program, as described in the Unit 3 final safety analysis report. Specifically, on two separate occasions, Entergy did not ensure that transient combustibles were evaluated in accordance with established procedures; and as a result, they allowed combustible loading in the 480 volt emergency switchgear room to exceed limits established in the fire hazards analysis (FHA) of record. The inspectors determined that not completing a TCE, as required by EN-DC-161, Control of Combustibles, Revision 18, was a performance deficiency, given that it was reasonably within Entergys ability to foresee and correct and should have been prevented. Specifically, on August 28, 2018, wood in excess of 100 pounds was identified in the switchgear room; however, an associated TCE had not been developed. Additionally, on October 1, 2018, three 55-gallon drums of EDG lube oil were stored in the switchgear room without an associated TCE having been developed to authorize storage in this room, as required for a volume of lube oil in excess of 5 gallons. The inspectors determined the performance deficiency was more than minor because it was associated with protection against external factors attribute of the Mitigating Systems cornerstone, and it adversely affected the cornerstone goal of ensuring the availability, reliability, and capability of systems that respond to initiating events to prevent undesirable consequences. Specifically, storage of combustibles in excess of the maximum permissible combustibles loading could have the potential to challenge the capability of fire barriers to prevent a fire from affecting multiple fire zones and further degrading plant equipment. Additionally, this issue was similar to an example listed in IMC 0612, Appendix E, "Examples of Minor lssues," Example 4.k., because the fire loading was not within the FHA limits established at the time. Entergy required the issuance of a revised evaluation to provide reasonable assurance that the presence of combustibles of a quantity in excess of the loading limit of record would not challenge the capacity of fire barriers, and further evaluation and the issuance of an EC was necessary to raise the established loading limit to a less-conservative value. The inspectors assessed the significance of the finding using IMC 0609, Appendix F, Fire Protection Significance Determination Process, and determined that this finding screened to Green (very low safety significance) because it had a low degradation rating in accordance with Attachment 2 of the appendix. The inspectors determined that this finding had a cross-cutting aspect in the area of Human Performance, Work Management, because Entergy did not adequately plan, control, and execute work activities such that nuclear safety was the overriding priority, nor did they adequately identify risk associated with work being performed or coordinate across working groups to anticipate and manage this risk. Specifically, in the case of wood scaffolding being stored in the switchgear room, while planning work to be performed, Entergy did not adequately consider the fire risk that would be introduced by the presence of additional combustible materials. 
In the case of lube oil being stored in the room, Entergy did not take adequate action to ensure that activities were executed in a manner that would prevent work taking place in one area (the adjacent EDG cell) from introducing additional fire risk into a space for which it had not been evaluated (the switchgear room). In both cases, Entergy did not take sufficient action to ensure that workers were aware of the fire protection requirements associated with activities being conducted and to ensure that they coordinated as needed across working groups to adequately assess and mitigate the associated fire risk.


Summarization Models: Examples

The inspectors identified a Green NCV of Unit 3 Technical Specification (TS) 5.4.1 when Entergy did not take adequate measures to control transient combustibles in accordance with established procedures and thereby did not maintain in effect all provisions of the approved fire protection program, as described in the Unit 3 final safety analysis report [...]

T5-Base: "inspectors identified a Green NCV of Unit 3 Technical Specification (TS) 5.4.1 when Entergy did not take adequate measures to control transient combustibles. the inspectors determined that not completing a TCE was a performance deficiency, given that it was reasonably within entergys ability to foresee and correct and should have been prevented. in the case of wood scaffolding being stored in the switch"

Flan-T5-base: "Entergy did not take adequate measures to control transient combustibles in accordance with established procedures and thereby did not maintain in effect all provisions of the approved fire protection program, as described in the Unit 3 final safety analysis report."

BART-large-cnn: "Entergy did not ensure that transient combustibles were evaluated in accordance with established procedures. Entergy did not take sufficient action to ensure that workers were aware of the fire protection requirements associated with activities being conducted. Entergy did not adequately identify risk associated with work being performed or coordinate across working groups to anticipate and manage this risk"

Pegasus-cnn-dailymail: "The inspectors identified a Green NCV of Unit 3 Technical Specification (TS) 5.4.1 when Entergy did not take adequate measures to control transient combustibles. The inspectors determined that not completing a TCE, as required by EN-DC-161, Control of Combustibles, Revision 18, was a performance deficiency."

Question-Answering Models: Examples

The inspectors identified a Green NCV of Unit 3 Technical Specification (TS) 5.4.1 when Entergy did not take adequate measures to control transient combustibles in accordance with established procedures and thereby did not maintain in effect all provisions of the approved fire protection program, as described in the Unit 3 final safety analysis report [...]

Flan-T5-base: "Storage of combustibles in excess of the maximum permissible combustibles loading could have the potential to challenge the capability of fire barriers to prevent a fire from affecting multiple fire zones and further degrading plant equipment"

Roberta-base-squad2: "nuclear safety"

Bert-large-cased-whole-word-masking-finetuned-squad: "nuclear safety"

No consistent results attained, so QA was not selected

Key Phrase Extraction Methods: Examples

The inspectors identified a Green NCV of Unit 3 Technical Specification (TS) 5.4.1 when Entergy did not take adequate measures to control transient combustibles in accordance with established procedures and thereby did not maintain in effect all provisions of the approved fire protection program, as described in the Unit 3 final safety analysis report [...]

[Slide shows side-by-side lists of the key phrases each method extracted from this item introduction: KeyBERT, KeyBERT + KeyphraseVectorizers, Guided KeyBERT, and Guided KeyBERT + KeyphraseVectorizers. Extracted phrases center on terms such as transient combustibles, permissible combustibles loading, fire protection program, fire barriers, final safety analysis, EDG lube oil, and low safety significance; the KeyphraseVectorizers variants return cleaner noun phrases (e.g., "fire protection requirements", "additional combustible materials", "multiple fire zones") while plain KeyBERT and Guided KeyBERT include more fragmentary n-grams (e.g., "combustibles evaluated accordance", "result allowed combustible").]

Neural Topic Modeling

BERTopic

  • Generate document embeddings with pre-trained transformer-based language models
  • Reduce dimensionality of document embeddings
  • Cluster document embeddings
  • Generate topic representations with class-based TF-IDF procedure
  • Coherent and diverse topics (a minimal pipeline sketch follows below)
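The minimal sketch below shows those steps end to end with BERTopic's defaults (sentence-transformer embeddings, UMAP, HDBSCAN, class-based TF-IDF); the document loader and parameter values are hypothetical placeholders, not the project's final settings.

```python
from bertopic import BERTopic

# Hypothetical loader standing in for the ~20,000 item introductions.
docs = load_item_introductions()

# Defaults already follow the steps above: sentence-transformer embeddings,
# UMAP dimensionality reduction, HDBSCAN clustering, class-based TF-IDF.
topic_model = BERTopic(embedding_model="all-MiniLM-L6-v2", min_topic_size=20)
topics, probs = topic_model.fit_transform(docs)

print(topic_model.get_topic_info().head())  # discovered topics and their sizes
print(topic_model.get_topic(0))             # top c-TF-IDF terms for topic 0
```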

BERTopic: Modularity

  • BERTopic offers modularity at each step of the process

- Embedding

- Dimensionality Reduction

- Clustering

- Tokenizer

- Weighting scheme

- Representation tuning

  • Each component can be easily swapped according to the goals and to accommodate the data (a configuration sketch follows below)
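As a hedged illustration of that modularity, each component below is passed to BERTopic explicitly and could be swapped for an alternative; the specific parameter values are illustrative, not the project's selections.

```python
from bertopic import BERTopic
from bertopic.representation import MaximalMarginalRelevance
from bertopic.vectorizers import ClassTfidfTransformer
from hdbscan import HDBSCAN
from sentence_transformers import SentenceTransformer
from sklearn.feature_extraction.text import CountVectorizer
from umap import UMAP

# Each argument corresponds to one modular step listed above.
topic_model = BERTopic(
    embedding_model=SentenceTransformer("all-MiniLM-L6-v2"),                     # embedding
    umap_model=UMAP(n_neighbors=15, n_components=5, metric="cosine"),            # dimensionality reduction
    hdbscan_model=HDBSCAN(min_cluster_size=20, prediction_data=True),            # clustering
    vectorizer_model=CountVectorizer(ngram_range=(1, 3), stop_words="english"),  # tokenizer / vectorizer
    ctfidf_model=ClassTfidfTransformer(reduce_frequent_words=True),              # weighting scheme
    representation_model=MaximalMarginalRelevance(diversity=0.6),                # representation tuning
)
```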

BERTopic: Representing a Topic

Refine how a topic is represented and interpreted

  • KeyBERT

- Extract keywords for each topic and a set of representative documents per topic

- Compare the embeddings of the keywords and the representative documents

  • Maximal Marginal Relevance

- Reduce redundancy and improve diversity of keywords

  • Part of Speech

- Extract keywords for each topic and documents that contain the keywords

- Use a part-of-speech tagger to generate new candidate keywords

  • Zero-shot Classification

- Assign candidate labels to topics given keywords for each topic

  • Text Generation and Prompts

- Create topic labels based on representative documents and keywords

- Hugging Face Transformers, OpenAI GPT, co:here, LangChain
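A hedged sketch of chaining several of these representation options in BERTopic (KeyBERT-inspired refinement, part-of-speech filtering, and Maximal Marginal Relevance); the spaCy model name and diversity value are illustrative.

```python
from bertopic import BERTopic
from bertopic.representation import (
    KeyBERTInspired,
    MaximalMarginalRelevance,
    PartOfSpeech,
)

# Chain several representation models; BERTopic applies them in sequence to
# refine the default c-TF-IDF keywords for each topic.
representation_model = [
    KeyBERTInspired(),                        # compare keyword and document embeddings
    PartOfSpeech("en_core_web_sm"),           # keep POS-pattern candidates (spaCy tagger)
    MaximalMarginalRelevance(diversity=0.6),  # reduce redundancy, improve diversity
]

topic_model = BERTopic(representation_model=representation_model)
```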

BERTopic: Topic Modeling Variations

  • Topic Distributions

- Approximate topic distributions per document when using a hard-clustering approach

  • Topics per Class (Category)

- Extract topic representations for each class or category of interest from the topic model

  • Dynamic Topic Modeling

- Analyze how the representation of a topic changes over time

  • Hierarchical Topic Modeling

- Obtain insights into which topics are similar and sub-topics that may exist in the data

  • Online Topic Modeling

- Continue updating the topic model with new data

  • Semi-supervised Topic Modeling

- Steer dimensionality reduction of document embeddings into a space close to the topic labels for some or all documents

  • Guided (Seeded) Topic Modeling

- Predefined keywords or phrases for the topic model to converge to by comparing document embeddings with seeded topic embeddings

  • Supervised Topic Modeling

- If topic labels are already known, discover relationships between documents and topics

  • Manual Topic Modeling

- Find topic representations for document topic labels that are already known and use other topic modeling variations with this model
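A hedged sketch of a few of these variations using BERTopic's documented entry points; the loaders and seed lists are hypothetical placeholders.

```python
from bertopic import BERTopic

docs = load_item_introductions()  # hypothetical loader
timestamps = load_issue_dates()   # hypothetical: one date per document

topic_model = BERTopic()
topics, probs = topic_model.fit_transform(docs)

# Topic distributions: approximate per-document topic mixtures after hard clustering.
topic_distr, _ = topic_model.approximate_distribution(docs)

# Dynamic topic modeling: how a topic's representation changes over time.
topics_over_time = topic_model.topics_over_time(docs, timestamps)

# Guided (seeded) topic modeling is configured at construction time.
seeded_model = BERTopic(seed_topic_list=[
    ["fire", "combustible", "barrier"],   # illustrative seed topics
    ["diesel", "generator", "breaker"],
])
```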

Neural Topic Modeling - Customizations

  • Stopword Removal

- Input level

  • Remove references to reactor sites and parent companies
  • Prevent formation of clusters around large sites/companies with different underlying safety issues

- Topic representation level

  • Remove generic nuclear terms used in NRC text that are not indicative of safety issues
  • Create topic representations that are specific and insightful
  • Custom Topic Representations

- Representations from BERTopic often contained incomplete terms and were not as insightful or intuitive for analysts

- Custom topic representations were developed with input from NRC staff

  • Vocabulary: curated list of NRC abbreviations, full-forms, and failure modes of reactor systems and components
  • Key Phrases: automatically extracted from inspection report item introductions
  • Vocabulary + Key Phrases: combined list of vocabulary and automatically extracted key phrases
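A hedged sketch of those two customization levels; the site names, generic terms, and curated vocabulary below are illustrative stand-ins for the project's lists, and the document loader is hypothetical.

```python
import re

from bertopic import BERTopic
from sklearn.feature_extraction.text import CountVectorizer, ENGLISH_STOP_WORDS

docs = load_item_introductions()  # hypothetical loader

# Input level: scrub site and parent-company names so clusters do not form
# around large sites/companies (illustrative names).
site_terms = ["waterford", "vogtle", "fermi", "wolf creek", "entergy"]
site_pattern = re.compile(r"\b(" + "|".join(site_terms) + r")\b", re.IGNORECASE)
cleaned_docs = [site_pattern.sub(" ", d) for d in docs]

# Topic representation level: drop generic nuclear terms that are not
# indicative of safety issues (illustrative list).
generic_terms = ["inspector", "inspectors", "licensee", "finding"]
vectorizer = CountVectorizer(
    stop_words=list(ENGLISH_STOP_WORDS) + generic_terms,
    ngram_range=(1, 3),
)
topic_model = BERTopic(vectorizer_model=vectorizer)
topics, _ = topic_model.fit_transform(cleaned_docs)

# Custom topic representations: re-compute topic words against a curated
# vocabulary of abbreviations, full forms, and failure modes (illustrative).
curated_vocab = ["transient combustibles", "fire barrier", "edg lube oil"]
topic_model.update_topics(
    cleaned_docs,
    vectorizer_model=CountVectorizer(vocabulary=curated_vocab, ngram_range=(1, 3)),
)
```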

Neural Topic Modeling - Extensions

  • Topic Reduction

- Number of discovered topics can range from 10s to 100s depending on the parameters of BERTopic

- Topic reduction techniques were explored to merge similar topics

  • Manual: reduce to specified number of topics by iteratively merging similar topic representations
  • Automatic: cluster topic representations that are similar, leaving outliers as standalone topics
  • Outlier Reduction

- Nearly 1/3 of the documents are considered outliers by the clustering algorithm

- Outlier reduction techniques were explored to assign outlier documents to existing topics

  • Topic probability: assign to most probable topic according to clustering algorithm
  • Topic distribution: assign to most frequently discussed topic in document
  • C-TF-IDF: assign to topic with the most similar topic representation as the document
  • Embeddings: assign to the topic with the most similar embedding as the document

- Topic representations were updated after outlier assignment
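A hedged sketch of the reduction steps above via BERTopic's reduce_topics and reduce_outliers helpers; the target topic count is illustrative and the document loader is hypothetical.

```python
from bertopic import BERTopic

docs = load_item_introductions()  # hypothetical loader
topic_model = BERTopic()
topics, probs = topic_model.fit_transform(docs)

# Topic reduction: merge similar topics, either to a set number or automatically.
topic_model.reduce_topics(docs, nr_topics=40)       # manual target (illustrative)
topic_model.reduce_topics(docs, nr_topics="auto")   # automatic merging of similar topics

# Outlier reduction: re-assign documents left in HDBSCAN's outlier topic (-1).
new_topics = topic_model.reduce_outliers(docs, topic_model.topics_, strategy="c-tf-idf")
# other documented strategies: "probabilities", "distributions", "embeddings"

# Recompute topic representations after outlier assignment.
topic_model.update_topics(docs, topics=new_topics)
```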

Topic Modeling Parameters: BERTopic Experiments

Components of Topic Modeling

1. Topic Modeling Input
- Item Introduction
- Item Introduction Summary (Pegasus_cnn_dailymail model)
- Item Introduction Key Phrases (KeyphraseVectorizer + Guided KeyBERT with custom vocab of 1411 abbreviations, full forms, and failure modes)

2. Topic Modeling Parameters
- Embedding: all-MiniLM-L6-v2
- Dimension reduction: 15 neighbors, 5 components
- Clustering: min cluster size 10, 20, 40, 60
- Tokenizer & vectorizer: n-grams range of 1-3

3. Topic Representation
- TF-IDF on input text in each topic cluster (BERTopic): MMR (diversity = 0.6); MMR (diversity = 0.6) + POS (NOUN, PROPN, ADJ-NOUN, ADJ-PROPN)
- TF-IDF, counts: string matching on full item-intros in each topic cluster:
  - Vocabulary: 1411 abbreviations + full forms + failure modes
  - Key Phrases: 66,325 words/phrases extracted from Item Introductions using KeyphraseVectorizer + Guided KeyBERT with a vocab of 1411 abbreviations, full forms, and failure modes
  - Vocabulary + Key Phrases: 67,402 combined terms

Pipeline selection (flattened diagram): multiple options were tested at each stage and the best carried forward. The diagram lists the number of options tested across the stages (15, 5, 10, 4, 3, 5) and the number chosen (1, 3, 1, 1, 1, 1, 2).

- Input - Item Introduction (select the text that will be used in unsupervised learning algorithms): Full Text; Summary (BART-large-cnn, T5-Base, Flan-t5-base, Pegasus-xsum, Pegasus-cnn-dailymail, Pegasus-arxiv, Pegasus-pubmed); Question Answering (Flan-t5-base, Roberta-base-squad2, Bert-large-cased); Key Phrase (Unsupervised KeyBERT, Guided KeyBERT, Unsupervised KeyBERT + KeyphraseVectorizers, Guided KeyBERT + KeyphraseVectorizers); Named Entity Recognition
- Embedding (create a mathematical representation of the document): all-MiniLM-L6-v2, all-mpnet-base-v2, xlnet-base-cased, SPECTER, multi-qa-MiniLM-L6-dot-v1
- Dimension Reduction (reduce the number of parameters from hundreds to dozens): UMAP with N = 5, 10, 15, or 20 neighbors and C = 5, 10, or 50 components
- Clustering (select parameters for unsupervised clustering): HDBSCAN with min doc/cluster of 10, 20, 40, or 60
- Tokenizer & Vectorizer (split up text and count occurrences of the tokens): uni-grams; uni + bi-grams; uni + bi + tri-grams
- Weighting and Representation (calculate importance of terms; present the themes in analyst-friendly terms): TF-IDF weighting with representation options including MMR, MMR + POS, KeyBERT Inspired, Vocab, Key Phrase, and Vocab + Key Phrase

Results

Input Variation (topic cluster visualizations compared for the Item Intro Full Text, Summary, and Key Phrase inputs)

Stopword Removal (topic cluster visualizations before and after stop-word removal)

Topic Representation Variation

Outlier Reduction (topic cluster visualizations before and after outlier reduction; Topic 41 with the MMR-POS representation and Topic 32 with the Vocab+Key Phrases representation)

Potential Use for NRC

  • Identified 5 issues related to improper calibration and maintenance of radiation monitoring and dose assessment equipment that impact emergency plan actions

- Waterford 2011-2022 (2 events)

- Vogtle 2019

- Fermi 2016

- Wolf Creek 2013

  • The OpE COMM identifies opportunities to detect these issues under Inspection Procedure (IP) 71124.05, Radiation Monitoring Instrumentation, and through emergency drill observations, plant modifications, or surveillance test reviews

Benchmark with an OpE COMM: Radiation Monitoring Issues Impacting Licensee Emergency Plans

OpE identified 4 findings that exhibited related safety issues. The clustering approach placed 3 of the 4 in the same cluster and the 4th in a similar cluster.

Conclusion

Machine learning, using cloud-agnostic, notebook-based implementations, can identify relevant safety clusters that group together safety-related inspection findings.


Applicability to ROP - Jason Carneal

How do These Topics Relate to the ROP?

The NRC is attempting to increase the use of data in its decision-making processes.

Most NRC data is unstructured free text.

Available structured data is not consistent across NRC IT systems.

These tools can be used to classify / summarize NRC documents by safety topic, root cause, 10 CFR references, safety systems, etc.

Utilizing these techniques, NRC staff could have efficient access to more information to support their activities.


Facilitate data-driven decisions at NRC

Reduce manual effort to get relevant data

  • Eliminate repetitive manual reports on popular topics
  • Provide tools for inspectors / NRC staff / management to review data of interest
  • Lower the bar of access to data for both internal and external users

Increase the chance we and others can identify issues early

  • Consolidating and democratizing access to sparse and difficult-to-access data
  • Deploying tools that allow users to explore data on their own
  • Revealing trends and insights previously difficult to ascertain

Support proactive use of NRC operating experience data

OpE Hub - Consolidation of Deployed Products

  • Website portal for NRC users
  • One stop shop for all OpE products
  • Easy to navigate
  • Facilitates user interaction and support
  • Link to OpE Hub

ROP Use Case: Development of Advanced Operating Experience Search Tools

Objectives

  • Build advanced operating experience search tools that leverage machine learning and natural language processing
  • Automate certain aspects of the operating experience workflow
  • Provide expanded capabilities for data analysis

Userbase

  • NRC inspectors
  • NRC technical staff
  • NRC Management

Diagram: a training set (historical OpE Clearinghouse data, findings data, industry data) and incoming OpE documents (e.g., licensee event reports) feed NRC-customized AI models (unsupervised learning, natural language processing, summarization), which produce advanced OpE trending, communication, and search tools.