ML23262B205

RES Seminar 2023-07-26 - Redacted
Person / Time
Issue date: 07/26/2023
From: Jason Carneal, Chang Y, Polra S, Pringle S
NRC/RES/DRA/HFRB, Sphere of Influence



Use Unsupervised Machine Learning to Inform Inspection Priority Y. James Chang1, Scott Pringle2, Stuti Polra2, Jason Carneal1 1Nuclear Regulatory Commission 2Sphere of Influence, Inc. (SphereOI)

Presented at NRC/RES Seminar, 2023-07-26

Presentation Outline

  • Objective and research preparation

- Y. James Chang (RES/DRA/HFRB)

  • Technical analysis

- Scott Pringle and Stuti Polra (SphereOI)

  • Applicability to Reactor Oversight Process (ROP)

- Jason Carneal (NRR/DRO/IOEB)

Objective and Research Preparation (Y. James Chang)

Objective and Motivation

  • Objective: Perform a feasibility study on the use of unsupervised machine learning (ML) techniques to prioritize inspections
  • Motivation:

- The COVID-19 pandemic disrupted the NRC's inspection plans

- Support NUREG-2261, Artificial Intelligence Strategic Plan

Five AI Strategic Goals

- NUREG-2261, Artificial Intelligence Strategic Plan (FY 2023-2027)

1. Ensure NRC readiness for regulatory decision-making
2. Establish an organizational framework to review AI applications
3. Strengthen and expand AI partnerships
4. Cultivate an AI-proficient workforce
5. Pursue use cases to build an AI foundation across the NRC

Industry Use of AI/ML

EPRI 3002023821, Automating Corrective Action Programs in the Nuclear Industry (2022):

- Utility A: trained on about 600,000 CAP records covering four years of data from several stations

- Utility D: used IBM Watson Cloud AI

  • Initially trained on 750 historical condition reports

- Utilities A & D planned to invest more in this area

Potential Use for NRC

  • Identified 5 power outage events that impacted security operations

- 2022 (2 events): human errors contributed to the events

- 2021 (3 events): not attributed to human errors

  • The OpE COMM suggested focusing on potential human impacts on power supply equipment when conducting Inspection Procedure (IP) 71130.04, Equipment Performance, Testing, and Maintenance.


Two Tasks

  • Task 1: provide a general evaluation of four AI/ML systems:

- Amazon's SageMaker, Microsoft's Azure, Google's Google AI, MATLAB, and others

  • Task 2: select an AI/ML system to identify safety clusters

Future Focused Research Contract

  • Contractor:
  • The team

- 3 SphereOI staff

- 7 NRC staff (RES, NRR, and Region 3)

  • Fast-paced operation: weekly meetings for about four months
  • Trained the algorithms with ~20,000 inspection reports/15,000 inspection findings, 269 NUREGs, 195 RILs, 1004 acronyms, and 407 common failure modes
  • Reports and slides will be available in Nuclepedia

Technical Analysis (Scott Pringle and Stuti Polra)

Technical Analysis Agenda

  • Problem Statement
  • Solution Overview
  • Data
  • Neural Topic Modeling - BERTopic
  • BERTopic Configuration Experiments
  • Results

Problem Statement

The objective of this acquisition is to evaluate the suitability of commercially available machine learning (ML) systems to perform unsupervised learning to identify safety clusters among US nuclear power plants, and to perform an in-depth evaluation of a selected ML system to identify safety clusters using Nuclear Regulatory Commission inspection reports as input data.


Task 1

Task 1: Environment Selection

- All cloud environments support this type of analysis
- Some are slightly better at topic modeling
- Most modeling will be done in a notebook environment that is cloud-provider agnostic
- Azure slightly outperformed the others, but the difference in scores was not significant

Task 1: Topic Modeling

- Unsupervised discovery of topics from a collection of text documents
- Latent Dirichlet Allocation (LDA):
  - Describes a document as a bag-of-words
  - Models each document as a mixture of latent topics
  - Represents each topic as a distribution over the words in the vocabulary
- Variants of topic modeling can be explored
- Text embeddings from language models and neural topic modeling can be used to improve the quality of results

Task 1: Method Selection

- Topic Modeling vs. Neural Topic Modeling (LDA vs. BERTopic)
- LDA did not perform well for the technical language in the inspection reports
- Neural topic modeling leverages language embeddings rather than words that commonly occur together
- Different embedding algorithms are available, and some work better on technical language
- A more extensive assessment of the embedding algorithms is provided in Phases II and III
- Neural topic modeling (BERTopic) was selected over LDA topic modeling

Task 2

Solution Overview

1. Topic Modeling Input
2. Topic Modeling Parameters: Embedding, Dimension Reduction, Clustering, Tokenizer & Vectorizer, Weighting, Representation
3. Topic Representation and Visualization

Topic Modeling Input

Input Optimization

~20,000 inspection reports spanning 25 years. Candidate inputs:

- Item Introduction (full text): Selected - good balance of size and detail
- Title: Rejected - insufficient detail
- Full Report Text: Rejected - embedding models using unlimited text did not perform well

Derived inputs evaluated:

- Summary: Bart-large-cnn, T5-base, Flan-t5-base, Pegasus-xsum, Pegasus-arxiv, Pegasus-pubmed, Pegasus-cnn-dailymail
- Question Answering: Flan-t5-base, Roberta-base-squad2, Bert-large-cased
- Key Phrase: Unsupervised KeyBERT, Guided KeyBERT, Unsupervised KeyBERT + KeyphraseVectorizers, Guided KeyBERT + KeyphraseVectorizers

Summarization Models

T5-Base:
- transformer-based encoder-decoder
- converts all NLP problems into a text-to-text format: the model is fed text for context or conditioning with a task-specific prefix and produces the appropriate output text
- variety of unsupervised, self-supervised, and supervised training objectives
- pre-trained on the Colossal Clean Crawled Corpus (C4) dataset

Flan-t5-base:
- transformer-based encoder-decoder
- enhanced version of the T5 model
- instruction finetuning
- chain-of-thought reasoning
- increased number of fine-tuning tasks

BART-large-cnn:
- sequence-to-sequence transformer-based architecture
- bidirectional encoder & left-to-right decoder
- text infilling and sentence permutation objectives
- pre-trained on a subset of Common Crawl, news, and book data
- fine-tuned on the CNN/Daily Mail dataset

Pegasus:
- sequence-to-sequence transformer-based architecture
- bidirectional encoder & left-to-right decoder
- masked language modeling (MLM) and gap sentence generation (GSG) pre-training objectives
- pre-trained on the Colossal Clean Crawled Corpus (C4) dataset and the HugeNews dataset
- fine-tuned versions: cnn-dailymail, xsum, arxiv, pubmed
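A minimal sketch of how such summarization models are invoked, assuming the Hugging Face transformers library is available; "t5-small" is used here only as a lightweight stand-in for the larger checkpoints listed above, and the input text is illustrative.

```python
# Abstractive summarization sketch with the transformers pipeline API.
# "t5-small" is a lightweight stand-in, not one of the models the study tested.
from transformers import pipeline

summarizer = pipeline("summarization", model="t5-small")

text = (
    "The inspectors identified a Green NCV when the licensee did not take "
    "adequate measures to control transient combustibles in accordance with "
    "established procedures, allowing combustible loading in the emergency "
    "switchgear room to exceed limits established in the fire hazards analysis."
)

# Deterministic decoding; bounds keep the summary shorter than the input
summary = summarizer(text, max_length=40, min_length=10, do_sample=False)
print(summary[0]["summary_text"])
```

Swapping the model string (e.g., to a BART or Pegasus checkpoint) is the only change needed to compare summarizers, which is the kind of side-by-side evaluation the slide describes.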

Question-Answering Models

Flan-t5-base:
- transformer-based encoder-decoder
- enhanced version of the T5 model
- instruction finetuning
- chain-of-thought reasoning
- increased number of fine-tuning tasks from T5

Roberta-base-squad2:
- transformer model derived from BERT
- hyperparameter modifications and removal of the next-sentence pre-training objective
- pre-trained on the BookCorpus, English Wikipedia, CC-News, OpenWebText, and Stories datasets
- fine-tuned on the SQuAD2.0 dataset of question-answer pairs

Bert-large-cased-whole-word-masking-finetuned-squad:
- transformer model (BERT-large)
- pre-trained on the BookCorpus and English Wikipedia datasets using a masked language modeling (MLM) objective
- fine-tuned on the SQuAD dataset of question-answer pairs

No consistent results were attained, so QA was not selected.
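A sketch of extractive question answering, assuming the public deepset/roberta-base-squad2 checkpoint for the Roberta-base-squad2 model listed above; the question and context are illustrative. Extractive QA returns a span from the context, which helps explain why answers varied with question phrasing.

```python
# Extractive QA sketch: the model selects an answer span from the context.
from transformers import pipeline

qa = pipeline("question-answering", model="deepset/roberta-base-squad2")

context = (
    "The inspectors identified a Green NCV when the licensee did not take "
    "adequate measures to control transient combustibles, allowing combustible "
    "loading in the emergency switchgear room to exceed fire hazards analysis limits."
)

result = qa(
    question="What safety issue did the inspectors identify?",
    context=context,
)
print(result["answer"], result["score"])
```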

Key Phrase Extraction Methods

KeyBERT:
- unsupervised key phrase extraction algorithm
- tokenize text into words and phrases to obtain candidate keywords and phrases
- embed the full text document and candidate keywords or phrases with a pre-trained sentence transformer model
- compute cosine similarity between the embedded document and the embedded key phrases
- retrieve the top N keywords or phrases that are most similar to the document

Guided KeyBERT:
- slight variation of the KeyBERT approach
- a pre-defined list of important words and phrases is provided to the algorithm as seeded keywords
- seeded words are embedded and combined with the document embeddings with a weighted average

KeyBERT + KeyphraseVectorizers:
- PatternRank algorithm
- KeyphraseVectorizers used to extract candidate phrases that have zero or more adjectives followed by one or more nouns
- KeyBERT used to find candidate phrases most similar to the full document in an unsupervised manner

Guided KeyBERT + KeyphraseVectorizers:
- PatternRank algorithm
- KeyphraseVectorizers used to extract candidate phrases that have zero or more adjectives followed by one or more nouns
- Guided KeyBERT used to find candidate phrases most similar to the full document in a semi-supervised manner, using a list of important words and phrases

Item Introduction Input Variation Examples

The inspectors identified a Green NCV of Unit 3 Technical Specification (TS) 5.4.1 when Entergy did not take adequate measures to control transient combustibles in accordance with established procedures and thereby did not maintain in effect all provisions of the approved fire protection program, as described in the Unit 3 final safety analysis report. Specifically, on two separate occasions, Entergy did not ensure that transient combustibles were evaluated in accordance with established procedures; and as a result, they allowed combustible loading in the 480 volt emergency switchgear room to exceed limits established in the fire hazards analysis (FHA) of record. The inspectors determined that not completing a TCE, as required by EN-DC-161, Control of Combustibles, Revision 18, was a performance deficiency, given that it was reasonably within Entergy's ability to foresee and correct and should have been prevented. Specifically, on August 28, 2018, wood in excess of 100 pounds was identified in the switchgear room; however, an associated TCE had not been developed. Additionally, on October 1, 2018, three 55-gallon drums of EDG lube oil were stored in the switchgear room without an associated TCE having been developed to authorize storage in this room, as required for a volume of lube oil in excess of 5 gallons. The inspectors determined the performance deficiency was more than minor because it was associated with the protection against external factors attribute of the Mitigating Systems cornerstone, and it adversely affected the cornerstone goal of ensuring the availability, reliability, and capability of systems that respond to initiating events to prevent undesirable consequences.
Specifically, storage of combustibles in excess of the maximum permissible combustibles loading could have the potential to challenge the capability of fire barriers to prevent a fire from affecting multiple fire zones and further degrading plant equipment. Additionally, this issue was similar to an example listed in IMC 0612, Appendix E, "Examples of Minor Issues," Example 4.k., because the fire loading was not within the FHA limits established at the time. Entergy required the issuance of a revised evaluation to provide reasonable assurance that the presence of combustibles of a quantity in excess of the loading limit of record would not challenge the capacity of fire barriers, and further evaluation and the issuance of an EC was necessary to raise the established loading limit to a less-conservative value. The inspectors assessed the significance of the finding using IMC 0609, Appendix F, Fire Protection Significance Determination Process, and determined that this finding screened to Green (very low safety significance) because it had a low degradation rating in accordance with Attachment 2 of the appendix. The inspectors determined that this finding had a cross-cutting aspect in the area of Human Performance, Work Management, because Entergy did not adequately plan, control, and execute work activities such that nuclear safety was the overriding priority, nor did they adequately identify risk associated with work being performed or coordinate across working groups to anticipate and manage this risk. Specifically, in the case of wood scaffolding being stored in the switchgear room, while planning work to be performed, Entergy did not adequately consider the fire risk that would be introduced by the presence of additional combustible materials.
In the case of lube oil being stored in the room, Entergy did not take adequate action to ensure that activities were executed in a manner that would prevent work taking place in one area (the adjacent EDG cell) from introducing additional fire risk into a space for which it had not been evaluated (the switchgear room). In both cases, Entergy did not take sufficient action to ensure that workers were aware of the fire protection requirements associated with activities being conducted and to ensure that they coordinated as needed across working groups to adequately assess and mitigate the associated fire risk.

Summarization Models: Examples

Original (Item Introduction): The inspectors identified a Green NCV of Unit 3 Technical Specification (TS) 5.4.1 when Entergy did not take adequate measures to control transient combustibles in accordance with established procedures and thereby did not maintain in effect all provisions of the approved fire protection program, as described in the Unit 3 final safety analysis report []

T5-Base: inspectors identified a Green NCV of Unit 3 Technical Specification (TS) 5.4.1 when Entergy did not take adequate measures to control transient combustibles. the inspectors determined that not completing a TCE was a performance deficiency, given that it was reasonably within entergys ability to foresee and correct and should have been prevented. in the case of wood scaffolding being stored in the switch

Flan-T5-base: Entergy did not take adequate measures to control transient combustibles in accordance with established procedures and thereby did not maintain in effect all provisions of the approved fire protection program, as described in the Unit 3 final safety analysis report.

BART-large-cnn: The inspectors identified a Green NCV of Unit 3 Technical Specification (TS) 5.4.1 when Entergy did not take adequate measures to control transient combustibles. The inspectors determined that not completing a TCE, as required by EN-DC-161, Control of Combustibles, Revision 18, was a performance deficiency.

Pegasus-cnn-dailymail: Entergy did not ensure that transient combustibles were evaluated in accordance with established procedures. Entergy did not take sufficient action to ensure that workers were aware of the fire protection requirements associated with activities being conducted. Entergy did not adequately identify risk associated with work being performed or coordinate across working groups to anticipate and manage this risk

Question-Answering Models: Examples

Original (Item Introduction): The inspectors identified a Green NCV of Unit 3 Technical Specification (TS) 5.4.1 when Entergy did not take adequate measures to control transient combustibles in accordance with established procedures and thereby did not maintain in effect all provisions of the approved fire protection program, as described in the Unit 3 final safety analysis report []

Flan-T5-base: Storage of combustibles in excess of the maximum permissible combustibles loading could have the potential to challenge the capability of fire barriers to prevent a fire from affecting multiple fire zones and further degrading plant equipment

Roberta-base-squad2: nuclear safety

Bert-large-cased-whole-word-masking-finetuned-squad: nuclear safety

No consistent results were attained, so QA was not selected.

Key Phrase Extraction Methods: Examples

Original (Item Introduction): The inspectors identified a Green NCV of Unit 3 Technical Specification (TS) 5.4.1 when Entergy did not take adequate measures to control transient combustibles in accordance with established procedures and thereby did not maintain in effect all provisions of the approved fire protection program, as described in the Unit 3 final safety analysis report []

KeyBERT: allowed combustible loading, allowed combustible, combustibles revision 18, combustibles evaluated accordance, permissible combustibles loading, result allowed combustible, combustibles revision, combustibles evaluated, permissible combustibles, transient combustibles evaluated, additional combustible, maximum permissible combustibles, combustibles loading, 161 control combustibles, final safety analysis, control combustibles revision, presence additional combustible, unit final safety, combustibles accordance established, established hazards analysis

KeyBERT + KeyphraseVectorizers: additional fire risk, fire protection requirements, final safety analysis report, fire risk, maximum permissible combustibles loading, fire protection significance determination process, fire barriers, additional combustible materials, combustible loading, fire protection program, combustibles, transient combustibles, low safety significance, edg lube oil, fire loading, entergy, multiple fire zones, fire, nuclear safety, further degrading plant equipment

Guided KeyBERT: allowed combustible loading, final safety analysis, unit final safety, combustibles evaluated accordance, safety analysis report, combustibles revision 18, allowed combustible, permissible combustibles loading, transient combustibles evaluated, combustibles evaluated, permissible combustibles, result allowed combustible, maximum permissible combustibles, 161 control combustibles, combustibles revision, combustibles loading, established hazards analysis, safety significance, safety analysis, control combustibles revision

Guided KeyBERT + KeyphraseVectorizers: final safety analysis report, fire protection requirements, additional fire risk, fire risk, fire protection significance determination process, maximum permissible combustibles loading, fire barriers, fire protection program, combustible loading, low safety significance, additional combustible materials, transient combustibles, combustibles, edg lube oil, further degrading plant equipment, fire loading, nuclear safety, entergy, multiple fire zones, volt emergency

Neural Topic Modeling

BERTopic

1. Generate document embeddings with pre-trained transformer-based language models
2. Reduce the dimensionality of the document embeddings
3. Cluster the document embeddings
4. Generate topic representations with a class-based TF-IDF procedure

Result: coherent and diverse topics

BERTopic: Modularity BERTopic offers modularity at each step of the process

- Embedding

- Dimensionality Reduction

- Clustering

- Tokenizer

- Weighing scheme

- Representation tuning

Each component can be easily swapped according to the goals and to accommodate the data

BERTopic: Representing a Topic

Refine how a topic is represented and interpreted:

- KeyBERT: extract keywords for each topic and a set of representative documents per topic; compare the embeddings of the keywords and the representative documents
- Maximal Marginal Relevance: reduce redundancy and improve diversity of keywords
- Part of Speech: extract keywords for each topic and documents that contain the keywords; use a part-of-speech tagger to generate new candidate keywords
- Zero-shot Classification: assign candidate labels to topics given keywords for each topic
- Text Generation and Prompts: create topic labels based on representative documents and keywords (Hugging Face Transformers, OpenAI GPT, co:here, LangChain)

BERTopic: Topic Modeling Variations

- Topic Distributions: approximate topic distributions per document when using a hard-clustering approach
- Topics per Class (Category): extract topic representations for each class or category of interest from the topic model
- Dynamic Topic Modeling: analyze how the representation of a topic changes over time
- Hierarchical Topic Modeling: obtain insights into which topics are similar and sub-topics that may exist in the data
- Online Topic Modeling: continue updating the topic model with new data
- Semi-supervised Topic Modeling: steer dimensionality reduction of document embeddings into a space close to the topic labels for some or all documents
- Guided (Seeded) Topic Modeling: predefined keywords or phrases for the topic model to converge to by comparing document embeddings with seeded topic embeddings
- Supervised Topic Modeling: if topic labels are already known, discover relationships between documents and topics
- Manual Topic Modeling: find topic representations for document topic labels that are already known and use other topic modeling variations with this model

Neural Topic Modeling - Customizations

Stopword Removal

- Input level: remove references to reactor sites and parent companies, to prevent formation of clusters around large sites/companies with different underlying safety issues
- Topic representation level: remove generic nuclear terms used in NRC text that are not indicative of safety issues, to create topic representations that are specific and insightful

Custom Topic Representations

- Representations from BERTopic often contained incomplete terms and were not as insightful or intuitive for analysts
- Custom topic representations were developed with input from NRC staff:
  - Vocabulary: curated list of NRC abbreviations, full forms, and failure modes of reactor systems and components
  - Key Phrases: automatically extracted from inspection report item introductions
  - Vocabulary + Key Phrases: combined list of vocabulary and automatically extracted key phrases

Neural Topic Modeling - Extensions

Topic Reduction

- The number of discovered topics can range from 10s to 100s depending on the parameters of BERTopic
- Topic reduction techniques were explored to merge similar topics:
  - Manual: reduce to a specified number of topics by iteratively merging similar topic representations
  - Automatic: cluster topic representations that are similar, leaving outliers as standalone topics

Outlier Reduction

- Nearly 1/3 of the documents are considered outliers by the clustering algorithm
- Outlier reduction techniques were explored to assign outlier documents to existing topics:
  - Topic probability: assign to the most probable topic according to the clustering algorithm
  - Topic distribution: assign to the most frequently discussed topic in the document
  - c-TF-IDF: assign to the topic with the most similar topic representation as the document
  - Embeddings: assign to the topic with the most similar embedding as the document
- Topic representations were updated after outlier assignment

Topic Modeling Parameters: BERTopic Experiments

Components of Topic Modeling

1. Topic Modeling Input (select the text that will be used in unsupervised learning; 15 tested, 3 chosen):
- Item Introduction (full text)
- Item Introduction Summary (Pegasus-cnn-dailymail model)
- Item Introduction Key Phrases (KeyphraseVectorizer + Guided KeyBERT with a custom vocabulary of 1,411 abbreviations, full forms, and failure modes)

2. Topic Modeling Parameters:
- Embedding (create a mathematical representation of the document; 5 tested, 1 chosen): all-mpnet-base-v2, xlnet-base-cased, SPECTER, multi-qa-MiniLM-L6-dot-v1, all-MiniLM-L6-v2; chosen: all-MiniLM-L6-v2
- Dimension Reduction (reduce the number of parameters from hundreds to dozens; 10 tested, 1 chosen): UMAP configurations with N (neighbors) = 5, 10, 15, 20 and C (components) = 5, 10, 50; chosen: UMAP with N = 15, C = 5
- Clustering (select parameters for unsupervised clustering; 4 tested, 1 chosen): HDBSCAN with a minimum of 10, 20, 40, or 60 documents per cluster; chosen: 20
- Tokenizer & Vectorizer (split up text and count occurrences of the tokens; 3 tested, 1 chosen): uni-grams, uni + bi-grams, uni + bi + tri-grams; chosen: uni + bi + tri-grams
- Weighting (calculate importance of terms; 1 chosen): TF-IDF

3. Topic Representation (present the cluster themes in analyst-friendly terms; 5 tested, 2 chosen):
- MMR (diversity = 0.6)
- MMR + POS (NOUN, PROPN, ADJ-NOUN, ADJ-PROPN)
- KeyBERT-inspired
- Vocabulary: 1,411 abbreviations + full forms + failure modes (TF-IDF on input text in each topic cluster)
- Key Phrases: 66,325 words/phrases extracted from Item Introductions using KeyphraseVectorizer + Guided KeyBERT with the 1,411-term vocabulary (string matching on full item introductions in each topic cluster)
- Vocabulary + Key Phrases: combined list (67,402 words/phrases)
- Chosen: MMR + POS and Vocabulary + Key Phrases

Selected configuration: Item Introduction full text (plus the Pegasus-cnn-dailymail summary and Guided KeyBERT + KeyphraseVectorizers key phrases) as input; all-MiniLM-L6-v2 embeddings; UMAP with N = 15, C = 5; HDBSCAN with a minimum of 20 documents per cluster; uni + bi + tri-grams; TF-IDF weighting; MMR + POS and Vocabulary + Key Phrase representations.

Results

Input Variation

[Figure: topic clusters compared across the Item Intro Full Text, Summary, and Key Phrase inputs]

Stopword Removal

[Figure: topic clusters before and after stop-word removal]

Topic Representation Variation

Outlier Reduction

[Figure: Topic 41 (MMR-POS representation) and Topic 32 (Vocab + Key Phrases representation), before and after outlier reduction]

Potential Use for NRC

  • Identified 5 issues related to improper calibration and maintenance of radiation monitoring and dose assessment equipment that impact emergency plan actions:

- Waterford, 2011-2022 (2 events)
- Vogtle, 2019
- Fermi, 2016
- Wolf Creek, 2013

  • The OpE COMM identifies opportunities to detect these issues under Inspection Procedure (IP) 71124.05, Radiation Monitoring Instrumentation, and through emergency drill observations, plant modifications, or surveillance test reviews

Benchmark with an OpE COMM: Radiation Monitoring Issues Impacting Licensee Emergency Plans

- OpE identified 4 findings that exhibited related safety issues
- The clustering approach placed 3 of the 4 in the same cluster and the 4th in a similar cluster

Conclusion

Machine learning, using cloud-agnostic, notebook-based implementations, can identify relevant safety clusters that group together safety-related inspection findings.


Applicability to ROP (Jason Carneal)

The NRC is attempting to increase the use of data in its decision-making processes.

Most NRC data is unstructured free text.

Available structured data is not consistent across NRC IT systems.

These tools can be used to classify / summarize NRC documents by safety topic, root cause, 10 CFR references, safety systems, etc.

Utilizing these techniques, NRC staff could have efficient access to more information to support their activities.

How do These Topics Relate to the ROP?


- Reduce manual effort to get relevant data
- Eliminate repetitive manual reports on popular topics
- Provide tools for inspectors, NRC staff, and management to review data of interest
- Lower the bar of access to data for both internal and external users
- Increase the chance that we and others can identify issues early
- Consolidate and democratize access to sparse and difficult-to-access data
- Deploy tools that allow users to explore data on their own
- Reveal trends and insights previously difficult to ascertain
- Support proactive use of NRC operating experience data
- Facilitate data-driven decisions at NRC

Consolidation of Deployed Products

- Website portal for NRC users, linking to the OpE Hub and deployed products
- One-stop shop for all OpE products
- Easy to navigate
- Facilitates user interaction and support

ROP Use Case: Development of Advanced Operating Experience Search Tools

Objectives:

- Build advanced operating experience search tools that leverage machine learning and natural language processing
- Automate certain aspects of the operating experience workflow
- Provide expanded capabilities for data analysis

Userbase: NRC inspectors, NRC technical staff, NRC management

Training set: historical OpE Clearinghouse data, findings data, industry data, incoming OpE documents (e.g., licensee event reports)

AI algorithms: unsupervised learning, natural language processing, summarization

Products: advanced OpE trending, communication, and search tools; NRC-customized models