ML22140A218
T1-transcript
Issue date: 03/08/2022
From: Office of Nuclear Reactor Regulation

UNITED STATES OF AMERICA NUCLEAR REGULATORY COMMISSION

+ + + + +

34TH REGULATORY INFORMATION CONFERENCE (RIC)

+ + + + +

TECHNICAL SESSION - T1 AM I A ROBOT?

HOW ARTIFICIAL INTELLIGENCE AND MACHINE LEARNING ARE IMPACTING THE NRC AND NUCLEAR INDUSTRY

+ + + + +

TUESDAY, MARCH 8, 2022

+ + + + +

The Technical Session met via Video-Teleconference, at 1:00 p.m. EST, Theresa Lalain, Deputy Director, Division of Systems Analysis, Office of Nuclear Regulatory Research, presiding.

PRESENT:

THERESA LALAIN, Deputy Director, Division of Systems Analysis, RES/NRC
GENE KELLY, Senior Manager, Constellation Generation
ALINE DES CLOIZEAUX, Nuclear Power Division Director, Department of Nuclear Energy, International Atomic Energy Agency
BENJAMIN SCHUMEG, Software Quality Lead, U.S. Army Futures Command, DEVCOM Armaments Center, U.S. Department of the Army
LUIS BETANCOURT, Chief, Accident Analysis Branch, Division of Systems Analysis, RES/NRC
MATTHEW DENNIS, Reactor Systems Engineer/Data Scientist, Accident Analysis Branch, Division of Systems Analysis, RES/NRC

CONTENTS

Introduction - Theresa Lalain
Stay in Your Lane, Dude: Automating Industry Processes Using AI/ML - Gene Kelly, Senior Manager, Constellation Energy Generation
AI for Nuclear Energy - Aline des Cloizeaux, Director, Division of Nuclear Power, Department of Nuclear Energy, IAEA
U.S. Army Combat Capabilities Development Command Armaments Center: Analysis of Artificial Intelligence and Machine Learning Impacts to Army Safety and System Assurance - Benjamin Schumeg, Software Quality Lead, U.S. Army Futures Command, DEVCOM Armaments Center, Department of the Army
Increasing NRC Readiness in Artificial Intelligence Decision-Making - Luis Betancourt, P.E., Branch Chief Champion for Artificial Intelligence, NRC Office of Nuclear Regulatory Research
Question and Answer Period

Proceedings (1:00 p.m.)

MS. LALAIN: Hello, and welcome to NRC's RIC 2022. I'm Dr. Theresa Lalain, and today's session is "Am I a Robot?"

Welcome, Mr. Gene Kelly. Mr. Kelly has over 40 years of experience in the nuclear industry, including design, analysis, and licensing.

He's a senior manager in Risk Management for Constellation Generation, responsible for risk-informed initiatives across the Constellation fleet.

He was also the technical lead responsible for relicensing of the Limerick Nuclear Station, managed engineering programs and designs at Limerick, and worked previously with the NRC as a branch chief and senior resident inspector.

Mr. Kelly holds a bachelor's degree in Physics from Villanova and a master's degree in Mechanical Engineering from the University of Pennsylvania.

Welcome, Ms. Aline des Cloizeaux. She was recently appointed Division Director of Nuclear Power in the Department of Nuclear Energy of the IAEA.

Ms. des Cloizeaux has extensive experience as a program director of several new-build projects. She managed large investment projects for conversion and enrichment and for facilities such as the Flamanville 3 EPR, and a portfolio of nuclear civil and equipment activities, including SMR development.

She is also engaged in gender balance and diversity actions, notably as president of WiN (Women in Nuclear) France, and is an active member of WiN Global.

Ms. des Cloizeaux holds a master's degree in Science and Engineering Technology from the École Polytechnique, a master's degree in Civil Engineering Technology from the École Nationale des Ponts et Chaussées, and an MBA from the Collège des Ingénieurs.

Welcome, Mr. Ben Schumeg. Mr. Schumeg is the Software Quality Lead in the Quality Engineering and System Assurance Directorate of the U.S. Army Futures Command, DEVCOM Armaments Center, in the U.S. Department of the Army.

He leads research in Test and Evaluation and Verification and Validation capabilities for artificial intelligence, machine learning, automation, and other technologies, and assists the Quality Engineering and System Assurance Directorate in developing policies and procedures to be used by the Armaments Center.

He currently leads the Army AI Software Safety Subgroup focused on the Test and Evaluation and Verification and Validation of AI systems and data.

Mr. Schumeg also spent a year with the Safety and Mission Assurance Office at NASA's Johnson Space Center assisting in software quality assurance for commercial visiting vehicles to the International Space Station.

He holds a bachelor's degree in Computer Engineering from the Pennsylvania State University, and a master's degree in Computer Engineering from the Stevens Institute of Technology.

And welcome, Mr. Luis Betancourt, the chief of the Accident Analysis Branch in the U.S. Nuclear Regulatory Commission's Office of Nuclear Regulatory Research.

Mr. Betancourt leads highly skilled data scientists in developing the NRC's Artificial Intelligence (AI) Strategic Plan to enable the safe and secure use of AI in nuclear facilities and accelerate AI utilization across the NRC.

Mr. Betancourt joined the NRC in 2008 as a digital instrumentation and controls engineer in Research.

Since that time, he's held several positions, including technical assistant for NRR, acting chief of the Instrumentation, Controls and Electronics Engineering Branch, instrumentation and controls engineer, and new reactor project manager.

Throughout his career, he's been a key proponent of Science, Technology, Engineering and Mathematics education and continues to volunteer and represent the Agency in multiple annual youth outreach events in the Washington, D.C. area.

Before joining the NRC, he worked as a control engineer for G.E. Aviation and a new products engineer at Stryker Endoscopy.

Mr. Betancourt has a B.S. in Electrical Engineering from the University of Puerto Rico and a professional certificate in public sector leadership from Cornell University.

He's a senior member of the Institute of Electrical and Electronics Engineers and a registered professional engineer in the State of Maryland.

With that, I welcome all of our presenters, and now we'll start our briefings with Mr. Gene Kelly's presentation, Stay In Your Lane, Dude.

MR. KELLY: Thank you, Teri, and good afternoon, everyone. I'm very honored to be on this panel with an excellent group of panelists and experts in this area.

And, you know, what I'm hoping to share with you today, as we put the slides up, are some of the lessons learned that we've garnered here at Exelon -- or at Constellation Energy now -- as we've deployed some of these new technologies in artificial intelligence. I'm going to share those lessons learned with you here.

Next slide, please.

Now, you're probably wondering why I've chosen this picture. It turns out I was watching one of my favorite movies, The Big Lebowski, with Jeff Bridges, John Goodman, and Steve Buscemi, and, you know, I happened to be talking to one of our project experts and leads.

And he had been driving home in his new car, and it was a very difficult trip up I-95, and it was raining very heavily.

He couldn't see well, and he said that, you know, the technology in the car now enabled him to stay in the lane even though he could hardly see the road.

And it occurred to me that, you know, in the theme of this conference, there is concern sometimes that we'll go to full autonomy with artificial intelligence and machine learning, but the reality is, when you look at automotive applications, there are various levels of autonomy, and we're far from a totally autonomous vehicle.

And basically the applications we've developed thus far at Constellation are really intended to keep the users fully engaged and, in essence, keep them in their lane so they can focus on what's important.

And, you know, we're going to walk you through some of the examples here in the subsequent slides. So, that's really the reason for the humor and The Big Lebowski.

Next slide, please. Now, this slide is pretty interesting, and it's sequenced, so I'm going to ask you to bump it a little bit. But, you know, we started out this way, with what you see, with the initial ideas of here's what we're going to do: we were going to go in and automate certain aspects of our corrective action process and our work control process.

And then we sat down and engaged the end users. And, you know, that's really our first and maybe most important lesson: you really find out what problem you need to solve when you sit down and engage the end users, and there's just no substitute for doing this due diligence.

It's -- it takes some time, it takes some effort, but it's worth its weight in gold because it really tells you the problem you really need to solve.

So, if you hit the next button, what you'll see is once we sat down with them -- just click on that slide -- we found out that there were other things that they wanted to add.

And that's when we started to understand what we could really do for them to really kind of reduce the effort and really help them in doing their job every day.

So, if you hit the button again, you'll see this slide kind of fills in as we started to learn more on the left-hand side about, you know, what we were going to do with our corrective screening and prioritization.

And if you hit the button again on the right, we sat down with workweek managers and what we call "cycle managers."

You can hit it again there, and you can see that we eventually filled in the blanks of all the things we want to do. And, you know, we ended up designing 11 different algorithms and models, but this is worth its weight in gold because this is where we really homed in on where the savings are going to be.

Next slide, please. Many times people ask, you know, well, why CAP data -- Corrective Action Process data? And, I mean, it's -- first of all, it's a big data source, right?

We all -- in the nuclear industry we generate a number of condition reports every year, on the order of, you know, 5,000 to 6,000 per site, and it's a big data source, right? It's also an important cornerstone of the NRC's reactor oversight process.

And the way I would term it is that just about everything that happens at a plant that's important is reflected in that CAP data. But you can see from the statistics that we have a scheme for both significance and severity and type, and, you know, there are thankfully very few very significant things that happen that require extensive investigations, and the vast majority of the data, almost 99 percent of it, is of low-level significance.

And really the message on this slide is that our algorithms and what we're doing to automate aspects of the process is going to allow us to focus on the really important conditions, which is where we think our, you know, focus should be.

Next slide, please. I bring this up just because this is an application we've already had in place. This has been very successful. We've had it in place two years now at Constellation.

It's used for our maintenance rule process and we've been able to automatically identify potential maintenance rule functional failures.

The users have provided excellent feedback, and I think it's worth pointing out in that second bullet that the software really isn't making the failure determination, right? All it's doing is flagging those condition reports that are worthy of human review. So, you know, the message here is that the end user is still fully engaged.

And even more so, they're backstopped -- fully backstopped -- because our system engineers and strategic engineers still monitor the day-to-day traffic in that system for their systems and the components in those systems.

And so, you know, this is fully backstopped such that, you know, you're not just totally relying on software.

And, you know, we've gained confidence with this over two years through the continuous feedback from the users.

And lastly, I would just point out that we've biased the software in a way that's more focused on high safety-significant component failures so that we have very few, if any, misses. In fact, our miss rate has been zero for two years.
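Purely as a hedged sketch of that biasing idea (the transcript does not describe Constellation's actual method), one way to push misses toward zero is to choose the decision threshold from validation data so that every known safety-significant failure would have been flagged, accepting extra false positives for humans to screen out; the margin value is a placeholder:

    import numpy as np

    def pick_threshold(val_probs: np.ndarray, val_labels: np.ndarray) -> float:
        """val_probs: model confidence for each validation report;
        val_labels: 1 where the report was a confirmed functional failure.
        Returns a threshold at which validation recall on failures is 1.0."""
        failure_probs = val_probs[val_labels == 1]
        # Flag anything scoring at least as high as the least-confident true
        # failure, minus a small safety margin (0.05 is illustrative).
        return max(float(failure_probs.min()) - 0.05, 0.0)

    threshold = pick_threshold(np.array([0.9, 0.4, 0.2]), np.array([1, 1, 0]))
    print(threshold)  # 0.35 -- reports scoring >= this go to human review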

So, we think this has been very successful and it's -- and the key is we've now built subsequent applications based on this first successful one.

Next slide, please. This slide probably bears some real close looking. And I guess if I were to pick one slide that was the most important in the whole presentation, this is it, because this is the graphical user interface.

This is what the end user sees as a result of the algorithm that we built and it's really awesome.

I don't have the time here to explain all the details, but it's really showing you the confidence values and why certain condition reports are flagged.

It has textual comments to provide the context on how the decision is reached. It shows you what are called "wordgrams," which is how the artificial neural networks are built.

And finally, you know, you have to revisit this; you can't just walk away from it after you build it, because you may have procedure or rule changes in your process, your performance data may change, the plant may change. So, it's really important here that humans continue to validate the model's predictions. And, again, the time with the end users is very well spent to develop that graphical user interface.
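As a minimal illustration of that "revisit it" point (one possible monitoring check, not the presenter's actual practice), one could watch the model's flag rate for drift and route a material shift to human revalidation; the bounds below are placeholders:

    # One possible drift check: compare the recent flag rate to the historical
    # rate; a large shift suggests procedures, plant data, or report writing
    # styles have changed and the model needs human revalidation or retraining.
    def drift_alert(historical_rate: float, recent_flags: int, recent_total: int) -> bool:
        recent_rate = recent_flags / max(recent_total, 1)
        # The 2x / 0.5x bounds are illustrative placeholders, not tuned values.
        return recent_rate > 2.0 * historical_rate or recent_rate < 0.5 * historical_rate

    if drift_alert(historical_rate=0.12, recent_flags=40, recent_total=120):
        print("Flag rate shifted materially -- schedule model revalidation")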

Next slide, please. Just a few words here about the business case: anybody who's involved in any of these innovations knows you have to make the business case.

I would point out that our industry has many processes, so there are lots of opportunities to apply these technologies to those processes.

And, you know, we see that we can improve data quality, we can improve our organizational decision-making, and we can also free up employee bandwidth.

I think one of the Commissioners talked about that this morning, but, you know, particularly for us as a new company that has just split and is getting into new areas, you want to be able to deploy your resources and your people where the new priorities and work are.

So, this is really going to give us the opportunity to do that, and probably one of the most important bullets here is that this is an opportunity for us to eliminate low-value work.

We talk about that a lot in our workplaces. It's easy to say, it's hard to do and it's hard to let go, but this has really given us a golden opportunity to eliminate low-value work.

Next slide, please. And I should say as we go to the next one here that, you know, the key message from that last slide is that it's really helping us to focus on what's important.

And if there's any one theme throughout this whole presentation, that's the one I would continue to reemphasize, is that this technology is helping us to focus on what's really important.

We have worked and collaborated with the Department of Energy and Idaho National Labs, and what we're finding -- and it was a surprise to me; I'm not a data scientist -- is that there are a variety of methods and all sorts of approaches and hybrid approaches, supervised and unsupervised, and what we're finding is literally what the slide says: you know, one size doesn't fit all.

And, you know, I love this quote from the article. I've read a lot here and in the journey over the last year or so, but, you know, really the algorithms you're going to pick and the techniques you're going to pick are going to depend upon the kind of data you're working with and the problem you want to solve and, you know, what you want to get to.

So, the bottom line is -- another lesson learned we've had is, as you get into these, you'll find that there are many ways to do this; it's not just one or two approaches. So, that's an interesting lesson we've learned here thus far.

Next slide, please. So, finally, you know, where are we headed, you know? And I guess I would point out that with each successive application we've done, we've learned a little more and we've built upon it.

So, that first one with Maintenance Rule Functional Failure has been pretty successful and we're going to build on that with the next two.

We're going to start the pilots for the corrective action and the new work screening here later this month, and then we're going to set our sights on some other processes.

And, like I say, there are a lot of processes that you can aim this at, but, when you read the literature, integrating this into your systems and your processes is probably one of the biggest challenges.

So, you know, we're going to continue to look at additional areas, we have a lot of good ideas on where we can apply it, but we start first with small things and then work up from there.

Next slide, please. And so, you know, I guess I'd end today by sharing with you a feeling that's been with me the whole time I've been involved with this, for the better part of a year or so, and that is, when I think about artificial intelligence and machine learning, it's really not a matter of if, it's only a matter of when. I think we're all going to be there.

And, you know, the picture here of course is to say that it's probably only a matter of when we're going to be driving autonomous vehicles as well, you know, but I really do think that this technology allows us to really focus on what's important and, boy, that's just so valuable in our business for safety.

And the second bullet is very fascinating to me, but, you know, a lot of us in our companies struggle or have the challenge of, you know, knowledge retention and retaining tribal knowledge as people leave and retire and new people come in.

And, you know, the use of this, it gives you a solution, I think, in that regard, in that you can continue to make this algorithm smarter and it retains the wisdom.

And so, you know, perhaps there's a solution there for all of us on, you know, how to, you know, solve the knowledge retention issues as well in various processes and, again, you know, there's probably the opportunity here for a very powerful industry outcome.

And as one of the DOE directors, Dr. Curtis Smith, has said to me -- and I think he aptly described AI and ML -- it's "the new math."

So, you know, with that, I think I'll stop, and thank you, Teri. I'm done with my presentation.

MS. LALAIN: Alright. Thank you, Gene.

Our next panelist is Ms. Aline des Cloizeaux with the presentation AI For Nuclear Energy.

MS. DES CLOIZEAUX: Okay. Thank you, Teri. So, do you see my presentation up?

So, I am very honored to be part of this panel today. I am director of the Division of Nuclear Power in the IAEA Department of Nuclear Energy, and it's been our mission in the agency to share knowledge among all our member states about new technologies, to enable the development of these technologies and to define the necessary conditions, and artificial intelligence is really part of our task.

So, I will -- next slide, please, yes.

So, I will tell you where we are today because it's quite a long journey.

Well, this slide shows you, in a broad view, what artificial intelligence is, in common language.

So, as a general matter, it leverages computers and machines to mimic the problem-solving and decision-making capabilities of the human mind.

And so, where can we apply this in the nuclear industry? It's a wide field, as you can see on this slide.

So, regarding machine learning and deep learning, which are on the top left part of the slide, we can support predictive analysis. For example, on nuclear power plants we can use that to improve modeling and simulation capabilities, as well as the deployment of digital twins, by adding simulation to these twins.

Another part is natural language processing, which is a branch that enables machines to understand human language. We can use that in support of classification, translation, and data extraction. For example, we can use it in the analysis of specified requirements for nuclear power plants.

It's a field where quality assurance can benefit, for example, by ensuring the product or service meets the specified requirements through natural language processing techniques.
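As a toy illustration of that use of natural language processing (not an IAEA tool or methodology), a requirement-screening check might flag statements that lack binding language or contain wording that is hard to verify; the word lists below are hypothetical:

    import re

    # Hypothetical word lists; real requirement-quality rules are much richer.
    AMBIGUOUS = re.compile(
        r"\b(as appropriate|as necessary|adequate|sufficient|user-friendly)\b", re.I)
    BINDING = re.compile(r"\bshall\b", re.I)

    def review_requirement(text: str) -> list[str]:
        """Return a list of findings for one requirement statement."""
        findings = []
        if not BINDING.search(text):
            findings.append("no binding 'shall' statement")
        if AMBIGUOUS.search(text):
            findings.append("ambiguous, hard-to-verify wording")
        return findings

    print(review_requirement("The pump shall start within an adequate time."))
    # -> ['ambiguous, hard-to-verify wording']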

Another field is expert systems, which emulate the decision-making ability of a human expert. They can be used for knowledge representation, for the generation or processing of orders, particularly for diagnosis, and this can have wide application to nuclear safety.

If we go to technologies like computer vision, these are also quite interesting technologies to extract meaningful information from digital images.

We all have in mind the images coming from regular inspections, for example, and computer vision can provide insights that would be missed by human, manual analysis.

Automation is not really -- and robotics is not really a new technology; however, these techniques can be really enhanced by artificial intelligence, for example, by using computer vision technologies.

And last, but not least, all these basic algorithms could also be used for the design and optimization of nuclear reactor cores. So, this is quite a broad view.

Next slide. And now, I will go a little bit deeper in what we do in the IAEA. So, next slide.

So, we have had several technical committees -- working groups and technical committees -- and this slide shows you where we are: what is the state of the art in AI and where it is applied.

And this is really drawn from the written experience of our experts participating in these technical meetings.

So, as I said, one of the first fields is automation, because automated processes can really address the human factor in work activities -- nuclear activities. Automation increases reliability. It also reduces the time of operations.

Optimization is also an area where we can optimize complex processes -- for example, plans and strategies for inventory management and scheduling. So, it can help to process a lot of data, and it's also used in building information modeling and for verification and validation.

Another part -- another field where we also see many applications is analytics, including model validation for advanced computer simulation.

And, as I said at the beginning, it's used in digital twin applications. And another part is prediction and prognostics. By looking at events, we can reduce failures, or at least detect failures in advance, assess current asset conditions and, for example, the remaining useful life of components.
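To make the prognostics idea concrete, here is a deliberately simple sketch: estimate the remaining useful life of a component by extrapolating a linear trend in a monitored wear indicator. Real prognostics models are far richer; the wear signal and failure level here are hypothetical:

    import numpy as np

    def estimate_rul(hours: np.ndarray, wear: np.ndarray, failure_level: float) -> float:
        """Fit a linear degradation trend and extrapolate to the level at
        which the component is assumed to fail; returns hours remaining."""
        slope, intercept = np.polyfit(hours, wear, 1)
        if slope <= 0:
            return float("inf")  # no measurable degradation trend
        hours_at_failure = (failure_level - intercept) / slope
        return max(hours_at_failure - float(hours[-1]), 0.0)

    rul = estimate_rul(np.array([0.0, 500.0, 1000.0]),
                       np.array([0.1, 0.2, 0.3]), failure_level=1.0)
    print(f"Estimated remaining useful life: {rul:.0f} hours")  # ~3500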

And all of these insights help us extract, choose, and use data from multiple knowledge sources, collected from thousands of years of operating experience and massive libraries of scientific and validation experience.

So, all of these techniques are used and are now more and more commonly deployed; however -- next slide, please -- we all know that there are deployment challenges.

This is, I think, today's topic. First of all, because the data -- or the results of AI -- can be hard to interpret.

We don't -- there is a question of trust, of robustness, of the performance of AI, and we cannot use the traditional verification and validation approaches for AI because of its quite limited transparency. And high-level regulatory safety assessment principles and guidance may need to be developed; they are not yet really recognized worldwide.

And of course, all the security and cybersecurity issues -- with data, with adversarial attacks -- are already there, but we also have an increased cybersecurity risk from using artificial intelligence, also due to the limited transparency of what's in the machine learning tools.

So, what's next? Can you change the slide, please. Yes. So, we work on different aspects. First, on less mature technology. That's what we call technology development. And so we need, first of all, development of the technology before applying it to nuclear power plants; that's our view, at least.

We have also categorized some technologies which are in the deployment stage -- for example, the automated analysis of nondestructive examination. It's becoming more and more commonly used, as are predictive maintenance procedures.

And then there is also a field where we work on technology enabling: developing legal regulation for these applications, and developing common requirement databases and common requirements that are understandable by AI, for use in optimization, simplification, and specification, because today the way requirements are written depends mainly on the user and the operator.

And we also have to develop algorithms that are accessible -- so, giving more transparency to the algorithms and, thus, enabling artificial intelligence.

Next slide -- next two slides. Yeah.

So, what do we do? Last year, we had a big technical meeting on artificial intelligence for nuclear.

So, you can see that there are many fields. It's not only nuclear power, but it also relates to ethics, food and agriculture and nuclear physics.

Next slide. We are also part of the International Telecommunication Union of the United Nations and we participate in webinars like this one with AI for food or AI for atoms.

Next slide, please. And every year there is a publication from the ITU, not specific only to nuclear, but there we have quite a few examples from members, and we share the development of AI for nuclear technology and applications. So, it's also accessible on the internet.

And before finishing, I would like also to mention one point that, for me, is very important, especially on this day, International Women's Day -- and it relates also to ethics -- which is that developers are mainly men, and in computer and IT science we are lacking women.

So, if we could do everything to attract women, that would be very good, because I think that diversity in developing algorithms and in looking at the requirements is very important, to add something which is very close to the human brain and to add to all the diversity.

And I would like, Teri, to offer you this, because the question is Am I a Robot? So, I don't know if I am a robot, but if I were one, I would choose this image, this nice picture, and I think that we should share it to attract more young girls into our domain. Thank you very much.

MS. LALAIN: Thank you, Aline. A reminder, you can submit your questions for our Q&A session. So, if you have any questions for our speakers, please make sure to submit those.

Our next panelist is Mr. Benjamin Schumeg with the presentation U.S. Army Combat Capabilities Development Command Armaments Center.

Over to you, Ben.

MR. SCHUMEG: Thank you. Good morning and good afternoon, everyone. As Dr. Lalain mentioned, my name is Ben Schumeg.

I am representing the AFC Armaments Center, DEVCOM Armaments Center, specifically our Quality Engineering and System Assurance group.

So, also thank you for having me today.

I know I'm, maybe I'll say, the slight oddball in the group here as I'm more from the DoD, but hopefully kind of going through this presentation I can give you an idea as to why we kind of feel that it's important that we kind of talk together and work together on some of these challenges with artificial intelligence, especially when it comes to the safety of those systems.

Next slide, please. So, this first slide kind of talks a little bit about some of the reasons why the DoD specifically has been very, kind of, aware and tracking what's going on with artificial intelligence and especially some of those challenges.

Probably the biggest thing that came out was the NSCAI, or the National Security Commission on Artificial Intelligence, which was, I believe, congressionally led and congressionally funded research into what artificial intelligence means not only for the DoD, but of course for the federal government.

And that report really pointed out many key areas that need to be followed, and I kind of highlighted a couple here that really impact myself as part of our Quality Engineering Group thinking about data science, verification and validation, reliability, safety, and of course human system integration.

A lot of these other reports that you can see on the screen also talk to these very same aspects, especially safety, you know, one of the reasons I am here today.

And I wanted to point out that last one on that bottom right, it's a little hard to see, but that is the Responsible AI memo that was released by the Honorable Secretary Hicks concerning how we are going to ensure that the systems that are developed by the DoD maintain those five ethical principles.

Next slide, please. So, just kind of a little bit about why I'm here and who I am. So, Armaments Center is the primary, I'll say, development organization/development command that's looking at conventional weapon systems and ammunition for the Army.

So, you know, as with any kind of new, novel technology, there are ways that AI and ML could revolutionize the way that these technologies are being developed by Armaments Center; but of course, you know, that brings challenges and it brings things that we want to ensure that we're looking at.

So, some of these challenges, of course: we're looking at what it means for continuous learning, what it means for these very complex statistical algorithms that are going to be used, and how we are going to ensure configuration management.

What kind of new methods or procedures or processes are we going to have to implement to make sure that we can assure -- and I'll talk about that in a second -- assure that what we are developing meets the intent and the needs of what we are developing it for.

Making sure to look at different sensors, different inputs, and how this data -- because, you know, you'll see data is very critical from a machine learning perspective -- how can we assure that it is unbiased, that it's correct, that it's accurate, that it meets the context of the environment that it's being used in, while still maintaining the reliable, ethical, safe, and robust capabilities of the system.

So, what the Armaments Center did is we looked at what's called the Army Materiel Release Process, which is the final gate that a system must go through before it could be deployed and be utilized out in the field.

Next slide. And so, I'll kind of briefly just talk about that for just a second. We want to ensure that anything that's released by the DoD meets these -- what we call the three S's, Safe, Suitable and Supportable.

I won't go into each question here, but, as you can imagine, safety being one of our top priorities, so we have a lot of things and a lot of stakeholders and a lot of different milestones, documentation, deliverables, things like that that have to be met. And those are listed on the left side.

Suitable, you know, is it the right system? Was it developed correctly? Does it meet verification requirements? Does it meet validation requirements?

So, we have a lot of independent testing that takes place, a lot of safety assessments that will take place, to make sure that that system meets that suitability requirement.

And lastly, supportability. Can the system be supported in the field? Do we have the right logistics in place? Do we have the right fielding plans and the right training for any sort of operators of any of our systems?

So, this applies to any system, you know. Any system that's being released by the Army will go through our office, and it must meet all of these requirements before it can be, as they say, kind of put in the hands of a soldier.

Next slide, please. So, I wanted to touch just briefly on one of those aspects, you know.

We're working a lot of different things and I'll show you that in a second, but I wanted to touch on safety because I feel that that's probably where we'll have a lot of cross-collaboration and a lot of good, technical cross-discussions with the NRC and their partners.

So, I think it goes without saying that the safety challenges are significant when you're thinking of AI and ML systems.

There's a lot of complexity to that design, you know. There could be changing and differing and off-nominal environments; how we're looking at the cognitive interaction of the human in the loop of the system; and what kind of perceptions they are going to have about different or possibly unexpected behavior of that system.

And so, we're looking at how our levels of rigor for different software-intensive systems need to be changed.

So, some of those things that we're looking at: different safety methodologies, different safety precepts, ways to adjust or recommend new ways to do a functional hazard analysis, general safety requirements, what artifacts might be needed, and sort of identifying AI safety-critical functions and any of the data that leads to that function, be it as part of design or as part of what we call inference, when the actual model is active.

Of course understanding the concept of operations, environments, understanding those enabling technologies and what kind of autonomy may or may not be involved in that system.

Taking all that in and thinking about what kind of levels of rigor must take place, what kind of metrics and measures must be developed and what artifacts can be delivered.

Lastly, looking at both the hazard mitigation guidance as well as any sort of adjustment to kind of our safety risk assessment approaches for AI, the different levels of autonomy, LORs, kind of summarizing it into what we believe would be good practices, possible regulations or policy changes.

That's why I have that little blurb there about MIL Standard 882-Echo. That is our safety standard that we follow within the Army, which is undergoing a revision, and we plan to submit a lot of suggested changes and to work with that group to make sure that any of the needs that come from AI and ML technologies are appropriately included in it.

Next slide, and I believe this is my last slide. So, you know, I just touched on that one point about safety, but we're looking at a lot of different things at Armaments Center.

We're reviewing a lot of the policies and identifying the gaps in those policies, you know. We have many, many Army regulations, DoD directives, DoD instructions.

So, kind of doing our analysis of that to see where we are and where we think things could be made better.

Looking at data science, you know. As I kind of said already, AI and ML -- or ML specifically -- is critically dependent on data science: making sure you have the right data, making sure you analyze that data, and understanding that data, as it will be developing that system for you.

Verification and validation of course goes without saying. Very important, very critical part of any system development.

So, we want to ensure that whatever methods might need to be adjusted, created, or developed in collaboration with developing organizations, that is done as well.

Safety, you know, I spoke to that already a little bit, but, again, you know, trying to ensure that the systems that are developed are still safe and remain appropriate for their use.

Materiel release, that is kind of, as I mentioned, our final gate, where we're kind of culminating a lot of these data points that you just saw into that materiel release that can be reviewed by stakeholders and by all these different panelists, very similar to today, to ensure that that system is good to go for deployment.

And lastly, it kind of brings us to trust, you know. We want to have that trust and what we're kind of calling assured trust in that system, not overtrusting and not undertrusting, but finding that right level of trust through things like human system integration, what we call soldier touch points and things like that to make sure that that system is going to be used the way that it was intended to be used and that the soldier or operator trusts it and will abide by what they need to do to utilize it.

And I believe that was my last slide, so thank you for your time.

MS. LALAIN: Thank you, Ben.

Our next panelist is Mr. Luis Betancourt with the presentation Increasing NRC Readiness in Artificial Intelligence Decision-Making.

Over to you, Luis.

MR. BETANCOURT: Thank you, Dr. Lalain.

Good morning and good afternoon, everyone. As Dr. Lalain said, my name is Luis Betancourt and I am the Branch Chief Champion for Artificial Intelligence.

I am pleased to be here today to discuss what we are doing as an agency to increase our readiness in evaluating AI technologies.

Next slide, please. So, as Dr. Lalain mentioned in her opening remarks, AI is actually one of the fastest growing technologies globally and it's actually the next frontier of technological adoption for the nuclear industry.

It has the potential to transform the industry by providing new and valuable insights into the vast amounts of data generated during the design and operation of a nuclear facility. It offers new opportunities to potentially enhance safety and security, improve operational performance, and potentially implement autonomous control and operation. And we have been seeing that the industry is researching and using these applications to meet future energy demands.

It is critical for us, as an agency, to focus on how these external factors are driving an evolving landscape and growing interest in deploying AI technologies.

So, over the last year, we have been seeing that landscape steadily evolving, and AI is steadily being used in a wide range of nuclear power operations, including what you heard today from Gene, from mining nuclear data for maintenance to understanding core dynamics for more accurate reload planning.

So, we, as an agency, recognize the potential for using data science and AI in regulatory decision-making.

At the end of the day, what we are interested in is understanding the possible regulatory implications of using AI within a nuclear power plant.

So, at the end of the day, what we want to do is to ensure that these technologies are developed safely and securely.

So, we see today, this is an opportunity for us to start shaping the norms and the values to enable the responsible and integral use of AI. So, we, as an agency, we must be prepared to evaluate these technologies.

Next slide, please. So, we, as an agency, we are anticipating that the industry will be deploying AI technologies that may require regulatory review and approval in the next five years and beyond.

As such, we are proactively developing an AI, artificial intelligence, strategic plan to better position the Agency in AI decision-making.

So, the plan currently has goals for AI partnerships, like what you see here today; cultivating an AI-proficient workforce; utilizing AI tools to enhance our NRC processes; and, at the end of the day, assuring our readiness for AI decision-making.

So, we want to use this plan as a tool to increase our regulatory stability and certainty, and the plan will also facilitate communication to enable the staff to provide timely regulatory information to our internal and external stakeholders.

So, while we were developing the plan, we formed an interdisciplinary team of AI subject matter experts across the Agency.

And to increase awareness of AI technology adoption in the industry, we hosted three public workshops in 2021 that basically brought together the nuclear community to discuss the current and future state of AI.

We also initiated dialogs within the nuclear community and with our international counterparts, gaining valuable insights and identifying potential areas of collaboration.

One note, one thing to know: like you heard from Ben, the NRC is not alone when it comes to overseeing the safe and secure deployment of AI. The topics of explainability, trustworthiness, bias, robustness, ethics, security, and risk actually confront any entity that wants to deploy AI technologies in designing and operating a nuclear facility.

So, that's one of the reasons that we are meeting with other government agencies, including the Department of Defense, to identify new partnerships and to leverage their expertise and experience with AI.

Lastly, we are committed to providing opportunities for the public to participate in a meaningful way in our decision-making process.

So, as we continue developing this plan, we plan to solicit comments from the public and feedback from the Advisory Committee on Reactor Safeguards in the summer of 2022.

Next slide, please. As I mentioned earlier, we do recognize the public interest in the potential regulatory implications of AI.

We want to provide opportunities for the public to be heard. That's one of the reasons that we're trying to make our regulation open and transparent in everything that we do.

And to ensure stakeholder engagement, we have built the timeline shown in this slide of our activities for the remainder of the year. So, I do encourage everybody here to participate and provide comments on our plan.

Our team is planning to host an AI workshop in the summer of 2022, both to remain aware of the fast pace of technological adoption of AI in the industry and to communicate with our stakeholders about the NRC's progress in AI activities.

Lastly, our plan is to issue the strategic plan by the fall of 2022, but I want to mention that early coordination, dialog, and preplanning are key for us to increase the regulatory readiness and stability for the industry to be able to deploy these technologies.

As you heard today from one of the Commissioners, we don't want to become a barrier. We want to become an enabler for this technology if the industry decides to move forward with that.

So, early engagement and information exchange are important to support understanding and knowledge, to be able to have that timely deployment and the execution of the end strategy.

Next slide, please. So, in closing, here is our contact information so if you want to reach out to us after the break.

That basically concludes our presentation and I would like to now turn it over to Dr. Lalain so we can commence the Q&A section.

So, Dr. Lalain, back to you.

MS. LALAIN: Alright. Thank you, Luis.

We're now going to move into the question and answer portion. You can continue to submit questions, so please do so as we chat this afternoon.

So, the first one, Luis, I'm going to hand over to you.

MR. BETANCOURT: Um-hm.

MS. LALAIN: Are you finding any unique skills necessary in the area of AI and data analytics and how are you addressing skill needs?

MR. BETANCOURT: That's a really good question. I think data science is actually a skillset that the Agency really needs to have, but that field of science actually has several subdomains, as you know: we have computer science, mathematics, and statistics.

For data science skills, I think it's important for that person to know a lot about Python or Java, which are basically very common software languages.

One of the things that we're doing as an agency is developing this AI Strategic Plan, and one of the goals that we have is called cultivating an AI-proficient workforce.

And as part of that, what we're trying to identify is the pipeline of data science staff to be able to evaluate an AI technology coming down the road and also to develop AI tools internally to better improve our processes.

And as part of that, we developed a data science qualification plan. The plan basically provides on-the-job training as well as some of the skillsets that we believe our staff needs to be able to evaluate those technologies.

MS. LALAIN: Okay. Thank you, Luis.

Gene, a question for you came in. What happens to the reports that are not worth human review?

MR. KELLY: Yeah. So, the analytic will look at what the probable failures or probable outcomes are that we're looking for, and it will assign a confidence level. And then it will allow the end user to make the call, if you will.

The ones that aren't shown usually have very low confidence, so they're not shown. However, as I mentioned, there are backstop processes that still provide feedback, you know, for example, if we were to have misses.

And what we've learned is that, you know, it's important to have those backstop processes so that if you do have a miss and it's not shown to the end user in that process, you still get the opportunity to understand why you missed it and then go correct the algorithm.

And that's indeed what we've done in our first application with Maintenance Rule Functional Failures and so far we've had zero misses since we've done that, but again you can rely on backstop processes to see those "misses," as they're called.

MS. LALAIN: Great. Thank you.

Okay. Question for Ben. On your slide for Path to Assured AI, I am interested in understanding a bit more about the V&V frameworks for AI/ML. Any suggestions?

MR. SCHUMEG: Sure. So, I will say, of course, V&V of AI systems, I think, is always going to be fraught with challenges, you know, especially when you're talking about, let's say, a machine learning deep neural network.

Understanding what each of those nodes can, you know, achieve, what is being activated and how they impact your final result is going to be challenging.

But some of the things that we're kind of looking at -- let's see here, I kind of jotted a few down -- looking at, you know, modeling and simulation.

I think that's always going to be a factor in the V&V of an AI system, putting it into that simulated environment and trying to see how it reacts.

Concurrently with that, thinking about design of experiments, thinking about Monte Carlo simulations, again putting them through kind of that simulated environment to see how it reacts.

And I should clarify, this is not necessarily just for images, you know. You could do images, you could do classification, linear regression, decision-making -- even, you know, a lot of these different things -- through simulations of data inputs and mapping that to their output.

Something else we're looking at is explainability of AI -- not necessarily as a way to prove how something is working, but as a way to help validate what an AI system might be trying to achieve, or trying to decide; you know, the answer that it's trying to arrive at can give us some guidance as to how it's getting there.

And I think the last thing I'll mention is, kind of, instrumentation of that AI. You know, we may not know exactly why, let's say, a node has activated in a deep neural network, but maybe we can compare that to other nodes, or maybe we can kind of compare it to other similar systems that may not use AI, to try to understand how those lower-level functions are impacting that decision, to give us that confidence during a V&V assessment.

MS. LALAIN: Thank you, Ben.

Alright. Next question is for Aline.

How is your organization identifying areas where AI or data analytic approaches are applicable and have the potential for the greatest positive impacts?

MS. DES CLOIZEAUX: Well, as was explained in my presentation, we have a methodology based on organizing technical meetings, where we define a mandate with our member states and then we deploy this method.

So, the technical meeting we organized last year was really setting the course, which was to provide an international prospective forum to discuss cooperation on artificial intelligence application methodologies and tools, and to enable research that has the potential to advance practical review and application. So, it's quite a long title.

And so, through this meeting we are able to invest in safe robots. We also identified our role in the acceleration of AI in the nuclear field, and we of course moved from R&D to technologies that are already deployed.

And so, we include nuclear fusion and nuclear physics, as I have shown on the picture, as well as nuclear power, security, radiation protection, and safeguards in nuclear, because I was speaking more about nuclear power, but AI also applies to all of these domains.

And this AI methodology can have a very positive impact in improving modeling and simulation capabilities. So, that's how we organize ourselves, yes.

MS. LALAIN: Thank you.

MS. DES CLOIZEAUX: And of course, everything is legal as to the information.

MS. LALAIN: Wonderful. Thank you.

Alright, Luis, the next question is for you.

MR. BETANCOURT: Fire away.

MS. LALAIN: How do strategic plans fit in with the NRC's hierarchy of documents and what's next after the strategic plan is released?

MR. BETANCOURT: That's a good question.

So, we are looking at that right now. So, the strategy will be in a report, kind of similar to the rest of the agency's strategy documents.

The strategy itself is not long -- it's 15 pages. However, there's a companion document that we're developing that is called an AI roadmap. And the AI roadmap basically lays out how we're going to be doing this.

And one of the things that we want to do is to start doing some research on AI methodology to have a basis, because the industry, within the last year, mentioned that they are interested in the NRC providing some type of regulatory guide or guidance document.

But in order for us to develop a guidance document, we need to have some type of white paper or technical basis that we can put into that regulatory guidance.

So, after the AI strategic plan, we'll do some research, but, at the same time, we want to keep engaging the industry on what they plan and are potentially deploying, because in order for us to develop guidance, we need a better understanding of where industry is planning to use this.

Is industry interested in critical control? Is industry interested in using AI for safety systems? Depending on what we hear in those discussions, we'll start doing more research, and the idea is for us to be agile. We want to have the framework available in the next five years.

MS. LALAIN: Alright.

Next one is for Gene. I'm going to combine a couple of questions here. So, this is around the CAP tool, whether that's off-the-shelf, and then how your data science team is set up and built around your capabilities.

MR. KELLY: Yeah. So, the tool we're using right now was developed by Jensen Hughes.

Jensen Hughes is a company we've worked with for many years at Constellation.

They've done a lot of our probabilistic risk assessments and models, and they have great capabilities in the area of AI and ML.

And, again, they started with this first application two years ago, so they had already developed an algorithm, they understood our interfaces with the IT systems and databases and servers, they had relationships with our IT people.

So, they were, in essence, the perfect storm.

So, they've developed this algorithm.

They call it Data Advisor. And we're now starting to look at other applications to use that particular technology.

And we think this has real benefits because we already have contractual situations set up with them. They're very familiar with our programs and processes and procedures.

Many of their engineers hold Constellation qualifications, technical qualifications. So, you know, we find that working with them is very seamless and smooth.

On the second question, I think this goes a long way towards answering that. It would become expensive if you went outside to various vendors, but we're finding that by utilizing them along with our own IT people, it's been very efficient thus far.

But, you know, these are small applications we've started with. We haven't really tried big yet, you know.

If you read some of the literature, they advise against big moonshots, right? You know, take small steps, small bites of the elephant, you know, look to achieve adoption and confidence as you move into the bigger application.

So, for example, as Ben said, we don't have deep learning algorithms yet. Those would present bigger challenges for V&V and things like that, but for right now we're trying to stay small, get some wins, and build on that as we move forward.

So, I'll kind of stop there, Teri, I think, if that answered the question.

MS. LALAIN: Great. Thanks.

So, Ben, can you talk a little bit more about repeatability, especially in the context of AI and ML, and what might be achievable within that framework?

MR. SCHUMEG: Sure. So, from my perspective, repeatability is going to be paramount.

No one -- whether the DoD or the NRC, as one of their customers -- wants a system that they don't feel is going to be repeatable in the way it operates.

So, we are taking the position that whatever system is presented, it has to be repeatable, and we have to be able to prove that to the best of our abilities.

And one of the things I feel we can achieve with AI and ML systems is that if we are able to identify all of the input that a system is going to be receiving when it makes that decision, that will give us a good step towards meeting that repeatability.

We're not going to be -- at least I don't believe we're going to be -- looking at systems that come up with new methods of completing tasks or change the way they work on their own.

We call that online learning, I think -- I don't know if that's an official term -- because that's where you do start to run into those issues of repeatability, if something has been retrained or relearned.

But if you have the ability, for what I'll call a static AI/ML system, to lock down that system and lock down that training, and to truly understand -- and that's the key point -- truly understand the inputs to that system, I believe you can attain that repeatability, and I think we are going to have to get to that achievable state of repeatability.
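
(A minimal sketch of what "locking down" a static system and replaying its inputs might look like in practice. The weights file and toy model are illustrative stand-ins, not an Army or NRC procedure.)

```python
# Sketch: verifying repeatability of a locked-down ("static") AI system.
# Step 1: fingerprint the frozen artifacts so any retraining is detectable.
# Step 2: replay identical inputs and confirm identical outputs.
import hashlib

def sha256_of(path):
    """Hash a locked artifact (weights file, training config, dataset)."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# Stand-in for a frozen weights file written at lock-down time.
with open("locked_weights.txt", "w") as f:
    f.write("0.5,-1.2")
baseline_hash = sha256_of("locked_weights.txt")  # recorded during V&V

def model(x, weights=(0.5, -1.2)):
    """Toy deterministic model standing in for the locked system."""
    return sum(w * v for w, v in zip(weights, x))

def check_repeatable(inputs, runs=3):
    """Replay the same inputs several times; all runs must match exactly."""
    reference = [model(x) for x in inputs]
    return all([model(x) for x in inputs] == reference for _ in range(runs - 1))

# Before trusting an answer: artifacts unchanged AND outputs replay exactly.
assert sha256_of("locked_weights.txt") == baseline_hash, "model was modified"
assert check_repeatable([(1.0, 2.0), (0.0, 3.5)]), "non-repeatable behavior"
print("static system verified repeatable")
```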

If not, then we have to start thinking about risk mitigation, risk assessment, and possibly bounding of system capabilities, to make sure that if it's not going to repeat exactly the way it should, we have hard stops -- we have the ability to bound the system so that even when it isn't repeatable, it still stays within that bound.

So, I would say the objective is a fully repeatable system, but the threshold is repeatable with some guidance and some bounding, in the off chance that we've encountered something that makes it no longer repeatable.
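
(And a sketch of that objective-versus-threshold idea: accept the AI's answer only inside pre-approved bounds, with a hard stop and a fall back otherwise. The limits and fallback here are illustrative assumptions.)

```python
# Sketch: accept the AI's answer only inside pre-approved engineering
# bounds; anything outside triggers a hard stop and a fall back to the
# conventional procedure. Bounds are assumptions set outside the AI.
SAFE_LOW, SAFE_HIGH = 0.0, 100.0

def bounded_decision(ai_output, fallback):
    """Pass the AI output through only if it stays within the band."""
    if SAFE_LOW <= ai_output <= SAFE_HIGH:
        return ai_output, "ai"
    # Hard stop: out-of-bounds output is discarded, not silently clamped.
    return fallback(), "fallback"

value, source = bounded_decision(125.7, fallback=lambda: 50.0)
print(value, source)  # -> 50.0 fallback
```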

MS. LALAIN: So, Aline, a question for you. Does the international environment have unique challenges for AI development and use?

MS. DES CLOIZEAUX: Yes, indeed. There is a lot of AI now in the industrial world, and we have to apply it in the nuclear industry.

And we know that it is out there and coming into our work, especially work linked to the nuclear safety that we are trying to ensure. So, yes, it's a unique challenge, but I would say it's really multiple challenges, because AI covers lots of techniques and lots of applications, and I guess some are easier to use than others.

And really what is important, for me, is to have the kind of framework where, even if step-by-step V&V is not possible, we find the other conditions that are necessary for safe development of AI -- meaning, what are the physical bounds we place around the model, so that we are sure it does not exceed certain limits in its results.

And one part of the challenge also is to have uniform requirements to define the system, because some AI -- at least deep learning machines and things like that -- is in effect building itself while it runs, while it learns.

All of this you cannot examine with the same requirements. And what was said before is true: yes, you can have something repeated and get the same results, but only if you don't change the system inside --

and so, we also have to work to develop internationally recognized standards on how to set all those requirements on the data input to the system, so that it is repeatable not only because we have the same data and the same system, but because, with the same data, we get more or less the same results.

I don't know if I am making myself understood, but it is not only a question of V&V of the internal system; it is also standardization of what the requirements should be, including the data format itself.

MS. LALAIN: Great. Thank you.

Alright. So, we've got a question that's for all the panelists. So, we'll go around on this one and it's your thoughts on cyber.

So, as we work in the area of AI, how do we know that the AI hasn't been cyber compromised?

How do you basically build that trust in the AI, given the cyber landscape?

So, I'm going to start, Gene, with you.

MR. KELLY: Yeah. When I saw the question, my first thought was that, you know, where it's embedded and used is within internal systems that are already cyber protected.

So, you know, it's not as if this is external and separate from the databases and software that Constellation already uses. So, I would say we just rely on the existing cyber protections.

MS. LALAIN: Okay. Ben, your thoughts?

MR. SCHUMEG: Sure. I do agree with Gene. A lot of cyber hardening is going to be dependent on the system, but I think something to keep in mind, which we're looking at as well, is the cybersecurity of your supply chain.

So, as an example, for an AI/ML system, the supply chain could be your data. So, it's not only about the security and cyber resiliency of your development environment, but also about your data.

Now, has the data that's going to be used to train that system been compromised? Has there been an injection of bias or poisoning into the data stream that will be used during training?

I would like to think that we have good cyber assessments and assurance -- again, to Gene's point -- for systems that are actively being used, but something we want to start looking at is the development side, before use: is there enough cybersecurity on the development side to ensure that what we're getting at the end is still a cyber-secure product?
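
(A hedged sketch of two such data supply-chain checks -- an integrity manifest over approved training files, and a coarse screen for label-distribution shift, one crude symptom of poisoning. File handling, labels, and the tolerance are assumptions for illustration.)

```python
# Sketch: (1) an integrity manifest so tampering with approved training
# files is detectable, and (2) a coarse screen for label-distribution
# shift between the approved dataset and what is about to be trained on.
import hashlib
from collections import Counter

def manifest(paths):
    """SHA-256 per training file, recorded when the dataset is approved."""
    out = {}
    for path in paths:
        with open(path, "rb") as f:
            out[path] = hashlib.sha256(f.read()).hexdigest()
    return out

def verify(paths, approved):
    """Fail closed if any training file changed since approval."""
    return manifest(paths) == approved

def label_shift(labels_then, labels_now, tol=0.05):
    """Flag classes whose share of the dataset moved more than tol."""
    n_then, n_now = len(labels_then), len(labels_now)
    then, now = Counter(labels_then), Counter(labels_now)
    return [c for c in then | now
            if abs(now[c] / n_now - then[c] / n_then) > tol]

# A 20-point swing toward "fault" labels would be flagged for review.
print(label_shift(["ok"] * 90 + ["fault"] * 10,
                  ["ok"] * 70 + ["fault"] * 30))  # -> ['ok', 'fault']
```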

MR. BETANCOURT: If I can add to that --

MS. LALAIN: Yes.

MR. BETANCOURT: -- there's one other thing that we are looking at in the AI strategic plan; we're working with our cyber folks to answer that question. It's a hard question for us to answer.

I think the problem that we have right now is that industry has started to do this little by little, integrating it into plant operations, and the question now becomes, okay, how is the system going to be used for plant operations? Is the system going to be doing a lot of the decision-making?

How is that data being used and transmitted to the outside?

Those are the kinds of questions we're asking, along with what the regulatory implications would be, and that is one of the first things that we have to start thinking about.

And I know when we met with ACRS back in the summer, they were concerned about this question as well.

So, it's a hard question to answer at this point, but that's part of what we plan to study so that we can tackle it.

MR. KELLY: Yeah, Teri, if I can add to what Luis said.

MS. LALAIN: Sure.

MR. KELLY: You know, we're not even remotely thinking about, you know, operating systems at the plant or equipment with AI and ML.

Now, that might be down the road and in the future, but I would call that one of those moonshots that, you know, you're advised not to go after too quickly, you know, you start small. So, I mean, right now we're looking at processes and portions of processes and tasks.

And as one of our folks put it actually to me yesterday, you know, we're using it as a decision support tool, right, not a decision-making tool.

So, it's still something where the human has override capability and understands completely, from an explainability perspective, where the result came from. So, we're not at the fully autonomous stage by any stretch yet.

So, I think, you know, you're not going to see that for a while until you first get confidence in the smaller projects.

MS. DES CLOIZEAUX: And, yes, if I may, I have something else. That's why we have the technology for internal use and the technology for deployment, and the technologies for deployment, as Gene said, are helping and supporting a decision, because we don't decide -- or rather, the machine doesn't decide for our staff.

And I also think that we need to define limits, acceptable limits, for the performance of the system. And if the result is outside those limits, then we go back to the old process.

That's also a way to do it. And of course we apply all the cybersecurity, because that's already well known -- data management, protection against attacks, and so on -- but really we are not yet in a mode where we can be truly automated.

MS. LALAIN: Okay. Thank you.

Gene, over to you. How do you see AI and data analytics providing a positive safety benefit for nuclear power plants?

MR. KELLY: Well, that's a common question that we get, and I would just say, simply stated, this is a golden opportunity for us to, first, eliminate low-value work and, second, better focus on what's important or significant. That's really it in a nutshell.

When it comes to CAP screening and prioritization, you want to focus on the more significant conditions, and this enables us to do that quickly and to spend the time to understand those completely, while not completely ignoring the lower-significance items.

And in the area of work screening, work management, and work requests, we're able to look at the higher-priority equipment failures and quickly understand how to code them, how to get them properly sequenced out to the workgroups, and start to order parts. The sooner you fix things like that, the better and safer your plant is.

So, you know, in a nutshell, it's really just enabling us to better focus on what's really important. I think that's the big benefit to safety right now.
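
(For illustration only, a toy version of that significance-based screening: score incoming condition reports so the most significant surface first. The keywords and weights are hypothetical placeholders, not Constellation's Data Advisor algorithm.)

```python
# Toy sketch: weighted keyword hits produce a significance score so
# higher-significance condition reports surface first in the queue.
SIGNIFICANT_TERMS = {"safety": 3.0, "inoperable": 2.5, "leak": 2.0,
                     "failure": 1.5, "trip": 1.5}

def screen(report_text):
    """Crude significance score from weighted keyword hits."""
    text = report_text.lower()
    return sum(w for term, w in SIGNIFICANT_TERMS.items() if term in text)

reports = [
    "Minor housekeeping issue in turbine building",
    "Emergency diesel generator inoperable after failure to start",
    "Small oil leak observed on feedwater pump",
]
# Highest-scoring conditions go to the front of the review queue.
for score, text in sorted(((screen(r), r) for r in reports), reverse=True):
    print(f"{score:4.1f}  {text}")
```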

MS. LALAIN: Great. Thank you.

Luis, I've got a question about how autonomous systems may be used in the decommissioning of nuclear power plants.

MR. BETANCOURT: That's a good question.

I think the person is asking more about -- I'm inferring it's more about the use of drones for inspections.

At the end of the day, we shouldn't be a barrier if industry wants to use that for doing some inspections.

I think it boils down to what is the level of autonomy of that system being used in decommissioning.

Is the system used more for improving operational performance? That doesn't have much bearing on safety, so I don't see that we, as a regulator, would have much of an impact there.

But now, if it is impacting safety systems and autonomy is involved, then, as Ben mentioned, we need that assured trust that the system is able to do what is intended, and that's where the regulatory implications will come in.

At the end of the day, the NRC should not be a barrier; it should be an enabler if industry wants to do that. But we need to have trust and assurance, and if they want to pursue that, we need to have a better understanding of how the system was trained.

Can we trust the system to be fully autonomous, or where is the human in the loop in this case? Those are the things that we need to consider if industry wants to go there.

I don't know if anybody else on the panel wants to comment about that.

MR. SCHUMEG: Yeah, and to drive that home, that's something we definitely have to consider with any of the systems that we develop: how does the human interact with that system, and how is that oversight maintained? We need to ensure trust in the system and ensure that its use still meets the intent of its design, and I think that's going to be very critical moving forward.

MS. LALAIN: Okay. This one is for all the panelists. We've been talking about how AI, at this stage of use, is much more of a decision tool, looking at the data.

So, the questions that have come in are along the theme of when the algorithm could advance to the point where it changes its code on its own -- a model that would learn and further develop -- and how we would go about moving into that area, where the AI may differ from what it was originally programmed or coded to do.

I'll go ahead and start with Gene on thoughts of --

MR. KELLY: Yeah. When I talk to our experts -- and I'm not a data analyst, but I know a lot of them now -- I think we're far away from using deep learning right now.

And deep learning, of course, would be where it's almost fully autonomous: it's learning on its own, it's getting smarter, and it's doing things on its own, including, perhaps, changing its code.

And I think Ben mentioned that earlier that, you know, that creates unique challenges for, you know, verification and validation.

So, I mean, speaking for what we're doing right now, I think we're still far away from being there.

Do I think we can get there? I think we can. My reading tells me that the smart people say start slow, start small, build for adoption and credibility, don't build for the big hits and the big solves, and gain confidence and build on that as you move.

So, I think we still need to, you know, build on the smaller projects before we tackle those types of challenges, Teri.

MS. LALAIN: Aline, what are your thoughts?

MS. DES CLOIZEAUX: Well, that's what we are planning. I expect that all these systems will be used as support systems -- for example, when we do predictive analysis across a lot of data.

So, we can have machine learning and deep learning by treating existing data and trying to predict what is going on, but it will be used as an additional support system for the operator, for example, or for the designer, or for whoever. We don't see it as a direct application for the time being, because we need to understand what's going on inside the AI system.

What I also wanted to point out is that even for a normal safety I&C system, there is a requirement for the system to have independent verification and validation.

So, even if the traditional V&V method won't apply to these AI systems, I suppose -- it is not for me to answer for the regulator, the specialist -- that there will be some kind of verification and validation regime imposed by the regulator when we get to real safety systems.

So, that's also a way to control -- or not to control, but to add more trust in -- what's coming out of AI systems.

MR. BETANCOURT: Yeah, and I think you hit the nail on the head. Trustworthiness comes to my mind. How can we trust the AI tool? Because, at the end of the day, what we care about at the Agency is whether we can understand how that AI made the decision, and what factors it included in making that decision, so that it operates as intended, in compliance with the regulations.

I think that's what we need to care about if that ever happens. I don't think it's going to happen in the near term, but it's always in the back of our minds that if the industry wants to deploy these kinds of technologies in the field, we need to be asking these kinds of questions about explainable AI, trustworthy AI.

And even the data -- some of the bias that may be included, like Ben mentioned -- also has implications at the back end, on the evaluation.

So, hopefully we won't have to go there, but that doesn't mean we shouldn't be ready to go there.

MR. KELLY: Yeah, Teri, if I may add, in the nuclear industry we have plenty of thought processes that we can look to apply this technology to.

And the solution that comes to mind when I hear the other speakers talk is a very well-designed user interface, right?

When you have a really well-defined user interface, it's explainable and the end users understand how you're getting that decision.

And, you know, to answer one of the earlier questions with Ben, we use a multi-metric method, where there are four or five different things that combine to give us a confidence level that this is a potential failure or this is a potential 3B condition report.
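
(A minimal sketch of such a multi-metric roll-up, with hypothetical metric names and weights rather than Constellation's actual ones.)

```python
# Sketch: several independent signals, each scored 0..1, combine into one
# confidence level the user interface can explain rather than a bare verdict.
METRIC_WEIGHTS = {
    "keyword_match": 0.30,
    "similar_past_reports": 0.25,
    "equipment_criticality": 0.25,
    "model_probability": 0.20,
}

def combined_confidence(scores):
    """Weighted average of the individual metric scores."""
    return sum(METRIC_WEIGHTS[m] * scores[m] for m in METRIC_WEIGHTS)

scores = {"keyword_match": 0.9, "similar_past_reports": 0.7,
          "equipment_criticality": 1.0, "model_probability": 0.65}
print(f"Potential significant condition, confidence "
      f"{combined_confidence(scores):.0%}")
for m, w in METRIC_WEIGHTS.items():  # show the roll-up, not just the answer
    print(f"  {m}: {scores[m]:.2f} (weight {w:.0%})")
```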

So, I think having a really well-defined user interface goes a long way towards achieving what we're talking about here, but there are plenty of processes in our business that I think we can turn to and start trying to apply this technology to.

One idea, for example, would be causal analysis. We have plants, and equipment fails, and what do we do? We scramble our resources, go into DEFCON 2, and work out why it failed -- support review matrices, failure modes and effects -- and this technology could help us quickly establish the cause, because that data is out there just waiting to be interrogated.

So, there are plenty of processes in our business that we can apply this to without having to worry about being fully autonomous, to help support the decision-making process. Just some added thoughts there.
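
(One generic way to interrogate that data, sketched with TF-IDF similarity over past failure records; the records and the choice of technique are illustrative assumptions, not Constellation's method.)

```python
# Sketch: rank historical failure records by textual similarity to a new
# event so analysts start causal analysis from the closest precedents.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

history = [
    "feedwater pump bearing seized due to lube oil degradation",
    "diesel generator failed to start, degraded starting air valve",
    "pump vibration high after coupling misalignment during maintenance",
]
new_event = "high vibration on feedwater pump, bearing temperature rising"

matrix = TfidfVectorizer().fit_transform(history + [new_event])
scores = cosine_similarity(matrix[len(history)], matrix[:len(history)]).ravel()

# Closest precedents first.
for score, record in sorted(zip(scores, history), reverse=True):
    print(f"{score:.2f}  {record}")
```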

MR. SCHUMEG: And I just wanted to speak for one second on part of that original question as well, concerning what I kind of called online learning.

I do think that's something that's going to be far off. I can't necessarily predict the future, but if we still want to achieve that safety assurance, that repeatability, those different aspects that are important to all of us here on the call, I think it's going to be a while.

We need to build that trust, and we need to build the capability to have confidence in that system -- from safety, from V&V, from reliability, all of those "-ilities," as they say -- and I think it's going to be very difficult when you start thinking about systems that can adjust themselves moving forward.

MS. LALAIN: Great point. Thank you, panel.

Next question that came in, I'm going to start with Luis and then go over to Ben. So, as you both were talking about your teams and working on the AI initiatives, a question came in asking if social scientists are included on your teams.

MR. BETANCOURT: That's a really good question. At the moment, our core AI team does not include social scientists, but that doesn't mean we're not talking to social scientists across the agency.

One of the areas where we're talking to them is human factors; they are currently looking at the study and providing comments. They're not part of the core team, but eventually we'll need to bring someone from that field onto the team.

MR. SCHUMEG: Sure. And from my perspective, we have human system integration experts that are a part of our team.

We also have ethicists that can be part of the team. We have legal authorities that can review different things.

So, we try to make sure that we continue to have that broad review of systems, AI or not. Even for non-AI/ML systems, we try to ensure, as part of the materiel release process that I showed earlier, that those reviews still take place regardless.

So, we'll just work to adjust them or integrate new aspects for AI and ML technologies, and that's definitely something that we're looking at right now, identifying those gaps and looking for ways to fill them.

MS. LALAIN: Alright. Thank you.

So, we've gotten several questions around data. So, I want to steer this one to the panel from your different perspectives.

Of course, any AI that's built off of data models is only as good as the data it's based on.

So, what are your thoughts on developing those datasets for the AIs that may be used in our respective organizations?

Aline, can I start with you?

MS. DES CLOIZEAUX: On datasets, as I said, we participate in standardization work. There is work ongoing in the IEC, and I guess in the other standardization bodies that are working on this, because, as I said, AI systems develop quite fast.

There are numerous startups, and we cannot adapt all the data to the different systems that are available, and of course standardization comes a little later than what's on the market. But, yes, we participate in this field. It's an important part of the development that we follow at the IAEA.

MS. LALAIN: Luis, your thoughts?

MR. BETANCOURT: Yeah. I think data quality is very important, both in the data that you train the model on and in the data that comes after that.

I think for us, as an agency, the question that comes to my mind is whether we're going to be requiring data from the licensees in some of the submittals. My gut feeling is no, but that's one of the things that we need to consider in evaluating some of these technologies.

The other thing that comes to my mind, for internal purposes: before we get to the data, we need to step back and ask, what is the problem that we're trying to solve?

What is the process that will benefit the most from AI? And if that's the case, do we have the right data? Is the data unstructured? Is the data already structured?

So, these are the kinds of questions that we are going to be looking at moving forward, because, as you know, ADAMS has a huge repository of information, but there is no structure.

What, or how, can we structure that data so that it becomes machine-learning ready, not only for NRC staff, but also for the industry and members of the public to be able to use that data?
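
(As a small illustration of what "machine-learning ready" could mean, a sketch that lifts a couple of structured fields out of free text. The fields and patterns are assumptions for illustration, not an actual ADAMS schema.)

```python
# Sketch: extracting structured fields from unstructured document text so
# downstream tools can consume it as records rather than raw prose.
import json
import re

def to_record(doc_id, raw_text):
    """Extract a few structured fields from free text."""
    date = re.search(r"\b(\d{1,2}/\d{1,2}/\d{4})\b", raw_text)
    docket = re.search(r"Docket\s+No\.\s*([\w-]+)", raw_text, re.IGNORECASE)
    return {
        "id": doc_id,
        "date": date.group(1) if date else None,
        "docket": docket.group(1) if docket else None,
        "text": raw_text.strip(),
    }

raw = "Inspection report issued 03/08/2022 for Docket No. 50-352 ..."
print(json.dumps(to_record("ML22140A218", raw), indent=2))
```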

MS. LALAIN: Alright. Ben, any thoughts?

MR. SCHUMEG: Yes, many thoughts. Data quality, I mean, that's so -- in my opinion, at least, it's so important and also so challenging, you know.

While AI has been around for a while, I don't think everyone really realized that the data you have for your AI system might not be of the quality you need it to be.

We've been collecting data, I think, in industry and everywhere for a very long time, but does that data have all the metadata, does it have all the features, does it have all the extra information that you really need to create a quality AI system?

So, you might claim that you have big data -- you have all this information -- but is it the right data? Is it unbiased? Does it have the right amount of context and the right amount of diversity of environments within it, let's say?

So, I think assessing that is going to be one of the first big challenges when you're thinking about data quality.

And something we're looking at is developing a data safety management plan, or some sort of data -- I don't want to use the word certification, so maybe I'll say data assessment -- to understand the, quote, "-ilities" of your data: is it appropriate? Is it the right data? Do you actually have enough of that data, at a high enough quality, to make it usable?

And I'm concerned that may be a challenge for a lot of different organizations when they really start to look at the depth and breadth of their data. Those are my thoughts, but I think a lot of people feel the same way, and we'll see what their data looks like.
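
(A minimal sketch of such a data assessment, scoring completeness, duplication, and class balance; the fields and thresholds are illustrative assumptions, not a DoD standard.)

```python
# Sketch: score a dataset on a few of its "-ilities" before anyone trains
# on it -- completeness, duplication, and class balance.
from collections import Counter

def assess(records, required_fields, label_field):
    """Report simple quality indicators for a list of record dicts."""
    n = len(records)
    missing = sum(1 for r in records
                  for f in required_fields if r.get(f) in (None, ""))
    dupes = n - len({tuple(sorted(r.items())) for r in records})
    labels = Counter(r[label_field] for r in records if label_field in r)
    balance = min(labels.values()) / max(labels.values()) if labels else 0.0
    return {
        "rows": n,
        "missing_field_rate": missing / (n * len(required_fields)),
        "duplicate_rows": dupes,
        "class_balance": round(balance, 2),  # 1.0 = perfectly balanced
    }

data = [
    {"sensor": "TE-101", "value": 0.8, "label": "ok"},
    {"sensor": "TE-101", "value": 0.8, "label": "ok"},   # duplicate row
    {"sensor": "TE-102", "value": None, "label": "fault"},
]
print(assess(data, required_fields=["sensor", "value"], label_field="label"))
```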

MS. LALAIN: Great. Gene, any quick thoughts? I know we're starting to run out of time.

MR. KELLY: Yeah, I'd say a million thoughts, like Ben said. This is why we picked CAP data, right? CAP data is a good source because it's big data.

We've got a big fleet, so we've got 12 plants we can draw upon, right? And we recently gave 600,000 records of data to DOE for them to do research and play with.

And so, you know, I think that's a good data source in the sense that it's a structured one, you know. It operates by rules, there's procedures.

So, it's not unstructured and it's not all over the place.

Now, that said, I think you'd be fooling yourself if you thought it was consistent from plant to plant to plant.

And so, I think one of the real values of what we're doing is that we're going to improve data quality, because we're going to achieve a level of consistency with the algorithm that perhaps we didn't have before, because each station, each plant is different -- different cultures, different performance, different people.

So, I think we're going to improve data quality, and I think CAP data is the perfect place to go use these techniques on.

I think it screams for these AI/ML techniques, frankly.

MS. LALAIN: Thank you. What a great session. AI, as we heard today, is definitely a multifaceted area, with lots of things to look into as we move forward.

If we could get the contact slide on the screen -- I want to thank our panelists; our session coordinators, Matt Dennis and Trey Hathaway; all the support from the RIC team for this session; and the research AI team for keeping tabs on this dynamic area. And thank you to all of you who participated today.

The presentations are available on the RIC website under the program agenda for this session, and they will be in the agency's document repository following the RIC event.

I've been pleased to be your session chair today. You've got my contact information. And with that, I will close the session. Thank you, everyone. Have a great day.

(Whereupon, the above-entitled matter went off the record at 2:30 p.m.)
