ML23214A058

From kanterella
Jump to navigation Jump to search
Transcript of the Advisory Committee on Reactor Safeguards - Joint Digital I&C and Fuels, Materials and Structure on EPRI Digital I&C Perspectives Subcommittee Meeting, June 22, 2023, Page 1-227, (Open)
ML23214A058
Person / Time
Issue date: 06/22/2023
From:
Advisory Committee on Reactor Safeguards
To:
References
NRC-2447
Download: ML23214A058 (1)


Text

Official Transcript of Proceedings NUCLEAR REGULATORY COMMISSION

Title:

Advisory Committee on Reactor Safeguards Joint Digital Instrumentation and Control Materials Structures Subcommittee Open Session Docket Number: (n/a)

Location: teleconference Date: Thursday, June 22, 2023 Work Order No.: NRC-2447 Pages 1-162 NEAL R. GROSS AND CO., INC.

Court Reporters and Transcribers 1716 14th Street, N.W.

Washington, D.C. 20009 (202) 234-4433

1 1

2 3

4 DISCLAIMER 5

6 7 UNITED STATES NUCLEAR REGULATORY COMMISSIONS 8 ADVISORY COMMITTEE ON REACTOR SAFEGUARDS 9

10 11 The contents of this transcript of the 12 proceeding of the United States Nuclear Regulatory 13 Commission Advisory Committee on Reactor Safeguards, 14 as reported herein, is a record of the discussions 15 recorded at the meeting.

16 17 This transcript has not been reviewed, 18 corrected, and edited, and it may contain 19 inaccuracies.

20 21 22 23 NEAL R. GROSS COURT REPORTERS AND TRANSCRIBERS 1323 RHODE ISLAND AVE., N.W.

(202) 234-4433 WASHINGTON, D.C. 20005-3701 www.nealrgross.com

1 1 UNITED STATES OF AMERICA 2 NUCLEAR REGULATORY COMMISSION 3 + + + + +

4 ADVISORY COMMITTEE ON REACTOR SAFEGUARDS 5 (ACRS) 6 + + + + +

7 JOINT DIGITAL I&C AND FUELS, MATERIALS AND 8 STRUCTURES SUBCOMMITTEE MEETING ON EPRI DIGITAL I&C 9 PERSPECTIVES 10 + + + + +

11 THURSDAY 12 JUNE 22, 2023 13 + + + + +

14 The Subcommittee met via Teleconference, 15 at 8:30 a.m. EDT, Charles H. Brown, Jr., Chair, 16 presiding.

17 COMMITTEE MEMBERS:

18 CHARLES H. BROWN, JR., Chair 19 RONALD G. BALLINGER, Member 20 VICKI M. BIER, Member 21 VESNA B. DIMITRIJEVIC, Member 22 GREGORY H. HALNON, Member 23 JOSE A. MARCH-LEUBA, Member 24 WALTER L. KIRCHNER, Member 25 JOY L. REMPE, Member NEAL R. GROSS COURT REPORTERS AND TRANSCRIBERS 1716 14th STREET, N.W., SUITE 200 (202) 234-4433 WASHINGTON, D.C. 20009-4309 www.nealrgross.com

2 1 THOMAS ROBERTS, Member 2 MATTHEW W. SUNSERI, Member 3 ACRS CONSULTANTS:

4 STEPHEN SCHULTZ 5 DENNIS BLEY 6

7 DESIGNATED FEDERAL OFFICIAL:

8 CHRISTINA ANTONESCU 9

10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 NEAL R. GROSS COURT REPORTERS AND TRANSCRIBERS 1716 14th STREET, N.W., SUITE 200 (202) 234-4433 WASHINGTON, D.C. 20009-4309 www.nealrgross.com

3 1

2 A G E N D A 3 Opening Remarks . . . . . . . . . . . . . . . . . 4 4 Introduction to the Overall EPRI Digital Systems 5 Engineering R&D Strategy . . . . . . . . . . . . 7 6 Systems Engineering - A Modern Approach to the 7 Technology Life Cycle . . . . . . . . . . . . . . 42 8 Break . . . . . . . . . . . . . . . . . . . . . . 74 9 Implementation of Systems Engineering using the 10 EPRI Digitals Systems Engineering Framework . . . 75 11 Risk Informing the Design and Operation of 12 Digital Systems including PRA integration . . . 127 13 Public Comments . . . . . . . . . . . . . . . . 160 14 Closing Remarks . . . . . . . . . . . . . . . . 161 15 16 17 18 19 20 21 22 23 24 25 NEAL R. GROSS COURT REPORTERS AND TRANSCRIBERS 1716 14th STREET, N.W., SUITE 200 (202) 234-4433 WASHINGTON, D.C. 20009-4309 www.nealrgross.com

4 1 P-R-O-C-E-E-D-I-N-G-S 2 8:37 a.m.

3 CHAIR BROWN: Good morning, everyone.

4 This is a hybrid meeting of the joint Digital 5 Instrumentation and Control Materials Structures 6 Subcommittee. We will now come to order. I'm Charles 7 Brown, the chairman of the subcommittee. Can I be 8 heard, by the way? And, recorder, are you there?

9 Okay, thank you.

10 MEMBER BIER: And I hear you, Charlie.

11 CHAIR BROWN: Oh, okay, thank you. ACRS 12 members in attendance are Ron Ballinger, Jose March-13 Leuba, Matt Sunseri, Consultant -- Steve, where is 14 your name on here? I didn't put you on here. I know 15 your last name is Schultz, but your name is not on 16 your list. Walt Kirchner, Joy Rempe, and online we 17 have Vicki Bier, Greg Halnon, and Vesna Dimitrijevic, 18 and, I think, Christina Antonescu of the ACRS staff is 19 the designated federal official for this meeting.

20 The purpose of this meeting is for the 21 electric power researchers to EPRI to brief the 22 subcommittee --

23 (Audio interruption.)

24 CHAIR BROWN: -- systems engineering 25 framework. The framework is Board synthesized from NEAL R. GROSS COURT REPORTERS AND TRANSCRIBERS 1716 14th STREET, N.W., SUITE 200 (202) 234-4433 WASHINGTON, D.C. 20009-4309 www.nealrgross.com

5 1 systems engineering methods. It provides a fast way 2 to risk inform digital instrumentation, digital I&C 3 implementation to the nuclear industry worldwide.

4 EPRI will also discuss the current 5 utilization status of the framework in near term 6 revision. The ACRS was established by statute as 7 governed by the Federal Advisory Committee Act, FACA.

8 That means the committee can only speak through its 9 published letter reports.

10 We hold meetings to gather information to 11 support our deliberations. Interested parties who 12 wish to provide comments can contact our office 13 requesting time. That said, we set aside 15 minutes 14 for comments from members of the public who are 15 listening to our meetings. Written comments are also 16 welcomed.

17 The meeting agenda for today's meeting was 18 published on the NRC's public meeting notice website 19 as well as the ACRS meeting website. On the agenda 20 for this meeting, and on the ACRS meeting website, are 21 instructions as to how the public may participate. No 22 request for making statements to the subcommittee has 23 been received from the public.

24 We are conducting today's meeting as a 25 hybrid meeting. A transcript of the meeting is being NEAL R. GROSS COURT REPORTERS AND TRANSCRIBERS 1716 14th STREET, N.W., SUITE 200 (202) 234-4433 WASHINGTON, D.C. 20009-4309 www.nealrgross.com

6 1 kept and will be made available to all on the website.

2 Therefore we request that participants in this meeting 3 should first identify themselves and speak with 4 sufficient clarity and volume so that they can be 5 readily heard.

6 All presenters, please pause from time to 7 time to allow members to ask questions. Please 8 indicate the slide number you are on when moving to 9 next slide. We have the MS Teams phone line, audio 10 only, established for the public to listen to the 11 meeting.

12 Based on our experience from previous 13 virtual and hybrid meetings, I would like to remind 14 the speakers and presenters to speak slowly. We will 15 take a short break after each presentation at my 16 discretion to allow time for screen sharing as well as 17 the Chairman's discretion during the longer 18 presentations.

19 Lastly, please do not use any virtual 20 meeting feature to conduct sidebar technical 21 discussion, rather contact the DFO if you have any 22 technical questions so we can bring those to the 23 floor.

24 One other thing I would like to emphasize 25 is that this is a subcommittee meeting. And comments NEAL R. GROSS COURT REPORTERS AND TRANSCRIBERS 1716 14th STREET, N.W., SUITE 200 (202) 234-4433 WASHINGTON, D.C. 20009-4309 www.nealrgross.com

7 1 that EPRI receives, or anybody else hears from the 2 subcommittee meetings, anything they bring up is their 3 opinion, their thoughts, and what they are thinking 4 relative to whatever the presentation is.

5 The committee only speaks as a joint 6 committee thorough our full committee meetings when we 7 rewrite reports which summarize and then provide 8 recommendations and conclusions. So things we may 9 say, which may be a lot, those are individuals' 10 thoughts on the particular subject at hand. So keep 11 that in mind, please.

12 We will now proceed with the meeting. Mr.

13 Matt Gibson, the technical executive in the Electric 14 Power Research Institute Nuclear I&C program will make 15 some introductory remarks. Matt, you're on.

16 MR. GIBSON: Thank you, Charlie. And I 17 just want to thank the joint subcommittee for inviting 18 us to share our perspectives on digital I&C. EPRI 19 does a fairly high volume of research in this area, 20 and we create products for stakeholders that they can 21 use.

22 Today we're going to concentrate on our 23 digital systems engineering framework. Now we do 24 other research on, you know, our digital I&C in 25 general. We do research on alternate architecture NEAL R. GROSS COURT REPORTERS AND TRANSCRIBERS 1716 14th STREET, N.W., SUITE 200 (202) 234-4433 WASHINGTON, D.C. 20009-4309 www.nealrgross.com

8 1 using different kinds of technology. We do detailed 2 research on wireless technologies, maintenance, 3 techniques that are effective for maintaining your I&C 4 systems. But predominately today we're going to talk 5 about the framework and the products that are related 6 to it.

7 So here we go. Don, Mary, you have 8 anything you want to add to that? No?

9 MR. WEGLIAN: No.

10 MR. GIBSON: All right. And let's see if 11 our technology is going to work today. Here we go, a 12 little bit of a delay. But, you know, we're all 13 familiar with the story of the elephant and the blind 14 men where someone -- the blind men touch the elephant, 15 you know. And they see the elephant through their 16 touch and only see part of the elephant.

17 You know, the man that touches the trunk 18 thinks he may be looking at one kind of animal. And 19 then when the leg is touched, maybe it's kind of a 20 plant or a tree. And the tail, you know, maybe it's 21 yet another kind of animal, but not an elephant.

22 So that's pretty analogous to what we have 23 faced with the emerging of technologies in the nuclear 24 industry. And if I can figure out how to do this, 25 there we go, I have the right button now.

NEAL R. GROSS COURT REPORTERS AND TRANSCRIBERS 1716 14th STREET, N.W., SUITE 200 (202) 234-4433 WASHINGTON, D.C. 20009-4309 www.nealrgross.com

9 1 What you'll see as we go through this is 2 each one of these topics are the elephant, right, you 3 know, your cybersecurity, your hardware reliability, 4 you functional reliability, automation, human 5 interactions, and systematic failures. We need a way 6 to put all that together in a single process, because 7 each one of them has a relationship with the other, 8 you know, whack-a-mole, maybe you can visualize that.

9 Systems engineering allows us to do that.

10 And every framework works on that principle. We're 11 going to go through that. And you'll see a lot of 12 things that, you know, maybe each of you know a lot 13 about individually, or maybe you know a lot about a 14 lot of this. But we want to give you a really good 15 situational awareness of where EPRI is with this, with 16 this idea and how it's being used 17 First off the history, so we didn't kind 18 of this overnight or about ten years ago. You'll see 19 this progression of the early products that we created 20 as we looked into these things. And you'll see that 21 progression in the light blue stuff. We start working 22 on hazards analysis.

23 The 509 report you see in the top left, 24 because I'll just refer to these as their last three 25 or four digits number-wise. It's a little easier, NEAL R. GROSS COURT REPORTERS AND TRANSCRIBERS 1716 14th STREET, N.W., SUITE 200 (202) 234-4433 WASHINGTON, D.C. 20009-4309 www.nealrgross.com

10 1 because I think you can see that okay. The 509 2 report, we did a blind study of different kinds of 3 hazard analysis methodology to see how well they work.

4 The way we did that was to take real 5 events that happened that we had root cause analysis 6 for, and ask a group of folks who had no knowledge of 7 that to use this methodologies to find those same 8 problems that had occurred in real life.

9 It's a very good product. We looked at 10 several different hazard analysis methods. All of 11 them had strength and weaknesses. When we post that, 12 and we pretty much asked, you know, gave that to the 13 industry as a tool, you know, use these. Use your own 14 judgement about what kind of thing you're trying to 15 achieve. Use the right method, right, the appropriate 16 method.

17 We also worked on hazards for 18 cybersecurity. You know, maybe that's a little 19 special. We created a cybersecurity procurement 20 methodology, digital instrumentation for the design 21 guy. You know, I think over the years, and kind of in 22 the last ten years, at least the I&C Committee has 23 probably seen something about the DDG as it was called 24 back then.

25 Well, as we progressed through NEAL R. GROSS COURT REPORTERS AND TRANSCRIBERS 1716 14th STREET, N.W., SUITE 200 (202) 234-4433 WASHINGTON, D.C. 20009-4309 www.nealrgross.com

11 1 and continued to research this, we got several, you 2 know, pretty good epiphanies. You'll see about 2018, 3 2017, up in the top, middle, you'll see a report in 4 the HAZCADS, Revision 0.

5 Well, we had previously been working on a 6 hazard analysis methodology for cybersecurity. It was 7 clear to us, at about that time, you couldn't do 8 hazard analysis for cybersecurity, because it's the 9 same thing as the rest of the digital systems, all 10 right. Because the hazards that a digital system can 11 produce, or can cause, or be part of, are related to 12 its function.

13 Cybersecurity is a cause of that 14 malfunction. But it doesn't really produce different 15 hazards from an equipment and, you know, a plant 16 functional point of view. Certainly there are 17 different hazards in cyberspace about someone stealing 18 your information, or embarrassing you in public, and 19 things like that. But we're going to -- the context 20 here is in a plant functional area.

21 At that point we realized what we had was 22 a universal method to do hazard analysis that could be 23 applied to any cause, you know, your hazard, you 24 identify the hazard, you identify the risk sensitivity 25 of it. Now you could apply it. And that's where NEAL R. GROSS COURT REPORTERS AND TRANSCRIBERS 1716 14th STREET, N.W., SUITE 200 (202) 234-4433 WASHINGTON, D.C. 20009-4309 www.nealrgross.com

12 1 HAZCADS, Revision 1 came from, roughly 2020.

2 Now the couple of takeaways for HAZCADS is 3 we continue to use the blind study method where we 4 took, or take, and continue to take real world events.

5 We present those to a team of people who are blind to 6 the event. They don't know really the outcome. They 7 get all the same technical data that, you know, they 8 would normally have, whoever made the original change 9 to a plan or the original new design, and then they 10 evaluate that and see if they can find a problem with 11 it.

12 And the success rate for HAZCADS really is 13 pretty close so 100 percent. And we said well, wow 14 how could that be that much of a difference? And it 15 really is not so much that HAZCADS is a perfect 16 process, as it's that the way we were going about it 17 before really didn't look at hazards systematically.

18 You know, it was kind of a person's knowledge.

19 We accessed a presentation from an 20 organization that said well, you know, we try to deal 21 with these problems by putting our best people in a 22 room for three weeks and have them think about it.

23 And they still miss the problems.

24 And I asked him, well, what method did 25 they use while they were in the room together for NEAL R. GROSS COURT REPORTERS AND TRANSCRIBERS 1716 14th STREET, N.W., SUITE 200 (202) 234-4433 WASHINGTON, D.C. 20009-4309 www.nealrgross.com

13 1 three weeks?

2 Oh, they were just brainstorming, right.

3 So the takeaway there is the structured 4 methods pay off. They actually make a difference.

5 And we'll talk about HAZCADS in detail 6 today, just one of the more unique things about the 7 framework which is an overall systems engineering 8 framework. We also did some work on bringing systems 9 engineering to the forefront.

10 And usually when we -- you'll see at the 11 2016 column with the medium blue down at the bottom, 12 you'll see an 8018 report. We went out and looked at 13 other industries about, well, how do you do 14 engineering, and how do you achieve good results? And 15 what's your matrix, what's your performance matrix for 16 that good result?

17 That's some good input that's in that 18 report. Everybody uses that. The aerospace industry, 19 the critical process industries, transportation, you 20 know, Elon Musk, everybody uses systems engineering, 21 right.

22 And so we said, it has a high efficacy.

23 This is something that could really change the game.

24 If you look at how engineering is currently done by 25 licensees, and to some extent by vendors, not NEAL R. GROSS COURT REPORTERS AND TRANSCRIBERS 1716 14th STREET, N.W., SUITE 200 (202) 234-4433 WASHINGTON, D.C. 20009-4309 www.nealrgross.com

14 1 entirely, because it at the day they have to get stuff 2 to work, so do licensees. But what you'll see is that 3 maybe they're trying to do something that's more of a 4 check list process.

5 They're trying to make sure all the lists 6 of things they have to do have been addressed. And 7 they do good work, I'm not saying they don't. And 8 they produce, some, you know, good and safe designs.

9 But they don't necessarily use the, excuse me, the 10 iterative approach and the diagnostic approach that 11 systems engineering does.

12 And it actually tends to be multiple 13 processes. In other words, if you've got HFE is in 14 its own process, cybersecurity is off doing its thing, 15 you know, the safety people are hunkered down doing 16 their thing, you know, plan integration, big thing.

17 You know, what are the mechanical, what are the 18 seismic people doing? What are the electrical EMC 19 people doing, you know?

20 So we pulled all that together, all those 21 topics together into one process, that's the digital 22 engineering guide, and that's down at the bottom 23 center in dark blue under 1816.

24 While we were doing that --

25 MEMBER SUNSERI: Can I just ask a NEAL R. GROSS COURT REPORTERS AND TRANSCRIBERS 1716 14th STREET, N.W., SUITE 200 (202) 234-4433 WASHINGTON, D.C. 20009-4309 www.nealrgross.com

15 1 question?

2 MR. GIBSON: Sure.

3 MEMBER SUNSERI: I hear what you're 4 saying, and actually you're making me really nervous 5 now, because I think that before, if I was running my 6 plant I was buying parts for my plant, I was relying 7 on that vendor to, you know, send me a part, or 8 certified, say, for however you want to that.

9 But now you're saying that inherently, by 10 the way they develop them, there might be defects that 11 aren't detected through the systematic approach, that 12 they're sending me to my plant, and I'm installing 13 them? I don't know, that's kind of what I'm hearing 14 you say.

15 MR. GIBSON: Well, you think about the 16 history. There is a non-zero possibility that that 17 happens, right. I mean, we know that, and we have 18 events that show that.

19 CHAIR BROWN: We're talking about -- are 20 you talking about each part or are you talking about 21 a system that the vendor is sending you to operate?

22 (Simultaneous speaking.)

23 CHAIR BROWN: -- parts that's why I'm --

24 I wanted to make sure.

25 MR. GIBSON: I mean, a part of the system NEAL R. GROSS COURT REPORTERS AND TRANSCRIBERS 1716 14th STREET, N.W., SUITE 200 (202) 234-4433 WASHINGTON, D.C. 20009-4309 www.nealrgross.com

16 1 or whatever, I mean, you know, part of the strategy at 2 the plants is making sure the vendor's got the right 3 controls to send us stuff that is error-free, right.

4 And we do a lot of factory acceptance tests, and site 5 acceptance tests to validate that.

6 So you're saying -- what I heard you say 7 is that there may be a hole inside those vendor 8 processes, because they're not using this systematic 9 approach to design and development.

10 MR. GIBSON: Sure. That is what I'm 11 saying.

12 MEMBER SUNSERI: Well, that's unnerving.

13 MR. GIBSON: Well, remember that, in order 14 to do better, you have to recognize you have room to 15 get better. That means you have to be a little 16 critical about what you're currently doing. That's 17 what we're doing here. I'm not trying to say that 18 everybody is, you know, just some huge , you know, ice 19 berg sitting here that's the ruination of everything.

20 What we're saying is this can be better. And more 21 importantly, it can be more efficient.

22 But there's more to your point. If you 23 look at the industry OE, the NRC's OE, the industries' 24 OE, I mean, those events happen all the time. You 25 know, the factory acceptance tests do not detect all NEAL R. GROSS COURT REPORTERS AND TRANSCRIBERS 1716 14th STREET, N.W., SUITE 200 (202) 234-4433 WASHINGTON, D.C. 20009-4309 www.nealrgross.com

17 1 the errors, right. You know, they don't catch all the 2 problems, or they get to a factory acceptance test and 3 they have problems, which they should have not had by 4 the time they got to that spot in the process.

5 Well, anyway that's the kind of -- it's 6 just a thing you see in other industry. They get to 7 a good answer, the best answer maybe, but certainly a 8 good answer, fast and reliably. And that's kind of 9 what the industry is trying to head toward, is being 10 able to do better and do it quicker. Time is money.

11 All right. So we look back at this, we 12 also have cybersecurity in here. We have an 13 assessment methodology that's integrated into the 14 framework. One of our -- I think more of our 15 watershed researches is our investigation of off the 16 shelf certified equipment that's certified to non-17 nuclear safety related standards in other industries, 18 other process industries.

19 And you'll see that, a little off to the 20 right, in the bottom in dark blue, safety integrity 21 level efficacy for nuclear power, 300-201-1817. We 22 did that research a little ahead of integrating the 23 DEG.

24 And what we did with that is our 25 hypothesis was if other industries are using self-NEAL R. GROSS COURT REPORTERS AND TRANSCRIBERS 1716 14th STREET, N.W., SUITE 200 (202) 234-4433 WASHINGTON, D.C. 20009-4309 www.nealrgross.com

18 1 certified equipment, which are mass produced to a high 2 standard, high reliability standard, and they're 3 successful at it, and they have safe -- they can boil 4 gasoline and not kill anybody, or at least not unless 5 they violate some other rules, well, how does that 6 work for us maybe for nuclear?

7 So we looked at that. And we pulled a 8 really close to two billion hours of operational data 9 on high end logic solvers that had been certified by 10 the SIL process. And if you're not familiar with 11 that, that's standards IEC 61508 and IEC 61511 is used 12 by most of the process industries for safety, and life 13 critical platforms, and for applications, 61508 for 14 platforms, and 61511 for applications.

15 So we looked at that. And the data was 16 pretty striking, you know, it told us that this would 17 be a pretty good thing. I mean, you could buy safety 18 related, in our terminology, platforms ready to go.

19 Now you still have to put an application 20 on them. You still have to integrate them. Remember, 21 this is a stack, so to speak, of things you have to 22 do. It's the basic platforms. The inner-workings of 23 something like that can be proven to be highly 24 reliable.

25 We will mention here, because we'll talk NEAL R. GROSS COURT REPORTERS AND TRANSCRIBERS 1716 14th STREET, N.W., SUITE 200 (202) 234-4433 WASHINGTON, D.C. 20009-4309 www.nealrgross.com

19 1 about it in some detail as we go through further parts 2 of this, we do use STPA, system-theoretic process 3 analysis, as part of HAZCADS (audio interference) 4 four. And we add some risk informed features through 5 HAZCADS. We'll touch that, you know, in more detail 6 as we go.

7 MEMBER MARCH-LEUBA: Will you go in more 8 detail on the risk informed cybersecurity?

9 MR. GIBSON: Yeah.

10 MEMBER MARCH-LEUBA: Because -- let me 11 just put it now. A few years ago, you will remember 12 the risk on cybersecurity was a teenager living in 13 their mother's basement trying to impress his 14 girlfriend. And in 2023 it's active warfare by state 15 activists. It has grown, like, ten orders of 16 magnitude. And I cannot tell you what it will be in 17 2024. So it is changing by the minute.

18 MR. GIBSON: Well, I'll give you an 19 analogy so you can chew on it a little bit as we 20 approach actually talking about this in detail.

21 From EPRI's research, safety and security 22 are the same thing, right. They're identical, there's 23 no difference between them. They the same word in 24 other languages, all right. You just have to use the 25 context to figure out which thing you're talking NEAL R. GROSS COURT REPORTERS AND TRANSCRIBERS 1716 14th STREET, N.W., SUITE 200 (202) 234-4433 WASHINGTON, D.C. 20009-4309 www.nealrgross.com

20 1 about, the nuance of it.

2 That's an important thing to understand, 3 because if we, in the industry, think we have a mature 4 review of safety, then we would never hook up your re-5 application system and allow children to have access 6 to it, would we? No.

7 We would have a series of safety design 8 and other access control protocols that are safety 9 protocols, really. I mean, you just don't want 10 unintended actuations of the system. You don't want 11 unintended configuration changes to the system that 12 would cause it not to work. All that stuff are safety 13 things that people will be safety fitted out all the 14 time.

15 So as you think about safety and security 16 being the same thing, clearly we can use those same 17 effective protocols that help us with security or 18 safety things like, I mean, it's going to work for us.

19 We'll see it a little later.

20 MEMBER MARCH-LEUBA: Vicki, we're seeing 21 your hand, but let me --

22 MEMBER BIER: Yes.

23 MEMBER MARCH-LEUBA: Let me continue, 24 because I'm online.

25 MEMBER BIER: That's fine.

NEAL R. GROSS COURT REPORTERS AND TRANSCRIBERS 1716 14th STREET, N.W., SUITE 200 (202) 234-4433 WASHINGTON, D.C. 20009-4309 www.nealrgross.com

21 1 MEMBER MARCH-LEUBA: I disagree that 2 safety and security are the same thing. Security is 3 there to prevent safety problems. You place a gate so 4 that somebody doesn't come inside and cause a safety 5 issue. So that's one thing. And then I always give 6 an example in this microphone of the famous casino 7 break-in where they broke in through the aquarium.

8 And obviously we are not going to connect 9 a protection system to the Internet. Nobody would be 10 crazy enough to do that. But what EPRI and the 11 industry need to be looking for is where they are the 12 aquariums at the plant. You are never -- the bad guy 13 is never going to come in through the front door.

14 They're going to come in through the aquarium.

15 And you have to be remain ever vigilant, 16 and fund your departments, and continue to support 17 them, and do the best you can knowing that you're 18 going to be penetrated. Eventually, the bad guys will 19 get in.

20 MR. GIBSON: Okay.

21 MEMBER MARCH-LEUBA: You want to say 22 something about it?

23 MR. GIBSON: No. I have a lot to say 24 about other things.

25 MEMBER MARCH-LEUBA: Vicki has a question.

NEAL R. GROSS COURT REPORTERS AND TRANSCRIBERS 1716 14th STREET, N.W., SUITE 200 (202) 234-4433 WASHINGTON, D.C. 20009-4309 www.nealrgross.com

22 1 MEMBER BIER: Yeah. I just want to chime 2 in on, again, the same issue of are safety and 3 security the same thing. And I think one big 4 difference is that --

5 (Audio interruption.)

6 MEMBER BIER: -- So one of the examples I 7 usually use, you know, like if San Francisco decides 8 they need a strict building code because they have 9 earthquakes, the earthquakes don't pick up and move to 10 Houston because the buildings in San Francisco are now 11 well protected.

12 But in security you definitely can have 13 people outwitting your defenses, circumventing your 14 defenses. It's kind of more of a, you know, a race.

15 So, you know, I'm happy to hear the rest of the 16 presentation obviously and, you know, may chime in 17 later. But I just wanted to put that comment out 18 there. Thanks.

19 MR. GIBSON: Thank you. So this is 20 conversational, so this is all good. When we get to 21 the other, where there's more detail and some other 22 stuff, we can dig a little deeper into that. The 23 framework elements though are, there's four of them.

24 I've touched on them. First of them is there used to 25 be industry standards.

NEAL R. GROSS COURT REPORTERS AND TRANSCRIBERS 1716 14th STREET, N.W., SUITE 200 (202) 234-4433 WASHINGTON, D.C. 20009-4309 www.nealrgross.com

23 1 And this is another controversial 2 statement, but our research bears it out. The 3 standards that you see, the current IEC, and some of 4 these are IEEEI, so IEC standards, I abbreviate them 5 here a little bit, are very effective. And they have 6 (audio interference). I mean, they used hundreds of 7 thousands, if not millions of things, and they achieve 8 a high level of net functional reliability.

9 So, you know, our framework is really a 10 synthesis of those. You know, any standard, you know, 11 you have to have a process that you make on that 12 standard so you can use it. And that's what we've 13 done here. It allowed some leverages of economy to 14 the scale.

15 If you're looking for new advanced 16 reactors, which is supplied to them too, where are we 17 going to get the people? You know, where are we going 18 to get the equipment? How are we going to scale it?

19 Who's going to make 200 or 300 of these, you know, if 20 you believed in these reports, but, I mean, in power 21 plants that are going to be built over time?

22 Well, that's with economy of the scale, 23 these standards, and equipment, and personnel with 24 qualifications to go with them, are literally, the 25 order of magnitude, better than nuclear from just a NEAL R. GROSS COURT REPORTERS AND TRANSCRIBERS 1716 14th STREET, N.W., SUITE 200 (202) 234-4433 WASHINGTON, D.C. 20009-4309 www.nealrgross.com

24 1 mass. You know, just how much stuff, how many people, 2 how much training available, that kind of thing.

3 The second part of that is used for 4 systems engineering, you know, again it's a single 5 process. It covers a complete life cycle, you know, 6 from inception to retirement, including O&M phase.

7 It's well used. Again, we were able to find good 8 examples of how that works.

9 If you look at the standards that go with 10 that, the IEC 15288 and 289, which was seven of those 11 standards are the up and coming nexus for most 12 engineering work. When you add STPA to that as a 13 diagnostic tool, you get a pretty compelling case for 14 using systems engineering on a regular, you know, as 15 a core value, core process.

16 Another, just I'll touch on it here, and 17 we'll talk about it some more later probably, is the 18 whole world is going away from software to systems.

19 For instance, if you look at the IEEE standard 1012, 20 for instance, the 2004 edition, it only covers 21 software. You look at the 2012 edition, it covered 22 everything, software, hardware, people.

23 So the movement to look at the world 24 through systematic means, through functional 25 reliability, is probably now almost a decade old. And NEAL R. GROSS COURT REPORTERS AND TRANSCRIBERS 1716 14th STREET, N.W., SUITE 200 (202) 234-4433 WASHINGTON, D.C. 20009-4309 www.nealrgross.com

25 1 it's reaching fruition. If you look at the front 2 matter of some of your traditional standards like IEEE 3 730 or 820 it'll say it's harmonizing 15288 or being 4 harmonized.

5 You know, not lastly, but the third part 6 of that is risk informed engineers. So, you know, 7 engineers can do anything, but they can't do 8 everything, right. That's kind of something to think 9 about. So at some point you have to make tradeoffs, 10 you have to understand resources. You have to 11 actually understand what the real problems are too.

12 So risk informing the digital I&C 13 practitioners' tool kit so that they can make good 14 decisions, integrating risk informed elements of that 15 are critical. We do that for other things, or the 16 industries do that. You know, they don't waste a lot 17 of time on things that are low consequence or low 18 likelihoods. And so getting some feel of that and 19 letting that factor into your graded approaches, or 20 even your solutions, how it's structured, is another 21 important part of our framework.

22 The fourth and last part is a capable 23 workforce.

24 MEMBER MARCH-LEUBA: Other thing is risk 25 informed theory. What's your ideas, your thoughts on NEAL R. GROSS COURT REPORTERS AND TRANSCRIBERS 1716 14th STREET, N.W., SUITE 200 (202) 234-4433 WASHINGTON, D.C. 20009-4309 www.nealrgross.com

26 1 chasing the last hard to find software error that may 2 be latent in there, and you've never seen it? Is that 3 something we can do with risk inform. I mean, because 4 if you know --

5 MR. GIBSON: Well, it's a -- all right, so 6 it's part of it. All right, today in your PRAs, and 7 so I'm surrounded by PRA people, and you know who you 8 are, right, so if you want to weigh in you can 9 certainly do that. But to be risk informed, one of 10 the key, or if not the core things is likelihood, 11 right. Everybody talks about, hey, you've got to have 12 consequences and likelihood. Only way to get to 13 likelihood is to know how reliable your stuff is, 14 right.

15 So what this does, what risk informing 16 digital does is move the emphasis from deterministic, 17 you know, duties, lists of things, and we'll be fine, 18 to what do I need to do to increase my reliability.

19 I thought it was fascinating, you know, 20 you brought it up, if I look at the Standards 21 Committee, most of the reliability is over with the 22 risk people. You know, I&C like it is, IEEE Committee 23 for nuclear I&C, there's no Reliability Committee.

24 All that's over in the risk people.

25 We collectively are going to the ANS NEAL R. GROSS COURT REPORTERS AND TRANSCRIBERS 1716 14th STREET, N.W., SUITE 200 (202) 234-4433 WASHINGTON, D.C. 20009-4309 www.nealrgross.com

27 1 conference here in a few weeks. John is presenting on 2 stuff in the risk section on I&C reliability. There 3 are no I&C reliability presentations. I looked 4 through the entire thing, there are none.

5 That's really a big takeaway for 6 everybody. As you go to risk informed, reliability 7 analysis shoots to the top of the list. You know why, 8 it's because what risk people do. You know, you have 9 to know what your reliability of your components are 10 to predict the future. Because that's what risk does, 11 right, you predict the future.

12 So if you have all this data, all these 13 pumps, and valves, and ten to the minus this and that, 14 you know, what their statistical reliabilities are.

15 And unless you take that and plug it into a PRA and 16 say, okay, I can predict the future now, because I'm 17 holding -- this condition's the same, right, I'm 18 making sure I do my maintenance and all of that, but 19 it's all predicated on maintaining the reliability of 20 those pieces of equipment at the same level that was 21 used to collect the data in.

22 Would anybody want to brief me on that?

23 MR. WEGLIAN: I'll weigh in. We're going 24 to get, oh, this is John Wedlian with EPRI. We're 25 going to get to this in a little bit. But most of the NEAL R. GROSS COURT REPORTERS AND TRANSCRIBERS 1716 14th STREET, N.W., SUITE 200 (202) 234-4433 WASHINGTON, D.C. 20009-4309 www.nealrgross.com

28 1 software errors that we encounter are actually errors 2 in requirements, not typos in the source code. And 3 our process is designed to find those. And one 4 approach is to change the design so that they can't 5 happen.

6 7 And then the other approach, if you can't 8 do that, is to have control methods to prevent them.

9 But if you know ahead of time that it's a potential, 10 then you can design your testing to look for them.

11 But you can't do that if you don't recognize that it's 12 a potential in the first place.

13 So to your point, the latent error that 14 goes undiscovered until it happens is most likely an 15 error in requirements, because you didn't recognize 16 that that could have gone wrong. And therefore you 17 didn't test for it to make sure that it does the right 18 thing in that condition.

19 And that's what our process is designed to 20 do, to identify those things that can go wrong and add 21 it, either change the design so that it can't happen, 22 or apply control methods like testing to verify that 23 it does the right thing.

24 MEMBER MARCH-LEUBA: But to my previous 25 question, I like what you're doing, because it is NEAL R. GROSS COURT REPORTERS AND TRANSCRIBERS 1716 14th STREET, N.W., SUITE 200 (202) 234-4433 WASHINGTON, D.C. 20009-4309 www.nealrgross.com

29 1 deterministic. How does risk informed feed into that 2 process?

3 MR. WEGLIAN: So if the function that 4 you're doing can't lead to a significant problem, you 5 know, for nuclear safety, if there's 20 systems that 6 can back it up, then why would you spend a million 7 dollars to protect from that? So that's what risk 8 does. It identifies these are the important 9 functions, and you need to spend a lot of effort to 10 make sure those don't happen. And these are your 11 functions that are not important, and you can do the 12 minimum on those. Because even if they fail it's not 13 a catastrophic effect on your plant.

14 MEMBER MARCH-LEUBA: Historically we 15 change the likelihood by having redundancy and 16 diversity.

17 MR. WEGLIAN: Or this defense in depth, 18 you get FLEX now. And the PRA, which is what we use 19 for the risk assessment for the nuclear safety part, 20 at any rate, it accounts for all of that, right. It 21 looks for what's proceduralized, right, what are the 22 operators actually going to do in an event, what 23 systems are available to them if a particular system 24 fails? And the PRA accounts for hardware common cause 25 failures as well, not just software, of course.

NEAL R. GROSS COURT REPORTERS AND TRANSCRIBERS 1716 14th STREET, N.W., SUITE 200 (202) 234-4433 WASHINGTON, D.C. 20009-4309 www.nealrgross.com

30 1 And we can show with the PRA models that 2 any one system can fail completely, and we still have 3 success paths, right. And so our design basis 4 approach creates very reliable systems that we have, 5 you know, defense in depth to account for combinations 6 of failures. We have very good designed plants. And 7 we specifically look at what alternatives are there 8 for the function that may be lost to see which 9 failures are extremely important and which ones are 10 not.

11 And redundancy is not always the best 12 approach. Sometimes adding additional equipment, 13 hardware, integration of additional systems that do 14 something else, can add more complexity, can actually 15 increase the risk rather than reduce it. It depends 16 on the situation in the plant and what you're trying 17 to do. So just relying solely on redundancy is not 18 always the best approach.

19 MEMBER MARCH-LEUBA: Redundancy and 20 diversity.

21 MR. WEGLIAN: Right. Right, diversity is 22 another approach. But again, failures that we see can 23 also be caused by unanticipated interactions between 24 systems. So there isn't one answer that always works 25 100 percent. And an approach is designed to find for NEAL R. GROSS COURT REPORTERS AND TRANSCRIBERS 1716 14th STREET, N.W., SUITE 200 (202) 234-4433 WASHINGTON, D.C. 20009-4309 www.nealrgross.com

31 1 the particular system that you are trying to install, 2 or modify, or build, or whatever it is, what is the 3 best approach for that situation.

4 And looking at two different plants, you 5 may come up with different answers. One may say we 6 want a diverse system that can do these functions, and 7 another plant may say we do not want that. We want 8 that, we want an operator action that can manually 9 start but not a completely redundant system that gets 10 added in. And our approach is designed to identify 11 what the best approach is.

12 MEMBER SUNSERI: I know I'm kind of slow 13 on all this stuff, but I'm starting to get really 14 lost. Because what I hear you talking about, when I 15 think about redundancy, diversity, defense, and FLEX, 16 all that stuff, that's about mitigation, right.

17 If we're depending on FLEX for this stuff, 18 then we're way -- I mean, that's like the last line of 19 defense in my mind. I thought we were looking at 20 going for prevention, not mitigation, avoiding the 21 mistakes, avoiding the errors, avoiding the 22 malfunctions.

23 MR. GIBSON: We are but your FLEX, as a 24 good example, is a low probability, high confidence 25 event. I would say you look at your risk profile, NEAL R. GROSS COURT REPORTERS AND TRANSCRIBERS 1716 14th STREET, N.W., SUITE 200 (202) 234-4433 WASHINGTON, D.C. 20009-4309 www.nealrgross.com

32 1 there are things that happen often. And it graduates 2 from things that happen much less often.

3 All risk -- no risk says you won't have 4 any event. There's always some non-zero possibility 5 we'll have something bad happen. And so what people 6 do, for instance, with FLEX, is they say, well, this 7 says something that probably won't happen very often.

8 And when it does happen we just want to have this 9 extra capability out here to mitigate this, again, 10 high consequence and low probability.

11 So this is about risk, now. There's 12 nothing ever zero and nothing's ever perfect. It's 13 always about how perfect something is or how reliable 14 it is, and what the consequences are.

15 So this, what we're talking about fits 16 into that risk framework. And the things that you'll 17 see happen through this process drive the likelihood 18 of a problem that's consequential to a low state and 19 a low rate, right. It doesn't absolutely prevent it, 20 but nothing will. You know, when your duty is 21 reliability, there's always you have to recognize that 22 there's even a ten to the minus ten probability.

23 That's still a probability that it could happen.

24 CHAIR BROWN: Let me comment here for just 25 a second. We're now on Slide 5, and we're now at NEAL R. GROSS COURT REPORTERS AND TRANSCRIBERS 1716 14th STREET, N.W., SUITE 200 (202) 234-4433 WASHINGTON, D.C. 20009-4309 www.nealrgross.com

33 1 9:17. So we had a little delay on the start, but this 2 is part of the lead-in overall topic before we move 3 into each of the other groupings.

4 I think the conversation and discussion is 5 valid, and both Jose's point and Matt's point are 6 correct. But in my mind we've got to separate stuff 7 a little bit. I mean, you can't deal with -- if you 8 look at your design engineering guide there's 369 9 pages worth of things to look at and things to 10 consider.

11 And then you throw in the HAZCADS document 12 and the TAMs document, then there is tons of 13 information and things to do to make that right. By 14 the time you finish that, you've spent about three 15 years and $5 million, and you don't have hardware yet.

16 So I'm being a little facetious when I say 17 that. There's got to be some balance between this 18 risk total domination of looking at things and what 19 are characteristics of the ways you can design things 20 based on what they do?

21 For instance, some systems, like a 22 protection system, you put in redundancy. And you use 23 an architecture, but you start with an architecture.

24 And then you work downwards to see if plant 25 architecture, then individual systems, how do they fit NEAL R. GROSS COURT REPORTERS AND TRANSCRIBERS 1716 14th STREET, N.W., SUITE 200 (202) 234-4433 WASHINGTON, D.C. 20009-4309 www.nealrgross.com

34 1 into the plant architecture now having protection 2 systems, safeguards, reactivity controls, pump 3 controls for the pumps. For the plants that are out 4 there and want to replace stuff, you look at those 5 individually.

6 You want to start a pump with software, 7 you know, have a controller that does that? Fine, go 8 do that. It's a single point control. If you have 9 250,000 lines of code to do that, that's a problem, 10 okay. You ought to have something simple doing that.

11 So there's a way to look at these things 12 as you go through it intellectually, you know, 13 engineering judgement-wise, but some things you just 14 can't do. You can't test even -- I tried testing 15 250,000 lines of -- I tried testing 50,000 lines of 16 code with a super computer back in the mid-'80s and 17 early '90s.

18 It took forever, and we never finished.

19 So we decided we have to look at are architectures 20 going to protect us from a redundancy, diversity 21 independence, deterministic processing all that stuff 22 with watchdog timers, et cetera, for the protection 23 system part. This is a very broad class of equipment.

24 One size fits all, in my mind, just doesn't do it.

25 But that's my input to the discussion.

NEAL R. GROSS COURT REPORTERS AND TRANSCRIBERS 1716 14th STREET, N.W., SUITE 200 (202) 234-4433 WASHINGTON, D.C. 20009-4309 www.nealrgross.com

35 1 That's all. I think we need to move on to the next 2 slide so we can get -- and look at these things 3 individually --

4 MR. GIBSON: We're there.

5 CHAIR BROWN: -- as we go.

6 MR. GIBSON: The next slide, if I can get 7 it to work. Come on, there we go. All right. So 8 this framework that we had a good discussion about so 9 far, it fits in the implementation level. So, you 10 know, a model that you would use, let's say -- and 11 remember that these products are designed to be used 12 worldwide, both for this and then new reactors, not 13 just in the U.S.

14 And so the way these products are 15 designed, and they're deployed, they would be used to 16 actually do stuff at the implementation level. Now 17 there's going to be policy, there's going to be 18 regulation, company policies, whatever, whatever 19 authorities can decide what policies are, you know, 20 the domains these products are used.

21 There should be objective criteria that 22 are synthesized from policy. These are the 23 performance based objectives that you want to achieve, 24 and you want people to do it without necessarily 25 telling them in detail how to do it right. They've NEAL R. GROSS COURT REPORTERS AND TRANSCRIBERS 1716 14th STREET, N.W., SUITE 200 (202) 234-4433 WASHINGTON, D.C. 20009-4309 www.nealrgross.com

36 1 got to come back to you at some point with some 2 metrics, and some arguments, and such.

3 So this framework is aligned to fit in the 4 implementation level. And it does support safety case 5 style arguments, you know, claims arguments and 6 evidence in sort of a hierarchy-like way. You know, 7 you're down here doing stuff, you know, you say I did 8 these things, what are my objective criteria that meet 9 these policy objectives?

10 We've tested these in those scenarios, we 11 think they work well in making an argument that what 12 you've decided at the implementation level matches the 13 objectives of the authorities' policies that are in 14 place in your particular domain.

15 So you asked us how these are, so they're 16 constantly being updated. You know, we get a review 17 from people using them. They get improved. We would 18 think that the implementation level products would 19 change often. They would evolve. Your performance 20 objectives less often, but they can change based on 21 feedback and experience. And your policy level would 22 be the thing that would change the least often, just 23 sort of a decomposition stack, if you can see that in 24 here.

25 The other thing to take away from this NEAL R. GROSS COURT REPORTERS AND TRANSCRIBERS 1716 14th STREET, N.W., SUITE 200 (202) 234-4433 WASHINGTON, D.C. 20009-4309 www.nealrgross.com

37 1 particular view is that the objective criteria, the 2 policy, the objective criteria, and all that sort of 3 thing comes to the implementation level as a 4 requirement. So if you have objective criteria, 5 published objective criteria that you should be using, 6 you're going to have to synthesize it into a 7 requirement so that these processes can use that 8 requirement to decide what to do. And that's the way 9 this process gets policy and other exterior regulatory 10 guidance.

11 I've touched on this slide. You know, we 12 did this research, you know, on SIL. We looked at 13 about 12 different logic solvers that cover this two 14 billion hours. These platform visitors (phonetic) do, 15 in fact, achieve the level of reliability you would 16 expect. To be a SIL 3 it has to be redundant. So 17 you're talking about the reliability of a redundant 18 system to achieve its safety objective. That's 19 published. I mean, you know, I think the NRC owns a 20 copy of that. And if you're interested in looking at 21 it, if you want you can.

22 Now NEI, they leverage that in NEI 17-06.

23 And that's been endorsed in Reg Guide 1.250. They use 24 SIL certifications for the dependability 25 characteristics and for safety related digital things.

NEAL R. GROSS COURT REPORTERS AND TRANSCRIBERS 1716 14th STREET, N.W., SUITE 200 (202) 234-4433 WASHINGTON, D.C. 20009-4309 www.nealrgross.com

38 1 So we have that. And if you've seen NEI 20-07, 2 another NEI effort, they also are using constructs 3 based on that research. So that's just kind of, you 4 know, where it's being used and where it's headed.

5 We used the idea of SIL certification, a 6 graduated reliability based on systematic controls, 7 and the premise that systematic controls drive 8 reliability for things that are systematic errors, 9 like software errors, design errors, manufacturing 10 errors. Those are systematic errors. And so those 11 systematic controls drive error level. And this 12 research demonstrated that certainly is achievable on 13 a mass scale.

14 It correlates well with our data from our 15 other OE, worldwide for Korea, France, and China, 16 because we mined their data. You know, they're 17 members of EPRI so, you know, we take some time to get 18 data from them that's not available anywhere else.

19 And you see that CCFs in general are very low. And 20 then any that's, like good digital related CCF, that 21 would be just digital related CCF, is also even more 22 rare. Most of the CCFs are at the application level.

23 Who would have thought?

24 I'll give you a demonstration of that in 25 just a few minutes of one of the scenarios that we NEAL R. GROSS COURT REPORTERS AND TRANSCRIBERS 1716 14th STREET, N.W., SUITE 200 (202) 234-4433 WASHINGTON, D.C. 20009-4309 www.nealrgross.com

39 1 used to test our STPA, HAZCADS methodology. And 2 you'll see how that might be relevant.

3 What else, that's about what wanted to say 4 with that.

5 It shows you that same sort of idea. You 6 know, our research tells us that we really have to 7 look at these domains of reliability independently.

8 You know, they're done by different people, at 9 different times, and different places.

10 Your platforms tend to be, especially if 11 you use industry standard SIL certified platforms, 12 there's a lot of them in the field. There's a lot of 13 data, which is the risk person's bread and butter, 14 there's a lot of data on their performance, right. So 15 you can have some confidence about how reliable they 16 are.

17 And if you're familiar with the concept of 18 reliability growth where, over time, something becomes 19 more reliable systematically, because you work the 20 bugs out, you know, you've tested it more, you can 21 have reliability growth in both the development phase 22 as well as the operational phase. So the platforms, 23 they benefit from that.

24 As you go up this pyramid though, at the 25 very top applications could be a one-off. It might be NEAL R. GROSS COURT REPORTERS AND TRANSCRIBERS 1716 14th STREET, N.W., SUITE 200 (202) 234-4433 WASHINGTON, D.C. 20009-4309 www.nealrgross.com

40 1 three people in the world that know what that 2 application does and may have been the only three 3 people that ever looked at how it was made. You know, 4 it used to in a little screen there. But that's where 5 the real payoff is. It uses systematic methods to 6 figure out where your application is. And this 7 diagram --

8 CHAIR BROWN: May I interrupt you for a 9 second?

10 MR. GIBSON: Sure.

11 CHAIR BROWN: When I calibrate my SIL 12 platform I look -- what you mean is, like, I'll use an 13 example of a Common Q platform. It's got an operating 14 system that's been employed in safety systems.

15 MR. GIBSON: Yeah.

16 CHAIR BROWN: It's got application code 17 that has to be integrated into it.

18 MR. GIBSON: That's right.

19 CHAIR BROWN: I'm just trying to put a 20 perspective on these fancy words to make sure 21 everybody understands what you're talking about. I 22 mean, there are some common platforms across the board 23 that get incorporated, just like when you by a PC, 24 there's an operating system within it. You know, the 25 Windows, and then there's all the application stuff NEAL R. GROSS COURT REPORTERS AND TRANSCRIBERS 1716 14th STREET, N.W., SUITE 200 (202) 234-4433 WASHINGTON, D.C. 20009-4309 www.nealrgross.com

41 1 you pour in that could completely destroy all its 2 operational capability. I say that with tongue in 3 check, because obviously there's always difficulties 4 that occur.

5 So that integration effort, when you're 6 just piling stuff in, is very important when you start 7 putting the application codes in.

8 MR. GIBSON: Yeah.

9 CHAIR BROWN: I just wanted to, I mean, we 10 can go on from here. I was just trying to put a cap 11 so we could move on to the next slide, just to make 12 sure what we're talking about.

13 MR. GIBSON: That's true, that's what a 14 platform is. That's the thing that's post your 15 application, you know, that's certainly not one of a 16 kind. There's multiple ones of that. It's not custom 17 made for your application, that sort of thing.

18 CHAIR BROWN: Yes, okay.

19 MR. GIBSON: All right. This kind of 20 wraps it up. You know, you really have a reliability 21 that starts with your components. And you have 22 system, and you can have facility level reliability.

23 You know, random failures and systematic errors 24 contribute to that reliability.

25 And as we look at common cause failures NEAL R. GROSS COURT REPORTERS AND TRANSCRIBERS 1716 14th STREET, N.W., SUITE 200 (202) 234-4433 WASHINGTON, D.C. 20009-4309 www.nealrgross.com

42 1 some of the driving things we're looking at is that we 2 can -- we first have to have a failure or systematic 3 error to have a common cause failure, you know. And 4 generally, when we talk about common cause we're 5 talking about common cause across redundancy 6 expectations of some sort.

7 There are other definitions, but this 8 includes emergent behavior, things that the system 9 does that nobody knew it would do. So systematic 10 error could be -- the equivalent of that for a human 11 action would be an error of commission. It could do 12 new things that you didn't know it did that aren't 13 appropriate.

14 So if you want to achieve this systematic 15 random reliability, yeah, so apply systematic 16 controls. And if you drive your reliability to a high 17 level, you drive your reliability to a high level, and 18 it includes your CCF probability. Because, you know, 19 even your hardware today, you don't hear people talk 20 about a systematic reliability or a hardware re-21 application system. Because when you see all our data 22 in a little while you see a lot of the problems are in 23 the hardware. They're not in the software, all right.

24 So what we see is, well, you know, you do 25 these things and there's this assumption of its NEAL R. GROSS COURT REPORTERS AND TRANSCRIBERS 1716 14th STREET, N.W., SUITE 200 (202) 234-4433 WASHINGTON, D.C. 20009-4309 www.nealrgross.com

43 1 reliability and of its design quality. Well, have you 2 ever been -- ask yourself this question, have you ever 3 been actually tested, if you ever went actually 4 looking for those systematic errors in the design in 5 a formal way?

6 So these things -- what I'm trying to do 7 is -- this ain't about software, this is about 8 functional reliability, the hardware, and the 9 software, and the people together in one place, and 10 the intersection that you have with those.

11 Anyway, that's the kind of idea. Your 12 functional reliability, you have to worry about that 13 at the equipment level but also at the life cycle.

14 Because after you make it, and it starts running, then 15 after a time goes by, I mean, you have to kind of get 16 a sense of its current state.

17 And we understand that pretty good. I 18 mean, we do surveillance tests and all that sort of 19 thing, you know, check on stuff to see if it's broken 20 down. And it also has relevance to a modern approach 21 too.

22 CHAIR BROWN: Somebody's got a question 23 right now. Vicki?

24 MEMBER BIER: I just wanted to, again, 25 chime in on the concept of reliability growth. I NEAL R. GROSS COURT REPORTERS AND TRANSCRIBERS 1716 14th STREET, N.W., SUITE 200 (202) 234-4433 WASHINGTON, D.C. 20009-4309 www.nealrgross.com

44 1 think reliability growth has obviously served the 2 industry very well in numerous ways. You know, a lot 3 of things you identify a problem, you weed it out.

4 And so our hardware and our systems have gotten better 5 and better over time.

6 I'm not so sure that that concept applies 7 as well to cyber reliability. Because while we are 8 trying to improve our systems we can also have zero 9 day exploits where something that looked perfectly 10 safe yesterday all of a sudden we realize there is a 11 huge vulnerability that somebody just figured out how 12 to explore it. So I think in cyber it's not 13 necessarily just growth. You can have both ups and 14 downs in the process.

15 MR. GIBSON: Good comment.

16 CHAIR BROWN: Is that it, Vicki?

17 MEMBER BIER: Yeah, that's it. We can 18 move on. Yeah.

19 CHAIR BROWN: Okay. Thank you.

20 MR. GIBSON: All right. We're going to 21 get to a quick problem of the system of, you know, 22 systems engineering and how it plays into here.

23 First off, this is your basic layout of 24 systems engineering, requirements engineering, systems 25 analysis and control, functional analysis, design NEAL R. GROSS COURT REPORTERS AND TRANSCRIBERS 1716 14th STREET, N.W., SUITE 200 (202) 234-4433 WASHINGTON, D.C. 20009-4309 www.nealrgross.com

45 1 synthesis.

2 You will notice that I think the big thing 3 I want you to see here is the iterative nature of 4 this. This is not a waterfall type of thing. You 5 might go through a systems engineering iteration where 6 you will go through it. It's not a might, you will go 7 through it several times during the development of a 8 change or a new system that you're designing, or a new 9 plant you're designing. It requires systems thinking.

10 And that's really something that we are 11 trying to develop some training for. And it's a new 12 skill, because you have to start thinking in the 13 whole, you have to be looking at the elephant in the 14 whole. You can't be just thinking of a silo, because 15 there is always going to be white space between your 16 silo and somebody else's silo. And that white space 17 could be very damaging.

18 So systems thinking helps you, even if 19 you're not a super-duper expert in every one of these, 20 if you're a super-duper expert in how things relate to 21 each other, which is what this is about, then you can 22 achieve high levels of system thinking.

23 Multiple disciplinary, the ability to see 24 relationships, to communicate across disciplines, the 25 ability to understand complexity, well, this is a NEAL R. GROSS COURT REPORTERS AND TRANSCRIBERS 1716 14th STREET, N.W., SUITE 200 (202) 234-4433 WASHINGTON, D.C. 20009-4309 www.nealrgross.com

46 1 question for you guys. I know you all have opinions, 2 I'm not worried about you not having one. But what do 3 you think though one of the biggest challenges is to 4 risk informed approaches?

5 MEMBER SUNSERI: Well, you don't know what 6 you don't know in this area.

7 MR. GIBSON: Huh?

8 MEMBER SUNSERI: You don't know what you 9 don't know, you know, what happens on the inside these 10 circuits.

11 MR. GIBSON: Well, that's fine. Anybody 12 else want to -- what about risk people in here?

13 What's the biggest barrier to risk informing I&C or 14 anything else?

15 MEMBER REMPE: We're going to let Jose 16 answer that.

17 MR. GIBSON: I believe you all were --

18 (Simultaneous speaking) 19 CHAIR BROWN: I could answer that, but you 20 probably don't want to listen to me anymore.

21 MEMBER MARCH-LEUBA: I've been chastised 22 before for using too much time and --

23 CHAIR BROWN: Have at it, if you want to 24 say something.

25 MEMBER MARCH-LEUBA: So keep moving.

NEAL R. GROSS COURT REPORTERS AND TRANSCRIBERS 1716 14th STREET, N.W., SUITE 200 (202) 234-4433 WASHINGTON, D.C. 20009-4309 www.nealrgross.com

47 1 MR. GIBSON: Ha, ha, has. Well, I'll tell 2 you the answer to that question.

3 CHAIR BROWN: My answer is no.

4 MR. GIBSON: It's Bullet 4, Bullet 4, 5 ability to communicate across disciplines, all right.

6 I mean, just stay at work. I mean, Mary, you know, of 7 course, I've known Mary and John awhile, but I didn't 8 really know them until we started this project, not 9 like I do today.

10 In order to risk inform digital, we had to 11 I&C-ify the risk people. And the risk people had to 12 risk-ify the I&C people. And we did these blind 13 studies we talked about. And this process required a 14 risk person and an I&C person to work together as a 15 team. And we --

16 (Simultaneous speaking.)

17 MR. GIBSON: And we find that they don't 18 even talk to each other. Huh?

19 CHAIR BROWN: Communication is great, but 20 let me just ask a question, because I asked you this 21 question, and we said how do you risk inform? What 22 does that mean relative to digital I&C systems?

23 Let me pick a system, and I think there's 24 one coming up, a governor on a turbine generator set.

25 We developed digital I&C systems for the controllers.

NEAL R. GROSS COURT REPORTERS AND TRANSCRIBERS 1716 14th STREET, N.W., SUITE 200 (202) 234-4433 WASHINGTON, D.C. 20009-4309 www.nealrgross.com

48 1 And that embodied both the speed control and the over-2 speed control. So there was a risk informed decision 3 made in that, how we did it.

4 But it's a single channel. You get data 5 in, you've got to go in, got to open a valve, close a 6 valve, trip this trip valve, or not trip this trip 7 throttle valve. It's the only decision you have to 8 deal with. You don't put redundancy in, because you 9 can't have two regulators both trying to -- or 10 governors trying to run the speed of the TG set at the 11 same time so you've got a single channel.

12 So how do you -- well, the first thing we 13 looked at was you don't want to combine the over-speed 14 with the speed control. So you have two separate --

15 you take the same data, independent sensors come in, 16 independent sensors feeding two different processing 17 units with independent power supplies to make sure you 18 don't have redundant power supplies putting noise in 19 that triggers both of them.

20 And then you have a single output for the 21 speed control, and the other one's sitting on top of 22 it waiting to do something. So, I mean, that's an 23 architectural design approach for risk informing what 24 you're doing.

25 MEMBER MARCH-LEUBA: In your process of NEAL R. GROSS COURT REPORTERS AND TRANSCRIBERS 1716 14th STREET, N.W., SUITE 200 (202) 234-4433 WASHINGTON, D.C. 20009-4309 www.nealrgross.com

49 1 thinking, you were implementing a PI.

2 CHAIR BROWN: Yeah.

3 MEMBER MARCH-LEUBA: It was then during my 4 service I used some redundancy and diversity.

5 CHAIR BROWN: Yeah.

6 MEMBER MARCH-LEUBA: That was my way of 7 asking him to give me the floor. Ha, ha.

8 (Laughter.)

9 MR. GIBSON: All right. Well, go ahead.

10 CHAIR BROWN: Have at it.

11 MEMBER MARCH-LEUBA: The bullet that is 12 missing in every risk informed or risk thinking is 13 completeness. That's my favorite one. And it is 14 missing always because it's impossible to do. So the 15 reason it is the most important one is because you 16 don't know what you don't know.

17 And if you didn't think of this failure 18 mechanism, you didn't want it, you're telling me your 19 core limit is in (audio interference) when there is 20 something which is in the (audio interference) that I 21 think of. If you don't consider the high tsunami, 22 Fukushima was a very safe facility. But they didn't 23 consider it.

24 And it's the issue with, in my opinion, 25 you should assume your system is going to fail. I NEAL R. GROSS COURT REPORTERS AND TRANSCRIBERS 1716 14th STREET, N.W., SUITE 200 (202) 234-4433 WASHINGTON, D.C. 20009-4309 www.nealrgross.com

50 1 say, well, if it fails, I have another one that is not 2 like it that will take over. And then assume it's 3 going to fail. Don't try to convince me that you have 4 looked through the 250,000 license holders and found 5 only 50, because some of us have done (audio 6 interference) and know how --

7 MR. WEGLIAN: That's a great lead in to a 8 little bit later when we get into the actual risk 9 assessment that we do. We assume everything fails.

10 That's our bounding assessment, is that we say 11 everything fails, and we start the risk assessment 12 from there, and say is it safe enough if everything 13 fails, and then at what level do we apply our control 14 methods based on how bad that is if everything fails.

15 So --

16 MEMBER MARCH-LEUBA: Let's talk about --

17 MR. WEGLIAN: -- we'll get to that.

18 (Simultaneous speaking) 19 MEMBER MARCH-LEUBA: -- example later. But 20 keep in mind completeness. It's against the 21 scientific method to gain the completeness.

22 MR. GIBSON: All right.

23 CHAIR BROWN: Also, Jose, interrupted me, 24 which is just fine, okay. Because we looked at now 25 what if those fail. There's are mechanical over-speed NEAL R. GROSS COURT REPORTERS AND TRANSCRIBERS 1716 14th STREET, N.W., SUITE 200 (202) 234-4433 WASHINGTON, D.C. 20009-4309 www.nealrgross.com

51 1 trip also, okay. So we looked all the way to a long 2 plan.

3 There was one other major discussion in 4 there. Everybody wanted the just redundant power 5 supplies, auctioneer them, aid the basic voltage --

6 the governor in the over-speed trip units. You've got 7 redundant power supplies, they're auctioneered.

8 (Simultaneous speaking.)

9 CHAIR BROWN: They're safe. Let me 10 finish. I recommended in that design, you know, you 11 ought not to do that. You ought to have four power 12 supplies, two auctioneered for one, two auctioneered 13 for the other independent thing. Everybody said no.

14 We don't need it. It's too complex. So they put them 15 together.

16 Three months later we had an operational 17 experience on a submarine where, guess what, they had 18 a -- they rode in parallel on the submarine. They had 19 a hunting on that one machine trying to figure out 20 what it was. One of the trouble shooting methods was 21 to pull out one of the power supplies. They did that 22 it immediately over-sped, and it over-sped past the 23 electronic over-speed controller.

24 Fortunately there was an operator standing 25 there to trip the trip throttles out. In other words, NEAL R. GROSS COURT REPORTERS AND TRANSCRIBERS 1716 14th STREET, N.W., SUITE 200 (202) 234-4433 WASHINGTON, D.C. 20009-4309 www.nealrgross.com

52 1 the noise, whatever, they pulled the wrong power 2 supply, the noise was in there. It compromised both 3 the governor and the over-speed trip device. So they 4 went back and redesigned it an put four or five 5 supplies in.

6 My point being is your methods for, to me, 7 I don't know how -- but it was all software based.

8 And we go through -- and this is not a critical 9 comment, I looked at your HAZCADS in the other 10 documents. People just programmed it, and we looked 11 at the number of lines, and we did the best we could 12 in terms of doing software checks. But then it ran, 13 and it's been working beautifully now for ten years.

14 There's a lot of different factors in 15 here. How much analysis do you do, and how much 16 judgement do you do to get out? That's a risk 17 decision right there. So I just think you have to 18 throw in experience and understanding the systems 19 you're dealing with.

20 When you have a safety system where you 21 can run four different channels, that's another layer 22 of architecture that reduces your risk to problems.

23 And if you'd run it asynchronously, you have separate 24 detectors feeding each channel, all that stuff falls 25 into trying to minimize the risk of a common cause NEAL R. GROSS COURT REPORTERS AND TRANSCRIBERS 1716 14th STREET, N.W., SUITE 200 (202) 234-4433 WASHINGTON, D.C. 20009-4309 www.nealrgross.com

53 1 failure, even in the software which is, I mean, it --

2 no. I just wanted to give an example real world of 3 some of this more esoteric --

4 MEMBER KIRCHNER: If I were Matt, I would 5 have answered you, well, this is an iterative process.

6 And you go through the loop once, and then you put 7 experience in that way. And then you go through the 8 loop in the (audio interference), and that's what I 9 would have said.

10 CHAIR BROWN: Why don't we --

11 MEMBER KIRCHNER: -- so we could go on to 12 the next --

13 MR. GIBSON: Thank you, that completes my 14 slide.

15 CHAIR BROWN: Go on to the next slide.

16 MR. GIBSON: The DEG that we talked about 17 earlier, the system engineering process, you know, 18 what Walt was talking about, and that is part of the 19 reliability growth concept that gets back to what 20 you're talking about, as in you doing the iterative 21 approach, you converge each time you look through 22 here.

23 And this might sound like a lot of work.

24 When we benchmarked, other people could actually do 25 this faster than what we're doing in the nuclear NEAL R. GROSS COURT REPORTERS AND TRANSCRIBERS 1716 14th STREET, N.W., SUITE 200 (202) 234-4433 WASHINGTON, D.C. 20009-4309 www.nealrgross.com

54 1 industry, by a lot. Because when you first start this 2 thing, you only get to deal with what you know. And 3 each time you loop through it, you know more. And 4 each time you know more, you use that information to 5 gradually converge and become more complete.

6 And so, you know, this is sort of like the 7 systems engineering loop based on I&C 15288. There's 8 topical guidance in the DEG about different things.

9 There's most topics like cyber, plan integration, 10 hazard analysis, testing, V&V, are all in the DEG.

11 And they're all used in each one of these loops.

12 You end up getting some architecture views 13 once you do your function analysis, and allocation, 14 and your relationship sets, or if you're getting V&V 15 and you transition to the O&M phase, the RO&M phase of 16 activity in the DEG as well. Somebody asked, and we 17 mentioned earlier about the feedback, I think. You 18 know, somebody made a comment about the cyber 19 landscape changes. This process incorporates that.

20 There's periodic feedback that you can 21 measure changes in the decisions you made. So you do 22 a bunch of iterations, and then you do the bigger loop 23 sitting out here where you iterate back. Right now, 24 you know, no cyber professional out there would not do 25 a periodic review of the threat landscape or the NEAL R. GROSS COURT REPORTERS AND TRANSCRIBERS 1716 14th STREET, N.W., SUITE 200 (202) 234-4433 WASHINGTON, D.C. 20009-4309 www.nealrgross.com

55 1 threat capability. That's built into these processes 2 too.

3 To trigger this process again, if you see 4 a delta -- but that'll go for reliability as well.

5 Let's say you operate this thing for a while. And all 6 of a sudden you start getting failures. That's going 7 to trigger you to look back at this using these same 8 tools and find out what's going on, right. Then you 9 restore your reliability to the target reliability 10 that you had when you started.

11 All right. So --

12 CHAIR BROWN: Before you go on, I meant to 13 make an announcement earlier. Dennis Bley, one of our 14 consultants, has also joined the meeting earlier.

15 MR. GIBSON: Very good.

16 CHAIR BROWN: I didn't get him earlier, so 17 I apologize for that.

18 MR. GIBSON: Anyway, just to recap this, 19 you know, there's seven phases in this idea, beginning 20 with the neutral scope and all the way through to the 21 O&M phase. There's nine topical areas. And again, I 22 just, you know, I summarized those earlier. You can 23 see a lot of them here. But this is comprehensive in 24 that view.

25 This is an example, all right, because the NEAL R. GROSS COURT REPORTERS AND TRANSCRIBERS 1716 14th STREET, N.W., SUITE 200 (202) 234-4433 WASHINGTON, D.C. 20009-4309 www.nealrgross.com

56 1 DEG is designed for you to use in day to day work. So 2 it has to be scalable. And so what we do is, in your 3 V model which really ends up being a process 4 decomposition stat, we allow people -- let's say in 5 this case you put it in a digital recorder.

6 And this recorder could be safety related.

7 I mean, power plants have safety related recorders.

8 You know, they're displaying something that's been 9 adjudged to have a high safety significance, or a 10 moderate safety significance.

11 So as you work yourself through this, you 12 say, well, okay, everything above Level B in this 13 decomposition I already know. I have bounding 14 technical requirements that's, you know, this is not 15 a new function, right, it's always been there.

16 And so the person doing the new design or 17 the design changes is not going to have to deal with 18 the things in Level B or in the context, I would say, 19 in Level B and E.

20 And we used this systems engineering 21 process to figure some stuff out. Your bounding 22 technical requirements you already got. But now 23 you've got to decide on your parameter values, you got 24 to decide -- you got to go do some bench evaluations, 25 you know, so that's your design synthesis. You get NEAL R. GROSS COURT REPORTERS AND TRANSCRIBERS 1716 14th STREET, N.W., SUITE 200 (202) 234-4433 WASHINGTON, D.C. 20009-4309 www.nealrgross.com

57 1 some hands on there and figure out what's going on.

2 You refine our parameter values with a feedback loop 3 on it.

4 So you converge in this recorder 5 modification or change-out to the point where you can 6 specify its configuration and any other things you 7 might want to put on it. And then you're able to go 8 to configure and install.

9 Now this is probably below the level that 10 you guys might normally think about. But I'm just 11 going to point out the DEG is for the least complex 12 mod in the facility all the way to the most complex, 13 including building (audio interference) design of it.

14 And this also demonstrates the system 15 engineering concept where you can do this loop 16 anywhere in the stack, at the top, in the middle, at 17 the bottom. You can do it at the consensual level, 18 you can do it at the detail design level. So these 19 iterations are -- ideal iterations are very important.

20 We talked already some about this. And 21 what you'll see is a heat map from one of our reports.

22 This is probably going to be the 4997 report where we 23 evaluated the different hazard analysis methodologies, 24 strengths and weaknesses. STPA scored pretty good, 25 but there are some things it didn't do.

NEAL R. GROSS COURT REPORTERS AND TRANSCRIBERS 1716 14th STREET, N.W., SUITE 200 (202) 234-4433 WASHINGTON, D.C. 20009-4309 www.nealrgross.com

58 1 But then we said what combination of 2 hazard analysis methodology would be the most 3 effective? And so it turns out systems theoretic 4 process analysis and fault tree analysis would be 5 highly effective when combined in being a hazard and 6 risk analysis, a comprehensive one.

7 So, you know, our colleagues at Sandia 8 Laboratories worked with us on this particular phase 9 of the project. You know, you can find this stuff out 10 in some of their labs there. And this was a good 11 insight to combine it, so we did.

12 So HAZCADS, again, is our core hazard 13 analysis process with the bedrock analysis and 14 theoretic process analysis. I gave you some -- you 15 know, we sent you some pre-material. Well, I won't 16 try to teach you all the ins and outs of STPA, but if 17 you want to ask questions, that's great. We can try 18 to answer those. Here are the published handbooks.

19 It's an open source process, there's nothing secret 20 about it.

21 But in this architecture we do Steps 1 22 through 3 of HAZCADs, of STPA, I mean, in HAZCADS. We 23 identify stakeholder losses. We identify hazards, we 24 identify unsafe or undesired control legs. The fourth 25 step, and plus some more stuff, we do what we call the NEAL R. GROSS COURT REPORTERS AND TRANSCRIBERS 1716 14th STREET, N.W., SUITE 200 (202) 234-4433 WASHINGTON, D.C. 20009-4309 www.nealrgross.com

59 1 downstream processes where we take the loss scenarios 2 and we combine them with reliability insights. And 3 then we use our liability, our cyber, our human 4 factors, and EMC downstream processes.

5 Now if we move back to I&C 61508 it says 6 you have to give -- consideration shall be given to 7 the elimination or reduction of hazards which is fine 8 if you go through a risk process. I mean, this is how 9 61508 and 61511, it's how OSHA works. If you have a 10 risk reduction, that's what controls mean. When you 11 say controls, you're talking about risk reduction.

12 That's what always that means.

13 Well, at some point if you cannot reduce 14 your risk with a, let's use 61508 and 61511 concepts, 15 if you go through and say you have a highly risky 16 process, you may be required to go back and redesign 17 your process to make it less risky if you're going to 18 say this can't be mitigated with a protection system.

19 This has to be, you know, you'll say you 20 have to make it inherently less risky so that you can 21 get that last mile with your protection or your 22 control system. So that's why 61508 had you go and 23 try to eliminate and reduce your hazards before you 24 try to, you know, add a protection system to it.

25 Now for this, we created and made up one NEAL R. GROSS COURT REPORTERS AND TRANSCRIBERS 1716 14th STREET, N.W., SUITE 200 (202) 234-4433 WASHINGTON, D.C. 20009-4309 www.nealrgross.com

60 1 out of two RPS things, you know, we could use it for 2 a prop to work with. And what you'll see is, you 3 know, what would look like a conventional block 4 diagram on the left and how an STP analysis would look 5 on the right.

6 And what you'll see is a STPA is an 7 investigatory or diagnostic process. It gets back to 8 the concept about are you complete or not. Again, 9 each iteration, you're going to go through, and you're 10 going to go through the STPA process, we're going to 11 ask questions about, you know, what if questions. How 12 will this happen? What would be the drivers for that?

13 And the structured way you see on the 14 right is called a control structure. And that's an 15 output of STPA. Humans and equipment are all 16 evaluated, all three at the same time with this. This 17 is a control base, a control structure. It's a 18 functional control structure.

19 And then when we add the causal factors 20 which you use the output of these, you know, the first 21 three steps, then you have a complete pathway between 22 what causes the problem and what the consequences of 23 the problem could be. And it gives you a real good 24 insight on how to address it from a control structure 25 point of view.

NEAL R. GROSS COURT REPORTERS AND TRANSCRIBERS 1716 14th STREET, N.W., SUITE 200 (202) 234-4433 WASHINGTON, D.C. 20009-4309 www.nealrgross.com

61 1 MEMBER KIRCHNER: I had just a quick 2 question. In application, at least to the current 3 fleet, almost all of the, I think everyone in the 4 current fleet has some level of PRA. And that 5 accounts for the details of how that plant was built 6 out and modified over time. Or at least I hope they 7 do.

8 How do you reconcile -- it would seem to 9 me that the first thing you would use in your PRA, you 10 look at vulnerabilities and then get into the weeds 11 of, say, your digital I&C functions, piece of 12 equipment, or platform, or something. Is that what 13 actually happens or --

14 MR. GIBSON: Yes. Let's talk about that.

15 Then John will really be down, so I'm not going to 16 talk a lot, because I don't short circuit that.

17 But your PRA is a model, right. So it's 18 one of the models, and we'll talk about it in another 19 slide. It's one of seven models we use in this 20 process. But like you saw in the previous slide when 21 we evaluated the fitness of PRA to finding all the 22 problems, not a probability of the problems but the 23 actual problems that could happen, we see it has 24 shortcomings. So by adding STPA to this, which we do, 25 we combine the two together, we go out and look for NEAL R. GROSS COURT REPORTERS AND TRANSCRIBERS 1716 14th STREET, N.W., SUITE 200 (202) 234-4433 WASHINGTON, D.C. 20009-4309 www.nealrgross.com

62 1 hazards.

2 And HAZCADS stands for hazards and 3 consequence analysis for digital systems. So we've 4 positioned HAZCADS in the spot, or STPA, in that spot 5 between the control systems and the equipment it 6 controls, all right. So the equipment it controls is 7 certainly going to most likely be in your PRA if 8 there's risk in it.

9 But what you don't have is the 10 contribution from a detailed hazard contribution of 11 the control system because, you know, a lot of PRAs, 12 I think, some of that's just truncated today. They 13 just put a number in there, right, because it's a 14 black box.

15 So to just improve that situation, then 16 they've got new stuff that allows you to integrate 17 hazards analysis insights into your, what I'll call 18 traditionally doing risk. And John will go into that 19 in some detail here in just a little while.

20 So we used, this is a test scenario. It's 21 one of the test scenarios that we use to test the 22 processes that we have, you go to STPA and HAZCADS.

23 So the set up here is, the event is you have a turbine 24 control system locks a cooling detection, right. So 25 the old system, the one that's sitting there, has NEAL R. GROSS COURT REPORTERS AND TRANSCRIBERS 1716 14th STREET, N.W., SUITE 200 (202) 234-4433 WASHINGTON, D.C. 20009-4309 www.nealrgross.com

63 1 pressure transmitters going into different logic areas 2 of your turbine control system.

3 And you can see on the diagram, if you 4 have your low pressure, low flow, and high temp, many 5 of those kinds of things that pass the set point, 6 you're going to shut the plant down. You're going to 7 have a turbine trip and probably a reactor trip.

8 That was a single point vulnerability, 9 because any of those flow loop transmitters or any of 10 those different process transmitters cause a turbine 11 trip. And plus, the turbine system itself was non-12 redundant, right, so if it failed (audio 13 interference).

14 MEMBER KIRCHNER: So this is for the main 15 generator?

16 MR. GIBSON: Yes. Yeah, the turbine.

17 Yes.

18 MEMBER KIRCHNER: The actual diagram you 19 had was actually used? You mean you would take out 20 1,000 megawatts because you didn't put two sensors in?

21 MR. GIBSON: Absolutely. All of the 22 plants are like that. And they're kind of reasonably 23 reliable. You know, when I started working in the 24 industry in 1982 the plant I worked at, you know, it 25 would trip every quarter on some BS like this. I NEAL R. GROSS COURT REPORTERS AND TRANSCRIBERS 1716 14th STREET, N.W., SUITE 200 (202) 234-4433 WASHINGTON, D.C. 20009-4309 www.nealrgross.com

64 1 mean, you know, let's be real.

2 MEMBER KIRCHNER: That's within the whole 3 INPO model that the liability and safety are --

4 MR. GIBSON: We're talking the same thing, 5 you're going to push the reliability up. And that's 6 a good segue, because this is really the reliability 7 model.

8 MEMBER KIRCHNER: All right.

9 MR. GIBSON: That's what was the purpose 10 of it, right, improve the reliability and remove 11 simple vulnerability. And this is the proposed new 12 system logic. So now you had three different 13 transmitters for each of these parameters. And 14 there's five redundant, you know, for pressure flow 15 and temperature.

16 And then you had fault detecting voting 17 for those three inputs. And all that's not on this 18 diagram, that's actually on a tri-modular SIL 19 certified PLC. So we could have internal failures to 20 the PLC and would be extremely reliable, as we showed 21 in the SIL certification, at a software and a hardware 22 level, right. We're cooking with gas.

23 So we wanted, you know, to do your 24 controller routine, automatically remove fault in 25 instruments, okay. So there again, we're dealing with NEAL R. GROSS COURT REPORTERS AND TRANSCRIBERS 1716 14th STREET, N.W., SUITE 200 (202) 234-4433 WASHINGTON, D.C. 20009-4309 www.nealrgross.com

65 1 the single point vulnerability. But this logic that 2 you see in here was intended to identify the fault in 3 instrument by measuring the output. It was outside of 4 calibrated range, et cetera, et cetera. There you go.

5 We know when we have bad instruments, miscalibrated 6 instruments, it gets bypassed. And so the voting 7 logic can use the remaining instruments.

8 And the next to the last bullet tells you 9 that even if two instruments are faulted that the 10 logic uses the remaining valid instruments. If all 11 three instruments are faulted, the logic is designed 12 to send a shutdown signal. Hey, something sounds 13 reasonable, on the surface of it, I suppose. You 14 know, because if you've lost all your input you want 15 to, okay, I don't know what's going on.

16 Well, guess what, this thing was 17 calibrated, the flow transmitter, 0600 gallons a 18 minute. The high out of range was 612 gallons per 19 minute for your flow, the flow. The normal stat of 20 cooling flow was approximately 550 gallons a minute.

21 And one pump in service, which is what it usually is, 22 is your redundant pumps. They're not running at the 23 same time, or in theory they're not.

24 Two standard cooling water pumps exists, 25 but there you go, only one, again, at the time.

NEAL R. GROSS COURT REPORTERS AND TRANSCRIBERS 1716 14th STREET, N.W., SUITE 200 (202) 234-4433 WASHINGTON, D.C. 20009-4309 www.nealrgross.com

66 1 They're routinely swapped though. Because you run 2 one awhile then you run the other awhile. So you get 3 even wear on the pumps.

4 But when you swap the pumps the in service 5 pump remains on momentarily while the outer service 6 pump is started. And when that happens the flow goes 7 above 612 gallons per minute. And guess what, you've 8 got a trip. So they did this VIG (phonetic) mod to 9 reduce the single point vulnerability.

10 They put a lot of new stuff that was 11 hideously reliable at that platform level. But they 12 didn't catch this. They didn't catch, through 13 reliability and hazard analysis, that there might be 14 a possibility that, that being a flow case, that all 15 the flow, the flow could be higher.

16 And then what they didn't ask themselves, 17 why do I care if the flow is high. Well, high flow is 18 not a bad thing. Why is there even a trip on high 19 flow? So if it goes off scale high, why am I worried 20 about that, especially if all three of them did it?

21 Well, if you talk about it you say what's the 22 likelihood of all three of the transmitters failing 23 simultaneously? We just put in three, so I wouldn't 24 have that problem, right.

25 So we used this as one of our diagnostic NEAL R. GROSS COURT REPORTERS AND TRANSCRIBERS 1716 14th STREET, N.W., SUITE 200 (202) 234-4433 WASHINGTON, D.C. 20009-4309 www.nealrgross.com

67 1 scenarios to test people to see if they could find a 2 problem with this design. And they did using the 3 HAZCADS and STPA methods. It was pretty compelling 4 watching that happen. Because they started going 5 through that iterative what if, what if.

6 And, Jose, your thing about completeness, 7 and we'll talk about that a little more in a couple of 8 minutes, this process does not achieve absolute 9 completeness. But the likelihood that you miss 10 something goes down dramatically. But your completer, 11 you're more complete than you would have been 12 otherwise, right, in a traditional methodology.

13 That's the idea of it.

14 MEMBER MARCH-LEUBA: So it forces you to 15 be a structure undedicated, and therefore you cover 16 more. It's never bad.

17 MEMBER KIRCHNER: Yeah. When you designed 18 this though, did you go back to get who was on your 19 design team? Did they have the experience to know 20 that we routinely start up one of these other pumps 21 and --

22 MR. GIBSON: Well, you know, that's a good 23 question. Because if you think back to the --

24 MEMBER KIRCHNER: This is one of the 25 fundamental problems with PIRTS for PRAs, you put all NEAL R. GROSS COURT REPORTERS AND TRANSCRIBERS 1716 14th STREET, N.W., SUITE 200 (202) 234-4433 WASHINGTON, D.C. 20009-4309 www.nealrgross.com

68 1 the like-minded people together, and you don't 2 necessarily get that -- I shouldn't say that but 3 (audio interference) came out. The danger is that you 4 put together a lot of experts in the same area. And 5 you don't get that diversity of views so that you 6 don't work completely --

7 (Simultaneous speaking.)

8 MR. GIBSON: -- right there. The STPA 9 process and systems engineering in general is 10 multidiscipline. And, you know, I don't have a slide 11 to talk about this just for brevity, but it really 12 requires -- this kind of stuff requires a big culture 13 change. Because now it's team engineered.

14 When we did these tests you had operations 15 people, I&C people, and risk people on the team doing 16 the analysis of that to find that problem. And that's 17 a good suggestion.

18 MEMBER MARCH-LEUBA: And was that before 19 or after the bullet happened?

20 MR. GIBSON: That was after, that was for 21 our tests.

22 MEMBER MARCH-LEUBA: I've never seen a 23 benchmark that fails when you know all the answers.

24 MR. GIBSON: Right. Well, they didn't 25 know what the answer was. But there were kept blind.

NEAL R. GROSS COURT REPORTERS AND TRANSCRIBERS 1716 14th STREET, N.W., SUITE 200 (202) 234-4433 WASHINGTON, D.C. 20009-4309 www.nealrgross.com

69 1 All these tests were blind tests.

2 MEMBER MARCH-LEUBA: This is an open 3 meeting, right? I'm not meeting everybody that comes 4 in. It is fully -- we cannot meet everybody, the 5 public --

6 CHAIR BROWN: It's an open meeting.

7 MR. GIBSON: It's an open meeting. Yeah, 8 there are no secrets here.

9 So I wanted to share with you some 10 preliminary OE data that we've collected so far. We 11 do annual OE reports. So this is getting ready to 12 publish our 2023 OE report.

13 You're seeing in this, you know, just some 14 anecdotal stuff. And this particular one, out of 15 1,200 OE records, and this particular data set is the 16 NRC website, INPO's website, and some input from our 17 Chinese members. Because they were able to give us a 18 lot of data from their plants. They have a lot of 19 digital plants, and so they gave us some of their, you 20 know, basically their CR reports, their reviews.

21 We harvested those, this is what you see 22 from the data. These are a fair indicators by 23 category out of those 1,200 to give you a feel for, 24 you know, where your big issues are. And this is for 25 I&C now.

NEAL R. GROSS COURT REPORTERS AND TRANSCRIBERS 1716 14th STREET, N.W., SUITE 200 (202) 234-4433 WASHINGTON, D.C. 20009-4309 www.nealrgross.com

70 1 MEMBER KIRCHNER: And you say hardware, 2 that's why I'm asking.

3 MR. GIBSON: This is I&C. This is I&C 4 hardware.

5 MEMBER KIRCHNER: Hardware. How do you 6 define the hardware, I mean, if a relay fails that's 7 a piece of hardware.

8 MR. GIBSON: Yes. If it fails --

9 (Simultaneous speaking.)

10 MR. GIBSON: -- it fails in any of its 11 physical attributes.

12 MEMBER KIRCHNER: Whatever, it doesn't 13 open when you want it to, or it doesn't close when you 14 want it too.

15 MR. GIBSON: Yeah.

16 MEMBER KIRCHNER: Et cetera, et cetera.

17 Okay. That's fairly straight forward, it's a piece of 18 hardware.

19 If a circuit card is pulled out and 20 replaced with a new one, it now makes the system work, 21 right? You've got 233 components on that card. They 22 may just throw the card away in today's world. They 23 may try to repair the card in the old days. And by 24 the time you finish trouble shooting, you might 25 replace three, four, or five individual parts. And NEAL R. GROSS COURT REPORTERS AND TRANSCRIBERS 1716 14th STREET, N.W., SUITE 200 (202) 234-4433 WASHINGTON, D.C. 20009-4309 www.nealrgross.com

71 1 then you don't know exactly --

2 MR. GIBSON: No, they're not inflated that 3 way. This is at a functional replacement unit.

4 MEMBER KIRCHNER: The function --

5 (Simultaneous speaking.)

6 MEMBER KIRCHNER: Yeah, the circuit card 7 level type or the relay brief. That's why I used that 8 example. Okay.

9 MR. GIBSON: Yeah.

10 MEMBER KIRCHNER: That's good. So that's 11 good --

12 (Simultaneous speaking.)

13 MEMBER KIRCHNER: -- reliable data.

14 MR. GIBSON: LRU, line replacement unit 15 level --

16 MEMBER KIRCHNER: Yeah. That's good, 17 thank you. You answered my question. We can go on to 18 the next slide.

19 MR. GIBSON: So this is just some stuff, 20 you know, this is the things, the software. We break 21 it down by what we should, you know, this is both how 22 it was written up and also our secondary 23 investigation. Because we were able to talk to some 24 of these people too, you know, with some of our 25 members.

NEAL R. GROSS COURT REPORTERS AND TRANSCRIBERS 1716 14th STREET, N.W., SUITE 200 (202) 234-4433 WASHINGTON, D.C. 20009-4309 www.nealrgross.com

72 1 So 56 percent were application level 2 problems in the software. And configuration 3 parameters are also big here. You know, we parsed 4 those differently, because we considered application 5 level software being the design, you know, maybe it 6 was a function block or some high level code in there.

7 That would be the application.

8 Configuration parameters are also usually 9 application level parameters. Well, not always, but 10 they are just things, you know, parameters you set in 11 the software to do what it does, firmware, 14 percent, 12 operating system software, two percent. You see out 13 there where the problems are.

14 And it really jives up with our other --

15 we try to double check all our OEs when we connected, 16 because it just doesn't make sense. Because for the 17 job, what kind of trend do we see. And it really 18 matches with our reliability pyramid, or pretty good.

19 We also search for software, CCFs of all 20 sorts, you know. Because that's a -- and we define 21 the CCF as a loss of redundancy in our data. So if 22 you have two redundant things that were supposed to, 23 you know, keep a function going and they both fail, 24 then that would be a CCF in the definition for this 25 data. And so you see, you know, our hardware CCFs are NEAL R. GROSS COURT REPORTERS AND TRANSCRIBERS 1716 14th STREET, N.W., SUITE 200 (202) 234-4433 WASHINGTON, D.C. 20009-4309 www.nealrgross.com

73 1 running about twice the software CCFs.

2 And you see how it's broke down.

3 Manufacturer software, they fit five. Broadcast 4 storms, we consider that a -- I'm going to call it 5 failure on the network. Over-range transmitters, 6 which we just saw an example of that, that's a 7 software CCF, if you want to call it that.

8 We are kind of, as we go forward, trying 9 to advocate for functional, view that functionally.

10 Because had that system been implemented in hardware, 11 you still could have the same CCF, right. It wouldn't 12 have mattered that it happened to be a software basis.

13 It was a design. And then incorrect computer 14 parameters are one of those are also redundant.

15 MR. GIBSON: We're good?

16 CHAIR BROWN: I would have determined 17 whether anybody needs a break or are we satisfied with 18 proceeding? Break? Did I hear you say that, Matt?

19 MEMBER MARCH-LEUBA: How are we doing on 20 the presentation? It's already 10 o'clock.

21 CHAIR BROWN: We are 25 slides through 63.

22 And we've got one hour. We're supposed to go to 23 12:30, is our cutoff. That's what the agenda says.

24 So we do have to kind of keep things moving.

25 MEMBER MARCH-LEUBA: Let's have a break, NEAL R. GROSS COURT REPORTERS AND TRANSCRIBERS 1716 14th STREET, N.W., SUITE 200 (202) 234-4433 WASHINGTON, D.C. 20009-4309 www.nealrgross.com

74 1 but let's ask our EPRI friends to talk a little 2 faster.

3 CHAIR BROWN: Yeah. Ha, ha, ha.

4 MR. GIBSON: You know you can't win, you 5 know, Charlie gives these instructions to practice.

6 You have to speak slow.

7 (Simultaneous speaking.)

8 PARTICIPANT: Did he say this?

9 MR. GIBSON: He did.

10 MEMBER REMPE: You're correct, he said 11 that. But we'd like to revise that.

12 (Laughter.)

13 (Simultaneous speaking.)

14 MR. GIBSON: I have my instructions.

15 MEMBER REMPE: We're learning, okay.

16 MR. GIBSON: I'm almost done talking.

17 CHAIR BROWN: I'm declaring a break, we 18 will return here at 10:25.

19 (Whereupon, the above-entitled matter went 20 off the record at 10:13 a.m. and resumed at 10:25 21 a.m.)

22 CHAIR BROWN: We're going to resume the 23 meeting now. We are now unrecessed, and Matt, you're 24 back on, or whoever.

25 MR. GIBSON: Thank you, Charlie. So, NEAL R. GROSS COURT REPORTERS AND TRANSCRIBERS 1716 14th STREET, N.W., SUITE 200 (202) 234-4433 WASHINGTON, D.C. 20009-4309 www.nealrgross.com

75 1 we've had some good discussion. We're going to look 2 at the digital systems engineering framework 3 components. We're not going to do a deep dive on 4 these. You have what we handed out earlier, and you 5 have the materials, these are all week long courses if 6 you want to really understand how they work. Look at 7 that thing, I'm finding out --- go back.

8 I'm using a little wheel from now on.

9 There we go. So, this is just an I chart, we're not 10 going to talk about it much other than it's in your 11 slide deck. And it is an enumeration of the pieces of 12 the framework. And so, these are all --- the DEG is 13 at the top, and there's all those things that make up 14 the pieces of it that we can use in whole or in part.

15 This is a diagram that shows you the data 16 flow, the flow between these. Your DEG is your 17 anchor, you can see the HAZCADS is in green, that's 18 where we do our hazard analysis, the DEG calls for a 19 hazard analysis. Systems engineering calls for a 20 hazard analysis, so HAZCADS is one of those. For 21 simpler mods, or changes, or designs you can 22 substitute FMEAs there for that instead, but it 23 wouldn't come out into these other processes.

24 DRAM is your liability, basically what I 25 would call your hardware and software liability, your NEAL R. GROSS COURT REPORTERS AND TRANSCRIBERS 1716 14th STREET, N.W., SUITE 200 (202) 234-4433 WASHINGTON, D.C. 20009-4309 www.nealrgross.com

76 1 TAM, cyber security assessment methodology takes care 2 of the cyber stuff. HFAM, human factors, Mary will 3 cover that a little bit more in a minute, and EMCAM.

4 Now, it should be noted, you'll see in this slide 5 there's I&C standards scattered out through here.

6 And DEG, HAZCADS, and DRAM in particular 7 implement a process hazard analysis frame work, or 8 implementation I guess you would say, and a layer of 9 protection analysis as it's described in IC61511. So, 10 there we go, that's the framework itself. That's the 11 pieces, how it's laid out, and you see the 12 enumeration. Now, we're going to do a deep dive today 13 on HAZCADS and HFAM, just because we don't have an 14 unlimited amount of time.

15 But I do want to make a statement here 16 about models. All right, so our research really has 17 told us, not only in this research, but in the other 18 research we do on alternate I&C architectures, which 19 we're not covering today, but we have done modeling 20 analysis in that too. You've heard the old saying 21 that all models are wrong, but some are useless ---

22 useful.

23 Thank you, John, I appreciate that, that 24 does change the context of that statement a lot. The 25 whole statement of that though, is what that person, NEAL R. GROSS COURT REPORTERS AND TRANSCRIBERS 1716 14th STREET, N.W., SUITE 200 (202) 234-4433 WASHINGTON, D.C. 20009-4309 www.nealrgross.com

77 1 Box I think the name of the guy was that said that, 2 what he's really trying to say is all models are 3 wrong, but they can be useful if they answer a 4 question. They're not going to answer all questions, 5 but to answer a question accurately, or reasonably 6 accurately, then they can be useful.

7 And what you see here is a list of the 8 seven models that we use in the frame work. Systems 9 engineering, fault freeze, STPA, reliability analysis, 10 exploit sequences, which is the modeling methodology 11 for the TAM, and the reliability analysis, which we 12 use in DRAM. And so, those models taken together try 13 to answer the questions that need to be answered in 14 order to get good designs done, and safe designs done.

15 And to deal with the questions we have in 16 risk informing performance base. You'll see here how 17 we connect the model to the product. And if you look 18 at the questions you need to answer for what could go 19 wrong, what are the consequences, how likely is 20 something to go wrong, what performance is needed, 21 those are the key elements of risk informed 22 performance based.

23 And the framework elements attempt to 24 answer those questions. And that's always a work in 25 progress, but it's certainly well formed at this NEAL R. GROSS COURT REPORTERS AND TRANSCRIBERS 1716 14th STREET, N.W., SUITE 200 (202) 234-4433 WASHINGTON, D.C. 20009-4309 www.nealrgross.com

78 1 stage.

2 MEMBER REMPE: I may miss when you get to 3 how you put it in the PRA, do you include uncertainty 4 in the answers to those questions?

5 MR. GIBSON: Of course, of course.

6 MEMBER REMPE: That's good, just checking.

7 MR. GIBSON: Of course. Although exactly 8 how to do that is a subject of some research we're 9 doing.

10 MEMBER REMPE: I think it would make it 11 more difficult, but I'll miss that part, so you can go 12 by.

13 MR. GIBSON: Okay, that means you can get 14 four or five minutes, Mary. But anyhow, one of the 15 key models we use that you might not have seen before 16 is relationship sets. And you can see the idea of 17 them, there are four --- we modeled the whole system, 18 we describe the system as system elements. You can 19 view that construct throughout, because we do 20 configuration management at the system element level.

21 We do different characteristics, and 22 functional composition, decomposition at the system 23 level. Hardware, software, human, and equipment under 24 control are the four founding system elements.

25 Everything should be able to be mapped to those, and NEAL R. GROSS COURT REPORTERS AND TRANSCRIBERS 1716 14th STREET, N.W., SUITE 200 (202) 234-4433 WASHINGTON, D.C. 20009-4309 www.nealrgross.com

79 1 included in those. And there are five relationship 2 sets. Functional ones, connectivity ones, spatial 3 ones, programmatic, and acquisition.

4 Again, we're not going to go into a super 5 amount of detail other than to say you can see from 6 this Venn diagram a little bit how the relationship 7 sets work. Because each relationship set type, 8 meaning your relationship sets in a design, but there 9 will be one of these five types, and they'll have a 10 bounding criteria developed for each one. That 11 bounding criteria determines what system elements go 12 in it.

13 What this lets you do is evaluate, and 14 visualize dependencies, degrees of independence, and 15 all these sorts of architectural characteristics. And 16 you can evaluate with relationship sets, so it becomes 17 an architectural view. Maybe not the only one, but a 18 valuable one that allows you to understand when things 19 are connected in some way, either through their 20 functional connection, a data and control flow 21 interconnection.

22 The fact that they're in the same cabin, 23 or under the same roof, or they're spatially in the 24 same spot, they are in the same calibration program, 25 they're in the same cyber security password change NEAL R. GROSS COURT REPORTERS AND TRANSCRIBERS 1716 14th STREET, N.W., SUITE 200 (202) 234-4433 WASHINGTON, D.C. 20009-4309 www.nealrgross.com

80 1 program, whatever they're in that they share that, and 2 you can see that relationship here. And acquisition 3 where you can tell whether these system elements have 4 common acquisition characteristics coming from the 5 same vendor, or using the same products.

6 They have these different kinds of things 7 in common relationship through the acquisition 8 process. That's relationship sets.

9 MEMBER KIRCHNER: Just, could you 10 elaborate on equipment under control, that could be 11 safety related components, why put a fourth category 12 there?

13 MR. GIBSON: Well, because a control 14 system, remember the context of HAZCADS in the digital 15 engineering framework are control and monitoring 16 systems. So, those things in and of themselves don't 17 do anything, they're a paperweight. So, they have to 18 control something. Fluid, mechanical, electrical, 19 they have to do something. That phrase equipment 20 under control is what the I%C standards use in that 21 context to describe the things you're controlling.

22 Now, one concept there is that your 23 control system inherits the risk importance of the 24 things you're controlling. Because obviously if 25 you're opening and closing a valve, you're starting a NEAL R. GROSS COURT REPORTERS AND TRANSCRIBERS 1716 14th STREET, N.W., SUITE 200 (202) 234-4433 WASHINGTON, D.C. 20009-4309 www.nealrgross.com

81 1 pump, whatever that is, whatever the risk importance 2 of that component under control is, your control and 3 monitoring system inherits that risk, it has to be 4 commensurate with that risk of that component.

5 MEMBER KIRCHNER: I get that part, I'm 6 just trying to understand your universe that you're 7 creating, and whether it adds a degree of complexity.

8 Why wouldn't you --- equipment under control would be 9 a special subset of hardware, or perhaps software.

10 MR. GIBSON: Well, I guess the best way to 11 describe that is because the context is different.

12 So, what this lets me do, is because equipment under 13 control is a different context in the plan, it's the 14 thing that makes something happen. So, I can look at 15 these relationship sets, and see all the digital human 16 things that are associated with that equipment under 17 control.

18 And that helps me --- like say equipment 19 under control will typically appear in your PRAs under 20 basic events. Sometimes that's aggregated, but let's 21 say you have a pump or a valve, and you have that in 22 your PRA. Well, now I can tell if that equipment 23 under control is here, if it's populated in any of 24 these relationship sets, maybe even with another piece 25 of equipment under control, now I can see a dependency NEAL R. GROSS COURT REPORTERS AND TRANSCRIBERS 1716 14th STREET, N.W., SUITE 200 (202) 234-4433 WASHINGTON, D.C. 20009-4309 www.nealrgross.com

82 1 that would have escaped me earlier.

2 The second reason we do that is because 3 the SCPA process is modeled that way. You always talk 4 about the process that you're controlling, and then 5 you draw up control structures. And this lets us 6 allocate the process under control to equipment.

7 MEMBER KIRCHNER: How do we fit that in 8 the world of NRC regulations where you have safety 9 related, you have things that are under tech spec, and 10 so on. So, it ---

11 MR. GIBSON: Well, remember this is a 12 technical process. So, if there's, let's say you have 13 something that's safety related. Well, remember, even 14 today when you have 50.69, there are things that are 15 more risk significant, and some things are less risk 16 significant.

17 MEMBER KIRCHNER: Yeah --

18 (Simultaneous speaking) 19 MR. GIBSON: Right. So, this kind of all 20 works with that. But you don't really care at this 21 point. What this is, is how are these things related 22 to each other? So, let's say you have a piece of 23 equipment that happens to be of some categorization, 24 and it would show up here as equipment under control.

25 And that would help you understand the criticality of NEAL R. GROSS COURT REPORTERS AND TRANSCRIBERS 1716 14th STREET, N.W., SUITE 200 (202) 234-4433 WASHINGTON, D.C. 20009-4309 www.nealrgross.com

83 1 the I&C that's actually attached to that thing, and 2 trying to do something with it.

3 MEMBER KIRCHNER: What about the equipment 4 that isn't under control. Is that caught up in the 5 hardware, or for example we've seen debates over ---

6 especially with the newer designs coming in, what 7 makes the DRAM list, and what doesn't. So, what is 8 under control, maybe not in the same way you use this 9 definition, but what requires special attention, 10 etcetera, etcetera, and what doesn't.

11 I'm just trying to understand 12 (simultaneous speaking) has been identified in the 13 I&C universe that has special controls that ---

14 MR. GIBSON: This is anything that the 15 scope of this evaluation, relationship says based on 16 what you're trying to do, right, and what's the 17 system, the subsystem of your design, or the plant if 18 you use at the whole plant level. You have multiple 19 relationship sets, really powerful tool. So, let's 20 say you're doing an advanced reactor, and it's using 21 alien technology, nobody's ever seen this before.

22 You use this same process, right? Because 23 at some point the control and monitoring function has 24 to control and monitor something. And the thing it 25 controls and monitors are these equipment under NEAL R. GROSS COURT REPORTERS AND TRANSCRIBERS 1716 14th STREET, N.W., SUITE 200 (202) 234-4433 WASHINGTON, D.C. 20009-4309 www.nealrgross.com

84 1 control, and that's what actually makes the world 2 turn. They're the things that pump the water, the 3 steam, make the temperature go up and down, do 4 whatever. The actuator, the prime mover if you want 5 to use that word for it.

6 MEMBER KIRCHNER: I'm just struggling with 7 what's in the universe of equipment under control.

8 CHAIR BROWN: Yeah, Dennis, speak up, I 9 see you were queued, so go ahead.

10 DR. BLEY: Yeah, I just wanted to sneak 11 in. I'm trying to -- maybe I can pull Walt and Matt a 12 little bit together. If we go to your forces through 13 elements map, do you intend those to be orthogonal or, 14 and I don't know, for function under control I think 15 it would make it clearer. But if it's a pump that's 16 under control, that's also hardware.

17 Does it fit in both categories? That's 18 sort of what Walt's getting at I think in part ---

19 MR. GIBSON: It does not fit in both categories.

20 The hardware is the control monitoring hardware, not 21 the equipment it is controlling. And we reduce it to 22 this --- this might look like a little bit of an 23 abstraction, these elements, when you do these are the 24 actual tag, and make, model, number ID that are being 25 affected.

NEAL R. GROSS COURT REPORTERS AND TRANSCRIBERS 1716 14th STREET, N.W., SUITE 200 (202) 234-4433 WASHINGTON, D.C. 20009-4309 www.nealrgross.com

85 1 This is not an abstraction. That's one of 2 our problems that we're curing with this, is if you do 3 a lot of high level stuff, even PRAs for that matter, 4 and you truncate, and you combine functions, all of a 5 sudden you've got a basic event that might have a lot 6 of stuff underneath it, right? What we have to do is 7 connect that to the real equipment plant, the real 8 hardware elements, software elements, the human 9 actions that go into it.

10 Because otherwise it's hard to make our 11 hazard analysis turn outward, right? We don't know 12 what we actually have to do.

13 DR. BLEY: You throw a little bit of 14 jargon around, I think I heard you say that in your 15 four elements, the hardware that's listed there is 16 hardware that controls other things.

17 MR. GIBSON: Yes, it's part of the control 18 system, in this case the digital control system.

19 DR. BLEY: Okay. Well, it's probably 20 clear in the documents, it's not clear in the view 21 graph, I think is where some of these questions are 22 coming from. If we go over to your diagram that I 23 suppose is helping to give us clarity, I have to admit 24 it doesn't give me clarity. And this is a cartoon, or 25 this is something that you think is a real NEAL R. GROSS COURT REPORTERS AND TRANSCRIBERS 1716 14th STREET, N.W., SUITE 200 (202) 234-4433 WASHINGTON, D.C. 20009-4309 www.nealrgross.com

86 1 representation?

2 And then how do you garner useful 3 information out of such a, as you called it, a Venn 4 diagram?

5 MR. GIBSON: Well, the Venn diagram is 6 mainly just one way you can view this. And what 7 happens is, is when you do your relationship set 8 analysis, there's a worksheet in these processes where 9 you create a relationship set. You give it names, you 10 give it a taxonomy, you develop the bounding criteria, 11 which are a structured narrative about what things go 12 in it, what things don't.

13 It then has to be populated with real 14 component IDs about what you're dealing with. And so, 15 that's what it really looks like. And if you 16 visualize it, it's going to look a lot like this.

17 Let's say you had a complete design, and you had 18 relationship sets, you could create a visualization 19 that had all that detail on it, which would be more 20 detailed than this. This is just trying to 21 demonstrate the concept of it more than anything else.

22 DR. BLEY: I think it would help --- well, 23 it would help me, and it might help members of the 24 committee who are with you here, to explain --- can 25 you keep that other slide up? To explain the value of NEAL R. GROSS COURT REPORTERS AND TRANSCRIBERS 1716 14th STREET, N.W., SUITE 200 (202) 234-4433 WASHINGTON, D.C. 20009-4309 www.nealrgross.com

87 1 these five relationship types. Functional I kind of 2 get, they're providing the safety, or other functions 3 in the plant. Spatial, I'm guessing you're talking 4 about where the equipment is actually located.

5 Which would (simultaneous speaking) me if 6 I were doing some kind of event analysis that depends 7 on spatial proximity. Programmatic, I suppose would 8 help me if I'm looking at ways to change the program 9 to improve things, or do you have other things in mind 10 there?

11 MR. GIBSON: I'll try to hit those 12 quickly. Functional, again, is what it does. So, if 13 you have a function, this would allow you to associate 14 system elements to a function, enumerate them in 15 there, and if you have another function, let's say it 16 was a diverse function.

17 DR. BLEY: I'm sorry, stay on that first 18 one. So, you kind of have an arbitrary number, 15 19 functional relationship sets, and each one of those 20 would tie together the things that are associated with 21 that one function, true?

22 MR. GIBSON: Well, typically what we're 23 trying to say is if you had a function (audio 24 interference) so say you had two functions. Let's 25 say you had a primary function, and a defense in depth NEAL R. GROSS COURT REPORTERS AND TRANSCRIBERS 1716 14th STREET, N.W., SUITE 200 (202) 234-4433 WASHINGTON, D.C. 20009-4309 www.nealrgross.com

88 1 function, or a diverse function. Then you would be 2 able to see once you populate the system elements if 3 there was any dependencies, or any lack of 4 independence between those two.

5 You would see the sharedness of them, the 6 connectivity of them. Or if they were in the same 7 spot, like say if you had them both in the same 8 cabinet, then you're going to share hazards. These 9 allow our hazards to be correlated to our architecture 10 is what they do at the end of the day. Because we use 11 these same ones in cyber security too by the way.

12 Because when you evaluate a cyber threat, 13 or a dependency, you use these same sort of 14 relationship sets to know what control measures you 15 have to put in place to protect, detect, or respond to 16 that particular threat.

17 DR. BLEY: Okay, I'm sort of getting it.

18 Now, let's jump to acquisition. And the only thing 19 that comes to mind there would be if you're looking 20 for maybe common cause effects because you're coming 21 through the same acquisition source, or what are you 22 really looking for with the acquisition sets?

23 MR. GIBSON: All right, so remember, this 24 is used for everything, not just trying to proof a 25 particular function. So, an acquisition relationship NEAL R. GROSS COURT REPORTERS AND TRANSCRIBERS 1716 14th STREET, N.W., SUITE 200 (202) 234-4433 WASHINGTON, D.C. 20009-4309 www.nealrgross.com

89 1 set will let you immediately understand let's say you 2 had a certain brand of instrument transmitter in your 3 plant, and you got them all from the same vendor.

4 These relationship sets will immediately allow you to 5 visualize the dependency that you have on that 6 particular vendor, and that particular type of 7 transmitter across multiple systems.

8 DR. BLEY: Okay, that's pretty much what 9 I said but in different words. I'm done with this 10 one, and thanks for your help.

11 MR. GIBSON: Thank you.

12 MR. WEGLIAN: If I could weigh in now, 13 something that Walter asked about that I want to make 14 sure is clear. So, we're designing a digital I&C 15 system, so hardware and software are in that context.

16 They're the hardware and software of the digital I&C 17 system. The equipment under control is what it's 18 affecting, pumps, valves, breakers, things like that.

19 You asked about other equipment, and I'm 20 going to give an example, a new design reactor has 21 liquid core, and has a freeze plug. And it's a 22 passive system that if it loses power it's going to 23 melt, and it's going to go somewhere else. That's not 24 controlled by the digital I&C system, so that would 25 not be equipment under control of the digital I&C NEAL R. GROSS COURT REPORTERS AND TRANSCRIBERS 1716 14th STREET, N.W., SUITE 200 (202) 234-4433 WASHINGTON, D.C. 20009-4309 www.nealrgross.com

90 1 system because there's no feedback into that system.

2 So, that would not be part of this. Now, 3 if it's part of the design it will be incorporated in 4 the PRA for its function, and its probability of 5 failure, maybe it plugs, or something like that. So, 6 it'll be part of the risk assessment, but would not be 7 any of the elements of this part because the digital 8 I&C system does not have an effect on that component?

9 MEMBER KIRCHNER: Just one last thing 10 then. On spatial what I was thinking was things that 11 were cohabiting in the same cabinets, or I was 12 thinking under the same zone of influence whether it 13 be a fire, or a blow down or ---

14 MR. WEGLIAN: Equipment qualification, 15 that kind of thing.

16 MR. GIBSON: It's also your HVAC, your 17 support systems have a spatial --- air conditioning, 18 environmental conditioning, it's all spatially 19 oriented, right? So, you want to make sure that you 20 can track dependencies. We've looked at this pretty 21 hard, and we think this is bound (audio interference) 22 hard to miss a dependence if you go through this 23 relationship set process.

24 Anyway, this is a much bigger picture of 25 it. Because you can see where we come down to DEG, we NEAL R. GROSS COURT REPORTERS AND TRANSCRIBERS 1716 14th STREET, N.W., SUITE 200 (202) 234-4433 WASHINGTON, D.C. 20009-4309 www.nealrgross.com

91 1 create relationship set as part of the architecture 2 development. There's several steps in there, 3 interface analysis, functional allocation, we do 4 relationship set development in functional analysis.

5 The key thing there is when we do hazard analysis, you 6 will see where we put the dots.

7 When we do STPA we have to connect the 8 results of the STPA analysis to some actual physical 9 something, software, hardware, something, or person.

10 So, that we understand what the cause of factors could 11 be, and also what sort of control measures we have to 12 apply to it, otherwise we'll lose our way here. And 13 then John talked about this a little bit, but when we 14 do HAZCADS it's possible that when we have, for 15 instance, an unsafe control action, it looks 16 independent when we look at it at the level we're 17 doing it.

18 You could have a dependence in the 19 background like spatial dependency, or acquisition 20 dependency that not obviously comes out. So, in order 21 to properly do the risk we use relationship sets to 22 group the UCAs together where they have dependencies 23 with each other. And then we can put it in a PRA, do 24 bounding analysis on it, and that kind of thing 25 without worrying what didn't we think of.

NEAL R. GROSS COURT REPORTERS AND TRANSCRIBERS 1716 14th STREET, N.W., SUITE 200 (202) 234-4433 WASHINGTON, D.C. 20009-4309 www.nealrgross.com

92 1 What dependency is working out there that 2 could cause a problem that we didn't think could be 3 there? You pull it down though, and this is really an 4 important thing to take away here. We create loss 5 scenarios. That's the complete thing that happens 6 because an event that would occur. We apply control 7 methods, or we identify the control methods we're 8 going to use.

9 That gets allocated back to these system 10 elements, because you've got the control, whatever it 11 is has to apply to something real, software, hardware, 12 person, equipment under control. And that gets pushed 13 back to the requirements development phase. And this 14 loop is iterative, and drives completeness of your 15 design, something we don't do today very well.

16 This process drives your design to be 17 complete after you've looped through this a bit. Let 18 me see if I can --- there we go, it's on automatic 19 pilot. This is a simplified version of this thing.

20 This is the conceptual phase in the work flow. The 21 DEG does the design, it does the whole design, the 22 initial design, it just stays the conceptual phase, 23 maybe the initial conceptual phase.

24 And then we push the output of that into 25 the HAZCADS. And it does, it evaluates for hazards, NEAL R. GROSS COURT REPORTERS AND TRANSCRIBERS 1716 14th STREET, N.W., SUITE 200 (202) 234-4433 WASHINGTON, D.C. 20009-4309 www.nealrgross.com

93 1 and the criticality of those hazards, and downstream 2 processes like DRAM and all these others. They work 3 on some topic specific issues, and we bring those 4 control methods in the requirements back to the DEG 5 for another loop, because now the design gets updated.

6 Every time you go through this the design 7 gets updated, that's the idea of it. So, you do this 8 a few times, and then your design is arguably a high 9 reliability design, statistically a highly, highly 10 reliable design. The likelihood of you having a 11 problem is going to be low. One of the things you 12 asked about as far as the industry on this, I'm not 13 sure how familiar you are with a lot of this.

14 The EB1706, if you search for it online 15 you can download it, it's a bulletin document. That's 16 the efficiency bulletin that implements the standard 17 design process. Industry, that's a read, mandatory 18 bulletin by the C&Os in the nuclear industry in the 19 U.S. And so, we have the standard design process, and 20 NISP-EN-04 is the digital system addendum to that.

21 And then the DEG is called by the NISP as the way to 22 do those activities in the NISP.

23 So, the industry has adopted the DEG, or 24 is in the process of it, I mean it's a long term, 25 multi-year thing. But they'll be doing that, they've NEAL R. GROSS COURT REPORTERS AND TRANSCRIBERS 1716 14th STREET, N.W., SUITE 200 (202) 234-4433 WASHINGTON, D.C. 20009-4309 www.nealrgross.com

94 1 already all got training. We've trained about 600 2 industry folk on that, and about 50 or so NRC folk on 3 that.

4 DR. SCHULTZ: Matt, how are you defining 5 the industry here? It sounded like when you mentioned 6 the C&Os that you were talking about the existing 7 nuclear, and I'm concerned about the emerging nuclear 8 industry. The new plants in advanced design who use 9 this.

10 MR. GIBSON: That's true. We define the 11 industry in this as the extant plants. The people 12 that own plants, license these today, your typical 13 utilities that own plants. They're the ones who are 14 the members of the insight community who made this 15 proclamation everybody would do it, and that's 16 essentially all of the utilities in the United States.

17 DR. SCHULTZ: But you see the need for 18 this to be grasped and utilized by the advanced plant 19 designers who are going to need this technology 20 capability.

21 MR. GIBSON: We do. Those advanced plant 22 designers, most of them are members of EPRI, they get 23 to see this stuff too, and they're in various stages 24 of trying to figure out how to get their hands around 25 it. Part of it is the dilute between what they do to NEAL R. GROSS COURT REPORTERS AND TRANSCRIBERS 1716 14th STREET, N.W., SUITE 200 (202) 234-4433 WASHINGTON, D.C. 20009-4309 www.nealrgross.com

95 1 satisfy regulations, and what they have to do for a 2 good design, which hopefully are the same, but not 3 always.

4 For instance like Rolls Royce SMR, who 5 will eventually sell reactors in the United States, or 6 try to. They are in today (audio interference).

7 DR. SCHULTZ: So, EPRI can push on that, 8 but the NRC can push on that as well.

9 MR. GIBSON: Yeah, that's really everybody 10 at some point has to say this is the way forward or 11 not. We produce these things, we research them, we 12 think they're valuable, we think they're effective, we 13 think they can solve a problem with it. It's really 14 up to our other stakeholders that adopt it. We can't 15 say hey, you must do this, because that doesn't work 16 for us.

17 MEMBER KIRCHNER: Does this reconcile what 18 the NSAG (phonetic) train of documents coming from 19 IAEA ---

20 MR. GIBSON: Which ones?

21 MEMBER KIRCHNER: There's a lot of designs 22 they're saying they're doing their safety to NSAG, 23 don't hold me to this, 12 is the one that comes to 24 mind.

25 MR. GIBSON: I think that's in the same NEAL R. GROSS COURT REPORTERS AND TRANSCRIBERS 1716 14th STREET, N.W., SUITE 200 (202) 234-4433 WASHINGTON, D.C. 20009-4309 www.nealrgross.com

96 1 sort of arc. The IAEA has included the DEG as a 2 recommended method to do systems engineering for I&C, 3 so that's there. They've also had some 4 recommendations on the TAM for cyber. You get the 5 Canadian Nuclear Regulator has included the TAM in 6 their regulation statement as being a valid way of 7 doing what they require.

8 So, it's kind of growing out like weeds, 9 it's not instantaneous that everybody's doing it. But 10 IAEA is aware of this, we participated on technical 11 committees, and all that stuff, shared this stuff with 12 them. So, they're doing stuff. And they have the 13 same, IAEA is a big bureaucracy too, and that's not 14 meant to be derogatory, it's meant to just recognize 15 how much it takes to change direction on something 16 like that.

17 You've got to let a lot of people agree, 18 there's a lot of talking, and it takes maybe sometimes 19 years. All right, so that's what the industry is 20 doing, you've seen that. (Simultaneous speaking).

21 There we go, I think it's the last slide in this 22 section. This is our users group. So, we have a 23 users group that is created to further the digital 24 systems engineering framework.

25 You can see the members, who are members NEAL R. GROSS COURT REPORTERS AND TRANSCRIBERS 1716 14th STREET, N.W., SUITE 200 (202) 234-4433 WASHINGTON, D.C. 20009-4309 www.nealrgross.com

97 1 of it down on the right. So, that gives you some idea 2 of --- and they had to pay to be part of this, I'll 3 just say that out loud because it's a measure of their 4 interest, it's not something they get for their EPRI 5 membership. So, that's going pretty good, we have our 6 next meeting in September, it's been about a year and 7 a half now since we introduced this.

8 So, we get a lot of feedback from all of 9 those companies about these products, and as we do 10 changes to them, they get a look at the drafts, and 11 give feedback, and we ask them what they think, and do 12 tests, and different sort of engagements to get real 13 world feedbacks on these products as we change them.

14 And we're doing that right now, there'll be a mass 15 update in the first quarter of '24.

16 Because like I said, we've talked to 600 17 people now, we've got a big list of improvements we 18 can make on usability. Remember, this something that 19 people asked us to do, so it's a level of a process.

20 So, when somebody says you ought to do this a little 21 better, or that diagram doesn't work exactly right, 22 it's not clear, we update all of that in these 23 products. So, we're doing that.

24 CHAIR BROWN: Does this encompass all of 25 the current nuclear power plant owners? I read NEAL R. GROSS COURT REPORTERS AND TRANSCRIBERS 1716 14th STREET, N.W., SUITE 200 (202) 234-4433 WASHINGTON, D.C. 20009-4309 www.nealrgross.com

98 1 through the list.

2 MR. GIBSON: It's pretty close in the U.S.

3 Plus you've got Bruce Power in Canada, you've got a 4 few ---

5 CHAIR BROWN: Does it have all the U.S.

6 companies?

7 MR. GIBSON: Not entirely.

8 CHAIR BROWN: I don't know all ---

9 MR. GIBSON: I think the only company 10 missing though, to be fair, is I think there has just 11 been a merger between Vistra and somebody else.

12 CHAIR BROWN: Energy Harbor.

13 MR. GIBSON: Energy Harbor, yeah. Energy 14 Harbor wasn't a member, but now they are, because now 15 they're one company. So, I think NextEra is the only 16 one missing off of this.

17 MEMBER REMPE: So, I'm curious about ---

18 MR. GIBSON: Progress is gone, it's part 19 of Duke nowadays.

20 MEMBER REMPE: Before you move on, I'm 21 curious why Curtiss-Wright, I mean most of them are 22 plant owner operator organizations, and Curtiss-Wright 23 has joined in because?

24 MR. GIBSON: They use these products, 25 Westinghouse are members too.

NEAL R. GROSS COURT REPORTERS AND TRANSCRIBERS 1716 14th STREET, N.W., SUITE 200 (202) 234-4433 WASHINGTON, D.C. 20009-4309 www.nealrgross.com

99 1 MEMBER REMPE: Yeah, that makes sense too, 2 but Curtiss-Wright I think of making components more 3 than --- how are they using it, is it to help them 4 make changes to future components that they're going 5 to sell to the plants, or is there anything ---

6 MR. GIBSON: I'm not 100 percent versed on 7 everything Curtiss-Wright does, but I think they do 8 make systems. I mean Curtiss-Wright ---

9 MEMBER REMPE: So, they're selling 10 systems, and getting an edge ---

11 CHAIR BROWN: They made a lot of I&C 12 systems in the nuclear program.

13 MEMBER REMPE: So, they're improving their 14 product to hopefully sell it to the plants is their 15 angle. I'm surprised then that they don't have any 16 competition trying to join in, it's just them.

17 MR. GIBSON: Well, I mean people ask us 18 about it all the time. I expect this list over time, 19 I expect it to get more international members over 20 time. I guess just (audio interference) point in 21 time. All right, he's pointing at you, John, so it's 22 your turn now.

23 MR. WEGLIAN: Okay, so I'm going to talk 24 about HAZCADS in detail. This is a flow chart, you 25 kind of start out in the upper right corner with the NEAL R. GROSS COURT REPORTERS AND TRANSCRIBERS 1716 14th STREET, N.W., SUITE 200 (202) 234-4433 WASHINGTON, D.C. 20009-4309 www.nealrgross.com

100 1 DEG provides information, and HAZCADS over in the 2 left. We perform STPA, system theoretic process 3 analysis, that's the last time I'm going to say all 4 those words, I'm going to say STPA from now on.

5 And what we're looking for is unsafe 6 control actions. He showed earlier the control 7 system. And what STPA focuses on is not internal 8 errors that can happen within your control system, but 9 errors that can happen at the level of implementing 10 something in the plant. I happen to work on 11 developing software within EPRI, I'm here to tell you 12 ever software has bugs in it, every single one.

13 But does it matter? There's a lot of bugs 14 that just don't matter. So, what we're focusing on is 15 when it's time to start a pump, open a valve, flip a 16 breaker, something like that, can that be unsafe?

17 That's what the unsafe control actions are. Looks at 18 the effect on the plant, when can those be unsafe?

19 And then we have a question, does that affect anything 20 in the PRA?

21 We have other consequences that we 22 consider beyond just nuclear safety, because this is 23 a process, the plant wants to know is there going to 24 be an economic impact? Is there going to be 25 environmental impact, reputation harm, something along NEAL R. GROSS COURT REPORTERS AND TRANSCRIBERS 1716 14th STREET, N.W., SUITE 200 (202) 234-4433 WASHINGTON, D.C. 20009-4309 www.nealrgross.com

101 1 those lines? So, there are some consequences that we 2 assess that have nothing to do with the PRA.

3 And so, we have a separate process where 4 the plant makes a heat map essentially, risk matrices 5 to define their risk reduction targets, RRTs, which is 6 the output of HAZCADS for these consequences that are 7 not assessed by the PRA. So, we have this other 8 process for that, we call that pathway one. If it is 9 in the PRA, then we ask well, how many systems are 10 affected?

11 And that takes us to pathway two, three, 12 or four, that's not important for us. I'm just going 13 to block all of those, and say we use the existing PRA 14 model to assess the impact of a complete failure of 15 the digital I&C on the equipment under control to give 16 a bounding risk reduction target. Again, what works 17 well with this process is that STPA looks at the 18 equipment under control, and that happens to be the 19 equipment that's already in the PRA.

20 We already have basic events for the 21 breaker didn't open or close, the valve didn't open, 22 the diesel failed to continue running because of the 23 over speed trip, or whatever it is. Those kinds of 24 things are already in our PRA, so we can use the 25 existing PRA to do our risk assessment for the systems NEAL R. GROSS COURT REPORTERS AND TRANSCRIBERS 1716 14th STREET, N.W., SUITE 200 (202) 234-4433 WASHINGTON, D.C. 20009-4309 www.nealrgross.com

102 1 that happen to be modeled in the PRA. So, our ---

2 MEMBER DIMITRIJEVIC: Hi, this is Vesna 3 Dimitrijevic, I am the PRA expert in residence. So, 4 the question for me here is that you assess the impact 5 of complete failure by looking at the reduction 6 target, which that doesn't match, because risk 7 reduction is assuming complete success of that 8 equipment.

9 (Simultaneous speaking.)

10 MR. WEGLIAN: I'll get in a slide or two 11 to how we define the risk reduction target. So, if 12 you could just hold that until I get to where we 13 actually define those levels, if you still have a 14 question feel free to ask it then.

15 MEMBER DIMITRIJEVIC: All right.

16 MR. WEGLIAN: I'm just trying to show you 17 the overall process at this point.

18 CHAIR BROWN: Is that okay, Vesna?

19 MEMBER DIMITRIJEVIC: Excuse me?

20 CHAIR BROWN: Is that okay?

21 MEMBER DIMITRIJEVIC: Yeah, no, that's 22 fine.

23 CHAIR BROWN: Okay, I didn't hear you say 24 okay.

25 MR. WEGLIAN: Okay. So, we get from NEAL R. GROSS COURT REPORTERS AND TRANSCRIBERS 1716 14th STREET, N.W., SUITE 200 (202) 234-4433 WASHINGTON, D.C. 20009-4309 www.nealrgross.com

103 1 either pathway one, or two, three, or four, we get a 2 bounding risk reduction target, and we feed that into 3 our downstream processes, DRAM, TAM, HFAM, EMCAM. And 4 what they do with that, they may decide to change the 5 design, and put new requirements on the design, and 6 that gets fed back into the DEG, that's the up branch.

7 For things where they're not going to 8 change the design, they're going to control that risk 9 defined by the RRT by defining control methods. And 10 if they're unable to meet that risk reduction target 11 for some reason, maybe it's too costly, after these 12 downstream processes have gone, they identify through 13 loss scenarios, and they feed that into the 14 relationship sets, we come up with combinations of 15 UCAs that can fail at the same time.

16 I need the downstream processes to do this 17 first before I can refine risk assessment, because 18 before I got to that point I can't tell you which UCAs 19 can fail together for the same reason, or same 20 inherent cause. So, I need those downstream processes 21 to do that. Once they do, then I can refine the risk 22 reduction target for the areas where they weren't able 23 to meet the bounding risk assessment.

24 I can now, what we call pathway five, I 25 can redo my risk assessment, again, we'll see that in NEAL R. GROSS COURT REPORTERS AND TRANSCRIBERS 1716 14th STREET, N.W., SUITE 200 (202) 234-4433 WASHINGTON, D.C. 20009-4309 www.nealrgross.com

104 1 a slide or two for these combinations of UCAs based on 2 the relationship sets to say is there a more refined 3 risk reduction target that they can use for that? So, 4 here's where we're getting to the risk ranking. We do 5 this bounding risk assessment. Our first approach is 6 we assume everything fails.

7 Not just software common cause, but any 8 failure that it can do. And then we look at the 9 change in risk if those failures occur. So, we're 10 looking at a change, so this is a delta risk, delta 11 CDF, delta LERF assessment. And if we say that 12 failure of everything associated with the I&C system, 13 the equipment that it controls, if that's less than a 14 1E minus 6 per year delta CDF, and less than 1E minus 15 7 in LERF, I give that a risk reduction target of 16 delta.

17 That's the lowest that it can get. And 18 each order of magnitude higher delta CDF that I get, 19 I increase my risk reduction target up to a maximum of 20 risk reduction target of alpha is delta CDF between 10 21 to the minus 4, and 10 to the minus 3 per year.

22 MR. GIBSON: I want to interject right 23 there while you guys are absorbing this. The risk 24 reduction target translates to a reliability target.

25 So, ultimately if you want to reduce the risk, you NEAL R. GROSS COURT REPORTERS AND TRANSCRIBERS 1716 14th STREET, N.W., SUITE 200 (202) 234-4433 WASHINGTON, D.C. 20009-4309 www.nealrgross.com

105 1 want to increase the reliability to ensure the risk 2 reduction range.

3 MR. WEGLIAN: Yeah, so your seal equipment 4 would be expected to provide, for example, three 5 orders of magnitude of reliability if you had an alpha 6 RRT.

7 CHAIR BROWN: What does that mean, orders 8 of magnitude, it's a thousand times better than 9 something else?

10 MR. WEGLIAN: Yes.

11 MEMBER DIMITRIJEVIC: Okay, so let's 12 discuss the risk reduction here. So, basically the 13 intervals, you know, If you go to the Reg Guide 1174, 14 right?

15 MR. WEGLIAN: Yes.

16 MEMBER DIMITRIJEVIC: So, let's discuss 17 this. So, this is only applicable for specific 18 control action, right? Not in the general for the 19 equipment.

20 MR. WEGLIAN: It is for the entire design 21 that is being evaluated. The digital I&C design ---

22 MEMBER DIMITRIJEVIC: I mean that is 23 absolutely --- I mean everything in the plant depends 24 on the control and stuff. So, I assume you're 25 analyzing specific control actions, because there is NEAL R. GROSS COURT REPORTERS AND TRANSCRIBERS 1716 14th STREET, N.W., SUITE 200 (202) 234-4433 WASHINGTON, D.C. 20009-4309 www.nealrgross.com

106 1 not any --- you know, if you don't have a control in 2 the plant how can you have that specific amount 10 to 3 magnitude 6, so I'm not sure about this. The second 4 thing when it comes to this digital I&C, but what I 5 wanted to discuss is this reduction factor.

6 So, if you fail all control, and your 7 difference that you're only increasing CDF let's say 8 to less than 10 to minus 6, how is this risk 9 reduction? It measures risk increase, so it's more 10 (audio interference) risk achievement factors instead 11 of risk reduction factors. So, this is actually 12 showing you the total increase in that CDF is smaller 13 than 10 to minus 6.

14 How is this connected to risk reduction?

15 So, this is actually I have two questions. One is 16 that, and the second one is what are you measuring, 17 impact of what?

18 MR. WEGLIAN: So, let me tackle the if the 19 risk reduction target is a delta. When I was at a 20 utility I worked on a boiling water reactor, and our 21 RHR system had three trains. Train charlie only 22 provided water into the core, and there were nine 23 other ways to get water into the core, so that one 24 train was very low risk if it were to fail, because it 25 had so much redundancy.

NEAL R. GROSS COURT REPORTERS AND TRANSCRIBERS 1716 14th STREET, N.W., SUITE 200 (202) 234-4433 WASHINGTON, D.C. 20009-4309 www.nealrgross.com

107 1 So, if I was doing a mod that only 2 affected that train I would expect if it failed 3 completely, my delta risk would be less than 10 to the 4 minus 6, and this process would then say for that mod 5 the risk reduction target is so low that the minimum 6 level of activities that we would do on any mod at all 7 is sufficient. You don't have to do anything 8 additional to that.

9 If I'm replacing the RPS system at the 10 plant that I came from, I'm going to guess that would 11 be a risk reduction target of a bravo. And so, you 12 would have more controls on that based on that mod.

13 And so what we're doing is we're saying if anything 14 under control can fail, then we fail it in the PRA 15 model. And we look at what is that change in risk 16 based on that failure and what ---

17 MEMBER DIMITRIJEVIC: Okay, all right, now 18 I understand actually what you are doing. But I just 19 want to tell you that you are using the wrong name 20 based on the PRA principles. Okay, so what you are 21 doing, you are not looking at total digital control of 22 RHR, or God forbid, the plant, you're just looking at 23 one specific control. If that specific control fails, 24 you're increasing that CDF less than 10 to minus 6.

25 But see if you want to analyze risk NEAL R. GROSS COURT REPORTERS AND TRANSCRIBERS 1716 14th STREET, N.W., SUITE 200 (202) 234-4433 WASHINGTON, D.C. 20009-4309 www.nealrgross.com

108 1 reduction you have to analyze if you make this totally 2 reliable, what would be improvement in this? So, 3 therefore maybe you should call this risk reduction 4 factor, because that is something like a risk 5 reduction factor.

6 MR. WEGLIAN: Yeah. It is not the risk 7 reduction factor, it's the risk reduction target. So, 8 this is the target that the downstream processes have 9 to meet with their reliability. That's what we're ---

10 that's how we defined it.

11 MEMBER DIMITRIJEVIC: Yeah, but you're not 12 reducing risk if you're not going to pay attention to 13 this control of that bravo, or whatever charlie train.

14 So, you're not reducing any risk, so that's why RR is 15 wrong, because you are not reducing. You're just 16 going to say risk impact is small. So, that's a 17 different thing. And also you're only analyzing one 18 control action on that single train. So, it's not 19 total.

20 MR. WEGLIAN: If the mod was only for that 21 train, that's what we would assess. If the mod 22 affected all of RHR, all three trains, then I would 23 expect the risk reduction target to be bravo, or 24 alpha, and they would spend a lot more on that.

25 MEMBER DIMITRIJEVIC: Yes, that is, yeah NEAL R. GROSS COURT REPORTERS AND TRANSCRIBERS 1716 14th STREET, N.W., SUITE 200 (202) 234-4433 WASHINGTON, D.C. 20009-4309 www.nealrgross.com

109 1 ---

2 MR. WEGLIAN: So, it depends on the scope 3 of the mod.

4 MEMBER DIMITRIJEVIC: All right, okay. I 5 just made my comment just to tell you that your name 6 is confusing, and --

7 (Simultaneous speaking.)

8 MR. WEGLIAN: Okay, that's why it's a 9 target. Because the downstream processes get that as 10 an input and say your control methods have to meet 11 that level of risk reduction. So, where it actually 12 gets implemented ---

13 MEMBER DIMITRIJEVIC: That's not reducing 14 risk. You're not paying attention. You're saying I 15 don't need to spend the money on control of charlie, 16 that's not reducing any risk. So --

17 (Simultaneous speaking.)

18 MR. WEGLIAN: They would do more on 19 charlie than they would on delta.

20 MEMBER DIMITRIJEVIC: Okay, all right. I 21 made my comment for the record.

22 MR. WEGLIAN: And if the risk --- if the 23 delta CDF, or delta LERF exceed 10 to the minus 3 for 24 delta CDF, or 10 to the minus 4 for delta LERF, we say 25 that design is too risky. With this process you can't NEAL R. GROSS COURT REPORTERS AND TRANSCRIBERS 1716 14th STREET, N.W., SUITE 200 (202) 234-4433 WASHINGTON, D.C. 20009-4309 www.nealrgross.com

110 1 come up with enough control methods to get that level 2 of risk reduction, and therefore you have to change 3 the design in some way. That might be diversity and 4 redundancy, right?

5 You might have to put in a new system that 6 compensates for that. You may have to add human 7 actions that can compensate for that. But whatever it 8 is, if you get to that high level, it's unacceptable 9 at the bounding risk assessment. And this is before 10 we've done any refinement. This is the first time 11 through.

12 If the delta CDF and delta LERF are too 13 high, they have to change the design within our 14 process to get it so that we believe that the 15 equipment that's available for purchase is of high 16 enough reliability to be able to achieve these kinds 17 of reductions. So, here's an anticipated concern of 18 yours. If I look at the risk reduction target of 19 alpha, a change in CDF between 10 to the minus 4 and 20 10 to the minus 3, that's really high.

21 And we would not allow a risk informed 22 application that had something in that range, that's 23 true. But we're not saying that that is the increase 24 in risk when we install this system. What we're 25 saying is that would be the change in CDF or LERF if NEAL R. GROSS COURT REPORTERS AND TRANSCRIBERS 1716 14th STREET, N.W., SUITE 200 (202) 234-4433 WASHINGTON, D.C. 20009-4309 www.nealrgross.com

111 1 the entire system were to fail. That's also the 2 current risk of your current system, what you have 3 installed right now.

4 That is the change in risk, the change in 5 CDF, the change in LERF if that existing system were 6 to fail, that's what you're living with today. And 7 what we believe is that digital I&C upgrades will 8 reduce the risk compared to the existing analog 9 systems as demonstrated by every other safety related 10 industry that's gone digital, and they have improved 11 safety with that change.

12 We believe that soon will be true in the 13 nuclear industry. So, don't look at this and say that 14 this is the change in risk of the system, that is not 15 the case. This is we're defining a target level based 16 on delta CDF and delta LERF that we set the bar for 17 how our control methods, how strong they need to be to 18 get us back to where we think it's very small, in 19 reality an improvement in risk over the existing 20 system.

21 So, after HAZCADs it gets fed down to 22 DRAM, TAM, HFAM, EMCAM, and they're going to assign 23 control methods to protect against these various types 24 of failures. DRAM looks at random failure and 25 systematic failures. HFAM is going to look at human NEAL R. GROSS COURT REPORTERS AND TRANSCRIBERS 1716 14th STREET, N.W., SUITE 200 (202) 234-4433 WASHINGTON, D.C. 20009-4309 www.nealrgross.com

112 1 factors, Mary is going to be talking about that 2 shortly.

3 The TAM looks at cyber security. EMCAM is 4 based on electromagnetic compatibility. This is where 5 those relationship sets also become important.

6 Because if the relationship set tells them that these 7 are important because of spatial relationship, then 8 their control methods will focus on equipment 9 qualification, right? The temperature, humidity, 10 those kinds of things.

11 Is it protected against a fire that can 12 happen in the same location? Is it seismically 13 mounted in the same location, same orientation? Those 14 kinds of things would address a relationship set on 15 spatial, but may not address a functional. Functional 16 is will all my aux feed water pumps fail to start, 17 even motor driven and turbine driven, because of the 18 control system doesn't think it needs it, right?

19 That's a functional, and you would do 20 different control methods for that kind of failure 21 than the spatial. So, they get to tailor their 22 methods for what relationship sets are defined. Is 23 there a question?

24 DR. BLEY: Yeah, Dennis Bley. This is 25 very systematic, it makes a fair amount of sense to NEAL R. GROSS COURT REPORTERS AND TRANSCRIBERS 1716 14th STREET, N.W., SUITE 200 (202) 234-4433 WASHINGTON, D.C. 20009-4309 www.nealrgross.com

113 1 me, it's also a lot of overhead, especially if you go 2 back in some of the earlier stuff you didn't talk 3 about in detail of using the Levinson methodology. If 4 one uses this process to look at a proposed change, is 5 there any relief on the traditional V&V kind of 6 process that has to go on, or is this an add on on top 7 of it?

8 MR. WEGLIAN: We anticipate that this new 9 approach will replace what they're doing today. So, 10 we're not just saying do everything you're doing 11 today, and do more. We're saying what you're doing 12 today is an inefficient process, replace that process 13 with this new design approach.

14 DR. BLEY: I think I'd probably agree with 15 you, but how do you get there from here? EPRI can't 16 do it, NRC could do it, but IEEE, and all the other 17 folks have to get on board as well.

18 MR. WEGLIAN: Yeah.

19 MR. GIBSON: I'm going to take that 20 question. One of the challenges is that people won't 21 turn loose what they currently do even when they get 22 permission to do it. The industry is trying to adopt 23 DEG, and are in the process of doing it. The biggest 24 complaint we hear from folks is well we just added it 25 to what we were doing, we didn't replace it. Even NEAL R. GROSS COURT REPORTERS AND TRANSCRIBERS 1716 14th STREET, N.W., SUITE 200 (202) 234-4433 WASHINGTON, D.C. 20009-4309 www.nealrgross.com

114 1 though that's the premise of it, that's what the C&Os 2 wanted to have happen.

3 Because the internal auditors in the 4 plants don't know how to evaluate risk informed stuff, 5 this kind of thing, they don't know what good looks 6 like. Your QA inspectors want to see a checklist, 7 they want to say let me see your RFA, we're all done, 8 you're good, move on. This requires more of an 9 understanding of what kind of performance output you 10 might get from this system.

11 And they're in the process of figuring 12 that out. That's a thing that we all have to get 13 aligned if we want to see a different way of doing 14 this ultimately. That's the best answer I can give 15 you there. Everybody has a part to play --

16 (Simultaneous speaking.)

17 DR. BLEY: You essentially repeated my 18 question. I don't know how we're going to get there.

19 MR. GIBSON: I'm sorry, I spoke over you.

20 What was that?

21 CHAIR BROWN: Say that again, Dennis?

22 DR. BLEY: I said Matt essentially 23 reiterated my question, and didn't answer how we're 24 going to get there.

25 MR. GIBSON: I'm glad I could be of help.

NEAL R. GROSS COURT REPORTERS AND TRANSCRIBERS 1716 14th STREET, N.W., SUITE 200 (202) 234-4433 WASHINGTON, D.C. 20009-4309 www.nealrgross.com

115 1 But the last part I wanted to say is everybody has a 2 part. NRC has a part, the industry has a part, the 3 vendors have a part. If everybody looks at the other 4 person and says I can't do anything until the other 5 person does something, then you aren't going to move 6 anywhere, right?

7 MR. WEGLIAN: We do have some companies 8 out there that are looking at this, trying to use it, 9 and comparing it to the existing process, and we hope 10 to leverage lessons learned there to one, improve our 11 process, and demonstrate to the industry that it 12 works, and gives you a good --- really what we need is 13 a success story.

14 When somebody does this process, and says 15 look at this, I saved 10 million dollars by doing 16 this, everybody else is going to flock to it, right?

17 We're already doing training, as he's mentioned, over 18 500 people have gone through DEG training. And so 19 they're getting trained up on the process, we need to 20 start actually implementing it, get some success 21 stories.

22 And then if we build it, they will come.

23 Once they see that they have a benefit, that they will 24 get a better product at a cheaper price, I think that 25 we will see a bow wave of people heading our way to NEAL R. GROSS COURT REPORTERS AND TRANSCRIBERS 1716 14th STREET, N.W., SUITE 200 (202) 234-4433 WASHINGTON, D.C. 20009-4309 www.nealrgross.com

116 1 try to get up to speed on this process.

2 DR. BLEY: Well, we thought that would 3 happen with PRA 25, 30 years ago, it's been a very 4 slow process. Charlie, this is an information brief 5 for us, are we expecting at any time to hear any 6 thoughts from the staff at a later date?

7 CHAIR BROWN: No, not right now.

8 DR. BLEY: Okay, it'd be real interesting 9 to see how well they're following this, and what 10 they're up to.

11 CHAIR BROWN: Well, the reg guides as 12 they're presently configured, they drive you with 13 different --- well it's not, I don't want to call it 14 a standard.

15 DR. BLEY: It's just different.

16 CHAIR BROWN: Yeah. And I'd like to 17 introduce one other thought process in there in terms 18 of making sure the stuff works right, and all that 19 kind of stuff there, is in some configurations the 20 commercial plants have a different configuration than 21 the world I came out of. Nobody wanted to shut down 22 a reactor plant while a submarine was (audio 23 interference) just don't want inadvertent shutdowns.

24 Those are potentially non-fun events, so 25 you look very heavily at making sure you have systems NEAL R. GROSS COURT REPORTERS AND TRANSCRIBERS 1716 14th STREET, N.W., SUITE 200 (202) 234-4433 WASHINGTON, D.C. 20009-4309 www.nealrgross.com

117 1 in place that when something fails you don't 2 compromise, or shut down the plant. The commercial 3 plants have a little bit more flexibility in terms of 4 they don't want to shutdown, but they can err in the 5 direction of failures that drives you towards a 6 shutdown as a mode of protection.

7 So, there's a little bit of difference in 8 thought process. You're still going for the same 9 thing, equipment that's very, very reliable, does what 10 it's supposed to do whenever you ask it to do it, but 11 I just want to throw in there's a balance in here. I 12 think the commercial world is compatible with where 13 you're all going, and what you're trying to do.

14 It's just a matter of overcoming the 15 inertia in the manufacturers who build this stuff for 16 the applicants, that they want to accept the process 17 in order to deliver their product, and it should be 18 better than what they were delivering before. Chris?

19 Yeah.

20 MR. COOK: So, this is Chris Cook, branch 21 chief Office of Research --

22 (Simultaneous speaking.)

23 CHAIR BROWN: Get closer to your mic 24 please.

25 MR. COOK: Thanks. Member Bley, I just NEAL R. GROSS COURT REPORTERS AND TRANSCRIBERS 1716 14th STREET, N.W., SUITE 200 (202) 234-4433 WASHINGTON, D.C. 20009-4309 www.nealrgross.com

118 1 wanted to respond to your question about what the 2 staff was doing, or was partially doing. In the 3 Office of Research we've been watching the DEG quite 4 closely. About two years ago I think it is, Matt, we 5 had training that we offered for the staff that was 6 going through.

7 So, they went through a multi-day class to 8 try to understand all the pieces, parts, and 9 components that were in there. We've also been 10 looking at individual components. Just last month we 11 had some intensive training for inspectors, as well as 12 NRC staff on the TAM, the technical assistance 13 methodology that you saw that's a part of this going 14 through. So, trying to understand those components.

15 Because we understand that we've been 16 seeing it. I think we mentioned at the ACRS meeting 17 just recently on the cyber that the TAM was applied at 18 Global Three and Four as well as Columbia Generating 19 Station. So, trying to get people ready for that, and 20 understand it. So, definitely in the Office of 21 Research we're trying to --- and those are just our 22 past activities.

23 We have current activities right now 24 dealing with both STPA, they're going on. We also 25 have activities where we're looking at trying to NEAL R. GROSS COURT REPORTERS AND TRANSCRIBERS 1716 14th STREET, N.W., SUITE 200 (202) 234-4433 WASHINGTON, D.C. 20009-4309 www.nealrgross.com

119 1 assemble what we're calling some operating experience.

2 We really get to that HAZCADS as well trying to 3 understand the self certifications. Dr. Alverson 4 (phonetic) here has been leading some of that effort.

5 So, anyway, we have a lot of, I think 6 cross connections, connection points. We're seeing 7 this, we're definitely trying in the Office of 8 Research to be ready, that's what we see our job as 9 being so that when NRR, or when our inspectors come 10 across it, that it isn't the first time, that they've 11 already had it available to them. So, that's really 12 --- we're doing a lot, but I'm not able to say we're 13 looking at changing this specific reg guide to put 14 this in here.

15 That's one thing that we have been 16 thinking about, is how should our guidance --- Member 17 Kirchner was talking about how does this relate to 18 RITNIS (phonetic) and how does this relate to all 19 these other categories, outstanding question. So, 20 anyway, that's been very much on my mind set. Another 21 one of my staff, Mauricio Gutierrez is really working 22 at trying to look at some of that, how we get into the 23 guidance, and how do we do it in trying to do it.

24 We're also leveraging the MOU that we have 25 with EPRI to look at it as well as the MOU that we NEAL R. GROSS COURT REPORTERS AND TRANSCRIBERS 1716 14th STREET, N.W., SUITE 200 (202) 234-4433 WASHINGTON, D.C. 20009-4309 www.nealrgross.com

120 1 have with LWRS. DOE as part of their modernization 2 strategy, they're doing it, that's a lot of what this 3 has been tied into. And I'm going to conferences as 4 well to talk about it. So, anyway, that's just sort 5 of a snapshot, sorry to advertise about the branch.

6 But that's really where I felt like we're doing --

7 (Simultaneous speaking.)

8 CHAIR BROWN: No, no, that's fine.

9 DR. BLEY: Thank you, I wanted to push in 10 one more area. Have they given you, or have you had 11 the opportunity to sit in on any of the trial 12 applications EPRI has been organizing?

13 MR. GIBSON: You were on the proof test.

14 MR. COOK: Thank you for that, it's 15 helped. So, what we had a couple summers ago was a 16 multi month long program called the proof test that 17 EPRI organized to actually go through and do some of 18 that testing. So, I had a couple of my staff 19 participate in that activity to learn how this would 20 go through by using a specific case. So, having that 21 case, and doing it.

22 We've done this under the MOU so that it's 23 really we can bring our technical insights, EPRI is 24 sort of bringing in the information that's in there.

25 So, Matt, I don't know if you had anything else you NEAL R. GROSS COURT REPORTERS AND TRANSCRIBERS 1716 14th STREET, N.W., SUITE 200 (202) 234-4433 WASHINGTON, D.C. 20009-4309 www.nealrgross.com

121 1 wanted to add about the proof test, or anything else 2 on that.

3 MR. GIBSON: No. Generally the proof test 4 was again, another test to gather information on 5 usability, performance characteristics. We had eight 6 people over three months do this process on lock 7 changing to see how they did it, what performance they 8 had.

9 MR. COOK: And that was really critical 10 for us, because that showed us okay, that began our 11 understanding. When you guys were looking at some of 12 the functional sets, how do these pieces and parts 13 come together, that was a start. Then we went to 14 training, now we're trying the more we understand it, 15 but we're still waiting to see how do we walk through 16 or review, that's different.

17 I think the only one that's there in that 18 part is perhaps the cyber security, because they're 19 actually now to the point of inspecting results, 20 products that have come out after the TAM has been 21 implemented. So, that part is right there, but we 22 haven't necessarily walked through a design. I think 23 we're starting on one, that's right now with NRR, so 24 I'm going to talk about it in the review.

25 I think that has some components that were NEAL R. GROSS COURT REPORTERS AND TRANSCRIBERS 1716 14th STREET, N.W., SUITE 200 (202) 234-4433 WASHINGTON, D.C. 20009-4309 www.nealrgross.com

122 1 in there, that was funded by a DOE study, or DOE had 2 a large part to do with that from my understanding of 3 the DEG.

4 MEMBER REMPE: It just sounds like this 5 would be a great thing to try and get through LWRS as 6 a pilot project that would help with the conversation 7 of what is needed in the regulatory environment to 8 make something like this happen. It doesn't sound 9 like the research folks should have to be struggling 10 on this alone. I mean, have you guys had those 11 discussions, or?

12 MR. GIBSON: We do a collaboration with 13 LWRS. To date I don't think that collaboration has 14 coalesced around a regulatory thing. So, that's 15 something, light water reactor sustainability program.

16 MEMBER REMPE: DOE is what he was talking 17 about.

18 CHAIR BROWN: A DOE program.

19 MR. GIBSON: Well, INL runs that, but 20 there are other DOE outfits that are attached to it 21 too. But it hasn't really concentrated on the 22 regulatory elements of it very much so far. Although 23 I think that's an area of improvement.

24 MEMBER REMPE: Well, if you had a pilot, 25 it seems like that would come up in the discussion, or NEAL R. GROSS COURT REPORTERS AND TRANSCRIBERS 1716 14th STREET, N.W., SUITE 200 (202) 234-4433 WASHINGTON, D.C. 20009-4309 www.nealrgross.com

123 1 something. But anyway, listening to the discussion 2 here it sounds like something that's needed.

3 MR. WEGLIAN: Okay, I need to give Mary 4 some time, so I need to finish up. I mentioned this 5 already with the control methods. I didn't mention 6 the word causal factors, but that's one of the things 7 that these downstream processes look for. What can 8 cause the unsafe control actions to occur? And then 9 they tailor the control methods to address those 10 causal factors.

11 And then there's a process for scoring the 12 controlling methods against the risk reduction target 13 that comes out of HAZCADS. So, the idea is given a 14 risk reduction target of alpha, they have strong 15 control methods that drive down the potential risk.

16 Bravo does not require as strong of control methods, 17 and charlie even less, and delta would be the minimum.

18 And we think those would equate to charlie 19 would be about a SIL level one, and on the graphic 20 here, SIL is safety integrity level, and SC is 21 systematic capability. So, even if the letters are 22 the same, they actually represent something different.

23 But one gives it a target, and the other gives it the 24 capability that it has to address that.

25 The acceptance criteria, as mentioned NEAL R. GROSS COURT REPORTERS AND TRANSCRIBERS 1716 14th STREET, N.W., SUITE 200 (202) 234-4433 WASHINGTON, D.C. 20009-4309 www.nealrgross.com

124 1 earlier, is kind of tied to the Reg Guide 1.174 with 2 the 10 to the minus 6 saying that was a very small 3 change. If we're very high delta CDF, delta LERF, 4 then we say we have to change the design before we go 5 ---

6 DR. BLEY: Go back to the other --- I want 7 to make sure I understand something on the previous 8 slide real quick. I'm just trying to connect the dots 9 between you're a, B, C, and D, and your triangle.

10 MR. WEGLIAN: The time delay is really ---

11 and now I've got to do it again.

12 DR. BLEY: Okay, A is the lowest of the 13 CDF ---

14 MR. WEGLIAN: A is the largest, delta ---

15 DR. BLEY: Largest risk?

16 MR. WEGLIAN: Yes, if it fails.

17 DR. BLEY: Okay, I'm just trying to 18 connect the SIL, SC3 levels to your risk, which is 19 what you were doing, and I just couldn't merge the two 20 specifically.

21 MR. WEGLIAN: Yeah, SIL3 is the highest 22 level of SIL that is widely commercially available.

23 The process allows for a SIL4, but in practice nobody 24 goes to that level.

25 DR. BLEY: Because it's not very reliable?

NEAL R. GROSS COURT REPORTERS AND TRANSCRIBERS 1716 14th STREET, N.W., SUITE 200 (202) 234-4433 WASHINGTON, D.C. 20009-4309 www.nealrgross.com

125 1 MR. WEGLIAN: It's not that it's not 2 reliable, it's just if other industries need more than 3 a SIL3, they change their design as well. And so, the 4 SIL3 equipment is available for purchase right now, 5 SIL4 would be hard to find. That doesn't mean that 6 nobody could make it.

7 DR. BLEY: Is it better than SIL3, or ---

8 MR. WEGLIAN: SIL3 is higher than SIL2, 9 which is higher than SIL1.

10 DR. BLEY: Hold it, that's not consistent 11 with A, which is the highest risk.

12 MR. WEGLIAN: You are correct, we went 13 with A, B, C, D to not conflate those two.

14 MEMBER KIRCHNER: SC stands for what?

15 MR. WEGLIAN: Systematic capability.

16 MEMBER KIRCHNER: So, nothing to do with 17 seismic?

18 MR. WEGLIAN: No, not seismic.

19 CHAIR BROWN: Can I just ask a question?

20 The lowest risk, the 10 to the minus 6 and whatever is 21 a D?

22 MR. WEGLIAN: Correct, which is ---

23 PARTICIPANT: And the SIL level one is ---

24 CHAIR BROWN: SIL1 sounds like a D which 25 is the highest ---

NEAL R. GROSS COURT REPORTERS AND TRANSCRIBERS 1716 14th STREET, N.W., SUITE 200 (202) 234-4433 WASHINGTON, D.C. 20009-4309 www.nealrgross.com

126 1 MR. GIBSON: Well, let's fix this, because 2 this is a concept that we have to fix. The A, B, C, 3 D are just sensitivities, they aren't risk. No real 4 risk is involved here.

5 CHAIR BROWN: No, but the ranges ---

6 MR. GIBSON: But bear me out. So, when 7 you have a low risk delta, meaning the risk change is 8 the least, that's small, that's not saying the risk is 9 small or big, it's just that the delta change is 10 small, that gives you to a D. If you have a big 11 change in risk, you can get a --

12 (Simultaneous speaking.)

13 CHAIR BROWN: I got it. So, SIL3 is the 14 highest quality you can get, most reliable.

15 MR. GIBSON: Which brings that down to a 16 high level ---

17 CHAIR BROWN: Takes you up into the change 18 in risk is the smallest ---

19 MR. GIBSON: You bring it back to where it 20 should be by applying that capability.

21 CHAIR BROWN: All right, sorry, I got it 22 now.

23 MR. WEGLIAN: All right. So, if our delta 24 CDF, delta LERF is too high we say you have to change 25 the design. If you're at an alpha or bravo, we're NEAL R. GROSS COURT REPORTERS AND TRANSCRIBERS 1716 14th STREET, N.W., SUITE 200 (202) 234-4433 WASHINGTON, D.C. 20009-4309 www.nealrgross.com

127 1 saying if it were to fail that's a high change, so we 2 need strong control methods to drive us down to back 3 to where we say it's a very small change in risk, or 4 improvement in risk is what we actually expect.

5 Charlie, now we're actually within the 6 range of the Region Two in the Reg Guide 1.174, SIL1 7 is probably appropriate for that. And if you're in a 8 delta, you're already a very low change in risk if it 9 were to fail, and so the minimum requirements are 10 required. You don't need to buy any SIL equipment at 11 all. Whatever commercial off the shelf normal stuff 12 should be efficient.

13 CHAIR BROWN: You can go to RadioShack.

14 MR. WEGLIAN: That would be fine. Norfolk 15 Wire, that's where I would go. Coming back to here, 16 I don't want to spend a lot of time on this, because 17 again, I need to give Mary some time. But this 18 process goes through the first time, and my downstream 19 processes can identify things that can fail at the 20 same time. That's at the relationship set level.

21 I now know combinations of UCAs that I 22 identified through STPA that can fail for a common 23 cause, a common reason, a common loss scenario is the 24 terminology for STAP. And I can take those, and I can 25 plug that back into the PRA model, and get a refined NEAL R. GROSS COURT REPORTERS AND TRANSCRIBERS 1716 14th STREET, N.W., SUITE 200 (202) 234-4433 WASHINGTON, D.C. 20009-4309 www.nealrgross.com

128 1 risk reduction target for control methods that are 2 hard to meet at the bounding assessment where we 3 assume everything failed.

4 Now that I have more information about how 5 the system can fail, I can group those things, and do 6 a more refined assessment, and say these ten UCAs can 7 happen due to they're all in the same room, right? A 8 fire can fail all of those together. What is the risk 9 reduction if those all fail, I can give them a new 10 number that maybe is easier for them to meet, but I 11 need this process to go through the first time.

12 Because when I first get it as a PRA 13 person, I have no knowledge of when UCAs can fail 14 together, I have to assume they can all fail. Special 15 note on software common cause, our operating 16 experience both nuclear, and non-nuclear indicates 17 that most of the systematic failures are a result of 18 latent design defects due to inadequate requirements.

19 Usually his example of the flow pump over 20 ranging, the requirements were wrong. The 21 requirements should have been that the range on that 22 pressure transmitter could have been high enough for 23 two flow, right? Two pumps, and run at the same time.

24 It's an inadequate requirement, or uncontrolled system 25 interactions where they didn't realize that two NEAL R. GROSS COURT REPORTERS AND TRANSCRIBERS 1716 14th STREET, N.W., SUITE 200 (202) 234-4433 WASHINGTON, D.C. 20009-4309 www.nealrgross.com

129 1 systems working together could lead to something.

2 It's very rarely a typo in the software 3 that would have something like that. Because that 4 usually gets caught in testing. Misapplication of 5 diversity as a means to address the potential for 6 software common cause can actually contribute to 7 additional system complexity, which can actually 8 increase the potential for latent errors. I'm not 9 saying don't use diversity.

10 What I'm saying is blindly applying 11 diversity as the only means to address risk may 12 actually make the risk worse. We have to be smart 13 about it, and our approach is designed to make you 14 smarter in how you address these kinds of things. Use 15 diversity when it's appropriate, use something else 16 when that's more appropriate.

17 So, HAZCADS identifies and risk ranks 18 these potential systematic errors, all of them, not 19 just software common cause, which would be a subset of 20 those errors. And then the other tools in the 21 framework establish the control methods to address 22 those. Here's just a summary of everything I said.

23 We use STPA to identify what could go wrong, what 24 could be unsafe through interactions of the system?

25 That's what we focused on. We start with NEAL R. GROSS COURT REPORTERS AND TRANSCRIBERS 1716 14th STREET, N.W., SUITE 200 (202) 234-4433 WASHINGTON, D.C. 20009-4309 www.nealrgross.com

130 1 a bounding risk assessment. We revised our 2 requirements, or we apply control methods to account 3 for these errors, but we do that commensurate with the 4 risk, right? If it's very low risk, we don't have to 5 do a lot, if it's very high risk we do have to do a 6 lot.

7 And if needed, we refine the risk 8 assessment based on our identified loss scenarios if 9 we couldn't meet the bounding risk reduction target 10 for a particular control method. That's the end of my 11 presentation.

12 MS. PRESLEY: Are you guys ready for the 13 next phase of this?

14 CHAIR BROWN: Are you the human factors 15 part here?

16 MS. PRESLEY: I am. So, we're going to 17 talk ---

18 CHAIR BROWN: Before we do you, since 19 you're now going into this amorphous area of human 20 factors, and the other part has been kind of hardware, 21 and designs, and software, and stuff like that. So, 22 this is short, it's just something I observed in going 23 through the DEG, and looking at the TAM thing, the 24 cyber part, but let me talk about the DEG first.

25 If it's not obvious, and I think Chris may NEAL R. GROSS COURT REPORTERS AND TRANSCRIBERS 1716 14th STREET, N.W., SUITE 200 (202) 234-4433 WASHINGTON, D.C. 20009-4309 www.nealrgross.com

131 1 remember this, he may not since he's not in NRR as 2 thoroughly, but when we've been focusing on the 3 digital I&C systems, whether they be in new 4 applicants, or otherwise, the focus has been to start 5 with defining the architecture. Don't try to say 6 you're going to meet all the positions in every Reg 7 Guide, and every IEEE standard, and tell me you've got 8 a safe system.

9 The architecture is the boundary 10 conditions for defining whether that system is going 11 to be safe or not, because it tells you where your 12 soft spots are. And when I read the DEG, it mentioned 13 architecture, but in a very generic manner in about 14 422 places. So, it's just a lot of listings in case 15 you go through it. What I missed on the lead in to 16 your whole thing, which is what we've been trying to 17 get the staff to emphasize with applicants in the reg 18 guides, there ought to be a preamble of some type.

19 That says look, you've got to define what 20 does my plant look like. Which parts are safety 21 related, which parts are safety critical, which parts 22 etcetera, etcetera. And then you start layering out 23 how complex what you need to do relative to all the 24 stuff you have both been talking about in terms of the 25 development process. I'm just making this as an NEAL R. GROSS COURT REPORTERS AND TRANSCRIBERS 1716 14th STREET, N.W., SUITE 200 (202) 234-4433 WASHINGTON, D.C. 20009-4309 www.nealrgross.com

132 1 observation.

2 To me the architecture is the fundamental 3 stone you start with, and if you start talking about 4 processes with individually the piece parts throughout 5 it, you lose the focus. The first point or place you 6 would ever talk about a system or plant architecture 7 is in section 4.2.8 where you have a notional diagram, 8 figure 4.4 which shows a giant plant with safety 9 systems, and other plant systems, etcetera, and a 10 network, and everything else.

11 Unfortunately all the data that comes from 12 all the systems goes into a giant network which is all 13 jumbled up in server software, which is not very 14 reliable or safe. That's a different thing. The one 15 redeeming value, it shows data diodes coming from the 16 safety systems out to the outside world. The 17 downside, it shows safety systems communicating back 18 and forth, not out just to the systems they've got to 19 shut down the plant with.

20 My point being that's pretty late in the 21 point in the system, in your process rather, to start 22 thinking about the overall plant architectural. I 23 just think you all ought to emphasize how important 24 knowing what your plant looks like, what the piece 25 parts you're dealing with that you're going to have NEAL R. GROSS COURT REPORTERS AND TRANSCRIBERS 1716 14th STREET, N.W., SUITE 200 (202) 234-4433 WASHINGTON, D.C. 20009-4309 www.nealrgross.com

133 1 digital control systems control before you start 2 talking about all the little nuances.

3 Which I don't disagree with, they're all 4 there, but it's just a matter of how you structure it 5 to get people thinking in my view properly. Now, 6 that's my observation, that's my personal comment, and 7 that ---

8 MR. GIBSON: That's good, all feedback is 9 good.

10 CHAIR BROWN: And that's what we tried to 11 do in PTP 719, Reg Guide 1.152, 1.62, etcetera, 12 etcetera. Know what you're looking at overall so that 13 you know where to pay attention. That's just a 14 suggestion when you're going down the path for 15 whatever revisions you're going with. And I would 16 hope you would fix up that overall notional plant to 17 be a little bit more ---

18 MR. GIBSON: It's top of my list, Charlie, 19 to make that notional plant a little different.

20 CHAIR BROWN: You had safety A, and safety 21 B, and you showed them communicating back and forth, 22 and that's not a very good idea. Good way to shut 23 everything --- or compromise its ability to shut 24 everything down. Anyway, so that just was a good 25 place before we get into the human factors, because I NEAL R. GROSS COURT REPORTERS AND TRANSCRIBERS 1716 14th STREET, N.W., SUITE 200 (202) 234-4433 WASHINGTON, D.C. 20009-4309 www.nealrgross.com

134 1 hope you're going to address in human factors, when 2 you have to go to backup systems for human factors.

3 Shutting down the plant when your systems 4 fail totally in compromise. Is the answer to that yes 5 or no?

6 MS. PRESLEY: Maybe not at the level of 7 detail you wish, just because of the time constraints.

8 CHAIR BROWN: There's a big controversy on 9 whether using diverse software systems is a good 10 compromise, and you can through your switches, and 11 trips, and breakers, which is really ---

12 MR. GIBSON: So, really to answer that, 13 this is process oriented. So, when you do a hazard 14 analysis and HAZCADS, model the operator, you're going 15 to create a loss scenario, and one of the loss 16 scenarios is going to be you lose whatever. And now 17 the operator has to take action, and you're going to 18 evaluate that.

19 CHAIR BROWN: He's got a manual switch 20 somewhere, what does it do, and how does it get it 21 done? Because two wires going down to the contactor 22 is a lot different than another computer with quote 23 diverse software.

24 MR. GIBSON: That's right.

25 CHAIR BROWN: And I think that's the kind NEAL R. GROSS COURT REPORTERS AND TRANSCRIBERS 1716 14th STREET, N.W., SUITE 200 (202) 234-4433 WASHINGTON, D.C. 20009-4309 www.nealrgross.com

135 1 of thought process in the human factors area I think 2 needs to be addressed. I didn't really see that when 3 I --- I mean, I didn't read 369 pages in 10 hours1.157407e-4 days <br />0.00278 hours <br />1.653439e-5 weeks <br />3.805e-6 months <br />, it 4 just didn't work out very well for understanding. So, 5 kind of a thoughtful --- I have no problem with the 6 document system, in covering the other part of it.

7 MR. GIBSON: Good feedback.

8 CHAIR BROWN: So, I'm done with that part.

9 MR. GIBSON: Party on there, Mary.

10 MS. PRESLEY: Okay. So, we just got the 11 ---

12 CHAIR BROWN: No, I'm not done. The TAM 13 part, there's all kinds of stuff in your TAM cyber 14 security which is all kinds of good stuff, and you 15 finally got talking to data diodes part way through it 16 somewhere. That's actually the highest level of 17 protection you can have because it's an air gap. But 18 that's not in the preamble area. What's the 19 structure?

20 How do you structure protecting yourself 21 cyber wise for critical components? And the air gap 22 approach is being the best, and how do you deal with 23 others where you don't need it? It should have been 24 up in the front, and then lead in to how you address 25 in other areas. It's just how you approach doing it, NEAL R. GROSS COURT REPORTERS AND TRANSCRIBERS 1716 14th STREET, N.W., SUITE 200 (202) 234-4433 WASHINGTON, D.C. 20009-4309 www.nealrgross.com

136 1 and define it, and make it clear that an air gap ---

2 it doesn't help you on physical access.

3 People can come in and screw up your 4 software when they make mods, that's always been the 5 case whether it's hardware or software, but now we've 6 got the electronic path that complicates everything.

7 So, the emphasis on the highest quality down to the 8 lowest, how do you deal with it. Where do you use 9 software to protect your virus system, blah, blah, 10 blah, whatever, those are for other ways. Anyway, now 11 I'm done.

12 MS. PRESLEY: All right.

13 MR. GIBSON: Now you can start.

14 MS. PRESLEY: Okay. So, John just went 15 through HAZCADS, and now we're going to look at one of 16 the downstream processes, which is the HFAM, the human 17 factors assessment methodology, and it's a risk 18 informed approach for human factors engineering. And 19 here on the left you can see very simply what John was 20 talking about. We have these reliability targets that 21 come from HAZCADS, and they go into the human factors 22 engineering process.

23 And then the human factors engineering 24 process feeds back HSI design, but also feeds back on 25 function allocation, and task attributes as you go NEAL R. GROSS COURT REPORTERS AND TRANSCRIBERS 1716 14th STREET, N.W., SUITE 200 (202) 234-4433 WASHINGTON, D.C. 20009-4309 www.nealrgross.com

137 1 through the design loop. So, the risk informed 2 approach has the benefits of being graded, and it 3 allows us to really right size what's a human ability 4 versus (audio interference). I'm going to try to 5 power through some of this so you can get through all 6 of it.

7 So, these are the key activities that are 8 typical in a human factors engineering, an HFE 9 process. I'm not going to go through those 10 activities, and HFAM doesn't change the general 11 process. But what we do is make it more usable, and 12 accessible to the user, and we integrate it with the 13 systems engineering process.

14 So, some of the key features is where are 15 the touch points that the human factors process hits 16 with the systems engineering process, particularly the 17 EPRI DEG. How do you integrate the risk insights from 18 HAZCADS into the process? And then we provide a two 19 phased graded approach. The first phase looks at the 20 scope of the design based on the DEG.

21 And that allows you to allocate your 22 resources appropriately at the beginning of the 23 project, and pick the right tools that you need to do 24 the human factors engineering. Not every project 25 requires a dedicated HFE expert, but we do want to NEAL R. GROSS COURT REPORTERS AND TRANSCRIBERS 1716 14th STREET, N.W., SUITE 200 (202) 234-4433 WASHINGTON, D.C. 20009-4309 www.nealrgross.com

138 1 make sure that the right skills are applied at the 2 right time, the right level of detail for the design.

3 So, that's the first phase.

4 And then the second phase, once we get 5 more into the detail of the design, and have unsafe 6 actions to look at, we use the risk reduction targets, 7 or the reliability targets, that's more clear, from 8 HAZCADS. And that will tell us how much level of 9 effort we need to put into each UCA target in terms of 10 HFE design to protect against unsafe human actions.

11 CHAIR BROWN: So, straight to graded 12 approach, it's kind of a screening process in a way, 13 before you do the more detailed phase two type stuff, 14 is that the way I would read that?

15 MS. PRESLEY: You still have to go through 16 the whole process, but so you're not screening out.

17 What you're doing is trying to figure out what level 18 of detail you need to go to, so yeah.

19 CHAIR BROWN: I call that screening, 20 you're just screening what level of stuff needs to be 21 done. That's fine, I got it.

22 MS. PRESLEY: Yeah. And then the other, 23 I guess for me the holy grail, because I'm an HRA 24 background, we've been able to bring the HRA tools, 25 and the data, and the experience with HRA, and use it NEAL R. GROSS COURT REPORTERS AND TRANSCRIBERS 1716 14th STREET, N.W., SUITE 200 (202) 234-4433 WASHINGTON, D.C. 20009-4309 www.nealrgross.com

139 1 as a tool within the human factors process as part of 2 the treatment of important human actions to help come 3 up with to make sure --- to ensure that you're meeting 4 the reliability targets.

5 And then this helps on the PRA side, 6 because then there'll be consistency between the 7 design process, and what your design says a human can 8 do, and what your PRA credits. So, to do that --- so, 9 that's the HFAM side, and those are all the slides we 10 had on the actual HFAM piece. The next is the HRA 11 research that we're going and doing on the PRA side.

12 And this is important because currently 13 HFAM references the existing HRA methods, which are 14 pretty good, but they're not optimized or developed 15 for digital systems specifically. So, now we're going 16 through the process to understand what's different for 17 digital systems in the area of what data do you need 18 to collect as an analyst, what human failure modes you 19 might be susceptible to.

20 What new performance shaping factors you 21 might be susceptible to, what the level of 22 difficulties you might have, and we recognize that 23 digital is all over the map. So, it can be from very 24 focused modification of replicating an analog with 25 just a digital all the way to new reactors maybe NEAL R. GROSS COURT REPORTERS AND TRANSCRIBERS 1716 14th STREET, N.W., SUITE 200 (202) 234-4433 WASHINGTON, D.C. 20009-4309 www.nealrgross.com

140 1 remotely controlled, totally different control room 2 structure.

3 So, what we call digital from a human 4 perspective can be across this very broad range. And 5 so, we're taking a graded approach at our digital HRA 6 research. So, that's why you see design A, mini mods.

7 The humans will perform very similar to existing 8 plants you may have. And then our existing methods 9 and processes are totally applicable.

10 You may have plants that may have maybe 11 computerized procedures, or maybe more automation than 12 our traditional plants, and our existing methods are 13 pretty good, but may need some augmentation in those 14 areas. That's maybe design B. And then design C 15 would be these totally new and different concepts of 16 operation where our existing methods may not be so 17 adequate.

18 CHAIR BROWN: Do you try to address in the 19 digital systems the actual components that may be used 20 that the operator or the human has to execute with?

21 For instance, a lot of it is people use a push button 22 to do something, or you have a mouse and click, or you 23 have a touch screen that maybe doesn't respond when 24 the operator hits close, and it doesn't close. Touch 25 screens can be touchy.

NEAL R. GROSS COURT REPORTERS AND TRANSCRIBERS 1716 14th STREET, N.W., SUITE 200 (202) 234-4433 WASHINGTON, D.C. 20009-4309 www.nealrgross.com

141 1 MS. PRESLEY: Sure, those are ---

2 CHAIR BROWN: And some touch screens have 3 a sensitivity of --- and this is the way they're 4 designed, if your hand gets close you don't 5 necessarily even have to --- it happened to my son in 6 law in the car when he went to change his screen, he 7 didn't touch it, it just changed as he moved his hand 8 towards it. So, are you trying to take the new parts 9 that he has to deal with, and how that affects his 10 human actions?

11 MS. PRESLEY: Yes, we're looking at the 12 physical systems ---

13 CHAIR BROWN: Okay, thank you, that 14 answers my question.

15 MS. PRESLEY: And a really good example of 16 that particular thing is that we have some OE that 17 shows there was one design that they were looking at 18 touch screens, and the second checker to verify would 19 put their finger where they were looking, and that 20 would inadvertently activate what they were trying to 21 second check. So, those are definitely part of the 22 HRA.

23 But equally part of the HRA is how does it 24 change your concept of operations, or how you work 25 together as a team? So, if you're on a physical NEAL R. GROSS COURT REPORTERS AND TRANSCRIBERS 1716 14th STREET, N.W., SUITE 200 (202) 234-4433 WASHINGTON, D.C. 20009-4309 www.nealrgross.com

142 1 display board, it's easy for the shift supervisor to 2 see you are on this part of the board, so I know what 3 you're looking at. So, you have some situational 4 awareness of what people are doing.

5 You don't necessarily have that same 6 situational awareness, or that second checking 7 function maybe when people are just sitting behind 8 their screens. So, we're looking across the board at 9 how human performance changes when your interface 10 changes. All right. So, there's three general data 11 sets ---

12 DR. BLEY: We can't hear you out here, 13 Charlie.

14 CHAIR BROWN: It got fixed, don't worry 15 about it, something popped up on the screen, that's 16 all.

17 MS. PRESLEY: All right, so I'm going to 18 keep going. There's three major sources that we're 19 looking at for our initial evaluation of the HRA 20 stuff. And as you can imagine, because digital is a 21 broad set, we have a broad set of data, so we have 22 experimental data from places like INL and Halden.

23 Those look at broad range of scenario types and design 24 features, but they're small sample sets, so it's 25 largely qualitative data.

NEAL R. GROSS COURT REPORTERS AND TRANSCRIBERS 1716 14th STREET, N.W., SUITE 200 (202) 234-4433 WASHINGTON, D.C. 20009-4309 www.nealrgross.com

143 1 We have literature review, and OE review 2 from nuclear and non-nuclear sources, again, broad 3 range of design types, but they're not large 4 statistical data sets. And then we have training 5 simulation data of which we've collected the first set 6 from the Korean simulator studies. And 45000 data 7 points, great statistical data set, excites the nerds 8 like me, but it's on one specific design.

9 So, we have to question how generalizable 10 that is. So, we're trying to take these three types 11 of data sets, and synthesize them into lessons 12 learned, and pull that into our HRA methods. And then 13 the last slide on this particular piece is a special 14 note on human errors of commission, and this is very 15 similar to the systematic failures for software, 16 software common cause failures.

17 When you have humans interacting with a 18 system, and they have the ability to --- human 19 cognitive errors of commission are when humans do 20 something they shouldn't do, but they're probably 21 doing it because they have a good reason to do it.

22 So, maybe their procedures tell them to do it, or 23 their instrumentation is misleading, but they're doing 24 the wrong thing, or they're fighting against the 25 automation because they don't understand the NEAL R. GROSS COURT REPORTERS AND TRANSCRIBERS 1716 14th STREET, N.W., SUITE 200 (202) 234-4433 WASHINGTON, D.C. 20009-4309 www.nealrgross.com

144 1 automation.

2 So, in the PRA traditionally we don't take 3 an unbounded look for these types of errors. If we 4 have a specific cause to suspect we will include it in 5 the model, and one example are fire, we look for 6 spurious operations, or multiple spurious operations 7 where humans can be misled. But typically we don't 8 just go and search for these in an unbounded fashion.

9 But with automation, and more automation, 10 the conversation around errors of commission is 11 definitely increasing. We see an uptick in this, and 12 we see in the new standards, discussion of 13 incorporating it more heavily. So, we have to 14 question whether or not PRA process is the right place 15 to consider this. And from our perspective, STPA is 16 actually designed to look for these types of errors.

17 So, if we're looking for them 18 appropriately through the design process, then we 19 should be able to use that process as the right tool 20 to address errors of commission, and then only include 21 them in the PRA, again, when we have a very specific 22 cause. For instance if we were unable to mitigate one 23 that was, or design one out that we found through the 24 STPA design process.

25 So, that's the end of my human factors NEAL R. GROSS COURT REPORTERS AND TRANSCRIBERS 1716 14th STREET, N.W., SUITE 200 (202) 234-4433 WASHINGTON, D.C. 20009-4309 www.nealrgross.com

145 1 piece. I'm quicker than the boys. Any questions on 2 that? I know I didn't answer your question on the 3 backup.

4 CHAIR BROWN: No, you did fine. You said 5 that no, you're looking at the combination of the 6 things, and that's all you can do, appreciate that.

7 MS. PRESLEY: All right. Okay, so then 8 the last bit is how we look at digital systems in the 9 PRA. So, the life cycle of the design goes from 10 design, implementation, all the way through 11 operations, and configuration management. And I think 12 I have an animation. So, we've checked the box for 13 design and implementation in terms of consideration of 14 risk through HAZCADS and the associated downstream 15 processes.

16 And we did that through sensitivities, and 17 through matching our control measures with our risk 18 reduction targets, or our reliability targets. Now we 19 need to make sure we have a coherent approach for the 20 operations and configuration management. Basically 21 what I call your assessment PRA, or your living PRA 22 that you use after the fact.

23 And when I say coherent approach, I mean 24 you can't go through this rigorous design process and 25 say okay, I have a design that's acceptable, it's risk NEAL R. GROSS COURT REPORTERS AND TRANSCRIBERS 1716 14th STREET, N.W., SUITE 200 (202) 234-4433 WASHINGTON, D.C. 20009-4309 www.nealrgross.com

146 1 acceptable, we believe that this is a low risk design, 2 and it goes through all of that, and then you put in 3 maybe a conservative, or an arbitrary value in your 4 PRA that says sorry, we have a problem. So, we need 5 to make sure that the insights from the PRA on the 6 back end match the qualitative, all the work that went 7 into the design end so there's parity.

8 All right. So, what do we have on putting 9 it into the PRA? Right now, as you can see, a lot of 10 bubbles. We don't have a coherent process, or a 11 systematic process for including these elements in the 12 PRA. It's kind of all over the board, and especially 13 if you look at it internationally, what people model 14 in the PRA, and how it's modeled in the PRA, and what 15 data, or assumptions are used is really quite all over 16 the board.

17 So, what we're working on right now is a 18 first cut at how do you include digital systems in the 19 PRA model, and then what do you put in numerically as 20 well. So, our research is going to capture the 21 current state of knowledge, and we're really relying 22 heavily on the foundational data, qualitative and 23 quantitative, and that work that Matt's group has 24 done.

25 And then we're going to make sure it also NEAL R. GROSS COURT REPORTERS AND TRANSCRIBERS 1716 14th STREET, N.W., SUITE 200 (202) 234-4433 WASHINGTON, D.C. 20009-4309 www.nealrgross.com

147 1 matches the use cases, because all models are useful, 2 some models are useful.

3 MR. GIBSON: All models are wrong, some 4 are useful.

5 MS. PRESLEY: All models are wrong, some 6 are useful. But they need to match the use case, and 7 what you're trying to make a decision using the PRA 8 models. So, that's the piece we need to match up, is 9 the data, and how are you using your models, and what 10 kinds of decisions you're making with your models.

11 DR. BLEY: Mary?

12 MS. PRESLEY: Yes, sir.

13 DR. BLEY: It's Dennis Bley.

14 MS. PRESLEY: Hi Dennis.

15 DR. BLEY: Hi. I'm not sure it will help 16 you, but you ought to take a look at the research NRC 17 funded back 10 to 15 years ago on this area. None of 18 it came to real fruition, but you might find some 19 useful nuggets in that work.

20 MS. PRESLEY: Yeah, so part of this effort 21 is looking at existing references, including the stuff 22 that EPRI did I think around that same time frame 23 you're talking about. If you have specific 24 references, Dennis, I think I know what you're talking 25 about, but if you have specific ones, maybe offline I NEAL R. GROSS COURT REPORTERS AND TRANSCRIBERS 1716 14th STREET, N.W., SUITE 200 (202) 234-4433 WASHINGTON, D.C. 20009-4309 www.nealrgross.com

148 1 can ask you.

2 DR. BLEY: Yeah, that's fine, do that.

3 There was several projects at Brookhaven, one at 4 another lab, and then there was one at an outside 5 contractor that I'm aware of, and they all had some 6 aspects that you might find useful.

7 MS. PRESLEY: Okay, thank you so much for 8 that. All right, so our proposed approach, and again, 9 this is in progress, and still pretty early in 10 progress, is based on defining these use cases, and 11 then relooking at the data and existing guidance and 12 lessons learned from HAZCADS. And there's a couple of 13 things before I go into some of the detail I want to 14 make clear as ground rules for our research.

15 So, incorporation of the design into the 16 PRA has to be consistent with the insights of the 17 design process, we already talked about that one. It 18 has to be consistent with the overall PRA modeling 19 approach. Which means same sort of level of detail, 20 same sort of types of assumptions. So, we cannot be 21 --- we cannot have major mismatches, very large 22 conservatisms, or screening out things that we 23 shouldn't be screening out, or putting too much in.

24 Because the value of the PRA is that it 25 lets us look across systems and compare. So, we need NEAL R. GROSS COURT REPORTERS AND TRANSCRIBERS 1716 14th STREET, N.W., SUITE 200 (202) 234-4433 WASHINGTON, D.C. 20009-4309 www.nealrgross.com

149 1 to have parity in the modeling approach between 2 digital systems and how we deal with the rest of the 3 equipment, and human actions in the PRA. And then the 4 third is that you need to make sure you're continuing 5 to reflect the as built, as operated plant.

6 Okay, so the first piece is digital 7 systems should be modeled at a reasonable level of 8 detail. There's sometimes modelers get very 9 enthusiastic, and model in extreme levels of detail.

10 Practically this has issues with model complexity, 11 being able to verify the model, being able to run the 12 model. Second, model level should be consistent with 13 the boundary conditions of the data.

14 So, you can't go into the super, super 15 subcomponent level if your data is not collected at 16 that level. So, this has implications for when we 17 think about software. For example, when we talk about 18 reliability, we're going to talk about functional 19 reliability, and software shouldn't be separated from 20 the hardware, because software is implemented through 21 the hardware. And typically when we collect data, we 22 collect it at the functional level.

23 So, trying to take that functional level 24 stuff, and then decompose it artificially causes 25 issues, as you can imagine. The second piece of this NEAL R. GROSS COURT REPORTERS AND TRANSCRIBERS 1716 14th STREET, N.W., SUITE 200 (202) 234-4433 WASHINGTON, D.C. 20009-4309 www.nealrgross.com

150 1 proposed approach is based on the fundamental 2 assumption that our control methods that we've used 3 through HAZCADS and its downstream processes reduce 4 the risk to acceptably low levels. So, this is 5 important because we're making a qualitative statement 6 when we do the design process.

7 We're saying that I have reduced my risk 8 substantially, and now I'm in that low acceptable risk 9 region. So, while we may not have specific numbers, 10 we qualitatively are saying that we have made that 11 much of an impact on our risk when we apply certain 12 sets of control measures. And that's the piece that 13 needs to be coherent with whatever data or information 14 we put into the PRA.

15 And that's for both the functional 16 reliability, and the common cause failures. And 17 that's because what we've done on the design side, 18 that reflects our best estimate idea of what the 19 actual risk and importance of these actions are.

20 Okay, so what does that --- yes.

21 MEMBER DIMITRIJEVIC: Hi, this is Vesna 22 Dimitrijevic. So, you have here the chicken and egg 23 problem, right? I mean you use the PRA to design and 24 define your targets, and then you put in things in the 25 PRA, which will change your input in design.

NEAL R. GROSS COURT REPORTERS AND TRANSCRIBERS 1716 14th STREET, N.W., SUITE 200 (202) 234-4433 WASHINGTON, D.C. 20009-4309 www.nealrgross.com

151 1 MS. PRESLEY: Well, this is assessment 2 PRA, so this is after the design is done, and been 3 implemented. So, this doesn't feedback then --- this 4 might influence the next modification you might make, 5 but this doesn't ---

6 MEMBER DIMITRIJEVIC: Are we talking here 7 about existing plant, or the new designs?

8 MS. PRESLEY: Either way, this is after a 9 design has been complete and implemented. Whether 10 it's an existing reactor or a new reactor, at some 11 point you'll have finished the design, you'll have 12 implemented it into your plant, and now you'll have to 13 have a PRA that you can use to make your operational 14 decisions, or have on file for your safety case, or 15 whatever. But this is at the end of the life cycle 16 part of the operations.

17 MEMBER DIMITRIJEVIC: You guys are aware 18 that your targets are not relative, but absolute, and 19 so therefore they will work fine for today's industry.

20 But if you apply your lowest requirement target to the 21 increase of 10 to minus 6, that's not going to happen 22 in --- most of the new designs are coming with such 23 low numbers that increase of 10 to minus 6 will mean 24 thousand times increase in existing core damage.

25 So, I mean, and I was going to bring this NEAL R. GROSS COURT REPORTERS AND TRANSCRIBERS 1716 14th STREET, N.W., SUITE 200 (202) 234-4433 WASHINGTON, D.C. 20009-4309 www.nealrgross.com

152 1 in then, the relativity of these risk measures are ---

2 they have to be reexamined for the new designs, 3 because that's one of the big discussions, that we're 4 using absolute or the relative risk measures. But 5 another thing here where you have to have this process 6 of using these risk targets in design, and then trying 7 to incorporate some of the important things like human 8 actions back.

9 I mean, that can change totally the risk 10 targets. So, I have a lot of concerns about your 11 categories, and how that will work. And about this, 12 as I said, egg and chicken problem, so just want to 13 raise that.

14 MS. PRESLEY: Thank you.

15 MR. WEGLIAN: So, I just want to make it 16 clear that what Mary is talking about right now is 17 after the design is done, and it has been installed 18 into the plant, what does the PRA look like to assess 19 the system that is now installed in the plant? So, in 20 the slide she's going over right now, there's no 21 feedback into the system design anymore. Because not 22 only is the system design done, it's been installed.

23 This is when the new digital I&C system is 24 now part of the plant. And for a new reactor, this 25 would be when the plant is built, this is what they're NEAL R. GROSS COURT REPORTERS AND TRANSCRIBERS 1716 14th STREET, N.W., SUITE 200 (202) 234-4433 WASHINGTON, D.C. 20009-4309 www.nealrgross.com

153 1 going to have with them.

2 MEMBER DIMITRIJEVIC: Okay, I just pointed 3 out that this can change your targets, and I 4 understand that you're saying that. But these are 5 also maintenance, and ITAAC, the other things going 6 there that this impacts the importance, and that 7 importance can change totally.

8 MS. PRESLEY: So, I'm going to talk to 9 your second point in the next slide. But for your 10 first point on new reactors, and them having a 11 different risk profile, that's part of the research 12 that we're looking at in use cases, is how would that 13 change in an advanced reactor, and does that change, 14 and what does that look like? So, that's definitely 15 on our radar.

16 CHAIR BROWN: We need to keep moving, I 17 would like to get through your last slide.

18 MS. PRESLEY: All right. So, the 19 consequences of --- so, risk is likelihood times 20 consequence. So, the first piece is making sure the 21 consequence is captured in the model. And long story 22 short, you have a cause effect relationship for the 23 potential unsafe action. And if that potential unsafe 24 action survives to the final design, you need to make 25 sure that the consequences associated with that are NEAL R. GROSS COURT REPORTERS AND TRANSCRIBERS 1716 14th STREET, N.W., SUITE 200 (202) 234-4433 WASHINGTON, D.C. 20009-4309 www.nealrgross.com

154 1 incorporated in the PRA somehow.

2 So either if it has a non-unique 3 consequence, you can map it to an existing portion of 4 the model, an existing basic event. But if it has 5 unique consequences, for example a new common cause 6 failure grouping that you didn't have in your model 7 before, you need to make sure that that consequence is 8 reflected in your model. And that failure can be 9 hardware, software, or human error.

10 These potential unsafe actions need to be 11 incorporated logically from a consequence perspective 12 into the model. And this is the piece, Vesna, the 13 logic has to be reassessed as the PRA evolves to 14 reflect the as built, as operated plant. So, as you 15 change your human actions, or change something in your 16 PRA, you may see a difference in consequence.

17 And if you have impacted your risk 18 reliability targets, then you have to go back and say 19 --- well, if your targets have increased you have to 20 go back and say well, are my control measures still 21 adequate? So, you do definitely need to make sure 22 you're looking at that consequence, and that those 23 assumptions that you made remain valid. So, the 24 second piece is the likelihood.

25 And of the two, consequence or likelihood, NEAL R. GROSS COURT REPORTERS AND TRANSCRIBERS 1716 14th STREET, N.W., SUITE 200 (202) 234-4433 WASHINGTON, D.C. 20009-4309 www.nealrgross.com

155 1 consequence is more important. Because likelihood, we 2 have kind of two ways we can look at likelihood, and 3 our current research is investigating both approaches.

4 Likely one will be useful in some areas, and the other 5 will be useful in others, and that's the use case 6 piece.

7 But the first way is to create some 8 generic failure rates based on the available data, and 9 the qualitative insights that we have gained through 10 the I&C research. And the second way is to actually 11 not quantify this at all, but similar to the way 12 HAZCADS approached it, keep the events in the model, 13 and use sensitivity studies to understand if risk has 14 changed.

15 And then when risk does change, to see if 16 our control measures are still adequate. So, there's 17 two ways we can look at likelihood, and that's the 18 direction that our research is going right now. But 19 I want to emphasize in the long term, industry needs 20 to start gathering data, and we need to do that in a 21 way that is consistent, and that the boundary 22 conditions of the data match what we put into the 23 model.

24 So, we have a way to collect data, 25 transfer that data directly into the model. So, NEAL R. GROSS COURT REPORTERS AND TRANSCRIBERS 1716 14th STREET, N.W., SUITE 200 (202) 234-4433 WASHINGTON, D.C. 20009-4309 www.nealrgross.com

156 1 uncertainty, I'm sure this has come up, completeness 2 uncertainty certainly has come up. Uncertainty is not 3 new, PRA does not create uncertainty, PRA exposes 4 uncertainty, and digital is not the only place where 5 uncertainty exists. So, we want to just recognize 6 that up front.

7 And we also want to recognize that 8 conservative treatment is not the answer to 9 uncertainty all the time. And in fact if 10 inappropriately applied it can mask risk insights.

11 So, in this particular place we recognize that we 12 don't do risk based decision making, we do risk 13 informed decision making, and that constitutes the 14 other pertinent information.

15 And the two particularly important pieces 16 here are performance monitoring, so that's the data 17 collection piece. So, when we make certain 18 assumptions if we put a number in the model, or when 19 we say a control measure is adequate, but we need to 20 monitor the data in the OE to make sure that those 21 assumptions that we make hold, and that if they don't, 22 we have a way to respond to them.

23 The second piece is defense in depth and 24 safety margins. So, this is where I know, Matt, John 25 said flex, and you reacted. So, we actually have many NEAL R. GROSS COURT REPORTERS AND TRANSCRIBERS 1716 14th STREET, N.W., SUITE 200 (202) 234-4433 WASHINGTON, D.C. 20009-4309 www.nealrgross.com

157 1 layers of defense in depth built not the process.

2 From the DRAM we have first of all you design it out.

3 If you can't, can you protect, can you detect, can you 4 respond to a fault? And then if all of those fail you 5 have multiple functions, you have diversity in your 6 functions.

7 And then if that fails, you have FLEX.

8 So, we're looking at accrediting defense in depth 9 across the board, not just at the small level. So, 10 those are the pieces that help flesh out the picture 11 of how digital uncertainty is considered in the PRA 12 process. Yeah, I can skip this one. So, the last 13 slide is about looking at the whole elephant.

14 And just within EPRI getting our I&C guys, 15 our human factor, our cyber, our PRA, our HRA all talk 16 in the same language and with the same vision required 17 a real cultural shift. And we ran into all these 18 things where digital is different because, you know?

19 And every time we did that, we're like well, no, it's 20 not really different, we're coming up against the same 21 issues we have in other areas.

22 But maybe it's more pronounced, and so 23 we've refined our methods and tools. But it's still 24 the same overall process, it's still the same types of 25 uncertainties, and complexities that we already deal NEAL R. GROSS COURT REPORTERS AND TRANSCRIBERS 1716 14th STREET, N.W., SUITE 200 (202) 234-4433 WASHINGTON, D.C. 20009-4309 www.nealrgross.com

158 1 with in risk informed world in general. So, that's 2 it.

3 MEMBER DIMITRIJEVIC: Mary.

4 MS. PRESLEY: Yes, ma'am.

5 MEMBER DIMITRIJEVIC: All right, so a 6 couple things, I just want to make a couple wise guy 7 remarks. So, one of the things because you brought up 8 Mark bringing up the FLEX equipment, and he had a 9 really good question, are you concentrating on 10 mitigation, or just prevention? And he didn't say 11 prevention, there is a better word for that. But it 12 is important in this process if you're using the PRA, 13 you are not using initiating events, the fault trees, 14 which are already there, integrated in the PRA.

15 And the digital I&C will play a lot of 16 function in the preventing actual events, not just 17 mitigating them. And so, maybe that should be 18 considered in one of those processes. The other 19 thing, which is why I said the wise guy remark, you 20 said that this is a risk informed, and not risk based 21 process, so I mean you can tell to Charlie how does 22 safety classification impact the total risk 23 classification.

24 Because there is not any connection 25 between those two. So, I just thought that Charlie NEAL R. GROSS COURT REPORTERS AND TRANSCRIBERS 1716 14th STREET, N.W., SUITE 200 (202) 234-4433 WASHINGTON, D.C. 20009-4309 www.nealrgross.com

159 1 would be thrilled to know that. Okay.

2 MS. PRESLEY: All right. Well, I will let 3 John answer how initiating events are touched.

4 MR. WEGLIAN: So, I didn't go over it in 5 this. There is another part of HAZCADS that looks 6 explicitly at UCAs that can cause a plant trip. So, 7 as an initiating event. It's a lot more complicated 8 than what I've shown before. It's still tied to a 9 delta CDF and a delta LERF, but we bias the initiating 10 event frequency. You can't set it at a frequency to 11 true in the same way that we do the probabilities, so 12 it has to be a different approach.

13 So, we bump up the frequency based on the 14 plant data, and in expectation of how many additional 15 trips you would have from the UCA. I can go offline 16 if you want some more information on that. But we do 17 take that into account, that if it could be an 18 initiating event, how that can be an impact on the 19 model.

20 MEMBER DIMITRIJEVIC: You know a lot of 21 the initiating events have fault trees associated with 22 them, but they're already there, integrated in the PRA 23 model. So, I mean that's why I just wanted to say 24 something which you may have forgotten.

25 MR. WEGLIAN: So, most of what --- well, NEAL R. GROSS COURT REPORTERS AND TRANSCRIBERS 1716 14th STREET, N.W., SUITE 200 (202) 234-4433 WASHINGTON, D.C. 20009-4309 www.nealrgross.com

160 1 yeah, if you had a failure of main feed water for 2 example, that might have a developed fault tree. But 3 if it's just a generic plant scram, if you're looking 4 at the RPS system and it inserts a scram, most plants 5 that's a single basic event based on plant history.

6 And so we handle both approaches.

7 Actually it's easier if it's a developed 8 fault tree, because there's usually a basic event you 9 can set to true, and you don't have to attack the 10 frequency directly.

11 MEMBER DIMITRIJEVIC: That's true.

12 CHAIR BROWN: We've got two minutes.

13 MR. GIBSON: We're done.

14 CHAIR BROWN: You're done, okay. At this 15 time, before I go around and ask for comments from the 16 participants here, I'm automatically connected already 17 to the phone. Is there anybody on the phone line that 18 has been listening that would like to make a comment 19 in the public, or on Teams? Hearing none, I will call 20 one last query to the members here. Anything else?

21 MEMBER SUNSERI: I thought it was a good 22 presentation. I mean, I learned something, I wasn't 23 trying to bash, or flex, I was just pointing out that 24 that's way down in the chain of events. But I thought 25 it was interesting.

NEAL R. GROSS COURT REPORTERS AND TRANSCRIBERS 1716 14th STREET, N.W., SUITE 200 (202) 234-4433 WASHINGTON, D.C. 20009-4309 www.nealrgross.com

161 1 CHAIR BROWN: Okay, good, thank you.

2 Greg, and Vicki, you all have any comments? I think 3 Dennis is gone.

4 MEMBER HALNON: Yeah, I'm good Charlie.

5 CHAIR BROWN: Okay. Vicki?

6 MEMBER BIER: I agree with Matt, it was a 7 good presentation. I see a lot of pluses and some 8 concerns, but not show stopping ones. So, it sounds 9 good, I appreciate the opportunity to learn about it.

10 CHAIR BROWN: Thank you. I don't think 11 I've missed anything administratively, I haven't done 12 this in a while. Okay, one more just closing comment 13 from me is I really do appreciate you all coming in, 14 and taking your time to present this to us. I thought 15 it was very comprehensive. And providing the other 16 documents to give us a little bit of feel for how 17 you're incorporating this, and what you're doing with 18 it I thought was valuable.

19 And it's good for us to know this as we're 20 working with the staff and everything, so I do want to 21 thank you very much for that. And I forgot to ask 22 Chris, did you want anything else? You're good, okay.

23 You looked like you wanted to say something.

24 MR. GIBSON: No, I was going to say it was 25 our pleasure, so thanks a lot for having us.

NEAL R. GROSS COURT REPORTERS AND TRANSCRIBERS 1716 14th STREET, N.W., SUITE 200 (202) 234-4433 WASHINGTON, D.C. 20009-4309 www.nealrgross.com

162 1 CHAIR BROWN: All right, with that, this 2 meeting is adjourned exactly on time for once.

3 (Whereupon, the above-entitled matter went 4 off the record at 12:30 p.m.)

5 6

7 8

9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 NEAL R. GROSS COURT REPORTERS AND TRANSCRIBERS 1716 14th STREET, N.W., SUITE 200 (202) 234-4433 WASHINGTON, D.C. 20009-4309 www.nealrgross.com

EPRI Digital I&C Perspectives EPRI Digital Systems Engineering Framework and Related R&D Matt Gibson-EPRI Mary R. Presley-EPRI John Weglian-EPRI Technical Executive Technical Executive Principal Technical Leader Joint Digital I&C and Fuels, Materials and Structures ACRS Subcommittee Meeting on EPRI Digital I&C Perspectives June 22, 2023 www.epri.com © 2023 Electric Power Research Institute, Inc. All rights reserved.

Looking at the Whole Elephant Systematic Failures Human Interactions Automation Cyber Security Hardware Reliability Functional Reliability How to address design requirements, risks, and hazards from various sources in one integrated process 2 © 2023 Electric Power Research Institute, Inc. All rights reserved.

II. Introduction to the Overall EPRI Digital Systems Engineering R&D Strategy 3 © 2023 Electric Power Research Institute, Inc. All rights reserved.

How we got Here- A Development History 2013 2016 2018 2020 2023+

Hazard Analysis Assuring Safety and HAZCADS: Hazards and HAZCADS: Hazards and HFAM- Human Factors Methods for Digital Dependability of Digital Consequences Analysis Consequences Analysis Analysis Methodology Instrumentation and Instrumentation and for Digital Systems R1 for Digital Systems R0 Rev. 0 Control Systems Control Systems 3002016698 3002012755 3002018392 3002000509 3002005326 Cyber Hazards Analysis Analysis of Hazard DRAM: Digital EMCAM: EMC Analysis Risk Methodology, Models for Cyber Reliability Analysis Methodology Rev. 0 Phase II: A Risk Security, Phase I Methodology Rev. 0 3002023743 Informed Approach 3002004995 3002018387 4Q 2023 3002004997 Cyber Security TAM: Cyber Security TAM R1 Cyber Security 3002012752 Vulnerability Procurement Identification and Cyber Procurement Methodology, Rev. 1 In Progress Mitigation R0 Guide: R2 3002001824 3002008023 3002012753 Systems Engineering Digital Engineering Cyber Security Program Digital Instrumentation Safety Integrity Level Process: Methods and Tools Guide: Decision Making Guide R1: Risk and Control Design (SIL) Efficacy for Nuclear for Digital Instrumentation Using Systems Informed/ Performance Guide (DDG) Power and Control Projects Engineering (DEG) 3002018395 3002002989 3002008018 3002011817 3002011816 4Q 2023 4 © 2023 Electric Power Research Institute, Inc. All rights reserved.

EPRIs Digital Framework Elements EPRIs high-quality engineering process uses the same modern methods and international standards used in other safety related industries to reduce implementation cost Use the same proven design and supply chain structures that non-Utilize Industry nuclear safety related industries use (IEC-61508/61511/62443).

Standards This leverages the economies-of-scale achieved in other Capable Workforce industries.

Use of a modern, high performance, single engineering process Use of Systems that leverages systems engineering in the transition to team-Engineering based engineering for conception, design, and implementation(IEC-15288,IEC-15289, IEC-12207,STPA).

Risk Informed Making effective engineering decisions via hazards and risk analysis to integrate all digital engineering topics into a single Engineering engineering process. (STPA,FTA)

Modern Methods to Support Nuclear Fleet Sustainability and Advanced Reactor Design 5 © 2023 Electric Power Research Institute, Inc. All rights reserved.

Policy Level vs. Implementation Level Activities Policy Level Regulation and Company Policy Stable:

Objective Rarely Changes Objective Criteria Changes Slowly: Objective Objective Criteria Based on Performance Objectives Criteria Criteria Digital Functional Cyber Rapidly Evolves: EMC HFE Reliability Security Changes Often, Based on OE Requirements Requirements Requirements Requirements Implementation Level DEG / HAZCADS / DRAM / TAM / HFAM/ EMCAM/STAP/ IEC-61508 via Industry Standard Procedures EPRI Products are Used at the Implementation Level (what you actually do)

Performance Objectives provide the Interface between Policy and Implementation. Supports a safety case argument.

6 © 2023 Electric Power Research Institute, Inc. All rights reserved.

Safety Integrity Level (SIL) efficacy for Nuclear Power EPRI research on field failure data from SIL certified logic solvers revealed no platform level Software Common Cause Failures (SCCF) after over 2 billion combined hours of operation for IEC-61508 SIL certified PLCs (3002011817)

Indicates that using existing SIL certifications, at the platform level, has a high efficacy for use as surrogates for some existing design and review processes.

Leveraged for NEI 17-06/RG-1.250 and NEI 20-07 in US Correlates well with EPRI review of global OE (Korea, France, China, etc.) that indicates:

- Safety related software is no more problematic than other CCF contributors when subjected to deliberate safety and reliability design processes. Digital Reliability Layers

- There have been no events where diverse platforms would have been effective in protecting against SCCF 7 © 2023 Electric Power Research Institute, Inc. All rights reserved.

Reliability Layers Functional Reliability, which includes software, hardware, and human elements should be segmented by layers:

platform, integration, and application.

Then Considered Separately Less Mature Applications Functional Reliability Integration Baseline Platform More Mature Production Data and OE Quantity and Quality Dive Maturity and Reliability 8 © 2023 Electric Power Research Institute, Inc. All rights reserved.

Digital Reliability Model Reliability Axioms Facility Level Reliability Common Cause Failures must first have a failure or systematic error (including emergent behavior)

System Level Reliability Achieved Systematic and Random Reliability is inversely proportional to the likelihood of a CCF Component Level Reliability Reliability is best achieved via a cost, likelihood, and consequence equilibrium Net Functional Reliability is the prime objective ( at the system/facility level)

Random Systematic Failure Error Focused Models can provide actionable reliability Insights ( FTA, STPA, Relationship Sets)

Functional Reliability is an Equipment Level Challenge Functional Reliability is a Lifecycle Challenge 9 © 2023 Electric Power Research Institute, Inc. All rights reserved.

III. Systems Engineering- A Modern Approach to the Technology Life Cycle 10 © 2023 Electric Power Research Institute, Inc. All rights reserved.

Systems Engineering - Discovery, Iterations & Refinements Systems Thinking is the key skill required to use Systems Engineering It is multidisciplinary and requires teamwork Requires ability to see system relationships in a holistic manner Ability to communicate across disciplines Ability to understand complexity 11 © 2023 Electric Power Research Institute, Inc. All rights reserved.

Digital Engineering Guide (DEG) - Systems Engineering Lifecycle Phase Based using Perform/Confirm method Synthesis &

Validation Scoping Iterates through the SE process for each phase in a non-linear fashion. Synthesized from the IEC-15288 Framework Includes the topical guidance for each phase Systems Iteratively converges on the final synthesized design Engineering The DEG Addresses:

Hazard Analysis &

Analysis Requirements

- Division of Responsibility (DOR)

- Requirements Development

- Hazard Analysis , Reliability Analysis (including CCF) and Mitigations

- Architecture Development including Relationship Sets Functional Allocation

- Functional Allocation ( including Human/System Allocation)

- Verification and Validation (V&V)

- Testing

- Transition to the O&M Phase 12 © 2023 Electric Power Research Institute, Inc. All rights reserved.

SE Process Example Predetermined by the plant and plant system designs

1. Bounding Technical Reqts.
2. Decide Parameter Values
3. Bench Evaluation
4. Refine Parameter Values
5. Specify Configuration Determined by resp. engineer Requirements Engineering System Analysis & Control
  • Analyze System Goals and Environment
  • Trade Space Analysis
1. Bounding Tech. Reqts.
  • Identify Functional Requirements
  • Define/Refine Performance and Design
  • Risk Management
  • Configuration Management Configure, Install & Validate Constraint Requirements
  • Data Management
  • Performance Measurement Requirements Loop
2. Decide Parameter Values Functional Analysis & Allocation
  • Decompose to Lower Level Functions
5. Specify Configuration
  • Allocate Performance and Other Limiting Requirements to All Functional Levels
  • Define/Refine Functional Interfaces (Internal & External)
  • Define/Refine/Integrate Functional Architecture Also Consider:
4. Refine Parameter Values Design Loop Design Synthesis
1. Desktop Simulation
  • Transform Architecture (Functional to Physical)

Verification

  • Define Alternative System Concepts, Configuration Items and System Elements 3. Bench Evaluation 2. Reusable Style Guide
  • Select Preferred Solution
3. Model Number Optimization
  • Define/Refine Physical Interfaces (Internal & External)

Systems Engineering Process 13 © 2023 Electric Power Research Institute, Inc. All rights reserved.

Choosing a Hazard Analysis Method Hazards and Consequences Analysis for Digital Systems (HAZCADS) evolved from analysis, experimentation, and testing to find an effective methods or combination of methods that would provide usable hazard insights for the design process Comparative analysis and testing concluded:

- STPA showed the most promise in terms of a holistic approach to diagnosing the systematic errors of a nuclear plant and the related controls.

- While STPA is strong in many areas it:

does not diagnose component level reliability failures does not prioritize or rank the importance of the identified UCAs Is not a design synthesis tool but rather a design diagnostic tool EPRI has integrated STPA with a Systems Engineering based design process that achieves design synthesis that can then be analyzed by STPA via HAZCADS.

HAZCADS combines the results of STPA and FTA to provide risk-informed prioritization of UCAs and the associated loss scenarios.

Loss scenarios are limited to topical areas of interest which reduces combinatorial growth. This insight is combined with reliability analysis and relationship analysis to fully develop control methods that address each loss scenario.

14 © 2023 Electric Power Research Institute, Inc. All rights reserved.

HAZCADS Basis: Hazard Analysis via STPA IEC Std. 61508-1 (2010) requires a determination of hazards of the Equipment Under Control (EUC) and the EUC control system, and consideration shall be given to the elimination or reduction of the hazards.

For the determination of hazards and their causes, HAZCADS and DRAM/TAM/etc. apply the four-part Systems Theoretic Process Analysis (STPA). Insights from this diagnostic process are pipeline back to the DEG for aggregation and requirements updates.

STPA Step One STPA Step Two STPA Step Three STPA Step Four +

Identify Identify Identify Identify Stakeholder Losses System Hazards Unsafe Control Actions Loss Scenarios + Reliability Insights These steps are in HAZCADS DRAM/TAM/HFAM/EMCAM 15 © 2023 Electric Power Research Institute, Inc. All rights reserved.

Systems and STPA Notional 1oo2 RPS Concept STPA Control Structure Indicator A1 Indicator A2 HSI A HS A

Digital System Operator Under Analysis Sensor A1 PLC A Sensor DO DO A2 RPS UPS RTB M/G 1 UV 1 X UPS Manual Automatic 2 UV RTB 2 Trip Trip RCS DSS Power Supply DSS Conditions X

Process X Variables CRDM CRD Breakers DSS Power Reactor Y

Supply Deenergize Y

Deenergize Power UPS RTB RCS 3 UV 3 Control Rod Pressure Single Rod Power Supplies Acronyms UPS RTB M/G RPS - Reactor Protective System 4 UV 4 Y RCS - Reactor Coolant System Sensor Acronyms DSS - Diverse Scram System B1 DSS - Diverse Scram System CRD Motors DO DO RTB - Reactor Trip Breaker M/G - Motor Generator CRD - Control Rod Drive Sensor PLC B HS - Handswitch B2 PLC - Programmable Logic Controller UPS - Uninterruptible Power Supply UV - Undervoltage Trip Indicator B1 Indicator B2 HSI B HS B

CRDM - Control Rod Drive Mechanism Control Rods The STPA Control Structure is a Diagnostic Model 16 © 2023 Electric Power Research Institute, Inc. All rights reserved.

Example of Test Scenarios Real Event 17 © 2023 Electric Power Research Institute, Inc. All rights reserved.

Turbine Control System Loss of Cooling Detection Old System Logic Problem with Existing TCS Design:

The existing TCS design contained a single point vulnerability Failure of a single cooling flow transmitter could cause a turbine trip 18 © 2023 Electric Power Research Institute, Inc. All rights reserved.

Turbine Control System Loss of Cooling Detection New System Proposed Solution Based on System Requirements: Logic Replace the single flow differential pressure transmitter with three differential pressure transmitters providing input to a 2oo3 coincidence trip logic The digital controller will detect and automatically remove a faulted instrument from the logic The logic is designed to identify a faulted instrument by measuring the output either high or low outside the calibrated range When one instrument is faulted, it is automatically bypassed, changing the voter logic to use the remaining two instruments If a second instrument is faulted, it is also automatically bypassed, and the voter logic uses the remaining valid instrument Finally, if all three instruments are faulted (e.g., all sensors out of range), the logic is designed to send a shutdown signal (turbine trip) 19 © 2023 Electric Power Research Institute, Inc. All rights reserved.

Turbine Control System Loss of Cooling Detection New System The issue: Logic The flow transmitters have a range of 0 - 600 GPM The high out-of-range instrument setpoint for the DP transmitters corresponds to 612 GPM.

Normal stator cooling flow is approximately 550 GPM with one pump in service Two stator cooling water pumps exist, only one is running at a given time The stator cooling water pumps are routinely swapped during power operation such that wear on the pumps is even To swap the pumps, the in-service pump remains on momentarily while the out-of-service pump is started When both pumps are in service, the stator cooling flow routinely exceeds 612 GPM 20 © 2023 Electric Power Research Institute, Inc. All rights reserved.

Turbine Control System Loss of Cooling Detection New System The result: Logic The design team did not consider both stator cooling water pumps running simultaneously when developing system requirements During the first stator cooling water pump swap, all three of the stator cooling water flow transmitters simultaneously over-ranged The design deficiency based on inadequate requirements resulted in an unanticipated turbine protection system behavior that caused a main turbine trip and subsequent automatic reactor trip.

The problem was found during HAZCADS blind study tests.

21 © 2023 Electric Power Research Institute, Inc. All rights reserved.

Preliminary I&C OE Research Data-2023 Failure Initiators by Categories - All Events Preliminary 2023 results indicate Hardware 58%

failure initiator statistics (~1200 Software 18%

OE records reviewed):

Hardware 58% Connection 13%

Software 18% Maintenance Error 4%

Bad/Loose Connections 13%

Human Error 8% Foreign Material 4%

Foreign Material 4% Engineer Error 1%

Procedure Error 1%

Operator Error 1%

0% 25% 50% 75% 100%

22 © 2023 Electric Power Research Institute, Inc. All rights reserved.

Preliminary I&C OE Research Data-2023 Breakdown of Software Initiated Events 100%

Approximately 18% of digital I&C events were initiated by software.

A breakdown of software-initiated failures by 75%

software classification is provided below and 56%

in the graph to the right: 50%

Application Software: 56%

Configurable Parameter: 28% 28%

25%

Firmware: 14% 14%

Operating System Software: 2% 2%

0%

Application Software Configurable Parameter Firmware Operating System Software 23 © 2023 Electric Power Research Institute, Inc. All rights reserved.

Preliminary I&C OE Research Data-2023 Failures Resulting in a CCF 100%

The table on the right provides a preliminary 75%

picture of I&C failures resulting in a hardware or software CCF (loss of redundancy).

A breakdown of software CCFs is provided 50%

below (out of ~1200 OE events):

Manufacturer Software Defect: 5 Broadcast Storm: 3 25%

Over-ranged Transmitters: 3 Incorrect Configurable Parameter: 1 0%

2% 1%

Hardware CCF Software CCF 24 © 2023 Electric Power Research Institute, Inc. All rights reserved.

IV. EPRI Digital Systems Engineering Framework 25 © 2023 Electric Power Research Institute, Inc. All rights reserved.

Digital Systems Engineering Framework Components EPRI ID Title Description 3002011816 Digital Engineering Guide: Decision Making Using Core Systems Engineering method Synthesized from IEC-15288, IEC-12207, and Systems Engineering (DEG) IEC 15298 3002016698 HAZCADS: Hazards and Consequences Analysis for Risk Informed Digital Hazards Analysis using STPA and FTA. Implements Process Digital Systems - Revision 1 Hazards Analysis (PHA)/Layers of Protection analysis (LOPA) for IEC-61511 3002018387 DRAM: Digital Reliability Analysis Methodology Random and Systematic reliability analysis. Synthesized from IEC-61508 and identifies Loss Scenarios and control measures forms part of LOPA 3002012752 Cyber Security Technical Assessment Methodology: Technical cyber assessment method. Identifies Exploit Sequences and develops Risk Informed Exploit Sequence Identification and the associated control measures Mitigation, Revision 1(TAM) 3002018392 HFAM - Human Factors Analysis Methodology for Integration of HFE and HRA to risk-inform HFE. Evaluates and scores HFE designs Digital Systems: A Risk-Informed Approach to Human on a task basis with HRA tool sets.

Factors Engineering 3002023438 Digital Systems Engineering: Digital I&C Lifecycle Provide guidance on the overall system lifecycle and provide detailed guidance Strategy Guide on elements of IEC-15288 not covered by the DEG 3002015755 Digital Systems Engineering: Configuration CM guidance for digital system. Develops the strategy and methods to identify Management Guideline and manage hardware and software configuration items.

3002015758 Digital Systems Engineering: Requirements Provides guidance on engineering actionable, bounded, and testable Engineering Guideline requirements 3002028391 Digital Systems Engineering: Test Strategies and Provide guidance on testing digital components and systems Methods 3002026367 Digital Systems Engineering: Network Design Guide Provides Guidance on wired and wireless network design via detailed use cases (Fall 2023) 3002023743 EMCAM: Electromagnetic Compatibility Assessment Provides a Risk informed and Graded approach to EMI/RFI Methodology (Fall 2023)

TBD (2024) DMG: Digital Maintenance and Management Guide O&M Phase Guide on maintenance of digital equipment (Spring 2024) 26 © 2023 Electric Power Research Institute, Inc. All rights reserved.

RRT & UCA DEG -Synthesizes the Systems Engineering framework HAZCADS DRAM from IEC-15288. Includes all relevant Lifecycle topics.

IEC-61511 IEC-61508 Takes strategic input from the Lifecycle guide control methods HAZCADS -Uses STPA/FTA to identify hazards and associated UCA . FTA and Risk Matrices develop a Risk Reduction Target (RRT) which informs the downstream TAM IEC-62443 processes. Implements a PHA/LOPA from IEC-61511.

Strategies Lifecycle DRAM - Identifies Hardware and Software reliability DEG Guide vulnerabilities and develops loss scenarios. Develops and Scores protect, detect , and respond/recover IEC-61511 HFAM control methods using the RRT IEC-15288 Design Elements TAM -Identifies cyber security vulnerability classes.

IEC-15289 Develops Exploit Sequences. Develops and Scores IEC-12207 protect, detect , and respond/ recover control IEC-62443 control methods EMCAM methods using the RRT Design Description HFAM - Develops human actions and interfaces.

Design Elements Identifies and scores Human Reliability using the RRT Requirements EMCAM - Identifies EMC vulnerability classes.

Configuration Network Management Engineering Testing Guide Develops and scores protect, detect , and respond/

Design Guide Guide Guide recover control methods using the RRT RRT= Risk Reduction Target STPA=System Theoretic Process Analysis LOPA= Layers of Protection Analysis UCA= Unsafe Control Action FTA= Fault Tree Analysis EMC= Electromagnetic Compatibility 27 © 2023 Electric Power Research Institute, Inc. All rights reserved.

Use of Models for Engineering within the Framework The Digital Engineering Framework Currently leverages seven distinct models:

Model Question to be Answered Systems Engineering What are the key systems elements, the functional allocation of those elements, and what is the reliability of those elements? (DEG)

Fault Trees What are the Risk Sensitivities within a Dependency Scope? (HAZCADS,PRA)

STPA What are the Systematic Hazards and Pathways? (HAZCADS, DRAM, TAM, HFAM,EMCAM)

Relationship sets What are the system element dependencies and degree of independence across multiple relationships? (DEG)

HRA What is the reliability of Human Actions? (HFAM)

Exploit Sequences What are the exploit objectives, pathways to those objectives, and the method of exploit? (TAM)

Reliability Analysis What are the failure frequencies that impact Probability of Failure on Demand-PFD? (DRAM)

EPRI continues to leverage or develop additional models as the questions become better defined.

Performance based design requires the design questions to be defined and bounded.

To be useful, a model must answer a key question 28 © 2023 Electric Power Research Institute, Inc. All rights reserved.

Relationship Sets Relationship sets are an architecture view and contain all system elements scoped within the new design or design change.

There four of system elements Functional Hardware Software Programmatic Human Spatial Equipment Under Control There are five relationship set types:

Functional Hardware Element Connectivity Software Element Spatial Acquisition Human Element Programmatic Equipment Under Control Acquisition Connectivity Relationship Sets Models the Relationship Between System Elements 29 © 2023 Electric Power Research Institute, Inc. All rights reserved.

DEG HAZCADS Top Event 5 Operator(s) 6 7 (Pathway Five)

Stakeholder 1 Needs Delta Risk Event Event Event Losses, Controller(s) Calculations Hazards Event UCA Event UCA Control Actions UCAs Actuator(s) Sensor(s)

Event UCA Event UCA Set 2 Requirements UCA Set Controlled Process(es) Event Event Event Event Control Structure Unsafe Control Action UCAs and RRTs I&C 3 Design Functional 9 8 11 Programmatic Control Loss Spatial Method Scenario 4 10 System Elements Allocations Control Loss Method Scenario Hardware Element Software Element Acquisition Control Loss Plant Method Human Element Scenario System(s)

Design Equipment Under Control Connectivity Relationship Sets Downstream Processes 30 © 2023 Electric Power Research Institute, Inc. All rights reserved.

Workflow- Conceptual Phase Diagnostic Process to Identify Digital Hazards & Risk Sensitivities and Refine Requirements Models System and Plant level Hazards and Identifies Hardware, Software, and Human Reliability criticality (Risk Sensitivity) Vulnerabilities and Mitigations associated with Hazards HAZCADS DRAM STPA/PRA IEC-61508/61511 List of Hazards and Control Methods and Risk Sensitivity (RRT) Revised Requirements Iterates until requirements are complete Conceptual Design and hazards addressed Interface Analysis TAM/HFAM/

Function Analysis/Allocation EMCAM Relationship Sets (Architecture Model)

DEG Design Synthesis activities - Concept Phase On to Detailed Design Phase 31 © 2023 Electric Power Research Institute, Inc. All rights reserved.

US DEG Implementation Procedure IP-ENG-001 (Standard Design Process)- Main IP-ENG-001 Procedure (February Process Phase 2017)

Attachments NISP-EN-04 is the Digital Specific Addendum to the SDP under the same mandatory Detailed Efficiency Bulletin (EB 17-06) Considerations NISP-EN-04 (Spring 2018)

Same process phases as IP-ENG-001, tailored with DEG-specific supplemental information for digital Primary Methods implementations. Including Cyber Security. DEG (Fall 2018)

Provides the user with what to do DEG provides detailed guidance using a modern engineering process with digital design considerations, information item guidance, and division of responsibility methods to improve skill of the craft, Provides the user with How to Do Guidance Digital Training/Tech Transfer completes the framework 32 © 2023 Electric Power Research Institute, Inc. All rights reserved.

Supplemental Funded:

Digital Systems Engineering User Group - 3002022140 Fall Meeting 2023 A forum for information sharing of digital specific material September 19th & 20th Current Members to Date Operational Experience Common Design Packages Framatome Constellation Energy Lessons Learned Cyber Security Evaluations Dominion Energy South Carolina, Inc.

Dominion Energy, Inc.

Interactive community Member Feedback Duke Energy Corp.

Entergy Services, Inc.

Evergy Services (Wolf Creek)

Current Activities: Callaway (Ameren)

Palo Verde Sargent & Lundy Engineers Harmonization of the DEG,HAZCADS,DRAM,TAM,EMCAM,HFAM, and Digital Lifecycle Southern Company Strategy Guide. Improves coordination between products and updates with current Tennessee Valley Authority (TVA)

Vistra Corp. (Comanche Peak)

OE. Westinghouse Electric Company, LLC Xcel Energy Roll out of the member sharing website. PSEG (Salem/Hope Creek)

South Texas Project (STP)

Nuclear Digital Project Experience Baseline 2022 published. Updated annually, NPPD (Cooper) members of this supplemental can download EPRI Technical Report 3002023748. Enercon Services Curtiss Wright This report provides a baseline of installed digital equipment across members. Bruce Power 33 © 2023 Electric Power Research Institute, Inc. All rights reserved.

V. Risk Informing the Design and Operation of Digital Systems including PRA integration 34 © 2023 Electric Power Research Institute, Inc. All rights reserved.

HAZCADS: Hazards and Consequences Analysis for Digital Systems - Revision 1 3002016698 35 © 2023 Electric Power Research Institute, Inc. All rights reserved.

HAZCADS Design Information DEG Pathway 1 Perform STPA and find Use pre-determined potential Unsafe qualitative risk Control Actions (UCAs) matrices to assign Risk Reduction Target (RRT)

Design Pathway 2 Feedback Does it No Look up pre-computed One affect bounding risk for one the PRA? system DRAM, TAM, RRT HFAM, Pathway 3 EMCAM Yes Partial Calculate bounding risk for partial system Control Methods scope UCA Sets How many systems Pathway 4 Pathway 5 are impacted?

Refined Risk Analysis:

More Than One Calculate bounding risk Calculate RRTs for UCA for multiple system scope Sets 36 © 2023 Electric Power Research Institute, Inc. All rights reserved.

HAZCADS Design Information DEG Pathway 1 Perform STPA and find Use pre-determined potential Unsafe qualitative risk Control Actions (UCAs) matrices to assign Risk Reduction Target (RRT)

Design Pathway 2 Feedback Does it No Look up pre-computed One affect Use theriskexisting bounding for one the PRA?

PRAsystem model to RRT DRAM, TAM, HFAM, assess the impact Pathway 3 of EMCAM Yes Partial complete Calculate bounding failure riskof theforDI&C partialand system assign Control Methods scope UCA Sets How many systems a bounding Risk Pathway 4Target Pathway 5 are impacted? Reduction Calculate(RRT) bounding risk Refined Risk Analysis:

More Than One Calculate RRTs for UCA for multiple system scope Sets 37 © 2023 Electric Power Research Institute, Inc. All rights reserved.

Risk Ranking HAZCADS uses a bounding risk assessment process, when the risk is calculated quantitatively through a PRA model

- This approach evaluates all failures including any common cause (not just software common cause failures)

The risk sensitivity assessment is based on the change in risk if the UCAs occurred Change in Core Damage Frequency Change in Large Early Release RRT - CDF (per year) Frequency - LERF (per year)

D CDF 1E-6 LERF 1E-7 C 1E-6 < CDF 1E-5 1E-7 < LERF 1E-6 B 1E-5 < CDF 1E-4 1E-6 < LERF 1E-5 A 1E-4 < CDF 1E-3 1E-5 < LERF 1E-4 Change the Design CDF > 1E-3 LERF > 1E-4 38 © 2023 Electric Power Research Institute, Inc. All rights reserved.

Anticipated Concern: An RRT of A Is Really High!

Delta risks of 10-4/yr to 10-3/yr are really high and normally would not be allowed for risk-informed applications True, but this is not the increase in risk of the system - this is the risk if the entire system fails This is also the risk of the current system were it to completely fail In reality, we expect digital I&C upgrades to reduce risk compared to the existing, analog systems as demonstrated in other safety-related industries 39 © 2023 Electric Power Research Institute, Inc. All rights reserved.

Downstream Processes (after HAZCADS)

The RRT from HAZCADS is used by the downstream processes in the EPRI digital framework Each downstream process assigns control methods to protect against a type of failure

- DRAM: Assigns control methods to account for random and systematic errors

- HFAM: Assigns control methods based on human factors

- TAM: Assigns control methods based on cyber security

- EMCAM: Assigns control methods based on electromagnetic compatibility 40 © 2023 Electric Power Research Institute, Inc. All rights reserved.

Control Methods DRAM, TAM, HFAM, and EMCAM:

- Determine causal factors for the UCAs

- Establish control methods that are aimed at addressing those causal factors

- Score the control methods against the RRT from HAZCADS This process may impose new design requirements or add implementation requirements on the system 41 © 2023 Electric Power Research Institute, Inc. All rights reserved.

RRT Acceptance Criteria Change the Design RRT Change in Core Damage Change in Large Early Release Frequency - CDF (per year) Frequency - LERF (per year)

Change the Design CDF > 1E-3 LERF > 1E-4 A A 1E-4 < CDF 1E-3 1E-5 < LERF 1E-4 B B 1E-5 < CDF 1E-4 1E-6 < LERF 1E-5 C

C 1E-6 < CDF 1E-5 1E-7 < LERF 1E-6 D

D CDF 1E-6 LERF 1E-7 RG 1.174 Example HAZCADS 42 © 2023 Electric Power Research Institute, Inc. All rights reserved.

DEG HAZCADS Top Event 5 Operator(s) 6 7 (Pathway Five)

Stakeholder 1 Needs Delta Risk Event Event Event Losses, Controller(s) Calculations Hazards Event UCA Event UCA Control Actions UCAs Actuator(s) Sensor(s)

Event UCA Event UCA Set 2 Requirements UCA Set Controlled Process(es) Event Event Event Event Control Structure Unsafe Control Action UCAs and RRTs I&C 3 Design Functional 9 8 11 Programmatic Control Loss Spatial Method Scenario 4 10 System Elements Allocations Control Loss Method Scenario Hardware Element Software Element Acquisition Control Loss Plant Method Human Element Scenario System(s)

Design Equipment Under Control Connectivity Relationship Sets Downstream Processes 43 © 2023 Electric Power Research Institute, Inc. All rights reserved.

Special Note on Software Common Cause Failures I&C Operating Experience (OE) (nuclear and non-nuclear) indicates that most systematic failures are a result of:

- Latent design defects due to inadequate requirements

- Uncontrolled system interactions Misapplication of diversity as a means to address the potential for software CCF can contribute to additional system complexity, which could increase the potential for latent errors HAZCADS identifies and risk ranks the potential systematic errors (not just software CCFs) and the other tools in the EPRI digital framework establish control methods to address them 44 © 2023 Electric Power Research Institute, Inc. All rights reserved.

Summary of EPRIs DI&C Risk-Informed Approach

1. Identify what can go wrong (what could be unsafe) in the system
2. Establish a bounding risk assessment of the identified potential errors
3. Revise system requirements and/or assign control methods to the system commensurate with the risk
4. If needed, refine the risk assessment based on identified loss scenarios that cannot meet the RRT 45 © 2023 Electric Power Research Institute, Inc. All rights reserved.

HFAM - Human Factors Analysis Methodology for Digital Systems: A Risk-Informed Approach to Human Factors Engineering 3002018392 46 © 2023 Electric Power Research Institute, Inc. All rights reserved.

HFAM = Part of the DEG Framework Risk Reduction Targets HAZCADS Risk-Enabled DRAM TAM HFAM EMCAM Digital Hazards Cyber Security Human Factors Identification Digital Reliability Technical Analysis EMI Guides and Prioritization Analysis Method Assessment Methodology Methodology

  • STPA/FTA Causal Factors
  • Integrate HFE Integration
  • Systemic
  • Design Phase and HRA
  • Random
  • In-Situ
  • Graded Approach
  • Environmental
  • Supply Chain Design Control Methods Allocation Information DEG - Systems Engineering 47 © 2023 Electric Power Research Institute, Inc. All rights reserved.

What is a Risk-Informed How HFAM applies information about risks to analysis and design activities to:

HFE Approach? 1. Apply a graded approach to determine required HFE activities

2. Design the system to fit human abilities and limitations
3. Give adequate prominence to human error in system design - prevent or mitigate unsafe control actions.
4. Design Out potential system errors to avoid:

unnecessary interactions instructions that are hard to understand poor use of visual design bad or no error trapping subjecting the human to extreme physical or mental stress or workload

5. Develop evidence to demonstrate that the system will be safe and will not be or cause a hazard to people or environment.

48 © 2023 Electric Power Research Institute, Inc. All rights reserved.

Key Features of HFAM*

1. Integrated with a comprehensive systems engineering process (EPRI DEG)
2. Integrated with risk insights from HAZCADS Stakeholder Treatment of HF Function Verification
3. Provides a graded approach with Requirements Analysis & Task Analysis Important HSI Design and Human 2 levels of gradation Analysis, HF Allocation Actions Validation Impacts

- Phase 1 based on the scope of the design within the DEG Graded Approach Phase 1 Graded Approach Phase 2

- Phase 2 based on the RRTs from HAZCADS

4. Integrates the use of HRA methods to assess the reliability of human tasks
  • Human Factors Analysis Methodology for Digital Systems: A Risk-Informed Approach to Human Factors Engineering

[EPRI 3002018392; 2021]

49 © 2023 Electric Power Research Institute, Inc. All rights reserved.

Digital HRA Research Whats Different?

Design B Plant Orientation & Data Collection Existing HRA Identification of Human Failure Method/Process Events Design A Design C Definition & Task Analysis Quantification Graded approach to method development, starting with current methods 50 © 2023 Electric Power Research Institute, Inc. All rights reserved.

Data Sources in Use Today Three major sources for initial evaluation How can we combine data to understand the reliability

1. Halden & Idaho National Lab experimental of human interactions in a data (broad range of design features, but digital environment?

qualitative or small samples)

  • To be useful in human
2. Literature and operational experience review factor engineering?

(broad range of design features, but

  • To be useful in validating qualitative or small samples) or updating HRA methods?
3. Training simulation data* (large quantitative
  • With data in a useful data set, but based on one design / concept of format, what else could we operations) inform?
  • Data to Support Human Reliability Analysis (HRA) for Digital Environments:

Data and Analysis from Korean Simulator Studies; EPRI 3002020751; 2021 51 © 2023 Electric Power Research Institute, Inc. All rights reserved.

A Special Note on Human Errors of Commission - Using the Right Analysis Tools for the Right Tasks In the PRA, human cognitive errors of commission (EOC) are typically ground-ruled out unless a specific cause is identified

- Other processes are used to minimize the likelihood and protect against EOCs

- When modeled, limited treatment of consequences are considered Consideration of expanded treatment of EOCs in new standards for plants with large amounts of automation ButSTPA is designed to find these types of systematic errors

- Use HRA experience with EOCs to build loss scenarios

- Design out high-consequence errors and provide adequate control methods for other potential unsafe actions Not necessarily something we need to quantify in the PRA, but is a candidate to be integrated in other ways 52 © 2023 Electric Power Research Institute, Inc. All rights reserved.

PRA Enhancements for Digital Technology for the O&M Phase 53 © 2023 Electric Power Research Institute, Inc. All rights reserved.

A PRA Look at Digital Systems We have an effective tool Operations &

Design Implementation Configuration for risk informing the design Management and implementation phases -

HAZCADS [EPRI 3002016698]

and associated processes Coherent approach to assess and address risk across lifecycle 54 © 2023 Electric Power Research Institute, Inc. All rights reserved.

A PRA Look at Digital Systems (2) Level of detail?

Are there How do I model Standards or We need something equally useful for software conventions?

day-to-day use once the digital I&C failures ?

mods are installed Whats the What boundary What about

- Simple to build, simple to understand, pre-initiators? about and roughly right modelling and data - conditions of CCF?

that can be implemented and used now the data?

and refined over time Research: Capture current state of knowledge, data and use cases Operations &

Design Implementation Configuration Consensus: Socialize with international Management technical community Continuous improvement: Reflect additional operating experience as it becomes available Consistency: Iterate as HAZCADS and related processes are refined Coherent approach to assess and address risk across lifecycle 55 © 2023 Electric Power Research Institute, Inc. All rights reserved.

Proposed Approach (In Progress)

Research in progress

- Collecting/developing examples and use cases to test proposed approach

- Re-look at the data, existing guidance and lessons from HAZCADS

- Ensure consistency with RIDM framework

- Ensure plant reflects as built, as operated, including change management Incorporation of the design into the PRA should

- Be consistent in insights from the design process

- Be consistent with overall PRA modelling approach

- Continue to reflect the as-built, as-operated plant 56 © 2023 Electric Power Research Institute, Inc. All rights reserved.

Proposed Approach (In Progress) (2)

Digital systems should be modeled at a reasonable level of detail adequate to support decision making

- Over decomposition introduces unnecessary modeling complexity

- Modeling level should match boundary conditions of collected data

- Software should not be separated from hardware (all software is implemented through a hardware system) Functional Reliability Fundamental Assumption: Control Methods implemented through the design process reduces the risk to acceptably low levels of risk

- Both for functional reliability and common cause failures

- Qualitative analysis reflects the best state of knowledge (best-estimate); this is key for consistency between design and assessment phases 57 © 2023 Electric Power Research Institute, Inc. All rights reserved.

Capturing Consequence of Digital Failures* in the PRA The cause-effect relationship of OR potential unsafe control actions (UCAs) that survived to final design should be AND retained in the PRA or documentation:

- UCAs with non-unique consequences should be mapped to existing basic events for Element 1 Element 2 CCF of 1&2 documentation

- UCAs with unique consequences can be included explicitly in the model Cause and Effect Relationship Logic reassessed as the PRA evolves to reflect the as-built, as-operated plant

  • Can be hardware, software or human error; systematic or random.

58 © 2023 Electric Power Research Institute, Inc. All rights reserved.

Considering Likelihood Two potential approaches for capturing likelihood of a functional failure (research currently investigating both approaches):

1. Create generic failure rates based on available data + qualitative insights and include that probability in the model.

- Data from existing EPRI and industry research and databases

- Qualitative insights based on strength of the implemented control methods, per the downstream processes

2. Do not quantify in base model, but use sensitivity studies to understand if risk has changed

- Control methods were determined to be adequate at the design stage to mitigate against base RRT level

- Sensitivity studies can be used to understand if the control methods continue to be adequate due to other changes in the model/plant Long term industry data gathering needs to be put into place and match the boundary conditions of data application

- Normal data updates at the equipment or function level will indicate if there is a performance issue and the control methods need to be re-evaluated.

59 © 2023 Electric Power Research Institute, Inc. All rights reserved.

Consistency with Processes for Treatment of Uncertainty Risk-Informed Decision Making (RIDM) is the process of using risk information with other pertinent information to make decisions NUREG-1855 and EPRI companion document (1026511) provides guidance on how to deal with uncertainties in risk informed decision making

- Recognizes that conservative treatment does not lead to best decision making; can mask insights or be overly burdensome

- Provides guidance for dealing with large uncertainties through the RIDM process In this case, performance monitoring through data updates is key to ensuring the system behaves as expected Defense-in-depth is key to understanding the reliability at the plant facility level 60 © 2023 Electric Power Research Institute, Inc. All rights reserved.

Modelling DI&C in the PRA - Summary Software reliability is directly proportional to the systematic controls and the design/implementation constraints.

By adding the risk-ranking (RRTs) to what could go wrong (UCAs), HAZCADS provides the designers the information they need to assign the appropriate level of control methods to obtain an adequate baseline risk If the design is adequately reliable, PRA should be used to ensure the as-built, as-operated plant continues to remain adequately reliable Performance monitoring through data gathering should match the PRA modelling through appropriate boundary conditions

- Do we have the right data collection frameworks in place?

Research is still in progress Data, cause-effect relationship is important; explicit quantitative modelling is not 61 © 2023 Electric Power Research Institute, Inc. All rights reserved.

Looking at the Whole Elephant Systematic failures Human interactions Automation Cyber Security Hardware reliability Software reliability digital is new, but not really 62 © 2023 Electric Power Research Institute, Inc. All rights reserved.

TogetherShaping the Future of Energy 63 © 2023 Electric Power Research Institute, Inc. All rights reserved.