ML20154L891


Rev 2 to Guidance for Development of Simulation Facility to Meet Requirements of 10CFR55.45
ML20154L891
Person / Time
Site: Fort Saint Vrain
Issue date: 03/31/1988
From:
PUBLIC SERVICE CO. OF COLORADO
To:
Shared Package
ML20154L870 List:
References
PROC-880331, TAC-67882, NUDOCS 8806010094
Download: ML20154L891 (123)


Text


Attachment 8.1 GUIDANCE FOR DEVELOPMENT OF A SIMULATION FACILITY TO MEET THE REQUIREMENTS OF 10CFR55.45 MARCH 1988

UTILITY SIMULATION FACILITY GROUP


REVISION 2

TABLE OF CONTENTS

LIST OF TABLES
LIST OF FIGURES
LIST OF EXAMPLES
FOREWORD
1.0 INTRODUCTION
1.1 Purpose
1.2 Background
1.3 Definitions
1.4 Acronyms
2.0 CRITERIA
2.1 Human Factors
2.2 Procedures
2.3 Steady State and Transient Models
2.4 Performance Testing
2.5 Operating Test Methodology
3.0 DEVELOPMENT OF A SIMULATION FACILITY
3.1 System Function/Task Analysis
3.2 Operational Cue Analysis
3.3 Fidelity Evaluation
3.4 Simulation Device Integration
3.5 Simulation Facility
3.6 Implementation
3.7 Configuration Management Program
4.0 SIMULATION DEVICES
4.1 Non-Plant Referenced Simulator
4.1.1 Human Factors
4.1.2 Procedures
4.1.3 Steady State and Transient Models
4.1.4 Performance Testing
4.1.5 Operating Test Methodology
4.2 Control Room Mock-Up
4.2.1 Human Factors
4.2.2 Procedures
4.2.3 Steady State and Transient Models
4.2.4 Performance Testing
4.2.5 Operating Test Methodology
4.3 Reduced Scope Simulator
4.3.1 Human Factors
4.3.2 Procedures
4.3.3 Steady State and Transient Models
4.3.4 Performance Testing
4.3.5 Operating Test Methodology
4.4 Part Task Simulator
4.4.1 Human Factors
4.4.2 Procedures
4.4.3 Steady State and Transient Models
4.4.4 Performance Testing
4.4.5 Operating Test Methodology
4.5 CRT Simulator
4.5.1 Human Factors
4.5.2 Procedures
4.5.3 Steady State and Transient Models
4.5.4 Performance Testing
4.5.5 Operating Test Methodology
4.6 Reference Plant
4.6.1 Human Factors
4.6.2 Procedures
4.6.3 Steady State and Transient Models
4.6.4 Performance Testing
4.6.5 Operating Test Methodology
5.0 CONCLUSIONS, OBSERVATIONS, AND RECOMMENDATIONS
6.0 REFERENCES

LIST OF TABLES
1 Human Factors Considerations
2 Simulation Facility Development Process Steps
3 System Function/Task Analysis Steps and Process
4 System Function/Task Analysis Criteria
5 Fidelity Evaluation Criteria

LIST OF FIGURES
1 Simulation Facility Development Process Flow
2 Man/Machine Interface Model
3 Fidelity Evaluation

LIST OF EXAMPLES
1 Breakdown of an Operator Task

FOREWORD

There are definite, measurable benefits to be gained from the use of non-plant specific simulation devices, both in the areas of training and operator evaluation. The guidance of this document proposes a methodology for realizing those benefits.

This document was developed to provide a consistent methodology for use by licensees in implementing plans to meet the requirements of 10CFR55.45(b)(1)(i). This document is also intended for use by the NRC in evaluating 10CFR55.45(b)(1)(i) simulation facilities. While this document provides generic guidance, the authors recognize that plant specific plans to meet 10CFR55.45 requirements may result in deviation from the guidance contained herein. This document also identifies the methods and generic limitations of operating tests on a simulation facility that is made up of the simulation devices discussed herein.

This document was prepared by representatives from the four author utilities that joined to form the Utility Simulation Facility Group (USFG). Specifically, the four USFG utilities and respective plants are:

Consumers Power Company (Big Rock Point),

Public Service Company of Colorado (Ft. St. Vrain),

Southern California Edison Company (San Onofre 1), and Yankee Atomic Electric Company (Yankee Nuclear Power Station).


The above licensees will utilize this document to outline their plans to develop a simulation facility to meet 10CFR55.45 using existing, upgraded or new operator training hardware, enhanced, as necessary, by some use of Controllers, pen and ink procedure changes, and walk-through methods discussed herein. The plans will describe the simulation device (s) to be used as a simulation facility to evaluate the operators' generic skills and knowledge necessary to satisfy the thirteen operating test criteria of 10CFR55.45. Evaluation of the generic skills and knowledge is fundamental to providing assurance that the operators are prepared to perform the duties and responsibilities required of reactor and senior reactor operators.

It is the conclusion of the author utilities that effective operator training and examination is realized if any one of the simulation devices, used alone or in combination with other devices, constitutes the licensee's simulation facility. Use of these simulation facilities provides for operators that are trained and evaluated to standards necessary to assure safe operation of nuclear power plants.

Utility Simulation Facility Group

NOTE: Revision 1 of this document reflects revisions to incorporate NRC comments and NRC/USFG agreements and understandings reached during an NRC/USFG meeting on September 15 and 16, 1987. Revision 2 reflects comments from a December 7, 1987 meeting.


1.0 INTRODUCTION

1.1 Purpose

This document provides sufficient generic guidance for the development and use of simulation facilities for those Nuclear Power Plant Licensees that plan to meet the requirements of 10CFR55.45(b)(1)(i). Plant specific plans will describe the licensee's simulation facility and chosen methods of conducting operating tests on the simulation facility.

1.2 Background

10CFR55, "Operator Licenses," Paragraph 55.45, "Operating Tests," requires that an applicant for a reactor operator or senior reactor operator license demonstrate both an understanding of and the ability to perform certain essential job tasks. It specifies that the demonstration will be done through the administration of operating tests in a plant walkthrough and in a simulation facility. The simulation facility may be one that consists solely of a plant-referenced simulator certified to the NRC by the facility licensee, or it may be one which has been approved by the NRC, after application for such approval has been made by the facility licensee.

This document addresses the preparation and implementation of a plan for the development of a simulation facility and guidance for use and approval of that simulation facility.

1.3 Definitions

Best Estimate - Reference plant response data based upon engineering evaluation or operational assessment.

Candidate - An individual being evaluated for a reactor operator or senior reactor operator license.

Controller - An individual responsible for clarifying deviations between a simulation device and the reference plant.

Critical Parameters - 1) Those parameters that require direct and continuous observation to operate a nuclear power plant under manual control.

2) Input parameters to nuclear power plant safety systems.


Cue - Information available for use in evaluating plant status.

Deviation - An identified difference between a simulation device and the reference plant.


Deficiency - A deviation that the fidelity evaluation identifies as a "need-to-fix" item.

Examiner - An NRC representative who conducts operating tests.

Fidelity - Reference plant replication in either system model, physical appearance or system function.

Instructor - A utility representative responsible for the operation of the simulation device.

Multidisciplinary Review - Review by disciplines of appropriate background and experience.

Operational Cue Analysis - An analysis to determine the cues available on a simulation device and the cues required by the referenced plant operating procedures.

Operator - An individual who possesses a reactor operator or senior reactor operator license.

Pen and Ink Procedure Changes - Changes made to controlled reference plant procedures for the purpose of mitigating simulation device deficiencies, before those procedures are used to conduct operating tests.

Potential to Confuse - Deviation potential to perplex the operator and disrupt his mental and physical actions.

Potential to Error - Deviation potential to lead the operator to not perform any action or to perform an incorrect action.

Potential to Impede - Deviation potential to delay or hinder significantly the operator's correct response.

Procedures - Reference plant normal operating procedures, abnormal operating procedures, emergency operating procedures, and emergency plan implementing procedures that an operator or candidate would be required to implement. When plant procedures are referenced throughout this document, it is assumed that they are "controlled copies" of the procedures, unless otherwise specified.

Procedure Performance Time - The realistic or actual time for a candidate to perform a procedure or task in the reference plant control room.

Real Time - Computer simulation of dynamic performance in the same time base relationships, sequences, durations, rates and accelerations as the dynamic performance of the reference plant.

Reference Plant - The specific nuclear power plant from which a simulation facility's control room configuration, system control arrangement, and design data are derived.

Simulation Device - A component of a simulation facility that simulates a portion or all of the reference plant.

Simulation Facility - One or more of the following simulation devices, alone or in combination, used for the conduct of operating tests:

1) Non-Plant Referenced Simulator
2) Control Room Mock-Up
3) Reduced Scope Simulator
4) Part Task Simulator
5) CRT Simulator
6) Reference Plant

System Function/Task Analysis - A systematic analysis of the reference plant procedures that yields the cue and I&C requirements.

Task - A unit of control room operator work which may require plant information collection, systems operation, or both.

Task Element - A unit of human activity comprising a task.

Task Statement - An independent unit of control room operator work.

1.4 Acronyms

CFR - Code of Federal Regulations
CRDR - Control Room Design Review
CRT - Cathode Ray Tube
CRM - Control Room Mock-Up
HF - Human Factors
I&C - Instrumentation and Control
NPRS - Non-Plant Referenced Simulator
NRC - Nuclear Regulatory Commission
PRS - Plant Referenced Simulator
PTS - Part-Task Simulator
RO - Reactor Operator
RSS - Reduced Scope Simulator
SFTA - System Function/Task Analysis
SME - Subject Matter Expert
SRO - Senior Reactor Operator
USFG - Utility Simulation Facility Group

2.0 CRITERIA

The following section provides generic criteria to be applied in the evaluation and use of a simulation device. All or part of each of these criteria is applied to specific simulation devices in the manner described in Sections 3 and 4.

2.1 Human Factors

Human factors addresses the comparability of the simulation device with the reference plant in the areas of control room and panel layout, I&C configuration and ambient operating environment. The primary human factors consideration in a plant referenced simulator (PRS) is for the simulator to have fidelity with the reference plant. PRS fidelity means duplication in physical appearance, physical layout, system function and system model. For a simulation device other than a PRS, 100% fidelity may not be achievable in system model and physical layout. Therefore, the only highly achievable fidelity component would be duplication in system function. A simulation device has to meet a high degree of duplication in system function criteria to be considered viable for the conduct of an operating test.

The degree to which a deviation does not meet physical/functional fidelity becomes a human factors concern. In a simulation device, the principal goal of human factors is to assure fidelity deviations have no negative impact on operator task performance. Deviations that hinder the operator should be documented and evaluated in a systematic fashion. Cognitive and behavioral operator actions should be considered. Operator perceptions (color, mimics, patterns, etc.) are to be considered if essential to task performance. Other human factors considerations deal with the specific components in Table 1. The simulation facility will contain controls, instruments, alarms and other man-machine interfaces for the operator to demonstrate his capability. The ambient environment in the simulation facility should be replicated to the extent possible. To the extent practicable, the following generic criteria should be applied to any simulation device. The specific application of these generic criteria to each simulation device is discussed in Section 4.

HUMAN FACTORS CONSIDERATIONS

Major Area - Components
Control Room Layout - Physical Orientation, Operator Station
Panel Layout - Systems Orientation, Control Panels, Annunciator Panel, Mimics
Instruments and Controls - Displays, Controls, Instrument Range, Instrument Accuracy, Engineering Units
Ambient Operating Environment - Normal and Emergency Lighting, Humidity, Temperature, Noise, Communications, Auditory Signals

Table 1

Control Room and Panel Layout

The simulation device should approximate the reference plant physical orientation and appearance. The simulation device should be the same physical size as the reference plant, although reduced scale reproductions are acceptable provided the SFTA determines that the reduction does not significantly detract from the operating test.

The operator's station and other working space should be replicated. Deviations from the reference plant shall be evaluated as discussed in Section 3.

The control panels should be positioned on the simulation device in the same physical location as the reference plant. The systems orientation within the panel should replicate the reference plant.

Deviations from the reference plant panel layout shall be evaluated as discussed in Section 3.

I&C Configuration

The simulation device controls, indications, etc., on the control panels should approximate the same physical location as in the reference plant. The instrument displays, controls, range, accuracy, and units should replicate the reference plant. Deviations from the reference plant I&C configuration shall be evaluated as discussed in Section 3.

Ambient Operating Environment

The ambient operating environment shall permit the operator to perform his duties. The ambient operating environment factors to be considered are lighting, humidity, temperature, noise and communication. Significant deviations from the reference plant environment shall be evaluated as discussed in Section 3.

2.2 Procedures

The procedure considerations are: 1) the scope, 2) the manner of use, and 3) the methods of modifying the procedures used in the administration of operating tests. Controlled copies of reference plant procedures are used in the conduct of an operating test.

Procedures performed on a simulation device allow candidates to demonstrate their "ability to perform" the operations required by those procedures. The following generic criteria should be applied when reference plant procedures are used during the course of an operating test on the simulation devices described herein. The specific application of these criteria to each simulation device is discussed in Section 4.

Procedure Scope

Types of procedures exercised on the simulation device include: Normal Operating Procedures, Abnormal Operating Procedures, Emergency Operating Procedures, and Emergency Plan Implementing Procedures.

The scope of procedures to be exercised on any simulation device shall be determined using the methods discussed in Section 3.

Procedure Use

Reference plant procedures will be used on the simulation device(s). The reference plant may be used to exercise the Normal Operating Procedures that can be performed as part of normal operations. Those procedures or tasks requiring control room interaction should be performed on a single simulation device (or appropriately integrated simulation devices).


Procedure Modification

The data obtained during the SFTA (described in Section 3) will be used to identify any deviations between the simulation device and the procedures that will be tested. The determination will then be made which, if any, of these deviations should be resolved by pen and ink procedure changes. The procedure steps may then be modified to accommodate simulation device deficiencies, provided the modifications do not significantly detract from the conduct of the operating test. This may include partial completion or deletion of procedure steps or assuming the successful or unsuccessful completion of operator tasks that cannot be performed on the simulation device.

Before procedures are used in the conduct of operating tests, any necessary pen and ink procedure changes identified by the SFTA will be made. Pen and ink procedure changes may affect Procedure Performance Time. Procedure Performance Time will be taken into consideration in the development, upgrade or use of existing devices or in the implementation of the pen and ink procedure changes.

All recommended pen and ink procedure changes will be forwarded to the Multidisciplinary Review Team (see Section 3.1) for review. Such changes will be made only after the following has been considered:
1. Determination has been made that the controlled procedure cannot be performed on existing simulation devices.
2. Upgrades to existing simulation devices, or the development of new simulation devices for the procedure(s) or part of the procedure(s) which cannot be conducted, require an excessive effort or burden in relation to the benefit gained. The evaluation of this burden versus benefit shall be documented.
3. Determination has been made that pen and ink changes are preferable to other alternatives (i.e., use of Controllers or other similar mechanisms has been considered, but would result in a degradation of the examination process).

Any procedure modifications will be controlled by the simulation facility Configuration Management Program described in Section 3.7.

2.3 Steady State and Transient Models

The steady state and transient modeling used as part of the simulation device(s) shall adequately model the operating behavior of the reference plant. The following generic criteria should be applied, as applicable, to any simulation device. The specific application of these criteria to each simulation device is discussed in Section 4.

Scope

Simulation device output should approximate and display expected plant responses. The responses should be based upon plant operating data or best estimate analyses, as appropriate.

Fidelity

The models should be of a level of sophistication necessary to assure the adequacy of the output information being presented to the operator.

Time

All simulation devices should approximate real time.

2.4 Performance Testing

Performance testing is conducted to verify the simulation device performance as compared to actual or predicted reference plant performance. The initial performance testing shall serve to verify and validate the adequacy of the completed Simulation Facility, including any procedure modifications. The specific application of performance testing criteria to each simulation device is discussed in Section 4.

Performance testing should be conducted on a schedule consistent with 10CFR55.45.

2.5 Operating Test Methodology

This section provides a generic process for conducting examinations on simulation devices for the purpose of evaluation in accordance with 10CFR55.45(a). The following generic criteria should be implemented, as applicable, on any simulation device. The specific application of these criteria to each simulation device is discussed in Section 4.

Those portions of the operating test conducted on each simulation device shall be limited to the procedure scope determined by the Operational Cue Analysis (described in Section 3.2) for that simulation device.

Examinations should be conducted in accordance with NUREG-1021, "Operator Licensing Examiner Standards."

Procedure Performance Time on a simulation device should be followed as closely as possible during operating tests.

During the conduct of the operating test, Controllers may be required to mitigate identified simulation device deficiencies. The use of Controllers should follow the guidelines described below.

Guidelines for Controller Interaction

The role of the Controller is to provide an added dimension to the simulation device to enable the device to more closely approximate the reference plant during the conduct of operating tests. In this sense, Controller usage is similar in nature to those activities conducted by utility instructors who, during the conduct of operating tests on Plant Referenced Simulators, provide information as outside operators, I&C technicians, etc. In this case the Controller is used to augment the simulation devices. Therefore, the purpose of the Controller is to provide, under direction, those cues unavailable from the device that may be needed to carry out actions during the performance of the operating test.

Controllers used during implementation of operating tests on various simulation devices shall follow specific guidelines established herein and as mutually agreed upon by individual examiners and the simulation facility management.

Examination Integrity

o Examination integrity is paramount to the success of the operating test. The Controller should not compromise examination integrity.


o Controller actions shall be conducted under the direct supervision and control of the simulation facility operator.

o Controllers shall not prompt the operators in the performance of their duties. Prompting may result in the invalidation of the operating test.

Controller Qualifications

o Controllers shall be trained on their duties and responsibilities. Details of this training program shall be included as part of the simulation facility plan submitted by each utility.

o Qualifications of controllers shall be supplied as part of the simulation facility plans submitted by each individual utility and should meet the minimum criteria listed below.

1) The controller should be employed by the utility or as a vendor under contract to the utility.
2) The controller should possess the level of training and qualifications required by the utility for simulation facility instructors as outlined in their respective accredited training programs. Controllers should hold or have held an SRO license or certification on the plant for which the operating test is being conducted. Controllers should also be knowledgeable on the simulation device on which the operating test is being conducted.

3) Controllers should receive additional training, as required by the utility, on the conduct of operating tests. As a minimum, this training shall cover the criteria listed above.

Controller Functions

o Controllers shall function to provide cues to the operators that are not available from the simulation device.

o Controllers shall provide cues only as answers to specific questions from the operators or as directed in the operating test scenario. These cues are only for the purpose of providing information not available from the simulation device or as necessary to clarify a deviation of the simulation device from the reference plant.

o Controllers shall perform any other actions as identified and directed by the examiner in the conduct of the operating test.

3.0 DEVELOPMENT OF A SIMULATION FACILITY

This section provides the methodology used to develop a simulation facility to meet 10CFR55.45(b)(1)(i) using one or more of the simulation devices that are described in Section 4. The purpose of the simulation facility is to enhance the conduct of operating tests.

It is recognized that operating tests are better performed on devices that promote an active man/machine interface, such as Non-Plant Referenced Simulators, Reduced Scope Simulators, or Part Task Simulators. This interface facilitates evaluation of the individual(s) in an actual operating environment. However, in cases where there is limited implementation of an active man/machine interface, adequate qualifications can be demonstrated by alternate devices. As further discussed in Section 4, these methods use Control Room Mock-ups, CRT Simulation and the Reference Plant, alone or in combination.

The following systematic evaluation methodology is based upon an SFTA. The process briefly identified in Figure 1 and Table 2 and further described in Section 3 is one method for developing a Simulation Facility based upon an SFTA. Each utility may have completed portions or all of the SFTA, but not in the exact format described below. Therefore, this section describes the USFG's understanding of the detail needed in such an analysis.

Figure 1: Simulation Facility Development Process Flow

SFTA - Analysis of reference plant procedures to deduce cue and I&C information requirements.
Comparison of the simulation device with the information requirements.
Fidelity Evaluation - Evaluation of fidelity deviations between the simulation device and the reference plant.
Simulation Device (1. NPRS, 2. CRM, 3. RSS, 4. PTS, 5. CRT Simulator, 6. Reference Plant) - Selection of the simulation device to satisfy information/fidelity requirements.
Integration of the simulation device(s) into a simulation facility.
Simulation Facility - Utility plan of simulation devices used to meet the 10CFR55.45 rule.

SIMULATION FACILITY DEVELOPMENT PROCESS STEPS

System Function/Task Analysis
1. Task Identification
2. Task Element Identification
3. Cue Information Requirements
4. I&C/Physical Characteristics

Simulation Device Comparison and Selection
1. Comparison of SFTA/Reference Plant with Simulation Device
2. Refer Deviation to Fidelity Evaluation

Fidelity Evaluation
1. Identification of Deviations
2. Assessment of Deviations
3. Disposition of Deviations

Simulation Facility Plan
1. Description of Simulation Devices
2. Match Procedures with Devices

Table 2

It is expected that plant specific implementation of an SFTA will include parts of or all of any previously performed SFTA or related type work.

The rigorous implementation of this simulation development methodology will assure the appropriate use of a particular simulation device for the performance of operating tests.

Conversely, the implementation may provide indication that a currently used simulation device is inappropriate to meet the needs of an operating test and a new device should be pursued. Therefore, it is the goal of this simulation facility development methodology to provide an appropriate simulation facility, considering both existing and new simulation devices.

3.1 System Function/Task Analysis (SFTA)

The SFTA is a systematic analysis of the reference plant procedures that yields the cue and I&C characteristics. From the reference plant control room instrumentation, the operator obtains the required cues to help him in performing his tasks. The systematic determination of these man/machine interface characteristics provides a basis for evaluating the adequacy of a simulation device to support the conduct of operating tests. Tables 3 and 4 (located at the end of Section 3) detail the SFTA steps and process and the SFTA criteria, respectively.

The method is characterized as a top-down analysis which begins with the reference plant procedures. The procedures are partitioned into units of activities identified as tasks. A task is a unit of control room operator work which may require information collection, systems operation, or both. A task is characterized by being a relatively small unit of work which is comprised of approximately the same sequence of elementary human actions regardless of the operational sequence in which the task appears. The main criterion for the identification of a task is that the task should define the information and/or control functions needed by the operator to perform that unit of work.

Each task is further partitioned into the units of human activity, called task elements, which need to be sequentially accomplished in order to execute the task. The main criterion for identifying these task elements is that each should further refine and identify the information requirements needed by the operator to execute the task in the context of all the operational sequences in which the task appears.

A set of cues is developed from the task element requirements. The cue in this context is defined as the significant information acquired by the control room operator that prompts him to act. The cue is taken directly from the task element, taking into account the information required from the specific system.

A set of characteristics describing the functional requirements for each task element is identified. The task element may be associated with a specific instrument. In that case, the characteristics include range and units. The physical requirements will be specified if the particular characteristic is critical to performing the task. In cases where only one type of equipment will satisfy the physical and functional requirement, the specific manufacturer and component will be identified. Example 1 shows the breakdown of an operator task.

The SFTA will be conducted by a Multidisciplinary Review Team composed of subject matter experts (SMEs) of appropriate background and whose responsibilities are discussed below. The SMEs will utilize their expertise to process the task breakdown. The SFTA process steps will be documented and supported with appropriate sources.

Multidisciplinary Review Team

The Multidisciplinary Review Team shall consist of utility or consultant personnel that are SMEs of appropriate disciplines. The areas of expertise and experience of the personnel on the Multidisciplinary Review Team will be detailed in each utility's plant specific plans. The role of the Multidisciplinary Review Team is to:

BREAKDOWN OF AN OPERATOR TASK

Operation Procedure - EOP XXX

Task Statement - Determine if RCPs are operating

Task Elements -
Read RCP ON/OFF Status, System - RCP
Read RCP current, System - RCP
Read RCP flow, System - RCP
Read RCP Delta P, System - RCS
Read Voltage to RCP, System - RCP
Read RCP Speed, System - RCP
Read RCP Electric Power, System - EDS

Cue Information -
RCP Speed > 500 rpm
Annunciator Window Off
No Annunciator Audio Alarm

I&C/Physical Requirement -
Display Value
Range, 0-1200
Units, RPM
Accuracy, N/A
Display on Control Room Panel 56

Abbreviations
RCP - Reactor Coolant Pump
EDS - Electrical Distribution System

Example 1
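The breakdown in Example 1 follows a fixed hierarchy (procedure, task statement, task elements, cue information, and I&C/physical requirements), so it can be captured in a simple structured record. The following Python sketch is illustrative only and is not part of the USFG methodology; the class layout and field names are assumptions, and the values are taken from Example 1.

    # Minimal sketch of one SFTA task breakdown record, modeled on Example 1.
    # The class layout is illustrative; it is not part of the USFG guidance.
    from dataclasses import dataclass, field
    from typing import List, Optional

    @dataclass
    class TaskElement:
        action: str                     # e.g. "Read RCP Speed"
        system: str                     # e.g. "RCP"

    @dataclass
    class IandCRequirement:
        display: str                    # type of display required
        range_: Optional[str] = None    # e.g. "0-1200"
        units: Optional[str] = None     # e.g. "RPM"
        accuracy: Optional[str] = None
        location: Optional[str] = None  # e.g. "Control Room Panel 56"

    @dataclass
    class TaskBreakdown:
        procedure: str                  # operation procedure, e.g. "EOP XXX"
        task_statement: str             # independent unit of control room work
        task_elements: List[TaskElement] = field(default_factory=list)
        cues: List[str] = field(default_factory=list)
        iandc: List[IandCRequirement] = field(default_factory=list)

    # The Example 1 breakdown expressed with this record:
    rcp_task = TaskBreakdown(
        procedure="EOP XXX",
        task_statement="Determine if RCPs are operating",
        task_elements=[TaskElement("Read RCP ON/OFF Status", "RCP"),
                       TaskElement("Read RCP Speed", "RCP")],
        cues=["RCP Speed > 500 rpm", "Annunciator Window Off"],
        iandc=[IandCRequirement(display="Value", range_="0-1200",
                                units="RPM", location="Control Room Panel 56")],
    )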

Review the SFTA process and the documentation generated by the SFTA.

Evaluate the Operational Cue Analysis data and disposition the deviations identified in this process.

Determine the applicability of each procedure to each simulation device.

Recommend the makeup of the Simulation Facility.

SFTA Philosophy

The System Function/Task Analysis (SFTA) is a logical link to the other analyses that have been conducted to support training programs. The starting point for the SFTA is the procedures, which were also the basis for the job analysis previously done for the training programs. The job analysis provides the skills and knowledge requirements needed for the operator to perform his job. The task analysis provides the task information requirements for the operator to perform his tasks. The system function analysis provides the functional operator controls requirements needed to perform procedural steps.

The job analysis, task analysis, and system function analysis connect to the three points of a man/machine interface model that exists at the utilities and is discussed in INPO Document 83-047. The operator is at the center of the triangular model shown in Figure 2.

The model addresses the interfaces and interrelationships that exist among the operator, the plant (system function analysis), the procedure (task analysis), and the training (job analysis). The SFTA process ensures that the component interfaces have been considered adequately and that the operators can operate the plant safely and efficiently during all operating situations.

3.2 Operational Cue Analysis

The Operational Cue Analysis consists of the SFTA and the simulation device comparison and selection process. An I&C inventory will be conducted on each simulation device. The I&C inventory will include a listing of the presence of both static and dynamic cues, each type of cue being important to the operational cue analysis. As shown in Figure 1, the I&C inventory is then compared to the SFTA results to identify the I&C set and cues available to execute the reference plant procedure set. Each procedure task listing is evaluated to determine the ability to adequately perform the procedure on the simulation device. This process yields a procedure set applicable for examination on that particular simulation device. Procedures that cannot be implemented on any other simulation device can be examined on the reference plant. Accordingly, it is expected that all procedures can be examined on the Simulation Facility.

Figure 2: Man/Machine Interface Model - a triangular model with Job Analysis, Task Analysis (procedure), and System Function Analysis (plant) at its vertices and the Operator at the center.
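The comparison described above, matching a simulation device's I&C inventory against the cues the SFTA derives from each procedure, amounts to a set comparison per procedure. The following Python sketch is a hypothetical illustration rather than the USFG's tool; the procedure names and cue identifiers are invented for the example.

    # Sketch of the operational cue analysis comparison: for each procedure,
    # find which SFTA-required cues the device's I&C inventory can supply.
    # Cue identifiers and procedure names are hypothetical.
    from typing import Dict, Set

    def cue_coverage(required_cues: Dict[str, Set[str]],
                     device_inventory: Set[str]) -> Dict[str, dict]:
        """required_cues maps procedure -> cues the SFTA says the operator needs;
        device_inventory is the set of cues the simulation device can present."""
        results = {}
        for procedure, cues in required_cues.items():
            missing = cues - device_inventory
            results[procedure] = {
                "available": cues & device_inventory,
                "missing": missing,        # candidates for Controller cues,
                                           # cue cards, or another device
                "fully_supported": not missing,
            }
        return results

    # Example usage with invented identifiers:
    sfta_cues = {"EOP-01": {"RCP speed", "PZR pressure"},
                 "AOP-12": {"RCP speed", "SG level", "AFW flow"}}
    nprs_inventory = {"RCP speed", "PZR pressure", "SG level"}
    print(cue_coverage(sfta_cues, nprs_inventory))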

The process of collecting data for the Operational Cue Analysis shall be similar to that identified in NUREG-1258, Appendix B. The data fields to be evaluated shall be determined by the SFTA. The discrepancies between the reference plant and the simulation device shall be dispositioned by the Multidisciplinary Review Team using the process identified in Section 3.3.

3.3 Fidelity Evaluation

After a simulation device goes through the SFTA comparison process, a fidelity evaluation should be performed. The purpose of the fidelity evaluation is to identify and assess the deviations of a simulation device from the reference plant. A type of fidelity evaluation process and criteria that could be used is detailed in Figure 3 and Table 5 (located at the end of Section 3). The fidelity evaluation identifies the potential areas where comparability deviations could cause an operator difficulties in performing procedure tasks.

Figure 3: Fidelity Evaluation - inputs from the SFTA, the reference plant (Table 1), and the Simulation Facility Development process feed deviation identification; each deviation is evaluated and rated; a sum rate greater than 4.00 leads to a fix (modify the device, use another device, a Controller, a procedure modification, etc.), otherwise the deviation is accepted.

Deviations are identified from three different sources: 1) the SFTA results, 2) the control room as noted in Table 1, and 3) the Simulation Facility Development process. The evaluation process will consist of identifying the deviation, assessing the deviation, and dispositioning the deviation. The evaluation process will be done by the Multidisciplinary Review Team.

Deviations vary in their degree of impact on the operator. Three factors will be used to assess the deviation impact: (1) Potential to Impede, (2) Potential to Error, and (3) Potential to Confuse. The Potential to Impede is the potential that the deviation has to delay or hinder significantly the operator's correct response. The Potential to Error is the potential that the deviation has to lead the operator to not perform any action or to perform an incorrect action. The Potential to Confuse is the potential that the deviation has to perplex the operator and disrupt his mental and physical actions. The Potential to Confuse includes the potential for the simulation to enhance the operator's performance by the presentation of information not available in the reference plant.

The Potential to Impede is the most important factor since this factor can prevent the operator from performing the tasks. The Potentials to Error and to Confuse are of equal value. The three factors are weighted with the Potential to Impede having a factor of 0.50 and the Potential to Error and to Confuse having a factor of 0.25 each.

Each deviation will be rated on each potential from 1-10. A low probability is a measure of 1-3. A medium probability is a measure of 4-7. A high probability is a measure of 8-10. The formula to derive the sum rate is:

Sum Rate = P1(0.50) + P2(0.25) + P3(0.25)

where P1 is the Probability to Impede, P2 is the Probability to Error, and P3 is the Probability to Confuse.

The sum rate will determine the degree of deviation impact on the operator and direct the solution to the fix. For example, any sum rate less than or equal to 4.00 may denote that the deviation can be left "as is." Conversely, any sum rate greater than 4.00 may denote a deviation that must be addressed by modification.
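For concreteness, the weighted sum rate and the 4.00 disposition threshold can be written out directly. The following Python sketch simply restates the formula above; the function names and the example ratings are illustrative.

    # Sketch of the fidelity evaluation sum rate (Section 3.3).
    # Each potential is rated 1-10: low 1-3, medium 4-7, high 8-10.
    def sum_rate(p_impede: int, p_error: int, p_confuse: int) -> float:
        """Sum Rate = P1(0.50) + P2(0.25) + P3(0.25)."""
        for p in (p_impede, p_error, p_confuse):
            assert 1 <= p <= 10, "each potential is rated on a 1-10 scale"
        return 0.50 * p_impede + 0.25 * p_error + 0.25 * p_confuse

    def disposition(rate: float, threshold: float = 4.00) -> str:
        # <= 4.00: deviation may be left "as is"; > 4.00: address by modification
        return "leave as is" if rate <= threshold else "address by modification"

    # Example: medium potential to impede, low potentials to error and confuse.
    rate = sum_rate(p_impede=5, p_error=2, p_confuse=3)   # 3.75
    print(rate, disposition(rate))                         # leave as is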

The weighting factors are derived from the D. Meister method of assigning weights to criteria factors. The method of assigning 1-10 measurements is derived from the CRDR experience of rating Human Engineering Discrepancies (HEDs) for priority.

It is recognized that the process described above is a subjective one, since the results depend upon the judgment of the personnel evaluating the deviation in their application of the low, medium and high probability values discussed above. The assignment of probability ratings and the determination of modification disposition are the responsibility of the Multidisciplinary Review Team performing the evaluation(s). Therefore, the guidance of this document suggests the use of rating criteria such as the 4.00 criterion suggested above. It is the purpose of this rating guidance to specify that this type of rating of deviations be performed to determine the need for modifications and/or justification of the existing design. The use of 4.00 allows for the judgment that deviations of low probability most likely would not require modification and that some medium probability deviations should not require upgrade. It is expected that these types of medium probability deviations could consist of, but would not be limited to, deviations that are not frequently encountered by the operator in his duties and, therefore, would not be expected to significantly degrade the examination process. This rating acknowledges that the majority of the medium probability deviations and all of the high probability deviations should be addressed by a simulation facility modification, unless significant justification is provided to the contrary. Plant specific plans should describe the rating criteria and the basis for that criteria, if the proposed criteria take exception to this guidance.

3.4 Simulation Device Integration

The integration of the SFTA and the simulation device capabilities is necessary to ascertain the optimum simulation facility for the performance of operating tests. The goal of the selection process is to provide a simulated control room environment that presents the highest level of active man/machine interface, in which the reference plant procedures can be exercised. The Simulation Facility shall provide the opportunity to test all of the operator responses to the cues identified by the operational cue analysis of the reference plant procedures. The ability of the operator to use controlled copies of the reference plant procedures and produce the desired responses on the simulation device, so as to place the plant in the desired operating configuration, shall be a determining factor in selecting the simulation device for the operating test.

The simulation device shall be capable of producing the desired response to the operator actions identified in the reference plant procedures to the extent necessary to assure that the operator can determine from the available cues that the plant is responding in the direction predicted in the reference plant transient analysis.

The process of selecting the simulation device(s) that will make up the simulation facility begins by identifying the device(s) that provides the most active man/machine interface. An I&C inventory would be conducted on the selected simulation device(s). As shown in Figure 1, the I&C inventory is then compared to the SFTA results to identify the I&C set and cues available to execute the reference plant procedures. Each procedure task listing will be evaluated to determine the ability to adequately perform the procedure(s) on the simulation device(s) without consideration of modifications to the device(s).

Following consideration of simulation device modifications, the process would then be repeated. The end result of this process will yield a procedure set applicable for examination on that simulation device and a list of simulation device modifications that are appropriate.

A similar process would be conducted on other simulation devices in descending order of their capabilities for active man / machine interface.

The emphasis during this process is to identify the simulation device that is most useful for examiners and that, at the same time, is cost effective and will provide for effective training. Before any passive simulation device is selected, consideration will be given to the use of Controllers and pen and ink changes to the procedures, with the intent to always try to use the device that promotes the most active man/machine interface.

The final selection of devices and their integration into a simulation facility is an iterative process. The benefits of a device that promotes active man/machine interface must be continuously evaluated against the negative aspects of the device. Consideration must be given to the number of pen and ink changes, the intrusion of the Controller on the operator's activities, human factors issues, etc.
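The iterative selection described in this section, beginning with the device that offers the most active man/machine interface and weighing its procedure coverage against the burden of pen and ink changes and Controller cues, can be pictured as a simple ranking loop. The following Python sketch is notional; both the device ordering and the scoring fields are assumptions rather than USFG criteria.

    # Notional sketch of iterating candidate devices in descending order of
    # active man/machine interface and recording the trade-offs for each.
    # The ordering and scoring fields are illustrative assumptions.
    DEVICE_PRIORITY = ["NPRS", "RSS", "PTS", "CRM", "CRT Simulator", "Reference Plant"]

    def evaluate_devices(procedures, coverage_by_device):
        """coverage_by_device[device][procedure] -> dict with keys
        'fully_supported', 'pen_and_ink_changes', 'controller_cues'."""
        plan = []
        for device in DEVICE_PRIORITY:
            coverage = coverage_by_device.get(device, {})
            supported = [p for p in procedures
                         if coverage.get(p, {}).get("fully_supported")]
            burden = sum(c.get("pen_and_ink_changes", 0) + c.get("controller_cues", 0)
                         for c in coverage.values())
            plan.append({"device": device,
                         "procedures_supported": supported,
                         "workaround_burden": burden})
        return plan   # reviewed iteratively by the Multidisciplinary Review Team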

3.5 Simulation Facility

The last step of the flow process illustrated in Figure 1 is to describe all the devices that make up the Simulation Facility. The plant specific application for approval should describe the procedures to be run on each simulation device. The supporting documentation for each plant's application should include all the dispositioned deviations along with the evaluations and backup information.

3.6 Implementation

The development of a Simulation Facility should be undertaken in a systematic manner, giving consideration to the benefit of any upgrades to a simulation device. When considering or upgrading any simulation devices, the Operational Cue Analysis (Section 3.2) should be used to assess the significance and importance of the modification. The implementation of the Simulation Facility should give consideration to any device that promotes active man/machine interface, not necessarily just those included in this document. It is understood that there are other simulation devices available now, or that will become available with advancing technology, that may be cost effective for particular applications. The intent of this document is to describe a systematic approach to the development of a facility, not to restrict the facility to using any particular simulation device(s). It is important that the simulation facility provide the opportunity to examine all of the operator responses to the cues listed in the Operational Cue Analysis.

3.7 Configuration Management Program

A program to provide accountability of Simulation Facility physical and functional capabilities will be developed and implemented. This program will consist of procedures and/or guidelines and will be controlled by established administrative procedure controls for each utility. The program will perform the following functions:

1. Identify, document, track and test discrepancies.
2. Identify, document and track deviations between the Simulation Facility and the reference plant.
3. Identify, document and track reference plant changes denoting effects on operating tests.

The details of the program will be described in the individual utility's plan and will include the following:

1. An outline of the administrative procedures and responsibilities for maintaining the Simulation Facility current in accordance with the guidelines of this document.
2. A description of the organization(s) responsible for maintaining the Simulation Facility.
3. The administrative procedures, including time requirements, for updating the Simulation Facility upon a reference plant modification.
4. The administrative procedure(s), including time requirements, for completing, if necessary, a review of the SFTA upon a modification to the Simulation Facility.
5. The administrative procedure(s) for review/evaluation of the performance test baseline data upon modifications to the reference plant, modifications to the Simulation Facility, or on a periodic basis (i.e., once every four years).
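The tracking functions listed above (discrepancies, deviations from the reference plant, and reference plant changes) reduce to straightforward record keeping. The following Python sketch shows one hypothetical way such items might be logged; none of the field names are taken from the USFG guidance.

    # Hypothetical sketch of a configuration management log for a simulation
    # facility: discrepancies, deviations, and reference plant changes.
    from dataclasses import dataclass, field
    from datetime import date
    from typing import List, Optional

    @dataclass
    class TrackedItem:
        kind: str                 # "discrepancy" | "deviation" | "plant change"
        description: str
        identified: date
        affects_operating_tests: bool = False
        disposition: Optional[str] = None   # e.g. "modify device", "leave as is"
        closed: Optional[date] = None

    @dataclass
    class ConfigurationLog:
        items: List[TrackedItem] = field(default_factory=list)

        def open_items(self) -> List[TrackedItem]:
            return [i for i in self.items if i.closed is None]

    log = ConfigurationLog()
    log.items.append(TrackedItem("deviation", "Annunciator window layout differs",
                                 identified=date(1988, 3, 1),
                                 affects_operating_tests=True))
    print(len(log.open_items()))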

SYSTEM FUNCTION/TASK ANALYSIS
STEPS AND PROCESS

I. Task Identification
A. Select Station Procedure - Select one station procedure at a time to process.
B. Breakdown Station Procedure - Break down the procedure according to operator actions (tasks).

II. Task Element Identification
A. Breakdown Operator Task - Break down the operator tasks into small units of work (task elements).
B. Identify Cognitive/Behavioral Action - Associate the task element with the operator's cognitive and behavioral actions.

III. Cue Information Requirements
A. Specify Task Element Information - Determine the information the operator needs to do the task elements.
B. Specify Perceptual Information Required - Determine the information the operator is cognizant about before, after and during task element performance.
C. Identify Time Dependency Requirements - Determine information that will assist the operator in performing the task element. Determine information that is very critical relative to operator actions.

Table 3

SYSTEM FUNCTION/TASK ANALYSIS
STEPS AND PROCESS (Continued)

IV. I&C/Physical Characteristics
A. Identify I&C Characteristics - Deduce the functional I&C characteristics required to carry out the task.
B. Identify Physical Characteristics - Identify physical characteristics if required to perform the task element and function. Specify the type and model of the component, if one-of-a-kind.

Table 3 (Continued)

SYSTEM FUNCTION/TASK ANALYSIS CRITERIA

Task Identification
1. A task is a unit of control room work that can be easily analyzed.
2. A task completion is independent of the preceding or following task.
3. A task is comprised of a limited number of human action units (elements).

Task Element Identification
1. Task elements are elementary human actions needed to accomplish a particular task.
2. Task elements can be divided into cognitive and behavioral divisions.
3. Each task element refers to information or control requirements.
4. Task elements are written only for operator actions which require control room information or control function.

Cue Information Requirements
1. The cues required enable operator task performance.
2. The cues assist the operator in assessing the control room status relative to the immediate tasks.
3. The cues are a perceptual process that includes audio, visual and sensory inputs.
4. Cues that are time dependent must denote the time window required.

Table 4

SYSTEM FUNCTION/TASK ANALYSIS CRITERIA (Continued)

I&C/Physical Characteristics
1. The I&C characteristics must specify the:
   Display required.
   Range required.
   Units of the quantity.
2. State physical requirements, if critical to performing the immediate task and function.

Table 4 (Continued)

FIDELITY EVALUATION CRITERIA

Identification of Deviation
1. A deviation is a physical difference that exists between the simulation device and the reference plant for the areas in Table 1.
2. A deviation is a difference in function between the SFTA and the simulation device.
3. A deviation is any other significant difference found between the reference plant and the simulation device during the development of the simulation facility.

Assessment of Deviation
1. The deviation must not prevent the operator from performing tasks.
2. The deviation must not confuse the operator to the point of affecting task performance.
3. The deviation must not lead the operator to an error in task performance.

Disposition of Deviation
1. Any modification to the simulation device in question will require approval by the Multidisciplinary Review Team.
2. Any deviation referred to another simulation device for resolution will require feedback through the Fidelity Evaluation process.
3. The selected disposition will require verification.
4. Any deviation left "as is" must have all supporting documentation and analysis and an adequate description.
5. Pen and ink procedure changes to the procedure will be the last option to resolve deviations.

Table 5

4.0 SIMULATION DEVICES

The following sections discuss the various simulation devices used in the conduct of operating tests. These simulation devices are currently available simulation technology that can be used in the development of a simulation facility. Each section discusses the application of the previously described criteria to each simulation device, the recognized limitations of each simulation device and the advantages of each simulation device.

4.1 NON-PLANT REFERENCED SIMULATOR

A Non-Plant Referenced Simulator (NPRS) is a simulation device that models plant systems. The following criteria apply to the use of an NPRS as a simulation device.

4.1.1 Human Factors

Control Room and Panel Layout

An NPRS may not replicate the reference plant control room and panel layout. Deviations will be evaluated for impact on performance of operator tasks in accordance with the Section 3 methodology.

I&C Configuration

Hardware and location differences are allowed. Plant specific labels, overlays, scaling modifications or other surface enhancements should be used to make NPRS controls and panels more closely approximate the reference plant. Deviations will be evaluated for impact on performance of operator tasks in accordance with the Section 3 methodology.

Ambient Operating Environment

An NPRS may not replicate the reference plant control room ambient operating environment. Deviations will be evaluated for impact on performance of operator tasks in accordance with the Section 3 methodology.

4.1.2 Procedures

As previously noted, the types of procedures exercised on the NPRS may include:

Normal Operating Procedures,
Abnormal Operating Procedures,
Emergency Operating Procedures, and
Emergency Plan Implementing Procedures.

The scope of procedures to be used on the NPRS will be determined by the SFTA process described in Section 3. For each procedure performed on the NPRS, the Procedure Performance Time will be matched as closely as possible. The remainder of the generic Procedures (Section 2.2) criteria applies to the NPRS.

4.1.3 Steady State and Transient Models

Scope

The output of the NPRS should approximate reference plant response. However, the similarity will be limited by considerations such as core size and numbers of redundant or auxiliary systems. As a minimum, the NPRS should be capable of producing the operator cues required to enable implementation of those procedures identified by the process described in 4.1.2. Software modifications should be implemented, as practicable, to achieve full exercise of the procedures on the NPRS.

Fidelity

The steady state values for critical parameters shall be stable and not vary significantly from the initial values over a 60-minute period. Changes in critical parameters should correspond in direction to those expected from operating data or a best estimate analysis and should not violate any physical laws of nature. If parameters deviate significantly and software modeling changes cannot be reasonably pursued, these parameters may exceed the criteria, provided:

o They are specifically corrected using a Controller, OR
o Cue cards are substituted for these parameters, OR
o Appropriate pen and ink procedure changes are implemented.

Time

NPRS plant operating and transient time responses should approximate real time simulation. Any deviations will be handled by the SFTA process described in Section 3.
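The steady state criterion above, critical parameters remaining stable about their initial values over a 60-minute period, can be checked mechanically against logged simulator data. The following Python sketch is an assumed illustration; the tolerance value and parameter names are not specified by this guidance and would be set on a plant specific basis.

    # Sketch of a steady-state stability check for critical parameters over a
    # 60-minute run. The 2% tolerance is an assumed, plant-specific value.
    from typing import Dict, List

    def steady_state_ok(samples: Dict[str, List[float]],
                        tolerance: float = 0.02) -> Dict[str, bool]:
        """samples maps a critical parameter name to its values logged over
        60 minutes; a parameter passes if no value drifts more than
        `tolerance` (fractional) from its initial value."""
        results = {}
        for name, values in samples.items():
            initial = values[0]
            span = abs(initial) if initial != 0 else 1.0
            results[name] = all(abs(v - initial) <= tolerance * span for v in values)
        return results

    # Example with invented parameter names and data:
    run = {"RCS pressure": [2235.0, 2236.1, 2234.5], "PZR level": [55.0, 54.9, 55.2]}
    print(steady_state_ok(run))   # {'RCS pressure': True, 'PZR level': True}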


4.1.4 Performance Testing

Scope

The NPRS performance testing should be limited to those procedures identified by the process described in 4.1.2.

Methodology

Those procedures identified in 4.1.2 should be performed on the NPRS after all identified software modifications have been incorporated.

An initial set of data shall be collected and evaluated. After review and approval by the Multidisciplinary Review Team, these transient results become the baseline data set for subsequent performance testing evaluation.
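Once a reviewed transient run becomes the baseline data set, subsequent performance testing reduces to comparing new runs against that baseline. The following Python sketch is hypothetical; the acceptance band is an assumed example value, since the guidance leaves acceptability to the Multidisciplinary Review Team.

    # Hypothetical comparison of a new performance test run against the
    # approved baseline data set; the 5% band is an assumed example value.
    from typing import Dict, List

    def compare_to_baseline(baseline: Dict[str, List[float]],
                            new_run: Dict[str, List[float]],
                            band: float = 0.05) -> Dict[str, bool]:
        """Flags, per parameter, whether every point of the new run lies within
        `band` (fractional) of the corresponding baseline point."""
        verdict = {}
        for name, base_values in baseline.items():
            new_values = new_run.get(name, [])
            if len(new_values) != len(base_values):
                verdict[name] = False
                continue
            verdict[name] = all(
                abs(n - b) <= band * (abs(b) if b != 0 else 1.0)
                for b, n in zip(base_values, new_values))
        return verdict   # results still go to the Multidisciplinary Review Team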

Acceptance Criteria

Determination of the acceptability of performance test results shall be performed by the Multidisciplinary Review Team.

4.1.5 Operating Test Methodology

The NPRS permits evaluation in a control room team environment. The operating test conducted on the NPRS shall be limited to the procedures identified by the process described in 4.1.2. The generic guidelines for Controller interaction (Section 2.5) shall apply. Those procedures not accomplished on the NPRS should be evaluated separately on another simulation device.

Examiners will be able to select various NPRS initial conditions and plant malfunctions to evaluate operators' or candidates' responses.

The NPRS should have the capability to stop and restart the simulation, as necessary, at any point.

The remainder of the generic Operating Test Methodology (Section 2.5) applies to the NPRS.

4.2 CONTROL ROOM MOCK-UP

A Control Room Mock-up (CRM) is a simulation device that consists of a display of the reference plant control room panels, including the switches, indications, and alarms, arranged in a configuration similar to the reference plant. The CRM may consist of photographs, three-dimensional mock-ups or a combination of both. The following criteria apply to the use of a CRM as a simulation device.

4.2.1 Human Factors

Control Room and Panel Layout

The CRM should replicate the reference plant physical orientation and appearance. The CRM should be the same physical size as the reference plant, although reduced scale reproductions are acceptable provided the SFTA determines that the reduction does not significantly detract from the operating test. Any reduction effort should be limited such that labels, controls, indications, alarms, etc., remain clearly legible. Deviations will be evaluated for impact on performance of operator tasks in accordance with the Section 3 methodology.

I&C Configuration

The CRM controls, indications, etc., should be in the same physical location as in the reference plant. They should be replicated in sufficient detail to enable the desired operator capabilities to be successfully demonstrated. Functional fidelity of a CRM cannot be achieved due to its passive nature. Deviations will be evaluated for impact on performance of operator tasks in accordance with the Section 3 methodology.

Ambient Operating Environment

The CRM may not replicate the ambient operating environment of the reference plant. Deviations will be evaluated for impact on performance of operator tasks in accordance with the Section 3 methodology.

4.2.2 Procedures

As previously noted, the types of procedures exercised on a CRM may include:

Normal Operating Procedures,
Abnormal Operating Procedures,
Emergency Operating Procedures, and
Emergency Plan Implementing Procedures.

The scope of procedures to be used on the CRM will be determined by the SFTA process described in Section 3. For each procedure performed on the CRM, the Procedure Performance Time will be matched as closely as possible. The remainder of the generic Procedures (Section 2.2) criteria applies to the CRM.


4.2.3 Steady State and Transient Models

Scope Operator cues required for those procedures identified by the process described in 4.2.2 will be given on the CRM using cue cards or by use of a Controller. These cues should be based on reference plant operating data or steady state and transient best estimate analysis data.

Fidelity The CRM should be a representation of the reference plant.

Controllers or cue cards may be used to enhance procedure usage.

Time CRM usage is not expected to be in real time.

4.2.4 Performance Testing Performance testing to verify a simulation facility's performance as compared to actual or predicted reference plant performance is not applicable for a CRM.

4.2.5 Operating Test Methodology Operating tests conducted on a CRM are to consist of walkthroughs for each procedure to be tested. Task performance can only be discussed and may require extensive use of Controllers and/or cue cards to provide operational cues. The remainder of the generic Operating Test Methodology (Section 2.5) applies to the CRM.

4.3 REDUCED SCOPE SIMULATOR A Reduced Scope Simulator (RSS) is a simulation device that physically and functionally models significant portions of the major systems of the reference plant. An RSS demonstrates expected plant response to operator input and to normal and transient conditions to which the simulator has been designed to respond. However, the number of initial conditions, normal functions, and malfunctions available will be less than the standard defined by ANS-3.5-1985.

The following criteria apply to the use of an RSS as a simulation device.

4.3.1 Human Factors Control Room and Panel Layout The RSS should be positioned to approximate the reference plant physical orientation and appearance. The RSS should be the same physical size as the reference plant although reduced scale reproductions are acceptable provided the SFTA determines that the reduction does not significantly detract from the operating test. Any reduction effort should be limited such that labels, controls, indications, alarms, etc., remain clearly legible. Deviations will be evaluated for impact on performance of operator tasks in accordance with the Section 3 methodology.


I&C Configuration The RSS controls, indications, alarms, etc., should be in the same physical location as in the reference plant. They should be replicated in sufficient detail to enable the desired operator function(s) to be successfully demonstrated.

Photographic images of non-functional control board components are acceptable. Operator control input/output devices, such as handswitches and recorders, should be similar in operation but need not be identical in manufacturer or model number to those installed in the reference plant. Deviations will be evaluated for impact on performance of operator tasks in accordance with the Section 3 methodology.

Ambient Operating Environment The RSS may not replicate the reference plant control room ambient operating environment. Deviations will be evaluated for impact on performance of operator tasks in accordance with the Section 3 methodology.

4.3.2 Procedures

As previously noted, the types of procedures exercised on an RSS may include:

Normal Operating Procedures, Abnormal Operating Procedures, Emergency Operating Procedures, and Emergency Plan Implementing Procedures.

The scope of procedures to be used on the RSS will be determined by the SFTA process described in Section 3. For each procedure performed on the RSS, the Procedure Performance Time will be matched as closely as possible. The remainder of the generic Procedures (Section 2.2) criteria applies to the RSS.

4.3.3 Steady State and Transient Models Scope The output of the RSS should approximate expected reference plant responses. For the system(s) modeled, the majority of the operator cues required for the use of the procedures identified by the process described in 4.3.2 shall be modeled and displayed.

Fidelity The RSS computed values for steady state operation with the reference plant control system configuration (for those systems modeled) shall be stable and not vary more than ±2% of the initial values over a 60-minute period for critical parameters or more than ±10% of the initial values for non-critical parameters. Transient changes in displayed parameters shall not violate any physical laws of nature and shall be in the same direction as operating data or a best estimate analysis of the reference plant. Expected relationships between parameters should occur in a manner consistent with expected reference plant response. For those alarms modeled and displayed, the RSS shall not fail to cause an alarm or automatic action that would have been actuated in the reference plant, or cause an alarm or automatic action that would not actuate in the reference plant. The RSS accuracies shall be related to full power values and interim power levels for which valid reference plant information is available.
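As an illustration only, a performance test program might check the 60-minute drift criterion above by comparing recorded values against their initial value; the parameter name and data below are hypothetical, not taken from any reference plant.

```python
# Minimal sketch of the steady-state drift check described above.
# Parameter names and samples are hypothetical.

def drift_within_limit(samples, critical):
    """True if every sample stays within +/-2% (critical) or +/-10%
    (non-critical) of the initial value over the recorded period."""
    initial = samples[0]
    limit = 0.02 if critical else 0.10
    return all(abs(value - initial) <= limit * abs(initial) for value in samples)

# One hour of readings taken once per minute (61 points), drifting slowly upward.
circulator_speed_rpm = [9500.0 + 0.5 * minute for minute in range(61)]
print(drift_within_limit(circulator_speed_rpm, critical=True))   # True
```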

Time

The RSS should approximate real time.

4.3.4 Performance Testing Scope The RSS performance testing should be limited to those procedures identified by the process described in 4.3.2. The performance testing should include validation of the pen and ink changes made to any modified procedures.

Methodology Those procedures identified by the process described in 4.3.2 should be performed on the RSS after all identified software modifications have been incorporated. An initial set of data shall be collected and evaluated.

After review and approval by the Multidisciplinary Review Team, these transient results become the baseline data set for subsequent performance testing evaluation.

Acceptance Criteria Determination of acceptability of performance test results shall be performed by the Multidisciplinary Review Team.

4.3.5 Operating Test Methodology The RSS permits evaluation in a control room team environment with a major portion of the control boards replicated. Therefore, most procedures can be evaluated even if some components are not software modeled.

The operating test on the RSS shall be limited to the procedures identified by the process described in 4.3.2.

Examiners will be able to select various RSS initial conditions and plant malfunctions to evaluate operators' or candidates' responses.

The RSS should have the capability to stop and restart the simulation, as necessary, at any point.

The systems that are not modeled in the RSS may require evaluation of operating license candidates on other simulation devices (e.g., the reference plant).

The remainder of the generic Operating Test Methodology (Section 2.5) applies to the RSS.


4.4 PART TASK SIMULATOR A Part Task Simulator (PTS) is a simulation device incorporating detailed modeling of a limited number of specific reference plant components or subsystems. Such a device demonstrates expected response of those components or subsystems. The following criteria apply to the use of a PTS as a simulation device.

4.4.1 Human Factors Control Room and Panel Layout As necessary, layout of the PTS should be similar to the portion of the panel or system being simulated. The PTS should be the same physical size as the reference plant although reduced scale reproductions are acceptable provided the SFTA determines that the reduction does not significantly detract from the operating test.

Deviations will be evaluated for impact on performance of operator tasks in accordance with the Section 3 methodology.

The PTS should replicate the various reference plant surface enhancements, such as the use of color and mimics. The degree of correspondence with the reference plant should be consistent with

the capability to replicate the existing surface enhancement with a given simulation device construction technique.


I&C Configuration The PTS controls, indications, alarms, etc., should be in the same physical location as in the reference plant. They should be replicated in sufficient detail to enable the desired operator function(s) to be successfully demonstrated. If the switch, control, indicator, alarm, or recorder is not required for the scope of the PTS, it does not need to be included on the simulator.

Deviations will be evaluated for impact on performance of operator tasks in accordance with the Section 3 methodology.

Ambient Operating Environment A PTS may not replicate the ambient operating environment of the reference plant control room. Deviations will be evaluated for impact on performance of operator tasks in accordance with the Section 3 methodology.

4.4.2 Procedures As previously noted, the types of procedures exercised on a PTS may include:

Normal Operating Procedures, Abnormal Operating Procedures, Emergency Operating Procedures, and Emergency Plan Implementing Procedures.

The scope of procedures to be used on the PTS will be determined by the SFTA process described in Section 3. For each procedure performed on the PTS, the Procedure Performance Time will be matched as closely as possible. The remainder of the generic Procedures (Section 2.2) criteria applies to the PTS.

4.4.3 Steady State and Transient Models Scope The output of the PTS should approximate expected reference plant responses. For the system(s) modeled, the operator cues (responses) required for the use of the procedures identified in 4.4.2 should be displayed.

Fidelity The PTS computed values for steady state operation with the reference plant control system configuration (for those systems modeled) shall be stable and not vary more than ±2% of the initial values over a 60-minute period for critical values or more than ±10% of the initial values for non-critical values. Transient changes in displayed parameters shall not violate any physical laws of nature and shall be in the same direction as operating data or a best estimate analysis of the reference plant. Expected relationships between parameters should occur in a manner consistent with expected plant response. For those alarms modeled and displayed, the PTS shall not fail to cause an alarm or automatic action that would have been actuated in the reference plant, or cause an alarm or automatic action that would not actuate in the reference plant. The PTS accuracies shall be related to full power values and interim power levels for which valid reference plant information is available.

Time The PTS should approximate real time.

4.4.4 Performance Testing Scope The PTS performance testing should be limited to those procedures identified by the process described in 4.4.2.

Methodology Those procedures identified by the process described in 4.4.2 should be performed on the PTS after all identified software modifications have been incorporated. An initial set of data shall be collected and evaluated. After review and approval by the Multidisciplinary Review Team, these transient results become the baseline data set for subsequent performance testing evaluation.

Acceptance Criteria Determination of acceptability of performance test results shall be performed by the Multidisciplinary Review Team.


4.4.5 Operating Test Methodology Operating Tests shall be limited to those normal operator tasks and responses to cues that can be accomplished on the PTS. The instructor or examiner should have the ability to stop and restart the simulation, as necessary, at any point.

The remainder of the generic Operating Test Methodology (Section 2.5) applies to the PTS.


4.5 CRT SIMULATOR A CRT Simulator is a simulation device that is computer based and CRT displayed. The information presented is a model of the reference plant operating behavior. The Input/Output may be limited to a computer keyboard and CRT, and the parameter set limited to a specific scope. The following criteria apply to the use of a CRT Simulator as a simulation device.

4.5.1 Human Factors The human factors that can be properly addressed in using a CRT Simulator are essentially limited to the scope of information cues that are presented. Due to the keyboard and CRT I/O methodology, the control panel layout, I&C configuration, and ambient operating environment cannot be accommodated in a CRT Simulator. Deviations will be evaluated for impact on the performance of operator tasks in accordance with the Section 3 methodology.

4.5.2 Procedures As previously noted, the types of procedures exercised on a CRT Simulator may include:

Normal Operating Procedures, Abnormal Operating Procedures, Emergency Operating Procedures, and Emergency Plan Implementing Procedures.


The scope of procedures to be used on the CRT Simulator will be determined by the SFTA process described in Section 3. For each procedure performed on the CRT Simulator, the Procedure Performance Time will be matched as closely as possible. The remainder of the generic Procedures (Section 2.2) criteria applies to the CRT Simulator.

4.5.3 Steady State and Transient Models Scope The output of the CRT Simulator should approximate expected reference plant responses for those systems modeled. The systems modeled and the displayed responses should be based on those cues required to use those procedures identified by the process described in 4.5.2.

Fidelity The level of sophistication for CRT Simulator models should assure adequacy of output information. The CRT Simulator computed values for steady state full power operation with the reference plant control system configuration (for those systems modeled) shall be stable and not vary more than ±2% of the initial values over a 60-minute period for critical values or more than ±10% of the initial values for non-critical values. Transient changes in displayed parameters shall not violate any physical laws of nature

and shall be in the same direction as operating data or a best estimate analysis of the reference plant. Expected relationships between parameters should occur in a manner consistent with expected plant response.

Time CRT Simulator responses should approximate real time. Any inadequacies in the models can be compensated for by the support of pen and ink procedure changes, Controller use, or other methods, provided that the use of these methods does not detract from the examination.

4.5.4 Performance Testing Scope The CRT Simulator performance testing should be limited to those procedures identified by the process described in 4.5.2.

Methodology Those procedures identified by the process described in 4.5.2 should be performed on the CRT Simulator after all identified software modifications have been incorporated. An initial set of data shall be collected and evaluated. After review and approval by the Multidisciplinary Review Team, these transient results become the baseline data set for subsequent performance testing evaluation.

Acceptance Criteria Determination of acceptability of performance test results shall be performed by the Multidisciplinary Review Team.

4.5.5 Operating Test Methodology Examiners will be able to select various initial conditions and plant malfunctions to evaluate operators or candidates. The CRT Simulator should have the capability to stop and restart the simulation, as necessary, at any point.

The CRT Simulator can be used in conjunction with a Control Room Mock-up, a Part Task Simulator, or the Reference Plant. The use of the CRT Simulator is both enhanced by and enhances the use of these other simulation devices in the simulation facility. The use of these other devices in conjunction with the CRT Simulator should reduce the amount of pen and ink procedure changes and Controller use.

The remainder of the generic Operating Test Methodology (Section 2.5) applies to the CRT Simulator.

4.6 REFERENCE PLANT The Reference Plant, as a simulation device, is the Control Room of the specific nuclear power plant which serves as all or part of a simulation facility. The following criteria apply to the use of the Reference Plant as a simulation device.

4.6.1 Human Factors When using the Reference Plant, the equipment layout, instrument and control configuration, cue scope, and environment are exact. Human factors need only be addressed as applied to the plant operating conditions for the evolution to be examined.

4.6.2 Procedures

Use of the Reference Plant as a simulation device allows usage of all of the procedures. No modifications to these procedures are

required. To the extent consistent with existing plant conditions, the operating test may address any or all of these procedures to demonstrate familiarity with the plant. Operator tasks which cannot be actually performed should be accomplished through verbal discussion and a walkthrough of the evolution being examined. The controls and indication needed to perform the evolution should be physically shown to the examiner and accompanied by a description of what occurs when that control is manipulated. Deviations, such as

the above discussed static operating environment versus a desired dynamic operating environment, will be evaluated for impact on the performance of operator tasks in accordance with the Section 3 methodology.

4.6.3 Steady State and Transient Models When using the Reference Plant as a simulation device, steady state and transient models are only applicable to the extent necessary to assure that the examiners and controllers possess appropriate information regarding expected plant behavior. This information should be based upon plant operating data or best estimate data.

4.6.4 Performance Testing Performance testing is not applicable when the Reference Plant is utilized as the simulation device.

4.6.5 Operating Test Methodology Operating tests conducted on a Reference Plant may consist of walkthroughs and/or operation of selected plant evolutions for each procedure to be tested. Tasks that cannot actually be performed should be discussed and may require extensive use of controllers and/or cue cards to provide operational cues. The remainder of the generic Operating Test Methodology (Section 2.5) applies to the Reference Plant.

5.0 CONCLUSIONS, OBSERVATIONS, AND RECOMMENDATIONS

o Operating tests can be performed on simulation devices other than plant referenced simulators.

o Use of the simulation devices described herein enables the evaluation of the generic skills and knowledge necessary to fulfill the responsibilities of a reactor or senior reactor operator.

o Those specific skills and knowledge that cannot be evaluated on other simulation devices can be evaluated in reference plant walkthroughs.

o The development of a simulation facility should consist of a systematic evaluation, such as the method described herein, of the operating test requirements and be responsive to those needs.

o It is preferred that simulation devices present the information in an active man/machine interface.

o It is recommended that the NRC approve the USFG methodology and use it to evaluate the adequacy of the resultant Simulation Facilities.


6.0 REFERENCES

NUREG-1258, "Evaluation Procedure for Simulation Facilities Certified Under 10 CFR 55," Draft Report, March 1987

NUREG-1021, "Operator Licensing Examiner Standards," Revision, September 1986

ANS-3.5-1985, "American National Standard for Nuclear Power Plant Simulators for Use in Operator Training"

Regulatory Guide 1.149, "Nuclear Power Plant Simulation Facilities for Use in Operator License Examinations," Revision 1, April 1987

D. Meister, "Behavioral Analysis and Measurement Methods," Wiley and Sons, 1985


Attachment 8.2

FORT SAINT VRAIN CONTROL ROOM SIMULATOR Design Specification

1.0 INTRODUCTION

1.1 OVERVIEW This specification establishes the technical and general requirements for a reduced scope training simulator for the Fort Saint Vrain Nuclear Generating Station control room. Criteria are set for degree of simulation, for performance and functional capability of the control room instrumentation and controls and for the computer complex. Included in this specification are the simulator related requirements for design, fabrication, integration and performance testing.

1.2 SCOPE OF WORK The simulator shall be designed to provide an accurate simulation of the selected control room equipment and plant systems for the 842 MWT HTGR plant. This is a reduced scope simulation and not all equipment or systems will be simulated. The system shall simulate all plant operating modes including, but not limited to, pre-startup checks, startup, shutdown, power maneuvering, normal operation, selected operator conducted surveillance tests on selected safety-related equipment or systems, and selected abnormal and emergency operating conditions. The selection process will be generated via the System Function Task Analysis.

1.3 PURPOSE The purpose of the Fort St. Vrain simulator will be to provide initial operator training, license requalification training, and testing for station operators. The simulator will have the fidelity to verify new operating procedures for normal steady-state and transient modes of operation for the systems simulated.

1.4 DEFINITIONS See Attachment 8.3, Glossary of Terms.


2.0 GENERAL REQUIREMENTS

2.1 EQUIPMENT AND SERVICES The equipment and services covered under this specification will be complete in all respects for a computer-based simulator for the Fort St. Vrain nuclear power plant. The design, fabrication and testing are addressed in detail by various sections of this specification.

This will be an in-house integration project. That is, equipment will be purchased from various vendors and integration of the various portions will occur within Public Service Company. A minimum amount of fabrication is expected. All instrumentation, computers, hardware and software will be purchased from vendors' off-the-shelf supply. Only where special non-standard Fort St.

Vrain equipment is found will special fabrication or purchase be required. And only where software is not available will special software be written. The effort will be towards integration of the design.

3.0 PERFORMANCE REQUIREMENTS The simulator performance will be such that accurate simulation of specified control room equipment and plant systems is accomplished. The SFTA will determine the extent of the equipment and systems.

3.1 DESIGN REQUIREMENTS The simulator will be capable of simulating continuously in real time the response of the Fort St. Vrain plant from operator actions, automatic plant controls, and inherent operating characteristics as limited by the extent of the specified simulation. The observable performance of the simulator will closely parallel that of the plant.

3.2 EXTENT OF SIMULATION The simulation will possess sufficient completeness to drive all specified control room panel controls and indications. Those controls not physically mounted on the panel that are necessary for proper response of the simulation will be changeable from the simulator instructor's console (for example, manual valves located outside of the Control Room).

3.3 CONTROL ROOM COMPLEX The FSV RSS will have a high degree of fidelity with the reference plant in the area of control room and panel layout, I&C configuration, and ambient operating environment. The reference plant control room will be duplicated in physical appearance, physical layout, system function, and system model. As the FSV Simulation facility will consist of a combination of part-task simulators and a mock-up, 100% fidelity will not be possible.

However, for those plant systems that are being simulated, a very high degree of duplication in system function, physical appearance and layout will exist.

It is the intent that any fidelity deviations existing will have no negative impact on operator task performance. Any such deviations that do occur will be documented and evaluated in a systematic fashion.

The following criteria will be applied to the FSV RSS facility:

CONTROL ROOM AND PANEL LAYOUT The FSV RSS will duplicate the reference plant control room in physical orientation and appearance. The physical size will be the same, the Reactor Operator and Senior Reactor Operator work stations will be replicated, and the relative physical locations of the panels will be the same. The systems orientation within the panel will replicate the reference plant.

PANEL INSTRUMENTATION The control room instrumentation and controls will be simulated to the extent indicated by the SFTA. The panels will be full size and the instrument face plates will be duplicates of the plant control room. Where instruments will not be simulated, the panel will still contain the cutout for future installation and will be covered with a blocked color photograph of the plant instrument. If instruments cannot be purchased that match the control room exactly, the model chosen will be as close as possible.

I&C CONFIGURATION The RSS controls, indications, etc., on the control panels will be in the same physical location as in the reference plant. The instrument display, controls, range, accuracy, and units will replicate those of the reference plant.

AMBIENT OPERATING ENVIRONMENT The ambient operating environment will replicate that of the reference plant control room. Temperature, noise, humidity, communications, lighting, and auditory signals are factors that will be considered.

3.4 SYSTEMS TO BE SIMULATED The following is a general list of plant systems to be simulated.

This list may be modified based on the SFTA results. Since only a subset of all plant systems and instrumentation will be simulated this reduced scope simulation will not simulate the entire plant.

System Number / System Description
11.1 NUCLEAR INSTRUMENTATION
11.2 REACTOR CORE
12.1 CONTROL RODS
12.2 ORIFICING CONTROL
21.1 PRIMARY COOLANT
21.2 PRIMARY COOLANT AUXILIARIES
22.0 MAIN AND REHEAT STEAM SYSTEM
22.1 STEAM GENERATORS
22.2 MAIN STEAM BYPASS
22.3 REHEAT BYPASS
22.4 ATTEMPERATION
22.5 BYPASS FLASH TANK
31.1 CONDENSATE SYSTEM
32.1 FEEDWATER SYSTEM
32.2 DEAERATOR AND HEATERS
46.1 PCRV COOLING
51.1 TURBINE GENERATOR
51.2 ELECTRO-HYDRAULIC TURBINE CONTROL
53.1 EXTRACTION STEAM
90.1 PLANT COMPUTER OPERATOR CONSOLES
90.2 SAFETY PARAMETER DISPLAY SYSTEM (SPDS)
92.1 ELECTRICAL DISTRIBUTION
93.1 PLANT PROTECTIVE SYSTEM (PPS)

3.5 PERFORMANCE CRITERIA The simulator will display, to the operator in training, plant parameters within the following tolerance requirements:

STEADY-STATE OPERATION The simulator instrument error will be no greater than that of the comparable meter, transducer, and related instrument system in the plant.


The simulator-computed values for the principal mass and energy balance will be consistent within +/- 2% of the design data.

The computed values for steady-state, full power, automatic control operation will not change (drift) by more than +/- 2% over a 60 minute period.

The calculated values of non-critical parameters will agree within +/- 10% of the reference plant.

TRANSIENT OPERATION Tests will be conducted to prove the capability of the simulator to perform normal plant evolutions and plant malfunctions.

Acceptance criteria will be within the limits of the reference plant tests for the different systems simulated.

The transients will not violate the physical laws of nature.

The transients as displayed will require that the initial direction of the change be the same as the reference plant and will be controllable over their range.

The transients will be within 10% of the reference plant data acquired in a reference transient.

The transients will be within the limits of the reference plant procedures, including normal, abnormal, and emergency procedures.
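A minimal sketch (with made-up data, not Fort St. Vrain test results) of how two of the transient criteria above, the same initial direction of change and agreement within 10% of the reference data, might be checked:

```python
# Illustrative check of the transient acceptance criteria above.
# Both traces are hypothetical and sampled at the same times.

def transient_acceptable(simulated, reference):
    same_direction = (simulated[1] - simulated[0]) * (reference[1] - reference[0]) > 0
    within_10_pct = all(abs(s - r) <= 0.10 * abs(r)
                        for s, r in zip(simulated, reference) if r != 0)
    return same_direction and within_10_pct

reference_trace = [100.0, 98.0, 94.0, 90.0]   # reference plant data (hypothetical)
simulator_trace = [100.0, 97.5, 93.0, 91.0]   # simulator output (hypothetical)
print(transient_acceptable(simulator_trace, reference_trace))    # True
```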

4.0 SIMULATOR TRAINING CAPABILITIES

4.1 NORMAL PLANT EVOLUTIONS The simulator will be capable of simulating continuously, and in real-time, plant operations of the Fort St. Vrain power plant.

The response of the simulator resulting from operator action, automatic plant controls and inherent operating characteristics will be realistic to the extent that the operator will not observe a difference, within the limits of the performance criteria, between the control room indications of the simulator and the reference plant. The simulator will calculate and display plant system parameters on the appropriate instrumentation, and provide proper alarm and/or protective system action.

The minimum evolutions that the simulator will perform are:

Plant startup (0-30%); Power operations (30-100%); Power system load changes in response to turbine load control; Power operation in single loop operation; Plant shutdown (30-0%).

4.2 PLANT MALFUNCTIONS The simulator will be capable of simulating in real time the selected abnormal and emergency conditions resulting from simulated equipment malfunctions. The malfunctions will permit the demonstration of inherent plant response and operation of automatic plant control and protective systems. Where applicable to the malfunction, the simulator will provide the capability for the operator to take action to recover the plant and/or mitigate the consequences of the malfunction.

The following is a general list of abnormal and emergency conditions which form the basis for the malfunctions. This list may be modified based on the SFTA results.

Loss of primary coolant forced circulation
Loss of normal feedwater
Circulator Trip
Loop Shutdown
Nuclear instrumentation failure
Mispositioned control rods
Inability to withdraw control rods
Turbine trip
Failure of specified pressure, temperature and level controls and/or indication
Reactor scram
Loss of selected protective systems

4.3 REMOTE FUNCTIONS The plant systems that are operated remotely or that provide some input to the main simulation model and are necessary to perform the reference plant normal evolutions and malfunctions will be simulated to the extent of the reduced scope. It will be possible to interface with the remote activity by using the instructor's console. The remote functions will require minimum input or action on the part of the instructor.

4.4 INITIAL CONDITIONS

The simulator will be able to be initialized at specified power levels where the panel will require setup to match the selected data base. These initial conditions will be chosen from the selected, previously saved snapshots.


4.5 INSTRUCTOR'S STATION

4.5.1 DESIGN CONFIGURATION An instructor's console with associated peripheral equipment will be installed. The purpose of this station is for control and monitoring of the simulator conditions and operator actions. The instructor's console will also serve as the main control center for the simulator. The console will be designed to allow the instructor to perform the following software functions with a minimum of control manipulations.

4.5.2 SOFTWARE FUNCTIONS

1. Initialization of the simulator to any of the specified initial conditions.
2. Insertion and removal of all selected malfunctions and remote functions.
3. The ability to freeze the simulation.
4. The ability to restart the simulation from the current state or any previous snapshot.
5. The initiation of programs to check the position and operational readiness of all switches and controller setpoints, and display the initial conditions of all meters and lamps.
6. The ability to change to and from real-time, slow-time and fast-time.

Each screen display will be continually updated to show the status of the selected parameters in real time with no intervention from the instructor.
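A minimal sketch, assuming a simple in-memory state dictionary, of how the freeze, snapshot, and restart functions listed above could behave; the class and variable names are hypothetical and do not represent the Model-language implementation.

```python
import copy

# Illustrative freeze / snapshot / restart behavior for the instructor's console.
# State variables and names are hypothetical.

class InstructorConsole:
    def __init__(self, initial_state):
        self.state = dict(initial_state)    # live simulation variables
        self.snapshots = {}                 # saved sets of conditions
        self.frozen = False

    def freeze(self):                       # dynamic values remain static
        self.frozen = True

    def snapshot(self, name):               # store existing conditions
        self.snapshots[name] = copy.deepcopy(self.state)

    def restart(self, name=None):           # resume from current state or a snapshot
        if name is not None:
            self.state = copy.deepcopy(self.snapshots[name])
        self.frozen = False

console = InstructorConsole({"reactor_power_pct": 100.0, "turbine_load_mw": 330.0})
console.snapshot("IC-1 full power")
console.freeze()
console.restart("IC-1 full power")
```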

5.0 DESIGN DATA The scope and degree of simulation will be defined by the design data. Design data includes a description of the control room equipment, the components of the power plant, and the static and dynamic performance of these components under normal and abnormal conditions.

6.0 DESIGN APPROACH

6.1 DESIGN PHILOSOPHY The simulator will be designed as a tool for training. Engineering models may be developed to analyze various components of the plant.

6.1.1 DISTRIBUTED PROCESS SIMULATION The simulator will be composed of a number of microprocessors running in a parallel configuration. Each processor will be configured as a node on a local area network. One or more nodes will be required for each part-task simulator (simulating one system). These part-task simulators will be phased in to complete this specification for a reduced scope simulator. The data generated by one node can be shared with other nodes on the local area network.

6.2 FIRST PRINCIPLE MODELING PHILOSOPHY Math models using the conservation of mass, energy and momentum will be used to simulate the power plant.
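For illustration only, a first-principles node model of this kind might integrate a lumped conservation-of-energy equation each time step; the coefficients below are invented and are not the FSV design data.

```python
# Illustrative lumped energy balance:  dT/dt = (Q_in - Q_out) / (m * cp),
# with Q_out = UA * (T - T_sink).  All values are hypothetical.

def step_temperature(temp_c, q_in_kw, m_kg, cp_kj_per_kg_k, ua_kw_per_k, t_sink_c, dt_s):
    q_out_kw = ua_kw_per_k * (temp_c - t_sink_c)           # heat removed to the sink
    dtemp = (q_in_kw - q_out_kw) / (m_kg * cp_kj_per_kg_k)
    return temp_c + dtemp * dt_s

temp_c = 300.0
for _ in range(600):                                       # ten minutes at a 1-second step
    temp_c = step_temperature(temp_c, q_in_kw=500.0, m_kg=2.0e4,
                              cp_kj_per_kg_k=5.2, ua_kw_per_k=2.0,
                              t_sink_c=40.0, dt_s=1.0)
print(round(temp_c, 2))
```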

6.3 SOFTWARE REQUIREMENTS 6.3.1 GENERAL SOFTWARE REQUIREMENTS The major software packages required for distributed process simulation are:

a. Operating System software
b. Local area network system software
c. High level language modeling software
d. Applications software to simulate the plant
e. Applications software for the instructor console
f. Diagnostic and maintenance support software

6.3.2 OPERATING SYSTEM SOFTWARE Microsoft (IBM) DOS is the current selected operating system software. This may be changed depending on the development of OS/2.

6.3.3 LOCAL AREA NETWORK SYSTEM SOFTWARE Novell is the current selected vendor for the local area network system. This software allows remote nodes to share data with each other and with the instructor's console.

6.3.4 HIGH LEVEL LANGUAGE MODELING SOFTWARE Model is the current selected simulation language. Model is a real-time high level engineering language that can output data to user defined screens and interface with local area networks and data acquisition and control hardware. Remote nodes can be controlled with a local master instructor's console node.

6.3.5 APPLICATIONS SOFTWARE The applications programs are those that simulate the functions of the plant systems. Additionally, they include the programs necessary to accomplish the functions performed at the instructor's console.

6.3.5.1 SOFTWARE ARCHITECTURE Each simulated plant system will be defined and implemented from mathematical models on one or more distributed system nodes.

Each node will be capable of being individually modified. The nodes will be analogous with the corresponding plant subsystem. That is, each model will be designed to simulate the physical behavior of the corresponding power plant functions. The program modules typically represent physically separable components of the system hardware.

The modular subdivision will allow the ability to add, delete or modify a subsystem without reprogramming the entire system.

A program module is an independent program that performs a specific function. Each program module is divided into small subprograms or subroutines that simulate the plant functions.

The independence of a program module residing on a separate node is created by choosing the modules to minimize the number of variables needed to communicate with other modules. This independence provides:

Independent concurrent parallel implementation.

Independent testing before integration.

Minimum impact during modification.

No decision driven executive software.

Each independent node is interfaced to a common shareable read/write data file on the file server.

6.3.5.1.2 PARALLEL PROCESS FUNCTIONS For each node on the local area network, distributed process simulation becomes the following cycle (a sketch of the cycle follows the list):

1. Read the data acquisition hardware connected to the panel switches and PID controller setpoint dials.
2. Read the shareable read / write data file on the file server to input the data saved by the other microprocessors.
3. Calculate new PID controller deviations and outputs, new valve positions, flows, pressures, temperatures and alarms using algorithms on the acquired data.
4. Output the new readings to the analog meters and turn on or off the digital lamps and alarms as required.
5. Save the data required by the other processors on the file server in the shareable data file.
6. Read the local area network command file for new commands from the instructor.
7. Start over at number 1.
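One way to picture the cycle is the sketch below; the panel read/write and file-server calls are stubbed out with plain Python stand-ins rather than the actual data acquisition or Novell interfaces.

```python
# Sketch of one node's pass through steps 1-7; all I/O is stubbed and hypothetical.

shared_data_file = {"loop_temperature": 300.0}        # stands in for the file server

def read_panel():                                      # step 1: switches and setpoints
    return {"temperature_setpoint": 305.0}

def run_models(panel_inputs, shared):                  # step 3: controller and process math
    error = panel_inputs["temperature_setpoint"] - shared["loop_temperature"]
    return {"valve_demand": 0.1 * error,
            "loop_temperature": shared["loop_temperature"] + 0.01 * error}

for _ in range(3):                                     # step 7: the cycle simply repeats
    panel_inputs = read_panel()                        # step 1
    shared = dict(shared_data_file)                    # step 2: read the shareable file
    outputs = run_models(panel_inputs, shared)         # step 3
    print("meter output:", round(outputs["valve_demand"], 3))           # step 4
    shared_data_file["loop_temperature"] = outputs["loop_temperature"]  # step 5
    # step 6 would poll the instructor command file for freeze or malfunction commands
```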

6.3.6 MAINTENANCE SUPPORT PROGRAM REQUIREMENTS Maintenance diagnostic support includes various levels. This support will range from testing various devices and functions to testing systems. All system level support will be those functions supplied by the selected vendor, e.g. IBM, Novell, etc. The panel component tests will be included to check out all panel interfaces.

6.4 COMPUTER HARDWARE REQUIREMENTS A computer system will be provided to implement all active functional capabilities of the simulator. The computer system will consist of 3 subsystems: 1) the I/O interface to the panel, 2) the computer remote and master nodes, and 3) the local area network and file server. Each part task configuration will be documented.

6.4.1 I/O INTERFACE TO THE PANEL The I/O processor subsystem consists of 2 components: 1) memory mapped I/O, and 2) programmable controllers.


Memory mapped I/O includes Analog input from the controller setpoints, Analog output to drive the meters, Digital input to read switch positions and push buttons, and Digital output to light the various lamps and alarms. Memory mapped I/O is best suited for time dependent calculations such as PID controllers.

Programmable controllers have the four functions listed above and are best used for time independent ladder logic solutions (alarms, switch positions) and high I/O count per solved point.
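As a hedged illustration of the time dependent calculation mentioned above, a discrete PID update of the kind run against the memory mapped setpoint and meter channels might look like the following; the gains, signals, and 250 ms step are invented, not taken from the FSV controllers.

```python
# Illustrative discrete PID update; gains, signals and the time step are hypothetical.

def pid_step(setpoint, measurement, state, kp=2.0, ki=0.1, kd=0.5, dt=0.25):
    error = setpoint - measurement
    state["integral"] += error * dt
    derivative = (error - state["last_error"]) / dt
    state["last_error"] = error
    return kp * error + ki * state["integral"] + kd * derivative

controller_state = {"integral": 0.0, "last_error": 0.0}
output = pid_step(setpoint=50.0, measurement=48.0, state=controller_state)
print(round(output, 2))     # demand written back out through the analog output channel
```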

6.4.2 COMPUTER REMOTE AND MASTER NODES The computer complex will be comprised of multiple microprocessors, peripherals and interconnecting cables. Modular expansion capability is inherent in the design. The current systems selected are IBM personal computers.

6.4.2.1 MAJOR COMPONENTS The basic system requirements are IBM PC/ATs, PS/2s or true compatibles. Peripherals will be kept to a minimum on the remote nodes.

6.4.2.2 RESTART AFTER POWER FAILURE On power-up the nodes will read an autoexec.bat file that will sign them onto the Local Area Network and read in the Model program. A setup file will be read to determine if the node is master or remote. Each node will be programmed to read in a previously selected data base and begin running its section of the model.

6.4.2.3 TIMING AND MEMORY REQUIREMENTS The time required for a node to execute the steps listed in parallel processing functions above is proportional to the speed of the processor, complexity of the algorithms, and number of points to read/write to the panel hardware and common data file.

With unlatched inputs the total loop time will not exceed 250 milliseconds. With latched inputs the total loop time will not exceed 500 milliseconds. Memory will be sized so that the on-line data base can be kept in memory, thus minimizing disk access.
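A minimal sketch of how one pass of the node loop could be timed against these budgets; the work inside the loop is only a placeholder, not the actual model execution.

```python
import time

# Illustrative timing check against the 250 ms (unlatched) loop budget.
LOOP_BUDGET_S = 0.250

start = time.perf_counter()
placeholder_work = sum(i * i for i in range(100_000))   # stands in for one simulation pass
elapsed = time.perf_counter() - start
print(f"loop time {elapsed * 1000:.1f} ms, within budget: {elapsed <= LOOP_BUDGET_S}")
```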


6.4.2.7 PERIPHERAL DEVICES If used as a remote node, this can be a minimum system. One floppy drive, a keyboard, and a display adapter are required to boot up. A display screen is required for hardware checkout and initial code debugging. This screen is not required in the final configuration since code can be downloaded from the master. The instructor's console and operator consoles will include an enhanced graphics adapter and a 19 inch color screen. The instructor's console will include touch screen capability.

Each computer will require a local area network interface card.

6.4.2.8 EQUIPMENT OPERATING TOLERANCES The environment required for the computer complex shall be between 60 and 90 degrees F. Relative humidity shall be between 30% and 80% non-condensing.

6.4.3 LOCAL AREA NETWORK 6.4.3.1 FILE SERVER REQUIREMENTS The file server will be dedicated. That is, DOS will not be functional on the file server.

6.4.3.2 LOCAL AREA NETWORK CABLING One wire center is required for each 8 computers. The cable used will follow IBM standards using Type 1 or Type 6 cable as required.

6.4.3.3 MAIN STORAGE The file server will require a 60 megabyte or larger hard drive with backup capability.

7.0 PANEL FABRICATION REQUIREMENTS

7.1 GENERAL All specified simulator panels will be designed in size, shape, color and configuration to duplicate the reference plant control room panels. The simulator's face-front nameplates will duplicate those in the reference plant.

7.2 MECHANICAL CONSTRUCTION

The bench and back-board sections of the panel will be removable and can be replaced with pre-punched sections incorporating the Control Room Design Review instrument location modifications.

Interior construction bracing will allow for accessibility and for mounting to match the control room.

7.3 ELECTRICAL AND ELECTRONIC CONSTRUCTION A single power source circuit breaker will be provided with 120 VAC and will be located within the simulator room to enable quick emergency trip.

AC power distribution boxes will be provided with sufficient circuit breaker capability for protection and will be located to minimize power cable extensions. All circuit breakers will have ground fault interrupt capability.

All power supply loading, connections and grounding will be in accordance with the PSC Engineering Codes.

8.0 INSPECTION, TESTING AND ACCEPTANCE

8.1 INSPECTION Quality Assurance and Quality Control have the right to audit, inspect and witness testing at any reasonable time.

8.2 ACCEPTANCE TESTING PROCEDURES Prior to acceptance by training and the NRC, acceptance tests will be run by the training department. These tests will be based on the design and operating data base for the part task simulator that is being tested. Each test will include any previous acceptance test where this part task simulator interfaces with a previously tested and approved part-task simulator.

The test procedure will list the initial conditions required to start a test, and will give detailed step-by-step procedures to follow, along with the expected results. The test document will provide space to record actual results and/or initial each step. Any discrepancy or deficiency will be recorded, re-worked and re-tested.
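A minimal sketch of a record layout that captures the elements the test procedure calls for (initial conditions, steps, expected and actual results, and per-step sign-off); the field and test names are hypothetical.

```python
from dataclasses import dataclass, field

# Illustrative acceptance-test record; names and values are hypothetical.

@dataclass
class TestStep:
    action: str
    expected_result: str
    actual_result: str = ""      # recorded by the tester
    initials: str = ""           # per-step sign-off
    discrepancy: str = ""        # recorded, re-worked, and re-tested if non-blank

@dataclass
class AcceptanceTest:
    title: str
    initial_conditions: list
    steps: list = field(default_factory=list)

test = AcceptanceTest(
    title="Feedwater part-task simulator: pump trip",
    initial_conditions=["IC-1: 100% power snapshot loaded"],
    steps=[TestStep("Trip feedwater pump 1A",
                    "Standby pump auto-starts; low-flow alarm annunciates")],
)
```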

Each acceptance test will include but will not be limited to the following:

Hardware configuration verification
Computer and local area network systems
Power Plant performance tests
Startup and Shutdown
Transient tests
Initial Conditions tests
Malfunctions tests
Special tests


8.3 MODIFICATION REVERIFICATION TESTS For any section of a part-task simulator, if any hardware or software modifications occur, the acceptance tests will be modified to reflect the changes in the simulator and will be run to verify the modifications are acceptable, and that the portions not modified retain their original function.

9.0 DOCUMENTATION Complete documentation will allow verification that the design and objectives of the specification are met. Where possible, plant documentation such as procedures, piping and instrumentation drawings, and system descriptions will be used or referenced. Documentation will include but not be limited to the following list.

9.1 HARDWARE DOCUMENTATION 9.1.1 CONTROL BOARDS The control boards will include front view drawings, wiring diagrams, wire lists, connector termination lists, bill of materials, parts list, vendor information, manuals and spare parts. Also included will be signal traceability for each panel device through all connectors, including computer I/O channel and programmed instrument name.

9.1.2 COMPUTER SYSTEM HARDWARE The computer system hardware documentation will consist of the operation and maintenance manuals, vendor's manuals, and diagnostic programs.

9.1.4 INSTRUCTOR OPERATING MANUAL The simulator instructor's manual will contain complete instructions for operating the simulator. It will describe the simulator's capabilities and complete instructions for operation.

This manual will include the simulator power-on sequence, computer startup, pre-initialization procedures, initialization of the panel, snapshot enabling, real and variable time operations, malfunction selection and control, auxiliary plant and manual control. Also included will be special hardware operations, calibrations and emergency procedures.

9.2 SOFTWARE DOCUMENTATION Software documentation will include three levels: system level by the selected vendor, simulation language, and application coding.

9.2.1 COMPUTER SYSTEM SOFTWARE DOCUMENTATION A complete set of vendor supplied documentation will describe the DOS operating system and Local Area Network operating system.

9.2.2 SIMULATION LANGUAGE DOCUMENTATION The simulation language documentation will include model program descriptions, function block definitions, math formulas where applicable, and sample usage. Source code will be listed for all model function block definitions. The model users guide will provide instructions to modify, compile, download and run the various modules.

9.2.3 APPLICATION CODING SOFTWARE DOCUMENTATION The design descriptions of plant systems will give narrative descriptions, functions of modules, and interfaces with system programs and hardware devices.

Software programming documentation will describe the master and remote nodes and programmable logic controllers and their interaction. All software programs will be clearly identified so reference can easily be made to system modules, associated program listings, and system design descriptions. The programs will be sufficiently commented to allow understanding and modification.

A complete listing of the shareable data file will be shown in alphanumeric and relative addresses.

9.2.4 TEST AND DIAGNOSTIC PROGRAM DOCUMENTATION

Control board test and diagnostic documentation will provide the capability to check the integrity of all panel lamps, meters, recorders, switches, potentiometers and other components.

9.3 PLANT SIMULATION DOCUMENTATION

9.3.1 DESIGN CONCEPT DESCRIPTION The software design concept descriptions of the simulated plant systems will include design data base malfunctions, remote functions, design assumptions, design simplifications, panel instrumentation, monitored parameters, and the plant process computer point list. The simulation system description will include simulation system diagrams and software interface block diagrams. The math model development will include mathematical equations.


9.3.2 PLANT SYSTEMS DESCRIPTIONS Each simulated plant system design concept description will include appropriate simplified P&IDs, block flow diagrams, logic diagrams, and/or one-line electrical diagrams to illustrate the general scope of simulation for that system.

9.3.3 DESCRIPTIONS OF MATH MODELS The simulated plant system math models include the following:

a. All assumptions made in the model for areas of data voids and justification for their use.
b. All equations used in the system model.
c. When polynomial curve fits are used as approximations of plant design data, a justification will be provided (a sketch of such a fit follows this list).
d. Constants used will be listed along with their engineering units and a reference to their source.
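For illustration, item (c) might be supported by a short calculation like the one below, which fits a simple curve to hypothetical design-data points and reports the worst-case deviation for the written justification; the data and the quadratic form are assumptions, not FSV design data.

```python
# Illustrative polynomial approximation of design data with a quantified deviation.
# A full least-squares fit would normally be used; a one-coefficient quadratic
# keeps the sketch short.  All numbers are hypothetical.

design_flow_pct = [0.0, 25.0, 50.0, 75.0, 100.0]
design_dp_psid  = [0.0, 1.6, 6.2, 14.1, 25.0]

a = design_dp_psid[-1] / design_flow_pct[-1] ** 2      # dp ~= a * flow^2
fitted = [a * x * x for x in design_flow_pct]
max_deviation = max(abs(f - d) for f, d in zip(fitted, design_dp_psid))
print(round(max_deviation, 2))    # worst-case deviation quoted in the justification
```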

9.3.4 INPUTS AND OUTPUTS

Inputs to and outputs from each plant system program will be identified in the system design concept descriptions, including the hardware signals from and to the control panel.

9.3.5 SYMBOL DICTIONARY A complete listing of all shareable data file variables and constants will be provided. This listing will include description, location, and a cross-reference to the source and usage of the data. The listing will contain both alphanumeric and relative addresses listings.

10.0 SPARE PARTS AND EXPENDABLES

A minimum of 5% spare capacity for each component will be kept in stock.


Attachment 8.3

GLOSSARY OF TERMS

Note: Additional definitions/acronyms are listed in the USFG document, Attachment 8.1.

Acceptance tests - tests based on the design and operating data base for the part task simulator that is being tested.

Backtrack - restoration of the simulator to a previous set of conditions.

Best Estimate - reference plant response data based upon engineering evaluation or operational assessment.

Controller - an individual responsible for clarifying deviations between a simulation device and the reference plant.

Critical parameters - parameters that require direct and continuous observation by the operator when a system is being controlled manually. Critical parameters also include system parameters required for safe shutdown.

CRM - Control Room Mock-Up is a display of the reference plant control room panels, including the switches, indications, and alarms, arranged in a configuration similar to the reference plant. The CRM may consist of photographs, three-dimensional mock-ups or a combination of both.

Cue - information available for use in evaluating plant status.

Design data - the data that describes the design baseline of the simulator.

Distributed Process Simulation (DPS) - the use of several microprocessors running in a parallel configuration.

Fast time - operation of time dependent parameters at a faster than real time rate.

Fidelity - reference plant replication in either system model, physical appearance or system function.

Freeze - interruption of the simulation that causes all dynamic values to remain static.

HTGR - High Temperature Gas-cooled Reactor

Initialization - restoration of the simulator data base and hardware to a pre-defined configuration before simulation begins.

Instructor override - the capability to manually replace analog or digital input from or output to the panel instrumentation without affecting the math model.

I&C - Instrumentation and Control

I/O - input from or output to the control panel or file server.

Malfunction - simulated failure or degradation of plant equipment.

Mock-up - 3 dimensional, colored, non-functional panel instrumentation displays.

MWT - megawatts thermal

Part Task Simulator (PTS) - a device incorporating detailed modeling of a limited number of specific reference plant components or subsystems. Such a device demonstrates expected response of those components or subsystems.

Performance Testing - testing conducted to verify a simulation facility's performance as compared to actual or predicted reference plant performance as required by 10CFR55.45.

PSC - Public Service Company of Colorado

Real time - simulation of dynamic performance in the same time base relationships, sequences, durations, and rates as the dynamic performance of the reference plant.

Reduced scope simulation (RSS) - less than full scope plant specific simulation. The sections modeled are complete simulations; however, not all systems are simulated.

Reference plant - Fort Saint Vrain Nuclear Generating Station

Remote functions - operator functions required for action which would normally be performed from outside the control room, for example, the operation of manual valves.

RSS - Reduced Scope Simulation

SFTA - System Function Task Analysis - a systematic analysis of the reference plant procedures that yields the cue and I&C requirements.

Simulation Facility (SF) - one or more of the following simulation devices, alone or in combination, used for the conduct of operating tests:

1) Non-Plant Referenced Simulator
2) Control Room Mock-Up
3) Limited Scope Simulator
4) Part Task Simulator
5) CRT Simulator
6) Reference Plant

Simulator data base - plant design and performance data used by the simulator.

Slow time - operation of specific system parameters at a slower than real-time rate.

Snapshot - the storage of existing conditions at any point in time. The stored snapshot may be used to initialize the simulator.

Tolerance - a portion of the full scale value.


Attachment 8.4

SIMULATION FACILITY CONFIGURATION MANAGEMENT PROGRAM PLAN

TABLE OF CONTENTS

I. INTRODUCTION
II. PROGRAM OBJECTIVES
III. PROGRAM ELEMENTS
    A. Simulator Design Basis Documents
    B. Discrepancy Reports
    C. Simulator Design
    D. Configuration Validation
IV. PROCEDURAL CONTROLS
    A. Simulator Design Basis Documents
    B. Discrepancy Reports
    C. Simulator Design
    D. Configuration Validation
V. REFERENCES

I. INTRODUCTION

The Simulator Configuration Management Program (SCMP) will be an integrated management process governing the simulation facility's structures, components and computer software. The program insures that the physical and functional characteristics of the simulation facility are documented and that changes thereto are properly developed, assessed, approved, issued, implemented, verified, and incorporated into the simulation facility's documentation.

The SCMP will provide training management with administrative controls for simulator functional and physical fidelity. This system will be used to insure that quality measures are established for identifying and controlling simulator design, construction, discrepancies, configuration changes, support documentation, and related training material.

II. PROGRAM OBJECTIVES The major objectives of the SCMP are:

To maintain the simulator in a current configuration to support training effectively.

To ensure decisions related to simulator and facility changes are based on accurate information.

To assure compliance with regulatory requirements.

To prevent exposure of trainees to incorrect simulator configuration that the trainee could perceive as being correct.

The SCMP will ensure that the objectives are met by accomplishing the following:

Establish and maintain simulator design basis documentation in accordance with the design specification.

Identify, track, evaluate, implement, and test potential simulator configuration changes due to reference plant modifications, engineering analyses, or operating experience that may impact simulator training.

Identify, track, evaluate, implement, and test simulator design enhancements.

Identify, track, resolve, and test the resolution of simulator discrepancies.

Verify that the current simulator configuration complies with the functional and physical fidelity specifications as described in the design specification and the Program Plan.

III. PROGRAM ELEMENTS The SCMP will be comprised of the following key elements:

A. Simulator Design Basis Documents Design basis documentation will be established to control and document the initial simulator design and simulator configuration changes. The simulator design basis documentation consists of those documents that identify the functional and physical fidelity specifications of the simulator. These documents form the baseline of the initial design and therefore should be gathered both before and during simulator construction. Acceptance test procedures will be based on this data. Accurate and complete design bases documents will be maintained with simulator modifications to ensure that current test procedures, and thus the simulator's configuration, are accurate.

The following key steps will be performed to establish and/or validate the design basis documentation:

Scope identification - Training management/simulator group will evaluate the following key elements in order to identify the boundaries of the simulator configuration management system:

o computer software and hardware
o panel layouts
o room layouts
o HVAC
o security
o fire protection
o power supplies
o communications
o structure
o maintenance / calibration procedures for simulator equipment

This scoping effort will be coordinated with other plant configuration programs to ensure agreement and a clear definition of the boundaries between the systems.

Simulator Software Configuration Management shall comply with the following essential elements of the Nuclear Program Plan for the "Management of Computer Software Systems":

o configuration identification
o configuration change control
o status accounting and reporting
o library controls
o access controls and security
o corrective action
o regression testing
o backup and recovery

Identify existing documents - a survey of existing reference plant documentation will be conducted to determine what documents will serve as design inputs for the simulator.

A survey will also be conducted to determine what simulator documents currently exist, accuracy of the material, location of the material, mechanisms for retrieval of this material, and usability of the information in its current form.

Attachment 8.4 Page 6 of 14

Assemble and evaluate existing or needed documents - An initial evaluation will be conducted to determine which portions of the existing material are to be included in the simulator configuration management system and which portions are not available (i.e., missing). Missing components should be identified and reconstructed as necessary.

Reference plant documentation used as simulator design inputs will be readily available for use by the simulator group and will be controlled and maintained under the overall configuration management process.

Reverify selected documents - As a result of the previous efforts, selected design material may need to be reverified for accuracy and applicability.

Reverification should be done if any of the following conditions exist:

o The accuracy of the original analysis is uncertain.

o The data does not support the current simulator configuration.

o The data identifies an as-built variance from the current design specification.

o The data specifications are conflicting, such as two significantly different pump curves for the same pump.

B. Discrepancy Reports

Discrepancy reports will be used to identify, track, resolve, and test the resolution of differences that exist or develop between the simulator and the design basis documentation. These discrepancy reports will be unique to the Training Department Simulator Program. Discrepancy reports should be based on observed simulator functional and physical fidelity problems and actual data. The operational characteristics or physical appearances of the simulator that differ from the reference plant need to be described and quantified. Supporting documentation should be referenced where appropriate. Subsequent design and testing should then use this data as the standard.
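As an illustration only (not a commitment of the program described above), a discrepancy report can be viewed as a small structured record whose fields mirror the attributes listed in this section; the field names and example values below are hypothetical.

    # Illustrative sketch only; field names and values are hypothetical.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class DiscrepancyReport:
        report_id: str                  # unique tracking number
        description: str                # what differs from the reference plant
        observed: str                   # quantified simulator behavior
        expected: str                   # behavior per the design basis documents
        references: List[str] = field(default_factory=list)  # supporting documentation
        status: str = "open"            # open / resolved / resolution tested

    dr = DiscrepancyReport(
        report_id="DR-001",
        description="Feedwater valve stroke time too fast",
        observed="4 second stroke time on the simulator",
        expected="12 second stroke time per the valve data sheet",
        references=["Valve data sheet V-123"])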

Attachment 8.4 Page 7 of 14

The following are typical items that should be identified via discrepancy reports:

o dynamic, logic or setpoint deficiencies
o simulator operating anomalies (e.g., simulator models exceed design conditions)
o incorrect component logic (e.g., valve open logic is incorrect)
o inoperable instructor station features
o hardware failures
o minor software changes

The discrepancy report should be reviewed for:

redundancy - ensure that a duplicate report does not exist or a simulator change is not being implemented that would make the discrepancy report invalid.

validity - the reviewer should test the discrepancy on the simulator, and then, if necessary, check that the expected results or hardware configuration are supported by the data base. This check should ensure the accuracy of the deficiency and should assess the engineering feasibility of correcting the discrepancy.

clarity - the reviewer should ensure that the problem description clearly and completely explains the problem.

major software rewrite or new development - the reviewer should identify whether a major software rewrite may be required that would invalidate the current acceptance test results supporting current simulator performance as specified in the design data base.

C. Simulator Design

The simulator design process will identify, track, evaluate, implement, and test the initial design and potential simulator configuration changes. Changes should be reviewed for simulator training impact.


Attachment 8.4 Page 8 of 14

Typical items that impact the simulator's configuration include:

o discrepancy reports
o plant modifications and field revisions
o reference plant procedure changes
o design specification changes
o setpoint changes - proportional, integral, derivative, alarm, deadband, accumulation, blowdown, etc.
o operational data from the plant such as operator logs, recorder strip charts, or sequence of event reports for operational transients
o startup test and core physics test data, post-refueling data
o vendor technical manual changes, information reports, equipment specification updates, manufacturing data
o plant process, monitoring, or control computer software changes (including setpoint changes)
o new or revised reference plant operating data (e.g., plant scram information, new pump curves)
o previously unavailable data that now eliminates or changes modeling assumptions or simplifications
o major or significant software rewrites, new software development
o simulation computer operating system changes
o new or revised best-estimate operating data
o simulator enhancement requests
o standard or code changes that could, for example, affect the way the simulator is wired
o changes to the reference plant's control room environment such as operator aids, available references, or physical arrangement of equipment or furniture
o plant maintenance, such as addition of weights to the turbine generator or changes to the pump impeller (possibly affecting pump performance), which may not be covered under a plant modification and the results of which can be observed in the control room
o industry or plant operating experience reports
o simulator hardware upgrades, such as computers or input / output equipment
o Temporary Configuration Reports (TCRs) - due to the procedural controls on the duration of TCRs, they will not be reviewed for simulator impact

The design control process has the following five steps:

1. Identifying items that impact the simulator's configuration.

A simulator configuration change tracking system, similar to the discrepancy report tracking system, will be implemented to track change data, as outlined above, that could affect the simulator's configuration.

2. Analyzing those items that have an impact on training.

All new input data should be screened for potential simulator training impact and engineering feasibility. The reviewer should be knowledgeable of both the training impact and the simulator engineering impact of the potential change. The reviewer should prioritize the potential change to the simulator relative to other potential changes.

Attachment 8.4 Page 10 of 14

3. Preparing the simulator modification package.

A mechanism will be established to ensure that potential simulator design changes are tracked and documented and will include the following items:

change summary - a brief discussion of the scope and content of the change.

training impact - a discussion on why the change should be implemented.

Simulator design will be implemented per the procedural requirements detailed in Section IV-C of this document.

4. Evaluating the change's adequacy and accuracy.

The proposed simulator change should be reviewed for the following:

determine the training value

determine resource requirements
o manpower
o hardware
o software
o spare parts
o simulator availability

determine priority relative to other changes

determine other training materials and programs affected

perform a design review for adequacy on proposed changes for items such as the following:
o compliance with applicable codes or standards
o correct scale and units
o color compatibility
o accuracy of markings and identifications
o board orientation and layout
o functional response
o tolerance standards / requirements
o sound and labeling of alarms

determine cost / benefit and budget status of the proposed change

determine post-implementation testing requirements

5. Performing simulator construction.

Construction of the simulator will be per the approved simulator design packages. Following construction, the appropriate updating of simulator design documents will be performed.

D. Configuration Validation

The simulator will be tested following initial construction and configuration changes, and periodically, to ensure that simulator configuration is consistent with the simulator design basis and the current design documents. These tests should consist of simulator acceptance and performance tests, facility validation and reviews of the simulator design documents. The tests should include simulator functional fidelity tests and physical fidelity tests. Tests must be large enough in scope so that software interfaces, logics, dynamics, and hardware components are adequately evaluated. Simulator test results will be compared to simulator design basis and design documents that should include plant response data or best estimate response data. Acceptability should be based on acceptance criteria established in the baseline documentation.
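A minimal sketch of the kind of comparison such testing implies follows; the tolerance, data values and function name are hypothetical and would in practice come from the acceptance criteria in the baseline documentation.

    # Illustrative sketch only; tolerance and data are hypothetical.
    def within_acceptance(simulator_values, baseline_values, tolerance_pct=2.0):
        """Return True if every simulator point lies within the stated
        percentage of the corresponding baseline (plant or best-estimate) value."""
        for sim, base in zip(simulator_values, baseline_values):
            if abs(sim - base) > abs(base) * tolerance_pct / 100.0:
                return False
        return True

    # Example: a steam pressure trace from a simulator transient vs. baseline data
    baseline = [1050.0, 1042.0, 1031.0, 1025.0]
    simulator = [1049.0, 1040.5, 1032.0, 1026.0]
    print(within_acceptance(simulator, baseline))   # True: all points inside the 2% band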

IV. PROCEDURAL CONTROLS

The following procedural controls will be developed as a part of the Training Department Simulator program. Also, other Nuclear Organization procedures will be revised as required to support this program.


Attachment 8.4 Page 12 of 14

A. Simulator Design Basis Documents

Procedural controls will be developed to establish initial document control and to ensure continued design basis documentation accuracy and completeness. These controls will address the following:

o control and maintenance of the design basis documents.

o documents that define the design requirements.

o drafting, review, and approval of simulator-specific documents
o establishment of documents to be controlled
o determination of which documents currently are part of the design basis
o timing of the data base change relative to the simulator change
o document location
o method of changing design basis documentation
o document update schedule
o document distribution responsibility.

B. Discrepancy Reports

Procedural controls will be developed to ensure consistent application and documentation of the discrepancy report process. These controls will address the following:

assignment of discrepancy report system administrative responsibility

identification, tracking, validation, resolution, and testing of resolved discrepancies

interfacing with the training, plant, or simulator document control system to maintain the simulator design data base

interfacing with the training program content or simulator training materials to maintain supporting simulator training materials

prioritizing simulator deficiencies.


Attachment 8.4 Page 13 of 14

C. Simulator Design

Procedural controls will be developed to ensure consistent application and documentation of the design process. These controls will address the following:

assignment of simulator design system administration responsibility

identification, tracking, evaluation, implementation, and testing of changes

interfacing with the training, plant, or simulator document control system to maintain the simulator data base

interfacing with the training or simulator training material control system to maintain supporting simulator training materials

prioritizing simulator changes

format and content of simulator design packages.

For major changes, the package should include, as necessary, the following items:

o assumptions
o simplifications
o simulation drawings
o software interface drawings
o logic drawings
o any new or additional documents that support the design
o work sheets that provide other necessary design analysis or calculations
o parts specifications for procurement (including spare parts)
o availability of the parts
o parts required
o parts cost
o special instructions (e.g., installation sequence, installation tests / checks, wiring checks)

Revisions to existing reference plant procedures will be required to ensure that all modifications, procedure changes, analyses, new performance data, etc. are reviewed for potential impact to the simulator and facility.

Attachment 8.4 Page 14 of 14

D. Configuration Validation

Procedural controls will be developed to ensure consistent validation requirements are specified and to ensure test content and adequacy.

V. REFERENCES

A. INPO 87-016, "Simulator Configuration Management System," August 1987.

Attachment 8.5 Page 1 of 8

DISTRIBUTED PROCESS SIMULATION

THIS ATTACHMENT IS FOR INFORMATION ONLY. ITS PURPOSE IS TO HELP EXPLAIN "DISTRIBUTED PROCESS SIMULATION". THIS IS NOT A COMMITMENT TO THE NUCLEAR REGULATORY COMMISSION ON THE PART OF THE PUBLIC SERVICE COMPANY OF COLORADO.

OVERVIEW:

This paper covers the steps required to develop a real-time training simulator using IBM personal computers networked together and interfaced to a training instructor's console and a control room simulation panel.

"Distributed process simulation" uses several microprocessors running in a parallel configuration.

Each processor is configured as a node on the local area network.

Although each separate microprocessor doesn't have the computational power of a super-mini or main frame computer, it has the advantage of parallel processing.

That is, several processors can be running different sections of the model at the same time. Given enough microprocessors, this parallel configuration can exceed the computational power of a super-mini. In addition, this sectional modeling improves understandability and decreases software integration time.

Since each distributed process unit has only a few well defined functions that are executed continually, there is no need for decision driven software, which avoids the resulting problems of abnormal software flow and timing.

The system modeled can indeed be one large transient. Extreme pressure and flow changes may often be the rule, not the exception.

The local area network is only required to pass on the final answers of each section of the model. Intermediate calculations that are not needed by the other processors are retained locally, thereby reducing load on the total system.


Attachment 8.5 Page 2 of 8 BASIC REQUIREMENTS:

The basic system requirements are:

1. A local area network that uses a token passing ring protocol.
2. A dedicated file server with a shareable read / write random access "data" file and a shareable read / write random access "command" file.
3. A "master" node that can write "commands" to the "remote" nodes.
4. "Remote" nodes that share the "data" file, are connected to the panel hardware front-end gear and read "commands" from the "master" node.
5. Front-end gear (analog and digital input and output) interfaced to the panel hardware.
6. Panel meters, switches, PID controller faceplates, lamps and power supplies.

PARALLEL PROCESS FUNCTIONS:

For each "remote" node on the local area network, "distributed process simulation" becomes:

1. Read the data acquisition hardware connected to the panel switches and PID controller setpoint dials.
2. Read the shareable read / write "data" file on the file server to pick up the data saved by the other microprocessors.
3. Calculate new PID controller deviations and outputs, new valve positions, flows, pressures, temperatures and alarms using algorithms on the acquired data.
4. Output the new readings to the analog meters and turn on or off the digital lamps and alarms as needed.
5. Save the data needed by the other processors in the shareable "data" file on the file server.
6. Read the local area network "command" file for new commands from the "master."
7. If no "freeze" or "initialization" commands are received, the process starts over again.

Attachment 8.5 Page 3 of 8

The instructor's "master" node uses the same concept but takes its input from the keyboard rather than the panel hardware, and its outputs are typically selected malfunctions sent to the shareable "data" file, or initialization and freeze commands sent to the "command" file.

TIMING REQUIREMENTS:

The loop time (required for a node to execute the above 7 steps) is proportional to the speed of the processor, the complexity of the algorithms and the number of points to read / write to the panel hardware and the common "data" file. With unlatched inputs, the total loop time should be no greater than 200 milliseconds or the process will miss the operator's input from the switches. With latched inputs the total loop time can be stretched up to 450 milliseconds if the process being simulated can accept it. Any analog output updated at intervals greater than 500 milliseconds that moves more than 10% per second will not display a smooth analog signal unless capacitance is added to it.

Using a 6 MHz IBM/AT, upper limits for a typical node are 55 to 60 points of memory mapped panel hardware I/O and 40 to 50 points of shared data. This typical node would accomplish the above 7 steps in 300 to 450 milliseconds, spending the majority of the time in step 3 (calculation). XT class machines run 2.6 times slower.

A math co-processor will increase the throughput by about 25% to 30%.

Exact timing studies for the 80386 class machines with a 16 megahertz or 20 megahertz clock have not been run; however, they tend to show an improvement of about 2.5 times over the base system. Their impact on the local area network is also proportional.
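For rough planning, the factors quoted above can be combined as in the sketch below; the base 300 to 450 millisecond loop is the figure given for the 6 MHz AT, and the function name and rounding are ours.

    # Rough planning sketch using the approximate scaling factors quoted above.
    BASE_LOOP_MS = (300, 450)     # 6 MHz IBM/AT, 55-60 panel I/O points, 40-50 shared points

    def estimated_loop_ms(machine="AT", coprocessor=False):
        factors = {"AT": 1.0, "XT": 2.6, "386": 1.0 / 2.5}   # XT ~2.6x slower, 386 ~2.5x faster
        speedup = 1.275 if coprocessor else 1.0              # math co-processor: ~25-30% more throughput
        low, high = BASE_LOOP_MS
        return (low * factors[machine] / speedup, high * factors[machine] / speedup)

    print(estimated_loop_ms("XT"))                       # roughly 780-1170 ms: too slow for unlatched inputs
    print(estimated_loop_ms("386", coprocessor=True))    # roughly 94-141 ms: inside the 200 ms unlatched limit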

Attachment 8.5 Page 4 of 8

For logic functions that run independent of the modeled system, a programmable logic controller (PLC) can offload a node. Examples of PLC usage include alarm / acknowledge sequences, switch latching, position-indicating lamps and boolean (ladder) logic. The PLC interface to the panel hardware is fast and independent of the computer. For example, 100 points of "solved" input/output (I/O) take approximately 1 millisecond. This throughput includes an average overhead of 10 words of boolean logic solution per panel I/O point. The PLC interface to the computer is slow (9600 baud). This baud rate is approximately equal to 450 words of data per second, where each word is 16 digital points or one analog point. Decoding the message protocol increases the computer's overhead. The best areas for implementing the use of PLCs are those systems that have a large number of panel I/O points and boolean logic combined with a minimum number of transfers to and from the computer.
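The 450 words per second figure is consistent with ordinary asynchronous serial framing, as the back-of-envelope sketch below shows; the one-start-bit, one-stop-bit framing is our assumption, not stated in the text.

    # Back-of-envelope check of the quoted serial link throughput.
    baud = 9600                  # bits per second on the PLC-to-computer link
    bits_per_byte = 10           # assumed framing: 1 start bit + 8 data bits + 1 stop bit
    bytes_per_word = 2           # one word = 16 digital points or one analog point

    words_per_second = baud / (bits_per_byte * bytes_per_word)
    print(words_per_second)      # 480.0 raw; message protocol overhead brings this near the quoted 450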

HARDWARE CONFIGURATION:

Computer

An IBM/AT, IBM/PS2 or "true compatible" becomes the basic node. If used as a "remote" node (behind the hardware panel) this computer can be a minimum system. One floppy drive, a keyboard and a display adapter are all that are necessary to boot up. A display screen is recommended for hardware checkout and initial debugging, but the screen is not required in the final configuration since code can be downloaded from the "master" node. A "speed-up" card and/or math co-processor can be added. For the file server, add a 30 megabyte or larger hard drive. For the instructor's "master" node and/or the operator console, an enhanced graphics adapter (EGA), a 19 inch color screen and touch screen capability can be added.

Local area network (LAN)

The ring topology describes the hardware connection path. Token passing protocol is a description of the format of the data that is passed between computers. This combination of topology and protocol allows each node on the system an equal chance to communicate with the file server. The file server acts like a remote disk drive to each node and can allow each node to share data with other nodes. One network interface card is required to be installed in each computer. One wire center is required for each 8 computers, and each computer has a cable connecting the interface card to the wire center.


Attachment 8.5 Page 5 of 8

Panel front-end gear (input / output hardware)

One interface is typically used for each "remote" node. This is the interface between the panel hardware and the computer. This interface can be memory mapped (treating the external signal as data in memory) or the data can be acquired through a communication port. Any of the following modules or combination of them can be selected as required by the functions for a particular node.

Typically up to 10 modules can be installed per node.

analog input module - Each module can read up to 8 analog PID controller setpoints.

analog output module - Each module can drive up to 5 analog meters.

digital input modules - Each module can read up to 32 switch contacts.

digital output modules - Each module can switch up to 16 or 32 relays. The relay contacts are used to switch power from an external power supply to the lamps.

Panel hardware

Panel hardware consists of a duplication of the meters, switches, controller face plates, lamps and alarms that exist on the plant control panel. The I/O point count depends on the type of instrumentation being simulated. This varies from a push button that requires one digital input to a Proportional Integral Derivative (PID) controller faceplate that requires one analog input (local setpoint), 2 analog outputs (deviation meter and controller output meter), 7 digital inputs (manual ramp up slow / fast, down slow / fast, local auto, remote auto, remote auto deviation check) and 1 digital output (status / alarm). The switches and setpoint dial typically require modification to interface them to the front-end gear. Handswitches typically require 2 to 3 digital inputs. Analog meters usually have 1 and occasionally 2 pens. Lamps are typically one point per lamp. Alarms can be 1 point per alarm or 2 points with first-in logic. Other panel accessories include wire, terminal strips and power supplies.
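When sizing the front-end gear, the per-instrument counts above can simply be tallied across the panel; the inventory in the sketch below is hypothetical and only the per-instrument counts come from this paragraph.

    # Illustrative I/O sizing tally; the panel inventory is hypothetical.
    POINTS = {                      # (analog in, analog out, digital in, digital out)
        "pid_faceplate": (1, 2, 7, 1),
        "push_button":   (0, 0, 1, 0),
        "handswitch":    (0, 0, 3, 0),   # 2 to 3 digital inputs; worst case used here
        "analog_meter":  (0, 1, 0, 0),
        "lamp":          (0, 0, 0, 1),
        "alarm":         (0, 0, 0, 2),   # 2 points when first-in logic is used
    }

    panel = {"pid_faceplate": 4, "push_button": 6, "handswitch": 8,
             "analog_meter": 12, "lamp": 20, "alarm": 10}

    totals = [sum(POINTS[kind][i] * qty for kind, qty in panel.items()) for i in range(4)]
    print(dict(zip(["AI", "AO", "DI", "DO"], totals)))   # {'AI': 4, 'AO': 20, 'DI': 58, 'DO': 44}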

SOFTWARE CONFIGURATION:

IBM or Microsoft disk operating system (DOS).

This is required on each node.

Attachment 8.5 Page 6 of 8

Novell local area network.

This is the operating system for the local area network support. System fault tolerance may be added as an option. The local area network runs with a base load of about 5%. Approximately 1.5% loading is added per node. More than 30 remotes per network file server are not recommended; however, multiple networks can be used and can share their data.

Model Software.

Model Software is a real-time high level engineering language that can output data to user defined screens and interface with local area networks and data acquisition and control hardware. Remote nodes can be controlled with a local master node. This code is required on all nodes except the file server.

The Model language includes control algorithms (e.g. rate limiters, ramp generators, PID controllers), engineering unit conversions (e.g. mass flow to ACFM, ADC counts to pressure), file manipulation (create / read / write random access files), input and output to data acquisition and control hardware (Keithley, Gould, Landis and Gyr and miscellaneous RS-232 devices), limit checks and alarms, boolean logic, standard math, integration, trig functions, polynomial curve fits and programming functions (e.g. branch and loop). Time compression and expansion is accomplished by changing the common time variable in each processor.
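The Model language itself is proprietary, so the sketch below simply illustrates the kind of control algorithm it supplies; this is a generic discrete PID form, not the Model implementation, and the gains are arbitrary.

    # Generic discrete PID sketch; not the Model language implementation.
    class PID:
        def __init__(self, kp, ki, kd, dt):
            self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
            self.integral = 0.0
            self.prev_error = 0.0

        def update(self, setpoint, measurement):
            error = setpoint - measurement              # this is what a deviation meter would show
            self.integral += error * self.dt
            derivative = (error - self.prev_error) / self.dt
            self.prev_error = error
            return self.kp * error + self.ki * self.integral + self.kd * derivative

    controller = PID(kp=2.0, ki=0.5, kd=0.1, dt=0.2)    # dt chosen to match a 200 ms loop
    print(controller.update(setpoint=50.0, measurement=47.5))   # controller output for one step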

The Model software man-machine interface includes user defined display screens, user defined keyboard, and touch screen input.

Support programs include Panel initialization (read the panel hardware and compare it to the selected data base), Screen display building, and polynomial curve fit generation.

On power-up the nodes read an autoexec.bat file that signs them onto the Local Area Network and reads in the Model program. A "setup" file is read to determine if the node is master or remote. Each node can be programmed to read in a previously selected data base and begin running its section of the model, or the node can wait for input from the keyboard or "command" file before running. The instructor can select a new data base or new initial conditions, or he can command the remotes to freeze or restart.
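The power-up decision described above amounts to a small bootstrap check; the sketch below is illustrative only and the "setup" file format shown is hypothetical.

    # Illustrative bootstrap sketch; the setup file format is hypothetical.
    def read_setup(path="SETUP.CFG"):
        """Parse simple key=value lines, e.g. role=remote, autostart=yes, database=IC07."""
        settings = {}
        with open(path) as f:
            for line in f:
                if "=" in line:
                    key, value = line.strip().split("=", 1)
                    settings[key.lower()] = value.lower()
        return settings

    setup = {"role": "remote", "autostart": "yes", "database": "IC07"}   # stands in for read_setup()
    if setup["role"] == "master":
        pass   # bring up the instructor display and accept keyboard commands
    elif setup["autostart"] == "yes":
        pass   # load the named initial condition data base and start running this node's model section
    else:
        pass   # wait for a keyboard or "command" file instruction before running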


Attachment 8.5 Page 7 of 8

APPLICATION SOFTWARE:

The application coding consists of transferring the control and process flow system blocks into Model code.

The constants need to be extrapolated from the data sheets and characteristic curves for the different valves, pumps, and components. The logic section is transferable from electrical schematics using the boolean algorithms.

Once the scope of the model is defined, the hardware is ordered, and the initial learning curve for Model is complete, the work load settles in at 60% research, 10% code, 10% installation and wiring, and 20% documentation and validity checking. A small 5 node system configured with 1 file server, 1 master and 3 remotes, and with a total panel I/O of 170 points and shared I/O of 150 points, can be expected to require a total of 1300 lines of code (300 to 400 lines per remote node) and approximately 12 man months to complete. Additional nodes can be expected to be validated and "come on line" every 3 to 4 man months.

It is highly recommended that the application code be written by the "control system local expert." Support can come from the "computer" personnel in the areas of man / machine interface, local area network support and file manipulation. "Hardware" personnel can support the panel calibration and front-end gear interface.

CONCLUSION:

The advantages of "distributed process simulation" are:

One only needs to purchase the section that will "go on line" next. Since each new "part task" simulation can take advantage of current technology, the simulation hardware will not be out of date by the time the software has been written. Lessons learned in developing previous nodes can be implemented on the next node. Development, validation and training can be done in parallel. Most importantly, "distributed process simulation" allows developers to break the process down to a manageable size.


Attachment 8.5 Page 8 of 8

HARDWARE CONFIGURATION

[Figure 8.5-1 is a block diagram of the overall hardware configuration: the instructor's "master" IBM/AT and the remote IBM/AT nodes A through Z connected through a LAN wire center to a dedicated IBM/AT file server holding the shareable control and data files; each remote node drives its section of the control panel (meters, controllers, switches, alarms, lamps) through memory mapped I/O, with an optional PLC providing additional logic and I/O; each node's model uses data from the instructor, the other systems and its own front-end gear.]

Each node (A through Z) contains 50 to 60 points of panel hardware I/O and 40 to 50 points of shared data. Optional PLC support can increase the panel I/O count substantially.

Figure 8.5-1

Attachment 8.6 Page 1 of 3

FORT SAINT VRAIN REDUCED SCOPE SIMULATION QUALITY ASSURANCE PLAN

This document addresses and establishes the Quality Assurance Program requirements for the FSV Reduced Scope Simulator (RSS) Program.

QUALITY REQUIREMENTS

The reduced scope simulator and its associated structures, systems and components shall meet the requirements listed in this document.

1.0 Design Control, Criterion III

1.1 Original construction and verification shall be in accordance with the general requirements of the RSS Design Specifications and the specific requirements of the simulator construction work packages.

1.2 Controlled work instructions shall be utilized for performing modifications to the RSS which may result from the simulator group's review of applicable mechanisms of change associated with the reference plant.

1.3 A software system baseline shall be established and changes controlled in accordance with the applicable requirements delineated in "Management of Computer Software Systems."

2.0 Procurement Document Control, Criterion IV

Procurement of new, spare or replacement parts and components for the RSS shall be in accordance with Administrative Procedure Q-4. Appropriate quality control test and inspection requirements and procurement requirements shall be as specified in the procurement documents.

Attachment 8.6 Page 2 of 3

3.0 Instructions, Procedures and Drawings, Criterion V

Operation, maintenance, quality assurance and other activities which affect the installation and implementation of the RSS shall be accomplished according to instructions, procedures or drawings. These instructions, procedures or drawings shall be sufficiently detailed and explicit and shall contain acceptance criteria where applicable such that it can be determined whether activities are being satisfactorily accomplished and documented.

4.0 Document Control, Criterion VI

The control of documents and procedures utilized in the overall operation of the RSS shall be accomplished according to approved procedures.

5.0 Control of Purchased Material, Equipment and Services, Criterion VII

Procurement of items shall meet the requirements of Administrative Procedure Q-7, as invoked by Administrative Procedure Q-4 and the procurement documents.

6.0 Identification and Control of Materials, Parts and Components, Criterion VIII

Identification and control of the materials, parts and components for the RSS shall be in accordance with approved procedures.

7.0 Inspection, Criterion X

Inspections of construction, maintenance, repair or modifications shall be in accordance with APM Q-10 and/or procedures governing maintenance of non-plant equipment and simulator program design control procedures.

8.0 Test Control, Criterion XI

Testing of the RSS shall be in accordance with Administrative Procedure Q-11 and/or other applicable documents, such as the "Management of Computer Software Systems" and the "RSS Design Specification."


Attachment 8.6 Page 3 of 3

9.0 Control of Measuring and Test Equipment, Criterion XII

Procedures shall be in effect to provide and maintain a system that assures that tools, gauges, instruments, control equipment and other measuring and testing devices are controlled, calibrated and adjusted at specified intervals, with deficiencies processed per the Discrepancy Report Procedure maintained within the Simulator Facility Program's design control processes.

10.0 Nonconforming Materials, Parts, or Components, Criterion XV

Control of nonconforming materials, parts, or components shall be in accordance with the Simulator Configuration Management Plan's procedures for discrepancy reports, which shall be developed using APM Q-15 as a guideline. Control of software package discrepancies shall be in accordance with "Management of Computer Software Systems".

11.0 Corrective Action, Criterion XVI

Programmatic deficiencies identified as a result of audit or monitoring functions shall be documented and resolved utilizing the corrective action system, in accordance with Administrative Procedure Q-16. Software errors shall be processed in accordance with the corrective action requirements delineated in "Management of Computer Software Systems."

12.0 Records, Criterion XVII

Records for the RSS shall be generated as required by the QA Program, Training Program and applicable implementing procedures.

13.0 Audits, Criterion XVIII

Audits specific to the Simulator Program will not be required, as programmatic deficiencies will be identified and documented during audits of the various FSV Programs used to support the RSS operations (i.e., Training Program, Design Control, etc.).
