RC-06-0024, 1007930 - On-Line Monitoring of Instrument Channel Performance, Volume 3: Applications to Nuclear Power Plant Technical Specification Instrumentation

1007930 - On-Line Monitoring of Instrument Channel Performance, Volume 3: Applications to Nuclear Power Plant Technical Specification Instrumentation
ML060400225
Person / Time
Site: Summer (South Carolina Electric & Gas Company)
Issue date: 12/31/2004
From: Davis E, Rasmussen B, Shankar R
Edan Engineering Corp, Electric Power Research Institute
To:
Office of Nuclear Reactor Regulation
References
LAR 05-0677, RC-06-0024, 1007930
Download: ML060400225 (266)


Text

On-Line Monitoring of Instrument Channel Performance, Volume 3: Applications to Nuclear Power Plant Technical Specification Instrumentation

WARNING: Please read the License Agreement on the back cover before removing the wrapping material.

On-Line Monitoring of Instrument Channel Performance, Volume 3: Applications to Nuclear Power Plant Technical Specification Instrumentation
1007930
Final Report, December 2004
EPRI Project Manager: R. Shankar
EPRI

  • USA 800.313.3774
  • 650.855.2121
  • askepri@epri.com
  • www.epri.com

DISCLAIMER OF WARRANTIES AND LIMITATION OF LIABILITIES THIS DOCUMENT WAS PREPARED BY THE ORGANIZATION(S) NAMED BELOW AS AN ACCOUNT OF WORK SPONSORED OR COSPONSORED BY THE ELECTRIC POWER RESEARCH INSTITUTE, INC. (EPRI). NEITHER EPRI, ANY MEMBER OF EPRI, ANY COSPONSOR, THE ORGANIZATION(S) BELOW, NOR ANY PERSON ACTING ON BEHALF OF ANY OF THEM:

(A) MAKES ANY WARRANTY OR REPRESENTATION WHATSOEVER, EXPRESS OR IMPLIED, (I) WITH RESPECT TO THE USE OF ANY INFORMATION, APPARATUS, METHOD, PROCESS, OR SIMILAR ITEM DISCLOSED IN THIS DOCUMENT, INCLUDING MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE, OR (II) THAT SUCH USE DOES NOT INFRINGE ON OR INTERFERE WITH PRIVATELY OWNED RIGHTS, INCLUDING ANY PARTY'S INTELLECTUAL PROPERTY, OR (III) THAT THIS DOCUMENT IS SUITABLE TO ANY PARTICULAR USER'S CIRCUMSTANCE; OR (B) ASSUMES RESPONSIBILITY FOR ANY DAMAGES OR OTHER LIABILITY WHATSOEVER (INCLUDING ANY CONSEQUENTIAL DAMAGES, EVEN IF EPRI OR ANY EPRI REPRESENTATIVE HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES) RESULTING FROM YOUR SELECTION OR USE OF THIS DOCUMENT OR ANY INFORMATION, APPARATUS, METHOD, PROCESS, OR SIMILAR ITEM DISCLOSED IN THIS DOCUMENT.

ORGANIZATION(S) THAT PREPARED THIS DOCUMENT Edan Engineering Corporation EPRI ORDERING INFORMATION Requests for copies of this report should be directed to EPRI Orders and Conferences, 1355 Willow Way, Suite 278, Concord, CA 94520, (800) 313-3774, press 2 or internally x5379, (925) 609-9169, (925) 609-1310 (fax).

Electric Power Research Institute and EPRI are registered service marks of the Electric Power Research Institute, Inc. EPRI. ELECTRIFY THE WORLD is a service mark of the Electric Power Research Institute, Inc.

Copyright © 2004 Electric Power Research Institute, Inc. All rights reserved.

CITATIONS

This report was prepared by

Edan Engineering Corporation
900 Washington St., Suite 830
Vancouver, WA 98660
Principal Investigator: E. Davis

EPRI I&C Center
TVA Kingston Fossil Plant
714 Swan Pond Road
Harriman, TN 37748
Principal Investigator: B. Rasmussen

This report describes research sponsored by EPRI.

The report is a corporate document that should be cited in the literature in the following manner:

On-Line Monitoring of Instrument Channel Performance, Volume 3: Applications to Nuclear Power Plant Technical Specification Instrumentation, EPRI, Palo Alto, CA: 2004. 1007930.


REPORT SUMMARY

Background

The EPRI/Utility On-Line Monitoring Working Group developed Topical Report TR-104965, On-Line Monitoring of Instrument Channel Performance, as part of an ongoing team effort.

During the course of the project, several working group meetings were held, and at key points during this topical report's development, presentations were made to Nuclear Regulatory Commission (NRC) staff personnel. The goal of the topical report was to obtain generic approval of on-line monitoring as a calibration assessment technique. The NRC issued its Safety Evaluation (SE) for TR-104965 in July 2000 and the topical report was revised in September 2000 to incorporate the NRC SE. The SE authorizes the application of on-line monitoring to safety-related instruments governed by a plant's technical specifications. The SE also identified various technical issues that must be addressed as part of a technical specification change submittal.

This report is a guide for a technical specification change submittal and subsequent implementation of on-line monitoring for safety-related applications. This report is the third in a three-volume set. Volume 1, Guidelines for Model Development and Implementation, presents the various tasks that must be completed to prepare models for and to implement an on-line monitoring system. Volume 2, Algorithm Description, Model Examples, and Results, contains more detailed descriptions of the empirical modeling algorithms.

Objective

  • To provide guidelines for nuclear power plants to successfully submit technical specification changes that justify extending the calibration intervals of redundant instruments

Approach

This report provides detailed information regarding the application of on-line monitoring to nuclear plant safety-related instrument systems, including:

  • Additional information regarding the content and scope of the technical specification changes in support of on-line monitoring
  • A discussion of the uncertainty of the Multivariate State Estimation Technique (MSET) with respect to safety-related setpoints. Detailed guidance is provided regarding the application of MSET to safety-related instruments while ensuring that setpoint uncertainty allowances are not affected.


  • A description of the verification and validation of the MSET software, along with a software acceptance test that can also be used as a periodic test.
  • A discussion of additional technical issues that are important to the overall implementation process.

The EPRI on-line monitoring implementation project originally selected the Multivariate State Estimation Technique (MSET) as its preferred on-line monitoring method.

Results

Detailed implementation guidance is provided to assist users in applying on-line monitoring technologies to technical specification instrumentation. Extending calibration intervals for safety-related instruments requires changes to each plant's technical specifications. Guidance is provided to help standardize the change process.

This third volume includes the following: an overview of how to extend calibration intervals by the use of on-line monitoring; a description of the technical specification changes that are recommended to extend calibration intervals; and guidance regarding on-line monitoring acceptance criteria. It also addresses measurement and estimation uncertainty as well as software verification and validation criteria for on-line monitoring applied to technical specification-related instruments.

EPRI Perspective

EPRI's strategic role in on-line monitoring is to facilitate its implementation and use in numerous applications at power plants. On-line monitoring of instrument channels provides increased information about the condition of monitored channels through accurate, more frequent evaluation of each channel's performance over time. This type of performance monitoring offers an alternative to traditional time-directed calibration. EPRI is committed to the development and implementation of on-line monitoring as a tool for extending calibration intervals and evaluating instrument performance.

Keywords: Calibration, Condition monitoring, Instrumentation and control, Nuclear plant operations and maintenance, Safety-related instrumentation, Technical specification

ACKNOWLEDGMENTS

EPRI recognizes the following individuals for their contributions to this project. Their time and attention in support of this project are greatly appreciated.

Randy Bickford, Expert Microsystems, Inc.
David Carroll, South Carolina Electric and Gas
Pat Colgan, Exelon Corporation
Eddie L. Davis, Edan Engineering Corporation
Steve Dixon, Exelon Corporation
William Drendall, Amergen Energy
Dave Hooten, Carolina Power and Light
Jerry Humphreys, CANUS Corporation
Aaron Hussey, EPRI
Robert Kennedy, Exelon Corporation
Calvin C. King Jr., Public Service Electric & Gas Co.
Vo Lee, Expert Microsystems, Inc.
Hubert Ley, Argonne National Laboratory
David Lillis, British Energy
Edwina Liu, Expert Microsystems, Inc.
Connie Love, Tennessee Valley Authority
Adrian Miron, Argonne National Laboratory
Karl Nesmith, Tennessee Valley Authority
Mike Norman, Tennessee Valley Authority
Ken Olenginski, Exelon Corporation
Steve Orme, British Energy
Keith Pierce, Public Service Electric & Gas Co.
Jeff Richardson, British Energy
Richard Rusaw, South Carolina Electric and Gas
Larry Straub, Exelon Corporation
Bill Turkett, South Carolina Electric and Gas
Tom Wei, Argonne National Laboratory
Bill Winters, Exelon Corporation
John Yacyshyn, Exelon Corporation
Chenggang Yu, Argonne National Laboratory
Nela Zavaljevski, Argonne National Laboratory
Jack Ziegler, Exelon Corporation

EPR LicensedMaterial CONTENTS I INTRODUCTION ................ 1-1 1.1 Report Purpose .. 1-1 1.2 Implementation Strategy for Technical Specification Applications . .1-2 1.3 NRC Safety Evaluation for On-Line Monitoring .. 1-4 1.4 EPRI's Role in On-Line Monitoring .. 1-4 1.5 Terminology Used in This Report .. 1-5 1.5.1 Channel, Sensor, and Signal .1-5 1.5.2 Modeling Terms .1-6 1.6 Redundant versus Non-Redundant Empirical Modeling . .1-7 1.6.1 Benefits and Drawbacks of Redundant Instrument Channel Monitoring . 1-8 1.7 Alternative Empirical Modeling Algorithms .. 1-9 2 NRC SAFETY EVALUATION REVIEW CRITERIA .2-1 3 TYPICAL TECHNICAL SPECIFICATION CHANNELS SUITABLE FOR ON-LINE MONITORING......................................................................................................................... 3-1 3.1 Suitable Applications .3-1 3.2 Unsuitable Applications. 3-4 4 TECHNICAL SPECIFICATION CRITERIA .4-1 4.1 Technical Specification Change Summary .. 4-1 4.2 Suggested Technical Specification Wording .. 4-2 4.2.1 Definition Changes .4-2 4.2.2 Addition of New Surveillance Types.4-2 4.2.3 Example Change to Reactor Trip System Instrumentation Table .4-4 4.2.4 Technical Specification Bases .4-5 4.2.5 Effect on Trip Setpoint and Allowable Value .4-5 4.3 Checklist for Technical Specification Change Submittal . .4-5 ix

EPRILicensedMaterial 5 SINGLE-POINT MONITORING ............................................................... 5-1 5.1 Summary of Single-Point Monitoring Issue ............................................................... 5-2 5.2 Drift Types ............................................................... 5-2 5.3 Instrument Channel Variation at Power-A Closer Look ........................................... 5-6 5.4 Establishing an Allowance for Single-Point Monitoring ............................................. 5-12 5.4.1 Background ............................................................... 5-12 5.4.2 EPRI Drift Study ............................................................... 5-13 5.4.2.1 Data Collection ............................................................... 5-13 5.4.2.2 Binomial Pass/Fail Method of Analysis ...................................................... 5-13 5.4.2.3 Application of Binomial Pass/Fail Analysis to Single-Point Monitoring ....... 5-15 5.4.3 Single-Point Monitoring Allowance Development . . ..................... 5-16 5.4.3.1 TR-104965-R1 Single-Point Monitoring Allowance .................................... 5-16 5.4.3.2 Additional Comments Regarding TR-104965-Rl ....................................... 5-17 5.4.3.3 Allowance Based on 95% Minimum Probability ......................................... 5-17 5.5 Plant-Specific Confirmation ............................................................... 5-19 6 ON-LINE MONITORING UNCERTAINTY ANALYSIS ......................................................... 6-1 6.1 Traditional Uncertainty Elements Included in On-Line Monitoring . ............................. 6-4 6.2 Unique Uncertainty Elements Introduced by the Use of On-Line Monitoring .............. 6-6 6.2.1 MSET Estimate Uncertainty .............................................. 6-6 6.2.2 Single-Point Monitoring Allowance ............................................................... 6-7 6.3 On-Line Monitoring Acceptance Criteria ............................................................... 6-7 6.3.1 Establishing the Setpoint Calculation Drift Allowance . .......................... 6-7 6.3.1.1 Setpoint Allowances-Westinghouse Plant Example ............. ..................... 6-8 6.3.1.2 On-Line Monitoring Drift Allowance-Westinghouse Plant Example ........... 6-9 6.3.1.3 Possible Analysis Variations for Other Reactor Types ............................... 6-10 6.4 Application of Acceptance Criteria to MSET Residuals . ............................................ 6-11 6.5 Actions Upon Detection of a Drifted Channel ........................................................... 6-12 6.5.1 Acceptable Region ............................................................... 6-13 6.5.2 Schedule Routine Calibration--MAVD ...................................... 6-13 6.5.3 Operability Assessment-ADVOLM ............................................................... 6-14 6.6 Ongoing Calibration Monitoring Program ............................................................... 6-14 x

EPRI Licensed Material 7 ON-LINE MONITORING PROCEDURES AND SURVEILLANCES ..................................... 7-1 7.1 Impact on Plant Procedures and Documents ......................................................... 7-1 7.2 Calibration Procedures ......................................................... 7-2 7.3 On-Line Monitoring System Operation Procedures ................................................... 7-2 7.4 Quarterly Surveillance Procedure . .......................... 7-3 7.5 Software Verification and Validation - General Criteria ........................................... 7-4 7.6 MSET V&V ........... . 7-5 7.6.1 Argonne National Laboratory V&V . . ........................................................ 7-5 7.6.1.1 Background .......................................................... 7-5 7.6.1.2 V&V Documents ........................................................... 7-6 7.6.2 Expert Microsystems SureSense Software V&V . . ............................................. 7-6 7.6.3 EPRI Independent V&V of the SureSense Software ......................................... 7-7 7.7 Additional V&V Considerations ................................... 7-8 7.7.1 Data Acquisition . ......................................................... . 7-8 7.7.2 Model Configuration Control ............................................................ 7-8 7.8 MSET Software Acceptance Test ............................................................ 7-9 7.9 Redundant Channel Methods V&V ............................................................ 7-9 7.9.1 ICMP V&V . .......................................................... 7-9 8 MISCELLANEOUS TECHNICAL CONSIDERATIONS ........................................................ 8-1 8.1 Bases for Reduced Response Time Testing ........................................................... 8-1 8.2 Historical Instrument Performance ........................ ................................... 8-2 8.3 Common-Mode Failures or Common Bias Effects ..................................................... 8-3 9 REFERENCES .......................................................... 9-1 A REDUNDANT INSTRUMENT CHANNEL MONITORING TECHNIQUES AND UNCERTAINTY ANALYSIS ............................................................ A-1 A.1 Redundant Methods Overview ................. ......................................... A-1 A.2 Types of Redundant Instrument Channel Monitoring Techniques ............................. A-2 A.2.1 Parity Space Methods .A-2 A.2.2 ICMP .A-2 A.2.3 Principal Component Based Methods .A-2 A.2.4 Fuzzy Logic Methods .A-3 A.2.5 Redundant Sensor Estimation Technique .A-3 xi

EPRI Licensed Material A.3 Analysis of the Mean, Variance, and Standard Deviation of the Average of n Normal Random Variables .............................................................. A-4 A.3.1 Derivation of the Mean and Standard Deviation of the Average of a Set of n Normal Random Variables ............................................................. A-5 B NRC SAFETY EVALUATION ...................... B-1 C MSET SOFTWARE VERIFICATION AND VALIDATION REPORT .................................... C-1 C.1 Introduction ........................................................................................................... C-1 C. 1.1 Report Purpose. . . . C-1 C.1.2 Report Applicability . . . .C-2 C.1.3 On-Line Monitoring Overview .. . . C-2 C.1.4 EPRI's Role in On-Line Monitoring .. . . C-3 C.1.5 SureSense Diagnostic Monitoring System. . . . C-3 C.1.5.1 Overview ................................ C-3 C.1.5.1 System Description ................................ C-4 C.2 Verification and Validation Summary ...... .......................... C-5 C.3 Verification and Validation Plan ................................ C-6 C.3.1 Overview .... .............................. C-6 C.3.1.1 Purpose ... 0...........................

C-6 C.3.1.2 Goals ................................ C-6 C.3.1.3 Scope ................................ C-6 C.3.1.4 Waivers ................................ C-7 C.3.2 References .... .............................. C-7 C.3.3 V&V Overview ..... ................................. C-7 C.3.3.1 Organization .... 0...........................

C-7 C.3.3.2 Master Schedule ................................ C-8 C.3.3.3 Resource Summary ................................ C-8 C.3.3.4 Responsibilities ................................ C-9 C.3.4 V&V Required Activities ................................ C-9 C.3.4.1 Traceability Review ................................ C-9 C.3.4.2 Interface Analysis ................................ C-9 C.3.4.3 Test Plan ................................ C-9 C.3.4.4 Test Plan Review ................................ C-9 C.3.4.5 Testing Location ................................ C-9 C.3.4.6 Acceptance Criteria ................................ C-9 xii

EPRI Licensed Material C.3.4.7 Periodic Test Plan ... .................. C-10 C.3.5 V&V Administrative Requirements . .. . C-10 C.3.5.1 Anomaly Resolution and Reporting .. C-10 C.3.5.2 Deviation Policy .. C-10 C.3.6 V&V Reporting Requirements . .. . C-10 C.3.6.1 V&V Summary Report .. C-10 C.3.6.2 Deviation Policy .. C-10 C.3.6.3 V&V Final Report .. C-10 C.4 Functional Requirements Document . .. . C-10 C.4.1 General Functional Requirements . .. . C-11 C.4.1.1 Operating Environment .. C-11 C.4.1.2 Hardware Requirements .. C-11 C.4.1.3 Operating System .. C-11 C.4.1.4 Supporting Software .. C-11 C.4.2 Input Requirements ....... C-11 C.4.2.1 Administrator Input Requirements .. C-12 C.4.2.2 Monitor Input Requirements .. C-12 C.4.2.3 Designer Input Requirements .. C-12 C.4.2.4 Command Line Execution Input Requirements . . C-13 C.4.3 Data Management ....... C-13 C.4.4 Computational Requirements . .. . C-14 C.4.5 Security Requirements ....... C-14 C.4.6 Output Requirements .. C-15 C.4.6.1 Designer Output Requirements .. C-15 C.4.6.2 Monitor Output Requirements .. C-15 C.4.6.3 Administrator and Command Line Execution Output Requirements .. C-15 C.5 Design Description Document . .. . C-16 C.5.1 Software Design Requirements . .. . C-16 C.5.1.1 Input Design Requirements .. C-16 C.5.1.1.1 Administrative Role .. C-16 C.5.1.1.2 Monitor Role .. C-16 C.5.1.1.3 Designer Role .. C-17 C.5.1.2 Data Management Specifications .. C-21 C.5.1.3 Computational Specifications .. C-22 C.5.1.4 Output Specifications .. C-23 xiii

EPRILicensed Material C.5.1.5 Miscellaneous Design Requirements ..................................... C-25 C.6 V&V Test Plan ........................ ............. C-25 C.6.1 Purpose .C-25 C.6.2 References .C-25 C.6.3 Approach .C-26 C.6.4 Environmental Needs .............. C-26 C.6.5 Test Deliverables ................ C-26 C.6.6 Test Items .............. C-27 C.6.7 System Testing .C-27 C.6.8 Acceptance Testing .............. C-27 C.6.9 Requirements Testing .C-28 C.6.10 Suspension and Resumption Criteria . C-28 C.7 Periodic Test Plan ............................ C-28 C.7.1 Purpose .C-28 C.7.2 References .C-28 C.7.3 Precautions and Limitations. C-28 C.7.4 Testing Environment. C-28 C.7.5 Reporting Requirements .C-29 C.8 V&V Test Procedure ............................. C-29 C.8.1 Purpose ............................ C-29 C.8.2 Pre-Test Procedure ........................... C-29 C.8.3 Test Procedure ........................... C-30 C.9 V&V Test Deviation Report . . .......................... C-31 C.9.1 Purpose ............................ C-31 C.9.2 Results ............................ C-32 C.10 V&V Test Data Record .................. ......... C-32 C.10.1 Purpose ........................... C-32 C.10.2 Record ........................... C-32 C.11 References ............................. C-55 C.11.1 EPRI References............................ C-55 C.11.2 Miscellaneous References ........................... C-55 C.12 Glossary .............................. C-57 C.13 Standard Test Project Results Summary. . . C-60 xiv

EPRP Licensed Material D UNCERTAINTY ANALYSIS OF THE INSTRUMENT CALIBRATION AND MONITORING PROGRAM ALGORITHM ....................................................... D-1 D.1 ICMP Overview ........................................................ D-1 D.2 ICMP Uncertainty Analysis Methodology ....................................................... D-1 D.2.1 Accuracy and Number of Monitored Channels .................................................. D-2 D.2.2 Consistency Check Criteria .................. ...................................... D-5 E MSET UNCERTAINTY ANALYSIS .......................... E-1 E.1 Abstract ........................... E-3 E.2 Acknowledgments ............................ E-5 E.3 Acronyms ........................... E-6 E.4 Introduction ....................... ... E-7 E.5 Methodology ........................... E-8 E.5.1 Latin Hypercube Sampling .. E-8 E.5.2 Wavelet Denoising .. E-10 E.5.3 Regularization .. E-1 I E.5.4 Uncertainty Measures .. E-12 E.5.5 Plant-Specific Uncertainty Analysis .. E-13 E.5.6 Generic Uncertainty Analysis .. E-14 E.6 Implementation ......................................... E-15 E.6.1 Plant-Specific Uncertainty Analysis Modules ........................................ E-1 5 E.6.2 Generic Uncertainty Analysis Tool ................. ...................... E-1 8 E.7 Simulation Results ..... .................... .............. E-20 E.7.1 Plant-Specific Uncertainty Analysis ....................................... E-20 E.7.2 Generic Uncertainty Analysis ............................................ E-22 E.7.2.1 Noise Effects ........................................ E-22 E.7.2.2 Sensitivity and Spillover ....................................... E-28 E.8 Uncertainty Analysis Summary . .................................. E-33 E.9 References ......................................... E-34 E.10 Plant-Specific Uncertainty Analysis ......... ................ ............... E-36 E.10.1 RCS Flow Model .. ...................................... E-36 E.10.2 Pressurizer Level ................ ....................... E-37 E.10.3 RCS Pressure ........................................ E-38 E.10.4 Reactor Protection System (RPS) Models ....................................... E-39 xv

EPRILicensedMaterial F MSET ACCEPTANCE TEST AND PERIODIC TEST .............. ........................ F-1 F.1 Overview ........................................ F-1 F.2 Model and Data ....................................... F-1 F.3 Test Procedure ........................................ F-3 F.4 Expected Test Results ...................................... F-10 F.4.1 Model Training Report ....................................... . . F-10 F.4.2 Run Results . . . ...............................F-12 F.4.3 Signal Reports ....................................... . F-14 F.4.3.1 Signal Report for S1 ...................................... F-14 F.4.3.2 Signal Report for S2 ...................................... F-15 F.4.3.3 Signal Report for S3 ...................................... F-16 F.4.3.4 Signal Report for S4 ...................................... F-17 F.4.3.5 Signal Report for S5 ....................................... F-1 8 F.4.4 Signal Plots . . ................................ F-19 F.4.4.1 Signal Plots for S1 ....................................... F-20 F.4.4.2 Signal Plots for S2 ....................................... F-21 F.4.4.3 Signal Plots for S3 ...................................... F-22 F.4.4.4 Signal Plots for S4 ....................................... F-23 F.4.4.5 Signal Plots for S5 ...................................... F-24 xvi

EPRILicensedMaterial LIST OF FIGURES Figure 1-1 Generalized Traditional Calibration Process .......................................................... 1-3 Figure 1-2 Generalized Calibration Process with On-Line Monitoring ...................................... 1-4 Figure 1-3 Instrument Channel in Terms of On-Line Monitoring .............................................. 1-6 Figure 5-1 Steam Generator Level Variation-Westinghouse Plant........................................ 5-2 Figure 5-2 Zero Shift Drift ................................................................ 5-3 Figure 5-3 Span Shift Drift ................................................................ 5-4 Figure 5-4 Combined Zero and Span Shift Drift ................................................................ 5-4 Figure 5-5 Nonlinear Drift ................................................................ 5-5 Figure 5-6 Typical Nuclear Plant Power Variation ................................................................ 5-7 Figure 5-7 Desired Nuclear Plant Power Variation ................................................................ 5-7 Figure 5-8 Steam General Level Variation During an Operating Cycle .................................... 5-8 Figure 5-9 RCS Pressure Variation During an Operating Cycle ....................... ....................... 5-9 Figure 5-10 Pressurizer Level Variation During an Operating Cycle ....................................... 5-10 Figure 5-11 Steam Flow Variation During an Operating Cycle ............................................... 5-10 Figure 5-12 Feedwater Flow Variation During an Operating Cycle ......................................... 5-11 Figure 5-13 Turbine First Stage Pressure Variation During an Operating Cycle ..................... 5-11 Figure 5-14 Steam Generator Pressure Variation During an Operating Cycle ........................ 5-12 Figure 5-15 TR-104965-Rl Recommended Allowance for Single-Point Monitoring ................ 5-16 Figure 5-16 Minimum Allowance for Single-Point Monitoring-95% Minimum Probability ...... 5-18 Figure 5-17 Plant-Specific Allowance for Single-Point Monitoring .......................................... 5-19 Figure 6-1 Typical On-Line Monitoring Physical Configuration ................................................ 6-4 Figure 6-2 Identified Instrument Drift ................................................................ 6-11 Figure 6-3 Residual Plot Showing On-Line Monitoring Drift Limits .......................................... 6-12 Figure 6-4 Alarm Monitoring Points ................................................................ 6-13 Figure B-1 Deviation Zones for Acceptance Criteria .............................................................. B-17 Figure C-1 SDMS Operation ................................................................ C-4 Figure C-2 Parameters Window ................................................................ C-33 Figure C-3 Phases Report ................................................................ C-34 Figure C-4 Parameters Report ................................................................ C-35 Figure C-5 Signals Window ................................................................ C-36 Figure C-6 Signals Report ................................................................ C-37 Figure C-7 Parameter Estimators Report ................................... ............................. C-38 xvii

EPRILicensed Material Figure C-8 Signals Report .............................................................. C-39 Figure C-9 Verification Window ............................................................. C-40 Figure C-10 Training Report - Operating 5.. ............................................................. C-41 Figure C-11 Training Report - Operating_100 .............................................................. C-42 Figure C-12 Set 2 Monitor Run Report ............................................................. C-43 Figure C-13 Set 2 Index Run Report ........................... C-44 Figure C-14 Set 3 Run Report ......................... C-45 Figure C-15 Fault Detector Sensitivity Analysis Report ......................................................... C-46 Figure C-16 Data Set Analysis Report ............................................................... C-47 Figure C-17 Training Matrix Analysis Report ......................................... ..................... C-47 Figure C-1 8 Correlation Analysis Report - Operating_100 .................................................... C-48 Figure C-19 Correlation Analysis Report - Ciperating_50 ...................................................... C-49 Figure C-20 Estimation and Observation Plot for S5 ............................................................. C-50 Figure C-21 Residual Plot for S5........................................................................................... C-50 Figure C-22 Last Training Plot for S5.................................................................................... C-51 Figure C-23 Run Report for Set 2 Index .............................................................. C-52 Figure C-24 Signals Report for Set 2 Index ............................................................. C-53 Figure C-25 S5 Estimate and Observation Plot for Set 2 Index ........................ ..................... C-54 Figure C-26 S5 Residual Plot for Set 2 Index .............................................................. C-54 Figure D-1 Theoretical Process Measurement Uncertainty ............................................... ...... D-4 Figure D-2 Outlying Channel Allowed to Influence Parameter Estimate ............... ................... D-5 Figure D-3 Consistency Check Excludes Outlying Channel from Parameter Estimate ........ ....D-6 Figure D-4 Observed Performance of Steamr Generator Level Transmitters ............ ............... D-7 Figure D-5 Example Variation of Parameter Estimate with Consistency Check Factor ............ D-8 Figure D-6 ICMP Identification of Drifted Channel ............................................................. D-9 Figure E-1 UNAMSET Flow Diagram ............................................................. E-17 Figure E-2 Structure of UNADB Computational Tool ............................................................. E-19 Figure E-3 Database Design .............................................................. E-20 Figure E-4 Actual Errors for All Models in the Database ........................................................ E-23 Figure E-5 Average Actual Error for All Models in the Database ..................... ...................... E-23 Figure E-6 Average Residuals for All Models in the Database ........................ ...................... E-24 Figure E-7 Effect of Noise Standard Deviation on Actual Error .............................................. 
E-25 Figure E-8 Noise Effect on Average Actual Error ............................................................. E-25 Figure E-9 Effect of Noise Distribution on Actual Error .......................................................... E-26 Figure E-10 Effect of Number of Training Vectors on Average Actual Error ........... ............... E-26 Figure E-1 I Noise Effect on Actual Error, Correlated Sensors, and Larger Models ............... E-27 Figure E-12 Effect of Correlation on Actual Error .............................................................. E-28 Figure E-13 Sensitivity for Models with Small Number of Sensors .................. ...................... E-29 Figure E-14 Spillover for Models with Small Number of Sensors ........................................... E-29 xviii

EPRI Licensed Material Figure E-15 Sensitivity for an RPS Template Model .......................................................... E-30 Figure E-16 Spillover for an RPS template Model ........................................................ .. E-31 Figure E-1 7 Effect of Noise and Average Correlation on Sensitivity (Bias = 1%)................... E-31 Figure E-18 Spillover Between Redundant Sensors (Bias = 1%) ...................... .................... E-32 Figure E-19 Spillover Between Correlated Non-Redundant Sensors (Bias = 1%) ......... ........ E-32 Figure E-20 Parametric Study for True Errors ............................... ........................... E-40 Figure E-21 Parametric Study for Residuals ............................ .............................. E-41 Figure F-1 Normal Signal Behavior .......................................................... F-2 Figure F-2 Signal with Drift .......................................................... F-3 Figure F-3 Log-In Window .......................................................... F-4 Figure F-4 Acceptance Test Model .......................................................... F-5 Figure F-5 Model Verification .......................................................... F-5 Figure F-6 Main System Window After Moniltoring Run of Set3 ............................................. F-12 Figure F-7 Signal S1 Observation and Estimate Plot .......................................................... F-20 Figure F-8 Signal S1 Residual Plot .......................................................... F-20 Figure F-9 Signal S2 Observation and Estimate Plot.......................................................... F-21 Figure F-10 Signal S2 Residual Plot .......................................................... F-21 Figure F-11 Signal S3 Observation and Estimate Plot.......................................................... F-22 Figure F-12 Signal S3 Residual Plot .......................................................... F-22 Figure F-13 Signal S4 Observation and Estimate Plot .......................................................... F-23 Figure F-14 Signal S4 Residual Plot .......................................................... F-23 Figure F-15 Signal S5 Observation and Estimate Plot .......................................................... F-24 Figure F-16 Signal S5 Residual Plot .......................................................... F-24 xix

EPRILicensedMaterial LIST OF TABLES Table 3-1 Typical Technical Specification Channels for On-Line Monitoring-Westinghouse Design ................................................................. 3-2 Table 3-2 Typical Technical Specification Channels for On-Line Monitoring-B&W Design............................................................................................................................. 3-3 Table 3-3 Typical Technical Specification Channels for On-Line Monitoring-GE BWR Design............................................................................................................................. 3-3 Table 4-1 Example Surveillance Requirements for Westinghouse Standard Technical Specifications................................................................................................................... 4-4 Table 5-1 Standard Normal Distribution Values for Various Confidence Levels...................... 5-15 Table 5-2 Single-Point Monitoring Allowance Data-95% Minimum Probability ..................... 5-18 Table 6-1 Traditional Process Instrument Circuit Uncertainty Sources .................................... 6-5 Table A-1 Equations for the Mean, Variance, and Standard Deviation of the Average of n Normal Random Variables ................................................................. A-7 Table D-1 Measurement Uncertainty as a Function of the Number of Redundant Channels......................................................................................................................... D-4 Table E-1 Summary of the Largest Confidence Intervals ...................................... ................ E-22 Table E-2 Estimated Noise for RCS Flow Sensors ................................................................ E-36 Table E-3 Estimated Confidence Intervals for RCS Flow Sensors, Normal Operation ........... E-36 Table E-4 Estimated Confidence Intervals for RCS Flow Sensors, Drift Conditions ......... ..... E-36 Table E-5 Estimated Noise for Pressurizer Level Sensors ................................ .................... E-37 Table E-6 Estimated Confidence Intervals for Pressurizer Level Sensors, Normal Operation ................................................................. E-37 Table E-7 Estimated Confidence Intervals for Pressurizer Level Sensors, Drift Conditions ... E-37 Table E-8 Estimated Noise for RCS Pressure Sensors ......................................................... E-38 Table E-9 Estimated Confidence Intervals for RCS Pressure Sensors, Normal Operation .... E-38 Table E-10 Estimated Confidence Intervals for RCS Pressure Sensors, Drift Conditions ...... E-38 Table E-11 Estimated Noise for RPS Sensors ................................................................. E-39 Table E-12 The Largest Confidence Intervals for RPS Sensors, BART with Regularization ................................................................. E-42 Table E-13 The Largest Confidence Intervals for RPS Sensors, Standard BART .......... ....... E-42 Table E-14 Estimated Confidence Intervals for Steam Generator Level Sensors, Normal Operation ................................................................. E-43 Table E-15 Estimated Confidence Intervals for Steam Generator Level Sensors, Drift Conditions ...... E-43 Xxi

EPRILicensed Material Table E-16 Summary of the Largest Confidence Intervals, Optimal Models .......................... E-44 Table E-17 Summary of the Largest Confidence Intervals, BART Operator .......................... E-45 xxii

1 INTRODUCTION

1.1 Report Purpose

This topical report discusses the implementation of on-line monitoring (OLM) for nuclear plant instrument systems that are covered by the technical specifications. This report, Volume 3, is the third in a three-volume set.

Volume 1, Guidelines for Model Development and Implementation [4], provides an overview of the EPRI On-Line Monitoring project activities, as well as definitions for the majority of terminology used to describe on-line monitoring and its implementation. Data management issues related to implementation are discussed along with descriptions of the various possible modes of operation for on-line monitoring. Overviews of on-line monitoring and the software product used throughout this project are presented. The bulk of Volume 1 is devoted to presenting the various tasks that must be completed to prepare models for and to implement an on-line monitoring system, including data preparation, signal selection, model training and evaluation, model deployment, and model retraining. Related issues also discussed are data quality, data quantity, fault detection techniques, and alarm response mechanisms. An extensive glossary of on-line monitoring terms is also provided.

Volume 2, Algorithm Description, Model Examples, and Results [5], serves mainly as a reference to Volume 1 and contains detailed descriptions of the Multivariate State Estimation Technique (MSET) and the instrument calibration and monitoring program for redundant channels. These two algorithms are discussed in detail because they were the primary tools used under the EPRI on-line monitoring projects. Dozens of examples are presented for models that were developed for the participants of this project. Model maintenance, or retraining, is also demonstrated to illustrate the process of updating models when they require modifications to their training data sets. Finally, a recent software product developed specifically for cleaning data files and removing bad data prior to developing on-line monitoring models is reviewed and demonstrated.

This report, Volume 3: Applications to Nuclear Power Plant Technical Specification Instrumentation, builds on the groundwork presented in the first two volumes and discusses on-line monitoring applications specifically for safety-related, technical specification instrumentation at nuclear power plants. Recommendations for model deployment suitable for safety-related channels are presented along with the related issue of single-point monitoring. A copy of the Nuclear Regulatory Commission's (NRC) Safety Evaluation (SE) Report covering on-line monitoring for nuclear power applications is provided for reference. Also provided in this report are results from a detailed uncertainty analysis performed on the multivariate state estimation technique, additional uncertainty analysis techniques for redundant channel averaging, and the instrument calibration and monitoring program for redundant sensors.

Verification and validation (V&V) studies of both the multivariate state estimation technique and the SureSense [6] on-line monitoring software are discussed, along with a software acceptance test procedure for the multivariate state estimation technique. Additional discussions are provided regarding redundant versus non-redundant empirical modeling techniques as applied to safety-related instrumentation.

The purpose of this report is to accomplish the following:

  • Provide an overview of how to extend calibration intervals by the use of on-line monitoring
  • Describe the NRC requirements that are delineated in the SE
  • Provide an overview of the types of technical specification instrument channels that are suitable for on-line monitoring
  • Describe the technical specification changes that are recommended to extend calibration intervals
  • Address measurement uncertainty and provide guidance regarding on-line monitoring acceptance criteria
  • Discuss typical surveillance procedures
  • Discuss issues associated with the application of on-line monitoring to technical specification applications
  • Address software verification and validation criteria for on-line monitoring applied to technical specification-related instruments
  • Provide an example software acceptance test that can also be used for the quarterly periodic test specified in the NRC SE

1.2 Implementation Strategy for Technical Specification Applications

The use of on-line monitoring has been approved to allow calibration extension of safety-related sensors. Section 1.2 summarizes the basis for implementation.

At least one redundant sensor will be calibrated each fuel cycle. Other redundant sensors will also be calibrated if identified by on-line monitoring to be in need of calibration. All n redundant safety-related channels for a given parameter will require calibration at least once within n fuel cycles. A technical specification change is necessary to change the calibration interval to this extended frequency.

The maximum allowed interval between calibrations is eight years, regardless of the number of redundant channels.
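
As a rough illustration of this staggered approach, the following sketch (Python; the channel tags and function name are hypothetical, not part of the EPRI methodology) rotates through n redundant channels so that one channel is calibrated each fuel cycle and every channel is calibrated at least once within n cycles. In practice, any channel flagged by on-line monitoring would also be calibrated, and the eight-year maximum interval still applies.

def rotation_schedule(channels, n_cycles):
    """Round-robin rotation: one redundant channel is calibrated each fuel cycle,
    so each of the n channels is calibrated at least once every n cycles."""
    n = len(channels)
    return [(cycle, channels[(cycle - 1) % n]) for cycle in range(1, n_cycles + 1)]

# Hypothetical redundant flow channel tags on a four-loop plant.
for cycle, tag in rotation_schedule(["FT-414", "FT-424", "FT-434", "FT-444"], n_cycles=8):
    print(f"Fuel cycle {cycle}: calibrate {tag}")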

Some on-line monitoring algorithms allow for analytically-derived channels that have a definable relationship to the physical redundant channels. Usually, the reason for creating analytical channels is to improve the on-line monitoring redundancy for a given parameter. In these cases, the physical channels still have to be calibrated at the n fuel cycle frequency, where n is the number of physically redundant channels (with analytically-derived channels excluded).

On a quarterly basis, a formal surveillance check will be performed to verify that no channels are outside the prescribed deviation limits. The quarterly frequency was established on the basis of engineering judgment and is consistent with the Maintenance Rule evaluation frequency.

Channel checks will continue to be performed by the operators without modification to the technical specifications.

As stated previously, at least one redundant sensor will be calibrated each fuel cycle. The purpose of this periodic calibration confirmation is as follows:

  • To assure that common-mode failure mechanisms do not exist. Although such mechanisms are not expected, continued periodic calibration provides an additional level of confidence in the on-line monitoring approach to calibration assessment.
  • To ensure that each sensor continues to be periodically calibrated by a method traceable back to a reference standard. Note that this does not imply a lack of confidence in on-line monitoring. Instead, the intention is to reconcile on-line monitoring with existing NRC requirements for all calibrations to be traceable to an industry-recognized reference standard.

Given this implementation strategy, the application of on-line monitoring does not constitute a large change from current practices. To illustrate this point, Figure 1-1 shows the current calibration practice in which all redundant sensors are calibrated each fuel cycle and confirmed to perform within the specified as-left tolerance. Figure 1-2 shows one possible result following the approved implementation strategy for on-line monitoring. At least one sensor will be returned to within the as-left tolerance by a formal calibration while the other sensors might be left untouched, provided that on-line monitoring did not identify any of the other channels as in need of calibration (outside the specified tolerance for on-line monitoring). Unlike the traditional calibration method, on-line monitoring will assess channel calibration more frequently to ensure that none of the channels drifts outside prescribed acceptance limits.

Figure 1-1 Generalized Traditional Calibration Process (plot of as-found and as-left tolerance bands about the nominal value for channels 1 through 4)

Figure 1-2 Generalized Calibration Process with On-Line Monitoring (plot of as-found and as-left tolerance bands about the nominal value for channels 1 through 4)

1.3 NRC Safety Evaluation for On-Line Monitoring

EPRI formed the EPRI/Utility On-Line Monitoring Working Group in 1994 with the goal of obtaining NRC approval of on-line monitoring as a calibration reduction tool for safety-related instruments. An initial submittal was made to the NRC in 1995, followed by a detailed submittal in 1998 [1]. The NRC SE was issued in July 2000 [2]. The detailed submittal was then modified, in response to the NRC SE, in 2000 [3]. With the issuance of the SE, the EPRI/Utility On-Line Monitoring Working Group completed its mission and evolved into the Instrument Monitoring and Calibration (IMC) Users Group.

A copy of the NRC SE is provided in Appendix B. Section 2 discusses the specific requirements specified in the SE.

1.4 EPRI's Role in On-Line Monitoring

EPRI's strategic role in on-line monitoring is to facilitate its implementation and cost-effective use in numerous applications at power plants. To this end, EPRI has sponsored an on-line monitoring implementation project at multiple nuclear plants specifically intended to install and use on-line monitoring technology. The purpose of the EPRI on-line monitoring implementation project is to: 1) apply on-line monitoring to all types of power plant applications and 2) document all aspects of the implementation process in a series of EPRI deliverable reports.

These reports will cover installation, modeling, optimization, and proven cost-benefit. The planned EPRI reports are:

  • SureSense Diagnostic Monitoring Studio Users Guide, Version 2.0 [7]-provides detailed guidance in the application of SureSense for nuclear plant systems. This report is periodically updated as a result of user feedback or software revisions. Note: SureSense is an advisory-only monitoring tool and is not designed or intended for any use associated with controlling a process.



  • On-Line Monitoring of Instrument Channel Performance, Volume 1: Guidelines for Model Development and Implementation [4]-addresses all aspects of modeling for on-line monitoring applications and their implementation. This report describes model development, data quality issues, training requirements, retraining criteria, responding to failure alarms, and declaring a model ready for use.
  • On-Line Monitoring of Instrument Channel Performance, Volume 2: Algorithm Description, Model Examples, and Results [5]-presents detailed model examples, empirical algorithm details, and further evaluations of the software used during this project.
  • On-Line Monitoring of Instrument Channel Performance, Volume 3: Applications to Nuclear Power Plant Technical Specification Instrumentation (this report)-addresses on-line monitoring for safety-related applications and the NRC Safety Evaluation Report for on-line monitoring. Topics include technical specifications, uncertainty analysis, procedures and surveillances, MSET application considerations, and miscellaneous technical considerations.

Nuclear Energy Plant Optimization (NEPO) projects related to software verification and validation and uncertainty analysis provide input to this report.

  • On-Line Monitoring Cost-Benefit Guide [8]-discusses the expected costs and benefits of on-line monitoring. Direct, indirect, and potential benefits are covered. The project participants' experience of the EPRI on-line monitoring implementation is included.

EPRI fosters development of on-line monitoring technology and its application through the IMC Users Group. Through the IMC Users Group, on-line monitoring will continue to receive technical support as its use grows throughout the industry.

1.5 Terminology Used in This Report

The following sections explain key terms.

1.5.1 Channel, Sensor, and Signal

The terms sensor, channel, and signal are often used almost interchangeably in this report, but there is an important distinction between the three terms. The sensor is the device that measures the process value. The sensor and associated signal conditioning equipment are referred to as the instrument channel, or channel. The electrical output from the channel or, more commonly, its digitized equivalent value, is the signal. Figure 1-3 shows the relationship between the three terms for a safety-related channel; a non-safety-related channel might not have the isolator or bistable as shown.


Figure 1-3 Instrument Channel in Terms of On-Line Monitoring (diagram of a safety-related instrument channel showing the transmitter, power supply, isolator, and bistable, with the data acquisition and data analysis equipment outside the on-line monitoring equipment boundary)

For a non-safety-related channel, there might be little in the way of signal conditioning, with the sensor being the only monitored device. More complex measurements might contain several signal conditioning modules. This report usually refers to the channel rather than the sensor in terms of what is monitored. Although other industry documents and published papers often discuss on-line monitoring using the term sensor, it is the channel (or some portion of the channel) that is actually monitored. The discussion in the following sections frequently refers to sensor drift because the sensor is the most common source of drift, but any portion of the channel might actually be the cause of drift.

The on-line monitoring system does not know the layout of the channel; it receives only a digitized signal from the plant computer, a historical file, or other data acquisition system.

Although the instrument channel is typically producing a milliampere or voltage output, the signal acquired by the on-line monitoring system is usually digitized and scaled into the expected process units, such as pressure, temperature, or percent. When this report refers to signals, it means the scaled or unscaled digitized output signals from the monitored channels.
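
The scaling mentioned above is typically a simple linear conversion. The following sketch illustrates it for a 4-20 mA transmitter signal; the span values are illustrative assumptions, not values taken from any particular channel.

def to_process_units(current_ma, ma_span=(4.0, 20.0), eu_span=(0.0, 100.0)):
    """Linearly scale a transmitter current (mA) into engineering units (e.g., percent span)."""
    lo_ma, hi_ma = ma_span
    lo_eu, hi_eu = eu_span
    fraction = (current_ma - lo_ma) / (hi_ma - lo_ma)
    return lo_eu + fraction * (hi_eu - lo_eu)

print(to_process_units(12.0))  # 12 mA on a 4-20 mA, 0-100% channel reads 50.0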

1.5.2 Modeling Terms

The term model is used to describe the group of signals that has been collected for the purpose of signal validation and analysis. Depending on the context, model might refer just to the selected group of signals, or it might also include the various settings defined by the on-line monitoring system that are necessary to optimize the performance of the signal validation procedure. In the context of on-line monitoring, model does not necessarily refer to some functional relationship between model elements defined by a set of equations.


The term observation vector is used to describe the observed values for all of the signals in the model at a particular instant in time. For example, if the signal data are contained in a spreadsheet, a single row of data representing a particular point in time would be a vector.

The term state space is used to describe the operating states that form the basis for training a model. The state space contains the expected range for each signal in the model and also defines different operating states within that range. For example, a state space for a pressure sensor might cover a range of 800 to 1200 psig (5,512 kPa to 8,268 kPa); within this range, there might be several distinct operating states associated with different equipment lineups or plant power levels.

The term estimate is used to describe the best estimate or prediction of the actual process or signal value. For each observation, the on-line monitoring system produces an estimate of the corresponding expected value for each monitored signal. The term residual refers to the mathematical difference between an observed value and the corresponding estimate for that observation. Fault detection is often based on the behavior of the residual.
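
The relationship among observations, estimates, and residuals can be summarized in a short sketch; the numeric values and the drift limit used here are purely illustrative assumptions.

import numpy as np

# One observation vector (all signals in the model at a single instant) and the
# corresponding estimates produced by the monitoring system (values are made up).
observation = np.array([1002.1, 998.7, 1015.3])   # observed values, e.g., psig
estimate = np.array([1001.5, 999.0, 1003.0])      # estimated values for the same instant

residual = observation - estimate                 # residual = observation - estimate
drift_limit = 5.0                                 # illustrative acceptance limit only
suspect = np.abs(residual) > drift_limit          # simple residual-based fault detection

print(residual)   # approximately [ 0.6 -0.3 12.3]
print(suspect)    # [False False  True]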

1.6 Redundant versus Non-Redundant Empirical Modeling

There are two general categories of on-line monitoring models: redundant and non-redundant. Non-redundant models include any empirical design suitable for estimating values for a group of instrument channels that are correlated but not truly redundant. In this context, a set of redundant instrument channels is defined as a set of instrument channels that:

  • Measure the same physical parameter
  • Are of the same type and operate over the same range

In some cases, instruments of different types might be grouped together if they are measuring the same parameter, though evidence has shown that estimation performance is degraded (for example, combining narrow- and wide-range level transmitters).

A set of non-redundant instrument channels suitable for on-line monitoring using empirical techniques must have a high level of correlation between the instrument channels. A suitable non-redundant group might contain instrument channels of different types, measuring different parameters at various locations in the monitored process. It is important to note that techniques suitable for monitoring non-redundant instrument channels can be applied to sets of redundant instrument channels; however, unless proper considerations are made for regularization, these models might be less optimal than equivalent redundant instrument channel models. Simply put, if a set of redundant instrument channels can be effectively modeled with acceptable uncertainty bounds on the estimations with a redundant monitoring algorithm, then a redundant monitoring algorithm should be used. The redundant monitoring algorithms are more intuitive and are easier to troubleshoot and understand under conditions when faulty sensors are identified.

Redundant instrument channel monitoring is an important method when planning to implement calibration reduction strategies for safety-related instrumentation. At present, this type of strategy has been successfully implemented across the fleet of 54 nuclear power plants of Electricité de France (EdF). Because this is the first successful implementation of monitoring specifically for calibration reduction that has been accepted by a regulatory authority, it is important to understand the value of redundant instrument channel monitoring. Following the success at EdF, British Energy is currently seeking regulatory approval for calibration reduction of safety-related instrumentation based on an on-line monitoring application for redundant instrument channels.

The successful EdF implementation uses an algorithm that calculates a deviation for each channel as the difference between the measured value and the simple average of the remaining redundant channels. A σ value is determined for each redundant channel, and the deviations for each channel are compared to the specified σ. If a channel's deviation exceeds 1σ, calibration is scheduled during the upcoming refueling outage. If a channel's deviation exceeds 2σ, immediate corrective action is required. Instrument channel calibrations are also scheduled according to the following: at least one redundant instrument is calibrated each fuel cycle, and the maximum length of time between calibrations for a given channel is 8 fuel cycles or 12 years.
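A minimal sketch of this simple-average deviation calculation follows. The channel values and the σ used here are hypothetical and are intended only to illustrate the 1σ/2σ screening logic described above, not EdF's actual limits:

    def channel_deviations(measurements):
        """Deviation for each redundant channel: measured value minus the simple
        average of the remaining redundant channels (as described above)."""
        deviations = []
        for i, value in enumerate(measurements):
            others = measurements[:i] + measurements[i + 1:]
            deviations.append(value - sum(others) / len(others))
        return deviations

    # Hypothetical example: four redundant level channels (% of span), sigma = 0.5% of span
    levels = [61.4, 61.5, 61.6, 62.3]
    sigma = 0.5
    for channel, deviation in enumerate(channel_deviations(levels), start=1):
        if abs(deviation) > 2 * sigma:
            action = "immediate corrective action"
        elif abs(deviation) > sigma:
            action = "schedule calibration at the next refueling outage"
        else:
            action = "acceptable"
        print(f"Channel {channel}: deviation = {deviation:+.2f}% of span -> {action}")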

The described redundant channel monitoring algorithm has been approved by the Safety Authority of France for use in EdF nuclear plants beginning in 1996. A 1992 study by EdF found that 90% of transmitters were within calibration limits when checked during a refueling outage. Static evaluations of the instrument channels are performed prior to a refueling outage.

The calibration of each channel is determined, and maintenance is scheduled accordingly. Thus, the EdF methods are not truly "on-line," though their results have been very positive.

1.6.1 Benefits and Drawbacks of Redundant Instrument Channel Monitoring

The benefits of redundant instrument channel monitoring techniques are:

  • Simple algorithms for computation and fault detection
  • Intuitive nature of the algorithms should ease acceptance of technical specification amendments by regulatory agencies
  • The uncertainty analysis of these simple algorithms is more tractable than the equivalent analysis for non-redundant techniques

The simplicity of redundant instrument channel monitoring techniques is an important feature that should allow for more rapid acceptance of technical specification change proposals by the regulating authorities. The types of redundant instrument channel monitoring techniques are described in Appendix A. Performing an uncertainty analysis on redundant instrument channel monitoring techniques is much more intuitive. A basic analysis of the statistical behavior of a simple averaging technique is detailed in Appendix A as an example.

The drawbacks of redundant instrument channel monitoring techniques are:

  • Spillover (drift in one channel affecting the estimates for other channels) is a greater concern due to the generally small model size
  • The techniques are not appropriate for instrument channels without a redundancy of at least three

While ideal candidate instrument channels for a redundant monitoring application are, for example, the feedwater flow channels in a 4-loop Westinghouse PWR, non-ideal candidates also exist, such as the turbine first stage pressure channels. Similar examples can be found for Babcock & Wilcox- (B&W-) designed PWRs and BWRs.

On-line monitoring for safety-related instrumentation in U.S. nuclear power plants requires a combination of redundant models and non-redundant models. The setpoint analyses and determination of drift allowance presented herein apply to both types of models.

1.7 Alternative Empirical Modeling Algorithms

The EPRI on-line monitoring implementation project originally selected the MSET as its preferred on-line monitoring method and the SureSense Diagnostic Monitoring Studio (SureSense) software (version 1.4) for MSET implementation. Later developments resulted in the availability of an alternative method that was used for some of the development work in 2004. The alternative method is an Expert Microsystems, Inc. [6] proprietary algorithm available in SureSense version 2.0. Many of the specific guidelines presented in this report are applicable to both techniques, though in some cases minor modifications might be required to accommodate the new technique. In addition, the requirements of the NRC SE are independent of the choice of empirical modeling algorithm and software. Because the majority of this report addresses these requirements, it is applicable to any choice of algorithm. In some cases, such as the uncertainty analysis of MSET, work has been completed to satisfy the requirements based on a specified algorithm. In these cases, if an alternate algorithm is used, this work can be used as an example of how to proceed and satisfy the NRC requirements. Additional EPRI investigations into uncertainty quantification have shown alternative algorithms to exhibit similar behaviors with respect to model uncertainty as detailed in this report for MSET.


2 NRC SAFETY EVALUATION REVIEW CRITERIA

The NRC Safety Evaluation (SE) is provided in Appendix B. As part of its acceptance of on-line monitoring, the SE provides 14 requirements that must be addressed in the implementation process. Section 2 is the roadmap that explains how each requirement is addressed by this topical report. The NRC SE specifies additional clarification for each requirement that should also be reviewed to ensure a complete understanding of the requirement.

In the following discussion, the identification number of each requirement corresponds to the number assigned by the NRC SE. Each SE requirement is provided in its entirety, followed by a discussion of how this report addresses the requirement.

Requirement 1 The submittal for implementation of the on-line monitoring technique shall confirm that the impact of the deficiencies inherent in the on-line monitoring technique (inaccuracy in process parameter estimate, single-point monitoring, and untraceability of accuracy to standards) on plant safety will be insignificant, and that all uncertainties associated with the process parameter estimate have been quantitatively bounded and accounted for either in the on-line monitoring acceptance criteria or in the applicable setpoint and uncertainty calculations.

Discussion for Requirement 1:

The methodology provided in Section 6 is specifically intended to comply with Requirement 1.

Argonne National Laboratory (ANL) has developed an uncertainty analysis tool and applied it specifically to MSET (Appendix E). The uncertainty analysis project has been funded as part of the Department of Energy NEPO program. In addition, an uncertainty analysis has been performed for the EPRI Instrument Calibration and Monitoring Program (ICMP) [9, 10, 11].

ICMP is specifically designed for redundant instrument channels, which fits well with the redundant architecture of safety-related instrumentation. The uncertainty analyses completed for MSET and ICMP provide the theoretical basis from which plant-specific, or model-specific, uncertainty analyses can be performed.

Section 5 addresses single-point monitoring in detail, and the results are incorporated into Section 6 as part of the on-line monitoring drift allowance calculation. The intent is to maintain traceability to the allowances provided in the associated setpoint calculation. The approach taken will have no impact on either the trip setpoint or the allowable value in the technical specifications.


Traceability of accuracy to reference standards has been maintained by the very nature of the on-line monitoring implementation approach. The calibration frequency has been extended, not eliminated.

Requirement 2 Unless the licensee can demonstrate otherwise, instrument channels monitoring processes that are always at the low or high end of an instrument's calibrated span during normal plant operation shall be excluded from the on-line monitoring program.

Discussion for Requirement 2:

Section 5 provides detailed information that confirms the basis for Requirement 2. Section 3.1 summarizes the applications that are considered suitable candidates for on-line monitoring.

Section 3.2 summarizes the types of applications that are not considered suitable for on-line monitoring. The basis for this determination is provided in Section 5.

Requirement 3 The algorithm used for on-line monitoring shall be able to distinguish between the process variable drift (actual process going up or down) and the instrument drift and shall be able to compensate for uncertainties introduced by unstable process, sensor locations, non-simultaneous measurements, and noisy signals. If the implemented algorithm and its associated software cannot meet these requirements, administrative controls, including the guidelines in Section 3 of the topical report for avoiding a penalty for non-simultaneous measurement, could be implemented as an acceptable means to ensure that these requirements are met satisfactorily.

Discussion for Requirement 3:

The EPRI on-line monitoring implementation project has selected the MSET as its preferred on-line monitoring method. MSET is specifically trained to recognize normal behavior as well as specific operating states. It readily distinguishes between a process change and an instrument drift. Noisy signals and measurement lead/lag relationships are accommodated by the model learning procedures used with MSET. One of the requirements of the MSET modeling procedure is to group correlated instrument channels together into an on-line monitoring model.

Distinguishing between process drift and instrument drift is available through the use of MSET by noting that a process drift will result in changes in more than one correlated instrument channel, whereas an instrument drift will manifest in a single channel without corresponding changes in the correlated channels. While multiple instrument drifts might occur simultaneously, the identified deviations for process drifts will be different than those for multiple instrument drifts; thus, differentiability is maintained. Volume 1 of this report series, Guidelines for Model Development and Implementation [4], provides specific guidance for an MSET application.

Requirement 4 For instruments that were not included in the EPRI drift study, the value of the allowance or penalty to compensate for single-point monitoring must be determined by using the instrument's historical calibration data and by analyzing the instrument performance over its range for all modes of operation, including startup, shutdown, and plant trips. If the required data for such a determination are not available, an evaluation demonstrating that the instrument's relevant performance specifications are as good as or better than those of a similar instrument included in the EPRI drift study, will permit a licensee to use the generic penalties for single-point monitoring given in EPRI Topical Report TR-104965.


Discussion for Requirement 4:

Section 5 provides detailed information regarding single-point monitoring. Most plants following the criteria stated in NRC Requirement 4 can use the generic penalties provided in Section 5.

Section 5 discusses the EPRI drift study to explain why the results are likely to be more conservative than necessary for most applications. Section 5 also includes detailed information explaining how to perform a plant-specific analysis for a single-point monitoring allowance.

Requirement 5 Calculations for the acceptance criteria defining the proposed three zones of deviation ("acceptable," "needs calibration," and "inoperable") should be done in a manner consistent with the plant-specific safety-related instrumentation setpoint methodology so that using on-line monitoring technique to monitor instrument performance and extend its calibration interval will not invalidate the setpoint calculation assumptions and the safety analysis assumptions. If new or different uncertainties require the recalculation of instrument trip setpoints, it should be demonstrated that relevant safety analyses are unaffected. The licensee should have a documented methodology for calculating acceptance criteria that are compatible with the practice described in Regulatory Guide 1.105 and the methodology described in acceptable industry standards for TSP and uncertainty calculations.

Discussion for Requirement 5:

The methodology provided in Section 6 ensures that setpoint calculation and safety analysis assumptions are unchanged. A clear basis for the on-line monitoring drift allowance has been established so that setpoint calculations should not require revision. The technical specification trip setpoint and allowable value requirements are also unaffected because the methodology deliberately ensures compliance with the setpoint calculations. Unique uncertainties attributed to on-line monitoring or single-point monitoring specifically reduce the on-line monitoring drift allowance to ensure that the setpoint calculations do not require revision.

Requirement 6 For any algorithm used, the maximum acceptable value of deviation (MAVD) shall be such that accepting the deviation in the monitored value anywhere in the zone between PE (parameter estimate) and MAVD will provide high confidence (level of 95%/95%) that drift in the sensor-transmitter or any part of an instrument channel that is common to the instrument channel and the on-line monitoring loop is less than or equal to the value used in the setpoint calculations for that instrument channel.

Discussion for Requirement 6:

The calculation method described in Section 6 ensures that the MAVD provides a high confidence level that is entirely consistent with the setpoint calculations. The allowance for drift has been conservatively determined without taking credit for non-sensor-related uncertainty terms. The on-line monitoring allowance for drift is further reduced to account for unique uncertainty elements introduced by the use of on-line monitoring.

Requirement 7 The instrument shall meet all requirements of the above Requirement 6 for the acceptable band or acceptable region.


Discussion for Requirement 7:

The calculation method described in Section 6 ensures that the MAVD provides a high confidence level that is consistent with the setpoint calculations. The allowance for drift has been conservatively determined without taking credit for non-sensor-related uncertainty terms. The on-line monitoring allowance for drift is further reduced to account for unique uncertainty elements introduced by the use of on-line monitoring.

Requirement 8 For any algorithm used, the maximum value of the channel deviation beyond which the instrument is declared "inoperable" shall be listed in the technical specifications with a note indicating that this value is to be used for determining the channel operability only when the channel's performance is being monitored using an on-line monitoring technique. It could be called "allowable deviation value for on-line monitoring" (ADVOLM) or whatever name the licensee chooses. The ADVOLM shall be established by the instrument uncertainty analysis. The value of the ADVOLM shall be such to ensure:

(a) that when the deviation between the monitored value and its PE is less than or equal to the ADVOLM limit, the channel will meet the requirements of the current technical specifications, and the assumptions of the setpoint calculations and safety analyses are satisfied; and (b) that until the instrument channel is recalibrated (at most until the next refueling outage), actual drift in the sensor-transmitter or any part of an instrument channel that is common to the instrument channel and the on-line monitoring loop will be less than or equal to the value used in the setpoint calculations, and other limits defined in 10 CFR 50.36, as applicable to the plant-specific design for the monitored process variable, are satisfied.

Discussion for Requirement 8:

Section 6 establishes the methodology for calculating the on-line monitoring drift allowance. The methodology has been defined in a manner that ensures that the associated setpoint calculation allowances remain unchanged. This is an important part of the on-line monitoring implementation process because the intent is to minimize the changes necessary in the technical specifications. Accordingly, the on-line monitoring drift allowance ensures that the technical specification trip setpoint and allowable value for each parameter remain unchanged.

The on-line monitoring quarterly surveillance ensures that 1) the on-line monitoring system performance is acceptable and 2) each monitored parameter is operating within acceptable limits.

The on-line monitoring acceptance criteria, including the MAVD and the ADVOLM, would be provided in a quarterly surveillance procedure. Including this information in the body of the technical specifications should not be necessary and is more appropriately assigned to the surveillance procedure. This is no different in concept than providing acceptable as-found settings and as-left settings for instrument calibrations in the associated calibration documents.
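As a rough illustration of how a measured deviation maps into the three zones referenced by the SE requirements, a quarterly surveillance check could compare the deviation from the parameter estimate (PE) against the MAVD and the ADVOLM. The limit values below are hypothetical placeholders; actual values come from the plant-specific methodology of Section 6:

    def classify_deviation(observed, parameter_estimate, mavd, advolm):
        """Hypothetical three-zone classification of a channel deviation using the
        MAVD and ADVOLM discussed above; limits are plant specific, not from this sketch."""
        deviation = abs(observed - parameter_estimate)
        if deviation <= mavd:
            return "acceptable"
        if deviation <= advolm:
            return "needs calibration"
        return "inoperable"

    # Hypothetical limits in % of span: MAVD = 1.0, ADVOLM = 1.5
    print(classify_deviation(61.9, 61.5, 1.0, 1.5))  # acceptable
    print(classify_deviation(62.8, 61.5, 1.0, 1.5))  # needs calibration
    print(classify_deviation(63.2, 61.5, 1.0, 1.5))  # inoperable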


Requirement 9 Calculations defining alarm setpoint (if any), acceptable band, the band identifying the monitored instrument as needing to be calibrated earlier than its next scheduled calibration, the maximum value of deviation beyond which the instrument is declared "inoperable," and the criteria for determining the monitored channel to be an "outlier," shall be performed to ensure that all safety analysis assumptions and assumptions of the associated setpoint calculation are satisfied and the calculated limits for the monitored process variables specified by 10 CFR 50.36 are not violated.

Discussion for Requirement 9:

Section 6 establishes the methodology for calculating the on-line monitoring drift allowance, and the methodology has been defined in a manner that ensures that the associated setpoint calculation allowances remain unchanged. The methodology ensures compliance with Requirement 9.

Requirement 10 The plant-specific submittal shall confirm that the proposed on-line monitoring system will be consistent with the plant's licensing basis and that there continues to be a coordinated defense-in-depth against instrument failure.

Discussion for Requirement 10:

The application of on-line monitoring to technical specification parameters has been specifically designed to ensure consistency with the plant's licensing basis. The on-line monitoring acceptance criteria have been developed in a manner that ensures consistency with the setpoint calculation allowances for drift, while also ensuring no change to existing technical specification trip setpoints or allowable values. Coordinated defense-in-depth against instrument failure is improved by the application of on-line monitoring because instrument performance is evaluated more frequently than by traditional methods. An ongoing monitoring program is described in Section 6.6 and is recommended as an additional ongoing method of confirming acceptable instrument performance.

Requirement 11 Adequate isolation and independence, as required by Regulatory Guide 1.75, GDC 21, GDC 22, IEEE Std. 279 or IEEE Std. 603, and IEEE Std. 384, shall be maintained between the on-line monitoring devices and Class 1E instruments being monitored.

Discussion for Requirement 11:

The on-line monitoring system does not connect to the safety-related portion of any instrument circuit. The data acquired by the on-line monitoring system are obtained from the downstream side of signal isolators, thereby ensuring compliance with the plant's licensing basis for isolation and independence.

It should be noted that the MSET method used by the participants in the EPRI on-line monitoring implementation project does not connect to a physical instrument loop. The existing instrument circuit is entirely unchanged by the use of on-line monitoring. Signals are sent to the plant computer and are then stored in a conventional computer data archive. The on-line monitoring system acquires its inputs from the computer data archive as a data file.


Requirement 12 (a) QA requirements as delineated in 10 CFR Part 50, Appendix B, shall be applicable to all engineering and design activities related to on-line monitoring, including design and implementation of the on-line system, calculations for determining process parameter estimates, all three zones of acceptance criteria (including the value of the ADVOLM), evaluation and trending of on-line monitoring results, activities (including drift assessments) for relaxing the current TS-required instrument calibration frequency from "once per refueling cycle" to "once per a maximum period of 8 years," and drift assessments for calculating the allowance or penalty required to compensate for single-point monitoring.

(b) The plant-specific QA requirements shall be applicable to the selected on-line monitoring methodology, its algorithm, and the associated software. In addition, software shall be verified and validated and meet all quality requirements in accordance with NRC guidance and acceptable industry standards.

Discussion for Requirement 12:

The plant-specific engineering analyses performed in support of on-line monitoring implementation shall be performed in accordance with the applicable plant-specific quality assurance procedures. The calculation of on-line monitoring acceptance criteria involves the review and interpretation of setpoint calculations and related documents. Accordingly, quality assurance controls over these activities are considered reasonable as stated in the NRC requirement.

Section 8 provides the V&V documentation produced in support of this project; this documentation specifically supports an MSET implementation because this is the basis for the EPRI on-line monitoring implementation project. The documentation developed in support of this project includes quality documents and IV&V-related documents produced by the software supplier (Expert Microsystems, Inc.), Argonne National Laboratory, and EPRI. Each participating plant must follow their plant-specific procedures for software acceptance.

Requirement 13 All equipment (except software) used for collection, electronic transmission, and analysis of plant data for on-line monitoring purposes shall meet the requirements of 10 CFR Part 50, Appendix B, Criterion XII, "Control of Measuring and Test Equipment." Administrative procedures shall be in place to maintain configuration control of the on-line monitoring software and algorithm.

Discussion for Requirement 13:

The signal data evaluated by on-line monitoring are obtained from instrument circuits that are maintained in accordance with plant-specific procedures, including the control of measuring and test equipment. The experience of the EPRI on-line monitoring implementation project is that unique equipment is not installed onto these instrument circuits; the data are acquired from existing instrumentation without any modification to the measuring and test equipment circuits.

Administrative controls are considered necessary to maintain configuration control of the monitoring software and the algorithm, which is an integral part of the software. Section 7 describes plant procedures and surveillance requirements associated with on-line monitoring, which addresses these administrative controls.


Requirement 14 Before declaring the on-line monitoring system operable for the first time, and just before each performance of the scheduled surveillance using an on-line monitoring technique, a full-features functional test, using simulated input signals of known and traceable accuracy, should be conducted to verify that the algorithm and its software perform all required functions within acceptable limits of accuracy. All applicable features shall be tested.

Discussion for Requirement 14:

The V&V documents produced in support of this project include a procedure with expected results for an acceptance test and periodic test. The procedure provided in Appendix F is specifically for a SureSense Diagnostic Monitoring Studio MSET application and can be used as a guide for other software applications. The test files referenced in this procedure are provided directly to the software users. As part of the plant-specific software acceptance, these test procedures and test files form the recommended basis for acceptance testing as well as for periodic testing in support of the quarterly on-line monitoring surveillance test. Section 7 provides the recommended input for the quarterly on-line monitoring surveillance test. Section 8 discusses the V&V documentation in support of an MSET application.


3 TYPICAL TECHNICAL SPECIFICATION CHANNELS SUITABLE FOR ON-LINE MONITORING

Although on-line monitoring techniques can be applied to a large and diverse number of processes, there are some applications that might not be suitable for on-line monitoring. The NRC SE for on-line monitoring provides the following related requirement:

Requirement 2 Unless the licensee can demonstrate otherwise, instrument channels monitoring processes that are always at the low or high end of an instrument's calibrated span during normal plant operation shall be excluded from the on-line monitoring program.

Discussion for Requirement 2:

Section 5 provides detailed information that confirms the basis for Requirement 2. Section 3.1 summarizes the applications that are considered suitable candidates for on-line monitoring.

Section 3.2 summarizes the types of applications that are not considered suitable for on-line monitoring, including those instruments that consistently operate at the low or high end of their spans, satisfying Requirement 2. The basis for this determination is provided in Section 5.

3.1 Suitable Applications

The following tables list only the technical specification instrument channels that typically might be evaluated by an on-line monitoring system. (Note: experience with the EPRI on-line monitoring implementation project indicates that several hundred more non-safety-related channels can be evaluated by an on-line monitoring system.) In each table, the column titled "Minimum Number of Required Calibrations" refers to the minimum number of calibrations required to be performed on each parameter during one operating cycle. The minimum requirement for redundant technical specification channels is that, per each fuel cycle, at least one redundant channel per loop be calibrated; thus, the minimum number of required calibrations for redundant sets, where there is one set for each loop, is equal to the number of loops. Plant-specific design variations regarding the number of instruments should be reviewed for each application.

Clarifications and notes are provided immediately following each table.


Table 3-1
Typical Technical Specification Channels for On-Line Monitoring-Westinghouse Design

Application (Note 1) | Number of Channels Per Loop | Total Number of Channels | Minimum Number of Required Calibrations
Feedwater flow | 2 | 8 | 4
Pressurizer pressure | N/A | 4 | 1
Pressurizer water level | N/A | 3 | 1
RCS flow | 3 | 12 | 4
RCS pressure | N/A | 2 | (Note 2)
Steam generator water level (Note 3) | 4 | 16 | 4
Steam flow | 2 | 8 | 4
Steam pressure | 3 | 12 | 4
Turbine first stage pressure | N/A | 2 | 1
Totals: | | 67 | 23

Note 1: This table is based on a 4-loop plant. Depending on the calibrated span and demonstrated drift performance, containment pressure and refueling water storage tank level might be included. Instrument channels that monitor normally off systems such as auxiliary feedwater flow are not considered appropriate for on-line monitoring. Plant-specific variations in technical specifications might change the total number of channels or might include other instrument channel types.

Note 2: For an MSET application, reactor coolant system (RCS) pressure and pressurizer pressure are equivalent and should be combinable parameters in terms of periodic calibration requirements.

Note 3: For an MSET application, steam generator wide range level can usually be combined with the narrow range channels in terms of periodic calibration requirements, although wide range level might not be a technical specification parameter. If the wide range level is not a technical specification parameter, it should most often be excluded from the MSET model due to the typically higher noise level inherent in its measurement over the complementary narrow range channels.


Table 3-2
Typical Technical Specification Channels for On-Line Monitoring-B&W Design

Application (Note 1) | Number of Channels Per Loop | Total Number of Channels | Minimum Number of Required Calibrations
Pressurizer level | N/A | 2 | 1
RCS flow | 4 | 8 | 2
RCS pressure | N/A | 8 | 2 (Note 2)
RCS temperature | 4 | 8 | 2
Steam generator level (operating) | 4 | 8 | 2
Steam generator level (shutdown) | 4 | 8 | 2
Steam generator pressure | 4 | 8 | 2
Totals: | | 50 | 13

Note 1: This table is based on a 2x4-loop design. Depending on the calibrated span and demonstrated drift performance, other parameters might be suitable. Instrument channels that monitor normally off systems are not considered appropriate for on-line monitoring. Plant-specific variations in technical specifications might change the total number of channels or might include other instrument channel types.

Note 2: All channels require calibration within an 8-year period. It has been assumed here that two channels would be calibrated each fuel cycle, if operating on a 24-month cycle.

Table 3-3
Typical Technical Specification Channels for On-Line Monitoring-GE BWR Design

Application (Notes 1 and 2) | Number of Channels Per Loop | Total Number of Channels | Minimum Number of Required Calibrations
Drywell pressure | N/A | 3 | 1
Reactor vessel steam dome pressure | N/A | 4 | 1
Reactor vessel water level (wide range) | N/A | 4 | 1
Reactor vessel water level (narrow range) | N/A | 4 | 1 (Note 3)
Recirculation pump flow | 4 | 8 | 2
Suppression pool level | N/A | 2 | 1
Suppression pool temperature | N/A | 2 | 1
Totals: | | 27 | 8

Note 1: This table is based on an older BWR plant with fewer sensor signals made available to the plant computer. Newer BWR plants might have more sensors connected to the plant computer, which would raise the overall totals.

Note 2: Depending on the calibrated span and demonstrated drift performance, other parameters might be suitable. Instrument channels that monitor normally off systems are usually not considered appropriate for on-line monitoring. Plant-specific variations in technical specifications might change the total number of channels or might include other instrument channel types.

Note 3: For an MSET application, reactor vessel wide range and narrow range level are equivalent and should be combinable parameters in terms of periodic calibration requirements; however, all channels require calibration within an 8-year period. If the wide range level is not a technical specification parameter, it should most often be excluded from the MSET model due to the typically higher noise level inherent in its measurement over the complementary narrow range channels.


3.2 Unsuitable Applications

Section 5 discusses the conclusions reached with respect to detecting instrument drift when the monitored process changes by only a small amount during plant operation and refers to this as single-point monitoring. To summarize, on-line monitoring is not appropriate for every application. Examples of unsuitable applications include:

  • Instrument channels that monitor processes normally operating at the extreme low end of the calibrated span. Containment pressure is an example.
  • Instrument channels that monitor processes normally operating at the extreme high end of the calibrated span. Refueling water storage tank level is one possible example.

Additional applications might also be deemed unsuitable depending on the specific plant. In all cases, Requirement 2 of the NRC SE should be satisfied by proposed implementations.

The nature of empirical modeling is to provide estimations based on historical instrument performance from a set of correlated instrument channels. There might be cases where instruments operate at the low or high end of their calibrated spans and have a corresponding set of correlated instrumentation that follows this behavior. If there is significant operational data available at the low (or high) end of an instrument's calibrated span, and a suitable number of correlated instrument channels exhibit similar behavior, it can be demonstrated that on-line monitoring will provide the same capabilities for these instrument channels as demonstrated for channels exhibiting more standard behavior.


4 TECHNICAL SPECIFICATION CRITERIA

Each parameter covered by the technical specifications has specific surveillance requirements that are performed at various frequencies. The surveillance requirements are intended to demonstrate that the associated instrumentation is operable, and actions are specified in the event that an inoperable channel is identified.

The implementation of on-line monitoring for safety-related channels represents a change from current surveillance requirements specified in the technical specifications. Accordingly, a technical specification change request to the NRC is necessary to obtain approval of the implementation.

Section 4 describes the scope of the recommended technical specification changes. Suggested wording is provided using the terminology of the technical specifications.

4.1 Technical Specification Change Summary

The technical specification changes necessary to apply on-line monitoring as a means of extending calibration intervals are straightforward in principle:

  • For each redundant set of sensors (transmitters), one of the associated sensors must be calibrated each operating cycle
  • Any other channels identified as potentially needing calibration must also be calibrated, if necessary
  • The maximum allowed interval between calibrations is eight years, regardless of the number of redundant channels
  • The on-line monitoring system will be evaluated on a quarterly frequency, as a minimum

Addressing the above involves the following specific changes to the technical specifications:
  • Add a definition of on-line monitoring to Section 1 of the technical specifications.
  • Add two new surveillance types-a quarterly surveillance check using on-line monitoring and a calibration at a staggered test basis interval in which one redundant channel is calibrated each fuel cycle. The staggered test basis interval is already defined in the technical specifications.



  • Specify which parameters will use the new surveillance types.
  • With respect to on-line monitoring, no changes are proposed to periodic channel checks. However, the identification of out-of-calibration conditions can occur sooner than with standard operator-performed channel checks because of the improved fault detection capability provided by on-line monitoring.

4.2 Suggested Technical Specification Wording

4.2.1 Definition Changes

The definition of on-line monitoring should be added to Section 1 of the technical specifications.

By this approach, on-line monitoring is one more calibration-related function and is defined just as the technical specifications already include definitions for channel calibration and channel check.

The following definition of on-line monitoring is recommended:

On-line monitoring is the assessment of channel performance and calibration while the channel is operating. On-line monitoring differs from channel calibration in that the channel is not adjusted by the process of on-line monitoring. Instead, on-line monitoring compares channel performance to established acceptance criteria to determine if a channel calibration is necessary.

4.2.2 Addition of New Surveillance Types

In terms of the technical specifications, two surveillance-related activities require new definitions:

  • On a quarterly basis, a formal surveillance check will be performed to verify that no channels are outside the prescribed acceptance limits. Section 7 provides guidance regarding this quarterly surveillance check.
  • At least one redundant transmitter will be calibrated each fuel cycle. If identified as in need of calibration by on-line monitoring, other redundant transmitters will also be calibrated. All n redundant safety-related channels for a given parameter will require calibration at least once within n fuel cycles, or at least once within eight years, whichever is less. This concept is already present in the Standard Technical Specifications using the existing definition of staggered test basis:

A staggered test basis shall consist of the testing of one of the systems, subsystems, channels, or other designated components during the interval specified by the surveillance frequency, so that all systems, subsystems, channels, or other designated components are tested during n surveillance frequency intervals, where n is the total number of systems, subsystems, channels, or other designated components in the associated function.


Note: The above definition of staggered test basis was obtained from the Standard Technical Specifications. This definition is the same for Westinghouse, Combustion Engineering, Babcock & Wilcox, and General Electric Standard Technical Specifications. However, older plant-specific technical specifications might use a different definition. In these cases, the concept still applies, but additional changes to the technical specifications might be necessary to accommodate the addition of this definition.

In accordance with the implementation strategy described previously, all redundant channels must be calibrated every n fuel cycles in accordance with the above definition, with the interval between calibrations for any channel not to exceed eight years. Accordingly, it is recommended that the following sentence be added to the end of the existing definition of staggered test basis:

Furthermore, for systems, subsystems, channels, or other designated components that are tested by on-line monitoring, all n systems, subsystems, channels, or other designated components will be tested at a frequency not to exceed eight years, regardless of the size of n.

The following new surveillance requirement definitions are recommended. The surveillance requirement numbers, 3.3.1.17 and 3.3.1.18, are the next available numbers in the Westinghouse Standard Technical Specifications and are used for the purposes of illustration only; each plant will have to insert the appropriate surveillance numbers for their technical specifications.

Surveillance | Frequency
SR 3.3.1.17 Perform on-line monitoring evaluation | [92] days
SR 3.3.1.18 Perform channel calibration on a staggered test basis | [18] months

The frequency of 92 days is intended to match the technical specification layout for quarterly checks. The frequency of 18 months is a plant-specific number that depends on the approved fuel cycle duration. Depending on the plant, the frequency in this case might be 12, 18, or 24 months.

The definition of on-line monitoring was provided in the previous section. The channel calibration will rely on the existing technical specification definition; a typical definition of channel calibration is as follows:

A channel calibration shall be the adjustment, as necessary, of the channel so that it responds within the required range and accuracy to known input. The channel calibration shall encompass the entire channel, including the required sensor, alarm, interlock, display, and trip functions. The channel calibration can be performed by means of any series of sequential, overlapping calibrations or total channel steps so that the entire channel is calibrated.

In summary, one redundant channel will be calibrated each refueling cycle and all redundant channels will be calibrated at an interval not to exceed eight years. The following examples illustrate the interpretation of this technical specification.


Example: A plant on an 18-month fuel cycle with three redundant instruments for a given parameter would, as a minimum, calibrate at the following frequency:

First channel: 18 months
Second channel: 36 months
Third channel: 54 months

Notice that all redundant channels are calibrated within 4½ years in this case.

Example: A plant on a 24-month fuel cycle with five redundant instruments for a given parameter would, as a minimum, calibrate at the following frequency:

First channel: 2 years
Second channel: 4 years
Third channel: 6 years
Fourth channel: 8 years
Fifth channel: 8 years

Notice that all redundant channels are calibrated within 8 years in this case and the last two channels are calibrated during the fourth fuel cycle to remain within the 8-year limit.
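The staggered calibration arithmetic in the two examples above can be expressed compactly. The sketch below simply spreads the n channels across successive refueling outages and pulls any channel that would exceed the eight-year limit back to the last outage inside that limit; it assumes calibrations occur only at refueling outages:

    def staggered_calibration_schedule(n_channels, fuel_cycle_months, max_interval_months=96):
        """Sketch of the staggered calibration concept described above: one redundant
        channel per fuel cycle, with no channel exceeding an eight-year interval."""
        schedule = {}
        for channel in range(1, n_channels + 1):
            months = channel * fuel_cycle_months          # nominal staggered slot
            if months > max_interval_months:
                # Pull back to the last refueling outage at or before the 8-year limit
                months = (max_interval_months // fuel_cycle_months) * fuel_cycle_months
            schedule[channel] = months
        return schedule

    # Reproduces the two examples above (calibration intervals in months):
    print(staggered_calibration_schedule(3, 18))  # {1: 18, 2: 36, 3: 54}
    print(staggered_calibration_schedule(5, 24))  # {1: 24, 2: 48, 3: 72, 4: 96, 5: 96}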

4.2.3 Example Change to Reactor Trip System Instrumentation Table

The new surveillance requirements would be implemented on a parameter-by-parameter basis, in the same manner as already in place for other technical specification surveillance requirements.

Table 4-1 shows the technical specification change for a typical parameter. The existing surveillance requirement (SR 3.3.1.10) for a channel calibration each fuel cycle has been deleted and the two new surveillance requirements (highlighted) have been added.

Table 4-1
Example Surveillance Requirements for Westinghouse Standard Technical Specifications Table 3.3.1-1, Reactor Trip System Instrumentation

Function: Pressurizer Pressure - Low
Applicable Modes or Other Specified Conditions: 1(g)
Required Channels: [4]
Conditions: M
Surveillance Requirements: SR 3.3.1.1, SR 3.3.1.7, SR 3.3.1.16, SR 3.3.1.17, SR 3.3.1.18
Allowable Value: ≥ [1886] psig
Nominal Trip Setpoint: ≥ [1900] psig

For each parameter that will be included in the on-line monitoring program, a similar change to the technical specifications would be made.


4.2.4 Technical Specification Bases

The technical specifications provide bases for the surveillance requirements. The following bases are recommended for the new surveillance requirements for on-line monitoring:

SR 3.3.1.17

SR 3.3.1.17 verifies that all channels for a given parameter are performing within the acceptance criteria established for on-line monitoring. Refer to On-Line Monitoring of Instrument Channel Performance [3], and On-Line Monitoring of Instrument Channel Performance, Volume 3: Applications to Nuclear Power Plant Technical Specification Instrumentation (this report), for further information regarding on-line monitoring.

SR 3.3.1.18

SR 3.3.1.18 performs a channel calibration on a staggered test basis. The performance of SR 3.3.1.17 on a 92-day frequency provides assurance that the monitored channels are performing within specified acceptance criteria and forms the basis for performing a channel calibration at an extended calibration interval. For n redundant channels, all channels for a given parameter will require a channel calibration at least once every n fuel cycles, with at least one channel receiving a channel calibration each fuel cycle. Furthermore, all n channels require calibration at a frequency not to exceed eight years, regardless of the size of n. Refer to On-Line Monitoring of Instrument Channel Performance [3], and On-Line Monitoring of Instrument Channel Performance, Volume 3: Applications to Nuclear Power Plant Technical Specification Instrumentation (this report), for further information regarding the basis for this calibration frequency.

4.2.5 Effect on Trip Setpoint and Allowable Value

The application of on-line monitoring is not intended to affect either the trip setpoint or the allowable value. The on-line monitoring acceptance criteria should be established in such a manner that setpoint calculations are not modified, and trip setpoints and allowable values remain unchanged. Section 6 provides the recommended approach for establishing the on-line monitoring acceptance limits.

4.3 Checklist for Technical Specification Change Submittal

This topical report is intended to facilitate the technical specification change process. However, each plant must address plant-specific aspects related to the change. The following provides a summary of the items to address in each plant-specific submittal:

  • Scope-the safety-related channels covered by the submittal should be clearly identified. The selected channels should be suitable for on-line monitoring in accordance with the criteria provided in Section 3.2 of this report.
  • On-line monitoring methodology-the on-line monitoring algorithm, method of data acquisition, data analysis process, and alarm process should be described.



  • Deviations from NRC Safety Evaluation (SE) [2]/EPRI Topical Report [3]-exceptions to or deviations from the SE should be clearly identified and explained. For example, the on-line monitoring algorithm might be different than the types described in this topical report. The differences from any SE discussion should be justified.
  • Setpoint and uncertainty analysis verification-the implementation of on-line monitoring has to include acceptance criteria for each parameter that do not invalidate setpoint requirements.

The submittal should state either a) that an evaluation has been performed for this purpose or b) an evaluation is planned and will be completed prior to implementation. Depending on the implementation strategy, setpoint documents might be affected by the on-line monitoring acceptance criteria. The preferred and recommended approach is to establish on-line monitoring acceptance criteria consistent with setpoint requirements so that the original setpoint calculations are not affected by the implementation of on-line monitoring. The method provided in Section 6 is intended to define the on-line monitoring acceptance criteria without directly affecting the setpoint calculations.

  • Plant procedure impact-the submittal should note that a plant-specific procedure impact assessment has been completed. This includes the quarterly surveillance procedure for the assessment of on-line monitoring.
  • Quality assurance-confirmation that the plant-specific software quality assurance requirements have been satisfied for the selected on-line monitoring methodology.


5 SINGLE-POINT MONITORING

Section 5 provides an overview of the single-point monitoring issue and provides specific guidance to ensure compliance with the NRC's SE requirements. Topics in this section include:

  • A brief overview of what is meant by single-point monitoring (Section 5.1)
  • A description of instrument drift characteristics with respect to single-point monitoring (Section 5.2)
  • A more detailed review of instrument performance over time (Section 5.3)
  • An explanation of how to apply an allowance for single-point monitoring (Section 5.4)
  • A description of how a nuclear plant can develop a plant-specific allowance (Section 5.5)

The NRC SE for on-line monitoring included two specific requirements associated with single-point monitoring as follows:

Requirement 2 Unless the licensee can demonstrate otherwise, instrument channels monitoring processes that are always at the low or high end of an instrument's calibrated span during normal plant operation shall be excluded from the on-line monitoring program.

Discussion for Requirement 2:

This section provides detailed information that confirms the basis for Requirement 2. Section 3.2 lists typical applications that are considered unsuitable for on-line monitoring.

Requirement 4 For instruments that were not included in the EPRI drift study, the value of the allowance or penalty to compensate for single-point monitoring must be determined by using the instrument's historical calibration data and by analyzing the instrument performance over its range for all modes of operation, including startup, shutdown, and plant trips. If the required data for such a determination are not available, an evaluation demonstrating that the instrument's relevant performance specifications are as good as or better than those of a similar instrument included in the EPRI drift study, will permit a licensee to use the generic penalties for single-point monitoring given in EPRI Topical Report 104965.

Discussion for Requirement 4:

This section provides detailed information regarding single-point monitoring; most plants following the criteria stated in NRC Requirement 4 can use the generic penalties provided here.

Section 5 discusses the EPRI drift study to explain why the results are likely to be more conservative than necessary for most applications. This section also provides detailed information explaining how to perform a plant-specific analysis for a single-point monitoring allowance.


5.1 Summary of Single-Point Monitoring Issue

When a plant operates at nearly constant power for an extended period of time, the process variations for many parameters tend to be relatively small. Figure 5-1 shows an example in which measured steam generator level is virtually constant at 61.5% of span for about 14 months. On-line monitoring can evaluate channel performance more frequently than can be accomplished by periodic channel calibration, but this evaluation is generally being performed with the plant operating near 100% power with very little process change occurring (as a percent of calibrated span) about the monitored point. Although an instrument might appear to be in calibration at the monitored point, how does the user know that it is still in calibration elsewhere in the span, such as at the high- or low-level trip setpoints, which might not be anywhere near the monitored point?

This question is referred to as the single-point monitoring issue. The answer to this question is important to determining the on-line monitoring system's ability to detect any type of instrument drift. The EPRI report On-Line Monitoring of Instrument Channel Performance [3] addresses this issue in detail, and the issue is summarized here.

Figure 5-1
Steam Generator Level Variation-Westinghouse Plant

5.2 Drift Types

Any discussion of an on-line monitoring system's ability to detect drift should start with a review of the types of drift that can occur. The following drift types can be observed:

  • Zero shift-a type of instrument drift characterized by a change in the instrument zero point. Typically, the desired calibration curve is shifted from the zero point. Zero shift appears to be the most common drift type for the instrument types of interest and is the sole cause of drift in about 45-50% of the EPRI data. Figure 5-2 shows an example of zero shift.



  • Span shift-a type of instrument drift characterized by a change in the instrument span as compared to the desired span. Span shift typically results in the instrument being in calibration at some point along the instrument's span and out of calibration at some other point along the instrument's span. Span shift can occur either as forward span shift (the instrument is in calibration at the low end of span and out of calibration higher in the span) or reverse span shift (the instrument is out of calibration at the low end of span and in calibration higher in the span). Span shift commonly occurs for the instrument types of interest and is the sole cause of drift in about 20-25% of the EPRI data. Figure 5-3 shows an example of span shift.
  • Combination of zero and span shift-a type of drift characterized by a simultaneous change in both the zero and span of the instrument. A combination of zero and span shift occurs in about 15-30% of the EPRI data, generally increasing in proportion as the magnitude of drift increases. Figure 5-4 shows an example of combined zero and span shift.
  • Nonlinear-a type of instrument drift that is not clearly zero shift, span shift, or a combination of the two types, in which the calibration error varies (often with no obvious pattern) along the calibration curve. Nonlinear drift is relatively rare, contributing to drift in about 5% of the EPRI data. Figure 5-5 shows an example of nonlinear drift.

The EPRI report On-Line Monitoring of Instrument Channel Performance [3] discusses these drift types and their effect on calibration in considerable detail.

Figure 5-2
Zero Shift Drift (transmitter output versus pressure input, showing the original calibration and the as-found condition at recalibration)

Figure 5-3
Span Shift Drift

Figure 5-4
Combined Zero and Span Shift Drift

Figure 5-5
Nonlinear Drift

With respect to on-line monitoring, the above drift types affect the ability to detect drift as follows:

  • If a sensor drifts only by a change in its zero setting (zero shift), then any drift will be detected regardless of the monitored point.
  • If a sensor drifts only by a change in its span (span shift), the drift might or might not be detectable, depending on the nature of the span shift and the monitored point.
  • Combinations of zero shift and span shift can also occur; these drift combinations tend to make drift detection more likely, regardless of the monitored location.
  • Sensors appear to occasionally drift in a nonlinear manner (not obviously zero shift or span shift or a combination of the two), although this is relatively rare. Drift detection depends on the specific instrument performance at the monitored point (see the sketch following this list).
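A small numerical sketch of the zero shift and span shift cases above follows, assuming a simple linear transmitter model; the monitored point, the setpoint, and the drift magnitudes are hypothetical illustrations, not values from the EPRI drift study:

    def indicated_percent(true_percent, zero_shift=0.0, span_shift_percent=0.0):
        """Toy linear transmitter model assumed for illustration: the indicated reading
        equals the true input (% of span) plus a constant zero offset plus a span error
        that grows in proportion to the input."""
        return true_percent + zero_shift + span_shift_percent * true_percent / 100.0

    monitored_point = 61.5   # % of span where the plant normally operates
    trip_setpoint = 27.0     # % of span (for example, a low-level trip)

    # Zero shift of +1% of span: the error is the same everywhere, so the drift seen
    # at the monitored point is also representative of the error at the setpoint.
    print(indicated_percent(monitored_point, zero_shift=1.0) - monitored_point)  # 1.0
    print(indicated_percent(trip_setpoint, zero_shift=1.0) - trip_setpoint)      # 1.0

    # Span shift of +2%: the error at the monitored point (about 1.23% of span) differs
    # from the error at the setpoint (about 0.54% of span), which is why single-point
    # monitoring might under- or over-state the drift that matters at the setpoint.
    print(indicated_percent(monitored_point, span_shift_percent=2.0) - monitored_point)
    print(indicated_percent(trip_setpoint, span_shift_percent=2.0) - trip_setpoint)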


Of all the likely drift types, span shift occurring alone represents the largest concern for single-point monitoring because drift might occur near a setpoint, yet might not be detected at the monitored point. Despite this potential concern, the likelihood of span shift adversely affecting the reliability of the on-line monitoring estimation and failure detection process is low for the following reasons:

  • Very few parameters normally operate at the extreme end of the calibrated span (high or low) and have setpoints at the other extreme end of the calibrated span. Most parameters tend to operate within 50-80% of span during normal operation, with setpoints well above 0% of span or well below 100% of span.
  • Each transmitter still requires periodic calibration; the calibration frequency has only been extended, not eliminated, by the use of on-line monitoring. Furthermore, transmitter calibration data at extended intervals do not appear to exhibit increased drift levels (based on instrumentation calibration data obtained in support of this project).
  • Instrument calibration data readily show if span shift drift has occurred in a manner that might affect on-line monitoring predictions. An ongoing calibration monitoring program is a recommended part of applying on-line monitoring.
  • Transmitters are usually found to be in calibration when checked. Note that there would be little likelihood of extending calibration intervals by the use of on-line monitoring if this was not the case.
  • Although taking credit for this is not recommended because it is not the intended method of operation, it is possible to evaluate channel performance for some parameters across a larger portion of the calibrated span during transients or down-power events. Section 5.3 provides additional information.

5.3 Instrument Channel Variation at Power-A Closer Look

Most nuclear plants operate at nearly constant power for an extended period of time, and the process variations for many parameters tend to be relatively small. Figure 5-6 shows an example of power variation at one nuclear plant over a 14-month period. This figure shows the initial power ascension after a refueling outage, followed by occasional unintended down-power events, and concludes with the final planned power reduction as the plant enters the next refueling outage. If a nuclear plant operates as intended for its entire operating cycle, there might not be any significant down-power events, with a power profile as shown in Figure 5-7.


Figure 5-6
Typical Nuclear Plant Power Variation
(Plant power versus time over the 14-month operating cycle.)

Figure 5-7
Desired Nuclear Plant Power Variation
(Plant power versus time for an operating cycle with no significant down-power events.)

It has been suggested that periodic plant transients could provide a means of evaluating instrument drift across some larger portion of an instrument's span as one method of addressing the single-point monitoring issue. Although there are instances of down-power events that might offer additional data for some (but not all) of the signals, the acceptance criteria for on-line monitoring should be established based on the intended operation of the plant, which is typically to operate at 100% power for 18 or 24 months continuously. Down-power events that might provide limited on-line monitoring data for other portions of an instrument's span cannot be assumed. This is a particularly important point because it reinforces the validity of the single-point monitoring issue.

Some signals in a power plant do not change even if there are occasional transients or down-power events. As an example, Figure 5-8 shows the typical steam generator level variation for a Westinghouse-design plant. Notice that steam generator level is a controlled parameter and is nearly constant at 61% level regardless of the power level variations shown in Figure 5-6; the total channel variation is only ±1.5% for the entire evaluated period, including plant shutdowns. Throughout the operating cycle, steam generator level is never particularly near the low-low trip setpoint of 27% level or the high-high trip setpoint of 79% level.

Figure 5-8
Steam Generator Level Variation During an Operating Cycle
(Steam generator level, % of span, versus time.)

Figure 5-9 shows another example of a controlled parameter-RCS or pressurizer pressure.

Pressure remains almost constant at about 2235 psig (15,399 kPa) at this nuclear plant, regardless of power level. The low pressure trip setpoint is 1850 psig (12,746 kPa) and the high pressure trip setpoint is 2380 psig (16,398 kPa). The pressurizer pressure channel span is 1700-2500 psig (11,713-17,225 kPa), or a total of 800 psig (5,512 kPa) span. The low pressure trip occurs at about 20% of span, the high pressure trip occurs at about 85% of span, and the monitored point is about 67% of span.
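The span percentages quoted above follow directly from the channel range. The following minimal sketch (Python, using the 1700-2500 psig span from this example) shows the arithmetic; the function name is illustrative only.

```python
def percent_of_span(value_psig, lower=1700.0, upper=2500.0):
    """Express a pressurizer pressure value as a percent of the 1700-2500 psig span."""
    return 100.0 * (value_psig - lower) / (upper - lower)

for label, psig in (("low pressure trip", 1850),
                    ("monitored point", 2235),
                    ("high pressure trip", 2380)):
    print(label, round(percent_of_span(psig), 1), "% of span")
# low pressure trip 18.8 (about 20%), monitored point 66.9, high pressure trip 85.0
```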

Figure 5-9
RCS Pressure Variation During an Operating Cycle
(RCS pressure, psig, versus time; 1 psig = 6.89 kPa.)

Using a Westinghouse nuclear plant design as an example, several technical specification-related parameters have a physical relationship to reactor power. For this example, Figure 5-6 shows the power level variation during the operating cycle. Figure 5-10 shows the pressurizer level variation during this period. The pressurizer level instrument is programmed to follow average RCS temperature from its low power limit to its full power limit, which effectively means that pressurizer level follows reactor power. Figure 5-11 shows the variation for steam flow, Figure 5-12 shows the variation for feedwater flow, Figure 5-13 shows the variation for turbine first stage pressure, and Figure 5-14 shows the variation for steam pressure.


Figure 5-10
Pressurizer Level Variation During an Operating Cycle

Figure 5-11
Steam Flow Variation During an Operating Cycle

Figure 5-12
Feedwater Flow Variation During an Operating Cycle

Figure 5-13
Turbine First Stage Pressure Variation During an Operating Cycle
(Pressure, psig, versus time; 1 psig = 6.89 kPa.)

Figure 5-14
Steam Generator Pressure Variation During an Operating Cycle
(Pressure, psig, versus time; 1 psig = 6.89 kPa.)

The previous examples show that some parameters might vary during an operating cycle, while other parameters remain essentially constant for the entire operating cycle. Remember that the goal of most nuclear plants is to operate at 100% power for 18 or 24 months continuously. Even for the signals that do show some variation with power, the consideration of single-point monitoring should be established based on the intended operation of the plant, which is to remain at full power.

5.4 Establishing an Allowance for Single-Point Monitoring

5.4.1 Background

EPRI formed the EPRI/Utility On-Line Monitoring Working Group in 1994 to coordinate the activities associated with obtaining approval of on-line monitoring as a calibration reduction tool.

The working group produced TR-104965 (Draft-August 2, 1995), Calibration through On-Line Performance Monitoring of Instrument Channels, and submitted this report to the NRC for consideration. The initial NRC review of TR-104965 was documented in a Request for Additional Information (RAI) dated November 29, 1995. Working group members met with the NRC staff on December 13, 1995 to clarify the RAI comments. Subsequently, the NRC issued an updated RAI on February 26, 1996. One of the key issues raised by the RAI was the single-point monitoring issue.


5.4.2 EPRI Drift Study

5.4.2.1 Data Collection

In response to the NRC RAI, EPRI sponsored a drift study by evaluating calibration data for pressure, level, and flow transmitters. Instrument calibration data were collected from 18 nuclear plants, entered into a database, and analyzed in detail. The final database contained 1139 instruments, 6700 calibrations, and nearly 34,000 individual calibration checkpoint values. Data collection focused on primary sensors (pressure and differential pressure transmitters) as the key devices of interest. Extensive efforts were made to ensure that the assembled data were representative of the U.S. nuclear industry. The project was undertaken to determine if a quantifiable relationship exists between drift observed at any given point within an instrument's operating range and drift at other points in the range. The problem statement can be summarized as follows:

Given that an instrument appears to be in calibration at the monitored point, what is the likelihood that it is out of calibration elsewhere in its operating range?

The answer to this question required a statistical analysis of the acquired instrument calibration data. The calibration data were reviewed at each calibration check point along the span. Given that the instrument was in calibration at a given check point, the likelihood of being out of calibration elsewhere along the calibrated span was determined. This process was performed for five checkpoints along the calibrated span-0, 25, 50, 75, and 100% of span. The final result established a single-point monitoring allowance, which is effectively an additional input to the on-line monitoring drift allowance. The EPRI report On-Line Monitoring of Instrument Channel Performance [3] provides a complete description of this study. Certain aspects of the study are clarified in the following sections.

5.4.2.2 Binomial Pass/Fail Method of Analysis

A discussion of the single-point monitoring allowance starts with a review of the method of analysis. Well-behaved instruments can be evaluated by a pass/fail type of analysis, referred to here as a binomial pass/fail analysis. Pass/fail criteria for performance simply compare the drift data against a pre-defined acceptable value of drift. If the drift value is less than the pass/fail criterion, that data point passes; if it is larger, it fails. By comparing the total number of passes to the number of failures, a probability can be computed for the expected number of passes in the population.

A binomial distribution can be used to describe a population if each sample point can be separated into a pass or fail, yes or no, go or no-go type of classification. Applying this concept to instrument drift performance, a pass/fail criterion for the drift magnitude is established. If the drift exceeds the specified limit, the drift point fails; otherwise it passes. The failure proportion is then given by Equation 5-1:

Pf = x / n     Eq. 5-1


Where:

Pf = proportion of failures
x = number of values exceeding the pass/fail criteria (failures)
n = total sample size

Note that the failure proportion as defined is only the proportion of drift points that failed to pass the acceptance limit; it does not necessarily mean that the instrument actually failed. For analysis purposes, the pass/fail limit might specify a value that is still well within the allowed as-found setting tolerance, or it might specify a value within some pre-determined drift allowance.

The probability that a value will pass can be calculated as 1 minus the fail probability, or:

P = 1 - Pf     Eq. 5-2

The above probability is a nominal probability based only on the failure proportion. Confidence limits should also be established for the probability. Generally, a probability computed at the 95% confidence level is considered acceptable for setpoint calculations. If n is large, confidence limits can be calculated based on the normal distribution of probabilities:

Pl = 1 - [x/n + z*sqrt((x/n)(1 - x/n)/n)]     Eq. 5-3

and

Pu = 1 - [x/n - z*sqrt((x/n)(1 - x/n)/n)]     Eq. 5-4

Where:

Pl = minimum probability that a value will pass the pass/fail criteria
Pu = maximum probability that a value will pass the pass/fail criteria
z = standard normal distribution value corresponding to the desired confidence level, such as z = 1.96 for a 95% confidence level

The nominal probability that a value will be within the pass/fail criteria is the probability P. The probabilities Pl and Pu represent the lower and upper confidence limits associated with this nominal probability. In particular, the probability Pl is of interest because it represents the lowest expected probability that a value will be within the pass/fail criteria. Because safety-related setpoints are usually determined at the 95%/95% level, the minimum desired pass probability for Pl is 95% at the 95% confidence level. The standard normal distribution values for other confidence levels are shown in Table 5-1.


Table 5-1
Standard Normal Distribution Values for Various Confidence Levels

z        Confidence Level
2.575    99%
2.330    98%
1.960    95%
1.645    90%

The binomial pass/fail method is better suited for large sample sizes (greater than 100 points).

Also, this method requires that the proportion of fails to the total sample size, x/n, not be extremely small. For example, the equations for the minimum and maximum pass probabilities do not provide reasonable results if the number of failures equals zero.
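As a worked illustration of Equations 5-1 through 5-4, the short Python sketch below computes the nominal pass probability and its normal-approximation confidence limits; the sample counts shown are hypothetical and the function name is not from the report.

```python
import math

def pass_probability(failures, total, z=1.96):
    """Nominal pass probability and confidence limits (Equations 5-1 through 5-4)."""
    pf = failures / total                          # Eq. 5-1: failure proportion
    p_nominal = 1.0 - pf                           # Eq. 5-2: nominal pass probability
    half_width = z * math.sqrt(pf * (1.0 - pf) / total)
    p_lower = 1.0 - (pf + half_width)              # Eq. 5-3: minimum pass probability
    p_upper = 1.0 - (pf - half_width)              # Eq. 5-4: maximum pass probability
    return p_nominal, p_lower, p_upper

# Hypothetical example: 40 of 1000 drift points exceed the pass/fail criterion.
print(pass_probability(40, 1000))  # approximately (0.960, 0.948, 0.972)
```

Consistent with the cautions above, this normal approximation is only meaningful for large samples with a non-negligible number of failures; with zero failures, both confidence limits collapse onto the nominal value.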

5.4.2.3 Application of Binomial Pass/Fail Analysis to Single-Point Monitoring

The instrument calibration data described in Section 5.4.2.1 were evaluated as part of the EPRI drift study to establish the probability of being in calibration at some specified level at the monitored point while being out of calibration at some level elsewhere in the span. The analysis was set up and conducted as follows:

  • Each calibration check point-0%, 25%, 50%, 75%, and 100% of span-was evaluated separately. This was done so that trends as a function of span could be evaluated.
  • For a given check point, each calibration was checked to confirm that the point was in calibration at a specified level.
  • Given that the evaluated check point was in calibration at the specified level, the other calibration check points were evaluated to determine if they were out of calibration at a specified level.
  • The level by which the other check points were allowed to be out of calibration was varied as necessary to ensure a 96% nominal pass probability. Stated another way, the binomial pass/fail probability acceptance limit was varied as necessary to ensure that all other calibration check points were in calibration to the specified level for at least 96% of the calibrations. The 96% nominal pass probability was arbitrarily selected to ensure that the minimum probability was always greater than 95%. The sample size was so large for this study that the minimum probability was above 95.5% with a nominal probability of 96%.
  • The level at which the other check points were allowed to be out of calibration and still ensure a 96% nominal pass probability was treated as the allowance for single-point monitoring. The potential effect of monitoring an instrument that is in calibration at one point in its span while being out of calibration by some amount elsewhere in its span is considered by including this allowance in the overall drift allowance for on-line monitoring.

The results of the EPRI drift study are described in Section 5.4.3, including a discussion of the conservative nature of the recommended single-point monitoring allowance.
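The bulleted procedure above can be expressed as a simple search over candidate allowances. The sketch below is a schematic illustration only (hypothetical function and variable names, synthetic data); the actual EPRI study worked from the full calibration database described in Section 5.4.2.1.

```python
import numpy as np

def single_point_allowance(drift, monitored_col, monitored_limit, target=0.96, step=0.01):
    """Smallest allowance such that, among calibrations within monitored_limit at the
    monitored checkpoint, at least `target` of them are also within the allowance at
    every other checkpoint.

    drift: array of shape (n_calibrations, n_checkpoints), drift in % of span."""
    drift = np.abs(np.asarray(drift, dtype=float))
    in_cal = drift[:, monitored_col] <= monitored_limit
    others = np.delete(drift[in_cal], monitored_col, axis=1)
    allowance = step
    while np.mean(np.all(others <= allowance, axis=1)) < target:
        allowance += step
    return allowance

# Synthetic demonstration only (the real study used ~6700 calibrations at 0, 25, 50, 75, and 100% of span).
rng = np.random.default_rng(0)
demo = rng.normal(0.0, 0.3, size=(500, 5))
print(single_point_allowance(demo, monitored_col=2, monitored_limit=1.0))
```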


5.4.3 Single-Point Monitoring Allowance Development

5.4.3.1 TR-104965-R1 Single-Point Monitoring Allowance

TR-104965-R1 provides the recommended allowance to apply for single-point monitoring, and this allowance was referenced in the NRC safety evaluation. Figure 5-15 shows the allowance recommended in TR-104965-R1. As can be seen, the allowance for single-point monitoring varies with the monitored point and the allowed on-line monitoring drift limit.

Figure 5-15
TR-104965-R1 Recommended Allowance for Single-Point Monitoring
(Single-point monitoring allowance, % of span, versus drift limit for monitored channel, %; separate curves for monitored points at <25% of span, 25-50% of span, and 50-100% of span.)

Figure 5-15 shows that 1) monitoring a process low in the span carries a higher penalty than monitoring high in the span and 2) higher channel drift limits improve the single-point monitoring allowance. Referring to Figure 5-15, the following explanations of the curves are provided:

  • The <25% of Span curve is based on 0% of span calibration data. The probability improved considerably at the 25% calibration checkpoint.
  • The 25-50% of Span curve is based on 25% of span calibration data.
  • The 50-100% of Span curve is based on the combined 50%, 75%, and 100% of span calibration data. The probability was sufficiently low that the three points were combined for convenience.

A minimum allowance of 0.25% is recommended even if Figure 5-15 would permit a lower allowance. In the overall uncertainty evaluation for on-line monitoring, this single-point monitoring allowance should be treated as a random uncertainty; the as-found minus as-left (AFAL) calibration data were centered about the mean and treating the allowance as a bias is not supported by the data. Section 6 provides an example of how this allowance is applied to the overall drift allowance for on-line monitoring.


5.4.3.2 Additional Comments Regarding TR-104965-R1

The development of the TR-104965-R1 single-point monitoring allowance was based on calibration data provided by 18 nuclear plants. Although the data included obviously bad values (outlying data likely caused by data entry errors), these values were not removed from the analysis. In other words, outliers were not excluded from the analysis. As an example, there were instances in which four of the five evaluated calibration check points were in calibration and the fifth calibration point was identified as out of calibration by over 50%. This behavior is not considered reasonable, and the outlying data point was most likely a data-entry error. But there was no easy method to confirm or deny each outlier with so much data from so many plants. For this reason, the outliers were retained in the analysis, which resulted in very conservative results. Any plant-specific evaluation will likely obtain considerably lower single-point allowance values.

The chart provided in Figure 5-15 combined the 50%, 75%, and 100% span points for convenience. Part of the motivation for this combination was to ensure conservative results.

Another part of the motivation was to simplify the presentation of the data. The 50% of span checkpoint actually demonstrated a lower probability than did the 75% of span checkpoint. Rather than present potentially confusing results in which it appeared that the 50% of span point was better for single-point monitoring than the 75% of span point, all three higher checkpoints were combined for convenience. The interpretation of the actual results is that monitoring mid-span was somewhat better with respect to the single-point monitoring issue than monitoring at three-quarters of span. Another motivation for combining the three higher checkpoints was to avoid possible confusion in the interpolation between checkpoints.

The analysis was performed to a 96% nominal probability to ensure conservative results. An analysis performed at a 95% minimum probability would have been acceptable and would have remained consistent with the basis for setpoint calculations.

The probability improved (that is, the single-point monitoring allowance decreased) as the drift limit increased. Larger drift limits meant that it was more likely that drift was observable across a larger portion, if not all, of the instrument span. Another way of stating this is that it was unlikely for the unmonitored points to drift by a large amount if the monitored point had little or no drift.

5.4.3.3 Allowance Based on 95% Minimum Probability

The previous sections describe the analysis approach to develop the single-point monitoring allowance, including the rationale justifying why this allowance is conservative and probably overly conservative.

As part of the development of the guidelines presented in this report, the calibration data were evaluated again with two significant differences from the TR-104965-R1 analysis:

  • The analysis was performed at 95% minimum probability rather than at a 96% nominal probability. Note that a 95% minimum probability still ensures a nominal probability close to 96 percent.
  • The curve for each calibration checkpoint is plotted separately; no checkpoints were combined.


As before, all outliers were retained in the analysis. The results of this analysis are shown in Figure 5-16. Notice that the 50% of span monitoring point continues to carry a lower penalty than the 75% of span point. The 100% of span monitoring point also continues to have the lowest single-point monitoring allowance. The data used to create Figure 5-16 are provided in Table 5-2.

Figure 5-16
Minimum Allowance for Single-Point Monitoring-95% Minimum Probability
(Single-point monitoring allowance, % of span, versus drift limit for monitored channel, %; separate curves for the 0%, 25%, 50%, 75%, and 100% of span monitored points.)

Table 5-2
Single-Point Monitoring Allowance Data-95% Minimum Probability

                           Drift Limit for Monitored Channel
Checkpoint   0.50%    0.75%    1.00%    1.25%    1.50%    1.75%    2.00%
0%           0.75%    0.74%    0.63%    0.45%    0.29%    0.12%    0.00%
25%          0.74%    0.66%    0.63%    0.49%    0.32%    0.25%    0.13%
50%          0.40%    0.29%    0.25%    0.18%    0.04%    0.00%    0.00%
75%          0.50%    0.44%    0.32%    0.24%    0.13%    0.00%    0.00%
100%         0.32%    0.23%    0.13%    0.00%    0.00%    0.00%    0.00%

Figure 5-16 provides significantly smaller single-point monitoring allowances than provided in TR-104965-R1 because of the analysis differences. These differences are considered defensible, and the EPRI drift study data in Table 5-2 support the allowances. As before, a minimum allowance of 0.25% is recommended even if Figure 5-16 justifies a lower value; the purpose of this minimum value is to ensure a conservative approach.
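When Table 5-2 is applied in an uncertainty calculation, the allowance is read (or interpolated) for the monitored checkpoint and the channel's drift limit, and the 0.25% floor is then imposed. The sketch below is one way to automate that lookup; the linear interpolation between tabulated drift limits is an assumption for illustration, not a prescription of the report.

```python
import numpy as np

# Table 5-2: single-point monitoring allowance (% of span) at a 95% minimum probability.
DRIFT_LIMITS = np.array([0.50, 0.75, 1.00, 1.25, 1.50, 1.75, 2.00])
ALLOWANCE_BY_CHECKPOINT = {
      0: [0.75, 0.74, 0.63, 0.45, 0.29, 0.12, 0.00],
     25: [0.74, 0.66, 0.63, 0.49, 0.32, 0.25, 0.13],
     50: [0.40, 0.29, 0.25, 0.18, 0.04, 0.00, 0.00],
     75: [0.50, 0.44, 0.32, 0.24, 0.13, 0.00, 0.00],
    100: [0.32, 0.23, 0.13, 0.00, 0.00, 0.00, 0.00],
}

def spm_allowance(checkpoint_pct, drift_limit_pct, floor=0.25):
    """Interpolate Table 5-2 for one checkpoint, then apply the 0.25% minimum allowance."""
    value = np.interp(drift_limit_pct, DRIFT_LIMITS, ALLOWANCE_BY_CHECKPOINT[checkpoint_pct])
    return max(float(value), floor)

# Example: a channel monitored near 75% of span with a 1.0% drift limit.
print(spm_allowance(75, 1.0))   # 0.32 (table value, already above the 0.25% floor)
```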


Note that the EPRI drift study did not exclude outliers even when it was believed that the outlying data were data-entry errors. Once again, it is believed that this approach contributes to a conservative result. A plant-specific single-point monitoring evaluation might well produce significantly smaller allowances than shown in Figure 5-16. Section 5.5 discusses the potential value of such a study.

5.5 Plant-Specific Confirmation

As discussed in previous sections, the EPRI drift study was performed in a manner intended to ensure conservative results. For this reason, a plant-specific study will most likely produce a smaller single-point monitoring allowance than did the EPRI drift study. For example, Figure 5-17 shows the results for one plant in which all available calibration data were evaluated for the single-point monitoring effect. Notice that the plant-specific single-point monitoring allowance is considerably smaller than that from the EPRI drift study. This study was performed at a 95% minimum probability with 95% confidence.

Figure 5-17
Plant-Specific Allowance for Single-Point Monitoring
(Single-point monitoring allowance, % of span, versus drift limit for monitored channel, %; separate curves for the 0%, 25%, 50%, 75%, and 100% of span monitored points.)

This plant-specific study was performed following the approach outlined in Section 5.4. Contact the EPRI Project Manager if additional guidance is needed in support of a plant-specific study.


6 ON-LINE MONITORING UNCERTAINTY ANALYSIS

Every measurement contains some amount of error or uncertainty. Any on-line monitoring parameter estimate, such as an MSET estimate, also contains some modeling uncertainty relative to the true process value. The parameter estimate represents the best estimate of the true process value. Note that the true process value is not actually known; however, the parameter estimate is expected to be reasonably close to the true process value. Establishing quantifiable limits for the term reasonably close leads to the subject of uncertainty analysis.

This section addresses the uncertainty of on-line monitoring in relation to the possible drift allowance for safety-related channels. The various elements of uncertainty applicable to the instrument channels of interest are described, and a methodology for establishing an on-line monitoring drift allowance is explained. The following NRC SE requirements are addressed in this section:

Requirement 1 The submittal for implementation of the on-line monitoring technique shall confirm that the impact of the deficiencies inherent in the on-line monitoring technique (inaccuracy in process parameter estimate, single-point monitoring, and untraceability of accuracy to standards) on plant safety will be insignificant, and that all uncertainties associated with the process parameter estimate have been quantitatively bounded and accounted for either in the on-line monitoring acceptance criteria or in the applicable setpoint and uncertainty calculations.

Discussion for Requirement 1:

The methodology provided in this section is specifically intended to comply with Requirement 1.

Argonne National Laboratory (ANL) has developed a methodology that can be used for a plant-specific uncertainty analysis for the standard version of MSET. In addition, the results from applying this methodology to several specific models are provided in Appendix E. The uncertainty analysis project has been funded as part of the Department of Energy NEPO program, and a copy of the final report issued from this project is provided in Appendix E.

Section 5 addresses single-point monitoring in detail, and the results are incorporated into Section 6 as part of the on-line monitoring drift allowance calculation. The intent is to maintain traceability to the allowances provided in the associated setpoint calculation. The approach taken will have no impact on either the trip setpoint or the allowable value in the technical specifications.

Traceability of accuracy to reference standards has been maintained by the very nature of the on-line monitoring implementation approach. The calibration frequency has been extended, not eliminated.


Requirement 5 Calculations for the acceptance criteria defining the proposed three zones of deviation

("acceptable," "needs calibration," and "inoperable") should be done in a manner consistent with the plant-specific safety-related instrumentation setpoint methodology so that using on-line monitoring technique to monitor instrument performance and extend its calibration interval will not invalidate the setpoint calculation assumptions and the safety analysis assumptions. If new or different uncertainties require the recalculation of instrument trip setpoints, it should be demonstrated that relevant safety analyses are unaffected. The licensee should have a documented methodology for calculating acceptance criteria that are compatible with the practice described in Regulatory Guide 1.105 and the methodology described in acceptable industry standards for TSP and uncertainty calculations.

Discussion for Requirement 5:

The methodology provided in this section ensures that setpoint calculation and safety analysis assumptions are unchanged. A clear basis for the on-line monitoring drift allowance has been established so that setpoint calculations should not require revision. The technical specification trip setpoint and allowable value requirements are also unaffected because the methodology deliberately ensures compliance with the setpoint calculations. Unique uncertainties attributed to on-line monitoring or single-point monitoring specifically reduce the on-line monitoring drift allowance to ensure that the setpoint calculations do not require revision.

Requirement 6 For any algorithm used, the maximum acceptable value of deviation (MAVD) shall be such that accepting the deviation in the monitored value anywhere in the zone between PE and MAVD will provide high confidence (level of 95%/95%) that drift in the sensor-transmitter or any part of an instrument channel that is common to the instrument channel and the on-line monitoring loop is less than or equal to the value used in the setpoint calculations for that instrument channel.

Discussion for Requirement 6:

The calculation method described in Section 6 ensures the MAVD provides a high confidence level that is entirely consistent with the setpoint calculations. The allowance for drift has been conservatively determined without taking credit for non-sensor related uncertainty terms. The on-line monitoring allowance for drift is further reduced to account for unique uncertainty elements introduced by the use of on-line monitoring.

Requirement 7 The instrument shall meet all requirements of the above requirement 6 for the acceptable band or acceptable region.

Discussion for Requirement 7:

The calculation method described in Section 6 ensures the MAVD provides a high confidence level that is consistent with the setpoint calculations. The allowance for drift has been conservatively determined without taking credit for non-sensor related uncertainty terms. The on-line monitoring allowance for drift is further reduced to account for unique uncertainty elements introduced by the use of on-line monitoring.


Requirement 8 For any algorithm used, the maximum value of the channel deviation beyond which the instrument is declared "inoperable" shall be listed in the technical specifications with a note indicating that this value is to be used for determining the channel operability only when the channel's performance is being monitored using an on-line monitoring technique. It could be called "allowable deviation value for on-line monitoring" (ADVOLM) or whatever name the licensee chooses. The ADVOLM shall be established by the instrument uncertainty analysis. The value of the ADVOLM shall be such to ensure:

(a) that when the deviation between the monitored value and its PE is less than or equal to the ADVOLM limit, the channel will meet the requirements of the current technical specifications, and the assumptions of the setpoint calculations and safety analyses are satisfied; and (b) that until the instrument channel is recalibrated (at most until the next refueling outage), actual drift in the sensor-transmitter or any part of an instrument channel that is common to the instrument channel and the on-line monitoring loop will be less than or equal to the value used in the setpoint calculations and other limits defined in 10 CFR 50.36 as applicable to the plant-specific design for the monitored process variable are satisfied.

Discussion for Requirement 8:

Section 6 establishes the methodology for calculating the on-line monitoring drift allowance. The methodology has been defined in a manner that ensures that the associated setpoint calculation allowances remain unchanged. This is an important part of the on-line monitoring implementation process because the intent is to minimize the changes necessary in the technical specifications.

Accordingly, the on-line monitoring drift allowance ensures that the technical specification trip setpoint and allowable value for each parameter remain unchanged.

The on-line monitoring quarterly surveillance ensures that 1) the on-line monitoring system performance is acceptable and 2) each monitored parameter is operating within acceptable limits.

The on-line monitoring acceptance criteria, including the MAVD and the ADVOLM, would be provided in a quarterly surveillance procedure. Including this information in the body of the technical specifications should not be necessary and is more appropriately assigned to the surveillance procedure. This is no different in concept than providing acceptable as-found settings and as-left settings for instrument calibrations in the associated calibration documents.

Requirement 9 Calculations defining alarm setpoint (if any), acceptable band, the band identifying the monitored instrument as needing to be calibrated earlier than its next scheduled calibration, the maximum value of deviation beyond which the instrument is declared "inoperable," and the criteria for determining the monitored channel to be an "outlier," shall be performed to ensure that all safety analysis assumptions and assumptions of the associated setpoint calculation are satisfied and the calculated limits for the monitored process variables specified by 10 CFR 50.36 are not violated.


Discussion for Requirement 9:

Section 6 establishes the methodology for calculating the on-line monitoring drift allowance, and the methodology has been defined in a manner that ensures that the associated setpoint calculation allowances remain unchanged. The methodology ensures compliance with Requirement 9.

6.1 Traditional Uncertainty Elements Included in On-Line Monitoring

Before proceeding with an uncertainty discussion, the typical instrument circuit for on-line monitoring will be described. On-line monitoring is expected to acquire its data from the indication and control portion of an instrument loop, electrically isolated from the safety-related trip portion of the loop. Figure 6-1 shows a simplified layout of the configuration for a safety-related instrument loop. The on-line monitoring system might be directly connected to the data acquisition system, sampling at some specified frequency. Or, it might receive its data from a data historian or data archive and never actually be directly connected to any data acquisition system.

Regardless of the method by which the on-line monitoring system acquires process data, the data source will generally be from data acquisition cards that transmit signals to the plant computer.

Figure 6-1
Typical On-Line Monitoring Physical Configuration
(Block diagram: pressure transmitter and power supply; 250-ohm input resistors; isolator, data acquisition, and data analysis equipment inside the on-line monitoring equipment boundary; a separate 250-ohm input to the bistable that performs the safety-related trip.)

Notice in Figure 6-1 that the safety-related actuation function performed by the bistable is not part of the on-line monitoring circuit. Also, notice that the on-line monitoring circuit contains additional instrumentation that is not part of the safety-related function. The principal overlap between the safety-related and the non-safety-related portions of the instrument channel occurs at the sensor. Table 6-1 summarizes the traditional contributors to measurement uncertainty present in each signal path.


Table 6-1
Traditional Process Instrument Circuit Uncertainty Sources

Uncertainty Term                   Present in On-Line      Present in Safety-      Included in Sensor
                                   Monitoring Path?        Related Trip Path?      Calibration?
Process measurement accuracy       X                       X                       -
Process element accuracy           X                       X                       -
Sensor reference accuracy          X                       X                       X
Sensor drift                       X                       X                       X
Sensor temperature effect          X                       X                       X (partial)
Sensor pressure effect             X                       X                       -
Sensor vibration                   X                       X                       -
Sensor calibration tolerance       X                       X                       X
Sensor M&TE accuracy               X                       X                       X
Isolator reference accuracy        X                       -                       -
Isolator drift                     X                       -                       -
Isolator temperature effect        X                       -                       -
Isolator calibration tolerance     X                       -                       -
Isolator M&TE accuracy             X                       -                       -
Computer input A/D accuracy        X                       -                       -
Bistable reference accuracy        -                       X                       -
Bistable drift                     -                       X                       -
Bistable temperature effect        -                       X                       -
Bistable calibration tolerance     -                       X                       -
Bistable M&TE accuracy             -                       X                       -

As shown in Table 6-1, the on-line monitoring circuit does not monitor the entire trip circuit portion of the instrument loop; the bistable's uncertainty elements are not included in the monitored path. Bistable performance will continue to be verified through periodic functional checks. On-line monitoring does not change any current practices regarding bistable functional checks.

On-line monitoring includes the process measurement effects, process element accuracy, and various environmental effects, whereas traditional sensor calibration checks do not necessarily include these contributors to uncertainty. Predictable bias effects that influence all sensors equally, such as fluid density effects, would be accounted for in the associated setpoint calculation. Note that although on-line monitoring accounts for these various effects, EPRI is not necessarily trying to distinguish individual terms.


6.2 Unique Uncertainty Elements Introduced by the Use of On-Line Monitoring

On-line monitoring can detect degrading channels. However, on-line monitoring also introduces other uncertainty elements that must be considered when establishing acceptance criteria. The following uncertainty contributors should be considered:

  • On-line monitoring system uncertainty, such as the MSET estimation uncertainty
  • Uncertainty allowance associated with single-point monitoring

The following sections discuss each of these uncertainty elements.

6.2.1 MSET Estimate Uncertainty

The EPRI on-line monitoring implementation project has applied MSET as the preferred on-line monitoring method. For this reason, this section addresses the MSET uncertainty. While alternative empirical modeling algorithms are available, the uncertainty of these alternative approaches was not addressed in the EPRI on-line monitoring project, with the exception of the ICMP algorithm. Appendix D provides information regarding the uncertainty of redundant channel averaging methods, and specifically for ICMP.

The MSET on-line monitoring method can produce a highly accurate estimate of the process signal value. The MSET estimate uncertainty depends on its algorithm, the model settings, and the data used for training. The following variables have the largest potential effect on the MSET estimation uncertainty:

  • Correlation between signals-models might contain redundant signals, physically correlated signals, correlated groups of signals, or correlated groups of signals with low cross-correlation between groups
  • Number of signals in model
  • Accuracy and noise content of signals
  • Quality of training set-includes the quality of the signal vectors selected by the training algorithm
  • Number of vectors selected for the training matrix
  • Range of values in the training set
  • Effect on the estimation process when signal data are outside the training range

ANL has developed a methodology that can be used for a plant-specific uncertainty analysis for the standard version of MSET. In addition, the results from applying this methodology to several specific models are provided in Appendix E. The uncertainty analysis project has been funded as part of the Department of Energy NEPO program, and a copy of the final report issued from this project is provided in Appendix E. A summary of the analysis is presented as follows.


The results of plant-specific uncertainty analyses show that the uncertainty in small MSET models with redundant sensors is determined primarily by spillover. Overall, uncertainty in these sensors is rather small. The uncertainty in larger models, represented by Reactor Protection System sensors, is determined primarily by the considerable variation observed in the feedwater flow sensors and the steam flow sensors. Although rather large, the uncertainty is still less than the estimated noise level, especially when regularization is used. The MSET uncertainty for these models could be reduced, if necessary, by pre-filtering of the data.

A simulation-based generic uncertainty analysis uses a database of comprehensive simulation results to provide conservative general bounds for on-line monitoring uncertainty. General database queries have been developed to provide uncertainty bounds for template models available in the database. The present database has been developed for testing purposes, and smaller models are overrepresented to save computational effort. In spite of a limited coverage of possible power plant models, some general conclusions about MSET uncertainty can be derived from the current database content. As already observed in the plant-specific analysis, the largest uncertainty is due to the sensor noise. In most cases, confidence intervals are bounded by two standard deviations of the estimated noise. Some exceptions are observed for very small noise levels and for non-Gaussian noise. The largest uncertainty in groups of redundant sensors with small noise levels is due to spillover. However, this uncertainty is only a small fraction of the introduced drift and is easily discriminated from the response of the drifted sensors when the drift size is not too small.

The simulation-based method for generic uncertainty bounds does not provide as accurate uncertainty bounds as the plant-specific analysis. It is possible that this analysis would provide overly conservative uncertainty estimates for some models. In the case that a large generic uncertainty bound substantially reduces the on-line monitoring drift allowance, detailed uncertainty analysis could be performed for such a situation.

6.2.2 Single-Point Monitoring Allowance

Section 5 explains the single-point monitoring issue and establishes the recommended allowance for use in the development of a channel drift allowance. The EPRI drift study single-point monitoring allowance is provided in Figure 5-16 for a 95% minimum probability (nominal probability near 96%). A minimum allowance of 0.25% is recommended to ensure conservative results.

6.3 On-Line Monitoring Acceptance Criteria

6.3.1 Establishing the Setpoint Calculation Drift Allowance

For an MSET application, the on-line monitoring drift allowance is defined as the allowed difference between the observations and the estimates (referred to as the residuals) before declaring that the instrument channel requires a traditional calibration check. This on-line monitoring drift allowance must be set conservatively so that technical specification trip setpoint allowances and allowable values are not exceeded. This section describes the procedure for setting up an on-line monitoring drift allowance. The method used is intended to ensure that the drift allowance maintains a clear link to the allowances used in the associated setpoint calculations without requiring revisions to these setpoint calculations.

6.3.1.1 Setpoint Allowances-Westinghouse Plant Example

For a safety-related channel that performs a safety actuation function, the channel statistical allowance for the trip setpoint is calculated by Equation 6-1:

CSA = ± sqrt[PMA^2 + PEA^2 + (SCA + SMTE + SD)^2 + SPE^2 + STE^2 + (RCA + RMTE + RCSA + RD)^2 + RTE^2]     Eq. 6-1

Where:

PMA = process measurement accuracy
PEA = primary element accuracy
SCA = sensor calibration accuracy
SMTE = sensor measurement and test equipment accuracy
SD = sensor drift
SPE = sensor pressure effects
STE = sensor temperature effects
RCA = rack calibration accuracy
RMTE = rack measurement and test equipment accuracy
RCSA = rack calibration setting accuracy
RD = rack drift
RTE = rack temperature effects

The following summarizes the significance of the above terms:

  • The rack terms listed above relate to the bistable and associated instrumentation, which are not included in the on-line monitoring signal path. Accordingly, these terms are not included within the scope of (or addressed by) an on-line monitoring drift allowance.
  • With regard to on-line monitoring, the sensor-related uncertainty terms are the uncertainty elements shared by the setpoint calculation and on-line monitoring. The sensor uncertainty elements can be grouped according to their association with either 1) process or environmental effects or 2) calibration effects.

- The uncertainty elements associated with process/environmental effects explain why redundant channels might not display exactly the same value even if they are perfectly calibrated; there are some random variations in the measurements caused by these process/environmental uncertainty elements. These terms are typically not included in the on-line monitoring drift allowance and include process measurement accuracy (PMA),

primary element accuracy (PEA), sensor pressure effects (SPE), and sensor temperature effects (STE). It could be argued that STE is partially included in the on-line monitoring process because the ambient temperature around the sensor varies while on-line monitoring assesses the channel's performance. But this term is typically not included in the on-line monitoring drift allowance because 1) it can be difficult to quantify the actual temperature variation and 2) the temperature variation rarely reaches the specified design limits.

- The uncertainty elements associated with calibration effects are specifically what an on-line monitoring program is evaluating and it is these terms that should relate directly to the on-line monitoring drift allowance. These terms include the sensor calibration accuracy (SCA), sensor measurement and test equipment accuracy (SMTE), and sensor drift (SD).

SCA is often a combination of several effects, including the transmitter reference accuracy, the calibration tolerance, and the static head correction, if applicable. SMTE typically includes the combination of a digital multimeter and test pressure accuracy. SD is often expressed as the design or specification allowance for drift at a stated calibration interval. Some plants have determined plant-specific values for SD based on an analysis of as-found and as-left calibration data.

Figure 6-1 shows that the on-line monitoring circuit path includes instrumentation not included in the bistable actuation circuit. The isolator and downstream data acquisition equipment can possibly affect the overall measurement uncertainty, but these devices are not included in the on-line monitoring drift allowance because they have no relevance to the setpoint allowances.

6.3.1.2 On-Line Monitoring Drift Allowance-Westinghouse Plant Example

The typical approach to setpoint calculations is designed to ensure a conservative result. The calculation method often assumes that SCA, SMTE, and SD are dependent parameters, which is why they are summed before squaring in the setpoint allowance calculation. As stated before, these are the terms that are addressed by on-line monitoring. For an on-line monitoring method, two other uncertainty terms subtract from the combined SCA, SMTE, and SD:

  • The MSET uncertainty (MSETunc)-the maximum expected uncertainty of the parameter estimate by the MSET on-line monitoring method
  • The single-point monitoring allowance (SPMA)-the allowance to account for monitoring a small operating space for an extended period

As an example to illustrate the combined effect of these various terms, suppose the various terms have the following values:

SCA = ±0.5
SMTE = ±0.2
SD = ±1.0
MSETunc = ±0.25
SPMA = ±0.25 (assumes that the parameter is normally monitored high in the span)

The on-line monitoring drift allowance can be calculated as shown in Equation 6-2:

OLM = ± sqrt[(SCA + SMTE + SD)^2 - MSETunc^2 - SPMA^2]     Eq. 6-2

OLM = ± sqrt[(0.5 + 0.2 + 1.0)^2 - 0.25^2 - 0.25^2] = ±1.66%

For this channel, threshold limits might be established as follows:

  • Allowable- 1.4%
  • Maximum-1.66%

6.3.1.3 Possible Analysis Variations for Other Reactor Types

The analysis approach used to develop the on-line monitoring drift allowance must be consistent with the method used in the associated setpoint calculation. As an example, some setpoint calculation methods might assume that SCA, SMTE, and SD are independent parameters, resulting in the terms being separately squared within the square root equation. The uncertainty calculation would be adjusted by Equation 6-3:

OLM = ± sqrt[SCA^2 + SD^2 + SMTE^2 - MSETunc^2 - SPMA^2]     Eq. 6-3

As an example to illustrate the combined effect of these various terms, suppose the various terms have the same values as in Equation 6-2:

SCA = ±0.5
SMTE = ±0.2
SD = ±1.0
MSETunc = ±0.25
SPMA = ±0.25 (assumes that the parameter is normally monitored high in the span)

The on-line monitoring maximum drift allowance is given by Equation 6-4:

OLM = ± sqrt[0.5^2 + 0.2^2 + 1.0^2 - 0.25^2 - 0.25^2] = ±1.08%     Eq. 6-4

For this channel, threshold limits might be established as follows:

  • Allowable- 1.0%
  • Maximum-1.08%
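The two example calculations can be reproduced with a few lines of Python. The sketch below simply evaluates Equations 6-2 and 6-3 for the example values given above; the function name and the dependent/independent switch are illustrative only.

```python
import math

def olm_drift_allowance(sca, smte, sd, mset_unc, spma, dependent=True):
    """On-line monitoring drift allowance: Eq. 6-2 when SCA, SMTE, and SD are
    treated as dependent (summed before squaring), Eq. 6-3 when independent."""
    if dependent:
        calibration_terms = (sca + smte + sd) ** 2
    else:
        calibration_terms = sca ** 2 + smte ** 2 + sd ** 2
    return math.sqrt(calibration_terms - mset_unc ** 2 - spma ** 2)

# Example values from Sections 6.3.1.2 and 6.3.1.3 (percent of span):
print(round(olm_drift_allowance(0.5, 0.2, 1.0, 0.25, 0.25, dependent=True), 2))   # 1.66
print(round(olm_drift_allowance(0.5, 0.2, 1.0, 0.25, 0.25, dependent=False), 2))  # 1.08
```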


6.4 Application of Acceptance Criteria to MSET Residuals

For an MSET application, the on-line monitoring drift allowance is applied to the residual, which is defined as the difference between an observation and its corresponding estimate. Figure 6-2 shows a typical MSET plot showing the observations (blue crosses) and the estimates (red triangles). Notice that the observed values are trending down, whereas MSET continues to produce estimates that are essentially unchanged. This is an example of the MSET response to an instrument drift event.

Figure 6-2
Identified Instrument Drift
(LT-02 observations and estimates versus time; vertical axis approximately 61.9 to 63.2.)

Figure 6-3 shows the residuals for this example. Notice that the residuals are also trending down because they are the calculated difference between the observations and the estimates. In this example, allowable drift limits have been specified at ±1.0% (inner green horizontal lines) and maximum drift limits have been set at ±1.5% (outer red horizontal lines).


Figure 6-3
Residual Plot Showing On-Line Monitoring Drift Limits
(LT-02 residuals, observation minus estimate, versus time, plotted against the allowable and maximum drift limits.)

6.5 Actions Upon Detection of a Drifted Channel A three-region calibration assessment has been defined for the on-line monitoring process, as shown in Figure 6-4. This approach follows the NRC SE criteria for alarm assessment, showing the MAVD and the ADVOLM. For each monitored parameter, an acceptable deviation from the parameter estimate has to be established. Beyond this acceptable deviation, calibration will be required. The urgency of calibration depends on the magnitude of the deviation; beyond a certain deviation, immediate assessment will be required in accordance with technical specification action statements.

6-12 Ce3

EPRI Licensed Material On-Line Monitoring UncertaintyAnalysis Aid 1L=ADVOLM Positive Ad-OL.

Deviation

..... MAVD I Acceptable Region Parameter Estimate Deviation from ............. .............................. E................PE Parameter Estimate Acceptable Region MAVD 4 'Scheodule Rloutine Calibratio Negative Deviation

_ fo Operabl , t 7it 'a ADVOLM Figure 6-4 Alarm Monitoring Points The following sections provide additional guidance regarding the performance evaluation process.

6.5.1 Acceptable Region Acceptance criteria must be established for each monitored parameter. If a given channel remains within the acceptance band, no calibration action is necessary for the monitored sensor unless that channel is scheduled for its periodic calibration. The acceptable region of operation must be established in accordance with the process described in Section 6.3 with a clear link to the setpoint allowances for drift.

6.5.2 Schedule Routine Calibration-MAVD If a channel's deviation exceeds a pre-defined limit, a calibration check is necessary. If the deviation does not exceed channel operability limits, the urgency of calibration might not be critical and can be scheduled as a routine activity. For example, the channel might be added to the outage work plan or it might be scheduled for a routine calibration if accessibility is not an issue during power operation.

6-13

EPRILicensed Material On-Line Monitoring UncertaintyAnalysis 6.5.3 Operability Assessment-ADVOLM If a channel's deviation exceeds a pre-defined acceptance limit, the channel must be evaluated for operability and corrective actions must be taken, as directed by the technical specifications. The operability assessment should consider the guidance provided in NRC Generic Letter 91-18, Revision 1, Information to Licensees RegardingNRC Inspection ManualSection on Resolution of DegradedandNonconforming Conditions.

As part of any operability assessment, it should be noted that the on-line monitoring signal path includes additional devices besides the sensor that are potentially the source of drift or failure.

Consider checking the accessible portions of the instrument loop before checking the sensor.

6.6 Ongoing Calibration Monitoring Program In support of the longer calibration intervals, an evaluation process should be established to confirm that instrument performance continues to be acceptable. The concept here is similar, in some respects, to the ongoing monitoring program for 2-year fuel cycles as discussed in NRC Generic Letter 91-04, Changes in TechnicalSpecification SurveillanceIntervals to Accommodate a 24-Month Fuel Cycle.

Important questions about an ongoing monitoring program include the following:

  • Does sensor drift exceed allowable tolerances at the longer calibration interval?
  • Are drift magnitudes larger than expected at the longer calibration interval?
  • Does the periodic calibration of redundant sensors identify calibration errors that were not detected by on-line monitoring?

Some caution in the above evaluations is also warranted. A direct correlation between the observed performance and the calibration records might not always be observed. Remember that the on-line monitoring system is monitoring the operational status of a parameter, from the process to the display, which is different from the results that might be observed when calibrating a sensor. Key differences between the two are:

  • On-line monitoring is evaluating the process signal from the process to the display. The sensor is only part of this loop.
  • A sensor calibration does not include process measurement effects nor some environmental effects that are included during on-line monitoring operation.
  • Sensors are exposed to a different set of environmental and operating conditions as the plant shuts down, cools down, and depressurizes. On-line monitoring might not be functioning during the plant shutdown period and would not observe these changes.

6-14

EPRILicensedMaterial 7

ON-LINE MONITORING PROCEDURES AND SURVEILLANCES The NRC Safety Evaluation for on-line monitoring provided the following surveillance-related requirement:

Requirement 14 Before declaring the on-line monitoring system operable for the first time, and just before each performance of the scheduled surveillance using an on-line monitoring technique, a fill-features functional test, using simulated input signals of known and traceable accuracy, should be conducted to verify that the algorithm and its software perform all required functions within acceptable limits of accuracy. All applicable features shall be tested.

Discussionfor Requirement 14:

The V&V documents produced in support of-this project include a procedure with expected results for an acceptance test and periodic test. The procedure provided in Appendix F is specifically for a SureSense Diagnostic Monitoring Studio MSET application and can be used as a guide for other software applications. The lest files referenced in this procedure are provided directly to the software users. As part of the plant-specific software acceptance, these test procedures and test files form the recommended basis for acceptance testing as well as for periodic testing in support of the quarterly on-line monitoring surveillance test. Section 7 provides the recommended input for the quarterly on-line monitoring surveillance test. Section 8 discusses the V&V documentation in support of an MSET application.

7.1 Impact on Plant Procedures and Documents Plant procedures and documents will be affected by the implementation of on-line monitoring.

The following procedures, work processes, or documents will generally need to be modified or created:

  • Technical specifications-formal approval will be necessary to allow longer calibration intervals for specified sensors.
  • Calibration interval-the routine calibration frequency for redundant channels will be changed from once per fuel cycle to once per n fuel cycles, where n refers to the number of redundant channels. The calibration interval will not exceed 8 years regardless of the value of n.

7-1

EPRI Licensed Material On-Line MonitoringProceduresand Surveillances

  • Quarterly surveillance procedure-a formal procedure will be developed for the quarterly surveillance evaluation by on-line monitoring. This procedure should provide guidance to the user regarding how to perform the following tasks:

- Verify that the on-line monitoring system is functional.

- Verify that no monitored channels are operating outside alarm limits. Required actions, such as notification to operations or an operability evaluation, should be addressed in the event that alarm limits are exceeded.

- Verify that current plant conditions are appropriate for the surveillance. For example, plant conditions should not be outside the on-line monitoring system's validity limits, and process conditions should be stable for the parameters of interest.

- Document completion of the surveillance. Output reports from the on-line monitoring program should be included as part of the documentation.

  • On-line monitoring operation-an operating procedure, operating manual, or other type of user guide will be needed to ensure that future users will be able to operate the system.
  • Miscellaneous-other plant documents that might be affected by the existence and implementation of on-line monitoring. The number of documents will vary based on plant-specific document control systems.

7.2 Calibration Procedures

Calibration procedures can remain unchanged by the implementation of on-line monitoring unless the procedures directly specify the calibration frequency. Note that the physical calibration process by which an instrument is checked and adjusted, if necessary, has not changed. Only the calibration frequency has changed.

7.3 On-Line Monitoring System Operation Procedures

EPRI has developed a series of documents specifically intended to facilitate the on-line monitoring implementation process. These documents were prepared in direct support of the EPRI on-line monitoring implementation project and include the following:

  • EPRI Interim Report, SureSense Diagnostic Monitoring Studio Users Guide, Version 1.4, June 2002 [12]-provides detailed guidance for the application of the SureSense MSET method for nuclear plant systems. This user's guide is detailed enough to serve as the plant-specific software operating manual.
  • SureSense Diagnostic Monitoring Studio Users Guide, Version 2.0, Expert Microsystems, Inc., Orangevale, CA: 2004 [7]-updated from Version 1.4 to accompany the later software release.
  • On-Line Monitoring of Instrument Channels, Volume 1: Guidelines for Model Development and Implementation, December 2004 [4]-presents the various tasks which must be completed to prepare models for and to implement an on-line monitoring system, including: data preparation, signal selection, model training and evaluation, model deployment, and model retraining. Related issues also discussed are: data quality, data quantity, fault detection techniques, and alarm response mechanisms. An extensive glossary of on-line monitoring terms is also provided.

  • On-Line Monitoringof Instrument Channels, Volume 2: Algorithm Description, Model Examples, andResults, December 2004 15]-contains more detailed descriptions of the empirical modeling algorithms, specific examples and results of developed models, and further evaluations of the software utilized in this project.

EPRI provides ongoing training in support of the above documents. This training is designed to improve user understanding of on-line monitoring application and evaluation.

7.4 Quarterly Surveillance Procedure

The quarterly surveillance should include the following:

  • Periodic full-features functional test of the software-confirms that the software used for on-line monitoring is performing properly. Appendix F provides an acceptance/periodic test that can be used for a SureSense Diagnostic Monitoring Studio MSET-based application. This appendix has been provided in support of the participants in the EPRI on-line monitoring implementation project but can be used as a guide for other software applications. Minor modifications can be incorporated to accommodate the later software release, SureSense version 2.0.
  • Confirmation that the program and data files used for on-line monitoring are the proper revision and date in accordance with plant-specific configuration control requirements.
  • Model-by-model evaluation of all associated technical specification parameters to determine if any channels have alarmed or indicate drift beyond acceptable levels.
  • Formal confirmation of acceptable channel performance, provided that no channels have exhibited drift beyond acceptable limits.
  • Formal identification of any channels that should be scheduled for routine recalibration.
  • Verification that the on-line monitoring model settings (including model training basis) continue to be acceptable or identification of any potential model changes that should be formally evaluated.

The quarterly surveillance procedure should provide the acceptable drift monitoring limits based on the guidance provided in Section 6. This represents one key difference between the NRC Safety Evaluation and the guidance provided in this report. The NRC Safety Evaluation, Requirement 8 states, in part, that the maximum value of the channel deviation beyond which the instrument is declared "inoperable" shall be listed in the technical specifications. This report proposes that a clear link be maintained between the on-line monitoring drift limits and the setpoint calculations that form the basis for the technical specification trip setpoint and allowable value. Accordingly, these values should remain unchanged in the technical specifications, with the on-line monitoring acceptance limits specified only in the quarterly procedure.
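As an illustration of how the quarterly evaluation might compare each channel's deviation against the procedurally specified drift acceptance limits, the following sketch flags channels that should be scheduled for routine recalibration. The channel names, deviations, and limits are hypothetical; in practice the limits would be derived from the plant's setpoint calculations as discussed in Section 6.

```python
# Minimal sketch of the model-by-model deviation check performed during the
# quarterly surveillance.  Channel names, deviations, and limits are hypothetical.
surveillance_results = {
    # channel id: (observed deviation from parameter estimate, acceptance limit)
    "PT-0455A": (0.12, 0.50),
    "PT-0455B": (0.61, 0.50),   # exceeds its drift acceptance limit
    "PT-0455C": (0.08, 0.50),
    "PT-0455D": (0.21, 0.50),
}

needs_recalibration = [
    channel for channel, (deviation, limit) in surveillance_results.items()
    if abs(deviation) > limit
]

if needs_recalibration:
    print("Schedule routine recalibration for:", ", ".join(needs_recalibration))
else:
    print("All monitored channels within drift acceptance limits.")
```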


7.5 Software Verification and Validation - General Criteria

On-line monitoring will be used to determine whether calibration of safety-related equipment is needed and, because of its ability to identify degraded channels, it can initiate the operability assessment process. Although the on-line monitoring software is not considered safety-related, it is considered quality-related and should require formal evaluation in accordance with plant-specific software acceptance procedures. The required level of validation, verification, testing, and documentation will be determined based upon the software quality assurance class determined by application of plant-specific procedures. Determination of these requirements should also include site acceptance testing requirements.

The EPRI on-line monitoring implementation project originally selected MSET as its preferred on-line monitoring method and the SureSense Diagnostic Monitoring Studio (SureSense) software (version 1.4) for MSET implementation. Later developments resulted in the availability of an alternative method that was used for some of the development work in 2004.

The alternative method is an Expert Microsystems, Inc. [6] proprietary algorithm available in SureSense version 2.0. Many of the specific guidelines presented in this report are applicable to both techniques, though in some cases minor modifications might be required to accommodate the new technique. Because the majority of the work in support of this project was completed using SureSense version 1.4 with MSET, Section 8.2 provides information regarding the MSET software developed by ANL as well as the SureSense software developed by Expert Microsystems, which incorporates the ANL MSET base code (SureSense version 1.4).

The NRC Safety Evaluation for on-line monitoring provided the following requirements associated with the software used for on-line monitoring:

Requirement 12 (b) The plant-specific QA requirements shall be applicable to the selected on-line monitoring methodology, its algorithm, and the associated software. In addition, software shall be verified and validated and meet all quality requirements in accordance with NRC guidance and acceptable industry standards.

Discussion for Requirement 12:

Section 8 provides the V&V documentation produced in support of this project; this documentation specifically supports an MSET implementation because this is the basis for the EPRI on-line monitoring implementation project. The documentation developed in support of this project includes quality documents and V&V-related documents produced by the software supplier (Expert Microsystems, Inc.), Argonne National Laboratory, and EPRI. Each participating plant must follow its plant-specific procedures for software acceptance.

Requirement 14 Before declaring the on-line monitoring system operable for the first time, and just before each performance of the scheduled surveillance using an on-line monitoring technique, a full-features functional test, using simulated input signals of known and traceable accuracy, should be conducted to verify that the algorithm and its software perform all required functions within acceptable limits of accuracy. All applicable features shall be tested.


Discussion for Requirement 14:

The V&V documents produced in support of this project include a procedure with expected results for an acceptance test and periodic test. The procedure provided in Appendix F is specifically for a SureSense Diagnostic Monitoring Studio MSET application and can be used as a guide for other software applications. The test files referenced in this procedure are provided directly to the software users. As part of the plant-specific software acceptance, these test procedures and test files form the recommended basis for acceptance testing as well as for periodic testing in support of the quarterly on-line monitoring surveillance test. This section provides the recommended input for the quarterly on-line monitoring surveillance test. Section 8 discusses the V&V documentation in support of an MSET application.

7.6 MSET V&V

V&V of the MSET software has been performed at several levels:

  • The MSET base code was developed by ANL and has been licensed to Expert Microsystems Inc. and SmartSignal Corporation [13]. The V&V of this base code is described in Section 8.2.1.
  • The Expert Microsystems SureSense software has been applied by the EPRI on-line monitoring implementation project. SureSense version 1.4 uses the MSET base code licensed by ANL and overlays additional features and capabilities for signal validation and monitoring.

The SureSense software V&V documents are described in Section 8.2.2.

  • EPRI has sponsored an independent V&V of the SureSense software (version 1.4) in support of the EPRI on-line monitoring implementation project. Minor modifications to the original document will bring it into compliance for future releases of the software.

7.6.1 Argonne National Laboratory V&V

7.6.1.1 Background

The ANL V&V effort was funded by the Nuclear Energy Plant Optimization (NEPO) program, which is a U.S. Department of Energy (DOE) research and development (R&D) program focused on performance optimization of currently operating U.S. nuclear power plants. The primary research areas for the R&D program are plant aging and optimization of electrical production.

The NEPO program is a public-private R&D partnership with equal or greater matching funds coming from industry. The NEPO program was initiated in fiscal year 2000 and is explained in detail on the DOE Web site, http://nepo.ne.doe.gov/.

The Instrument Monitoring and Calibration (IMC) Users Group coordinated NEPO projects that support the continued development of on-line monitoring and directly support tasks associated with the EPRI on-line monitoring implementation project. Argonne National Laboratory was awarded the initial scope of work for 2001 to complete the V&V of the MSET kernel software.

This task was completed in July 2002 and directly supports the on-site acceptance of the associated software.


7.6.1.2 V&V Documents

In July 2002, the ANL Reactor Analysis and Engineering Division completed the software quality assurance documentation in support of the MSET base code. The purpose of the ANL V&V effort was to develop the documentation necessary for participating nuclear plants to demonstrate that the software analysis modules and state estimation kernels are reliable and of high quality.

As part of the ANL V&V effort, a code Configuration Management Plan (CMP) was implemented and the code was placed in a location controlled by the CMP. A V&V plan was written to incorporate the required activities and acceptance criteria. Implementation of the V&V plan included a formal test plan. The following ANL documents were produced in support of the V&V effort:

  • Quality Assurance Documentation Package for the Multivariate State Estimation Technique Software System (MSET) Software System Kernel [14]
  • Software Quality Assurance Program [15]
  • Software Configuration Control Procedure for the Multivariate State Estimation Technique Software System (MSET) [16]
  • Software Requirements and Specifications for the Multivariate State Estimation Technique Software System (MSET) [17]
  • Software Verification and Validation Plan for the Multivariate State Estimation Technique Software System (MSET) [18]
  • Software Test Plan for the Multivariate State Estimation Technique Software System (MSET) [19]
  • Verification of the MSET Base Code Version 3.5 [20]
  • MSET Kernel Validation Report, Version 3.5 of the MSET Base Code [21]

These documents were developed and completed as a NEPO project and are available in support of software acceptance and for use with any MSET application that employs the ANL base code.

7.6.2 Expert Microsystems SureSense Software V&V

The Expert Microsystems SureSense software is a commercially supported implementation of the Argonne National Laboratory MSET base code. The software, also referred to as the SureSense Diagnostic Monitoring Studio, Version 1.4, was the original implementation used during this project; Version 2.0 became available during the course of the project.

The following documents describe the quality assurance processes associated with SureSense Version 1.4 and Version 2.0:

  • Expert Microsystems Software Configuration Management Plan, Document No. 2001-4471, Rev. 1.0. The Software Configuration Management Plan (CMP) sets forth the policy, procedures, and processes used to accomplish configuration management for DOE Small Business Innovations Research (SBIR) Phase II Grant Number DE-FG03-00ER83007. This CMP describes management methods for all documents, software, and development tools, including editors and compilers requiring configuration control [22].

  • Expert Microsystems Software Quality Assurance Plan, Document No. 2002-4470, Rev. 1.0. The Software Quality Assurance Plan (SQAP) sets forth the process, methods, standards, and procedures that will be used to perform the Software Quality Assurance function for the SureSense software project. This SQAP provides a foundation for managing software quality assurance activities, and is based on project activities and work products performed under DOE SBIR Phase II Grant Number DE-FG03-00ER83007 [23].

  • SureSense Diagnostic Monitoring Studio Software Requirements Specification, Document No. 2001-4489, Rev. 1.4. This document specifies software requirements for version 1.4 of the SureSense Diagnostic Monitoring Studio software [24].
  • SureSense Diagnostic Monitoring Studio Software Requirements Specification, Document No. 2001-4489, Rev. 2.0.3. This document specifies software requirements for version 2.0.3 of the SureSense Diagnostic Monitoring Studio software [25].
  • SureSense Diagnostic Monitoring Studio Software Verification Test Plan, Document No. 2001-4479, Rev. 1.4. This document defines test procedures verifying that the software meets all requirements defined in the software requirements specification. Guidelines are provided for unit testing, integration testing, system testing, regression testing, and acceptance testing. This document provides step-by-step test instructions. Variables and data, as well as expected results, are specified. A Traceability Matrix is included in this document to verify that testing of all items listed in the requirements document is performed [26].
  • SureSense Diagnostic Monitoring Studio Software Verification Test Plan, Document No. 2001-4479, Rev. 2.0.3 [27].
  • SureSense Diagnostic Monitoring Studio Verification and Validation Report, Version 2.03 [28].

7.6.3 EPRI Independent V&V of the SureSense Software

EPRI sponsored an independent V&V of the SureSense software as part of its release for Version 1.4. This V&V document can be used as the basis to prepare an independent review of SureSense version 2.0 if required. Although Expert Microsystems performed internal V&V testing of its software prior to release, the rationale for an independent V&V included the following considerations:

  • An independent V&V could assist nuclear plants participating in the EPRI on-line monitoring implementation project by ensuring adequate documentation to support plant-specific software acceptance.
  • An independent review and documentation of the software functions is generally considered good software engineering practice and was considered a beneficial activity supporting the EPRI on-line monitoring implementation project.



  • The independent V&V testing procedure could form the building block for the plant-specific acceptance test and the periodic full-features test required by the NRC SE (Requirement 14).

This independent V&V test was conducted on known simulated data designed to test the various software estimation and fault detection methods. Future periodic software test results will be traceable to the independent V&V documentation.

The term independent V&V is used here to indicate that the review was performed by personnel not involved in SureSense software development, including development testing. The EPRI independent V&V was completed in July 2002 and was issued as an unpublished EPRI report to the participants in the EPRI on-line monitoring implementation project. The supporting documentation for the report included the documents listed in Sections 8.2.1 and 8.2.2. The EPRI independent V&V results have been included in Appendix C for reference.

7.7 Additional V&V Considerations

7.7.1 Data Acquisition

The method by which data are acquired for on-line monitoring should be verified and validated.

The evaluation should confirm that data obtained from the plant computer or an associated data archive are correct for the application. This evaluation depends on plant-specific data acquisition as well as archiving and retrieval procedures.

7.7.2 Model Configuration Control

In the context of configuration control, the term model refers to the following:

  • The selected group of signals that have been collected for the purpose of signal validation and analysis
  • The various settings defined by the on-line monitoring method that are necessary to optimize the performance of the signal validation procedure
  • The data used for training, including any filtering of the data

As part of developing a model, the above signals, settings, and data are defined. After the model has been tuned for optimal performance, the completed model is placed in service and will be used as the basis for future signal validation. This completed model should be placed under configuration control to ensure the following:

  • The correct version of the model is in service
  • Changes have not been made to the model without undergoing a formal revision process

The method by which a model is placed under configuration control will vary with each plant and its plant-specific document control procedures.
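One simple way such configuration control checks could be supported is to record a manifest of the in-service model files with their cryptographic hashes and verify the manifest before each surveillance run. The sketch below is only an illustration of that idea, assuming hypothetical file names, a JSON manifest, and SHA-256 hashes; plant-specific document control procedures govern the actual method.

```python
import hashlib
import json
from pathlib import Path

def file_hash(path):
    """Return the SHA-256 digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def build_manifest(model_files, manifest_path="model_manifest.json"):
    """Record the approved model configuration (hypothetical file names)."""
    manifest = {str(p): file_hash(p) for p in model_files}
    Path(manifest_path).write_text(json.dumps(manifest, indent=2))
    return manifest

def verify_manifest(manifest_path="model_manifest.json"):
    """Confirm the in-service model files match the approved configuration."""
    manifest = json.loads(Path(manifest_path).read_text())
    mismatches = [p for p, h in manifest.items()
                  if not Path(p).exists() or file_hash(p) != h]
    return mismatches  # an empty list means the approved model revision is in service

if __name__ == "__main__":
    # Hypothetical model artifacts, created here only so the sketch is runnable.
    files = ["feedwater_model.settings", "feedwater_model.signals"]
    for name in files:
        Path(name).write_text("example content")
    build_manifest(files)
    print("Configuration mismatches:", verify_manifest())  # expect []
```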


7.8 MSET Software Acceptance Test

The V&V documents described in the previous sections include a procedure with expected results for an acceptance test and periodic test. The procedure provided in Appendix F is specifically for a SureSense Diagnostic Monitoring Studio (version 1.4) MSET application and can be used as a guide for other software applications. The test files referenced in this procedure are provided directly to the SureSense software users. As part of the plant-specific software acceptance, these test procedures and test files form the recommended basis for acceptance testing as well as for periodic testing in support of the quarterly on-line monitoring surveillance test.

7.9 Redundant Channel Methods V&V

7.9.1 ICMP V&V

The EPRI Instrument Monitoring and Calibration Program (ICMP) was placed in service at one nuclear plant as a performance monitoring and troubleshooting tool. As part of system acceptance, a team was formed by the plant staff to provide control and oversight of the V&V effort. The team consisted of independent computer/software technicians, technicians and engineers associated with the project, and a QC representative. EPRI contracted with an external company to provide detailed support of the effort, but for purposes of independence, this company was not part of the V&V team.

The V&V activities were conducted as an integral part of the system design and development process. All V&V formal reports and correspondence reporting V&V findings were transmitted directly to the Project Manager, with copies forwarded to the other participants. The V&V Team reviewed all relevant documents and correspondence and the plant staff performed all V&V activities related to ICMP software.

The V&V documentation file is stored at the plant and includes the following elements:

  • V&V test plan
  • System requirements/functional specification
  • Test procedure
  • Discrepancy/resolution report

The ICMP algorithm is detailed in On-Line Monitoring of Instrument Channel Performance, Volume 2: Algorithm Description, Model Examples, and Results [5]. Appendix D provides a review of the uncertainty analysis performed on the ICMP algorithm. Appendix A reviews redundant channel averaging techniques. One advantage of a redundant channel approach such as ICMP is the overall analytical simplicity and subsequent uncertainty analysis of the parameter estimate. The algorithm is easy to understand and its performance as signals drift is straightforward to evaluate. For this reason, the on-site V&V effort should be simpler with respect to the testing aspects of the software implementation.


8

MISCELLANEOUS TECHNICAL CONSIDERATIONS

This section discusses various technical issues that might have relevance to the implementation of on-line monitoring at some nuclear plants.

8.1 Bases for Reduced Response Time Testing

Some plants have eliminated periodic response time testing (RTT) requirements based on nuclear steam supply system (NSSS) owner's group submittals and the corresponding NRC SE. One NRC SE provided the following evaluation of response time testing requirements:

"RTT is resource intensive and time consuming when properly incorporated into the licensee's surveillance test program. Because the utility required RTT tests only for compliance with accident analysis assumptions, and not to the instrument manufacturer design response time, the test tells very little about the general performance of the instrument. The accident analysis times are, in general, much greater (typically an order of magnitude or more) than the manufacturer designed instrument response times, and therefore an instrument could have significant delay in response, and still pass the required test for overall mitigation or trip system actuation response time. For the purpose of showing that the dynamic response of the instrument is within manufacturer's design parameters, the current RTT is not useful.

"The RTT performed by the licensees per TS requirements is not performed often. In the typical system, RTT is required to be checked on only one channel each refueling outage, on a rotating basis. Hence, with a four channel system and an 18 month refueling cycle, any given channel is tested only every 72 months, or 6 years. This test interval is too great for any meaningful trending program, and in most plants, RTT data are not trended.

"In analog systems, response time degradation is generally due to wear or breakage of some internal part of the instrumentation. Because safety systems are single failure proof, failure of one channel to respond within response time criteria will not prevent the safety system as a whole from responding to achieve the required function within the time criteria.

"...The FMEA described ... shows that component degradation will not increase the response time beyond the bounding response time without that degradation being detectable by other periodic surveillance tests, such as channel checks and calibrations.

"...Based on this information, the staff concurs that RTT is redundant to other periodic surveillance tests and that appropriate surveillance testing alternatives to RTT are in place per the existing requirements of plant specific TSs. The staff concludes that calibration and other TS surveillance testing requirements will adequately ensure that the response time is verified for the components identified..."


As can be seen, the relaxation of RTT requirements is based in part on an expectation that periodic surveillance tests, such as channel checks and calibrations, will continue to be performed. The further extension of some calibrations by the use of on-line monitoring is not considered contrary to the positions taken with respect to RTT elimination. In particular, the elimination of RTT based on the performance of periodic calibrations and the further extension of periodic calibrations by the use of on-line monitoring are not considered mutually exclusive. The rationale for this position with respect to the implementation of on-line monitoring is as follows:

  • With respect to on-line monitoring, no changes are proposed to the periodic calibration of rack-mounted devices associated with the trip function of safety-related channels. This includes summators, comparators, bistables, function generators, power supplies, relay modules, multiplier/dividers, mV/I amplifiers, and other associated devices.
  • Only field-mounted transmitters (sensors) are potentially affected by the combined effect of RTT elimination and calibration extension at an extended frequency. Calibrations of these transmitters will continue to be performed, but potentially at an extended frequency.
  • On-line monitoring has been accepted as a method of verifying the calibration of installed instrumentation. The application of on-line monitoring as a calibration assessment tool actually involves more frequent checking of the calibrated state of the channel. Out-of-calibration conditions will be identified more quickly than by traditional time-directed calibrations.
  • With respect to on-line monitoring, no changes are proposed to periodic channel checks.

However, the identification of out-of-calibration conditions can occur sooner than standard operator-performed channel checks because of the improved fault detection capability provided by on-line monitoring.

8.2 Historical Instrument Performance

When on-line monitoring is implemented, historical instrument performance is an important consideration for the following reasons:

  • Instrument channels that have historically performed relatively poorly, for example, by being frequently out of calibration when checked, will probably not exhibit improved performance when checked by on-line monitoring. Instead, these channels might require more frequent calibration because on-line monitoring will identify the presence of drift at levels generally not detectable by traditional channel checks. The likely result should be understood before starting on-line monitoring.
  • Section 6.6 describes the ongoing calibration monitoring program and explains why it is recommended as part of the implementation of on-line monitoring. Evaluating the future performance of instrument channels is better executed if the past performance has also been evaluated. By establishing the historical instrument channel performance before the implementation of on-line monitoring, a baseline will be defined against which future performance can be compared.


Guidelines for Instrument Calibration Extension/Reduction-Revision 1: Statistical Analysis of Instrument Calibration Data [29] provides one method by which historical performance can be evaluated. This type of as-found and as-left calibration data has often already been evaluated at many plants for other purposes, such as two-year fuel cycle extensions or drift magnitude checks in support of setpoint calculations.
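As a simple illustration of this kind of statistical evaluation, the sketch below summarizes historical as-found minus as-left drift values and extrapolates a bounding drift estimate to a longer interval. The data, the square-root-of-time extrapolation, and the drift allowance are illustrative assumptions and do not reproduce the method in [29].

```python
import math
import statistics

# Hypothetical as-found minus as-left values (% span) from past calibrations
# of one channel, each taken after roughly an 18-month interval.
drift_per_interval = [0.12, -0.05, 0.08, -0.02, 0.15, 0.01, -0.09, 0.06]

mean_drift = statistics.mean(drift_per_interval)
std_drift = statistics.stdev(drift_per_interval)

# Extrapolate the drift standard deviation from one fuel cycle to n cycles,
# assuming (illustratively) that drift accumulates like a random walk.
n_cycles = 4
extrapolated_std = std_drift * math.sqrt(n_cycles)

# Compare a mean + 2-sigma bound against a hypothetical drift allowance.
drift_allowance = 1.0  # % span, assumed value
bound = abs(mean_drift) + 2 * extrapolated_std
print(f"mean drift per cycle: {mean_drift:+.3f}% span")
print(f"2-sigma bound over {n_cycles} cycles: {bound:.3f}% span")
print("within assumed drift allowance" if bound <= drift_allowance else "exceeds assumed allowance")
```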

8.3 Common-Mode Failures or Common Bias Effects

Industry experience continues to demonstrate that common-mode failure of redundant channels is an unlikely event. Instrument Calibration and Monitoring Program, Volume 2: Failure Mode and Effects Analysis [10] performed a failure mode and effects analysis (FMEA) for typical sensors and provides a basis for the absence of common-mode sensor bias effects beyond those already included in the setpoint analysis. The EPRI drift study documented in On-Line Monitoring of Instrument Channel Performance [3] also confirmed that drift tends to occur randomly about a zero-referenced mean with little or no tendency for systematic drift in a preferential direction.

This experience continues to be confirmed with the EPRI on-line monitoring implementation project. With dozens of models evaluating hundreds of sensors over a two-year period, occasional instrument drift has been observed, but common-mode failures or common bias effects have not been observed.

Other bias effects that influence the sensor output while the plant is operating might exist, but might disappear completely as the plant shuts down. An example of this might be fluid density changes on a level measurement-as the plant heats up, water density decreases, thereby causing lower differential pressure on level transmitters. These types of predictable bias effects should have no significant impact on on-line monitoring because they are already accounted for in the plant's setpoint analysis. Furthermore, such bias effects, if they exist, would have the same impact on channel performance with current calibration practices.


9

REFERENCES

1. On-Line Monitoring of Instrument Channel Performance. EPRI, Palo Alto, CA: 1998. TR-104965.
2. U.S. Nuclear Regulatory Commission. EPRI Topical Report (TR) 104965. On-Line Monitoring of Instrument Channel Performance. Project No. 669. July 2000.
3. On-Line Monitoring of Instrument Channel Performance. EPRI, Palo Alto, CA: 2000. 1000604.
4. On-Line Monitoring of Instrument Channel Performance, Volume 1: Guidelines for Model Development and Implementation. EPRI, Palo Alto, CA: 2004. 1003361.
5. On-Line Monitoring of Instrument Channel Performance, Volume 2: Algorithm Description, Model Examples, and Results. EPRI, Palo Alto, CA: 2004. 1003579.
6. Expert Microsystems Inc., 7932 Country Trail Drive, Suite 1, Orangevale, CA, 95662.
7. SureSense Diagnostic Monitoring Studio Users Guide, Version 2.0, Expert Microsystems, Inc., Orangevale, CA: 2004.
8. On-Line Monitoring Cost-Benefit Guide. EPRI, Palo Alto, CA: 2003. 1006777.
9. Instrument Calibration and Monitoring Program: Volume 1: Basis for the Method. EPRI, Palo Alto, CA: 1993. TR-103436-V1.
10. Instrument Calibration and Monitoring Program: Volume 2: Failure Mode and Effects Analysis. EPRI, Palo Alto, CA: 1993. TR-103436-V2.
11. Monte Carlo Simulation and Uncertainty Analysis of the Instrument Calibration and Monitoring Program. EPRI W03785-02, November 1995.
12. EPRI Interim Report, SureSense Diagnostic Monitoring Studio Users Guide, Version 1.4. June 2002.
13. SmartSignal Corp., 901 Warrenville Rd, Suite 300, Lisle, Illinois, 60532.
14. Quality Assurance Documentation Package for the Multivariate State Estimation Technique Software System (MSET) Software System Kernel, Revision 0, July 25, 2002.
15. Software Quality Assurance Program, Revision 0, January 23, 2002.
16. Software Configuration Control Procedure for the Multivariate State Estimation Technique Software System (MSET), Revision 1, November 26, 2001.
17. Software Requirements and Specifications for the Multivariate State Estimation Technique Software System (MSET), Revision 2, May 21, 2002.


18. Software Verification and Validation Plan for the Multivariate State Estimation Technique Software System (MSET), Revision 1, February 15, 2002.
19. Software Test Plan for the Multivariate State Estimation Technique Software System (MSET), Revision 0, June 25, 2002.
20. Verification of the MSET Base Code Version 3.5, July 19, 2002.
21. MSET Kernel Validation Report, Version 3.5 of the MSET Base Code, July 17, 2002.
22. Expert Microsystems Software Configuration Management Plan, Document No. 2001-4471, Rev. 1.0.
23. Expert Microsystems Software Quality Assurance Plan, Document No. 2002-4470, Rev. 1.0.
24. SureSense Diagnostic Monitoring Studio Software Requirements Specification, Document No. 2001-4489, Rev. 1.4.
25. SureSense Diagnostic Monitoring Studio Software Requirements Specification, Document No. 2001-4489, Rev. 2.0.3.
26. SureSense Diagnostic Monitoring Studio Software Verification Test Plan, Document No. 2001-4479, Rev. 1.4.
27. SureSense Diagnostic Monitoring Studio Software Verification Test Plan, Document No. 2001-4479, Rev. 2.0.3.
28. SureSense Diagnostic Monitoring Studio Verification and Validation Report, Version 2.03, Expert Microsystems, Inc., Orangevale, CA: October 2004.
29. Guidelines for Instrument Calibration Extension/Reduction-Revision 1: Statistical Analysis of Instrument Calibration Data. EPRI, Palo Alto, CA: 1998. TR-103335-R1.


A

REDUNDANT INSTRUMENT CHANNEL MONITORING TECHNIQUES AND UNCERTAINTY ANALYSIS

A.1 Redundant Methods Overview

The general framework of redundant instrument channel monitoring compares each channel's measurement with a calculated estimate based on the measurements of all of the redundant channels. An assessment of a specific channel's calibration status is made based on its deviation from the calculated parameter estimate. For redundant monitoring methods that do not compute a single parameter estimate, but rather compute estimations for all redundant channels in an autoassociative fashion, calibration status can be assessed by monitoring pairwise deviations between the estimations, or deviations between the estimations and the measurements.

For a redundant instrument channel monitoring system that is not based on a pattern recognition algorithm, several factors contribute to uncertainty in the parameter estimate: the accuracy of the redundant channels, the nature of the observed drift, the monitoring algorithm itself (including its ability to exclude outlying measurements from influencing the parameter estimate), and the number of redundant channels. For pattern recognition techniques, additional uncertainty factors are the accuracy of the measurements used to develop the system model (i.e., training data) and how well the training set represents the plant operating modes.

Sources of channel uncertainty are instrument accuracy, calibration effects, and drift. Accuracy includes all effects associated directly with a given channel, including: repeatability, hysteresis, linearity, and deadband settings. Calibration effects are introduced into an instrument channel during the calibration process, and drift is a shift in the device output over time due to the characteristics of the specific device.

In this appendix, a brief overview of several types of redundant sensor monitoring techniques is provided. In addition, a preliminary mathematical investigation is presented describing an uncertainty analysis of a simple averaging technique for redundant instrument channels. This uncertainty analysis is based on equations derived for the variance of a computed average parameter estimate from instrument channels resulting in Gaussian residuals.


A.2 Types of Redundant Instrument Channel Monitoring Techniques

There are a variety of algorithms and techniques that can be applied for monitoring a set of redundant instrument channels. The most common are provided below:

1. Parity Space methods
2. Instrumentation and Calibration Monitoring Program (ICMP)*
3. Principal Component based methods
4. Fuzzy logic methods
5. Redundant Sensor Estimation Technique (RSET)

* ICMP is a type of parity space method

A.2.1 Parity Space Methods

The parity space approach is a very popular fault detection and isolation technique for use with redundant sensors. The parity space technique models the redundant measurements as the true value combined with a measurement error term. Measurement inconsistencies are obtained by eliminating the underlying measured quantity from the data by creating linearly independent relationships between the measurements, known as the parity equations. The magnitude of the parity vector represents the inconsistency between the redundant measurements, and the direction of the parity vector indicates the faulty signal. The parity space method is a two-stage process: (1) residual generation and (2) decision making; a minimal numerical sketch of these two stages follows.
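The sketch below illustrates the two stages with a single scalar process measured by redundant channels. The measurement model, detection threshold, and example values are illustrative assumptions rather than any particular plant implementation.

```python
import numpy as np

def parity_matrix(m):
    """Orthonormal rows spanning the left null space of H = ones((m, 1)), so V @ H = 0."""
    H = np.ones((m, 1))
    U, _, _ = np.linalg.svd(H)
    return U[:, 1:].T            # shape (m-1, m)

def parity_check(y, threshold=0.5):
    """Stage 1: residual (parity vector) generation; Stage 2: decision making."""
    m = len(y)
    V = parity_matrix(m)
    p = V @ y                    # parity vector: inconsistency among redundant channels
    if np.linalg.norm(p) <= threshold:
        return None              # measurements are mutually consistent
    # Isolate the fault: the channel whose signature direction V[:, j] best aligns with p.
    scores = [abs(p @ V[:, j]) / np.linalg.norm(V[:, j]) for j in range(m)]
    return int(np.argmax(scores))

if __name__ == "__main__":
    true_value = 500.0                                    # hypothetical process value
    y = true_value + np.array([0.1, -0.2, 0.05, 2.0])     # channel 3 carries a 2-unit fault
    print("faulty channel index:", parity_check(y))       # expect 3
```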

A.2.2 ICMP

The ICMP algorithm produces a parameter estimate using a weighted average. Weighting is performed based on the consistency values assigned to each redundant channel. These consistency values are updated upon each new observation. At the limits, if all measurements are declared consistent, the parameter estimate is the simple average. If all measurements are declared inconsistent, no parameter estimate is produced. After obtaining a parameter estimate, individual deviations between each channel measurement and the estimate are computed and compared against specified acceptance criteria. Deviations exceeding the acceptance criteria indicate a faulty sensor. An uncertainty analysis has been performed for ICMP and is provided in Appendix D.
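The following sketch conveys the general flavor of a consistency-weighted average; it is not the ICMP algorithm as documented in [5] and Appendix D. The pairwise consistency test, band width, and acceptance criterion are simplified assumptions.

```python
import numpy as np

def consistency_weighted_estimate(measurements, consistency_band=0.5):
    """Weight each channel by how many redundant channels agree with it within a band.

    Simplified illustration of a consistency-weighted average; not the documented
    ICMP algorithm.  Returns None when no channels are mutually consistent.
    """
    m = np.asarray(measurements, dtype=float)
    # Consistency value: count of other channels within the band of this channel.
    weights = np.array([np.sum(np.abs(m - x) <= consistency_band) - 1 for x in m], dtype=float)
    if weights.sum() == 0:
        return None, weights
    return float(np.average(m, weights=weights)), weights

def check_channels(measurements, acceptance_criterion=0.4):
    """Compare each channel's deviation from the estimate against an acceptance criterion."""
    estimate, _ = consistency_weighted_estimate(measurements)
    if estimate is None:
        return None, []
    deviations = np.abs(np.asarray(measurements) - estimate)
    suspect = list(np.where(deviations > acceptance_criterion)[0])
    return estimate, suspect

if __name__ == "__main__":
    channels = [1200.1, 1199.8, 1200.3, 1197.5]   # hypothetical readings; channel 3 reads low
    estimate, suspect = check_channels(channels)
    print("parameter estimate:", estimate, "suspect channels:", suspect)
```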

A.2.3 Principal Component Based Methods

The basic PCA approach linearly transforms a data matrix to a new set of orthogonal principal components. The transformation occurs such that the direction of the first principal component is determined to capture the maximum variation of the original data set. The variance of subsequent principal components is the maximum available in an orthogonal direction to all previously determined principal components. The full set of principal components is an exact copy of the original data set, though the axes have been rotated. Selecting a reduced subset of components results in a reduced dimension structure with the majority of the information available, where information is assumed to be equivalent to variance. Usually, small variance components that are not retained are assumed to contain useless information such as process or measurement noise.

There are a variety of ways the principal components can be exploited to monitor changes in an on-line monitoring approach. The statistics of the principal components themselves, or the relationships between principal components, can be monitored for changes. The principal components can be used to produce estimates of the sensors via Principal Component Regression (PCR). Other approaches have used principal components for residual generation within the framework of the traditional parity space approach.
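A minimal sketch of one such use, reconstructing redundant channel measurements from a reduced set of principal components and monitoring the reconstruction residuals, is shown below. The number of retained components, the simulated data, and the interpretation of the residuals are illustrative assumptions.

```python
import numpy as np

def pca_reconstruct(X, n_components=1):
    """Project mean-centered data onto the leading principal components and reconstruct it."""
    mean = X.mean(axis=0)
    Xc = X - mean
    # Principal directions from the SVD of the mean-centered data matrix.
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    P = Vt[:n_components]                 # retained principal directions
    return (Xc @ P.T) @ P + mean          # reduced-dimension reconstruction

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    n_samples, n_channels = 500, 4
    process = 300.0 + np.cumsum(rng.normal(0, 0.1, n_samples))     # shared process variation
    X = process[:, None] + rng.normal(0, 0.2, (n_samples, n_channels))
    X[300:, 2] += 1.5                                               # slow bias on channel 2

    residuals = X - pca_reconstruct(X, n_components=1)
    print("mean |residual| per channel:", np.round(np.abs(residuals).mean(axis=0), 3))
    # The biased channel shows a noticeably larger residual than the others.
```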

A.2.4 Fuzzy Logic Methods

Reports utilizing fuzzy logic redundant sensor validation schemes highlight the abilities of the fuzzy logic algorithm to eliminate the hard boundary between declaring a signal as failed or valid. Hard boundaries are susceptible to false alarms due to spurious spikes in the measured signals. The use of a wide tolerance band is the general approach to prevent false alarms, but at the same time increases the probability of missed alarms. The motivation for applying fuzzy methodologies to the tasks of signal validation is to provide an alternative to the crisp decision threshold of the more traditional parity space approach. In addition, since signal validation employs both numbers and qualitative statements, fuzzy logic provides a pathway for transforming human abstractions into the numerical domain and thus coupling both sources of information. This transformation will allow linguistically expressed analysis principles to be coded into a classification rule-base for signal failure detection and identification.

Overall, the fuzzy logic approach does not seem to present any significant advantages over the other methods reported. It cannot be ruled out as a viable method for redundant channel monitoring; however, considering the overwhelming base of knowledge available regarding signal validation and the small percentage of it utilizing fuzzy logic, it seems more of a niche application.

The benefits of the fuzzy logic approach regarding the elimination of crisp decision boundaries for the fault determination are not necessarily unique to the approach. This concern has been addressed (SPRT, multi-level testing), and is eliminated in most current signal validation methodologies.

A.2.5 Redundant Sensor Estimation Technique

The University of Tennessee, under EPRI sponsorship, developed the Redundant Sensor Estimation Technique (RSET), an Independent Component Analysis (ICA) based technique for redundant sensor monitoring. The initial results show that RSET captured the essential information in the redundant measurements. RSET can detect faulty sensors and can be applied as a pre-processor to reduce the noise in sensor measurements. RSET is a new and effective approach for redundant sensor validation.


The core algorithm is based on Independent Component Analysis (ICA). ICA is a statistical model in which the observed data are expressed as a linear transformation of latent variables ('independent components') that are non-Gaussian and mutually independent. The ICA method is able to reduce the redundancy of the original dataset in order to predict the process parameter more accurately. ICA prediction is very robust in that faulty sensors do not adversely affect good sensors (robust to spillover).

Further information on RSET can be found in Inferential modeling and independent component analysis for redundant sensor validation, Jun Ding, MS Thesis, The University of Tennessee.
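As a loose illustration of applying ICA to redundant channel data (not the RSET algorithm itself), the sketch below uses scikit-learn's FastICA to express four simulated channels in terms of a few independent components and reconstruct denoised channel estimates. The data, the number of components, and the choice of FastICA are assumptions for illustration.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(2)
n_samples, n_channels = 1000, 4

# Hypothetical redundant channels: a shared, slowly varying process plus channel noise.
process = 2500.0 + 5.0 * np.sin(np.linspace(0, 6 * np.pi, n_samples))
X = process[:, None] + rng.normal(0.0, 0.5, (n_samples, n_channels))

# Express the observations as a linear mixture of a few independent components,
# then reconstruct the channels from those components (a denoised estimate).
ica = FastICA(n_components=2, random_state=0)
S = ica.fit_transform(X)          # estimated independent components
X_denoised = ica.inverse_transform(S)

residual_std = (X - X_denoised).std(axis=0)
print("per-channel residual std after ICA reconstruction:", np.round(residual_std, 3))
```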

A.3 Analysis of the Mean, Variance, and Standard Deviation of the Average of n Normal Random Variables

This analysis reveals the properties of the simple average of a set of ideal redundant signals.

Ideal implies that the measurements of the redundant channels are constant valued with an associated Gaussian noise component. An ideal channel can be represented by:

$$M(t) = C + N(0, \sigma^2)$$

where $M(t)$ is the measurement of the channel at time $t$, $C$ is a constant representing the appropriate value of the monitored parameter, and $N(0, \sigma^2)$ is a normal random variable with a known mean value of zero and an unknown variance. Additional redundant channels will have analogous equations with the same constant $C$, differing only in the random noise component. Note that this equation, though written as a function of time, is in fact time-invariant. The time parameter was included to indicate that, due to the noise component, $M(t) \neq M(t+1)$.

If a set of n channel measurements were mean centered, and the channels were ideal, their average would be the same as the average of n independent normal random variables. These considerations assume that the redundant channels exhibit Gaussian noise components, and are independent.

The purpose of this analysis is to learn about the statistical nature of the simple average of a set of normal random variables, which in the ideal case can be directly equated to the simple average of a set of redundant instrument channels. The results provide exact indications of the variance, or standard deviation, reduction performed by the simple average. It is intended that the results will allow insight into the techniques of combining a set of redundant channels to produce a single parameter estimate. Though the ideal situation will not occur in practice, this analysis produces a point of reference against which the various techniques for redundant instrument channel monitoring can be compared regarding their variance reduction properties.


A.3.1 Derivation of the Mean and Standard Deviation of the Average of a Set of n Normal Random Variables

The density function of the normal distribution is given by:

$$f(x) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp\!\left[-\frac{(x-\mu)^2}{2\sigma^2}\right], \quad -\infty < x < \infty, \; \sigma > 0$$

Define $n$ independent random variables $X_1, X_2, \ldots, X_n$, where $X_i \sim N(\mu_i, \sigma_i^2)$.

A useful tool for determining the moments of a distribution is the moment generating function, which for the continuous case can be defined as:

$$M(t) = E(e^{tX}) = \int e^{tx} f(x)\,dx$$

To calculate expectations using the moment generating function:

$$E(X^r) = \frac{d^r}{dt^r} M(t)\Big|_{t=0}$$

For example, the mean of the normal density function can be found as follows. For $X \sim N(\mu, \sigma^2)$:

$$M(t) = E(e^{tX}) = \int e^{tx} f(x)\,dx = \exp\!\left[\mu t + \frac{\sigma^2 t^2}{2}\right]$$

$$\frac{d}{dt} M(t) = \exp\!\left[\mu t + \frac{\sigma^2 t^2}{2}\right]\left(\mu + t\sigma^2\right)$$

$$E(X) = \frac{d}{dt} M(t)\Big|_{t=0} = \mu$$

This is the expected result. In the same way, the second moment can be determined, and the variance of the normal density can be acquired:

$$\mathrm{Var}(X) = \sigma^2 = E(X^2) - [E(X)]^2$$

A useful property of the moment generating function: if $X$ has the moment generating function $M_X(t)$, and $W = a + bX$, then $W$ has the moment generating function $M_W(t) = e^{at} M_X(bt)$.

Considering the task of finding the moments of the average of several independent random variables:

$$Z = \frac{X_1}{n} + \frac{X_2}{n} + \cdots + \frac{X_n}{n}$$

Define $Y_i = \frac{X_i}{n}$, and rewrite as $Y_i = a + bX_i$, where $a = 0$ and $b = \frac{1}{n}$, such that we can find the moment generating function for $Y_i$:

$$M_{Y_i}(t) = M_{X_i}\!\left(\frac{t}{n}\right)$$

To determine the moment generating function of the average, $Z$, we need one final property: if $X_1$ and $X_2$ are independent random variables with moment generating functions $M_{X_1}(t)$ and $M_{X_2}(t)$, and $Z = X_1 + X_2$, then $M_Z(t) = M_{X_1}(t) \cdot M_{X_2}(t)$. By induction, this property can be extended to the sum of several independent random variables.

Recall the defined $n$ independent random variables $X_1, X_2, \ldots, X_n$, where $X_i \sim N(\mu_i, \sigma_i^2)$. Also recall the corresponding set $Y_1, Y_2, \ldots, Y_n$, where $Y_i = \frac{X_i}{n}$. Using the moment generating function obtained for $Y_i$, $M_{Y_i}(t) = M_{X_i}(t/n)$, leads to:

$$M_{Y_i}(t) = \exp\!\left[\frac{\mu_i t}{n} + \frac{\sigma_i^2 t^2}{2n^2}\right]$$

The average of the random variables can be rewritten as:

$$Z = Y_1 + Y_2 + \cdots + Y_n$$

Using the properties above, we can now write the moment generating function of the simple average, $Z$, as:

$$M_Z(t) = \prod_{i=1}^{n} \exp\!\left[\frac{\mu_i t}{n} + \frac{\sigma_i^2 t^2}{2n^2}\right]$$

The moments of the average can be obtained from:

$$E(Z^r) = \frac{d^r}{dt^r} M_Z(t)\Big|_{t=0}$$

This method was used to obtain the results shown in Table A-1.

Table A-1
Equations for the Mean, Variance, and Standard Deviation of the Average of n Normal Random Variables

n = 1: $E(Z) = \mu_1$; $E(Z^2) = \mu_1^2 + \sigma_1^2$; $\mathrm{Var}(Z) = \sigma_1^2$; $\mathrm{std}(Z) = \sigma_1$

n = 2: $E(Z) = \frac{\mu_1+\mu_2}{2}$; $E(Z^2) = \left(\frac{\mu_1+\mu_2}{2}\right)^2 + \frac{\sigma_1^2+\sigma_2^2}{4}$; $\mathrm{Var}(Z) = \frac{\sigma_1^2+\sigma_2^2}{4}$; $\mathrm{std}(Z) = \frac{\sqrt{\sigma_1^2+\sigma_2^2}}{2}$

n = 3: $E(Z) = \frac{\mu_1+\mu_2+\mu_3}{3}$; $E(Z^2) = \left(\frac{\mu_1+\mu_2+\mu_3}{3}\right)^2 + \frac{\sigma_1^2+\sigma_2^2+\sigma_3^2}{9}$; $\mathrm{Var}(Z) = \frac{\sigma_1^2+\sigma_2^2+\sigma_3^2}{9}$; $\mathrm{std}(Z) = \frac{\sqrt{\sigma_1^2+\sigma_2^2+\sigma_3^2}}{3}$

General n: $E(Z) = \frac{\mu_1+\cdots+\mu_n}{n}$; $E(Z^2) = \left(\frac{\mu_1+\cdots+\mu_n}{n}\right)^2 + \frac{\sigma_1^2+\cdots+\sigma_n^2}{n^2}$; $\mathrm{Var}(Z) = \frac{\sigma_1^2+\cdots+\sigma_n^2}{n^2}$; $\mathrm{std}(Z) = \frac{\sqrt{\sigma_1^2+\cdots+\sigma_n^2}}{n}$

where in all cases $\mathrm{Var}(Z) = E(Z^2) - [E(Z)]^2$ and $\mathrm{std}(Z) = \sqrt{\mathrm{Var}(Z)}$.

The analytical solutions provided in Table A-1 are general in that the means and standard deviations of the n normal random variables are not required to be the same for all channels. If in fact the distributions of the noise components are the same for a set of redundant channels, then:

$$\text{If } \mu_1 = \mu_2 = \cdots = \mu_n = \mu \text{ and } \sigma_1 = \sigma_2 = \cdots = \sigma_n = \sigma, \text{ then } \mu_Z = \mu, \quad \sigma_Z^2 = \frac{\sigma^2}{n}, \quad \sigma_Z = \frac{\sigma}{\sqrt{n}}$$

Therefore, if n normal random variables having the same mean and standard deviation are averaged, the mean of the average is unchanged, the variance of the average is less than the variance of the random variables by a factor of $\frac{1}{n}$, and the standard deviation of the average is less than the standard deviation of the random variables by a factor of $\frac{1}{\sqrt{n}}$.
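The $1/\sqrt{n}$ reduction derived above can be verified numerically with a short Monte Carlo sketch; the channel parameters and sample size below are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_channels, n_trials = 4, 200_000
mu, sigma = 0.0, 1.0

# n_trials realizations of n_channels identically distributed normal channels.
samples = rng.normal(mu, sigma, size=(n_trials, n_channels))
averages = samples.mean(axis=1)

print("std of a single channel :", samples[:, 0].std())     # ~ sigma
print("std of the n-channel avg:", averages.std())          # ~ sigma / sqrt(n)
print("predicted sigma/sqrt(n) :", sigma / np.sqrt(n_channels))
```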

B

NRC SAFETY EVALUATION

The following is a complete copy of the NRC Safety Evaluation for on-line monitoring.

July 24, 2000

Mr. Gary L. Vine
Senior Washington Representative
Electric Power Research Institute
2000 L Street, N.W., Suite 805
Washington, DC 20036

SUBJECT:

EPRI Topical Report (TR) 104965, "On-Line Monitoring of Instrument Channel Performance," Final Report, November 1998 (TAC No. M93653)

Dear Mr. Vine:

The staff has completed its review of Topical Report (TR) 104965, "On-Line Monitoring of Instrument Channel Performance," dated November 1999. The staff's safety evaluation (SE) is enclosed. As agreed during a meeting with representatives of the Electric Power Research Institute (EPRI) on February 16 and 17, 2000, staff review of the subject topical report focused on the generic application of the on-line monitoring technique to be used as a tool for assessing instrument performance. The two algorithms included in the topical report were not in the scope of the staff review. The staff has determined that selection of the most suitable algorithm and associated software for calculating and analyzing data obtained during on-line monitoring should be left to the user.

The topical report proposes to relax the frequency of instrument calibrations required by the technical specifications (TS) from once every fuel cycle to once in a maximum of eight years, based on the on-line monitoring results. Implementation of the on-line monitoring technique to relax the TS-required calibration frequency will require a license amendment. The staff determined that suggested TS changes in the topical report are incomplete and require further evaluation for determining an acceptable generic model that can be used in plant-specific TS requirements. During the February 16 and 17, 2000, meeting with the staff, EPRI agreed that once the technical issues relating to generic concept of the on-line monitoring technique were resolved and the final SE was issued, EPRI would work with the NRC staff and the NEI Technical Specification Task Force (TSTF) to develop an appropriate TS structure and TS requirements consistent with the technical requirements described in the final SE. The enclosed SE resolves all technical issues as agreed upon during the February 16 and 17, 2000, meeting.

Pursuant to 10 CFR 2.790, we have determined that the enclosed SE does not contain proprietary information. However, we will delay placing the SE in the public document room for a period of

ten (10) working days from the date of this letter to provide you with the opportunity to comment on the proprietary aspects only. If you believe that any information in the enclosure is proprietary, please identify such information line by line and define the basis pursuant to the criteria of 10 CFR 2.790.

We do not intend to repeat our review of the matters described in the report, and found acceptable, when the report appears as a reference in license applications, except to assure that the material presented is applicable to the specific plant involved. Our acceptance applies only to matters described in the report.

In accordance with procedures established in NUREG-0390, "Topical Report Review Status," we request that EPRI publish an accepted version of the topical report within 3 months of receipt of this letter. The accepted version shall incorporate this letter and the enclosed SE between the title page and the abstract. It must be well indexed such that information is readily located. Also, it must contain in appendices historical review information, such as questions and accepted responses, and original report pages that were replaced. The accepted version shall include an "-A" (designating accepted) following the report identification symbol.

Should our criteria or regulations change so that our conclusions as to the acceptability of the report is invalid, EPRI and/or the applicants referencing the topical report will be expected to revise and resubmit their respective documentation, or submit justification for the continued applicability of the topical report without revision of their respective documentation.

Sincerely,

/RA by Stephen Dembek for/

Stuart A. Richards, Director
Project Directorate IV & Decommissioning
Division of Licensing Project Management
Office of Nuclear Reactor Regulation

Project No. 669

Enclosure:

Safety Evaluation

cc w/encl:

Mr. James Lang
Director
EPRI
1300 W.T. Harris Boulevard
Charlotte, NC 28262

Dr. Theodore U. Marston
Vice President and Chief Nuclear Officer
EPRI
3412 Hillview Avenue
Palo Alto, CA 94304

SAFETY EVALUATION BY THE OFFICE OF NUCLEAR REACTOR REGULATION

APPLICATION OF ON-LINE PERFORMANCE MONITORING TO EXTEND CALIBRATION INTERVALS OF INSTRUMENT CHANNEL CALIBRATIONS REQUIRED BY THE TECHNICAL SPECIFICATIONS

EPRI TOPICAL REPORT (TR) 104965 "ON-LINE MONITORING OF INSTRUMENT CHANNEL PERFORMANCE"

1.0 INTRODUCTION

In a letter dated September 28, 1998, the Electric Power Research Institute (EPRI) submitted Topical Report (TR) 104965, "On-Line Monitoring of Instrument Channel Performance" for NRC review and approval. EPRI presented an overview of the topical report on August 31, 1999, and participated in a meeting with the staff on February 16 and 17, 2000, to discuss EPRI's comments on the staff's draft safety evaluation (SE) dated December 13, 1999. The meeting summary is available under ADAMS No. ML003690488. By letter dated March 23, 2000, EPRI submitted its comments on the staff's draft SE.

The topical report proposes a new generic approach for monitoring instrument calibrations during normal plant operation by using an on-line monitoring technique with a calibrate-as-required approach. The report proposes to allow commercial nuclear power plant licensees to use the on-line monitoring as a calibration assessment tool to extend calibration intervals of instrument channel calibrations that are required by the technical specifications (TS).

TR 104965 demonstrates an on-line monitoring technique for obtaining real-time instrument performance data in a non-intrusive manner and incorporating these data with field calibration results to verify whether the monitored instrument channel's performance is within acceptable limits. This technique can help eliminate unnecessary field calibrations, reduce associated labor costs, limit personnel radiation exposures, and limit potential for miscalibration.

The proposed system will not be connected to the plant instrumentation permanently. It will only be temporarily connected to collect instrument data in a batch mode and be disconnected when no longer required. The collected data will later be analyzed by a separate computer to assess instrument performance and operability. Thus, in this mode of application, the on-line monitoring system will be used as measuring and test equipment (M&TE) to monitor calibration and operational status of safety-related instruments.


2.0 SYSTEM DESCRIPTION

On-line monitoring of instrument channel performance involves monitoring the steady-state output of each channel and evaluating the monitored value to determine whether the channel is operating outside its acceptable limits. During evaluation, the monitored value is compared to a calculated value of the process variable to assess deviation. The calculated value is defined as the "process parameter estimate," which represents the instantaneous value of the process at the monitored operating point.

The data acquisition from instrument channels and evaluation of the data could be performed continuously or at discrete intervals, either manually or automatically using microprocessor-based equipment. In the topical report, the proposal to use the on-line monitoring technique as a calibration extension tool is based, in part, on quarterly evaluations (at discrete intervals) using automatic means.

The topical report describes two algorithms that can be independently used to calculate and analyze instrument data obtained during on-line monitoring. The staff did not review either of the algorithms described in the topical report because the staff determined that selection of the most suitable algorithm and software should be left to the user. The staff reviewed only the acceptability of the concept of on-line monitoring as a tool to assess instrument performance and calibration status. The two algorithms are summarized below:

1. Instrument Calibration Monitoring Program: The Instrument Calibration Monitoring Program (ICMP) algorithm developed by EPRI compares redundant channels to determine whether one or more channels are operating beyond their specified limits. The ICMP's ability to detect potentially degraded instruments is based on an algorithm that preferentially discriminates against outlying measurements from a set of redundant instruments. The ICMP algorithm calculates the value of the process parameter estimate by averaging channels that are considered to be within expected specifications. Outlier channels are not used for averaging calculations. The monitored value of each instrument channel is compared to the calculated value of the process parameter estimate to determine the channel's performance and its calibration status. (A simplified sketch of this averaging scheme appears after this list.)
2. Multivariate State Estimation Technique: The Multivariate State Estimation Technique (MSET) is a software-based system that uses empirical, statistically based pattern recognition modules that interact and operate to provide the user with information needed for the safe, reliable, and economical operation of a process by detecting, locating, and identifying subtle changes that could lead to future problems well in advance of significant degradation. Modeling is based entirely on data collected during training. During monitoring, instrument data are read by MSET, an estimate of the current state of the process is determined by comparing the measured sensor data with those obtained during training, and the difference between this estimate and the measurement is calculated. The difference, or estimate error, is then analyzed by a statistically based hypothesis test (the sequential probability ratio test or SPRT) that determines whether the process is operating normally or abnormally. If an abnormal condition is detected, the initial diagnostic step identifies the cause as either a sensor degradation or an operational change in the process. MSET is a highly sensitive and accurate tool for on-line monitoring of any process and could be used for single-channel monitoring. MSET can detect and identify malfunctions that might occur in process sensors, components, or control systems, as well as changes in process operational conditions.
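The averaging and outlier screening attributed to ICMP in item 1 above can be illustrated with a short sketch. The following Python fragment is a simplified, hypothetical rendering for illustration only; it is not the EPRI ICMP implementation, and the function name, the fixed consistency band, and the two-pass screening shown here are assumptions introduced for this example.

    from statistics import mean

    def icmp_estimate(readings, consistency_band):
        """Illustrative two-pass estimate of a process parameter from redundant channels.

        readings: simultaneous values from redundant channels (process units).
        consistency_band: half-width used to screen outlying channels (assumed value).
        Returns (process_parameter_estimate, deviations, outlier_flags).
        """
        # Pass 1: average all redundant channels to obtain a first estimate.
        first_estimate = mean(readings)

        # Pass 2: drop channels outside the consistency band and recompute the
        # estimate from the remaining (consistent) channels only.
        consistent = [r for r in readings if abs(r - first_estimate) <= consistency_band]
        estimate = mean(consistent) if consistent else first_estimate

        # Each channel's deviation from the estimate indicates its performance.
        deviations = [r - estimate for r in readings]
        outliers = [abs(d) > consistency_band for d in deviations]
        return estimate, deviations, outliers

    # Hypothetical example: four redundant pressure channels, one drifting high (psig).
    readings = [2235.1, 2234.8, 2235.4, 2241.9]
    estimate, deviations, outliers = icmp_estimate(readings, consistency_band=3.0)
    print(estimate)    # estimate computed from the three consistent channels
    print(outliers)    # [False, False, False, True] -- the fourth channel is flagged

In an actual application, the consistency criterion, any channel weighting, and the treatment of excluded channels would follow the licensee's implemented algorithm and its quality-assured software, not this sketch.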

The typical on-line monitoring implementation uses the following building blocks:

  • Separate off-line computer hardware on which the system resides.
  • Communications hardware and software to electronically obtain data from the plant process computer or other source, if data are acquired automatically. Data can also be acquired manually using appropriate test equipment.
  • The on-line monitoring software, which archives, analyzes and displays the data interactively in graphs and reports.

On-line monitoring collects data from instrument channels, typically via connection to the plant computer for an automated system or at the isolator output or appropriate test point for manual data acquisition. The collected data are processed mathematically by a dedicated off-line, microprocessor-based, data acquisition and processing system. Different on-line monitoring implementations exist on microcomputer platforms, and data are input from the plant to these systems via modem, electronic media, or manual input. Output capabilities typically include graphical display of the individual instrument channel deviation from the process parameter estimate as a function of time. Some automated systems are network operable and allow multiple access to the monitoring information and results.

Regardless of the algorithm employed, the on-line monitoring technique evaluates the deviation of an instrument with reference to its process parameter estimate to determine whether the performance exhibited by the instrument is acceptable or whether the instrument must be scheduled for calibration or the instrument channel is inoperable. In the topical report, EPRI describes the following advantages that could be realized by implementing the proposed on-line monitoring technique for assessing instrument calibration:

1. Compared to the current traditional calibration process, the on-line monitoring process is nonintrusive, more frequent, and will result in a reduced number of field calibrations.
2. The on-line process will monitor instrument performance in place on a continuous basis and will identify calibration problems as they occur. Therefore, it will be able to provide a basis for determining when adjustments are necessary. The on-line technique can detect degradation and failures as they occur, at an early stage of an instrument's installed life.
3. Elimination of unnecessary field calibrations will reduce associated labor costs, personnel radiation exposures, and the potential for miscalibration since conventional calibration frequency will be reduced.
4. On-line monitoring accounts for installation and process condition effects on calibration.

Compared to traditional calibration during a refueling outage when a plant is in the shutdown state, on-line monitoring allows evaluation of instrument performance under normal operating conditions, and thus collects data representative of effects associated with several sources of channel uncertainty, including process effects and environmental effects.



5. By reducing personnel radiation exposures, plant safety and efficiency will improve. By reducing time spent on conventional calibrations, refueling outages will be shorter, increasing plant availability.
6. Long-term trends in instrument performance developed using on-line monitoring could be used for predictive maintenance tests, will enhance instrument troubleshooting capabilities, and will provide additional resources for historical root-cause analyses and post-trip reviews.

3.0 PROPOSED CHANGE

Each parameter covered by the TS has specific surveillance requirements that are performed at various frequencies. The surveillance requirements are intended to demonstrate that the associated instrumentation is operable, and actions are specified in the event that an inoperable channel is identified. The current TS requires that all redundant safety-related instrument channels be calibrated once each refueling cycle, and this TS requirement could be termed "time-directed traditional calibration." The topical report proposes to:

1. Establish on-line monitoring as an acceptable calibration monitoring tool for assessing an instrument's performance and its calibration in-place and on-line while the plant is in normal operating mode, and
2. Based on results of using on-line monitoring to assess instrument calibration, extend calibration intervals by revising the current once per refueling cycle calibration frequency of each of the TS-related sensors to once in a maximum of eight years by implementing the following process:
a. At least one redundant sensor will be calibrated each scheduled fuel cycle. For n redundant sensors, all sensors will be calibrated at least once in every n outages. (This is the most significant difference from current calibration practices, whereby all redundant sensors, regardless of their calibration status, are calibrated each outage. With four redundant sensors for each function, all sensors will be calibrated in four fuel cycles, and this duration could be a maximum of eight years. Calibrating at least one redundant sensor each scheduled fuel cycle ensures that common-mode failure mechanisms do not exist. Also, it ensures that each sensor continues to be periodically calibrated by a method traceable back to the National Institute of Standards and Technology.)
b. In addition to calibrating at least one redundant sensor each scheduled fuel cycle, sensors that are identified as out-of-calibration by the on-line monitoring process will also be calibrated as necessary. (Thus, depending on the performance of monitored channels, anywhere from one to all of the redundant sensors might be field calibrated each refueling outage.)

The topical report proposes to relax the calibration frequency only for the sensor-transmitter, and does not recommend any change in current TS-required surveillance for other devices in an instrument channel. Performance of these instruments will continue to be verified through the current TS scheduled surveillance activities, e.g., the channel check, the channel functional test, and the logic functional test.


By proposing to change the TS required instrumentation calibration frequency from the current once-per-refueling-cycle to a maximum of "once every 8 years based on the results of performance monitoring using the on-line monitoring technique," the topical report basically proposes to replace the current "time-directed traditional calibration" with the "on-line monitoring and calibrate-as-required approach," with an interval between two successive calibrations limited to a maximum duration of eight years. The change from calibrating all redundant sensors each outage to calibrating a minimum of one of the redundant sensors each outage will require changes to the TS. The topical report proposed generic changes to the TS.

The staff determined that the TS changes proposed in the topical report are incomplete and require further evaluation for selecting an acceptable generic model that can be used in plant-specific requirements. During the February 16 and 17, 2000, meetings with the staff, EPRI agreed that once the technical issues relating to the generic concept of the on-line monitoring technique were resolved and the final SE was issued, EPRI would work with the NRC staff and the NEI Technical Specification Task Force (TSTF) to develop an appropriate TS structure and TS requirements consistent with the technical requirements described in the final SE. This SE resolves all technical issues as agreed upon during the February 16 and 17, 2000, meeting.

4.0 REVIEW METHOD AND CRITERIA

The staff reviewed the technical basis presented in the topical report for using the on-line monitoring technique to evaluate instrument performance in place and extend calibration intervals based on the results of performance evaluation. Since the current traditional calibration practice would be replaced by the new calibration assessment method of on-line monitoring, the staff compared the current practice to the proposed new method, analyzed the advantages and disadvantages of each, and attempted to assess the impact on plant safety. For evaluation, the staff followed review guidance contained in Chapter 7 of the Standard Review Plan (SRP) and also considered the guidance provided by the documents included in the reference section of this SE. The staff used the following criteria to evaluate the topical report.

1. Section 50.55a(h) of 10 CFR Part 50 endorses Institute of Electrical and Electronic Engineers (IEEE) Std. 603-1991, "IEEE Standard Criteria for Safety Systems for Nuclear Power Generating Stations." IEEE Std. 603 establishes minimum functional design criteria for the power, instrumentation, and control portions of nuclear power generating station safety systems.
2. Section 50.36(c)(1)(ii)(A), "Technical Specifications," of 10 CFR Part 50 states, in part, "Where a limiting safety system setting is specified for a variable on which a safety limit has been placed, the setting must be so chosen that automatic protective action will correct the abnormal situation before a safety limit is exceeded." Since reactor instruments are of high quality but still imperfect, conformance to this provision can only be ensured by acceptable evaluation or measurement of instrument performance. The staff's criterion for this provision accepts a statistical evaluation of instrument performance data, based on measurements of instrument performance that give reasonable assurance of compliance with 10 CFR 50.36(c)(1)(ii)(A).



3. Regulatory Guide 1.153, Revision 1, "Criteria for Safety Systems," establishes conformance with IEEE Std. 603-1991 as an acceptable alternative to compliance with IEEE Std. 279-1971. IEEE Std. 603-1991, Section 6.8.1, states, in part, "The allowance for uncertainties between the process analytical limit documented in Section 4.4 and device setpoint shall be determined using a documented methodology."
4. Criterion 13, "Instrumentation and Control," of Appendix A to 10 CFR Part 50 requires, in part, that instrumentation be provided to monitor variables and systems and that controls be provided to maintain these variables and systems within operating ranges.
5. Criterion 20, "Protection System Functions," and Criterion 21, "Protection System Reliability and Testability," of Appendix A to 10 CFR Part 50 require that automatic initiation of safety functions to prevent fuel design limits from being exceeded occur with high reliability.
6. Section XII of Appendix B to 10 CFR Part 50, "Control of Measuring and Test Equipment,"

states, "Measures shall be established to assure that tools, gages, instruments and other measuring and testing devices used in activities affecting quality are properly controlled, calibrated and adjusted at specified periods to maintain accuracy within necessary limits."

7. NRC Generic Letter 91-04, "Changes in Technical Specification Surveillance Intervals to Accommodate a 24-Month Fuel Cycle."
8. Section 50.65 of 10 CFR Part 50, "Requirements for Monitoring the Effectiveness of Maintenance at Nuclear Power Plants."

5.0 EVALUATION

The topical report proposes to replace the current time-directed traditional calibration with the new and advantageous calibrate-as-required approach using on-line monitoring. Therefore, the justification for such a replacement should demonstrate either: (1) the proposed on-line monitoring technique can perform all the required designated functions better than, or as good as, the current traditional calibration, with the same or better reliability; or (2) if due to inherent deficiencies in the proposed technique, the proposed technique cannot be demonstrated to be either better than, or at least as good as, the current practice, then the justification should verify that the impact of the proposed technique on plant safety will be insignificant and the advantages of using it will outweigh the deficiencies. Therefore, throughout this SE, the staff has compared various features of the proposed on-line monitoring technique to the current calibration practice, to determine if the proposed on-line monitoring technique can replace the current practice of traditional calibration without affecting plant safety significantly. If any area was perceived to have weaknesses that could result in safety concerns, the staff has attempted to alleviate the possible concern by recommending remedial actions. The recommended remedial actions have been included in this SE as requirements. There are 14 requirements. A few requirements may be duplicated in more than one paragraph, but for clarity's sake, these requirements were not combined. A few requirements are taken from the topical report and are included in the SE for completeness. Not all requirements listed in this SE are applicable to every plant-specific implementation of the proposed changes. Only applicable requirements must be addressed by license amendment requests submitted to implement the proposed changes to commercial nuclear power plants. For each requirement considered by the licensee as "not applicable," a case-by-case justification must be provided in the license amendment request.


5.1 Traditional Calibration Versus On-Line Monitoring

5.1.1 Traditional Calibration

Calibration is a two-step process. The first step determines whether calibration is actually needed (calibration check). This check is normally performed by providing the instrument with a series of known simulated process signal inputs covering its entire operating range, including the trip setpoint (TSP). For each input, the instrument output is compared with the preset acceptance criteria to determine whether the instrument output meets the acceptance criteria. If the instrument output meets the acceptance criteria, the second step is not necessary, and the instrument is declared to be in calibration.

The simulated process signal inputs used in traditional calibration are of known accuracy traceable to the National Institute of Standards and Technology (NIST). Maintaining traceable accuracy has two safety effects. The first is that safety analyses are based on steam tables and material properties developed using instruments traceable to standards. Therefore, having instrument TSPs traceable to standards preserves the correlation between plant operation and the safety analyses in the licensing basis. The second is that post-accident monitoring and accident reconstruction are also based on steam tables and material properties that are traceable to standards, and post-accident response may depend upon accident reconstruction.

If, in the first step, the instrument output does not meet the acceptance criteria, the instrument is calibrated during the second step by providing a series of known inputs covering its entire range.

For each input, the instrument is physically adjusted as required so that its output is within the range and accuracy required to conform with a set standard and ensure that the operation of the instrument is within the calculated limits. The calibration process eliminates known bias errors and limits uncertainty to an acceptable level. Therefore, the traditional calibration process gives confidence that instruments will operate on demand in accordance with the established design limits.

Two important concepts are associated with the traditional calibration process: (1) the as-found (AF) condition and (2) the as-left (AL) condition. AF is the condition in which a channel or a portion of it is found in the first step of calibration (calibration check). AL is the condition in which a channel, or a portion of it, is left after the second step of calibration (i.e., after physical adjustment if adjustment is needed). The difference between the AF data obtained during current calibration and the AL data from the last calibration is commonly termed "drift", although, in reality, it is a cumulative effect of various factors, including drift. The value of this drift is an indication of degradation in instrument TSP during the period between two consecutive calibrations.
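The AF/AL bookkeeping described above can be shown concretely. The sketch below uses hypothetical checkpoint data and an assumed drift acceptance band; it simply computes the observed drift at each checkpoint as the current as-found value minus the previous as-left value and checks it against the band, which is the essence of the first calibration step.

    def observed_drift(as_found_now, as_left_previous):
        """Observed drift at each calibration checkpoint: current AF minus previous AL."""
        return [af - al for af, al in zip(as_found_now, as_left_previous)]

    def within_drift_band(drift_values, drift_band):
        """True if every checkpoint's observed drift lies inside the +/- acceptance band."""
        return all(abs(d) <= drift_band for d in drift_values)

    # Hypothetical 5-point calibration check, in percent of span.
    as_left_previous = [0.02, 25.01, 49.98, 75.03, 99.99]
    as_found_now     = [0.10, 25.20, 50.25, 75.40, 100.30]
    drift_band       = 0.50   # assumed acceptance band, percent of span

    drift = observed_drift(as_found_now, as_left_previous)
    print([round(d, 2) for d in drift])           # [0.08, 0.19, 0.27, 0.37, 0.31]
    print(within_drift_band(drift, drift_band))   # True: no adjustment (second step) needed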

For any monitored process variable, the preset acceptance criterion for monitoring instrument performance and calibration is a calculated drift band. The drift value used for the TSP calculation is basically the expected drift between two consecutive calibrations. If, for the monitored variable, the observed instrument drift (the difference between AL and AF data) is found to be beyond the calculated band, the instrument is declared to be inoperable. The allowable value (AV) is established per the requirements of 10 CFR 50.36(c)(1)(ii)(A) and is the same as the operability limit for limiting conditions for operation (LCOs). The AV is determined by TSP calculations complying with the requirements of Regulatory Guide (RG) 1.105, "Instrument Setpoints for Safety-Related Systems," and the licensee's TSP calculation methodology.

The values of TSP and AV are selected so that, if an accident or abnormal event occurs, the designated protective actions will be initiated to correct the abnormal situation on a timely basis before the monitored process variable exceeds its analytical limit. Thus, during traditional calibration of the entire instrument loop, if the as-found drift is in the acceptable drift band, it ensures that the designated safety system will be initiated in a timely manner during an accident or abnormal event before the monitored process variable exceeds its analytical limit. In other words, during the traditional calibration, all TSP calculation assumptions (except the channel response time) are verified, including instrument drift at the TSP and the assumptions of the system safety analysis. Instrument drift is an indication of degradation in instrument settings over time, providing a mechanism for assessing instrument performance at any time after its last calibration. Therefore, during traditional calibration, if performance of an instrument (including its drift) is found to be within the acceptable limit, the instrument performance will be acceptable and the instrument will be declared operable.

The traditional calibration encompasses the entire channel, including the required sensor-transmitter, signal conditioners, bistable devices for alarm, interlock, and trip functions, and displays. The calibration may be done by any series of sequential overlapping calibrations or channel steps so that the entire channel is calibrated. Per the current licensing basis, all M&TE used for calibrating safety-related devices meets the requirements of 10 CFR Part 50, Appendix B, Criterion XII, "Control of Measuring and Test Equipment." Additional features of the traditional calibration are: (1) that instruments are physically inspected and their external conditions are known, (2) that the instrument technician can observe instrument output for glitches and excessive noise, and (3) that, by verifying that the AL condition of the instrument is within its acceptable limit, there is reasonable certainty that no instrument will be out of calibration for longer than one calibration interval.

Unlike on-line monitoring, where instrument performance can be monitored continuously, the conventional calibration process does not provide any indication of an instrument's performance status between calibration intervals during plant operation. A channel check function that compares redundant channel outputs, combined with the results of a channel functional test, can identify a faulty sensor-transmitter pair to some extent.


5.1.2 On-Line Monitoring Technique

On-line monitoring of instrument channel calibration involves monitoring the steady-state output of each channel and evaluating the monitored value to determine whether the sensor-transmitter of the channel is within acceptable limits. The monitored value is compared to the calculated value of the process parameter estimate to assess deviation of the monitored value from its process parameter estimate. The process parameter estimate is the best calculated instantaneous value of the process at the monitored operating point. However, as the word "estimate" suggests, it does not represent a true process value, but rather carries uncertainties arising from various factors. Each monitored channel's deviation from its process parameter estimate represents its variation from the estimated value of the process. The amount of this variation indicates instrument performance and instrument operability and identifies those instrument channels that are not functioning properly and that might require adjustment or corrective maintenance.

Therefore, in this role, the on-line monitoring technique can perform the first step of the traditional calibration (i.e., the calibration check) to some extent, but not to the level of accuracy inherent in traditional calibration. This is because, unlike traditional calibration, which uses simulated process signal inputs of known accuracy and traceable to NIST as a reference, in on-line monitoring the process parameter estimate (which is used as reference input) is not traceable to standards and is less accurate.

Uncertainty in the process parameter estimate is derived from individual redundant channel uncertainty and the type of algorithm used for on-line monitoring. Therefore, uncertainty in on-line monitoring is not static but can vary with the number of redundant channels and the type of algorithm used. In addition, on-line monitoring cannot perform a calibration check for the entire range of the instrument, including the TSP, but monitors instrument performance only at the point of operation. This is known as "single-point monitoring," and instrument performance at the TSP and at any other points in the range can only be assessed by extrapolating the results of the single-point monitoring to the entire range, including the TSP, using statistical methods.

In summary, on-line monitoring has the following inherent deficiencies:

1. It is not capable of monitoring instrument performance over its full range, including the TSP,
2. It does not have an accurate reference, but compares the monitored value to a calculated reference (the process parameter estimate) that is itself less accurate than the simulated input used in the traditional calibration process,
3. It does not provide accuracy traceable to standards, and
4. It does not allow frequent physical inspection of the instrument or allow technicians to observe instrument anomalies.

Because of these inherent deficiencies, on-line monitoring may be unable to verify an instrument's performance adequately to establish its operability, a deficiency that could degrade plant safety. In response to these staff concerns, EPRI, in a letter dated March 23, 2000, stated that, for assessing the capability of on-line monitoring to perform functions performed by the traditional calibration either better or as well, functions should be evaluated aggregately rather than one function at a time. While a little more uncertainty is associated with the process parameter estimate than with a simulated reference input traceable to NIST, the accuracy of the process parameter estimate is sufficient for its proposed purpose, which is to provide a reference value against which subsequent drift can be measured. Furthermore, the uncertainties associated with the process parameter estimate will be quantitatively bounded and accounted for in either the on-line monitoring acceptance criteria or the applicable setpoint and uncertainty calculations.

EPRI further added that there were clear tradeoffs between the traditional calibration approach and the on-line monitoring approach proposed by the topical report, so that it cannot be claimed that on-line monitoring is superior in every way. However, EPRI believes that, taken as a whole, the on-line monitoring technique, as proposed in the topical report, is superior to the traditional calibration approach, providing greater assurance of instrument operability throughout a plant's operating cycle.

During a presentation at the NRC headquarters on February 16 and 17, 2000, representatives of EPRI, along with representatives of Argonne National Laboratory, Carolina Power and Light Company, and Analysis and Measurement Services Corporation, stated that the inherent deficiencies of the on-line monitoring technique may introduce an insignificant error which, to a large extent, can be compensated for in the calculation of the "test acceptance criteria." An EPRI representative further stated that the above-described deficiencies in on-line monitoring might not result in a loss of function or a significant safety degradation, considering that: (a) the error introduced by the additional uncertainty in the process parameter estimate would be accounted for in determining the test acceptance criteria or in related setpoint calculations, (b) an additional penalty would be imposed to account for single-point monitoring, (c) at least one channel would be calibrated during each outage by a method traceable back to standards, (d) instrument performance would be monitored by the on-line monitoring technique more frequently than by the traditional calibration, and (e) redundancy exists in instrument channels.

To support this statement, V.C. Summer Nuclear Station representatives stated that they had been using the proposed on-line technique for monitoring instrument performance for the last eight years and had found that the benefits of implementing this new technique were overwhelming and outweighed the insignificant degradation in plant safety due to the above-described deficiencies.

The staff agrees with EPRI's conclusion that considering all the factors listed in (a) through (e) above, the impact of a small additional uncertainty in the process parameter estimate on plant safety will be insignificant, provided the uncertainties associated with the process parameter estimate are quantitatively bounded and accounted for in either the on-line monitoring acceptance criteria or the applicable setpoint and uncertainty calculations. Therefore, the staff requires that:

The submittal for implementation of the on-line monitoring technique shall confirm that the impact on plant safety of the deficiencies inherent in the on-line monitoring technique (inaccuracy in the process parameter estimate, single-point monitoring, and untraceability of accuracy to standards) will be insignificant, and that all uncertainties associated with the process parameter estimate have been quantitatively bounded and accounted for either in the on-line monitoring acceptance criteria or in the applicable setpoint and uncertainty calculations. (Requirement 1)


5.2 Drift Evaluation

EPRI conducted a study to understand instrument performance over time, the nature of drift, and how to predict instrument performance in service. The study analyzed historical calibration data from 18 nuclear power plants on approximately 6,700 calibrations. The calibrations covered instruments of various types, makes, and models used to monitor pressure, level, flow, and temperature in safety systems of various plants based on nuclear steam supply systems (NSSSs) supplied by Westinghouse, General Electric, Babcock & Wilcox, and Combustion Engineering.

Evaluation of the data focused on determining normal drift characteristics, categorizing types of drift shifts (e.g., zero shift, forward and reverse span-shift, and nonlinear shift), identifying drift trends that could affect TSP monitoring, determining which data indicate abnormal behavior, and quantifying the data to identify specific characteristics of instrument drift.

The EPRI drift evaluation produced several notable findings:

1. For the transmitters evaluated, drift was random. Transmitters were as likely to drift up as to drift down. No significant bias effects were observed.
2. For plants that performed a 9-point or greater calibration (5 points up and 4 points down), hysteresis was negligible.

3. Redundant transmitters associated with a particular parameter did not exhibit a tendency to drift as a group. One transmitter out of calibration did not indicate that the other redundant transmitters were likely to be out of calibration.
4. Single-point monitoring does not invalidate the ability of on-line monitoring to detect drift. An allowance (referred to as a "penalty") can be included in the uncertainty analysis to account for single-point monitoring.

5. Some applications (mostly at the low end and a few at the high end of instrument span) are likely to be unsuitable for single-point monitoring because of susceptibility to potential span-shift effects.
6. The data indicated that no failure modes were found that would be undetectable by on-line monitoring. For example, transmitters did not fail at a fixed level, the as-is type of failure in which the output signal remains constant regardless of the input signal variation.
7. Other conclusions were that: (a) AF/AL data exhibited a zero or a near-zero mean, indicating that bias in the drift is not a key concern, (b) data were normally distributed or bounded by the assumption of normality, (c) drift tended to increase with span, (d) zero-shift and span-shift were the predominant types of instrument drift and occurred at all levels of instrument span (with forward span-shift more frequent than reverse span-shift, and nonlinear shift less common than zero-shift and span-shift), (e) it was unlikely for one or more calibration checkpoints to be significantly out-of-calibration when one point (assumed to be the monitored point) was within calibration to some specified level, and (f) the calibration data evaluated showed that instrument performance was suitable for on-line monitoring. (A short illustration of this kind of drift summary follows this list.)
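Findings such as the near-zero mean drift and the tendency of drift to increase with span can be checked against a plant's own AF/AL history with ordinary summary statistics. The sketch below is a generic illustration on hypothetical pooled drift values; the data and the choice to summarize by mean and standard deviation are assumptions for this example and do not reproduce the statistical methods of the EPRI study.

    import statistics

    def drift_summary(drift_samples):
        """Mean and sample standard deviation of observed drift values (AF minus previous AL)."""
        return statistics.mean(drift_samples), statistics.stdev(drift_samples)

    # Hypothetical pooled drift history for one transmitter make/model, percent of span.
    drift_samples = [0.05, -0.12, 0.20, -0.03, 0.08, -0.15, 0.11, 0.02, -0.07, 0.04]

    mu, sigma = drift_summary(drift_samples)
    print(f"mean drift = {mu:+.3f}% of span")    # near zero suggests no strong bias
    print(f"std. dev.  = {sigma:.3f}% of span")  # spread used when sizing drift allowances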

The EPRI drift evaluation indicated that on-line monitoring as a performance verification tool may not be appropriate for process parameters that normally are at either the high or the low end of an instrument's calibrated span, because such processes are more susceptible to undetected span-shift. The EPRI drift study also indicated that zero-shift and span-shift were the predominant types of instrument drift and occurred at all levels of an instrument's span. Also, applications in which span-shift drift would go undetected are not suitable for on-line monitoring at a single point. Therefore, the staff requires that:

Unless the licensee can demonstrate otherwise, instrument channels monitoring processes that are always at the low or high end of an instrument's calibrated span during normal plant operation shall be excluded from the on-line monitoring program. (Requirement 2)

Values monitored by redundant instruments monitoring the same process variable at different locations may be slightly different because of delays, offsets, and superimposed noise. Physical separation in sensors can also increase uncertainty in the process parameter estimate. Referring the sensor readings back to a common point can compensate for effects of physical separation, but this usually requires a reasonably accurate physical model. The topical report concludes that the timing simultaneity of measurements of redundant channels becomes an important factor in determining the value for the process parameter estimate and its acceptance criterion, because, depending on the type of algorithm used, the process parameter estimate could be the result of instantaneous measured values of redundant and/or diverse channels. Also, for accurate results, the process must remain stable during monitoring and signals must be free from noise. When monitoring is done during normal plant operation, it is possible the process may not be stable and the monitored variable may be drifting.

The EPRI drift evaluation indicated that instrument drift is random and transmitters are as likely to drift up as to drift down. The staff believes it is possible that while the monitored process variable is drifting, the monitoring instrument could also be drifting, and the combined effect of process and instrument drift could adversely affect the accuracy of the monitoring and the calculated value of the process variable estimate. Therefore, it is prudent to acquire redundant channel measurements within a close duration and at relatively stable plant conditions and to use an algorithm that can recognize unstable conditions of the monitored process. If a licensee believes that, in a plant-specific physical configuration, monitored values are susceptible to location differences, process instability, and non-simultaneous measurements, the staff requires that:

The algorithm used for on-line monitoring shall be able to distinguish between the process variable drift (actual process going up or down) and the instrument drift and shall be able to compensate for uncertainties introduced by an unstable process, sensor locations, non-simultaneous measurements, and noisy signals. If the implemented algorithm and its associated software cannot meet these requirements, administrative controls, including the guidelines in Section 3 of the topical report for avoiding a penalty for non-simultaneous measurement, could be implemented as an acceptable means to ensure that these requirements are met satisfactorily. (Requirement 3)

5.3 Single-Point Monitoring

The EPRI drift study noted that zero-shift and span-shift are the predominant types of instrument drift and occur at all levels of instrument span. Forward span-shift occurs more frequently than reverse span-shift, and the nonlinear shift is less common than zero-shift and span-shift.


Zero-shift manifests itself as an offset, the value of which remains constant throughout the span, whereas with forward span-shift, drift tends to increase with the span. Considering this observation, the EPRI drift study concluded that the drift exhibited by an instrument at one operating point could be considered representative of the drift over its calibrated range, provided an allowance (penalty) to compensate for the effects of zero and span-shifts is included in the uncertainty analysis for calculating acceptance criteria for on-line monitoring. Thus, by including a calculated value of penalty for each instrument, the on-line monitoring technique will be able to detect drift at any other point in the calibrated span using single-point monitoring.

Sections 3.4.3.2 and 8 of the topical report include EPRI-recommended values of the compensatory allowance (penalty) for single-point monitoring. EPRI derived these penalty values by analyzing the observed AF/AL data from the drift study using statistical methods. The topical report includes curves with "95%/95% Allowance for Single Point Monitoring" on the Y-axis and "Drift Limit for Monitored Channel" on the X-axis, plotted for 0-25%, 25%-50%, and 50%-100% values of instrument span. Evaluation of these plots indicated that monitoring a process low in the span carries a higher penalty than monitoring high in the span. The recommended allowance depends on the channel drift limit, which can vary with the monitored parameter. However, the topical report recommends 0.25% as a minimum value for the allowance, although the plots would permit a lower value for the penalty. Also, the topical report recommends that the single-point monitoring penalty be treated as a random uncertainty in the overall uncertainty evaluation for on-line monitoring.
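One way to read the recommendation above is that the single-point-monitoring allowance enters the on-line monitoring uncertainty evaluation as one more random term. The sketch below is a hypothetical illustration of that bookkeeping: the 0.25% floor reflects the minimum allowance stated in the topical report, but the square-root-sum-of-squares combination and the example magnitudes are assumptions made here, not values read from the report's curves.

    import math

    def single_point_allowance(curve_value, minimum=0.25):
        """Apply the stated 0.25% (of span) minimum to a curve-derived allowance."""
        return max(curve_value, minimum)

    def combined_random_uncertainty(random_terms):
        """Combine independent random terms by square-root-sum-of-squares
        (a common setpoint-methodology convention, assumed here for illustration)."""
        return math.sqrt(sum(t ** 2 for t in random_terms))

    # Hypothetical example, all values in percent of span.
    pe_uncertainty = 0.40                          # uncertainty assigned to the process parameter estimate
    spm_penalty    = single_point_allowance(0.18)  # curve value below the floor, so 0.25 is used
    other_random   = 0.30                          # e.g., M&TE and readability terms, lumped for illustration

    total = combined_random_uncertainty([pe_uncertainty, spm_penalty, other_random])
    print(f"single-point allowance used:  {spm_penalty:.2f}% of span")
    print(f"combined random uncertainty: {total:.2f}% of span")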

EPRI concludes that by using the calculated value of penalty for each instrument, the on-line monitoring technique can detect drift at any other point in the calibrated span using single-point monitoring. The topical report indicates that the basis for this conclusion is that, in most samples, drift was due to zero-shift, forward span-shift, or some combination of the two. The staff believes this may not be true in every case. It is possible that drift could be the result of zero-shift, forward or reverse span-shift, or any form of non-linear shift, or a combination thereof.

Therefore, imposing a penalty based on the general assumption that drift is due to zero-shift, forward span-shift, or some combination of the two may not be correct for all instruments. It is acceptable to use the EPRI-recommended values in the topical report to determine the "allowance or penalty" to compensate for single-point monitoring, provided the monitored instrument is of a type and make similar to those evaluated in the EPRI drift study. If the instrument designated for on-line monitoring was not included in the EPRI study, the topical report's recommended penalty should not be used unless justified by an evaluation. Therefore, the staff requires that:

For instruments that were not included in the EPRI drift study, the value of the allowance or penalty to compensate for single-point monitoring must be determined by using the instrument's historical calibration data and by analyzing the instrument's performance over its range for all modes of operation, including startup, shutdown, and plant trips. If the required data for such a determination are not available, an evaluation demonstrating that the instrument's relevant performance specifications are as good as or better than those of a similar instrument included in the EPRI drift study will permit a licensee to use the generic penalties for single-point monitoring given in EPRI Topical Report 104965.

(Requirement 4)


5.4 On-Line Monitoring Acceptance Criteria

Acceptance criteria depend on the type of algorithm selected for on-line monitoring, but regardless of the algorithm, the steady-state output of each channel is compared with its process parameter estimate (PE) during monitoring to assess deviation of the monitored value from the calculated value of the process variable. This step is similar to the first step in the traditional calibration (the calibration check).

Each channel's deviation from its PE represents its variation from the estimated value of the process. The amount of this variation is compared with preestablished "acceptance criteria," and the instrument performance and its operability status are determined in accordance with the three bands described in Sections 5.4.1, 5.4.2, and 5.4.3 and the figure, Deviation Zones for Acceptance Criteria. The acceptance criteria are established by calculating acceptable limits of deviation in these three bands. It is possible that, due to changes in error factors (e.g., test equipment uncertainty, AL tolerance), implementing on-line monitoring may require revisiting the current TSP and uncertainty calculations. The staff requires that:

Calculations for the acceptance criteria defining the proposed three zones of deviation ("acceptable," "needs calibration," and "inoperable") should be done in a manner consistent with the plant-specific safety-related instrumentation setpoint methodology so that using the on-line monitoring technique to monitor instrument performance and extend its calibration interval will not invalidate the setpoint calculation assumptions and the safety analysis assumptions. If new or different uncertainties require the recalculation of instrument trip setpoints, it should be demonstrated that relevant safety analyses are unaffected. The licensee should have a documented methodology for calculating acceptance criteria that are compatible with the practice described in Regulatory Guide 1.105 and the methodology described in acceptable industry standards for TSP and uncertainty calculations. (Requirement 5)


[Figure B-1, Deviation Zones for Acceptance Criteria, shows the three deviation zones, mirrored above and below the process parameter estimate (PE) at the operating point: an acceptable band/region (Section 5.4.1) between the PE and the MAVD (maximum acceptable value of deviation); a routine calibration scheduling band/region (Section 5.4.2) between the MAVD and the ADVOLM (allowable deviation for on-line monitoring); and an instrument inoperable region (Section 5.4.3) beyond the ADVOLM.]

Figure B-1
Deviation Zones for Acceptance Criteria

5.4.1 Acceptable Band or Acceptable Region

As described in the topical report, this zone will be between the PE and the maximum acceptable value of deviation (MAVD) for the monitored parameter in reference to the PE. Using the on-line monitoring technique, if the deviation between the monitored value and its PE is found anywhere in this zone, no action is needed and the instrument is considered operable. In accordance with the existing calibration practice, the instrument is considered operable when its observed drift (the difference between AL and AF conditions) is found to be within the value used in the TSP calculations. In other words, when setpoint calculation assumptions are verified by instrument performance, the instrument is considered operable. Considering this current practice, the staff requires that:

For any algorithm used, the maximum acceptable value of deviation (MAVD) shall be such that accepting the deviation in the monitored value anywhere in the zone between PE and MAVD will provide high confidence (level of 95%/95%) that drift in the sensor-transmitter or any part of an instrument channel that is common to the instrument channel and the on-line monitoring loop is less than or equal to the value used in the setpoint calculations for that instrument channel. (Requirement 6)


5.4.2 Routine Calibration Scheduling Region or Band

This zone falls between the MAVD and the allowable deviation value for on-line monitoring (ADVOLM). During on-line monitoring, if the deviation between the monitored value and its PE is found anywhere in this zone, the instrument channel will be scheduled for calibration in the next refueling outage. The staff understands that when deviation between the monitored value and its PE is found in this zone, the instrument may need adjustment or maintenance, but will still be considered operable. Therefore, the staff requires that:

The instrument shall meet all requirements of the above requirement 6 for the acceptable band or acceptable region. (Requirement 7)

For any algorithm used, the maximum value of the channel deviation beyond which the instrument is declared "inoperable" shall be listed in the technical specifications with a note indicating that this value is to be used for determining the channel operability only when the channel's performance is being monitored using an on-line monitoring technique. It could be called the "allowable deviation value for on-line monitoring" (ADVOLM) or whatever name the licensee chooses. The ADVOLM shall be established by the instrument uncertainty analysis. The value of the ADVOLM shall be such as to ensure:

a. that when the deviation between the monitored value and its PE is less than or equal to the ADVOLM limit, the channel will meet the requirements of the current technical specifications, and the assumptions of the setpoint calculations and safety analyses are satisfied; and
b. that until the instrument channel is recalibrated (at most until the next refueling outage), actual drift in the sensor-transmitter or any part of an instrument channel that is common to the instrument channel and the on-line monitoring loop will be less than or equal to the value used in the setpoint calculations, and that other limits defined in 10 CFR 50.36 as applicable to the plant-specific design for the monitored process variable are satisfied. (Requirement 8)

5.4.3 Operability Assessment Region or Band

This zone will be beyond the ADVOLM limit for on-line monitoring. During on-line monitoring, if the deviation between the monitored value and its PE is found anywhere in this zone, the instrument channel will be declared inoperable immediately and statements of all required TS actions will become applicable. The staff requires that:

Calculations defining alarm setpoint (if any), acceptable band, the band identifying the monitored instrument as needing to be calibrated earlier than its next scheduled calibration, the maximum value of deviation beyond which the instrument is declared "inoperable," and the criteria for determining the monitored channel to be an "outlier," shall be performed to ensure that all safety analysis assumptions and assumptions of the associated setpoint calculation are satisfied and the calculated limits for the monitored process variables specified by 10 CFR 50.36 are not violated. (Requirement 9)
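The three-zone evaluation described in Sections 5.4.1 through 5.4.3 reduces to a simple comparison of each channel's deviation against the MAVD and the ADVOLM. The sketch below is schematic only: the function name is introduced here, and the MAVD and ADVOLM values in the example are arbitrary placeholders, not calculated acceptance criteria.

    def classify_deviation(monitored_value, process_estimate, mavd, advolm):
        """Place a channel's deviation from its process parameter estimate (PE)
        into one of the three zones described in this safety evaluation.

        mavd   : maximum acceptable value of deviation (acceptable band limit)
        advolm : allowable deviation value for on-line monitoring (operability limit)
        """
        deviation = abs(monitored_value - process_estimate)
        if deviation <= mavd:
            return "acceptable"             # Section 5.4.1: no action needed
        if deviation <= advolm:
            return "schedule calibration"   # Section 5.4.2: calibrate next refueling outage
        return "inoperable"                 # Section 5.4.3: TS action statements apply

    # Hypothetical example, values in percent of span.
    pe = 50.0
    for reading in (50.3, 51.2, 53.5):
        print(reading, classify_deviation(reading, pe, mavd=0.8, advolm=2.0))
    # 50.3 -> acceptable, 51.2 -> schedule calibration, 53.5 -> inoperable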


5.5 Instrument Failures

The proposed on-line monitoring system will allow the instruments to remain unattended for longer periods. Therefore, there could be a possibility that certain types of instrument failures may remain undetectable by the on-line monitoring system while the instrument is being monitored at only one point in its operating range.

An instrument can fail in any one of three modes: (1) it can fail low, which means, regardless of the value of its input, instrument output is at or near zero; (2) it can fail high, which means, regardless of the value of its input, instrument output is at or near 100%; (3) it can fail as-is, which means, regardless of the input, instrument output remains constant somewhere between 0% and 100%. Failures that cause a large shift (deviation) in the instrument's output signal compared to its PE are not a concern because, just as the drift is detectable, so is the large shift.

But failures where the instrument output compared to its PE does not change much upon instrument failure could be a concern. For example,

  • The process parameter is at or near the low end of the span and the instrument fails low.
  • The process parameter is near the high end of the span and the instrument fails high.
  • The process parameter is somewhere between the low and high span limits and the instrument fails as-is.

The as-is types of failures were not observed in the EPRI drift study, but failing low (loss of signal) was found to be more likely than failing high. The topical report indicated that very few instruments operate near the 100% span point, and even if they operate high in the span, there is generally some room for detecting a high signal failure (fail high) as drift. But the failed-low condition of the instrument could remain undetected in applications where the process parameter is normally at the lower end of the span; therefore, the topical report recommends that such applications should be avoided. The following are three examples of the kinds of cases which are susceptible to loss of signal failures (failed low) and, therefore, are not considered suitable for on-line monitoring.

  • Auxiliary feedwater flow: At normal plant operation, there is no flow and the signal is at the bottom of the span.
  • Engineered safeguards system actuation equipment: At normal plant operation, the equipment is usually off and the associated pressure or flow indication will be at or near 0% of the span.

  • Containment pressure: At normal plant operation, depending on the calibrated span, the signal might be about 0% of the span.

In response to related concerns in the staff's draft SE, EPRI stated that the approach to on-line monitoring outlined in TR-104965 provides greater assurance of sensor operability than is currently provided by calibrations each refueling interval and channel checks each shift. The channel checks will continue to identify gross failures. Failures where sensors drift only slightly more than their allowances in applicable setpoint or uncertainty calculations will be identified during plant operations by on-line monitoring. EPRI believes that the increased ability to detect these sorts of failures will more than compensate for instrument failures that are undetectable by on-line monitoring while the instrument is being monitored at one point in the operating range.

During the February 16 and 17, 2000, presentation, EPRI stated that there was no potential that any type of failure would remain undetected while the instrument was being monitored by an on-line monitoring technique. EPRI's drift study analyzed all possible failure modes, including fail low, fail high, and fail as-is, and demonstrated that the probability of failure modes in which transmitters failed in ways that would be undetectable by on-line monitoring is extremely low.

Also, to verify that no common-mode failure exists, the topical report proposes to calibrate at least one redundant channel during each outage.

The staff believes that in plant-specific situations there could be many more examples of cases like those described above. Therefore, it is prudent that an evaluation for each instrument should be performed, considering all possible types of failure modes and operating point of the process parameter with respect to the instrument's span during normal plant operation, to verify that no possible instrument failure in any condition may remain undetected. If it is suspected that the plant-specific implementation of on-line monitoring could in any way impact the existing plant safety analyses demonstrating a coordinated defense-in-depth against instrument failures, the staff requires that:

The plant-specific submittal shall confirm that the proposed on-line monitoring system will be consistent with the plant's licensing basis, and that there continues to be a coordinated defense-in-depth against instrument failure. (Requirement 10)

5.6 On-Line Monitoring Loop

A typical instrument channel consists of a process sensor-transmitter, the power source, signal conditioners, indicators, and bistable devices. The on-line monitoring will not monitor the entire instrument channel, but only a portion of it. A typical on-line monitoring loop will consist of a sensor-transmitter (and in some cases will also include a portion of the signal processing circuitry), a Class 1E to non-1E isolator, non-safety-related data transmitting hardware, and a non-safety-related microprocessor-based data processing device.

On-line monitoring collects data from instrument channels, typically via connection to the plant computer for an automated system or at a qualified Class 1E to non-1E isolator output terminal or at an appropriate test point for manual data acquisition. Signals taken from the non-1E terminals of the isolator or at the plant computer are transmitted to a microprocessor-based processing device via communications hardware and software and analyzed to determine the state of the instrument's calibration and its operability status. There are various on-line monitoring implementations on microcomputer platforms. Data are input from the plant to these systems via modem or electronic media or manually. Output capabilities typically include graphical display of the individual instrument channel deviation from the PE as a function of time. Some automated systems are network operable and allow multiple access to the monitoring information and results.

Except for the sensor-transmitter, typically all components of an instrument channel are located on instrument racks. According to the current TS requirements, performance of components on an instrument rack, including bistable units, is monitored through periodic surveillance activities, including functional checks. The topical report does not recommend any change in current practices; therefore, performance of these components will continue to be verified through the current TS scheduled surveillance activities.

Although the equipment used for on-line monitoring is non-safety-related, it interfaces with safety-related instrument channels; therefore, the staff requires that:

Adequate isolation and independence, as required by Regulatory Guide 1.75, GDC 21, GDC 22, IEEE Std. 279 or IEEE Std. 603, and IEEE Std. 384, shall be maintained between the on-line monitoring devices and Class 1E instruments being monitored.

(Requirement 11)

Although the equipment used for the on-line monitoring technique is non-safety-related, the instruments monitored are safety-related and, based on the results of on-line monitoring, the frequency of the current TS-required instrument channel calibrations could be relaxed from "once per refueling cycle" to "once per a maximum period of 8 years" as discussed in Section 3.0(2)(a) of this document. These Class 1E instruments are set to initiate protective actions to mitigate accidents or abnormal events before the monitored process variable exceeds its analytical limit, and may be required to guide plant operators through emergency operating procedures (EOPs). Because of its important mission, the on-line monitoring system, including its hardware and software, must be designed with quality assurance (QA) requirements compatible with the QA requirements for the Class 1E devices being monitored. Therefore, the staff requires that:

(a) QA requirements as delineated in 10 CFR Part 50, Appendix B, shall be applicable to all engineering and design activities related to on-line monitoring, including design and implementation of the on-line system, calculations for determining process parameter estimates, all three zones of acceptance criteria (including the value of the ADVOLM), evaluation and trending of on-line monitoring results, activities (including drift assessments) for relaxing the current TS-required instrument calibration frequency from "once per refueling cycle" to "once per a maximum period of 8 years," and drift assessments for calculating the allowance or penalty required to compensate for single-point monitoring.

(b) The plant-specific QA requirements shall be applicable to the selected on-line monitoring methodology, its algorithm, and the associated software. In addition, software shall be verified and validated and meet all quality requirements in accordance with NRC guidance and acceptable industry standards.

(Requirement 12)

Basically, the equipment associated with on-line monitoring serves as M&TE to assess calibration of Class 1E instruments; therefore, the staff requires that:

All equipment (except software) used for collection, electronic transmission, and analysis of plant data for on-line monitoring purposes shall meet the requirements of 10 CFR Part 50, Appendix B, Criterion XII, "Control of Measuring and Test Equipment."

Administrative procedures shall be in place to maintain configuration control of the on-line monitoring software and algorithm. (Requirement 13)


5.7 System Algorithms

Although many algorithms could be used in on-line monitoring, this topical report addresses only two. The staff did not review, and does not endorse, either of the two algorithms or the associated software. The staff believes that numerous algorithms and associated software could be suitable, and the choice of which to use should be left to the user. Since the algorithm will be used to monitor calibration and determine operability of safety-related instruments, every user would be prudent to carefully evaluate the selected algorithm to ensure that implementation of the algorithm and/or its software for on-line monitoring does not violate the assumptions of the safety analyses and TSP calculations; plant commitments to separation, independence, and QA; the conditions of the applicable requirements specified in this SE; or NRC policy statements for TS criteria. The staff, therefore, requires that:

Before declaring the on-line monitoring system operable for the first time, and just before each performance of the scheduled surveillance using an on-line monitoring technique, a full-features functional test, using simulated input signals of known and traceable accuracy, should be conducted to verify that the algorithm and its software perform all required functions within acceptable limits of accuracy. All applicable features shall be tested. (Requirement 14)

6.0 CONCLUSION

Based on the above evaluation, the staff concludes that the generic concept of an on-line monitoring technique, as presented in the topical report, is acceptable for on-line tracking of instrument performance. The staff agrees with the topical report's conclusion that on-line monitoring has several advantages, including timely detection of degraded instrumentation. The staff believes that on-line monitoring can provide information on the direction in which instrument performance is heading and, in that role, it can be useful in determining preventive maintenance activities. As agreed during the February 16 and 17, 2000, meeting with the staff, EPRI would work with the NRC staff and the NEI TSTF to develop an appropriate TS structure and TS requirements consistent with the requirements discussed in this SE.

For establishing instrument operability, verifying the drift to be within an acceptable limit is the most vital function of the conventional calibration. Although the proposed on-line monitoring technique will render results with less accuracy than the traditional calibration process, the staff finds acceptable EPRI's conclusion that the accuracy rendered by the process parameter estimate is sufficient to assess instrument operability; also, compared to traditional calibration once per refueling outage, the on-line monitoring technique, taken as a whole, provides higher assurance of instrument operability throughout a plant operating cycle. However, if results of the on-line monitoring technique are applied to relax the TS-required calibration frequency of the safety-related RPS, ESFAS, and PAM instrumentation, the staff requires that every plant-specific license amendment submittal for implementing on-line monitoring to relax the TS-required calibration frequency of the safety-related instrumentation address all applicable requirements discussed in this SE.



7.0 REFERENCES

1. NUREG/CR-6343, "On-Line Testing of Calibration of Process Instrumentation Channels in Nuclear Power Plants," Phase II Final Report, dated November 1995.
2. Lawrence Livermore National Laboratory, Report 1, Task 29, "Assessment of On-line Monitoring Techniques," dated January 17, 1996.
3. G. Preckshot of the Fission Energy and Systems Safety Program, Lawrence Livermore National Laboratory, On-Line Calibration System Requirements and Review Guidance, dated December 22, 1998.

Principal Contributor: S. Athavale
Date: July 24, 2000

C
MSET SOFTWARE VERIFICATION AND VALIDATION REPORT

Appendix C provides the complete text of a verification and validation report of the SureSense Diagnostic Monitoring Studio, Version 1.4, MSET software. This report was sponsored by EPRI in 2002 and was completed by Richard Rusaw of SystemsMax Engineering, LLC. The latest version of the SureSense software is Version 2.0. If an implementation is to utilize the later version of the software, this V&V document can be updated where applicable. Refer to Section 8 for additional information regarding on-line monitoring software V&V requirements.

SureSense Diagnostic Monitoring Studio Verification and Validation Report, Version 1.4, July 2002

C.1 Introduction

C.1.1 Report Purpose

This document was produced for the EPRI on-line monitoring implementation project (formally titled the Implementation of Instrument Monitoring for Optimized Calibration), which is a demonstration of real-time on-line monitoring for a variety of applications at participating nuclear power plants. One of the primary objectives of the project is to develop the available software to meet the requirements of the program as defined by the users. The software selected for this project is SureSense Diagnostic Monitoring Studio (Version 1.4); it is produced and supplied by Expert Microsystems, Inc. Another primary objective of the on-line monitoring implementation project is to install and operate the on-line monitoring program at each participating facility. Implementation at each facility includes utilizing the installed software for one of its designated purposes - the reduction of scheduled field calibrations of monitored instrumentation.

The SureSense software provides an automated method to monitor instrument performance and assess calibration while the plant is operating. This capability provides the basis for deferring calibrations on instruments until acceptance criteria defined by the user and monitored by the software have been exceeded. This active process of calibration assessment is carried out under formal plant procedures.

The purpose of this report is to define the verification and validation process to be used with the SureSense software. Nuclear industry standards require that verification and validation (V&V) be performed and documented on any software installed at a facility and used in a formal decision making process. The level of V&V required at each facility varies but is primarily dependent on the relationship of the application to plant safety. Industry standards allow for a graded approach to the requirements of V&V, depending on the safety significance. This V&V document has been executed to meet the minimum requirements for Non-Nuclear Safety Related (NNS), Balance of Plant (BOP) applications of all participating nuclear facilities.

C.1.2 Report Applicability

Although much of the information provided here can be applied in principle to any on-line monitoring software, this report has been prepared specifically for the SureSense Diagnostic Monitoring Studio Version 1.4. The EPRI on-line monitoring implementation project is applying this software to many types of nuclear plant instrumentation systems, and this V&V report is intended to directly support the project participants.

C.1.3 On-Line Monitoring Overview

On-line monitoring is an automated method of monitoring instrument performance and assessing instrument calibration while the plant is operating, without perturbing the monitored channels. In the simplest implementation, redundant channels are monitored by comparing each individual channel's indicated measurement to a calculated best estimate of the actual process value; this best estimate of the process value is referred to as the parameter estimate or estimate. By monitoring each channel's deviation from the parameter estimate, an assessment of each channel's calibrated status can be made. An on-line monitoring system is also referred to as a signal validation system or data validation system.

On-line monitoring of instrument channels is possible and practical owing to the ease with which data acquisition and analysis of instrument channel data can be performed. In essence, on-line monitoring accomplishes the surveillance or monitoring aspect of calibration by comparing redundant or correlated instrument channels with independent estimates of the plant parameter of interest. It does not replace the practice of instrument adjustments; instead, it provides a performance-based approach for determining when instrument adjustment is necessary, as compared to a traditional time-directed calibration approach.

The SureSense software accomplishes this by first learning a process model from data obtained during normal plant operation. Once the model has been trained, it may be used for monitoring. During process monitoring, data may be obtained from an on-line source, but in this project will be obtained from a database. An estimated value is obtained by comparing the plant's current state to the learned model. Similarity calculations are performed to determine estimated values, which are calculated for every parameter in the model. The difference between the estimated value and the observed value is referred to as the residual error or residual. Statistical properties of the residual are examined for deviations from the learned model.
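To make the deviation concept concrete, the following minimal Python sketch computes a parameter estimate and each redundant channel's residual using made-up values. A simple channel average stands in for the estimate here; the SureSense software derives its estimate from MSET similarity calculations against the trained model, so this is only an illustration of the bookkeeping, not of the actual algorithm.

import numpy as np

def parameter_estimate(observations):
    # Illustrative parameter estimate: the mean of the redundant channels.
    # (SureSense derives its estimate from an MSET similarity calculation
    # against a trained model; the simple average is only a stand-in.)
    return np.mean(observations, axis=1)

def channel_deviations(observations):
    # Residual (deviation) of each channel from the parameter estimate.
    estimate = parameter_estimate(observations)
    return observations - estimate[:, np.newaxis]

# Example: three redundant channels over five observations (made-up values);
# channel 3 is beginning to drift high.
obs = np.array([
    [100.1,  99.8, 100.3],
    [100.0, 100.2, 100.1],
    [ 99.9, 100.1, 100.4],
    [100.2,  99.9, 100.6],
    [100.1, 100.0, 100.8],
])
print(channel_deviations(obs))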


C.1.4 EPRI's Role in On-Line Monitoring

EPRI's strategic role in on-line monitoring is to facilitate its implementation and cost-effective use in numerous applications at power plants. To this end, EPRI has sponsored an on-line monitoring implementation project at multiple nuclear plants specifically intended to install, demonstrate, and use on-line monitoring technology. The purpose of the EPRI on-line monitoring implementation project is to: 1) apply on-line monitoring to all types of power plant applications, and 2) document all aspects of the implementation process in a series of EPRI deliverable reports.

EPRI fosters development of on-line monitoring technology and its application via the Instrument Monitoring and Calibration (IMC) Users Group. Through the IMC Users Group, on-line monitoring will continue to be supported technically as a key technology while its use grows throughout the industry. Further, the EPRI IMC Users Group will support generic technical issues associated with on-line monitoring, such as providing implementation guidance for calibration reduction of safety-related instrumentation.

C.1.5 SureSense Diagnostic Monitoring System

C.1.5.1 Overview

The SDMS software is intended to serve the electric power generation and transmission industry, as well as a wide variety of aerospace, military, and industrial applications. Two factors will continue to drive these industries toward process systems incorporating sophisticated digital control and online monitoring. First, precision control provides the means to meet increasingly stringent performance, reliability and emissions standards. Second, online diagnostic monitoring enables more cost effective condition-based maintenance of plant equipment.

The SDMS software enables test engineers and data analysts to perform automated diagnostic data analysis and certification of plant systems and equipment. SDMS consists of an integrated suite of graphically driven software tools used to construct, analyze, optimize and verify automated diagnostic decision systems that improve the speed and accuracy of operations data analysis. This capability ultimately enables faster and more accurate decisions to certify or maintain plant systems and equipment. SDMS software can:

  • Integrate and correlate automated modeling and simulation tools with the instrumentation and control engineer's system diagnostic knowledge to enable automated online fault detection and diagnosis;
  • Detect, diagnose and discriminate between data quality problems and off-nominal system and equipment operating conditions for all plant operating modes;
  • Provide online learning capability for diagnostic model calibration and tuning;
  • Integrate with plant data systems for improved data management and analysis support.


The overall result is a general-purpose on-line monitoring tool that makes optimal use of the technical expertise of designers, analysts and engineers while requiring minimal programming skills.

C.1.5.2 System Description

SDMS provides the workstation-based tools used by plant engineers to define, implement and verify a plant-specific on-line diagnostic model and fault classification strategy for a system under surveillance.

The SDMS software becomes an element of the plant's overall data acquisition and processing architecture, as illustrated in Figure C-1; the rightmost column of flowchart elements represents the SDMS software. The training database, data acquisition system, and alarm or control system are external to the software (the elements on the left of the flowchart belong to the monitored system).

Figure C-1
SDMS Operation

SDMS data processing flow is divided into two discrete sets of procedures. The training procedures are used to learn a software model that contains the signals produced by the monitored physical plant equipment. The training step requires a body of training data that characterizes the normal operating modes of the equipment. The training data may be derived from signals acquired during normal operation of the equipment, or may be produced by a simulation of the equipment. The training step proceeds as follows:

The modeling algorithm learns a predictive model of the monitored equipment from the normal operating data. The predictive model is a multivariate state estimation technique (MSET) statistical model using algorithms developed by Argonne National Laboratory.

The parameter estimation algorithm next operates on the predictive model over the training data set and provides an estimate of each data value for each data observation in the training data set.

The mathematical difference between each observed and estimated value pair is called a residual.

The statistical properties of the residuals are computed over the training data for each signal.

The calibrated model, in combination with the residual data statistics, is stored to create the SDMS model that will be used for fault detection during the online monitoring step. In the monitoring step, the parameter estimation algorithm uses the predictive model to operate on each new data observation and provides an estimate of each data value in the new observation. Fault detection consists of acquiring new data observations from the plant equipment and using the learned SDMS model to determine whether the residual is statistically normal or, conversely, whether a fault condition is indicated.

The fault detection algorithm identifies conditions where the model estimate and observed data disagree. This is accomplished by performing a statistical hypothesis test procedure on a series of residual data values. The statistical hypothesis test procedure determines if the series is representative of the learned probability density function (the null hypothesis), or alternatively, is representative of some other alternate hypothesis. In order to prevent spurious alarms, the statistical hypothesis test results are filtered using a conditional probability algorithm that, in essence, requires several hypothesis test alarms before annunciating the signal fault at a user specified confidence level.
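The Python sketch below illustrates the general idea of residual-based fault detection with conditional-probability filtering. A simple z-test against the training-data residual statistics stands in for the sequential hypothesis test, and a Bayesian update of the fault probability stands in for the conditional probability algorithm; the likelihoods, prior, thresholds, and function name are illustrative assumptions, not the values or equations used by the SDMS software.

import numpy as np

def bcp_fault_detector(residuals, train_mean, train_std,
                       z_limit=3.0,
                       p_alarm_given_fault=0.95,
                       p_alarm_given_normal=0.01,
                       prior_fault=0.01,
                       confidence=0.99):
    # Each residual is flagged by a simple z-test against the training
    # statistics; a Bayesian update of the fault probability requires
    # several consecutive alarms before the fault is annunciated at the
    # requested confidence level.  Returns the index of the first
    # annunciated fault, or None if no fault is declared.
    p_fault = prior_fault
    for i, r in enumerate(residuals):
        alarm = abs((r - train_mean) / train_std) > z_limit
        if alarm:
            num = p_fault * p_alarm_given_fault
            den = num + (1.0 - p_fault) * p_alarm_given_normal
        else:
            num = p_fault * (1.0 - p_alarm_given_fault)
            den = num + (1.0 - p_fault) * (1.0 - p_alarm_given_normal)
        p_fault = num / den
        if p_fault >= confidence:
            return i
    return None

# Example: residuals that are normal for 200 observations, then shift upward
rng = np.random.default_rng(0)
res = np.concatenate([rng.normal(0.0, 0.1, 200),
                      rng.normal(0.5, 0.1, 50)])
print(bcp_fault_detector(res, train_mean=0.0, train_std=0.1))

A single spurious alarm raises the fault probability only modestly and the next normal observation drives it back down, which is the filtering behavior the paragraph above describes.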

C.2 Verification and Validation Summary

The V&V described in this document was executed and completed successfully. All programmatic requirements of the report have been met. The software developer provided a fully functional version of the subject program that executed all functional requirements without any deviations.

This effort began when version 1.3 was issued to the EPRI project participants. Due to industry requests for program functional improvements, the developer revised the software and issued a version 1.4 beta release during the course of this work. The new release required significant changes to the scope of this effort. The specific changes have not been delineated in this document since these capabilities are fully described in the software's Users' Manual. The new functional capabilities were included in the scope of this testing. The beta version tested herein was a near-release version produced for testing by the developer's in-house quality assurance team. The product final release, issued July 1, 2002, does not add any functional capabilities in comparison to the beta version tested herein and corrects only minor discrepancies identified by the developer.

Since no deviations from the test were identified, a deviation report was not required. Based on the success of this effort, this document provides the necessary independent quality assurance documentation required for use at nuclear plants.

C.3 Verification and Validation Plan

C.3.1 Overview

C.3.1.1 Purpose

The purpose of the Verification and Validation Plan is to provide a comprehensive method for evaluating the SureSense version 1.4 software and to provide assurance that the requirements and expectations of the users are satisfied. The format of this document is based on the guidance documents produced for the nuclear industry by the Nuclear Utilities Software Management Group (NUSMG). The specific requirements and expectations are documented in the Functional Requirements Document (FRD). The plan also assures the design expressed in the Design Description Document (DDD) has been implemented in the application.

This is accomplished through the execution of a specific test procedure designed to verify that each of the elements of the FRD has been achieved. The tests described in this document have also been performed during V&V by Expert Microsystems. Agreement between the results of this test and Expert Microsystems' tests indicates that the application is performing correctly.

Expert Microsystems will in turn compare their results to those obtained by Argonne National Laboratory to verify the correct operation of the code elements provided by Argonne National Laboratory.

C.3.1.2 Goals

The goal of the V&V plan is to provide an organized methodology to execute and document the V&V process. This is conducted in a manner that complies with the accepted industry standards of the end users. The document is formally prepared on behalf of EPRI and then maintained at each user facility as part of their Quality Assurance program requirements for installed operating software. Functional elements of the software not covered by this document might be addressed by the individual user as application requirements dictate.

C.3.1.3 Scope

Software: The scope of this effort applies to the majority of the functional capabilities of the SureSense version 1.4 software. Some key computational elements of the software are the licensed property of Argonne National Laboratory. These sub-routines, also termed the MSET kernels, are not directly verified in this effort. The MSET kernels are covered by their own V&V documentation (Reference C.11.2.1) and are not a subject of this test.


Documentation: The scope of this document is limited to meeting the Software Quality Assurance requirements for software installed at nuclear power facilities for use in non-safety related applications. Accepted industry standards allow for a graded approach to the rigor of V&V testing and documentation. The end use of the subject software is intended to provide information that will be utilized in formal decision making processes. By definition, these processes cannot have any effect on the risks associated with the health and safety of the public.

The typical decision making process intended by this document is the calibration assessment of non-safety related sensors.

C.3.1.4 Waivers

This V&V process does not seek any waivers from approved industry standards.

C.3.2 References

All references are provided in Section C.11.

C.3.3 V&V Overview

C.3.3.1 Organization

Preparation

The development of this document and execution of the formal testing will be conducted by SystemsMax Engineering. The Principal Engineer assigned to the task is Richard Rusaw, PE.

Communication Control

Three primary lines of communication are maintained in order to conduct this task. The EPRI contract manager providing direction and project control is Dr. Ramesh Shankar. The primary technical contact with the software developer is the President of Expert Microsystems, Mr. Randy Bickford. The primary technical consultant for EPRI is a Principal Engineer with Edan Engineering Corporation, Mr. Eddie Davis.

Authority for Issue Resolution

The controlling authority to resolve technical issues that are directly associated with the software performance is the President of Expert Microsystems.

Issues pertaining to the V&V project requirements and control may be resolved by Dr. Ramesh Shankar of EPRI and his assigned consultant.

The controlling authority to resolve issues regarding the V&V report and testing is Mr. Richard Rusaw of SystemsMax Engineering.


Product Approval Authority

This document is the only "product" associated with the V&V effort and testing. Final approval for issuance is provided by Dr. Ramesh Shankar of EPRI.

V&V Relationship to Other Processes

There are two processes related to the development of this V&V document. The first is the software development at Expert Microsystems. The second is the MSET kernel software development at Argonne National Laboratory.

C.3.3.2 Master Schedule

The V&V activities associated with this document will have a limited useful functional life. The major factors that will define the period of applicability of this document are the rate at which the software undergoes major revisions, the completion date of the V&V document produced by Argonne National Laboratory, the completion date of the V&V document for version 1.4 of the software by Expert Microsystems, and the need to extend the applicability of this document to subsequent revisions.

This document is intended to meet the requirements of the users at EPRI member nuclear power facilities with the software installed and operational for use as an instrument calibration tool.

This document and associated tests will apply initially to the SureSense Diagnostic Monitoring Studio version 1.4. Since June 2000, earlier versions of the software have been used to build models, but not for on-line monitoring in the nuclear industry. The application has undergone several revisions to date. Version 1.4 was released to licensed users beginning in July 2002. This document will apply for non-safety related applications until it is superseded by subsequent revisions or release of the V&V document for nuclear safety related applications by Expert Microsystems.

C.3.3.3 Resource Summary

  • Staffing - This document has been produced by SystemsMax Engineering and authored by Richard Rusaw, Principal Engineer.
  • Facilities - This report and associated testing of the software was produced at the offices of SystemsMax Engineering.
  • Tools/Hardware - All activities associated with this document including software testing were conducted on a single workstation. The workstation is a Dell workstation with an Intel Xeon 1.7 GHz processor, 256 MB of memory, and greater than 25 MB of hard disk storage.

The operating system is Microsoft Windows XP Professional. The Java virtual machine is the Sun Java2 version 1.3.1_03 virtual machine.


C.3.3.4 Responsibilities

All responsibilities for the production of this document and all associated tasks belong to Richard Rusaw. Information and supporting reference documents were provided by Expert Microsystems and Edan Engineering.

C.3.4 V&V Required Activities

C.3.4.1 Traceability Review

Traceability applies to the SureSense Diagnostic Monitoring Studio V1.4 and the test data files provided by Expert Microsystems. Output files from this test can be compared to those produced by Expert Microsystems to verify the correctness of this application.

C.3.4.2 Interface Analysis

Interface analysis does not apply to this test and documentation due to the limited scope of the process.

C.3.4.3 Test Plan

The Test Plan is provided in Section C.6.

C.3.4.4 Test Plan Review

The test plan review will be conducted as an internal review by the staff at SystemsMax Engineering.

Final review of the draft report will be conducted by Edan Engineering on behalf of EPRI prior to final release.

C.3.4.5 Testing Location

Software testing will be conducted at the offices of SystemsMax Engineering.

C.3.4.6 Acceptance Criteria

The acceptance criteria will be based on the final results of the run data. The criteria will be provided by the software developer based on the test data files provided. Expected results will be defined in the Test Plan (refer to Section C.6). Deviations from these results must be resolved by the appropriate party as defined in Section C.6.


C.3.4.7 Periodic Test Plan

The Periodic Test Plan (refer to Section C.7) is designed to provide a framework for users to establish a periodic test of the software to ensure the software's adequacy for formal decision making processes.

C.3.5 V&V Administrative Requirements

C.3.5.1 Anomaly Resolution and Reporting

The Test Deviation Report (refer to Section C.9) will document test deviations and any resolution, if required.

C.3.5.2 Deviation Policy

All test deviations will be documented. A separate report will be sent to the software developer for resolution or justification. The response will be attached as part of the report.

C.3.6 V&V Reporting Requirements

C.3.6.1 V&V Summary Report

The V&V Summary is provided as part of this document (refer to Section C.2).

C.3.6.2 Deviation Policy

The reporting requirements concerning test deviations are described in Section C.9.

C.3.6.3 V&V Final Report

This complete document will be issued by EPRI after final acceptance.

C.4 Functional Requirements Document

This document establishes the baseline requirements for use of the software in the nuclear power industry for non-safety related applications. The primary functional requirement of the software is to provide the means to accurately determine the relative performance of monitored systems and components. This includes the capability of the user to define the performance requirements against which the monitored values are measured. When the user-defined acceptance criteria for a monitored asset are violated, the violation will be reported to the developer for resolution. In this sense, the data and model designs provided by the user in large part govern the on-line monitoring performance of the software. For the purposes of this document, software performance is defined as the ability of the software to perform its defined functions accurately and reliably given a set of test data and model designs.


C.4.1 General Functional Requirements

The software must function in the environment defined by the end users. In this case, the environment shall be defined as the industry, type of facility, hardware requirements, operating system requirements, and user qualification requirements.

C.4.1.1 Operating Environment

The software as defined by this document is intended for use in the nuclear power industry. The primary application will be to run the software at a first generation U.S. nuclear power facility.

The primary purpose for the industrial application is to generate electricity by the conversion of nuclear fission generated heat to electricity through a steam turbine/generator combination.

The intended users at the nuclear facilities are defined as all supporting staff whose responsibility includes the safe operation of the facility. This includes but is not limited to the Engineering staff, the Information Systems staff, the Maintenance staff, and the Operations staff.

C.4.1.2 Hardware Requirements

The software will be compatible with widely used general office-class personal computers (PCs).

The computers will be Pentium or equivalent based with a 500-MHz minimum operating speed.

Memory requirements will be at least 128 Megabytes RAM. Data storage requirements will be dependent on user requirements. The minimum software disk space available will be at least 25 Megabytes.

C.4.1.3 Operating System

The software shall be capable of operating on the most recent Microsoft Windows operating systems, including Windows 95, 98, ME, NT, 2000, and XP.

C.4.1.4 Supporting Software

The SureSense software is written in the platform-neutral Java language. It therefore requires the Java virtual machine software to be running on the user's PC/workstation. The Sun Java2 v1.3.1 or later is the recommended virtual machine and will be the virtual machine used in testing. The virtual machine is considered an element of the user's computer operating system.

C.4.2 Input Requirements

The SureSense modeling software requires user interaction at multiple levels to properly control the various operating parameters of the program. User inputs shall be described as those necessary to define and control the program in order to achieve the required output performance.


Input requirements are grouped according to the user's role. The role that the user defines at log in will determine the user interface window that will be displayed.

  • The Administrator is responsible for maintaining the list of valid users. Access is provided only to the Admin Window in this role.
  • The Monitor may monitor any data set from any trained model. This user is given access only to the Monitor Window in this role.
  • The Designer may build, modify, and delete models. The Designer is given access to the System Window and all of its associated windows that may be accessed via the System window's menu items.
  • The application may also be run from a command line or batch file. This method runs without user input via the user interface, but requires information (including security information) in a resource file. Data output may be obtained by logging in as a Monitor and accessing the output via the Monitor Window.

C.4.2.1 Administrator Input Requirements

I-1 A user logging in as an administrator will have access only to the Admin Window.

I-2 A means for adding, deleting or modifying a user and their permitted access role(s) will be provided.

C.4.2.2 Monitor Input Requirements

I-3 A user logging in as a Monitor will have access only to the Monitor Window.

I-4 A means for specifying a model and data set to run will be provided.

I-5 The user may select a Monitoring Run Report, each signal's Signal Report, or each signal's Estimate and Observation versus Time or Residual versus Time plots to be generated.

C.4.2.3 Designer Input Requirements

The following are required for model development:

I-6 A means for specifying the physical plant's specific modes of operation (referred to as Phases), which may be defined by any engineering parameter, such as power, temperature, or level.

I-7 A means for defining the data parameters that belong to the model.

I-8 A means for specifying the signals to be validated via the MSET algorithms.

I-9 A means for specifying the limit filters to be applied to train and test data on each signal.


I-10 A means to define the location and format of external data that will be used for model training and testing.

I-11 The means for resolving inconsistent external data stream names.

I-12 A means to specify the component associations for signals validated by the model.

I-13 A means to specify various parameter estimator default settings.

I-14 A means to specify the configurable fault detector settings of a user defined model.

I-15 A means to specify software options referred to as "preferences".

I-16 A means to create notes during model development.

C.4.2.4 Command Line Execution Input Requirements

I-17 The means to execute the application via a command line or batch file.

I-18 The means to specify the cases (model, data set) to run in a resource file.

I-19 A means to automatically archive data produced for each signal for each case run.

C.4.3 Data Management

Satisfactory performance relative to the user requirements for the SureSense software depends on a means to accurately and efficiently manage data. Data provided by outside sources in various formats must be effectively managed. Accurate internal program control of the data and user-specified outputs shall be defined. The following are required:

DM-1 The users will have the ability to specify the location of stored data files that reside on any drive on the user's machine or on any mapped network drive.

DM-2 The amount of training or testing data to reside in memory at one time may be specified by the user.

DM-3 The software shall be capable of reading input data in standard formats including, but not limited to, comma delimited (CSV) and signal data format (SDF).

DM-4 The software shall format the data in the manner required by the parameter estimator.

DM-5 User defined limit filters may be applied on a signal basis for both training and testing.

DM-6 A standard defined data set and associated model shall be provided to serve as a means to provide periodic testing of the software.


C.4.4 Computational Requirements

The following are required:

CR-1 A means to determine the logical completeness of a model.

CR-2 A pattern recognition algorithm that will provide analytically derived values for all monitored sensor signals (generate a parameter estimate).

CR-3 A means to learn the states of a process under normal operation.

CR-4 A statistically based analysis that can compare the derived parameter estimate with the associated measured value to detect signal faults.

CR-5 A means to obtain various statistical properties of the data to be used for model analysis and development.

CR-6 During training the software shall be capable of filtering training data using the user-specified limit filters.

CR-7 During testing, limit filters may be applied as fault detectors.

CR-8 A means to run cases without requiring the graphical user interface.

C.4.5 Security Requirements

Security is provided via a log-in screen that verifies the user's name, password and role against an encrypted list of authorized users. Each user's access to the application will be restricted based on his role.

S-1 All users will gain access to the application only through the login window.

S-2 The user must provide a user name, password and role.

S-3 Users may login only in their assigned roles, and have access to the application as specified for the defined role.

S-4 User names, passwords and roles will be assigned by the Administrator.

S-5 Three roles will be implemented. The Administrator will maintain the list of users via the Admin Window. The Monitor may run any trained model from the Monitor Window.

The Designer may build, train and run any model, but may not access the Admin or Monitor Window.

S-6 The Designer may build, modify, or train a model via the System Window and its child windows.


C.4.6 Output Requirements

The minimum requirements are provided below. The software's current output capabilities exceed what is necessary to accomplish the primary objective of system/component performance monitoring.

C.4.6.1 Designer Output Requirements

The following are required outputs for users in the Designer role:

OR-1 A means to display the status and current value of each signal shall be provided.

OR-2 A means to display when a signal fault occurs.

OR-3 A means to indicate when the system is operating in an unrecognized condition.

OR-4 A means to obtain Observation and Estimation vs. Time and Residual vs. Time plots for each signal.

OR-5 A means to display fault detector results in the form of a Monitoring Run Report which will display any phase changes and signal fault events.

OR-6 A means to report the model's elements.

OR-7 A means to display the model's current configuration.

C.4.6.2 Monitor Output Requirements

The following are required for users in the Monitor role:

OR-8 A means to obtain Monitoring Run Reports and individual Signal Reports for each case run.

OR-9 A means to obtain Observation and Estimation vs. Time and Residual vs. Time plots for each signal for each case run.

C.4.6.3 Administrator and Command Line Execution Output Requirements

No output is required for users in the Administrative role or for command line execution of the application.


C.5 Design Description Document

C.5.1 Software Design Requirements

C.5.1.1 Input Design Requirements

For ease of initial model development, all user-defined program settings will be provided with reasonable default values.

C.5.1.1.1 Administrative Role

The user who logs in as an Administrator has access only to the Admin Window.

An editable table in the Admin Window contains five columns. User names may be entered in the first column and passwords in the second column. The headings on the following three columns contain the three roles, Administrator, Monitor, and Designer. The user may select Yes or No in each column indicating the roles assigned to the user.

Three buttons will be displayed at the bottom of the window. The Apply button allows the user to Apply the current changes. The data will be saved as it appears in the window. The Delete button allows the user to delete the selected user and save the table. The Refresh button allows the user to refresh the table with the information from the last time the table was saved.

C.5.1.1.2 Monitor Role

A user who logs in as a Monitor has access only to the Monitor Window. The monitor window will appear with an empty model displayed when first opened.

At the top of the window are four menus. From the File Menu, the user may select a model by browsing to it or selecting a model from the Most Recently Used list. When the model is selected, the model's signals are displayed in a tree in the Monitor Window.

From the Monitor Menu, the user may select one of the model's data sets. The combination of a selected model and data set are referred to as a case.

A Signal report regarding the selected case may be obtained by selecting the desired signal item from the Report Menu. These reports may also be obtained by selecting the desired item from the signal's folder in the Monitor Window. A Monitoring Run Report may be obtained by selecting it from the Report Menu or by double right clicking the model folder.

The signal's Residual vs. Time plot and Observation and Estimation vs. Time plot may also be obtained by selecting the desired item from the signal's folder.


The case is only run the first time a plot or report is requested. Subsequent reports and plots are generated from data files created when the case was run.

C.5.1.1.3 Designer Role

The software will provide windows for specification of the following model elements:

C.5.1.1.3.1 Parameters

The software will provide the interface (input window) to define the connections between various signals and the external data signals. The Edit Parameter Window and child windows will allow entry of the Parameter Name, Units, and Aliases.

C.5.1.1.3.2 Phase Determiner

The software will provide a user interface to define the various ranges of operation in which the monitored system may operate. The model must automatically switch between defined phases without user intervention during data testing and analysis. The binding of the parameter(s) to the phase determiner variable(s) will be performed in this window. Individual phases may be enabled or disabled for validation.

C.5.1.1.3.3 Signals

The software will provide a user interface to specify the primary configurable program settings for each modeled signal.

Traditional limit filters may be configured for each signal. The filters provided are the Minimum and Maximum Range, the Maximum Positive and Negative Deltas between sequential values, and the Minimum and Maximum Noise (standard deviation) of a signal in windows of a size specified by the user.
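As an illustration only, the following Python sketch applies the kinds of limit filters just described to a signal and flags violating samples; the function name, argument names, and windowing choices are assumptions made for the example and do not reproduce the SureSense implementation.

import numpy as np

def limit_filter_flags(signal, min_val, max_val,
                       max_pos_delta, max_neg_delta,
                       min_noise, max_noise, window):
    # Returns a boolean mask of samples that violate any limit filter.
    signal = np.asarray(signal, dtype=float)

    # Minimum and Maximum Range
    flags = (signal < min_val) | (signal > max_val)

    # Maximum Positive and Negative Deltas between sequential values
    delta = np.diff(signal, prepend=signal[0])
    flags |= (delta > max_pos_delta) | (delta < -abs(max_neg_delta))

    # Minimum and Maximum Noise (standard deviation) over user-sized windows
    for start in range(0, len(signal) - window + 1):
        sigma = signal[start:start + window].std()
        if sigma < min_noise or sigma > max_noise:
            flags[start:start + window] = True
    return flags

In a scheme like this, flagged samples could be excluded from training data or treated as fault indications during monitoring, mirroring the training/monitoring distinction the filters support elsewhere in this report.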

Parameter binding is performed in this window. Selecting the parameter associated with the signal displays the parameter's units in the Unit text box.

Component binding may also be performed in this window.

Signal Confidence - the user's initial confidence that the signal is operating correctly for each new observation of the signal will be specified in this window. This value is used to determine when a model is operating outside of its training space.

Allowed Error - allowed error refers to a limit on the residual's magnitude. This value is a user-defined value that is displayed as a green line on the Residual vs. Time plot.

Maximum Error - maximum error also refers to a limit on the residual's magnitude. This value is a user-defined value that is displayed as a red line on the Residual vs. Time plot.


The number of points used in the residual smoothing calculation may be specified in the Err Avg Points text box.

Statistical hypothesis tests (likelihood ratios), referred to as mean and variance test variables, may be configured in this window. Evaluation of the alarms will be conducted by a BCP (Bayesian conditional probability) test. User-defined probabilities and the series length for this test will be specified in this window. Mean and Variance Test variables and BCP variables can be specified individually by phase.

C.5.1.1.3.4 Data Sets

The software will provide an input window to specify the primary configurable program settings for each applied data set.

A data set can consist of multiple data sources. These sources may be selected using a file browser. A data source reader appropriate for the selected file must be specified. SDF and CSV file readers are supplied with the software. Custom data source readers specific to a user's data can be used as long as they implement the SureSense data source reader interface. Custom data source readers should be verified and validated by the user prior to use with the software.

An Origin Time may be specified for each data source. All sources in a data set synchronize their first data point at each individual source's Origin Time.

Checking the Create Result Files box creates case-specific archive files from which monitoring data can be quickly obtained for plotting or reporting. This allows plots and reports to be generated without re-running the case. This box must be checked in order to run the data set from a command line or batch file. If running from the graphical user interface, checking this box allows plots and reports to be generated quickly.

Usage Type: If Training is selected, the data set is available for training or monitoring; if Monitoring is selected, the data set is only available for monitoring.

Synch Source Information: In data sets with multiple data sources, this sub-pane allows the user to specify which source will be the synchronizing source. All time and frequency filtering occurs with respect to this source. The time filter selects the data between the start and stop time, inclusive. The frequency filter applies the selected averaging type (Sample, Median Select, Moving Average) to each filtering interval and returns a single value for each interval, thereby reducing the number of observations by a factor equal to the window size.
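A minimal Python sketch of such a frequency filter is shown below; the three method names follow the text, but the reduction logic (non-overlapping intervals, first-sample selection for "Sample") is an assumption made for the example rather than a description of the SureSense implementation.

import numpy as np

def frequency_filter(values, window, method="Moving Average"):
    # Reduce each non-overlapping interval of `window` observations to a
    # single value, shrinking the observation count by the window size.
    values = np.asarray(values, dtype=float)
    n = (len(values) // window) * window
    blocks = values[:n].reshape(-1, window)
    if method == "Sample":
        return blocks[:, 0]              # first observation in each interval
    if method == "Median Select":
        return np.median(blocks, axis=1)
    if method == "Moving Average":
        return blocks.mean(axis=1)
    raise ValueError(f"unknown averaging type: {method}")

# Example: 1000 observations reduced to 100 by 10-point averaging
print(len(frequency_filter(np.arange(1000), window=10)))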

C.5.1.1.3.5 Parameter Estimator The software will provide an input window to specify the primary configurable program settings for parameter estimator in each Phase. The following controls will be provided:

Phase Name: The phase to which the parameter estimator settings will be applied.


Target Matrix Size: The requested size of the Training Matrix.

Estimation Method: BART, VPR or VSET are the three methods that can be selected. Variables specific to the selected method will be displayed in the Estimation Vars table. Values for the variables will be specified by the user in the Estimation Vars table.

C.5.1.1.3.6 Components

The software will provide an input window to specify the primary configurable program settings for each component. The following controls will be provided:

Component Name: The name of the component.

Display Label: Checking this box specifies that the component name will appear below the component image in the System Window.

Image: The image selected by the user to represent the component in the System Window.

Associated Signals: Signals may be associated with the component. The component and associated signals may be moved as a group in the System Window.

Ports may be defined in the component window, and are displayed in the Image pane. Ports may be configured with respect to the component image in the Image pane and will be displayed in the same manner in the System Window. Ports are beginning and ending points for lines defined in the System Window.

C.5.1.1.3.7 Preferences

Options that allow the user to control many of the software's operational variables are intended to optimize the execution of the program. This will allow the execution of a model to suit the environmental bounds established by the user. The Preferences dialog box shall contain the following user controls.

Design Tabbed Pane:

The Max Training Matrix Size and Max Data Block Points allow the user to control the amount of memory used for temporary data storage.

The Minimum ASP Enable Points allows the user to specify the minimum number of Training Data points required to perform fault detection using a non-Gaussian fault detector. The Optimize ASP box allows the user to specify whether or not the fault detection algorithm will be optimized.

The following statistical tests may be selected: Mean Positive Test, Mean Negative Test, Var Nom test, and Var Inv Test.

Enable Training Limits and Enable Monitoring Limits allow the user to turn all Training and all Monitoring Limit filters on or off.


Operation Tabbed Pane:

The Screen Refresh Rate specifies how often the System Window display is updated during monitoring.

The Model Confidence Level indicates the user's confidence that the model represents all expected normal operating states. This value is used to determine if the system is in an unrecognized operating condition.

Selecting Allow Signal Recovery allows the signal to recover from a failed state when enough evidence has been obtained to indicate that the signal is no longer in a failed state.

Selecting Mask Failed Signals indicates that the estimated value will be used in place of the observed signal once a failure has been determined.

Selecting Display Run Alarms and Display Run Failures causes every change of state for each signal to be reported. That is, if a signal starts to alarm, the first alarm will be reported; when the signal stops alarming, the cleared alarm will be reported. Between the reported alarm event and clear event, the signal will have alarmed for every observation.

The same behavior is exhibited for signal failures.

Project Tabbed Pane:

This tabbed pane allows the user to specify which files will be written when the model is run and whether or not to keep the training data in memory.

Keep Training Data allows the user to specify if the data from the last training run should be kept in memory.

Model Output Files allows the user to specify whether or not the ASCII files containing the model information should be created. These are the files containing the Training Matrix, and the mean and variance of the training data for each phase.

Training Results Files allows the user to specify whether or not the ASCII files containing the training information should be saved. These files contain the observation, estimation and residual error for each phase.

Monitoring Results files allows the user to specify whether or not the results of the ASP tests will be saved. If selected, ASCII files containing the alarm results, ASP indices and the BCP results will be created.

The three location text fields allow the user to browse to the user's preferred location for results files, data files and project files. The results files location is the preferred location for the files described above. The data files location is the default location of the user's training and testing data files. The projects files location is the default location for saving the user's projects.


Plot Tabbed Pane:

This tabbed pane allows the user to specify plot preferences.

Default Trace Style is the default symbol used during plotting, either scatter or line. Default AutoType specifies the type of plot to be displayed by default. This may be either the Observation and Estimation vs. Time plot or the Residual vs. Time plot. When a plot is generated from the pop-up menu in the System Window, the default plot will be displayed.

MaxPlotPoints is the maximum points to be plotted. If the selected plot contains more points than the specified value, the specified number of nearly evenly spaced points is selected for the plot.

C.5.1.1.3.8 Run Director

While monitoring in batch mode, the user will have the ability to stop, start, and step through the monitoring data during the analysis. In addition, the user may overlay the following on the signal values (a simple sketch of these overlays follows the descriptions below). In each case, the user must supply the signal name, mode, magnitude and start time.

Drift: Applies a steady drift of the specified magnitude per unit time to the signal.

Shift: Applies a steady shift of the specified magnitude to the signal.

Flat: Creates a flat signal of the specified magnitude.

Noise: Applies noise with a standard deviation of the specified magnitude to the signal.
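The following Python sketch shows one plausible way such overlays could be applied to a recorded signal when exercising a model's fault detectors; the function name, arguments, and time handling are illustrative assumptions and are not taken from the SureSense software.

import numpy as np

def apply_overlay(signal, mode, magnitude, start, dt=1.0, rng=None):
    # Overlay a Drift, Shift, Flat, or Noise fault on `signal` beginning
    # at index `start`; `dt` is the time between observations.
    out = np.array(signal, dtype=float)
    if rng is None:
        rng = np.random.default_rng()
    n = len(out) - start
    if mode == "Drift":      # steady drift of `magnitude` per unit time
        out[start:] += magnitude * dt * np.arange(n)
    elif mode == "Shift":    # constant offset of `magnitude`
        out[start:] += magnitude
    elif mode == "Flat":     # hold a flat value of `magnitude`
        out[start:] = magnitude
    elif mode == "Noise":    # added noise with standard deviation `magnitude`
        out[start:] += rng.normal(0.0, magnitude, n)
    else:
        raise ValueError(f"unknown overlay mode: {mode}")
    return out

# Example: add a 0.01 unit-per-observation drift to a flat 100-unit signal
drifted = apply_overlay(np.full(500, 100.0), "Drift", 0.01, start=200)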

C.5.1.1.3.9 System Window

The System window shall provide a capability to show the physical system relationships between the validated signals. This is accomplished by using lines to connect ports in the System Window.

C.5.1.2 Data Management Specifications

Data management encompasses the models, data sets, data manipulation, verification and validation test data and memory management.

1. The software shall provide the ability to specify the location and names of data files stored on any local or mapped network drive. The location may be browsed to in the Edit Data Sets Window's Open File Dialog. A default data subdirectory may be set in the preferences. This is the subdirectory that the Open File Dialog will initially open to.
2. The software shall have the capability to read multiple file formats. Two standard file readers that are supplied are the CSV and SDF file format readers. The user may select any applicable file reader that implements the SureSense data source reader interface and is located in the application's /plugins/reader subdirectory.



3. As a means to better define the range of allowable operating values, various limit filters may be applied to input data. A user-input window will allow editing of limit filter variables when defining the signals. Each limit filter may be applied specifically to training or testing data.

All Training and/or Monitoring filters may be enabled or disabled in the Preferences Window. Time and frequency filters may be specified in the Edit Data Set Window. These filters are applied whenever the data set is used.

4. The user may specify the amount of data held in memory at any time. Two variables in the Preferences window specify the number of observations to be read from the data file, and the number of observations to be held in the array from which the Training Matrix will be selected.
5. The software developer shall supply a standardized data set and associated model within the licensed software provided. The model and data will be used to execute periodic testing of the software in order to verify the accuracy of the outputs. Documentation shall include a description of the expected results.

C.5.1.3 Computational Specifications

Three methods of parameter estimation will be provided: BART, VPR, and VSET. Parameter Estimator variables specific to the selected parameter estimator will be specified by the user.

Default values will be provided. The selected parameter estimator will be used during Training and Monitoring.

Two Adaptive Sequential Probability (ASP) fault detection options will be provided. The option should be selected based on the distribution of the training data's residual. The Gaussian fault detector option should be used when the residual's distribution is very nearly Gaussian.

Otherwise, the Adaptive fault detector option should be used. The selected Fault Detector will be used during Training and Testing.

C.5.1.3.1 Verify

The software shall provide a functional step to determine the model's logical completeness. It will be capable of determining and reporting errors in the model.

C.5.1.3.2 Train

The software shall provide a functional step to execute the training process. The user may select one or all of the defined training data sets as the training basis.

A training algorithm will examine the training data and produce a training matrix whose data encompasses all expected normal operating states of the system. The user will specify the size of the training matrix in the Edit Estimator's Window. The statistical properties of the residuals over the training data will also be determined during training.
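One common way to pick a training matrix of roughly the requested size from a larger body of normal operating data is to retain the observations containing each signal's extreme values and then fill the remainder with evenly spaced observations. The Python sketch below shows that idea as a hedged illustration under those assumptions; it is not the selection algorithm actually used by SDMS.

import numpy as np

def select_training_matrix(data, target_size):
    # `data` has one row per observation and one column per signal.
    # Keep the observations holding each signal's minimum and maximum,
    # then fill remaining slots with evenly spaced observations until
    # approximately `target_size` rows have been selected.
    data = np.asarray(data, dtype=float)
    keep = set(int(i) for i in np.argmin(data, axis=0))
    keep |= set(int(i) for i in np.argmax(data, axis=0))
    fill = np.linspace(0, len(data) - 1, target_size, dtype=int)
    for idx in fill:
        if len(keep) >= target_size:
            break
        keep.add(int(idx))
    return data[sorted(keep)]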


C.5.1.3.3 Monitor

The software shall provide a functional step to execute the monitoring process. In batch mode, the user may select a data set to monitor from any of the Training or Monitoring data sets.

During Monitoring, phase change, alarm and failure events will be reported in the Monitor Report Window.

C.5.1.3.4 Analysis

The application shall provide a means to perform statistical analysis.

Data Statistics: This analysis calculates the number of observations, minimum, maximum, median, mean, standard deviation, maximum positive delta and maximum negative delta values for each signal in each phase. These statistics may be calculated over all data sets, all training data sets, all testing data sets, or any selected data set.

Detector Sensitivity: This analysis indicates the sensitivity of each signal's fault detectors to drift or noise. Each signal is drifted over the training state space until it alarms. The mean, standard deviation, the absolute value of the minimum and maximum drift at the point of alarm, the root mean square (RMS) value of the drift and the RMS drift as a percentage of the RMS of the signal's value are reported.

Mean Test: The mean test drifts the magnitude of each signal until an alarm occurs.
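The mean test amounts to a search for the smallest drift that the fault detector will flag. A minimal sketch of that search is shown below; `model.alarms(obs)` is a hypothetical stand-in for a trained model's single-observation fault-detection result, not the product's API, and the step size and loop bounds are arbitrary.

```python
import numpy as np

def drift_to_alarm(model, observations, signal_index, step=0.01, max_steps=10_000):
    """Illustrative mean test: add an increasing offset to one signal until the
    fault detector alarms, and return the offset magnitude at the alarm point."""
    for k in range(1, max_steps + 1):
        drift = k * step
        for obs in observations:
            drifted = np.array(obs, dtype=float)
            drifted[signal_index] += drift
            if model.alarms(drifted):     # assumed interface, not the product API
                return drift              # smallest applied drift that alarms
    return None                           # no alarm within the search range
```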

Var Test: The nominal variance test applies random noise to each signal, incrementally increasing the level of the noise until an alarm occurs. For the inverse variance test, the standard deviation of the noise is decreased until an alarm occurs.
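The variance tests can be sketched the same way, injecting zero-mean Gaussian noise of increasing standard deviation until an alarm occurs (the inverse test would decrease it instead). As before, `model.alarms` is an assumed interface, not the product's API, and the starting level and growth factor are arbitrary.

```python
import numpy as np

def noise_to_alarm(model, observations, signal_index,
                   start_sigma=0.01, factor=1.5, max_steps=50, seed=0):
    """Illustrative nominal variance test: add zero-mean Gaussian noise of
    increasing standard deviation to one signal until the detector alarms."""
    rng = np.random.default_rng(seed)
    sigma = start_sigma
    for _ in range(max_steps):
        for obs in observations:
            noisy = np.array(obs, dtype=float)
            noisy[signal_index] += rng.normal(0.0, sigma)
            if model.alarms(noisy):       # assumed interface, not the product API
                return sigma              # noise level at which the alarm occurred
        sigma *= factor                   # incrementally increase the noise level
    return None
```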

Training Matrix: This test calculates the same statistics as in the Data Statistics test, but calculations are performed over the vectors in the Training Matrix.

Run Data Sets: This test runs the specified data sets sequentially. The Monitoring Run Report for each data set is displayed in the Run Data Sets Report Window. All data sets, all training data sets, or all testing data sets may be specified.

Correlation: This test calculates the linear correlation coefficient for each combination of two signals.
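A minimal sketch of that pairwise calculation using standard Pearson coefficients via NumPy (the function name is illustrative):

```python
import numpy as np

def pairwise_correlations(data, names):
    """Linear (Pearson) correlation coefficient for each pair of signals.
    'data' is an (n_observations, n_signals) array; 'names' labels the columns."""
    corr = np.corrcoef(data, rowvar=False)
    results = {}
    for i in range(len(names)):
        for j in range(i + 1, len(names)):
            results[(names[i], names[j])] = float(corr[i, j])
    return results
```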

C.5.1.4 Output Specifications

C.5.1.4.1 Administrator Output

There is no output from the Administrative Window.

C.5.1.4.2 Monitor Output:

From the Monitor Window, the user may generate a Monitoring Run report, a Signal Run report, a Residual vs. Time plot, and an Observation and Estimation vs. Time plot for each signal. All reports may be saved or printed. All plots may be printed.

C.5.1.4.3 Designer Output:

The following outputs are generated for the Designer user:

1. A verification status report will be auto generated after completion of the model verification analysis. Verify is a menu item on the Run menu of the System Window. Verification results will be displayed in the Verification Report Window.
2. A Model Training Report will be generated for review or storage. The report will provide model element configuration including the number of vectors in the training matrix and the estimation method. The report will summarize signal parameter data including minimum value, maximum value, average value and standard deviation of values. This report window is displayed automatically when the training process is invoked.
3. During monitoring, the status of each signal defined in a model will be presented collectively in a window that displays the model's signals and components. The displays will be synchronized in time.
4. A graphical representation that indicates when a signal enters a faulted condition will be provided. The signal box and the outline around the signal name will turn red when a fault condition exists. Correction or elimination of the faulted condition returns the signal box to its normal condition and turns the outline of the signal name blue. In the Monitoring Report window, the status of each signal is reported as it changes. This report may be printed or saved to a file.
5. A graphical means to represent basic components that collectively define a model will be provided on the main system window. The System component outline will turn red when an unrecognized operating condition occurs. Faulted signal boxes and the outline of the signal name will turn yellow.
6. A graphical means to display each signal's operating values over the entire range of the analysis period will be provided. A plot of the signal over the analysis period may be obtained from the pop-up menu's Plot item on the signal of interest. A plot of the Observation and Estimation vs. Time or the Residual vs. Time may be selected from the AutoType menu. Other plots may be defined from the Plot Controls window.
7. Fault detector results may be obtained from either the Monitoring Run Report or from a plot.

The fault detector results may be plotted by specifying the desired result in the Plot Control window.

8. The configuration of each of the model's elements may be viewed in each element's Display Window. Items will be displayed in a tree structure.
9. All reports, plots and the System Window may be printed from the window's File>Print menu item.


C.5.1.5 Miscellaneous Design Requirements

C.5.1.5.1 User/Operating Environment

The software shall be designed for use in commercial nuclear power industry plants. These facilities represent the first generation of U.S. nuclear power generation facilities. The two major types are PWRs (pressurized water reactors) and BWRs (boiling water reactors). Major manufacturers include Westinghouse Electric (PWR), General Electric (BWR), Babcock & Wilcox (PWR), and Combustion Engineering (PWR).

C.5.1.5.2 Hardware Design Requirements

The software is designed to run on any workstation with the following minimum requirements:

166-MHz Pentium processor, 64-MB RAM, and 15-MB available hard disk storage.

The recommended hardware configuration is a 300-MHz Pentium processor, 128-MB RAM, and 25 MB of available hard disk storage.

C.5.1.5.3 Operating System Design Requirements

The software will run on a computer with a Windows 95, 98, ME, 2000, NT 4.0, or XP operating system.

C.5.1.5.4 Supporting Software Design Requirements

Sun Java2 v1.3.1_03 is the recommended virtual machine and will be the virtual machine used in testing.

C.6 V&V Test Plan

C.6.1 Purpose

The purpose of the V&V test is to prove by systematic means under controlled conditions that the software under test meets the requirements identified in the Functional Requirements Document. The test is comprehensive to the extent that all critical attributes are tested for functionality and repeatability. In addition, the testing is conducted as close to the expected user application environment as practical. The results of the testing are documented and reported as part of this V&V document. Results from this test will be compared against those provided by the software developer, who, in turn, has verified the accuracy of their results with Argonne National Laboratory. If the testing identifies anomalies in the performance of the software, a formal report is provided to the developer for resolution prior to final acceptance of the test.

C.6.2 References

References are provided in Section C.11.


C.6.3 Approach

The approach taken in the execution of this test is to execute the tasks or elements of the test in a linear sequence that best reflects the process executed by the end users. The test will utilize standardized data provided by the developer that has embedded within it the characteristics needed to test various attributes of the software. A model will be manually developed in order to test various user input capabilities. The data sources will be placed outside the program, representing end-user profiles. After the model is built, it will be trained utilizing developer-supplied training data. After model training, various outputs will be documented for the report. The trained model will then be used to run data file(s) that represent real-time plant data. The results will be documented.

C.6.4 Environmental Needs

The environmental needs for the conduct of this test are simple. They are limited to four primary elements.

1. Developer-supplied target software for testing.
2. Hardware - a PC/Workstation of sufficient capability to at least meet minimum developer recommended requirements. Preferred hardware would be representative of the typical end users and will therefore exceed the minimum requirements.
3. Operating system - the subject software is designed to run on the following operating systems: Windows 95, 98, ME, NT, 2000, and XP.
4. Supporting software - supporting software that is required to execute the program must be installed and operating on the test platform. The software to be tested is written in Java and must have the Java virtual machine installed on the computer in order to run. Sun Java2 version 1.3.1_03 is the recommended virtual machine. Other supporting software includes Microsoft products such as Excel and Notepad.

C.6.5 Test Deliverables

Test deliverables will primarily be provided as documents for other sections of this V&V document. Deliverables outside this document are limited to deviation notifications to the developer and requested copies of related documents. The test deliverables are:

1. Test Summary - to be included as part of the report summary.
2. Test Record/Results Data Sheet.
3. Test Deviation Report.
4. Test Deviation Resolution Report.


C.6.6 Test Items

1. Load test data files (outside of program files) - test data files will be provided by Expert Microsystems, received by SystemsMax Engineering, and loaded on the test computer.
2. Create model - the model will be developed in accordance with the EPRI Users Manual. The model is defined in Appendix B. A model file (*.svm) is also provided by the software developer. The test model will be defined as follows:
  • Define parameters - parameters map internal data streams to external data sources and provide name resolution. After completion, print the Parameters list report for record.
  • Define phases and bind to parameters. After completion, print the Phases list report for record.
  • Define signals and bind to parameters. After completion, print the Signals list report for record.
  • Define data sets and bind to parameters. After completion, print the Data Sets list report for record.
  • Define components and bind to signals. After completion, print the Components list report for record.
  • Define estimators for each phase - default values will be tested. Optional methods are outside the scope of this test. After completion, print the Estimators list report for record.
3. Verify model.
4. Train model.
5. Run data correlation.
6. Run/test data.
7. Verify results.

C.6.7 System Testing

System testing does not apply to this test. The subject software under test is a stand-alone product. The software is not being provided as part of a complete system of integrated hardware and software.

C.6.8 Acceptance Testing

Acceptance testing does not apply to this test. Acceptance testing of any software is typically executed under a different process of specification, development, testing, and acceptance.

Acceptance testing is normally utilized when a "user" identifies a software need and executes the process of software development, usually by a third party. The user would execute the acceptance test as part of the final delivery process. In this case the software is fully developed and specified by the developer. The "users," as represented by EPRI, are in effect purchasing the software "off the shelf." The V&V process for this application is to meet user requirements for software use.


C.6.9 Requirements Testing

The requirements for testing under this document are developed through the Functional Requirements Document (Section C.4) and the Design Description Document (Section C.5). The requirements are then itemized in this Test Plan. The Test Procedure then follows with details on the step-by-step process necessary to test and verify the execution of each requirement of the software. Failure to meet each specific requirement will be documented with formal reporting and problem resolution for each requirement addressed. The final acceptance of the test results is made at the user level. Successful completion of this test fulfills the requirements testing requirement.

C.6.10 Suspension and Resumption Criteria

Suspension of the test will be invoked if test results deviate from expected results. The deviation will be evaluated and documented for the report. A notice will be forwarded to the developer for comment and resolution if required. Upon resolution of a deviation, the test will resume at a step prior to the deviation. If the deviation is deemed minor by the tester, then the test may continue to completion with resolution of the deviation(s) after the test. If the resolutions require significant effort by the developer, then the test may be repeated.

C.7 Periodic Test Plan

C.7.1 Purpose

The periodic test plan is required by the accepted industry standards for software quality assurance programs in the nuclear power industry. Periodic verification of the performance and accuracy of the critical attributes of the subject software is necessary based on the wide variations of operating environments provided by the end users. The attributes that require testing are the same as those tested and documented with this document.

C.7.2 References

References are provided in Section C.11.

C.7.3 Precautions and Limitations

This test does not verify the requirements. It is only performed to verify that the behavior of the application has not changed in the time period following V&V.

C.7.4 Testing Environment

The periodic testing shall be conducted on the same hardware and operating system the software is operating on during normal operations. Each PC or workstation that has the qualified software installed must be tested in accordance with the requirements of this test plan. A PC with qualified software installed that is removed from the normal environment governed by quality assurance programs must be tested upon return to the normal operating environment prior to using the program for decision-making processes.

The expected "environment" in which the software will be operated and tested is within the confines of a U.S. nuclear power facility. The "controlled area" that defines the environment must be under the controlling authority of the plant software quality assurance program.

C.7.5 Reporting Requirements

The reporting requirements are necessary to identify and document the test parameters. The test should be documented on a Periodic Test Data Sheet. The Data Sheet should document the following:

  • Identification of the application being tested.
  • Identification of the person conducting the test.
  • Authorization to test if required.
  • Description of the test.
  • Identification of the acceptance criteria.
  • The test results.
  • Demonstration of whether the results are satisfactory.
  • Signature by the Tester and Reviewer.

C.8 V&V Test Procedure

C.8.1 Purpose

The purpose of this test procedure is to define the specific steps to carry out the execution of the V&V tests. Each step will be fully carried out to completion prior to moving to the next step.

Signoff is required after each step is completed with the expected results. Deviations from the expected results will be documented in the Test Deviations Report. The test will not be considered final until all Deviations have been adequately resolved.

C.8.2 Pre-Test Procedure

1. Verify all required hardware is operational.
2. Verify all required software is loaded on the test computer.
  • SureSense Diagnostic Monitoring Studio (SDMS)
  • Javasoft\JRE\v1.3.1_03
3. Verify all test data files are loaded in their respective locations.
4. Manually review the CSV data files for data quality.


C.8.3 Test Procedure

Test 1 Start the application. In the Log-in window use TESTER and PASSWORD as the user name and password. Select the role, Designer. On successful log-in the System Window will be displayed.

Test 2 Define Parameters. Do not define the aliases. Save the model as StandardTest. Print the Parameter list report.

Test 3 Select the StandardPhaseDeterminer as shown in Appendix B. Bind the input variable to the specified parameter. Print the Phase list report.

Test 4 Set the default data location in the Preferences window to the appropriate directory.

Define Data Sets. Bind the parameters by binding the parameter to the data stream with the same name as the aliases. Print the Data Set list report.

Test 5 Define the Signals as shown in Appendix B. Bind the Signals to the Parameters. Print the Signals List Report.

Test 6 Define the Parameter Estimators. Print the Estimators List Report.

Test 7 Components are optional model elements. Define a component, bind signals to the component. Verify in the System Window that the signals maintain their position relative to the component when the component is moved.

Test 8 Set the Preferences.

Test 9 Create a set of Notes and save them on the model. Notes are an optional model element that may be used to document the model development process. Close the Notes Window and re-open it. Verify the Notes are correct.

Test 10 Examine the Display Windows. Verify the information displayed is correct.

Test 11 Verify the Model. Print Verification Report for records.

Test 12 Train the model on Set1. Print the Model Training Report.

Test 13 Run the model using Set2. Print the Monitor Report.

Test 14 Run the model using Set2_Index. Print the Monitor Report.

Test 15 Run the model using Set1_Split. Print the Monitor Report.

Test 16 Run the model using Set3. View the System Window while the model is running.

Verify faults are indicated in the System Window. Print the Monitor Report.


Test 17 Run each of the analyses in the Analysis Menu. Print the reports.

Test 18 Plot the Observation and Estimation vs. Time plot for S5.

Test 19 Plot the Residual vs. Time plot for S5.

Test 20 Plot Last Train for S5.

Test 21 Apply a drift of magnitude 1 to S2, specify the first time point as the start time. Run Set3 and verify that the unmodeled condition is displayed. The System component outline should be red when the unmodeled condition occurs, and the failed sensor's boxes and name outlines should turn yellow. When the system recovers from the unmodeled condition the System component outline should turn blue, and the recovered signals' name outlines should turn blue and the signal box should turn white.

Test 22 Turn on the Limit Filters for training and monitoring. Modify Set3's usage to Training. Train on Set3, and then Test on Set3.

Test 23 Login in the Monitor role. Verify access to all other windows, except a Plot window, is denied from this window. Run Set2_Index from the Monitor Window. Obtain the Monitoring Run Report, each of the signal reports, and the Observation and Estimation vs. Time and Residual vs. Time plots for each of the signals.

Test 24 Login as an Administrator. Verify no other windows may be accessed from this window. Add a user (user name, password, role(s)) to the user's list. Click the Apply button. Close the application. Restart the application and log-in as an Administrator.

Verify the user's data is displayed correctly. Delete the user that was added earlier.

Close the application. Restart the application and log-in as an Administrator. Verify the user has been deleted. Add another user, but don't click Apply. Click Refresh and verify the table displays the last saved table. Apply and Delete both save data to the file.

Test 25 Close the application.

C.9 V&V Test Deviation Report

C.9.1 Purpose

The purpose of the test deviation report is to document all deviations from the test procedure identified during the testing process. Any deviations reported will be forwarded to the parties responsible or designated in this document as qualified to resolve or address deviations. If any deviations are not resolved, they will be recorded in this report.


C.9.2 Results

The results of the test conducted under the direction of this document were found to be fully acceptable. No deviations from any test items were identified. The software executed all functional requirements as expected. The tested results were compared to the expected results provided by the developer and were found to be 100 percent consistent. Notification of the test results will be provided to the developer. A formal deviation report will not be generated due to the successful test results.

C.10 V&V Test Data Record

C.10.1 Purpose

The purpose of the Test Data Record is to document the completion and results of each step defined in the Test Procedure. Each step will be executed and then recorded here prior to moving on to the next step in the procedure. A printed copy with the results left blank will be used during testing. After completion of the software test the results will be transcribed to the digital copy for publication.

C.10.2 Record

Reference 8.2-1 Y Yes/No - All hardware operational?

Notes: All hardware checked normal

Reference 8.2-2 Y Yes/No - All Software Loaded?

Notes: SDMS v1.4.x, Phase Determiner, Java 2 Runtime Environment Standard Edition v1.3.1_03 (Sun Microsystems, J2RE v1.3.x)

Reference 8.2-3 Y Yes/No - All test data files loaded?

Notes: C:\V&V Data\Standard Test

Reference 8.2-4 Y Yes/No - Review CSV test data files for quality?

Notes: Synthetic data; A_0 thru A_5; sets 1, 1a (A_0 to A_2), 1b (A_3 to A_5), 2, 2 (index), 3

Reference 8.3-Test 1 Y Yes/No Execute Login
Notes: Login Complete

Reference 8.3-Test 2 Y Yes/No Define the model Parameters.

Notes: Parameters Defined

Figure C-2 Parameters Window (screenshot not reproduced)

Reference 8.3-Test 3 Y Yes/No Standard Phase Determiner
Notes:

Figure C-3 Phases Report (screenshot not reproduced)

Reference 8.3-Test 4 Y Yes/No Define the model parameters (reference csv files).

Notes: P0 thru P5

Figure C-4 Parameters Report (screenshot not reproduced)

Reference 8.3-Test 5 Y Yes/No Define the signals and bind to parameters.

Notes: Task Complete

Figure C-5 Signals Window (screenshot not reproduced)

Figure C-6 Signals Report (screenshot not reproduced)

Reference 8.3-Test 6 Yes/No Define Estimators for each phase.

Notes: Estimators Defined Per Appendix B

Figure C-7 Parameter Estimators Report (screenshot not reproduced)

Reference 8.3-Test 7 Yes/No Edit Signal Window to define: Component, MTBF, Max Error, Min Error, Phase, Test Vars, and BCP Vars.

Notes: Signals Edited. See Signals Report below.

Figure C-8 Signals Report (screenshot not reproduced)

Reference 8.3-Test 8 Y Yes/No Enter preferences per Appendix B
Notes: Preferences entered as required

Reference 8.3-Test 9 Y Yes/No Enter Notes; check for compliance.

Reference 8.3-Test 10 Y Yes/No Examine Display Windows
Notes: All Display Windows are normal

Reference 8.3-Test 11 Y Yes/No Verify model
Notes: Model Verified

Figure C-9 Verification Window (screenshot not reproduced)

Reference 8.3-Test 12 Y Yes/No Train Model on Set 1
Notes: Training Complete

Figure C-10 Training Report - Operating_50 (screenshot not reproduced)

Figure C-11 Training Report - Operating_100 (screenshot not reproduced)

Reference 8.3-Test 13 Y Yes/No Run Set 2 on trained model
Notes: Run Complete

Figure C-12 Set 2 Monitor Run Report (screenshot not reproduced)

Reference 8.3-Test 14 Y Yes/No Run Set 2_Index
Notes: Set 2 Index Run Complete

Figure C-13 Set 2 Index Run Report (screenshot not reproduced)

Reference 8.3-Test 15 Y Yes/No Run Set 1 Split
Notes: Set 1 Split Run complete

Reference 8.3-Test 16 Y Yes/No Run Set 3 against model
Notes: Set 3 Run against model

Figure C-14 Set 3 Run Report (screenshot not reproduced)

Reference 8.3-Test 17 Y Yes/No Run all analyses and record
Notes: All analysis runs successful

Figure C-15 Fault Detector Sensitivity Analysis Report (screenshot not reproduced)

Figure C-16 Data Set Analysis Report (screenshot not reproduced)

Figure C-17 Training Matrix Analysis Report (screenshot not reproduced)

Figure C-18 Correlation Analysis Report - Operating_100 (screenshot not reproduced)

Figure C-19 Correlation Analysis Report - Operating_50 (screenshot not reproduced)

Reference 8.3-Test 18 Y Yes/No Plot Observation & Estimate for S5
Notes: Plot Run

Figure C-20 Estimation and Observation Plot for S5 (plot not reproduced)

Figure C-21 Residual Plot for S5 (plot not reproduced)

Reference 8.3-Test 20 Y Yes/No Plot last Training for S5
Notes: Plot Complete

Figure C-22 Last Training Plot for S5 (plot not reproduced)

Reference 8.3-Test 21 Y Yes/No Apply Drift and Verify
Notes: Drift applied successfully

Reference 8.3-Test 22 Y Yes/No Install Limit Filters & Verify
Notes: Limit Filters applied and working correctly

Reference 8.3-Test 23 Y Yes/No Login in Monitor Role
Notes: Login complete; no problems identified

Figure C-23 Run Report for Set 2 Index (screenshot not reproduced)

Figure C-24 Signals Report for Set 2 Index (screenshot not reproduced)

Figure C-25 S5 Estimate and Observation Plot for Set 2 Index (plot not reproduced)

Figure C-26 S5 Residual Plot for Set 2 Index (plot not reproduced)

Reference 8.3-Test 24 Y Yes/No Exercise Administrator control functions
Notes: Logged in and made additions and deletions to the user files.

Reference 8.3-Test 25 Y Yes/No Perform Logout sequence
Notes: Logout/Exit complete

Testing Complete 6/15/2002

C.11 References

C.11.1 EPRI References

1. EPRI Interim Report, SureSense Diagnostic Monitoring Studio Users Guide, Version 1.4, June 2002.
2. EPRI Interim Report, Plant System Modeling Guidelines for Implementing On-Line Monitoring, April 2002.

C.11.2 Miscellaneous References

1. Argonne National Laboratory-Reactor Analysis Engineering Division:

A. Software Configuration Control Procedure for the Multivariate State Estimation Technique Software System (MSET), November 26, 2001, Revision 1
B. Software Requirements and Specifications for the Multivariate State Estimation Technique Software System (MSET), May 21, 2002, Revision 2
C. Software Validation & Verification for the Multivariate State Estimation Technique Software System (MSET), February 15, 2002, Revision 1
D. Software Test Plan for the Multivariate State Estimation Technique Software System (MSET), June 25, 2002, Revision 0

2. Expert Microsystems:

The following documents describe the quality assurance processes associated with the SureSense Diagnostic Monitoring Studio.


A) Expert Microsystems Software Quality Assurance Plan, Document No. 2002-4470, Rev. 1.0
The Software Quality Assurance Plan (SQAP) sets forth the process, methods, standards, and procedures used to perform the Software Quality Assurance function for the SureSense software.

B) Expert Microsystems Software Configuration Management Plan, Document No. 2001-4471, Rev. 1.0
The Software Configuration Management Plan (CMP) sets forth the policy, procedures, and processes used to accomplish configuration management for the SureSense software.

C) SureSense Diagnostic Monitoring Studio Software Requirements Specification, Document No. 2001-4489, Rev. 1.4
This document specifies software requirements for version 1.4 of the SureSense Diagnostic Monitoring Studio software.

D) SureSense Diagnostic Monitoring Studio Software Verification Test Plan, Document No. 2001-4479, Rev. 1.4
The purpose of this document is to define test procedures verifying that the software meets all requirements defined in the Software Requirements Specification (document C above). Guidelines are provided for Unit Testing, Integration Testing, System Testing, Regression Testing, and Acceptance Testing.

This document provides step-by-step test instructions. Variables and data are specified as well as expected results. It is appropriate to use this document for System and Acceptance Testing. Tests in this document correspond to the Requirements Specification. A Traceability Matrix is included in this document to verify that testing of all items listed in the requirements document is performed.

3. Nuclear Utilities Software Management Group (NUSMG)

NUSMG undertook a significant effort to provide member utilities with guidance documents designed to clarify and standardize the requirements for the management of software and hardware in the nuclear industry. This effort was carried out by representatives from each of the member utilities, who produced limited-distribution guidance documents for use by the member utilities. Those documents were used as a framework to conduct and produce this effort. Authorization for use as a reference has been extended through the utility participants in both the NUSMG and the EPRI Online Monitoring Project.

A. Guidance Document for Nuclear Software Quality Assurance Program for Nuclear Power Generation Facilities, Revision 0, July 1999.
B. Guidance for the Dedication of Commercial Grade Computer Software, Revision 5, January 1995.

C. Quality Software Procurement Guideline, Revision 0, November 1994.

D. Guidance Document for a Graded Approach to Software Quality Assurance, Revision 0, October, 1995.


C.12 Glossary

This glossary provides definitions for technical terms used in the report or otherwise applied to on-line monitoring. Abbreviations used in the body of the report are also included in the glossary.

A

Acceptance Criteria - Guidelines used during testing and V&V to determine whether the delivered software provides acceptable performance and functionality.

Acceptance Testing - Formal testing conducted to determine whether the subject C/W satisfies the requirements specified in the Functional Requirements Document.

Allowed Error - A user-defined value that identifies a limit of deviation between the measured value and the estimated value of a parameter. It is measured in the same engineering units as the parameter. It typically would represent a point that would require some type of administrator action.

C

Calibration - The process of adjustment, as necessary, of the output of a device such that it responds within a specified tolerance to known values of input.

Channel - An arrangement of components and modules as required to generate a single protective action signal when required by a generating station condition, a control signal, or an indication function.

Commercially Available Software - Software products that have been developed, tested, and marketed (shrink-wrapped) for mass applications; for example PC operating systems (DOS, MS Windows, etc.), spreadsheets, data base managers, word processors, compilers, and/or software used to drive analytical equipment.

Computerware (C/W) - A broad general term that includes any of the following categories:

Software - A program of instructions written to perform a specific task when installed or accessed by a computer. Software is transportable and may be copied, edited, or erased.

Firmware - An alternate form of instructions written to perform specific tasks when installed or accessed by a computer/processor. Firmware is typically recorded on a hard device and may not be edited, copied or erased under normal conditions.

Hardware - Defines all equipment utilized to compute, control, store, edit, display and transport data, information, software instructions, and firmware instructions.

Data - Information collected and stored that is representative of any given system.


Special purpose devices - Macros and formulas in spreadsheets and document processing programs, database definitions such as rules, triggers, constraints, and table layouts whose resultant data is used as design input or to directly control plant equipment.

C/W - see Computerware.

C/W Functional Requirements Document (FRD) - The document that specifies the functions, performance expectations, design constraints, attributes or other characteristics of the computerware and its external interfaces. The document may also include procedures for determining whether or not these provisions have been satisfied.

C/W Testing - The process of exercising or evaluating a computerware system or individual component, by manual or automated means, to verify that it satisfies specified requirements or to identify differences between expected and actual results.

D

Data - Information that may control the functionality of computerware such as a file or table of routine names, values, or algorithms that should be used for a selected menu choice.

DDD - Design Description Document.

F

Fault Detector - Fault detection evaluates signal deviations from the estimated value. If deviations are larger than model settings, alarms are generated. Conditional probability filters (BCP) filter the fault alarms against user-defined confidence levels. Failure decisions are based on the number of alarms within a given sequence of observations. This process is essentially a statistical analysis to determine if the monitored signal is consistent with normal behavior defined by the trained model.

FRD - Functional Requirements Document.

I

Instrument Channel - An arrangement of components and modules as required to generate a single protective action or indication signal that is required by a generating station condition. A channel loses its identity where single protective action signals are combined.

M

Max Error - The maximum error is a user-defined value that identifies a limit of deviation between the measured value and the estimated value of a parameter. It is measured in the same engineering units as the parameter. It typically would represent a point that the monitored parameter cannot exceed without administrator action to correct the error.


Model - The selected signals that are collectively evaluated and validated as a group by the SureSense software.

MSET - Multivariate State Estimation Technique.

N

NRC - Nuclear Regulatory Commission.

O

OLM - On-line monitoring.

On-Line Monitoring - An automated method of monitoring instrument performance and assessing instrument calibration while the plant is operating.

P

Parameter Estimate - The best estimate of the actual process value.

Pattern Recognition - The ability of a system to match large amounts of input information simultaneously and generate a categorical or generalized output.

Periodic Testing - The process of periodically exercising C/W against a Validation Document to certify that it still produces the same results as that produced by the V&V Test.

Phases - Phases are user definable operating ranges of a modeled set of parameters. They represent a different set of parameter ranges trained within a given model and are referenced off a single representative parameter.

S

SDMS - SureSense Diagnostic Monitoring Studio.

SureSense - A commercially supported implementation of the MSET software originally developed by Argonne National Laboratory.

T

Test Plan - A document that prescribes the approach to be taken for intended testing activities. The plan typically identifies the items tested, the testing to be performed, test schedules, personnel requirements, reporting requirements, evaluation criteria, and any risks requiring contingency planning.


V

Validation - The process of evaluating a system or component during or at the end of a development process to evaluate whether it satisfies specific acceptance criteria.

Verification - The process of evaluating a system or component to determine whether the products of a given development phase satisfy the conditions imposed at the start of that phase.

V&V - Verification and validation.

V&V Plan - A document that details the method(s) for evaluating the C/W to provide assurance that the C/W performs all the requirements identified in the Functional Requirements Document.

Version - A unique identifier that identifies the configuration control revision of the subject software.

C.13 Standard Test Project Results Summary

Provided by Expert Microsystems:

Standard Test Project Results Summary

This project uses the StandardPhaseDeterminer and contains 5 data sets.

PHASE DETERMINER

The phase is determined based on a single value. If the value is greater than 75, the system is in OPERATING_100. If it is less than or equal to 75, it is in OPERATING_50. The value of Parameter P0 determines the phase. In this project, P0 is not validated and is used only for determining the phase.
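The phase logic described above is simple enough to restate directly; the following sketch does so in Python. The function name is illustrative, and the threshold and phase names come straight from the description above.

```python
def standard_phase(p0_value):
    """Phase logic described above: P0 greater than 75 selects OPERATING_100,
    otherwise OPERATING_50 (values at or below 75)."""
    return "OPERATING_100" if p0_value > 75 else "OPERATING_50"

# Example: values typical of the full-power and half-power portions of the data.
assert standard_phase(91.1) == "OPERATING_100"
assert standard_phase(45.5) == "OPERATING_50"
```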

DATA SETS The 5 data sets in this project are very similar. Each data set contains six data streams and a time stream. There are 30240 observations in each data set. In each data set, the data is in OPERATING_100 for the first 10080 observations (approximately 1 week in XLT time), in OPERATING_50 for the next 10080 observations and then returns to OPERATING_100 for the final 10080 observations.

Each signal is comprised of two parts, a deterministic component and a stochastic component.

The deterministic portion of the signal is identical in every signal in every data set. Variations in the stochastic portion of each signal yield six unique signals in each data set.

DATA SET DESCRIPTION The data sets are described below:

1. Set1 is the original data set. Each data stream is constructed from the same sum of sines. The stochastic portion consists of noise with a Gaussian distribution with a variance of 1.
2. Set1_split is identical to Set1, except that the data has been split into two data files. The first file contains data streams A_0 to A_2. The second file contains data streams A_3 to A_5.
3. Set2 is similar to Set1, but differs only in the seed used to generate the noise.
4. Set2_index is identical to Set2, except that the time track was replaced with an index (integers from 0 to 30239).
5. Set3 is similar to Set2, except that a small drift has been applied to A_5. This signal first drifts upward for the first 10080 observations at approximately 0.00052 units/observation and then downward at approximately 0.002 units/observation for the remainder of the observations. A sketch of how such a drifted signal might be generated follows this list.
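The following sketch shows how a data stream of this form might be generated. The specific sum-of-sines terms, periods, and seed are assumptions chosen for illustration; they are not the actual test data generator used to build these data sets.

```python
import numpy as np

def make_signal(n_obs: int = 30240, seed: int = 0, drift: bool = False) -> np.ndarray:
    """Illustrative test signal: a shared deterministic sum of sines plus
    Gaussian noise with a variance of 1; optional piecewise drift as in Set3."""
    rng = np.random.default_rng(seed)
    t = np.arange(n_obs)                       # one observation per minute
    deterministic = 10 * np.sin(2 * np.pi * t / 1440) + 3 * np.sin(2 * np.pi * t / 360)
    noise = rng.normal(0.0, 1.0, n_obs)        # Gaussian, variance of 1
    signal = deterministic + noise
    if drift:
        up = 0.00052 * np.minimum(t, 10080)    # upward drift for the first 10080 observations
        down = 0.002 * np.clip(t - 10080, 0, None)
        signal += up - down                    # then downward for the remainder
    return signal
```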

EXPECTED OUTPUT Included with this report are the following output files:

1. Model TrainingReport.txt
2. Set2 MonitoringRun Report.txt
3. Set2_Index MonitoringRun Report.txt
4. Set3MonitoringRunReport.txt

Training

When the model is trained on Set1, we expect to see 20160 data points in OPERATING_100 and 10080 data points in OPERATING_50. The training report is Model TrainingReport.txt.

In the following Monitoring examples, Test data sets may contain data identical to the Training data. However, the statistics reported in the Train or Test report will not be identical, because statistics generated during training are calculated over the training data with the D-Matrix vectors removed.

Monitoring Set1_split

The purpose of this test is to verify that data can be read from two data files. The data is the same; therefore, we expect there will be no failures.

Training on Set1 and then training on Set1_split should yield the same results as in the Training Report generated in Section 0. Similarly, monitoring Set1 and then monitoring Set1_split should yield the same results, which can be found in Setl_Split MonitoringRun Report.txt.

Set2

The purpose of this test is to verify that similar data will not yield false alarms. Set2 is very similar to Set1, except that the seed used to create the noise portion of the signal is different in each set. Because the data is well correlated both between sets and between signals, we expect to see no false failures, although we do expect to see single-cycle alarms. In fact, we see seven single-cycle alarms and no failures. The monitoring run report for this test is Set2 Monitoring Run Report.txt.

Set2_Index

The purpose of this test is two-fold. The first is to verify that the application can read both Excel time and indexed time. The second is to determine that the sampling function works correctly. In the run report, we must verify that there are half as many points processed as in Set2, and that the times are reported as integers. The run report for this test is Set2_Index MonitoringRun Report.txt. Corresponding time may be calculated for each index by using a conversion factor of 1 minute per time period.

Set3

This was originally the same data as in Set2. A small drift has been applied to A_5. The signal drifts upward for 10080 observations and then drifts down for the remainder of the observations.

We expect to see a mean positive failure in the first OPERATING_100 phase, followed by a recovery in the OPERATING_50 phase, followed by a mean negative failure which continues into the final OPERATING_100 phase. The run report for this test is Set3 Monitoring Run Report.txt. The associated plots are Observation and Estimation vs. Time and Residual vs. Time.

Note: Plots not provided, original on record.

D

UNCERTAINTY ANALYSIS OF THE INSTRUMENT CALIBRATION AND MONITORING PROGRAM ALGORITHM

The EPRI Instrument Calibration and Monitoring Program (ICMP) provides an on-line monitoring method to verify performance of redundant instrument channels. The ICMP process was summarized in On-Line Monitoring of Instrument Channel Performance, Volume 2 [5] and is described in detail in [9, 10, 11]. The following sections present an uncertainty analysis of the ICMP redundant monitoring algorithm.

D.1 ICMP Overview

The objective of the Instrument Calibration and Monitoring Program (ICMP) is to provide an alternative method of meeting current instrument calibration requirements for the nuclear power industry, and a method of enhanced monitoring for redundant instrument channels. The term redundant sensors refers to a particular parameter being measured by more than one sensor.

Important parameters, such as reactor coolant flow in a nuclear power plant, are often measured by a set of redundant sensors.

The Basis for the Method was completed in 1993 [9], and a Monte Carlo simulation and uncertainty analysis was completed in 1995 [11]. The ICMP system calculates and records the difference between the process parameter estimate and the measured value reported by each of the redundant instrument channels. These differences describe the error associated with each instrument channel. A faulty sensor detection program is implemented to report an alarm during on-line plant operations for inconsistent instrument channels.

D.2 ICMP Uncertainty Analysis Methodology

ICMP uncertainty depends on several factors, including:

  • Accuracy (or uncertainty) of each monitored channel
  • Number of redundant channels
  • Consistency check criteria

Each of the above factors requires consideration as part of setting up ICMP for a given parameter.

The following sections discuss these factors in more detail.

D.2.1 Accuracy and Number of Monitored Channels

The parameter estimate is calculated from a weighted average of the individual monitored channels. Each individual channel has an uncertainty associated with its measurement, which leads to the conclusion that the parameter estimate must also contain some amount of error. In this regard, the parameter estimate represents the best estimate of the true process value, but there is still some uncertainty associated with the actual process value.

An averaging type of on-line monitoring, such as ICMP, typically monitors two to five channels for a given parameter. In the simplest implementation of on-line monitoring, the parameter estimate would be the average of the monitored channels. In the case of a simple average, the uncertainty of the parameter estimate can be calculated given the following assumptions:

  • Each instrument loop represents a random and independent measurement of the same process.
  • Instrument performance can be modeled as normally distributed.
  • Each instrument is initially performing within specification.

The following example is provided to demonstrate the amount of measurement uncertainty that is present after redundant measurements are averaged to estimate the process value.

Example D-1 Assume that four channels monitor some process parameter and each channel has a total uncertainty of +/-3 percent; no channel exhibits more drift than any other channel. This uncertainty of +/-3 percent accounts for all of the expected individual contributors to uncertainty for the entire instrument loop from sensor to display or bistable.

The method of uncertainty analysis presented in ANSI/ASME PTC 19.1-1998, Measurement Uncertainty, will be used to develop this example. By taking four independent measurements, the true process value will be estimated by taking the average of the individual measurements:

$$\text{Best Estimate of } X = \frac{x_1 + x_2 + x_3 + x_4}{4}$$

The terms $x_1$, $x_2$, $x_3$, and $x_4$ represent four individual measurements of the process X. The general form of uncertainty analysis, without inclusion of any bias effects, is given by:

$$\omega_R = \sqrt{\left(\frac{\partial R}{\partial x}\,\omega_x\right)^2 + \cdots + \left(\frac{\partial R}{\partial z}\,\omega_z\right)^2}$$

where,
R = A function R(x, ..., z)
x, ..., z = Variables
$\omega_x, \ldots, \omega_z$ = Uncertainty of x, ..., z
$\omega_R$ = Uncertainty in the resultant function R

Applying the above expression to our particular case, the uncertainty in our measurement of X is given by:

$$\omega_X = \sqrt{\left(\frac{\partial X}{\partial x_1}\,\omega_{x_1}\right)^2 + \left(\frac{\partial X}{\partial x_2}\,\omega_{x_2}\right)^2 + \left(\frac{\partial X}{\partial x_3}\,\omega_{x_3}\right)^2 + \left(\frac{\partial X}{\partial x_4}\,\omega_{x_4}\right)^2}$$

The partial derivative of X with respect to measurement $x_1$ is given by:

$$\frac{\partial X}{\partial x_1} = \frac{\partial}{\partial x_1}\left[\frac{x_1 + x_2 + x_3 + x_4}{4}\right] = \frac{1}{4}$$

The partial derivatives with respect to $x_2$, $x_3$, and $x_4$ have a similar result, providing the following uncertainty expression:

$$\omega_X = \sqrt{\left(\frac{\omega_{x_1}}{4}\right)^2 + \left(\frac{\omega_{x_2}}{4}\right)^2 + \left(\frac{\omega_{x_3}}{4}\right)^2 + \left(\frac{\omega_{x_4}}{4}\right)^2}$$

The assumed uncertainty of each measurement is +/-3 percent for this example, yielding the following uncertainty in the actual process value:

$$\omega_X = \sqrt{\left(\frac{3\%}{4}\right)^2 + \left(\frac{3\%}{4}\right)^2 + \left(\frac{3\%}{4}\right)^2 + \left(\frac{3\%}{4}\right)^2} = \pm 1.5\%$$

By taking four independent and random measurements of the same process using instrument loops that each have a measurement uncertainty of +/-3 percent, our understanding of the true process value has been improved to an uncertainty of +/-1.5 percent. Given the original problem statement, our best estimate of the parameter estimate X (including the uncertainty) for this specific case is given by:

$$\text{Best Estimate of } X = \frac{x_1 + x_2 + x_3 + x_4}{4} \pm 1.5\%$$

The uncertainty of each process parameter depends on the configuration and the number of redundant instruments. Using the same analytical process shown in the above example, Table D-1 provides the uncertainty of the process value based on the number of redundant channels and the uncertainty of each channel, assuming that all channels have the same relative uncertainty:

Table D-1
Measurement Uncertainty as a Function of the Number of Redundant Channels (+/-)

Individual Channel Uncertainty     0.5%     1%       2%       3%
Process Uncertainty - 5 Channels   0.22%    0.45%    0.89%    1.34%
Process Uncertainty - 4 Channels   0.25%    0.50%    1.00%    1.50%
Process Uncertainty - 3 Channels   0.29%    0.58%    1.15%    1.73%
Process Uncertainty - 2 Channels   0.35%    0.71%    1.41%    2.12%

The results in Table D-1 are presented graphically below in Figure D-1. After some thought, the results shown above become more intuitive. If we take an infinite number of measurements, the mean (or average value) must be the actual process value, given that the individual measurements are random, normally distributed, and free of any bias. Using the same reasoning, we expect our knowledge of a parameter to improve as we increase the number of independent measurements taken of that parameter. As a general rule, we would expect the parameter estimate using 5 redundant channels to be more accurate than a parameter estimate based on only 2 channels, assuming equal relative accuracy among all channels. Notice that the results presented in Table D-1 are directly scalable; as the channel uncertainty increases (or decreases), the parameter estimate uncertainty increases (or decreases) in direct linear proportion.
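The values in Table D-1 follow directly from the uncertainty propagation in Example D-1: for equally accurate, independent channels, the parameter estimate uncertainty is the channel uncertainty divided by the square root of the number of channels. A minimal sketch reproducing the table (the function name is illustrative, not part of ICMP):

```python
import math

def parameter_estimate_uncertainty(channel_uncertainty: float, n_channels: int) -> float:
    """Uncertainty of a simple average of n independent, equally accurate channels."""
    return channel_uncertainty / math.sqrt(n_channels)

# Reproduce Table D-1: rows are channel counts, columns are channel uncertainties.
for n in (5, 4, 3, 2):
    row = [parameter_estimate_uncertainty(u, n) for u in (0.5, 1.0, 2.0, 3.0)]
    print(n, ["%.2f%%" % r for r in row])
```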

Figure D-1
Theoretical Process Measurement Uncertainty
(Plot of parameter estimate uncertainty in percent versus individual channel uncertainty in percent, for 2 through 5 redundant channels; plot not reproduced here.)

Table D-1 and Figure D-1 apply to normally operating channels in ICMP in which no channels have been excluded from the average. As the number of redundant channels increases, the parameter estimate uncertainty decreases, which is somewhat intuitive; more redundant, independent measurements lead one to have more confidence in the measurement of the process.

D.2.2 Consistency Check Criteria

The consistency check process is a powerful feature of ICMP. The consistency check factor controls the influence of individual channels on the parameter estimate calculation. A channel judged to be consistent with all other redundant channels will have the maximum possible influence on the parameter estimate. Conversely, a channel that is declared inconsistent with all other channels will be excluded from the parameter estimate calculation. A simple sketch of this weighting behavior follows.
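The sketch below illustrates one plausible reading of this weighting scheme: each channel is compared pairwise against the others, and a channel found inconsistent with every other channel is dropped from the average. The EPRI ICMP algorithm itself is defined in [9, 10, 11]; the function and weighting details here are simplified assumptions for illustration only.

```python
def icmp_style_estimate(measurements, consistency_check_factor):
    """Illustrative (not the ICMP reference algorithm): average only those
    channels that agree with at least one other channel within the
    consistency check factor."""
    weights = []
    for i, mi in enumerate(measurements):
        consistent = sum(
            1 for j, mj in enumerate(measurements)
            if i != j and abs(mi - mj) <= consistency_check_factor
        )
        weights.append(1.0 if consistent > 0 else 0.0)   # exclude isolated outliers
    if sum(weights) == 0:
        return sum(measurements) / len(measurements)      # fall back to a plain average
    return sum(w * m for w, m in zip(weights, measurements)) / sum(weights)

# Example: three redundant channels, one reading high by about 1.2% of span.
print(icmp_style_estimate([78.6, 77.5, 77.4], consistency_check_factor=0.5))  # ~77.45
print(icmp_style_estimate([78.6, 77.5, 77.4], consistency_check_factor=2.0))  # ~77.83
```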

If the consistency check factor is large with respect to the variation between channels, all channels tend to be consistent and have equal effect on the parameter estimate. An outlying (bad) measurement then skews the parameter estimate away from the true process value. Figure D-2 shows an example of this effect. Signals #2 and #3 are very close to the true process value, but Signal #1 is out of calibration by almost 1.2 percent. Because the consistency check factor is large, all three signals have equal weight in the parameter estimate calculation, thereby skewing the parameter estimate toward the outlying channel.

Figure D-2
Outlying Channel Allowed to Influence Parameter Estimate
(Plot of Signal Numbers 1 through 3 and the parameter estimate, in percent of span, versus time in minutes; plot not reproduced here.)

If the consistency check factor is relatively small with respect to the variation between channels, it is likely that some consistency checks for a particular measurement will be declared inconsistent, resulting in that measurement having less influence on the parameter estimate.

Consequently, that measurement will tend to be the furthest from the parameter estimate and is more likely to be declared as abnormal by the acceptance criteria. Figure D-3 shows the example again in which Signal #1 is an outlying measurement, but by having the consistency check factor kept small enough, it has been excluded from the parameter estimate calculation.

Figure D-3
Consistency Check Excludes Outlying Channel from Parameter Estimate
(Plot of Signal Numbers 1 through 3 and the parameter estimate, in percent of span, versus time in minutes; plot not reproduced here.)

By ensuring that outlying channels have less influence on the parameter estimate, the parameter estimate is closer to the true process value. This provides more assurance that the selected acceptance criteria compare each individual measurement to the best estimate of the actual process value.

The consistency check criteria and the acceptance criteria are related, in that outlying measurements are more likely to pass the acceptance limit test if they are allowed to influence the parameter estimate. Figure D-4 shows an example of a set of redundant measurements in which the highest measurement (at the top of the graph) will be evaluated. Notice that the other three measurements are all just below 59 percent of span while the outlying measurement is consistently at or above 60 percent of span.

Figure D-4
Observed Performance of Steam Generator Level Transmitters
(Plot of four redundant level measurements, in percent of span, versus time in minutes; plot not reproduced here.)

Two factors simultaneously influence the ICMP results. First, the consistency check factor magnitude determines the degree to which the outlying measurement is included in the parameter estimate calculation. Second, the magnitude of the acceptance criteria determines if the outlying measurement is identified as in need of calibration. With regard to the consistency check factor effect, Figure D-5 shows how the parameter estimate varies with the consistency check factor.

Notice that small consistency check factors tend to exclude the outlying measurement from the parameter estimate. But, as the consistency check factor is made larger, the outlying measurement eventually is included in the parameter estimate average. As can be seen, the parameter estimate varies by approximately 0.4 percent as this happens for this particular case.

Figure D-5
Example Variation of Parameter Estimate with Consistency Check Factor
(Plot of the parameter estimate, in percent of span, versus the consistency check factor; plot not reproduced here.)

The selected acceptance criteria also have an effect on the ICMP evaluation. For example, if the selected acceptance criterion is +/-50 percent of span, then we would never identify a channel in need of calibration. And, if the selected acceptance criterion is +/-0.01 percent of span, then we could expect that ICMP would usually identify the channel as needing calibration. Figure D-6 shows the actual ICMP results for the data in Figure D-4 and should be interpreted as follows.

The y-axis shows the percent of measurements from the outlying channel that were identified as failing the acceptance criteria. Each line corresponds to a different acceptance limit, ranging from 1 percent to 2 percent of span. The x-axis varies the consistency check factor from 0.5 percent to 3 percent.

Figure D-6
ICMP Identification of Drifted Channel
(Plot of the percent of outlying-channel measurements failing the acceptance criteria versus the consistency check factor, for acceptance limits of 1 to 2 percent of span; plot not reproduced here.)

Figure D-6 illustrates that the consistency check factor and the acceptance criteria provide the best detection of the drifted channel when they are kept at sufficiently low values. If the redundant channels are initially offset from one another by some amount, the consistency check factor must be set large enough to include all channels, which affects the overall uncertainty.

E

MSET UNCERTAINTY ANALYSIS

Nela Zavaljevski
Adrian Miron
Chenggang Yu
Thomas Y. C. Wei


September 2004

Nuclear Engineering Division
Argonne National Laboratory

Argonne National Laboratory (ANL), with facilities in the States of Illinois and Idaho, is owned by the United States Government, and operated by the University of Chicago under provision of a contract with the Department of Energy.

DISCLAIMER

THIS MATERIAL RESULTED FROM WORK SPONSORED BY AN AGENCY OF THE UNITED STATES GOVERNMENT. NEITHER THE UNITED STATES GOVERNMENT NOR ANY AGENCY THEREOF, NOR THE UNIVERSITY OF CHICAGO, NOR ANY OF THEIR EMPLOYEES OR OFFICERS, MAKES ANY WARRANTY, EXPRESSED OR IMPLIED, OR ASSUMES ANY LEGAL LIABILITY OR RESPONSIBILITY FOR THE ACCURACY, COMPLETENESS, OR USEFULNESS OF ANY INFORMATION, APPARATUS, PRODUCT, OR PROCESS DISCLOSED, OR REPRESENTS THAT ITS USE WOULD NOT INFRINGE PRIVATELY OWNED RIGHTS. REFERENCE HEREIN TO ANY SPECIFIC COMMERCIAL PRODUCT, PROCESS, OR SERVICE BY TRADE NAME, TRADEMARK, MANUFACTURER, OR OTHERWISE, DOES NOT NECESSARILY CONSTITUTE OR IMPLY ITS ENDORSEMENT, RECOMMENDATION, OR FAVORING BY THE UNITED STATES GOVERNMENT OR ANY AGENCY THEREOF. THE VIEWS AND OPINIONS OF AUTHORS EXPRESSED HEREIN DO NOT NECESSARILY STATE OR REFLECT THOSE OF THE UNITED STATES GOVERNMENT OR ANY AGENCY THEREOF.

E.1 Abstract

Periodic calibration of all safety-related instrument channels is required by the technical specifications of commercial nuclear power plants. On-line monitoring is an alternative to traditional time-directed calibration. An implementation project sponsored by EPRI uses on-line monitoring technology based on nonparametric multivariate techniques. One such technique is the Multivariate State Estimation Technique (MSET), which has found niche application in the on-line monitoring project. A critical issue for acceptance in the nuclear industry is the quantification of uncertainty associated with on-line estimation.

The appropriate incorporation and representation of uncertainty is recognized as a fundamental component of analyses of complex systems. For uncertainty quantification, two general sources of error are considered: uncertainty in the true values of the model parameters, and uncertainty in the model itself. The contribution of the first source of uncertainty is called parameter uncertainty and is the scope of this report.

For on-line monitoring application, the term model is used to describe a group of instrument channel signals that have been collected together for the purpose of signal validation and analysis. Specifically, an MSET model includes various settings that are necessary to optimize MSET performance, such as the selection of specific non-linear MSET operators, number of training vectors in the memory matrix, regularization parameters, and other settings. All of these parameters affect estimation uncertainty.

In order to quantify estimation uncertainty, we developed a simulation-based approach to uncertainty analysis, which includes instrument channel data pre-processing based on wavelet de-noising and Latin Hypercube Sampling for simulations and uncertainty evaluations. This approach is implemented in the computer program Uncertainty Analysis for MSET (UNA-MSET) and can be used for the plant-specific uncertainty analysis for any nuclear power plant model. To simplify the application of the developed methodology in the nuclear industry and to extend applicability of the developed uncertainty bounds, comprehensive simulations have been performed and the results saved in a database designed specifically for this application.

Statistical data analysis is implemented in a computational tool Uncertainty Analysis Data Base (UNADB). It provides generic estimation of MSET uncertainty as a function of several parameters, such as: the number of sensors, the number of vectors in the memory matrix, the sensor noise level and probability distribution, introduced sensor drift, and correlation among sensors.

Summary of the plant-specific uncertainty analysis for selected sensor groups of a typical PWR NPP is reported. The results show that the uncertainty in small MSET models with redundant sensors is determined primarily by spill-over and is rather small. The uncertainty in larger models, represented by RPS sensors, is determined primarily by the considerable variation observed in the feedwater flow sensors and the steam flow sensors. Although rather large, the uncertainty is still less than the estimated noise level, especially when regularization is used. The MSET uncertainty for these models could be reduced, if necessary, by pre-filtering of the data.

Simulation-based generic uncertainty analysis is also described. The present database has been developed for testing purposes, and smaller models are overrepresented to save computational effort. In spite of a limited coverage of possible power plant models, some general conclusions about MSET uncertainty can be derived from the current database content. As already observed in the plant-specific analysis, the largest uncertainty is due to the sensor noise. In most cases, confidence intervals are bounded by two standard deviations of the estimated noise. Some exceptions are observed for very small noise levels and for non-Gaussian noise. The largest uncertainty in groups of redundant sensors with small noise level is due to spill-over. However, this uncertainty is only a small fraction of the introduced drift and is easily discriminated from the response of the drifted sensors when the drift size is not too small.

E.2 Acknowledgments

This project is sponsored by the United States Department of Energy (DOE) Nuclear Energy Plant Optimization (NEPO) program, and for this we are very grateful.

E.3 Acronyms

ANL      Argonne National Laboratory
CDF      Cumulative Distribution Function
DOE      Department of Energy
EPRI     Electric Power Research Institute
LHS      Latin Hypercube Sampling
MSET     Multivariate State Estimation Technique
NE       ANL Nuclear Engineering Division
NEPO     Nuclear Energy Plant Optimization
NRC      Nuclear Regulatory Commission
OLM      On-Line Monitoring
pdf      Probability Distribution Function
QO       Quadratic Optimization
RCS      Reactor Coolant System
RPS      Reactor Protection System
SDM      Signal Disturbance Magnitude
SGL      Steam Generator Level
SPRT     Sequential Probability Ratio Test
UNADB    Uncertainty Analysis Database
UNAMSET  Uncertainty Analysis for MSET

E.4 Introduction

Safety and reliability of nuclear power plants depend on the ability to accurately monitor and control plant operations. When the instrument channels are important to safety, it is necessary to verify their performance through periodic calibration. Manual calibration is costly and labor-intensive, exposes personnel to radiation, and does not take full advantage of instrument performance data. An alternative approach is on-line monitoring, which is achieved through a comparison of individual instrument channels with computed estimates of the process parameters being monitored. An implementation project has been sponsored by EPRI [E1], which uses data-based nonparametric multivariate techniques to provide process parameter estimates. One such technique is the Multivariate State Estimation Technique (MSET) [E2], which has found niche application in the on-line monitoring project. A critical issue for acceptance in the nuclear industry is the uncertainty associated with on-line estimation [E3].

The appropriate incorporation and representation of uncertainty is recognized as a fundamental component of analyses of complex systems [E4-E6]. To quantify the effects of uncertainty, two general sources of error are often considered. First, there is uncertainty in the true values of the model's parameters, and second, there is uncertainty in the structure of the model itself (including the uncertainty in the validity of the assumptions used in the model). The contribution of the first source of uncertainty is called parameter uncertainty; the other is model uncertainty.

The methodology for parameter uncertainty analysis has been studied for many years and several standard techniques have been developed [E4], [E5]. Model uncertainty, however, is still an active area of research and subject to controversial discussions [E6]. The scope of this report is limited to parameter uncertainty analysis. In this context, the purpose of uncertainty analysis is to quantify the variability of the model output due to variability in the values of the inputs.

Two main classes of parameter uncertainty analysis are analytical and numerical methods. Exact analytical methods for propagation of uncertainty are intractable for all except the simplest cases.

However, there are a variety of approximate analytical techniques based on Taylor series expansion [E4]. These are sometimes known as the method of moments, because they propagate and analyze uncertainty using the mean, variance and sometimes higher order moments of the probability distributions. In practice, analytical methods are combined with numerical evaluations of model outputs and necessary terms in series expansions. Frequently, for models defined by systems of differential or algebraic equation, a perturbation analysis approach is implemented, and adjoint equations are formulated and solved to enable efficient computation of the necessary model derivatives [E5]. The methods based on adjoint analysis are sometimes referred to as local methods, because in the perturbation theory expressions, the series expansion is performed around some reference point. On the contrary, sampling methods can produce uncertainty estimates from the whole range of input parameter variations, and are therefore referred to as global methods.

Sampling methods for uncertainty analysis are based on the selection of one value for each uncertain parameter from its range of values. This defines an ordered m-tuple, where m is the number of parameters. The set of selected m-tuples is called a sample. Various methods for sample selection have been developed. Random sampling methods are most commonly used, in particular Monte Carlo methods [E7] and Latin Hypercube Sampling (LHS) [E8]. Other uncertainty analysis methods, such as the bootstrap [E9], are also widely used.

A perturbation analysis approach to on-line monitoring uncertainty evaluation is presented in [E10], and a bootstrap procedure in [E11]. This report presents a methodology for on-line monitoring uncertainty analysis based on LHS developed at Argonne National Laboratory [E12].

Due to a specific non-linear estimation approach in MSET, good analytical approximations for estimation uncertainty are difficult to obtain. That was the motivation to select a simulation-based approach to uncertainty analysis. The specific implementation combines the instrument channel data pre-processing based on wavelet de-noising [E13] and LHS for simulations. This approach is the most appropriate for a plant-specific uncertainty analysis. For each power plant, appropriate sensors groups for on-line monitoring should be defined and corresponding MSET models developed. For each of the specified groups of sensors, simulation-based uncertainty analysis gives accurate estimation of confidence intervals for each sensor in the group. Generic simulation-based uncertainty analysis requires more extensive simulations for various power plants and sensor groups. In the approach described in this report, uncertainty analysis has been performed for a large number of typical power plant models. A database has been designed for storage, retrieval, and statistical analysis of simulation results.

The organization of this report is as follows. Section E.5 describes the methodology for uncertainty analysis based on Latin Hypercube Sampling and wavelet de-noising. Regularization methods implemented in MSET are also described. Section E.6 describes the computer programs developed during this project. The plant-specific uncertainty analysis is performed using the computer code UNAMSET. Generic uncertainty analysis is performed using a computational tool UNADB, which includes a modification of UNAMSET and a database for simulation results. Selected results of uncertainty analysis performed for a specific power plant, as well as some generic uncertainty analysis results, are presented in Section E.7. A summary of the work is presented in Section E.8, and Section E.9 gives some details of a plant-specific uncertainty analysis.

E.5 Methodology

E.5.1 Latin Hypercube Sampling

Due to its generality, ease of implementation, and ability to systematically and uniformly cover the domain of uncertain parameters, LHS has found wide application in a variety of safety-critical systems [E8].

LHS is a stratified sampling technique, i.e. the sample space for each input parameter is divided in strata and input values are obtained by sampling separately from each stratum instead of from the distribution as a whole. To generate N samples using LHS, each input distribution is divided into N intervals of equal probability, based on the cumulative distribution function. In the input data set with M input parameters, one value for each input is selected randomly, without replacement, from the N sample values. This procedure generates N input data sets, with each value from each input being used only once.


In practice, assignment of input values is accomplished by generating a random permutation of integers from 1 to N. For an LHS scheme with M parameters, M random permutations of N integers are generated, and a permutation matrix with dimension M x N is produced. In MSET applications, the total number of parameters is M = m x n, where m is the number of sensors under monitoring and n is the number of training and testing samples. In most practical MSET applications, the number of input parameters is very large. We have reduced this number by sampling the training data using the MSET memory matrix only. Numerical experiments have validated this approach.
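A minimal sketch of the stratified sampling step described above, assuming standard normal inputs; this is a generic LHS illustration, not the UNA-MSET implementation.

```python
import numpy as np
from scipy.stats import norm

def latin_hypercube(n_samples: int, n_params: int, rng=None) -> np.ndarray:
    """Generate an n_samples x n_params LHS design mapped through a
    standard normal inverse CDF (one stratum per sample, per parameter)."""
    rng = np.random.default_rng(rng)
    u = np.empty((n_samples, n_params))
    for j in range(n_params):
        # One uniform draw inside each of the n_samples equal-probability strata.
        strata = (rng.permutation(n_samples) + rng.uniform(size=n_samples)) / n_samples
        u[:, j] = strata
    return norm.ppf(u)   # inverse CDF maps uniform strata to the target distribution

samples = latin_hypercube(n_samples=100, n_params=6, rng=42)
print(samples.shape)  # (100, 6)
```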

The application of LHS to MSET requires specification of the input noise distribution. We have developed a general noise model that can generate a specific noise distribution for any combination of sensors. In addition to the Gaussian noise, random noise generators with the Laplacian and t-distribution have been implemented, to characterize noise with larger tails [E14].

This enables more accurate noise modeling and more conservative confidence interval estimation.

The Laplacian probability distribution function with the parameter $\lambda$, which is the inverse of the standard deviation, is given by the following expression:

$$f(x) = \frac{\lambda}{2}\exp(-\lambda\,|x|)$$

The analytical expression of the CDF for the Laplacian distribution enables efficient computations. The t-distribution with the shape parameter $a > 0$ has the form

$$f(x) = \frac{\Gamma\!\left(\frac{a+1}{2}\right)}{\sqrt{a\pi}\,\Gamma\!\left(\frac{a}{2}\right)}\left(1 + \frac{x^2}{a}\right)^{-\frac{a+1}{2}}$$

where $\Gamma$ denotes the gamma function [E15]. The expressions for the CDF are not available in closed analytical form, and we implemented a computational strategy based on tabulated CDF values. The same approach is used for the empirical CDF, estimated from the data.

Noise autocorrelation effects can also be modeled using a linear time-series model [E16]

$$y_t = \sum_{i=1}^{n} a_i\, y_{t-i} + w_t$$

where $w_t$ is the white noise.

Correlation among inputs can be simulated using the Cholesky decomposition $H$ of the known or estimated covariance matrix $\Sigma$,

$$H H^{T} = \Sigma$$

and a linear transformation

$$Y = H X$$

where $X$ are the generated uncorrelated inputs.
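A minimal sketch of the noise generation described above, assuming a small made-up covariance matrix; this illustrates the Cholesky step and the Laplacian option, and is not the UNA-MSET noise model.

```python
import numpy as np

rng = np.random.default_rng(0)

def correlated_noise(n_obs, cov, dist="gaussian", lam=1.0):
    """Draw uncorrelated samples (Gaussian, or Laplacian with parameter lam),
    then impose the target covariance through its Cholesky factor H (Y = H X)."""
    n_sensors = cov.shape[0]
    if dist == "laplace":
        x = rng.laplace(0.0, 1.0 / lam, size=(n_sensors, n_obs))  # heavier tails
    else:
        x = rng.normal(0.0, 1.0, size=(n_sensors, n_obs))
    h = np.linalg.cholesky(cov)   # H such that H @ H.T = cov
    return h @ x                  # correlated noise, one row per sensor

cov = np.array([[1.0, 0.6],       # assumed 2-sensor covariance, for illustration only
                [0.6, 1.0]])
y = correlated_noise(5000, cov)
print(np.cov(y).round(2))         # sample covariance should be close to cov
```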

In principle, the noise with pre-specified statistical properties can be added to the original observed data. However, for more realistic noise modeling, noise parameters should be estimated from the available data.

E.5.2 Wavelet Denoising

We have developed a noise estimation procedure based on wavelet analysis. As a byproduct of the estimation algorithm, a denoised signal is obtained, which we define as the "true" or "actual" signal.

The pre-processing methodology for uncertainty analysis is based on the Stochastic Parameter Simulation System (SPSS) methodology [E17]. In the following, some theoretical aspects of the SPSS methodology are discussed. More information on this subject can be found in reference [E18].

The idea behind the SPSS is that any steady-state signal can be considered as a sum of two components: a deterministic component and some random contribution (noise). It is assumed that the instrument channel noise is white. Then, an SPSS optimization procedure discussed below was used for finding the "true" signal deterministic component and the corresponding noise characteristics.

We can define a noise model in which a signal $s_t$ can be seen as

$$s_t = f_t + a\, e_t$$

where $f_t$ is the de-noised signal, $a$ is the noise level, assumed to be constant in this application, and $e_t$ is the white noise with zero mean and unit variance.

By applying the threshold theory on the detail coefficients obtained in the wavelet decomposition filter bank, the wavelet-based de-noised signal, $f_t$, can be expressed as

$$f_t = A_J + \sum_{i=1}^{J} D_i$$

where $A_J$ is the wavelet approximation and $D_i$ are the details, completely determined by the wavelet thresholding procedures [E13].

The SPSS procedure finds the decomposition parameters for all the wavelet-based approximations that, when subtracted from the original signal, produce white residuals. The Bartlett-Durbin procedure is used to determine whether the noise $e_t$ is white. In essence, this procedure is a statistical test for whiteness that consists of analyzing the pdf of the periodogram estimates in the power spectral density [E16]. If the pdf of the periodogram is normally distributed, the signal is assumed to be white.

In addition, the correlation among residuals should be below some specified threshold to assure that the process noise is not eliminated during the instrument noise estimation.

Then the "true" deterministic process value is found to be the wavelet-based approximation which removes the largest uncorrelated component from each signal in the model. The remaining uncorrelated component is further analyzed to find the probability distribution, which is simulated using the LHS procedure. The "true" values, along with the residual pdf and characteristics represent the input values of the UNAMSET code. Within the code, a perturbation with the same characteristics determined in the pre-processing phase is added to the "true" values for both MSET training and estimation.

E.5.3 Regularization

The monitored signals are often corrupted with significant noise. In some cases it is possible that signal estimates more accurately reflect the true value of the underlying physical processes than do the sensor readings themselves. This can be achieved to some extent using regularization methods in MSET. Details of the MSET estimation algorithm are described elsewhere [E2]. A short review is given here for completeness.

State estimation in MSET is based on the data organized in the "process memory" matrix $D$. The number of columns of this matrix is equal to the number of observations, and the number of rows is equal to the number of sensors. If a new observation is made and the sensor measurements from this matrix represent correlated phenomena, then it can be assumed that the estimate of the new state is linearly related to the data matrix in the following way

$$x_{est} = D \cdot w$$

The weight vector $w$ is computed in MSET using a set of nonlinear operators. In a general operator form, the solution for the weight vector in MSET is given by the following expression

$$w = \left(D^{T} \otimes D\right)^{-1}\left(D^{T} \otimes x_{obs}\right) = G^{-1} a$$

where

$$G = D^{T} \otimes D \qquad \text{and} \qquad a = D^{T} \otimes x_{obs}$$

The non-linear operator $\otimes$ has been chosen so that several desirable properties are satisfied, including non-singularity of the similarity matrix $G$. The vector $a$ represents similarity between the current observation vector and the vectors in the "memory matrix." To improve numerical accuracy and stability, the singular value decomposition (SVD) method [E19] has been applied in MSET. In some applications, however, the condition number of the matrix $G$ can still be quite large. Regularization methods improve this situation.

One of the best known regularization methods, Tikhonov regularization [E20], has been implemented in MSET [E21]. The Tikhonov regularized solution $w_\lambda$ is defined as the solution to the following least squares problem:

$$\min_{w}\left\{\,\| G w - a \|^{2} + \lambda^{2}\,\| L w \|^{2}\,\right\}$$

where $\lambda$ is the regularization parameter, and $L$ is a convenient regularization matrix that controls the smoothness of the solution. A common choice is $L = I$, the identity matrix, which penalizes solutions possessing large norms. Several methods have been proposed to select optimal values of the parameter $\lambda$. Initial testing with ill-conditioned problems in MSET demonstrated that the generalized cross validation (GCV) method is a good method for selecting near-optimal regularization parameters for reactor sensor applications of MSET. One such GCV-based selection method that has been adopted for use in MSET is based on the minimization of the following function [E22]

$$GCV(\lambda) = \frac{\| G w_\lambda - a \|^{2}}{\left(T(\lambda)\right)^{2}}, \qquad T(\lambda) = \sum_{i=1}^{m}\frac{\lambda^{2}}{\sigma_i^{2} + \lambda^{2}}$$

where $\sigma_i$, $i = 1, \ldots, m$ are the singular values of the matrix $G$. The regularized solution is obtained from the following system

$$\left[\,G + \lambda^{2} I\,\right] w_\lambda = a$$

The regularization procedure has been implemented in C/C++ and can be easily included in the standard MSET kernel.
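A minimal numerical sketch of the regularized solve above; the similarity function here is a plain Gaussian kernel chosen for illustration, standing in for the MSET nonlinear operators, which are not reproduced here.

```python
import numpy as np

def similarity(a: np.ndarray, b: np.ndarray, width: float = 1.0) -> float:
    """Illustrative nonlinear similarity (Gaussian kernel), an assumption
    standing in for the MSET operator."""
    return float(np.exp(-np.sum((a - b) ** 2) / (2 * width ** 2)))

def regularized_estimate(D: np.ndarray, x_obs: np.ndarray, lam: float = 0.1) -> np.ndarray:
    """Estimate the state as D @ w, where [G + lam^2 I] w = a."""
    n_vectors = D.shape[1]
    G = np.array([[similarity(D[:, i], D[:, j]) for j in range(n_vectors)]
                  for i in range(n_vectors)])
    a = np.array([similarity(D[:, j], x_obs) for j in range(n_vectors)])
    w = np.linalg.solve(G + lam ** 2 * np.eye(n_vectors), a)
    return D @ w

# Example: 3 sensors, 5 stored training vectors, one slightly perturbed observation.
rng = np.random.default_rng(0)
D = rng.normal(size=(3, 5))
print(regularized_estimate(D, D[:, 0] + 0.01 * rng.normal(size=3)))
```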

E.5.4 Uncertainty Measures Uncertainty is defined as a 95/95 confidence interval, approximately evaluated as two standard deviations. We evaluate standard deviations both for the residual and for the actual error, as defined below. The residual is the difference between the current observation and the estimated value and is used as an indication of anomalous behavior in instrument channels. However, for noisy observation, a better measure of uncertainty is based on the standard deviation of the difference between the actual and the estimated signals.

Residual standard deviation:

$$\sigma_{Residual} = \sqrt{\frac{1}{N_s\, N_{obs}}\sum_{i=1}^{N_s}\sum_{k=1}^{N_{obs}}\left(x_{i,k}^{MSET} - x_{i,k}^{Obs}\right)^{2}}$$

"Actual error" standard deviation:

$$\sigma_{Actual} = \sqrt{\frac{1}{N_s\, N_{obs}}\sum_{i=1}^{N_s}\sum_{k=1}^{N_{obs}}\left(x_{i,k}^{MSET} - x_{i,k}^{Actual}\right)^{2}}$$

where $N_{obs}$ is the number of observations in the testing data and $N_s$ is the number of Latin Hypercube samples.

We have also defined two performance measures associated with an on-line monitoring algorithm, which implicitly give information about its uncertainty. These measures are the detection sensitivity, defined as the ratio of the average residual to the introduced drift or bias, and the spillover measure, which indicates false indications in good sensors due to defective sensors. A short computational sketch of these measures follows.
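A minimal sketch of the measures described above, assuming the MSET estimates, observed values, and de-noised ("actual") values are available as arrays of shape [N_s, N_obs]; this is a generic illustration, not the UNA-MSET post-processing code. Spillover would be evaluated analogously on the undrifted sensors.

```python
import numpy as np

def uncertainty_measures(estimates, observations, actuals, introduced_drift=None):
    """Residual and "actual error" standard deviations over all LHS samples and
    observations; the 95/95 interval is taken as two standard deviations.
    Detection sensitivity is the average residual divided by the introduced drift."""
    residual_std = np.sqrt(np.mean((estimates - observations) ** 2))
    actual_std = np.sqrt(np.mean((estimates - actuals) ** 2))
    result = {
        "residual_std": residual_std,
        "actual_std": actual_std,
        "interval_95_95": 2.0 * actual_std,   # approximate 95/95 bound
    }
    if introduced_drift is not None:
        result["detection_sensitivity"] = np.mean(observations - estimates) / introduced_drift
    return result

rng = np.random.default_rng(3)
truth = np.zeros((50, 1000))                      # de-noised ("actual") values
obs = truth + rng.normal(0.0, 1.0, truth.shape)   # noisy observations
est = truth + rng.normal(0.0, 0.2, truth.shape)   # MSET-like estimates
print(uncertainty_measures(est, obs, truth))
```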

E.5.5 Plant - Specific Uncertainty Analysis Some assumptions and issues involved in uncertainty analysis of data-based modeling are discussed first.

Since MSET is a data-based modeling approach, data quality is the most important issue in uncertainty analysis. Particular attention should be paid to the selection of the training data. Data for all sensors must be properly aligned and inherent process drifts should be avoided in the training data.

In spite of the fact that deleterious effect of measurement noise on all estimation algorithms is well known, there is no universally accepted method to separate noise from the process data, especially when the measured process contains a stochastic component. The wavelet based de-noising might in some cases provide unrealistic noise levels.

Implicit in the data-based modeling is the assumption that the processes are stationary, and that the training data provides enough information to estimate a probability distribution of the monitored processes, which will remain constant under normal operating conditions. A related issue is the selection of the training domain, which should cover to the largest possible extent the expected range of variation of the monitored variable.

Having in mind limitations described above, uncertainty analysis is reduced to parametric uncertainty evaluation that can be performed using parametric studies and sampling methods.

Plant-specific uncertainty analysis is performed for each plant system considered for on-line monitoring application, such as Reactor Coolant System (RCS) or Reactor Protection Systems (RPS). The term model is used in this report to describe a group of signals that have been collected together for the purpose of signal validation and analysis. Specifically, an MSET model includes also various MSET settings that are necessary to optimize MSET performance, such as the number of training vectors in the memory matrix D, specific non-linear MSET operators, regularization parameters, and other settings. All of these parameters affect estimation uncertainty.


Uncertainty evaluation requires the specification of clean ("true") data and noise characteristics for each specified model. To perform a realistic estimation of uncertainty for given signals, a pre-processing methodology based on wavelet de-noising is used to remove the largest uncorrelated component from each signal in the model. This component is defined as the true signal noise. In many cases, the estimated noise component is fairly small and a more conservative noise component is introduced. The standard deviation in this case corresponds to the sensor measurement and test equipment (SMTE) errors commonly used in setpoint allowances computations. This component is defined as the setpoint noise. In several cases, the estimated uncorrelated component is larger than the expected SMTE value. Since the measured processes in these situations could also contain significant fluctuations, it is difficult to distinguish signal and noise. Therefore, uncertainty for corresponding sensors is also evaluated using the setpoint noise case. The uncertainty due to the sensor drift is evaluated for the maximum drift value of 1% of the sensor span.

The standard plant-specific uncertainty analysis procedure is the following:

  • Data de-noising using wavelet analysis and statistical analysis of residuals.
  • Generation of perturbed data using:

- Estimated noise;

- Setpoint noise;

- Estimated noise and drift;

- Setpoint noise and drift.

A parametric uncertainty study is performed for several values of target vectors and selected MSET operators. The effect of regularization on uncertainty is also assessed.

For a given plant system, a tentative MSET model with acceptable performance under all conditions is proposed and its uncertainty reported as a 95/95 confidence interval. In principle, the model with the smallest uncertainty, defined as the confidence intervals for "actual errors," is selected. For sensors with large variability, the residual confidence intervals defined above should also be considered in the selection process. The reason is that the requirement for the minimum true error is in conflict with the requirement for small residuals, and models optimal for "actual errors" could lead to unacceptable residuals.

E.5.6 Generic Uncertainty Analysis

The uncertainty analysis methodology presented in the previous section should be repeated for each nuclear power plant that intends to use the on-line monitoring technique as a calibration assessment tool. A possible problem with this approach is that substantial training of the plant personnel is needed to perform uncertainty analysis. This could be a time-consuming process and could have a negative impact on the cost-benefit analysis of on-line monitoring. A simulation-based method to provide generic uncertainty bounds is presented in this section.


Since data-based models for on-line monitoring do not use any first principle information in estimation, conservative uncertainty bounds could be obtained under various conditions using extensive simulations. The approach presented here combines the data pre-processing based on wavelet de-noising, LHS for simulations, and a database for storage, retrieval, and statistical analysis of simulation results.

The following steps are performed to simulate the data for generic uncertainty analysis:

  • Representative models of power plant systems are selected. Typical power plant models are used, such as RCS models, RPS models, as described in the plant specific uncertainty analysis. To increase variability, models with various sensor combinations are selected from typical plant systems.
  • Actual sensors noise is estimated from the data to obtain de-noised signals. In simulations, the noise level is a variable parameter that changes in a given range. Various noise distributions are also simulated to obtain more conservative uncertainty estimates. Variation in correlation among sensors is simulated implicitly, through noise variation.
  • For each selected model, simulations are performed with variable number of training vectors in the "memory matrix".
  • Variable drift ("bias") is added for spillover and sensitivity analysis.
  • Simulation results are saved in a specially designed database. The structure of the database is described in section E.6.
  • Statistical data analysis is performed to provide conservative estimation of generic uncertainty bounds.

The generic uncertainty analysis described above does not provide as accurate uncertainty bounds as the plant-specific analysis. The estimates will be on the conservative side and could be revised on a case-by-case basis using a detailed uncertainty analysis.

E.6 Implementation

E.6.1 Plant-Specific Uncertainty Analysis Modules

To evaluate uncertainty, the UNA-MSET code requires the specification of clean ("true") data, sensor noise and drift characteristics. These characteristics represent the output of a data pre-processing phase. The random permutations required by the LHS algorithm are generated in a separate code developed in C/C++ and saved as an input file for UNA-MSET.

The results of the pre-processing phase, along with information on the MSET training and testing data, the desired number of sensors in the MSET training matrix, the regularization parameter, and the MSET operator, are passed as inputs to the UNA-MSET code. The code is capable of simulating various distributions (Gaussian, Laplacian, and t-distribution) and characteristics (white, correlated, and signal autocorrelation effects) for the sensor noise, in both training and monitoring phases of MSET. Also, the code can simulate a drift and/or a step perturbation in each sensor in the model. For each set of parameters, an LHS analysis is performed by computing random noise simulations. For each LHS run, the results of the MSET estimations, residuals, and other information necessary for comprehensive statistics on MSET uncertainty are saved in several files. These files represent the input for the MSET uncertainty post-processing phase, in which the MSET uncertainty, as well as other pertinent statistical information and graphs, are obtained.

The post-processing phase code is developed in MATLAB.

The UNA-MSET code is implemented in the C/C++ programming language. The code is flexible, extensible, and includes many input error checks. If the noise distribution and characteristics are known, the first procedure of the pre-processing phase is not necessary. Also, the code can be used as a stand-alone, general purpose uncertainty analysis tool in which the MSET kernel can be replaced with any other methodology or signal monitoring procedure for which uncertainty analysis is desired. In addition, the specific code modules can be incorporated into a larger software system. The UNA-MSET code was verified and validated using several test cases and produced the expected results. Data and computation flow is described in Figure E-1.

The UNA-MSET code can be used to run one case at a time, varying a particular MSET operator, number of vectors in the MSET training matrix D, regularization parameter, sensor noise properties, and drift characteristics. "Manually" running all these cases is time-consuming and error-prone. Therefore, an automated code based on UNA-MSET was also developed (UNA_C_MSET) [E23]. This code is further modified for generic MSET uncertainty analysis.


Pre-Processing Phase

  • De-noise to Find the "True" Process Training and Testing Data
  • Find Sensor Noise Characteristics
  • Produce Permutation File
  • Find MSET Regularization Parameter (optional)
  • Perform MSET Training to Determine the Training Matrix D*

User Input

  • Specify Noise Characteristics
  • Specify Testing and Training (or D*) Data
  • Specify Permutation File
  • Specify MSET Parameters and Desired Case (Cases**) for Uncertainty Analysis

  • Other Options

Run UNAMSET / UNA_C_MSET

Output Files
  • True Error Squared
  • MSET Residuals Squared
  • Average Variance for Each Sensor
  • Maximum and Minimum Values for Training Data***
  • Maximum and Minimum Values for Testing Data***
  • Maximum and Minimum MSET Estimate***
  • Sum of MSET Estimate, Estimate Squared, Estimate Cubed, and Estimate4***
  • Important Sensor Histograms for Training, Testing, MSET Estimates and Residuals

Post-Processing Phase

  • Produce Graphs to Find Optimum MSET Configuration
  • Show/Compute Other Statistics*
  • Determine MSET Uncertainty for the Analyzed Model

(Flow diagram legend: Predefined Process, Preparation, Process; * For UNAMSET Code Only; ** For UNA_C_MSET Only; *** If Desired.)

Figure E-1
UNAMSET Flow Diagram

E.6.2 Generic Uncertainty Analysis Tool

The generic uncertainty analysis is implemented in the computational tool UNADB (UNcertainty Analysis DataBase). The structure of UNADB is presented in Figure E-2 and consists of four modules:

  • Module 1: Random Permutation Generation.
  • This module is a C program for random number generation. The output is a permutation file, which is used in Module 2 to generate data perturbations.
  • Module 2: Data Perturbation Generation.
  • This module produces data perturbations, using pre-processed ("actual") signals, as well as other parameters, such as the noise level, the noise probability distribution, the drift ("bias") level, the number of training vectors, and regularization parameters. Pre-processed signals are obtained in an independent module, based on SPSS [E17] and the MATLAB Wavelet Toolbox [E24]. A modification in the signal de-noising algorithm is introduced. In UNAMSET, the "true" signal was determined as the wavelet decomposition which removes the largest uncorrelated component for each signal, defined as the noise signal. However, the noise signal correlation among model sensors was not monitored during decomposition, sometimes resulting in over-conservative estimation of sensor noise. The best approach to signal decomposition would be to find the wavelet de-noising which would produce residuals that are uncorrelated both in the time domain and among sensors. The optimal decomposition of this type would require the vector wavelet transform, which is an active area of research and many implementation issues are not clear. In the generic uncertainty analysis, an approximate solution based on a constrained search among possible decompositions for each sensor is used.

  • Module 3: Perturbed Input Computation, Estimation, and Sample Collection.
  • While Modules 1 and 2 have many functions in common with UNAMSET, Module 3 has been modified significantly to enable efficient data transfer between simulations and the database. The user-defined input is modified to include supplementary information on the plant and model description. The post-processing phase in UNAMSET, which was developed in MATLAB, has been replaced with new C functions to enable efficient communication of the simulation program with the database. Eight output files are defined in the table format, which correspond to the tables in the relational database described below.

These files are imported directly into the database, using a Visual Basic graphical user interface, DBGen. Thus, the time-consuming, error-prone, manual post-processing phase in UNAMSET is replaced by automatically running the UNADB code. In addition, a script file UBatch is developed for automated processing of multiple runs.

  • Module 4: Data Storage and Evaluation.
  • Simulation results are saved in a MS Access database unadb. The relationship between the tables in the database is presented in Figure E-3.
  • Five input tables are defined.


  • Table Model contains fields with information about power plants, specific plant systems, and the number of sensors. Fields also contain information about empirical models, such as the training data size, the testing data size, and information about the data location on the network. Table MSET gives additional information about empirical models, such as the MSET operator and its parameters and the memory matrix size. Table Case contains some perturbation parameters, such as the drift (bias) characterization, while Table Noise and Table Sensor give general and sensor-specific noise information.
  • Three output tables are defined.
  • Table Correlation 1 contains correlation matrices for the training and the testing data. Table Correlation 2 contains the same information for the vectors in the memory matrix of the specified size. Uncertainty measures, such as actual and residual errors, as well as sensitivity and spillover, are saved in Table Results.
  • Finally, a procedure for MSET uncertainty analysis is formulated, which combines the Access database and an Excel worksheet for efficient extraction and presentation of analysis results. A series of queries, tables, and charts has been created.

Figure E-2
Structure of UNADB Computational Tool

Figure E-3
Database Design
(Relational diagram of the unadb database tables and their relationships; diagram not reproduced here.)

E.7 Simulation Results

E.7.1 Plant-Specific Uncertainty Analysis

A preliminary application of the LHS methodology to MSET uncertainty assessment has been summarized previously [E12] for a typical PWR nuclear power plant and a limited number of nuclear power plant system models. Sensors that are governed by Technical Specification periodic calibration are grouped in the following models: the reactor coolant system (RCS) flow models for each loop, the RCS pressure model, the pressurizer level model, the reactor protection system (RPS) models for each loop, and the steam generator level models for each loop.

MSET models are developed for each of the plant sensor models. This section gives a summary of the uncertainty analysis results; further details are given in Section E.10.

The MSET uncertainty is considered as a function of several parameters, such as the number of training vectors, the correlation between sensors, and the noise level. The models with a small number of sensors include the RCS flow models and the pressurizer level model. The models with a larger number of sensors are the RCS pressure model and the RPS models. The RPS models are further divided into two models, a group of nine non-redundant sensors and a group of four redundant steam generator level (SGL) signals, to establish groups with higher correlation among sensors.


For each group of sensors, several candidate MSET models were evaluated. The number of vectors in the "memory matrix" (the number of training vectors) is considered variable. In addition, the performance of various MSET nonlinear operators is investigated, as well as the effects of regularization on uncertainty. Confidence intervals defined in Section E.5.4 are evaluated for each sensor in the selected models.

A single optimal model is selected for on-line monitoring implementation based on the criteria discussed in Section E.5.4. The model with the smallest uncertainty, defined as the confidence interval for the "true errors" defined above, is selected, provided that it does not lead to a large increase in the residuals. The largest observed confidence interval in a selected model is defined as the model confidence interval. A summary of the uncertainty analysis results is given in Table E-1. It should be noted that these results correspond to the standard MSET operator BART, without regularization. Smaller uncertainty could be achieved with another MSET operator and/or regularized estimation, as described in Section E.10. However, verification and validation have been performed for the MSET code and operators without regularization. In addition, most implementation experience has been gained utilizing the BART operator.
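A minimal sketch of this selection rule, in Python, with illustrative candidate values and an assumed residual limit; the actual criteria are those of Section E.5.4:

candidates = [
    {"name": "BART-40", "true_error_ci": 0.19, "residual_ci": 0.04},
    {"name": "VSET-60", "true_error_ci": 0.18, "residual_ci": 0.03},
    {"name": "BART-75", "true_error_ci": 0.22, "residual_ci": 0.02},
]

def select_model(candidates, residual_limit=0.10):
    # Keep only candidates whose residual confidence interval stays acceptable,
    # then pick the smallest "true error" confidence interval among them.
    feasible = [c for c in candidates if c["residual_ci"] <= residual_limit]
    return min(feasible, key=lambda c: c["true_error_ci"])

print(select_model(candidates)["name"])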

The results are presented separately for sensor operation with drift and without drift ("normal").

The uncertainty in MSET models with a small number of redundant sensors is determined primarily by the "spill-over" effect, which increases variability in normal sensor estimation due to faulty sensors. The estimated uncertainty in these sensors is small.

The uncertainty in larger models is determined primarily by the considerable variation observed in the feedwater flow sensors and the steam flow sensors. Although comparatively large, the uncertainty is still less than the estimated noise level. The MSET uncertainty for these models could be reduced, if necessary, by pre-filtering of the data. However, this should be done with care to prevent the loss of information about the process variability. It is possible that the large uncertainty bounds presented here are too conservative and in part due to the process variability included in the estimated sensor noise. Other sensors in these models also have increased uncertainty, due to the presence of redundant sensor pairs, which cause spill-over. However, this uncertainty is not as large as the uncertainty for the noisy sensors.


Table E-1
Summary of the Largest Confidence Intervals

Plant Model              Maximum Noise Level (2σ)   Residual Confidence       "True Error" Confidence
                         (% of span)                Intervals (% of span)     Intervals (% of span)
                         Estimated    Setpoint      Normal       Drift        Normal       Drift
RCS Flow, Loop A         0.10         0.20          0.04         0.20         0.19         0.20
RCS Flow, Loop B         0.10         0.20          0.04         0.16         0.15         0.20
RCS Flow, Loop C         0.10         0.20          0.04         0.08         0.15         0.17
Pressurizer Level        0.03         0.20          0.05         0.38         0.17         0.38
RCS Pressure             0.12         0.20          0.12         0.26         0.17         0.26
RPS Model, Loop A        1.00         0.20          0.46         0.51         0.74         0.78
RPS Model, Loop B        1.18         0.20          0.51         0.52         0.92         0.93
RPS Model, Loop C        1.06         0.20          0.45         0.47         0.77         0.81
RPS, Loop A, SGL only    0.20         0.20          0.16         0.34         0.17         0.34
RPS, Loop B, SGL only    0.20         0.20          0.04         0.10         0.19         0.20
RPS, Loop C, SGL only    0.20         0.20          0.05         0.10         0.19         0.20

E.7.2 Generic Uncertainty Analysis

A comprehensive uncertainty analysis of multiple plant models has been performed with the objective of providing a more general assessment of MSET uncertainty [E25]. Representative (template) models are selected from PWR power plants that participated in the EPRI on-line monitoring projects. Current models include RPS and RCS models with the number of sensors ranging from 3 to 9, sets of redundant sensors with the number of sensors ranging from 3 to 7, and small models for steam generators. Data variability is enhanced by extensive simulations.

Initially, plant data for each template model are introduced in the "clean" format, i.e., after wavelet de-noising. Parametric studies are performed by simulating various noise levels and distributions. The number of training vectors selected from the initial training data is also varied in the simulations. Drifts with variable levels are introduced. Nearly 70,000 cases (more than 70 MB of storage) are currently stored in the database.
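The parameter grid can be illustrated as follows; this is a sketch in Python, and the listed values are examples, not the exact levels stored in the database:

import itertools

noise_levels  = [0.05, 0.10, 0.25, 0.50, 0.75, 1.00, 1.50, 2.00]  # % of span
distributions = ["gaussian", "laplacian"]
train_vectors = [15, 20, 30, 40, 50]
drift_levels  = [0.0, 0.5, 1.0, 1.5, 2.0]                          # % of span

# Each combination defines one simulated case to be run and stored.
cases = [dict(noise=n, dist=d, vectors=v, drift=b)
         for n, d, v, b in itertools.product(noise_levels, distributions,
                                              train_vectors, drift_levels)]
print(len(cases), "parameter combinations")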

E.7.2.1 Noise Effects

Preliminary evaluation of on-line monitoring indicated that noisy sensors have the largest uncertainty bounds. Systematic variations of the noise level again confirmed the significance of the sensor noise. A large range of noise standard deviations is considered to investigate extreme cases and provide conservative estimates. Simulation results for all models in the database are presented in Figure E-4.


Figure E-4
Actual Errors for All Models in the Database (actual error vs. sensor noise standard deviation, % of span)

The average actual errors are presented in Figure E-5. In the figure, D is the number of vectors in the process memory matrix. Averaging is performed over all models in the database with the corresponding sensor noise level. All uncertainty estimates in this section are presented as standard deviations and normalized to the corresponding sensor span. The confidence intervals are approximately two times larger.
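For reference, the normalization described above can be written as a short sketch in Python; sigma is the error standard deviation in engineering units and span is the sensor span, both illustrative inputs:

import numpy as np

def normalized_ci(errors, span):
    # Standard deviation expressed in percent of span; the reported confidence
    # interval is approximately twice this value.
    sigma_pct = 100.0 * np.std(errors) / span
    return 2.0 * sigma_pct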

Figure E-5
Average Actual Error for All Models in the Database (average actual error vs. sensor noise standard deviation; curves for D = 15, 20, 30, 40, and 50)

It can be observed that the average uncertainty for larger noise levels is less than the noise level. For small noise levels, typical of plant sensors, the uncertainty could be larger than the noise level, but the upper bounds are not large when the number of training vectors is not too small (e.g., larger than 20 for models with a small number of sensors).


The effect of noise on residuals, presented in Figure E-6, is not significant. The most important factor is the number of training vectors. A large number of training vectors can lead to very small residuals for the training data set, but it does not necessarily imply the smallest uncertainty, since the actual error does not change significantly and can even increase in some situations with large noise.

Figure E-6
Average Residuals for All Models in the Database (average residual vs. sensor noise standard deviation; curves for D = 15, 20, 30, 40, and 50)

The effects of noise distributions with large tails are simulated by comparing Laplacian noise with Gaussian noise of the same standard deviation for a model with a small number of correlated sensors. Actual errors for models with three redundant sensors are presented in Figure E-7.

Uncertainty bounds for the Laplacian noise, presented in Figure E-9, are about 20% larger than the corresponding bounds for the Gaussian noise, presented in Figure E-8. Since the number of simulations with Laplacian noise in the database was much smaller than with Gaussian noise, the results in Figures E-5 and E-6 are characteristic of the Gaussian noise.
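A minimal sketch of how noise sequences with matched standard deviations can be generated for such a comparison (Python; the noise level and sample size are illustrative values):

import numpy as np

rng = np.random.default_rng(0)
sigma = 0.5                                    # noise standard deviation, % of span
gaussian = rng.normal(0.0, sigma, 10_000)
# For a Laplacian distribution the scale parameter b satisfies sigma = b*sqrt(2).
laplacian = rng.laplace(0.0, sigma / np.sqrt(2.0), 10_000)
print(gaussian.std(), laplacian.std())         # both close to 0.5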

The number of training vectors also has an important effect on uncertainty, as shown in Figure E-10. A very small number of training vectors introduces significant systematic error, while too large a number can lead to the effect known as "overfitting" in the machine learning community, i.e., reduction of the training error only, without the ability to make inductive inferences on new data.


Figure E-7
Effect of Noise Standard Deviation on Actual Error (small models; actual error vs. sensor noise standard deviation, % of span)

Figure E-8
Noise Effect on Average Actual Error (Gaussian noise; curves for Sensors 1 through 3)

Figure E-9
Effect of Noise Distribution on Actual Error (Laplacian noise)

Figure E-10
Effect of Number of Training Vectors on Average Actual Error (Gaussian noise; curves for noise levels from 0.05% to 2.00%)

As an example of a plant model with a larger number of correlated sensors, a template model is defined, consisting of three groups of correlated sensors from an RPS system model. Each group consists of 2 redundant sensors. The effect of sensor noise on uncertainty is presented in Figure E-11. It can be seen that, for realistic noise levels and correlated sensors in the model, the average actual error is linearly proportional to the noise and bounded by the noise level.

Figure E-11
Noise Effect on Actual Error, Correlated Sensors, and Larger Models (average actual error vs. noise standard deviation; curves for D = 20, 40, and 60)

The effect of the average correlation coefficient in the model on actual error is presented in Figure E-12. Uncertainty reduction with increased correlation is obvious, and the dependence is almost linear. The best average performance appears to be for the smallest number of vectors in the memory matrix. However, the residual error should also be considered for optimal model selection.

Figure E-12
Effect of Correlation on Actual Error (average actual error vs. average correlation coefficient; curves for D = 20, 40, and 60)

E.7.2.2 Sensitivity and Spillover

The effect of spillover is of concern for on-line monitoring, since it can lead to alarms for sensors without defects. Comparison of sensitivity and spillover can be used to estimate the occurrence of false alarms due to spillover. Analysis of these effects is performed by introducing a step input ("bias") of specified size. A ramp input option is also provided in the code to simulate drift. A step input is selected for simulation since it enables direct comparison between the size of the introduced perturbation and the monitoring algorithm response.

Figures E-13 and E-14 present the sensitivity and spillover for models with a small number of sensors, where the largest spillover has been observed. The figures show that the sensitivity is larger than the spillover for a large bias. In this case, it can be expected that the increase in the false alarm rate due to the spillover is not significant, provided that the detection threshold is not too small. However, the spillover could be a problem for small bias levels (e.g., less than 0.5%), since the sensitivity is low in such cases and could be close to the spillover. Because a bias level of 1% is tolerated in standard on-line monitoring applications, the defective sensors can be properly identified in this limiting case, since the sensitivity is more than two times larger than the spillover.

Figure E-13
Sensitivity for Models with Small Number of Sensors (sensitivity vs. correlation coefficient; curves for bias = 0.5%, 1.0%, 1.5%, and 2.0%)

Figure E-14
Spillover for Models with Small Number of Sensors (spillover vs. correlation coefficient; curves for bias = 0.5%, 1.0%, 1.5%, and 2.0%)

Significant sensitivity deterioration for low correlation among sensors confirms the theoretical expectation that a set of correlated sensors gives the best MSET performance. To further investigate the effects of correlation and noise on sensitivity and spillover, the RPS template model with 6 correlated sensors described in the previous section is analyzed.

Figure E-15
Sensitivity for an RPS Template Model (sensitivity vs. average correlation coefficient; curves for bias = 0.5%, 1.0%, and 1.5%)

The effect of the average correlation coefficient on sensitivity and spillover is presented in Figures E-15 to E-19. When the average correlation among sensors is larger than 0.5 and the bias is less than 1%, the sensitivity is almost constant, as shown in Figure E-15. In contrast, spillover depends significantly on the average correlation coefficient (see Figure E-16). For a bias near 0.5%, spillover can be very close to sensitivity for large correlation coefficients, and discrimination between the sensor with bias or drift and the sensor with spillover becomes difficult. However, for a bias or drift near 1%, discrimination is much better. The worst case is a low-noise sensor characterized by high correlation with the drifted sensor, for which the spillover approaches 0.35%, compared with the drift sensitivity of 0.5%.

Figure E-16
Spillover for an RPS Template Model (spillover vs. average correlation coefficient; curves for bias = 0.5%, 1.0%, and 1.5%)

Figure E-17
Effect of Noise and Average Correlation on Sensitivity (Bias = 1%) (curves for noise standard deviations from 0.2% to 1.0%)

A closer analysis of the spillover effect reveals that it is very significant between two redundant sensors in a larger model, while it is much smaller between two non-redundant sensors in the same model, as presented in Figures E-18 and E-19.

Figure E-18 shows that spillover is large for the redundant sensor with a small noise level. However, even in the case of very small noise, it is about two times smaller than the sensitivity. In contrast, spillover is very small for a highly correlated sensor. This effect has also been observed in plant-specific simulations.

Figure E-18
Spillover between Redundant Sensors (Bias = 1%) (spillover vs. average correlation coefficient; curves for noise standard deviations from 0.2% to 1.0%)

Figure E-19
Spillover between Correlated Non-Redundant Sensors (Bias = 1%) (spillover vs. average correlation coefficient; curves for noise standard deviations from 0.2% to 1.0%)

E.8 Uncertainty Analysis Summary

A methodology for a comprehensive uncertainty analysis for MSET models based on Latin Hypercube Sampling is presented in this report. Prior information about noise characteristics needed for the uncertainty analysis is obtained in a pre-processing step. Noise is estimated from the training data using wavelet analysis.
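A minimal sketch of Latin Hypercube Sampling over two input parameters (Python; the parameter ranges are illustrative only and are not those used in the study):

import numpy as np

rng = np.random.default_rng(0)

def latin_hypercube(n_samples, bounds):
    # One stratified uniform sample per interval in each dimension,
    # independently permuted so the strata are paired at random.
    dims = len(bounds)
    u = (rng.random((n_samples, dims)) + np.arange(n_samples)[:, None]) / n_samples
    for d in range(dims):
        u[:, d] = rng.permutation(u[:, d])
    lo = np.array([b[0] for b in bounds])
    hi = np.array([b[1] for b in bounds])
    return lo + u * (hi - lo)

# e.g., noise standard deviation (% of span) and drift magnitude (% of span)
samples = latin_hypercube(100, bounds=[(0.05, 2.0), (0.0, 2.0)])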

The plant-specific approach to uncertainty analysis is described first. A summary of uncertainty results for selected sensor groups of a typical PWR nuclear power plant is reported. The results show that the uncertainty in small MSET models with redundant sensors is determined primarily by spill-over. Overall, the uncertainty in these sensors is rather small. The uncertainty in larger models, represented by the RPS sensors, is determined primarily by the considerable variation observed in the feedwater flow sensors and the steam flow sensors. Although rather large, the uncertainty is still less than the estimated noise level, especially when regularization is used. The MSET uncertainty for these models could be reduced, if necessary, by pre-filtering of the data.

Simulation-based generic uncertainty analysis is also described. In addition to the Latin Hypercube Sampling and wavelet de-noising, this approach uses a database of comprehensive simulation results to provide conservative general bounds for on-line monitoring uncertainty.

General database queries have been developed to provide uncertainty bounds for the template models available in the database. The present database has been developed for testing purposes, and smaller models are overrepresented to save computational effort. Models with a larger number of sensors should be better represented in the future. In addition, new queries should be developed to give more precise estimates of uncertainty bounds for specified models.

In spite of the limited coverage of possible power plant models, some general conclusions about MSET uncertainty can be derived from the current database content. As already observed in the plant-specific analysis, the largest uncertainty is due to the sensor noise. In most cases, confidence intervals are bounded by two standard deviations of the estimated noise. Some exceptions are observed for very small noise levels and for non-Gaussian noise. The largest uncertainty in groups of redundant sensors with small noise levels is due to spill-over. However, this uncertainty is only a small fraction of the introduced drift and is easily discriminated from the response of the drifted sensors when the drift size is not too small.

The simulation-based method for generic uncertainty bounds does not provide uncertainty bounds as accurate as those of the plant-specific analysis. It is possible that this analysis would provide overly conservative uncertainty estimates for some models. If a large generic uncertainty bound substantially reduces the on-line monitoring drift allowance, a detailed uncertainty analysis could be performed for that situation.


E.9 References

[E1] E. Davis and R. Shankar, "Results of the EPRI/Utility on-line monitoring implementation project." Trans. Am. Nucl. Soc. 89, 11-12, (2003).

[E2] K. C. Gross, R. M. Singer, J. P. Herzog, R. VanAlstine, and S. W. Wegerich, "Application of a model-based fault detection system to nuclear plant signals." Intl. Conf. on Intelligent Systems Application to Power Systems, Seoul, Korea, pp. 66-70, (1997).

[E3] E. Davis (Ed.), "On-Line Monitoring of Instrument Channel Performance." TR-104965-R1 NRC SER, EPRI, Palo Alto, CA, (2000).

[E4] M. Granger Morgan and M. Henrion, Uncertainty: A Guide to Dealing with Uncertainty in Quantitative Risk and Policy Analysis, Cambridge University Press, Cambridge (1990).

[E5] Y. Ronen, ed., Uncertainty Analysis, CRC Press, Boca Raton (1988).

[E6] A. Mosleh, N. Siu, C. Smidts, and C. Lui, "Model Uncertainty: Its Characterization and Quantification", Proc. of Workshop I in Advanced Topics in Risk and Reliability Analysis, Annapolis, MD, October 1993, NUREG/CP-0138, (1994).

[E7] G. S. Fishman, Monte Carlo: Concepts, Algorithms, and Applications, Springer-Verlag, New York (1996).

[E8] J. C. Helton and F. J. Davis, "Latin Hypercube sampling and the propagation of uncertainty in analysis of complex systems." Reliability Engineering and System Safety 81, 23-69, (2003).

[E9] B. Efron and R. Tibshirani, An Introduction to the Bootstrap, Chapman and Hall, London and New York (1993).

[E10] A. V. Gribok, A. M. Urmanov, and J. W. Hines, "Uncertainty analysis of memory based sensor validation techniques." Real-Time Systems, 27, 7-26, (2004).

[E11] B. Rasmussen and J. W. Hines, "Exploring the limits of fault detection based on bootstrap prediction intervals." Trans. Am. Nucl. Soc. 89, 22-23, (2003).

[E12] N. Zavaljevski, A. Miron, C. Yu, and E. Davis, "Uncertainty analysis for the Multivariate State Estimation Technique (MSET) based on Latin Hypercube Sampling and wavelet de-noising." Trans. Am. Nucl. Soc. 89, 19-21, (2003).

[E13] S. Mallat, A Wavelet Tour of Signal Processing, Academic Press, San Diego, (1998).

[E14] L. Devroye, Non-Uniform Random Variate Generation, Springer-Verlag, New York, (1986).

[E15] M. Abramowitz and I. Stegun, Handbook of Mathematical Tables, Dover Publications, New York, (1970).

[E16] W. A. Fuller, Introduction to Statistical Time Series, John Wiley & Sons, (1976).

[E17] A. Miron and J. Christenson, "The stochastic parameter simulation system: a wavelet-based methodology for 'perfect' signal reconstruction." ANS Topical Meeting in Mathematics and Computations, Gatlinburg, TN, on CD-ROM (2003).

[E18] A. Miron, "A Wavelet Approach for Development and Application of a Stochastic Parameter Simulation System", Ph.D. Dissertation, University of Cincinnati (2001).

[E19] G. Golub and C. F. Van Loan, Matrix Computations, The Johns Hopkins University Press, Baltimore (1983).

[E20] A. N. Tikhonov and Y. Y. Arsenin, Solutions of Ill-Posed Problems, W. H. Winston, Washington, (1977).

[E21] N. Zavaljevski, K. C. Gross, and S. W. Wegerich, "Regularization Methods for the Multivariate State Estimation Technique (MSET)," Proc. Int. Conf. on Mathematics and Computation, Reactor Physics and Environmental Analysis in Nuclear Applications, Madrid, Spain (1999).

[E22] G. Wahba, Spline Models for Observational Data, SIAM, Philadelphia (1990).

[E23] A. Miron and N. Zavaljevski, "UNAMSET User's Guide," ANL Report, (2002).

[E24] M. Misiti, Y. Misiti, G. Oppenheim, and J. M. Poggi, MATLAB Wavelet Toolbox User's Guide, The MathWorks, Inc., Natick, MA, (2001).

[E25] N. Zavaljevski, A. Miron, C. Yu, T. Y. C. Wei, and E. Davis, "A Study of On-line Monitoring Uncertainty Based on Latin Hypercube Sampling and Wavelet De-Noising," Proc. of the Fourth ANS International Topical Meeting on Nuclear Plant Instrumentation and Control and Human-Machine Interface Technology, September 19-22, Columbus, OH (2004).

E.10 Plant-Specific Uncertainty Analysis

The results of a plant-specific uncertainty analysis are presented for a typical PWR plant. As described in E.5.5, sensors are grouped in eleven models: the RCS flow models for three loops, the RCS pressure model, the pressurizer level model, the RPS models for three loops, and the steam generator level models for three loops. MSET models are developed for each of the plant sensor models. The uncertainty analysis for MSET models is presented for each representative model.

E.10.1 RCS Flow Model

Table E-2
Estimated Noise for RCS Flow Sensors

Signal Description    Distribution    Standard Deviation (% of span)
RCS flow Sensor 1     Gaussian        0.05
RCS flow Sensor 2     Gaussian        0.05
RCS flow Sensor 3     Gaussian        0.05

The estimated random components in this model are small. The exhaustive parametric analysis indicated that many models have satisfactory behavior. BART with 40 vectors in the matrix D has a slight advantage over other models. Regularization is not effective for this case.

Table E-3
Estimated Confidence Intervals for RCS Flow Sensors, Normal Operation

Confidence Intervals    Estimated Noise    Setpoint Noise
Residual (%)            +/-0.04            +/-0.04
True error (%)          +/-0.12            +/-0.19

Table E-4
Estimated Confidence Intervals for RCS Flow Sensors, Drift Conditions

Confidence Intervals    Effect of Estimated Noise and Spill-over    Effect of Setpoint Noise and Spill-over
Residual (%)            +/-0.20                                     +/-0.04
True Error (%)          +/-0.20                                     +/-0.20

E.10.2 Pressurizer Level

Table E-5
Estimated Noise for Pressurizer Level Sensors

Signal Description      Tag Number    Distribution     Standard Deviation (% of span)
Pressurizer level 1     L0480A        non-Gaussian     0.015
Pressurizer level 2     L0481A        non-Gaussian     0.014
Pressurizer level 3     L0482A        non-Gaussian     0.014

The noise level in this model is very small. The selection of the best model is based on spill-over, which can be significant here. The best model is VSET with 60 vectors in the matrix D. Since most MSET applications so far have been based on the BART operator, the best model for this operator (40 target vectors) is also presented.

Table E-6
Estimated Confidence Intervals for Pressurizer Level Sensors, Normal Operation

Confidence Intervals    Estimated Noise              Setpoint Noise
                        VSET-60      BART-40         VSET-60      BART-40
Residual (%)            +/-0.03      +/-0.03         +/-0.03      +/-0.05
True error (%)          +/-0.03      +/-0.03         +/-0.18      +/-0.17

Table E-7
Estimated Confidence Intervals for Pressurizer Level Sensors, Drift Conditions

Confidence Intervals    Effect of Estimated Noise and Spill-over    Effect of Setpoint Noise and Spill-over
                        VSET-60      BART-40                        VSET-60      BART-40
Residual (%)            +/-0.34      +/-0.34                        +/-0.17      +/-0.38
True error (%)          +/-0.34      +/-0.34                        +/-0.38      +/-0.20

E.10.3 RCS Pressure

Table E-8
Estimated Noise for RCS Pressure Sensors

Signal Description          Tag Number    Distribution    Standard Deviation (% of span)
Pressurizer pressure 1      P0480A        Gaussian        0.03
Pressurizer pressure 2      P0481A        Gaussian        0.03
Pressurizer pressure 3      P0482A        Gaussian        0.03
Pressurizer pressure 4      P0482A        Gaussian        0.03
Pressurizer pressure 5      P0484A        Gaussian        0.04
Steam pressure 1            P0498A        Gaussian        0.04
Steam pressure 2            P0499A        Gaussian        0.06

The largest errors in this model are due to spill-over. It was found that the minimization of the maximum spill-over is achieved for BART with 40 vectors in the matrix D.

Table E-9
Estimated Confidence Intervals for RCS Pressure Sensors, Normal Operation

Confidence Intervals    Estimated Noise    Setpoint Noise
Residual (%)            +/-0.09            +/-0.12
True error (%)          +/-0.11            +/-0.17

Table E-10
Estimated Confidence Intervals for RCS Pressure Sensors, Drift Conditions

Confidence Intervals    Effect of Estimated Noise and Spill-over    Effect of Setpoint Noise and Spill-over
Residual (%)            +/-0.26                                     +/-0.22
True error (%)          +/-0.26                                     +/-0.25

E.10.4 Reactor Protection System (RPS) Models

Two models for this system are considered: a model with nine correlated sensors and a separate model for the steam generator levels.

Table E-11
Estimated Noise for RPS Sensors

Signal Description                Tag Number    Distribution    Standard Deviation (% of span)
Feedwater flow C1                 F0443A        Gaussian        0.16
Feedwater flow C2                 F0444A        Gaussian        0.40
Steam flow C1                     F0445A        Gaussian        0.08
Steam flow C2                     F0446A        Gaussian        0.53
Steam pressure C1                 P0440A        Gaussian        0.01
Steam pressure C2                 P0441A        Gaussian        0.01
Steam pressure C3                 P0442A        Gaussian        0.02
Turbine first stage pressure 1    P0398A        Gaussian        0.03
Turbine first stage pressure 2    P0399A        Gaussian        0.03

Significant variability is observed in the estimated noise levels among the first four sensors. The largest estimated noise has the dominant effect on maximum errors. The residuals are rather large, and to constrain the residuals, the number of vectors in the training matrix must be increased from the optimal value required for small uncertainty. Detailed parametric studies for this model are presented in Figures E-20 and E-21. The optimal model is BART with regularization and 75 vectors in the matrix D. The best model for standard BART has 50 vectors in the matrix D.

Figure E-20
Parametric Study for True Errors (RPS, Loop C, true noise; true error vs. number of target vectors, with curves for BART, BART with regularization, VSET, and VSET with regularization)

Figure E-21
Parametric Study for Residuals (RPS, Loop C, true noise; residuals vs. number of target vectors, with curves for BART, BART with regularization, VSET, and VSET with regularization)

The largest confidence intervals for the optimal MSET model and the standard BART operator are shown in Tables E-12 and E-13.

Table E-12
The Largest Confidence Intervals for RPS Sensors, BART with Regularization

Sensor    Normal Operations          Drift Conditions
Number    Residual    True Error     Residual    True Error
1         0.28        0.22           0.28        0.22
2         0.40        0.48           0.40        0.50
3         0.22        0.20           0.22        0.20
4         0.50        0.70           0.50        0.70
5         0.07        0.17           0.40        0.40
6         0.08        0.17           0.40        0.40
7         0.08        0.17           0.40        0.40
8         0.21        0.17           0.25        0.26
9         0.22        0.18           0.25        0.26

Table E-13
The Largest Confidence Intervals for RPS Sensors, Standard BART

Sensor    Normal Operations          Drift Conditions
Number    Residual    True Error     Residual    True Error
1         0.23        0.27           0.36        0.34
2         0.40        0.57           0.42        0.59
3         0.22        0.19           0.25        0.23
4         0.45        0.77           0.47        0.81
5         0.07        0.17           0.44        0.44
6         0.07        0.17           0.44        0.44
7         0.08        0.18           0.44        0.44
8         0.22        0.18           0.26        0.24
9         0.23        0.20           0.26        0.24

The estimated noise for the sensors in the SGL model is close to the setpoint noise (0.1%), and the uncertainty analysis results presented in Tables E-14 and E-15 are evaluated for that noise level.

Table E-14
Estimated Confidence Intervals for Steam Generator Level Sensors, Normal Operation

Confidence Intervals    Effect of Setpoint Noise
                        VSET-20      BART-30
Residual (%)            +/-0.10      +/-0.05
True error (%)          +/-0.17      +/-0.19

Table E-15
Estimated Confidence Intervals for Steam Generator Level Sensors, Drift Conditions

Confidence Intervals    Effect of Setpoint Noise and Spill-over
                        VSET-20      BART-30
Residual (%)            +/-0.22      +/-0.10
True error (%)          +/-0.21      +/-0.20

The summary of the largest confidence intervals for all optimal models is presented in Table E-16, and for the standard BART operator in Table E-17.

Table E-16
Summary of the Largest Confidence Intervals, Optimal Models

PWR Model               Maximum Noise Level (2σ)   Residual Confidence      "True Error" Confidence    Selected MSET Model
                        (% of span)                Intervals (% of span)    Intervals (% of span)
                        Estimated    Setpoint      Normal      Drift        Normal      Drift
RCS flow, Loop A        0.10         0.20          0.04        0.20         0.19        0.20           BART, 40 vectors
RCS flow, Loop B        0.10         0.20          0.04        0.16         0.15        0.20           BART, 40 vectors
RCS flow, Loop C        0.10         0.20          0.04        0.08         0.15        0.17           BART, 40 vectors
Pressurizer Level       0.03         0.20          0.03        0.34         0.18        0.34           VSET, 60 vectors
RCS Pressure            0.12         0.20          0.12        0.26         0.17        0.26           BART, 40 vectors
RPS Model, Loop A       1.00         0.20          0.52        0.52         0.66        0.68           BART, regularized, 50 vectors
RPS Model, Loop B       1.18         0.20          0.56        0.58         0.80        0.82           BART, regularized, 50 vectors
RPS Model, Loop C       1.06         0.20          0.50        0.50         0.70        0.70           BART, regularized, 75 vectors
RPS, Loop A, SGL only   0.20         0.20          0.14        0.32         0.20        0.33           VSET, 20 vectors
RPS, Loop B, SGL only   0.20         0.20          0.07        0.14         0.18        0.18           VSET, 20 vectors
RPS, Loop C, SGL only   0.20         0.20          0.10        0.17         0.22        0.21           VSET, 20 vectors

Table E-17
Summary of the Largest Confidence Intervals, BART Operator

PWR Model               Maximum Noise Level (2σ)   Residual Confidence      "True Error" Confidence    Selected MSET Model
                        (% of span)                Intervals (% of span)    Intervals (% of span)
                        Estimated    Setpoint      Normal      Drift        Normal      Drift
RCS flow, Loop A        0.10         0.20          0.04        0.20         0.19        0.20           BART, 40 vectors
RCS flow, Loop B        0.10         0.20          0.04        0.16         0.15        0.20           BART, 40 vectors
RCS flow, Loop C        0.10         0.20          0.04        0.08         0.15        0.17           BART, 40 vectors
Pressurizer Level       0.03         0.20          0.05        0.38         0.17        0.38           BART, 40 vectors
RCS Pressure            0.12         0.20          0.12        0.26         0.17        0.26           BART, 40 vectors
RPS Model, Loop A       1.00         0.20          0.46        0.51         0.74        0.78           BART, 40 vectors
RPS Model, Loop B       1.18         0.20          0.51        0.52         0.92        0.93           BART, 40 vectors
RPS Model, Loop C       1.06         0.20          0.45        0.47         0.77        0.81           BART, 50 vectors
RPS, Loop A, SGL only   0.20         0.20          0.16        0.34         0.17        0.34           BART, 30 vectors
RPS, Loop B, SGL only   0.20         0.20          0.04        0.10         0.19        0.20           BART, 30 vectors
RPS, Loop C, SGL only   0.20         0.20          0.05        0.10         0.19        0.20           BART, 30 vectors

F
MSET ACCEPTANCE TEST AND PERIODIC TEST

The NRC Safety Evaluation for on-line monitoring provided the following surveillance-related requirement:

Requirement 14
Before declaring the on-line monitoring system operable for the first time, and just before each performance of the scheduled surveillance using an on-line monitoring technique, a full-features functional test, using simulated input signals of known and traceable accuracy, should be conducted to verify that the algorithm and its software perform all required functions within acceptable limits of accuracy. All applicable features shall be tested.

Appendix F provides an acceptance test for the SureSense Diagnostic Monitoring Studio, Version 1.4, MSET-based software. This test has been prepared in a manner that allows it to be used as the periodic test also. The acceptance test was developed by Expert Microsystems, Inc. and has been adapted for use in this report in support of the EPRI on-line monitoring implementation project. For an implementation of SureSense version 2.0, this acceptance test can easily be modified.

F.1 Overview

This procedure illustrates and documents the acceptance test process. Acceptance testing is typically required as part of software acceptance at each facility or site. For Technical Specification applications, there is an additional requirement to perform periodic testing on a quarterly interval as part of an on-line monitoring surveillance procedure. The purpose of this test is to verify that the monitoring functions of the software perform in exactly the same manner as when the V&V or acceptance test was initially performed.

Testing must be performed with the same software on the same computer platform that performs on-line monitoring. This test contains the same elements as the test of Set3 of the V&V testing (refer to Appendix C for the EPRI independent V&V report).

F.2 Model and Data

The model used in this test, Final_Acceptance_Test.svm, is similar to the model used for V&V testing and should not be modified. The only difference is that this model has already been trained, and the training data has been saved with the model. This model and associated data files are available from the software supplier. The model should be placed in the ./projects/ subdirectory. The model has been previously trained on the training data set named Set1.

The phase determiner used with this model is the StandardPhaseDeterminer.class file, which must be placed in the Plugins/phasedet subdirectory. This phase determiner defines two phases, Operating_100 and Operating_50.

The two data sets provided should be placed in the ./data/TestData subdirectory. Set1 is the training data set and Set3 is the data set to be used for testing. Set1 contains data for five well-correlated signals that resemble the data plotted in Figure F-1. During training, the model operates in the Operating_100 phase for 10,080 observations, switches to the Operating_50 phase, and finally returns to the Operating_100 phase. The test data in Set3 contain a small drift in one of the signals (S5). The signal drifts in the positive direction for the first 10,080 observations at approximately 0.00052 units/observation and then downward at approximately 0.002 units/observation for the remainder of the observations. During testing, a mean positive failure should be indicated, followed by a recovery, and finally a mean negative failure. The data containing the drift are plotted in Figure F-2.
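For illustration only, a drifted signal of this shape can be generated as follows. This Python sketch uses a synthetic base signal and assumes a total of 20,160 + 10,080 observations as in the test run; the actual Set3 data are supplied with the software.

import numpy as np

n_up, n_total = 10_080, 30_240
drift = np.concatenate([
    0.00052 * np.arange(n_up),                            # positive drift segment
    0.00052 * n_up - 0.002 * np.arange(n_total - n_up),   # negative drift segment
])
rng = np.random.default_rng(0)
base = 120.0 + 7.0 * np.sin(np.linspace(0.0, 40.0, n_total))  # synthetic S5-like signal
s5_with_drift = base + rng.normal(0.0, 0.3, n_total) + drift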

Figure F-1
Normal Signal Behavior (S1 observation vs. time)

Figure F-2
Signal with Drift (S5 observation vs. time)

F.3 Test Procedure

A checklist is provided at the end of this section and should be filled out as the test is performed.

If necessary, refer to EPRI Interim Report, SureSense Diagnostic Monitoring Studio Users Guide, Version 1.4, for guidance regarding software operation.

At the completion of this test, store the results with the V&V documentation package (if performed for on-site software acceptance) or with the quarterly surveillance procedure documentation package (if performed as a periodic test).

Step 1 - Confirm Files

Depending on the SureSense version, the following files are provided by the software supplier or installed during the SureSense software installation. Verify that the date and time on each file are correct and that the files are located in the appropriate directories.

1. Project File:
a. Name: Final_Acceptance_Test.svm.
b. Date: 8/28/02 3:26 PM.
c. Location: the application's .\projects\ subdirectory (typically C:\Program Files\SDMSv140\Projects).


2. Phase Determiner:
a. Name: StandardPhaseDeterminer.class.
b. Date: 5/16/2002 10:16 AM.
c. Location: the application's .\Plugins\phasedet subdirectory (typically C:\Program Files\SDMSv140\Plugins\phasedet).
3. File Reader:
a. Name: SDF.class.
b. Date: 7/1/2002 8:43 PM.
c. Location: the application's .\Plugins\reader subdirectory (typically C:\Program Files\SDMSv140\Plugins\reader).
4. Training Data File:
a. Name: Set1.sdf.
b. Date: 5/26/2002 10:31 PM.
c. Location: the application's .\Data\TestData subdirectory (typically C:\Program Files\SDMSv140\Data\TestData).
5. Test Data File:
a. Name: Set3.sdf.
b. Date: 5/30/2002 10:14 AM.
c. Location: the application's .\Data\TestData subdirectory (typically C:\Program Files\SDMSv140\Data\TestData).

Step 2 - Start Software

Start the application and log in with a Designer log-in role. Figure F-3 shows an example.

Figure F-3
Log-In Window

Step 3 - Open Model

From the File Menu, select Open. In the Select a File dialog, select Final_Acceptance_Test.svm. Figure F-4 shows the model when initially opened.

Figure F-4
Acceptance Test Model

Step 4 - Verify Model

From the Run Menu, select Verify. No errors should be identified during the verification process. Figure F-5 shows the Model Verified dialog.

Figure F-5
Model Verification

Step 5 - Retrain Model

The model has been previously trained for use. As part of the test procedure, the model will be retrained to verify that the training results are as expected. Perform the following:

1. From the Run Menu, select Train.

The Training Is Current dialog should appear, indicating that training is current for all phases.

2. Select Retrain to start the training process.

The Select Data for Training dialog should appear.

3. Select All Training Data from the Select Data for Training dialog.

The Kept Training Data dialog will appear, requesting whether to use previously extracted data.


4. Select No to All from the Kept Training Data dialog.

The model should access the training data file and perform the training procedure. The Model Training Report should indicate the successful completion of training. Verify that the training results match the expected results provided in Section F.4.1.

5. Save the Model Training Report as a file in a separate directory.

Step 6 - Run Model

Run the model using Set3. Perform the following steps:

1. From the Run Menu, select Run.

The Run Director dialog and a blank Monitoring Run Report should appear.

2. On the Run Director dialog, select Set3 and start the monitoring run.

The model should perform signal analysis of Set3. Verify that the run results match the expected results provided in Section F.4.2. Verify the following times:

  • Verify that the time at each Phase change is identical to the times listed on the report in Section F.4.2.
  • Verify that the times of each signal failure and recovery are identical to the values listed in the report in Section F.4.2.
3. Save the Monitoring Run Report as a file in a separate directory.

Step 7 - Obtain Individual Signal Reports

Obtain individual signal reports for the monitoring run as follows:

1. In the main system window, right-click on each signal and select Info from the list.

The Signal Info dialog will open, with Last Run shown as the default selection.

2. Select OK on the Signal Info dialog to obtain each signal's report.

The Signal Report for the selected signal will appear. Verify that the information for each signal matches the expected results provided in Section F.4.3.

3. Save each report as a file in a separate directory.

Step 8 - Obtain Signal Plots

Obtain individual signal plots for the monitoring run as follows:

1. In the main system window, right-click on each signal and select Plot from the list.

The Select Data for Plotting dialog will open, with Last Run shown as the default selection.


2. Select OK on the Select Data for Plotting dialog to obtain each signal's observation and estimate plot.

The signal's observation and estimate plot should appear. The Plot Data Sampled dialog will appear indicating that only 20,000 points were plotted. Click OK to clear this dialog. Verify that the plot for each signal matches the expected results provided in Section F.4.4. Obtain a screen shot of each plot and save in a separate directory.

3. Select the Autotype menu on each plot and select Residual to obtain a residual plot for each signal.

The Plot Data Sampled dialog will appear indicating that only 20,000 points were plotted. Click OK to clear this dialog. Verify that the plot for each signal matches the expected results provided in Section F.4.4. Obtain a screen shot of each plot and save it in a separate directory.


Acceptance Test Record

Task                                                                       Initials

Step 1: Confirm Files
File Name                        Date                 Location
Final_Acceptance_Test.svm        8/28/02 3:26 PM      .\projects\
StandardPhaseDeterminer.class    5/16/2002 10:16 AM   .\Plugins\phasedet\
SDF.class                        7/1/2002 8:43 PM     .\Plugins\reader
Set1.sdf                         5/26/2002 10:31 PM   .\Data\TestData
Set3.sdf                         5/26/2002 10:33 PM   .\Data\TestData
Comments:

Step 2: Start Software Application Start.

Login.

Comments:

Step 3: Open Model Open Final_Acceptance_Test.svm.

Comments:

Step 4: Verify Model Select Verify from Run Menu.

Comments:

Step 5: Retrain Model Retrain model. Confirm Model Training Report matches expected results.

Comments:

Step 6: Run Model Run Set3. Confirm Monitoring Run Report matches expected results.

Save Report.

Comments:

Step 7: Obtain Individual Signal Reports S1 Signal Report. Confirm Signal Report matches expected report. S1 Signal Report saved.

S2 Signal Report. Confirm Signal Report matches expected report.


S3 Signal Report. Confirm Signal Report matches expected report.

S3 Signal Report saved.

S4 Signal Report. Confirm Signal Report matches expected report.

S4 Signal Report saved.

S5 Signal Report. Confirm Signal Report matches expected report.

S5 Signal Report saved.

Comments:

Step 8: Obtain Signal Plots S1 Observation and Estimation Plot. Confirm plot matches expected plot.

S1 Observation and Estimation Plot saved.

S2 Observation and Estimation Plot. Confirm plot matches expected plot.

S2 Observation and Estimation Plot saved.

S3 Observation and Estimation Plot. Confirm plot matches expected plot.

S3 Observation and Estimation Plot saved.

S4 Observation and Estimation Plot. Confirm plot matches expected plot.

S4 Observation and Estimation Plot saved.

S5 Observation and Estimation Plot. Confirm plot matches expected plot.

S5 Observation and Estimation Plot saved.

S1 Residual Plot. Confirm plot matches expected plot.

S1 Residual Plot saved.

S2 Residual Plot. Confirm plot matches expected plot.

S2 Residual Plot saved.

S3 Residual Plot. Confirm plot matches expected plot.

S3 Residual Plot saved.

S4 Residual Plot. Confirm plot matches expected plot.

S4 Residual Plot saved.

S5 Residual Plot. Confirm plot matches expected plot.

S5 Residual Plot saved.

Comments:


F.4 Expected Test Results

The following sections provide the expected results for the various model operation steps specified in Section F.3.

F.4.1 Model Training Report

Started training for Final_Acceptance_Test...

Processing: OPERATING 100 Extracting 20000 training vectors from 20160 total vectors Breaking data into 2 blocks of 10080 Building model...

100 vectors placed in the Training Matrix Initializing model...

Estimated time to initialize is 10 seconds Profiling training data...

Estimated profiling time is 16 seconds Result Summary for OPERATING_100:

Estimation Method: BART Normalized RMS Error Percent: 0.2721 %

Single Cycle Alarm Total: 0 alarms consisting of Pos Alarm Type: 0 Neg Alarm Type: 0 Summary by Signal Parameter:

SignalName MinValue MaxValue AvgValue StdDevVal MinEstimate MaxEstimate AvgEstimate StdDevEst RMSError MaxError AvgError StdDevErr RMSError% NumAlarms S1 76.6177 105.5175 91.1278 5.6294 77.3143 105.3041 91.1337 5.6178 0.1975 1.0191 0.1523 0.1258 0.2163% 0 S2 79.6125 108.9639 94.9816 5.7333 80.1650 108.3321 94.9730 5.6742 0.2970 1.4625 0.2329 0.1843 0.3121% 0 S3 85.4637 113.6087 99.8483 5.9238 86.0066 113.4068 99.8042 5.8815 0.2285 1.0022 0.1798 0.1411 0.2286% 0 S4 92.3795 120.0516 106.4756 6.2201 93.0361 120.1570 106.4729 6.1841 0.3323 1.3479 0.2634 0.2026 0.3116% 0 S5 106.8380 133.8294 120.3015 6.8841 106.7744 133.5084 120.3434 6.8908 0.3522 1.3246 0.2810 0.2123 0.2922% 0 Training successful for OPERATING_100

Processing: OPERATING_50 Extracting 10080 training vectors from 10080 total vectors Breaking data into 1 blocks of 10080 Building model...

100 vectors placed in the Training Matrix Initializing model...

Estimated time to initialize is 5 seconds Profiling training data...

Estimated profiling time is 5 seconds Result Summary for OPERATING 50:

Estimation Method: BART Normalized RMS Error Percent: 0.3246 %

Single Cycle Alarm Total: 0 alarms consisting of Pos Alarm Type: 0 Neg Alarm Type: 0 Summary by Signal Parameter:

SignalName MinValue MaxValue AvgValue StdDevVal MinEstimate MaxEstimate AvgEstimate StdDevEst RMSError MaxError AvgError StdDevErr RMSError% NumAlarms S1 37.2378 53.1065 45.5690 2.8726 37.9186 52.6443 45.5121 2.8567 0.1662 0.7426 0.1299 0.1037 0.3645% 0 S2 39.6713 54.9796 47.4889 2.9418 39.9398 54.6161 47.5112 2.9161 0.1278 0.5204 9.826E-02 8.173E-02 0.2685% 0 S3 42.6918 57.7413 49.9023 3.0416 42.9245 57.4094 49.9075 3.0115 0.1634 0.7181 0.1265 0.1035 0.3269% 0 S4 45.5492 60.8395 53.2434 3.1791 45.6804 60.2680 53.2950 3.1461 0.1835 0.8512 0.1425 0.1156 0.3438% 0 S5 52.2701 67.8669 60.1507 3.5235 52.2439 67.8533 60.1261 3.5222 0.1923 0.8253 0.1518 0.1180 0.3193% 0 Training successful for OPERATING_50 Saving training data set to files...

Save completed.

Training complete for all active phases.


F.4.2 Run Results

Upon completion of the run, the main system window should appear as shown in Figure F-6. The expected monitoring run report is provided below.

Figure F-6
Main System Window After Monitoring Run of Set3

Started monitoring for Final_Acceptance_Test...

Starting run for Set3
Start time value: 01/01/02 00:00:00
01/01/02 00:00:00: Changed phase to OPERATING_100
01/06/02 04:12:00: MEAN_POS failure for S5
01/08/02 00:00:00: Changed phase to OPERATING_50
01/09/02 05:44:00: MEAN_POS recovery for S5
01/09/02 05:44:00: MODEL recovery for S5
01/11/02 01:18:59: MEAN_NEG failure for S5
01/15/02 00:00:00: Changed phase to OPERATING_100
Run completed normally
Stop time value: 01/21/02 23:59:00
Result Summary for OPERATING_100:

Data points processed: 20160 Average Processing Time: 0.7860 msec Normalized RMS Error Percent: 2.7725 %

Single Cycle Alarm Total: 12687 alarms consisting of Pos Alarm Type: 2606 Neg Alarm Type: 10081 Failure Decision Total: 12708 failures consisting of Pos Decision Type: 2628 Neg Decision Type: 10080 Summary by Signal Parameter:

SignalName MinValue MaxValue AvgValue StdDevVal MinEstimate MaxEstimate AvgEstimate StdDevEst RMSError MaxError AvgError StdDevErr RMSError% NumAlarms NumFailures Time1stFail
S1 76.0174 106.9415 91.1674 5.5985 76.9709 105.6724 91.1953 5.5781 0.2023 1.2749 0.1574 0.1271 0.2215% 1 0
S2 78.8292 109.9985 94.9642 5.7494 80.3634 108.8447 95.0357 5.7048 0.3341 1.6946 0.2594 0.2105 0.3509% 1 0
S3 85.9183 114.6711 99.8355 5.9353 86.2753 113.6504 99.8402 5.9087 0.2490 1.0808 0.1954 0.1543 0.2489% 0 0
S4 92.6334 120.0894 106.4964 6.2153 93.3014 120.2523 106.5797 6.1229 0.3892 1.8105 0.3077 0.2384 0.3646% 1 0
S5 79.3885 138.2853 111.8940 13.5389 108.2018 134.2147 120.9314 6.9114 14.2871 30.8903 10.5335 9.6526 11.7950% 12684 12708 01/06/02 04:12:00

Result Summary for OPERATING_50:

Data points processed: 10080 Average Processing Time: 0.7859 msec Normalized RMS Error Percent: 2.3469 %

Single Cycle Alarm Total: 7392 alarms consisting of Pos Alarm Type: 1679 Neg Alarm Type: 5713 Failure Decision Total: 7465 failures consisting of Pos Decision Type: 1784 Neg Decision Type: 5681 Summary by Signal Parameter:

SignalName MinValue MaxValue AvgValue StdDevVal MinEstimate MaxEstimate AvgEstimate StdDevEst RMSError MaxError AvgError StdDevErr RMSError% NumAlarms NumFailures Time1stFail
S1 37.6533 54.2379 45.5534 2.8890 37.9979 52.8414 45.4902 2.8670 0.1731 1.3504 0.1347 0.1087 0.3798% 1 0
S2 40.0146 55.4665 47.5063 2.9385 40.1829 54.7825 47.5114 2.9131 0.1497 0.8001 0.1153 9.541E-02 0.3145% 2 0
S3 41.3610 57.8568 49.9357 3.0394 43.0871 57.4760 49.9275 3.0019 0.1735 1.8586 0.1344 0.1098 0.3469% 5 0
S4 45.0339 61.4463 53.2470 3.1885 45.8600 60.5860 53.2751 3.1429 0.2280 1.3970 0.1777 0.1429 0.4272% 2 0
S5 41.7444 71.4988 56.4681 6.0769 52.2894 67.8644 59.9685 3.5710 5.8308 13.1684 4.5832 3.6046 9.7059% 7382 7465 01/09/02 05:44:00

F.4.3 Signal Reports

The following sections provide the report for each signal.

F.4.3.1 Signal Report for S1

Properties of Signal S1:

Parameter Name: P1 Unit: PSI Component: System Port: Internal Allowable Error: Not defined Maximum Error: Not defined Confidence: 0.9975 Residual Moving Avg Points: 1 Validated: True Fault detector settings for OPERATING_100 phase:

Test Type: Gaussian False Alarm Probability: 0.0010 Missed Alarm Probability: 0.0010 Mean Disturbance Magnitude: 10.0 Variance Disturbance Magnitude: 100.0 Mean Probability Confidence Level (PCL): 0.95 Variance PCL: 0.95 Mean Prior Probability of the Normal Hypothesis (PHO): 0.5 Variance PHO: 0.5 Series Length: 10 Monitoring performance results for OPERATING_100 phase:

Number of Observations: 20160 MinValue MaxValue AvgValue StdDevVal 76.0174 106.9415 91.1674 5.5985 MinEstimate MaxEstimate AvgEstimate StdDevEst 76.9709 105.6724 91.1953 5.5781 RMSError MaxError AvgError StdDevErr 0.2023 1.2749 0.1574 0.1271 Alarm Total: 1 consisting of Pos Alarm Type: 1 Neg Alarm Type: 0 Failure Total: None Fault detector settings for OPERATING_50 phase:

Test Type: Gaussian False Alarm Probability: 0.0010 Missed Alarm Probability: 0.0010 Mean Disturbance Magnitude: 10.0 Variance Disturbance Magnitude: 100.0 Mean Probability Confidence Level (PCL): 0.95 Variance PCL: 0.95 Mean Prior Probability of the Normal Hypothesis (PHO): 0.5 Variance PHO: 0.5 Series Length: 10 Monitoring performance results for OPERATING 50 phase:


Number of Observations: 10080 MinValue MaxValue AvgValue StdDevVal 37.6533 54.2379 45.5534 2.8890 MinEstimate MaxEstimate AvgEstimate StdDevEst 37.9979 52.8414 45.4902 2.8670 RMSError MaxError AvgError StdDevErr 0.1731 1.3504 0.1347 0.1087 Alarm Total: 1 consisting of Pos Alarm Type: 1 Neg Alarm Type: 0 Failure Total: None

F.4.3.2 Signal Report for S2

Properties of Signal S2:

Parameter Name: P2 Unit: PSI Component: System Port: Internal Allowable Error: Not defined Maximum Error: Not defined Confidence: 0.9975 Residual Moving Avg Points: 1 Validated: True Fault detector settings for OPERATING 100 phase:

Test Type: Gaussian False Alarm Probability: 0.0010 Missed Alarm Probability: 0.0010 Mean Disturbance Magnitude: 10.0 Variance Disturbance Magnitude: 100.0 Mean Probability Confidence Level (PCL): 0.95 Variance PCL: 0.95 Mean Prior Probability of the Normal Hypothesis (PHO): 0.5 Variance PHO: 0.5 Series Length: 10 Monitoring performance results for OPERATING_100 phase:

Number of Observations: 20160 MinValue MaxValue AvgValue StdDevVal 78.8292 109.9985 94.9642 5.7494 MinEstimate MaxEstimate AvgEstimate StdDevEst 80.3634 108.8447 95.0357 5.7048 RMSError MaxError AvgError StdDevErr 0.3341 1.6946 0.2594 0.2105 Alarm Total: 1 consisting of Pos Alarm Type: 0 Neg Alarm Type: 1 Failure Total: None Fault detector settings for OPERATING_50 phase:

Test Type: Gaussian False Alarm Probability: 0.0010 Missed Alarm Probability: 0.0010 Mean Disturbance Magnitude: 10.0 Variance Disturbance Magnitude: 100.0 Mean Probability Confidence Level (PCL): 0.95

Variance PCL: 0.95 Mean Prior Probability of the Normal Hypothesis (PHO): 0.5 Variance PHO: 0.5 Series Length: 10 Monitoring performance results for OPERATING_50 phase:

Number of Observations: 10080

MinValue MaxValue AvgValue StdDevVal 40.0146 55.4665 47.5063 2.9385 MinEstimate MaxEstimate AvgEstimate StdDevEst 40.1829 54.7825 47.5114 2.9131 RMSError MaxError AvgError StdDevErr 0.1497 0.8001 0.1153 9.541E-02 Alarm Total: 2 consisting of Pos Alarm Type: 2 Neg Alarm Type: 0 Failure Total: None

F.4.3.3 Signal Report for S3

Properties of Signal S3:

Parameter Name: P3 Unit: PSI Component: System Port: Internal Allowable Error: Not defined Maximum Error: Not defined Confidence: 0.9975 Residual Moving Avg Points: 1 Validated: True Fault detector settings for OPERATING_100 phase:

Test Type: Gaussian False Alarm Probability: 0.0010 Missed Alarm Probability: 0.0010 Mean Disturbance Magnitude: 10.0 Variance Disturbance Magnitude: 100.0 Mean Probability Confidence Level (PCL): 0.95 Variance PCL: 0.95 Mean Prior Probability of the Normal Hypothesis (PHO): 0.5 Variance PHO: 0.5 Series Length: 10 Monitoring performance results for OPERATING_100 phase:

Number of Observations: 20160

    MinValue       MaxValue       AvgValue       StdDevVal
    85.9183        114.6711       99.8355        5.9353
    MinEstimate    MaxEstimate    AvgEstimate    StdDevEst
    86.2753        113.6504       99.8402        5.9087
    RMSError       MaxError       AvgError       StdDevErr
    0.2490         1.0808         0.1954         0.1543

Alarm Total: None
Failure Total: None

Fault detector settings for OPERATING_50 phase:

    Test Type: Gaussian
    False Alarm Probability: 0.0010
    Missed Alarm Probability: 0.0010
    Mean Disturbance Magnitude: 10.0
    Variance Disturbance Magnitude: 100.0
    Mean Probability Confidence Level (PCL): 0.95
    Variance PCL: 0.95
    Mean Prior Probability of the Normal Hypothesis (PHO): 0.5
    Variance PHO: 0.5
    Series Length: 10

Monitoring performance results for OPERATING_50 phase:

Number of Observations: 10080

    MinValue       MaxValue       AvgValue       StdDevVal
    41.3610        57.8568        49.9357        3.0394
    MinEstimate    MaxEstimate    AvgEstimate    StdDevEst
    43.0871        57.4760        49.9275        3.0019
    RMSError       MaxError       AvgError       StdDevErr
    0.1735         1.8586         0.1344         0.1098

Alarm Total: 5 consisting of
    Pos Alarm Type: 0
    Neg Alarm Type: 5
Failure Total: None

F.4.3.4 Signal Report for S4

Properties of Signal S4:

    Parameter Name: P4
    Unit: PSI
    Component: System
    Port: Internal
    Allowable Error: Not defined
    Maximum Error: Not defined
    Confidence: 0.9975
    Residual Moving Avg Points: 1
    Validated: True

Fault detector settings for OPERATING_100 phase:

    Test Type: Gaussian
    False Alarm Probability: 0.0010
    Missed Alarm Probability: 0.0010
    Mean Disturbance Magnitude: 10.0
    Variance Disturbance Magnitude: 100.0
    Mean Probability Confidence Level (PCL): 0.95
    Variance PCL: 0.95
    Mean Prior Probability of the Normal Hypothesis (PHO): 0.5
    Variance PHO: 0.5
    Series Length: 10

Monitoring performance results for OPERATING_100 phase:

Number of Observations: 20160

    MinValue       MaxValue       AvgValue       StdDevVal
    92.6334        120.0894       106.4964       6.2153
    MinEstimate    MaxEstimate    AvgEstimate    StdDevEst
    93.3014        120.2523       106.5797       6.1229
    RMSError       MaxError       AvgError       StdDevErr
    0.3892         1.8105         0.3077         0.2384

Alarm Total: 1 consisting of
    Pos Alarm Type: 1
    Neg Alarm Type: 0
Failure Total: None

Fault detector settings for OPERATING_50 phase:

    Test Type: Gaussian
    False Alarm Probability: 0.0010
    Missed Alarm Probability: 0.0010
    Mean Disturbance Magnitude: 10.0
    Variance Disturbance Magnitude: 100.0
    Mean Probability Confidence Level (PCL): 0.95
    Variance PCL: 0.95
    Mean Prior Probability of the Normal Hypothesis (PHO): 0.5
    Variance PHO: 0.5
    Series Length: 10

Monitoring performance results for OPERATING_50 phase:

Number of Observations: 10080

    MinValue       MaxValue       AvgValue       StdDevVal
    45.0339        61.4463        53.2470        3.1885
    MinEstimate    MaxEstimate    AvgEstimate    StdDevEst
    45.8600        60.5860        53.2751        3.1429
    RMSError       MaxError       AvgError       StdDevErr
    0.2280         1.3970         0.1777         0.1429

Alarm Total: 2 consisting of
    Pos Alarm Type: 2
    Neg Alarm Type: 0
Failure Total: None

F.4.3.5 Signal Report for S5

Properties of Signal S5:

    Parameter Name: P5
    Unit: PSI
    Component: System
    Port: Internal
    Allowable Error: Not defined
    Maximum Error: Not defined
    Confidence: 0.9975
    Residual Moving Avg Points: 1
    Validated: True

Fault detector settings for OPERATING_100 phase:

    Test Type: Gaussian
    False Alarm Probability: 0.0010
    Missed Alarm Probability: 0.0010
    Mean Disturbance Magnitude: 10.0
    Variance Disturbance Magnitude: 100.0
    Mean Probability Confidence Level (PCL): 0.95
    Variance PCL: 0.95
    Mean Prior Probability of the Normal Hypothesis (PHO): 0.5
    Variance PHO: 0.5
    Series Length: 10

Monitoring performance results for OPERATING_100 phase:

Number of Observations: 20160

    MinValue       MaxValue       AvgValue       StdDevVal
    79.3885        138.2853       111.8940       13.5389
    MinEstimate    MaxEstimate    AvgEstimate    StdDevEst
    108.2018       134.2147       120.9314       6.9114
    RMSError       MaxError       AvgError       StdDevErr
    14.2871        30.8903        10.5335        9.6526

Alarm Total: 12684 consisting of
    Pos Alarm Type: 2604
    Neg Alarm Type: 10080
Failure Total: 12708 consisting of
    Pos Failure Type: 2628
    Neg Failure Type: 10080

Failure History:

    Time                 Type           Value      Estimate   Error
    01/06/02 04:12:00    MEANPOS        136.0251   133.6102   2.4568

Fault detector settings for OPERATING_50 phase:

    Test Type: Gaussian
    False Alarm Probability: 0.0010
    Missed Alarm Probability: 0.0010
    Mean Disturbance Magnitude: 10.0
    Variance Disturbance Magnitude: 100.0
    Mean Probability Confidence Level (PCL): 0.95
    Variance PCL: 0.95
    Mean Prior Probability of the Normal Hypothesis (PHO): 0.5
    Variance PHO: 0.5
    Series Length: 10

Monitoring performance results for OPERATING_50 phase:

Number of Observations: 10080

    MinValue       MaxValue       AvgValue       StdDevVal
    41.7444        71.4988        56.4681        6.0769
    MinEstimate    MaxEstimate    AvgEstimate    StdDevEst
    52.2894        67.8644        59.9685        3.5710
    RMSError       MaxError       AvgError       StdDevErr
    5.8308         13.1684        4.5832         3.6046

Alarm Total: 7382 consisting of
    Pos Alarm Type: 1674
    Neg Alarm Type: 5708
Failure Total: 7465 consisting of
    Pos Failure Type: 1784
    Neg Failure Type: 5681

Failure History:

    Time                 Type           Value      Estimate   Error
    01/09/02 05:44:00    MODELRECOVER   64.8929    65.1360    -0.2677
    01/11/02 01:18:59    MEANNEG        60.1175    61.3720    -1.2791

F.4.4 Signal Plots

The following sections provide the observation and estimation plot and the residual plot for each signal.
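The figures referenced in the following sections are screen captures from the monitoring software. For readers who want to produce comparable views from exported channel data, the matplotlib sketch below draws the same two panels (observation overlaid with its estimate, and the residual between them); the arrays, one-minute sampling, and labels are assumptions for illustration and are not taken from the report.

    import numpy as np
    import matplotlib.pyplot as plt

    # Stand-in series for one signal (e.g., S1); replace with exported data.
    rng = np.random.default_rng(1)
    t = np.arange(20160) / 1440.0                       # time in days at one-minute samples
    obs = 91.2 + 5.6 * np.sin(2 * np.pi * t / 7.0) + rng.normal(0.0, 0.2, t.size)
    est = obs - rng.normal(0.15, 0.12, t.size)          # stand-in for the MSET estimate

    fig, (ax1, ax2) = plt.subplots(2, 1, sharex=True, figsize=(8, 6))
    ax1.plot(t, obs, label="Observation vs. Time")
    ax1.plot(t, est, label="Estimation vs. Time")
    ax1.set_ylabel("PSI")
    ax1.legend()
    ax2.plot(t, obs - est)
    ax2.set_ylabel("Residual (PSI)")
    ax2.set_xlabel("Time (days)")
    fig.suptitle("Final Acceptance Test: Last Run")
    plt.tight_layout()
    plt.show()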


F.4.4.1 Signal Plots for S1

[Plot image: Final Acceptance Test: Last Run; S1: Observation vs. Time and S1: Estimation vs. Time]

Figure F-7
Signal S1 Observation and Estimate Plot

[Plot image: Final Acceptance Test: Last Run; S1: Residual vs. Time]

Figure F-8
Signal S1 Residual Plot

F.4.4.2 Signal Plots for S2

[Plot image: Final Acceptance Test: Last Run; S2: Observation vs. Time and S2: Estimation vs. Time]

Figure F-9
Signal S2 Observation and Estimate Plot

[Plot image: Final Acceptance Test: Last Run; S2: Residual vs. Time]

Figure F-10
Signal S2 Residual Plot

F.4.4.3 Signal Plots for S3

[Plot image: Final Acceptance Test: Last Run; S3: Observation vs. Time and S3: Estimation vs. Time]

Figure F-11
Signal S3 Observation and Estimate Plot

[Plot image: Final Acceptance Test: Last Run; S3: Residual vs. Time]

Figure F-12
Signal S3 Residual Plot

F.4.4.4 Signal Plots for S4

[Plot image: Final Acceptance Test: Last Run; S4: Observation vs. Time and S4: Estimation vs. Time]

Figure F-13
Signal S4 Observation and Estimate Plot

[Plot image: Final Acceptance Test: Last Run; S4: Residual vs. Time]

Figure F-14
Signal S4 Residual Plot

F.4.4.5 Signal Plots for S5

[Plot image: Final Acceptance Test: Last Run; S5: Observation vs. Time and S5: Estimation vs. Time]

Figure F-15
Signal S5 Observation and Estimate Plot

[Plot image: Final Acceptance Test: Last Run; S5: Residual vs. Time]

Figure F-16
Signal S5 Residual Plot
