ML110320041

2011/01/30 - Statement by Bruce A. Egan, Sc.D., CCM, in the Matter of Entergy Corporation, Pilgrim Nuclear Power Station License Renewal Application
Person / Time
Site: Pilgrim
Issue date: 01/30/2011
From: Egan B A
Egan Environmental
To:
Atomic Safety and Licensing Board Panel
SECY RAS
Shared Package: ML110320036
References
RAS 19522, 50-293-LR, ASLBP 06-848-02-LR
Download: ML110320041 (8)


Text

UNITED STATES OF AMERICA
NUCLEAR REGULATORY COMMISSION
BEFORE THE ATOMIC SAFETY AND LICENSING BOARD

In the Matter of: Entergy Corporation, Pilgrim Nuclear Power Station License Renewal Application (Docket No. 50-293-LR)

Statement by Bruce A. Egan, Sc.D., CCM
January 30, 2011

I, Bruce A. Egan, hereby declare under penalty of perjury that the following is true and correct to the best of my knowledge. I have reviewed the recent affidavit jointly authored by Drs. Steven R. Hanna and Kevin R. O'Kula and the affidavit by James V. Ramsdell submitted in the above-captioned proceeding on behalf of Entergy and the NRC Staff. These comments respond to some of the issues discussed relating to the SAMA analysis methods used for the relicensing of the Pilgrim Nuclear Power Station.

Background Discussion

First, there seems to be a running theme that because the end use of dispersion modeling for SAMA analyses differs from that of consequence assessments for emergency response planning, one can justify using outdated modeling methodologies for SAMAs that ignore technological improvements. My comments will show that I disagree with this perspective.

Emergency response planning requires running accurate, competent models to identify evacuation routes, locations of decontamination centers, and similar resources, in order to minimize the potential exposure of residents and workers in the event of an actual accident. Emergency response during an accident requires a real-time alert system showing which evacuation routes to use, how workers should approach the plant, and similar information. SAMA analyses require an assessment of the potential consequences of various postulated accidents under anticipated meteorological events. Each of these applications requires that appropriate meteorological data be used. The meteorology at a site does not vary by application, and one should apply the best science that is reasonably available to all of these applications. There may be tradeoffs where some analysis methods are substantially more costly than others, but the application of modeling to SAMA analyses does not appear to be such a case. The issue is not the cost of the analyses but rather the confidence one has that the modeling is done reliably with state-of-the-art technologies.

Comments Regarding the Plume Segment Model

The responses by Drs. Hanna and O'Kula to Questions 14 and 33 describe how the ATMOS module within the MACCS2 model simulates transport and dispersion with a plume segment algorithm. Their description states that the plume segment model goes beyond the straight-line Gaussian plume model in that it is able to account for hour-to-hour changes in atmospheric stability, wind speed, and precipitation during plume travel. Noticeably absent are hourly changes in wind direction, a key concern for the PNPS site. It is a straight-line Gaussian model. The associated reference for the plume segment model is a section of NRC Regulatory Guide 1.111 entitled "Plume Element Models." The reference to this section is misleading, as it contains only one equation, and that equation is for a puff model. No equations are provided for the plume segment model. The plume segment model is addressed in a single subsequent paragraph, which states that the model uses spatial and temporal (emphasis added) variations of wind direction, wind speed, and atmospheric stability to define the transport and diffusion of each element. The next and final paragraph of Regulatory Guide 1.111 essentially states what we have been advocating: the effectiveness of the meteorological input data in defining atmospheric transport and diffusion conditions is dependent on the representativeness of these data and the complexity of the topography in the site region; therefore, a detailed discussion of the applicability of the model and input data should be provided. The plume segment model as applied to the PNPS uses temporal but not spatial variations of meteorological conditions. Spatial variations would require the use of simultaneous data from multiple meteorological stations. My understanding is that the application at PNPS did not use multiple-station data in this context.
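For reference (this is the standard textbook form of the model under discussion, not a transcription of the ATMOS coding), the straight-line Gaussian plume model gives the ground-level concentration at downwind distance x and crosswind offset y, for a continuous release of strength Q at effective height H, as

\chi(x, y, 0) = \frac{Q}{\pi\, u\, \sigma_y(x)\, \sigma_z(x)} \exp\!\left(-\frac{y^2}{2\sigma_y^2(x)}\right) \exp\!\left(-\frac{H^2}{2\sigma_z^2(x)}\right),

where u is the single transport wind speed and sigma_y, sigma_z are the stability-dependent horizontal and vertical dispersion coefficients. The downwind axis is fixed along one wind direction for the entire travel distance; a segment formulation can update u, stability, and precipitation hour to hour, but each segment still moves along a straight line, which is why the omission of hourly wind-direction changes is significant.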

Issues of Model Capabilities and Applications to SAMA and Emergency Response Needs

Dr. Hanna and Mr. Ramsdell seem to acknowledge that recent advances in the atmospheric sciences, especially in understanding the complexities of dispersion in the planetary boundary layer, have resulted in technical improvements to atmospheric transport simulations. These scientific advances, along with advances in computational methods, have produced the remarkable improvements in the meteorological models used operationally to predict future weather. Many of these advances have been incorporated into the coding of the newer transport and diffusion models used for the environmental assessments required for permitting and safety of power plants and industrial sources. I am sure these experts would also agree that for emergency planning, or for use during actual emergencies, it would be beneficial to use dispersion models that utilize better science to simulate phenomena and to predict the dispersion consequences of individual events in a highly reliable and competent manner.

From a computational point of view, the key difference between the modeling needs of SAMA analyses and applications to emergency response is that, as constructed, the SAMA analyses evaluate only long-term average consequences; short-term averages are not needed. For emergency response, short-term predictions are essential. However, from an air quality modeling computational standpoint, the difference between these needs essentially reduces to how the results are averaged and how the data are manipulated in post-processors. The core elements of the RASCAL model described by Mr. Ramsdell are used to calculate 1-hour values that could be averaged to produce the long-term averages needed for a SAMA. With today's computers, the computation time is unlikely to be an issue. We think such advances could improve the reliability and credibility of ATMOS, because improvements made to the model's 1-hour predictions would improve the reliability of the annual average values.

Dr. O'Kula argues that improvements to predicting the annual averages are not necessary because, at the PNPS, one would have to show substantial changes to the projected population doses and economic consequences for another SAMA to be cost effective. I note that this is a site-specific comment and would not necessarily be applicable to other power plants. At another site, the differences might be much smaller, and improvements to the modeling code could change the identification of cost-effective SAMAs. The selection of SAMAs should not depend upon the choice of dispersion model. Improved simulations of transport and dispersion for all time scales would benefit the industry, as they would reduce the uncertainty that decision makers have to address. The comments that the US EPA's requirement to address National Ambient Air Quality Standards (NAAQS) with short-term averaging times (one-hour, 3-hour, and 24-hour averages) is the reason EPA uses more advanced models are not correct. The averaging times for the NAAQS range from one hour to annual averages.
The EPA has guidance for selecting the most appropriate dispersion model for use in different applications (40 CFR Part 51, Appendix W, Guideline on Air Quality Models). The criteria are based on a combination of appropriate recent science and model validation. Under these criteria, there is no question of using different dispersion modeling techniques for short-term averages versus long-term averages. Three criteria pollutants have annual average standards: SO2, NO2, and particulate matter. The same models used for estimating short-averaging-time impacts are used for the annual averages. The modeling requirements for demonstrating compliance with the NAAQS for nitrogen dioxide are an example. The initial standard set for NO2 was for annual average concentrations. On the basis of revised findings of health effects, EPA in 2010 set a new standard with a one-hour averaging time. The dispersion modeling methods recommended for compliance demonstrations for both the annual averages and the one-hour values did not change. The choice of model does not depend upon the averaging time over which meteorological variations occur.

Model Development Issues

In response to Questions 59 and 60, Drs. O'Kula and Hanna discuss difficulties associated with trying to improve the MACCS2 code. The comment that eight years were required to develop the AERMOD code needs to be placed in context. The initial multiyear effort of the AMS/EPA Regulatory Model Improvement Committee (AERMIC), which is responsible for the development of AERMOD, was to sort out, test, and determine the best ways to integrate the findings of meteorological research studies aimed at improving the parameterization of the transport and dispersion characteristics of air flow in the planetary boundary layer. That research effort was time consuming, but it was completed and is well published. It would not need to be done again for purposes of improving ATMOS. Importantly, the results are viewed as a major step forward in defining the algorithms for computer simulations ranging from Gaussian dispersion models to advanced numerical simulation models. The upgrades to the AERMOD code resulting from this research have flowed into improvements to the CALPUFF model and to the Emissions and Dispersion Modeling System (EDMS) used by the FAA for aircraft operations, as well as into modeling codes advanced by the National Park Service and other environmental protection agencies in the US and abroad.

I have personal experience as a Project Director responsible for the staffing, budget, and performance of contract efforts to develop and validate atmospheric dispersion models. I agree that an effort would be involved in upgrading ATMOS, but I believe the coding part would not be nearly as difficult as these responses imply. The code for radioactive decay used in ATMOS would need to be integrated into any new code, but this could retain the structure presently used in ATMOS. Statements made in Mr. Ramsdell's affidavit also support my earlier assertion that the computational time to run more sophisticated models should not be a deterrent to adopting advanced models. Mr. Ramsdell says at A32, referring to his involvement in the NUREG-6853 study (Molenkamp et al., 2004): "Data preparation for MACCS2 was completed in a few hours, and code execution took less than 10 minutes on a PC. Data preparation for RASCAL required somewhat longer, but still only a few days. RASCAL code execution took about an hour on a PC."

Finally, weeks were spent getting data ready to run the ADAPT/LODI codes, and the execution of these codes took almost a week of calculation on a mainframe computer. First, it is clear that running MACCS2 took hardly any time. RASCAL, a model which incorporates some of the advanced atmospheric science features of AERMOD, also takes very little data preparation and execution time. ADAPT/LODI required more time in data gathering and in execution, but still not an unreasonable amount of effort for an important analysis. One would think that the licensing of a nuclear power plant would be an important enough application that data preparation time and computer resources would not be constraining factors. I note that the draft description of RASCAL Version 4 (Ramsdell et al., 2010) describes the model as using a straight-line Gaussian plume model near the release point, where travel times are short, and a Lagrangian-trajectory Gaussian puff model at longer distances, where temporal or spatial variations in meteorological conditions may be significant. From this perspective, RASCAL appears more advanced than ATMOS.

The statement by Drs. O'Kula and Hanna in response to Question 60 that the three models (ATMOS, AERMOD, and CALPUFF) are likely to produce similar results holds because the topography of the region modeled was simple, flat terrain, the only setting that the ATMOS model is designed for. I would expect significant differences in other topographic settings, such as complex terrain and coastal settings, where terrain elevations, surface parameters, and precipitation rates vary with location. The differences would be even larger if a risk measure such as the 95th percentile values were examined rather than only annual average calculations.

Model Predictions at Long Distances and the Importance of Spatially Varying Parameters

Table 3 of Dr. O'Kula's response to Question 43 shows that the population dose risk for distances in the range of 30 to 50 miles encompasses 56% of the total risk. Similarly, the offsite economic cost risk in the range of 30 to 50 miles is about 54% of the total. These distances fall in the range beyond 50 km (31 miles) for which the US EPA would generally call for the use of a puff model capable of handling temporally and spatially changing meteorological conditions. The results show the importance of impacts in the range beyond 30 miles to the consequences of accidental releases relative to the total impact over the area. This reinforces our argument that model accuracy is important at these large distances. Modeling simulations of radioactive decay and deposition processes act to deplete material from a plume as it travels downwind. Other things being equal, if deposition rates are large in the areas near the source, depletion rates farther away will be smaller, and vice versa. ATMOS uses rates that do not vary with location. Similarly, the travel time of a plume will determine the fraction of radioactive decay that occurs in the near versus far field of a release. One of the computational limits of the ATMOS model is that it can utilize only one value of the surface roughness parameter for the entire modeling domain, in this case the area within a radius of 50 miles. More advanced models allow roughness length as well as other surface characteristics to vary spatially.
CALPUFF, for example, can additionally utilize information about surface albedo and the Bowen ratio, two other parameters that research efforts show are needed to improve the estimation of wind speeds, wind speed profiles, and dispersion rates in transport and dispersion models. An example of a systematic bias in the ATMOS application at the PNPS that is especially important at large distances is the use of only the seasonally averaged afternoon mixing depths. Because afternoon mixing depths are generally much larger than morning mixing depths, and because at large distances from a source ground-level concentrations will be lower with increased mixing depth, this is not a conservative assumption.

In the discussion about wind over the ocean, I found Dr. Hanna's response to Question 85 to be out of context with the potential accident configurations at the PNPS, and therefore to carry an erroneous implication about the role of overwater transport. Dr. Hanna states that a factor of 2 greater wind speed over the ocean would, by itself, contribute to a reduction of maximum concentrations by approximately a factor of two. This would strictly be true only if the source were also within the airflow over the ocean. As Dr. Hanna correctly states in response to Question 28, the dilution effect of wind speed and the inverse wind speed relationship to concentration apply only to the initial dilution of the emission source. What often happens in an onshore flow, since the air over the water is often more stable than that over land, is a fumigation-type event: the surface roughness change and the warm land surface create more turbulence in the surface layer, which mixes plume material from an elevated plume down to the surface, resulting in increased ground-level concentrations.
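To make the travel-time point of the preceding section concrete, here is a minimal sketch in Python (the 5 m/s wind and the 2-hour half-life case are hypothetical illustration values, not PNPS source-term data; the I-131 half-life of 8.02 days is standard):

    import math

    def surviving_fraction(distance_m, wind_speed_ms, half_life_s):
        # First-order radioactive decay exp(-lambda * t), with the travel
        # time t taken as distance / wind speed along a straight trajectory.
        travel_time_s = distance_m / wind_speed_ms
        decay_const = math.log(2.0) / half_life_s
        return math.exp(-decay_const * travel_time_s)

    METERS_PER_MILE = 1609.34
    cases = [("I-131, 8.02 d half-life", 8.02 * 24 * 3600),
             ("hypothetical 2 h half-life", 2.0 * 3600)]
    for label, half_life_s in cases:
        for miles in (30, 50):
            frac = surviving_fraction(miles * METERS_PER_MILE, 5.0, half_life_s)
            print(f"{label}, {miles} mi at 5 m/s: {frac:.2f} of activity remaining")

For a long-lived nuclide the decay over these distances is minor, but for a short-lived one the assumed trajectory length and wind speed control the near- versus far-field split of the dose, which is precisely the sensitivity at issue when a model cannot let the wind field vary in space.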

Model Validation Efforts

The model comparison study (Molenkamp et al., 2004) referenced by Drs. Hanna and O'Kula in their responses to Questions 57 and 58 shows only model-to-model comparisons. The setting and meteorological data used were over simple, flat terrain where, as Dr. Hanna discusses, one expects that models with dispersion rates based upon the Prairie Grass experiment data would produce similar results. Therefore, a comparison of model predictions made in the relatively flat area of the Southern Great Plains (SGP) site in Oklahoma and Kansas cannot be used to state how model comparisons would fare at a coastal area like Plymouth, MA. The Molenkamp study text itself asserts that the topography of Oklahoma and Kansas is relatively smooth and has minimal effect on the wind field, and that the surface is fairly uniform and therefore produces relatively little local thermal forcing. Sea and land breezes are driven by thermal forcing. In the section of the report about the selection of the study site, the authors state that they were not able to find a site that met one of their criteria: a site with changes in surface properties that could affect the local flow, such as a coastal site with a land-sea breeze.

The validation history of ATMOS against real observational measurements is very weak. Over the past decades there have been well-documented field experiments, and data from ambient monitoring networks in a variety of terrain settings, that could provide data suitable for producing model performance statistics for ATMOS as used in MACCS2. A validation effort that compared model predictions to observational data for a source at a coastal site, at both short and long distances, would be most appropriate for the PNPS. The US EPA has used field studies and routine monitoring data to evaluate and improve dispersion models. Numerous studies have shown that flat-terrain models cannot be relied upon to provide competent predictions when applied to complex terrain settings. Not all models are the same in how they handle plume trajectories, and atmospheric dispersion rates do vary by terrain setting and surface conditions.

The discussions and modeling demonstrations of the impacts of the ATMOS model at large distances from the PNPS underscore the need for more appropriate models of atmospheric transport. The model-to-model comparisons cited do not shed any light on how well the straight-line format of the MACCS2 model will predict concentrations at the very distances where impacts dominate the population dose and economic consequences of the accidents of concern. One cannot really expect that a single anemometer located at the PNPS site will accurately predict the destination of emissions over such long distances. This is the reason that other regulatory agencies advocate using long-range transport models capable of utilizing meteorological measurements that allow a simulation of regional-scale differences in air flow patterns for air quality and environmental impact analyses. Further, the compromises in credibility associated with running the MACCS2 model with a single value of the roughness length used year round, and a single value of precipitation rate applied to all locations within a 50-mile radius (about 7,850 square miles), are substantial and unnecessary given today's computer modeling capabilities.
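As an indication of what such a validation effort would produce, the following minimal sketch computes the performance statistics most commonly reported in the dispersion model evaluation literature (fractional bias, normalized mean square error, and the fraction of predictions within a factor of two of observations); the arrays are placeholders for paired observed and predicted concentrations from a field study:

    import numpy as np

    def performance_stats(obs, pred):
        # Standard paired evaluation metrics for dispersion models.
        obs = np.asarray(obs, dtype=float)
        pred = np.asarray(pred, dtype=float)
        fb = (obs.mean() - pred.mean()) / (0.5 * (obs.mean() + pred.mean()))
        nmse = np.mean((obs - pred) ** 2) / (obs.mean() * pred.mean())
        fac2 = np.mean((pred >= 0.5 * obs) & (pred <= 2.0 * obs))
        return {"FB": fb, "NMSE": nmse, "FAC2": fac2}

A model with no validation history simply has no such numbers to report; producing them for ATMOS against coastal-site observations is exactly the effort argued for here.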
Long Term Averages and Meteorological Variability

As is pointed out many times in these affidavits, the SAMA objectives can be met with a model that produces only long-term average concentration and deposition of radionuclides. Yet the technology used to obtain these long-term averages requires addressing the impact of every hour in the year. The only numbers that matter to the results are the annual averages of all these computations. Lost is valuable information about the statistical ranges of individual predicted events. For example, the 90th or 95th percentiles of the predictions are not available to help interpret the statistical significance of the annual averages. Because of the focus on long-term averages, the relevance of the impact of individual potential accidents is entirely lost. These numbers could easily be produced by post-processing the dispersion model outputs, as the sketch below illustrates. By limiting the modeling to only one year of meteorological data (we note that EPA generally requires 5 years of data even for annual averages), one does not have any measure of the year-to-year variability of the single annual average that determines the SAMA alternatives. It is not clear why Entergy relies upon a single year of meteorological data when DOE guidance in Revised Chapter 4, Meteorological Monitoring, of Guide DOE/EH-0173T calls for retaining meteorological data for a five-year period and states that assessments of the frequency distributions for routine accidents should be based on 5 or more years of data.
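A minimal sketch of that post-processing step (the input file name and layout are hypothetical; assume one modeled concentration per hour at a single receptor for one year):

    import numpy as np

    # 8760 hourly modeled concentrations at one receptor (hypothetical file).
    hourly = np.loadtxt("receptor_hourly_conc.txt")

    annual_mean = hourly.mean()                 # the only statistic the SAMA retains
    p90, p95 = np.percentile(hourly, [90, 95])  # available at no extra modeling cost
    print(f"annual mean: {annual_mean:.3e}")
    print(f"90th percentile: {p90:.3e}   95th percentile: {p95:.3e}")

Repeating this over five years of meteorological data would likewise yield the year-to-year variability of the annual mean at trivial computational cost.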

Executed in Accord with 10 CFR 2.304(d)

Signed electronically,
Bruce A. Egan, Sc.D., CCM
Egan Environmental Inc.
75 Lothrop St.
Beverly, MA 01915
Tel: 978-524-7677
bruce@eganenvironmental.com